Quantum Gaussian process regression
Abstract
In this paper, a quantum algorithm based on the Gaussian process regression model is proposed. The proposed quantum algorithm consists of three sub-algorithms. The first quantum sub-algorithm efficiently generates the mean predictor; an improved HHL algorithm is proposed to obtain the sign of the outcome, so the ambiguity of results that arises in the original HHL algorithm is avoided, which makes the whole algorithm clearer and more exact. The second produces the covariance predictor by the same method. The third prepares the squared exponential covariance matrix: the annihilation and creation operators are simulated by the unitary linear decomposition (LCU) Hamiltonian simulation, and the kernel function vector is generated with block-encoding techniques applied to the covariance matrix. In addition, it is shown that the proposed quantum Gaussian process regression algorithm achieves a quadratic speedup over its classical counterpart.
PACS numbers: 03.67.Dd, 03.65.Ta, 03.67.Hk
I Introduction
As a subfield at the intersection of computer science and statistics, machine learning builds systems that are adaptive and able to learn from experience, and it has attracted researchers from many fields. However, with the rapid development of information technology, the total amount of global data grows exponentially every year, so classical machine learning algorithms will face great challenges in computing performance when processing big data in the future. Quantum computing uses basic characteristics of quantum mechanics (such as quantum superposition and quantum entanglement) to accomplish computing tasks and has a significant speed advantage over classical computing in solving some specific problems [1, 2]. For example, Shor's quantum factoring algorithm has an exponential acceleration over the classical algorithm [3], posing a serious threat to the security of the widely used RSA cryptosystem. In recent years, quantum computing has been applied to the field of machine learning, and a variety of efficient quantum machine learning algorithms have been proposed, such as quantum clustering analysis [4, 5], quantum neural networks [6, 7], quantum classification [8, 9], quantum decision trees [10], quantum association rules [11] and so on. Research on quantum algorithms, and the exploration of how quantum mechanical properties can be realized in the algorithmic field, can drive the further development of artificial intelligence algorithms [12]. Therefore, it is significantly important to combine quantum information with machine learning to come up with more efficient quantum algorithms [13].
One of the most active directions in machine learning is the development of practical Bayesian approaches to solving challenging real-world problems [14]. Among them, the Gaussian process is one of the most important Bayesian machine learning methods, based on a particularly efficient way of placing a prior over the space of functions. Gaussian process regression was proposed by Carl E. Rasmussen and Christopher K. I. Williams in 1995 [15], aiming to use a Gaussian process prior for regression analysis of data. The model hypothesis includes noise (the regression residual) and a Gaussian process prior, and its solution is carried out by Bayesian inference without limiting the form of the kernel function. Owing to this convenience, it can be used in fields such as time series analysis, image processing and automatic control [16, 17]. Therefore, this paper attempts to combine Gaussian process regression with quantum information and proposes quantum Gaussian process regression.
The rest of this paper is organized as follows. The second section is a review of classical Gaussian process regression. The third section introduces quantum Gaussian process regression. The time complexity analysis of the quantum algorithm is given in the fourth section. The article ends in the fifth section with a concluding remark.
II A review of classical Gaussian process regression
In the simple regression model, a training dataset $D=\{(\mathbf{x}_i,y_i)\}_{i=1}^{n}$ with $n$ data points is given, where $\mathbf{x}_i$ is a column vector of independent input variables and $y_i$ is the corresponding scalar of the single output variable. The goal of Gaussian process regression is to learn, from the limited training dataset, a function $y=f(\mathbf{x})+\varepsilon$ that fits the relationship between $\mathbf{x}_i$ and $y_i$ as well as possible, where $f(\mathbf{x})$ is the real value, $y$ is the desired (observed) value, and $\varepsilon$ is independent, identically distributed Gaussian noise with variance $\sigma_n^{2}$. Once obtained, $f$ can be used to predict the output $y_*$ of a new input data $\mathbf{x}_*$.
A Gaussian process regression model is fully specified by a mean function $m(\mathbf{x})$ and a covariance function (also known as a kernel) $k(\mathbf{x},\mathbf{x}')$. In 2005, Rasmussen and Williams [18] pointed out that these two functions can be obtained from the weight-space view or the function-space view; the expressions are as follows,
$m(\mathbf{x})=\mathbb{E}[f(\mathbf{x})],$  (1)
$k(\mathbf{x},\mathbf{x}')=\mathbb{E}\big[(f(\mathbf{x})-m(\mathbf{x}))(f(\mathbf{x}')-m(\mathbf{x}'))\big],$  (2)
where $\mathbb{E}$ denotes the expectation value. Thus, Gaussian process regression can be written as $f(\mathbf{x})\sim\mathcal{GP}(m(\mathbf{x}),k(\mathbf{x},\mathbf{x}'))$. Here, $\mathcal{GP}$ means a Gaussian process. Generally, classical Gaussian process regression specifies the covariance function as the squared exponential covariance function [18], which is denoted as
$k(\mathbf{x}_p,\mathbf{x}_q)=\exp\!\left(-\tfrac{1}{2}\,|\mathbf{x}_p-\mathbf{x}_q|^{2}\right).$  (3)
Therefore, the central goal in Gaussian process regression models is to predict the mean value and the covariance value of this distribution, also known as the mean predictor $\bar{f}_*$ and the variance predictor $\mathbb{V}[f_*]$. The derivation is presented in Ref. [18]; through it, the mean predictor and the variance predictor can be re-expressed as
$\bar{f}_*=\mathbf{k}_*^{T}\,(K+\sigma_n^{2}I)^{-1}\,\mathbf{y},$  (4)
$\mathbb{V}[f_*]=k(\mathbf{x}_*,\mathbf{x}_*)-\mathbf{k}_*^{T}\,(K+\sigma_n^{2}I)^{-1}\,\mathbf{k}_*,$  (5)
where $\mathbf{k}_*$ is a vector whose entries are the kernel function values between the test point $\mathbf{x}_*$ and all training points $\mathbf{x}_i$; $K$ denotes the $n\times n$ covariance matrix, that is, the kernel calculated between the training data; and $k(\mathbf{x}_*,\mathbf{x}_*)$ is the covariance of the test point with itself, which is a constant. Viewing Eq. (4) as a linear combination of kernel functions, each of them centered on a training point, Eq. (4) can be written as
$\bar{f}_*=\sum_{i=1}^{n}\alpha_i\,k(\mathbf{x}_i,\mathbf{x}_*),$  (6)
here $\boldsymbol{\alpha}=(K+\sigma_n^{2}I)^{-1}\mathbf{y}$ and $\alpha_i$ is its $i$-th component.
Then, we introduce the realization process of Gaussian process regression in classical computation. Firstly, the Cholesky decomposition of $K+\sigma_n^{2}I$ is calculated, i.e., $K+\sigma_n^{2}I=LL^{T}$. Then $\boldsymbol{\alpha}=L^{T}\backslash(L\backslash\mathbf{y})$ is obtained by solving two triangular systems, which requires $O(n^{2})$ basic arithmetic operations. Finally, the mean predictor can be computed by $\bar{f}_*=\mathbf{k}_*^{T}\boldsymbol{\alpha}$, which requires only $O(n)$ time. Since the calculation of the Cholesky factor is numerically stable and its runtime is proportional to $n^{3}$, the total runtime is $O(n^{3})$. In the current era of big data, the number of input points $n$ is huge, so this time complexity is quite large. That is why we want to use quantum information.
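To make the classical baseline concrete, the following minimal sketch (in Python with NumPy, on synthetic data invented purely for illustration) implements the Cholesky-based prediction just described, assuming the squared exponential kernel of Eq. (3) and an arbitrarily chosen noise level.

```python
import numpy as np

def se_kernel(A, B):
    """Squared exponential kernel of Eq. (3): k(x_p, x_q) = exp(-|x_p - x_q|^2 / 2)."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2)

def gpr_predict(X, y, x_star, sigma_n=0.1):
    """Mean and variance predictors of Eqs. (4)-(5) via a Cholesky factorization."""
    K = se_kernel(X, X)                                          # n x n covariance matrix
    L = np.linalg.cholesky(K + sigma_n**2 * np.eye(len(X)))      # O(n^3) step
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))          # alpha = (K + sigma_n^2 I)^{-1} y
    k_star = se_kernel(X, x_star[None, :])[:, 0]                 # kernel vector of the test point
    mean = k_star @ alpha                                        # Eq. (4) / Eq. (6)
    v = np.linalg.solve(L, k_star)
    var = se_kernel(x_star[None, :], x_star[None, :])[0, 0] - v @ v  # Eq. (5)
    return mean, var

# toy data: y = sin(x) plus noise (illustration only)
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(30)
print(gpr_predict(X, y, np.array([0.5])))
```

The cubic cost comes entirely from the Cholesky factorization of the $n\times n$ matrix, which is the step the quantum algorithm aims to bypass.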
III Quantum Gaussian process regression
Observing the mean predictor and the variance predictor, both involve a matrix inverse. Thus, it is natural to use the HHL algorithm [33] as a subroutine to get the desired result. Firstly, the covariance matrix $K$ is a real covariance matrix; hence it is a real symmetric matrix, namely a Hermitian matrix (for brevity, we absorb the noise term $\sigma_n^{2}I$ into $K$ in what follows, which only shifts every eigenvalue by $\sigma_n^{2}$). Writing $K$ in the spectral decomposition form [20],
$K=\sum_{j=1}^{n}\lambda_j|u_j\rangle\langle u_j|,$  (7)
where $\lambda_j$ are the eigenvalues of $K$ and $|u_j\rangle$ are the corresponding eigenvectors of $K$. The vector $\mathbf{y}$ can be represented as a linear combination of the $|u_j\rangle$, that is, $\mathbf{y}=\sum_j\beta_j|u_j\rangle$. Similarly, $\mathbf{k}_*$ can also be represented by the $|u_j\rangle$, namely $\mathbf{k}_*=\sum_j\gamma_j|u_j\rangle$. Therefore, the expressions of Eq. (4) and Eq. (5) in this mathematical context are
$\bar{f}_*=\sum_{j=1}^{n}\frac{\gamma_j\beta_j}{\lambda_j},$  (8)
$\mathbb{V}[f_*]=k(\mathbf{x}_*,\mathbf{x}_*)-\sum_{j=1}^{n}\frac{\gamma_j^{2}}{\lambda_j}.$  (9)
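As a sanity check on Eqs. (8) and (9), the following short numerical sketch (with hypothetical random data) expands $\mathbf{y}$ and $\mathbf{k}_*$ in the eigenbasis of the covariance matrix and verifies that the eigenvalue sums reproduce the direct matrix-inverse formulas.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
X = rng.standard_normal((n, 2))
K = np.exp(-0.5 * np.sum((X[:, None, :] - X[None, :, :])**2, axis=2))  # SE kernel, Eq. (3)
K += 0.1 * np.eye(n)           # absorb a noise term so K is well conditioned
y = rng.standard_normal(n)
k_star = rng.random(n)         # stand-in kernel vector for a test point

lam, U = np.linalg.eigh(K)     # spectral decomposition of Eq. (7): K = sum_j lam_j |u_j><u_j|
beta = U.T @ y                 # y  = sum_j beta_j  |u_j>
gamma = U.T @ k_star           # k* = sum_j gamma_j |u_j>

mean_eig = np.sum(gamma * beta / lam)       # Eq. (8)
var_eig = 1.0 - np.sum(gamma**2 / lam)      # Eq. (9) with k(x*, x*) = 1 for the SE kernel

print(np.isclose(mean_eig, k_star @ np.linalg.solve(K, y)))            # True
print(np.isclose(var_eig, 1.0 - k_star @ np.linalg.solve(K, k_star)))  # True
```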
Next, how to get the values of Eq. (8) and Eq. (9) is described by the following quantum steps.
III.1 Preparing the quantum states $|y\rangle$ and $|k_*\rangle$
The specific expressions are $|y\rangle=\frac{1}{\|\mathbf{y}\|}\sum_{i=1}^{n}y_i|i\rangle$ and $|k_*\rangle=\frac{1}{\|\mathbf{k}_*\|}\sum_{i=1}^{n}k(\mathbf{x}_i,\mathbf{x}_*)|i\rangle$. Ref. [28] pointed out that the procedure for preparing the quantum states $|y\rangle$ and $|k_*\rangle$ is as follows. Firstly, suppose that the quantum oracles $O_y$ and $O_{k_*}$ are provided, which can efficiently access the entries of $\mathbf{y}$ and $\mathbf{k}_*$ in polylogarithmic time from a quantum random access memory (QRAM) and act as $O_y|i\rangle|0\rangle=|i\rangle|y_i\rangle$ and $O_{k_*}|i\rangle|0\rangle=|i\rangle|k(\mathbf{x}_i,\mathbf{x}_*)\rangle$. Secondly, give the initial state $|0\rangle^{\otimes\lceil\log n\rceil}|0\rangle$ and perform the quantum Fourier transform on the first register, which gives $\frac{1}{\sqrt{n}}\sum_{i=1}^{n}|i\rangle|0\rangle$. The quantum Fourier transform on an orthonormal basis $|0\rangle,\ldots,|N-1\rangle$ is defined to be a linear operator with the following action on the basis states,
$|j\rangle\rightarrow\frac{1}{\sqrt{N}}\sum_{k=0}^{N-1}e^{2\pi i jk/N}|k\rangle.$  (10)
Thirdly, apply the oracle $O_y$ to this state, which gives $\frac{1}{\sqrt{n}}\sum_{i=1}^{n}|i\rangle|y_i\rangle$. Then, append one qubit and perform a conditioned rotation on it controlled by the register $a_1$ that stores $y_i$, which yields the state
$\frac{1}{\sqrt{n}}\sum_{i=1}^{n}|i\rangle|y_i\rangle\left(\sqrt{1-\frac{y_i^{2}}{c^{2}}}\,|0\rangle+\frac{y_i}{c}|1\rangle\right),$  (11)
where $c\ge\max_i|y_i|$ is a constant.
Finally, perform the inverse oracle operation and measure the last qubit in the basis $\{|0\rangle,|1\rangle\}$; when the outcome is $|1\rangle$, the remaining particles are in the state $|y\rangle$, which happens with probability $\sum_i y_i^{2}/(nc^{2})$ (of constant order when the entries $y_i$ are balanced in magnitude). Thus, we only need to repeat the procedure a constant number of times to get $|y\rangle$. Moreover, $\|\mathbf{y}\|$ can also be estimated from the success probability. The total runtime of the above operations is polylogarithmic in $n$. In a similar way, $|k_*\rangle$ and $\|\mathbf{k}_*\|$ can be obtained in polylogarithmic time.
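The following purely classical emulation sketches the logic of this preparation step: it mimics the effect of the uniform superposition, the oracle-loaded entries and the post-selected conditioned rotation, and reports the prepared state and the success probability. The QRAM and oracle accesses are of course only emulated, and the vector used is a hypothetical example.

```python
import numpy as np

def amplitude_encode(v):
    """Classically emulate the conditioned-rotation preparation of |v> = sum_i v_i |i> / ||v||."""
    n = len(v)
    c = np.max(np.abs(v))                  # rotation constant, c >= |v_i|
    # After the uniform superposition, oracle and conditioned rotation, the |1> ancilla
    # branch carries amplitude v_i / (sqrt(n) * c) on basis state |i>.
    amp_success = v / (np.sqrt(n) * c)
    p_success = np.sum(amp_success**2)     # probability of measuring the ancilla in |1>
    state = v / np.linalg.norm(v)          # post-selected state |v>
    return state, p_success

y = np.array([0.8, -0.5, 0.7, 0.6])
state, p = amplitude_encode(y)
print(state)   # normalized |y>, signs preserved
print(p)       # close to 1 when the entries are balanced in magnitude
```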
III.2 Preparing the operator $U_{K^{-1}}$ that applies the inverse of the covariance matrix
Step B1 Assume that the covariance matrix $K$ is prepared (the detailed preparation method will be described in Section E). The matrix $K$ is Hermitian but not necessarily sparse. Thus, the unitary linear decomposition (i.e., linear combination of unitaries, LCU) approach [24] is adopted to take the exponentiation of the matrix, that is, to implement $e^{-iKt}$. LCU works under the assumption that a quantum oracle for the decomposition, which can be implemented efficiently, is provided. Zhou and Wang [22] showed that $e^{-iKt}$ can be simulated within a prescribed spectral-norm error in this way. Note that the algorithm decomposes $K$ into a linear combination of unitary operators that can be efficiently implemented, that is, $K=\sum_m c_m V_m$, where each $V_m$ can be implemented using one- or two-qubit gates.
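To illustrate the kind of decomposition that LCU relies on, the sketch below expands a small, hypothetical Hermitian matrix into a linear combination of Pauli strings, each of which is a unitary implementable with one- or two-qubit gates. This is only the classical bookkeeping behind $K=\sum_m c_m V_m$, not the quantum Hamiltonian simulation itself.

```python
import numpy as np
from itertools import product

I = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1, -1])
paulis = {"I": I, "X": X, "Y": Y, "Z": Z}

def pauli_decompose(H):
    """Write a 2-qubit Hermitian H as sum_m c_m V_m with V_m = P_a (x) P_b Pauli strings."""
    coeffs = {}
    for (la, A), (lb, B) in product(paulis.items(), repeat=2):
        V = np.kron(A, B)
        c = np.trace(V.conj().T @ H) / 4       # Hilbert-Schmidt inner product
        if abs(c) > 1e-12:
            coeffs[la + lb] = c.real           # coefficients are real for Hermitian H
    return coeffs

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
H = (M + M.T) / 2                              # toy Hermitian (non-sparse) matrix
coeffs = pauli_decompose(H)
recon = sum(c * np.kron(paulis[s[0]], paulis[s[1]]) for s, c in coeffs.items())
print(np.allclose(recon, H))                   # True: H = sum_m c_m V_m
```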
Step B2 Performing the phase estimation [32] of $e^{-iKt}$ on the state $|y\rangle\propto\sum_j\beta_j|u_j\rangle$, we can obtain
$\sum_j\beta_j|u_j\rangle|\lambda_j\rangle.$  (12)
Step B3 Adding an ancilla qubit and performing a controlled rotation on it, controlled on the second register (which stores $\lambda_j$), the state becomes
$\sum_j\beta_j|u_j\rangle|\lambda_j\rangle\left(\sqrt{1-\frac{C^{2}}{\lambda_j^{2}}}\,|0\rangle+\frac{C}{\lambda_j}|1\rangle\right),$  (13)
where $C$ is a constant satisfying $C\le\min_j\lambda_j$.
Step B4 Performing inverse phase estimation and discarding the second register, we can get
$\sum_j\beta_j|u_j\rangle\left(\sqrt{1-\frac{C^{2}}{\lambda_j^{2}}}\,|0\rangle+\frac{C}{\lambda_j}|1\rangle\right).$  (14)
The above operations are all unitary; thus, the whole process can be denoted by a unitary operator $U_{K^{-1}}$.
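A small classical emulation of the map denoted $U_{K^{-1}}$: it expands a normalized $|y\rangle$ in the eigenbasis, attaches the ancilla amplitudes of Eqs. (13) and (14), and checks that post-selecting the ancilla on $|1\rangle$ leaves a state proportional to $K^{-1}\mathbf{y}$. The matrix, the vector and the choice $C=\min_j\lambda_j$ are assumptions made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
M = rng.standard_normal((n, n))
K = M @ M.T + np.eye(n)                   # toy Hermitian positive definite "covariance" matrix
y = rng.standard_normal(n)

lam, U = np.linalg.eigh(K)                # eigenpairs (lambda_j, |u_j>)
beta = U.T @ (y / np.linalg.norm(y))      # amplitudes of |y> in the eigenbasis
C = lam.min()                             # rotation constant, C <= min_j lambda_j

# Eq. (14): sum_j beta_j |u_j> ( sqrt(1 - C^2/lambda_j^2) |0> + (C/lambda_j) |1> )
amp0 = beta * np.sqrt(1 - (C / lam)**2)   # ancilla |0> branch
amp1 = beta * (C / lam)                   # ancilla |1> branch
print(np.isclose(np.sum(amp0**2) + np.sum(amp1**2), 1.0))   # the map preserves the norm

post = U @ amp1                           # post-select ancilla = |1>, back to computational basis
target = np.linalg.solve(K, y)
print(np.allclose(post / np.linalg.norm(post), target / np.linalg.norm(target)))  # proportional to K^{-1} y
```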
III.3 Computing predictive mean value in the quantum context
Step C1 Prepare the quantum initial state, containing the states $|y\rangle$ and $|k_*\rangle$ prepared in Section A. Controlled on the third register, the unitary operation $U_{K^{-1}}$ of Section B is executed on the fourth and fifth registers in one branch; in the other branch, a Pauli-X gate is performed on the fifth register. Thus, the state of the system is
(15)
Step C2 Performing the Hadamard transform on the first register. Meanwhile, a Pauli-X gate is executed on the second register. Then, the Hadamard transform on the second register is carried out. In this way, the state becomes
(16)
Step C3 Doing nothing when the first register is $|0\rangle$. Otherwise, performing the swap operation [25] on the second and the third registers, which gives
(17)
Step C4 Measuring the first register of Eq. (17) with an appropriate observable, the measurement probability is an affine function (with a known constant) of the inner product between the state of Eq. (14) and $|k_*\rangle$, which is proportional to $\langle k_*|K^{-1}|y\rangle$. Hence the sign of the inner product is preserved rather than only its square. Through the measurement, together with the norms $\|\mathbf{k}_*\|$ and $\|\mathbf{y}\|$ estimated in Section A, the specific value of $\mathbf{k}_*^{T}K^{-1}\mathbf{y}$ is attained. Therefore, predicting the mean value in the quantum context is achieved.
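The point of this measurement strategy is that the outcome probability depends affinely on the signed overlap rather than on its square. The sketch below contrasts, in a purely classical calculation of the relevant probabilities, a swap-test-style statistic (sign lost) with a Hadamard-test-style statistic (sign kept); the concrete circuit of Steps C1 to C4 is not reproduced here, and the two vectors are arbitrary examples.

```python
import numpy as np

def swap_test_prob(a, b):
    """Probability of measuring |0> in a swap test: p = 1/2 + |<a|b>|^2 / 2 (sign lost)."""
    return 0.5 + 0.5 * np.abs(a @ b)**2

def hadamard_test_prob(a, b):
    """Probability of measuring |0> in a Hadamard-type test: p = 1/2 + <a|b> / 2 (sign kept)."""
    return 0.5 + 0.5 * (a @ b)

a = np.array([1.0, 0.0]); a /= np.linalg.norm(a)
b = np.array([-0.8, 0.6]); b /= np.linalg.norm(b)

print(swap_test_prob(a, b), swap_test_prob(a, -b))        # identical: sign is ambiguous
print(hadamard_test_prob(a, b), hadamard_test_prob(a, -b))  # different: sign recovered from p
```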
III.4 Computing predictive covariance value in the quantum context
Step D1 Prepare the quantum initial state, with $|k_*\rangle$ in the data register. When the third register is in the corresponding control state, the unitary operation $U_{K^{-1}}$ is performed on the fourth and fifth registers. Otherwise, a Pauli-X operation is performed on the fifth register. Thus, we obtain the analogue of Eq. (15) with $|k_*\rangle$ in place of $|y\rangle$.
Step D2 Performing the Hadamard transform on the first register. At the same time, a Pauli-X gate is executed on the second register. Then the Hadamard transform on the second register is carried out, which results in the analogue of Eq. (16).
Step D3 Doing nothing when the first register is $|0\rangle$. Otherwise, performing the swap operation on the second and the third registers. Thus, the following state can be obtained
(18)
Step D4 Measuring the first register with an appropriate observable, the measurement probability is an affine function (with a known constant) of the inner product between the state of Eq. (14) (now prepared from $|k_*\rangle$) and $|k_*\rangle$, which is proportional to $\langle k_*|K^{-1}|k_*\rangle$. We can get the specific value of $\mathbf{k}_*^{T}K^{-1}\mathbf{k}_*$ from this probability together with the norm $\|\mathbf{k}_*\|$, and subtracting it from $k(\mathbf{x}_*,\mathbf{x}_*)$ according to Eq. (5) gives the variance. Therefore, predicting the covariance value in the quantum context is achieved.
III.5 Preparing covariance matrix and kernel function vector
In classical Gaussian process regression, the covariance function is generally the squared exponential function $k(\mathbf{x}_i,\mathbf{x}_j)=\exp(-\frac{1}{2}|\mathbf{x}_i-\mathbf{x}_j|^{2})$ of Eq. (3). Thus, the representations of the covariance matrix and the kernel function vector are respectively
$K=\big[k(\mathbf{x}_i,\mathbf{x}_j)\big]_{i,j=1}^{n}=\big[\exp(-\tfrac{1}{2}|\mathbf{x}_i-\mathbf{x}_j|^{2})\big]_{i,j=1}^{n},$  (19)
$\mathbf{k}_*=\big(k(\mathbf{x}_1,\mathbf{x}_*),k(\mathbf{x}_2,\mathbf{x}_*),\ldots,k(\mathbf{x}_n,\mathbf{x}_*)\big)^{T}.$  (20)
The preparation of quantum initial states has always been difficult in quantum machine learning, and in this paper it is also a challenge. Inspired by the quantum radial basis function network [9], coherent states [26, 27] and block-encoding techniques [29, 30] are utilized to achieve the preparation of the initial states. Coherent states play a crucial role in quantum optics and mathematical physics. They are defined over the Fock states $\{|n\rangle\}_{n=0}^{\infty}$, which form a basis of the infinite-dimensional Hilbert space. Let $a$ and $a^{\dagger}$ respectively be the annihilation and creation operators of the harmonic oscillator. Then
$a|n\rangle=\sqrt{n}\,|n-1\rangle,\qquad a^{\dagger}|n\rangle=\sqrt{n+1}\,|n+1\rangle.$  (21)
For any $n\ge 0$, it is easy to see that
$|n\rangle=\frac{(a^{\dagger})^{n}}{\sqrt{n!}}|0\rangle.$  (22)
Let $\alpha$ be a real number; its coherent state is defined by
$|\alpha\rangle=e^{-\alpha^{2}/2}\sum_{n=0}^{\infty}\frac{\alpha^{n}}{\sqrt{n!}}|n\rangle.$  (23)
It is a unit eigenvector of $a$ corresponding to the eigenvalue $\alpha$, that is, $a|\alpha\rangle=\alpha|\alpha\rangle$. From the relations above, we also have $|\alpha\rangle=e^{\alpha(a^{\dagger}-a)}|0\rangle$ and, for real $\alpha,\beta$, $\langle\beta|\alpha\rangle=e^{-(\alpha-\beta)^{2}/2}$, which is exactly the squared exponential kernel of Eq. (3); thus $|\alpha\rangle$ is obtained by applying a unitary operator of infinite dimension to $|0\rangle$.
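The connection between coherent states and the squared exponential kernel can be checked numerically. The sketch below builds truncated coherent states in a finite Fock basis (Eq. (23) cut off at an arbitrary order) and verifies that their inner product approaches $e^{-(\alpha-\beta)^{2}/2}$, which for one-dimensional inputs is exactly the kernel of Eq. (3).

```python
import numpy as np
from math import factorial, exp

def coherent(alpha, d=40):
    """Truncated coherent state |alpha> = e^{-alpha^2/2} sum_{n<=d} alpha^n / sqrt(n!) |n>."""
    amps = np.array([alpha**n / np.sqrt(factorial(n)) for n in range(d + 1)])
    return np.exp(-alpha**2 / 2) * amps

a, b = 1.3, -0.4
inner = coherent(a) @ coherent(b)
print(inner, exp(-(a - b)**2 / 2))   # nearly equal: <alpha|beta> = k(alpha, beta)
```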
As for the preparation of $|\alpha\rangle$ in a finite quantum circuit, we can consider its Taylor (truncated) approximation:
$|\alpha\rangle_{d}=e^{-\alpha^{2}/2}\sum_{n=0}^{d}\frac{\alpha^{n}}{\sqrt{n!}}|n\rangle.$  (24)
Thus, the upper bound on the squared error is
$\big\||\alpha\rangle-|\alpha\rangle_{d}\big\|^{2}\le e^{-\alpha^{2}}\sum_{n=d+1}^{\infty}\frac{\alpha^{2n}}{n!}.$  (25)
By Stirling's formula, $n!\ge(n/e)^{n}$, so the tail decays rapidly once $n$ exceeds $e\alpha^{2}$. Now, to keep the error within a prescribed $\epsilon$, the truncation order only needs to satisfy $d=O(\alpha^{2}+\log(1/\epsilon))$. Thus, it is easy to obtain the Taylor approximation of $|\alpha\rangle$.
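A quick numerical illustration of this truncation argument, under the error bound of Eq. (25): for a given $\alpha$ and tolerance $\epsilon$, the sketch finds the smallest truncation order $d$ whose tail is at most $\epsilon^{2}$, showing that $d$ indeed grows only modestly with $\alpha$.

```python
from math import exp, factorial

def truncation_order(alpha, eps):
    """Smallest d with squared error e^{-a^2} * sum_{n>d} a^{2n}/n! of Eq. (25) at most eps^2."""
    d = 0
    kept = 1.0                                  # running sum_{n<=d} alpha^{2n} / n!
    while exp(-alpha**2) * (exp(alpha**2) - kept) > eps**2:
        d += 1
        kept += alpha**(2 * d) / factorial(d)
    return d

for alpha in (0.5, 1.0, 2.0, 3.0):
    print(alpha, truncation_order(alpha, 1e-6))   # d grows roughly like alpha^2 plus a log term
```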
From the previous analysis, $|\alpha\rangle$ can be obtained by applying a unitary operator to $|0\rangle$. Because $i(a-a^{\dagger})$ is Hermitian, the displacement operator $e^{\alpha(a^{\dagger}-a)}$ can be simulated by LCU, and it can also be simulated in the truncated (finite-dimensional) space with a similar result. Thus, for any vector $\mathbf{x}=(x_1,\ldots,x_m)^{T}$, the corresponding coherent state is defined as $|\psi(\mathbf{x})\rangle=|x_1\rangle\otimes\cdots\otimes|x_m\rangle$, whose cost is that of $m$ single-mode preparations. Now, consider the superposition of the training samples' coherent states:
$|\chi\rangle=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}|i\rangle|\psi(\mathbf{x}_i)\rangle.$  (26)
Firstly, apply the quantum Fourier transform to $|0\rangle^{\otimes\lceil\log n\rceil}$, which results in $\frac{1}{\sqrt{n}}\sum_{i=1}^{n}|i\rangle$. Then append the register $e_2$ initialized to $|0\rangle$, which gives $\frac{1}{\sqrt{n}}\sum_{i=1}^{n}|i\rangle|0\rangle$. Finally, applying the controlled unitary operators that prepare $|\psi(\mathbf{x}_i)\rangle$ to the $e_2$ register, Eq. (26) can be attained.
Then, taking the partial trace over the $e_2$ register of $|\chi\rangle\langle\chi|$ gives rise to the density operator proportional to the covariance matrix,
$\rho_K=\mathrm{Tr}_{e_2}\big(|\chi\rangle\langle\chi|\big)=\frac{1}{n}\sum_{i,j=1}^{n}\langle\psi(\mathbf{x}_j)|\psi(\mathbf{x}_i)\rangle\,|i\rangle\langle j|=\frac{K}{n}.$  (27)
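The claim that tracing out the coherent-state register of Eq. (26) leaves a normalization of the covariance matrix can be verified directly. The sketch below builds the joint state from truncated coherent states for one-dimensional toy inputs (a simplifying assumption) and compares the partial trace with $K/n$.

```python
import numpy as np
from math import factorial

def coherent(alpha, d=40):
    amps = np.array([alpha**n / np.sqrt(factorial(n)) for n in range(d + 1)])
    return np.exp(-alpha**2 / 2) * amps

x = np.array([-1.0, 0.2, 0.7, 1.5])              # toy 1-D training inputs
n, d = len(x), 40

# |chi> = (1/sqrt(n)) sum_i |i> |psi(x_i)>, as in Eq. (26)
chi = np.zeros((n, d + 1))
for i, xi in enumerate(x):
    chi[i] = coherent(xi, d) / np.sqrt(n)

rho = chi @ chi.T                                 # partial trace over the e2 (Fock) register
K = np.exp(-0.5 * (x[:, None] - x[None, :])**2)   # SE covariance matrix, Eq. (19)
print(np.allclose(rho, K / n, atol=1e-8))         # Eq. (27): rho = K / n
```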
For the covariance vector $\mathbf{k}_*$, we put the target vector $\mathbf{x}_*$ into the training dataset $D$, which makes the dataset become $\tilde{D}=\{\mathbf{x}_1,\ldots,\mathbf{x}_n,\mathbf{x}_{n+1}\}$. Here $\mathbf{x}_{n+1}$ is $\mathbf{x}_*$ and the dataset size becomes $n+1$. Then, the new covariance matrix $\tilde{K}$ is computed in the same way; its last column contains the entries of $\mathbf{k}_*$.
Ref. [29] presented the following definition: suppose that $\rho$ is an $s$-qubit density operator and $a\in\mathbb{N}$. Then an $(s+a)$-qubit unitary $U$ is a block-encoding of $\rho$ if $\|\rho-(\langle 0|^{\otimes a}\otimes I)U(|0\rangle^{\otimes a}\otimes I)\|\le\epsilon$, where $\|\cdot\|$ is the spectral norm; in other words, the top-left block of $U$ approximates $\rho$. Therefore, $\mathbf{k}_*$ can be obtained by considering a block-encoding of the density operator $\tilde{K}/(n+1)$: applying it to the basis state that labels the test point and keeping the ancillas in $|0\rangle^{\otimes a}$ selects the corresponding column of $\tilde{K}$, which is proportional to $\mathbf{k}_*$. Then, a projection measurement is performed on the ancillas in the basis $\{|0\rangle^{\otimes a}\}$, and the probability of obtaining this measurement outcome depends on the purity of the encoded density operator. Here $\tilde{K}/(n+1)$ is a density operator, thus its trace is 1; if it is a pure state its purity equals 1, otherwise the purity is smaller than 1. When the eigenvalues of the density operator are small, we put a coefficient in front of the matrix in order to obtain the desired outcome with high probability. Gilyén et al. [31] proposed that the block-encoding of a density operator $\rho$ is $(G^{\dagger}\otimes I)(I\otimes\mathrm{SWAP})(G\otimes I)$, where $G$ is a unitary operator that prepares a purification of $\rho$ by acting on $|0\rangle$. Thus, the block-encoding of $\tilde{K}/(n+1)$ can be obtained. In conclusion, the covariance matrix $K$ and the kernel function vector $\mathbf{k}_*$ can be prepared. Finally, the preparation of $|k_*\rangle$ costs time proportional to the complexity of implementing the block-encoding; without loss of generality, this cost is taken as the cost of Section E in the runtime analysis below.
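The purification-based block-encoding quoted from Gilyén et al. [31] can be checked on a small example. The sketch below prepares a purification of a single-qubit density operator $\rho$, forms $(G^{\dagger}\otimes I)(I\otimes\mathrm{SWAP})(G\otimes I)$, and confirms that its top-left block equals $\rho$; the density operator and register sizes are chosen purely for illustration, and the completion of $G$ via a QR factorization is one convenient choice among many.

```python
import numpy as np

def purification_unitary(rho):
    """A unitary G on two copies of the system with G|0>|0> = sum_k sqrt(p_k) |k>|psi_k>."""
    p, V = np.linalg.eigh(rho)
    purif = sum(np.sqrt(pk) * np.kron(np.eye(len(p))[:, k], V[:, k]) for k, pk in enumerate(p))
    # Complete the first column `purif` to a full unitary via a QR decomposition.
    M = np.column_stack([purif, np.eye(len(purif))[:, 1:]])
    Q, _ = np.linalg.qr(M)
    return Q * np.sign(Q[:, 0] @ purif)           # fix the sign so that G|0> = purif

def swap(dim):
    S = np.zeros((dim * dim, dim * dim))
    for i in range(dim):
        for j in range(dim):
            S[i * dim + j, j * dim + i] = 1        # |i>|j>  <-  |j>|i>
    return S

dim = 2
rho = np.array([[0.7, 0.2], [0.2, 0.3]])           # toy single-qubit density operator
G = purification_unitary(rho)                      # acts on ancilla (x) system
U = np.kron(G.conj().T, np.eye(dim)) @ np.kron(np.eye(dim), swap(dim)) @ np.kron(G, np.eye(dim))

top_left = U[:dim, :dim]                           # <0...0| U |0...0> block acting on the extra copy
print(np.allclose(top_left, rho))                  # True: U is a block-encoding of rho
```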
IV Runtime analysis of the algorithm
Let us start by discussing the time complexity of the whole algorithm. As shown in TABLE 1, a detailed analysis of each step of this algorithm is as follows. In Section A, the states $|y\rangle$ and $|k_*\rangle$ can be generated in polylogarithmic time with the help of QRAM. Then, the matrix exponentiation $e^{-iKt}$ is performed in Step B1, which can be simulated by LCU within the prescribed spectral-norm error. The phase estimation is performed in Step B2; this step is limited by the precision of estimating the eigenvalues. Generally, $\lambda_j\ge 1/\kappa$ ($\kappa$ is the condition number of the covariance matrix $K$), and taking the evolution time $t=O(\kappa/\epsilon)$ induces a final error of $\epsilon$. Next, in Step B3, the runtime of the controlled rotation is relatively negligible compared with the time taken in Step B2. The inverse phase estimation in Step B4 has the same time analysis as the phase estimation. Sections C and D only utilize unitary operations whose cost is linear in the number of qubits, and the final measurement only accounts for a constant factor, so the runtime of these two sections is negligible. The aim of Section E is to prepare the covariance matrix $K$ and the kernel function vector $\mathbf{k}_*$; through coherent states and block-encoding techniques, their preparation costs are dominated by the cost of implementing the block-encoding. Therefore, the total runtime of getting the predictive mean value and covariance value is dominated by the phase estimation and the state preparation (neglecting more slowly growing terms). Compared with the $O(n^{3})$ runtime of the classical computation, this achieves a quadratic acceleration.
TABLE 1. Time complexity of each step of the algorithm.

| Step of Section | Time Complexity |
|---|---|
| Section A | |
| Step B1 | |
| Step B2 | |
| Step B3 | |
| Step B4 | |
| Section E | |
V Conclusion
Gaussian process regression makes sense in real-world applications, especially when a problem involves extrapolating from large data sets. However, classical computers cost too much when the data set is large; thus, quantum Gaussian process regression is proposed. In this work, firstly, we avoid sparse Hamiltonian simulation and apply the LCU Hamiltonian simulation, making it possible to deal with non-sparse matrices. Secondly, in order to get an exact measurement value, that is, so that the sign of the result is not ambiguous, the original HHL algorithm [33] is improved; thus, the inner product acquires a definite sign. Another innovation of this paper is the preparation of the covariance matrix. In the existing papers on quantum Gaussian process regression [34], the authors have assumed the covariance matrix is given by default. In our paper, the covariance matrix is obtained through the annihilation and creation operators, and the kernel function vector is obtained by applying block-encoding. Finally, our algorithm has a quadratic acceleration compared with its classical counterpart. We hope that our algorithm, and especially its key technologies, the improved HHL algorithm and the covariance matrix preparation, will inspire more efficient quantum machine learning algorithms for application in a wider range of fields.
Acknowledgements.
This work was supported by National Natural Science Foundation of China (Grant Nos. 61772134, 61976053 and 62006105), Fujian Province Natural Science Foundation (Grant No. 2018J01776) and Program for New Century Excellent Talents in Fujian Province University.
References
- [1] J. Biamonte, P. Wittek and N. Pancotti, Nat. 549, 195 (2017).
- [2] A. D. Alhaidari and T. J. Taiwo, Jour. Math. Phys. 58, 022101 (2017).
- [3] P. W. Shor, SIAM J. Comput. 26, 1484 (1997).
- [4] D. Cao, Jour. Quan. Elec. 32, 58 (2015).
- [5] K. Yu, G. D. Guo, B. B. Cai, Com. Sci. 43, 80 (2016).
- [6] I. Cong, S. Choi and M. D. Lukin, Nat. Phys. 15, 1273 (2019).
- [7] E. Farhi and H. Neven, arXiv:1802.06002.
- [8] M. Zidan, A. H. Abdel-Aty, M. El-Shafei, M. Feraig, Y. Al-Sbou, H. Eleuch and M. Abdel-Aty, Appl. Sci. 9, 1277 (2019).
- [9] C. Shao, Phys. Rev. A 102, 042418 (2020).
- [10] Y. Shi, Inform. Process. Lett. 81, 23 (2002).
- [11] C. H. Yu, F. Gao and Q. L. Wang. Phys. Rev. A. 94, 042311 (2016).
- [12] C. Silva and J. M. Fonseca, CSOC 2018 (2018).
- [13] D. Loss, D. P. Divincenzo, Phys. Rev. A, 57, 120 (1998).
- [14] N. Hak, Soci. Sci. Elec. Pub. (2020).
- [15] C. Williams, C. E. Rasmussen, NIPS’95, (1995).
- [16] R. Dürichen, M. A. F. Pimentel, L. Clifton, A. Schweikard and D. A. Clifton, IEEE. Trans. Biomed. Eng. 62, 314 (2015).
- [17] L. Ling, C. Pan and P. Li, Appl. Mech. Mater. 421, 523 (2013).
- [18] C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning, The MIT Press (2005).
- [19] A. W. Harrow, A. Hassidim, S. Lloyd, Phys. Rev. Lett. 103, 150502 (2009).
- [20] M. F. Ochs, R. Stoyanova, F. Ariasmendoza, Jour. Mag. Res. 137, 161 (1999).
- [21] C. H. Yu, F. Gao, Q. Y. Wen, IEEE Trans. Know. Data Eng. 33, 858 (2021).
- [22] S. S. Zhou and J. B. Wang, R. Soc. Open Sci. 4, 160906 (2017).
- [23] D. W. Berry, A. M. Childs, R. Cleve, R. Kothari, and R. D. Somma, Phys. Rev. Lett. 114, 090502 (2015).
- [24] J. B. Parker, I. Joseph. arXiv:2002.08497.
- [25] H. Buhrman, R. Cleve and J. Watrous, Phys. Rev. Lett. 87, 167902 (2001).
- [26] B. C. Sanders, J. Phys. A: Math. Theor. 45, 244002 (2012).
- [27] K. Fujii, arXiv: quant-ph/0112090 (2001).
- [28] C. H. Yu, F. Gao and Q. Y. Wen, IEEE. T. Knowl. Data. En. 33, 858 (2021).
- [29] C. Shao. J. Phys. A: Math. Theor. 53, 045301 (2019).
- [30] S. Chakraborty, A. Gilyén, S. Jeffery. ICALP 2019. 33 (2019).
- [31] A. Gilyén, Y. Su, G. H. Low and N. Wiebe, STOC (2019), 193 (2018).
- [32] K. Dasaratha, B. Golub and N. Hak, arXiv:1801.02042.
- [33] A. W. Harrow, A. Hassidim, S. Lloyd, Phys. Rev. Lett. 103, 150502 (2009).
- [34] Z. Zhao, J. K. Fitzsimons, J. F. Fitzsimons, Phys. Rev. A 99, 052331 (2019).