Measurement-efficient quantum Krylov subspace diagonalisation
Abstract
Krylov subspace methods, among the most important classical numerical methods for linear algebra problems, can be even more powerful when generalised to quantum computing. However, quantum Krylov subspace algorithms are prone to errors due to inevitable statistical fluctuations in quantum measurements. To address this problem, we develop a general theoretical framework to analyse the statistical error and measurement cost. Based on this framework, we propose a quantum algorithm that constructs the Hamiltonian-power Krylov subspace with a minimised measurement cost. In our algorithm, the product of power and Gaussian functions of the Hamiltonian is expressed as an integral of the real-time evolution, such that it can be evaluated on a quantum computer. We compare our algorithm with other established quantum Krylov subspace algorithms on two prominent examples. To achieve an error comparable to that of the classical Lanczos algorithm at the same subspace dimension, our algorithm typically requires orders of magnitude fewer measurements than the others. Such an improvement can be attributed to the reduced cost of composing projectors onto the ground state. These results show that our algorithm is exceptionally robust to statistical fluctuations and promising for practical applications.
1 Introduction
Finding the ground-state energy of a quantum system is of vital importance in many fields of physics [1, 2, 3]. The Lanczos algorithm [4, 5] is one of the most widely used algorithms to solve this problem. It belongs to the Krylov subspace methods [6], in which the solution usually converges to the true answer with an increasing subspace dimension. However, such methods are unscalable for many-body systems because of the exponentially growing Hilbert space dimension [7]. In quantum computing, there is a family of hybrid quantum-classical algorithms that can be regarded as a quantum generalisation of the classical Lanczos algorithm [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]. Following Refs. [11, 15, 16], we call them quantum Krylov subspace diagonalisation (KSD). These algorithms are scalable with the system size by carrying out the classically intractable vector and matrix arithmetic on the quantum computer. They possess potential advantages: the conventional quantum algorithm for the ground-state problem, quantum phase estimation, requires considerable quantum resources [21, 22], while the variational quantum eigensolver is limited by the ansatz and by classical optimisation bottlenecks [23, 24].
However, Krylov subspace methods are often confronted with the obstacle that small errors can cause large deviations in the ground-state energy. This issue is rooted in the fact that the Krylov subspace is spanned by an almost linearly dependent basis [25]. Contrary to classical computing, in which one can exponentially suppress rounding errors by increasing the number of bits, quantum computing is inherently subject to statistical error. Since the statistical error decreases slowly with the measurement number N, scaling as O(1/√N), an extremely large N can be required to reach an acceptable error. In this case, although quantum KSD algorithms perform well in principle, the measurement cost has to be assessed and optimised for realistic implementations [16, 20].
In this work, we present a general and rigorous analysis of the measurement cost in quantum KSD algorithms. Specifically, we obtain an upper bound formula for the measurement number that is applicable to all quantum KSD algorithms. Then, we propose an algorithm to construct the Hamiltonian-power Krylov subspace [6]. In our algorithm, we express the product of a Hamiltonian power and a Gaussian function of the Hamiltonian as an integral of real-time evolution. In this way, the statistical error decreases exponentially with the power, which makes our algorithm measurement-efficient. We benchmark quantum KSD algorithms by estimating their measurement costs in solving the anti-ferromagnetic Heisenberg model and the Hubbard model. Various lattices of each model are taken in the benchmarking. It is shown that, to reach an error comparable to that of the classical Lanczos algorithm, the measurement cost of our algorithm is typically orders of magnitude smaller than that of other algorithms. We also demonstrate the measurement efficiency of our algorithm by composing a ground-state projector.
2 Krylov subspace diagonalisation
First, we introduce the KSD algorithm and some relevant notations. The algorithm starts with a reference state |φ⟩ (or a set of reference states [10, 11]; we focus on the single-reference case in this work). Then we generate a set of basis states

(1) |φ_k⟩ = f_k(Ĥ)|φ⟩,  k = 0, 1, …, d − 1,

where Ĥ is the Hamiltonian and the f_k are linearly independent k-degree polynomials (to generate a d-dimensional Krylov subspace). For example, it is conventional to take the power (P) function f_k(Ĥ) = Ĥ^k in the Lanczos algorithm. These states span a subspace called the Krylov subspace. We compute the ground-state energy by solving the generalised eigenvalue problem

(2) H c = E S c,

where H_kl = ⟨φ_k|Ĥ|φ_l⟩ and S_kl = ⟨φ_k|φ_l⟩. Let E_min be the minimum eigenvalue of the generalised eigenvalue problem. The error in the ground-state energy is

(3) ε_d = E_min − E_g,

where E_g is the true ground-state energy. We call ε_d the subspace error. One can also construct generalised Krylov subspaces in which the f_k are functions of Ĥ other than polynomials; see Table 1. Appendix A provides a more detailed introduction to the KSD algorithm.
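The classical post-processing step of KSD can be sketched in a few lines. The snippet below is illustrative only: a random 8-level Hermitian matrix stands in for the Hamiltonian, and the power-basis states are built exactly rather than measured on a quantum device. It solves the generalised eigenvalue problem of Eq. (2) with `scipy.linalg.eigh` and checks that the Krylov estimate is variational.

```python
# Sketch of the classical post-processing in KSD: build the projected
# matrices H and S from a toy Hamiltonian and solve the generalised
# eigenvalue problem H c = E S c.  All names are illustrative.
import numpy as np
import scipy.linalg

rng = np.random.default_rng(0)
n, d = 8, 3                      # Hilbert-space dimension, subspace dimension
A = rng.standard_normal((n, n))
Hmat = (A + A.T) / 2             # toy Hermitian "Hamiltonian"
phi = rng.standard_normal(n)
phi /= np.linalg.norm(phi)       # reference state |phi>

# Power (P) basis |phi_k> = H^k |phi>, k = 0..d-1
basis = np.stack([np.linalg.matrix_power(Hmat, k) @ phi for k in range(d)])
S = basis @ basis.T              # overlap matrix S_kl = <phi_k|phi_l>
Hproj = basis @ Hmat @ basis.T   # projected Hamiltonian H_kl = <phi_k|H|phi_l>

E = scipy.linalg.eigh(Hproj, S, eigvals_only=True)  # generalised eigenvalues
E_min = E[0]                     # Krylov estimate of the ground-state energy
E_true = np.linalg.eigvalsh(Hmat)[0]
# Rayleigh-Ritz: E_min >= E_true, and E_min converges to E_true as d grows.
```

The generalised-eigenvalue solver assumes S is positive definite; for larger d the power basis becomes nearly linearly dependent, which is exactly the ill-conditioning discussed later in the paper.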
In the literature, subspaces generated by variational quantum circuits [26, 27, 28] and stochastic time evolution [29] are also proposed. Quantum KSD techniques can also be used to mitigate errors caused by imperfect gates [30, 31, 32]. In this work, we focus on Hamiltonian functions because of their similarities to conventional Krylov subspace methods.
Abbr. | Refs. | Basis function f_k(Ĥ)
---|---|---
P | [19, 12] | Power
CP | [20] | Chebyshev polynomial
GP | This work | Gaussian-power
IP | [18] | Inverse power
ITE | [8, 9] | Imaginary-time evolution
RTE | [10, 11, 12, 13, 14, 15, 16, 17] | Real-time evolution
F | [15] | Filter
3 Statistical error and regularisation
In addition to the subspace error, the other error source is the statistical error. In quantum KSD algorithms, the matrices H and S are obtained by measuring qubits at the end of certain quantum circuits. Measurements yield estimators of the two matrices, and errors in these estimators depend on the measurement number. Suppose the budget for each complex matrix entry is N_s measurements (the budget for each of the real and imaginary parts is N_s/2). Variances of matrix entries have upper bounds in the form
(4) |
and
(5) |
where the prefactors depend on the measurement protocol. For example, suppose each entry of H and S can be expressed as a linear combination of unitary operators and we measure each term using the Hadamard test [33]; then the variance of the entry has an upper bound determined by the 1-norm of the coefficients in the combination.
We quantify the error in a matrix with the spectral norm. The distributions of the matrix errors depend not only on the measurement number but also on the correlations between matrix entries. If we measure matrix entries independently, their distributions are uncorrelated. However, certain matrix entries take the same value, and we can measure them collectively. For example, for the P basis, all entries S_kl with the same k + l take the same value ⟨φ|Ĥ^{k+l}|φ⟩. We only need to measure one of them on the quantum computer, and then these entries become correlated. In Appendix B, we analyse the distributions of spectral norms for each basis in Table 1.
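The collective-measurement structure for the power basis can be illustrated concretely. In the sketch below (a toy 6-level Hamiltonian; on a quantum computer each moment would instead come from one collective measurement), the 2d − 1 moments m_j = ⟨φ|Ĥ^j|φ⟩ fully determine both matrices, which are real Hankel matrices.

```python
# For the power basis, S_kl = <phi| H^{k+l} |phi> depends only on k+l, so
# S (and the projected H) is a real Hankel matrix built from the moments
# m_j = <phi| H^j |phi>.  Toy moments are computed exactly here.
import numpy as np
from scipy.linalg import hankel

rng = np.random.default_rng(1)
n, d = 6, 3
A = rng.standard_normal((n, n))
Hmat = (A + A.T) / 2
phi = rng.standard_normal(n)
phi /= np.linalg.norm(phi)

# Moments m_0 .. m_{2d-1}; only 2d of them are needed for both matrices.
m = [phi @ np.linalg.matrix_power(Hmat, j) @ phi for j in range(2 * d)]

S = hankel(m[:d], m[d - 1:2 * d - 1])      # S_kl = m_{k+l}
Hproj = hankel(m[1:d + 1], m[d:2 * d])     # H_kl = m_{k+l+1}
```

Measuring the 2d moments once and filling both Hankel matrices is what makes the entries of S and H correlated under the collective measurement protocol.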
Errors in matrices cause an error in the ground-state energy in addition to the subspace error. It is common that the subspace basis is nearly linearly dependent, which makes the overlap matrix S ill-conditioned. Then, even small errors in the matrices may cause a serious error in the ground-state energy. Thresholding is one method to overcome this problem [8, 16]. In this work, we consider the regularisation method; see Algorithm 1. An advantage of the regularisation method is that, by taking a proper regularisation parameter γ, the resulting energy is variational, i.e. not lower than the true ground-state energy, up to a controllable failure probability.
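A minimal sketch of the regularisation idea, in the spirit of Algorithm 1 but with simplified choices: add a parameter gamma (here hand-picked to dominate the noise norm, rather than taken from Table 2) to the diagonal of the noisy overlap matrix before solving the generalised eigenvalue problem. The toy matrices are built so that the exact generalised eigenvalues are known.

```python
# Regularised KSD sketch: the noisy overlap matrix is shifted by gamma*I
# so that it dominates the noise, keeping the estimate variational.
import numpy as np
import scipy.linalg

rng = np.random.default_rng(2)
d = 4
energies = np.linspace(-1.0, 1.0, d)       # toy spectrum, ground energy -1
# Ill-conditioned Gram/overlap matrix from nearly parallel basis vectors.
V = np.vander(np.linspace(0.9, 1.0, d), d, increasing=True)
S = V.T @ V
H = V.T @ np.diag(energies) @ V            # exact generalised eigvals = energies

sigma = 1e-6                               # statistical noise level
noise = rng.standard_normal((d, d)) * sigma
Sh = S + (noise + noise.T) / 2             # noisy estimator of S
gamma = 10 * sigma * d                     # chosen to exceed the noise norm

E_reg = scipy.linalg.eigh(H, Sh + gamma * np.eye(d), eigvals_only=True)[0]
# Since Sh + gamma*I >= S, every Rayleigh quotient with a negative numerator
# can only increase, so E_reg never drops below the true ground energy -1.
```

The price of regularisation is a bias towards zero in the estimated energy; the paper's choice of gamma (Table 2) balances this bias against the failure probability.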
Protocol | Basis | | |
---|---|---|---
IM (Chebyshev) | All | |||
IM (Hoeffding) | ||||
CM (Real Hankel) | P, GP, IP, ITE | |||
CM (Real symmetric) | CP, F | |||
CM (Complex Toeplitz) | RTE |
We propose to choose the regularisation parameter according to the distributions of the matrix errors. Let δ be the permissible failure probability. We take γ such that
(6)
In Table 2, we list γ satisfying the above condition for each basis. The value of γ depends on the failure probability δ and the measurement number N_s. Taking γ in this way, an upper bound on the energy error is given in the following lemma.
Lemma 1.
Let
(7) |
Under conditions and , the following inequality holds,
(8) |
See Appendix C for the proof. Notice that we can always subtract a sufficiently large positive constant from the Hamiltonian to satisfy the conditions. The upper bound of the energy error in the regularisation method is , up to the failure probability.
4 Analysis of the measurement cost
In this work, we rigorously analyse the number of measurements. Let be the target error in the ground-state energy. We will give a measurement number sufficient (i.e. an upper bound of the measurement number necessary) for achieving the target. For the regularisation method, we already have an upper bound of the error . Therefore, we can derive such a measurement number by solving the equation
(9) |
in which is the unknown. With the solution, we can work out the corresponding measurement number according to Table 2. If matrix entries are measured independently, the total measurement number is ; we have the following theorem.
Theorem 1.
Suppose and . The total measurement number
(10) |
is sufficient for achieving the permissible error and failure probability , i.e. with a probability of at least . Here, , , and is the solution to Eq. (9).
See Appendix C for the proof. Similar to Lemma 1, we can always satisfy the condition by subtracting a sufficiently large positive constant from the Hamiltonian. The above theorem also holds (up to some approximations) for collective measurements if we replace functions and according to Table 2.
4.1 Factoring the measurement cost
To compare different bases, we would like to divide the measurement number into three factors. If matrix entries are measured independently, the first two factors are the same for different bases of Krylov subspaces, and the third factor depends on the basis. We find that the third factor is related to the ill-conditioned problem, and we can drastically reduce it with our measurement-efficient algorithm.
We can always write the total measurement number Eq. (10) as a product of three factors,
(11) |
The first factor is the cost of measuring the energy. Roughly speaking, it is the cost in the ideal case where one can prepare the ground state by an ideal projection; see Appendix D. The spectral norm characterises the range of the energy. We assume that the statistical error and the target error are comparable, according to the standard scaling of the statistical error. There is also an overhead for achieving the permissible failure probability. The success of KSD algorithms depends on a finite overlap between the reference state and the true ground state [6, 16, 20], and this requirement is reflected in the first factor as well. The second factor is the overhead due to measuring matrices: more sources of statistical error (matrix entries) influence the eventual result when the subspace dimension d is larger. The remaining factor
(12) |
is related to the spectrum of S (i.e. the basis), as we will show in Section 7. Notice that this expression of the overhead holds for all the bases. In the numerical results, we find that the typical value of the overhead for certain bases can be as large as to achieve certain . We propose the Gaussian-power (GP) basis (see Table 1), and it can reduce the overhead to about .
5 Measurement-efficient algorithm
As we have shown, the measurement overhead depends on the overlap matrix , i.e. how we choose the basis of Krylov subspace. The main advantage of our algorithm is that we utilise a basis resulting in a small .
To generate a standard Krylov subspace, we choose operators
(13) |
where is a constant up to choice. We call this the GP basis. The corresponding subspace is a conventional Hamiltonian-power Krylov subspace, but the reference state has been effectively replaced by the Gaussian-filtered reference state.
The spectral norm of operators for the GP basis [defined in Eq. (13)] has the upper bound
(14) |
where is Euler’s constant. This upper bound is proven in Lemma 9 in Appendix F. Under this condition, the norm upper bound decreases exponentially with the power k. This property is essential for the measurement efficiency of our algorithm.
We propose to realise the Gaussian-power basis by expressing the operator of interest as a linear combination of unitaries (LCU). The same approach has been taken to realise other bases like power [19, 12], inverse power [18], imaginary-time evolution [34], filter [15] and Gaussian function of the Hamiltonian [35].
Suppose we want to measure the quantity . Here and for and , respectively. First, we work out an expression in the form A = Σ_j c_j U_j, where the U_j are unitary operators and the c_j are complex coefficients. Then, there are two ways to measure the quantity. When the expression has finitely many terms, we can apply A to the state by using a set of ancilla qubits; in the literature, LCU usually refers to this approach [36]. Alternatively, we can evaluate the summation using the Monte Carlo method [37] by averaging Hadamard-test [33] estimates of randomly selected terms. The second way works even for an expression with infinitely many terms and requires only one or zero ancilla qubits [38, 39]. We focus on the Monte Carlo method in this work.
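The Monte Carlo evaluation of an LCU expression can be sketched classically. In the snippet below, exact expectation values stand in for Hadamard-test outcomes, so only the term-selection randomness is simulated; the unitaries and coefficients are arbitrary toy choices.

```python
# Monte Carlo LCU: to estimate <phi| A |phi> with A = sum_j c_j U_j, draw
# j with probability |c_j| / C1 (C1 = sum_j |c_j|), estimate <phi|U_j|phi>
# (here exactly, standing in for a Hadamard test), and reweight by C1 times
# the phase of c_j.  The estimator is unbiased with variance <= C1^2 / N.
import numpy as np

rng = np.random.default_rng(3)
n = 4
thetas = [0.3, 1.1, 2.0]                   # toy diagonal phase unitaries
Us = [np.diag(np.exp(1j * t * np.arange(n))) for t in thetas]
cs = np.array([0.5, -0.3, 0.2 + 0.1j])     # toy LCU coefficients
phi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
phi /= np.linalg.norm(phi)

C1 = np.abs(cs).sum()                      # 1-norm cost of the expression
probs = np.abs(cs) / C1
phases = cs / np.abs(cs)

N = 20000
draws = rng.choice(len(cs), size=N, p=probs)
samples = np.array([C1 * phases[j] * (phi.conj() @ Us[j] @ phi) for j in draws])
estimate = samples.mean()

exact = phi.conj() @ sum(c * U for c, U in zip(cs, Us)) @ phi
```

Since each sample is bounded by C1 in magnitude, the statistical error scales as C1/√N; this is why the 1-norm of the coefficients is the natural "cost" of an LCU expression.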
Now we give our LCU expression. Suppose we have already expressed Ĥ as a linear combination of Pauli operators; it is then straightforward to work out the LCU expression of Ĥ f_k(Ĥ) given the expression of f_k(Ĥ). Therefore, we only give the expression of f_k(Ĥ). Utilising the Fourier transformation of a modified Hermite function (which is proven in Appendix F), we can express f_k(Ĥ) in Eq. (13) as
(15)
where denotes Hermite polynomials [40], is the normalised Gaussian function, , and is the real-time evolution (RTE) operator. Notice that in Eq. (15) is expressed as a linear combination of RTE operators , and the summation in conventional LCU is replaced with an integral over . See Appendix G for how to evaluate this integral using the Monte Carlo method.
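The mechanism behind Eq. (15) can be illustrated with a scalar quadrature check. For k = 1, weighting the real-time phases e^{−ixt} by t·g(t) (g the normalised Gaussian shape; higher k uses Hermite polynomials H_k) and integrating over t reproduces x·e^{−x²/2}, i.e. a power of the "Hamiltonian" times a Gaussian filter in the spectral domain. This is only a scalar illustration of the Fourier identity, not the paper's full operator expression.

```python
# Verify by quadrature that
#   (i / sqrt(2*pi)) * Int dt  t e^{-t^2/2} e^{-i x t}  =  x e^{-x^2/2},
# the k = 1 instance of the Hermite-Gaussian Fourier identity behind the
# GP basis: time-domain weights turn into H^k times a Gaussian of H.
import numpy as np

t = np.linspace(-12, 12, 4001)
dt = t[1] - t[0]
x = np.linspace(-3, 3, 7)                  # stand-in for eigenvalues of H

weight = t * np.exp(-t**2 / 2)             # k = 1 time-domain weight
phases = np.exp(-1j * np.outer(x, t))      # real-time evolution phases
lhs = 1j / np.sqrt(2 * np.pi) * (weight[None, :] * phases).sum(axis=1) * dt
rhs = x * np.exp(-x**2 / 2)

err = np.max(np.abs(lhs - rhs))            # quadrature error, should be tiny
```

Because the weight decays as a Gaussian in t, the integral can be sampled efficiently by the Monte Carlo method described above, with the sampled times fed into real-time-evolution circuits.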
5.1 Real-time evolution
In order to implement the LCU expression Eq. (15), we need to implement the RTE . There are various protocols for implementing RTE, including Trotterisation [41, 42, 43], LCU [36, 44, 45, 37] and qDRIFT [46], etc. In most of the protocols, RTE is inexact but approaches the exact one when the circuit depth increases. Therefore, all these protocols work for our algorithm as long as the circuit is sufficiently deep. In this work, we specifically consider an LCU-type protocol, the zeroth-order leading-order-rotation formula [47].
In the leading-order-rotation protocol, RTE is exact even when the circuit depth is finite. In this protocol, we express RTE in the LCU form
(16) |
where is a rotation or Pauli operator, are complex coefficients, and is the time step number. This exact expression of RTE is worked out in the spirit of the Taylor expansion and contains infinitely many terms. Notice that the time step number determines the circuit depth. For details of the zeroth-order leading-order-rotation formula, see Ref. [47] or Appendix G in this paper, where Eq. (80) gives the specific form of Eq. (16). Substituting Eq. (16) into Eq. (15), we obtain the eventual LCU expression of f_k(Ĥ), a concatenation of two LCU expressions, i.e.
(17)
With the above expression, we have written f_k(Ĥ) as a linear combination of unitaries. Therefore, we can use the Monte Carlo method to realise it: we sample a term according to the coefficients, and then we implement the corresponding unitary on the quantum computer.
5.2 Bias and variance
Consider the estimator of the measured quantity. There are two types of errors in the estimator: bias and variance. Because RTE is exact in the leading-order-rotation protocol, the estimator is unbiased. Therefore, we can focus on the variance from now on.
If we use the Monte Carlo method and the one-ancilla Hadamard test to evaluate the LCU expression of , the variance has the upper bound
(18) |
where N is the measurement number, and the cost factor is the 1-norm of coefficients in the LCU expression. Here, the expression is a summation of unitary operators; the generalisation to an integral is straightforward. We call this 1-norm the cost of the LCU expression. We remark that the variance in this form is universal and valid for all algorithms with some proper cost factor.
The cost of an LCU expression is bounded below by the spectral norm: since ‖A‖ ≤ Σ_j |c_j|, we immediately conclude that the cost is at least ‖A‖. Therefore, the best we can achieve is a carefully designed LCU expression whose cost is as close as possible to ‖A‖. Only when the cost decreases exponentially with k is it possible to measure with an error decreasing exponentially with k.
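The inequality ‖A‖ ≤ Σ_j |c_j| follows from the triangle inequality and ‖U_j‖ = 1; a toy two-qubit Pauli decomposition makes it concrete.

```python
# The 1-norm of LCU coefficients upper-bounds the spectral norm:
# ||A|| = ||sum_j c_j U_j|| <= sum_j |c_j| = C1, since each ||U_j|| = 1.
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
paulis = [np.kron(Z, Z), np.kron(X, I2), np.kron(I2, X)]   # toy unitaries
cs = np.array([1.0, -0.6, 0.3])                            # toy coefficients

A = sum(c * P for c, P in zip(cs, paulis))
C1 = np.abs(cs).sum()                # cost of this LCU expression
spec = np.linalg.norm(A, 2)          # spectral norm (largest singular value)
```

The gap between `spec` and `C1` is the avoidable part of the measurement cost; the GP construction is designed to keep this gap to a constant factor while the norm itself shrinks exponentially with k.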
In our algorithm, we find that the cost has the form
(19) |
for and , respectively. is the cost due to , and
(20)
This upper bound holds when , which is a requirement on the circuit depth. The proof is in Appendix G. With the upper bound, we find that we already approach the lower bound on the variance given by the spectral norm (the difference is a factor of ). As a result, the variance decreases exponentially with and .
Detailed pseudocode and analysis are given in Appendix G. There are two ways to apply Theorem 1: first, we can take and ; second, we can rescale the basis by taking . Matrix entries and of the rescaled basis have variance upper bounds and , respectively. Therefore, and for the rescaled basis. We take the rescaled basis in the numerical study.
6 Measurement overhead benchmarking
With Theorem 1, we can benchmark the measurement cost in quantum KSD algorithms listed in Table 1. We focus on the overhead factor , while ignoring the slight differences in the other two factors in different algorithms (See Table 2). Given a target error , first we work out the solution to Eq. (9), then is calculated according to Eq. (12).
Two models of strongly correlated systems, namely, the anti-ferromagnetic Heisenberg model and Hubbard model [7, 48, 49, 50], are used in benchmarking. For each model, the lattice is taken from two categories: regular lattices, including chain and ladder, and randomly generated graphs. We fix the lattice size such that the Hamiltonian can be encoded into ten qubits. For each Hamiltonian specified by the model and lattice, we evaluate all quantum KSD algorithms with the same subspace dimension taken from . In total, instances of are generated to benchmark quantum KSD algorithms. The details of the numerical calculations including parameters taken in different algorithms are given in Appendix H.
As an example, Fig. 1 illustrates the result of one instance. We find that the error decreases with the cost overhead. When the target error is relatively high, the algorithms have similar performance, e.g. when , the cost is sufficient in most algorithms. However, in comparison with the others, the error in the GP algorithm decreases much faster. When the cost overhead is sufficiently large, the cost required to achieve the error always follows a scaling of . However, the size at which different algorithms converge to this behaviour varies; see Fig. 9. The GP algorithm converges with a smaller cost compared to other algorithms. In this instance, the GP algorithm has already converged when , whereas the other algorithms have not yet converged, resulting in poorer performance.

To confirm this advantage of the GP algorithm, we implement the simulation for a number of instances. We choose the power algorithm as the standard for comparison. We remark that the power algorithm and the classical Lanczos algorithm yield the same result when the implementation is error-free (i.e. without statistical error and rounding error), and the eventual error in this ideal case is the subspace error. Given the model, lattice and subspace dimension, we compute the subspace error of the power algorithm. Then, we take the permissible error and compute the overhead factor for each algorithm. The resulting empirical distribution of the overhead factor is shown in Fig. 2.

From the distribution, we find that our Gaussian-power algorithm has the smallest measurement overhead, and the overhead is smaller than for almost all instances. Among the other algorithms, the filter (F) algorithm performs best. Part of the reason is that we have taken the optimal parameter found by grid search, the details of which are discussed in Appendix H. The median overhead is about for the GP algorithm, for the F algorithm and as large as for some other algorithms. Comparing the GP and F algorithms, although the median values are similar, the cost overhead is larger than in more than of instances for the F algorithm, which never happens in the GP algorithm.
The performance of the GP algorithm depends on the choice of parameters. The regularisation parameter γ is taken according to Table 2. To choose the other parameters, we need prior knowledge about the true ground-state energy, which can be obtained from certain classical algorithms computing the ground state or from the quantum KSD algorithm with a smaller subspace dimension. Although we prefer an estimate close to the true ground-state energy, the numerical results suggest that a relatively large difference ( of the entire spectrum) is tolerable. The Gaussian factor in the GP basis acts as a filter on the spectrum. Therefore, if we take an overly large width parameter, we may have a vanishing overlap between the effective reference state and the ground state. To avoid this problem, we need to assume an upper bound, and we suggest taking the parameter in a corresponding range; see Appendix K. Although we take the same parameters for all k in each instance in the benchmarking, one can flexibly choose different parameters in practice.
7 Composing a projector
We show that the overhead factor is related to the spectrum of S. Let us consider a small and the Taylor expansion of . The minimum value of the zeroth-order term is according to the Rayleigh quotient theorem [51]. Taking this minimum value, we have , where is the first derivative. The solution to is , then
(21) |
where under assumptions and . Eq. (21) is derived in Appendix I. Therefore, when S is close to singular, the overhead can be large.
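The near-singularity of S for the plain power basis is easy to reproduce numerically. The toy example below (a random normalised Hamiltonian and random reference state, all choices illustrative) shows the smallest eigenvalue of the overlap matrix collapsing as the subspace dimension grows.

```python
# The power-basis overlap matrix is a Gram matrix of increasingly parallel
# vectors, so its smallest eigenvalue collapses rapidly with the subspace
# dimension d -- the source of the measurement overhead for such bases.
import numpy as np

rng = np.random.default_rng(4)
n = 32
A = rng.standard_normal((n, n))
Hmat = (A + A.T) / 2
Hmat /= np.linalg.norm(Hmat, 2)            # normalise so ||H|| = 1
phi = rng.standard_normal(n)
phi /= np.linalg.norm(phi)

lam_min = []
for d in (2, 4, 6, 8):
    basis = np.stack([np.linalg.matrix_power(Hmat, k) @ phi for k in range(d)])
    S = basis @ basis.T
    lam_min.append(np.linalg.eigvalsh(S)[0])
# lam_min shrinks by orders of magnitude as d grows
```

Any statistical noise comparable to this smallest eigenvalue destabilises the generalised eigenvalue problem, which is why the measurement number must grow as S approaches singularity.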
We can explain the measurement efficiency of our algorithm by composing a projector. In our algorithm with the rescaled basis, the solution is a state in the form . Ideally, the linear combination of realises a projector onto the ground state, i.e. . Then, up to a phase and . Then (notice that ), we can find that determines . When is smaller, the required for composing the projector is smaller. In our algorithm, decreases exponentially with , and the rate of decrease is controllable via the parameter. In this way, our algorithm results in a small overhead.
To be concrete, let us consider a classic way of composing a projector from Hamiltonian powers using Chebyshev polynomials (CPs), which has been used for proving the convergence of the Lanczos algorithm [52] and of the RTE and CP quantum KSD algorithms [16, 20]. Without loss of generality, we suppose . An approximate projector in the form of Chebyshev polynomials is
(22) |
where , is the energy gap between the ground state and the first excited state, , and is an operator with the upper bound . Notice that the error depends on , and its upper bound decreases exponentially with if the gap is finite. For simplicity, we focus on the case that in the Hamiltonian power. Then, in the expansion into Hamiltonian powers, the coefficients have upper bounds increasing exponentially with d. See Appendix J. In the LCU and Monte Carlo approach, large coefficients lead to large variance. Therefore, it is difficult to realise this projector because of the exponentially increasing coefficients.
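The coefficient growth is easy to check: expanding T_d(x) in monomials x^k, the leading coefficient is 2^(d−1), and the expansion can be obtained directly with `numpy.polynomial.chebyshev.cheb2poly`.

```python
# Coefficients of the Chebyshev polynomial T_d(x) in the monomial basis
# grow exponentially with d (leading coefficient 2^(d-1)), which is what
# makes realising the CP projector from plain Hamiltonian powers costly.
import numpy as np
from numpy.polynomial import chebyshev

def monomial_coeffs(d):
    c = np.zeros(d + 1)
    c[d] = 1.0                          # Chebyshev-basis vector for T_d
    return chebyshev.cheb2poly(c)       # coefficients in the monomial basis

lead = [abs(monomial_coeffs(d)[-1]) for d in range(1, 11)]
# lead doubles with every degree: 1, 2, 4, ..., 512
```

In the Monte Carlo LCU picture, these coefficients enter the 1-norm cost, so the variance of the plain-power decomposition grows exponentially with the polynomial degree.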
In our algorithm, the exponentially decreasing norm can cancel out the large coefficients. We can compose the CP projector with an additional Gaussian factor. Taking and , we have . Because the Gaussian factor also approximately projects onto the ground state, the overall operator is a projector with a smaller error than the CP projector. The corresponding overhead factor is
(23) |
When is sufficiently large, . This example shows that the Gaussian-power basis can compose a projector with a small measurement cost.
By composing a projector with an explicit expression, we can obtain the worst-case performance of the algorithm. Similar analyses are reported in Refs. [16] and [20] for RTE and CP algorithms, respectively. Because the projector is approximate, it results in a finite error in the ground-state energy . By solving the generalised eigenvalue problem, we can usually work out a better solution, i.e. the corresponding energy error is lower than . Our result about the measurement cost is applicable to a target error that is not limited by . Theorem 1 holds even when , which is beyond the explicit projector analysis.
The method of constructing polynomial projection operators mentioned above applies not only to the GP algorithm but also to the P algorithm. Since the variance of the P algorithm does not decrease exponentially with the power, our algorithm has an advantage over the P algorithm in this regard. We remark that for this particular projector, the decomposition into Hamiltonian powers is costly, and a better decomposition may exist. For comparison with other algorithms, we have provided a numerical benchmark in the previous section. Additionally, for the CP and RTE algorithms, it is possible to construct approximate projection operators with expansion coefficients bounded by a constant [16, 20]. In this case, further methods are needed to theoretically explain the improvement in measurement cost of our algorithm over these algorithms.
8 Discussions and Conclusions
We can apply Theorem 1 to estimate the measurement cost of various quantum KSD algorithms for an arbitrary target error and subspace dimension d. In this paper, we focus on a target error comparable to that of the Lanczos algorithm at the same d, i.e. the subspace error. The Lanczos algorithm is a well-known classical algorithm, and it is natural to demand that quantum KSD algorithms, as quantum counterparts of the Lanczos algorithm, reach a comparable target error. In this case, the measurement cost of the GP algorithm is observed to be much smaller than that of the others in the benchmarking.
In certain practical applications, there is a definite target error ε, e.g. chemical accuracy. In these cases, the target error no longer depends on d. According to the Kaniel–Paige convergence theory [52], by increasing the subspace dimension d and measurement number N, we can always achieve an arbitrary finite target error. Our results focus on relatively small d, in which case the GP algorithm requires a much smaller measurement number than the others, as illustrated in Fig. 2. We show in Fig. 9 that the cost converges when d is sufficiently large, as demonstrated in Refs. [20, 53]. Increasing d can weaken the advantage of the GP algorithm in the measurement cost. However, increasing d not only brings more matrix entries to measure (i.e. a larger overhead factor) but also implies deeper quantum circuits and more gate noise.
In conclusion, we have proposed a regularised estimator of the ground-state energy (see Algorithm 1). Even in the presence of statistical errors, the regularised estimator is variational (i.e. equal to or above the true ground-state energy) and has a rigorous upper bound. Because of these properties, it provides a universal way of analysing the measurement cost in quantum KSD algorithms, with the result summarised in Theorem 1. This approach can be generalised to analyse the effect of other errors, e.g. imperfect quantum gates. The robustness to statistical errors in our algorithm implies the robustness to other errors.
To minimise the measurement cost, we have proposed a protocol for constructing Hamiltonian power with an additional Gaussian factor. This Gaussian factor is important because it leads to the statistical error decreasing exponentially with the power. Consequently, we can construct a Chebyshev-type projector onto the ground state with a small measurement cost. We benchmark quantum KSD algorithms with two models of quantum many-body systems. We find that our algorithm requires the smallest measurement number, and a measurement overhead of a hundred is almost always sufficient for approaching the theoretical-limit accuracy of the Lanczos algorithm. In addition to the advantage over the classical Lanczos algorithm in scalability, our result suggests that the quantum algorithm is also competitive in accuracy at a reasonable measurement cost.
This paper primarily focuses on the ground-state problem. Notably, our quantum KSD algorithm holds promise for application to low-lying excited states as well. We leave it for future research.
Appendix A General formalism of KSD algorithm
Given a Hamiltonian Ĥ, a reference state |φ⟩ and an integer d, the standard Krylov subspace is span{|φ⟩, Ĥ|φ⟩, …, Ĥ^{d−1}|φ⟩}. This can be seen as taking polynomials f_k(Ĥ) = Ĥ^k. The subspace is the same when {f_k} is an arbitrary set of linearly independent k-degree polynomials. Any state in the Krylov subspace can be expressed as
(24) |
The state that minimises the Rayleigh quotient is regarded as the approximate ground state of the Hamiltonian , and the corresponding energy is the approximate ground-state energy, i.e.
(25)
(26)
Definition 1 (Rayleigh quotient).
For any matrix , and any non-zero vector , the Rayleigh quotient is defined as
(27) |
The approximate ground state energy can be rewritten as
(28) |
where and are Hermitian matrices as defined in the main text. Notice that is the minimum eigenvalue of Eq. (2).
When d increases, Ĥ^d|φ⟩ converges to the eigenstate (having non-zero overlap with |φ⟩) with the largest absolute eigenvalue. This eigenstate is usually the ground state (if not, one can shift the Hamiltonian by a constant). Therefore, it is justified to express the ground state as a linear combination of the basis states as long as |φ⟩ has a non-zero overlap with the true ground state. However, this convergence causes the inherent problem that basis vectors of the Krylov subspace are nearly linearly dependent. Then, the overlap matrix S, which is a Gram matrix, becomes nearly singular and has a large condition number. As a result, the denominator of the Rayleigh quotient can be very small, and a tiny error in computing it may cause a large deviation of the approximate ground-state energy.
According to the Rayleigh–Ritz theorem (Rayleigh quotient theorem) [51], the minimisation problem is equivalent to the generalised eigenvalue problem Eq. (2). The generalised eigenvalue problem can be reduced to an ordinary eigenvalue problem, but the singularity issue remains. For example, one can use the Cholesky decomposition to express S as the product of an upper-triangular matrix and its conjugate transpose, and transform Eq. (2) into an ordinary eigenvalue problem accordingly. The Cholesky decomposition, however, requires the matrix S to be positive definite. When S is nearly singular, the Cholesky decomposition is unstable. The QZ algorithm is a more numerically stable method, and it is built into most standard numerical libraries. One way to address the singularity issue is to add a positive diagonal matrix to S. In this work, we also add a diagonal matrix to ensure that the Rayleigh quotient is variational, i.e. the energy is not lower than the ground-state energy (with a controllable probability). An alternative way to address the singularity issue is the so-called thresholding method; see Refs. [8, 16].
Appendix B Norm distributions of noise matrices
In this section, we analyse the distributions of the noise-matrix norms. For the two matrices, there are in total matrix entries. We can measure each of them independently on the quantum computer. Under this independent measurement protocol, we obtain the most general results, which apply to all the bases in Table 1 and are summarised in Lemma 2 and Lemma 3. Some of the matrix entries are equivalent. To reduce the measurement cost, we need only measure one entry in each set of equivalent matrix entries; we call this the collective measurement protocol. Because the equivalent-entry sets depend on the basis, the corresponding distributions are basis-dependent. The results are summarised in Lemma 4 and Lemma 5; we remark that these results are obtained under the assumption that the statistical noise is Gaussian.
Lemma 2 (Based on Chebyshev’s inequality).
Let be the failure probability. The inequality (6) holds when .
Proof.
Recall the variance upper bounds of and given in Eqs. (4) and (5), respectively. When ,
(29)
(30)
According to Chebyshev’s inequality, we have
(31)
(32)
Since matrix entries are measured independently, estimators or are independent. We have
(35)
(36)
where Bernoulli’s inequality is used.
Let be an matrix with entries ; its spectral norm satisfies
(37)
This is because of the well-known relation , where the Frobenius norm is . Therefore, when and for all and , we have and . ∎
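The norm chain used here can be sanity-checked numerically; the snippet below (with an arbitrary random matrix) only illustrates the well-known inequalities between the spectral norm, the Frobenius norm, and the maximum entry, not any quantity from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 6
A = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))

spec = np.linalg.norm(A, 2)          # spectral norm (largest singular value)
frob = np.linalg.norm(A, 'fro')      # Frobenius norm
entry_bound = D * np.abs(A).max()    # D * max_jk |a_jk|

assert spec <= frob + 1e-9           # ||A||_2 <= ||A||_F
assert frob <= entry_bound + 1e-9    # ||A||_F <= D * max |a_jk|
```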
The above proof applies to all bases and measurement protocols and holds without any assumption on the statistical noise. As the most general result, Lemma 2 is used to prove Theorem 1 in Appendix C.
To measure matrix entries on a quantum computer, a practical method is to use Hadamard-test circuits. Then, the measurement outcomes are independent random variables , i.e. the random variables are bounded even when taking into account other factors (we will give an example later; see Algorithm 2, where the phase of can be absorbed into ). In this case, we can use Hoeffding’s inequality [55] to obtain an improved result.
Lemma 3 (Based on Hoeffding’s inequality).
Proof.
For each matrix entry, its real (imaginary) part is computed by taking the mean of random variables. According to Hoeffding’s inequality, we have,
(38)
Similarly,
(39)
Considering the equivalence between matrix entries, we find that and are real Hankel matrices for the GP, P, IP and ITE bases in Table 1, real symmetric matrices for the CP and F bases, and complex Hermitian-Toeplitz matrices for the RTE basis. Under the collective measurement protocol, the random matrices and are of the same type as and . To analyse the distribution of these random matrices, we assume that the distributions of and are Gaussian with zero mean. Notice that inequivalent entries are independent random variables. Then, we obtain the following results.
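Under the collective measurement protocol, the structured matrices are assembled from the inequivalent entries alone. The sketch below (dimension and values are arbitrary stand-ins for measured data) shows how a real Hankel matrix is built from 2D−1 real values and a complex Hermitian-Toeplitz matrix from D complex values:

```python
import numpy as np
from scipy.linalg import hankel, toeplitz

rng = np.random.default_rng(1)
D = 4

# Collective protocol for a real Hankel matrix (GP, P, IP, ITE bases):
# only the 2D-1 inequivalent entries h_{j+k} need to be measured.
h = rng.normal(size=2 * D - 1)           # simulated measured values
S_hankel = hankel(h[:D], h[D - 1:])      # S[j, k] = h[j + k]
assert np.allclose(S_hankel, S_hankel.T)

# Collective protocol for a complex Hermitian-Toeplitz matrix (RTE basis):
# only the first column c_j = S[j, 0] is measured (c_0 must be real).
c = rng.normal(size=D) + 1j * rng.normal(size=D)
c[0] = c[0].real
S_toep = toeplitz(c)                     # row taken as conj(c) by default
assert np.allclose(S_toep, S_toep.conj().T)
```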
Lemma 4 (Real symmetric/Hankel matrices).
Let be the failure probability. If and are random real symmetric matrices following the Gaussian distribution, then the inequality (6) holds when .
Proof.
Applying the matrix Gaussian series theorem [56] to the real symmetric matrix with Gaussian noise, we have
(40)
where is used. Similarly,
(41)
Note that the real Hankel matrices with Gaussian noise yield the same inequalities [57].
To make sure and with a probability of at least , we take such that
(42)
Then,
(43)
∎
Lemma 5 (Complex Hermitian-Toeplitz matrices).
Let be the failure probability. If and are random complex Hermitian-Toeplitz matrices following the Gaussian distribution, then the inequality (6) holds when .
Proof.
For complex Hermitian-Toeplitz matrices, the matrix Gaussian series theorem [56] gives
(44)
Similarly,
(45)
To make sure and with a probability of at least , we take
(46)
∎
Appendix C Details on the analysis of measurement cost
In this section, we develop theoretical tools for analysing the measurement cost in quantum KSD algorithms. At the end of this section, we give the proofs of Lemma 1 and Theorem 1.
Definition 2.
For simplification, we define the following functions:
(47)
(48)
(49)
Here, functions are defined in Eq. (27). is the minimum eigenvalue of the generalised eigenvalue problem in Eq. (2); is the minimum eigenvalue of the generalised eigenvalue problem solved in Algorithm 1; and Eq. (49) is equivalent to Eq. (7).
Lemma 6.
If and , the following statements hold:
- (1) ;
- (2) If , then ;
- (3) If , then .
Proof.
Consider the error in the denominator,
(50)
from which we obtain
(51)
Similarly, we have
(52)
Let , , and be positive numbers; it can be shown that
(53)
If , then . Under this condition, by using Eqs. (51)-(53), we get
(54)
i.e. . Both the first and second statements are proven.
If , then . Therefore, . The third statement is proven. ∎
Under the condition , Lemma 6 states , where the first inequality shows that the algorithm is variational, and the second inequality gives the error bound. In the following, we will show that this relation still holds after minimisation.
Lemma 7.
Under conditions
- (1) and , and
- (2) and ,
the following statement holds,
(55)
Proof.
According to the first statement in Lemma 6, , . Since and , we have .
Lemma 8.
Suppose is invertible, and . Eq. (9) has a positive solution in , and the solution is unique.
Proof.
Let .
First, we prove that is a strictly monotonically increasing function of when is in the range . Let and be in the range and . By Eq. (53), we have . Since ,
(57)
Second, we prove that is a continuous function of when . When and , is a continuous function of . Consequently, , such that . Then,
(58)
Therefore, , such that
(59)
Combining the two, we conclude that is a strictly monotonically increasing continuous function of when and . Taking , we have . Therefore, when and , the equation has a unique solution. ∎
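Numerically, the unique root of a strictly monotone continuous function on a bracketing interval can be located by standard bracketed root finding. The function below is an illustrative stand-in, not the actual left-hand side of Eq. (9):

```python
from scipy.optimize import brentq

# Illustrative stand-in: any strictly increasing continuous function with a
# sign change on the bracket has exactly one root there, as in Lemma 8.
def f(x):
    return x**3 + x - 2.0     # strictly increasing on (0, inf)

root = brentq(f, 0.0, 2.0)    # f(0) < 0 < f(2) guarantees the bracket
assert abs(f(root)) < 1e-10
```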
C.1 Proof of Lemma 1
C.2 Proof of Theorem 1
Proof.
Suppose and . The existence of the solution is proved in Lemma 8. With the solution , Eq. (9) is satisfied.
If we take the measurement number
(60)
and hold up to the failure probability as proved in Lemma 2. Then, according to Lemma 7, holds up to the failure probability.
In the independent measurement protocol, the total number of measurements is
(61)
Notice that for each matrix entry or , we need measurements for its real part and measurements for its imaginary part. There are two matrices, and each matrix has entries. To match Eq. (61), we can take and in Eq. (10). ∎
C.3 Remarks on the measurement number
First, in our algorithm (i.e. GP) and some other KSD algorithms (i.e. P, CP, IP, ITE and F), matrices and are real. In this case, we only need to measure the real part. The cost is directly reduced by a factor of two. Only measuring the real part also reduces variances by a factor of two, i.e. from and to and . Because of the reduced variance, the cost is reduced by another factor of two. Overall, the measurement cost is reduced by a factor of four in algorithms using real matrices.
Second, when the failure probability is low, typically and . In this case, the error is overestimated in , and the typical error is approximately given by
(62)
Notice that a factor of two has been removed from the denominator and numerator compared with . If we consider the typical error instead of the error upper bound , the required sampling cost is reduced by a factor of four.
Appendix D Cost for measuring the energy with an ideal projection
We consider the ideal case in which we can realise an ideal projection onto the true ground state. To use the above results, we assume that is the projection operator and take in the KSD algorithm. Then, we have
(63)
(64)
and , i.e. .
We suppose and . When we take , . Notice that . Accordingly, the total measurement number in the ideal case has an upper bound
(65)
which is the first factor in Eq. (11).
Appendix E Optimised and factors
Theorem 1 is proved using the bound given by Lemma 2. However, Lemma 2 aims at a general result, leading to a conservative estimation of and . In this section, based on other results in Appendix B, we show that these two factors can be further reduced. We summarise the reduced and in Table 2 as well.
For the independent measurement protocol, Lemma 3 guarantees that the lower bound of the total measurement number is
(66)
Assuming and neglecting in the logarithm yields and .
Definition 3.
Let and . If , then is a real Hankel matrix.
In our algorithm (i.e. GP) and some other KSD algorithms (i.e. P, IP and ITE), and are real Hankel matrices. The collective measurement protocol only requires measuring matrix entries for each matrix to construct and . Specifically, we measure and with . Then we take and for all and . According to Lemma 4, the total measurement number is
(67)
Assuming and neglecting in the logarithm yields and .
For the CP and F algorithms, and are real symmetric matrices. We need to measure matrix entries to construct and , respectively. According to Lemma 4, the total measurement number is
(68)
Assuming and neglecting in the logarithm yields and .
The RTE algorithm is the only one having complex matrices and . Considering its Hermitian–Toeplitz structure, we need to measure real and imaginary matrix entries to estimate and , respectively. According to Lemma 5, the total measurement number is
(69)
Assuming and neglecting in the logarithm yields and .
Finally, the optimised and are listed in Table 2. Compared with cases, Hankel matrices have smaller and . For the GP algorithm, taking and is sufficient.
Appendix F Gaussian-power basis
F.1 Norm upper bound
F.2 The integral
The generating function for the Hermite polynomials is [40]
(72)
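The generating function of the physicists' Hermite polynomials, exp(2xt − t²) = Σₙ Hₙ(x) tⁿ/n!, can be checked numerically with a truncated series at arbitrary sample values (SciPy's `eval_hermite` is used here for convenience):

```python
import math
import numpy as np
from scipy.special import eval_hermite

# Verify exp(2*x*t - t**2) = sum_n H_n(x) * t**n / n! by truncation.
x, t = 0.7, 0.3
series = sum(eval_hermite(n, x) * t**n / math.factorial(n) for n in range(30))
assert abs(series - np.exp(2 * x * t - t**2)) < 1e-12
```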
Lemma 10.
(73)
holds for arbitrary .
Proof.
Lemma 11.
Eq. (15) is true.
Appendix G Algorithm and variance
In this section, we first review the zeroth-order leading-order-rotation formula [47]; then we analyse the variance and work out the upper bounds of the cost; finally, we give the pseudocode of our measurement-efficient algorithm.
G.1 Zeroth-order leading-order-rotation formula
Assume that the Hamiltonian is expressed as a linear combination of Pauli operators, i.e.
(75)
The Taylor expansion of the time evolution operator is
(76)
where the summation of high-order terms is
(77)
The leading-order terms can be expressed as a linear combination of rotation operators,
(78)
where , and . The zeroth-order leading-order-rotation formula is
(79)
which is a linear combination of rotation and Pauli operators. Accordingly, for the evolution time and time step number , the LCU expression of the time evolution operator is
(80)
The above equation is the explicit form of the expression referred to in the main text.
Now, we consider the cost factor, i.e. the 1-norm of coefficients in an LCU expression. For Eq. (79), the corresponding cost factor is
(81)
Accordingly, the cost factor of Eq. (80) is .
Lemma 12.
(82)
Proof.
Let , thus . Then
(83)
Define the function
(84)
It follows that and
(85)
(86)
Let . Since , it is easy to see that when , which indicates that when . Based on Eq. (85) and the fact that , we have when . As a result, when , which means . Therefore,
(87)
∎
According to Lemma 12, the cost factor has the upper bound
(88)
Therefore, the cost factor approaches one when the time step number increases.
G.2 Variance analysis
In this section, we first prove the general upper bound of the variance; then we work out the upper bound of the cost .
G.2.1 Variance upper bound
[Figure 3: the one-ancilla Hadamard-test circuit]
Given the LCU expression , we can compute using the Monte Carlo method and the one-ancilla Hadamard-test circuit shown in Fig. 3. In the Monte Carlo calculation, we sample the unitary operator with a probability of in the spirit of importance sampling. The corresponding algorithm is given in Algorithm 2.
Lemma 13.
Proof.
First, we rewrite the LCU expression as
(89)
Then,
(90)
Each unitary operator has a corresponding Hadamard-test circuit denoted by , which is shown in Fig. 3. When the ancilla qubit is measured in the basis, the measurement has a random outcome . Let be the probability of the measurement outcome in . According to Ref. [33], we have
(91)
The probability of is . Using Eqs. (90) and (91), we have
(92)
The corresponding Monte Carlo algorithm is given in Algorithm 2.
The estimator is unbiased. According to Eq. (92), the expected value of is . Therefore, the expected value of is also . Notice that is the average of over samples.
The variance of is
(93)
∎
Here, we remark that summation-form LCU expressions can be generalised to integral-form LCU expressions, which inherit the same properties as the former.
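The importance-sampling idea behind Algorithm 2 can be sketched as a classical simulation. This is not the paper's pseudocode: the function name and the toy LCU M = 0.5 I + 0.5 X are ours, and each Hadamard-test shot is emulated by drawing ±1 with the correct mean:

```python
import numpy as np

rng = np.random.default_rng(2)

def lcu_real_part_mc(coeffs, unitaries, phi, shots):
    """Unbiased Monte Carlo estimate of Re <phi| sum_j c_j U_j |phi>.
    Each shot samples U_j with probability |c_j|/lam and simulates one
    Hadamard-test outcome m = +/-1 whose mean is Re(e^{i theta_j} <phi|U_j|phi>)."""
    mags = np.abs(coeffs)
    lam = mags.sum()                          # 1-norm cost factor
    probs = mags / lam
    total = 0.0
    for _ in range(shots):
        j = rng.choice(len(coeffs), p=probs)  # importance sampling
        mean = np.real(np.exp(1j * np.angle(coeffs[j]))
                       * (phi.conj() @ unitaries[j] @ phi))
        m = 1 if rng.random() < (1 + mean) / 2 else -1   # one Hadamard-test shot
        total += lam * m                      # rescale by the cost factor
    return total / shots

# Toy check: M = 0.5*I + 0.5*X on one qubit, |phi> = |0>, exact value 0.5.
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
phi = np.array([1.0, 0.0])
est = lcu_real_part_mc(np.array([0.5, 0.5]), [I2, X], phi, shots=20000)
```

The per-shot variance is bounded by the squared cost factor, consistent with the variance bound in Lemma 13.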
G.2.2 Cost upper bound
We use a two-level LCU to construct : first, is an integral of RTE operators ; second, the RTE is realised using the leading-order-rotation method. Substituting Eq. (80) into Eq. (15), we can obtain the eventual LCU expression of , i.e.
(94)
Substituting LCU expressions of and as well as the expression of into , we can obtain the LCU expression of . Then, , respectively, where
(95)
Here, we have replaced with in Eq. (20).
To work out the cost of , we have used the fact that the cost factor is additive and multiplicative. Suppose the LCU expressions of and are and , respectively. The cost factors of and are and , respectively. Substituting the LCU expressions of and into , the expression of reads . Then, the cost factor of is . Substituting the LCU expressions of and into , the expression of reads . Then, the cost factor of is .
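The additivity and multiplicativity of the cost factor amount to elementary properties of the coefficient 1-norm, which the following snippet (with arbitrary coefficients) verifies:

```python
import numpy as np

# Cost factor = 1-norm of LCU coefficients: additive under sums of LCU
# expressions and multiplicative under products.
a = np.array([0.5, -0.3, 0.2j])   # coefficients of an LCU for some operator A
b = np.array([1.0, -0.25])        # coefficients of an LCU for some operator B

lam_a = np.abs(a).sum()
lam_b = np.abs(b).sum()

# Product AB: the coefficients are all pairwise products a_j * b_k.
lam_prod = np.abs(np.outer(a, b)).sum()
assert np.isclose(lam_prod, lam_a * lam_b)

# Sum A + B: the coefficient lists are concatenated.
lam_sum = np.abs(np.concatenate([a, b])).sum()
assert np.isclose(lam_sum, lam_a + lam_b)
```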
The upper bound of is
(96)
where . Here, we have used Lemma 12. Notice that increases monotonically with (i.e. decreases monotonically with ). Taking (i.e. ), we numerically evaluate the upper bound and plot it in Fig. 4. We can find that
(97)
[Figure 4: numerically evaluated upper bound of the cost]
G.3 Pseudocode
Given the LCU expression of , we can compute according to Algorithm 2. We recapitulate how the LCU expression of is composed: substituting Eq. (80) into Eq. (15) yields the eventual LCU expression of ; substituting the LCU expressions of and , as well as the expression of , into and (corresponding to and , respectively) yields the LCU expression of . Algorithm 2 does not involve details of the LCU expression; in this section, we give pseudocodes that do.
We remark that matrix entries of the rescaled basis are and , where and are computed according to the pseudocodes.
In pseudocodes, we use notations and to represent the Hamiltonian . We define probability density functions
(98)
and probabilities
(99)
(100)
We use to denote a subroutine that returns with a probability of . The LCU expression of is sampled according to Algorithms 5, 6 and 7. In Algorithm 5, we generate a random time according to the distribution . In Algorithm 6, we realise the RTE by steps of the leading-order rotation. In Algorithm 7, we randomly sample rotation or Pauli operators according to Eq. (80).
Appendix H Details in numerical calculation
In this section, first, we describe models and reference states taken in the benchmarking, and then we give details about instances and parameters.
H.1 Models and reference states
Two models are used in the benchmarking: the anti-ferromagnetic Heisenberg model
(101)
where , and are Pauli operators of the th qubit; and the Hubbard model
(102)
where is the annihilation operator of the th orbital and spin . For the Heisenberg model, we take the spin number to be ten, and for the Hubbard model, we take the orbital number to be five. Then both models can be encoded into ten qubits. For the Hubbard model, we take . Without loss of generality, we normalise the Hamiltonian for simplicity: we take such that , i.e. the eigenvalues of are in the interval .
For each model, we consider three types of lattices: chain, ladder and randomly generated graphs, see Fig. 5. We generate a random graph in the following way: i) for a vertex , randomly choose a vertex and add the edge ; ii) repeat step i twice for the vertex ; iii) implement steps i and ii for each vertex . On a graph generated in this way, each vertex is connected to four vertices on average. Notice that some pairs of vertices are connected by multiple edges, in which case the interaction between the two vertices is amplified by the number of edges.
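As an illustration of this setup (the helper names are ours, and a six-qubit chain is used instead of the paper's ten qubits to keep the example fast), the Heisenberg Hamiltonian, the spectral normalisation, and the random-multigraph procedure can be sketched as:

```python
import numpy as np
import scipy.sparse as sp

X = sp.csr_matrix(np.array([[0.0, 1.0], [1.0, 0.0]]))
Y = sp.csr_matrix(np.array([[0, -1j], [1j, 0]]))
Z = sp.csr_matrix(np.array([[1.0, 0.0], [0.0, -1.0]]))

def two_site(op, i, j, n):
    """op_i op_j acting on an n-qubit register."""
    mats = [sp.identity(2, format='csr')] * n
    mats[i] = op
    mats[j] = op
    out = mats[0]
    for m in mats[1:]:
        out = sp.kron(out, m, format='csr')
    return out

def heisenberg(edges, n):
    """Antiferromagnetic Heisenberg Hamiltonian on the given edge list;
    multi-edges in the list amplify the corresponding coupling."""
    H = sp.csr_matrix((2**n, 2**n))
    for (i, j) in edges:
        H = H + two_site(X, i, j, n) + two_site(Y, i, j, n) + two_site(Z, i, j, n)
    return H

def random_multigraph(n, rng):
    """Each vertex draws two random neighbours, as in steps i)-iii) above."""
    edges = []
    for v in range(n):
        for _ in range(2):
            u = rng.choice([w for w in range(n) if w != v])
            edges.append((min(u, v), max(u, v)))
    return edges

n = 6                                                   # toy size, not the paper's ten
H = heisenberg([(i, i + 1) for i in range(n - 1)], n)   # chain lattice
evals = np.linalg.eigvalsh(H.toarray())
H_norm = H / np.abs(evals).max()        # rescale so the spectrum lies in [-1, 1]
edges_rand = random_multigraph(n, np.random.default_rng(3))
```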
[Figure 5: chain, ladder and randomly generated lattices]
It is non-trivial to choose and prepare a good reference state that has a sufficiently large overlap with the true ground state. Finding the ground-state energy of a local Hamiltonian is QMA-hard [59, 60]. If one can prepare a good reference state, then quantum computing is capable of solving the ground-state energy using quantum phase estimation [61]. So far, there does not exist a universal method for reference state preparation; see Ref. [62] for relevant discussions. Despite this, there are some practical ways of choosing and preparing the reference state. For fermion systems, we can take the mean-field approximate solution, i.e. a Hartree-Fock state, as the reference state. For general systems, one may try adiabatic state preparation, etc. We stress that preparing good reference states is beyond the scope of this work. Here, we choose two reference states which are likely to overlap with true ground states as examples. For the Heisenberg model, we take the pairwise singlet state
(103)
where
(104)
is the singlet state of spins and . For the Hubbard model, we take a Hartree-Fock state as the reference state: we compute the ground state of the one-particle Hamiltonian (which is equivalent to taking ) and take it (a Slater determinant) as the reference state.
H.2 Instances
We test the algorithms listed in Table 1 with many instances. Each instance is a triple , in which takes one of the two models (Heisenberg model and Hubbard model), takes a chain, ladder or random graph, and is the dimension of the subspace.
Instances for computing the empirical distributions in Fig. 2 consist of six groups: the Heisenberg model on the chain, ladder and random graphs, and the Hubbard model on the chain, ladder and random graphs. For the chain and ladder, we evaluate each subspace dimension ; for a random graph, we evaluate only one subspace dimension, randomly taken in the interval. Some instances are discarded. To avoid any potential issue caused by the precision of floating-point numbers, we discard instances in which the subspace error of the P algorithm is lower than due to a large . We also discard instances in which of the P algorithm is higher than due to a small , because the dimension may be too small to achieve an interesting accuracy in this case. For randomly generated graphs, the two reference states may have a small overlap with the true ground state. Because a good reference state is necessary for all quantum KSD algorithms, we discard graphs with . Eventually, we have eight instances of the Heisenberg chain, seven instances of the Heisenberg ladder, twenty-two instances of the Hubbard chain and twenty instances of the Hubbard ladder. For each model, we generate a hundred random-graph instances. The total number of instances is .
H.3 Parameters
Abbr. | ||||||
---|---|---|---|---|---|---|
P | N/A | N/A | N/A | |||
CP | N/A | N/A | N/A | |||
GP | Solving Eq. (106) | N/A | N/A | |||
IP | N/A | N/A | N/A | |||
ITE | Solving Eq. (107) | N/A | N/A | |||
RTE | N/A | Minimising | N/A | |||
F | Solving Eq. (106) | Minimising |
In this section, we give parameters taken in each algorithm listed in Table 1, namely, , , , , and . We summarise these parameters in Table 3.
For , we expect that taking close to is optimal for our GP algorithm. As the exact value of is unknown, we assume that we have a preliminary estimation of the ground-state energy with uncertainty as large as of the entire spectrum, i.e. we take . For other algorithms, we take by assuming that the exact value of is known. In the P algorithm, we take , such that the ground state is the eigenstate of with the largest absolute eigenvalue and . Similarly, in the IP algorithm, we take , such that the ground state is the eigenstate of with the largest absolute eigenvalue and . In the ITE algorithm, causes a constant factor in each operator , i.e. determines the spectral norm of . Because the variance is related to the norm, a large is problematic. Therefore, in the ITE algorithm, we take , such that for all . In the F algorithm, we expect that is optimal because is an energy filter centred at the ground-state energy in this case. Therefore, we take in the F algorithm. The RTE algorithm is closely related to the F algorithm. Therefore, we also take in the RTE algorithm.
We remark that in the GP algorithm, we take random uniformly distributed in the interval to generate data in Fig. 2, and we take , where and , to generate data in Fig. 1. We remark that we have normalised the Hamiltonian such that .
For in the F algorithm, we take . For simplicity, we take the limit , i.e.
(105)
Notice that this is an energy filter centred at , and the filter is narrower when is larger.
Now, we have three algorithms that have the parameter : GP, ITE and F. Similar to the F algorithm, in the GP algorithm is an energy filter centred at . For these two algorithms, if , converges to the true ground state in the limit . The situation is similar in the ITE algorithm, in which is a projector onto the ground state, and converges to the true ground state in the limit . Realising a large comes at a cost; specifically, the circuit depth increases with [42, 8]. Therefore, it is reasonable to take a finite . Next, we give the protocol for determining the value of in each algorithm.
Without getting into details about realising the filters and projectors, we choose under the assumption that if filters and projectors have the same power in computing the ground-state energy, they probably require similar circuit depths. In GP and F algorithms, if , the energy error achieved by filters reads ; and in the ITE algorithm, the energy error achieved by the projector reads . We take such that errors achieved by filters and projectors take the same value . Specifically, for the GP and F algorithms, we determine the value of by solving the equation (taking )
(106)
and for the ITE algorithm, we determine the value of by solving the equation
(107)
To choose the value of , we take the P algorithm as the standard, in which is a projector onto the ground state, i.e. converges to the true ground state in the limit . Overall, we determine the value of in the following way: Given an instance , i) first, compute the energy error achieved by the projector in the P algorithm, take ; then, ii) solve equations of . In this way, filters and projectors in P, GP, ITE and F algorithms have the same power.
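The equation-solving step can be sketched with bracketed root finding; note that the concrete error model `filter_error` below is a hypothetical monotone stand-in, not the actual expressions in Eqs. (106) and (107):

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical stand-in for a filter's energy error as a function of the
# width parameter tau: any error model that decreases monotonically in tau
# can be matched to a target error eps0 the same way.
def filter_error(tau, gap=0.2):
    return np.exp(-0.5 * (gap * tau)**2)   # Gaussian-filter-style decay (illustrative)

eps0 = 1e-3                                # target error, e.g. set by the P projector
tau = brentq(lambda t: filter_error(t) - eps0, 1e-6, 1e3)
```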
For in the RTE algorithm and in the F algorithm, there are works in the literature on how to choose them [14, 17, 15]. In this work, we simply determine their values by a grid search. For the RTE algorithm, we take , where and ; we compute the subspace error for all ; and we choose the of the minimum . For the F algorithm, we take , where and (in this way, when we take the largest , the filters span the entire spectrum); we compute the subspace error for all ; and we choose the of the minimum .
For and , we take and in our GP algorithm following the analysis in Appendix G. For other algorithms, we take these two parameters in the following way. For the P, IP, ITE and RTE algorithms, we take and according to spectral norms (the lower bound of the cost) without getting into details about measuring matrix entries. In these four algorithms, for all and ; therefore, we take . In the CP algorithm, we take and in a similar way. Because , the spectrum of is in the interval . In this interval, the Chebyshev polynomial of the first kind takes values in the interval . Therefore, , and consequently . These norms depend on the spectrum of . For simplicity, we take the upper bound of the norms, i.e. , in the CP algorithm. In the F algorithm, each is a linear combination of operators, i.e. is expressed in the LCU form, and the cost factor of the LCU expression is one. Therefore, we take in the F algorithm, assuming that is expressed in the LCU form with a cost factor of one (notice that one is the lower bound).
Appendix I Derivation of Eq. (21)
Define the function . Consider a small and the Taylor expansion of . We have and
(108)
The minimum value of the zeroth-order term is according to the Rayleigh quotient theorem [51]. Taking this minimum value, we have , where is the first derivative. The solution to is , then
(109)
where
(110)
under assumptions and .
Appendix J Composing Chebyshev polynomials
J.1 Chebyshev-polynomial projector
In this section, we explain how to use Chebyshev polynomials as projectors onto the ground state.
The th Chebyshev polynomial of the first kind is
(113)
Chebyshev polynomials have the following properties: i) When , ; and ii) when , increases exponentially with .
Let us consider the spectral decomposition of the Hamiltonian, . Here, is the dimension of the Hilbert space, are the eigenenergies of the Hamiltonian, and are the eigenstates. We suppose that the eigenenergies are sorted in ascending order, i.e. if . Then, is the ground-state energy (accordingly, is the ground state), and is the energy of the first excited state. The energy gap between the ground state and the first excited state is .
To compose a projector, we take
(114)
Notice that and have the same eigenstates. The spectral decomposition of is , where
(115)
Then, the ground state corresponds to (supposing the gap is finite), and the first excited state corresponds to . Because , for all . Therefore, except for the ground state, of all excited states (i.e. ) lies in the interval .
The projector reads
(116)
where
(117)
and have the same eigenstates. Because when ,
(118)
The key to using Chebyshev polynomials as projectors is placing all excited states inside the interval while leaving the ground state outside it. For the CP basis, all eigenstates are in the interval . Therefore, operators of the CP basis are not projectors in general.
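The amplification mechanism can be illustrated directly: after an affine rescaling that maps the excited part of the spectrum into [-1, 1], T_k stays bounded there while growing rapidly at the ground-state point outside the interval (the sample spectrum below is arbitrary):

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval

# Illustrative spectrum with ground energy E_0 = -1 below the rest.
energies = np.array([-1.0, -0.5, 0.1, 0.8])
E1, Emax = energies[1], energies[-1]
scaled = (2 * energies - (Emax + E1)) / (Emax - E1)   # maps [E_1, Emax] -> [-1, 1]

k = 20
coeffs = np.zeros(k + 1)
coeffs[k] = 1.0                  # coefficient vector selecting T_k
Tk = chebval(scaled, coeffs)

# All excited states stay bounded by 1, the ground state dominates.
assert np.all(np.abs(Tk[1:]) <= 1 + 1e-9)
assert np.abs(Tk[0]) > 1e3
```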
J.2 Expanding the projector as Hamiltonian powers
In this section, we expand the Chebyshev-polynomial projector as a linear combination of Hamiltonian powers . We focus on the case that .
The explicit expression of () is
(119)
Because
(120)
we have
(121)
where
(122)
Now, we give an upper bound of . First, we consider . In this case,
(123)
Then, for an arbitrary ,
(124)
where the inequality
(125)
has been used.
Finally, taking and
(126)
we have
(127)
where are operators defined in Eq. (13). Because of the upper bound
(128)
the overhead factor is
(129)
Appendix K The choice of
There are two bounds determining the range of . First, we take such that the upper bound of decreases exponentially with . Second, under the assumption , we take . To justify the second bound, we plot the absolute value of the rescaled Gaussian-power function for different in Fig. 6, where the instance is taken as an example. This function has maximum points at . When increases from to , the peaks of span the range , making the basis a filter [2]. It is preferred that the ground-state energy lies in this range, which leads to the second bound. Notice that we can always normalise the Hamiltonian by taking . Then, , for all , and the two bounds are consistent.
[Figure 6: absolute value of the rescaled Gaussian-power function for different parameter values]
Appendix L Numerical results of the measurement number
In Theorem 1, we give a theoretical result on the sufficient measurement number, which is used in the numerical study. In this section, we numerically calculate the necessary measurement number and compare the regularisation method with the thresholding method.
L.1 Necessary measurement costs
Due to the statistical error, the output energy follows a certain distribution in the vicinity of the true ground-state energy . Given the permissible energy error and failure probability , a measurement number is said to be sufficient if is in the range with a probability not lower than ; and the necessary measurement number is the minimum sufficient measurement number.
Taking the instance as an example, Fig. 7 shows the errors in the ground-state energy and the corresponding necessary/sufficient measurement numbers. We can find that the sufficient measurement costs are close to the necessary measurement numbers.
[Figure 7: errors in the ground-state energy and the corresponding necessary/sufficient measurement numbers]
When estimating the necessary measurement numbers, we need to numerically simulate the estimators and , which can be regarded as the sums of the matrices and and the random noise matrices and , respectively. Considering Gaussian statistical error and the collective measurement protocol, the random matrices and can be generated numerically. The real and imaginary parts of the random matrix entries obey the Gaussian distribution with mean (we assume the estimation is unbiased for all algorithms) and standard deviations and , respectively. Under the collective measurement protocol, the random matrices have the same structure as and .
L.2 Comparison using the thresholding method
Ref. [16] provides a detailed analysis of the thresholding procedure. The optimal thresholds are roughly close to the noise rate. Here, we set them to to obtain relatively stable results. For the same instance , the results shown in Fig. 8 are similar to those in Fig. 7. This indicates that, regardless of which method is used to deal with the ill-conditioned problem, the GP algorithm outperforms the other algorithms in terms of measurement costs.
[Figure 8: errors and measurement numbers using the thresholding method]
References
- [1] Elbio Dagotto. “Correlated electrons in high-temperature superconductors”. Reviews of Modern Physics 66, 763 (1994).
- [2] Michael R Wall and Daniel Neuhauser. “Extraction, through filter-diagonalization, of general quantum eigenvalues or classical normal mode frequencies from a small number of residues or a short-time segment of a signal. I. Theory and application to a quantum-dynamics model”. The Journal of Chemical Physics 102, 8011–8022 (1995).
- [3] Etienne Caurier, Gabriel Martínez-Pinedo, Fréderic Nowacki, Alfredo Poves, and AP Zuker. “The shell model as a unified view of nuclear structure”. Reviews of Modern Physics 77, 427 (2005).
- [4] Cornelius Lanczos. “An iteration method for the solution of the eigenvalue problem of linear differential and integral operators”. Journal of research of the National Bureau of Standards 45, 255–282 (1950).
- [5] Yousef Saad. “Numerical methods for large eigenvalue problems”. Society for Industrial and Applied Mathematics. (2011).
- [6] Åke Björck. “Numerical methods in matrix computations”. Volume 59 of Texts in Applied Mathematics. Springer. (2015).
- [7] Adolfo Avella and Ferdinando Mancini. “Strongly correlated systems”. Volume 176 of Springer Series in Solid-State Sciences. Springer. (2013).
- [8] Mario Motta, Chong Sun, Adrian T. K. Tan, Matthew J. O’Rourke, Erika Ye, Austin J. Minnich, Fernando G. S. L. Brandão, and Garnet Kin-Lic Chan. “Determining eigenstates and thermal states on a quantum computer using quantum imaginary time evolution”. Nature Physics 16, 205–210 (2019).
- [9] Kübra Yeter-Aydeniz, Raphael C Pooser, and George Siopsis. “Practical quantum computation of chemical and nuclear energy levels using quantum imaginary time evolution and Lanczos algorithms”. npj Quantum Information 6, 1–8 (2020).
- [10] Robert M. Parrish and Peter L. McMahon. “Quantum filter diagonalization: Quantum eigendecomposition without full quantum phase estimation” (2019). arXiv:1909.08925.
- [11] Nicholas H. Stair, Renke Huang, and Francesco A. Evangelista. “A multireference quantum Krylov algorithm for strongly correlated electrons”. Journal of Chemical Theory and Computation 16, 2236–2245 (2020).
- [12] Tatiana A Bespalova and Oleksandr Kyriienko. “Hamiltonian operator approximation for energy measurement and ground-state preparation”. PRX Quantum 2, 030318 (2021).
- [13] Jeffrey Cohn, Mario Motta, and Robert M Parrish. “Quantum filter diagonalization with compressed double-factorized Hamiltonians”. PRX Quantum 2, 040352 (2021).
- [14] Katherine Klymko, Carlos Mejuto-Zaera, Stephen J Cotton, Filip Wudarski, Miroslav Urbanek, Diptarka Hait, Martin Head-Gordon, K Birgitta Whaley, Jonathan Moussa, Nathan Wiebe, et al. “Real-time evolution for ultracompact hamiltonian eigenstates on quantum hardware”. PRX Quantum 3, 020323 (2022).
- [15] Cristian L Cortes and Stephen K Gray. “Quantum Krylov subspace algorithms for ground- and excited-state energy estimation”. Physical Review A 105, 022417 (2022).
- [16] Ethan N Epperly, Lin Lin, and Yuji Nakatsukasa. “A theory of quantum subspace diagonalization”. SIAM Journal on Matrix Analysis and Applications 43, 1263–1290 (2022).
- [17] Yizhi Shen, Katherine Klymko, James Sud, David B. Williams-Young, Wibe A. de Jong, and Norm M. Tubman. “Real-Time Krylov Theory for Quantum Computing Algorithms”. Quantum 7, 1066 (2023).
- [18] Oleksandr Kyriienko. “Quantum inverse iteration algorithm for programmable quantum simulators”. npj Quantum Information 6, 1–8 (2020).
- [19] Kazuhiro Seki and Seiji Yunoki. “Quantum power method by a superposition of time-evolved states”. PRX Quantum 2, 010333 (2021).
- [20] William Kirby, Mario Motta, and Antonio Mezzacapo. “Exact and efficient Lanczos method on a quantum computer”. Quantum 7, 1018 (2023).
- [21] Ryan Babbush, Craig Gidney, Dominic W. Berry, Nathan Wiebe, Jarrod McClean, Alexandru Paler, Austin Fowler, and Hartmut Neven. “Encoding electronic spectra in quantum circuits with linear T complexity”. Phys. Rev. X 8, 041015 (2018).
- [22] Joonho Lee, Dominic W. Berry, Craig Gidney, William J. Huggins, Jarrod R. McClean, Nathan Wiebe, and Ryan Babbush. “Even more efficient quantum computations of chemistry through tensor hypercontraction”. PRX Quantum 2, 030305 (2021).
- [23] Alberto Peruzzo, Jarrod McClean, Peter Shadbolt, Man-Hong Yung, Xiao-Qi Zhou, Peter J Love, Alán Aspuru-Guzik, and Jeremy L O’Brien. “A variational eigenvalue solver on a photonic quantum processor”. Nature Communications 5, 4213 (2014).
- [24] Jarrod R McClean, Sergio Boixo, Vadim N Smelyanskiy, Ryan Babbush, and Hartmut Neven. “Barren plateaus in quantum neural network training landscapes”. Nature Communications 9, 4812 (2018).
- [25] Beresford N. Parlett. “The symmetric eigenvalue problem”. Society for Industrial and Applied Mathematics. (1998).
- [26] Robert M Parrish, Edward G Hohenstein, Peter L McMahon, and Todd J Martínez. “Quantum computation of electronic transitions using a variational quantum eigensolver”. Physical Review Letters 122, 230401 (2019).
- [27] Ken M Nakanishi, Kosuke Mitarai, and Keisuke Fujii. “Subspace-search variational quantum eigensolver for excited states”. Physical Review Research 1, 033062 (2019).
- [28] William J Huggins, Joonho Lee, Unpil Baek, Bryan O’Gorman, and K Birgitta Whaley. “A non-orthogonal variational quantum eigensolver”. New Journal of Physics 22, 073009 (2020).
- [29] Nicholas H. Stair, Cristian L. Cortes, Robert M. Parrish, Jeffrey Cohn, and Mario Motta. “Stochastic quantum Krylov protocol with double-factorized Hamiltonians”. Phys. Rev. A 107, 032414 (2023).
- [30] Jarrod R. McClean, Mollie E. Kimchi-Schwartz, Jonathan Carter, and Wibe A. de Jong. “Hybrid quantum-classical hierarchy for mitigation of decoherence and determination of excited states”. Phys. Rev. A 95, 042308 (2017).
- [31] Jarrod R McClean, Zhang Jiang, Nicholas C Rubin, Ryan Babbush, and Hartmut Neven. “Decoding quantum errors with subspace expansions”. Nature Communications 11, 636 (2020).
- [32] Nobuyuki Yoshioka, Hideaki Hakoshima, Yuichiro Matsuzaki, Yuuki Tokunaga, Yasunari Suzuki, and Suguru Endo. “Generalized quantum subspace expansion”. Phys. Rev. Lett. 129, 020502 (2022).
- [33] Artur K Ekert, Carolina Moura Alves, Daniel KL Oi, Michał Horodecki, Paweł Horodecki, and Leong Chuan Kwek. “Direct estimations of linear and nonlinear functionals of a quantum state”. Physical Review Letters 88, 217901 (2002).
- [34] Mingxia Huo and Ying Li. “Error-resilient Monte Carlo quantum simulation of imaginary time”. Quantum 7, 916 (2023).
- [35] Pei Zeng, Jinzhao Sun, and Xiao Yuan. “Universal quantum algorithmic cooling on a quantum computer” (2021). arXiv:2109.15304.
- [36] Andrew M Childs and Nathan Wiebe. “Hamiltonian simulation using linear combinations of unitary operations”. Quantum Information & Computation 12, 901–924 (2012).
- [37] Paul K. Faehrmann, Mark Steudtner, Richard Kueng, Maria Kieferova, and Jens Eisert. “Randomizing multi-product formulas for Hamiltonian simulation”. Quantum 6, 806 (2022).
- [38] Thomas E O’Brien, Stefano Polla, Nicholas C Rubin, William J Huggins, Sam McArdle, Sergio Boixo, Jarrod R McClean, and Ryan Babbush. “Error mitigation via verified phase estimation”. PRX Quantum 2, 020317 (2021).
- [39] Sirui Lu, Mari Carmen Bañuls, and J Ignacio Cirac. “Algorithms for quantum simulation at finite energies”. PRX Quantum 2, 020321 (2021).
- [40] G.B. Arfken, H.J. Weber, and F.E. Harris. “Mathematical methods for physicists: A comprehensive guide”. Elsevier. (2012).
- [41] Seth Lloyd. “Universal quantum simulators”. Science 273, 1073–1078 (1996).
- [42] Dominic W Berry, Graeme Ahokas, Richard Cleve, and Barry C Sanders. “Efficient quantum algorithms for simulating sparse Hamiltonians”. Communications in Mathematical Physics 270, 359–371 (2007).
- [43] Nathan Wiebe, Dominic Berry, Peter Høyer, and Barry C Sanders. “Higher order decompositions of ordered operator exponentials”. Journal of Physics A: Mathematical and Theoretical 43, 065203 (2010).
- [44] Dominic W Berry, Andrew M Childs, Richard Cleve, Robin Kothari, and Rolando D Somma. “Simulating Hamiltonian dynamics with a truncated Taylor series”. Physical Review Letters 114, 090502 (2015).
- [45] Richard Meister, Simon C. Benjamin, and Earl T. Campbell. “Tailoring Term Truncations for Electronic Structure Calculations Using a Linear Combination of Unitaries”. Quantum 6, 637 (2022).
- [46] Earl Campbell. “Random compiler for fast Hamiltonian simulation”. Physical Review Letters 123, 070503 (2019).
- [47] Yongdan Yang, Bing-Nan Lu, and Ying Li. “Accelerated quantum Monte Carlo with mitigated error on noisy quantum computer”. PRX Quantum 2, 040361 (2021).
- [48] Philip W Anderson. “The resonating valence bond state in La2CuO4 and superconductivity”. Science 235, 1196–1198 (1987).
- [49] Patrick A Lee, Naoto Nagaosa, and Xiao-Gang Wen. “Doping a Mott insulator: Physics of high-temperature superconductivity”. Reviews of Modern Physics 78, 17 (2006).
- [50] Kazuhiro Seki, Tomonori Shirakawa, and Seiji Yunoki. “Symmetry-adapted variational quantum eigensolver”. Phys. Rev. A 101, 052340 (2020).
- [51] Roger A. Horn and Charles R. Johnson. “Matrix analysis”. Cambridge University Press. (2012). 2 edition.
- [52] Germund Dahlquist and Åke Björck. “Numerical methods in scientific computing, Volume I”. Society for Industrial and Applied Mathematics. (2008).
- [53] William Kirby. “Analysis of quantum Krylov algorithms with errors” (2024). arXiv:2401.01246.
- [54] https://github.com/ZongkangZhang/QKSD.
- [55] Wassily Hoeffding. “Probability inequalities for sums of bounded random variables”. Journal of the American Statistical Association 58, 13–30 (1963).
- [56] Joel A. Tropp. “An introduction to matrix concentration inequalities”. Foundations and Trends® in Machine Learning 8, 1–230 (2015).
- [57] Raymundo Albert and Cecilia G. Galarza. “Model order selection for sum of complex exponentials”. In 2021 IEEE URUCON. Pages 561–565. (2021).
- [58] D. Babusci, G. Dattoli, and M. Quattromini. “On integrals involving hermite polynomials”. Applied Mathematics Letters 25, 1157–1160 (2012).
- [59] Alexei Yu Kitaev, Alexander Shen, and Mikhail N Vyalyi. “Classical and quantum computation”. Graduate studies in mathematics. American Mathematical Society. (2002).
- [60] Julia Kempe, Alexei Kitaev, and Oded Regev. “The complexity of the local Hamiltonian problem”. SIAM Journal on Computing 35, 1070–1097 (2006).
- [61] Chris Cade, Marten Folkertsma, Sevag Gharibian, Ryu Hayakawa, François Le Gall, Tomoyuki Morimae, and Jordi Weggemans. “Improved Hardness Results for the Guided Local Hamiltonian Problem”. In Kousha Etessami, Uriel Feige, and Gabriele Puppis, editors, 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023). Volume 261 of Leibniz International Proceedings in Informatics (LIPIcs), pages 32:1–32:19. Dagstuhl, Germany (2023). Schloss Dagstuhl – Leibniz-Zentrum für Informatik.
- [62] Seunghoon Lee, Joonho Lee, Huanchen Zhai, Yu Tong, Alexander M Dalzell, Ashutosh Kumar, Phillip Helms, Johnnie Gray, Zhi-Hao Cui, Wenyuan Liu, et al. “Evaluating the evidence for exponential quantum advantage in ground-state quantum chemistry”. Nature Communications 14, 1952 (2023).