Faster Amplitude Estimation
Abstract
In this paper, we introduce an efficient algorithm for the quantum amplitude estimation task, tailored for near-term quantum computers. Quantum amplitude estimation is an important problem with applications in fields such as quantum chemistry, machine learning, and finance. Because the well-known amplitude estimation algorithm based on phase estimation does not work on near-term quantum computers, alternative approaches have been proposed in recent literature. Some of them come with a proof of an upper bound that almost achieves the Heisenberg scaling; however, the constant factor is large and thus the bound is loose. Our contribution in this paper is an algorithm whose query-complexity upper bound almost achieves the Heisenberg scaling with a small constant factor.
1 Introduction
The application of near-term quantum computers has recently been attracting significant interest. On near-term quantum computers, the circuit depth and the number of qubits are constrained in order to suppress the effect of noise. Under these constraints, most quantum algorithms that realize a quantum speed-up on ideal quantum computers are not available on near-term devices, and the development of algorithms tailored for near-term quantum computers is in demand.
In this paper, we focus on the problem of quantum amplitude estimation on near-term quantum computers. Quantum amplitude estimation is the problem of estimating the amplitude $a$ in the state $\mathcal{A}|0\rangle_{n+1} = \sqrt{a}\,|\tilde{\Psi}_1\rangle|1\rangle + \sqrt{1-a}\,|\tilde{\Psi}_0\rangle|0\rangle$ prepared by a given operator $\mathcal{A}$ (formally defined in Section 2.1). It is well known that amplitude estimation can be applied to quantum chemistry, finance, and machine learning [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. In these applications, the cost of executing $\mathcal{A}$ is high; thus, how to reduce the number of calls of $\mathcal{A}$ while estimating $a$ with the required accuracy is the heart of the problem.
The efficient quantum amplitude estimation algorithm that uses phase estimation is well documented in previous literature [11, 12]. The algorithm achieves the Heisenberg scaling; that is, if we demand that the estimation error of $a$ be within $\epsilon$ with probability larger than $1-\delta$, then the query complexity (the required number of calls of the operator $\mathcal{A}$) is $O(1/\epsilon)$. However, phase estimation requires many noisy two-qubit gates and is therefore not suitable for near-term quantum computers. Thus, it is fair to ask whether there are quantum amplitude estimation algorithms that work with fewer resources and still achieve the Heisenberg scaling.
Recently, quantum amplitude estimation algorithms without phase estimation have been suggested in the literature [13, 14, 15, 16]. Suzuki et al. [13] suggest an algorithm that uses maximum likelihood estimation; it is shown numerically that the algorithm achieves the Heisenberg scaling, but there is no rigorous proof. Wie [14] also studies the problem in the context of quantum counting, but again it is not rigorously proved that the algorithm achieves the Heisenberg scaling. Aaronson and Rall [15] suggest an algorithm that achieves the Heisenberg scaling and give a rigorous proof; however, the constant factor of the term proportional to $1/\epsilon$ is large, where $1-\delta$ is the probability that the error is less than $\epsilon$. Grinko et al. [16] also suggest an algorithm that achieves the Heisenberg scaling, but the constant factor is still large in the worst case, even though it is shown numerically that the constant is smaller in most cases.
In this paper, we propose a quantum amplitude estimation algorithm without phase estimation that achieves the Heisenberg scaling, with a constant factor of the term proportional to $1/\epsilon$ smaller than in the previous literature [15, 16]. The structure of the paper is as follows. In Section 2, we describe our proposed algorithm. The validity of the algorithm is verified by numerical experiments in Section 3. In Section 4, we conclude with a discussion of future works. The complexity upper bound is proved in the Appendix.
2 Algorithm
In this section, we present our proposed algorithm. Before going into the details, let us briefly summarize it.
Similar to the previous literature, we use the quantum amplitude amplification technique [11]. Given the amplitude that we want to estimate, parametrized by an angle $\theta$ as described in Section 2.1, quantum amplitude amplification enables us to estimate the values of $\cos((4m+2)\theta)$ for each non-negative integer $m$ directly by measurements, and the resulting estimation errors are of the order $1/\sqrt{N}$ with high probability when $N$ measurements are executed for each $m$.
Our algorithm estimates the angle $\theta$ iteratively, refining its confidence interval at each iteration $j$. As in Kitaev’s iterative phase estimation [17] (see also [18]), if the cosines of exponentially growing multiples of $\theta$ are estimated and those estimation errors are within a constant, then the value of $\theta$ can be iteratively estimated with an error shrinking exponentially in the number of iterations, meaning that the Heisenberg scaling is achieved. However, a multiple of $\theta$ is generally not determinable from the estimate of its cosine alone, because there is an ambiguity in the sign of the corresponding sine. Our algorithm resolves this ambiguity by a two-stage method. The algorithm is in the first stage while the measured multiple of $\theta$ is guaranteed to stay in a range where the inverse cosine is single-valued; in this stage, the angle is obtained from the estimate of its cosine without ambiguity by using the inverse cosine function. When the estimate at iteration $j$ signals that this guarantee may no longer hold, the algorithm moves to the second stage. In the second stage, the measured multiple of $\theta$ might be larger than $\pi$, hence it cannot be determined from the measurements of its cosine alone because of the above-mentioned ambiguity. However, by combining those measurements with measurements at a shifted angle, the sine can also be estimated by using the trigonometric addition formula, and accordingly the angle can be determined without the ambiguity. As a result, the algorithm can estimate the value of $\theta$ with an error decreasing exponentially in the number of iterations.
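To make the role of amplification concrete, here is a small classical sketch (our own illustration, not part of Algorithm 1): the measurement of the amplified state is simulated by Bernoulli sampling with success probability $\sin^2((2m+1)\theta)$, and the error of the recovered angle shrinks roughly as $1/(2m+1)$ for a fixed number of shots, as long as the inverse cosine remains unambiguous.

    import numpy as np

    rng = np.random.default_rng(0)
    theta = 0.04          # true (attenuated) angle; hidden from a real estimator
    shots = 10000         # measurements per amplification level

    for m in [0, 1, 3, 7, 15]:
        p = np.sin((2 * m + 1) * theta) ** 2              # P(last two qubits = 11)
        hits = rng.binomial(shots, p)
        c_hat = 1.0 - 2.0 * hits / shots                  # estimate of cos((4m+2)*theta)
        # arccos is unambiguous here because (4m+2)*theta < pi for these values
        theta_hat = np.arccos(np.clip(c_hat, -1.0, 1.0)) / (4 * m + 2)
        print(m, abs(theta_hat - theta))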
The algorithm in reference [16] takes a different approach to resolving the ambiguity. However, their method requires precise measurements of the cosine in the worst case, and therefore the complexity upper bound becomes loose. On the other hand, our proposed algorithm works with relatively rough estimates of the cosine even in the worst case. Thus, as we will see in Section 2.3 and Appendix A, the complexity upper bound of our algorithm is tighter than that of the existing method.
In the rest of this section, we first define the problem in Section 2.1. Next, we look into the details of the algorithm in Section 2.2, and finally we show the complexity upper bound of our algorithm in Section 2.3.
2.1 Preliminary
Quantum amplitude estimation is the problem of estimating the value of the amplitude $a$ in the following equation:
$\mathcal{A}\,|0\rangle_{n+1} = \sqrt{a}\,|\tilde{\Psi}_1\rangle|1\rangle + \sqrt{1-a}\,|\tilde{\Psi}_0\rangle|0\rangle$    (1)
where $0 < a < 1$ and $|\tilde{\Psi}_1\rangle$, $|\tilde{\Psi}_0\rangle$ are normalized states of $n$ qubits. In applications, executing the operator $\mathcal{A}$ is often costly; thus, reducing the number of calls of $\mathcal{A}$ while estimating $a$ with the required accuracy is the heart of the problem.
As we will see later, our proposed algorithm works correctly if the amplitude is less than or equal to $1/4$. However, imposing this condition on $a$ in advance is not necessary, because the amplitude can be attenuated by adding an extra ancilla qubit as follows:
$\mathcal{A}'|0\rangle_{n+2} = \mathcal{A}|0\rangle_{n+1} \otimes R\,|0\rangle = \frac{\sqrt{a}}{2}\,|\tilde{\Psi}_1\rangle|1\rangle|1\rangle + \cdots = \sin\theta\,|\Psi_1\rangle + \cos\theta\,|\Psi_0\rangle$    (2)
where $|\Psi_1\rangle = |\tilde{\Psi}_1\rangle|1\rangle|1\rangle$ and the single-qubit operator $R$ acts as follows:
$R\,|0\rangle = \frac{\sqrt{3}}{2}\,|0\rangle + \frac{1}{2}\,|1\rangle$    (3)
In the last line of (2), we define $\theta$ by $\sin\theta = \sqrt{a}/2$ and $|\Psi_0\rangle$ as a state orthogonal to $|\Psi_1\rangle$. As expected, the amplitude is attenuated as $\sin^2\theta = a/4 \le 1/4$, and therefore
$0 \le \theta \le \frac{\pi}{6}.$    (4)
Thus, instead of estimating the value of $a$ directly, we estimate the value of $\theta$ and convert it to $a = 4\sin^2\theta$. The condition (4) is used as the initial bound in our proposed algorithm.
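As a quick numerical sanity check of the attenuation (a sketch assuming the factor $1/4$ used above; variable names are ours):

    import numpy as np

    a = 0.7                                 # original amplitude, possibly larger than 1/4
    theta = np.arcsin(np.sqrt(a) / 2)       # attenuated angle, always at most pi/6
    assert theta <= np.pi / 6 + 1e-12

    a_recovered = 4 * np.sin(theta) ** 2    # convert the estimated angle back to a
    print(theta, a_recovered)               # prints ~0.4317 and 0.7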
Similar to the original amplitude amplification [11], we define an operator $\mathbf{Q}$ as
$\mathbf{Q} = -\mathcal{A}'\,(\mathbb{1}_{n+2} - 2\,|0\rangle_{n+2}\langle 0|_{n+2})\,\mathcal{A}'^{\dagger}\,(\mathbb{1}_{n+2} - 2\,|\Psi_1\rangle\langle\Psi_1|)$    (5)
where $\mathbb{1}_{n+2}$ is the identity operator on $n+2$ qubits. It is worth mentioning that
$\mathbf{Q}^{m}\mathcal{A}'|0\rangle_{n+2} = \sin\big((2m+1)\theta\big)\,|\Psi_1\rangle + \cos\big((2m+1)\theta\big)\,|\Psi_0\rangle.$    (6)
We obtain estimates of $\cos\big((4m+2)\theta\big)$ by measuring the state (6) for multiple values of $m$; the quantity $\hat{c}_m$ defined below, readily computable from the measurement results, is an estimate of $\cos\big((4m+2)\theta\big)$:
$\hat{c}_m = 1 - 2\,\frac{N^{(m)}_{11}}{N^{(m)}_{\rm shot}}$    (7)
where $N^{(m)}_{11}$ is the number of measurement results in which the last two qubits in (6) are both one, and $N^{(m)}_{\rm shot}$ is the total number of measurements of the state (6). The estimation error of $\hat{c}_m$ can be evaluated by using the Chernoff bound for the Bernoulli distribution, as discussed in [17]; i.e., given the confidence interval of $\hat{c}_m$ as $[c^{\min}_m, c^{\max}_m]$, the bounds of the interval are computed as
$c^{\max}_m = \min\!\Big(1,\ \hat{c}_m + \sqrt{\tfrac{2}{N^{(m)}_{\rm shot}}\ln\tfrac{2}{\delta_c}}\Big), \qquad c^{\min}_m = \max\!\Big(-1,\ \hat{c}_m - \sqrt{\tfrac{2}{N^{(m)}_{\rm shot}}\ln\tfrac{2}{\delta_c}}\Big)$    (8)
where $\delta_c$ is the probability that the true value (i.e., $\cos((4m+2)\theta)$) is outside the interval. For later purposes, we define three functions: ESTIMATE, CHERNOFF, and atan. The function ESTIMATE$(m, N)$ returns $\hat{c}_m$ as a result of $N$ measurements of the state (6). The function CHERNOFF$(\hat{c}_m, N, \delta_c)$ returns the confidence interval $[c^{\min}_m, c^{\max}_m]$ of $\hat{c}_m$ computed from the parameters $\hat{c}_m$, $N$, and $\delta_c$ as in (8). The function atan$(s, c)$ is an extended arctangent function defined on the full circle as
$\operatorname{atan}(s, c) = \arctan(s/c)$ for $c > 0$; $\pi/2$ for $c = 0,\ s > 0$; $-\pi/2$ for $c = 0,\ s < 0$; $\arctan(s/c) + \pi$ for $c < 0,\ s \ge 0$; and $\arctan(s/c) - \pi$ for $c < 0,\ s < 0$.    (9)
Finally, we define $N_{\rm orc}$ as the number of calls of $\mathcal{A}'$ required for estimating $\theta$ (and hence $a$). Our objective in this paper is to provide an algorithm that estimates $a$ with the required accuracy while reducing $N_{\rm orc}$.
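The following is a minimal Python sketch of the three functions. The quantum measurement is replaced by classical Bernoulli sampling at the success probability $\sin^2((2m+1)\theta)$ implied by (6), so this reproduces only the statistics of an ideal device; the Chernoff width is one standard Chernoff–Hoeffding choice, and the implementation details are our assumptions rather than fixed by Algorithm 1.

    import numpy as np

    rng = np.random.default_rng(1)
    THETA_TRUE = 0.09     # hidden angle, used only to simulate the measurement outcomes

    def estimate(m, n_shots):
        """ESTIMATE(m, N): estimate of cos((4m+2)*theta) from N simulated shots of (6)."""
        p = np.sin((2 * m + 1) * THETA_TRUE) ** 2
        n_11 = rng.binomial(n_shots, p)               # shots with the last two qubits = 11
        return 1.0 - 2.0 * n_11 / n_shots

    def chernoff(c_hat, n_shots, delta_c):
        """CHERNOFF(c, N, delta_c): two-sided confidence interval for the cosine."""
        width = np.sqrt(2.0 * np.log(2.0 / delta_c) / n_shots)
        return max(-1.0, c_hat - width), min(1.0, c_hat + width)

    def atan(s, c):
        """Extended arctangent: the angle in (-pi, pi] whose cosine/sine match (c, s)."""
        return np.arctan2(s, c)

    c = estimate(m=2, n_shots=10000)
    print(chernoff(c, 10000, delta_c=0.01), atan(np.sqrt(1.0 - c * c), c))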
2.2 Proposed Algorithm
In this subsection, we describe our proposed algorithm. The procedure is shown in Algorithm 1 (the source code of the algorithm is available at https://github.com/quantum-algorithm/faster-amplitude-estimation). Given $[\theta_{\min}^{j}, \theta_{\max}^{j}]$ as the confidence interval of $\theta$ in the $j$-th iteration, the algorithm updates $\theta_{\min}^{j}$ and $\theta_{\max}^{j}$ so that the interval becomes smaller in each iteration. The total iteration count $\ell$ is a parameter given by the user of the algorithm, and it is chosen so that the final result satisfies the required accuracy. As we see later, the width of the final interval shrinks exponentially in $\ell$; thus, given an acceptable error $\epsilon$ of the amplitude, it suffices to take $\ell = O(\log(1/\epsilon))$.
Even though the true value of $\theta$ is not always inside the confidence interval $[\theta_{\min}^{j}, \theta_{\max}^{j}]$, the probability of such a failure is bounded and decreases exponentially as the numbers of measurements increase. Thus, for simplicity, in this subsection we discuss only the case in which $\theta \in [\theta_{\min}^{j}, \theta_{\max}^{j}]$ holds for all $j$. As we show later, the probability that this holds for all $j$ is larger than $1-\delta$.
In the following, we show how our algorithm works. As shown in Algorithm 1, there are two stages, and the estimation method differs between them. At the beginning of the iterations, the algorithm is in the first stage; it may later switch to the second stage once a certain condition is satisfied. We describe the details below.
First Stage
The algorithm is in the first stage at the first iteration, and it stays there as long as all previous iterations have satisfied the first-stage condition stated below. In this stage, the confidence interval of $\theta$ is obtained by inverting the confidence bounds of the measured cosine with the inverse cosine function, as
(10) |
because the validity of this inversion is guaranteed by the following argument: at the first iteration, the bound (4) directly gives the required condition, and if the first-stage condition has held at all preceding iterations, then
(11) |
The algorithm switches to the second stage once this condition fails. Two values are then memorized for later use in the second stage. One is defined as the last value obtained in the first stage. The other is defined as
(12) |
Note that the quantity defined in (12) is an estimate of the corresponding true value, and its confidence interval is obtainable from the Chernoff bound.
If the first-stage condition keeps holding for all $j$ up to the final iteration $\ell$, the algorithm finishes without entering the second stage, and the final result is taken from the last confidence interval of $\theta$. In this case, the error of the final estimate of $\theta$ is at most the half-width of that interval, and thus the error of the amplitude is bounded as
(13) |
The probability that the true value of $\theta$ is inside the confidence interval is larger than $1-\delta$.
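The first stage can be sketched as follows. This is a simulation-only sketch: the amplification index is doubled each round, the measured quantity is $\cos((4m+2)\theta)$, and the loop stops once the inverse cosine could become ambiguous; the exact schedule and stopping rule of Algorithm 1 may differ.

    import numpy as np

    rng = np.random.default_rng(2)
    THETA_TRUE = 0.07                  # hidden angle, used only to simulate measurements
    N_SHOTS, DELTA_C = 20000, 0.01

    def cos_interval(m):
        """Confidence interval for cos((4m+2)*theta) from simulated shots."""
        p = np.sin((2 * m + 1) * THETA_TRUE) ** 2
        c_hat = 1.0 - 2.0 * rng.binomial(N_SHOTS, p) / N_SHOTS
        w = np.sqrt(2.0 * np.log(2.0 / DELTA_C) / N_SHOTS)
        return max(-1.0, c_hat - w), min(1.0, c_hat + w)

    theta_min, theta_max = 0.0, np.pi / 6       # initial bound from (4)
    m = 1
    while (4 * m + 2) * theta_max <= np.pi:     # inverse cosine still unambiguous
        c_min, c_max = cos_interval(m)
        k = 4 * m + 2
        theta_min = max(theta_min, np.arccos(c_max) / k)   # arccos is decreasing
        theta_max = min(theta_max, np.arccos(c_min) / k)
        print(m, theta_min, theta_max)
        m *= 2
    print("estimate:", (theta_min + theta_max) / 2, "true:", THETA_TRUE)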
Second Stage
In the second stage, the measured angle may be larger than $\pi$. Thus, its value cannot be estimated simply by inverting the measured cosine. However, it is still possible to estimate it by additionally using the results of measurements at a shifted angle, together with the bounds obtained in the previous iteration. Below, we first show how to estimate the measured angle modulo $2\pi$, and then show how to determine the angle itself without ambiguity.
(i) The estimate of the measured angle modulo $2\pi$
To estimate this quantity, not only the estimate of the cosine of the measured angle (i.e., the estimate obtained as in (7)) but also an estimate of its sine is necessary. The estimate of the sine is not directly obtainable from measurements, but it can be computed by the following procedure. From the trigonometric addition formula
$\cos(\alpha + \beta) = \cos\alpha\,\cos\beta - \sin\alpha\,\sin\beta,$    (14)
if $\sin\beta$ is not zero,
$\sin\alpha = \frac{\cos\alpha\,\cos\beta - \cos(\alpha+\beta)}{\sin\beta}.$    (15)
Replacing $\cos\alpha$ by the cosine estimate obtained at the current iteration, $\cos(\alpha+\beta)$ by the cosine estimate measured at the shifted angle, and $\cos\beta$, $\sin\beta$ by the values computed from the angle memorized in the first stage on the right-hand side of the above formula, we can define an estimate of the sine as
(16) |
then the quantity defined in (16) becomes the estimate of the sine. Its estimation error reflects the estimation errors of the three quantities entering the formula, which is discussed in Appendix A. It is then straightforward to obtain the estimate of the measured angle modulo $2\pi$ from the sine and cosine estimates; if we define it as
(17) |
then this quantity, computed with the extended arctangent function (9), is an estimate of the measured angle modulo $2\pi$.
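In code, step (i) amounts to the following. The angles below are made-up numbers used only to check the identity; in the algorithm, the cosines would come from measurements and the shift from the angle memorized in the first stage.

    import numpy as np

    A, nu = 2.2, 0.8                          # hypothetical measured angle and known shift
    c = np.cos(A)                             # estimate of cos of the measured angle
    c_shift = np.cos(A + nu)                  # estimate of the cosine at the shifted angle
    cos_nu, sin_nu = np.cos(nu), np.sin(nu)   # from the angle memorized in the first stage

    s = (c * cos_nu - c_shift) / sin_nu       # addition formula (15): recovers sin(A)
    rho = np.arctan2(s, c)                    # extended arctangent: A modulo 2*pi
    print(s, np.sin(A), rho)                  # s matches sin(A); rho equals 2.2 here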

The confidence interval of the angle estimate can be derived from those of the measured cosines and of the memorized angle, as in the first-stage case. Note that there are two types of confidence interval. One is the connected confidence interval, meaning that there are no discontinuities in the interval. The other is the disconnected confidence interval, meaning that the interval is separated into a part containing $\pi$ and a part containing $-\pi$, which is realized when the confidence interval of the cosine estimate contains only negative values and that of the sine estimate contains 0. (If both the confidence intervals of the sine and the cosine contained 0, the confidence interval of the angle would have a discontinuity elsewhere as well; however, as long as the numbers of measurements take the upper-bound values derived in Appendix A, the estimation errors are bounded so that either the confidence interval of the sine or that of the cosine does not contain 0. Thus, we do not discuss that type of discontinuity in the following.) In the connected case, the estimation error of the angle is defined from the endpoints of the interval; in the disconnected case, given the two parts of the interval, we define it as
(18) |
An overview of this definition is shown in Figure 1. The quantity defined above can be interpreted as the estimation error of the angle in the sense that
(19) |
holds with an unknown integer $n$ as long as the true value of the measured angle is inside the confidence interval.
(ii) The estimate of $\theta$
Now let us show how $\theta$ is estimated from the angle estimate obtained in (i). Using (19) and the inequality
(20) |
it can be shown that
(21) |
Thus, if
(22) |
then the unknown integer $n$ can be uniquely determined as
(23) |
where $\lfloor x \rfloor$ denotes the largest integer that does not exceed $x$. By using (20) and (26), it can be shown inductively that if all the preceding angles are determined with sufficient precision, then the condition (22) is satisfied.
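A small sketch of this step (the rounding below is our simplification of the floor expression in (23); the numbers are illustrative):

    import numpy as np

    theta_true = 0.0695                          # hidden, used only to check the result
    K = 66                                       # current multiplier of theta (illustrative)
    theta_lo, theta_hi = 0.0687, 0.0704          # interval carried over from the previous round
    rho = np.arctan2(np.sin(K * theta_true), np.cos(K * theta_true))   # noiseless angle mod 2*pi

    # the interval is narrow enough that the winding number n is unique
    n = round((K * (theta_lo + theta_hi) / 2 - rho) / (2 * np.pi))
    theta_est = (2 * np.pi * n + rho) / K
    print(n, theta_est, theta_true)              # n = 1, theta_est ~ 0.0695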
Although (19), combined with the choice in (26), gives upper and lower bounds of the measured angle, a complicated procedure would be necessary to evaluate them inside the algorithm. Thus, in our algorithm, instead of evaluating these bounds exactly, we set the upper and lower bounds of $\theta$ at the $j$-th iteration as
(24) |
and
(25) |
which are correct as long as the conditions above hold. In Appendix A, we show that these conditions hold for all $j$ and that (19) is satisfied with probability larger than $1-\delta$ when at least
(26) |
In the final ($\ell$-th) iteration, the final result is set to the midpoint of the confidence interval of $\theta$. Then, the error of the final result is less than the half-width of that interval, and thus the error of the amplitude is
(27) |
We show an overview of our algorithm for a representative parameter choice in Figure 2.

2.3 Complexity Upper Bound
As we show in the Appendix, for our proposed algorithm the required query complexity $N_{\rm orc}$ with which the estimation error of $a$ is less than $\epsilon$ with probability larger than $1-\delta$ is bounded as
(28) |
The worst case is realized when the algorithm moves to the second stage at the first iteration. We see that the upper bound of $N_{\rm orc}$ almost achieves the Heisenberg scaling $N_{\rm orc} = O(1/\epsilon)$, because the dependence of the remaining factor on $\epsilon$ is only logarithmic and therefore small even for very small $\epsilon$. The tightest upper bound in the previous literature is given by [16]; written in our notation, its constant factor is considerably larger than that of our bound (28).
Although the detailed discussion is given in the Appendix, here we briefly explain why the upper bound is proportional to $1/\epsilon$. In order for the amplitude error to be bounded as in (27), it suffices that the errors of all the cosine estimates used in our algorithm are less than a fixed constant, which is realized with a number of measurements per iteration that does not grow with $\epsilon$ (up to a logarithmic factor). The number of oracle calls per measurement roughly doubles at each iteration, so the total is dominated by the last iteration and scales as $N_{\rm orc} = O(1/\epsilon)$, as expected.
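The scaling can be sketched numerically as follows (the iteration schedule, the shot-count formula, and the constants are illustrative placeholders, not the quantities derived in Appendix A):

    import numpy as np

    def total_oracle_calls(eps, delta, shots_const=100.0):
        """Rough count of calls of A' for a doubling schedule m_j = 2^(j-1)."""
        ell = int(np.ceil(np.log2(np.pi / (4 * eps))))                 # number of iterations
        n_shots = int(np.ceil(shots_const * np.log(2 * ell / delta)))  # shots per iteration
        # one shot of Q^m A'|0> costs 2m+1 applications of A' (or its inverse)
        return sum(n_shots * (2 * 2 ** (j - 1) + 1) for j in range(1, ell + 1))

    for eps in [1e-2, 1e-3, 1e-4]:
        print(eps, total_oracle_calls(eps, delta=0.05) * eps)   # roughly constant => ~1/eps scaling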

3 Numerical Experiment
In this section, we verify the validity of the algorithm introduced in Section 2 by numerical experiments. We choose two values of the amplitude to be estimated and fix the confidence parameter. We compute the number of oracle calls $N_{\rm orc}$ and the estimation error while changing the total number of algorithm steps $\ell$. For each parameter set, we execute 1000 trials of the algorithm.
The computation results are shown in Fig. 3. For each $\ell$, we plot the estimation error (green dots) such that a fixed fraction of the estimation errors in the 1000 trials is equal to or smaller than the plotted value. In the same figure, we also show the corresponding $N_{\rm orc}$. For data points where the algorithm does not go to the second stage, we write “First Stage Only” instead of the corresponding value. The data points are fitted with curves proportional to $1/N_{\rm orc}$ (blue lines), where the proportionality constant is determined by least squares.
Here is a list of notable points:
• As expected, the Heisenberg scaling is almost achieved.
• In the “First Stage Only” cases, the data points tend to lie below the blue line, i.e., the required $N_{\rm orc}$ for a fixed error is smaller than in the cases where the algorithm goes to the second stage. This is because in the “First Stage Only” cases the sources of error are limited: only the first-stage cosine estimates are needed to estimate $\theta$.
• As long as the algorithm goes to the second stage, the estimation error for a fixed $N_{\rm orc}$ decreases as the number of algorithm steps increases, because the condition for moving to the second stage is then satisfied with a smaller margin.
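A scaled-down stand-in for this experiment can be run classically. The script below repeats a first-stage-only refinement (with a deliberately small amplitude so the second stage is never needed) and reports the 95th-percentile error against the number of oracle calls, which is enough to see the roughly $1/N_{\rm orc}$ behaviour; it does not reproduce the exact parameters of Fig. 3.

    import numpy as np

    rng = np.random.default_rng(5)

    def first_stage_trial(theta, ell, n_shots=500, delta_c=0.01):
        """One trial of ell doubling rounds; returns (theta error, number of A' calls)."""
        lo, hi, m, calls = 0.0, np.pi / 6, 1, 0
        for _ in range(ell):
            if (4 * m + 2) * hi > np.pi:      # inversion would become ambiguous: stop
                break
            p = np.sin((2 * m + 1) * theta) ** 2
            c_hat = 1.0 - 2.0 * rng.binomial(n_shots, p) / n_shots
            w = np.sqrt(2.0 * np.log(2.0 / delta_c) / n_shots)
            k = 4 * m + 2
            lo = max(lo, np.arccos(min(1.0, c_hat + w)) / k)
            hi = min(hi, np.arccos(max(-1.0, c_hat - w)) / k)
            calls += n_shots * (2 * m + 1)
            m *= 2
        return abs((lo + hi) / 2 - theta), calls

    theta = np.arcsin(np.sqrt(0.001) / 2)     # small amplitude: first stage only
    for ell in [3, 4, 5, 6]:
        trials = [first_stage_trial(theta, ell) for _ in range(200)]
        errs = sorted(e for e, _ in trials)
        mean_calls = int(np.mean([c for _, c in trials]))
        print(ell, mean_calls, errs[int(0.95 * len(errs))])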
4 Conclusion
Quantum amplitude estimation is an important problem with a wide range of applications. Recently, ways of solving the problem without phase estimation have been studied. Some of them give algorithms that achieve the Heisenberg scaling $N_{\rm orc} = O(1/\epsilon)$ together with rigorous proofs; however, the constant factor in each of these algorithms is large. Our contribution in this paper is an algorithm that almost achieves the Heisenberg scaling with a constant factor smaller than those of previous methods, together with a proof of the upper bound.
In practical usage of the algorithm, some improvements might be possible. Although we determine the numbers of measurements in the two stages at the beginning of the algorithm for simplicity, we could reduce these values by determining them iteratively. For example, in the second stage, the number of measurements can be smaller than the value in (26) as long as the quantity in (21) is uniquely determined. Investigating such possible improvements is left for future work.
The effect of noise should also be examined. Although our algorithm reduces the circuit depth compared to the phase-estimation-based algorithm, the required depth still scales as $O(1/\epsilon)$ and the effect of noise is not negligible. Thus, studying how to mitigate the effect of noise in our algorithm would be important for discussing its practicality, which is also left for future work.
Acknowledgement
We acknowledge Naoki Yamamoto for insightful discussions and constructive comments. We also thank the two anonymous reviewers whose comments/suggestions helped improve and clarify this manuscript.
References
- [1] E. Knill, G. Ortiz, and R.D. Somma. Optimal Quantum Measurements of Expectation Values of Observables, 2006; arXiv:quant-ph/0607019. DOI: 10.1103/PhysRevA.75.012328.
- [2] I. Kassal, S. P. Jordan, P. J. Love, M. Mohseni, and A. Aspuru-Guzik. Polynomial-time quantum algorithm for the simulation of chemical dynamics, 2008, Proc. Natl. Acad. Sci. 105, 18681(2008); arXiv:0801.2986. DOI: 10.1073/pnas.0808245105.
- [3] P. Rebentrost, B. Gupt, and T. R. Bromley. Quantum computational finance: Monte Carlo pricing of financial derivatives, 2018, Phys. Rev. A 98, 022321 (2018); arXiv:1805.00109. DOI: 10.1103/PhysRevA.98.022321.
- [4] A. Montanaro. Quantum speedup of Monte Carlo methods, 2015, Proc. Roy. Soc. Ser. A, vol. 471 no. 2181, 20150301, 2015; arXiv:1504.06987. DOI: 10.1098/rspa.2015.0301.
- [5] S. Woerner and D. J. Egger. Quantum Risk Analysis, 2018, npj Quantum Inf 5, 15 (2019); arXiv:1806.06893. DOI: 10.1038/s41534-019-0130-6.
- [6] K. Miyamoto and K. Shiohara. Reduction of Qubits in Quantum Algorithm for Monte Carlo Simulation by Pseudo-random Number Generator, 2019; arXiv:1911.12469.
- [7] N. Wiebe, A. Kapoor, and K. Svore. Quantum Algorithms for Nearest-Neighbor Methods for Supervised and Unsupervised Learning, 2014, Quantum Information & Computation 15(3 & 4): 0318-0358 (2015); arXiv:1401.2142.
- [8] N. Wiebe, A. Kapoor, and K. M. Svore. Quantum Deep Learning, 2014; Quantum Information & Computation 16, 541–587 (2016) arXiv:1412.3489.
- [9] N. Wiebe, A. Kapoor, and K. M. Svore. Quantum Perceptron Models, 2016; Advances in Neural Information Processing Systems 29 3999-4007 (2016) arXiv:1602.04799.
- [10] I. Kerenidis, J. Landman, A. Luongo, and A. Prakash. q-means: A quantum algorithm for unsupervised machine learning (2018); Advances in Neural Information Processing Systems 32 4136-4146 (2019) arXiv:1812.03584.
- [11] G. Brassard, P. Hoyer, M. Mosca, and A. Tapp. Quantum Amplitude Amplification and Estimation, 2000, Quantum Computation and Quantum Information, Samuel J. Lomonaco, Jr. (editor), AMS Contemporary Mathematics, 305:53-74, 2002; arXiv:quant-ph/0005055.
- [12] A. Y. Kitaev. Quantum measurements and the Abelian Stabilizer Problem, 1995; Electronic Colloquium on Computational Complexity 3 (1996) arXiv:quant-ph/9511026.
- [13] Y. Suzuki, S. Uno, R. Raymond, T. Tanaka, T. Onodera, and N. Yamamoto. Amplitude Estimation without Phase Estimation, 2019; arXiv:1904.10246.
- [14] C. R. Wie. Simpler Quantum Counting, 2019, Quantum Information & Computation, Vol.19, No.11&12, pp0967-0983, (2019); arXiv:1907.08119. DOI: 10.26421/QIC19.11-12.
- [15] S. Aaronson and P. Rall. Quantum Approximate Counting, Simplified, 2019; arXiv:1908.10846.
- [16] D. Grinko, J. Gacon, C. Zoufal, and S. Woerner. Iterative Quantum Amplitude Estimation, 2019; arXiv:1912.05559.
- [17] A. Y. Kitaev, A. H. Shen, and M. N. Vyalyi. 2002. Classical and Quantum Computation. American Mathematical Society, Boston, MA, USA.
- [18] K. M. Svore, M. B. Hastings, and M. Freedman. Faster Phase Estimation, 2013, Quant. Inf. Comp. Vol. 14, No. 3&4, pp. 306-328 (2013); arXiv:1304.0741.
Appendix A Proof of Complexity Upper Bound
In this appendix, we provide a proof of the complexity upper bound.
Theorem 1.
The following upper bound holds for $N_{\rm orc}$:
(29) |
Proof.
Our strategy for obtaining the upper bound is to calculate the required numbers of measurements in the first and the second stage for the algorithm to work correctly with probability larger than $1-\delta$. Both can be derived from the condition that our algorithm works correctly in the second stage: even though the condition from the first stage also gives a (loose) bound, the strictest requirement comes from the condition that the estimation errors in the second stage are small enough. Thus, in the following, we only discuss the condition from the second stage.
In the second stage, as mentioned in Section 2.2, the algorithm works correctly as long as the required condition holds. Even though the conditions derived from it differ depending on whether the confidence interval is connected or disconnected, the required precisions of the estimates do not depend on the interval type. Therefore, in the following, we discuss only the case of the connected confidence interval. Then, the condition can be converted to
(30) |
where the barred quantities denote the corresponding true values. Then, under the conditions of Theorem 2, the inequality (46) in Appendix B gives, for the left-hand side of (30),
(31) |
as long as the corresponding error bounds hold. On the other hand, from (16), it holds that
(32)
(33)
where the newly introduced quantities denote the estimation errors of the corresponding estimates. Thus, if the estimation errors are bounded at least as
(34)
(35)
(36)
then it holds
(37) |
As a result,
(38) |
is satisfied. Thus, by using (8), if
(39) |
then both of the conditions (34) and (35) are satisfied with the stated probability. On the other hand, (36) is achieved if at least
(40) |
holds in the first stage because
(41) |
Thus, by using (8) again, it is shown that if
(42) |
then (36) is satisfied with the stated probability. In summary, as long as (39) and (42) are satisfied, the required conditions hold for all $j$ and (19) is satisfied, provided that all the estimates of the cosines are inside their confidence intervals; the probability of this event is bounded as required.
Finally, we evaluate the query complexity in the worst case. The worst case is that the algorithm moves to the second stage at the first iteration. In this case, the number of oracle calls is
(43) |
and the success probability of the algorithm follows from the bounds above. Thus, if we demand that the success probability be larger than $1-\delta$, then
(44) |
By combining this with (27), we obtain
(45) |
∎
Appendix B Theorem for atan function
Theorem 2.
When the arguments take one of the admissible combinations of values specified below, the following inequality holds:
(46) |
if the stated error bounds hold and if there is no discontinuity of the extended arctangent in the relevant intervals.
Proof.
It suffices to prove the claim in the following three cases: (i), (ii), and (iii). In case (i), using the trigonometric addition formulas, it holds that
(47) |
To show the last inequality, we use .
In case (ii) ,
(48) |
where the sign of is same as that of . The first term in (B) can be bounded as
(49) |
where the intermediate value lies between the two arguments, and we use the mean value theorem to show the first equality. Similarly,
(50) |
By substituting (B), (50), and the bound that follows from the no-discontinuity condition into the right-hand side of (B), it follows that
(51) |
The last equality holds because the signs of and are different.
In case (iii) , when ,
(52) |
where the sign is the same as the sign of and the sign of is same as that of . The values of the first line go to and the value of the second line can be evaluated by the same arguments as (B). Thus, it follows
(53) |
By the same discussion, when
(54) |
In all three cases, (46) is proved. ∎