Amplitude Amplification for Optimization
via Subdivided Phase Oracle
Abstract
We propose an algorithm using a modified variant of amplitude amplification to solve combinatorial optimization problems via the use of a subdivided phase oracle. Instead of dividing input states into two groups and shifting the phase equally for all states within the same group, the subdivided phase oracle changes the phase of each input state uniquely, in proportion to its objective value. We provide visualization of how amplitudes change after each iteration of applying the subdivided phase oracle followed by conventional Grover diffusion in the complex plane. We then show via numerical simulation that for normal, skew normal, and exponential distributions of objective values, the algorithm can be used to amplify the probability of measuring the optimal solution to a significant degree, independent of the search space size. In the case of skew normal and exponential distributions, this probability can be amplified to be close to unity, making our algorithm near deterministic. We then modify our algorithm to demonstrate how it can be extended to a broader set of objective value distributions. Finally, we discuss the speedup compared to classical schemes using the query complexity model, and show that our algorithm offers a significant advantage over these classical approaches.
Index Terms:
Quantum computing, Grover’s search algorithm, optimization, amplitude amplification

I Introduction
Grover’s search algorithm [1] is one of the most celebrated quantum algorithms, demonstrating computational power of quantum computers that classical computers cannot reach. The problem that Grover’s algorithm solves can be described as finding, among N total inputs, one input x that satisfies certain constraints given by an oracle function g(x). The assumption about the oracle is that nothing is known about its construction, nor whether the problem has any structure we can exploit to infer the solution from previously seen inputs and outputs. Grover’s algorithm accomplishes this task in just O(√N) queries, compared to the classical approach which requires O(N) queries, since we need to exhaustively check all inputs.
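The quadratic query advantage can be seen in a small state-vector simulation. The following snippet is our own illustration, not code from the paper; the search space size and marked index are arbitrary toy values.

```python
import numpy as np

# Toy Grover simulation: N basis states, one marked solution.
# The binary oracle flips the sign of the marked amplitude (pi phase flip)
# and the diffusion operator inverts every amplitude about the mean.
N = 64
marked = 42
amps = np.full(N, 1 / np.sqrt(N), dtype=complex)  # uniform superposition

iterations = int(round(np.pi / 4 * np.sqrt(N)))   # ~O(sqrt(N)) queries
for _ in range(iterations):
    amps[marked] *= -1                 # binary phase oracle
    amps = 2 * amps.mean() - amps      # diffusion: inversion about the mean

p_success = abs(amps[marked]) ** 2
print(iterations, round(p_success, 3))
```

After only about π√N/4 ≈ 6 oracle queries the marked state is measured with probability close to one, whereas classical exhaustive checking needs on the order of N = 64 queries.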
Since the original algorithm proposed by Grover in 1996, there have been multiple advancements. It has been shown that Grover’s algorithm is optimal [2, 3, 4] and is a special case of a more general approach, later known as the “amplitude amplification” technique [5, 6, 7, 8, 9]. The use of a non-π phase shift in place of the usual π-flip of the Grover oracle was investigated in [10, 11, 12, 13] and later generalized into a technique to increase the success probability of Grover’s algorithm to unity [14, 15]. It also finds application in the soufflé problem, where too few or too many iterations deteriorate the success probability, addressed by fixed-point search [16, 17, 18, 19].
In recent years, there have been investigations into the viability of using the amplitude amplification technique with a new variant of non-trivial phase shift oracle, one that does not simply recognize two classes of inputs but rather acts on each input uniquely depending on some cost function [20, 21, 22]. In this paper, we aim to extend the use of this non-trivial phase shift oracle to solve combinatorial optimization problems, instead of the usual adaptive Grover’s algorithm [23, 24, 25], where one finds better solutions by running the algorithm repeatedly until no better solution can be found.
The contribution of this paper is to introduce the use of a non-trivial phase shift oracle, which we refer to as a subdivided phase oracle (SPO) [20], replacing the canonical Grover oracle in the amplitude amplification process, to solve combinatorial optimization problems. We show that this approach can be used to efficiently find the solutions of a large class of optimization problems, characterized by the distributions of their objective values.
We describe the setting of maximization problems (section II), then discuss one approach to construct a subdivided phase oracle operator from the given objective function and how the objective values can be embedded into the phase of each basis state (section III). We define our algorithm and provide visualization of how amplitudes change in the complex plane after each iteration of applying the subdivided phase oracle followed by conventional Grover diffusion (inversion about the average). This modified amplitude amplification process is complex, and closed-form expressions are non-trivial to derive mathematically. Instead, we give intuition on how it amplifies the probability of measuring the solution states and discuss the relationship between the distribution of objective values, oracle construction, and performance of the algorithm (section IV). We then analyze the performance of the algorithm and show results via simulations of applying it to several classes of objective functions, plotting the probability of measuring the solutions with respect to the number of iterations (section V). We also discuss a dynamic approach where the oracle operator changes at each iteration depending on the current amplitude values; although this approach is purely theoretical, since it is impossible to keep track of all amplitudes in a large space, it suggests that a heuristic or variational [26] approach may exist for selecting parameters to construct the oracle. We then propose a sampling scheme based on the modified amplitude amplification process, plot the trade-off between the number of sampling trials and iterations, and derive the conditions under which the sampling scheme gives a better expected run time than classical sampling or exhaustive search (section VI).
We end the paper by discussing possible ways of extending the ideas we present here, pointers to efficiently realize the subdivided phase oracle in circuit models, connections to QAOA [27, 28, 29] and pointers to make it variational, and utilizing the visualization we developed to create a more suitable diffusion operator to increase the performance of our algorithm (section VII).
II Optimization Problem Setting
The problem we consider is the combinatorial optimization problem. For example, “what is the shortest route from city A to city B given a map of the roads currently available?”. An optimization problem can be described by three elements: a set X of all possible inputs, an objective function f defined over part of the input space, and a Boolean function g which checks the feasibility of an input. In the above example, X refers to all possible routes connecting A and B, f refers to the distance of each route, and, since some roads might be closed due to construction or accidents, g refers to whether a route can be used at the time of travel. The task is to find an input x such that f(x) is the maximum or the minimum while g(x) = 1,
x* = argmax_{x ∈ X, g(x) = 1} f(x).   (1)
In this study, we focus on the combinatorial optimization problem where the input is defined over the set of binary strings of length n, which gives N = 2^n possible inputs. The properties of the problems we consider in this work and their objective functions are as follows:
• We only study the task of maximizing f, since minimization can be transformed into maximization by multiplying every objective value by −1.
• We assume that all inputs are feasible, since the constraint function g can be absorbed into f by transforming f into f′(x) = f(x) + c·g(x), where c is a positive constant chosen such that any feasible input has a higher objective value than all infeasible inputs (f′(x) > f′(y) whenever g(x) = 1 and g(y) = 0).
• We assume the range of f consists of non-negative real values. Since we consider a finite search space, the objective values are bounded and a minimum value exists; we can therefore transform f to f′(x) = f(x) − f_min, where f_min is the minimum objective value.
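The three normalizations above can be sketched in a few lines of numpy. The data here is a toy example of our own, not from the paper.

```python
import numpy as np

f = np.array([3.0, -1.0, 7.0, 2.0])   # objective value of each input (toy data)
g = np.array([1, 1, 0, 1])            # feasibility flag (1 = feasible)

# (1) A minimization task becomes maximization by negating f.
f_if_minimizing = -f

# (2) Absorb the constraint: reward every feasible input by a constant c
# large enough that any feasible input outscores every infeasible one.
c = f.max() - f.min() + 1.0
f_con = f + c * g

# (3) Shift so the minimum objective value is zero (non-negative range).
f_nn = f_con - f_con.min()
print(f_nn)
```

After step (2) the smallest feasible value (8.0) exceeds the largest infeasible one (7.0), and after step (3) the minimum is exactly zero.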
III Subdivided Phase Oracle
Here, in place of the two-class recognition oracle used in amplitude amplification for canonical Grover, we define an operator O_θ which changes the phase of each input in proportion to its objective value given by the objective function f. Even though the word “oracle” normally refers to a black-box Boolean function which recognizes solutions to decision or search problems, we keep using it for this phase shift operator to reflect the modification of the amplitude amplification technique, and to be consistent with [20, 21, 22]. We refer to the oracle used in this paper as the subdivided phase oracle, or just the phase oracle, following [20], since the objective value is subdivided into the phase of each state in equal proportion. In this section we explain how one can construct a subdivided phase oracle from a given objective function.
The action of the phase oracle is to embed the objective value into the phase of each computational basis state,
O_φ |x⟩ = e^{iφ(x)} |x⟩,   (2)
where φ is a function that maps the set of possible inputs to the phase angle by which each state is shifted, in the range [0, 2π). The mapping from the objective function f to φ is done by
φ(x) = θ · f(x),   (3)
where θ is a parameter we can adjust and f_min is the minimum objective value. The action of the factor e^{iθ f_min} is to introduce a global phase which can be ignored, meaning the action of the phase oracle can be written as follows,
O_θ |x⟩ = e^{iθ (f(x) − f_min)} |x⟩.   (4)
We make a distinction between f and φ so that f always refers to the objective function while φ is the mapped function used to perform the phase shift on the computational basis. When it is clear from context which θ is used, or when the selected value is not important, we omit the θ and write the oracle as just O.
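On a state vector, the oracle of Eq. (4) is simply a diagonal phase operator. The snippet below is our illustration with toy objective values; theta is a free parameter of the oracle.

```python
import numpy as np

# Subdivided phase oracle as a diagonal operator on the amplitude vector:
# each basis state |x> acquires the phase theta * (f(x) - f_min), cf. Eq. (4).
def spo(amps, f, theta):
    phases = theta * (f - f.min())        # phi(x) with the global phase removed
    return amps * np.exp(1j * phases)

f = np.array([0.0, 1.0, 2.0, 5.0])        # toy objective values, n = 2
amps = np.full(4, 0.5, dtype=complex)     # uniform superposition
out = spo(amps, f, theta=np.pi / 8)
print(np.round(np.abs(out) ** 2, 3))      # measurement probabilities unchanged
```

The oracle alone never changes measurement probabilities; amplification only appears once the diffusion operator mixes the relative phases.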
From Eq. (4), it can be seen that the action of the oracle is invariant under a shift of the objective values. Furthermore, the oracle neither depends on nor exploits the structure of the problem.
IV Amplitude Amplification Using the Phase Oracle
Having established the problem definition, we now present our algorithm (Algorithm 1). It has the same structure as Grover’s algorithm but uses the subdivided phase oracle O_θ in place of the binary oracle operator. The diffusion operator is given by
D = 2|s⟩⟨s| − I,   (5)
where |s⟩ = H^{⊗n} |0…0⟩ is the equal superposition state and I is the identity operator.
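A state-vector sketch of Algorithm 1 as described in the text: prepare the uniform superposition, then repeat the subdivided phase oracle of Eq. (4) followed by the diffusion of Eq. (5) for t iterations. The objective values and parameters below are toy inputs of our own choosing.

```python
import numpy as np

def run(f, theta, t):
    """Apply t iterations of (subdivided phase oracle, diffusion)."""
    N = len(f)
    amps = np.full(N, 1 / np.sqrt(N), dtype=complex)  # |s>, uniform superposition
    for _ in range(t):
        amps = amps * np.exp(1j * theta * (f - f.min()))  # O_theta, Eq. (4)
        amps = 2 * amps.mean() - amps                     # D = 2|s><s| - I
    return amps

f = np.random.default_rng(0).normal(size=256)   # toy normal objective values
probs = np.abs(run(f, theta=0.1, t=5)) ** 2
print(round(probs.sum(), 6))                     # both operators are unitary
```

Since both the oracle and the diffusion are unitary, the probabilities still sum to one after any number of iterations; only their distribution over basis states changes.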
IV-A Visualizing the effect of subdivided phase oracle

Before applying our algorithm to various optimization problems, it is in order to gain visual intuition about how it leads to amplitude amplification. Unlike Grover’s algorithm, where all amplitudes are real numbers, the subdivided phase oracle produces complex amplitudes. Therefore each amplitude corresponding to a computational basis state can be represented as a vector in the complex plane.
Figure 1(a) represents the amplitudes of the equal superposition state |s⟩, where all vectors are real and have the same length 1/√N, where N = 2^n. The action of the phase oracle is to rotate each vector corresponding to a computational basis state around the origin, as seen in Figure 1(b). This introduction of relative phases does not change the probabilities of measuring the individual computational basis states. The amplitude amplification occurs after the application of the diffusion operator D, which can be visualized in the following three steps:
1. Find the mean amplitude vector and scale its length by two, represented by a yellow arrow in Figure 1(c).
2. Compute the vector differences between the amplitude vectors and the scaled mean amplitude vector, depicted by purple arrows in Figures 1(d) and (e).
3. Finally, replace the old amplitude vectors with the newly computed vectors from Step 2. This effectively shifts these updated amplitude vectors to the origin, as shown in Figure 1(f).
The vector corresponding to the probability amplitude of the optimal computational basis state representing the solution to the optimization problem is given by the most saturated red color in Fig. 1. We can see that application of the subdivided phase oracle followed by the diffusion operator increases its magnitude, amplifying the probability of measuring the optimal solution.
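The three geometric steps above are exactly the inversion about the mean, D = 2|s⟩⟨s| − I, applied entrywise; the following check on a random complex amplitude vector is our own illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
amps = rng.normal(size=8) + 1j * rng.normal(size=8)
amps /= np.linalg.norm(amps)       # normalized complex amplitude vector

scaled_mean = 2 * amps.mean()      # step 1: mean amplitude vector, length doubled
diffs = scaled_mean - amps         # step 2: differences to the scaled mean
new_amps = diffs                   # step 3: translate the results to the origin

print(np.allclose(new_amps, 2 * amps.mean() - amps))  # identical to applying D
```

Because D is unitary, the new amplitude vector keeps unit norm; the three steps only redistribute probability among the basis states.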
IV-B Amplifiability via subdivided phase oracle
The Grover oracle, when viewed in the same picture, can be tracked easily: the phase flip makes the amplitude vectors of the marked states point in the reverse direction. When the diffusion operator is applied, the mean lies on the real number line, and after the difference vectors are created, all amplitude vectors still lie on the real line. It is noteworthy that when everything stays on the real line, the amplitude of the marked states only grows while the mean vector points in the opposite direction. The mean vector shrinks after each iteration until it stops pointing in the opposite direction and starts to point in the same direction as the vectors of the marked states.
Sequences of applying O_θ followed by the diffusion operator lead to a more complex scenario than the Grover oracle, as both the directions and norms of the amplitude vectors and the mean vector depend largely on the choice of θ. The visualization discussed in the previous subsection, however, offers an intuitive picture of when and how the subdivided phase oracle leads to amplitude amplification. Since we are interested in using this algorithm to solve optimization problems, we focus on the optimal state vector. The optimal state’s amplitude grows up to a certain point, then shrinks back, repeating this process in cycles. The amplitude a of the optimal state grows after diffusion only when |2μ − a| > |a|, where μ denotes the mean amplitude vector. This means the angle between the mean vector and the amplitude vector of the optimal state should be between π/2 and 3π/2 in order to observe significant amplitude amplification after every iteration of our algorithm. For angles outside this range, amplitude amplification may still occur, but its magnitude is only marginal. As we will see later, the choice of θ affects this angle and hence the level of amplitude amplification.
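The growth condition just stated can be checked numerically. In this illustration of ours, after diffusion the optimal amplitude a becomes 2μ − a, and |2μ − a| > |a| whenever the angle between μ and a lies in (π/2, 3π/2), i.e. Re(μ̄ a) < 0; the specific values of mu and a below are toy inputs.

```python
import numpy as np

def grows(mu, a):
    """True when the amplitude a grows under diffusion with mean mu."""
    return abs(2 * mu - a) > abs(a)

mu = 0.05 + 0j
a_opposed = -0.1 + 0.02j   # points roughly opposite the mean vector
a_aligned = 0.1 + 0.02j    # points roughly along the mean vector
print(grows(mu, a_opposed), grows(mu, a_aligned))
```

The opposed amplitude grows while the aligned one shrinks, matching the angle condition above.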
V Results
Before we dive into the simulations and their results, we first need to discuss the connections between the objective function and the distribution of its mapped objective values. Recall that we want the angle between the best state’s amplitude vector and the mean vector to be as close to opposite as possible; this success condition of the amplification relies heavily on the distribution of the objective values. Hence we categorize the sets of simulations based on the density of the objective values.
In addition to the normal, skew normal, and exponential distributions, we consider the case where the objective function follows basic classes of polynomials and exponentials. Although these objective functions are artificially generated and unlikely to arise from natural real-world problems, we include them in our study to examine the more general behavior of the algorithm.
V-A Normally distributed objective values
We begin by considering the objective values to be normally distributed according to the probability density function,
p(x) = (1 / (σ√(2π))) exp(−(x − μ)² / (2σ²)),   (6)
where μ denotes the mean and σ the standard deviation of the distribution, as shown in Fig. 2(a). This is a natural assumption, as many instances of NP-hard optimization problems can be transformed into the Ising problem [30]. Such problems can be described using local interactions between a small number of bits of the entire bit string. This property in turn allows the distribution of objective values of a randomly generated large instance of such NP-hard problems, with local cost terms sampled from a single distribution, to be well approximated by a normal distribution. This approximation becomes more accurate with increasing size of the problem due to the central limit theorem and the law of large numbers.
The normal distribution is symmetric, which makes it possible for the phase difference between the mean amplitude vector and the optimal solution’s vector to be π, leading to excellent amplification results. In fact, using the visualization technique discussed in subsection IV-A, it is straightforward to observe that the mean amplitude vector and the vector corresponding to the optimal solution can be made to always lie on the real axis for appropriately chosen θ.

Choosing θ such that the mean amplitude vector and the solution vector lie on the real axis is in fact optimal. Since the normal distribution is symmetric, applying the algorithm not only amplifies the best state but also amplifies the worst state to the same probability.
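A one-iteration numerical check on a toy instance with normally distributed objective values: scanning over theta, some choice already amplifies the optimal state’s probability above its initial 1/N. This sketch is our own; the instance size, seed, and theta grid are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024
f = rng.normal(loc=0.0, scale=1.0, size=N)   # toy normal objective values
best = int(np.argmax(f))

def one_step_ratio(theta):
    """Probability of the optimal state after one iteration, relative to 1/N."""
    amps = np.full(N, 1 / np.sqrt(N), dtype=complex)
    amps = amps * np.exp(1j * theta * (f - f.min()))  # subdivided phase oracle
    amps = 2 * amps.mean() - amps                     # diffusion
    return abs(amps[best]) ** 2 * N

ratios = [one_step_ratio(t) for t in np.linspace(0.05, 2.0, 100)]
print(round(max(ratios), 3))
```

The best scanned theta gives a ratio above one after a single iteration; repeated iterations with a well-chosen fixed theta compound this growth, as the simulations in this section show.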
V-B Skew normal distribution
We have demonstrated that optimization problems with normally distributed objective values can be tackled easily by our algorithm. The key property that made this possible was the symmetry of the normal distribution. Real-world optimization problems, on the other hand, are unlikely to possess such a nice property. In this and the following subsections, we evaluate the performance of our algorithm for optimization problems with objective value distributions which are not symmetric. We show that not only do they not pose an obstacle to our algorithm, but asymmetry in the distributions works in our favor.
To capture the deviation from normally distributed objective values we now turn to skew normal distributions,
f_SN(x) = 2 p(x) Φ(αx),   (7)
where p(x) is the probability density function of the normal distribution given in Eq. (6), and
Φ(x) = ∫_{−∞}^{x} (1/√(2π)) e^{−t²/2} dt.   (8)
The parameter α controls the shape of the distribution, as seen in Fig. 2(b)-(c). For α = 0, f_SN becomes the normal distribution of Eq. (6), while for α ≠ 0, the distribution in Eq. (7) becomes asymmetric. In the limit α → ±∞, f_SN becomes a half-normal distribution.
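Eqs. (7)-(8) as reconstructed above can be written out directly; the snippet below is our illustration of the standard skew normal density and a sanity check that it integrates to one for any shape parameter.

```python
import math
import numpy as np

def normal_pdf(x, mu=0.0, sigma=1.0):
    # Eq. (6): normal probability density function.
    return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def std_normal_cdf(x):
    # Eq. (8): standard normal CDF, via the error function.
    return 0.5 * (1.0 + np.array([math.erf(v / math.sqrt(2)) for v in np.atleast_1d(x)]))

def skew_normal_pdf(x, alpha):
    # Eq. (7): skew normal density; alpha controls the asymmetry.
    return 2 * normal_pdf(x) * std_normal_cdf(alpha * x)

x = np.linspace(-6, 6, 4001)
dx = x[1] - x[0]
areas = {a: skew_normal_pdf(x, a).sum() * dx for a in (0.0, 4.0)}
print({a: round(v, 3) for a, v in areas.items()})
```

For α = 0 the density reduces to the normal pdf exactly, and for any α the total probability mass remains one.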
We observe that, as for the case of a normal distribution, there is always an optimal parameter θ which amplifies the amplitude of the optimal solution to a significant degree, as depicted in Fig. 2(e).
V-C Exponential distribution
We have seen that our algorithm can be applied to optimization problems involving normal and skew normal distributions of objective values. In order to assess the applicability of the algorithm further, we expand our discussion to exponential distribution,
p(x) = λ e^{−λx},  x ≥ 0,   (9)
where λ is the rate parameter, as depicted in Fig. 2(d). Despite their rather different definitions, all three objective value distributions described above share a number of important features when it comes to amplitude amplification, which we discuss in the following subsection.
V-D Common observations

We have seen how Algorithm 1 can be applied to optimization problems with normal, skew normal, and exponential distributions of objective values. In Fig. 2(e), we display how the probability of measuring the optimal value gets amplified with every successive application of the subdivided phase oracle O_θ and the diffusion operator D. Initially, this probability is 1/N = 1/2^n, set by the input size n of the optimization problem. We observe that the type of distribution affects both how many iterations are needed to achieve the maximum amplitude amplification and the maximum value itself. Interestingly, when the objective values are distributed according to the normal distribution, Algorithm 1 amplifies the probability to a significant degree. For the two asymmetric objective value distributions, on the other hand, the probability of measuring the optimal value gets amplified more strongly, reaching close to unity for both the skew normal and the exponential distributions.
Figure 2(e) displays the number of iterations needed to achieve maximum amplification of the optimal solution. Extending the number of iterations further reveals that the probability begins to oscillate, as depicted in Fig. 3. This is reminiscent of the behavior observed in amplitude amplification using the conventional binary oracle. Unlike with the binary oracle, however, the oscillations display an amplitude modulation not observed previously, leading to a number of oscillations before the maximal value is reached again. This beating behavior is a consequence of the subdivided phase oracle introducing complex probability amplitudes into the state vector. This basic pattern is observed for all distributions and tested objective functions. Increasing θ in the subdivided phase oracle leads to a decreased frequency of these oscillations.

As discussed in Section IV, in order to obtain the maximum probability, we have to choose the optimal θ in the subdivided phase oracle O_θ. Algorithm 1 is sensitive to this choice, and deviating from the optimal θ rapidly decreases the achievable amplitude amplification. In Fig. 4, we show how the maximum amplified probability depends on the deviation of θ from its optimal value for the normal and skew normal distributions.
In Grover’s algorithm, or the usual amplitude amplification algorithm, if the diffusion operator is kept as the inversion-about-the-mean operator D, it is known that in a large search space only a phase shift of π on the solution states is acceptable. If one were to change the oracle to use a phase angle other than π, one would need to change the diffusion operator to match the angle of the oracle under some constraints, as explored in [10, 12]. The same effect can also be seen in our results in Fig. 3.
The sensitivity to the angle, or the value of θ, matters not just for efficiency in terms of the number of iterations but is crucial for making Algorithm 1 work at all.
An important observation to make at this point is that the effectiveness of our algorithm is not affected by the size of the input space. This is a desired feature and signifies the applicability of our subdivided phase oracle approach to optimization even for extremely large data sets. Below, we discuss optimization problems which do not share this property, where the degree of amplitude amplification diminishes with increasing input space. This leads to a modified algorithm which overcomes the issue.
V-E Overcoming size-induced limits to amplifiability
We have seen above that Algorithm 1 possesses a number of desired properties when applied to common distributions of objective values. One such property is that its effectiveness does not diminish with increasing size of the input space of the optimization problem. In this subsection we demonstrate that this is not always the case and introduce a modification to Algorithm 1 that counteracts this issue.
We focus on the following group of objective functions,
f₁(x) = x,   (10)
f₂(x) = x²,   (11)
f₃(x) = x³,   (12)
f₄(x) = 2^x,   (13)
where the bit string x is interpreted as an integer.
They map each state to a unique objective value, therefore we refer to them as injective objective functions. These injective objective functions are unlikely to appear naturally in optimization problems in this simple form. However, they serve the purpose of demonstrating an important property of Algorithm 1 that may be encountered when using objective functions not explicitly covered in our manuscript.
The primary quantity that we are concerned with in this subsection is the amplified probability of the optimal solution relative to its initial value 1/N before the first iteration of Algorithm 1. In particular, we are interested in determining how this ratio behaves for increasing input space size N. Figure 5 displays the scaling of this probability ratio as a function of the number of iterations of Algorithm 1 for increasing N. Interestingly, we can observe that the ratio converges to a single curve as N increases. This is in stark contrast to the behavior observed for the normal, skew normal, and exponential distributions, where the optimal solution amplitude could be amplified easily independently of the input space size N.
One might suspect that we would see a different picture if we fixed the range of objective values but increased the number of data points in between while keeping the density fixed (sampling more points from the same density), so that the total number of states becomes cN instead of N for some constant c. We would still see the same picture: the highest achievable probability relative to the initial one approaches a certain limit.
Figure 5 shows the behavior for one of the injective objective functions, but identical observations were made for the remaining ones as well. In the remainder of this section, we present a modification that overcomes this size-induced limit to amplification.

So far we have only considered applying O_θ with a constant θ for all iterations of the algorithm. There is no real restriction that this must be the case; in fact, we can show that by changing the parameter θ of the operator at each iteration, we can achieve better results and overcome the above-mentioned issue. Recall from Fig. 3 that, as a general trend, the smaller the phase angle we choose, the shorter the period of the oscillating probability. This gives the intuition that a smaller angle can be used in early iterations to quickly boost the amplitude of the solution state, and a larger angle later to get past the amplification limit of the smaller angle. An interesting effect of varying θ at each iteration is that it allows the injective objective function case to get past the negligible probability boost attainable with a single fixed θ. In Fig. 6, we show, for the injective families, the probability of measuring the solution state at each iteration when using a greedy approach: θ is chosen at each iteration such that the resulting probability is highest, by scanning over a range of θ.
Although the greedy approach to dynamically choosing θ is not the optimal strategy, it serves as evidence that, in general, even the injective families of objective functions, which cannot be boosted via a fixed choice of θ, are boostable with this amplitude amplification technique.
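The greedy scheme just described can be sketched as follows. This is our own toy reconstruction: the instance uses a linear injective objective, and the theta grid, iteration count, and instance size are arbitrary choices rather than the paper's settings.

```python
import numpy as np

def step(amps, f, theta):
    """One iteration: subdivided phase oracle followed by diffusion."""
    out = amps * np.exp(1j * theta * (f - f.min()))
    return 2 * out.mean() - out

N = 128
f = np.arange(N, dtype=float)       # injective toy objective: every value unique
best = int(np.argmax(f))
amps = np.full(N, 1 / np.sqrt(N), dtype=complex)

# Greedy: at each iteration, scan a theta grid and keep the step that
# maximizes the current probability of the best state.
thetas = np.linspace(0.001, 0.2, 200)
for _ in range(10):
    amps = max((step(amps, f, t) for t in thetas),
               key=lambda a: abs(a[best]) ** 2)

print(round(abs(amps[best]) ** 2 * N, 3))   # growth relative to the initial 1/N
```

Even for this injective objective, the greedily chosen sequence of angles pushes the optimal state's probability above its initial value, in line with the behavior reported for Fig. 6.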
One way to avoid dealing with the complexity introduced by the subdivided phase oracle is to alternate the direction of rotation at each iteration, applying O_θ on the even iterations and its inverse O_θ† on the odd ones. The mean vector can then be tracked easily, as it always stays on the same axis throughout all iterations. Although this approach eases the problem of choosing the optimal θ, the number of iterations required to amplify the probability grows much more quickly than with the fixed-θ method, and the magnitude of the solution state is smaller. One interesting side effect of this approach is that the states with objective values higher than the mean grow larger while those lower than the mean grow smaller. This approach of alternating between O_θ and O_θ† was explored in detail in [21] with the help of an interference pattern created by adding one ancilla qubit to select the rotation direction in superposition, which can be thought of as doubling the number of amplitude vectors and splitting them into two groups; having them rotate in opposite directions forces the mean vector to always stay on the real number line.

VI Quantum Advantage
We have shown that using our modified amplitude amplification algorithm, it is possible to raise the probability of measuring the solution to a significant degree, depending on the distribution of objective values. Since the amplified probability does not reach unity, we still need to resort to classical random sampling after applying the amplitude amplification algorithm. If we look at Fig. 2(e), the increase in probability is faster in the first half than in the latter half of the iterations required to reach the peak, which means there is an optimal stopping point that gives us a smaller total overall complexity.
In order to properly quantify the speedup against classical schemes, we use the ‘expected number of trials’ to compare quantum and classical schemes. Let us define:
p_c = 1/N,   (14)
E_c = 1/p_c = N,   (15)
p_q = |⟨x*|ψ_t⟩|²,   (16)
E_q = t/p_q,   (17)
where p_c (p_q) is the probability of measuring the solution classically (quantumly, using Algorithm 1), E_c (E_q) is the expected number of queries to obtain (measure) the solution classically (quantumly), t is the number of iterations, or time steps, in each trial of the quantum case, and p_q is the probability of measuring the optimal solution state |x*⟩.
We make the reasonable simplification of treating one iteration of the amplitude amplification algorithm as a single query. Query complexity is the most commonly used metric to quantify the speedup of Grover’s algorithm and amplitude amplification, so we use it in the same manner.
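Under this query-counting convention, the comparison of Eqs. (14)-(17) as reconstructed above reduces to a few lines; the values of t and p_q below are hypothetical stand-ins for illustration only.

```python
# Classical uniform sampling: the optimum is found with p_c = 1/N per
# query, so the expected number of queries is E_c = 1/p_c = N.
N = 2 ** 20
p_c = 1 / N
E_c = 1 / p_c

# Quantum: each trial costs t oracle queries and succeeds with p_q,
# so the expected number of queries is E_q = t / p_q.
t, p_q = 100, 0.5            # hypothetical values, not measured results
E_q = t / p_q

print(E_c, E_q)              # the quantum scheme wins when t / p_q < N
```

The trade-off in Fig. 7 is exactly this quantity E_q as a function of the stopping iteration t: amplifying further raises p_q but also raises the per-trial cost t.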

As we have seen in the previous sections, the increase in probability does not scale linearly with the number of iterations; hence it is not always the optimal strategy to sample after boosting the probability of the solution state to its highest value, but rather to boost it just enough that the expected number of trials is minimized. Using the simulations and objective functions of the previous section, Fig. 7 shows the expected number of queries and the trade-off between further amplifying the probability and the number of samples.
Now that we have defined the expected number of trials to obtain the optimal solution, we look at another metric, the success probability of finding the solution. Since one trial has success probability p_q, the success probability when we sample for k trials is
P_success = 1 − (1 − p_q)^k.   (18)
Using the values obtained in Section V, we see that even a very small number of trials yields a probability of success very close to unity.
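Eq. (18) in code, with a deliberately pessimistic stand-in value of p_q (our choice for illustration; the skew normal and exponential cases of Section V reach much higher single-trial probabilities):

```python
# Probability of at least one success in k independent trials,
# P = 1 - (1 - p_q)**k; p_q = 0.5 is a hypothetical stand-in.
p_q = 0.5
P = {k: 1 - (1 - p_q) ** k for k in (1, 5, 10)}
print(P)
```

Even with p_q = 0.5, ten trials already push the overall success probability past 0.999.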
VII Conclusion
Even though amplitude amplification has been known since its discovery decades ago as one of the key quantum tools for achieving an advantage over classical computers, there is still much to be learned from it. In this paper, we showed evidence that the amplitude amplification algorithm in combination with a subdivided phase oracle can be used to solve optimization problems and that, depending on the structure and distribution of their objective values, an advantage can be gained over classical schemes. Using a fixed-angle oracle O_θ, a normal distribution of objective values can be amplified such that the probability of observing the solution state is significant. In the case of skew normal and exponential distributions, this probability becomes near unity. If changing θ at each time step is allowed, the algorithm is general enough to amplify other distributions of objective values, such as uniform ones or the injective objective functions. The process of choosing an optimal θ is non-trivial and needs further study, but it shows a promising path toward the next step: the more general amplitude amplification of more than two classes of inputs.
There is still much to be learned about using the subdivided phase oracle with the diffusion operator, such as a mathematical derivation of the process explained in section III or methods of choosing θ when little is known about the problem. The visualization technique that we developed and propose in this paper, albeit a simple approach, should prove useful in designing a diffusion operator better suited to the choice of θ and the distribution. From past research on Grover’s algorithm, it is known that there is a relationship between the choice of phase shift operator and the angle of the diffusion operator for unstructured search. It will be interesting to see whether the same can be derived when the oracle does not separate two groups of states but treats each of them uniquely.
From another perspective, the whole process can be viewed as a form of QAOA [29, 27], replacing the mixer layer with the diffusion operator [28]. Applying low-depth QAOA might be a good starting point for selecting θ, and from there we can vary θ using some heuristic algorithm.
Another approach to ease the sensitivity of the choice of θ and to improve the potency of the algorithm is to transform the objective function into another function exhibiting better characteristics. So far we have discussed only the theoretical aspects of the algorithm without mentioning its implementation. The embedding of the objective function could follow the ideas presented in [24, 25], where polynomials can be embedded directly without the need for ancilla qubits, which might enable transforming the objective function without an excessively complex circuit design.
Acknowledgment
This work was supported by MEXT Quantum Leap Flagship Program Grant Number JPMXS0118067285 and JPMXS0120319794.
References
- [1] L. K. Grover, “A fast quantum mechanical algorithm for database search,” in Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, ser. STOC ’96. New York, NY, USA: Association for Computing Machinery, Jul. 1996, pp. 212–219.
- [2] M. Boyer, G. Brassard, P. Hoeyer, and A. Tapp, “Tight bounds on quantum searching,” Fortschritte der Physik, vol. 46, no. 4-5, pp. 493–505, Jun. 1998.
- [3] C. H. Bennett, E. Bernstein, G. Brassard, and U. Vazirani, “Strengths and Weaknesses of Quantum Computing,” SIAM Journal on Computing, vol. 26, no. 5, pp. 1510–1523, Oct. 1997.
- [4] M. E. S. Morales, T. Tlyachev, and J. Biamonte, “Variationally Learning Grover’s Quantum Search Algorithm,” Physical Review A, vol. 98, no. 6, p. 062333, Dec. 2018.
- [5] G. Brassard, P. Hoyer, M. Mosca, and A. Tapp, “Quantum Amplitude Amplification and Estimation,” arXiv:quant-ph/0005055, vol. 305, pp. 53–74, 2002.
- [6] E. Farhi and S. Gutmann, “Analog analogue of a digital quantum computation,” Physical Review A, vol. 57, no. 4, pp. 2403–2406, Apr. 1998.
- [7] E. Farhi, J. Goldstone, D. Gosset, S. Gutmann, and P. Shor, “Unstructured Randomness, Small Gaps and Localization,” arXiv:1010.0009 [quant-ph], Sep. 2010.
- [8] A. M. Childs and J. Goldstone, “Spatial search by quantum walk,” Physical Review A, vol. 70, no. 2, p. 022314, Aug. 2004.
- [9] E. Biham, O. Biham, D. Biron, M. Grassl, and D. A. Lidar, “Grover’s quantum search algorithm for an arbitrary initial amplitude distribution,” Physical Review A, vol. 60, no. 4, pp. 2742–2745, Oct. 1999.
- [10] G. L. Long, W. L. Zhang, Y. S. Li, and L. Niu, “Arbitrary Phase Rotation of the Marked State Cannot Be Used for Grover’s Quantum Search Algorithm,” Communications in Theoretical Physics, vol. 32, no. 3, pp. 335–338, Oct. 1999.
- [11] G. L. Long, Y. S. Li, W. L. Zhang, and L. Niu, “Phase matching in quantum searching,” Physics Letters A, vol. 262, no. 1, pp. 27–34, Oct. 1999.
- [12] P. Hoyer, “On Arbitrary Phases in Quantum Amplitude Amplification,” Physical Review A, vol. 62, no. 5, p. 052304, Oct. 2000.
- [13] E. Biham, O. Biham, D. Biron, M. Grassl, D. A. Lidar, and D. Shapira, “Analysis of generalized Grover quantum search algorithms using recursion equations,” Physical Review A, vol. 63, no. 1, p. 012310, Dec. 2000.
- [14] G. L. Long, “Grover Algorithm with zero theoretical failure rate,” Physical Review A, vol. 64, no. 2, p. 022307, Jul. 2001.
- [15] F. M. Toyama, W. van Dijk, and Y. Nogami, “Quantum search with certainty based on modified Grover algorithms: Optimum choice of parameters,” Quantum Information Processing, vol. 12, no. 5, pp. 1897–1914, May 2013.
- [16] T. J. Yoder, G. H. Low, and I. L. Chuang, “Fixed-Point Quantum Search with an Optimal Number of Queries,” Physical Review Letters, vol. 113, no. 21, p. 210501, Nov. 2014.
- [17] L. K. Grover, “Fixed-Point Quantum Search,” Physical Review Letters, vol. 95, no. 15, p. 150501, Oct. 2005.
- [18] L. K. Grover, A. Patel, and T. Tulsi, “Quantum Algorithms with Fixed Points: The Case of Database Search,” Mar. 2006.
- [19] H. Kwon and J. Bae, “Quantum amplitude-amplification operators,” Physical Review A, vol. 104, no. 6, p. 062438, Dec. 2021.
- [20] T. Satoh, Y. Ohkura, and R. Van Meter, “Subdivided Phase Oracle for NISQ Search Algorithms,” IEEE Transactions on Quantum Engineering, vol. 1, pp. 1–15, 2020.
- [21] P. Shyamsundar, “Non-Boolean Quantum Amplitude Amplification and Quantum Mean Estimation,” arXiv:2102.04975 [quant-ph], Feb. 2021.
- [22] D. Koch, M. Cutugno, S. Karlson, S. Patel, L. Wessing, and P. M. Alsing, “Gaussian Amplitude Amplification for Quantum Pathfinding,” arXiv:2112.08167 [quant-ph], Jan. 2022.
- [23] C. Durr and P. Hoyer, “A Quantum Algorithm for Finding the Minimum,” arXiv:quant-ph/9607014, Jan. 1999.
- [24] A. Gilliam, S. Woerner, and C. Gonciulea, “Grover Adaptive Search for Constrained Polynomial Binary Optimization,” Quantum, vol. 5, p. 428, Apr. 2021.
- [25] A. Gilliam, M. Pistoia, and C. Gonciulea, “Optimizing Quantum Search with a Binomial Version of Grover’s Algorithm,” arXiv:2007.10894 [quant-ph], Jul. 2020.
- [26] M. Cerezo, A. Arrasmith, R. Babbush, S. C. Benjamin, S. Endo, K. Fujii, J. R. McClean, K. Mitarai, X. Yuan, L. Cincio, and P. J. Coles, “Variational Quantum Algorithms,” Nature Reviews Physics, vol. 3, no. 9, pp. 625–644, Sep. 2021.
- [27] S. Hadfield, Z. Wang, B. O’Gorman, E. G. Rieffel, D. Venturelli, and R. Biswas, “From the Quantum Approximate Optimization Algorithm to a Quantum Alternating Operator Ansatz,” Algorithms, vol. 12, no. 2, p. 34, Feb. 2019.
- [28] A. Bärtschi and S. Eidenbenz, “Grover Mixers for QAOA: Shifting Complexity from Mixer Design to State Preparation,” in 2020 IEEE International Conference on Quantum Computing and Engineering (QCE), Oct. 2020, pp. 72–82.
- [29] E. Farhi, J. Goldstone, and S. Gutmann, “A Quantum Approximate Optimization Algorithm,” arXiv:1411.4028 [quant-ph], Nov. 2014.
- [30] A. Lucas, “Ising formulations of many NP problems,” Frontiers in Physics, vol. 2, 2014.