A Quantum Approximation Scheme for $k$-Means
Abstract
We give a quantum approximation scheme (i.e., a $(1+\varepsilon)$-approximation for every $\varepsilon > 0$) for the classical $k$-means clustering problem in the QRAM model with a running time that has only polylogarithmic dependence on the number of data points. More specifically, given a dataset $V$ with $N$ points in $\mathbb{R}^d$ stored in the QRAM data structure, our quantum algorithm runs in time $\tilde{O}\big(2^{\tilde{O}(k/\varepsilon)}\, \eta^2 d\big)$ and with high probability outputs a set $C$ of $k$ centers such that $cost(V, C) \leq (1+\varepsilon) \cdot cost(V, C_{OPT})$. Here $C_{OPT}$ denotes the optimal $k$ centers, $cost(.)$ denotes the standard $k$-means cost function (i.e., the sum of squared distances of points to the closest center), and $\eta$ is the aspect ratio (i.e., the ratio of the maximum distance to the minimum distance). This is the first quantum algorithm with a polylogarithmic running time that gives a provable approximation guarantee of $(1+\varepsilon)$ for the $k$-means problem. Also, unlike previous works on unsupervised learning, our quantum algorithm does not require quantum linear algebra subroutines and has a running time independent of parameters (e.g., condition number) that appear in such procedures.
1 Introduction
Data clustering and the $k$-means problem, in particular, have many applications in data processing. The $k$-means problem is defined as: given a set $V$ of $N$ points in $\mathbb{R}^d$ and a positive integer $k$, find a set $C$ of $k$ centers such that the cost function
\[
cost(V, C) \equiv \sum_{v \in V} \min_{c \in C} \|v - c\|^2
\]
is minimised. Here, $\|v - c\|$ denotes the Euclidean distance between points $v$ and $c$. Partitioning the points based on the closest center in the center set $C$ gives a natural clustering of the data points. Due to its applications in data processing, a lot of work goes into designing algorithms for this problem from theoretical and practical standpoints. The $k$-means problem is known to be NP-hard, so it is unlikely to have a polynomial-time algorithm. Much research has been done on designing polynomial-time approximation algorithms for the $k$-means problem. However, the algorithm used in practice to solve $k$-means instances is a heuristic, popularly known as the $k$-means algorithm (not to be confused with the $k$-means problem). This heuristic, also known as Lloyd's iterations [Llo82], iteratively improves the solution in several rounds. The heuristic starts with an arbitrarily chosen set of $k$ centers. In every iteration, it (i) partitions the points based on the nearest center and (ii) updates the center set to the centroids of the $k$ partitions. In the classical computational model, it is easy to see that every Lloyd iteration costs $O(Nkd)$ time. This hill-climbing approach may get stuck in a local minimum or take a huge amount of time to converge, and hence it does not give provable guarantees on the quality of the final solution or on the running time. In practice, Lloyd's iterations are usually preceded by the $k$-means++ algorithm [AV07], a fast sampling-based approach for picking the initial centers that also gives an $O(\log k)$ approximation guarantee. So, Lloyd's iterations, preceded by the $k$-means++ algorithm, give the best of both worlds, theory and practice. Hence, it is unsurprising that a lot of work has been done on these two algorithms, ranging from efficiency improvements in specific settings to implementations in distributed and parallel models. With the quantum computing revolution imminent, it is natural to ask about quantum versions of these algorithms and about quantum algorithms for the $k$-means problem in general.
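To make the two steps of a Lloyd's iteration concrete, here is a minimal classical sketch (NumPy); the function names and this particular implementation are ours, for illustration only, and are not taken from any specific library.

```python
import numpy as np

def lloyd_iteration(V, centers):
    """One Lloyd's iteration: assign each point to its nearest center, then
    move every center to the centroid of its partition. Costs O(N*k*d) time.

    V: (N, d) array of data points; centers: (k, d) array of current centers."""
    # (i) Assignment step: index of the nearest center for every point.
    dists = np.linalg.norm(V[:, None, :] - centers[None, :, :], axis=2)  # (N, k)
    labels = dists.argmin(axis=1)
    # (ii) Update step: replace each center by the centroid of its partition.
    new_centers = centers.copy()
    for j in range(centers.shape[0]):
        members = V[labels == j]
        if len(members) > 0:
            new_centers[j] = members.mean(axis=0)
    return new_centers, labels

def kmeans_cost(V, centers):
    """Standard k-means cost: sum of squared distances to the closest center."""
    dists = np.linalg.norm(V[:, None, :] - centers[None, :, :], axis=2)
    return float((dists.min(axis=1) ** 2).sum())
```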
Early work on the $k$-means problem within the quantum setting involved efficiency gains from quantizing Lloyd's iterations. In particular, Aimeur, Brassard, and Gambs [ABG13] gave an $O(N^{3/2}/\sqrt{k})$ time algorithm for executing a single Lloyd's iteration for the metric $k$-median clustering problem, which is similar to the $k$-means problem. This was done using the quantum minimum finding algorithm of Durr and Hoyer [DH99]. Using quantum distance estimation techniques assuming quantum data access, Lloyd, Mohseni, and Rebentrost [LMR13] gave an algorithm for the execution of a single Lloyd's iteration for the $k$-means problem whose running time is only polylogarithmic in the number of data points. More recently, [KLLP19] gave an approximate quantization of the $k$-means++ method and Lloyd's iteration assuming QRAM data structure [kp17] access to the data. Interestingly, the running time has only polylogarithmic dependence on the size of the dataset. The algorithm uses quantum linear algebra procedures, and hence there is dependence on certain parameters that appear in such procedures, such as the condition number $\kappa$. Since Lloyd's iterations do not give an approximation guarantee, this quantum version is also a heuristic without a provable approximation guarantee. (Even though [KLLP19] gives a quantum version of the $k$-means++ algorithm, which has an approximation guarantee, the guarantee for the quantum version (which has errors) is not shown explicitly.) Our work on the $k$-means problem builds upon the techniques developed in the above and other works on quantum unsupervised learning to design algorithms with provable approximation guarantees. Specifically, we want to design an approximation scheme for the $k$-means problem with a running time that has only a polylogarithmic dependence on the data size, as in the algorithm of [KLLP19]. An approximation scheme is an algorithm that, in addition to the dataset and $k$, takes an error parameter $\varepsilon > 0$ as input and outputs a solution with a cost within a $(1+\varepsilon)$ factor of the optimal. We do this by quantizing the highly parallel, sampling-based approximation scheme of [BGJK20]. The tradeoff in obtaining this fine-grained approximation is that the running time of our algorithm has an exponential dependence on the number of clusters $k$ and the error parameter $\varepsilon$. In the classical setting, such algorithms are categorized as Fixed-Parameter Approximation Schemes (fpt-AS). Such $(1+\varepsilon)$-approximation algorithms can have exponential running time dependence on the parameter (e.g., the number of clusters $k$ in our setting). The practical motivation for studying fixed-parameter tractability for computationally hard problems is that when the parameter is small (e.g., the number of clusters $k$), the running time is not prohibitively large. We state our main result as the following theorem, which we will prove in the remainder of the paper.
Theorem 1.1 (Main Theorem)
Let $\varepsilon > 0$ be the error parameter. There is a quantum algorithm that, when given QRAM data structure access to a dataset $V \subset \mathbb{R}^d$ with $|V| = N$, runs in time $\tilde{O}\big(2^{\tilde{O}(k/\varepsilon)}\, \eta^2 d\big)$ and outputs a $k$-center set $C$ such that, with high probability, $cost(V, C) \leq (1+\varepsilon) \cdot cost(V, C_{OPT})$. Here, $\eta$ is the aspect ratio, i.e., the ratio of the maximum to the minimum distance between two given points in $V$. (The $\tilde{O}$ notation hides logarithmic factors in $N$. The $\tilde{O}$ in the exponent hides logarithmic factors in $k$ and $1/\varepsilon$.)
1.1 An approximation scheme in the classical setting
We convert the $D^2$-sampling-based approximation scheme of [BGJK20] to a quantum version. The approximation scheme is simple and highly parallel, and it can be described in the following few lines:
Input: Dataset $V$, integer $k$, and error parameter $\varepsilon$
Output: A center set $C$ with $cost(V, C) \leq (1+\varepsilon) \cdot cost(V, C_{OPT})$
1. (Constant approximation) Find a center set $C'$ that is a constant factor approximate solution. An $(\alpha, \beta)$ pseudo-approximate solution, for constants $\alpha$ and $\beta$, also works.
2. ($D^2$-sampling) Pick a set $S$ of $\mathrm{poly}(k/\varepsilon)$ points independently from the dataset using $D^2$-sampling with respect to the center set $C'$.
3. (All subsets) Out of all $k$-tuples $(S_1, \ldots, S_k)$ of (multi)subsets of $S$, each of size $O(1/\varepsilon)$, return the tuple $(\mu(S_1), \ldots, \mu(S_k))$ that gives the least $k$-means cost. Here, $\mu(S_i)$ denotes the centroid of the points in $S_i$.
We will discuss the quantization of the above three steps of the approximation scheme of [BGJK20], thus obtaining a quantum approximation scheme. (Steps (2) and (3) of the algorithm are within a loop for probability amplification; this loop is skipped in the high-level description above for simplicity.)
1. (Constant approximation)
The first step requires finding a constant-factor approximate solution for the $k$-means problem. Even though several constant-factor approximation algorithms are known, we need one with a quantum counterpart that runs in time polylogarithmic in the input size $N$. One such algorithm is the $k$-means++ seeding algorithm [AV07], which picks $k$ centers in a sequence, each center picked using $D^2$-sampling (given a center set $C$, $D^2$-sampling picks a datapoint with probability proportional to the squared distance of the point to the closest center in $C$) with respect to the previously chosen centers. [KLLP19] give an approximate quantum version of $D^2$-sampling. The approximation guarantee of the $k$-means++ algorithm is $O(\log k)$ instead of the constant approximation required in the approximation scheme of [BGJK20]. It is known from the work of [ADK09] that if the $D^2$-sampling in $k$-means++ is continued for $O(k)$ steps instead of stopping after sampling $k$ centers, then we obtain a center set of size $O(k)$ that is a pseudo-approximate solution. This means that this $O(k)$-size center set has a $k$-means cost that is at most some constant times the optimal. Such a pseudo-approximate solution is sufficient for the approximation scheme of [BGJK20] to work. We show that the pseudo-approximation guarantee of [ADK09] also holds when using the approximate quantum version of the $D^2$-sampling procedure.
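A minimal classical sketch of $D^2$-sampling-based seeding is given below; sampling exactly $k$ centers corresponds to $k$-means++ [AV07], while continuing for more rounds (the `num_centers` argument, of order $O(k)$) corresponds to the pseudo-approximation of [ADK09]. The function names are ours, and the exact number of extra rounds needed is the one proved in [ADK09]; the same sampler is what step 2 of the scheme uses, there with a fixed center set $C'$.

```python
import numpy as np

def d2_sample(V, centers, rng):
    """Sample one index i with probability proportional to min_j ||v_i - c_j||^2.

    With an empty center set this degenerates to uniform sampling."""
    if len(centers) == 0:
        probs = np.full(len(V), 1.0 / len(V))
    else:
        C = np.asarray(centers)
        sq = ((V[:, None, :] - C[None, :, :]) ** 2).sum(axis=2).min(axis=1)
        probs = sq / sq.sum() if sq.sum() > 0 else np.full(len(V), 1.0 / len(V))
    return rng.choice(len(V), p=probs)

def d2_seeding(V, num_centers, seed=0):
    """k-means++-style seeding: repeatedly D^2-sample w.r.t. the centers chosen so far.

    num_centers = k gives k-means++ seeding; num_centers = O(k) gives the
    constant-factor pseudo-approximate center set C' used by the scheme of [BGJK20]."""
    rng = np.random.default_rng(seed)
    centers = []
    for _ in range(num_centers):
        i = d2_sample(V, centers, rng)
        centers.append(V[i])
    return np.asarray(centers)
```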
2. ($D^2$-sampling)
The second step of [BGJK20] involves $D^2$-sampling, which we have already discussed how to quantize. This is no different from the $D^2$-sampling involved in the $k$-means++ algorithm of the previous step. The sampling in this step is in fact simpler, since the center set with respect to which the $D^2$-sampling is performed does not change (as is the case within the $k$-means++ algorithm).
3. (All subsets)
Since the number of points sampled in the previous step is $\mathrm{poly}(k/\varepsilon)$, we need to consider a list of $2^{\tilde{O}(k/\varepsilon)}$ tuples of subsets, each giving a $k$-center set (a tuple $(S_1, \ldots, S_k)$ defines the center set $(\mu(S_1), \ldots, \mu(S_k))$). We need to compute the $k$-means cost for each center set in the list and then pick the one with the least cost. We give a quantization of the above steps. (Note that when picking the center set with the least cost, we could get a quadratic improvement in the search for the best $k$-center set using quantum search. Given that the search space is of size $2^{\tilde{O}(k/\varepsilon)}$, this results only in a constant-factor improvement in the exponent. So, we leave out quantum search from the discussion for simplicity.)
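The enumeration can be sketched classically as a brute-force search over $k$-tuples of small multisubsets of the sampled set. The subset size `t` (of order $1/\varepsilon$ in our reading of [BGJK20]) is a parameter here, and the sketch does not reproduce the exact bookkeeping of [BGJK20] (e.g., augmenting the sample with copies of the points in $C'$); it only illustrates the shape of the search space.

```python
import itertools
import numpy as np

def kmeans_cost(V, centers):
    d2 = ((V[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return float(d2.min(axis=1).sum())

def best_subset_centers(V, S, k, t):
    """Try every k-tuple of (multi)subsets of the sampled points S, each of size t,
    take centroids as candidate centers, and return the k-center set of least cost.

    The search space has size roughly |S|^(k*t), i.e. 2^{O~(k/eps)} when t = O(1/eps)."""
    candidates = list(itertools.combinations_with_replacement(range(len(S)), t))
    best_cost, best_centers = float("inf"), None
    for tup in itertools.product(candidates, repeat=k):
        centers = np.array([S[list(subset)].mean(axis=0) for subset in tup])
        c = kmeans_cost(V, centers)
        if c < best_cost:
            best_cost, best_centers = c, centers
    return best_centers, best_cost
```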
Note that the quantization of the classical steps of [BGJK20] will incur precision errors. So, we first need to ensure that the approximation guarantee of [BGJK20] is robust against small errors in distance estimates, $D^2$-sampling probabilities, and $k$-means cost estimates. We must carefully account for errors and ensure that the quantum algorithm retains the approximation guarantee of the robust version of [BGJK20].
Organization
We begin the technical discussion in the next section by showing that the approximation scheme of [BGJK20] is robust against errors. We will also show the robustness of the $k$-means++ procedure. In the subsequent section, we give the quantization of the steps of [BGJK20]. First, we briefly discuss the related work.
1.2 Related work
We have already discussed past research works on quantum versions of the $k$-means algorithm (i.e., Lloyd's iterations). This includes [ABG13], [LMR13], and [KLLP19]. All of these have been built using various quantum tools and techniques developed for problems in quantum unsupervised learning, such as coherent amplitude and median estimation, distance estimation, and minimum finding. See [WKS15] for examples of several such tools. Other directions on quantum $k$-means include adiabatic algorithms (e.g., [LMR13]) and algorithms using the QAOA framework (e.g., [OMA+17, FGG14]). However, these are without provable guarantees. A line of work has also suggested that quantum algorithms can outperform classical ones because of the QRAM data structure access. A more level playing field is to assume that a similar sample and query data access is available in the classical setting. Under this assumption, several "dequantization" results for unsupervised machine learning algorithms have been given. This includes [Tan19, CGL+20, Tan21]. It will be interesting to see if similar dequantization is possible for the quantum algorithm presented in this work, since the main ingredient of our algorithm and of the dequantization results is length-squared sampling.
2 A Robust Approximation Scheme
We start the discussion with the $D^2$-sampling method. In particular, we would like to check the robustness of the approximation guarantee provided by the $D^2$-sampling method against errors in estimating the distances between points. We will show that the $D^2$-sampling method gives a constant factor pseudo-approximation even under sampling errors.
2.1 Pseudo-approximation using $D^2$-sampling
Let the matrix $V \in \mathbb{R}^{N \times d}$ denote the dataset, where row $i$ contains the data point $v_i$. Let the matrix $C \in \mathbb{R}^{k \times d}$ denote any $k$-center set, where row $j$ contains the $j$-th center $c_j$ out of the $k$ centers. Sampling a data point using the $D^2$ distribution w.r.t. (short for with respect to) a center set $C$ means that the datapoint $v_i$ gets sampled with probability proportional to the squared distance to its nearest center in the center set $C$. This is also known as $D^2$-sampling w.r.t. center set $C$. More formally, data points are sampled using the distribution $(p_1, \ldots, p_N)$, where $p_i = \frac{\min_j \|v_i - c_j\|^2}{\sum_{i'} \min_j \|v_{i'} - c_j\|^2}$. For the special case $C = \emptyset$, $D^2$-sampling is the same as uniform sampling. The $k$-means++ seeding algorithm starts with an empty center set $C$ and, over $k$ iterations, adds a center to $C$ in every iteration by $D^2$-sampling w.r.t. the current center set $C$. It is known from the result of [AV07] that this $k$-means++ algorithm gives an $O(\log k)$ approximation in expectation. It is also known from the result of [ADK09] that if $O(k)$ centers are sampled instead of $k$ (i.e., the for-loop runs for $O(k)$ iterations instead of $k$), the cost with respect to these centers is at most some constant times the optimal $k$-means cost. Such an algorithm is called a pseudo-approximation algorithm, and such a pseudo-approximation algorithm is sufficient for the approximation scheme of [BGJK20]. So, we will quantize this constant factor pseudo-approximation algorithm, referred to as Algorithm 1 in what follows.
In the quantum simulation of the above sampling procedure, there will be small errors in the sampling probabilities in each iteration. We need to ensure that the constant approximation guarantee of the above procedure is robust against small errors in the sampling probabilities owing to errors in distance estimation. We will work with a relative error of $(1 \pm \varepsilon)$ for a small $\varepsilon > 0$. The following is a crucial lemma from [AV07] needed to show the pseudo-approximation property of Algorithm 1.
Lemma 1 (Lemma 3.2 in [AV07])
Let $A$ be an arbitrary optimal cluster, and let $C$ be an arbitrary set of centers. Let $c$ be a center chosen from $A$ with $D^2$-sampling with respect to $C$. Then $\mathbf{E}[cost(A, C \cup \{c\})] \leq 8 \cdot OPT(A)$, where $OPT(A)$ denotes the optimal $1$-means cost of the cluster $A$.
The above lemma is used as a black box in the analysis of Algorithm 1 in [ADK09]. The following version of the lemma holds for distance estimates with a relative error of $(1 \pm \varepsilon)$ and gives a constant factor approximation guarantee. Since Lemma 1 is used as a black box in that analysis, replacing it with Lemma 2 also gives a constant factor approximation to the $k$-means objective. We will use the following notion of the closeness of two distance functions.
Definition 1
A distance function $d'$ is said to be $\varepsilon$-close to the distance function $d$, denoted by $d' \in (1 \pm \varepsilon)\, d$, if for every pair of points $x, y$, $d'(x, y) \in (1 \pm \varepsilon)\, d(x, y)$. (We use the notation that for positive reals $a, b, \varepsilon$, $a \in (1 \pm \varepsilon)\, b$ if $(1-\varepsilon)\, b \leq a \leq (1+\varepsilon)\, b$.)
Lemma 2
Let $0 < \varepsilon \leq 1/2$. Let $A$ be an arbitrary optimal cluster and $C$ be an arbitrary set of centers. Let $c$ be a center chosen from $A$ with $D^2$-sampling with respect to $C$ using a distance function $d'$, where $d' \in (1 \pm \varepsilon)\, d$. Then $\mathbf{E}[cost(A, C \cup \{c\})] \leq 8 \left(\frac{1+\varepsilon}{1-\varepsilon}\right)^2 \cdot OPT(A)$.
Proof
Let $D(x)$ denote the distance of the point $x$ from its nearest center in $C$ and let $\hat{D}(x)$ denote the corresponding estimated distance (computed using $d'$). We have $\hat{D}(x) \in (1 \pm \varepsilon)\, D(x)$. The following expression gives the expectation:
\[
\mathbf{E}[cost(A, C \cup \{c\})] \;=\; \sum_{c \in A} \frac{\hat{D}(c)^2}{\sum_{x \in A} \hat{D}(x)^2} \; \sum_{x \in A} \min\left( D(x)^2, \|x - c\|^2 \right).
\]
Note that for all $x$, $\hat{D}(x) \in (1 \pm \varepsilon)\, D(x)$. This gives $\hat{D}(c)^2 \leq (1+\varepsilon)^2 D(c)^2$, which further gives $\sum_{x \in A} \hat{D}(x)^2 \geq (1-\varepsilon)^2 \sum_{x \in A} D(x)^2$ and hence $\frac{\hat{D}(c)^2}{\sum_{x \in A} \hat{D}(x)^2} \leq \left(\frac{1+\varepsilon}{1-\varepsilon}\right)^2 \frac{D(c)^2}{\sum_{x \in A} D(x)^2}$. We use this to obtain the following upper bound on the expectation:
\[
\mathbf{E}[cost(A, C \cup \{c\})] \;\leq\; \left(\frac{1+\varepsilon}{1-\varepsilon}\right)^2 \sum_{c \in A} \frac{D(c)^2}{\sum_{x \in A} D(x)^2} \sum_{x \in A} \min\left( D(x)^2, \|x - c\|^2 \right) \;\leq\; 8 \left(\frac{1+\varepsilon}{1-\varepsilon}\right)^2 OPT(A),
\]
where the last inequality follows from Lemma 1, since the middle expression is exactly the expectation under exact $D^2$-sampling.
This completes the proof of the lemma.∎
We will use this lemma in the approximation scheme of [BGJK20]. However, this lemma may be of independent interest, as it gives a quantum pseudo-approximation algorithm with a constant factor approximation guarantee that runs in time polylogarithmic in the data size and linear in $k$ and $d$. We will discuss this quantum algorithm in the next section.
2.2 Approximation scheme of [BGJK20]
A high-level description of the approximation scheme of [BGJK20] was given in the introduction. We give a more detailed pseudocode in Algorithm 2.
In addition to the input instance $(V, k)$ and error parameter $\varepsilon$, the algorithm is also given a constant-factor approximate solution $C'$, which is used for $D^2$-sampling. A pseudo-approximate solution is sufficient for the analysis in [BGJK20]. The discussion from the previous subsection gives a robust algorithm that outputs a pseudo-approximate solution even under errors in distance estimates. So, the input requirement of Algorithm 2 can be met. Now, the main ingredient being $D^2$-sampling, we need to ensure that errors in distance estimates do not seriously impact the approximation analysis of Algorithm 2. We state the main theorem of [BGJK20] before giving the analogous statement for the modified algorithm, where the Euclidean distance function $d$ is replaced with a distance function $d'$ that is $\varepsilon'$-close to $d$.
Theorem 2.1 (Theorem 1 in [BGJK20])
Let $\varepsilon > 0$ be the error parameter, $V \subset \mathbb{R}^d$ be the dataset, $k$ be a positive integer, and let $C'$ be a constant-factor approximate solution for dataset $V$. Let $\mathcal{L}$ be the list returned by Algorithm 2 on input $(V, k, \varepsilon, C')$ using the Euclidean distance function $d$. Then, with constant probability, $\mathcal{L}$ contains a center set $C$ such that $cost(V, C) \leq (1+\varepsilon) \cdot cost(V, C_{OPT})$. Moreover, $|\mathcal{L}| = 2^{\tilde{O}(k/\varepsilon)}$ and the running time of the algorithm is $O\big(Nd \cdot 2^{\tilde{O}(k/\varepsilon)}\big)$.
We give the analogous theorem with access to the Euclidean distance function $d$ replaced with access to a distance function $d'$ that is $\varepsilon'$-close to $d$.
Theorem 2.2
Let $\varepsilon > 0$ be the error parameter, $0 < \varepsilon' \leq 1/2$ be the closeness parameter, $V \subset \mathbb{R}^d$ be the dataset, $k$ be a positive integer, and let $C'$ be a constant-factor approximate solution for dataset $V$. Let $\mathcal{L}$ be the list returned by Algorithm 2 on input $(V, k, \varepsilon, C')$ using a distance function $d'$ that is $\varepsilon'$-close to the Euclidean distance function $d$. Then, with constant probability, $\mathcal{L}$ contains a center set $C$ such that $cost(V, C) \leq (1+\varepsilon) \cdot cost(V, C_{OPT})$. Moreover, $|\mathcal{L}| = 2^{\tilde{O}(k/\varepsilon)}$ and the running time of the algorithm is $O\big(Nd \cdot 2^{\tilde{O}(k/\varepsilon)}\big)$.
The proof of the above theorem closely follows the proof of Theorem 2.1 from [BGJK20], in the same way that the proof of Lemma 2 that we saw earlier closely follows the proof of Lemma 1. The minor changes are related to approximate distance estimates using $d'$ instead of exact estimates using $d$. The statement of Theorem 2.2 is not surprising in this light. Instead of repeating the entire proof of [BGJK20], we point out the one change in their argument caused by using $d'$ instead of $d$ as the distance function. The analysis of [BGJK20] works by partitioning the points in any optimal cluster into those that are close to $C'$ and those that are far. For the far points, it is shown that when doing $D^2$-sampling, a far point will be sampled with probability at least a fixed fraction of the uniform sampling probability (see Lemma 21 in [GJK20], which is a full version of [BGJK20]). It then argues that a reasonable size set of $D^2$-sampled points will contain a uniform sub-sample. A combination of the uniform sub-sample along with copies of points in $C'$ gives a good center for this optimal cluster. Replacing $d$ with $d'$ decreases this lower bound on the sampling probability by a multiplicative factor that depends only on $\varepsilon'$. This means that the number of points sampled should increase by the corresponding factor, which in turn means that the list size increases accordingly. Note that when $\varepsilon'$ is bounded by a constant (say $\varepsilon' \leq 1/2$), the list size and running time retain the same form as that in [BGJK20] (i.e., $2^{\tilde{O}(k/\varepsilon)}$ and time $O\big(Nd \cdot 2^{\tilde{O}(k/\varepsilon)}\big)$).
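The effect of replacing $d$ with an $\varepsilon'$-close $d'$ on the $D^2$-sampling probabilities can be made explicit with the following back-of-the-envelope calculation (our notation; the exact constants in [BGJK20]/[GJK20] may differ). Writing $p_i$ and $p'_i$ for the sampling probabilities of point $v_i$ under $d$ and $d'$ respectively,
\[
p'_i \;=\; \frac{d'(v_i, C')^2}{\sum_{i'} d'(v_{i'}, C')^2}
\;\geq\; \frac{(1-\varepsilon')^2\, d(v_i, C')^2}{(1+\varepsilon')^2 \sum_{i'} d(v_{i'}, C')^2}
\;=\; \left(\frac{1-\varepsilon'}{1+\varepsilon'}\right)^2 p_i .
\]
So any lower bound on the sampling probability of a far point degrades by at most the factor $\left(\frac{1-\varepsilon'}{1+\varepsilon'}\right)^2$, which is at least $1/9$ for $\varepsilon' \leq 1/2$; compensating by sampling a constant factor more points leaves the $2^{\tilde{O}(k/\varepsilon)}$ form of the list size unchanged.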
3 Quantum Algorithms
We will work under the assumption that the minimum distance between two data points is $1$, which can be achieved by scaling. This makes the aspect ratio $\eta$ simply the maximum distance between two data points. We will use $i$ for an index into the rows of the data matrix $V$, and $j$ for an index into the rows of the center matrix $C$. We would ideally like to design a quantum algorithm that performs the transformation:
\[
\frac{1}{\sqrt{N}} \sum_{i=1}^{N} |i\rangle\, |0\rangle \;\longmapsto\; \frac{1}{\sqrt{N}} \sum_{i=1}^{N} |i\rangle\, |d(v_i, C)\rangle,
\]
where $d(v_i, C) = \min_j \|v_i - c_j\|$ denotes the distance of $v_i$ to its closest center in $C$.
Let us call the state on the right $|\psi_C\rangle$. This is an ideal quantum state for us, since $|\psi_C\rangle$ helps to perform $D^2$-sampling and to find the $k$-means cost of clustering, which are the main components of the approximation scheme of [BGJK20] that we intend to use. One caveat is that we will only be able to perform the following transformation (instead of the abovementioned transformation):
\[
\frac{1}{\sqrt{N}} \sum_{i=1}^{N} |i\rangle\, |0\rangle \;\longmapsto\; \frac{1}{\sqrt{N}} \sum_{i=1}^{N} |i\rangle\, |\overline{d(v_i, C)}\rangle,
\]
where $\overline{d(v_i, C)}$ is an approximation for $d(v_i, C)$ in a sense that we will make precise below. We will use $|\bar{\psi}_C\rangle$ to denote the state $\frac{1}{\sqrt{N}} \sum_{i} |i\rangle\, |\overline{d(v_i, C)}\rangle$. This state is prepared using tools such as the swap test followed by coherent amplitude estimation and median estimation. Since these tools and techniques are known from previous works [WKS15, LMR13, KLLP19], we summarise the discussion (see Sections 4.1 and 4.2 in [KLLP19]) in the following lemma.
Lemma 3 ([KLLP19] and [WKS15])
Assume for a data matrix $V \in \mathbb{R}^{N \times d}$ and a center set matrix $C \in \mathbb{R}^{k \times d}$ that the following unitaries: (i) $|i\rangle|0\rangle \mapsto |i\rangle|v_i\rangle$, and (ii) $|j\rangle|0\rangle \mapsto |j\rangle|c_j\rangle$ can be performed in time $T$ and that the norms of the vectors are known. For any $\Delta > 0$ and $\varepsilon_1 > 0$, there is a quantum algorithm that in time $\tilde{O}\!\left(\frac{k\, \eta\, T \log(1/\Delta)}{\varepsilon_1}\right)$ computes:
\[
|i\rangle|j\rangle|0\rangle \;\longmapsto\; |i\rangle|j\rangle|\overline{d(v_i, c_j)}\rangle,
\]
where $\overline{d(v_i, c_j)}$ satisfies the following two conditions for every $i$ and $j$:
(i) with probability at least $1 - 2\Delta$, $\overline{d(v_i, c_j)} \in (1 \pm \varepsilon_1)\, d(v_i, c_j)$, and
(ii) $0 \leq \overline{d(v_i, c_j)} \leq \eta$.
In the subsequent discussions, we will use $T$ as the time to access the QRAM data structure [kp17], i.e., for the transitions $|i\rangle|0\rangle \mapsto |i\rangle|v_i\rangle$ and $|j\rangle|0\rangle \mapsto |j\rangle|c_j\rangle$ as given in the above lemma. This is known to be $O(\mathrm{polylog}(Nd))$. Moreover, the time to update each entry in this data structure is also $O(\mathrm{polylog}(Nd))$. This is the logarithmic factor that is hidden in the $\tilde{O}$ notation. In the following subsections, we discuss the utilities of $|\bar{\psi}_C\rangle$ for the various components of the approximation scheme of [BGJK20]. During these discussions, it will be easier to see the utility first with the ideal state $|\psi_C\rangle$ before the real state $|\bar{\psi}_C\rangle$ that can actually be prepared. We will see how $|\bar{\psi}_C\rangle$ is sufficient within a reasonable error bound.
3.1 Finding distance to closest center
Let us see how we can estimate the distance of any point $v_i$ to its closest center in a center set $C$ with $k$ centers. We can use the transformation of Lemma 3 to prepare the following state for any $i$:
\[
|i\rangle\, |\overline{d(v_i, c_1)}\rangle\, |\overline{d(v_i, c_2)}\rangle \cdots |\overline{d(v_i, c_k)}\rangle.
\]
We can then iteratively compare and swap pairs of registers to prepare the state $|i\rangle\, |\overline{d(v_i, C)}\rangle$, where $\overline{d(v_i, C)} = \min_j \overline{d(v_i, c_j)}$. If we apply the same procedure to the uniform superposition $\frac{1}{\sqrt{N}} \sum_i |i\rangle\, |0\rangle$, then with probability at least $1 - 2\Delta$, the resulting state will be $\frac{1}{\sqrt{N}} \sum_i |i\rangle\, |\overline{d(v_i, C)}\rangle$. So, the contents of the second register will be an estimate of the distance of the point to its closest center in the center set $C$. This further means that the following state can be prepared with probability at least $1 - 2\Delta$:
\[
|\bar{\psi}_C\rangle = \frac{1}{\sqrt{N}} \sum_{i=1}^{N} |i\rangle\, |\overline{d(v_i, C)}\rangle.
\]
(The state actually prepared is a state close to $|\bar{\psi}_C\rangle$. However, instead of working with that state, subsequent discussions become much simpler if we assume that $|\bar{\psi}_C\rangle$ is prepared with probability at least $1 - 2\Delta$.)
This quantum state can be used to find the approximate clustering cost of the center set $C$, which we discuss in the following subsection. However, before we do that, let us summarise the main ideas of this subsection in the following lemma.
Lemma 4
There is a quantum algorithm that, with probability at least $1 - 2\Delta$, prepares the quantum state $|\bar{\psi}_C\rangle$ in time $\tilde{O}\!\left(\frac{k\, \eta\, T \log(1/\Delta)}{\varepsilon_1}\right)$.
3.2 Computing cost of clustering
Suppose we want to compute the $k$-means cost, $cost(V, C)$, of the clustering given by a center set $C$. We can prepare $M$ copies of the state $|\bar{\psi}_C\rangle$ and then estimate the cost of clustering by measuring the copies of this quantum state and summing the squares of the second registers. If $M$ is sufficiently large, we obtain a close estimate of $cost(V, C)$. To show this formally, we will use the following Hoeffding tail inequality.
Theorem 3.1 (Hoeffding bound)
Let $X_1, \ldots, X_M$ be independent, bounded random variables such that $X_t \in [a, b]$ for all $t$. Let $S = \sum_{t=1}^{M} X_t$. Then for any $\zeta > 0$, we have:
\[
\Pr\big[\, |S - \mathbf{E}[S]| \geq \zeta \,\big] \;\leq\; 2 \exp\!\left( - \frac{2 \zeta^2}{M (b-a)^2} \right).
\]
Let $X_t$ denote the square of the measured value of the second register in the $t$-th copy of $|\bar{\psi}_C\rangle$. These are random values in the range $[0, \eta^2]$, where $\eta$ is the aspect ratio. First, we note that the expectation of each of these random variables equals $\frac{\overline{cost}(V, C)}{N}$, where $\overline{cost}(V, C) = \sum_i \overline{d(v_i, C)}^2$ denotes the clustering cost with respect to the estimated distances. We define the variable $S = \sum_{t=1}^{M} X_t$ and apply the Hoeffding bound on these bounded random variables to get a concentration result that can then be used.
Lemma 5
Let $\varepsilon_2, \delta > 0$ and let $\hat{S} = \frac{N}{M} S$. If $M \geq \frac{c\, \eta^4 \ln(2/\delta)}{\varepsilon_2^2}$ for a sufficiently large constant $c$, then we have:
\[
\Pr\Big[\, \big|\hat{S} - \overline{cost}(V, C)\big| \geq \varepsilon_2 \cdot \overline{cost}(V, C) \,\Big] \;\leq\; \delta.
\]
Proof
We know that $\mathbf{E}[S] = \frac{M}{N}\, \overline{cost}(V, C)$. Moreover, since the minimum pairwise distance is $1$, at most $k$ points can be within distance $1/2$ of their nearest center, and hence $\overline{cost}(V, C) = \Omega(N)$. From the Hoeffding tail inequality, we get the following:
\[
\Pr\Big[\, \big|S - \mathbf{E}[S]\big| \geq \frac{\varepsilon_2 M}{N}\, \overline{cost}(V, C) \,\Big] \;\leq\; 2 \exp\!\left( - \frac{2\, \varepsilon_2^2\, M\, \overline{cost}(V, C)^2}{N^2 \eta^4} \right).
\]
This implies that:
\[
\Pr\Big[\, \big|\hat{S} - \overline{cost}(V, C)\big| \geq \varepsilon_2 \cdot \overline{cost}(V, C) \,\Big] \;\leq\; 2 \exp\!\left( - \Omega\!\left( \frac{\varepsilon_2^2 M}{\eta^4} \right) \right) \;\leq\; \delta,
\]
where the last inequality follows from the choice of $M$.
This completes the proof of the lemma.∎
So, conditioned on having $M$ copies of the state $|\bar{\psi}_C\rangle$, we get an estimate of the clustering cost within a relative error of $\varepsilon_2$ with probability at least $1 - \delta$. Removing the conditioning, we get the same with probability at least $1 - \delta - 2M\Delta$. We want to use the above cost estimation technique to calculate the cost for a list of center sets $(C_1, \ldots, C_L)$, and then pick the center set from the list with the least cost. We must apply the union bound appropriately to do this with high probability. We summarise these results in the following lemma. Let us first set some of the parameters with values that we will use to implement the approximation scheme of [BGJK20].
• $L$ denotes the size of the list of $k$-center sets $(C_1, \ldots, C_L)$ we will iterate over to find the one with the least cost. This quantity is bounded as $L \leq 2^{\tilde{O}(k/\varepsilon)}$.
• $M$ is the number of copies of the state $|\bar{\psi}_{C_j}\rangle$ made to estimate the cost of the center set $C_j$. This, as given in Lemma 5, is $M = O\!\left(\frac{\eta^4 \log(1/\delta)}{\varepsilon_2^2}\right)$, where $\delta$ is chosen as $\Theta(1/L)$ so that a union bound over the list can be applied.
Lemma 6
Let $\varepsilon_1, \varepsilon_2 > 0$, $\delta = \Theta(1/L)$, and $\Delta = \Theta\!\left(\frac{1}{ML}\right)$. Given a point set $V$ and a list of center sets $(C_1, \ldots, C_L)$ in the QRAM model, there is a quantum algorithm that runs in time $\tilde{O}\!\left(L M \cdot \frac{k\, \eta\, T \log(1/\Delta)}{\varepsilon_1}\right)$ and outputs an index $j^*$ such that $cost(V, C_{j^*}) \leq (1 + O(\varepsilon_1 + \varepsilon_2)) \cdot \min_{j \in [L]} cost(V, C_j)$ with probability at least $3/4$.
Proof
The algorithm estimates the cost of each $C_j$ using $M$ copies of $|\bar{\psi}_{C_j}\rangle$ and picks the index with the minimum estimated cost; this takes time $L \cdot M$ times the state preparation time of Lemma 4. Plugging in the values of $M$, $\delta$, and $\Delta$, we get the running time stated in the lemma.
Let us bound the error probability of this procedure. By Lemma 4, the probability that we do not have the correct $M$ copies each of $|\bar{\psi}_{C_1}\rangle, \ldots, |\bar{\psi}_{C_L}\rangle$ is bounded by $2LM\Delta$. Conditioned on having these copies, the probability that there exists an index $j$ for which the estimate is off by more than a $(1 \pm \varepsilon_2)$ relative factor is upper bounded by $L\delta$ by the union bound. So, the probability that the algorithm fails to find an index $j^*$ such that $cost(V, C_{j^*}) \leq (1 + O(\varepsilon_1 + \varepsilon_2)) \cdot \min_j cost(V, C_j)$ is upper bounded by $2LM\Delta + L\delta$. This probability is at most $1/4$ since $\delta = \Theta(1/L)$ and $\Delta = \Theta(1/(ML))$ with suitably small constants. This completes the proof of the lemma. ∎
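A classical Monte Carlo stand-in for the procedure of Lemma 6 is sketched below: a measurement of one copy of $|\bar{\psi}_{C_j}\rangle$ is modeled by drawing a uniformly random point and reading off a $(1 \pm \varepsilon_1)$-accurate distance to its closest center (the multiplicative noise is only a crude model of the quantum estimate), and the number of samples follows the Hoeffding-style bound of Lemma 5 up to unspecified constants. All names and the constant $8$ below are ours.

```python
import numpy as np

def estimate_cost(V, centers, eps1, eps2, delta, rng):
    """Estimate cost(V, centers) = sum_i min_j ||v_i - c_j||^2 from M random samples.

    Each sample models one measurement of |psi_bar>: a uniform index i together with
    a (1 +/- eps1)-accurate estimate of d(v_i, C). M ~ eta^4 * log(1/delta) / eps2^2."""
    N = len(V)
    d = np.sqrt(((V[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2).min(axis=1))
    eta = max(d.max(), 1.0)                 # upper bound on sampled distances (plays the role of eta)
    M = int(np.ceil(8 * eta**4 * np.log(2.0 / delta) / eps2**2))
    idx = rng.integers(0, N, size=M)        # uniform "measurement" outcomes
    noise = rng.uniform(1 - eps1, 1 + eps1, size=M)
    samples = (d[idx] * noise) ** 2         # squared estimated distances
    return N * samples.mean()

def pick_min_cost(V, center_sets, eps1=0.05, eps2=0.05, seed=0):
    """Estimate the cost of every center set in the list and return the cheapest index."""
    rng = np.random.default_rng(seed)
    delta = 1.0 / (10 * len(center_sets))   # union bound over the whole list
    costs = [estimate_cost(V, C, eps1, eps2, delta, rng) for C in center_sets]
    return int(np.argmin(costs))
```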
3.3 $D^2$-sampling
$D^2$-sampling from the point set $V$ with respect to a center set $C$ with $k$ centers samples the point $v_i$ with probability proportional to $d(v_i, C)^2$. Let us see how our state $|\bar{\psi}_C\rangle$ can be used to perform this sampling. If we can pull out the value of the second register into the amplitude, then a measurement will give us something close to $D^2$-sampling. This is possible since we have an estimate of the clustering cost from the previous subsection. We can use controlled rotations on an ancilla qubit to prepare the state:
\[
\frac{1}{\sqrt{N}} \sum_{i=1}^{N} |i\rangle\, |\overline{d(v_i, C)}\rangle \left( \frac{\overline{d(v_i, C)}}{2\eta}\, |1\rangle + \sqrt{1 - \frac{\overline{d(v_i, C)}^2}{4\eta^2}}\; |0\rangle \right),
\]
where the rotation is chosen so that the amplitude of $|1\rangle$ on the ancilla is proportional to the estimate $\overline{d(v_i, C)}$ (the factor $2\eta$ is an upper bound on the estimates). So, the probability of measuring the first register as $i$ and the ancilla as $|1\rangle$ is $\frac{\overline{d(v_i, C)}^2}{4\eta^2 N}$. Since we do rejection sampling (ignoring outcomes in which the ancilla is measured as $|0\rangle$), we end up sampling with a distribution where the probability of sampling $i$ is $\frac{\overline{d(v_i, C)}^2}{\sum_{i'} \overline{d(v_{i'}, C)}^2}$. This means that points get sampled with a probability close to the actual $D^2$-sampling probability. As we have mentioned earlier, this is sufficient for the approximation guarantees of [BGJK20] to hold. We summarise the observations of this section in the next lemma. We will need the following notion of the relative similarity of two distributions.
Definition 2
Let $\varepsilon > 0$. For two distributions $P$ and $Q$ over a finite set $\Omega$, we say that $P \in (1 \pm \varepsilon)\, Q$ if for every $\omega \in \Omega$, $P(\omega) \in (1 \pm \varepsilon)\, Q(\omega)$.
Lemma 7
Given a dataset $V$ and a center set $C$ in the QRAM model, there is a quantum algorithm that runs in time $\tilde{O}\!\left(m\, \eta^2 \cdot \frac{k\, \eta\, T \log(1/\Delta)}{\varepsilon_1}\right)$ and with high probability outputs $m$ independent samples from a distribution $\bar{P}$ such that $\bar{P} \in (1 \pm O(\varepsilon_1))\, P$, where $P$ denotes the $D^2$-sampling distribution.
Proof
The proof follows from Lemma 4 and the preceding discussion.∎
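The rejection-sampling step can be mimicked classically as follows: a "measurement" produces a uniformly random index $i$ together with an ancilla that comes out $1$ with probability $\overline{d(v_i, C)}^2 / (4\eta^2)$, and keeping only the outcomes with ancilla $1$ yields the distribution $\bar{P}$ of Lemma 7. The noise model for the distance estimate is the same crude stand-in used in the previous sketch; names and constants are ours.

```python
import numpy as np

def d2_rejection_sample(V, centers, num_samples, eps1=0.05, seed=0):
    """Draw num_samples indices approximately from the D^2 distribution w.r.t. `centers`.

    Each trial picks a uniform index i, estimates d(v_i, C) up to a (1 +/- eps1) factor,
    and accepts i with probability (estimated distance / (2*eta))^2, mirroring the
    amplitude encoding followed by a measurement of the ancilla qubit."""
    rng = np.random.default_rng(seed)
    N = len(V)
    d = np.sqrt(((V[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2).min(axis=1))
    eta = max(d.max(), 1.0)                 # scale so that acceptance probabilities are <= 1
    out = []
    while len(out) < num_samples:
        i = rng.integers(0, N)
        d_hat = d[i] * rng.uniform(1 - eps1, 1 + eps1)
        if rng.random() < (d_hat / (2 * eta)) ** 2:   # accept: ancilla measured as |1>
            out.append(i)
    return np.array(out)
```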
Lemma 7 says that for $m = \mathrm{poly}(k/\varepsilon)$, we obtain the required samples with high probability. We can now give the proof of Theorem 1.1 by assembling the quantum tools of this section.
Proof (Proof of Theorem 1.1)
The first requirement for executing the algorithm of [BGJK20] is a constant-factor pseudo-approximation algorithm using which we obtain the initial center set $C'$. By Lemma 2, we know that $O(k)$ points sampled using $D^2$-sampling give such a center set. From Lemma 7, this can be done quantumly in time $\tilde{O}\big(\mathrm{poly}(k, 1/\varepsilon, \eta)\, d\big)$, which also includes the time to set up the QRAM data structure for all iterations. The algorithm of [BGJK20] has an outer repeat loop for probability amplification. Within the outer loop, $\mathrm{poly}(k/\varepsilon)$ points are $D^2$-sampled with respect to the center set $C'$ (line 6). This can again be done quantumly using Lemma 7 in time $\tilde{O}\big(\mathrm{poly}(k, 1/\varepsilon, \eta)\, d\big)$. We can then classically process the sampled point set (see line 7 in Algorithm 2) and create the QRAM data structure for the list of $k$-center sets that correspond to all possible disjoint subsets (see line 8 in Algorithm 2). This takes time $\tilde{O}(L k d)$, where $L = 2^{\tilde{O}(k/\varepsilon)}$ is the size of the list. Theorem 2.2 shows that at least one center set in the list gives a $(1+\varepsilon)$-approximation. We use this fact in conjunction with the result of Lemma 6 to get that the underlying quantum algorithm runs in time $\tilde{O}\big(2^{\tilde{O}(k/\varepsilon)}\, \eta^2 d\big)$ and with high probability outputs a center set $C$ such that $cost(V, C) \leq (1+\varepsilon) \cdot cost(V, C_{OPT})$. (Strictly speaking, we needed a $(1+\varepsilon)$-approximation but the errors introduced above give a $(1+O(\varepsilon))$-approximation instead; this can be handled by running the algorithm with a suitably scaled-down error parameter.) ∎
4 Discussion and Open Problems
We give a quantum algorithm for the $k$-means problem with a provable approximation guarantee of $(1+\varepsilon)$ for arbitrary $\varepsilon > 0$, with a polylogarithmic running time dependence on the data size and an exponential dependence on $k/\varepsilon$. In the classical setting, there are FPT (fixed-parameter tractable) algorithms that have polynomial running time dependence on the input size but are allowed to have exponential dependence on the parameters (e.g., $k$ in the $k$-means problem, which is typically a small number). In this paper, we witnessed a case where we were able to take such a classical FPT algorithm into the quantum setting and lower the dependency on the data size $N$ from linear in the classical setting [BGJK20] to polylogarithmic (this paper), while keeping the dependence on the parameters ($k$ and $1/\varepsilon$) intact. The aspect ratio $\eta$ can be considered an additional parameter. It would be interesting to see if there are other problems where such quantization is possible. If so, it would make sense to discuss Quantum FPT (QFPT) algorithms with polylogarithmic dependence on the input size and possibly exponential dependence on the parameters. Another future direction is to check whether the sample and query access defined by [Tan19] is sufficient to obtain comparable results in the classical setting.
References
- [ABG13] Esma Aïmeur, Gilles Brassard, and Sébastien Gambs. Quantum speed-up for unsupervised learning. Machine Learning, 90:261–287, 2013.
- [ADK09] Ankit Aggarwal, Amit Deshpande, and Ravi Kannan. Adaptive sampling for k-means clustering. In Proceedings of the 12th International Workshop and 13th International Workshop on Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX ’09 / RANDOM ’09, pages 15–28, Berlin, Heidelberg, 2009. Springer-Verlag.
- [AV07] David Arthur and Sergei Vassilvitskii. K-means++: The advantages of careful seeding. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’07, pages 1027–1035, USA, 2007. Society for Industrial and Applied Mathematics.
- [BGJK20] Anup Bhattacharya, Dishant Goyal, Ragesh Jaiswal, and Amit Kumar. On Sampling Based Algorithms for k-Means. In Nitin Saxena and Sunil Simon, editors, 40th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2020), volume 182 of Leibniz International Proceedings in Informatics (LIPIcs), pages 13:1–13:17, Dagstuhl, Germany, 2020. Schloss Dagstuhl–Leibniz-Zentrum für Informatik.
- [CGL+20] Nai-Hui Chia, András Gilyén, Han-Hsuan Lin, Seth Lloyd, Ewin Tang, and Chunhao Wang. Quantum-Inspired Algorithms for Solving Low-Rank Linear Equation Systems with Logarithmic Dependence on the Dimension. In Yixin Cao, Siu-Wing Cheng, and Minming Li, editors, 31st International Symposium on Algorithms and Computation (ISAAC 2020), volume 181 of Leibniz International Proceedings in Informatics (LIPIcs), pages 47:1–47:17, Dagstuhl, Germany, 2020. Schloss Dagstuhl–Leibniz-Zentrum für Informatik.
- [DH99] Christoph Durr and Peter Hoyer. A quantum algorithm for finding the minimum, 1999.
- [FGG14] Edward Farhi, Jeffrey Goldstone, and Sam Gutmann. A quantum approximate optimization algorithm, 2014.
- [GJK20] Dishant Goyal, Ragesh Jaiswal, and Amit Kumar. Streaming ptas for constrained k-means, 2020.
- [KLLP19] Iordanis Kerenidis, Jonas Landman, Alessandro Luongo, and Anupam Prakash. q-means: A quantum algorithm for unsupervised machine learning. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
- [Llo82] S. Lloyd. Least squares quantization in pcm. IEEE Transactions on Information Theory, 28(2):129–137, 1982.
- [LMR13] Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost. Quantum algorithms for supervised and unsupervised machine learning, 2013.
- [OMA+17] J. S. Otterbach, R. Manenti, N. Alidoust, A. Bestwick, M. Block, B. Bloom, S. Caldwell, N. Didier, E. Schuyler Fried, S. Hong, P. Karalekas, C. B. Osborn, A. Papageorge, E. C. Peterson, G. Prawiroatmodjo, N. Rubin, Colm A. Ryan, D. Scarabelli, M. Scheer, E. A. Sete, P. Sivarajah, Robert S. Smith, A. Staley, N. Tezak, W. J. Zeng, A. Hudson, Blake R. Johnson, M. Reagor, M. P. da Silva, and C. Rigetti. Unsupervised machine learning on a hybrid quantum computer, 2017.
- [Tan19] Ewin Tang. A quantum-inspired classical algorithm for recommendation systems. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, STOC 2019, pages 217–228, New York, NY, USA, 2019. Association for Computing Machinery.
- [Tan21] Ewin Tang. Quantum principal component analysis only achieves an exponential speedup because of its state preparation assumptions. Physical Review Letters, 127(6), August 2021.
- [WKS15] Nathan Wiebe, Ashish Kapoor, and Krysta M. Svore. Quantum algorithms for nearest-neighbor methods for supervised and unsupervised learning. Quantum Info. Comput., 15(3–4):316–356, March 2015.