Harbin Institute of Technology, Harbin, Heilongjiang 150001, China
A Sub-linear Time Algorithm for Approximating k-Nearest-Neighbor with Full Quality Guarantee
Abstract
In this paper we propose an algorithm for the approximate k-Nearest-Neighbors problem. According to the existing researches, there are two kinds of approximation criterion. One is the distance criteria, and the other is the recall criteria. All former algorithms suffer the problem that there are no theoretical guarantees for the two approximation criterion. The algorithm proposed in this paper unifies the two kinds of approximation criterion, and has full theoretical guarantees. Furthermore, the query time of the algorithm is sub-linear. As far as we know, it is the first algorithm that achieves both sub-linear query time and full theoretical approximation guarantee.
Keywords:
Computation Geometry Approximate k-Nearest-Neighbors1 Introduction
The k-Nearest-Neighbor (kNN) problem is a well-known problem in theoretical computer science and applications. Let be a metric space, then for the input set of elements and a query element , the kNN problem is to find the elements with smallest distance to . Since the exact results are expensive to compute when the size of the input is large [18], and approximate results serve as good as the exact ones in many applications [29], the approximate kNN, kANN for short, draws more research efforts in recent years. There are two kinds of approximation criterion for the kANN problem, namely, the distance criterion and the recall criterion. The distance criterion requires that the ratio between the distance from the approximate results to the query and the distance from the exact results to the query is no more than a given threshold. The recall criterion requires that the size of the intersection of the approximate result set and the exact result set is no less than a given threshold. The formal description will be given in detail in Section 2. Next we brief the existing algorithms for the kANN problem to see how these two criteria are considered by former researchers.
The algorithms for the kANN problem can be categorized into four classes. The first class is the tree-based methods. The main idea of this method is to recursively partition the metric space into sub-spaces, and organize them into a tree structure. The K-D tree [6] is the representative idea in this category. It is efficient in low dimensional spaces, but the performance drops rapidly when the number of dimension grows up. Vantage point tree (VP-tree) [30] is another data structure with a better partition strategy and better performance. The FLANN [24] method is a recent work with improved performance in high dimensional spaces, but it is reported that this method would achieve in sub-optimal results [19]. To the best of our knowledge, the tree based methods can satisfy neither the distance nor the recall criterion theoretically.
The second class is the permutation based methods. The idea is to choose a set of pivot points, and represent each data element with a permutation of the pivots sorted by the distance to it. In such a representation, close objects will have similar permutations. Methods using the permutation idea include the MI-File [2] and PP-Index [13]. Unfortunately, the permutation based method can not satisfy either of the distance or the recall criterion theoretically, as far as we know.
The third class is the Locality Sensitive Hashing (LSH) based methods. LSH was first introduced by Indyk et. [18] for the kANN problem where . Soon after, Datar et. [11] proposed the first practical LSH function, and since then there came a burst in the theoretical and applicational researches on the LSH framework. For example, Andoni et. proved the lower bound of the time-space complexities of the LSH based algorithms [3], and devised the optimal LSH function which meets the lower bound [4]. On the other hand, Gao et. [15] proposed an algorithm that aimed to close the gap between the LSH theory and kANN search applications. See [28] for a survey. The basic LSH based method can satisfy only the distance criterion when [18]. Some existing algorithms made some progress. The C2LSH algorithm [14] solved the kANN problem with the distance criterion, but it has a constraint that the approximation factor must be a square of an integer. The SRS algorithm [27] is another one aimed at the distance criterion. However, it only has partial guarantee, that is, the results satisfy the distance criterion only when the algorithm terminates on a specific condition.
The forth class is graph based methods. The specific kind of graphs used in this method is the proximity graphs, where the edges in this kind of graph are defined by the geometric relationship of the points. See [22] for a survey. The graph based kANN algorithms usually conduct a navigating process on the proximity graphs. This process selects an vertex in the graph as the start point, and move to the destination point following some specific navigating strategy. For example, Paredes et. [26] used the kNN graph, Ocsa et. [25] used the Relative Neighborhood Graph (RNG), and Malkov et. [21] used the Navigable Small World Graph (NSW) [21]. None of these algorithms have theoretical guarantee on the two approximation criteria.
In summary, most of the existing algorithms do not have theoretical guarantee on either of the two approximation criteria. The recall criterion is only used as a measurement of the experimental results, and the distance criterion is only partially satisfied by only a few algorithms [14, 27]. In this paper, we propose a sub-linear time algorithm for kANN problem that unifies the two kinds of approximation criteria, which overcomes the disadvantages of the existing algorithms. The contributions of this paper are listed below.
-
1.
We propose an algorithm that unifies the distance criterion and the recall criterion for the approximate k-Nearest-Neighbor problem. The result returned by the algorithm can satisfy at least one criterion in any situation. This is a major progress compared to the existing algorithms.
-
2.
Assuming the input point set follows the spatial Poisson process, the algorithm takes time of preprocessing, space, and answers a query in time, where is a constant.
-
3.
The algorithm is the first algorithm for kANN that provides theoretical guarantee on both of the approximation criteria, and it is also the first algorithm that achieves sub-linear query time while providing theoretical guarantees. The former works [14, 27] with partial guarantee both need linear query time.
2 Preliminaries
2.1 Problem Definitions
The problem studied in this paper is the approximate k-Nearest-Neighbor problem, which is denoted as kANN for short. In this paper the problem is constrained to the Euclidean space. The input is a set of points where each is a d-dimensional vector . The distance between two points and is defined by , which is the well known Euclidean distance. Before giving the definition of the kANN problem, we first introduce the exact kNN problem.
Definition 1 (kNN)
Given the input point set and a query point , define to be the set of points in that are nearest to . Formally,
-
1.
, and ;
-
2.
for and .
Next we will give the definition of the approximate kNN. There are two kinds of definitions based on different approximation criteria.
Definition 2 ()
Given the input point set , a query point , and a approximation factor , find a point set which satisfies:
-
1.
, and ;
-
2.
let , then holds for .
Remark 1
The second requirement in Definition 2 is called the distance criterion.
Definition 3 ()
Given the input point set , a query point , and a approximation factor , find a point set which satisfies:
-
1.
, and ;
-
2.
.
Remark 2
If a kANN algorithm returned a set , the value is usually called the recall of the set . This is widely used in many works to evaluate the quality of the kANN algorithm. Thus we call the second statement in Definition 3 as the recall criterion.
Next we give the definition of the problem studied in this paper, which unifies the two different criteria.
Definition 4
Given the input point set , a query point , and approximation factors and , find a point set which satisfies:
-
1.
, and ;
-
2.
satisfies at least one of the distance criterion and the recall criterion. Formally, either holds for , or .
According to Definition 4, the output of the algorithm is required to satisfy one of the two criteria, but not both. It will be our future work to devise an algorithm to satisfy both of the criteria.
In the rest of this section we will introduce some concepts and algorithms that will be used in our proposed algorithm.
2.2 Minimum Enclosing Spheres
The D-dimensional spheres is the generalization of the circles in the 2-dimensional case. Let be the center and be the radius. A d-dimensional sphere, denoted as , is the set . Note that the boundary is included. If we say that falls inside sphere , or the sphere encloses point . A sphere is said to pass through point iff .
Given a set of points, the minimum enclosing sphere (MES) of , is the d-dimensional sphere enclosing all points in and has the smallest possible radius. It is known that the MES of a given finite point set in is unique, and can be calculated by a quadratic programming algorithm [31]. Next we introduce the approximate minimum enclosing spheres.
Definition 5 (AMES)
Given a set of points and an approximation factor , the approximate minimum enclosing sphere of , denoted as , is a d-dimensional sphere satisfies:
-
1.
for ;
-
2.
, where is the radius of the exact MES of .
The following algorithm can calculate the AMES in time, which is given in [5].
The following Lemma gives the complexity of Algorithm 1 .
2.3 Delaunay Triangulation
The Delaunay Triangulation (DT) is a fundamental data structure in computation geometry. The definition is given below.
Definition 6 (DT)
Given a set of points , the Delaunay Triangulation is a graph which satisfies:
-
1.
;
-
2.
for , iff there exists a d-dimensional sphere passing through and , and no other is inside it.
The Delaunay Triangulation is a natural dual of the Voronoi diagram. We omit the details about their relationship since it is not the focus of this paper.
There are extensive research works about the Delaunay triangulation. An important problem is to find the expected properties of built on random point sets. Here we focus on the point sets that follow the spatial Poisson process in d-dimensional Euclidean space. In this model, for any region , the probability that contains points follows the Poisson distribution. See [1] for more details. We cite one important property of the spatial Poisson process in the following lemma.
Lemma 2 ([1])
Let be a point set following the spatial Poisson process. Suppose there are two regions . For any point , if falls inside then the probability that falls inside is the ratio between the volume of and . Formally, we have
Further, we cite some important properties of the Delaunay triangulation built on point sets which follow the spatial Poisson process.
Lemma 3 ([7])
Let be a point set following the spatial Poisson process, and be the maximum degree of . Then the expected maximum degree of is .
Lemma 4 ([9])
Let be a point set following the spatial Poisson process. The expected time to construct is .
2.4 Walking in Delaunay Triangulation
Given a Delaunay Triangulation , the points and edges of form a set of simplices. Given a query point , there is a problem to find which simplex of that falls in. There is a class of algorithms to tackle this problem which is called Walking. The Walking algorithm start at some simplex, and walk to the destination by moving to adjacent simplices step by step. There are several kinds of walking strategy, including JumpWalk [23], Straight Walk [8] and Stochastic Walk [12], etc. Some of these strategies are only applicable to 2 or 3 dimensions, while Straight Walk can generalize to higher dimension. As Figure 1 shows, the Straight Walk strategy only considers the simplices that intersect the line segment from the start point to the destination. The following lemma gives the complexity of this walking strategy.
Lemma 5 ([10])
Given a Delaunay Triangulation of a point set , and two points and in as the start point and destination point, the walking from to using Straight Walk takes expected time.

2.5 -NN
The Approximate Near Neighbor problem is introduced in [18] for solving the problem with . Usually the Approximate Near Neighbor problem is denoted as -NN since there are two input parameters and . The definition is given below. The idea to use -NN to solve is via Turing reduction, that is, use -NN as an oracle or sub-procedure. The details can be found in [18, 16, 17, 20].
Definition 7
Given a point set , a query point , and two query parameters , the output of the -NN problem should satisfy:
-
1.
if , then output a point ;
-
2.
if for , then output ;
Since we aim to solve kANN problem in this paper, we need the following definition of -kNN.
Definition 8
Given a point set , a query point , and two query parameters , the output of the -kNN problem is a set , which satisfies:
-
1.
if , then output a set , where ;
-
2.
if , then output ;
It can be easily seen that the -kNN problem is a natural generalization of the -NN problem. Recently, there are several algorithms proposed to solve this problem. The following Lemma 6 gives the complexity of the -kNN algorithm, which will be proved in Appendix A.
Lemma 6
There is an algorithm that solves -kNN problem in of time, requiring time of preprocessing and of space. The parameter is a constant depending on the LSH function used in the algorithm, and always holds.
3 Algorithm
The proposed algorithm consists of two phases, i.e., the preprocessing phase and the query phase. The preprocessing phase is to built a data structure, which will be used to guide the search in the query phase. Next we will describe the algorithm of the two phases in detail.
3.1 Preprocessing Algorithm
Before describing the details of the preprocessing algorithm, we first introduce several concepts that will be used in the following discussion.
3.1.1 Axis Parallel Box.
An axis parallel box in is defined to be the Cartesian product of intervals, i.e., . And the following is the definition of Minimum Bounding Box.
Definition 9
Given a point set , the Minimum Bounding Box, denoted as , is the axis parallel box satisfying the following two requirements:
-
1.
encloses all points in , and
-
2.
there exists points and in such that for each interval defining , .
3.1.2 Median Split
Given a point set and its minimum bounding box , we introduce an operation on that splits into two subsets, which is called median split. This operation first finds the longest interval from the intervals defining . Then, the operation finds the median of the set , which is the median of the -th coordinates of the points in . This median is denoted as . Finally is split into two subsets, i.e., and . Here we assume that no two points share the same coordinate in any dimension. This assumption can be assured by adding some random small shift on the original coordinates.
3.1.3 Median Split Tree
By recursively conducting the median split operation, a point set can be organized into a tree structure, which is called the Median Split Tree (MST). The definition of MST is given below.
Definition 10
Given the input point set , a Median Split Tree (MST) based on , denoted as , is a tree structure satisfying the following requirements:
-
1.
the root of is , and the other nodes in are subsets of ;
-
2.
there are two child nodes for each interior node , which are generated by conducting a median split on ;
-
3.
each leaf node contains only one point.
3.1.4 Balanced Median Split Tree
The depth of a node in a tree , denoted as , is defined to be the number of edges in the path from to the root of . It can be noticed that the leaf nodes in the may have different depths. So we introduce the Balanced Median Split Tree (BMST), where all leaf nodes have the same depth.
Let , which is the nodes in the -th layer in tree , and be the number of points included in node . For a median split tree , it can be easily proved that either or for . Given , the is constructed as follows. Find the smallest such that , then for each node , all the nodes in the sub-tree rooted at are directly connected to .
3.1.5 Hierarchical Delaunay Graph
Given a point set , we introduce the most important concept for the preprocessing algorithm in this paper, which is the Hierarchical Delaunay Graph (HDG). This structure is constructed by adding edges between nodes in the same layer of . The additional edges are called the graph edges, in contrast with the tree edges in . The definition of the HDG is given below. Here denotes the center of .
Definition 11
Given a point set and the balanced median split tree , a Hierarchical Delaunay Graph is a layered graph based on , where each layer is a Delaunay triangulation. Formally, for each , there is an graph edge between iff
-
1.
, and
-
2.
there exists a d-dimensional sphere passing through , and there is no such that falls in , where is in the same layer with and . That is, the graph edges connecting nodes in the same layer forms the Delaunay Triangulation.
3.1.6 The preprocessing algorithm
Next we describe the preprocessing algorithm which aims to build the HDG. The algorithm can be divided into three steps.
Step 1, Split and build tree. The first step is to recursively split into smaller sets using the median split operation, and the median split tree is built. Finally the nodes near the leaf layer is adjusted to satisfy the definition of the balanced median split tree.
Step 2, Compute Spheres. In this step, the algorithm will go over the tree and compute the AMES for each node using Algorithm 1.
Step 3, Construct the . In this step, an algorithm given in [9] which satisfies Lemma 4 is invoked to compute the Delaunay triangulation for each layer.
The pseudo codes of the preprocessing algorithm is given in Algorithm 2.
3.2 Query Algorithm
The query algorithm takes the built by the preprocessing algorithm, and executes the following three steps.
The first is the descending step. The algorithm goes down the tree and stops at level such that . At each level, the child node with smallest distance to the query is chosen to be visited in next level.
The second is the navigating step. The algorithm marches towards the local nearest AMES center by moving on the edges of the .
The third step is the answering step. The algorithm finds the answer of by invoking the -kNN query. The answer can satisfy the distance criterion or the recall criterion according to the different return result of the -kNN query.
Algorithm 3 describes the above process in pseudo codes, where and are the center and radius of the of node , respectively.
4 Analysis
The analysis in this section will assume that the input point set follows the spatial Poisson process.
4.1 Correctness
Lemma 7
If Algorithm 3 terminates when , then the returned point set is a -kNN of in with at least probability.
Proof
Let , , be an integer. We define the following three events.
The lemma states the situation that the algorithm returns at , which implies that event happens. Event represents the situation that is a -kNN set. Then it is easy to see that the desired probability is in this lemma is . By the formula of conditional probability,
Thus in the rest of the proof we focus on calculate .
To calculate we need the probability that a single point falls in . We have the following calculations.
The last equation is based on Lemma 2.
On the other hand, the number of points in is at most . Here we use the trivial upper bound of since it is sufficient to the proof. Denote , we have the following equations.
By the property of the Poisson Distribution, the above equation achieves the maximum when . Thus we have .
Finally, combining the above analysis, we achieve the result that is a -kNN set with at least probability. ∎
Lemma 8
If Algorithm 3 returns at , then the returned point set is a -kNN of in .
Proof
Let and , which are the input parameter of the -th and -th invocation of the -kNN query. The lemma states the situation that the algorithm returns at the -th loop, which implies that the -kNN query returns empty set in the -th loop. According to the definition of the -kNN problem, the number of points is less than in the d-dimensional sphere . Denote , then it can be deduced that . On the other hand, the algorithm returns at the -th loop, which implies that the -kNN query returns a subset of . Thus we have for each in the result. Finally, , which exactly satisfies the definition of -kNN. ∎
Theorem 4.1
The result of Algorithm 3 satisfies the requirement of with at least probability.
4.2 Complexities
For ease of understanding, We first analyze the complexity of the single steps in Algorithm 2 and 3.
Lemma 9
The first step of Algorithm 2 takes time.
Proof
The following recursion formula can be easily deduced from the pseudo codes of Algorithm 2.
The term comes from the time of splitting and computing the median. This recursion formula can be solved by standard process, and the result is . ∎
Lemma 10
The second step of Algorithm 2 takes time.
Proof
According to lemma 1, the time to compute the AMES of a point set is proportional to the number of points in this set. Thus we have the following recursion formula:
The answer is also . ∎
Lemma 11
The third step of Algorithm 2 takes time.
Proof
According to the definition and the building process of the , there are nodes in the -th layer. And by Lemma 4, the time to build the Delaunay triangulation is . Thus, the time complexity of the third step is represented by the following equation.
This result of this additive equation is . ∎
Lemma 12
The navigating step (second step) in Algorithm 3 needs time.
Proof
From Lemma 5 we know that the navigating step passes at most simplices. A d-dimensional simplex has vertexes, and thus the number of points passed by the navigation process is . While a node is visited, the process goes over the neighbors of this node, and from Lemma 3 we know that the expected maximum degree of each node is . Thus the total time of the navigating is .
Now we are ready to present the final results about the complexities.
Theorem 4.2
The expected time complexity of Algorithm 2, which is the preprocessing time complexity, is .
Proof
Theorem 4.3
The space complexity of Algorithm 2 is .
Proof
The space needed to store the is the proportional to the number of the graph edges and tree edges. The number of graph edges connected to each node is according to Lemma 3, and the number of tree edges is constant. On the other hand, the number of nodes in is . Finally, we get the result that the space complexity is . ∎
Theorem 4.4
The time complexity of Algorithm 3, which is the query complexity, is , where is a constant.
Proof
The time complexity of Algorithm 3 consists of three parts. For the first part, which is descending part, it is easy to see that the complexity is . And the time complexity of the second part is already solved by Lemma 12, which is . The third part is to invoke the -kNN query for times. By Lemma 6 each invocation of -kNN needs where is a constant. If we take this algorithm as a Turing reduction, then the time to invoke -kNN query is constant. Thus the third step needs time. Adding the three parts and the desired result is achieved. ∎
5 Conclusion
In this paper we proposed an algorithm for the approximate k-Nearest-Neighbors problem. We observed that there are two kinds of approximation criterion in the history of this research area, which is called the distance criteria and the recall criteria in this paper. But we also observed that all existing works do not have theoretical guarantees on this criteria. We raised a new definition for the approximate k-Nearest-Neighbor problem which unifies the distance criteria and the recall criteria, and proposed an algorithm that solves the new problem. The result of the algorithm can satisfy at least one of the two criterion. In our future work, we will try to devise new algorithms that can satisfy both of the criterion.
References
- [1] Poisson Point Process, https://wikimili.com/en/Poisson–_˝point–_˝process
- [2] Amato, G., Gennaro, C., Savino, P.: MI-File: using inverted files for scalable approximate similarity search. Multimedia Tools and Applications 71(3), 1333–1362 (aug 2014). https://doi.org/10.1007/s11042-012-1271-1, http://link.springer.com/10.1007/s11042-012-1271-1
- [3] Andoni, A., Laarhoven, T., Razenshteyn, I., Waingarten, E.: Optimal Hashing-based Time-Space Trade-offs for Approximate Near Neighbors. In: Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms. pp. 47–66. Society for Industrial and Applied Mathematics, Philadelphia, PA (jan 2017). https://doi.org/10.1137/1.9781611974782.4
- [4] Andoni, A., Razenshteyn, I.: Optimal Data-Dependent Hashing for Approximate Near Neighbors. In: Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing - STOC ’15. pp. 793–801. ACM Press, New York, New York, USA (2015). https://doi.org/10.1145/2746539.2746553
- [5] Bâdoiu, M., Bâdoiu, M., Clarkson, K.L., Clarkson, K.L.: Smaller core-sets for balls. In: Proceedings of the Fourteenth Annual {ACM-SIAM} Symposium on Discrete Algorithms. pp. 801–802 (2003)
- [6] Bentley, J.L.: Multidimensional binary search trees used for associative searching. Communications of the ACM 18(9), 509–517 (sep 1975). https://doi.org/10.1145/361002.361007, http://portal.acm.org/citation.cfm?doid=361002.361007
- [7] Bern, M., Eppstain, D., YAO, F.: THE EXPECTED EXTREMES IN A DELAUNAY TRIANGULATION. International Journal of Computational Geometry & Applications 01(01), 79–91 (mar 1991). https://doi.org/10.1142/S0218195991000074, http://link.springer.com/10.1007/3-540-54233-7–_˝173https://www.worldscientific.com/doi/abs/10.1142/S0218195991000074
- [8] Bose, P., Devroye, L.: On the stabbing number of a random Delaunay triangulation. Computational Geometry: Theory and Applications 36(2), 89–105 (2007). https://doi.org/10.1016/j.comgeo.2006.05.005
- [9] Buchin, K., Mulzer, W.: Delaunay triangulations in O(sort(n)) time and more. Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS V, 139–148 (2009). https://doi.org/10.1109/FOCS.2009.53
- [10] de Castro, P.M.M., Devillers, O.: Simple and Efficient Distribution-Sensitive Point Location in Triangulations. In: 2011 Proceedings of the Thirteenth Workshop on Algorithm Engineering and Experiments (ALENEX), pp. 127–138. Society for Industrial and Applied Mathematics, Philadelphia, PA (jan 2011). https://doi.org/10.1137/1.9781611972917.13, http://epubs.siam.org/doi/abs/10.1137/1.9781611972917.13
- [11] Datar, M., Immorlica, N., Indyk, P., Mirrokni, V.S.: Locality-sensitive hashing scheme based on p-stable distributions. In: Proceedings of the twentieth annual symposium on Computational geometry - SCG ’04. pp. 253–262. ACM Press, New York, New York, USA (2004). https://doi.org/10.1145/997817.997857, http://portal.acm.org/citation.cfm?doid=997817.997857
- [12] Devillers, O., Pion, S., Teillaud, M., Devillers, O., Pion, S., Teillaud, M.: Walking in a triangulation To cite this version : HAL Id : inria-00072509 pp. 181–199 (2006)
- [13] Esuli, A.: Use of permutation prefixes for efficient and scalable approximate similarity search. Information Processing & Management 48(5), 889–902 (sep 2012). https://doi.org/10.1016/j.ipm.2010.11.011, http://dx.doi.org/10.1016/j.ipm.2010.11.011https://linkinghub.elsevier.com/retrieve/pii/S0306457310001019
- [14] Gan, J., Feng, J., Fang, Q., Ng, W.: Locality-sensitive hashing scheme based on dynamic collision counting. In: Proceedings of the 2012 international conference on Management of Data - SIGMOD ’12. pp. 541–552. ACM Press, New York, New York, USA (2012). https://doi.org/10.1145/2213836.2213898, http://dl.acm.org/citation.cfm?doid=2213836.2213898
- [15] Gao, J., Jagadish, H., Ooi, B.C., Wang, S.: Selective Hashing: Closing the Gap between RadiusSearch and k-NN Search. In: Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD ’15. pp. 349–358. ACM Press, New York, New York, USA (2015). https://doi.org/10.1145/2783258.2783284, http://dl.acm.org/citation.cfm?doid=2783258.2783284
- [16] Har-Peled, S.: A replacement for Voronoi diagrams of near linear size. In: Proceedings 42nd IEEE Symposium on Foundations of Computer Science. pp. 94–103. IEEE (2001). https://doi.org/10.1109/SFCS.2001.959884, http://ieeexplore.ieee.org/document/959884/https://graphics.stanford.edu/courses/cs468-02-winter/Papers/sariel–_˝1.pdfhttps://ieeexplore.ieee.org/document/959884/
- [17] Har-Peled, S., Indyk, P., Motwani, R.: Approximate Nearest Neighbor: Towards Removing the Curse of Dimensionality. Theory of Computing 8(1), 321–350 (2012). https://doi.org/10.4086/toc.2012.v008a014
- [18] Indyk, P., Motwani, R.: Approximate nearest neighbors: Towards Removing the Curse of Dimensionality. In: Proceedings of the thirtieth annual ACM symposium on Theory of computing - STOC ’98. pp. 604–613. ACM Press, New York, New York, USA (1998). https://doi.org/10.1145/276698.276876, http://portal.acm.org/citation.cfm?doid=276698.276876
- [19] Lin, P.C., Zhao, W.L.: Graph based Nearest Neighbor Search: Promises and Failures X(X), 1–8 (apr 2019), http://arxiv.org/abs/1904.02077
- [20] Ma, H.Z., Li, J.: An Algorithm for Reducing Approximate Nearest Neighbor to Approximate Near Neighbor with $$O(log {n})$$ Query Time. In: Kim, D., Uma, R.N., Zelikovsky, A. (eds.) Combinatorial Optimization and Applications - 12th International Conference, {COCOA} 2018, Atlanta, GA, USA, December 15-17, 2018, Proceedings. Lecture Notes in Computer Science, vol. 11346, pp. 465–479. Springer, Atlanta, GA, USA, (2018).
- [21] Malkov, Y., Ponomarenko, A., Logvinov, A., Krylov, V.: Approximate nearest neighbor algorithm based on navigable small world graphs. Information Systems 45, 61–68 (2014). https://doi.org/10.1016/j.is.2013.10.006, http://dx.doi.org/10.1016/j.is.2013.10.006
- [22] Mitchell, J.S., Mulzer, W.: Proximity algorithms. Handbook of Discrete and Computational Geometry, Third Edition pp. 849–874 (2017). https://doi.org/10.1201/9781315119601
- [23] Mücke, E.P., Saias, I., Zhu, B.: Fast randomized point location without preprocessing in two- and three-dimensional Delaunay triangulations. Computational Geometry 12(1-2), 63–83 (feb 1999). https://doi.org/10.1016/S0925-7721(98)00035-2, https://linkinghub.elsevier.com/retrieve/pii/S0925772198000352
- [24] Muja, M., Lowe, D.G.: Scalable Nearest Neighbor Algorithms for High Dimensional Data. IEEE Transactions on Pattern Analysis and Machine Intelligence 36(11), 2227–2240 (nov 2014). https://doi.org/10.1109/TPAMI.2014.2321376, http://elk.library.ubc.ca/handle/2429/44402http://ieeexplore.ieee.org/document/6809191/
- [25] Ocsa, A., Bedregal, C., Cuadros-vargas, E., Society, P.C.: A new approach for similarity queries using proximity graphs. Simpósio Brasileiro de Banco de Dados pp. 131–142 (2007), http://socios.spc.org.pe/ecuadros/papers/Ocsa2007RNG-SBBD.pdf
- [26] Paredes, R., Chávez, E.: Using the k-Nearest Neighbor Graph for Proximity Searching in Metric Spaces. In: International Symposium on String Processing and Information Retrieval. pp. 127–138 (2005).
- [27] Sun, Y., Wang, W., Qin, J., Zhang, Y., Lin, X.: SRS. Proceedings of the VLDB Endowment 8(1), 1–12 (sep 2014). https://doi.org/10.14778/2735461.2735462, https://doi.org/10.14778/2735461.2735462http://dl.acm.org/doi/10.14778/2735461.2735462
- [28] Wang, J., Shen, H.T., Song, J., Ji, J.: Hashing for Similarity Search: A Survey (2014), http://arxiv.org/abs/1408.2927
- [29] Weber, R., Schek, H.J., Blott, S.: A Quantitative Analysis and Performance Study for Similarity-Search Methods in High-Dimensional Spaces. In: Proceedings of 24rd International Conference on Very Large Data Bases. pp. 194–205 (1998)
- [30] Yianilos, P.N.: Data Structures and Algorithms for Nearest Neighbor Search in General Metric Spaces. In: Proceedings of the Fourth Annual ACM-SIAM Symposium on Discrete Algorithms. pp. 311–321. SODA ’93, Society for Industrial and Applied Mathematics, USA (1993). https://doi.org/10.5555/313559.313789
- [31] Yildirim, E.A.: Two Algorithms for the Minimum Enclosing Ball Problem. SIAM Journal on Optimization 19(3), 1368–1391 (jan 2008). https://doi.org/10.1137/070690419, http://epubs.siam.org/doi/10.1137/070690419
Appendix A Proof of Lemma 6
Proof
The algorithm for -kNN is adapted from the standard LSH algorithm for -NN. See [11] for more details. Briefly speaking, let be a family of LSH functions, be a set of LSH functions uniformly drawn from , , and be a composition of LSH functions. The algorithm stores each element in the input point set in the hash bucket , , and for the query point , the algorithm scans the buckets , , and collects the points in . If the algorithm collects points in , then the algorithm returns the points. If the algorithm have scanned points before collects enough points, it returns .
Now we prove the algorithm succeeds with constant probability. The algorithm succeeds if the following two conditions are true. Here we call the points out side as outer points.
-
A.
If there exists , then for each there exists such that , and
-
B.
the algorithm encounters at most outer points.
First we introduce the following two probabilities.
Let . Apparently . Then let . For all points outside , the expected number of outer points satisfying for some is at most . Thus for all , the expected number of outer points satisfying is at most .
By Markov’s inequality,
This is the possibility of scanning at least outer points, which is the possibility that event fails. Thus .
On the other hand, let . By setting we have the following derivations
Setting , the possibility that there exists at least one such that is at least
For all the points, the possibility that coincides with each of the points, which is , is that
The last inequality comes from the montonicity of the function . Finally, the probability of condition and both succeed can be easily computed, which is constant probability. The details are omitted.
After all, we have proved that the algorithm can return the result of -kNN with constant probability by setting and . Finally substituting the and with the proper values, we get the complexities of the algorithm, which is stated in Lemma 6.