On the adversarial robustness of Locality-Sensitive Hashing in Hamming space
Abstract
Locality-sensitive hashing [22] is a classical data structure for approximate nearest neighbor search. After a close-to-linear-time preprocessing of the input dataset, it allows one to find an approximate nearest neighbor of any fixed query in time sublinear in the dataset size. The resulting data structure is randomized and succeeds with high probability for every fixed query. In many modern applications of nearest neighbor search, however, the queries are chosen adaptively. In this paper, we study the robustness of locality-sensitive hashing to adaptive queries in Hamming space. We present a simple adversary that can, under mild assumptions on the initial point set, provably find a query on which the approximate near neighbor search data structure fails. Crucially, our adaptive algorithm finds the hard query exponentially faster than random sampling.
1 Introduction
Nearest neighbor search is one of the most fundamental data structure problems in machine learning. In particular, the development of efficient data structures for nearest neighbor search in high dimensions is a challenge, and finding the exact nearest neighbor often requires a linear scan over the data [35]. Therefore, approximate nearest neighbor search has also been studied extensively. One important technique for designing approximate nearest neighbor data structures is locality-sensitive hashing (LSH), introduced in the seminal paper by Indyk and Motwani [22]. LSH is widely used in many applications, including music retrieval [33], video anomaly detection [39], and clustering [26] (see also the survey [23] for further examples).
LSH is a randomized data structure and as such its performance guarantees depend on its internal randomness. The standard analysis assumes an oblivious adversary, i.e. for the guarantees to apply, queries may not depend on answers to earlier queries. What are the consequences if we would like to retain the LSH performance guarantees when LSH is used as a data structure inside an algorithm? Essentially, for the guarantees to hold, the algorithm must specify a set of query points at a single point in time and receive all answers to these query points at once. It can then do any computation with the answers, but it may not ask further queries. Such a restricted functionality significantly limits the strength of the guarantees provided by LSH, as more often than not future queries depend on the results of earlier queries. Therefore, it would be valuable to better understand the LSH guarantees in the adaptive setting. Even though some work on LSH, such as constructions without false negatives [31, 1] (see also Related Work for further discussion), is motivated by this question, there is still a lack of knowledge about how resilient standard LSH data structures are if we allow adaptive adversaries.
In this paper, we address this question by developing a randomized query scheme (adversarial algorithm) for LSH under the Hamming distance, with the goal of efficiently generating a query that makes LSH return an incorrect answer. We show that for a wide range of parameters, if a point set has a somewhat isolated point, then we can make LSH produce a false negative (that is, return no point while there is a near point) much faster than this would happen with non-adaptive queries.
Let us discuss this further with a simple example. We may use LSH for an approximate version of a nearest neighbor classifier (see [2] for a related study): we store the training set in an LSH data structure and, on a query, we return the label of the (approximate) nearest point in the training set that is returned by LSH. In this setting, LSH will satisfy its standard guarantees when the choice of queries does not depend on previous answers. This happens, for example, when queries are independently sampled from a fixed query distribution. However, one could potentially engineer a careful sequence of queries in order to construct a query on which the algorithm returns an incorrect answer. This is particularly problematic if the algorithm is used for a sensitive task and relies on strong guarantees.
Our results. Our main result is an algorithm for generating an adversarial query:
Theorem 1.1 (Informal version of Theorem A.6).
Given an ‘isolated point’ in a dataset , one can find a point at a distance at most from such that querying LSH with returns no point. The number of queries to the LSH data structure needed to generate the point is bounded by , where is the failure probability of LSH.
Note that to find such a point through random sampling, one would have to make queries in expectation (Theorem 3.3), which is exponentially slower in than the proposed adversary. We then evaluate our adversarial algorithm experimentally in various settings to understand how the efficiency and effectiveness of our approach depend on the various parameters.
2 Related Work
Monte-Carlo Locality-Sensitive Hashing. Most of the work dedicated to locality-sensitive hashing was done in the Monte-Carlo model, which we consider to be the standard model in this paper. That is, the goal is to construct a data structure with deterministically bounded query time that correctly answers the queries with high probability. Constructions of locality-sensitive hash families are known for several different metrics, including Hamming distance [22, 7], [19], the Euclidean metric [3], Jaccard similarity [13, 18], angular distance [4, 25, 29] and more. Asymmetric LSH solutions, which allow one to reduce query time at the expense of an increase in space, or to reduce space at the expense of increasing query time, have received significant attention in the literature and are quite well understood by now [32, 24, 15, 6]. Distance-sensitive hash families, where collision probabilities are not necessarily monotone in the distance between points being hashed, have also been constructed [9]. Data-dependent results, which adapt the locality-sensitive hash family to the dataset to achieve better exponents, are also known [5, 6].
Las-Vegas Locality-Sensitive Hashing. In an attempt to avoid the problem of false negatives, another line of work seeks to construct LSH instances that answer queries deterministically correctly, but whose running time is a random variable, with guarantees usually on its expectation. It was first shown by [31] that such a scheme exists; however, for general parameter settings this scheme requires more space than standard LSH. A later work by [1] achieves the same space-time trade-off as in [22]. [36] further improves on it for , and offers a data-dependent construction following [3]. Although Las Vegas constructions avoid the problem of false negatives, their guarantees still hold only against an oblivious adversary. It is entirely possible that an adaptive adversary can find a sequence of queries with an average response time much larger than what is guaranteed against an oblivious adversary. In any case, investigation of such adversaries lies outside the scope of this paper.
Robustness to adaptive adversaries. One of the main motivations of this paper is to study the behavior of a classic data structure against an adaptive adversary. One of the most studied classes of algorithms relevant to our paper is linear sketches. Notable early works here include [20], which proposed an efficient attack on the linear sketch for norm estimation, and [30], which proposed deterministic sketch constructions for a range of problems that work for all inputs and provided lower bounds on their size. In [27] an adversarially robust approach to solving approximate nearest neighbor in Hamming space is proposed, which does not use LSH. It is improved upon by [14], which gives a robust construction for vector -norm estimation, for . On the other side of the spectrum, an adaptive attack on CountSketch was proposed by [17, 16], with the latter also proposing a robustification of this sketch.
Another fruitful line of work covers the use of differential privacy to make existing algorithms more robust against adaptive adversaries [10, 37, 21, 11, 8]. The main idea is to use differentially private mechanisms to hide the random bits of the algorithm from the adversary. This approach turned out to be applicable to a wide range of estimation problems.
3 Preliminaries
For , will denote binary logarithm of . For , will be the set of integers from to . For a point and , is the value of the point along -th dimension. For two points , is their Hamming distance.
Basics of Locality Sensitive Hashing. Locality sensitive hashing is a technique to derive randomized data structures for near neighbor search. This can be formalized as follows. A (randomized) data structure for the -approximate near neighbor problem in a metric space preprocesses a point set in such a way that for a query point with probability at least it (a) returns with , if there is with , (b) returns , if there is no point with , and (c) returns either or with , otherwise. Here is the failure probability of the data structure. In this paper, we will consider the Hamming space, i.e. and is the Hamming distance and in the following discussion we focus on this setting.
The key component of LSH schemes is a locality-sensitive hash family. The idea behind them is that a function from such a family is more likely to map close points to the same bucket than far ones.
Definition 3.1 ([22]).
A hash family is -sensitive if for every , the following holds. Let be chosen uniformly at random from . If , then . Otherwise, if , .
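For concreteness, a common way to write this definition, using radii $r$ and $cr$ and collision probabilities $p_1 > p_2$ (this notation is our assumption and may differ from the paper's own symbols), is the following.

```latex
% (r, cr, p_1, p_2)-sensitivity in the sense of [22]: a family H of hash
% functions h : {0,1}^d -> U, with c > 1 and p_1 > p_2, is sensitive if for
% all points x, y and h drawn uniformly at random from H,
\begin{align*}
  \mathrm{dist}(x, y) \le r  \;&\Longrightarrow\; \Pr_{h \sim H}\bigl[h(x) = h(y)\bigr] \ge p_1, \\
  \mathrm{dist}(x, y) \ge cr \;&\Longrightarrow\; \Pr_{h \sim H}\bigl[h(x) = h(y)\bigr] \le p_2 .
\end{align*}
```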
For a -dimensional Hamming space , the locality-sensitive hash family has a very simple form [22]. Let be the elementary hash function family. That is, it consists of hash functions , which project points from to their -th bit. It is known that
Fact 3.2 ([22]).
For any , such that , the elementary hash function family is -sensitive.
Indyk and Motwani give the following guarantee (assuming that the points come from a -dimensional space):
Theorem 3.3 ([22]).
Suppose there exists a -sensitive family of hash functions. Then there is a data structure for the -approximate near neighbor problem (with ) that uses space and requires evaluations of hash functions, where .
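For reference, the standard statement of this reduction, written in the usual notation (assumed here rather than taken from the theorem above), reads as follows.

```latex
% Standard form of the Indyk--Motwani reduction (notation assumed):
% given an (r, cr, p_1, p_2)-sensitive family, the (c, r)-approximate near
% neighbor problem on n points in {0,1}^d can be solved with space
% O(dn + n^{1+\rho}) and a query cost dominated by O(n^{\rho}) hash
% evaluations (up to lower-order factors), where
\[
  \rho \;=\; \frac{\ln(1/p_1)}{\ln(1/p_2)} .
\]
```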
In the following, we briefly describe the construction of an LSH data structure, which can be used to prove the above theorem. An LSH data structure consists of a multiset of hash functions from the product family. In other words, each element of is constructed by sampling a sequence of elementary hash functions uniformly at random from and then setting , to be their concatenation. That is, , . The intuition is that concatenating hash functions from allows us to exponentially increase the gap between the probabilities and in such a way that if we sample hash functions, then two points with distance at most are likely to have at least one with and we can recover the point. At the same time, the number of points with and will be small, so that the recovery can be done efficiently.
The proof of [22] essentially follows by choosing the parameters and appropriately, which is done as follows. We set and such that is -sensitive. We define and . For an input set of points we set . For this setting of parameters, we get the guarantees provided in the above theorem. In order to get a smaller failure probability we set , which results in a failure probability of .
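To make the construction concrete, here is a minimal bit-sampling LSH sketch in Python. The class and parameter names (k bits per concatenated hash function, L hash functions, near radius r) are ours, and the radius check in the query is simplified, so this is an illustration of the scheme described above rather than the paper's exact implementation.

```python
import random
from collections import defaultdict

class HammingLSH:
    """Minimal bit-sampling LSH over {0,1}^d, following the construction above.

    k is the number of elementary (bit-projection) hashes concatenated into
    each hash function g, and L is the number of such functions; how k and L
    are derived from n and the collision probabilities is not reproduced here.
    """

    def __init__(self, points, d, r, k, L, seed=0):
        rng = random.Random(seed)
        self.points, self.d, self.r = points, d, r
        # Each g is a tuple of k coordinates sampled uniformly with replacement;
        # g(p) is the projection of p onto those coordinates.
        self.gs = [tuple(rng.randrange(d) for _ in range(k)) for _ in range(L)]
        # One bucket table per hash function, so a query never scans all points.
        self.tables = [defaultdict(list) for _ in range(L)]
        for idx, p in enumerate(points):
            for g, table in zip(self.gs, self.tables):
                table[tuple(p[i] for i in g)].append(idx)

    def query(self, q):
        """Return some stored point close to q (simplified radius check: r),
        or None if no candidate found in q's buckets is close enough."""
        for g, table in zip(self.gs, self.tables):
            for idx in table.get(tuple(q[i] for i in g), []):
                p = self.points[idx]
                if sum(a != b for a, b in zip(p, q)) <= self.r:
                    return p
        return None
```

Storing the buckets in hash tables is what makes the query time depend only on the number of colliding points rather than on the full dataset size.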
One particular property of the above LSH construction is that there is a chance of incorrectly returning that there is no near point (i.e. returning ). Such an incorrect answer will be called a false negative.
Definition 3.4 (False negative).
Consider an LSH data structure initialized for the point set . We say that a query is a false negative for this LSH instance if but there exists a point such that .
Defining the adversary. The goal of this paper is to study how one can force LSH to return a false negative by making adaptive queries. We take the role of an adversary, and our goal is to design an adversarial algorithm that has access to an LSH instance and quickly forces it to return a false negative. In particular, we provide guarantees on the number of queries required until LSH returns a false negative.
In the following, we describe our formal setting. Our LSH instance is constructed according to the scheme described in Section 3. Recall that this LSH scheme samples a multiset of hash functions uniformly at random from the set of allowed hash functions of the form with for elementary hash functions . The LSH scheme receives as input a point set of size and parameters and , and calculates and as described earlier. Our adversarial algorithm may use the input point set , its size and dimension , and the parameters and of the LSH scheme, and it can interact with the LSH data structure using Procedure 1 described below. That is, the LSH data structure is a black box and the only interaction is via Procedure 1. In particular, the randomness used to construct the LSH instance, and hence the hash functions, is unknown to the adversarial algorithm.
Note that in an LSH implementation one would store all buckets of in a hash table. This way, line 3 can be implemented without going through all points. We also remark that since we do not have control over the parameters and the point set, our adversarial algorithm cannot be expected to work in all settings (for example, if the number of hash functions is very large, then no false negatives exist).
4 Adversarial Algorithm
Define the set to be the set of hash functions in that collide on and . Our algorithm starts by finding a point in the dataset such that any other point is at distance at least from . We assume that such a point exists, and we experimentally explore the case when it does not in Section 6. We also show that this assumption is reasonable for a randomly chosen point set in Lemma A.7. We refer to as the origin point.
Algorithm outline. The idea of the algorithm is to query a random point at distance such that the expected number of hash functions hashing to the same bucket as is at most . We show that this is true when , which is the value we use throughout this section. This implies that with constant probability . The algorithm then performs iterations, and in each of them it moves the point away from by flipping one bit in it, in such a way that the size of the set decreases by at least .
To find this bit, the algorithm flips bits in at randomly chosen positions among those where is equal to and obtains a point . is located at distance from , so with high probability . Now consider a path between and , i.e. a sequence of points starting with and ending with , where each two consecutive points differ in one bit and where each point is further away from than its predecessor. Because does not hash with but does, there are two consecutive points , on this path with the same property. Consider the bit where they differ. Because and , this bit must lie in the support of all functions in . (For an elementary hash function , its support is defined as the set . For a hash function , its support is the union of the supports of .) Therefore, by flipping it in the algorithm removes these functions from . It is easy to see that at the end we get a point at distance at most from such that , so and is a false negative.
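The outline above can be summarized in the following sketch (a simplification of Algorithm 2: function and variable names are ours, and the starting point and the distance budget are handled more carefully in the paper). Here `query_lsh(q)` stands for the black-box QueryLSH procedure, which under the isolation assumption returns the origin point iff some hash function still collides `q` with it.

```python
def adversarial_walk(query_lsh, origin, rng):
    """Sketch of the adversarial walk outlined above (simplified Algorithm 2)."""
    d = len(origin)
    q = list(origin)                       # current candidate query
    while query_lsh(q) is not None:        # some hash function still collides
        # Inner loop: walk away from the origin one random bit at a time
        # (only flipping coordinates where we still agree with the origin)
        # until LSH stops returning the origin.
        path = [list(q)]
        cur = list(q)
        while query_lsh(cur) is not None:
            i = rng.choice([j for j in range(d) if cur[j] == origin[j]])
            cur = list(cur)
            cur[i] = 1 - cur[i]
            path.append(cur)
        # The last flipped bit separates a colliding point from a
        # non-colliding one, so flipping it in q itself removes at least
        # one of the hash functions still colliding q with the origin.
        q_prev, q_next = path[-2], path[-1]
        i = next(j for j in range(d) if q_prev[j] != q_next[j])
        q[i] = 1 - q[i]
    return q                               # a false-negative query
```

Each outer iteration strictly decreases the number of colliding hash functions, which is why only a few iterations are needed when the starting point already collides under few functions.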
In the following we prove Lemmas 4.3 and 4.4, which are the key results underpinning the correctness of our algorithm. Lemma 4.3 essentially shows that a point sampled at distance collides with in at most hashings with constant probability. Lemma 4.4 says that if we move this point randomly and monotonically away from to distance , it will not hash with at all.
A key property of these lemmas is that, unlike in the standard analysis of LSH, we fix the hash functions and sample points at random. This gives us a different perspective on how LSH behaves when queried with random points, which reflects the fact that our goal is to construct an adversary for LSH.
First, we state some intermediate results bounding the size of and the support of hash functions. Their proof is given in Appendix A.
Lemma 4.1 (Bounds on ).
If and , one has .
Lemma 4.2 (Support lower bound).
Let and . Then with probability at least the size of the support of each of the functions is at least .
With these bounds on and on the support of the hash functions in place, we now show the first lemma, which bounds the number of hash functions hashing a point at distance from into the same bucket as . We outline the proof below, and present the full proof in Section A.1.
Lemma 4.3.
Let , , , and . Let the hash functions be such that the size of the support of each is at least . Let . Let be a point at distance from chosen uniformly at random. The expected size of is at most , where the expectation is taken with respect to the randomness of .
Proof outline.
We fix an arbitrary set of hash functions that satisfy the condition of the lemma. By linearity of expectation, where is the indicator random variable for the event and the randomness is over the choice of as in the lemma. In the following we consider an arbitrary and derive an upper bound for the collision probability of and . Indeed, let . We first observe (see Section A.1 for more details) that . The basic argument is that sampling a point at distance from is no worse than just picking bits to flip in independently at random, and then bounding the probability that none of the chosen bits lies in the support of .
Next we use this upper bound to argue that . Using and we obtain . It remains to show that the r.h.s. is at most . Because and are considerably smaller than and respectively, this essentially boils down to showing that . This is simple to bound using the Taylor expansions of both terms (see Section A.1 for detailed calculations). ∎
We now prove the second lemma, which bounds the probability that a point at distance will be hashed with the origin.
Lemma 4.4.
Let , , , . Let be such that no hash function has support smaller than . Let be a point at distance at most from such that is at most . Let be a point chosen uniformly at random among all points at distance from such that for all , if , then . Then with probability at least , , i.e. no function hashes and to the same bucket.
Proof.
Fix . First notice that , as defined in the lemma statement, can be generated as follows. One first selects a uniformly random subset of coordinates in . Then one sets (the negation of ) for and for . We denote this distribution by . Note that because (since by assumption), the support of is a subset of . We will show that . In order to establish the above, we view the generation of as a two-step process. First, we uniformly sample bits from with replacement into a set (removing duplicates) and flip them in – denote the result by . Next, we choose more bits from and flip them in to get . It again holds that implies , so .
The probability that each bit chosen in the first step lies in the support of is at least . Hence, the probability that none of them lies in the support, i.e. that , is at most . This establishes the inequality.
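In generic notation (our assumption: support size at least $s$ out of $d$ coordinates, $m$ bits sampled in the first step), the inequality being established is the following.

```latex
% If each of the m bits sampled in the first step lies in the support of g
% with probability at least s/d, independently, then the probability that
% none of them does is
\[
  \Pr\bigl[\,\mathrm{supp}(g) \cap S = \emptyset\,\bigr]
  \;\le\; \Bigl(1 - \frac{s}{d}\Bigr)^{m} .
\]
```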
By Lemma 4.1, . By applying A.1 to , we obtain: Now consider two cases. First, if , then the rhs of the above inequality satisfies since by assumption of the lemma. Otherwise, using that by the assumption of the lemma, we get . Thus, in both cases one has . Finally, since by the assumption of the lemma, by union bound across all , , as required. ∎
4.1 Analysis of the algorithm
Equipped with Lemma 4.3 and Lemma 4.4, we are now ready to analyze Algorithm 2. Throughout this section we make the strong assumption that the input point is isolated, that is, the distance from to any other point in that is not equal to is at least . This means that as long as we only make queries at distance at most from , the only two possible results of QueryLSH are and . This in turn means that iff .
First, we analyze the inner loop of the algorithm — lines 6 to 12. Its goal is to find a bit to flip in such that the size of decreases afterward. Of course, our algorithm only gets black-box access to QueryLSH, and cannot test the size of in general. However, for a point one can, using black-box access to QueryLSH, distinguish between and : due to the assumption that is an isolated point we have iff . As we show, this ability suffices. Algorithm 2 simply keeps flipping bits in at random (but restricted to coordinates where and agree) until is no longer returned by QueryLSH on the modified query: the last flipped bit must lead to a decrease in even in the original query ! More formally, our algorithm finds two points and such that lies on a path between and (recall that by a path between and we mean a sequence of points starting with and ending with , where each two consecutive points differ in one bit and where each point is further away from than its predecessor), and and , and they differ only in one bit. This is enough to conclude that this bit lies in the support of all hash functions of , so by flipping it in we would remove all of those hash functions from .
Lemma 4.5 (Inner loop of the adversarial walk).
Let , , , . Let be such that no hash function has support smaller than . Suppose that the point passed to Algorithm 2 is located at distance at least from any other point in . Let , sampled in line 2, be such that . For a fixed iteration of the outer loop in line 3 of Algorithm 2, with probability at least the inner loop between lines 6 and 12 finds two points and such that , , and and differ in one bit.
Proof.
Fix an iteration of the outer loop. Then the inner loop at line 7 essentially does a random walk away from , always increasing the distance from after every flip. There are two possibilities: either it finds a point at distance at most from such that , i.e. , or it reaches a point at distance such that , where the algorithm would return .
Assume that the latter happens. Since the bits flipped are chosen at random, this process is equivalent to flipping different bits of where is not equal to . This is the same process as in the statement of Lemma 4.4, so by this lemma the probability that this case takes place is at most . ∎
We are now ready to analyze Algorithm 2. Here we present only the conceptual part of the proof of Theorem A.4. The full proof can be found in Section A.2.
Theorem 4.6.
Let , , , and . Let be the parameter from Algorithm 2 (Algorithm 3). Suppose that the point passed to Algorithm 2 (Algorithm 3) is located at a distance of at least from any other point in . With probability at least the algorithm finds a point at a distance of at most from such that querying LSH with returns no point. It uses at most ( for Algorithm 3) queries to the LSH data structure.
Proof outline.
We start with Algorithm 2. Let be the value of at the end of an iteration of the inner loop of Lemma 4.5, and let be its value at the beginning of the first iteration of this loop. Let be the number of iterations.
For each iteration of the inner loop, by Lemma 4.5 two points and are found. The bit where they differ must lie in the support of one of the hash functions in since . Since and differ in only this bit, and because , it holds that , so the size of the decreases by at least after each execution of the loop.
By Lemma 4.3 and Markov’s inequality, is at most with constant probability, so at most iterations will be made until , which also means that so the loop stops. On the other hand, , which makes a false negative query.
With a simple modification of the inner loop of Algorithm 2, we can exponentially reduce the dependence on in the running time and the number of queries. We state the algorithm and give the proof of its correctness in Section A.3. At a high level, instead of using a linear search as in Algorithm 2, to find a pair of points , such that and , we use a binary search in the inner loop. We still start with the same point . However, we now flip the bits in in positions where equals to get . Then by Lemma 4.4, does not hash with . Now we use a binary search on the path from to to find the aforementioned pair of points. The analysis is analogous to that of Lemma 4.5. ∎
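A sketch of this binary-search inner loop (with assumed names, and using the same black-box `query_lsh` as in the earlier sketch) could look as follows; it finds, with a logarithmic number of queries, one bit whose flip removes at least one of the hash functions still colliding the current point with the origin.

```python
def find_flip_bit_binary(query_lsh, origin, q, rng):
    """Binary-search inner loop in the spirit of Algorithm 3 (names assumed).

    Given a point q that still collides with the isolated origin, return one
    bit whose flip removes at least one colliding hash function, using
    O(log d) LSH queries instead of a linear walk."""
    d = len(origin)
    # Candidate coordinates: those where q still agrees with the origin.
    agree = [j for j in range(d) if q[j] == origin[j]]
    rng.shuffle(agree)      # fixes a random monotone path away from the origin

    def flipped(prefix):
        # Point obtained from q by flipping the first `prefix` path bits.
        cand = list(q)
        for j in agree[:prefix]:
            cand[j] = 1 - cand[j]
        return cand

    lo, hi = 0, len(agree)
    # Invariant: flipping the first `lo` bits keeps a collision, flipping the
    # first `hi` bits removes all collisions (the far point, by Lemma 4.4).
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if query_lsh(flipped(mid)) is not None:
            lo = mid
        else:
            hi = mid
    return agree[lo]        # the bit between the last colliding point and the
                            # first non-colliding one on the path
```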
5 Discussion of potential defenses against our attacking scheme
As mentioned in Section 2, there already exists a large literature on the robustification of LSH, and many of these approaches work against our adversary. In particular, we highlight [1], which provides a false-negative-free Las-Vegas-type data structure with query time and space usage , achieving the same asymptotic performance as [22], as well as [14], which can be applied to LSH to guarantee perfect robustness at the cost of times more space.
However, neither approach is perfect within the same parameter bounds: the approach of [1] does not give any guarantees on query time in this setting, and the space usage of [14] is prohibitive in settings with large . We experimentally investigate how the latter, and a construction based on differential privacy, fare against our adaptive adversary when the available space is limited in Experiment 6 in Appendix C. Interestingly, these defenses perform well in this setting, which suggests that they might be reasonably effective even against a general adversary.
6 Experiments
The goal of this section is to verify the efficacy of our adversary on real datasets, where an isolated point does not necessarily exist, and for parameter configurations outside of the guarantees of Theorems A.4 and A.6.


Implementation details. Our implementation of LSH follows Algorithm 1 in Section 3 [22]. To summarize, given a query point , we iterate through all points in the dataset that hash together with under some hash function, then return the first one which lies at distance at most from , and return nothing if there is no such point. The adversary is implemented according to Algorithm 3.
Experiment 1: Influence of parameters on success probability. The goal of this experiment is to explore within which parameter setting the algorithm works. For this experiment we use synthetic datasets, since for them we can easily vary the number of points and dimensions. We use two datasets:
Zero. This dataset entirely consists of copies of the all-zero point. This dataset is the easiest for our algorithm to find a false negative in, as it is equivalent to a dataset containing only a single point, which would automatically be isolated.
Random. Each point is sampled i.i.d. uniformly, with each bit having a probability to be set to .
For each point on the plots, the experiments were run times, and the result is the mean of the associated values. Plots contain error bars computed as standard error of the mean, but sometimes they are too small to be visible.
The parameters that we vary are , , , , and . Each parameter is changed individually. The default setting of parameters is , , , , . The origin point is picked at random. The offset is set to , or, in other words, the initial point is initialized to be equal to . This is different from what was used in Algorithms 2 and 3. However, the intuition is that if, instead of setting to be a random point at distance from , we set and then move it away from in a way that ensures that the size of decreases for every changed bit, the probability that the algorithm finds a false negative query can only increase. This intuition ends up being supported by the experimental results.
The results are presented in Figure 1. The sixth subplot, counting left-to-right and top-to-bottom, supports the claim that we just made: the lower the starting distance of the point , which is equal to , the higher the success probability of the algorithm. The first and fifth subplots are straightforward. The number of points has a very weak effect on the success probability. Parameter also has a weak influence if it is large enough, and when it is small it is unlikely that a point at distance does not hash with , see Lemma 4.4. The drop in probability on the left-hand side of the third subplot where the near radius is varied and the right-hand side of the fourth where is varied can be explained by the fact that a random point at distance has in expectation hash functions hashing it with . Hence, intuitively, for it to be possible to eliminate all collisions with , it must be that , which also implies that is necessary.
Experiment 2: Efficiency across datasets. In this experiment we measure how well our algorithm performs on a variety of different datasets. We use the datasets from Experiment 1, as well as one additional synthetic dataset and three datasets with real data. The new synthetic dataset is Sparse random, where each point is generated by independently sampling each bit to be with probability , and to be otherwise. The real datasets are embeddings of MNIST [28], Anonymous Microsoft Web Data (MSWeb) [12] and Mushroom [34, 38] into Hamming space. For MNIST we flatten the images into vectors whose Boolean variables indicate the presence of a pixel. For MSWeb and Mushroom we use one-hot encoding to map the data into Hamming space. A more detailed description can be found in Appendix C.
Due to computational constraints, we limit the size of each dataset to at most . For MNIST and MSWeb we only use the first points. We use all points in Mushroom. For the synthetic datasets we generate points with dimension , which is about the same order of magnitude as the dimensions of the real datasets. We choose a setting of parameters with which our algorithm has a good chance of success, as per the results of Experiment 1. As the number of dimensions varies depending on the dataset, we set , , and the starting distance on the second subplot is . Otherwise, we use the same setup as in Experiment 1, except that for the fixed datasets the origin point is the first one in the dataset. On the first subplot the starting distance is measured as a fraction of . On the second subplot the goal of the algorithm is to find a query to the LSH data structure that returns at a distance from the origin specified by the axis as a fraction of . The dashed black vertical line represents the radius and is located at since . The results in the first subplot of Figure 2 show that the success probability drops monotonically for all datasets as the starting distance grows. This is natural to expect given that similar behaviour was observed in Experiment 1. On the second subplot we can see that for most datasets we can reliably find a negative query only at a distance much larger than . This, in a way, allows us to compare the “hardness” of datasets: the further away from the origin we move, the fewer functions hash the origin point and our current point together, and the fewer bits we need to change for the query to become negative. As one can see, all of the real datasets lie between Random and Sparse random on the second subplot. This can be explained by the fact that the points in the real datasets are much sparser than in Random, and the maximum distance from the most isolated point to its nearest point is smaller than the ideal .
References
- [1] Thomas Dybdahl Ahle. Optimal Las Vegas locality sensitive data structures. In Chris Umans, editor, 58th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2017, Berkeley, CA, USA, October 15-17, 2017, pages 938–949. IEEE Computer Society, 2017.
- [2] Panagiotis Anagnostou, Petros Barbas, Aristidis G. Vrahatis, and Sotiris K. Tasoulis. Approximate knn classification for biomedical data. In 2020 IEEE International Conference on Big Data (Big Data), pages 3602–3607, 2020.
- [3] Alexandr Andoni and Piotr Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. In 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2006), 21-24 October 2006, Berkeley, California, USA, Proceedings, pages 459–468. IEEE Computer Society, 2006.
- [4] Alexandr Andoni, Piotr Indyk, Thijs Laarhoven, Ilya P. Razenshteyn, and Ludwig Schmidt. Practical and optimal LSH for angular distance. In Corinna Cortes, Neil D. Lawrence, Daniel D. Lee, Masashi Sugiyama, and Roman Garnett, editors, Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 1225–1233, 2015.
- [5] Alexandr Andoni, Piotr Indyk, Huy L. Nguyen, and Ilya P. Razenshteyn. Beyond locality-sensitive hashing. In Chandra Chekuri, editor, Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2014, Portland, Oregon, USA, January 5-7, 2014, pages 1018–1028. SIAM, 2014.
- [6] Alexandr Andoni, Thijs Laarhoven, Ilya P. Razenshteyn, and Erik Waingarten. Optimal hashing-based time-space trade-offs for approximate near neighbors. In Philip N. Klein, editor, Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, Barcelona, Spain, Hotel Porta Fira, January 16-19, pages 47–66. SIAM, 2017.
- [7] Alexandr Andoni and Ilya P. Razenshteyn. Optimal data-dependent hashing for approximate near neighbors. In Rocco A. Servedio and Ronitt Rubinfeld, editors, Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing, STOC 2015, Portland, OR, USA, June 14-17, 2015, pages 793–801. ACM, 2015.
- [8] Idan Attias, Edith Cohen, Moshe Shechner, and Uri Stemmer. A framework for adversarial streaming via differential privacy and difference estimators. In Yael Tauman Kalai, editor, 14th Innovations in Theoretical Computer Science Conference, ITCS 2023, January 10-13, 2023, MIT, Cambridge, Massachusetts, USA, volume 251 of LIPIcs, pages 8:1–8:19. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2023.
- [9] Martin Aumüller, Tobias Christiani, Rasmus Pagh, and Francesco Silvestri. Distance-sensitive hashing. In Jan Van den Bussche and Marcelo Arenas, editors, Proceedings of the 37th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, Houston, TX, USA, June 10-15, 2018, pages 89–104. ACM, 2018.
- [10] Amos Beimel, Haim Kaplan, Yishay Mansour, Kobbi Nissim, Thatchaphol Saranurak, and Uri Stemmer. Dynamic algorithms against an adaptive adversary: generic constructions and lower bounds. In Stefano Leonardi and Anupam Gupta, editors, STOC ’22: 54th Annual ACM SIGACT Symposium on Theory of Computing, Rome, Italy, June 20 - 24, 2022, pages 1671–1684. ACM, 2022.
- [11] Omri Ben-Eliezer, Rajesh Jayaram, David P. Woodruff, and Eylon Yogev. A framework for adversarially robust streaming algorithms. J. ACM, 69(2):17:1–17:33, 2022.
- [12] Jack Breese, David Heckerman, and Carl Kadie. Anonymous Microsoft Web Data. UCI Machine Learning Repository, 1998. DOI: https://doi.org/10.24432/C5VS3Q.
- [13] Moses Charikar. Similarity estimation techniques from rounding algorithms. In John H. Reif, editor, Proceedings on 34th Annual ACM Symposium on Theory of Computing, May 19-21, 2002, Montréal, Québec, Canada, pages 380–388. ACM, 2002.
- [14] Yeshwanth Cherapanamjeri and Jelani Nelson. On adaptive distance estimation. In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
- [15] Tobias Christiani. A framework for similarity search with space-time tradeoffs using locality-sensitive filtering. In Philip N. Klein, editor, Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, Barcelona, Spain, Hotel Porta Fira, January 16-19, pages 31–46. SIAM, 2017.
- [16] Edith Cohen, Xin Lyu, Jelani Nelson, Tamás Sarlós, Moshe Shechner, and Uri Stemmer. On the robustness of countsketch to adaptive inputs. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato, editors, International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 4112–4140. PMLR, 2022.
- [17] Edith Cohen, Jelani Nelson, Tamás Sarlós, and Uri Stemmer. Tricking the hashing trick: A tight lower bound on the robustness of countsketch to adaptive inputs. In Brian Williams, Yiling Chen, and Jennifer Neville, editors, Thirty-Seventh AAAI Conference on Artificial Intelligence, AAAI 2023, Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence, IAAI 2023, Thirteenth Symposium on Educational Advances in Artificial Intelligence, EAAI 2023, Washington, DC, USA, February 7-14, 2023, pages 7235–7243. AAAI Press, 2023.
- [18] Søren Dahlgaard, Mathias Bæk Tejs Knudsen, and Mikkel Thorup. Practical hash functions for similarity estimation and dimensionality reduction. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 6615–6625, 2017.
- [19] Mayur Datar, Nicole Immorlica, Piotr Indyk, and Vahab S Mirrokni. Locality-sensitive hashing scheme based on p-stable distributions. In Proceedings of the twentieth annual symposium on Computational geometry, pages 253–262, 2004.
- [20] Moritz Hardt and David P. Woodruff. How robust are linear sketches to adaptive inputs? In Dan Boneh, Tim Roughgarden, and Joan Feigenbaum, editors, Symposium on Theory of Computing Conference, STOC’13, Palo Alto, CA, USA, June 1-4, 2013, pages 121–130. ACM, 2013.
- [21] Avinatan Hassidim, Haim Kaplan, Yishay Mansour, Yossi Matias, and Uri Stemmer. Adversarially robust streaming algorithms via differential privacy. J. ACM, 69(6):42:1–42:14, 2022.
- [22] Piotr Indyk and Rajeev Motwani. Approximate nearest neighbors: Towards removing the curse of dimensionality. In Jeffrey Scott Vitter, editor, Proceedings of the Thirtieth Annual ACM Symposium on the Theory of Computing, Dallas, Texas, USA, May 23-26, 1998, pages 604–613. ACM, 1998.
- [23] Omid Jafari, Preeti Maurya, Parth Nagarkar, Khandker Mushfiqul Islam, and Chidambaram Crushev. A survey on locality sensitive hashing algorithms and their applications. arXiv preprint arXiv:2102.08942, 2021.
- [24] Michael Kapralov. Smooth tradeoffs between insert and query complexity in nearest neighbor search. In Tova Milo and Diego Calvanese, editors, Proceedings of the 34th ACM Symposium on Principles of Database Systems, PODS 2015, Melbourne, Victoria, Australia, May 31 - June 4, 2015, pages 329–342. ACM, 2015.
- [25] Christopher Kennedy and Rachel A. Ward. Fast cross-polytope locality-sensitive hashing. In Christos H. Papadimitriou, editor, 8th Innovations in Theoretical Computer Science Conference, ITCS 2017, January 9-11, 2017, Berkeley, CA, USA, volume 67 of LIPIcs, pages 53:1–53:16. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2017.
- [26] Hisashi Koga, Tetsuo Ishibashi, and Toshinori Watanabe. Fast agglomerative hierarchical clustering algorithm using locality-sensitive hashing. Knowledge and Information Systems, 12:25–53, 2007.
- [27] Eyal Kushilevitz, Rafail Ostrovsky, and Yuval Rabani. Efficient search for approximate nearest neighbor in high dimensional spaces. SIAM J. Comput., 30(2):457–474, 2000.
- [28] Yann LeCun. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998.
- [29] Ping Li, Art B. Owen, and Cun-Hui Zhang. One permutation hashing. In Peter L. Bartlett, Fernando C. N. Pereira, Christopher J. C. Burges, Léon Bottou, and Kilian Q. Weinberger, editors, Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012. Proceedings of a meeting held December 3-6, 2012, Lake Tahoe, Nevada, United States, pages 3122–3130, 2012.
- [30] Jelani Nelson, Huy L. Nguyên, and David P. Woodruff. On deterministic sketching and streaming for sparse recovery and norm estimation. In Anupam Gupta, Klaus Jansen, José D. P. Rolim, and Rocco A. Servedio, editors, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques - 15th International Workshop, APPROX 2012, and 16th International Workshop, RANDOM 2012, Cambridge, MA, USA, August 15-17, 2012. Proceedings, volume 7408 of Lecture Notes in Computer Science, pages 627–638. Springer, 2012.
- [31] Rasmus Pagh. Locality-sensitive hashing without false negatives. In Robert Krauthgamer, editor, Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2016, Arlington, VA, USA, January 10-12, 2016, pages 1–9. SIAM, 2016.
- [32] Rina Panigrahy. Entropy based nearest neighbor search in high dimensions. In Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2006, Miami, Florida, USA, January 22-26, 2006, pages 1186–1195. ACM Press, 2006.
- [33] Matti Ryynanen and Anssi Klapuri. Query by humming of midi and audio using locality sensitive hashing. In 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 2249–2252, 2008.
- [34] Mushroom. UCI Machine Learning Repository, 1987. DOI: https://doi.org/10.24432/C5959T.
- [35] Roger Weber, Hans-Jörg Schek, and Stephen Blott. A quantitative analysis and performance study for similarity-search methods in high-dimensional spaces. In Ashish Gupta, Oded Shmueli, and Jennifer Widom, editors, VLDB’98, Proceedings of 24rd International Conference on Very Large Data Bases, August 24-27, 1998, New York City, New York, USA, pages 194–205. Morgan Kaufmann, 1998.
- [36] Alexander Wei. Optimal Las Vegas approximate near neighbors in . ACM Trans. Algorithms, 18(1):7:1–7:27, 2022.
- [37] David P. Woodruff and Samson Zhou. Tight bounds for adversarially robust streams and sliding windows via difference estimators. In 62nd IEEE Annual Symposium on Foundations of Computer Science, FOCS 2021, Denver, CO, USA, February 7-10, 2022, pages 1183–1196. IEEE, 2021.
- [38] Barry L Wulff. The audubon society field guide to north american mushrooms, by gary lincoff, 1982.
- [39] Ying Zhang, Huchuan Lu, Lihe Zhang, Xiang Ruan, and Shun Sakai. Video anomaly detection based on locality sensitive hashing filters. Pattern Recognition, 59:302–311, 2016.
Appendix A Omitted proofs and statements
This appendix contains lemmas and proofs omitted from Section 4.
A.1 Proofs from Section 4
Fact A.1.
For , . For , .
Proof.
First we show . Taking a derivative, we get , which is non-negative on . Therefore, since the inequality holds at , it holds on .
For the second inequality, consider . It is enough to show that its first derivative is non-positive on . For this it is sufficient to show that on this interval. is a decreasing function. Since , , is positive on the interval . This means that is non-negative on this interval since . ∎
See 4.1
Proof.
Recall that . By taking logarithm of the inequalities of A.1, we get that for , . Setting , we obtain
The required inequalities now follow immediately, since and , so and . ∎
Fact A.2 (Chernoff bound).
Let be independent Bernoulli random variables. Let . Let . Then
Lemma A.3.
Suppose that a hash function was constructed according to the procedure in Section 3 and . Then with probability at least the size of the support of is at least .
Proof.
Recall that a hash function is composed of hash functions , each of which is supported on one dimension from . The dimension on which each of these functions is supported is picked uniformly and independently from . We will call a hash function duplicated if for some , . Note that the first instance of each hash function in the range according to this definition is not duplicated. Then, because , it is enough to bound the number of duplicated hash functions, which we will denote by , in order to conclude the statement of the lemma.
Consider the following equivalent process of constructing . We pick each of the in sequence, in increasing order of . Denote by the size of the union of the supports of . For each , we first reorder the range so that the first entries are the dimension indices already in the support of , and the rest is everything else. We then pick a number uniformly at random, and this number indicates the position in the new range of the dimension index on which will be supported.
It is now easy to see that the hash function is duplicated iff .
Since the support of the first hash functions has size at most , it always holds that . Let be an indicator of the event , and an indicator of the event . Then and for each , .
Hence we will upper bound by , and upper bound using the Chernoff bound. Indeed . Note that when , which is the case for the values we use below.
Now, we first consider the case when .
By applying A.2 to and setting , we get
When , set .
Therefore,
which concludes the proof. ∎
See 4.2
Proof.
By Lemma A.3, the probability that the support of any fixed hash function in is at least is at least . The number of hash functions in is and so by the union bound, the probability that there exists a hash function with support less than is at most . ∎
See 4.3
Proof.
We fix an arbitrary set of hash functions that satisfy the condition of the lemma. By linearity of expectation,
where the randomness is over the choice of as in the lemma.
Let
In the following we consider an arbitrary and derive an upper bound for the collision probability of and .
(1)
where .
To see it, consider the following way to sample : first sample bits uniformly at random with replacement from to form a set , , and then flip the bits from in to obtain a point . After that choose the rest of bits without replacement uniformly at random among the bits and flip them. Because the bits flipped in the second step are different from the bits flipped in the first step, if then . Therefore, .
On the other hand, if and only if all of the bits sampled into do not lie in the support of . Because the support of is at least , each sampled bit doesn’t lie in it with probability at most . Since the bits are chosen independently, we have . This establishes (1).
We are now in a position to upper bound the expected size of , i.e. the number of hashings that and collide under, obtaining the result of the lemma. Using (1) we get
(2)
Recall that by our parameter setting the number of hashings satisfies , where
(3)
Hence, by (3), we have the following upper bound on the expectation.
(4)
The only step left is to upper bound the right hand side of (4). Therefore, we need to upper bound . We have by A.1
The largest term cancels out in the fraction, so we are only left with a sum of smaller terms.
(5)
Now we will establish upper bounds on each of the summands in the right-hand side of (5). First, by Lemma 4.1,
This implies:
- , since , so .
- , since , so and .
- , since , so and
The last inequality implies that , and implies , so
On the other hand, the first two bounds on imply and , which finally allows us to bound rhs of (5):
A.2 Proofs from Section 4.1
We split Theorem 4.6 into two separate Theorems A.4 and A.6 for each of Algorithms 2 and 3. In this subsection we formally prove Theorem A.4.
Theorem A.4 (Simplest algorithm).
Let , , , and . Let be the parameter from Algorithm 2. Suppose that the point passed to Algorithm 2 is located at a distance of at least from any other point in . With probability at least the algorithm finds a point at a distance of at most from such that querying LSH with returns no point. It uses at most queries to the LSH data structure.
Proof.
First we show that under this parameter setting the requirements of Lemmas 4.4 and 4.3 are satisfied. For this we need to show that and that the support of each hash function is at least .
The inequality immediately follows from and .
By Lemma 4.2, each hash function has support at least with probability at least . By Lemma 4.1 and the parameter setting, , hence . On the other hand, , so and . This means that , so each has support at least with probability at least .
To conclude the theorem statement, we show that Algorithm 2 returns a point within distance of such that .
Let be the point sampled in line 2. By Lemma 4.3 and Markov’s inequality, with probability . Conditioned on those inequalities holding, we will show that with a probability at least the algorithm finds a bit during each execution of the outer loop at line 3 such that, after flipping this bit in , decreases by at least . This would mean that after iterations . This happens with probability at least by a union bound.
By taking the union bound with the probability that each has support at least , we obtain the final success probability of at least .
By Lemma 4.5, the inner loop at line 6 finds, with probability at least , two points and such that , and they differ in exactly one bit.
Assuming such a pair of points is found, let be the position of the bit that is different between them. Since , there is a function such that . On the other hand, because , , therefore bit must lie in the support of . Hence after we flip this bit in , and decreases.
Note that because we are doing the inner loop at most times, , so the prerequisites for Lemma 4.5 always hold.
The outer loop is executed at most times, and the inner loop makes at most iterations, as each iteration moves further away from . Therefore, the total number of queries is bounded by . ∎
A.3 Proofs for Algorithm 3
Here we present Algorithm 3 and its analysis. The key difference compared to Algorithm 2 is the inner loop, which we analyze in a separate lemma.
Lemma A.5 (Inner loop of the fast adversarial walk).
Let , , , . Let be such that no hash function has support smaller than . Suppose that the point passed to Algorithm 3 is located at distance at least from any other point in . Let , sampled in line 2, be such that . For a fixed iteration of the outer loop in line 3 of Algorithm 3, with probability at least the inner loop between lines 6 and 17 finds two points and such that:
- ,
- ,
- and differ in one bit.
The inner loop takes time at most .
Proof.
Fix an iteration of the outer loop. The idea of the inner loop is to sample two points and such that there is a path between them that goes strictly away from and such that and . Then we move those points towards each other by binary search while preserving those properties.
The initial point is sampled through the same process as in Lemma 4.4, so with probability at least the condition holds and the return in line 9 is not reached. The condition holds at the start because by the while clause of the outer loop.
During the execution, both guarantees continue to hold because of the way we reassign variables in the binary search. At the end, the distance between the two points becomes , which means that they differ in exactly one bit. This concludes the correctness proof.
The running time is because the distance between and is halved after each iteration. ∎
The proof for Algorithm 3 is almost the same as that for Algorithm 2, so we only outline the differences.
Theorem A.6 (Fast algorithm).
Let , , , and . Let be the parameter from Algorithm 2 (Algorithm 3). Suppose that the point passed to Algorithm 2 (Algorithm 3) is located at a distance of at least from any other point in . With probability at least the algorithm finds a point at a distance of at most from such that querying LSH with returns no point. It uses at most ( for Algorithm 3) queries to the LSH data structure.
Proof.
Algorithms 3 and 2 only differ in the implementation of the inner loop, between lines 6 and 12 and lines 6 and 17 respectively. Because of this, the proof is exactly the same as for Algorithm 2, except that Lemma A.5 is used instead of Lemma 4.5.
The final runtime is since the inner loop takes time . ∎
A.4 Existence of an isolated point
Not all data sets contain a sufficiently isolated point as required by Theorems A.4 and A.6, i.e. a point that is at distance from any other point in the set. However, when a point set is sampled randomly, this assumption holds with high probability for all of the points when .
Lemma A.7.
Let be a set of points sampled independently and uniformly at random from . If , with probability at least the distance between any two of them is at least .
Proof.
Fix an arbitrary point . We will first show that the probability that is at most for a uniformly random point .
Notice that , where , and all are mutually independent and equal to with probability and otherwise. Hence . Therefore, by the Chernoff bound,
Now, by a union bound across all pairs of points, the probability that the minimum of pairwise distances between any two of them is at most is at most . ∎
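A hedged sketch of this calculation, instantiated at the concrete threshold $d/4$ (the lemma's exact threshold and constants are not reproduced above, so this instantiation is an assumption):

```latex
% For x, y independent and uniform in {0,1}^d, dist(x, y) is a sum of d
% independent Bernoulli(1/2) variables, so E[dist(x, y)] = d/2, and by the
% Chernoff bound (Fact A.2) with deviation parameter 1/2,
\[
  \Pr\Bigl[\mathrm{dist}(x, y) \le \tfrac{d}{4}\Bigr]
  \;\le\; \exp\Bigl(-\tfrac{d}{16}\Bigr).
\]
% A union bound over the at most n^2/2 pairs then gives failure probability
% at most (n^2/2)\exp(-d/16), which is small whenever d is large compared
% to \log n.
```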
Appendix B Discussion of impact
This paper presents work whose goal is to advance the field of Machine Learning.
The main contribution of the paper is an improved understanding of the choices of parameters for which LSH data structures are unsafe with respect to adversarial attacks. We show that, in principle, attacks on standard LSH constructions are possible if the parameters are not chosen carefully. As a consequence, false-negative-free LSH data structures should be given preference in sensitive applications. Our main contribution is the theoretical analysis of the attack, and our experiments focus on understanding the parameter settings for which the attack may work. Therefore we believe that there is very limited risk of abuse of our results, and we think that the benefit of understanding such potential attacks clearly outweighs this limited risk.
Appendix C Additional experiments and details
Resources used.
The experiments were run on a server in an internal cluster, where a typical worker has two AMD EPYC 7302 16-core processors with up to 500 GiB of RAM. Experiments were run in parallel on multiple cores. In total, running all of the experiments, including those in the appendix, takes at most 100000 CPU hours and uses 21 GiB of disk space.
Data sets used in experiment 2.
MNIST [28]. This dataset consists of grayscale pictures of handwritten digits of size . We first flatten the images into vectors of size , and then we project those vectors into Hamming space by setting each entry to if it corresponds to a white pixel, i.e. if the grayscale color is , and to otherwise. Note that this transformation preserves the dataset fairly well, as the background pixels in each image are perfectly white.
Anonymous Microsoft Web Data (MSWeb) [12]. Entries in this dataset are anonymous web user IDs and paths on the Microsoft website that they have visited. There are in total different paths. We represent each user by a point in dimensional Hamming space, where a point entry is equal to if they have visited the corresponding path.
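A minimal sketch of these two embeddings (numpy-based; the threshold value, the 0/1 orientation and the function names are assumptions, not the paper's code):

```python
import numpy as np

def binarize_mnist(images, white_level=255):
    """Flatten 28x28 grayscale images into length-784 binary vectors.
    white_level and the 0/1 orientation are assumed defaults."""
    flat = images.reshape(len(images), -1)
    return (flat == white_level).astype(np.uint8)

def one_hot_users(visited_paths, all_paths):
    """Map each user (a collection of visited path ids) to a binary point
    with one coordinate per possible path, as done for MSWeb."""
    index = {p: i for i, p in enumerate(sorted(all_paths))}
    points = np.zeros((len(visited_paths), len(index)), dtype=np.uint8)
    for row, paths in enumerate(visited_paths):
        for p in paths:
            points[row, index[p]] = 1
    return points
```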
Experiment 3: Influence of on the algorithm efficiency.
In this experiment the goal was to measure the impact of on the probability of success of the algorithm. The setup is the same as in Experiment 2. All runs are done on the Random dataset. The results are on Figure 3.
Similar to Experiment 2, we compare the different values of by varying the distance at which we want to find the negative query. As is evident from the plots, the bigger the , the harder it is for the algorithm to succeed.
Note that the values of at which the algorithm no longer succeeds for distance are rather modest. It may be that simply increasing the value of is enough to defend against the type of attack described in this paper.
Experiment 4: Number of queries as a function of starting distance.
In this experiment we confirm empirically that the number of queries the algorithm needs depends linearly on . We use the same datasets and experimental setting as in Experiment 2. The results are shown in Figure 4.
Combined with results of Experiment 2, we can see that lowering the starting distance provides a trade-off between the running time and the probability of success.
Experiment 5: Efficiency gain over random sampling.
Finally, we investigated the efficiency gain that our adaptive algorithm achieved over random point sampling. The experimental setup is the same as in Experiment 2. Here we use the Random dataset. For both approaches, we continued rerunning them until a false negative query was found. The results are presented in Figure 5.
As the figure shows, when and, accordingly, the number of hash functions are very small, random sampling is more efficient. However, even for modest values of , we already get a significant exponential speed-up.
Experiment 6: Comparison of defense mechanisms.
In this experiment we compare two different defense mechanisms, based on ideas from differential privacy [10] and distance estimation [14].
Theorem 1.3 of [14] presents a general robustification mechanism which, when applied to the LSH of [22], shows that it can be made robust with blow-ups of for memory and for query time, where is the failure probability per adaptive query. This is achieved by creating copies of plain LSH. Querying is done by selecting of those LSH data structures, relaying the query to them, and then choosing any of the returned points as the answer, or if all of the sampled data structures failed to find a point.
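A minimal sketch of this copies-based defense, reusing a generic `lsh.query` interface (all names and parameters are assumptions):

```python
import random

def robust_query(lsh_copies, q, num_samples, rng):
    """Copies-based defense in the spirit of [14]: keep several independently
    built LSH instances and answer each query from a random subset of them."""
    sampled = rng.sample(lsh_copies, num_samples)
    for lsh in sampled:
        answer = lsh.query(q)
        if answer is not None:     # any successful copy's answer will do
            return answer
    return None                    # all sampled copies failed to find a point
```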
As a side note, the original ADE construction of [14] would also allow to solve the ANN problem. We don’t study it in this experiment for two reasons: it would require time per query , which is too slow in our parameter setting, and it is based upon linear sketches, against which our adaptive adversary is not effective.
The generic differentially private conversion of [10] is similar. For a generic estimation data structure, to be able to process queries against an adaptive adversary, they create copies of the data structure. For each query, they select copies of the data structure and aggregate their output using a differentially private median. Although their construction is not generally applicable to ANN, as it is a search problem, it would nevertheless still be effective against our adversary, as for the latter it is not important which exact point was returned, but only whether anything was returned at all.
Since both of these data structures can provably defend against an adaptive adversary, but both require a lot of additional memory, in this experiment we consider their performance against our adversary when the available memory is limited. Similarly, because a rigorous implementation of a differentially private median would be computationally expensive, we instead opt to use a slightly simplified version adapted to our setting. Namely, after sampling and querying the data structures, we count the number of them that returned a point and the number that did not. We add to both values two-sided geometric noise, whose probability mass function is , with parameter . If after this perturbation the number of data structures that returned is bigger than the number that returned a point, we return . Otherwise, one of the outputs of the successful data structures is returned.
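A sketch of this simplified noisy aggregation (the noise parameter and helper names are assumptions; the two-sided geometric variable is generated as the difference of two i.i.d. geometric variables, which is one standard construction):

```python
import math
import random

def two_sided_geometric(rng, p):
    """Two-sided geometric noise as the difference of two i.i.d. geometric
    variables with success probability p (assumed parameterisation)."""
    def geometric():
        return int(math.floor(math.log(1.0 - rng.random()) / math.log(1.0 - p)))
    return geometric() - geometric()

def dp_style_query(lsh_copies, q, num_samples, noise_p, rng):
    """Simplified differentially-private-style aggregation described above:
    sample some copies, add noise to the success/failure counts, and return
    nothing if the noisy failure count wins."""
    sampled = rng.sample(lsh_copies, num_samples)
    answers = [lsh.query(q) for lsh in sampled]
    found = [a for a in answers if a is not None]
    n_found = len(found) + two_sided_geometric(rng, noise_p)
    n_empty = (len(answers) - len(found)) + two_sided_geometric(rng, noise_p)
    if n_empty > n_found or not found:
        return None
    return rng.choice(found)       # one of the successful answers
```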
The results are depicted in Figures 6 and 7. On both figures, plots annotated with both and “q.s.” correspond to one of the two robustified variations accordingly. represents the number of copies used, and “q.s.” stands for “query samples” and represents the number of data structures randomly sampled to answer each query. Each copy is built with the same parameters as in experiment , and . The plots annotated with only are the plain LSH constructions with different values of . The idea is that the plots with the same value of have the same number of initialized hash functions. The dataset used is Random.
Both figures contain 4 subplots. The first subplot shows the probability that our adaptive adversary reports that it has found a false negative query at a desired target distance. Note that here, unlike in our previous experiments, the output of each query is non-deterministic even after the randomness of the hash functions is fixed. Hence we also test how “good” a false negative the reported point is. For each found point, we repeatedly query the data structure with it times, and depict on the other three subplots the probability that a point discovered by the adversary is still a negative query for at least , and of the repeated queries.
As can be seen from Figure 6, the defense based on [14] performs better than querying plain LSH with the same, and often even a greater, number of hash functions. Similarly, the probability that the discovered false negatives remain false negatives in at least or of cases is fairly low.
On the other hand, the results for the algorithm based on differential privacy in Figure 7 show that the probability of reporting a false negative is considerably higher than for plain LSH, even for low values of the target distance. This is due to the aggregation mechanism that we use, as it has a chance of reporting nothing even if all sampled data structures returned a positive answer. However, this also means that the adversary has fewer chances to learn the underlying data structure, which we can see in the fact that the probability that a repeated query is a negative one in at least of cases is below .
Based on this, it appears that unless your application can tolerate a significantly higher chance of failure for each query, it is better to use the construction based on [14]. Otherwise, a differentially private construction leaks less information to the adversary. Note that both constructions are still computationally faster than directly querying each data structure.




