GIST: Greedy Independent Set Thresholding for Diverse Data Summarization
Abstract
We propose a novel subset selection task called min-distance diverse data summarization (MDDS), which has a wide variety of applications in machine learning, e.g., data sampling and feature selection. Given a set of points in a metric space, the goal is to maximize an objective that combines the total utility of the points and a diversity term that captures the minimum distance between any pair of selected points, subject to the constraint $|S| \le k$. For example, the points may correspond to training examples in a data sampling problem, e.g., learned embeddings of images extracted from a deep neural network. This work presents the GIST algorithm, which achieves a $2/3$-approximation guarantee for MDDS by approximating a series of maximum independent set problems with a bicriteria greedy algorithm. We also prove a complementary $(2/3 + \varepsilon)$-hardness of approximation, for any $\varepsilon > 0$. Finally, we provide an empirical study demonstrating that GIST outperforms existing methods for MDDS on synthetic data, and also in a real-world image classification experiment that studies single-shot subset selection for ImageNet.
1 Introduction
Subset selection is a challenging optimization problem with a wide variety of applications in machine learning, including feature selection, recommender systems, news aggregation, drug discovery, data summarization, and designing pretraining sets for large language models (Anil et al., 2023). Data sampling in particular is a salient problem due to unprecedented and continuous data collection. For example, LiDAR and imaging devices in one self-driving vehicle can easily capture ~80 terabytes of data per day (Kazhamiaka et al., 2021).
In most subset selection tasks, we rely on the weight (or utility) of the objects to rank some objects over others, and also to avoid selecting duplicate or near-duplicate objects. If we select a small subset, then we also want to ensure that the selected subset is a good representation of the original set. These utility, diversity, and coverage criteria can be expressed through objective functions, and the interesting research lies in developing efficient algorithms with strong approximation guarantees. The underlying machinery used in constrained subset selection algorithms shares many similarities with techniques from other areas of combinatorial optimization such as submodular maximization, $k$-center clustering, and convex hull approximations.
In this work, we study the problem of selecting a set of points in a metric space that maximizes an objective combining their utility and a minimum pairwise-distance diversity measure. Concretely, given a set of points $V$, a weight function $w \colon V \to \mathbb{R}_{\ge 0}$, and a distance function $\mathrm{dist} \colon V \times V \to \mathbb{R}_{\ge 0}$, our goal is to solve the cardinality-constrained optimization problem
$$\max_{S \subseteq V : |S| \le k} \; \frac{1}{k} \sum_{v \in S} w(v) + \lambda \cdot \mathrm{div}(S),$$
where the diversity term $\mathrm{div}(S) = \min_{u, v \in S : u \ne v} \mathrm{dist}(u, v)$ encourages maximizing the minimum distance between all pairs of distinct points in the selected subset. Bhaskara et al. (2016) call $\mathrm{div}(S)$ the min-min diversity objective and Meinl et al. (2011) call it $p$-dispersion maximization, where it is known to admit a $1/2$-approximation ratio subject to the strict cardinality constraint $|S| = k$.
Other diversity objectives such as the sum-of-min-distances and sum-of-all-distances have also been studied (Meinl et al., 2011; Bhaskara et al., 2016). While previous works have looked at maximizing the diversity objective $\mathrm{div}(S)$ alone, our formulation, which introduces the utility term, has not been studied before. In particular, Bhaskara et al. (2016) observe that min-min is a powerful diversity objective, but point out a major challenge: it is highly non-monotonic, i.e., the objective value can decrease sharply when a single new point is added. Our work shows that using a utility term together with the min-min diversity objective can alleviate some of these pitfalls, and allows us to inherit the strong benefits of this strict diversity objective.
1.1 Our contributions
We summarize our main contributions below:
• We propose a novel subset selection problem called min-distance diverse data summarization (MDDS), where the goal is to maximize an objective function that combines a linear weight term with the min-min distance diversity term $\mathrm{div}(S)$, subject to the cardinality constraint $|S| \le k$. This problem has a wide variety of applications in machine learning, e.g., data denoising and active learning.
• We give a $2/3$-approximation algorithm for MDDS called GIST that approximates a series of maximum independent set problems on geometric intersection graphs with a bicriteria greedy algorithm and returns the best solution.
• We prove a complementary $(2/3 + \varepsilon)$-hardness of approximation, for any $\varepsilon > 0$, for general metric spaces by reducing from approximate maximum clique. We also prove that MDDS is APX-complete for Euclidean metrics.
• Our experiments show that GIST overcomes the shortcomings of existing methods for MDDS on synthetic data. We also demonstrate how GIST can be used to create better single-shot subsets of training data for an image classification benchmark on ImageNet (Russakovsky et al., 2015) compared to margin sampling and $k$-center algorithms.
1.2 Background and related work
Diversity objectives.
A closely related problem is $k$-center, which aims to find a subset $S$ of centers that minimizes the largest distance of any point to its closest center in $S$. Concretely, the objective is $\min_{S \subseteq V : |S| \le k} \max_{v \in V} \min_{s \in S} \mathrm{dist}(v, s)$, and a greedy $2$-approximation algorithm is widely used in active learning applications (Sener and Savarese, 2018). The main difference between our objective function and $k$-center is the ability to benefit from the utility term, e.g., uncertainty values in an active learning setting. The second difference is that we relax the need to cover every point in the original dataset, which is particularly beneficial for large datasets, where we prefer to avoid instability due to certain corner or outlier points.
Bhaskara et al. (2016) study various distance-based diversity metrics subject to a cardinality constraint. In particular, they give a constant-factor approximation algorithm for the sum-min diversity objective defined as $\sum_{u \in S} \min_{v \in S \setminus \{u\}} \mathrm{dist}(u, v)$ by rounding a linear programming relaxation, and they complement this result by proving a conditional hardness of approximation under the planted clique assumption. They also study the min-min objective $\min_{u, v \in S : u \ne v} \mathrm{dist}(u, v)$, which is the same as $\mathrm{div}(S)$. They do not have a linear weight function in their objective, so they enforce a strict cardinality constraint $|S| = k$; otherwise, it is always better to output at most one or two points. Meinl et al. (2011) study the same objective subject to $|S| = k$ under the name $p$-dispersion (also called max-min or remote-edge) diversity maximization. This cardinality issue does not arise in our formulation since we must optimize the trade-off between the competing linear weight (utility) and diversity terms.
Moumoulidou et al. (2020), Addanki et al. (2022), and Mahabadi and Trajanovski (2023) initiated the study of diversity maximization under fairness constraints. Given a partitioning of the dataset into groups $V_1, \dots, V_m$, they ensure that a prespecified number of points is selected from each group $V_i$. This fairness constraint considerably differentiates their problem setting from ours (i.e., the unpartitioned case $m = 1$). Moumoulidou et al. (2020) and Addanki et al. (2022) maximize the max-min diversity objective, and Mahabadi and Trajanovski (2023) consider the sum-of-pairwise-distances and sum-min objectives. None of these works, however, adds a linear weight function to the objective.
Data sampling.
In the realm of dataset curation and active learning, several lines of work study the combination of utility and diversity terms. Ash et al. (2020) invoke $k$-means++ seeding over the gradient space of a model in order to balance uncertainty and diversity. Wei et al. (2015) introduce several submodular objectives, e.g., facility location, and use them to diversify a set of uncertain examples in each active learning iteration. Citovsky et al. (2021) cluster examples represented by embeddings extracted from the penultimate layer of a partially trained DNN, and then use these clusters to diversify uncertain examples in each iteration. Our work differs from these and several others (e.g., Kirsch et al. (2019); Zhdanov (2019)) in that we directly incorporate both the utility and diversity terms in our objective function.
2 Preliminaries
We are given a set $V$ of $n$ points in the metric space $(V, \mathrm{dist})$, where $\mathrm{dist} \colon V \times V \to \mathbb{R}_{\ge 0}$ is the distance function. We are also given weights $w \colon V \to \mathbb{R}_{\ge 0}$ representing the utility of each point. We extend these functions to take subsets as input in the standard way: $w(S) = \sum_{v \in S} w(v)$ and $\mathrm{dist}(v, S) = \min_{u \in S} \mathrm{dist}(v, u)$. For the sake of completeness, we define $\mathrm{dist}(v, \emptyset) = \infty$.
Intersection graph.
Let $G_\tau = (V, E_\tau)$ denote the intersection graph of $V$ for distance threshold $\tau$, i.e., the graph with node set $V$ and edge set
$$E_\tau = \{ (u, v) : u \ne v \text{ and } \mathrm{dist}(u, v) < \tau \}.$$
This graph is well-defined for any metric space and distance threshold $\tau \ge 0$. Note that for any independent set $I$ of $G_\tau$ and pair of distinct nodes $u, v \in I$, we have $\mathrm{dist}(u, v) \ge \tau$.
Definition 2.1.
This work focuses on the min-distance diversity reward function
$$\mathrm{div}(S) = \min_{u, v \in S : u \ne v} \mathrm{dist}(u, v).$$
This reward function encourages selecting a well-separated set of points (generalizing the notion of data deduplication). To make $\mathrm{div}(S)$ well-defined when $S$ is empty or a singleton, we set the diversity of such sets to be the diameter of the whole ground set $V$, implying that it is always beneficial to select at least two points to maximize the value of the selected set.
For any $\lambda \ge 0$, we define the objective function
$$f(S) = \frac{1}{k} \sum_{v \in S} w(v) + \lambda \cdot \mathrm{div}(S), \qquad (1)$$
which allows us to control the trade-off between the average utility and the diversity of the set $S$. The goal of the min-distance diverse data summarization problem (MDDS) is to maximize $f(S)$ subject to a cardinality constraint $|S| \le k$:
$$\mathrm{OPT} = \max_{S \subseteq V : |S| \le k} f(S).$$
We let OPT denote this optimal objective value.
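As a concrete illustration of Definition 2.1 and Equation 1, the following is a minimal Python sketch of the diversity term and objective; the helper names and the Euclidean-distance default are our own assumptions rather than part of the formal definition.

```python
import itertools
import math

def euclidean(u, v):
    return math.dist(u, v)

def diversity(points, dist=euclidean, default=0.0):
    """div(S): minimum pairwise distance over distinct points; falls back to
    `default` (e.g., the diameter of the ground set) when |S| <= 1."""
    if len(points) <= 1:
        return default
    return min(dist(u, v) for u, v in itertools.combinations(points, 2))

def objective(points, weights, k, lam, dist=euclidean, default=0.0):
    """f(S) = (1/k) * sum_{v in S} w(v) + lambda * div(S), as in Equation (1)."""
    return sum(weights) / k + lam * diversity(points, dist, default)
```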
Simple 1/2-approximation algorithm.
Equation 1 has two parts, each of which is easy to optimize in isolation. To maximize the first term, we simply select the $k$ heaviest points. In contrast, to maximize the $\mathrm{div}(S)$ term, we select two points whose distance is the diameter of $V$. Outputting the better of these two candidate subsets gives a $1/2$-approximation guarantee. To see this, observe that
$$\mathrm{OPT} \le \max_{|S| \le k} \frac{1}{k} w(S) + \lambda \max_{|S| \le k} \mathrm{div}(S) \le 2 \max\Big\{ \frac{1}{k} w(S_{\mathrm{top}}), \; \lambda \, \mathrm{div}(\{u^*, v^*\}) \Big\} \le 2 \cdot \mathrm{ALG},$$
where $\mathrm{div}$ is defined in Definition 2.1, $S_{\mathrm{top}}$ is the set of $k$ heaviest points, and $\{u^*, v^*\}$ realizes the diameter of $V$. Our main goals are to improve over this simple baseline and prove complementary hardness results.
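This baseline can be made concrete with the following self-contained sketch (hypothetical helper names; brute-force diameter computation for simplicity): it returns the better of the $k$ heaviest points and the diameter pair.

```python
import itertools

def simple_baseline(points, weights, k, lam, dist):
    """Return the better of (a) the k heaviest points and (b) the diameter pair."""
    n = len(points)
    # Brute-force diameter of the ground set (O(n^2) distance evaluations).
    d_max, pair = max(((dist(points[i], points[j]), [i, j])
                       for i, j in itertools.combinations(range(n), 2)),
                      key=lambda t: t[0])

    def value(idx):  # objective f from Equation (1), with div(singleton) = d_max
        if len(idx) > 1:
            div = min(dist(points[i], points[j]) for i, j in itertools.combinations(idx, 2))
        else:
            div = d_max
        return sum(weights[i] for i in idx) / k + lam * div

    heaviest = sorted(range(n), key=lambda i: weights[i], reverse=True)[:k]
    return max(heaviest, pair, key=value)
```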
3 Algorithm
We now present the GIST algorithm, which achieves a $(2/3 - \varepsilon)$-approximation for the MDDS problem. It works by repeatedly calling the GreedyIndependentSet heuristic to find an independent set of the intersection graph $G_\tau$ for various distance thresholds $\tau$. The subroutine GreedyIndependentSet sorts the points in non-increasing order of weight, breaking ties arbitrarily. It starts with an empty set of selected points, and then it scans the points according to the sorted order. For each point, GreedyIndependentSet selects the point as long as its distance to the set of points selected so far is at least the threshold $\tau$ and the number of selected points does not exceed $k$.
The GIST algorithm computes multiple candidate solutions and returns the one with the maximum objective value $f(S)$. It starts by picking the $k$ points with the largest weights as the initial solution $S$, which can be done by calling GreedyIndependentSet with the distance threshold set to zero. We then consider a set of distance thresholds as follows. Let $d_{\max}$ be the diameter of the ground set $V$, i.e., $d_{\max} = \max_{u, v \in V} \mathrm{dist}(u, v)$; the two points realizing $d_{\max}$ are also kept as a candidate solution (Lines 3–5). Define the set of distance thresholds $T = \{ (d_{\max}/2)\,(1 + \varepsilon)^{-i} : 0 \le i \le \lceil \log_{1+\varepsilon}(1/\varepsilon) \rceil \}$. For each $t \in T$, we call the subroutine $\mathrm{GreedyIndependentSet}(k, t)$ to find a set of size at most $k$. If the newly generated set has a larger objective value than our best solution so far, then we update $S$. After considering all distance thresholds in $T$, the highest-value solution among all candidates is the output of the GIST algorithm.
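The following Python sketch illustrates GreedyIndependentSet and the GIST outer loop described above; the helper names and the exact threshold grid are illustrative assumptions rather than a reference implementation.

```python
import itertools
import math

def greedy_independent_set(points, weights, k, tau, dist):
    """Scan points in non-increasing weight order; keep a point if it is at
    distance >= tau from everything selected so far, stopping at k points."""
    selected = []
    for i in sorted(range(len(points)), key=lambda j: weights[j], reverse=True):
        if len(selected) == k:
            break
        if all(dist(points[i], points[j]) >= tau for j in selected):
            selected.append(i)
    return selected

def gist(points, weights, k, lam, dist, eps=0.1):
    n = len(points)
    d_max, diam_pair = max(((dist(points[i], points[j]), [i, j])
                            for i, j in itertools.combinations(range(n), 2)),
                           key=lambda t: t[0])

    def value(idx):  # objective f(S) from Equation (1)
        div = (min(dist(points[i], points[j]) for i, j in itertools.combinations(idx, 2))
               if len(idx) > 1 else d_max)
        return sum(weights[i] for i in idx) / k + lam * div

    # Candidate 1: the k heaviest points (threshold 0); Candidate 2: the diameter pair.
    candidates = [greedy_independent_set(points, weights, k, 0.0, dist), diam_pair]
    # Geometric grid of thresholds (one plausible choice; the paper's grid may differ).
    num_steps = max(1, math.ceil(math.log(1.0 / eps) / math.log(1.0 + eps)))
    candidates += [greedy_independent_set(points, weights, k,
                                          (d_max / 2) * (1 + eps) ** (-i), dist)
                   for i in range(num_steps + 1)]
    return max(candidates, key=value)
```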
Theorem 3.1.
For any $\varepsilon > 0$, GIST outputs a set $S \subseteq V$ with $|S| \le k$ and $f(S) \ge (2/3 - \varepsilon) \cdot \mathrm{OPT}$.
The main building block in our design and analysis of GIST is the following lemma, which is inspired by the greedy $2$-approximation algorithm for metric $k$-center.
Lemma 3.2.
Let $X$ be a max-weight independent set of the intersection graph $G_\tau$ of size at most $k$. If $S$ is the output of $\mathrm{GreedyIndependentSet}(k, t)$, for any threshold $t \le \tau / 2$, then $w(S) \ge w(X)$.
Proof.
Let $s_1, \dots, s_m$ be the points in $S$ in the order that GreedyIndependentSet selected them. Let $B_i = \{ v \in V : \mathrm{dist}(v, s_i) < t \}$ be the points in $V$ contained in the radius-$t$ open ball around $s_i$. First, we show that each $B_i$ contains at most one point in $X$. If this is not true, then some $B_i$ contains two different points $x, y \in X$. Since $\mathrm{dist}$ is a metric, this means
$$\mathrm{dist}(x, y) \le \mathrm{dist}(x, s_i) + \mathrm{dist}(s_i, y) < 2t \le \tau,$$
which contradicts the assumption that $X$ is an independent set of $G_\tau$. Note that it is possible to have $B_i \cap B_j \ne \emptyset$, for $i \ne j$, since these balls consider all points in $V$.
Now let $C_i \subseteq X$ be the set of uncovered (by the open balls) points of $X$ that become covered when GreedyIndependentSet selects $s_i$. Concretely, $C_i = X \cap B_i \setminus (B_1 \cup \dots \cup B_{i-1})$. Each $C_i$ contains at most one point in $X$ since $C_i \subseteq B_i$. Moreover, if $C_i = \{x\}$, then $w(s_i) \ge w(x)$ because the points are sorted in non-increasing order and selected if uncovered.
Let $C = C_1 \cup \dots \cup C_m$ be the set of points of $X$ covered by the algorithm. For each point $x \in C$, there is exactly one covering set $C_i$ corresponding to it. It follows that
$$\sum_{x \in C} w(x) \le \sum_{i : C_i \ne \emptyset} w(s_i). \qquad (2)$$
It remains to account for the points in $X \setminus C$. Note that if we have any such points, then $|S| = k$ since the points in $X \setminus C$ are uncovered at the end of the algorithm. Further, for any $s_i \in S$ and $x \in X \setminus C$, we have $w(s_i) \ge w(x)$ since $s_i$ was selected and the points are sorted by non-increasing weight. Therefore, we can assign each $x \in X \setminus C$ to a unique $s_i$ with $C_i = \emptyset$ such that $w(s_i) \ge w(x)$. It follows that
$$\sum_{x \in X \setminus C} w(x) \le \sum_{i : C_i = \emptyset} w(s_i). \qquad (3)$$
Adding the two sums together in Equation 2 and (3) gives $w(X) \le w(S)$, which completes the proof. ∎
Equipped with this bicriteria approximation guarantee, we can now analyze the approximation ratio of the GIST algorithm.
Proof of Theorem 3.1.
Let $d^* = \mathrm{div}(S^*)$ be the minimum distance between two distinct points in an optimal solution $S^*$. There are two cases: $d^* < \varepsilon \cdot d_{\max}$ or $d^* \ge \varepsilon \cdot d_{\max}$. For the first case, outputting the $k$ heaviest points (Line 2) achieves a $(1 - \varepsilon)$-approximation. To see this, observe that
$$\lambda d^* < \varepsilon \lambda d_{\max} \le \varepsilon \cdot \mathrm{OPT},$$
since the two points realizing the diameter witness $\mathrm{OPT} \ge \lambda d_{\max}$.
The sum of the $k$ heaviest weights upper bounds $w(S^*)$, so it follows that
$$\mathrm{ALG} \ge \frac{1}{k} \sum_{v \in S_{\mathrm{top}}} w(v) \ge \frac{1}{k} w(S^*) = \mathrm{OPT} - \lambda d^* > (1 - \varepsilon)\, \mathrm{OPT} \ge \Big( \frac{2}{3} - \varepsilon \Big) \mathrm{OPT}.$$
Therefore, we focus on the second case where $d^* \ge \varepsilon \cdot d_{\max}$. It follows that GIST considers a threshold $t \in T$ such that
$$\frac{d^*}{2(1 + \varepsilon)} \le t \le \frac{d^*}{2}.$$
Using Lemma 3.2 with $\tau = d^*$ (note that $S^*$ is an independent set of $G_{d^*}$ of size at most $k$ and $t \le d^*/2$), $\mathrm{GreedyIndependentSet}(k, t)$ outputs a set $S_t$ such that
$$f(S_t) = \frac{1}{k} w(S_t) + \lambda \, \mathrm{div}(S_t) \ge \frac{1}{k} w(S^*) + \frac{\lambda d^*}{2(1 + \varepsilon)}. \qquad (4)$$
The potential max-diameter update to $S$ on Line 5 gives us
$$\mathrm{ALG} \ge \lambda d_{\max} \ge \lambda d^*. \qquad (5)$$
Putting Equation 4 and (5) together, for any $\theta \in [0, 1]$, it holds that
$$\mathrm{ALG} \ge \theta \Big( \frac{1}{k} w(S^*) + \frac{\lambda d^*}{2(1+\varepsilon)} \Big) + (1 - \theta) \lambda d^* = \theta \cdot \frac{1}{k} w(S^*) + \Big( 1 - \theta + \frac{\theta}{2(1+\varepsilon)} \Big) \lambda d^*.$$
To maximize the approximation ratio as $\varepsilon \to 0$, solve $\theta = 1 - \theta/2$ to get $\theta = 2/3$. It follows that
$$\mathrm{ALG} \ge \frac{2}{3} \cdot \frac{1}{k} w(S^*) + \frac{2 + \varepsilon}{3(1 + \varepsilon)} \lambda d^* \ge \Big( \frac{2}{3} - \varepsilon \Big) \Big( \frac{1}{k} w(S^*) + \lambda d^* \Big) = \Big( \frac{2}{3} - \varepsilon \Big) \mathrm{OPT}, \qquad (6)$$
which completes the proof. ∎
Remark 3.3.
We can modify GIST to achieve an exact $2/3$-approximation by trying all pairwise distance thresholds in $\{ \mathrm{dist}(u, v) / 2 : u, v \in V \}$. This requires $O(n^2)$ calls to GreedyIndependentSet compared to $|T| = O(\varepsilon^{-1} \log(1/\varepsilon))$, but otherwise the same analysis holds for $\varepsilon = 0$.
Finally, we show how to implement GIST in nearly linear time for low-dimensional Euclidean metrics using cover trees for exact nearest-neighbor queries (Beygelzimer et al., 2006; Izbicki and Shelton, 2015) and a $(1 - \varepsilon)$-approximation algorithm for point-set diameters (Barequet and Har-Peled, 2001; Imanparast et al., 2018).
Theorem 3.4.
For $V \subseteq \mathbb{R}^d$ with the Euclidean metric, GIST runs in time quadratic in $n$. Furthermore, for fixed dimension $d$, the running time can be improved to nearly linear in $n$.
We give the proof in Section A.1 and note that the key property is that cover trees allow us to insert a point into a nearest-neighbor index and find a nearest neighbor to a query point in $2^{O(\dim)} \cdot O(\log n)$ time, where $\dim$ is the doubling dimension of $d$-dimensional Euclidean space.
4 Hardness of approximation
We start by summarizing our hardness results for the MDDS problem. Assuming $\mathrm{P} \ne \mathrm{NP}$, we prove that for any constant $\varepsilon > 0$:
1. There is no polynomial-time $(2/3 + \varepsilon)$-approximation algorithm for general metric spaces.
2. There is no polynomial-time $(3/4 + \varepsilon)$-approximation algorithm for general metric spaces, even if $k = \Omega(n)$.
3. MDDS is APX-complete for the Euclidean metric, i.e., there is no polynomial-time approximation scheme (PTAS) for this objective.
4.1 General metric spaces
Our first result builds on the hardness of approximating the size of the maximum clique, and it tightly complements the $2/3$-approximation guarantee that we can achieve with GIST.
Theorem 4.1.
For any $\varepsilon > 0$, there is no polynomial-time $(2/3 + \varepsilon)$-approximation algorithm for the MDDS problem, unless $\mathrm{P} = \mathrm{NP}$.
Proof.
First, recall that a clique is a subset of vertices in an undirected graph such that there is an edge between every pair of its vertices. Håstad (1996) and Zuckerman (2006) showed that the maximum clique problem does not admit an $n^{1 - \delta}$-approximation for any constant $\delta > 0$, unless $\mathrm{P} = \mathrm{NP}$. This implies that there is no constant-factor approximation algorithm for maximum clique. In other words, for any constant $c \in (0, 1)$, given a graph $G$ and a threshold integer value $k$, it is NP-hard to distinguish between the following two cases:
• YES instance: Graph $G$ has a clique of size $k$.
• NO instance: Graph $G$ does not have a clique of size greater than $c \cdot k$.
We reduce this instance of the maximum clique decision problem to MDDS with objective function (1) as follows. Represent each vertex of graph $G$ with a point in our ground set $V$. The distance between a pair of points is $2$ if there is an edge between their corresponding vertices in $G$, and it is $1$ otherwise.
Use the same threshold value of $k$ (in the YES and NO instances above) for the cardinality constraint on set $S$, and set each weight $w(v) = 1$. In a YES instance, selecting a clique of size $k$ as set $S$ results in the maximum possible value of the objective:
$$\mathrm{OPT} = \frac{1}{k} \sum_{v \in S} w(v) + \lambda \, \mathrm{div}(S) = 1 + 2\lambda. \qquad (7)$$
In a NO instance, the best objective value that can be achieved is the maximum of the following two scenarios: (a) selecting $k$ points with minimum distance $1$, or (b) selecting at most $c \cdot k$ vertices forming a clique with minimum distance $2$. The maximum value obtained by any polynomial-time algorithm is then
$$\mathrm{ALG} \le \max\{ 1 + \lambda, \; c + 2\lambda \}.$$
These two terms become equal for $\lambda = 1 - c$. Thus, the gap between the maximum value any algorithm can achieve in the NO case and the optimum value in the YES case is
$$\frac{1 + \lambda}{1 + 2\lambda} = \frac{2 - c}{3 - 2c}.$$
To complete the proof, it suffices to show that the ratio above is at most $2/3 + \varepsilon$. We separate the $2/3$ term as follows:
$$\frac{2 - c}{3 - 2c} = \frac{2}{3} + \frac{c}{3(3 - 2c)}.$$
Therefore, we must choose a value of $c$ satisfying $\frac{c}{3(3 - 2c)} \le \varepsilon$. Since $c < 3/2$, the denominator is positive. Equivalently, we want $c$ to satisfy:
$$c \, (1 + 6\varepsilon) \le 9 \varepsilon.$$
By setting $c = \frac{9\varepsilon}{1 + 6\varepsilon}$, we satisfy the required inequality and achieve the inapproximability gap of $2/3 + \varepsilon$ in the theorem statement. ∎
4.2 Large subsets
In many data curation problems, we are interested in instances of MDDS where $k = \Omega(n)$. Even in this restricted case, we can show hardness of approximation.
Theorem 4.2.
For any $\varepsilon > 0$, there is no polynomial-time $(3/4 + \varepsilon)$-approximation algorithm for the MDDS problem, even if $k = \Omega(n)$, unless $\mathrm{P} = \mathrm{NP}$.
The proof of this theorem uses the vertex cover instance constructed in the proof of Håstad (2001, Theorem 8.1) and is given in Section B.1.
4.3 Euclidean metrics
Our final result is specific to the Euclidean metric, i.e., $V \subseteq \mathbb{R}^d$ and we consider the $\ell_2$ distance between points. It builds on a result of Alimonti and Kann (2000), which says that the size of a maximum independent set in a bounded-degree graph cannot be approximated to within a factor of $1 - \delta_0$, for some constant $\delta_0 > 0$, unless $\mathrm{P} = \mathrm{NP}$. In other words, it does not admit a PTAS.
Lemma 4.3 (Alimonti and Kann (2000, Theorem 3.2)).
The maximum independent set problem for graphs with degree at most $3$ is APX-complete.
Similar to our reduction from maximum clique in Section 4.1 for general metric spaces, this construction uses a node embedding function to encode graph adjacency in Euclidean space.
Lemma 4.4.
Let $G = (V, E)$ be a simple undirected graph with $|V| = n$, $|E| = m$, and maximum degree $\Delta$. There exists an embedding $f \colon V \to \mathbb{R}^{m + n}$ such that if $(u, v) \in E$ then
$$\| f(u) - f(v) \|_2 \le \sqrt{2 - \frac{2}{\Delta + 1}},$$
and if $(u, v) \notin E$ then $\| f(u) - f(v) \|_2 = \sqrt{2}$.
Theorem 4.5.
The MDDS problem is APX-complete for Euclidean metrics.
We sketch the main idea of our reduction and defer the complete proofs of Lemma 4.4 and Theorem 4.5 to Appendix B. Let $\mathcal{I}(G)$ be the set of independent sets of $G$. A set $S$ of embedded nodes (with at least two nodes) of a graph $G$ with maximum degree $\Delta$ has the following property:
• If $S \in \mathcal{I}(G)$: $\mathrm{div}(f(S)) = \sqrt{2}$.
• If $S \notin \mathcal{I}(G)$: $\mathrm{div}(f(S)) \le \sqrt{2 - \frac{2}{\Delta + 1}}$.
This gap between the two cases is at least $\sqrt{2} - \sqrt{3/2}$ for $\Delta \le 3$ (i.e., a universal constant) for all such graphs, and it allows us to show that MDDS inherits the APX-hardness of bounded-degree maximum independent set.
5 Experiments
5.1 Warm-up: Synthetic dataset
Setup.
We start with a simple dataset of weighted random points to see the shortcomings of existing methods for MDDS. For a fixed dimension $d$, we generate $n$ points $x_1, \dots, x_n$, where each $x_i \sim \mathcal{N}(0, I_d)$ and each point is assigned an i.i.d. uniform continuous weight $w(x_i) \sim U(0, 1)$.
We consider three baseline algorithms: random, simple, and greedy. The random algorithm chooses a random set of $k$ points, permutes them, and returns the best prefix. simple is the $1/2$-approximation algorithm in Section 2. greedy builds a set one element at a time by selecting the point $x$ that maximizes $f(S_{t-1} \cup \{x\})$ at each step $t \le k$. Then it returns the best prefix $\arg\max_t f(S_t)$, as this sequence of objective values is not necessarily monotone. A sketch of greedy is given below.
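For concreteness, here is a small NumPy sketch of the greedy baseline under the setup above (our own helper names; the objective follows Equation 1 and the singleton convention of Definition 2.1):

```python
import numpy as np

def greedy_baseline(X, w, k, lam):
    """Add the point that maximizes f(S + {x}) at each step, then return the best prefix.
    Assumes k <= n."""
    n = len(X)
    d_max = max(np.linalg.norm(X[i] - X[j]) for i in range(n) for j in range(i + 1, n))

    def f(S):  # objective from Equation (1); div(singleton) = diameter of the ground set
        div = (min(np.linalg.norm(X[i] - X[j]) for a, i in enumerate(S) for j in S[a + 1:])
               if len(S) > 1 else d_max)
        return sum(w[i] for i in S) / k + lam * div

    S, best = [], []
    for _ in range(k):
        remaining = [i for i in range(n) if i not in S]
        S = S + [max(remaining, key=lambda i: f(S + [i]))]
        if not best or f(S) > f(best):
            best = list(S)  # keep the best prefix seen so far
    return best
```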

Results.
We plot the values of $f(S)$ for our baseline algorithms and GIST for all $k$ in Figure 1. We set $\lambda$ to balance the two competing objective terms (which are easy to optimize in isolation); if $x_i, x_j \sim \mathcal{N}(0, I_d)$ are i.i.d., then $\mathbb{E}[\| x_i - x_j \|_2^2] = 2d$, so $\mathbb{E}[\| x_i - x_j \|_2] \approx \sqrt{2d}$. There is a clear separation in solution quality for each dimension $d$ once $k$ is sufficiently large. Note that these objective values are not directly comparable as a function of $d$ due to the normalizing factor (hence the discontinuities), but both terms in the objective decrease on average in $k$ for a fixed $d$: the mean utility and the min distance of the selected points. GIST noticeably outperforms simple, greedy, and random. The poor performance of greedy is somewhat surprising since it is normally competitive for submodular-style objectives. simple also outperforms greedy for all but the extreme values of $k$, i.e., $k$ very small or close to $n$, but is always worse relative to GIST.
5.2 Image classification
The following data sampling experiment compares the top-1 image classification accuracy achieved by different single-shot subset selection algorithms.
Setup.
We use the standard vision dataset ImageNet (Russakovsky et al., 2015), which contains ~1.3 million images and 1000 classes. We select 10% of the data points at random and use them to train an initial ResNet-56 model (He et al., 2016). Then we use this initial model to compute an uncertainty value and a 2048-dimensional embedding for each example. The uncertainty value of an example $x$ is $1 - p(\hat{y} \mid x)$, where $\hat{y}$ is the best class label for $x$ according to the initial model. Finally, we use the fast maximum inner product search of Guo et al. (2020) to build a nearest-neighbor graph in the embedding space using cosine distance. We present all of the model training hyperparameters in Appendix C.
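To illustrate the scoring step, the following sketch shows one way to compute the uncertainty and margin values from the initial model's class probabilities (the array layout is an assumption on our part):

```python
import numpy as np

def uncertainty_and_margin(probs):
    """probs: (num_examples, num_classes) softmax outputs of the initial model.
    Returns 1 - p(best class) as the uncertainty and p(best) - p(second best) as the margin."""
    top2 = np.sort(probs, axis=1)[:, -2:]   # two largest probabilities per example
    uncertainty = 1.0 - top2[:, 1]          # 1 - p(y_hat | x)
    margin = top2[:, 1] - top2[:, 0]        # small margin = hard to classify
    return uncertainty, margin
```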
We compare GIST with three state-of-the-art benchmarks:
• random: We select samples from the dataset uniformly at random (and without replacement). While simple and lightweight, this approach implicitly encodes diversity in many settings and provides good solutions.
• margin (Roth and Small, 2006): Margin sampling selects the top-$k$ points based on how hard they are to classify, i.e., using the margin scores $p(\hat{y}_1 \mid x) - p(\hat{y}_2 \mid x)$, which measure the difference between the probabilities of the best and second-best class labels for an example.
• $k$-center (Sener and Savarese, 2018): We run the classic greedy algorithm for $k$-center on the nearest-neighbor graph (a sketch is given after this list). The distance between non-adjacent nodes is implicitly assumed to be very large, which is acceptable in the large-budget regime considered here.
For this task, GIST uses the uncertainty values as its weights, the same nearest-neighbor graph as $k$-center, and a fixed trade-off parameter $\lambda$.
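For reference, here is a minimal sketch of the classic greedy (farthest-first) $k$-center selection used as a baseline, written for a dense distance matrix rather than the sparse nearest-neighbor graph from the setup (missing edges would be treated as a large constant distance):

```python
import numpy as np

def greedy_k_center(D, k, first=0):
    """Farthest-first traversal: repeatedly add the point farthest from the current centers.
    D is an (n, n) distance matrix; `first` seeds the traversal."""
    centers = [first]
    closest = D[first].copy()            # distance of each point to its nearest center
    for _ in range(k - 1):
        nxt = int(np.argmax(closest))    # farthest point from the selected centers
        centers.append(nxt)
        closest = np.minimum(closest, D[nxt])
    return centers
```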
Results.
We run each algorithm with cardinality constraint $k$ on the full dataset to get a subset of examples that we use to train a new ResNet-56 model. We report the top-1 classification accuracy of these models in Table 1.
$k$ (% of data) | random | margin | $k$-center | GIST
---|---|---|---|---|
30 | 65.02 | 67.64 | 67.32 | 67.70 |
40 | 68.92 | 70.45 | 70.98 | 71.20 |
50 | 70.86 | 73.41 | 72.51 | 73.67 |
60 | 72.39 | 74.70 | 74.78 | 75.16 |
70 | 72.92 | 75.57 | 75.62 | 75.85 |
80 | 74.36 | 75.39 | 76.26 | 76.49 |
90 | 75.10 | 76.11 | 76.02 | 76.50 |
Conclusion
This work proposes a novel subset selection problem called MDDS that combines the total utility of the selected points with the min-distance diversity objective $\mathrm{div}(S)$. We design and analyze the GIST algorithm, which achieves a $2/3$-approximation guarantee for general metric spaces by solving a series of maximum independent set instances on geometric intersection graphs with the GreedyIndependentSet bicriteria approximation algorithm. We also give a nearly linear-time implementation of GIST for points in low-dimensional Euclidean space. We complement our algorithm with a $(2/3 + \varepsilon)$-hardness of approximation, for any $\varepsilon > 0$, in general metric spaces. We also prove hardness of approximation in the practical cases of Euclidean metrics or $k = \Omega(n)$. Finally, we present an empirical study showing that GIST steadily outperforms baseline methods for MDDS (in particular greedy) on a simple set of Gaussian points with random weights. We conclude with an experiment comparing the top-1 image classification accuracy achieved by GIST and three other state-of-the-art data sampling methods on ImageNet.
References
- Addanki et al. (2022) R. Addanki, A. McGregor, A. Meliou, and Z. Moumoulidou. Improved approximation and scalability for fair max-min diversification. In Proceedings of the 25th International Conference on Database Theory, pages 7:1–7:21, 2022.
- Alimonti and Kann (2000) P. Alimonti and V. Kann. Some APX-completeness results for cubic graphs. Theoretical Computer Science, 237(1-2):123–134, 2000.
- Anil et al. (2023) R. Anil, S. Borgeaud, Y. Wu, J.-B. Alayrac, J. Yu, R. Soricut, J. Schalkwyk, A. M. Dai, A. Hauth, et al. Gemini: A family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023.
- Ash et al. (2020) J. T. Ash, C. Zhang, A. Krishnamurthy, J. Langford, and A. Agarwal. Deep batch active learning by diverse, uncertain gradient lower bounds. In Proceedings of the 8th International Conference on Learning Representations, 2020.
- Barequet and Har-Peled (2001) G. Barequet and S. Har-Peled. Efficiently approximating the minimum-volume bounding box of a point set in three dimensions. Journal of Algorithms, 38(1):91–109, 2001.
- Beygelzimer et al. (2006) A. Beygelzimer, S. Kakade, and J. Langford. Cover trees for nearest neighbor. In Proceedings of the 23rd International Conference on Machine Learning, pages 97–104, 2006.
- Bhaskara et al. (2016) A. Bhaskara, M. Ghadiri, V. Mirrokni, and O. Svensson. Linear relaxations for finding diverse elements in metric spaces. Advances in Neural Information Processing Systems, 29:4098–4106, 2016.
- Citovsky et al. (2021) G. Citovsky, G. DeSalvo, C. Gentile, L. Karydas, A. Rajagopalan, A. Rostamizadeh, and S. Kumar. Batch active learning at scale. Advances in Neural Information Processing Systems, 34:11933–11944, 2021.
- Guo et al. (2020) R. Guo, P. Sun, E. Lindgren, Q. Geng, D. Simcha, F. Chern, and S. Kumar. Accelerating large-scale inference with anisotropic vector quantization. In International Conference on Machine Learning, pages 3887–3896. PMLR, 2020.
- Håstad (1996) J. Håstad. Clique is hard to approximate within $n^{1-\varepsilon}$. In Proceedings of the 37th Conference on Foundations of Computer Science, pages 627–636, 1996.
- Håstad (2001) J. Håstad. Some optimal inapproximability results. Journal of the ACM, 48(4):798–859, 2001.
- He et al. (2016) K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
- Imanparast et al. (2018) M. Imanparast, S. N. Hashemi, and A. Mohades. An efficient approximation for point-set diameter in higher dimensions. In Proceedings of the 30th Canadian Conference on Computational Geometry, pages 72–77, 2018.
- Izbicki and Shelton (2015) M. Izbicki and C. Shelton. Faster cover trees. In International Conference on Machine Learning, pages 1162–1170. PMLR, 2015.
- Kazhamiaka et al. (2021) F. Kazhamiaka, M. Zaharia, and P. Bailis. Challenges and opportunities for autonomous vehicle query systems. In Proceedings of the Conference on Innovative Data Systems Research, 2021.
- Kirsch et al. (2019) A. Kirsch, J. Van Amersfoort, and Y. Gal. BatchBALD: Efficient and diverse batch acquisition for deep Bayesian active learning. Advances in Neural Information Processing Systems, 32, 2019.
- Mahabadi and Trajanovski (2023) S. Mahabadi and S. Trajanovski. Core-sets for fair and diverse data summarization. Advances in Neural Information Processing Systems, 2023.
- Meinl et al. (2011) T. Meinl, C. Ostermann, and M. R. Berthold. Maximum-score diversity selection for early drug discovery. Journal of Chemical Information and Modeling, 51(2):237–247, 2011.
- Moumoulidou et al. (2020) Z. Moumoulidou, A. McGregor, and A. Meliou. Diverse data selection under fairness constraints. In Proceedings of the 24th International Conference on Database Theory, pages 13:1–13:25, 2020.
- Roth and Small (2006) D. Roth and K. Small. Margin-based active learning for structured output spaces. In Proceedings of the 17th European Conference on Machine Learning, 2006.
- Russakovsky et al. (2015) O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115:211–252, 2015.
- Sener and Savarese (2018) O. Sener and S. Savarese. Active learning for convolutional neural networks: A core-set approach. In Proceedings of the 6th International Conference on Learning Representations, 2018.
- Wei et al. (2015) K. Wei, R. Iyer, and J. Bilmes. Submodularity in data subset selection and active learning. In International Conference on Machine Learning, pages 1954–1963. PMLR, 2015.
- Zhdanov (2019) F. Zhdanov. Diverse mini-batch active learning. arXiv preprint arXiv:1901.05954, 2019.
- Zuckerman (2006) D. Zuckerman. Linear degree extractors and the inapproximability of max clique and chromatic number. In Proceedings of the 38th Annual ACM Symposium on Theory of Computing, pages 681–690, 2006.
Appendix A Missing analysis for Section 3
A.1 Proof of Theorem 3.4
Theorem 3.4 (restated).
Proof.
First, observe that we can compute and store all pairwise distances in $O(n^2 d)$ time and $O(n^2)$ space. This is useful in the naive implementation because we can then look up distances in constant time in any subroutine.
In the nearly linear-time implementation, we skip this precomputation step and use cover trees for fast exact nearest-neighbor search in low-dimensional Euclidean space (Beygelzimer et al., 2006): inserting a point into the index and finding a nearest neighbor to a query point each take $2^{O(\dim)} \cdot O(\log n)$ time, where $\dim$ is the doubling dimension of $d$-dimensional Euclidean space.
Thus, we can speed up the following subroutines in Algorithm 1 to have near-linear running times in a fixed dimension:
• Computing $\mathrm{div}(S)$: near-linear time after discarding duplicate points.
• GreedyIndependentSet: near-linear time per call.
• Evaluating $f(S)$: near-linear time.
Taking advantage of the $\varepsilon$ error in our $(2/3 - \varepsilon)$-approximation guarantee, we use the approximate point-set diameter algorithm in Imanparast et al. (2018, Theorem 1) to compute a $(1 - \varepsilon)$-approximate diameter, together with two points realizing it. This speeds up the naive running time of Lines 3–4 in Algorithm 1 as follows:
• Computing a $(1 - \varepsilon)$-approximate $d_{\max}$: near-linear time for fixed dimension $d$.
Since we now work with a slightly smaller diameter estimate, we need to show that it is compatible with the proof of Theorem 3.1. Concretely, this means modifying Eq. (5) and (6), and observing that the inequality
$$\mathrm{ALG} \ge \lambda (1 - \varepsilon) d_{\max} \ge \lambda (1 - \varepsilon) d^*$$
still holds. Lastly, the number of calls to GreedyIndependentSet (i.e., distance thresholds that GIST tries) is $O(\varepsilon^{-1} \log(1/\varepsilon))$. Putting everything together proves the claim. ∎
Appendix B Missing analysis for Section 4
B.1 Proof of Theorem 4.2
Theorem 4.2 (restated).
Proof.
We use the vertex cover instance constructed in Håstad (2001, Theorem 8.1). For any constant $\delta > 0$, this construction yields a graph $G$ with $n = 4m$ nodes for some integer $m$, such that it is NP-hard to distinguish between the following two cases:
• YES case: Graph $G$ has an independent set of size $(1 - \delta) \cdot m$.
• NO case: Graph $G$ has no independent set of size larger than $(1/2 + \delta) \cdot m$.
We mimic the general structure of the proof of Theorem 4.1. Add a point for each vertex in graph $G$ and set its weight to $1$. Set the distance between two points to $1$ if they share an edge in the graph, and set it to $2$ otherwise. Lastly, let $k = (1 - \delta) m$. Since there are $n = 4m$ nodes in the graph, $k = \Omega(n)$ in this case. In the YES case, objective function (1) can achieve the following value by selecting the independent set of size $k$:
$$\mathrm{OPT} \ge \frac{1}{k} \cdot k + \lambda \cdot 2 = 1 + 2\lambda.$$
In the NO case, the largest value that can be achieved is the best of two scenarios: (a) selecting $k$ points with minimum distance $1$, or (b) selecting an independent set of size at most $(1/2 + \delta) m$ and achieving minimum distance $2$. This results in the following upper bound for the objective:
$$\mathrm{ALG} \le \max\Big\{ 1 + \lambda, \; \frac{1/2 + \delta}{1 - \delta} + 2\lambda \Big\}.$$
The largest hardness gap occurs when the two terms of the expression become equal, which happens at $\lambda = 1 - r$ for $r = \frac{1/2 + \delta}{1 - \delta}$ and yields the upper bound $\mathrm{ALG} \le 2 - r$. The optimum value OPT is lower bounded by
$$\mathrm{OPT} \ge 1 + 2\lambda = 3 - 2r.$$
The desired inapproximability gap of $3/4 + \varepsilon$ in Theorem 4.2 is achieved when the ratio between the upper bound on ALG and the lower bound on OPT drops below it. In other words, we want to set $\delta$ such that the following inequality holds:
$$\frac{2 - r}{3 - 2r} \le \frac{3}{4} + \varepsilon.$$
We set $\delta$ to be a sufficiently small constant as a function of $\varepsilon$ and prove that the above inequality holds. We first note that both the numerator and denominator of the left-hand ratio are positive. Therefore, we can invert both ratios and swap their places to obtain the following equivalent inequality:
$$\frac{3 - 2r}{2 - r} \ge \frac{4}{3 + 4\varepsilon}.$$
As $\delta \to 0$ we have $r \to 1/2$, so the left-hand side tends to $4/3 > \frac{4}{3 + 4\varepsilon}$, and the inequality holds by our choice of $\delta$. ∎
B.2 Proof of Lemma 4.4
Let $G = (V, E)$ be a simple undirected graph. Our goal is to embed the vertices of $G$ into $\mathbb{R}^{m + n}$, for $n = |V|$ and $m = |E|$, in a way that encodes the adjacency structure of $G$. Concretely, we want to construct a function $f \colon V \to \mathbb{R}^{m + n}$ such that:
• $\| f(u) - f(v) \|_2 = \sqrt{2}$ if $(u, v) \notin E$, and
• $\| f(u) - f(v) \|_2 < \sqrt{2}$ if $(u, v) \in E$,
for the largest possible gap between the two cases.
Construction.
First, let $n = |V|$ and $m = |E|$. Augment $G$ by adding a self-loop to each node to get $G' = (V, E')$, where $E' = E \cup \{ (v, v) : v \in V \}$. We embed into $\mathbb{R}^{m + n}$ since each node now has positive degree. Let $\deg'(v)$ be the degree of $v$ in $G'$ and $N'(v)$ be the neighborhood of $v$ in $G'$.
Define a total ordering on $E'$ (e.g., lexicographically by sorted endpoints). Each edge $e \in E'$ corresponds to an index in the embedding dimension $m + n$. We consider the embedding function $f$ that acts as a degree-normalized adjacency vector:
$$f(v)_e = \begin{cases} 1 / \sqrt{\deg'(v)} & \text{if } v \in e, \\ 0 & \text{otherwise.} \end{cases} \qquad (8)$$
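A small sketch of the embedding in Equation 8, assuming the graph is given as a list of undirected edges (each appearing once) over nodes $0, \dots, n-1$; the function and variable names are ours:

```python
import numpy as np

def embed_graph(n, edges):
    """Map each node to its degree-normalized adjacency vector over E' = E + self-loops."""
    loops = [(v, v) for v in range(n)]
    columns = [tuple(sorted(e)) for e in edges] + loops      # one coordinate per edge of G'
    index = {e: i for i, e in enumerate(columns)}
    deg = [1 + sum(1 for e in edges if v in e) for v in range(n)]  # degree in G'
    f = np.zeros((n, len(columns)))
    for v in range(n):
        for e, i in index.items():
            if v in e:
                f[v, i] = 1.0 / np.sqrt(deg[v])
    return f
```

Each row of the returned matrix is a unit vector, and two rows share a nonzero coordinate only when the corresponding nodes are adjacent, so non-adjacent nodes are at distance exactly $\sqrt{2}$.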
Lemma 4.4 (restated).
Proof.
If $(u, v) \notin E$, then we have
$$\| f(u) - f(v) \|_2^2 = \| f(u) \|_2^2 + \| f(v) \|_2^2 = 1 + 1 = 2.$$
This follows because the only index where both embeddings can be nonzero is $e = (u, v)$, if it exists.
Now suppose that $(u, v) \in E$. It follows that
$$\| f(u) - f(v) \|_2^2 = 2 - 2 \langle f(u), f(v) \rangle = 2 - \frac{2}{\sqrt{\deg'(u) \cdot \deg'(v)}} \le 2 - \frac{2}{\Delta + 1}.$$
The previous inequality follows from $\deg'(u), \deg'(v) \le \Delta + 1$. For any $\Delta \ge 1$, we have $\frac{2}{\Delta + 1} > 0$, so it follows that
$$\| f(u) - f(v) \|_2 \le \sqrt{2 - \frac{2}{\Delta + 1}} < \sqrt{2},$$
which completes the proof. ∎
B.3 Proof of Theorem 4.5
Theorem 4.5 (restated).
Proof.
We build on the hardness of approximating the maximum independent set problem in graphs with maximum degree $3$. Alimonti and Kann (2000, Theorem 3.2) showed that this problem is APX-complete, so there exists some constant $\delta_0 > 0$ such that there is no polynomial-time $(1 - \delta_0)$-approximation algorithm unless $\mathrm{P} = \mathrm{NP}$. Hence, given a graph $G$ with maximum degree $3$ and a threshold integer value $k$, it is NP-hard to distinguish between the following two cases:
• YES instance: Graph $G$ has an independent set of size $k$.
• NO instance: Graph $G$ does not have an independent set of size greater than $(1 - \delta_0) \cdot k$.
We reduce this instance of bounded-degree maximum independent set to MDDS with objective function (1) as follows. Embed each node of the graph into Euclidean space using the function $f$ from Lemma 4.4. We use the same threshold value of $k$ (between the YES and NO instances above) for the cardinality constraint on set $S$, and we set each weight $w(v) = 1$. In a YES instance, selecting an independent set of size $k$ as the set $S$ results in the maximum value of objective (1):
$$\mathrm{OPT} = 1 + \lambda \sqrt{2},$$
since $\| f(u) - f(v) \|_2 = \sqrt{2}$ for any two distinct points $u, v \in S$, as there is no edge between $u$ and $v$ in graph $G$.
In a NO instance, the best objective value that can be achieved is the maximum of the following two scenarios: (a) selecting $k$ points with minimum distance at most $\sqrt{2 - \frac{2}{\Delta + 1}} = \sqrt{3/2}$, or (b) selecting at most $(1 - \delta_0) k$ vertices forming an independent set with minimum distance equal to $\sqrt{2}$. The maximum value obtained by any polynomial-time algorithm is then
$$\mathrm{ALG} \le \max\big\{ 1 + \lambda \sqrt{3/2}, \; (1 - \delta_0) + \lambda \sqrt{2} \big\}.$$
These two terms become equal for $\lambda = \delta_0 / (\sqrt{2} - \sqrt{3/2})$. Therefore, the gap between the maximum value any algorithm can achieve in the NO case and the optimum value in the YES case is upper bounded by
$$\frac{1 + \lambda \sqrt{3/2}}{1 + \lambda \sqrt{2}} = 1 - \frac{\lambda (\sqrt{2} - \sqrt{3/2})}{1 + \lambda \sqrt{2}}.$$
Since $\delta_0$ is a constant, $\frac{\lambda (\sqrt{2} - \sqrt{3/2})}{1 + \lambda \sqrt{2}}$ is also a positive constant. This completes the proof of APX-completeness. ∎
Appendix C Missing details for Section 5
Hyperparameters for ImageNet classification.
We generate predictions and embeddings for all points using a coarsely-trained ResNet-56 model (He et al., 2016) trained on a random 10% subset of ImageNet (Russakovsky et al., 2015). We use SGD with Nesterov momentum 0.9 and train for 450/90 epochs. The base learning rate is 0.1 and is reduced by a factor of ten at epochs 5, 30, 69, and 80. We extract the penultimate-layer features to produce 2048-dimensional embeddings of each image.