Jie Gao, Department of Computer Science, Rutgers University, [email protected], https://orcid.org/0000-0001-5083-6082. This work is supported by NSF OAC-1939459, CCF-2118953 and CCF-1934924.
Mayank Goswami, Department of Computer Science, Queens College, City University of New York, [email protected]. This work is supported by US National Science Foundation (NSF) awards CRII-1755791 and CCF-1910873.
Karthik C. S., Department of Computer Science, Rutgers University, [email protected]. This work was supported by Subhash Khot's Simons Investigator Award and by a grant from the Simons Foundation, Grant Number 825876, Awardee Thu D. Nguyen.
Meng-Tsung Tsai, Institute of Information Science, Academia Sinica, [email protected], https://orcid.org/0000-0002-2243-8666. This research was supported in part by the Ministry of Science and Technology of Taiwan under contract MOST grant 109-2221-E-001-025-MY3.
Shih-Yu Tsai, Department of Computer Science, Stony Brook University, [email protected].
Hao-Tsung Yang, School of Informatics, University of Edinburgh, [email protected].
Copyright Jie Gao, Mayank Goswami, Karthik C. S., Meng-Tsung Tsai, Shih-Yu Tsai, and Hao-Tsung Yang.
CCS concepts: Theory of computation → Approximation algorithms analysis; Theory of computation → Computational geometry; Theory of computation → Problems, reductions and completeness.
Obtaining Approximately Optimal and Diverse Solutions via Dispersion
Abstract
There has been a long-standing interest in computing diverse solutions to optimization problems. Motivated by the reallocation of governmental institutions in Sweden, in 1995 J. Krarup [26] posed the problem of finding k edge-disjoint Hamiltonian circuits of minimum total weight, called the peripatetic salesman problem (PSP). Since then, researchers have investigated the complexity of finding diverse solutions to spanning trees, paths, vertex covers, matchings, and more. Unlike the PSP, which has a constraint on the total weight of the solutions, recent work has involved finding diverse solutions that are all optimal.
However, sometimes the space of exact solutions may be too small to achieve sufficient diversity. Motivated by this, we initiate the study of obtaining sufficiently-diverse, yet approximately-optimal solutions to optimization problems. Formally, given an integer k, an approximation factor c, and an instance I of an optimization problem, we aim to obtain a set of k solutions to I that a) are all c-approximately-optimal for I and b) maximize the diversity of the solutions. Finding such solutions, therefore, requires a better understanding of the global landscape of the optimization function.
We show that, given any metric on the space of solutions, and the diversity measure as the sum of pairwise distances between solutions, this problem can be solved by combining ideas from dispersion and multicriteria optimization. We first provide a general reduction to an associated budget-constrained optimization (BCO) problem, where one objective function is to be maximized (resp. minimized) subject to a bound on the second objective function. We then prove that bi-approximations to the BCO can be used to give bi-approximations to the diverse approximately optimal solutions problem with little overhead. We then give the following applications of our result:
• Polynomial time bi-approximation algorithms for diverse c-approximate maximum matchings, and for diverse c-approximate shortest paths.
• Polynomial time bi-approximation, PTAS bi-approximation, and pseudopolynomial bi-approximation algorithms for diverse c-approximate minimum weight bases of a matroid. In particular, this gives us diverse c-approximate minimum spanning trees, advancing a step towards achieving diverse c-approximate TSP tours.
We explore the connection to the field of multiobjective optimization and show that the class of problems to which our result applies includes those for which the associated DUALRESTRICT problem, defined by Papadimitriou and Yannakakis [35] and recently explored by Herzel et al. [24], can be solved in polynomial time.
Along the way, we prove a conditional hardness-of-approximation lower bound of 2 for the problem of k-dispersion in metric spaces, resolving (up to the Exponential Time Hypothesis) the optimality of the greedy farthest point heuristic for dispersion. This also provides evidence of the tightness of the factor 2 for diversity in most of our algorithms.
Keywords: diversity, minimum spanning trees, maximum matchings, shortest paths, travelling salesman problem, dispersion problem
1 Introduction
Techniques for optimization problems focus on obtaining optimal solutions to an objective function and have widespread applications ranging from machine learning, operations research, computational biology, and networks to geophysics, economics, and finance. However, in many scenarios, the optimal solution is not only computationally difficult to obtain, but can also render the system built upon it vulnerable to adversarial attacks. Consider a patrolling agent tasked with monitoring a set of sites in the plane. The most efficient solution (i.e., the one maximizing the frequency of visiting each of the sites) would naturally be to patrol along the tour of shortest length, i.e., the solution to TSP, the Traveling Salesman Problem. (We assume without loss of generality that the optimal TSP tour is combinatorially unique, by a slight perturbation of the distances.) However, an adversary who wants to avoid the patroller can also compute the shortest TSP tour and can design its actions strategically [39]. Similarly, applications utilizing the minimum spanning tree (MST) of a communication network may be affected if an adversary gains knowledge of the network [11]; systems using solutions to a linear program (LP) would be vulnerable if an adversary gains knowledge of the program's objective function and constraints.
One way to address this vulnerability is to use a set of approximately optimal solutions and randomize among them. However, this may not help much if these approximate solutions are combinatorially too "similar" to the optimal solution. For example, all points in a sufficiently small neighborhood of the optimal solution on the LP polytope are approximately optimal, but these solutions differ very little, and an adversary can still effectively carry out its attacks. Similarly, one may use another tree instead of the MST, but if the new tree shares many edges with the MST, the same vulnerability persists. Thus k-best enumeration algorithms ([17, 22, 28, 29, 33]) developed for a variety of problems fall short in this regard.
One of the oldest known formulations is the Peripatetic Salesman Problem (PSP) by Krarup [26], which asks for k edge-disjoint Hamiltonian circuits of minimum total weight in a network. Since then, several researchers have studied computing diverse solutions for a variety of optimization problems [3, 4, 15, 21]. Most of these works are on graph problems, and diversity usually corresponds to the size of the symmetric difference of the edge sets in the solutions. Crucially, almost all of the aforementioned work demands either that every solution individually be optimal, or that the set of solutions in totality (as in the case of the PSP) be optimal. Nevertheless, the space of optimal solutions may be too small to achieve sufficient diversity, and it may even consist of a single (unique) solution. In addition, for NP-complete problems, finding just one optimal solution is already difficult. While some research takes the route of developing FPT algorithms for this setting [4, 16], it seems practical to us to also consider the relaxation to approximately-optimal solutions.
This motivates the problem of finding a set of diverse and approximately optimal solutions, which is the problem considered in this article. The number of solutions k and the desired approximation factor c are provided by the user as input. Working in this larger class gives one more hope of finding diverse solutions, yet every solution retains a guarantee on its quality.
1.1 Our Contributions
We develop approximation algorithms for finding k solutions to a given optimization problem such that the quality of every solution is within a user-given approximation ratio c of the optimal solution and the diversity of these solutions is maximized. Given a metric on the space of solutions to the problem, we consider the diversity measure given by the sum (or, equivalently, average) of pairwise distances between the solutions. Combining ideas from the well-studied problem of dispersion (which we describe next), we reduce the above problem to a budget constrained optimization (BCO) problem.
1.2 Dispersion
Generally speaking, if the optimization problem itself is NP-hard, finding diverse solutions for that problem is also NP-hard (see Proposition 3.5 for more detail). On the other hand, interestingly, even if the original problem is not NP-hard, finding diverse and approximately optimal solutions can still be NP-hard. This is due to the connection of the diversity maximization objective with the general family of problems that consider selecting k elements from a given input set with maximum "dispersion", defined as the max-min distance, max-average distance, and so on.
The dispersion problem has a long history, with many variants both in the metric setting and the geometric setting [14, 27, 38]. For example, finding a subset of size k from an input set of points in a metric space that maximizes the distance between the closest pair, or the sum of distances, of the selected points are both NP-hard problems [1, 37]. For the max-sum dispersion problem, the best known approximation factor is 2 in the metric setting [6, 23], and a better factor is known for the 2D Euclidean setting [37]. It is not known whether there is a PTAS for the max-min dispersion problem.
Conditional hardness. In this paper, we provide new hardness results for the max-sum k-dispersion problem. We show that, assuming a standard complexity theoretic conjecture (the Exponential Time Hypothesis), there is no polynomial time algorithm for the k-dispersion problem in a metric space which approximates it to a factor better than 2 (Theorem 4.3). This provides a tight understanding of the max-sum k-dispersion problem, which has been studied since the 90s [37].
Dispersion in an exponentially-sized space. We make use of the general framework of the 2-approximation algorithm [7, 37] for the max-sum k-dispersion problem, a greedy algorithm where the i-th solution is chosen to be the most distant/diverse one from the first i − 1 solutions. Notice that in our setting there is an important additional challenge: understanding the space within which the approximate solutions live. In all of the problems we study, the total number of solutions can be exponential in the input size. Thus we need a non-trivial way of navigating within this large space and carrying out furthest insertion without considering all points in the space. This is where our reduction to the budget constrained problem comes in.
Self-avoiding dispersion. Furthermore, even after implicitly defining the i-th furthest point insertion via some optimization problem, one needs to ensure that the (farthest, in terms of sum of distances) solution does not turn out to equal one of the previously found solutions, as distinctness is a requirement for the furthest point insertion algorithm. This is an issue one faces because of the implicit nature of the furthest point procedure in the exponential-sized space of solutions: in the metric k-dispersion problem, it was easy to guarantee distinctness, as one only considered the points not yet selected.
1.3 Reduction to Budget Constrained Optimization
Using dispersion, we reduce the diversity computational problem to a budget constrained optimization (BCO) problem, where the budget is an upper (resp. lower) bound if the quality of a solution is described by a minimization (resp. maximization) problem. Recall that the number of solutions k and the approximation factor c are input by the user.
We show how a bi-approximation algorithm for the BCO problem provides a set of diverse, approximately-optimal solutions to the diversity computational problem, losing only a small constant factor in the diversity. This main reduction is described in Theorem 4.8.
The main challenge in transferring the bi-approximation results arises from a technicality that we describe next. Let S_c be the space of c-approximate solutions. A bi-approximation algorithm for the BCO relaxes the budget constraint by some factor α, and hence only promises to return a faraway point in the larger space S_{cα}. Thus bi-approximations of the BCO do not simply give a farthest point insertion in the space of solutions, but return a point in a larger space. Nevertheless, we prove in Lemma 4.12 that in most cases one loses only a small constant factor in the approximation of the diversity.
Once the reduction to BCOs is complete, for diverse approximate matchings, spanning trees, and shortest paths we exploit the special characteristics of the corresponding BCO to solve it optimally. For other problems, such as diverse approximate minimum weight spanning trees and the more general minimum weight bases of a matroid, we utilize known bi-approximations to the BCO to obtain bi-approximations for the diversity problem. For all problems except diverse (unweighted) spanning trees (while an exact algorithm for diverse unweighted spanning trees is known [21], we give a faster approximation here, by a factor involving the inverse of the Ackermann function), our algorithms are the first polynomial time bi-approximations for these problems.
We also connect to the wide literature on multicriteria optimization and show that our result applies to the entire class of problems for which the associated DUALRESTRICT problem (defined by Papadimitriou and Yannakakis [35], and recently studied by Herzel et al. [24]) has a polynomial time solution. We discuss this in more detail after presenting our reduction.
The rest of this paper is organized as follows: we survey related work in Section 2, and formulate the problem in Section 3. In Section 4 we describe our main results on dispersion (Theorem 4.3) and the reduction to the budget constrained optimization problem (Theorem 4.8). Sections 5, 6, 7 and 8 describe four applications of our technique to various problems such as diverse approximate matchings, shortest paths, minimum spanning trees, and minimum weight bases of a matroid. We remark that this list is by no means exhaustive, and we leave finding other interesting optimization problems which are amenable to our approach for future research. We end with open problems in Section 9.
2 Related Work
Recently there has been a surge of interest in the tractability of finding diverse solutions for a number of combinatorial optimization problems, such as spanning trees, minimum spanning trees, paths, shortest paths, matchings, etc. [15, 16, 19, 20, 21]. Most of the existing work focuses on finding diverse optimal solutions. In cases when finding the optimal solution is NP-complete, several works have focused on developing FPT algorithms [4, 16]. Nevertheless, as pointed out in [20], it would be more practical to consider finding a set of diverse "short" paths rather than a set of diverse shortest paths. They show that finding a set of approximately shortest paths with maximum diversity is NP-hard, but leave open the question of developing approximation algorithms, a question that we answer in this paper for several problems. Similarly, the problem of finding diverse maximum matchings was proved to be NP-hard in [15]. We remark that the main difference between our result and previous work is that our algorithms can find a diverse set of c-approximate solutions in polynomial time. If the attained diversity is not sufficient for the application, the user can input a larger c in hopes of increasing it.
Multicriteria Optimization: In this domain, several objective functions are given on a space of solutions. Clearly, there may not be a single solution that is best for all objective functions, and researchers have focused on obtaining Pareto-optimal solutions, i.e., solutions that are non-dominated by any other solution. Put differently, a solution is Pareto-optimal if no other solution has a better cost for all criteria. Since exact Pareto-optimal solutions are hard to find, research has focused on finding ε-Pareto-optimal solutions, which are (1 + ε)-factor approximations of Pareto-optimal solutions. Papadimitriou and Yannakakis [35] showed that, under fairly mild conditions, any multicriteria optimization problem admits an ε-Pareto-optimal set of polynomial cardinality. In terms of being able to find such an ε-Pareto-optimal set, they show that a PTAS (FPTAS) exists for the problem if and only if an associated GAP problem can be solved in polynomial (fully polynomial) time. Very recently, Herzel et al. [24] studied the class of problems for which an FPTAS or PTAS exists for finding ε-Pareto-optimal solutions that are exact in one of the criteria. Clearly such problems are a subset of the ones characterized by GAP. Herzel et al. [24] characterize the condition similarly: a PTAS (FPTAS) exists if and only if an associated DUALRESTRICT problem can be solved in polynomial (fully polynomial) time. For more details we refer the reader to the survey by Herzel et al. [25].
3 Problem Statement
First, we define some notation. We use the definition of optimization problems given in [2], with additional formalism as introduced in [18].
Definition 3.1 (Optimization Problem).
An optimization problem Π is characterized by the following quadruple of objects (I, S, m, goal), where:
• I is the set of instances of Π. In particular, for every n ∈ ℕ, I_n is the set of instances of Π of input size at most n (bits);
• S is a function that associates to any input instance x ∈ I the set S(x) of feasible solutions of x;
• m is the measure function, defined for pairs (x, y) such that x ∈ I and y ∈ S(x). For every such pair (x, y), m(x, y) is a non-negative integer giving the value of the feasible solution y. (We define the measure function only for feasible solutions of an instance; if an algorithm solving the optimization problem outputs a non-feasible solution, the measure simply evaluates to −1 in the case of maximization problems and to +∞ in the case of minimization problems.)
• goal ∈ {max, min} specifies whether Π is a maximization or minimization problem.
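To make the quadruple concrete, here is a minimal Python sketch of Definitions 3.1 and 3.2. The class and member names (`OptimizationProblem`, `is_approx_optimal`, etc.) are our own illustrative choices, not notation from the paper.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class OptimizationProblem:
    """Illustrative encoding of the quadruple (I, S, m, goal)."""
    instances: Iterable   # I: the set of instances
    solutions: Callable   # S: instance -> set of feasible solutions
    measure: Callable     # m: (instance, solution) -> non-negative value
    goal: str             # "max" or "min"

    def is_approx_optimal(self, x, y, c):
        """Definition 3.2: y is a c-approximate optimal solution of x (c >= 1)."""
        feasible = self.solutions(x)
        best = (max if self.goal == "max" else min)(
            self.measure(x, z) for z in feasible)
        if self.goal == "max":
            return c * self.measure(x, y) >= best
        return self.measure(x, y) <= c * best
```

For instance, for a toy maximization problem whose feasible solutions are {1, 2, 3, 4} with the identity measure, the solution 3 is 2-approximate while the solution 1 is not.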
We would like to identify a subset of our solution space consisting of solutions that are (approximately) optimal with respect to our measure function. To this end, we define a notion of approximately optimal feasible solution.
Definition 3.2 (Approximately Optimal Feasible Solution).
Let Π = (I, S, m, goal) be an optimization problem and let c ≥ 1. For every x ∈ I and y ∈ S(x), we say that y is a c-approximate optimal solution of x if for every y′ ∈ S(x) we have c·m(x, y) ≥ m(x, y′) if goal = max, and m(x, y) ≤ c·m(x, y′) if goal = min.
Definition 3.3 (Computational Problem).
Let Π = (I, S, m, goal) be an optimization problem. The computational problem associated with Π is: given as input an instance x ∈ I_n (for some n ∈ ℕ) and a real c ≥ 1, find a c-approximate optimal feasible solution of x.
Definition 3.4 (Diversity Computational Problem).
Let Π = (I, S, m, goal) be an optimization problem, and let σ be a diversity measure that maps every k-tuple of feasible solutions of an instance of Π to a non-negative real number. The diversity computational problem associated with (Π, σ) is: given as input an instance x ∈ I_n (for some n ∈ ℕ), an integer k, and a real c ≥ 1, find k-many c-approximate optimal solutions to x which maximize the value of σ.
Proposition 3.5.
Let Π = (I, S, m, goal) be an optimization problem, and let σ be a diversity measure that maps every k-tuple of feasible solutions of an instance of Π to a non-negative real number. If the computational problem associated with Π is NP-hard, then the diversity computational problem associated with (Π, σ) is also NP-hard.
Therefore, the interesting questions arise when the computational problem associated with Π is in P, or even more, when it is in P for the constant function that maps every element of its domain to 1. For the remainder of this paper, we will consider c to be a constant function, and will simply refer to the constant as c.
Finally, we define bicriteria approximations for the diversity computational problem:
Definition 3.6 ((α, β) bi-approximation for the Diversity Computational Problem).
Consider the diversity computational problem associated with (Π, σ), and a given instance x ∈ I_n (for some n ∈ ℕ). An algorithm is called an (α, β) bi-approximation for the diversity computational problem if it outputs k feasible solutions y₁, …, y_k such that a) each y_i is a (cα)-approximate optimal feasible solution to x, and b) for every set Y of k-many c-approximate optimal feasible solutions, σ(y₁, …, y_k) ≥ σ(Y)/β. Furthermore, such an algorithm is said to run in polynomial time if its running time is polynomial in n and k.
4 The Reduction: Enter Dispersion and Biobjective Optimization
As stated in the introduction, our problems are related to the classical dispersion problem in a metric space. Here we state the dispersion problem and show that, under the Exponential Time Hypothesis, the factor 2 is actually tight for approximating the k-dispersion problem. We will then use dispersion to reduce the problem of finding diverse, approximately optimal solutions to solving an associated budget constrained optimization problem.
4.1 Dispersion Problem
Definition 4.1 (k-Dispersion, total distance).
Given a finite set P of points whose pairwise distances satisfy the triangle inequality and an integer k, find a subset S ⊆ P of cardinality k so that W(S) is maximized, where W(S) is the sum of the pairwise distances between points in S.
The main previous work on the k-dispersion problem relevant to us is [37], where the problem was named the Maximum-Average Facility Dispersion problem with triangle inequality (MAFD-TI). The problems are equivalent, as maximizing the average distance between the points also maximizes the sum of their pairwise distances, and vice versa.
The k-dispersion problem is NP-hard, but one can find in polynomial time a set S whose W(S) is at least a constant factor of the maximum possible, by a greedy procedure [37]. We call this greedy procedure furthest insertion. It works as follows. Initially, let S be a singleton set containing an arbitrary point from the given set. While |S| < k, add to S a point p that maximizes the total distance from p to the points in S. Repeat the greedy addition until S has size k. The final S is the desired solution, which is shown to be a 4-approximation in [37]. It is worth noting that the furthest insertion in [37] initializes S as a furthest pair of points in the given set, and the above change does not worsen the approximation factor. In a later paper [7], the above greedy algorithm, which chooses an arbitrary initial point, was shown to be a 2-approximation, which is a tight bound for this algorithm [6].
Lemma 4.2 (Furthest Insertion [7, 37]).
The k-dispersion problem can be 2-approximated by the furthest insertion algorithm.
The running time of the furthest insertion algorithm is polynomial in the size of P, as it performs k iterations, each performing at most |P| distance computations/lookups. Note that in our case P is the collection of objects of a certain type (matchings, paths, trees, etc.). Hence the size of our metric space is typically exponential in the input size. This adds a new dimension of complexity to the traditional dispersion problems studied.
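When the point set is given explicitly, the furthest insertion procedure is straightforward to implement; a minimal sketch follows (function names are ours):

```python
from itertools import combinations

def greedy_max_sum_dispersion(points, k, dist):
    """Furthest insertion for max-sum k-dispersion: start from an
    arbitrary point, then repeatedly add the point whose total distance
    to the already-selected set is largest (the 2-approximation of
    [7, 37] when the point set is given explicitly)."""
    selected = [points[0]]                  # arbitrary initial point
    remaining = set(range(1, len(points)))
    while len(selected) < k and remaining:
        # furthest insertion step: maximize total distance to `selected`
        best = max(remaining,
                   key=lambda i: sum(dist(points[i], s) for s in selected))
        selected.append(points[best])
        remaining.remove(best)
    return selected

def total_pairwise_distance(sol, dist):
    """W(S): the sum of pairwise distances, i.e., the diversity of S."""
    return sum(dist(a, b) for a, b in combinations(sol, 2))
```

In our setting, however, the point set (the collection of approximately-optimal solutions) is exponentially large and cannot be enumerated, so the `max` step must instead be delegated to an implicit optimization; this is exactly the role of the BCO introduced below.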
We prove that the k-dispersion problem cannot be approximated to a factor better than 2, assuming the Exponential Time Hypothesis.
Theorem 4.3 (Inapproximability of k-Dispersion).
Assuming the Exponential Time Hypothesis, for every ε > 0, no algorithm running in polynomial time can approximate the k-Dispersion problem to a factor of 2 − ε.
Proof 4.4.
We use the inapproximability of the densest k-subgraph problem.
Theorem 4.5 (Inapproximability of Densest k-Subgraph with Perfect Completeness [30]).
Assuming the Exponential Time Hypothesis, for every ε > 0, no algorithm running in polynomial time can distinguish between the following two cases, given as input a graph G and an integer k:
• Completeness: There is a clique of size k in G.
• Soundness: The subgraph induced by any k vertices in G has at most ε·(k choose 2) many edges.
Given a graph G = (V, E), we define a distance function d on V as follows: for distinct u, v ∈ V, let d(u, v) = 2 if {u, v} ∈ E, and d(u, v) = 1 otherwise.
Note that (V, d) is a metric space, since all distances lie in {1, 2}. Suppose there is a set S of k vertices that forms a clique in G; then the sum of pairwise distances within S is 2·(k choose 2). On the other hand, if for some set S of k vertices the subgraph induced by S in G has at most ε·(k choose 2) many edges, then the sum of pairwise distances within S is at most (1 + ε)·(k choose 2). The theorem statement thus follows from Theorem 4.5 by choosing ε appropriately, since the ratio between the two cases is 2/(1 + ε).
4.2 Reduction to Budget Constrained Optimization
Recall the definitions of the Diversity Computational Problem (Definition 3.4) and bi-approximations (Definition 3.6). As the input instance will be clear from context, we drop the dependence on x and assume a fixed input instance to a computational problem. Thus S will denote the set of feasible solutions, and m(y) the measure of a feasible solution y.
Diversity and similarity measures from metrics. Let d be a metric on the space of feasible solutions. When such a metric is available, we will consider the diversity function σ that assigns to a k-tuple of feasible solutions the sum of their pairwise distances. Also, given such a metric d, define D to be the diameter of S under d, i.e., the maximum pairwise distance between feasible solutions. In many cases, we will be interested in an associated similarity measure s. The examples the reader should keep in mind are graph objects such as spanning trees, matchings, shortest paths, Hamiltonian circuits, etc., where d(X, Y) denotes the Hamming distance, a.k.a. the size of the symmetric difference of the edge sets of X and Y, and s(X, Y) denotes the size of their intersection.
In the remainder of the paper we consider the total-distance (resp. similarity) diversity measures arising from the metric d (resp. similarity measure s), and we will parameterize the problem by d (resp. s) instead.
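For intuition, the metric and similarity measures on edge sets can be computed as follows (a small illustrative sketch, representing each solution as a set of edges):

```python
from itertools import combinations

def hamming_distance(X, Y):
    """d(X, Y): size of the symmetric difference of two edge sets."""
    return len(set(X) ^ set(Y))

def similarity(X, Y):
    """s(X, Y): size of the intersection of two edge sets."""
    return len(set(X) & set(Y))

def diversity(solutions):
    """Sum of pairwise Hamming distances of a tuple of solutions."""
    return sum(hamming_distance(X, Y)
               for X, Y in combinations(solutions, 2))
```

Note that for two solutions with the same number m of edges (e.g., two spanning trees of the same graph), d(X, Y) = 2(m − s(X, Y)), so minimizing total similarity and maximizing total distance are indeed "opposite" objectives.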
Definition 4.6 (Budget Constrained Optimization).
Given an instance x of a computational problem, a c ≥ 1, and a set y₁, …, y_i of feasible solutions in S, define the metric budget constrained optimization problem BCO_d as follows:
• If goal = max, define OPT = max_{y ∈ S} m(y). Then BCO_d is the problem of maximizing Σ_{j=1}^{i} d(y, y_j) over y ∈ S, subject to m(y) ≥ OPT/c. (1)
• If goal = min, define OPT = min_{y ∈ S} m(y). Then BCO_d is the problem of maximizing Σ_{j=1}^{i} d(y, y_j) over y ∈ S, subject to m(y) ≤ c·OPT. (2)
• Define the similarity budget constrained problem BCO_s, where s is a given similarity measure, with the same constraint set as above (depending on goal), but with the objective changed to minimizing Σ_{j=1}^{i} s(y, y_j) instead.
Definition 4.7 (Bi-approximation to BCO).
An algorithm for an associated BCO is called an (α, β) bi-approximation algorithm if, for any c ≥ 1, it outputs a solution y ∈ S such that the following holds: a) y is feasible; b) y satisfies the budget constraint relaxed by a factor α, i.e., m(y) ≥ OPT/(cα) if goal = max, and m(y) ≤ cα·OPT if goal = min; and c) for every y′ ∈ S satisfying the (unrelaxed) budget constraint of the BCO, the objective value of y is within a factor β of the objective value of y′. The four cases, depending on goal ∈ {max, min} and on whether the BCO is BCO_d or BCO_s, are analogous.
We are now ready to state our main theorem.
Theorem 4.8 (Reduction of DCP to BCO).
Consider an input (x, k, c) to the diversity computational problem (DCP).
• For the metric BCO:
1. A bi-approximation to BCO_d that satisfies the budget exactly can be used to give a bi-approximation to the DCP, and
2. A bi-approximation to BCO_d that relaxes the budget can be used to give a bi-approximation to the DCP.
• For the similarity BCO:
3. A bi-approximation to BCO_s that satisfies the budget exactly and is exact in the similarity objective can be used to give a bi-approximation to the DCP,
4. A bi-approximation to BCO_s that relaxes the budget but is exact in the similarity objective can be used to give a bi-approximation to the DCP,
5. A bi-approximation to BCO_s that relaxes the budget and approximates the similarity objective can be used to give a bi-approximation to the DCP, under the condition that the average pairwise distance in the optimal solution to the DCP is sufficiently large.
In all of the above, the overhead for obtaining a bi-approximation for the DCP, given a bi-approximation for the BCO problem, is polynomial in n and k.
Proof 4.9.
Given a c ≥ 1, denote by S_c the set of c-approximate feasible solutions. When the budget is satisfied exactly, the returned solution lies in S_c. On the other hand, when the budget is relaxed by a factor α, the returned solution lies in the larger space S_{cα}. Our proof will handle these cases separately. We will first state two lemmas concerning the greedy heuristic of farthest point insertion, the first of which we suspect is folklore, though we could not find a reference. The second lemma is the heart of the argument, and is (as far as we know) novel.
Lemma 4.10.
Let (M, d) be a metric space, and consider the k-dispersion problem on M as in Definition 4.1. Consider an oracle O that, given a set Y of points in M, returns a point y ∈ M whose total distance to Y is at least a 1/γ fraction of that of the farthest point, for some constant γ ≥ 1. Then in k calls to O one can obtain a 2γ-approximate solution to the k-dispersion problem on M. In other words, the total pairwise distance between the points of the solution obtained via O is at least that of the optimal solution to k-dispersion divided by 2γ.
Proof 4.11.
We will prove the statement of the lemma for the average distance, which implies the claim for the total distance. Let d* be the average distance between solutions in the optimal solution to the k-dispersion problem. By Lemma 1 in [6], at any step there is always a solution whose average distance to the already selected set is at least d*/2, and hence the same holds for the farthest solution. This implies that the point returned by the oracle has an average distance of at least d*/(2γ). A simple inductive argument now finishes the proof: assume the lemma holds until stage i, i.e., the average pairwise distance between points in the selected set S_i is at least d*/(2γ). When the next point y_{i+1} is chosen, its average distance to the points in S_i is at least d*/(2γ) by the above reasoning, and so S_{i+1} = S_i ∪ {y_{i+1}} has an average pairwise distance of at least d*/(2γ).
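The inductive step above can be summarized in one line. In the notation of this sketch, d* denotes the optimal average pairwise distance, γ the oracle's approximation factor, S_i the set selected after i calls, and y_{i+1} the oracle's next point:

```latex
\frac{1}{i}\sum_{p \in S_i} d(y_{i+1}, p) \;\ge\; \frac{d^*}{2\gamma}
\quad\Longrightarrow\quad
\operatorname{avg}(S_{i+1})
 \;=\; \frac{\binom{i}{2}\,\operatorname{avg}(S_i) + \sum_{p\in S_i} d(y_{i+1},p)}{\binom{i+1}{2}}
 \;\ge\; \frac{d^*}{2\gamma},
```

since avg(S_i) ≥ d*/(2γ) by the inductive hypothesis, the new point contributes at least i·d*/(2γ), and (i choose 2) + i = (i+1 choose 2).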
Lemma 4.12.
Assume (M, d) is a metric space, and B ⊆ B′ ⊆ M. Suppose there is an oracle O′ that, given a set Y of points in B′, outputs a point p ∈ B′ whose total distance to Y is maximized. Then the oracle can be used to give a set of k points in B′ whose diversity is at least 1/4 of that of an optimal solution to the k-dispersion problem in B.
Proof 4.13.
For a point p ∈ B′, denote by π(p) its closest point in B. By definition, if p ∈ B then π(p) = p.
Assume that the oracle in the hypothesis of the lemma exists. Then, given the selected point set Y ⊆ B′, consider the projected point set π(Y) ⊆ B. As mentioned before, a careful analysis of Lemmas 3–6 in [6] reveals that there always exists a point p ∈ B whose average distance to the points in π(Y) is at least d*/2, where d* denotes the optimal average pairwise distance between points in the optimal solution to the dispersion problem on B.
For each y ∈ Y, consider the three points p, y, and π(y). By the triangle inequality, d(p, π(y)) ≤ d(p, y) + d(y, π(y)). However, by the definition of π, d(y, π(y)) ≤ d(y, p), since p ∈ B. This implies that d(p, π(y)) ≤ 2·d(p, y), i.e., d(p, y) ≥ d(p, π(y))/2. Summing over all y ∈ Y, we get that the total (and hence the average) distance of p to the points in Y is at least half of its total distance to the points in π(Y). Since the average distance of p to the points in π(Y) is at least d*/2, we get that the average distance of p to the points in Y is at least d*/4.
However, the oracle returns a point whose average distance to Y is at least that of p, which as argued above is at least d*/4. By the same inductive argument as in Lemma 4.10, we get that the average pairwise distance between the points returned by k calls to the oracle is at least d*/4, proving that it gives a 4-approximation.
Given the above lemmas, we now prove the theorem.
• Type 1: A bi-approximation to BCO_d that satisfies the budget exactly returns a point in S_c. Therefore we only apply Lemma 4.10 and we are done.
• Type 2: A bi-approximation to BCO_d that relaxes the budget returns a point in the larger space S_{cα}. In this case we apply Lemma 4.12 (with B = S_c and B′ = S_{cα}) together with Lemma 4.10 to get the claimed result.
• Type 3: A bi-approximation to the similarity BCO_s that satisfies the budget exactly and is exact in the objective returns the farthest point in S_c, and so this case follows from the 2-approximation guarantee of the farthest insertion heuristic, without applying either lemma.
• Type 4: A bi-approximation to BCO_s that relaxes the budget but is exact in the objective returns the farthest point in S_{cα}. In this case, we only apply Lemma 4.12 to get the claimed result.
We prove the remaining case of Type 5 separately. A bi-approximation to BCO_s that relaxes the budget and approximates the objective returns a point in S_{cα}. However, this point is not the farthest point, nor necessarily an approximation of it. This is because the guarantee is only on the total similarity, and not on the total distance. In other words, an approximation to the minimum total similarity is not an approximation to the maximum total distance; in fact, these two objectives are the "opposite" of each other.
Let x* be the point in S_{cα} that maximizes the total distance to the selected points, i.e., the farthest point. The bi-approximation algorithm returns a point x whose total similarity to the selected points is within the guaranteed factor of that of x*. Consider the condition that the average pairwise distance in the optimal solution to the DCP is sufficiently large. We refer the reader now to Lemma 1 in [6], which proves that during farthest point insertion, at step i of the algorithm, when y₁, …, y_i have been selected, there is a point whose average distance to them is at least d*/2. While this lemma seems like it can only be applied to point sets formed during an iterated farthest insertion, a careful analysis of Lemmas 3, 4, 5 and 6 in [6], used to prove it, reveals that the argument does not demand that this be the case. That is, for any set of i points, there always exists a point whose average distance to them is at least d*/2.
This implies the existence of a point such that the average distance of to the s is at least . Since is the farthest point, the same condition holds for .
This implies that the total distance , implying that . We then have the following string of inequalities
But we know that the returned point satisfies , implying that . Hence is a -approximation to the farthest point, and we can apply Lemma 4.10 to complete the proof.
A few remarks are in order:
•
The above theorem provides a recipe for solving the diversity computational problem for any given optimization problem. As long as either the metric or the similarity budget constrained optimization problem can be solved or approximated in polynomial time, one has an analogous result for the DCP.
•
In the remainder of this paper we will see several applications that follow from the above 5 “types” of bi-approximations: DCP for maximum matching (Type 1), DCP for shortest paths (Type 3), and DCP for minimum-weight bases of a matroid and minimum spanning trees (Types 4 and 5).
•
Whenever either or (or both) is set to be , we call a bi-approximation for the BCO problem an FPTAS if the running time is polynomial in in addition to being polynomial in and . Otherwise we call it a PTAS.
Relation to Multicriteria Optimization: Observe that for similarity BCOs, we need either or to be . This class of biobjective problems admitting a PTAS that is exact in one of the criteria is a special case of the multicriteria problems with the same property. Herzel et al. [24] showed that this class is exactly the class of problems for which the DUALRESTRICT version of the problem, posed by Diakonikolas and Yannakakis [12], can be solved in polynomial time. These are also the problems having a polynomial-time computable approximate -Pareto set that is exact in one objective. This equivalence means that our theorem is applicable to this entire class of problems.
4.3 Relaxed BCOs and Self-Avoidance
Before we delve into our applications, we describe another challenge in directly applying results from multicriteria optimization literature. For a BCO, the second constraint demands that . Intuitively is the farthest point to the set of already discovered solutions , and because it is defined implicitly, without the second constraint may equal one of the (). Consider an alternate BCO, which we call where the constraint is relaxed to . For many graph problems, solving combined with the approach by Lawler [28] gives a solution to the original BCO. This is extremely useful because most of the literature on multicriteria optimization concerns optimization of the relaxed type of problems , and one can borrow results derived before without worrying about the second constraint. We remark that for other problems, -best enumeration algorithms (see [17, 22, 28, 29, 33] for examples) may be useful to switch from the BCO to its relaxed version. Thus any algorithm for can be used, modulo the self-avoiding constraint (to be handled using Lawler’s approach), to give a polynomial time algorithm for the Diversity Computational Problem with the same guarantees as in Theorem 4.8. We provide examples of the approach by Lawler in subsequent sections where we consider specific problems.
5 Application 1: Diverse Spanning Trees
In this section, we discuss the diverse spanning trees problem, which is the diversity computational problem for spanning trees with Hamming distance function as the diversity measure. Let be an undirected graph. The problem aims to output a set of spanning trees of such that the sum of the pairwise distances is maximized, where is the Hamming distance between the edge sets of the trees. While this problem actually has an exact algorithm running in time [21], we get a faster approximation algorithm.
Theorem 5.1.
Given a simple graph , there exists an -time algorithm, where is the inverse of the Ackermann function, that generates spanning trees , such that the sum of all pairwise Hamming distances is at least half of an optimal set of diverse spanning trees.
Proof 5.2.
We develop an exact polynomial time subroutine for the associated BCO problem. Using our reduction Theorem 4.8, we observe that since one is only required to output spanning trees, the associated budget constrained problem has no inequalities. Having obtained trees (starting with an arbitrary spanning tree ), the BCO problem looks like
\begin{align}
\max_{T}\quad & \sum_{j=1}^{i} d(T, T_j) \tag{3}\\
\text{s.t.}\quad & T \text{ is a spanning tree of } G,\quad T \neq T_j \ \text{for all } j \in [i]
\end{align}
The relaxed budget constrained problem then simply asks to maximize . We first show how to solve this problem exactly, and then adapt the approach by Lawler [28] to handle the self-avoiding constraint. The algorithm to maximize is very simple: give each edge a weight and compute the minimum spanning tree with respect to these edge weights.
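The edge-weighting step just described can be sketched directly: weight every edge by the number of already-chosen trees containing it and run Kruskal. The code below is our own illustration (identifiers and the toy graph are not from the paper); the brute-force enumeration exists only to verify, on a tiny complete graph, that the weighted MST is indeed a farthest spanning tree.

```python
from itertools import combinations

def kruskal(nodes, edges, w):
    """Minimum spanning tree under weights w, via Kruskal with union-find."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    tree = set()
    for e in sorted(edges, key=lambda e: w[e]):
        ru, rv = find(e[0]), find(e[1])
        if ru != rv:
            parent[ru] = rv
            tree.add(e)
    return frozenset(tree)

def spanning_trees(nodes, edges):
    """Brute-force enumeration of all spanning trees (verification only)."""
    for cand in combinations(edges, len(nodes) - 1):
        parent = {v: v for v in nodes}
        def find(v):
            while parent[v] != v:
                v = parent[v]
            return v
        ok = True
        for u, v in cand:
            ru, rv = find(u), find(v)
            if ru == rv:
                ok = False
                break
            parent[ru] = rv
        if ok:
            yield frozenset(cand)

nodes = ['a', 'b', 'c', 'd']
edges = [('a','b'), ('a','c'), ('a','d'), ('b','c'), ('b','d'), ('c','d')]
T1 = frozenset([('a','b'), ('b','c'), ('c','d')])   # already-chosen tree
w = {e: int(e in T1) for e in edges}                # appearance counts
T2 = kruskal(nodes, edges, w)
dist = lambda X, Y: len(X ^ Y)                      # Hamming distance
# The weighted MST maximizes Hamming distance to the chosen tree(s).
assert dist(T2, T1) == max(dist(T, T1) for T in spanning_trees(nodes, edges))
```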
Lemma 5.3.
The minimum spanning tree with respect to the edge weights satisfies for any spanning tree .
Proof 5.4.
Given input spanning trees , suppose is the tree returned by the algorithm in Section 4.1, and is an optimal spanning tree. Let be the subset of edges with weight . That is, for all and partition as .
Let and . To prove that is optimal, we will show that .
Define and . Plugging and into the sum of pairwise distances from our tree and the optimal tree to constructed trees, we get
and
We will show that We first express as the dot product of two vectors
where denotes transpose, and makes a column vector out of a row vector. We develop some more notation:
With this we have that
We will show by induction on , which will prove that , completing the proof.
We first claim that for all To prove this, fix a and let be the number of connected components (including any isolated vertices) of and be the number of connected components of . Clearly, since for Kruskal’s algorithm for minimum spanning trees is the greedy algorithm that adds as many edges from into the solution as possible without creating a cycle, and the addition of an edge decreases the number of connected components by one. Since, and , implies that
Now, we continue our proof that by induction on Observe as both spanning trees contain exactly edges.
Base Case For the base case of , and From the previously mentioned property, we know Hence,
The induction hypothesis assumes that for , i.e., the statement is true for two vectors of length . Now, consider when . Since , there are two cases,
Case 1) Then since , we get that . Using the induction hypothesis, we get
Case 2) . Then yields Let Then, . We claim that this quantity is positive. This is because the minimum value of , subject to the conditions that a) and b) , occurs when for all , , and . For this setting, the value of equals , which is positive, completing the proof.
If for all , that is, the new spanning tree is different from all previous trees, then we are done and Theorem 5.1 is proved using Lemma 5.3.
Self-Avoiding Constraint. However, this is not guaranteed as our measure is the sum-of-distances and not the minimum distance. Note that for furthest point insertion, we need the point that is furthest from the current set but does not belong to the current set. This is an issue we face because of the implicit nature of the furthest point procedure in the exponential-sized space of spanning trees: in the metric k-dispersion problem, it was easy to guarantee distinctness as one only considered the points not yet selected.
We now show how to guarantee that the new tree is distinct. In case that for some , we use the approach by Lawler [28]. We will obtain distinct spanning trees , at least one of which must then be distinct from , which will be chosen as . Set . For every edge , we find the minimum spanning tree (with respect to the same edge weights as before) after deleting from the graph. Among the collection of trees thus obtained, we find the one whose sum of distances from is as large as possible, and set it to be . Note that . If for all , then we are done; else, we repeat the above procedure to obtain the collection and set as the best one in , after which we are either done or we repeat to get , and so on. We stop after obtaining , and select the that is distinct from all of . The proof of Lemma 5.3 implies that the thus obtained is the furthest tree from the among all trees , and we have accomplished furthest point insertion in the space of spanning trees. In total, the above needs to compute instances of the minimum spanning tree, so the running time is [9].
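The edge-deletion step in the argument above can be sketched compactly: for each edge of the current best tree, forbid that edge and recompute the weighted MST, yielding candidates that are each guaranteed to differ from the current tree. This is a minimal sketch with our own identifiers, not the paper's full iteration over the collections described above.

```python
def kruskal(nodes, edges, w):
    """Minimum spanning tree under weights w (Kruskal with union-find)."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    tree = set()
    for e in sorted(edges, key=lambda e: w[e]):
        ru, rv = find(e[0]), find(e[1])
        if ru != rv:
            parent[ru] = rv
            tree.add(e)
    return frozenset(tree)

def distinct_candidates(nodes, edges, w, tree):
    """For each edge of `tree`, recompute the MST with that edge forbidden.
    Every returned candidate differs from `tree` in the deleted edge."""
    cands = []
    for e in tree:
        rest = [f for f in edges if f != e]
        cand = kruskal(nodes, rest, w)
        if len(cand) == len(nodes) - 1:   # the graph minus e is still connected
            cands.append(cand)
    return cands

nodes = ['a', 'b', 'c', 'd']
edges = [('a','b'), ('a','c'), ('a','d'), ('b','c'), ('b','d'), ('c','d')]
w = {e: 0 for e in edges}
T = kruskal(nodes, edges, w)
for C in distinct_candidates(nodes, edges, w, T):
    assert C != T and len(C ^ T) >= 2    # each candidate is a different tree
```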
6 Application 2: Diverse Approximate Shortest Paths
Given a graph , non-negative edge weights , two vertices and , and a factor , the diversity computational problem asks to output many paths, such that the weight of each path is within a factor of the weight of the shortest path, and subject to this constraint, the total pairwise distance between the paths is maximized. Here the distance between two paths is again the Hamming distance, or size of symmetric difference of their edge sets.
In [20], it is shown that finding shortest paths with the maximum diversity (i.e. the average Hamming distance between solutions) can be solved in polynomial time, but finding “short” paths with the maximum diversity is NP-hard. In contrast, in what follows, we will show that finding “short” paths with constant approximate diversity is polynomial-time solvable.
We will show that the associated budget constrained optimization problem for this is of Type 3 in Theorem 4.8. In other words, we will show that the BCO can be solved exactly. This will result in a approximation algorithm for the diversity computational problem.
Hence, we need an algorithm that implements the following: given a set of --shortest paths, find a --shortest path so that is maximum among all for --shortest paths . Here, is the sum of all pairwise Hamming distances between two elements in . This is a special case of the bicriteria shortest path problem, for which there is an algorithm in [34]. In our case, one of the two weight functions is an integral function with range bounded in . Hence, it can be solved more efficiently than by the solution in [34], which can be summarized as follows.
Lemma 6.1 (Exact solution to the relaxed problem).
Given a real and a directed simple graph associated with two weight functions on edges and , there is an -time algorithm to output an -path so that is minimized while retaining for all -paths .
Proof 6.2.
Let be a table with the entry values defined as the following:
If the RHS takes the minimum from an empty set, then we define . By definition, has so that
where denotes the distance under from to . By backtracking from the , one can construct from the table in time. Hence, our goal is to fill out the entries in using time.
To compute for all , it suffices to run Dijkstra’s algorithm [13] on the graph obtained from with the removal of the edges whose . This step takes time.
Given for all , to compute for all there are two cases to discuss. We say an -path admits if has and .
Case 1. Some -path with admits .
Case 2. Some -path with admits , and together with some edges whose admits .
For Case 1, it suffices to check for all the neighbors of with . This step takes time. For Case 2, observe that can be determined by Case 1. It suffices to initialize for all as the values obtained from Case 1 and update if there is a path from some node to so that every edge has and . This can be implemented in time for each by running Dijkstra’s algorithm on the graph where is a set of new directed edges for all with and as the initial obtained from Case 1, and is the set of all the edges that have . It is worth noting that, though we do not explicitly check whether the found paths contain cycles or not (i.e. not simple paths), the paths found to admit must be simple since all shortest paths in a positive weight graph are simple.
To sum up, an optimal solution can be found in time.
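For intuition, the two-criteria search can be prototyped as a single Dijkstra run on the product of the graph with integer penalty levels: a state records the current vertex and the Hamming penalty accumulated so far. This is a simplified sketch under our own naming and a toy instance, not the paper's exact table-filling procedure (which achieves a better running time).

```python
import heapq

def bicriteria_shortest(graph, s, t, alpha, w, pen, cmax):
    """Among s-t paths of w-weight at most alpha * (shortest w-distance),
    return the minimum achievable total integer penalty, via Dijkstra on
    (vertex, accumulated-penalty) states."""
    def dijkstra(src):
        dist = {src: 0}
        pq = [(0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float('inf')):
                continue
            for v in graph.get(u, []):
                nd = d + w[(u, v)]
                if nd < dist.get(v, float('inf')):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        return dist

    budget = alpha * dijkstra(s)[t]
    dist = {(s, 0): 0}              # state: (vertex, penalty so far)
    pq = [(0, s, 0)]
    while pq:
        d, u, c = heapq.heappop(pq)
        if d > dist.get((u, c), float('inf')):
            continue
        for v in graph.get(u, []):
            nc, nd = c + pen[(u, v)], d + w[(u, v)]
            if nc <= cmax and nd < dist.get((v, nc), float('inf')):
                dist[(v, nc)] = nd
                heapq.heappush(pq, (nd, v, nc))
    feasible = [c for c in range(cmax + 1)
                if dist.get((t, c), float('inf')) <= budget]
    return min(feasible) if feasible else None

# Toy instance: penalties mark overlap with a previously chosen path s-a-t.
graph = {'s': ['a', 'b'], 'a': ['t'], 'b': ['t'], 't': []}
w = {('s','a'): 1, ('a','t'): 1, ('s','b'): 2, ('b','t'): 2}
pen = {('s','a'): 1, ('a','t'): 1, ('s','b'): 0, ('b','t'): 0}
assert bicriteria_shortest(graph, 's', 't', 2, w, pen, cmax=4) == 0
assert bicriteria_shortest(graph, 's', 't', 1, w, pen, cmax=4) == 2
```

With slack (alpha = 2) the edge-disjoint route s-b-t becomes feasible and the penalty drops to zero; with alpha = 1 only the previous path itself is feasible.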
Self-Avoiding Constraint. We now turn to solving the associated (non-relaxed) problem, by generalizing the above lemma to Corollary 6.3. Corollary 6.3 will help us avoid the situation that a furthest insertion returns a path that is already picked by some previous furthest insertion.
Corollary 6.3 (Exact solution to the problem).
Given a real , a directed simple graph associated with two weight functions on edges , , and two disjoint subsets of edges so that all edges in together form a directed simple path starting from node , there exists an -time algorithm to output an --shortest path under so that is minimum among all the --shortest paths that contain as a prefix and contain no edges from , if such an --shortest path exists.
Proof 6.4.
Let . This suffices to find a feasible -path on the graph obtained from with the removal of all the nodes in except and with the removal of all edges in and those incident to the deleted nodes. This can be computed by Lemma 6.1.
We are ready to state our main result for the diverse --shortest paths.
Theorem 6.5 ( bi-approximation to the Diversity Problem on Shortest Paths).
For any directed simple graph , given a and a , there exists an -time algorithm that, if contains at least distinct --shortest paths, computes a set of distinct --shortest paths so that the sum of all pairwise Hamming distances between two paths in is at least one half of the maximum possible; otherwise, reports “Non-existent.”
Proof 6.6.
Let be an arbitrary --shortest path (i.e. a shortest path from to ), which can be computed in time by Dijkstra’s algorithm. We perform the furthest insertion mentioned in Lemma 4.2 to obtain the --shortest paths . The collection is a desired solution.
To perform the -th furthest insertion for , we need an algorithm for the problem that, given , find an --shortest path so that the following conditions both hold:
1.
is minimum among all --shortest paths where for is defined as the number of times that appears in for all . Hence, maps to .
2.
for all .
To find an --shortest path that satisfies the first condition, it is an instance of the bicriteria shortest path problem with two weight functions and . By Lemma 6.1, this can be done in time.
However, the --shortest path found by minimizing is not necessarily different from those found in the previous furthest insertions. To remedy this, one can find the --shortest paths whose are the -th smallest for all . Some of these --shortest paths can meet the second condition. This step can be implemented by the standard approach due to Lawler [28]. That is, suppose in time one can compute with an additional constraint that some given edge set (all edges in together form a directed path starting from node ) must be contained in and some given edge set must not be contained in ; then enumerating the smallest candidates for can be done in time. By Corollary 6.3, . Thus, the -th furthest insertion can be done in time. In total, our approach needs time as desired.
7 Application 3: Diverse Approximate Maximum Matchings
Consider the diversity computational problem for computing many maximum matchings for undirected graphs. In [15], the authors present an algorithm, among others, to find a pair of maximum matchings for bipartite graphs whose Hamming distance is maximized. In contrast, our result can be used to find approximate maximum matchings for any graph whose diversity (i.e. the average Hamming distance) approximates the largest possible by a factor of .
Since , using the Hamming metric on the edge set of the solutions gives us a BCO with minimization in the objective function (the sum of the total distances) and constraints with lower bound.
We show that this problem can be restated into the budgeted matching problem [5]. As noted in [5], though the budgeted matching is in general -hard, if both the weight and cost functions are integral and have a range bounded by a polynomial in , then it can be solved in polynomial time with a good probability by a reduction to the exact perfect matching problem [8, 32]. The exact running time for such a case is not stated explicitly in [5]. We combine the algorithm in [5] and the approach by Lawler [28] to prove our main theorem for diverse matchings (Theorem 7.3).
Define the distance between two matchings as the Hamming distance between their edge sets. The associated relaxed is a maximization problem with lower-bounded constraints. It can be restated as: given a non-empty set of -maximum matchings, find a -maximum matching so that is maximum among all for -maximum matchings (here is the sum of pairwise distances between all matchings in ). As noted above, this is an instance of the budgeted matching problem [5]; the exact running time for the integral, polynomially bounded case is not stated explicitly in [5], and we analyze it below.
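For intuition, the restricted budgeted matching task (a minimum-cost matching among all maximum matchings) can be checked by brute force on tiny graphs; the randomized exact-matching reduction in the proof below is what makes it polynomial. All identifiers and the toy 4-cycle instance are ours.

```python
from itertools import combinations

def is_matching(edge_set):
    # A set of edges is a matching iff no vertex repeats.
    seen = set()
    for u, v in edge_set:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True

def min_cost_max_matching(edges, cost):
    """Brute force: among all maximum matchings, return one of least total
    cost (verification reference for tiny graphs only)."""
    best, best_size, best_cost = None, -1, None
    for r in range(len(edges), 0, -1):
        for cand in combinations(edges, r):
            if is_matching(cand):
                c = sum(cost[e] for e in cand)
                if r > best_size or (r == best_size and c < best_cost):
                    best, best_size, best_cost = cand, r, c
        if best is not None:
            break   # maximum matching size found; smaller r cannot beat it
    return set(best), best_cost

# 4-cycle a-b-c-d-a; costs count overlap with a previous matching M1.
edges = [('a','b'), ('b','c'), ('c','d'), ('d','a')]
M1 = {('a','b'), ('c','d')}
cost = {e: int(e in M1) for e in edges}
M2, c = min_cost_max_matching(edges, cost)
# The cheapest maximum matching is the one edge-disjoint from M1.
assert M2 == {('b','c'), ('d','a')} and c == 0
```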
Lemma 7.1 (Restricted Budgeted Matching Problem).
Given an undirected simple graph and a cost function on edges, there exists an -time randomized algorithm that can find a matching of smallest cost among all -maximum matchings with some constant success probability where the cost of a matching is the sum of the costs of all edges in the matching.
Proof 7.2.
Suppose that, for all with , for all with , we can decide whether contains a matching that consists of edges and has cost . Then, the desired matching can be found.
Following the reduction in [5], we construct an edge-weighted graph so that
•
where for all ,
•
, and
•
for every edge , if is also in , then it has weight for some sufficiently large integer to be determined later; otherwise, it has weight .
By setting +1, has a matching that consists of edges and has cost if and only if has a perfect matching of weight . This is an instance of the exact perfect matching problem [8, 32], which can be solved by a randomized algorithm in time with some constant success probability. Summing over all choices of and , the running time is .
We are ready to prove our main result for the diverse -maximum matchings.
Theorem 7.3.
There exists a time, bi-approximation to the diversity computational problem for -maximum matchings, with failure probability .
Proof 7.4.
Let be an arbitrary maximum matching, which can be computed in time [31]. For each , given , find a -maximum matching so that is maximum among all for -maximum matching . By Lemma 7.1, set the cost function for every as the number of times that edge appears in for all and apply the algorithm for Lemma 7.1 to solve the instance . This returns an -maximum matching so that is maximum among all that is an -maximum matching and may or may not be contained in . Thus we need to guarantee self-avoidance.
To remedy this, we enumerate the best candidates for ; at least one of them is not contained in . This enumeration can be done by running the algorithm for Lemma 7.1 times, each of which preselects some edges to be included in and some to be excluded from , following Lawler's approach [28]. For each , this step takes time and may fail with probability . The failure probability is rather than a constant because we run the algorithm that may err times.
Summing up the running time for the furthest insertions, the total running time is , and the failure probability is by the Union bound.
8 Application 4: Diverse Minimum Weight Matroid Bases and Minimum Spanning Trees
One of the original ways to attack the peripatetic salesman problem (Krarup [26]) was to study the edge-disjoint spanning trees problem [10]. Note that the existence of such trees is not guaranteed, and one can use our results in Section 5 to maximize diversity of the trees found.
However, for an application to the TSP problem, cost conditions must be taken into account. Here we study the diverse computational problem (DCP) on minimum spanning trees: Given a weighted undirected graph with nonnegative weights , and a , return spanning trees of such that each spanning tree is a -approximate minimum spanning tree, and subject to this, the diversity of the trees is maximized. Here again the diversity of a set of trees is the sum of pairwise distances between them, and the distance between two trees is the size of their symmetric difference.
Our results in this section generalize to the problem of finding diverse bases of a matroid such that every basis in the solution set is a approximate minimum-weight basis. The DCP on MSTs is a special case of this problem. However, in order to not introduce extra notation and definitions here, we will describe our method for minimum spanning trees. We will then briefly sketch how to extend the algorithm to the general matroid case.
Starting with (a minimum spanning tree on , computable in polynomial time), assume we have obtained trees , all of which are -approximate minimum spanning trees. Assign to each edge a length which equals .
Claim 1.
Given , finding that maximizes is equivalent to finding that minimizes .
Proof 8.1.
An explicit calculation reveals this: writing $\ell(e)$ for the edge lengths defined above, the Hamming distance satisfies $d(T, T_j) = |T \triangle T_j| = 2(n-1) - 2|T \cap T_j|$, so $\sum_{j \le i} d(T, T_j) = 2i(n-1) - 2\sum_{e \in T} \ell(e)$, and maximizing the total distance is equivalent to minimizing $\sum_{e \in T} \ell(e)$.
Consider now the associated similarity budget constrained optimization problem
\begin{align}
\min_{T}\quad & \sum_{e \in T} \ell(e) \tag{4}\\
\text{s.t.}\quad & w(T) \le \alpha \cdot w(T_1),\\
& T \text{ is a spanning tree of } G,\quad T \neq T_j \ \text{for all } j \in [i]
\end{align}
Here is just the set of spanning trees on . We will handle the self-avoiding constraints in a similar fashion as in Section 5. For the moment consider the relaxed where the last constraint is simply . This is a budget constrained MST with two weights. This problem has been considered by Ravi and Goemans [36], who termed it the CMST problem. They provide a bi-approximation that runs in near-linear time, and a bi-approximation that runs in polynomial time (the latter is a PTAS, not an FPTAS). Also, they show that the bi-approximation can be used as a subroutine to compute a bi-approximation in pseudopolynomial time.
Applying their results and observing that we are in cases 4 and 5 of Theorem 4.8, we get
Theorem 8.2 (DCP for Minimum Spanning Trees).
There exists a
•
polynomial (in and ) time algorithm that outputs a bi-approximation to the DCP problem for MSTs.
•
polynomial (in and ) and exponential in time algorithm that outputs a bi-approximation to the DCP problem for MSTs.
•
pseudopolynomial time algorithm that outputs a bi-approximation to the DCP problem for MSTs, as long as the average distance between the trees in the optimal solution to the DCP on -approximate minimum spanning trees does not exceed .
Extension to Matroids: It is stated in the paper by Ravi and Goemans [36] that the same result holds if one replaces the set of spanning trees by the bases of any matroid. It is straightforward to show that the analog of Claim 1 holds in the matroid setting too. With a bit of work, one can also generalize the approach of Lawler [28] to avoid self-intersection (the bases found so far), and thus all the techniques generalize to the matroid setting. In all of this, we assume an independence oracle for the matroid, as is standard. In [16], it is shown that, given integers , finding perfect matchings so that every pair of the found matchings has Hamming distance at least is NP-hard. This hardness result also applies to finding weighted diverse bases and weighted diverse common independent sets.
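The matroid generalization reuses the standard greedy algorithm with an independence oracle: scan elements by increasing weight and keep those preserving independence. The following sketch (our own naming, with the graphic matroid of a 4-cycle standing in as the oracle) only illustrates the oracle interface; Kruskal's algorithm is the graphic-matroid special case.

```python
def greedy_min_base(elements, weight, independent):
    """Matroid greedy: with an independence oracle, returns a
    minimum-weight base of the matroid."""
    base = []
    for x in sorted(elements, key=weight):
        if independent(base + [x]):
            base.append(x)
    return base

# Independence oracle for the graphic matroid of a 4-cycle:
# a set of edges is independent iff it is acyclic (checked via union-find).
edges = [('a','b'), ('b','c'), ('c','d'), ('d','a')]

def acyclic(es):
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            v = parent[v]
        return v
    for u, v in es:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

w = {('a','b'): 3, ('b','c'): 1, ('c','d'): 2, ('d','a'): 1}
base = greedy_min_base(edges, lambda e: w[e], acyclic)
# The minimum-weight base drops the heaviest cycle edge.
assert sorted(w[e] for e in base) == [1, 1, 2]
```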
9 Conclusion
We obtained bi-approximation algorithms for the diversity computational problems associated with approximate minimum-weight bases of a matroid, shortest paths, matchings, and spanning trees. Our general reduction lemma can be applied to any computational problem for which the DUALRESTRICT version defined by Papadimitriou and Yannakakis [35] can be solved in polynomial time. There are (at least) three open questions:
•
Are there other interesting examples of problems to which our reduction lemma applies?
•
It is noteworthy that while Types 3, 4 and 5 in Theorem 4.8 contain the class for which DUALRESTRICT can be solved efficiently, the case of missing in our theorem would include the larger GAP class studied by Papadimitriou and Yannakakis [35]. Our current techniques to prove Theorem 4.8 do not allow us to translate such bi-approximations of BCOs into those for DCPs (at least not without demanding a certain stronger condition than in Type 5).
•
Can the diversity computational problem for approximate TSP tours or Max-TSP tours be solved in polynomial time?
References
- [1] Sofiane Abbar, Sihem Amer-Yahia, Piotr Indyk, Sepideh Mahabadi, and Kasturi R. Varadarajan. Diverse near neighbor problem. In Symposium on Computational Geometry (SoCG), pages 207–214. ACM, 2013.
- [2] Giorgio Ausiello, Alberto Marchetti-Spaccamela, Pierluigi Crescenzi, Giorgio Gambosi, Marco Protasi, and Viggo Kann. Complexity and approximation: combinatorial optimization problems and their approximability properties. Springer, 1999.
- [3] Julien Baste, Michael R Fellows, Lars Jaffke, Tomáš Masařík, Mateus de Oliveira Oliveira, Geevarghese Philip, and Frances A Rosamond. Diversity of solutions: An exploration through the lens of fixed-parameter tractability theory. Artificial Intelligence, 303:103644, 2022.
- [4] Julien Baste, Lars Jaffke, Tomáš Masařík, Geevarghese Philip, and Günter Rote. Fpt algorithms for diverse collections of hitting sets. Algorithms, 12(12):254, 2019.
- [5] André Berger, Vincenzo Bonifaci, Fabrizio Grandoni, and Guido Schäfer. Budgeted matching and budgeted matroid intersection via the gasoline puzzle. Math. Program., 128(1-2):355–372, 2011.
- [6] Benjamin E. Birnbaum and Kenneth J. Goldman. An improved analysis for a greedy remote-clique algorithm using factor-revealing LPs. Algorithmica, 55(1):42–59, 2009.
- [7] Allan Borodin, Aadhar Jain, Hyun Chul Lee, and Yuli Ye. Max-sum diversification, monotone submodular functions, and dynamic updates. ACM Trans. Algorithms, 13(3):41:1–41:25, 2017.
- [8] Paolo M. Camerini, Giulia Galbiati, and Francesco Maffioli. Random pseudo-polynomial algorithms for exact matroid problems. J. Algorithms, 13(2):258–273, 1992.
- [9] Bernard Chazelle. A minimum spanning tree algorithm with inverse-Ackermann type complexity. J. ACM, 47(6):1028–1047, 2000.
- [10] Jens Clausen and Lone Aalekjær Hansen. Finding k edge-disjoint spanning trees of minimum total weight in a network: an application of matroid theory. In Combinatorial Optimization II, pages 88–101. Springer, 1980.
- [11] Clayton W Commander, Panos M Pardalos, Valeriy Ryabchenko, Stan Uryasev, and Grigoriy Zrazhevsky. The wireless network jamming problem. Journal of Combinatorial Optimization, 14(4):481–498, 2007.
- [12] Ilias Diakonikolas and Mihalis Yannakakis. Small approximate pareto sets for biobjective shortest paths and other problems. SIAM Journal on Computing, 39(4):1340–1371, 2010.
- [13] Edsger W Dijkstra. A note on two problems in connexion with graphs. Numerische mathematik, 1(1):269–271, 1959.
- [14] Erhan Erkut. The discrete p-dispersion problem. European Journal of Operational Research, 46(1):48–60, 1990.
- [15] Fedor V. Fomin, Petr A. Golovach, Lars Jaffke, Geevarghese Philip, and Danil Sagunov. Diverse pairs of matchings. In 31st International Symposium on Algorithms and Computation (ISAAC), volume 181 of LIPIcs, pages 26:1–26:12. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020.
- [16] Fedor V. Fomin, Petr A. Golovach, Fahad Panolan, Geevarghese Philip, and Saket Saurabh. Diverse collections in matroids and graphs. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS), volume 187 of LIPIcs, pages 31:1–31:14. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2021.
- [17] Harold N Gabow. Two algorithms for generating weighted spanning trees in order. SIAM Journal on Computing, 6(1):139–150, 1977.
- [18] Elazar Goldenberg and Karthik C. S. Hardness amplification of optimization problems. In 11th Innovations in Theoretical Computer Science Conference (ITCS), volume 151 of LIPIcs, pages 1:1–1:13. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020.
- [19] Tesshu Hanaka, Masashi Kiyomi, Yasuaki Kobayashi, Yusuke Kobayashi, Kazuhiro Kurita, and Yota Otachi. A framework to design approximation algorithms for finding diverse solutions in combinatorial problems. CoRR, abs/2201.08940, 2022.
- [20] Tesshu Hanaka, Yasuaki Kobayashi, Kazuhiro Kurita, See Woo Lee, and Yota Otachi. Computing diverse shortest paths efficiently: A theoretical and experimental study. CoRR, abs/2112.05403, 2021.
- [21] Tesshu Hanaka, Yasuaki Kobayashi, Kazuhiro Kurita, and Yota Otachi. Finding diverse trees, paths, and more. In Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI), pages 3778–3786. AAAI Press, 2021.
- [22] Satoshi Hara and Takanori Maehara. Enumerate lasso solutions for feature selection. In Satinder P. Singh and Shaul Markovitch, editors, Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI), pages 1985–1991. AAAI Press, 2017.
- [23] Refael Hassin, Shlomi Rubinstein, and Arie Tamir. Approximation algorithms for maximum dispersion. Oper. Res. Lett., 21(3):133–137, October 1997.
- [24] Arne Herzel, Cristina Bazgan, Stefan Ruzika, Clemens Thielen, and Daniel Vanderpooten. One-exact approximate pareto sets. Journal of Global Optimization, 80(1):87–115, 2021.
- [25] Arne Herzel, Stefan Ruzika, and Clemens Thielen. Approximation methods for multiobjective optimization problems: A survey. INFORMS Journal on Computing, 33(4):1284–1299, 2021.
- [26] Jakob Krarup. The peripatetic salesman and some related unsolved problems. In Combinatorial programming: methods and applications, pages 173–178. Springer, 1995.
- [27] Michael J. Kuby. Programming models for facility dispersion: The p-dispersion and maxisum dispersion problems. Geographical Analysis, 19(4):315–329, October 1987.
- [28] Eugene L. Lawler. A procedure for computing the k best solutions to discrete optimization problems and its application to the shortest path problem. Management Science, 18(7):401–405, 1972.
- [29] Erik M Lindgren, Alexandros G Dimakis, and Adam Klivans. Exact map inference by avoiding fractional vertices. In International Conference on Machine Learning, pages 2120–2129. PMLR, 2017.
- [30] Pasin Manurangsi. Almost-polynomial ratio eth-hardness of approximating densest k-subgraph. In Hamed Hatami, Pierre McKenzie, and Valerie King, editors, Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 954–961. ACM, 2017.
- [31] Silvio Micali and Vijay V. Vazirani. An algorithm for finding maximum matching in general graphs. In 21st Annual Symposium on Foundations of Computer Science, pages 17–27, 1980.
- [32] Ketan Mulmuley, Umesh V. Vazirani, and Vijay V. Vazirani. Matching is as easy as matrix inversion. Comb., 7(1):105–113, 1987.
- [33] Katta G Murty. An algorithm for ranking all the assignments in order of increasing cost. Operations research, 16(3):682–687, 1968.
- [34] João Carlos Namorado Climaco and Ernesto Queirós Vieira Martins. A bicriterion shortest path algorithm. European Journal of Operational Research, 11(4):399–404, 1982.
- [35] Christos H Papadimitriou and Mihalis Yannakakis. On the approximability of trade-offs and optimal access of web sources. In Proceedings 41st annual symposium on foundations of computer science (FOCS), pages 86–92. IEEE, 2000.
- [36] Ram Ravi and Michel X Goemans. The constrained minimum spanning tree problem. In Scandinavian Workshop on Algorithm Theory (SWAT), pages 66–75. Springer, 1996.
- [37] Sekharipuram S Ravi, Daniel J Rosenkrantz, and Giri Kumar Tayi. Heuristic and special case algorithms for dispersion problems. Operations Research, 42(2):299–310, 1994.
- [38] Da-Wei Wang and Yue-Sun Kuo. A study on two geometric location problems. Information Processing Letters, 28(6):281–286, 1988.
- [39] Hao-Tsung Yang, Shih-Yu Tsai, Kin Sum Liu, Shan Lin, and Jie Gao. Patrol scheduling against adversaries with varying attack durations. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, pages 1179–1188, 2019.