

Deterministic Algorithm and Faster Algorithm for Submodular Maximization subject to a Matroid Constraint

Niv Buchbinder Department of Statistics and Operations Research, Tel Aviv University. E-mail: [email protected]    Moran Feldman Department of Computer Science, University of Haifa. E-mail: [email protected]
Abstract

We study the problem of maximizing a monotone submodular function subject to a matroid constraint, and present for it a deterministic non-oblivious local search algorithm that has an approximation guarantee of $1-1/e-\varepsilon$ (for any $\varepsilon>0$) and query complexity of $\tilde{O}_{\varepsilon}(nr)$, where $n$ is the size of the ground set and $r$ is the rank of the matroid. Our algorithm vastly improves over the previous state-of-the-art $0.5008$-approximation deterministic algorithm, and in fact, shows that there is no separation between the approximation guarantees that can be obtained by deterministic and randomized algorithms for the problem considered. The query complexity of our algorithm can be improved to $\tilde{O}_{\varepsilon}(n+r\sqrt{n})$ using randomization, which is nearly-linear for $r=O(\sqrt{n})$, and is always at least as good as that of the previous state-of-the-art algorithms.

Keywords: submodular maximization, matroid constraint, deterministic algorithm, fast algorithm

1 Introduction

The problem of maximizing a non-negative monotone submodular function subject to a matroid constraint is one of the earliest and most studied problems in submodular maximization. The natural greedy algorithm obtains the optimal approximation ratio of $1-1/e\approx 0.632$ for this problem when the matroid constraint is restricted to be a cardinality constraint [44, 45]. However, the situation turned out to be much more involved for general matroid constraints (or even partition matroid constraints). In 1978, Fisher et al. showed that the natural greedy algorithm guarantees a sub-optimal approximation ratio of $1/2$ for such constraints. This was improved (roughly 40 years later) by Călinescu et al. [16], who described a Continuous Greedy algorithm obtaining the optimal approximation ratio of $1-1/e$ for general matroid constraints. The Continuous Greedy algorithm was a celebrated major improvement over the state-of-the-art, but it has a significant drawback; namely, it is based on a continuous extension of set functions termed the multilinear extension. The only known way to evaluate the multilinear extension is by sampling sets from an appropriate distribution, which makes the continuous greedy algorithm (and related algorithms suggested since the work of Călinescu et al. [16]) both randomized and quite slow.

Badanidiyuru & Vondrák [1] were the first to study ways to improve over the time complexity of Călinescu et al. [16]. To understand their work, and works that followed it, we first need to mention that it is standard in the literature to assume that access to the submodular objective function and the matroid constraint is done via value and independence oracles (see Section 2 for details). The number of queries to these oracles used by the algorithm is often used as a proxy for its time complexity. Determining the exact time complexity is avoided since it requires taking care of technical details, such as the data structures used to maintain sets, which makes more sense in the context of particular classes of matroid constraints (see, for example, [24, 35]).

In their above-mentioned work, Badanidiyuru & Vondrák [1] gave a variant of the Continuous Greedy algorithm that obtains $(1-1/e-\varepsilon)$-approximation using $\tilde{O}_{\varepsilon}(nr)$ queries (the $\sim$ sign in $\tilde{O}_{\varepsilon}$ suppresses poly-logarithmic factors, and the $\varepsilon$ subscript suppresses factors depending only on $\varepsilon$), where $n$ is the size of the ground set, and $r$ is the rank of the matroid constraint (the maximum size of a feasible solution). Later, Buchbinder et al. [14] showed that, by combining the algorithm of [1] with the Random Greedy for Matroids algorithm of [12], it is possible to get the same approximation ratio using $\tilde{O}_{\varepsilon}(n\sqrt{r}+r^{2})$ queries. Very recently, an improved rounding technique allowed Kobayashi & Terao [38] to drop the $r^{2}$ term from the last query complexity, which is an improvement in the regime of $r=\omega(n^{2/3})$. While the query complexity of [38] is the state-of-the-art for general matroid constraints, some works have been able to further improve the query and time complexities to be nearly-linear (i.e., $\tilde{O}_{\varepsilon}(n)$) in the special cases of partition matroids, graphic matroids [24], laminar matroids and transversal matroids [35]. It was also shown by [38] that a query complexity of $\tilde{O}_{\varepsilon}(n+r^{3/2})$ suffices when we are given access to the matroid constraint via an oracle known as a rank oracle, which is a stronger oracle compared to the independence oracle assumed in this paper (and in the vast majority of the literature).

All the above results are randomized as they inherit the use of the multilinear extension from the basic Continuous Greedy algorithm. Filmus and Ward [30] were the first to try to tackle the inherent drawbacks of the multilinear extension by suggesting a non-oblivious local search algorithm (a local search algorithm is non-oblivious if the objective function it tries to optimize is an auxiliary function rather than the true objective function of the problem). Their algorithm is more combinatorial in nature as it does not require a rounding step. However, it is still randomized because the evaluation of its auxiliary objective function is done by sampling from an appropriate distribution (as is the case for the multilinear extension). To the best of our knowledge, the only known deterministic algorithm for the problem we consider whose approximation ratio improves over the $1/2$-approximation guarantee of the natural greedy algorithm is an algorithm due to Buchbinder et al. [11] that guarantees $0.5008$-approximation.

1.1 Our Contribution

We suggest an (arguably) simple deterministic algorithm for maximizing a non-negative monotone submodular function $f\colon 2^{\mathcal{N}}\to\mathbb{R}_{\geq 0}$ subject to a matroid constraint $\mathcal{M}=(\mathcal{N},\mathcal{I})$ (see Section 2 for the formal definitions of non-negative monotone submodular functions and matroid constraints). Our algorithm is a non-oblivious local search algorithm parametrized by a positive integer $\ell$ and additional positive values $\alpha_{1},\ldots,\alpha_{\ell}$. The algorithm maintains $\ell$ disjoint sets $S_{1},S_{2},\ldots,S_{\ell}$ whose union is a base of $\mathcal{M}$ (i.e., an inclusion-wise maximal feasible set), and aims to maximize an auxiliary function $g(S_{1},\ldots,S_{\ell})$. To state this function, it is useful to define, for any $J\subseteq[\ell]\triangleq\{1,2,\dotsc,\ell\}$, the shorthand $S_{J}\triangleq\dot{\cup}_{j\in J}S_{j}$ (the dot inside the union is only meant to emphasize that the sets over which the union is taken are disjoint). For convenience, we also define $\alpha_{0}\triangleq 0$. Then, the auxiliary function $g$ is given by

$$g(S_{1},\ldots,S_{\ell})=\sum_{J\subseteq[\ell]}\alpha_{|J|}\cdot f(S_{J})\,.$$
1. Start from any $\ell$ disjoint sets $S_{1},\ldots,S_{\ell}$ such that $S_{[\ell]}$ is a base of the matroid $\mathcal{M}$.
2. While some step from the following list strictly increases the value of $g(S_{1},\ldots,S_{\ell})$, perform such a step.
   • Step 1: Move some element $u\in S_{k}$ to a different set $S_{j}$.
   • Step 2: Remove some element $u\in S_{k}$, and add to any $S_{j}$ some element $v\in\mathcal{N}\setminus S_{[\ell]}$ such that $(S_{[\ell]}\setminus\{u\})\cup\{v\}\in\mathcal{I}$.
3. Return $S_{[\ell]}$.
Algorithm 1: Non-Oblivious Local Search
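To make the pseudocode above concrete, the following is a minimal Python sketch of Algorithm 1 (our own illustration, not part of the original paper). It assumes black-box oracles f (value) and is_independent (independence), a ground set ground, and coefficients alpha[0..ell] with alpha[0] = 0; its query complexity is, of course, not polynomial.

```python
from itertools import chain, combinations

def g_value(parts, f, alpha):
    """Auxiliary objective g: the sum over all J of alpha_|J| * f(S_J)."""
    ell = len(parts)
    all_J = chain.from_iterable(combinations(range(ell), k) for k in range(ell + 1))
    return sum(alpha[len(J)] * f(set().union(*(parts[j] for j in J)) if J else set())
               for J in all_J)

def non_oblivious_local_search(parts, ground, f, is_independent, alpha):
    """Idealized Algorithm 1: apply Steps 1 and 2 while g strictly increases."""
    while True:
        union = set().union(*parts)
        current = g_value(parts, f, alpha)
        improving = None
        for k in range(len(parts)):
            for u in parts[k]:
                # Step 1: move u from S_k to a different set S_j.
                for j in range(len(parts)):
                    if j != k:
                        cand = [set(p) for p in parts]
                        cand[k].remove(u)
                        cand[j].add(u)
                        if g_value(cand, f, alpha) > current:
                            improving = cand
                # Step 2: remove u and add an outside element v to some S_j,
                # provided the union of all the sets stays independent.
                for v in ground - union:
                    if is_independent((union - {u}) | {v}):
                        for j in range(len(parts)):
                            cand = [set(p) for p in parts]
                            cand[k].remove(u)
                            cand[j].add(v)
                            if g_value(cand, f, alpha) > current:
                                improving = cand
        if improving is None:
            return set().union(*parts)  # the output S_[ell], a base of M
        parts = improving
```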

The formal definition of our algorithm appears as Algorithm 1. Its output is the set $S_{[\ell]}$, which is clearly a base of the matroid $\mathcal{M}$. The approximation guarantee of Algorithm 1 is given by the next proposition. When $\ell=1$, Algorithm 1 reduces to the standard local search algorithm (applied to $f$), and Proposition 1 recovers in this case the known $1/2$-approximation guarantee of this local search algorithm due to [31]. On the other extreme, when $\ell=r$, Algorithm 1 reduces to a (non-sampling exponential time) version of the algorithm of Filmus and Ward [30].

Proposition 1.

Let $OPT\in\mathcal{I}$ be a set maximizing $f$. By setting $\alpha_{i}=\frac{(1+1/\ell)^{i-1}}{\binom{\ell-1}{i-1}}$ for every $i\in[\ell]$, it is guaranteed that

$$f(S_{[\ell]})\geq\left(1-(1+1/\ell)^{-\ell}\right)\cdot f(OPT)+(1+1/\ell)^{-\ell}\cdot f(\varnothing)\,.$$

Thus, for every $\varepsilon>0$, one can get $f(S_{[\ell]})\geq\left(1-1/e-O(\varepsilon)\right)\cdot f(OPT)$ by setting $\ell=1+\lceil 1/\varepsilon\rceil$.
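As a quick sanity check of the two extremes discussed above (a verification of ours, not part of the original analysis), note that

$$\ell=1:\quad 1-(1+1/1)^{-1}=\frac{1}{2}\,,\qquad\qquad \ell\to\infty:\quad 1-(1+1/\ell)^{-\ell}\;\longrightarrow\;1-\frac{1}{e}\approx 0.632\,,$$

so the guarantee of Proposition 1 interpolates between the $1/2$-approximation of the standard local search algorithm ($\ell=1$) and the optimal $1-1/e$ ratio.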

Unfortunately, the query complexity of Algorithm 1 is not guaranteed to be polynomial (for any value of $\ell$), which is often the case with local search algorithms that make changes even when these changes lead only to a small marginal improvement in the objective. To remedy this, we show how to implement a fast variant of Algorithm 1, which proves our main theorem.

Theorem 1.1.

There exists a deterministic non-oblivious local search algorithm for maximizing a non-negative monotone submodular function subject to a matroid constraint that has an approximation guarantee of $1-1/e-\varepsilon$ (for any $\varepsilon>0$) and query complexity of $\tilde{O}_{\varepsilon}(nr)$, where $r$ is the rank of the matroid and $n$ is the size of its ground set. The query complexity can be improved to $\tilde{O}_{\varepsilon}(n+r\sqrt{n})$ using randomization.

Our algorithm vastly improves over the previous state-of-the-art $0.5008$-approximation deterministic algorithm, and in fact, shows that there is no separation between the approximation guarantees that can be obtained by deterministic and randomized algorithms for maximizing monotone submodular functions subject to a general matroid constraint. In terms of the query complexity, our (randomized) algorithm is nearly-linear for $r=\tilde{O}(\sqrt{n})$, and its query complexity is always at least as good as the $\tilde{O}_{\varepsilon}(n\sqrt{r})$ query complexity of the previous state-of-the-art algorithm for general matroid constraints (due to [38]).

A central component of our algorithm is a procedure that, given a monotone submodular function $f\colon 2^{\mathcal{N}}\to\mathbb{R}_{\geq 0}$, a matroid $\mathcal{M}=(\mathcal{N},\mathcal{I})$, and a value $\varepsilon\in(0,1)$, outputs a set $S\in\mathcal{I}$ satisfying

$$\sum_{v\in T}f(v\mid S\setminus\{v\})-\sum_{u\in S}f(u\mid S\setminus\{u\})\leq\varepsilon\cdot f(OPT)\qquad\forall\, T\in\mathcal{I}\,, \tag{1}$$

where $f(v\mid S)\triangleq f(S\cup\{v\})-f(S)$ represents the marginal contribution of element $v$ to the set $S$. We note that the above set $S$ is not a local maximum of $f$ in the usual sense (see Remark 4.1). However, the weaker guarantee given by Inequality (1) suffices for our purposes, and can be obtained faster. The following proposition states the query complexity required to get such a set $S$. We believe that this proposition may be of independent interest.

Proposition 1.1.

Let $f\colon 2^{\mathcal{N}}\to\mathbb{R}_{\geq 0}$ be a non-negative monotone submodular function, and $\mathcal{M}=(\mathcal{N},\mathcal{I})$ be a matroid of rank $r$ over a ground set $\mathcal{N}$ of size $n$. Then, for any $\varepsilon\in(0,1)$,

  • there exists a deterministic algorithm that makes $\tilde{O}(\varepsilon^{-1}nr)$ value and independence oracle queries and outputs a subset $S\in\mathcal{I}$ satisfying Inequality (1);

  • there exists a randomized algorithm that makes $\tilde{O}(\varepsilon^{-1}(n+r\sqrt{n}))$ value and independence oracle queries and with probability at least $1-\varepsilon$ outputs a subset $S\in\mathcal{I}$ satisfying Inequality (1), and otherwise outputs “Fail”.

A previous version of this paper [9] used a slightly weaker version of Inequality (1) in which the term $f(v\mid S\setminus\{v\})$ was replaced with $f(v\mid S)$. This weaker version was sufficient for getting the main results of this paper, but had the intuitive weakness that a set $S\in\mathcal{I}$ obeying Inequality (1) with respect to a linear function $f$ was not necessarily close to maximizing $f$ among all the sets of $\mathcal{I}$. This weakness is important when one would like to maximize sums of linear and monotone submodular functions. Maximization of such sums can be used to maximize monotone submodular functions with bounded curvature [55] and to capture soft constraints and regularizers in machine learning applications [33, 37, 47]. See Appendix A for more detail.

Following this work, Buchbinder & Feldman [10] developed a greedy-based deterministic algorithm for maximizing general submodular functions subject to a matroid constraint. For the special case of monotone submodular functions, the algorithm of [10] recovers the optimal approximation guarantee of Theorem 1.1. Like our algorithm, the algorithm of [10] also has both deterministic and randomized versions. The deterministic version has a much larger query complexity compared to the query complexity stated in Theorem 1.1 for the deterministic case. However, the randomized version of the algorithm of [10] has a query complexity of $\tilde{O}_{\varepsilon}(n+r^{3/2})$, which is always at least as good as the query complexity of our algorithm.

1.2 Additional Related Work

As mentioned above, the standard greedy algorithm obtains the optimal approximation ratio of $1-1/e$ for the problem of maximizing a monotone submodular function subject to a cardinality constraint. It was long known that this algorithm can be sped up in practice using lazy evaluations, and Badanidiyuru and Vondrák [1] were able to use this idea to reduce the time complexity of the greedy algorithm to be nearly linear. Later, Mirzasoleiman et al. [42] and Buchbinder et al. [14] were able to reduce the time complexity to be cleanly linear at the cost of using randomization and deteriorating the approximation ratio by $\varepsilon$. Obtaining the same time complexity and approximation ratio using a deterministic algorithm was an open question that was recently settled by Kuhnle [39], Li et al. [40] and Huang & Kakimura [36].

We have discussed above works that aim to speed up the maximization of a monotone submodular function subject to a matroid constraint in the standard offline computational model. Other works have tried to get such a speed up by considering different computational models. Mirzasoleiman et al. [43] initiated the study of submodular maximization under a distributed (or Map-Reduce like) computational model. Barbosa et al. [49, 50] then showed that the guarantees of both the natural greedy algorithm and Continuous Greedy can be obtained in this computational model (see also [32] for a variant of the algorithm of [50] that has a reduced memory requirement). Balkanski & Singer [3] initiated the study of algorithms that are parallelizable in a different sense. Specifically, they looked for submodular maximization algorithms whose objective function evaluations can be batched into a small number of sets such that the evaluations that should belong to each set can be determined based only on the results of evaluations from previous sets, which means that the evaluations of each set can be computed in parallel. Algorithms that have this property are called low adaptivity algorithms, and such algorithms guaranteeing $(1-1/e-\varepsilon)$-approximation for maximizing a monotone submodular function subject to a matroid constraint were obtained by Balkanski et al. [2], Chekuri & Quanrud [21], and Ene et al. [25]. Interestingly, Li et al. [41] proved that no low adaptivity algorithm can get a clean $(1-1/e)$-approximation for the above problem, and getting such an approximation guarantee requires a polynomial number of evaluation sets.

Online and streaming versions of submodular maximization problems have been studied since the works of Buchbinder et al. [15] and Chakrabarti & Kale [17], respectively. The main motivations for these studies were to get algorithms that either can make decisions under uncertainty, or can process the input sequentially using a low space complexity. However, online and streaming algorithms also tend to be very fast. Specifically, the streaming algorithm of Feldman et al. [28] guarantees $0.3178$-approximation for maximizing a monotone submodular function subject to a matroid constraint using a nearly-linear query complexity. The same approximation ratio was obtained by an online algorithm in the special cases of partition matroid and cardinality constraints [19]. See also [20, 34] for streaming algorithms that can handle intersections of multiple matroid constraints.

We conclude this section by mentioning the rich literature on maximization of submodular functions that are not guaranteed to be monotone. Following a long line of work [7, 22, 23, 29], the state-of-the-art algorithm for the problem of maximizing a (not necessarily monotone) submodular function subject to a matroid constraint guarantees $0.401$-approximation [8]. Oveis Gharan and Vondrák [48] proved that no sub-exponential time algorithm can obtain a better than $0.478$-approximation for this problem even when the matroid is a partition matroid, and recently, Qi [51] showed that the same inapproximability result applies even to the special case of a cardinality constraint. For the even more special case of unconstrained maximization of a (not necessarily monotone) submodular function, Feige et al. [26] showed that no sub-exponential time algorithm can obtain a better than $1/2$-approximation, and Buchbinder et al. [13] gave a matching randomized approximation algorithm, which was later derandomized by Buchbinder & Feldman [6].

Paper Structure.

Section 2 formally defines the problem we consider and the notation that we use, as well as introduces some known results that we employ. Section 3 analyzes Algorithm 1, and then presents and analyzes the variant of this algorithm used to prove Theorem 1.1. The analysis in Section 3 employs Proposition 1.1, whose proof appears in Section 4.

2 Preliminaries

In this section, we formally define the problem we consider in this paper. We also introduce the notation used in the paper and some previous results that we employ.

Let $f\colon 2^{\mathcal{N}}\to\mathbb{R}$ be a set function. A function $f$ is non-negative if $f(S)\geq 0$ for every set $S\subseteq\mathcal{N}$, and it is monotone if $f(S)\leq f(T)$ for every two sets $S\subseteq T\subseteq\mathcal{N}$. Recall that we use the shorthand $f(u\mid S)\triangleq f(S\cup\{u\})-f(S)$ to denote the marginal contribution of element $u$ to the set $S$. Similarly, given two sets $S,T\subseteq\mathcal{N}$, we use $f(T\mid S)\triangleq f(S\cup T)-f(S)$ to denote the marginal contribution of $T$ to $S$. A function $f$ is submodular if $f(S)+f(T)\geq f(S\cap T)+f(S\cup T)$ for every two sets $S,T\subseteq\mathcal{N}$. The following is another definition of submodularity, which is known to be equivalent to the above one.

Definition 2.1.

A set function $f\colon 2^{\mathcal{N}}\to\mathbb{R}$ is submodular if $f(u\mid S)\geq f(u\mid T)$ for every two sets $S\subseteq T\subseteq\mathcal{N}$ and element $u\in\mathcal{N}\setminus T$.
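To illustrate Definition 2.1, here is a small self-contained Python check (our own example; the coverage function and its areas are hypothetical). Coverage functions are a standard example of non-negative monotone submodular functions, and the script verifies the diminishing-returns condition of Definition 2.1 exhaustively on a tiny instance.

```python
from itertools import chain, combinations

# A small coverage function: f(S) is the size of the union of the areas
# covered by the elements of S. Coverage functions are monotone submodular.
AREAS = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}}

def f(S):
    return len(set().union(*(AREAS[u] for u in S)) if S else set())

def marginal(u, S):
    # f(u | S) = f(S + u) - f(S)
    return f(S | {u}) - f(S)

def powerset(ground):
    s = list(ground)
    return chain.from_iterable(combinations(s, k) for k in range(len(s) + 1))

# Check Definition 2.1: f(u | S) >= f(u | T) whenever S is a subset of T
# and u lies outside T.
ground = set(AREAS)
for T in map(set, powerset(ground)):
    for S in map(set, powerset(T)):
        for u in ground - T:
            assert marginal(u, S) >= marginal(u, T)
print("diminishing-returns property verified on all pairs S, T")
```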

An independence system is a pair $(\mathcal{N},\mathcal{I})$, where $\mathcal{N}$ is a ground set and $\mathcal{I}\subseteq 2^{\mathcal{N}}$ is a non-empty collection of subsets of $\mathcal{N}$ that is down-closed in the sense that $T\in\mathcal{I}$ implies that every set $S\subseteq T$ also belongs to $\mathcal{I}$. Following the terminology used for linear spaces, it is customary to refer to the sets of $\mathcal{I}$ as independent sets. Similarly, independent sets that are inclusion-wise maximal (i.e., they are not subsets of other independent sets) are called bases, and the size of the largest base is called the rank of the independence system. Throughout the paper, we use $n$ to denote the size of the ground set $\mathcal{N}$ and $r$ to denote the rank of the independence system.

A matroid is an independence system that also obeys the following exchange axiom: if $S,T$ are two independent sets such that $|S|<|T|$, then there must exist an element $u\in T\setminus S$ such that $S\cup\{u\}\in\mathcal{I}$. Matroids have been extensively studied as they capture many cases of interest, and at the same time, enjoy a rich theoretical structure (see, e.g., [53]). However, we mention here only two results about them. The first result, which is an immediate corollary of the exchange axiom, is that all the bases of a matroid are of the same size (and thus, every independent set of this size is a base). The other result is given by the next lemma. In this lemma (and throughout the paper), given a set $S$ and an element $u$, we use $S+u$ and $S-u$ as shorthands for $S\cup\{u\}$ and $S\setminus\{u\}$, respectively.
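For concreteness, the following is a minimal Python sketch of an independence oracle for a partition matroid, one of the special cases mentioned in Section 1 (the instance itself is hypothetical): the ground set is partitioned into blocks, and a set is independent if it contains at most a given number of elements from each block.

```python
# Independence oracle for a partition matroid (hypothetical instance).
BLOCK = {"a": 0, "b": 0, "c": 1, "d": 1, "e": 1}   # element -> block id
CAPACITY = {0: 1, 1: 2}                             # block id -> capacity

def is_independent(S):
    counts = {}
    for u in S:
        b = BLOCK[u]
        counts[b] = counts.get(b, 0) + 1
        if counts[b] > CAPACITY[b]:
            return False
    return True

assert is_independent({"a", "c", "d"})   # respects both block capacities
assert not is_independent({"a", "b"})    # two elements from block 0
```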

Lemma 2.1 (Proved by [5], and can also be found as Corollary 39.12a in [53]).

Let $A$ and $B$ be two bases of a matroid $\mathcal{M}=(\mathcal{N},\mathcal{I})$. Then, there exists a bijection $h\colon A\setminus B\to B\setminus A$ such that for every $u\in A\setminus B$, $(B-h(u))+u\in\mathcal{I}$.

One can extend the domain of the function $h$ from the last lemma to the entire set $A$ by defining $h(u)=u$ for every $u\in A\cap B$. This yields the following corollary.

Corollary 2.2.

Let $A$ and $B$ be two bases of a matroid $\mathcal{M}=(\mathcal{N},\mathcal{I})$. Then, there exists a bijection $h\colon A\to B$ such that $(B-h(u))+u\in\mathcal{I}$ for every $u\in A$, and $h(u)=u$ for every $u\in A\cap B$.

In this paper, we study the following problem. Given a non-negative monotone submodular function $f\colon 2^{\mathcal{N}}\to\mathbb{R}_{\geq 0}$ and a matroid $\mathcal{M}=(\mathcal{N},\mathcal{I})$ over the same ground set, we would like to find an independent set of $\mathcal{M}$ that maximizes $f$ among all independent sets of $\mathcal{M}$. As mentioned above, it is standard in the literature to assume access to the objective function $f$ and the matroid $\mathcal{M}$ via two oracles. The first oracle is called a value oracle; given a set $S\subseteq\mathcal{N}$, it returns $f(S)$. The other oracle is called an independence oracle; given a set $S\subseteq\mathcal{N}$, it indicates whether $S\in\mathcal{I}$. When the function $f$ happens to be linear, a well-known greedy algorithm can be used to exactly solve the problem we consider using $O(n)$ value and independence oracle queries (see, e.g., Theorem 40.1 of [53]). Recall that our main result is a non-oblivious local search algorithm guaranteeing $(1-1/e-\varepsilon)$-approximation for the general case of this problem. To reduce the time required for this algorithm to converge, we find a good starting point for it using the algorithm given by the next lemma, which obtains a worse approximation ratio for the same problem.
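As an aside, the classical greedy algorithm for the linear case mentioned above can be sketched in a few lines of Python (our own sketch, assuming the oracle model just described; weight and is_independent stand for the two oracles):

```python
def greedy_linear(ground, weight, is_independent):
    """Exact maximization of a nonnegative linear function over a matroid.

    Scans elements by nonincreasing weight and keeps an element whenever
    adding it preserves independence; this uses O(n) oracle queries in
    total (one weight query and at most one independence query per element).
    """
    S = set()
    for u in sorted(ground, key=weight, reverse=True):
        if weight(u) <= 0:
            break                      # remaining elements cannot help
        if is_independent(S | {u}):
            S.add(u)
    return S
```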

Lemma 2.3 (Lemma 3.2 in [14], based on [1]).

Given a non-negative monotone submodular function $f$ and a matroid $\mathcal{M}=(\mathcal{N},\mathcal{I})$, there exists a deterministic $1/3$-approximation algorithm for $\max\{f(S)\mid S\in\mathcal{I}\}$ that has a query complexity of $O(n\log r)$.

3 Algorithm

In this section, we present and analyze the algorithm used to prove our main result (Theorem 1.1). We do that in two steps. First, in Section 3.1, we analyze Algorithm 1, which is a simple idealized version of our final algorithm. Algorithm 1 does not run in polynomial time, but its analysis conveys the main ideas used in the analysis of our final algorithm. Then, in Section 3.2, we show our final algorithm, which uses Proposition 1.1, and has all the properties guaranteed by Theorem 1.1.

3.1 Idealized Algorithm

In this section, we analyze the idealized version of our final algorithm that is given above as Algorithm 1. In particular, we show that this idealized version guarantees $(1-1/e-\varepsilon)$-approximation (but, unfortunately, its time complexity is not guaranteed to be polynomial). Recall that Algorithm 1 is parametrized by a positive integer $\ell$ and additional positive values $\alpha_{1},\ldots,\alpha_{\ell}$ to be determined later (we also define $\alpha_{0}\triangleq 0$ for convenience). Algorithm 1 is a non-oblivious local search algorithm whose solution consists of $\ell$ disjoint sets $S_{1},S_{2},\ldots,S_{\ell}$ whose union is a base of $\mathcal{M}$. The local search is guided by an auxiliary function

$$g(S_{1},\ldots,S_{\ell})=\sum_{J\subseteq[\ell]}\alpha_{|J|}\cdot f(S_{J})\,,$$

where $S_{J}\triangleq\dot{\cup}_{j\in J}S_{j}$ for any $J\subseteq[\ell]$. Intuitively, this objective function favors solutions that are non-fragile in the sense that removing a part of the solution has (on average) a relatively small effect on the solution's value.

The output set of Algorithm 1 is the set $S_{[\ell]}$, which is clearly a base of the matroid $\mathcal{M}$. The rest of this section is devoted to analyzing the approximation guarantee of Algorithm 1. We begin with the following simple observation, which follows from the fact that Algorithm 1 terminates with a local maximum with respect to $g$. In this observation, and all the other claims in this section, the sets mentioned represent their final values, i.e., their values when Algorithm 1 terminates.

Observation 3.1.

For every $u\in S_{k}$ and $j\in[\ell]$, it holds that

$$\sum_{J\subseteq[\ell],\, k\in J,\, j\notin J}\alpha_{|J|}\cdot f(u\mid S_{J}-u)\geq\sum_{J\subseteq[\ell],\, k\notin J,\, j\in J}\alpha_{|J|}\cdot f(u\mid S_{J})\,. \tag{2}$$

Furthermore, for any $v\in\mathcal{N}\setminus S_{[\ell]}$ such that $S_{[\ell]}-u+v\in\mathcal{I}$, it also holds that

$$\sum_{J\subseteq[\ell],\, k\in J}\alpha_{|J|}\cdot f(u\mid S_{J}-u)\geq\sum_{J\subseteq[\ell],\, j\in J}\alpha_{|J|}\cdot f(v\mid S_{J})\,. \tag{3}$$
Proof.

Inequality (2) is an immediate consequence of the fact that, when Algorithm 1 terminates, Step 1 cannot increase $g$. To get Inequality (3), note that since Step 2 of Algorithm 1 cannot increase $g$ either,

$$\begin{aligned}
\sum_{J\subseteq[\ell],\, k\in J,\, j\notin J}\alpha_{|J|}\cdot f(u\mid S_{J}-u)&\geq\sum_{J\subseteq[\ell],\, k\notin J,\, j\in J}\alpha_{|J|}\cdot f(v\mid S_{J})+\sum_{J\subseteq[\ell],\, k\in J,\, j\in J}\alpha_{|J|}\cdot\left[f(S_{J}-u+v)-f(S_{J})\right]\\
&\geq\sum_{J\subseteq[\ell],\, k\notin J,\, j\in J}\alpha_{|J|}\cdot f(v\mid S_{J})+\sum_{J\subseteq[\ell],\, k\in J,\, j\in J}\alpha_{|J|}\cdot\left[f(S_{J}+v)+f(S_{J}-u)-2\cdot f(S_{J})\right]\,,
\end{aligned}$$

where the last inequality holds since the submodularity of $f$ guarantees that $f(S_{J}-u+v)\geq f(S_{J}+v)+f(S_{J}-u)-f(S_{J})$. Inequality (3) now follows by rearranging the last inequality. ∎

The next lemma uses the previous observation to get an inequality relating a linear combination of the values of the sets $S_{J}$ to the value of the optimal solution. Note that since $f$ is monotone, there must be an optimal solution for our problem that is a base of $\mathcal{M}$. Let us denote such an optimal solution by $OPT$.

Lemma 3.2.

Let $\alpha_{\ell+1}\triangleq 0$. Then,

$$\sum_{J\subseteq[\ell]}\Big[\alpha_{|J|}\cdot|J|\cdot\Big(1+\frac{1}{\ell}\Big)-\alpha_{|J|+1}\cdot(\ell-|J|)\Big]\cdot f(S_{J})\geq\sum_{i=1}^{\ell}\binom{\ell-1}{i-1}\cdot\alpha_{i}\cdot f(OPT)\,.$$
Proof.

Since $OPT$ and $S_{[\ell]}$ are both bases of $\mathcal{M}$, Corollary 2.2 guarantees that there exists a bijective function $h\colon S_{[\ell]}\to OPT$ such that $S_{[\ell]}-u+h(u)\in\mathcal{I}$ for every $u\in S_{[\ell]}$ and $h(u)=u$ for every $u\in S_{[\ell]}\cap OPT$. We claim that the following inequality holds for every element $u\in S_{k}$ and integer $j\in[\ell]$.

$$\sum_{J\subseteq[\ell],\, k\in J}\alpha_{|J|}\cdot f(u\mid S_{J}-u)\geq\sum_{J\subseteq[\ell],\, j\in J}\alpha_{|J|}\cdot f(h(u)\mid S_{J})\,. \tag{4}$$

If $u\notin OPT$, then the last inequality is an immediate corollary of Inequality (3) since $h(u)\notin S_{[\ell]}$. Thus, we only need to consider the case of $u\in OPT$. To handle this case, we notice that the monotonicity of $f$ implies that

$$\sum_{J\subseteq[\ell],\, k\in J,\, j\in J}\alpha_{|J|}\cdot f(u\mid S_{J}-u)\geq 0=\sum_{J\subseteq[\ell],\, k\in J,\, j\in J}\alpha_{|J|}\cdot f(u\mid S_{J})\,,$$

and adding this inequality to Inequality (2) yields

$$\sum_{J\subseteq[\ell],\, k\in J}\alpha_{|J|}\cdot f(u\mid S_{J}-u)\geq\sum_{J\subseteq[\ell],\, j\in J}\alpha_{|J|}\cdot f(u\mid S_{J})\,,$$

which implies Inequality (4) since $h(u)=u$ when $u\in S_{k}\cap OPT\subseteq S_{[\ell]}\cap OPT$.

Averaging Inequality (4) over all $j\in[\ell]$, we get

$$\sum_{J\subseteq[\ell],\, k\in J}\alpha_{|J|}\cdot f(u\mid S_{J}-u)\geq\sum_{J\subseteq[\ell]}\frac{|J|}{\ell}\cdot\alpha_{|J|}\cdot f(h(u)\mid S_{J})\,,$$

and adding this inequality up over all $u\in S_{[\ell]}$ implies

$$\begin{aligned}
\sum_{J\subseteq[\ell]}\left[\alpha_{|J|}\cdot|J|-\alpha_{|J|+1}\cdot(\ell-|J|)\right]\cdot f(S_{J})&=\sum_{k\in[\ell]}\sum_{J\subseteq[\ell],\, k\in J}\alpha_{|J|}\cdot[f(S_{J})-f(S_{J\setminus\{k\}})]\\
&\geq\sum_{k\in[\ell]}\sum_{J\subseteq[\ell],\, k\in J}\alpha_{|J|}\cdot\Big(\sum_{u\in S_{k}}f(u\mid S_{J}-u)\Big)\\
&\geq\sum_{J\subseteq[\ell]}\frac{|J|}{\ell}\cdot\alpha_{|J|}\cdot\Big(\sum_{u\in S_{[\ell]}}f(h(u)\mid S_{J})\Big)\\
&\geq\sum_{J\subseteq[\ell]}\frac{|J|}{\ell}\cdot\alpha_{|J|}\cdot f(OPT\mid S_{J})\\
&\geq\sum_{J\subseteq[\ell]}\frac{|J|}{\ell}\cdot\alpha_{|J|}\cdot\left[f(OPT)-f(S_{J})\right]\,,
\end{aligned}$$

where the first and penultimate inequalities hold by the submodularity of $f$, and the last inequality follows from $f$'s monotonicity. The lemma now follows by rearranging the last inequality since

$$\sum_{J\subseteq[\ell]}\frac{|J|}{\ell}\cdot\alpha_{|J|}\cdot f(OPT)=\sum_{i=1}^{\ell}\binom{\ell}{i}\cdot\frac{i}{\ell}\cdot\alpha_{i}\cdot f(OPT)=\sum_{i=1}^{\ell}\binom{\ell-1}{i-1}\cdot\alpha_{i}\cdot f(OPT)\,.\quad ∎$$

To get an approximation guarantee for Algorithm 1 from the last lemma, we need to make the coefficient of $f(S_{J})$ zero for every $J\notin\{\varnothing,[\ell]\}$. The proof of the next proposition does that by choosing appropriate values for the parameters $\alpha_{1},\alpha_{2},\dotsc,\alpha_{\ell}$.

Proposition 1 (restated).

Let $OPT\in\mathcal{I}$ be a set maximizing $f$. By setting $\alpha_{i}=\frac{(1+1/\ell)^{i-1}}{\binom{\ell-1}{i-1}}$ for every $i\in[\ell]$, it is guaranteed that

$$f(S_{[\ell]})\geq\left(1-(1+1/\ell)^{-\ell}\right)\cdot f(OPT)+(1+1/\ell)^{-\ell}\cdot f(\varnothing)\,.$$

Thus, for every $\varepsilon>0$, one can get $f(S_{[\ell]})\geq\left(1-1/e-O(\varepsilon)\right)\cdot f(OPT)$ by setting $\ell=1+\lceil 1/\varepsilon\rceil$.

Proof.

The second part of the proposition follows from the first part by the non-negativity of $f$ and the observation that, for $\ell\geq 2$, we have

$$(1+1/\ell)^{-\ell}\leq\frac{1}{e(1-1/\ell)}\leq\frac{1}{e}\Big(1+\frac{2}{\ell}\Big)=\frac{1}{e}+O(\varepsilon)\,.$$

Thus, we concentrate on proving the first part of the proposition.

The values we have chosen for $\alpha_{i}$ imply that, for every $i\in[\ell-1]$,

$$\alpha_{i+1}=\frac{(1+1/\ell)^{i}}{\binom{\ell-1}{i}}=\Big(1+\frac{1}{\ell}\Big)\cdot\frac{(1+1/\ell)^{i-1}}{\binom{\ell-1}{i-1}\cdot\frac{\ell-i}{i}}=\Big(1+\frac{1}{\ell}\Big)\cdot\frac{i}{\ell-i}\cdot\alpha_{i}\,.$$

Thus, for every $i\in[\ell-1]$, we have the equality $\alpha_{i}\cdot i\cdot(1+1/\ell)-\alpha_{i+1}\cdot(\ell-i)=0$, and plugging this equality into Lemma 3.2 yields

$$\begin{aligned}
\alpha_{\ell}\cdot(\ell+1)\cdot f(S_{[\ell]})-\alpha_{1}\cdot\ell\cdot f(\varnothing)&=\sum_{J\subseteq[\ell]}\Big[\alpha_{|J|}\cdot|J|\cdot\Big(1+\frac{1}{\ell}\Big)-\alpha_{|J|+1}\cdot(\ell-|J|)\Big]\cdot f(S_{J})\\
&\geq\sum_{i=1}^{\ell}(1+1/\ell)^{i-1}\cdot f(OPT)=\ell\cdot[(1+1/\ell)^{\ell}-1]\cdot f(OPT)\,.
\end{aligned}$$

The proposition now follows by plugging the values $\alpha_{\ell}=(1+1/\ell)^{\ell-1}$ and $\alpha_{1}=1$ into the last inequality, and rearranging. ∎

3.2 Final Algorithm

In this section, we describe the final version of our algorithm, which is similar to the idealized version (Algorithm 1) studied in Section 3.1, but uses Proposition 1.1 to get the query complexities stated in Theorem 1.1.

Algorithm 1 maintains $\ell$ disjoint sets $S_{1},S_{2},\dotsc,S_{\ell}$, and accordingly, the auxiliary function $g$ used to guide it accepts these $\ell$ sets as parameters. To use Proposition 1.1, we need to replace this auxiliary function with a function accepting only a single parameter. The first step towards this goal is defining a new ground set $\mathcal{N}'\triangleq\mathcal{N}\times[\ell]$. Intuitively, every pair $(u,i)$ in this ground set represents the assignment of element $u$ to $S_{i}$. Given this intuition, we get a new auxiliary function $g'\colon 2^{\mathcal{N}'}\to\mathbb{R}_{\geq 0}$ as follows. For every set $S\subseteq\mathcal{N}'$, let $\pi_{J}(S)\triangleq\{u\in\mathcal{N}\mid\exists\, i\in J\;(u,i)\in S\}$ be the set of elements that intuitively should be in the set $\cup_{i\in J}S_{i}$ according to $S$. Then, for every $S\subseteq\mathcal{N}'$,

$$g'(S)\triangleq g\left(\pi_{1}(S),\ldots,\pi_{\ell}(S)\right)=\sum_{J\subseteq[\ell]}\alpha_{|J|}\cdot f(\pi_{J}(S))\,.$$

We also need to lift the matroid $\mathcal{M}$ to a new matroid over the ground set $\mathcal{N}'$. Since the sets $S_{1},S_{2},\dotsc,S_{\ell}$ should be kept disjoint, for every $u\in\mathcal{N}$, the elements of $\{(u,i)\mid i\in[\ell]\}$ should be parallel elements in the new matroid (elements $u$ and $v$ of a matroid are parallel if (i) they cannot appear together in any independent set of the matroid, and (ii) for every independent set $S$ that contains $u$ (respectively, $v$), the set $S-u+v$ (respectively, $S-v+u$) is also independent). Thus, it is natural to define the new matroid as $\mathcal{M}'=(\mathcal{N}',\mathcal{I}')$, where $\mathcal{I}'\triangleq\{S\subseteq\mathcal{N}'\mid\forall\, u\in\mathcal{N}\;|S\cap(\{u\}\times[\ell])|\leq 1\text{ and }\pi_{[\ell]}(S)\in\mathcal{I}\}$ (we prove below that $\mathcal{M}'$ is indeed a matroid).

Notice that a value oracle query to $g'$ can be implemented using $2^{\ell}$ value oracle queries to $f$, and an independence oracle query to $\mathcal{M}'$ can be implemented using a single independence oracle query to $\mathcal{M}$. The following two lemmata prove additional properties of $g'$ and $\mathcal{M}'$, respectively.
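The following is a minimal Python sketch of these two reductions (our own illustration; f and is_independent stand for the oracles of the original instance, ell for the parameter $\ell$, and alpha for the coefficient vector with alpha[0] = 0).

```python
from itertools import chain, combinations

def subsets(indices):
    idx = list(indices)
    return chain.from_iterable(combinations(idx, k) for k in range(len(idx) + 1))

def pi(J, S):
    # pi_J(S): the original elements assigned by S to some index in J.
    return {u for (u, i) in S if i in J}

def g_prime(S, f, ell, alpha):
    # One g' query costs 2^ell f-queries: one per subset J of [ell].
    return sum(alpha[len(J)] * f(pi(set(J), S))
               for J in subsets(range(1, ell + 1)))

def is_independent_prime(S, is_independent):
    # One M' query costs a single M query, plus a disjointness check
    # (at most one copy (u, i) of every original element u).
    elements = [u for (u, i) in S]
    return len(elements) == len(set(elements)) and is_independent(set(elements))
```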

Lemma 3.3.

The function $g'$ is non-negative, monotone and submodular.

Proof.

The non-negativity and monotonicity of $g'$ follow immediately from its definition and the fact that $f$ has these properties. Thus, we concentrate on proving that $g'$ is submodular. For every two sets $S\subseteq T\subseteq\mathcal{N}'$ and element $v=(u,i)\notin T$,

$$g'(v\mid S)=\sum_{J\subseteq[\ell],\, i\in J}\alpha_{|J|}\cdot f(u\mid\pi_{J}(S))\geq\sum_{J\subseteq[\ell],\, i\in J}\alpha_{|J|}\cdot f(u\mid\pi_{J}(T))=g'(v\mid T)\,,$$

where the inequality holds separately for each set $J$. If $u\notin\pi_{J}(T)$, then the inequality holds for this $J$ by the submodularity of $f$, and otherwise, it holds by the monotonicity of $f$. ∎

Lemma 3.4.

The above defined $\mathcal{M}'$ is a matroid, and it has the same rank as $\mathcal{M}$.

Proof.

The fact that $\mathcal{M}'$ is an independence system follows immediately from its definition and the fact that $\mathcal{M}$ is an independence system. Thus, to prove that $\mathcal{M}'$ is a matroid, it suffices to show that it obeys the exchange axiom of matroids. Consider two sets $S,T\in\mathcal{I}'$ such that $|S|<|T|$. The membership of $S$ and $T$ in $\mathcal{I}'$ guarantees that $\pi_{[\ell]}(S),\pi_{[\ell]}(T)\in\mathcal{I}$, $|\pi_{[\ell]}(S)|=|S|$ and $|\pi_{[\ell]}(T)|=|T|$. Thus, since $\mathcal{M}$ is a matroid, there must exist an element $u\in\pi_{[\ell]}(T)\setminus\pi_{[\ell]}(S)$ such that $\pi_{[\ell]}(S)+u\in\mathcal{I}$. By the definition of $\pi_{[\ell]}$, the membership of $u$ in $\pi_{[\ell]}(T)$ implies that there is an index $i\in[\ell]$ such that $(u,i)\in T$. Additionally, $(u,i)\notin S$ since $u\notin\pi_{[\ell]}(S)$, and one can verify that $S+(u,i)\in\mathcal{I}'$. This completes the proof that $\mathcal{M}'$ obeys the exchange axiom of matroids.

It remains to prove that $\mathcal{M}'$ has the same rank as $\mathcal{M}$. On the one hand, we notice that for every set $S\in\mathcal{I}$, we have that $S\times\{1\}\in\mathcal{I}'$. Thus, the rank of $\mathcal{M}'$ is not smaller than the rank of $\mathcal{M}$. On the other hand, consider a set $S\in\mathcal{I}'$. The fact that $S\in\mathcal{I}'$ implies that $|\pi_{[\ell]}(S)|=|S|$ and $\pi_{[\ell]}(S)\in\mathcal{I}$, and thus, the rank of $\mathcal{M}$ is also not smaller than the rank of $\mathcal{M}'$. ∎

We are now ready to present our final algorithm, which is given as Algorithm 2. This algorithm has a single positive integer parameter $\ell$. As mentioned in Section 1.1, the set $S$ obtained by Algorithm 2 is not a local maximum of $g'$ in the same sense that the output set of Algorithm 1 is a local maximum of $g$ (see Remark 4.1). Nevertheless, Lemma 3.5 shows that the set $S$ obeys basically the same approximation guarantee (recall that $OPT$ is a feasible solution maximizing $f$).

1. Define the ground set $\mathcal{N}'$, the function $g'$ and the matroid $\mathcal{M}'$ as described in Section 3.2. In the function $g'$, set the parameters $\alpha_{1},\alpha_{2},\dotsc,\alpha_{\ell}$ as in Proposition 1.
2. Let $\varepsilon'\leftarrow\varepsilon/(e(1+\ln\ell))$.
3. Use either the deterministic or the randomized algorithm from Proposition 1.1 to find a set $S\in\mathcal{I}'$ that with probability at least $1-\varepsilon'$ obeys
$$\sum_{v\in T}g'(v\mid S-v)-\sum_{u\in S}g'(u\mid S-u)\leq\varepsilon'\cdot g'(OPT')\qquad\forall\, T\in\mathcal{I}'\,, \tag{5}$$
where $OPT'$ is a set of $\mathcal{I}'$ maximizing $g'$. If the randomized algorithm fails, set $S\leftarrow\varnothing$.
4. Return $\pi_{[\ell]}(S)$.
Algorithm 2: Fast Non-Oblivious Local Search
Lemma 3.5.

If a set $S\subseteq\mathcal{N}'$ obeys Inequality (5), then

$$f(\pi_{[\ell]}(S))\geq\big(1-(1+1/\ell)^{-\ell}\big)\cdot f(OPT)+(1+1/\ell)^{-\ell}\cdot f(\varnothing)-\varepsilon\cdot f(OPT)\,.$$
Proof.

For every $j\in[\ell]$, we have $OPT\times\{j\}\in\mathcal{I}'$. Thus, Inequality (5) implies that

$$\begin{aligned}
\sum_{u\in OPT}\sum_{J\subseteq[\ell],\, j\in J}\alpha_{|J|}\cdot f(u\mid\pi_{J}(S))&=\sum_{u\in OPT}\sum_{J\subseteq[\ell]}\alpha_{|J|}\cdot[f(\pi_{J}(S+(u,j)))-f(\pi_{J}(S))]\\
&=\sum_{u\in OPT}g'((u,j)\mid S)\leq\sum_{u\in OPT}g'((u,j)\mid S-(u,j))\\
&\leq\sum_{(u,k)\in S}g'((u,k)\mid S-(u,k))+\varepsilon'\cdot g'(OPT')\\
&=\sum_{(u,k)\in S}\sum_{J\subseteq[\ell],\, k\in J}\alpha_{|J|}\cdot f(u\mid\pi_{J}(S)-u)+\varepsilon'\cdot g'(OPT')\,,
\end{aligned}$$

where the first inequality holds by the monotonicity of $g'$. Averaging this inequality over all $j\in[\ell]$ now yields

$$\sum_{(u,k)\in S}\sum_{J\subseteq[\ell],\, k\in J}\alpha_{|J|}\cdot f(u\mid\pi_{J}(S)-u)\geq\sum_{u\in OPT}\sum_{J\subseteq[\ell]}\frac{|J|}{\ell}\cdot\alpha_{|J|}\cdot f(u\mid\pi_{J}(S))-\varepsilon'\cdot g'(OPT')\,,$$

which implies

$$\begin{aligned}
\sum_{J\subseteq[\ell]}\left[\alpha_{|J|}\cdot|J|-\alpha_{|J|+1}\cdot(\ell-|J|)\right]\cdot f(\pi_{J}(S))&=\sum_{k\in[\ell]}\sum_{J\subseteq[\ell],\, k\in J}\alpha_{|J|}\cdot[f(\pi_{J}(S))-f(\pi_{J\setminus\{k\}}(S))]\\
&\geq\sum_{k\in[\ell]}\sum_{J\subseteq[\ell],\, k\in J}\alpha_{|J|}\cdot\Big(\sum_{(u,k)\in S}f(u\mid\pi_{J}(S)-u)\Big)\\
&=\sum_{(u,k)\in S}\sum_{J\subseteq[\ell],\, k\in J}\alpha_{|J|}\cdot f(u\mid\pi_{J}(S)-u)\\
&\geq\sum_{u\in OPT}\sum_{J\subseteq[\ell]}\frac{|J|}{\ell}\cdot\alpha_{|J|}\cdot f(u\mid\pi_{J}(S))-\varepsilon'\cdot g'(OPT')\\
&\geq\sum_{J\subseteq[\ell]}\frac{|J|}{\ell}\cdot\alpha_{|J|}\cdot f(OPT\mid\pi_{J}(S))-\varepsilon'\cdot g'(OPT')\\
&\geq\sum_{J\subseteq[\ell]}\frac{|J|}{\ell}\cdot\alpha_{|J|}\cdot\left[f(OPT)-f(\pi_{J}(S))\right]-\varepsilon'\cdot g'(OPT')\,.
\end{aligned}$$

The last inequality appears in the proof of Lemma 3.2, except for the error term $-\varepsilon'\cdot g'(OPT')$ and a replacement of $S_{J}$ with $\pi_{J}(S)$. Continuing with the proof of Lemma 3.2 from this point, we get that this lemma still applies up to the same modifications, and plugging this observation into the proof of Proposition 1 then yields

$$\alpha_{\ell}\cdot(\ell+1)\cdot f(\pi_{[\ell]}(S))-\alpha_{1}\cdot\ell\cdot f(\varnothing)\geq\ell\cdot[(1+1/\ell)^{\ell}-1]\cdot f(OPT)-\varepsilon'\cdot g'(OPT')\,. \tag{6}$$

Recall that $\alpha_{\ell}=(1+1/\ell)^{\ell-1}$ and $\alpha_{1}=1$. Additionally, since $\pi_{J}(OPT')\subseteq\pi_{[\ell]}(OPT')\in\mathcal{I}$ for every set $J\subseteq[\ell]$, the definition of $OPT$ implies that

$$g'(OPT')=\sum_{J\subseteq[\ell]}\alpha_{|J|}\cdot f(\pi_{J}(OPT'))\leq\sum_{J\subseteq[\ell]}\alpha_{|J|}\cdot f(OPT)=\sum_{i=1}^{\ell}\frac{\ell\cdot(1+1/\ell)^{i-1}}{i}\cdot f(OPT)\leq e\ell\cdot\sum_{i=1}^{\ell}\frac{1}{i}\cdot f(OPT)\leq e\ell(1+\ln\ell)\cdot f(OPT)\,.$$

The lemma now follows by plugging all the above observations into Inequality (6), and rearranging, since

$$\frac{\varepsilon'\cdot g'(OPT')}{\alpha_{\ell}\cdot(\ell+1)}=\frac{\varepsilon\cdot g'(OPT')/(e(1+\ln\ell))}{(1+1/\ell)^{\ell-1}\cdot(\ell+1)}\leq\frac{\varepsilon\ell\cdot f(OPT)}{\ell+1}\leq\varepsilon\cdot f(OPT)\,.\quad ∎$$
Corollary 3.6.

It always holds that $\mathbb{E}[f(\pi_{[\ell]}(S))]\geq(1-(1+1/\ell)^{-\ell})\cdot f(OPT)-O(\varepsilon)\cdot f(OPT)$. In particular, for every $\varepsilon>0$, if $\ell$ is set to $1+\lceil 1/\varepsilon\rceil$ in Algorithm 2, then $\mathbb{E}[f(\pi_{[\ell]}(S))]\geq\left(1-1/e-O(\varepsilon)\right)\cdot f(OPT)$.

Proof.

Lemma 3.5 implies that the output set $\pi_{[\ell]}(S)$ of Algorithm 2 obeys, with probability at least $1-\varepsilon'\geq 1-\varepsilon$, the inequality

$$f(\pi_{[\ell]}(S))\geq\big(1-(1+1/\ell)^{-\ell}\big)\cdot f(OPT)+(1+1/\ell)^{-\ell}\cdot f(\varnothing)-\varepsilon\cdot f(OPT)\,.$$

Since the non-negativity of $f$ guarantees that $f(\pi_{[\ell]}(S))\geq 0$, the above inequality implies

$$\begin{aligned}
\mathbb{E}[f(\pi_{[\ell]}(S))]&\geq(1-\varepsilon)\cdot\big[\big(1-(1+1/\ell)^{-\ell}\big)\cdot f(OPT)+(1+1/\ell)^{-\ell}\cdot f(\varnothing)-\varepsilon\cdot f(OPT)\big]\\
&\geq(1-\varepsilon)\cdot\big(1-(1+1/\ell)^{-\ell}\big)\cdot f(OPT)-\varepsilon\cdot f(OPT)\\
&\geq\big(1-(1+1/\ell)^{-\ell}\big)\cdot f(OPT)-2\varepsilon\cdot f(OPT)\,.
\end{aligned}$$

The corollary now follows since the calculations done in the first part of the proof of Proposition 1 guarantee that, for $\ell=1+\lceil 1/\varepsilon\rceil$, $(1+1/\ell)^{-\ell}=1/e+O(\varepsilon)$. ∎

To complete the proof of Theorem 1.1, it only remains to show that Algorithm 2 obeys the query complexities stated in the theorem.

Lemma 3.7.

Assume we choose for $\ell$ the minimal value necessary to make the approximation guarantee of Algorithm 2 at least $1-1/e-\varepsilon$ according to Corollary 3.6. Then, the query complexity of the deterministic version of Algorithm 2 is $\tilde{O}_{\varepsilon}(nr)$, and the query complexity of the randomized version is $\tilde{O}_{\varepsilon}(n+r\sqrt{n})$.

Proof.

Note that the above choice of value for $\ell$ implies that both $\ell$ and $1/\varepsilon'$ can be upper bounded by functions of $\varepsilon$. Thus, a value oracle query to $g'$ can be implemented using $2^{\ell}=O_{\varepsilon}(1)$ value oracle queries to $f$, and the size of the ground set of $\mathcal{M}'$ is $n\cdot\ell=O_{\varepsilon}(n)$. Recall also that an independence oracle query to $\mathcal{M}'$ can be implemented using a single independence oracle query to $\mathcal{M}$, and the rank of $\mathcal{M}'$ is $r$. These observations imply together that the query complexities of the deterministic and randomized versions of Algorithm 2 are identical to the query complexities stated in Proposition 1.1 when one ignores terms that depend only on $\varepsilon$. The lemma now follows since the deterministic algorithm of Proposition 1.1 requires $\tilde{O}(\varepsilon^{-1}nr)=\tilde{O}_{\varepsilon}(nr)$ queries, and the randomized algorithm of Proposition 1.1 requires $\tilde{O}(\varepsilon^{-1}(n+r\sqrt{n}))=\tilde{O}_{\varepsilon}(n+r\sqrt{n})$ queries. ∎

4 Fast Algorithms for Finding an Approximate Local Optimum

In this section, we prove Proposition 1.1, which we repeat here for convenience.

Proposition 1.1 (restated).

Let $f\colon 2^{\mathcal{N}}\to\mathbb{R}_{\geq 0}$ be a non-negative monotone submodular function, and $\mathcal{M}=(\mathcal{N},\mathcal{I})$ be a matroid of rank $r$ over a ground set $\mathcal{N}$ of size $n$. Then, for any $\varepsilon\in(0,1)$,

  • there exists a deterministic algorithm that makes $\tilde{O}(\varepsilon^{-1}nr)$ value and independence oracle queries and outputs a subset $S\in\mathcal{I}$ satisfying Inequality (1);

  • there exists a randomized algorithm that makes $\tilde{O}(\varepsilon^{-1}(n+r\sqrt{n}))$ value and independence oracle queries and with probability at least $1-\varepsilon$ outputs a subset $S\in\mathcal{I}$ satisfying Inequality (1), and otherwise outputs “Fail”.

Remark 4.1.

We note that a set $S$ obeying the guarantee of Inequality (1) in Proposition 1.1 is not a local maximum of $f$ in the usual sense. A true (approximate) local optimum with respect to replacement of elements as in the idealized Algorithm 1 would be a base $S$ satisfying $f(S)\geq f(S-u+v)-(\varepsilon/r)\cdot f(OPT)$ for every $u\in S$ and element $v\in\mathcal{N}\setminus S$ for which $S-u+v$ is an independent set of the matroid. We claim that such a local optimum always obeys Inequality (1). To see that, note that for every $T\in\mathcal{I}$, if $h\colon T'\to S$ is the bijective function whose existence is guaranteed by Corollary 2.2 for $S$ and a base $T'$ that includes $T$ as a subset, then the submodularity and monotonicity of $f$ imply together that

$$\begin{aligned}
\sum_{v\in T}f(v\mid S-v)-\sum_{u\in S}f(u\mid S-u)&\leq\sum_{v\in T'\setminus S}f(v\mid S)-\sum_{u\in S\setminus T'}f(u\mid S-u)\\
&\leq\sum_{v\in T'\setminus S}\left[f(S-h(v)+v)-f(S)\right]\\
&\leq|T'\setminus S|\cdot(\varepsilon/r)\cdot f(OPT)\leq\varepsilon\cdot f(OPT)\,.
\end{aligned}$$

This shows that the guarantee in Proposition 1.1 is indeed weaker than that of a true (approximate) local maximum. However, as shown in Section 3.2, this weaker guarantee suffices for our purposes, and in this section we describe faster ways to obtain it.

The deterministic and randomized algorithms mentioned by Proposition 1.1 are presented in Sections 4.1 and 4.2, respectively. Both algorithms require the following procedure, which uses the binary search technique suggested by [18, 46].

Lemma 4.2.

Let $S$ be an independent set of the matroid $\mathcal{M}$, $S'$ a subset of $S$, and $v$ an element of the ground set $\mathcal{N}$ such that $S+v\notin\mathcal{I}$, but $(S\setminus S')+v\in\mathcal{I}$. Then, there is an algorithm that gets as input a weight $w_{u}$ for every element $u\in S'$, and finds an element in $\arg\min_{u\in S',\, S+v-u\in\mathcal{I}}\{w_{u}\}$ using $O(\log|S'|)$ independence oracle queries.

Proof.

Let us denote, for simplicity, the elements of $S'$ by the numbers $1$ up to $|S'|$ in a non-decreasing weight order (i.e., $w_{1}\leq w_{2}\leq\ldots\leq w_{|S'|}$). Since $S+v\notin\mathcal{I}$, but $(S\setminus S')+v\in\mathcal{I}$, the down-closedness of $\mathcal{I}$ implies that there exists an element $j\in[|S'|]$ such that $(S\setminus S')\cup\{v,i,i+1,\ldots,|S'|\}\in\mathcal{I}$ for all $i>j$, and $(S\setminus S')\cup\{v,i,i+1,\ldots,|S'|\}\notin\mathcal{I}$ for $i\leq j$. The element $j$ can be found via binary search using $O(\log|S'|)$ independence oracle queries.

We claim that the above element $j$ is a valid output for the algorithm promised by the lemma. To see this, note that since $(S\setminus S')\cup\{v,j+1,j+2,\ldots,|S'|\}\in\mathcal{I}$, the matroid exchange axiom guarantees that it is possible to add elements from $S$ to the last set to get a subset of $S+v$ of size $|S|$ that is independent. Since $(S\setminus S')\cup\{v,j,j+1,\ldots,|S'|\}\notin\mathcal{I}$, this independent subset cannot include $j$, and thus, must be $S-j+v$. In contrast, for every $i<j$, the set $S-i+v$ contains the dependent set $(S\setminus S')\cup\{v,j,j+1,\ldots,|S'|\}$, and thus, $S-i+v\notin\mathcal{I}$. ∎
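The following is a minimal Python sketch of this binary search (our own illustration; the function and argument names are hypothetical).

```python
def min_weight_swap(sorted_S_prime, rest, v, is_independent):
    """Sketch of the Lemma 4.2 procedure.

    sorted_S_prime: list of the elements of S' in non-decreasing weight order.
    rest:           the set S \ S'.
    Assumes S + v is dependent while (S \ S') + v is independent.
    Returns the minimum-weight u in S' with S - u + v independent, using
    O(log |S'|) independence-oracle queries.
    """
    def suffix_independent(i):
        # Tests whether (S \ S') ∪ {v, sorted_S_prime[i], ..., last} ∈ I.
        return is_independent(rest | {v} | set(sorted_S_prime[i:]))

    lo, hi = 0, len(sorted_S_prime) - 1
    # The sequence suffix_independent(0), ..., suffix_independent(|S'|) is
    # monotone (False up to the threshold j, True afterwards) by
    # down-closedness; we binary search for the largest False index j.
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if suffix_independent(mid):
            hi = mid - 1      # the threshold j lies strictly below mid
        else:
            lo = mid          # still dependent, so j >= mid
    return sorted_S_prime[lo]  # the element j: S - j + v is independent
```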

4.1 A Deterministic Algorithm

In this section, we describe and analyze the deterministic algorithm promised by Proposition 1.1, which appears as Algorithm 3. Lemma 4.3 proves the correctness of this algorithm and the bounds stated in Proposition 1.1 on the number of queries it makes.

1. Find $S=S_{0}$ such that $f(S_{0})\geq\frac{1}{3}\cdot f(OPT)$ using the algorithm of Lemma 2.3.
2. Add arbitrary elements of $\mathcal{N}$ to $S$ until it becomes a base of $\mathcal{M}$.
3. While there exist elements $u\in S$ and $v\in\mathcal{N}\setminus S$ such that $S-u+v\in\mathcal{I}$ and $f(v\mid S)-f(u\mid S-u)\geq\frac{\varepsilon}{r}\cdot f(S_{0})$, do:
4.    Update $S\leftarrow S+v-u$.
5. Return $S$.
Algorithm 3: Fast Deterministic Local Search Algorithm
Lemma 4.3.

Algorithm 3 can be implemented using $O(\varepsilon^{-1}nr)$ value oracle queries and $O(\varepsilon^{-1}nr\log r)$ independence oracle queries. Moreover, the final set $S$ in the algorithm satisfies Inequality (1).

Proof.

We begin by proving that the final set $S$ in Algorithm 3 satisfies Inequality (1) with respect to every independent set $T\in\mathcal{I}$. Notice that $S$ is a base of $\mathcal{M}$ since it has the size of a base. Additionally, we assume that $T$ is a base. This assumption is without loss of generality since the monotonicity of $f$ guarantees that if $S$ violates Inequality (1) with respect to $T$, then it also violates it with respect to any base of $\mathcal{M}$ containing $T$. We can now use Corollary 2.2 to get a bijective function $h\colon T\to S$ such that $S-h(v)+v\in\mathcal{I}$ for every $v\in T$ and $h(v)=v$ for every $v\in S\cap T$. Since Algorithm 3 terminated with the set $S$, for every $v\in T\setminus S$ it must hold that

$$f(v\mid S-v)-f(h(v)\mid S-h(v))=f(v\mid S)-f(h(v)\mid S-h(v))\leq\frac{\varepsilon}{r}\cdot f(S_{0})\,,$$

where the equality holds due to the fact that $v\in T\setminus S$ (and thus, $S-v=S$). Additionally, for every $v\in S\cap T$, we have $h(v)=v$, and thus, by the non-negativity of $f$, $f(v\mid S-v)-f(h(v)\mid S-h(v))=0\leq\frac{\varepsilon}{r}\cdot f(S_{0})$. Summing up the inequalities we have proved for all $v\in T$ now gives

$$\sum_{v\in T}f(v\mid S-v)-\sum_{u\in S}f(u\mid S-u)\leq\varepsilon\cdot f(S_{0})\leq\varepsilon\cdot f(OPT)\,.$$

We now would like to determine the number of oracle queries used by Algorithm 3. By Lemma 2.3, Line 1 of Algorithm 3 uses $O(n\log r)$ value and independence oracle queries. Line 2 of Algorithm 3 can be implemented using $O(n)$ independence oracle queries since it only requires us to make a single pass over the ground set $\mathcal{N}$, and add to $S$ every element that can be added without violating independence. In the following, we prove two additional properties: (i) the number of iterations of Algorithm 3 is at most $O(\frac{r}{\varepsilon})$, and (ii) each iteration can be implemented using $O(n)$ value oracle queries and $O(n\log r)$ independence oracle queries. Notice that, together with our previous observations, these properties complete the proof of the lemma.

In each iteration, Algorithm 3 updates the set $S$ to be $S+v-u$, where $u\in S$ and $v\in\mathcal{N}\setminus S$ are elements obeying $f(v\mid S)-f(u\mid S-u)\geq\frac{\varepsilon}{r}\cdot f(S_{0})$. By the submodularity of $f$,

$$f(S+v-u)-f(S)\geq f(v\mid S)-f(u\mid S-u)\geq\frac{\varepsilon}{r}\cdot f(S_{0})\geq\frac{\varepsilon}{3r}\cdot f(OPT)\,.$$

Thus, the value of $f(S)$ increases by at least $\frac{\varepsilon}{3r}\cdot f(OPT)$ in each iteration of the algorithm. Since $S$ remains an independent set of $\mathcal{M}$ throughout Algorithm 3, $f(S)$ cannot exceed $f(OPT)$; and hence, the number of iterations of Algorithm 3 must be $O(\frac{r}{\varepsilon})$.

To implement a single iteration of Algorithm 3, we first compute for each element $u\in S$ the value $f(u\mid S-u)$, which requires $O(r)=O(n)$ value oracle queries. By defining the weight of every $u\in S$ to be $f(u\mid S-u)$, we can then use the algorithm from Lemma 4.2 to find for each $v\in\mathcal{N}\setminus S$ an element $u_{v}\in\arg\min_{u\in S,\, S+v-u\in\mathcal{I}}\{f(u\mid S-u)\}$ using $O(\log r)$ independence oracle queries per element $v$, and thus, $O(n\log r)$ independence oracle queries in total. (Technically, to use Lemma 4.2 we need to verify, for every element $v\in\mathcal{N}\setminus S$, that $S+v$ is not independent, and that $\{v\}\in\mathcal{I}$. The first of these properties is guaranteed to hold since $S$ is a base of $\mathcal{M}$. The second property can be checked with a single additional independence oracle query per element $v$; if it is violated, then by the down-closedness property of independence systems, there is no element $u\in S$ such that $S+v-u\in\mathcal{I}$.) To complete the iteration, it only remains to check whether any pair of elements $u_{v}\in S$, $v\in\mathcal{N}\setminus S$ obeys the condition of the loop of Algorithm 3, which requires another $O(n)$ value oracle queries. If such a pair is found, then this pair can be used to complete the iteration. Otherwise, no pair of elements $u\in S$, $v\in\mathcal{N}\setminus S$ obeys the condition of the loop, and therefore, the algorithm should terminate. ∎

4.2 A Randomized Algorithm

In this section, we prove the part of Proposition 1.1 related to its randomized algorithm. We do this by proving the following proposition.

Proposition 4.4.

Let f\colon 2^{\cN}\rightarrow\bR_{\geq 0} be a non-negative monotone submodular function, \cM=(\cN,\cI) be a matroid of rank r over a ground set \cN of size n, and OPT be a set in \cI maximizing f. Then, for any \varepsilon\in(0,1), there exists a randomized algorithm that makes \tilde{O}(\varepsilon^{-1}(n+r\sqrt{n})) value and independence oracle queries and with probability at least \nicefrac{{2}}{{3}} outputs a subset S\in\cI satisfying Inequality (LABEL:ineq-localOPT), and otherwise outputs “Fail”.

The guarantee of Proposition 1.1 for its randomized algorithm follows from Proposition 4.4 by a standard repetition argument. Namely, to get the guarantee of Proposition 1.1, one needs to execute the algorithm of Proposition 4.4 \lceil\log_{3}\varepsilon^{-1}\rceil times, and then choose an output set produced by an execution that did not fail (if all executions fail, then one has to declare failure as well). Clearly, such a repetition makes the failure probability drop from \nicefrac{{1}}{{3}} to \varepsilon, while increasing the query complexity only by a logarithmic factor.
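In code, the repetition argument amounts to the following few lines, where run_once is a hypothetical placeholder for a single execution of the algorithm of Proposition 4.4 (returning None on “Fail”); this is a generic sketch of the boosting step, not part of the algorithm itself.

import math

def boost(run_once, eps):
    # (1/3)^t <= eps once t >= log_3(1/eps), so ceil(log_3(1/eps))
    # independent executions drive the failure probability below eps.
    for _ in range(math.ceil(math.log(1 / eps, 3))):
        result = run_once()
        if result is not None:
            return result
    return None  # every execution failed; declare failure as well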

The algorithm we use to prove Proposition 4.4 appears as Algorithm 4. For simplicity, we assume in this algorithm that n is a perfect square. This can be guaranteed, for example, by adding dummy elements to the matroid that cannot be added to any independent set. Alternatively, one can simply replace every appearance of \sqrt{n} in Algorithm 4 with \lceil\sqrt{n}\rceil. Some of the ideas in the design and analysis of Algorithm 4 can be traced to the work of Tukan et al. [56], who designed a fast local search algorithm for maximizing a submodular function subject to a cardinality constraint.

1 Find S_{0} such that f(S_{0})\geq\frac{1}{3}\cdot f(OPT) using the algorithm in Lemma 2.3.
2 Add arbitrary elements of \cN to S_{0} until it becomes a base of \cM.
3 for i=1 to k\triangleq\lceil\nicefrac{{18r}}{{\varepsilon}}\rceil do
4       Sample a subset R_{1}\subseteq S_{i-1} of size \min\{r,\sqrt{n}\} uniformly at random.
5       Sample a subset R_{2}\subseteq\cN of size \max\{\frac{n}{r},\sqrt{n}\} uniformly at random.
6       Initialize S_{i}\leftarrow S_{i-1}.
7       if there is an element v\in R_{2}\setminus S_{i-1} such that S_{i-1}\setminus R_{1}+v\in\cI then
8             Let (u,v)\in\arg\max_{u\in R_{1},v\in R_{2}\setminus S_{i-1},S_{i-1}-u+v\in\cI}\{f(v\mid S_{i-1})-f(u\mid S_{i-1}-u)\}.
9             if f(v\mid S_{i-1})-f(u\mid S_{i-1}-u)\geq 0 then
10                  Update S_{i}\leftarrow S_{i-1}+v-u.
13 Sample uniformly at random i\in[k].
14 if there exists a set T\in\cI such that \sum_{v\in T}f(v\mid S_{i-1}-v)-\sum_{u\in S_{i-1}}f(u\mid S_{i-1}-u)\geq\varepsilon\cdot f(S_{0}) then
15      return “Fail”.
16 else return S_{i-1}.
Algorithm 4 Fast Randomized Local Search Algorithm
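The loop body of Algorithm 4 can be phrased in code as follows. This is a minimal Python sketch under the same assumed oracle interfaces as before (and assuming sortable, e.g. integer, ground set elements); it finds the best swap inside the sampled sets naively, whereas the query-efficient implementation discussed in Lemma 4.6 below relies on the search of Lemma 4.2.

import math
import random

def sampled_swap_iteration(S, ground_set, r, f_marginal, is_independent):
    # One iteration of the loop of Algorithm 4; S plays the role of S_{i-1}.
    n = len(ground_set)
    sqrt_n = math.isqrt(n)  # n is assumed to be a perfect square
    R1 = random.sample(sorted(S), min(r, sqrt_n))
    # Integer division approximates n/r; it is exact whenever r divides n.
    R2 = set(random.sample(sorted(ground_set), max(n // r, sqrt_n)))
    loss = {u: f_marginal(u, S - {u}) for u in R1}
    best_pair, best_gain = None, float("-inf")
    for v in R2 - S:
        valid = [u for u in R1 if is_independent((S - {u}) | {v})]
        if not valid:
            continue
        u = min(valid, key=loss.get)  # maximizes f(v | S) - f(u | S - u)
        gain = f_marginal(v, S) - loss[u]
        if gain > best_gain:
            best_pair, best_gain = (u, v), gain
    if best_pair is not None and best_gain >= 0:
        u, v = best_pair
        return (S - {u}) | {v}  # S_i = S_{i-1} + v - u
    return S  # otherwise keep S_i = S_{i-1}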

We begin the analysis of Algorithm 4 with the following observation, which follows from the condition on Line 14 of the algorithm since f(S_{0})\leq f(OPT).

Observation 4.5.

If Algorithm 4 does not return “Fail”, then its output satisfies Inequality (LABEL:ineq-localOPT).

The next two lemmata complete the proof that Algorithm 4 obeys all the properties guaranteed by Proposition 4.4. The first of these lemmata bounds the number of queries used by Algorithm 4, and the second lemma bounds the probability that Algorithm 4 returns “Fail”.

Lemma 4.6.

Algorithm 4 makes only O(n\log r+\varepsilon^{-1}(n+r\sqrt{n})) value oracle queries and O(\varepsilon^{-1}(n+r\sqrt{n})\log r) independence oracle queries.

Proof.

As explained in the proof of Lemma 4.3, the first two lines of Algorithm 4 can be implemented using O(n\log r) value and independence oracle queries. Additionally, as mentioned in the paragraph in Section 2 immediately before Lemma 2.3, the condition on Line 14 of the algorithm can be checked using O(n) value and independence oracle queries since this condition requires finding an independent set T maximizing the linear function g\colon 2^{\cN}\to\bR_{\geq 0} defined as g(T)\triangleq\sum_{v\in T}f(v\mid S_{i-1}-v) for every T\subseteq\cN. Thus, to complete the proof of the lemma, it only remains to show that every iteration of the loop of Algorithm 4 can be implemented using O(n/r+\sqrt{n}) value oracle queries and O((n/r+\sqrt{n})\log r) independence oracle queries.
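As a concrete illustration, the check of Line 14 can be carried out with the standard matroid greedy algorithm, since the weights f(v\mid S_{i-1}-v) are non-negative by the monotonicity of f. The following Python sketch uses the same hypothetical oracle interfaces as before.

def fails_local_optimality(S, ground_set, f_marginal, is_independent, eps, f_S0):
    # Weights of the linear function g (n value oracle queries).
    weight = {v: f_marginal(v, S - {v}) for v in ground_set}
    # Matroid greedy maximizes a non-negative linear function exactly
    # (n independence oracle queries overall).
    T = set()
    for v in sorted(ground_set, key=weight.get, reverse=True):
        if is_independent(T | {v}):
            T.add(v)
    # The subtracted sum does not depend on T, so maximizing g suffices.
    return sum(weight[v] for v in T) - sum(weight[u] for u in S) >= eps * f_S0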

The only parts of the loop of Algorithm 4 that require more than constantly many oracle queries are Lines 7 and 8. Line 7 requires a single independence oracle query per element of R_{2}\setminus S_{i-1}, and thus, requires O(|R_{2}|)=O(n/r+\sqrt{n}) independence oracle queries in total. To implement Line 8 of Algorithm 4, we first have to evaluate f(v\mid S_{i-1}) for every v\in R_{2}\setminus S_{i-1} and f(u\mid S_{i-1}-u) for every u\in R_{1}, which requires O(|R_{1}|+|R_{2}|)=O(\min\{r,\sqrt{n}\}+\max\{\frac{n}{r},\sqrt{n}\})=O(n/r+\sqrt{n}) value oracle queries. Once these values are determined, we need to find a pair (u,v)\in\arg\max_{u\in R_{1},v\in R_{2}\setminus S_{i-1},S_{i-1}-u+v\in\cI}\{f(v\mid S_{i-1})-f(u\mid S_{i-1}-u)\} (such a pair must exist when the condition on Line 7 evaluates to true because of the exchange axiom of matroids). Finding such a pair can be done by performing two things for every element v\in R_{2}\setminus S_{i-1}: determining whether there exists an element u\in R_{1} obeying S_{i-1}-u+v\in\cI, and if there is such an element, finding an element u_{v}\in\arg\max_{u\in R_{1},S_{i-1}-u+v\in\cI}\{f(v\mid S_{i-1})-f(u\mid S_{i-1}-u)\}. We show below that both these things can be done using O(\log r) independence oracle queries per element v, and thus, O(|R_{2}|\log r)=O((n/r+\sqrt{n})\log r) independence oracle queries in total for all the elements v\in R_{2}\setminus S_{i-1}.

Given an element v\in R_{2}\setminus S_{i-1}, one can determine whether there exists an element u\in R_{1} obeying S_{i-1}-u+v\in\cI by simply checking whether S_{i-1}\setminus R_{1}+v\in\cI, which requires a single independence oracle query. If it turns out that such an element exists, then, by treating f(u\mid S_{i-1}-u) as the weight of element u for every u\in R_{1}, the algorithm of Lemma 4.2 can find an element u_{v}\in\arg\max_{u\in R_{1},S_{i-1}-u+v\in\cI}\{f(v\mid S_{i-1})-f(u\mid S_{i-1}-u)\} using O(\log|R_{1}|)=O(\log r) independence oracle queries. ∎
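Although Lemma 4.2 is not restated in this section, the O(\log r) search it provides can be realized by a binary search over a weight-sorted prefix, sketched below in Python under the same assumed interfaces. The idea is that removing a prefix P of R_{1} (sorted by increasing loss f(u\mid S_{i-1}-u)) from S_{i-1} makes S_{i-1}\setminus P+v independent exactly when P hits the unique circuit of S_{i-1}+v, so the smallest such prefix ends at the sought element u_{v}. This is a sketch in the spirit of Lemma 4.2, not a restatement of it.

def cheapest_valid_removal(S, R1_sorted_by_loss, v, is_independent):
    # Precondition (the check of Line 7): S \ R1 + v is independent,
    # so removing the full prefix works and the search is well defined.
    # Binary search for the smallest prefix whose removal makes room for
    # v; its last element is the valid u of minimum loss, which maximizes
    # f(v | S) - f(u | S - u). Uses O(log |R1|) independence queries.
    lo, hi = 1, len(R1_sorted_by_loss)
    while lo < hi:
        mid = (lo + hi) // 2
        if is_independent((S - set(R1_sorted_by_loss[:mid])) | {v}):
            hi = mid
        else:
            lo = mid + 1
    return R1_sorted_by_loss[lo - 1]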

Lemma 4.7.

Algorithm 4 returns “Fail” with probability at most \nicefrac{{1}}{{3}}.

Proof.

Recall that Algorithm 4 makes k=\lceil\frac{18r}{\varepsilon}\rceil iterations, and let i\in[k] be one of these iterations. Fix all randomness of the algorithm up to and including iteration i, and let T be any base of \cM. Since S_{i-1} is also a base of \cM (it is an independent set with the size of a base), Corollary 2.2 guarantees the existence of a bijection h\colon T\to S_{i-1} such that S_{i-1}-h(v)+v\in\cI for every v\in T and h(v)=v for every v\in S_{i-1}\cap T. If we now define C\triangleq\{(h(v),v)\mid v\in T\}, and let R_{1} and R_{2} denote their values in iteration number i, then

f(S_{i}\mid S_{i-1})=\max\{0,\max_{\begin{subarray}{c}(u,v)\in R_{1}\times(R_{2}\setminus S_{i-1})\\ S_{i-1}-u+v\in\cI\end{subarray}}[f(S_{i-1}-u+v)-f(S_{i-1})]\}
\geq\max\{0,\max_{\begin{subarray}{c}(u,v)\in R_{1}\times(R_{2}\setminus S_{i-1})\\ S_{i-1}-u+v\in\cI\end{subarray}}[f(v\mid S_{i-1})-f(u\mid S_{i-1}-u)]\}
\geq\max\{0,\max_{(u,v)\in C\cap(R_{1}\times(R_{2}\setminus S_{i-1}))}[f(v\mid S_{i-1})-f(u\mid S_{i-1}-u)]\}
=\max\{0,\max_{(u,v)\in C\cap(R_{1}\times R_{2})}[f(v\mid S_{i-1}-v)-f(u\mid S_{i-1}-u)]\}
\geq\max\bigg{\{}0,\sum_{(u,v)\in C\cap(R_{1}\times R_{2})}\frac{f(v\mid S_{i-1}-v)-f(u\mid S_{i-1}-u)}{|C\cap(R_{1}\times R_{2})|}\bigg{\}}\enspace,

where the first equality holds since, by the down-closedness of \cI, the condition on Line 7 of Algorithm 4 holds whenever there is a pair of elements u\in R_{1} and v\in R_{2}\setminus S_{i-1} such that S_{i-1}-u+v\in\cI, the first inequality follows from the submodularity of f, the second inequality holds since S_{i-1}-u+v\in\cI for every (u,v)\in C, and the second equality holds since for every (u,v)\in C such that v\in S_{i-1} it holds that v=h(v)=u.

Unfixing the random choice of the subsets R1R_{1} and R2R_{2} in iteration ii, we get

\bE_{R_{1},R_{2}}[f(S_{i}\mid S_{i-1})]\geq\bE\bigg{[}\max\bigg{\{}0,\sum_{(u,v)\in C\cap(R_{1}\times R_{2})}\frac{f(v\mid S_{i-1}-v)-f(u\mid S_{i-1}-u)}{|C\cap(R_{1}\times R_{2})|}\bigg{\}}\bigg{]} (7)
\geq\frac{1-\nicefrac{{1}}{{e}}}{r}\bigg{[}\sum_{v\in T}f(v\mid S_{i-1}-v)-\sum_{u\in S_{i-1}}f(u\mid S_{i-1}-u)\bigg{]}\enspace.

Let us explain why the last inequality holds. Let p_{\geq 1} be the probability that C\cap(R_{1}\times R_{2})\neq\varnothing. Assume now that we select a pair (u,v)\in C uniformly at random among the pairs in C\cap(R_{1}\times R_{2}), and let p_{u,v} be the probability that the pair (u,v)\in C is selected, conditioned on C\cap(R_{1}\times R_{2}) being non-empty. Given these probabilities, the law of total expectation implies that

\bE[(u,v)C(R1×R2)f(vSi1v)f(uSi1u)|C(R1×R2)|]=p1(u,v)Cpu,v[f(vSi1v)f(Si1Si1u)].\bE\bigg{[}\sum_{(u,v)\in C\cap(R_{1}\times R_{2})}\mspace{-18.0mu}\frac{f(v\mid S_{i-1}-v)-f(u\mid S_{i-1}-u)}{|C\cap(R_{1}\times R_{2})|}\bigg{]}\\ =p_{\geq 1}\cdot\sum_{(u,v)\in C}p_{u,v}\left[f(v\mid S_{i-1}-v)-f(S_{i-1}\mid S_{i-1}-u)\right]\enspace.

As the choice of the sets R_{1} and R_{2} is done uniformly at random, the probability p_{u,v} must be the same for every (u,v)\in C, and thus, it is 1/|C|=1/r. Note also that the probability p_{\geq 1} that C\cap(R_{1}\times R_{2}) is non-empty is at least 1-\nicefrac{{1}}{{e}} because, even conditioned on a fixed choice of the set R_{1}\subseteq S_{i-1}, the probability that none of the elements in \{v\mid h(v)\in R_{1}\} are chosen to R_{2} is only

\prod_{j=1}^{|R_{2}|}\left(1-\frac{|R_{1}|}{n-j+1}\right)\leq\left(1-\frac{|R_{1}|}{n}\right)^{|R_{2}|}\leq e^{-\nicefrac{{|R_{1}|\cdot|R_{2}|}}{{n}}}=\frac{1}{e}\enspace,

where the final equality holds since |R_{1}|\cdot|R_{2}|=\min\{r,\sqrt{n}\}\cdot\max\{\frac{n}{r},\sqrt{n}\}=n. This completes the proof of Inequality (7).
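One can also verify this bound numerically; the short Python script below (an illustration only, with example values of n and r that are not taken from the paper) evaluates the product and compares it with 1/e.

import math

def miss_probability_upper_bound(n, r):
    size_R1 = min(r, math.isqrt(n))
    size_R2 = max(n // r, math.isqrt(n))
    product = 1.0
    for j in range(1, size_R2 + 1):
        product *= 1 - size_R1 / (n - j + 1)
    return product

# Example: n = 10_000 and r = 250 give |R1| = |R2| = 100, and the product
# evaluates to roughly 0.364, below 1/e ~ 0.3679 as the analysis promises.
print(miss_probability_upper_bound(10_000, 250), 1 / math.e)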

Next, suppose that there exists a subset T\cIT\in\cI such that

\sum_{v\in T}f(v\mid S_{i-1}-v)-\sum_{u\in S_{i-1}}f(u\mid S_{i-1}-u)\geq\varepsilon\cdot f(S_{0})\enspace. (8)

By the monotonicity of f, we may assume that T is a base of \cM (if T obeys Inequality (8), then every base containing it also obeys this inequality). Thus, by Inequality (7), \bE_{R_{1},R_{2}}[f(S_{i}\mid S_{i-1})]\geq\frac{1-\nicefrac{{1}}{{e}}}{r}\cdot\varepsilon\cdot f(S_{0})\geq\frac{\varepsilon}{6r}\cdot f(OPT), where the last inequality follows since f(S_{0})\geq\frac{1}{3}\cdot f(OPT) and \frac{1-\nicefrac{{1}}{{e}}}{3}\geq\frac{1}{6}.

Unfix now the randomness of all the iterations of Algorithm 4, and let us define \cA_{i} to be the event that in iteration i there exists a subset T\in\cI obeying Inequality (8). By the above inequality and the law of total expectation,

\bE[f(S_{i}\mid S_{i-1})]=\Pr[\cA_{i}]\cdot\bE[f(S_{i}\mid S_{i-1})\mid\cA_{i}]+\Pr[\bar{\cA}_{i}]\cdot\bE[f(S_{i}\mid S_{i-1})\mid\bar{\cA}_{i}]
\geq\Pr[\cA_{i}]\cdot\bE[f(S_{i}\mid S_{i-1})\mid\cA_{i}]\geq\Pr[\cA_{i}]\cdot\frac{\varepsilon}{6r}\cdot f(OPT)\enspace,

where the first inequality uses the fact that the inequality f(S_{i})\geq f(S_{i-1}) holds deterministically. Summing up the last inequality over all i\in[k] yields

f(OPT)\geq\bE[f(S_{k})]\geq\sum_{i=1}^{k}\bE[f(S_{i}\mid S_{i-1})]\geq\frac{\varepsilon}{6r}\cdot\sum_{i=1}^{k}\Pr[\cA_{i}]\cdot f(OPT)\enspace.

Hence, \sum_{i=1}^{k}\Pr[\cA_{i}]\leq 6r/\varepsilon. It now remains to observe that, by the definition of \cA_{i}, \Pr[\cA_{i}] is exactly the probability that the algorithm returns “Fail” if it chooses S_{i-1} as its output set. Since the algorithm chooses a uniformly random i\in[k] for this purpose, the probability that it fails is k^{-1}\cdot\sum_{i=1}^{k}\Pr[\cA_{i}]\leq\frac{6r}{k\cdot\varepsilon}\leq\nicefrac{{1}}{{3}}. ∎

5 Conclusion

In this work, we designed a deterministic non-oblivious local search algorithm that has an approximation guarantee of 1-\nicefrac{{1}}{{e}}-\varepsilon for the problem of maximizing a monotone submodular function subject to a matroid constraint. This algorithm shows that there is essentially no separation between the approximation guarantees that can be obtained by deterministic and randomized algorithms for maximizing monotone submodular functions subject to a general matroid constraint. Adding randomization, we showed that the query complexity of our algorithm can be improved to \tilde{O}_{\varepsilon}(n+r\sqrt{n}). Following this work, Buchbinder & Feldman [10] developed a greedy-based deterministic algorithm for maximizing general non-monotone submodular functions subject to a matroid constraint, which, for the special case of monotone submodular functions, recovers the optimal approximation guarantee of Theorem 1.1. Like our algorithm, the algorithm of [10] also has both deterministic and randomized versions. Its deterministic version has a much larger query complexity compared to our deterministic algorithm, but the randomized version of the algorithm of [10] has a query complexity of \tilde{O}_{\varepsilon}(n+r^{3/2}), which is always at least as good as the query complexity of our randomized algorithm.

There are plenty of open questions that remain. The most immediate of these is to find a way to further improve the running time of the randomized (or the deterministic) algorithm to be nearly-linear for all values of r (rather than just for r=\tilde{O}(n^{2/3}) as in [10]). A more subtle query complexity question is related to the fact that, since a value oracle query for the function g^{\prime} defined in Section 3.2 requires O(2^{\nicefrac{{1}}{{\varepsilon}}}) value oracle queries to f, the query complexities of our algorithms depend exponentially on 1/\varepsilon. We remark that the algorithms designed in [10] also have an exponential dependency on 1/\varepsilon. In other words, our algorithms and the algorithms of [10] are efficient PTASs in the sense that their query complexities are h(\varepsilon)\cdot O(\operatorname{Poly}(n,r)) for some function h. A natural question is whether it is possible to design a deterministic FPTAS for the problem we consider, i.e., a deterministic algorithm obtaining (1-\nicefrac{{1}}{{e}}-\varepsilon)-approximation using O(\operatorname{Poly}(n,r,\varepsilon^{-1})) oracle queries. By guessing the most valuable element of an optimal solution and setting \varepsilon to be polynomially small, such an FPTAS would imply, in particular, a deterministic algorithm guaranteeing a clean approximation ratio of 1-\nicefrac{{1}}{{e}} using O(\operatorname{Poly}(n,r)) oracle queries.

Appendix A Sums of Monotone Submodular and Linear Functions

Sviridenko et al. [55] initiated the study of the following problem. Given a non-negative monotone submodular function f\colon 2^{\cN}\to\bR_{\geq 0}, a linear function \ell\colon 2^{\cN}\to\bR and a matroid \cM=(\cN,\cI), find a set S\in\cI that (approximately) maximizes f(S)+\ell(S). They showed that one can efficiently find a set S\in\cI such that

f(S)+\ell(S)\geq(1-\nicefrac{{1}}{{e}})\cdot f(OPT)+\ell(OPT)-\{\text{small error term}\}\enspace, (9)

and that this is the best that can be done (in some sense). The original motivation of Sviridenko et al. [55] for studying this problem was obtaining an optimal approximation guarantee for maximization of monotone submodular functions of bounded curvature subject to a matroid constraint. (The curvature c is a number in [0,1] that measures the distance of a monotone submodular function from linearity.) Sviridenko et al. [55] obtained an approximation ratio of 1-c/e for this problem, which smoothly interpolates between the optimal approximation ratio of 1-\nicefrac{{1}}{{e}} for general submodular functions (the case of c=1) and the optimal approximation ratio of 1 for linear functions (the case of c=0). Sviridenko et al. [55] also showed that their result is optimal for any value of c. Later, this problem also found uses in machine learning applications, as it captures soft constraints and regularizers [33, 37, 47]. Feldman [27] suggested a simpler algorithm for maximizing sums of monotone submodular functions and linear functions subject to arbitrary solvable polytope constraints, and maximization of such sums involving non-monotone submodular functions was studied by [4, 52, 54].
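For concreteness, a standard way to state the curvature of a monotone submodular function f (we give it here as background; the precise normalization used in [55] should be checked against that paper) is

c\triangleq 1-\min_{u\in\cN\colon f(\{u\})>0}\frac{f(u\mid\cN-u)}{f(\{u\})}\enspace,

so that c=0 for a linear function (marginals never decay), while c=1 allows some element's marginal to vanish given all other elements; accordingly, the guarantee 1-c/e ranges from 1 down to 1-\nicefrac{{1}}{{e}}.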

Recall now that Filmus and Ward [30] described a non-oblivious local search algorithm guided by some auxiliary objective function g such that the output set S of this local search is guaranteed to obey f(S)\geq(1-\nicefrac{{1}}{{e}})\cdot f(OPT). Furthermore, by applying a local search algorithm to the linear function \ell, it is possible to get an output set S such that \ell(S)=\ell(OPT). One of the results of Sviridenko et al. [55] was a non-oblivious local search based on the guiding function (1-\nicefrac{{1}}{{e}})\cdot g(\cdot)+\ell(\cdot), which is a weighted linear combination of the two above guiding functions. Sviridenko et al. [55] showed that the output set S of their local search algorithm obeys Inequality (9), which is natural since Inequality (9) is a linear combination of the above-mentioned guarantees of the local searches guided by either g or \ell.

Reusing the idea of Sviridenko et al. [55], we can maximize the sum f(\cdot)+\ell(\cdot) by simply replacing the function g^{\prime} in Line 5 of Algorithm 2 with g^{\prime}(\cdot)+\alpha_{\ell}\cdot(\ell+1)\cdot\ell^{\prime}(\cdot), where \ell^{\prime} is the natural extension of \ell to the ground set \cN^{\prime} defined, for every set S\subseteq\cN^{\prime}, by \ell^{\prime}(S)\triangleq\sum_{(u,j)\in S}\ell(\{u\}). The effect of this change on the first inequality in the proof of Lemma 3.5 is the addition of the term

\alpha_{\ell}\cdot(\ell+1)\cdot\sum_{u\in OPT}\ell^{\prime}((u,j)\mid S-(u,j))=\alpha_{\ell}\cdot(\ell+1)\cdot\sum_{u\in OPT}\ell(\{u\})=\alpha_{\ell}\cdot(\ell+1)\cdot\ell(OPT)

to the leftmost side of this inequality, and the addition of the term

\alpha_{\ell}\cdot(\ell+1)\cdot\sum_{(u,k)\in S}\ell^{\prime}((u,k)\mid S-(u,k))=\alpha_{\ell}\cdot(\ell+1)\cdot\sum_{u\in\pi_{[\ell]}(S)}\ell(\{u\})=\alpha_{\ell}\cdot(\ell+1)\cdot\ell(\pi_{[\ell]}(S))

to the rightmost side of this inequality. Carrying these extra terms throughout the analysis of Algorithm 2 yields the following theorem.

Theorem A.1.

There exists a deterministic non-oblivious local search algorithm that, given a non-negative monotone submodular function f\colon 2^{\cN}\to\bR_{\geq 0}, a linear function \ell\colon 2^{\cN}\to\bR and a matroid \cM=(\cN,\cI), finds a set S\in\cI such that f(S)+\ell(S)\geq(1-\nicefrac{{1}}{{e}}-\varepsilon)\cdot f(OPT)+\ell(OPT) (for any \varepsilon>0 and set OPT\in\cI) using a query complexity of \tilde{O}_{\varepsilon}(nr), where r is the rank of the matroid and n is the size of its ground set. The query complexity can be improved to \tilde{O}_{\varepsilon}(n+r\sqrt{n}) using randomization.

Acknowledgments.

We would like to thank Wenxin Li for suggesting the current version of Inequality (1) and its application for maximizing sums of monotone submodular functions and linear functions. The work of Niv Buchbinder was supported in part by Israel Science Foundation (ISF) grant no. 3001/24 and United States - Israel Binational Science Foundation (BSF) grant no. 2022418. The work of Moran Feldman was supported in part by Israel Science Foundation (ISF) grant no. 459/20.

References

  • [1] Ashwinkumar Badanidiyuru and Jan Vondrák. Fast algorithms for maximizing submodular functions. In Chandra Chekuri, editor, Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2014, pages 1497–1514. SIAM, 2014.
  • [2] Eric Balkanski, Aviad Rubinstein, and Yaron Singer. An optimal approximation for submodular maximization under a matroid constraint in the adaptive complexity model. Oper. Res., 70(5):2967–2981, 2022.
  • [3] Eric Balkanski and Yaron Singer. The adaptive complexity of maximizing a submodular function. In Ilias Diakonikolas, David Kempe, and Monika Henzinger, editors, ACM SIGACT Symposium on Theory of Computing (STOC), pages 1138–1151. ACM, 2018.
  • [4] Kobi Bodek and Moran Feldman. Maximizing sums of non-monotone submodular and linear functions: Understanding the unconstrained case. In Shiri Chechik, Gonzalo Navarro, Eva Rotenberg, and Grzegorz Herman, editors, Proceedings of the 30th Annual European Symposium on Algorithms (ESA), volume 244 of LIPIcs, pages 23:1–23:17. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2022.
  • [5] Richard A. Brualdi. Comments on bases in dependence structures. Bull. of the Australian Math. Soc., 1(02):161–167, 1969.
  • [6] Niv Buchbinder and Moran Feldman. Deterministic algorithms for submodular maximization problems. ACM Trans. Algorithms, 14(3):32:1–32:20, 2018.
  • [7] Niv Buchbinder and Moran Feldman. Constrained submodular maximization via a nonsymmetric technique. Math. Oper. Res., 44(3):988–1005, 2019.
  • [8] Niv Buchbinder and Moran Feldman. Constrained submodular maximization via new bounds for dr-submodular functions. CoRR, abs/2311.01129, 2023.
  • [9] Niv Buchbinder and Moran Feldman. Deterministic algorithm and faster algorithm for submodular maximization subject to a matroid constraint, 2024. To appear in Proceedings of the IEEE Symposium on Foundations of Computer Science (FOCS).
  • [10] Niv Buchbinder and Moran Feldman. Extending the extension: Deterministic algorithm for non-monotone submodular maximization. CoRR, abs/2409.14325, 2024.
  • [11] Niv Buchbinder, Moran Feldman, and Mohit Garg. Deterministic (1/2+\varepsilon)-approximation for submodular maximization over a matroid. SIAM J. Comput., 52(4):945–967, 2023.
  • [12] Niv Buchbinder, Moran Feldman, Joseph Naor, and Roy Schwartz. Submodular maximization with cardinality constraints. In Chandra Chekuri, editor, ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1433–1452. SIAM, 2014.
  • [13] Niv Buchbinder, Moran Feldman, Joseph Naor, and Roy Schwartz. A tight linear time (1/2)-approximation for unconstrained submodular maximization. SIAM J. Comput., 44(5):1384–1402, 2015.
  • [14] Niv Buchbinder, Moran Feldman, and Roy Schwartz. Comparing apples and oranges: Query trade-off in submodular maximization. Math. Oper. Res., 42(2):308–329, 2017.
  • [15] Niv Buchbinder, Moran Feldman, and Roy Schwartz. Online submodular maximization with preemption. ACM Trans. Algorithms, 15(3):30:1–30:31, 2019.
  • [16] Gruia Călinescu, Chandra Chekuri, Martin Pál, and Jan Vondrák. Maximizing a monotone submodular function subject to a matroid constraint. SIAM J. Comput., 40(6):1740–1766, 2011.
  • [17] Amit Chakrabarti and Sagar Kale. Submodular maximization meets streaming: matchings, matroids, and more. Math. Program., 154(1-2):225–247, 2015.
  • [18] Deeparnab Chakrabarty, Yin Tat Lee, Aaron Sidford, Sahil Singla, and Sam Chiu-wai Wong. Faster matroid intersection. In David Zuckerman, editor, IEEE Annual Symposium on Foundations of Computer Science (FOCS), pages 1146–1168. IEEE Computer Society, 2019.
  • [19] T.-H. Hubert Chan, Zhiyi Huang, Shaofeng H.-C. Jiang, Ning Kang, and Zhihao Gavin Tang. Online submodular maximization with free disposal. ACM Trans. Algorithms, 14(4):56:1–56:29, 2018.
  • [20] Chandra Chekuri, Shalmoli Gupta, and Kent Quanrud. Streaming algorithms for submodular function maximization. In Magnús M. Halldórsson, Kazuo Iwama, Naoki Kobayashi, and Bettina Speckmann, editors, International Colloquium on Automata, Languages, and Programming (ICALP), volume 9134 of Lecture Notes in Computer Science, pages 318–330. Springer, 2015.
  • [21] Chandra Chekuri and Kent Quanrud. Parallelizing greedy for submodular set function maximization in matroids and beyond. In Moses Charikar and Edith Cohen, editors, Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 78–89. ACM, 2019.
  • [22] Chandra Chekuri, Jan Vondrák, and Rico Zenklusen. Submodular function maximization via the multilinear relaxation and contention resolution schemes. SIAM J. Comput., 43(6):1831–1879, 2014.
  • [23] Alina Ene and Huy L. Nguyen. Constrained submodular maximization: Beyond 1/e. In Irit Dinur, editor, IEEE Annual Symposium on Foundations of Computer Science (FOCS), pages 248–257. IEEE Computer Society, 2016.
  • [24] Alina Ene and Huy L. Nguyen. Towards nearly-linear time algorithms for submodular maximization with a matroid constraint. In Christel Baier, Ioannis Chatzigiannakis, Paola Flocchini, and Stefano Leonardi, editors, ICALP, volume 132 of LIPIcs, pages 54:1–54:14. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2019.
  • [25] Alina Ene, Huy L. Nguyen, and Adrian Vladu. Submodular maximization with matroid and packing constraints in parallel. In Moses Charikar and Edith Cohen, editors, Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 90–101. ACM, 2019.
  • [26] Uriel Feige, Vahab S. Mirrokni, and Jan Vondrák. Maximizing non-monotone submodular functions. SIAM J. Comput., 40(4):1133–1153, 2011.
  • [27] Moran Feldman. Guess free maximization of submodular and linear sums. Algorithmica, 83(3):853–878, 2021.
  • [28] Moran Feldman, Paul Liu, Ashkan Norouzi-Fard, Ola Svensson, and Rico Zenklusen. Streaming submodular maximization under matroid constraints. In Mikolaj Bojanczyk, Emanuela Merelli, and David P. Woodruff, editors, International Colloquium on Automata, Languages, and Programming (ICALP), volume 229 of LIPIcs, pages 59:1–59:20. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2022.
  • [29] Moran Feldman, Joseph Naor, and Roy Schwartz. A unified continuous greedy algorithm for submodular maximization. In Rafail Ostrovsky, editor, Annual Symposium on Foundations of Computer Science (FOCS), pages 570–579. IEEE Computer Society, 2011.
  • [30] Yuval Filmus and Justin Ward. Monotone submodular maximization over a matroid via non-oblivious local search. SIAM J. Comput., 43(2):514–542, 2014.
  • [31] M. L. Fisher, G. L. Nemhauser, and L. A. Wolsey. An analysis of approximations for maximizing submodular set functions–II. In Polyhedral Combinatorics, volume 8 of Mathematical Programming Studies, pages 73–87. Springer Berlin Heidelberg, 1978.
  • [32] Shivaram Gopal, S M Ferdous, Hemanta K. Maji, and Alex Pothen. Greedyml: A parallel algorithm for maximizing submodular functions. CoRR, abs/2403.10332, 2024.
  • [33] Chris Harshaw, Moran Feldman, Justin Ward, and Amin Karbasi. Submodular maximization beyond non-negativity: Guarantees, fast algorithms, and applications. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning (ICML), volume 97 of Proceedings of Machine Learning Research, pages 2634–2643. PMLR, 2019.
  • [34] Christopher Harshaw, Ehsan Kazemi, Moran Feldman, and Amin Karbasi. The power of subsampling in submodular maximization. Math. Oper. Res., 47(2):1365–1393, 2022.
  • [35] Monika Henzinger, Paul Liu, Jan Vondrák, and Da Wei Zheng. Faster submodular maximization for several classes of matroids. In Kousha Etessami, Uriel Feige, and Gabriele Puppis, editors, ICALP, volume 261 of LIPIcs, pages 74:1–74:18. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2023.
  • [36] Chien-Chung Huang and Naonori Kakimura. Multi-pass streaming algorithms for monotone submodular function maximization. Theory Comput. Syst., 66(1):354–394, 2022.
  • [37] Ehsan Kazemi, Shervin Minaee, Moran Feldman, and Amin Karbasi. Regularized submodular maximization at scale. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning (ICML), volume 139 of Proceedings of Machine Learning Research, pages 5356–5366. PMLR, 2021.
  • [38] Yusuke Kobayashi and Tatsuya Terao. Subquadratic submodular maximization with a general matroid constraint. In Karl Bringmann, Martin Grohe, Gabriele Puppis, and Ola Svensson, editors, International Colloquium on Automata, Languages, and Programming (ICALP), volume 297 of LIPIcs, pages 100:1–100:19. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2024.
  • [39] Alan Kuhnle. Quick streaming algorithms for maximization of monotone submodular functions in linear time. In Arindam Banerjee and Kenji Fukumizu, editors, AISTATS, volume 130 of Proceedings of Machine Learning Research, pages 1360–1368. PMLR, 2021.
  • [40] Wenxin Li, Moran Feldman, Ehsan Kazemi, and Amin Karbasi. Submodular maximization in clean linear time. In Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh, editors, NeurIPS, 2022.
  • [41] Wenzheng Li, Paul Liu, and Jan Vondrák. A polynomial lower bound on adaptive complexity of submodular maximization. In Konstantin Makarychev, Yury Makarychev, Madhur Tulsiani, Gautam Kamath, and Julia Chuzhoy, editors, Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 140–152. ACM, 2020.
  • [42] Baharan Mirzasoleiman, Ashwinkumar Badanidiyuru, Amin Karbasi, Jan Vondrák, and Andreas Krause. Lazier than lazy greedy. In Blai Bonet and Sven Koenig, editors, AAAI, pages 1812–1818. AAAI Press, 2015.
  • [43] Baharan Mirzasoleiman, Amin Karbasi, Rik Sarkar, and Andreas Krause. Distributed submodular maximization. J. Mach. Learn. Res., 17:238:1–238:44, 2016.
  • [44] G. L. Nemhauser and L. A. Wolsey. Best algorithms for approximating the maximum of a submodular set function. Mathematics of Operations Research, 3(3):177–188, 1978.
  • [45] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions–I. Mathematical Programming, 14:265–294, 1978.
  • [46] Huy L. Nguyen. A note on cunningham’s algorithm for matroid intersection. CoRR, abs/1904.04129, 2019.
  • [47] Sofia Maria Nikolakaki, Alina Ene, and Evimaria Terzi. An efficient framework for balancing submodularity and cost. In Feida Zhu, Beng Chin Ooi, and Chunyan Miao, editors, Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), pages 1256–1266. ACM, 2021.
  • [48] Shayan Oveis Gharan and Jan Vondrák. Submodular maximization by simulated annealing. In Dana Randall, editor, ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1098–1116. SIAM, 2011.
  • [49] Rafael da Ponte Barbosa, Alina Ene, Huy L. Nguyen, and Justin Ward. The power of randomization: Distributed submodular maximization on massive datasets. In Francis R. Bach and David M. Blei, editors, ICML, volume 37 of JMLR Workshop and Conference Proceedings, pages 1236–1244. JMLR.org, 2015.
  • [50] Rafael da Ponte Barbosa, Alina Ene, Huy L. Nguyen, and Justin Ward. A new framework for distributed submodular maximization. In Irit Dinur, editor, IEEE Annual Symposium on Foundations of Computer Science (FOCS), pages 645–654. IEEE Computer Society, 2016.
  • [51] Benjamin Qi. On maximizing sums of non-monotone submodular and linear functions. In Sang Won Bae and Heejin Park, editors, International Symposium on Algorithms and Computation (ISAAC), volume 248 of LIPIcs, pages 41:1–41:16. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2022.
  • [52] Benjamin Qi. On maximizing sums of non-monotone submodular and linear functions. Algorithmica, 86(4):1080–1134, 2024.
  • [53] A. Schrijver. Combinatorial Optimization: Polyhedra and Efficiency. Springer-Verlag, Berlin, 2003.
  • [54] Xin Sun, Congying Han, Chenchen Wu, Dachuan Xu, and Yang Zhou. The regularized submodular maximization via the lyapunov method. In Weili Wu and Guangmo Tong, editors, Proceedings of the 29th International Conference on Computing and Combinatorics (COCOON), volume 14423 of Lecture Notes in Computer Science, pages 118–143. Springer, 2023.
  • [55] Maxim Sviridenko, Jan Vondrák, and Justin Ward. Optimal approximation for submodular and supermodular optimization with bounded curvature. Math. Oper. Res., 42(4):1197–1218, 2017.
  • [56] Murad Tukan, Loay Mualem, and Moran Feldman. Practical 0.385-approximation for submodular maximization subject to a cardinality constraint. CoRR, abs/2405.13994, 2024.