
A constant factor approximation for Nash social welfare with subadditive valuations

Shahar Dobzinski (Weizmann Institute of Science), Wenzheng Li (Stanford University), Aviad Rubinstein (Stanford University), Jan Vondrák (Stanford University)
Abstract

We present a constant-factor approximation algorithm for the Nash social welfare maximization problem with subadditive valuations accessible via demand queries. More generally, we propose a template for NSW optimization by solving a configuration-type LP and using a rounding procedure for (utilitarian) social welfare as a blackbox, which could be applicable to other variants of the problem.

1 Introduction

We consider the problem of allocating a set \mathcal{I} of m indivisible items to a set \mathcal{A} of n agents, where each agent i\in\mathcal{A} has a valuation function v_{i}:2^{\mathcal{I}}\to{\mathbb{R}}_{\geq 0}. The Nash social welfare (NSW) problem is to find an allocation {\mathcal{S}}=(S_{i})_{i\in\mathcal{A}} that maximizes the geometric mean of the agents’ valuations,

\mbox{\sf NSW}({\mathcal{S}})=\left(\prod_{i\in\mathcal{A}}v_{i}(S_{i})\right)^{1/|\mathcal{A}|}.

For \alpha\geq 1, an \alpha-approximate solution to the NSW problem is an allocation \mathcal{S} with \mbox{\sf NSW}(\mathcal{S})\geq\mathrm{OPT}/\alpha, where \mathrm{OPT} denotes the optimum value of the NSW-maximization problem.
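To make the objective concrete, here is a short Python sketch (the function name is ours, not from the paper) that evaluates NSW for a given allocation; working in log-space avoids overflow when the number of agents is large.

```python
import math

def nash_social_welfare(values):
    """Geometric mean of the agents' values under a fixed allocation.

    values[i] = v_i(S_i); computed in log-space for numerical stability.
    Returns 0.0 if any agent receives zero value.
    """
    if any(v <= 0 for v in values):
        return 0.0
    n = len(values)
    return math.exp(sum(math.log(v) for v in values) / n)
```

For example, values (4, 1) give NSW 2, while the equally efficient but unfair allocation (5, 0) gives NSW 0, illustrating how the geometric mean penalizes leaving an agent with nothing.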

Allocating resources to agents in a fair and efficient manner is a fundamental problem in computer science, economics, and social choice theory, with substantial prior work [Bar05, BT96, BCE+16, Mou04, RW98, Rot16, You95]. A common measure of efficiency is utilitarian social welfare, i.e., the sum of the utilities \sum_{i\in\mathcal{A}}v_{i}(S_{i}) for an allocation (S_{i})_{i\in\mathcal{A}}. This objective does not take fairness into account, as all items could be allocated to one agent whose valuation function dominates the others. In order to incorporate fairness, various notions have been considered, ranging from envy-freeness and proportional fairness to various modifications of the objective function. At the end of the spectrum opposite to utilitarian social welfare, one can consider the max-min objective, \min_{i\in\mathcal{A}}v_{i}(S_{i}), also known as the Santa Claus problem [BS06]. This objective is somewhat extreme in considering only the happiness of the least happy agent.

Nash social welfare provides a balanced tradeoff between the requirements of fairness and efficiency. It has been introduced independently in several contexts: as a discrete variant of the Nash bargaining game [KN79, Nas50]; as a notion of competitive equilibrium with equal incomes in economics [Var74]; and also as a proportional fairness notion in networking [Kel97]. Nash social welfare has several desirable features, for example invariance under scaling of the valuation functions v_{i} by independent factors \lambda_{i}, i.e., each agent can express their preference in a “different currency” without changing the optimization problem (see [Mou04] for additional characteristics).

1.1 Preliminaries

The difficulty of optimizing Nash social welfare depends naturally on the class of valuation functions that we want to deal with, and on how they are accessed. Various classes of valuations have been considered in the literature. For the purposes of this paper, let us introduce three basic classes of valuations and two oracle models.

Classes of valuation functions

A set function v:\,2^{\mathcal{I}}\to{\mathbb{R}} is monotone if v(S)\leq v(T) whenever S\subseteq T. A monotone set function with v(\emptyset)=0 is also called a valuation function or simply a valuation.

A valuation v:\,2^{\mathcal{I}}\to{\mathbb{R}} is additive if v(S)=\sum_{j\in S}w_{j} for nonnegative weights w_{j}.

A valuation v:\,2^{\mathcal{I}}\to{\mathbb{R}} is submodular if v(S)+v(T)\geq v(S\cap T)+v(S\cup T) for all S,T\subseteq\mathcal{I}.

A valuation v:\,2^{\mathcal{I}}\to{\mathbb{R}} is fractionally subadditive (or XOS) if v(S)=\max_{k}\sum_{j\in S}w_{kj} for nonnegative weights w_{kj}, i.e., a maximum over a finite collection of additive functions.

A valuation v:\,2^{\mathcal{I}}\to{\mathbb{R}} is subadditive if v(S)+v(T)\geq v(S\cup T) for all S,T\subseteq\mathcal{I}.

We remark that these classes form a chain of inclusions: additive valuations are submodular, submodular valuations are XOS, and XOS valuations are subadditive.

Oracle access.

Note that additive valuations can be presented explicitly as part of the input. However, for more general classes of valuations, we need to resort to oracle access, since presenting a valuation explicitly would take an exponential amount of space. Three types of oracles to access valuation functions have been commonly considered in the literature.

  • Value oracle: Given a set S\subseteq\mathcal{I}, return the value v(S).

  • Demand oracle: Given prices (p_{j}:j\in\mathcal{I}), return a set S maximizing v(S)-\sum_{j\in S}p_{j}.

  • XOS oracle (for an XOS valuation v): Given a set S, return an additive function a from the XOS representation of v such that v(S)=a(S).
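For illustration, all of these oracles are trivial to implement for additive valuations, where a demand query decomposes item by item; the sketch below (our own toy code, not part of the paper) shows value and demand oracles in Python. For the more general classes above, oracle access must be assumed, since the underlying optimization can be intractable.

```python
def value_oracle(weights, S):
    """Value oracle for an additive valuation: v(S) = sum of item weights."""
    return sum(weights[j] for j in S)

def demand_oracle(weights, prices):
    """Demand oracle for an additive valuation: v(S) - p(S) decomposes
    over items, so the demanded set is simply every item with w_j > p_j."""
    return {j for j in weights if weights[j] > prices[j]}
```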

1.2 Prior work

The Nash social welfare problem is NP-hard already in the case of two agents with identical additive valuations, by a reduction from the Subset-Sum problem. For multiple agents, it is NP-hard to approximate within a factor better than 0.936 for additive valuations [GHM17], and better than 1-1/e\simeq 0.632 for submodular valuations [GKK20].

The first constant-factor approximation algorithm for additive valuations, with the factor of 1/(2e^{1/e})\approx 0.346, was given by Cole and Gkatzelis [CG15] using a continuous relaxation based on a particular market equilibrium concept. Later, [CDG+17] improved the analysis of this algorithm to achieve the factor of 1/2. Anari, Oveis Gharan, Saberi, and Singh [AGSS17] used a convex relaxation that relies on properties of real stable polynomials, to give an elegant analysis of an algorithm that gives a factor of 1/e. The current best factor is 1/e^{1/e}-\epsilon\simeq 0.692 by Barman, Krishnamurthy, and Vaish [BKV18]; the algorithm uses a different market equilibrium based approach. Note also that this factor is above 1-1/e, hence separating the additive and submodular settings.

Constant-factor approximations have been extended to some classes beyond additive functions: capped-additive [GHM18], separable piecewise-linear concave (SPLC) [AMGV18], and their common generalization, capped-SPLC [CCG+18] valuations; the approximation factor for capped-SPLC valuations matches the 1/e^{1/e}-\varepsilon factor for additive valuations. All these valuations are special classes of submodular ones. Subsequently, Li and Vondrák [LV21b] designed an algorithm that estimates the optimal value within a factor of \frac{(e-1)^{2}}{e^{3}}\simeq 0.147 for a broad class of submodular valuations, such as coverage functions and sums of matroid rank functions, by extending the techniques of [AGSS17] using real stable polynomials. However, this algorithm only estimates the optimum value but does not find a corresponding allocation in polynomial time.

An important conceptual advance was presented in [GHV20], where a relaxation combining ideas from matching theory and convex optimization was shown to give a constant factor for the class of “Rado valuations” (containing weighted matroid rank functions and some related valuations). A crucial property of this approach is that it is quite modular, and it ended up leading to multiple further advances. In [LV21a], this approach was extended to provide a constant factor approximation algorithm for general submodular valuations, by replacing the concave extension of a valuation by the multilinear extension. The initial factor was rather small (1/380). Recently, a much simpler algorithm combining matching and local search was presented to give a (1/4-\epsilon)-approximation for submodular valuations [GHL+23].

For the more general classes of XOS and subadditive valuations [BBKS20, CGM21, GKK20], however, only polynomial approximation factors were known until now, and this is the best one can hope for in the value oracle model [BBKS20], for the same reasons that this is a barrier for the utilitarian social welfare problem [DNS10]. The best known approximation factors up to now have been O(1/n) for subadditive valuations, and O(1/n^{53/54}) for XOS valuations if we are given access to both demand and XOS oracles [BKKN21]. Constant factors for XOS valuations seemed quite out of reach prior to this work, and obtaining any sublinear factor for subadditive valuations was stated as an open problem in [BKKN21].

1.3 Our results and techniques

Our main result is the following.

Theorem.

(informal) There is an algorithm, running in polynomial time and using demand queries to the agents’ valuations, that provides a constant-factor approximation for Nash social welfare with subadditive valuations.

As a special case, this also gives a constant-factor approximation for XOS valuations accessible via demand queries. (The algorithm for XOS valuations is somewhat simpler, as we discuss later in this section.) This completes the picture in the sense that now we have a constant factor approximation for Nash social welfare in the main settings where one is known for utilitarian social welfare: for submodular valuations with value queries, and for subadditive valuations with demand queries. (It is known that a stronger oracle than a value oracle is required for XOS and subadditive valuations, even for utilitarian social welfare.)

The basis of our approach is the matching+relaxation paradigm which gave a constant-factor approximation for submodular valuations [GHV20, LV21a]. Considering that the only constant-factor approximation for social welfare with subadditive valuations [Fei08] is based on the "Configuration LP", which can be solved using demand queries, it is a natural idea to use a relaxation similar to the Configuration LP. A natural variant for Nash social welfare is the Eisenberg-Gale relaxation, using the logarithm of the concave extension of each agent’s valuation. We apply this relaxation on top of an initial matching, as in [GHV20].

The main obstacle with this approach is that natural rounding procedures for the Configuration LP do not satisfy any concentration properties. At a high level, without concentration some agents end up with higher value and some with lower value, leading to poor Nash social welfare even if the expected utilitarian social welfare is preserved. More specifically, the first challenge is that, given a fractional solution x_{i,S}, we would ideally like to round it to an integral allocation by allocating set S to agent i with probability x_{i,S}. Even though this ideal rounding preserves each agent’s expected value, the variance can be arbitrary, depending on the fractional solution x_{i,S}. Our first technical contribution is a procedure (see Lemma 3) for finding a new feasible solution to the Configuration LP that, for each agent, has only high-value subsets in its support (with the exception of agents who get most of their value from a single item; this case is handled separately by the matching procedure). This procedure is rather simple in hindsight. At a high level, we can think of the fractional solution as a distribution over allocations for each agent. We want to discard the part of the distribution that corresponds to low-value subsets; but this leaves uncovered probability mass. We re-cover this remaining mass by splitting high-value subsets to “stretch” over more probability mass, while allocating each item to agent i with the same total probability.

The next obstacle in rounding the Configuration LP is resolving “contentions”: under the ideal rounding procedure described above, we may be trying to allocate the same item to multiple agents (even though in expectation it is allocated to at most one agent). For XOS valuations, a simple independent randomized contention resolution scheme guarantees a constant factor approximation and also enjoys good concentration. However, the situation is more complicated for subadditive valuations. The only known constant-factor approximation for social welfare with subadditive valuations is a rather intricate rounding procedure of Feige [Fei08], which does not seem to satisfy any useful concentration properties. In any rounded solution, there might be agents who receive very low value, which hurts Nash social welfare, and hence we cannot use it directly.

Our solution is an iterated rounding procedure, where in each stage a certain fraction of agents is “satisfied” in the sense that they receive value comparable to their fractional value. We allocate the respective items to them, subject to random filtering which ensures that enough items are still left for the remaining agents. Then we recurse on the remaining agents and remaining items. Still, some agents may receive a relatively small value, but we guarantee that the fraction of agents who receive low values is proportionally small, which means that the Nash social welfare overall is guaranteed to be good. As an example: if OPT=(V_{1}\cdots V_{n})^{1/n}, it suffices to find an allocation where \frac{n}{2} agents receive value at least \frac{1}{2}V_{i}, \frac{n}{4} agents receive value at least \frac{1}{4}V_{i}, \frac{n}{8} agents receive value at least \frac{1}{8}V_{i}, and so on. Then the approximation factor in terms of Nash social welfare turns out to be

(1/2)^{1/2}(1/4)^{1/4}(1/8)^{1/8}(1/16)^{1/16}\cdots

and this infinite product converges to 1/4 (we leave this as an exercise for the reader).
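The exercise can be checked directly: writing each factor as a power of 2, the product equals 2^{-\sum_{k\geq 1}k/2^{k}}=2^{-2}=1/4, since \sum_{k\geq 1}k/2^{k}=2. A numerical sanity check in Python:

```python
# prod_{k>=1} (2^{-k})^{2^{-k}} = 2^{-sum_{k>=1} k/2^k};
# the exponent sum converges to 2, so the product is 2^{-2} = 1/4.
exponent = sum(k / 2**k for k in range(1, 200))
product = 2.0 ** (-exponent)
assert abs(product - 0.25) < 1e-12
```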

In order to guarantee the success of this rounding procedure, we need a concentration inequality (as in previous works). Concentration properties of subadditive functions are somewhat weaker and more difficult to prove than for submodular or XOS functions. Here we appeal to a powerful concentration inequality for subadditive functions due to Schechtman [Sch03], which is based on the “q-point control inequality” of Talagrand [Tal89, Tal95].

Reducing Nash Welfare to Rounding of Configuration LP

Technically, we prove a reduction theorem (Theorem 1) which shows that to achieve a constant factor approximation for Nash social welfare, it is sufficient to implement efficient subroutines for two subproblems: (1) finding a solution of the Configuration LP satisfying a certain additional property (which happens to be satisfied, for example, by an optimal solution of the Eisenberg-Gale relaxation [GHV20], or by the continuous greedy algorithm for the log-multilinear relaxation [LV21a]), and (2) rounding a fractional solution of the Configuration LP while losing only a constant factor with respect to utilitarian social welfare. The latter problem is relatively easy for XOS valuations, but non-trivial for subadditive valuations. Fortunately, a factor-1/2 rounding procedure is known due to Feige’s work on welfare maximization with subadditive bidders [Fei08], which we use here as a blackbox.

We remark that the constant factors lost in various stages of our proof are rather large and lead to a final approximation factor of roughly 1/375{,}000 for the Nash social welfare problem with subadditive valuations. One may hope that, as in the case of submodular valuations, an initially large constant factor can eventually be improved to a “practical” one.

Paper organization.

In Section 2, we present the main technical result, which is a reduction of Nash social welfare to a certain relaxation solver and a rounding procedure for the Configuration LP. In Section 3, we show how this implies a constant factor approximation algorithm for Nash social welfare with subadditive valuations.

We defer some components of the algorithm which are similar to earlier work to the appendices: Solving the relaxation and proving the required guarantees (Appendix A), the rematching lemmas (Appendix B), and concentration of subadditive functions (Appendix C).

2 Optimizing NSW via relaxation and rounding for social welfare

Here we describe our general approach, which allows us to derive algorithms for NSW optimization in several settings. At a high level, we reduce NSW optimization to finding a certain solution of the "Configuration LP" (for social welfare optimization), together with a rounding procedure for the Configuration LP, again with respect to social welfare.

Let us define the Configuration LP:

\max\quad\sum_{i\in\mathcal{A}}\sum_{S\subseteq\mathcal{I}}v_{i}(S)x_{i,S} \qquad \mbox{(Configuration LP)}
\sum_{i\in\mathcal{A}}\sum_{S\subseteq\mathcal{I}:j\in S}x_{i,S}\leq 1 \qquad \forall j\in\mathcal{I}
\sum_{S\subseteq\mathcal{I}}x_{i,S}=1 \qquad \forall i\in\mathcal{A}
x_{i,S}\geq 0 \qquad \forall i\in\mathcal{A},\ S\subseteq\mathcal{I}

Equivalently, this can be written as

\max\quad\sum_{i\in\mathcal{A}}v_{i}^{+}(\mathbf{x}_{i})
\sum_{i\in\mathcal{A}}x_{ij}\leq 1 \qquad \forall j\in\mathcal{I}
x_{ij}\geq 0 \qquad \forall i\in\mathcal{A},\ j\in\mathcal{I}

where as before,

v_{i}^{+}(\mathbf{x}_{i})=\max\sum_{S\subseteq\mathcal{I}}v_{i}(S)x_{i,S}: \qquad \mbox{(Concave Extension)}
\sum_{S\subseteq\mathcal{I}:j\in S}x_{i,S}\leq x_{ij} \qquad \forall j\in\mathcal{I}
\sum_{S\subseteq\mathcal{I}}x_{i,S}=1
x_{i,S}\geq 0 \qquad \forall S\subseteq\mathcal{I}
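To make the constraints of (Configuration LP) concrete, here is a small Python feasibility checker for toy instances (our own illustration, not part of the paper; the LP has exponentially many variables, so enumerating subsets is only sensible for a handful of items):

```python
from itertools import combinations

def subsets(items):
    """All subsets of a (small) item set, as frozensets."""
    items = list(items)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def is_feasible(x, agents, items, tol=1e-9):
    """Check Configuration-LP feasibility of a fractional solution
    x[(i, S)] with S a frozenset: nonnegativity, unit mass per agent,
    and total mass at most 1 on each item."""
    if any(val < -tol for val in x.values()):
        return False
    for i in agents:
        if abs(sum(x.get((i, S), 0.0) for S in subsets(items)) - 1.0) > tol:
            return False
    for j in items:
        mass = sum(val for (i, S), val in x.items() if j in S)
        if mass > 1.0 + tol:
            return False
    return True
```

For instance, two agents fractionally sharing item 1 with mass 1/2 each is feasible; raising one of those coefficients to 1 violates both the agent and item constraints.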

The following is our main reduction theorem, which provides an algorithm for Nash social welfare, given two procedures that we call the Relaxation Solver and the Rounding Procedure. Note that the assumption on the Relaxation Solver is somewhat unusual: it is not that (x_{i,S}) is an optimal or near-optimal solution of (Configuration LP), but a different condition, namely that the optimum social welfare with valuations w_{i}(S)=v_{i}(S)/V_{i}=v_{i}(S)/\sum_{S^{\prime}}v_{i}(S^{\prime})x_{i,S^{\prime}} is upper-bounded by c|\mathcal{A}|. (The social welfare of x_{i,S} itself with valuations w_{i} is exactly |\mathcal{A}|, so as a consequence (x_{i,S}) is a c-approximate optimum with respect to the valuations w_{i}.) This condition is required primarily for the later “rematching” step (Lemma 8). Fortunately, it is satisfied by natural approaches to solving the Eisenberg-Gale relaxation, which replaces the continuous valuation extensions by their logarithms. We discuss this further in Section 3.

Theorem 1.

Suppose that for a certain class of instances of Nash social welfare with subadditive valuations, we have the following procedures available, with parameters c,d\geq 1:

  • Relaxation Solver: Given valuations (v_{i}:i\in\mathcal{A}) on a set of items \mathcal{I}, we can find a feasible solution (x_{i,S}) of (Configuration LP) such that the social welfare optimum with valuations

    w_{i}(S)=\frac{1}{V_{i}}v_{i}(S),\ \ \ \ \ V_{i}=\sum_{S\subseteq\mathcal{I}}v_{i}(S)x_{i,S}

    is at most c|\mathcal{A}|.

  • Rounding Procedure: Given a feasible solution (x_{i,S}) of (Configuration LP), we can find an allocation (S_{1},\ldots,S_{n}) where each S_{i} is a subset of some set S^{\prime}_{i} such that x_{i,S^{\prime}_{i}}>0 and

    \sum_{i\in\mathcal{A}}w_{i}(S_{i})=\sum_{i\in\mathcal{A}}\frac{1}{V_{i}}v_{i}(S_{i})\geq\frac{1}{d}|\mathcal{A}|.

    (As above, V_{i}=\sum_{S\subseteq\mathcal{I}}v_{i}(S)x_{i,S}.)

Then there is an algorithm which provides an O(cd^{2})-approximation in Nash social welfare for the same class of instances, using one call to the Relaxation Solver and a logarithmic number of calls to the Rounding Procedure. The running time is polynomial in |\mathcal{A}|, |\mathcal{I}|, and the size of the support of the fractional solution (x_{i,S}).

In the following, we prove this theorem by presenting an algorithm with several phases. These phases are similar to recent matching-based algorithms for Nash social welfare [GHV20, GHV21, LV21a, GHL+23], with the exception of two phases which are new (Phases 3 and 4 below). The high-level outline is as follows.

NSW Algorithm Template.
  1. We find an initial matching \tau:\mathcal{A}\to\mathcal{I}, maximizing \prod_{i\in\mathcal{A}}v_{i}(\{\tau(i)\}). Let \mathcal{H}=\tau[\mathcal{A}] denote the matched items and \mathcal{I}^{\prime}=\mathcal{I}\setminus\mathcal{H} the remaining items. Let also \mathcal{A}^{\prime}=\{i\in\mathcal{A}:v_{i}(\mathcal{I}^{\prime})>0\}.

  2. We apply the Relaxation Solver to obtain a fractional solution (x_{i,S})_{i\in\mathcal{A}^{\prime},S\subseteq\mathcal{I}^{\prime}} and values V_{i}=\sum_{S\subseteq\mathcal{I}^{\prime}}v_{i}(S)x_{i,S}. We can view these values as “targets” for the different agents to achieve.

  3. Let \nu_{i}=\max_{j\in\mathcal{I}^{\prime}}v_{i}(j) and \mathcal{A}^{\prime\prime}=\{i\in\mathcal{A}^{\prime}:V_{i}\geq 6\nu_{i}\}. We preprocess the fractional solution (x_{i,S}) for i\in\mathcal{A}^{\prime\prime}, removing sets of low value and partitioning sets of high value, so that every set S in the support of the new fractional solution x^{\prime}_{i,S} for agent i satisfies v_{i}(S)=\Theta(V_{i}).

  4. We apply the Rounding Procedure to x^{\prime}_{i,S} to find an allocation (S_{i}:i\in\mathcal{A}^{\prime\prime}) satisfying

     \sum_{i\in\mathcal{A}}\frac{1}{V_{i}}v_{i}(S_{i})=\Omega\left(\frac{1}{d}|\mathcal{A}|\right).

     Since each S_{i} has value at most V_{i} (due to our preprocessing), it must be the case that a \Theta(\frac{1}{d})-fraction of agents receive value at least \Theta(\frac{1}{d}V_{i}). We allocate a random \Theta(\frac{1}{d})-fraction of items to this \Theta(\frac{1}{d})-fraction of agents (each item from their respective sets independently with probability \Theta(\frac{1}{d})); call the resulting set T_{i} for agent i. We repeat this phase for the remaining items and agents, until no agents are left. For agents i\in\mathcal{A}\setminus\mathcal{A}^{\prime\prime}, we define T_{i}=\emptyset.

  5. We recompute the initial matching to obtain a new matching \sigma:\mathcal{A}\to\mathcal{H}, which maximizes \prod_{i\in\mathcal{A}}v_{i}(T_{i}+\sigma(i)). We allocate T_{i}+\sigma(i) to agent i.

Now we proceed to analyze the phases of this algorithm more rigorously.

2.1 Initial Matching

There is nothing new in this phase. We can find a matching \tau:\mathcal{A}\to\mathcal{I} maximizing \prod_{i\in\mathcal{A}}v_{i}(\tau(i)) by solving a max-weight matching problem on the edges (i,j) with v_{i}(j)>0, using weights w_{ij}=\log v_{i}(j).

We denote by \mathcal{H}=\tau[\mathcal{A}] the matched items, by \mathcal{I}^{\prime}=\mathcal{I}\setminus\mathcal{H} the remaining items, and by \mathcal{A}^{\prime}=\{i\in\mathcal{A}:v_{i}(\mathcal{I}^{\prime})>0\} the agents who get positive value from \mathcal{I}^{\prime}.

We need the following property in later phases.

Lemma 2.

If \tau:\mathcal{A}\to\mathcal{I} is a matching maximizing \prod_{i\in\mathcal{A}}v_{i}(\tau(i)), then for any agent i\in\mathcal{A} and any item j\in\mathcal{I}^{\prime}=\mathcal{I}\setminus\tau[\mathcal{A}], we have v_{i}(j)\leq v_{i}(\tau(i)).

Proof.

If there were j\in\mathcal{I}^{\prime} with v_{i}(j)>v_{i}(\tau(i)), then we could swap \tau(i) for j in the matching and increase its value. ∎

For subadditive valuations, we also get v_{i}(S+j)-v_{i}(S)\leq v_{i}(\tau(i)) for any S\subset\mathcal{I}^{\prime} and j\in\mathcal{I}^{\prime}\setminus S (since v_{i}(S+j)\leq v_{i}(S)+v_{i}(j)).
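For illustration, the initial matching phase can be prototyped as follows (our own sketch: brute force over matchings for tiny instances, whereas an actual implementation would run a max-weight bipartite matching algorithm on the log-weights as described above; we assume all singleton values are positive so the logarithm is defined):

```python
import math
from itertools import permutations

def initial_matching(v):
    """Brute-force the matching tau maximizing prod_i v[i][tau(i)],
    equivalently the sum of log-values, for a small instance.

    v[i][j] = v_i({j}); requires #items >= #agents and positive values.
    Returns a tuple best with best[i] = item matched to agent i.
    """
    n, m = len(v), len(v[0])
    best_val, best = -math.inf, None
    for perm in permutations(range(m), n):
        s = sum(math.log(v[i][perm[i]]) for i in range(n))
        if s > best_val:
            best_val, best = s, perm
    return best
```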

2.2 Relaxation Solver

Here we assume that the Relaxation Solver is available as a black-box. We return to its implementations in specific settings in Section 3.

We apply the Relaxation Solver to the residual instance on items \mathcal{I}^{\prime}=\mathcal{I}\setminus\mathcal{H} and agents \mathcal{A}^{\prime} who have nonzero value for some items in \mathcal{I}^{\prime}. The important property of the obtained solution (x_{i,S}) is that after scaling the valuations as follows,

w_{i}(S)=\frac{1}{V_{i}}v_{i}(S),\ \ \ \ \ V_{i}=\sum_{S\subseteq\mathcal{I}^{\prime}}v_{i}(S)x_{i,S}

the social welfare optimum for w_{1},\ldots,w_{n} is at most c|\mathcal{A}^{\prime}|. In other words, for any feasible allocation (T^{*}_{1},\ldots,T^{*}_{n}) of \mathcal{I}^{\prime}, we have

\sum_{i\in\mathcal{A}^{\prime}}\frac{v_{i}(T^{*}_{i})}{V_{i}}\leq c|\mathcal{A}^{\prime}|.

2.3 Set Splitting

Here we describe Phase 3, the preprocessing of the fractional solution. We will work only with agents who get significant value from the fractional solution: let \nu_{i}=\max_{j\in\mathcal{I}^{\prime}}v_{i}(j) and

\mathcal{A}^{\prime\prime}:=\{i\in\mathcal{A}^{\prime}:V_{i}\geq 6\nu_{i}\}.

We prove the following.

Lemma 3.

Assume that the valuations v_{1},\ldots,v_{n} are subadditive. Given a feasible solution (x_{i,S}) of (Configuration LP) for an instance with agents \mathcal{A}^{\prime\prime} and items \mathcal{I}^{\prime}, where V_{i}=\sum_{S\subseteq\mathcal{I}^{\prime}}v_{i}(S)x_{i,S} and \nu_{i}=\max_{j\in\mathcal{I}^{\prime}}v_{i}(j), we can find (in running time, and with a number of value queries, polynomial in the number of nonzero coefficients x_{i,S}) a modified solution (x^{\prime}_{i,S}) such that

  • For every S such that x^{\prime}_{i,S}>0, \frac{1}{3}V_{i}-\nu_{i}\leq v_{i}(S)\leq V_{i}.

  • For every i\in\mathcal{A}^{\prime\prime}, \sum_{S\subseteq\mathcal{I}}x^{\prime}_{i,S}=1.

  • For every j\in\mathcal{I}^{\prime}, \sum_{i,S\ni j}x^{\prime}_{i,S}\leq 1.

Proof.

We apply the following procedure to the fractional solution \mathbf{x}=(x_{i,S}).

SetSplitting(\mathbf{x}).
  1. Let V_{i}=\sum_{S\subseteq\mathcal{I}^{\prime}}v_{i}(S)x_{i,S}, \nu_{i}=\max_{j\in\mathcal{I}^{\prime}}v_{i}(j), and \mathcal{F}_{i}=\{S\subseteq\mathcal{I}^{\prime}:v_{i}(S)\geq\frac{1}{3}V_{i}\}.

  2. Set x^{\prime}_{i,S}=0 and k_{i,S}=0 for S\notin\mathcal{F}_{i}; i.e., discard sets whose value is too low.

  3. For every S\in\mathcal{F}_{i}, let k_{i,S}=\lfloor\frac{3v_{i}(S)}{V_{i}}\rfloor. Split S into sets S_{1},\ldots,S_{k_{i,S}} such that for all \ell=1,\ldots,k_{i,S},

     v_{i}(S_{\ell})\geq\frac{1}{3}V_{i}-\nu_{i}.

     Note that this is possible since by subadditivity, the average value of a subset in any partition of S into k_{i,S} subsets is at least v_{i}(S)/k_{i,S}\geq\frac{1}{3}V_{i}, and indivisibility of items can cause the value to drop by at most \nu_{i}.

  4. For each set S_{\ell} produced above, remove some items if necessary to ensure that its value is at most V_{i}. Call the resulting set S^{\prime}_{\ell}. Since removing an item can decrease the value by at most \nu_{i}, we start from value at least \frac{1}{3}V_{i}-\nu_{i}, and we only remove items as long as the value is more than V_{i}, we can conclude that

     V_{i}\geq v_{i}(S^{\prime}_{\ell})\geq\frac{1}{3}V_{i}-\nu_{i}.

  5. Set \tilde{x}_{i,T}=\sum_{S\in\mathcal{F}_{i},\,\exists\ell:S^{\prime}_{\ell}=T}x_{i,S}, and x^{\prime}_{i,T}=\frac{\tilde{x}_{i,T}}{\sum_{S}\tilde{x}_{i,S}}.

  6. Return \mathbf{x}^{\prime}.

Let us now prove the desired properties of \mathbf{x}^{\prime}. By construction (Step 5), the solution is normalized in the sense that \sum_{T}x^{\prime}_{i,T}=1 for every i\in\mathcal{A}^{\prime\prime}. Also, as we argued above, V_{i}\geq v_{i}(T)\geq\frac{1}{3}V_{i}-\nu_{i} for every set T in the support of \mathbf{x}^{\prime}. It remains to prove that the coefficients x^{\prime}_{i,T} add up to at most 1 on each item.

Let us first consider \tilde{x}_{i,T}: since each contribution to \tilde{x}_{i,T} for j\in T is inherited from some coefficient x_{i,S} with j\in S, each coefficient x_{i,S} contributes at most once in this way, and the coefficients x_{i,S} for S\ni j add up to at most 1, it is clear that \sum_{i,T\ni j}\tilde{x}_{i,T}\leq 1. Finally, x^{\prime}_{i,T} is obtained by normalizing \tilde{x}_{i,T}; so we need to be concerned about the summation \sum_{S}\tilde{x}_{i,S}, which could possibly be less than 1.

We have:

\sum_{S\in\mathcal{F}_{i}}v_{i}(S)x_{i,S}=V_{i}-\sum_{S\notin\mathcal{F}_{i}}v_{i}(S)x_{i,S}\geq V_{i}-\frac{1}{3}V_{i}=\frac{2}{3}V_{i}.

Observe that each coefficient x_{i,S} for S\in\mathcal{F}_{i} contributes k_{i,S} coefficients of the same value to the summation \sum_{S}\tilde{x}_{i,S}, and the union of the respective sets is S. So we have

\sum_{S\subseteq\mathcal{I}^{\prime}}\tilde{x}_{i,S}=\sum_{S\in\mathcal{F}_{i}}k_{i,S}x_{i,S}=\sum_{S\in\mathcal{F}_{i}}\left\lfloor\frac{3v_{i}(S)}{V_{i}}\right\rfloor x_{i,S}\geq\frac{3}{2}\sum_{S\in\mathcal{F}_{i}}\frac{v_{i}(S)x_{i,S}}{V_{i}}\geq 1

considering that 3v_{i}(S)/V_{i}\geq 1 for S\in\mathcal{F}_{i}, so the floor operation can decrease this quantity by at most a factor of 2, and \sum_{S\in\mathcal{F}_{i}}v_{i}(S)x_{i,S}\geq\frac{2}{3}V_{i} from above. Hence x^{\prime}_{i,T}=\frac{\tilde{x}_{i,T}}{\sum_{S}\tilde{x}_{i,S}}\leq\tilde{x}_{i,T} and the coefficients x^{\prime}_{i,S} for S\ni j add up to at most 1. ∎
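To illustrate the splitting in Step 3, here is a sketch for the special case of additive valuations (our own toy code; for general subadditive valuations a similar greedy partition works using value queries): close a piece as soon as it reaches value \frac{1}{3}V_{i}-\nu_{i}; the leftover mass guarantees the last piece is also large enough.

```python
def split_set(S, weights, V, nu):
    """Split S into k = floor(3*v(S)/V) pieces, each of value at least
    V/3 - nu, for an additive valuation v(T) = sum of weights[j].

    Greedy: close a piece once it reaches the threshold V/3 - nu; since
    each closed piece has value at most V/3 (threshold plus one item of
    value <= nu), the leftover for the last piece is at least V/3.
    """
    v = lambda T: sum(weights[j] for j in T)
    k = max(1, int(3 * v(S) // V))
    pieces, current = [], []
    for j in S:
        current.append(j)
        if len(pieces) < k - 1 and v(current) >= V / 3 - nu:
            pieces.append(current)
            current = []
    pieces.append(current)
    return pieces
```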

2.4 Iterated Rounding

Finally, we need to round the fractional solution (x^{\prime}_{i,S}) obtained in the previous phase. As a subroutine, we use the assumed Rounding Procedure for (utilitarian) social welfare.

Given a fractional solution \mathbf{x}^{\prime}=(x^{\prime}_{i,S}) obtained in the previous phase, we call the procedure NSW-Round(\mathbf{x}^{\prime},\mathcal{A}^{\prime\prime},\mathcal{I}^{\prime},\delta) with parameter \delta=\frac{1}{7d}, where d is the approximation factor guaranteed by the Rounding Procedure.

Algorithm 1 Iterated Rounding
1:procedure NSW-Round(\mathbf{x}^{\prime},\mathcal{A}_{0},\mathcal{I}_{0},\delta):
2:     Let V^{\prime}_{i}\leftarrow\sum_{S\subseteq\mathcal{I}_{0}}v_{i}(S)x^{\prime}_{i,S}
3:     For each item j\in\mathcal{I}_{0} independently, let r_{j}\leftarrow t with probability \delta(1-\delta)^{t-1} for t\geq 1.
4:     Let R_{t}\leftarrow\{j\in\mathcal{I}_{0}:r_{j}=t\} for all t\geq 1
5:     Let \mathcal{A}_{1}\leftarrow\mathcal{A}_{0}, t\leftarrow 1
6:     while \mathcal{A}_{t}\neq\emptyset do
7:         (S_{i}:i\in\mathcal{A}_{t})\leftarrow{\bf RoundingProcedure}(\mathbf{x}^{\prime},\mathcal{A}_{t})
8:         \mathcal{A}_{t+1}\leftarrow\{i\in\mathcal{A}_{t}:v_{i}(S_{i})<\delta V^{\prime}_{i}\}
9:         For each agent i\in\mathcal{A}_{t}\setminus\mathcal{A}_{t+1}, allocate T_{i}\leftarrow S_{i}\cap R_{t}
10:         t\leftarrow t+1
11:     end while
12:     Return (T_{i}:i\in\mathcal{A}_{0})
13:end procedure

As we mentioned above, the intuition behind this rounding procedure is that it gives good value to a large fraction of agents, and exponentially small values to an exponentially decaying number of agents, so overall its Nash social welfare is good. We prove this in a sequence of lemmas.
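The procedure can also be read as code. Below is a toy Python sketch of NSW-Round under simplifying assumptions made purely for illustration: valuations are additive (given by a table v[i][j]), and the black-box Rounding Procedure is replaced by a stand-in split_evenly that simply splits the items among the active agents. Only the structure of the loop (geometric round labels rjr_{j}, the satisfaction test against δVi\delta V^{\prime}_{i}, and keeping only round-tt items) mirrors Algorithm 1.

```python
import random

def nsw_round(v, agents, items, V, rounding_procedure, delta, seed=0):
    """Toy sketch of NSW-Round (Algorithm 1) for additive values v[i][j].

    rounding_procedure(active) stands in for the black-box rounding step:
    it returns a tentative set S_i for each active agent i. We assume it
    eventually gives every active agent value at least delta * V[i].
    """
    rng = random.Random(seed)
    # Each item independently gets a round label r_j with
    # Pr[r_j = t] = delta * (1 - delta) ** (t - 1)  (geometric distribution).
    r = {}
    for j in items:
        t = 1
        while rng.random() >= delta:
            t += 1
        r[j] = t
    T = {i: set() for i in agents}
    active, t = set(agents), 1
    while active:
        S = rounding_procedure(active)
        # Agents whose tentative set reaches a delta-fraction of V_i are done;
        # they keep only the items whose round label equals the current round.
        done = {i for i in active if sum(v[i][j] for j in S[i]) >= delta * V[i]}
        for i in done:
            T[i] = {j for j in S[i] if r[j] == t}
        active -= done
        t += 1
    return T

# Tiny instance: 2 agents, 4 items, unit additive values, V_i = 2.
agents, items = [0, 1], [0, 1, 2, 3]
v = {i: {j: 1.0 for j in items} for i in agents}
V = {i: 2.0 for i in agents}

def split_evenly(active):
    # Stand-in rounding procedure: deal the items out to the active agents.
    act = sorted(active)
    return {i: set(items[k::len(act)]) for k, i in enumerate(act)}

alloc = nsw_round(v, agents, items, V, split_evenly, delta=0.5)
```

The output is a partial allocation: disjoint sets TiT_{i}, each contained in the tentative set of the round in which agent ii was satisfied.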

Lemma 4.

Under our assumption on the Rounding Procedure, and setting δ=17d\delta=\frac{1}{7d}, in each round there is at least a δ\delta-fraction of agents (rounded up to the nearest integer) who receive value at least δVi\delta V^{\prime}_{i}.

Proof.

Note that Vi16ViV^{\prime}_{i}\geq\frac{1}{6}V_{i}, since every set in the support of xi,Sx^{\prime}_{i,S} has value at least 13Viνi16Vi\frac{1}{3}V_{i}-\nu_{i}\geq\frac{1}{6}V_{i}. We assume that under valuations wi(S)=1Vivi(S)w_{i}(S)=\frac{1}{V^{\prime}_{i}}v_{i}(S), the Rounding Procedure returns an allocation (Si:i𝒜t)(S_{i}:i\in\mathcal{A}_{t}) such that i𝒜twi(Si)1d|𝒜t|\sum_{i\in\mathcal{A}_{t}}w_{i}(S_{i})\geq\frac{1}{d}|\mathcal{A}_{t}|. Also, the fractional solution 𝐱\mathbf{x}^{\prime} has been processed so that no set in its support for agent ii has value more than Vi6ViV_{i}\leq 6V^{\prime}_{i}, and the rounding only allocates subsets of sets in the support of 𝐱\mathbf{x}^{\prime}. Hence, we have wi(Si)=1Vivi(Si)6w_{i}(S_{i})=\frac{1}{V^{\prime}_{i}}v_{i}(S_{i})\leq 6 for every agent ii. Consider the agents who receive value wi(Si)δw_{i}(S_{i})\geq\delta; if the number of such agents is less than δ|𝒜t|\delta|\mathcal{A}_{t}|, then the total value collected by the agents is i𝒜twi(Si)<6δ|𝒜t|+δ(1δ)|𝒜t|<7δ|𝒜t|=1d|𝒜t|\sum_{i\in\mathcal{A}_{t}}w_{i}(S_{i})<6\cdot\delta|\mathcal{A}_{t}|+\delta\cdot(1-\delta)|\mathcal{A}_{t}|<7\delta|\mathcal{A}_{t}|=\frac{1}{d}|\mathcal{A}_{t}|, which is a contradiction. ∎

Lemma 5.

If |𝒜0|=a|\mathcal{A}_{0}|=a and the agents are ordered by the round in which they received items (and arbitrarily within each round), then the ii-th agent receives each element of her set SiS_{i} independently with probability at least δ(1i1a)\delta(1-\frac{i-1}{a}).

Proof.

Consider the ii-th agent, and suppose that i𝒜t𝒜t+1i\in\mathcal{A}_{t}\setminus\mathcal{A}_{t+1}, i.e. the agent gets items in round tt. We claim that a(1δ)t1ai+1a(1-\delta)^{t-1}\geq a-i+1: In each round, we allocate items to at least a δ\delta-fraction of agents, so the set of agents 𝒜t\mathcal{A}_{t} remaining after t1t-1 rounds has size at most a(1δ)t1a(1-\delta)^{t-1}. This set must contain the agents i,i+1,,ai,i+1,\ldots,a, since none of them is satisfied before agent ii. Therefore, ai+1a(1δ)t1a-i+1\leq a(1-\delta)^{t-1}.

The items allocated to agent ii in round tt are SiRtS_{i}\cap R_{t}, where RtR_{t} contains each element independently with probability δ(1δ)t1\delta(1-\delta)^{t-1}. By the argument above, δ(1δ)t1δai+1a\delta(1-\delta)^{t-1}\geq\delta\cdot\frac{a-i+1}{a}. ∎
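The counting step in this proof, that the number of active agents after t1t-1 rounds is at most a(1δ)t1a(1-\delta)^{t-1} even when the δ\delta-fraction is rounded up, can be checked mechanically; a small sanity check, not part of the proof:

```python
import math

# If every round removes at least ceil(delta * |A_t|) agents, then the number
# of agents still active at the start of round t is at most a*(1-delta)**(t-1).
a, delta = 100, 0.1
size, t = a, 1
while size > 0:
    assert size <= a * (1 - delta) ** (t - 1)
    size -= math.ceil(delta * size)  # worst case: exactly a delta-fraction leaves
    t += 1
```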

Lemma 6.

If TiT_{i} is the set allocated to the ii-th agent in the ordering defined above (and we assume w.l.o.g. that the index of this agent is also ii), a=|𝒜′′|a=|\mathcal{A}^{\prime\prime}|, and maxjvi(j)νi\max_{j\in\mathcal{I}^{\prime}}v_{i}(j)\leq\nu_{i} then

𝔼[logVivi(Ti)+νi]log60δ2(1i1a).{\mathbb{E}}\left[\log\frac{V_{i}}{v_{i}(T_{i})+\nu_{i}}\right]\leq\log\frac{60}{\delta^{2}(1-\frac{i-1}{a})}.
Proof.

By definition, the set SiS_{i} tentatively chosen for the ii-th agent in the round where i𝒜t𝒜t+1i\in\mathcal{A}_{t}\setminus\mathcal{A}_{t+1} satisfies

vi(Si)δViδ(13Viνi)v_{i}(S_{i})\geq\delta V^{\prime}_{i}\geq\delta\left(\frac{1}{3}V_{i}-\nu_{i}\right)

(see Lemma 3). By Lemma 5, the ii-th agent receives a set Ti=SiRtT_{i}=S_{i}\cap R_{t} which contains each element of SiS_{i} independently with probability at least δ(1i1a)\delta(1-\frac{i-1}{a}).

Consider now the expression logVif(Ti)\log\frac{V_{i}}{f(T_{i})}, where f(Ti)=vi(Ti)+νif(T_{i})=v_{i}(T_{i})+\nu_{i}. This is a random quantity due to the randomness in RtR_{t} (the set SiS_{i} is fixed here). We use concentration of subadditive functions (Theorem 20) to argue that this expression is not too large in expectation. We have f(Si)=vi(Si)+νi13δVif(S_{i})=v_{i}(S_{i})+\nu_{i}\geq\frac{1}{3}\delta V_{i}. By the expectation property of subadditive functions (Lemma 16), we have

𝔼[f(Ti)]=𝔼[f(SiRt)]δ(1i1a)13δVi=13δ2(1i1a)Vi.\displaystyle{\mathbb{E}}[f(T_{i})]={\mathbb{E}}[f(S_{i}\cap R_{t})]\geq\delta(1-\frac{i-1}{a})\cdot\frac{1}{3}\delta V_{i}=\frac{1}{3}\delta^{2}(1-\frac{i-1}{a})V_{i}.

Let us denote the last expression μi:=13δ2(1i1a)Vi𝔼[f(Ti)]\mu_{i}:=\frac{1}{3}\delta^{2}(1-\frac{i-1}{a})V_{i}\leq{\mathbb{E}}[f(T_{i})].

Now, we apply the lower-tail inequality, Theorem 20, with q=2q=2. Observe that we can assume νi<120μi\nu_{i}<\frac{1}{20}\mu_{i}. Otherwise,

Viνi20Viμi=60δ2(1i1a)\frac{V_{i}}{\nu_{i}}\leq\frac{20V_{i}}{\mu_{i}}=\frac{60}{\delta^{2}(1-\frac{i-1}{a})}

and so the desired bound holds.

Let us set q=2q=2 and k+1=μi10νiμi15νik+1=\lfloor\frac{\mu_{i}}{10\nu_{i}}\rfloor\geq\frac{\mu_{i}}{15\nu_{i}} (considering that μi10νi2\frac{\mu_{i}}{10\nu_{i}}\geq 2) in Theorem 20. We get

Pr[f(Ti)<μi30]\displaystyle\Pr\left[f(T_{i})<\frac{\mu_{i}}{30}\right] Pr[f(Ti)𝔼[f(Ti)]15μi30]\displaystyle\leq\Pr\left[f(T_{i})\leq\frac{{\mathbb{E}}[f(T_{i})]}{15}-\frac{\mu_{i}}{30}\right]
Pr[f(Ti)𝔼[f(Ti)]5(q+1)(k+1)νiq+1]\displaystyle\leq\Pr\left[f(T_{i})\leq\frac{{\mathbb{E}}[f(T_{i})]}{5(q+1)}-\frac{(k+1)\nu_{i}}{q+1}\right]
(22k)1/222μi/(30νi).\displaystyle\leq\left(\frac{2}{2^{k}}\right)^{1/2}\leq\frac{2}{2^{\mu_{i}/(30\nu_{i})}}.

Our goal is to bound the expectation 𝔼[logVif(Ti)]{\mathbb{E}}[\log\frac{V_{i}}{f(T_{i})}]. We distinguish two cases: When f(Ti)<130μif(T_{i})<\frac{1}{30}\mu_{i}, we use the bound f(Ti)νif(T_{i})\geq\nu_{i}, which always holds. Otherwise, we use the bound f(Ti)130μif(T_{i})\geq\frac{1}{30}\mu_{i}. From here,

𝔼[logVif(Ti)]Pr[f(Ti)<μi30]logViνi+(1Pr[f(Ti)<μi30])log30Viμi{\mathbb{E}}\left[\log\frac{V_{i}}{f(T_{i})}\right]\leq\Pr\left[f(T_{i})<\frac{\mu_{i}}{30}\right]\cdot\log\frac{V_{i}}{\nu_{i}}+\left(1-\Pr\left[f(T_{i})<\frac{\mu_{i}}{30}\right]\right)\cdot\log\frac{30V_{i}}{\mu_{i}}
=Pr[f(Ti)<μi30]logμi30νi+log30Viμi=\Pr\left[f(T_{i})<\frac{\mu_{i}}{30}\right]\cdot\log\frac{\mu_{i}}{30\nu_{i}}+\log\frac{30V_{i}}{\mu_{i}}
22μi/(30νi)logμi30νi+log30Viμi.\leq\frac{2}{2^{\mu_{i}/(30\nu_{i})}}\log\frac{\mu_{i}}{30\nu_{i}}+\log\frac{30V_{i}}{\mu_{i}}.

One can verify that the function 22xlogx\frac{2}{2^{x}}\log x is upper-bounded by log2\log 2 for all x>0x>0. (We are using natural logarithms everywhere.) Hence,

𝔼[logVif(Ti)]log2+log30Viμi<log60Viμi=log60δ2(1i1a).{\mathbb{E}}\left[\log\frac{V_{i}}{f(T_{i})}\right]\leq\log 2+\log\frac{30V_{i}}{\mu_{i}}<\log\frac{60V_{i}}{\mu_{i}}=\log\frac{60}{\delta^{2}(1-\frac{i-1}{a})}. ∎
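The calculus fact used in the last step, that 22xlogx\frac{2}{2^{x}}\log x never exceeds log2\log 2 (natural logarithm), is easy to confirm by a grid search; a numerical sanity check only:

```python
import math

# f(x) = (2 / 2**x) * ln(x); its maximum (attained near x ~ 2) stays below ln 2.
xs = [0.01 * k for k in range(1, 200000)]  # grid over (0, 2000)
fmax = max(2 * 2 ** (-x) * math.log(x) for x in xs)
assert fmax <= math.log(2)
```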

Lemma 7.

If TiT_{i} is the set allocated to the ii-th agent in the ordering defined above (and we assume w.l.o.g. that the index of this agent is also ii), and maxjvi(j)νi\max_{j\in\mathcal{I}^{\prime}}v_{i}(j)\leq\nu_{i} then

1|𝒜′′|i𝒜′′𝔼[logVivi(Ti)+νi]log165δ2.\frac{1}{|\mathcal{A}^{\prime\prime}|}\sum_{i\in\mathcal{A}^{\prime\prime}}{\mathbb{E}}\left[\log\frac{V_{i}}{v_{i}(T_{i})+\nu_{i}}\right]\leq\log\frac{165}{\delta^{2}}.
Proof.

Let us denote a=|𝒜′′|a=|\mathcal{A}^{\prime\prime}|. From Lemma 6, we have

1ai=1a𝔼[logVivi(Ti)+νi]\displaystyle\frac{1}{a}\sum_{i=1}^{a}{\mathbb{E}}\left[\log\frac{V_{i}}{v_{i}(T_{i})+\nu_{i}}\right] \displaystyle\leq 1ai=1alog60δ2(1i1a)=log60δ21ai=1alog(1i1a)\displaystyle\frac{1}{a}\sum_{i=1}^{a}\log\frac{60}{\delta^{2}(1-\frac{i-1}{a})}=\log\frac{60}{\delta^{2}}-\frac{1}{a}\sum_{i=1}^{a}\log\left(1-\frac{i-1}{a}\right)

Here, we have i=1alog(1i1a)=logi=1aai+1a=loga!aaa\sum_{i=1}^{a}\log(1-\frac{i-1}{a})=\log\prod_{i=1}^{a}\frac{a-i+1}{a}=\log\frac{a!}{a^{a}}\geq-a by a standard estimate for the factorial. So we can conclude

1ai=1a𝔼[logVivi(Ti)+νi]log60δ2+1<log165δ2.\frac{1}{a}\sum_{i=1}^{a}{\mathbb{E}}\left[\log\frac{V_{i}}{v_{i}(T_{i})+\nu_{i}}\right]\leq\log\frac{60}{\delta^{2}}+1<\log\frac{165}{\delta^{2}}. ∎
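The factorial estimate invoked above is a!(a/e)aa!\geq(a/e)^{a}, i.e. loga!alogaa\log a!-a\log a\geq-a, a weak form of Stirling’s bound; a quick check via the log-gamma function:

```python
import math

# Check log(a!) - a*log(a) >= -a for a range of a  (log(a!) = lgamma(a + 1)).
for a in range(1, 2001):
    assert math.lgamma(a + 1) - a * math.log(a) >= -a
```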

This concludes the analysis of the iterated rounding phase, which allocates the set TiT_{i} to each agent i𝒜′′i\in\mathcal{A}^{\prime\prime}. For agents i𝒜𝒜′′i\in\mathcal{A}\setminus\mathcal{A}^{\prime\prime}, we set Ti=T_{i}=\emptyset.

2.5 Rematching and finishing the analysis

The last step in the algorithm is to replace the initial matching τ:𝒜\tau:\mathcal{A}\to\mathcal{H} with a new matching σ:𝒜\sigma:\mathcal{A}\to\mathcal{H} which is optimal on top of the allocation (Ti:i𝒜)(T_{i}:i\in\mathcal{A}). To analyze this step, we need two lemmas from previous work, whose proofs can be modified easily to yield the following. (We provide full proofs in the appendix.)

Lemma 8 (matching extension).

Let τ:𝒜\tau:\mathcal{A}\rightarrow\mathcal{I} be the matching maximizing a𝒜vi(τ(a))\prod_{a\in\mathcal{A}}v_{i}(\tau(a)), =τ(𝒜)\mathcal{H}=\tau(\mathcal{A}), and =\mathcal{I}^{\prime}=\mathcal{I}\setminus\mathcal{H}. Let (Vi:i𝒜)(V_{i}:i\in\mathcal{A}) be values such that Vi>0V_{i}>0 for i𝒜i\in\mathcal{A}^{\prime}, Vi=0V_{i}=0 for i𝒜𝒜i\in\mathcal{A}\setminus\mathcal{A}^{\prime} and

i𝒜vi(Ti)Vic|𝒜|\sum_{i\in\mathcal{A}^{\prime}}\frac{v_{i}(T^{*}_{i})}{V_{i}}\leq c|\mathcal{A}^{\prime}|

for every allocation (T1,,Tn)(T^{*}_{1},\ldots,T^{*}_{n}) of the items in \mathcal{I}^{\prime}. Then there is a matching π:𝒜\pi:\mathcal{A}\rightarrow\mathcal{H} such that

i𝒜(Vi+vi(π(i)))1/|𝒜|1c+1i𝒜(vi(Si))1/|𝒜|=OPTc+1\prod_{i\in\mathcal{A}}(V_{i}+v_{i}(\pi(i)))^{1/|\mathcal{A}|}\geq\frac{1}{c+1}\prod_{i\in\mathcal{A}}\left(v_{i}(S^{*}_{i})\right)^{1/|\mathcal{A}|}=\frac{OPT}{c+1}

where (S1,,Sn)(S^{*}_{1},\ldots,S^{*}_{n}) is an allocation of \mathcal{I} optimizing Nash social welfare.

Lemma 9 (rematching).

Let τ:𝒜\tau:\mathcal{A}\rightarrow\mathcal{I} be the matching maximizing a𝒜vi(τ(a))\prod_{a\in\mathcal{A}}v_{i}(\tau(a)), =τ(𝒜)\mathcal{H}=\tau(\mathcal{A}), =\mathcal{I}^{\prime}=\mathcal{I}\setminus\mathcal{H}, π:𝒜\pi:\mathcal{A}\to\mathcal{H} another arbitrary matching, and νi=maxjvi(j)\nu_{i}=\max_{j\in\mathcal{I}^{\prime}}v_{i}(j). Let (Wi:i𝒜)(W_{i}:i\in\mathcal{A}) be nonnegative values. Then there is a matching ρ:𝒜\rho:\mathcal{A}\rightarrow\mathcal{H} such that

i𝒜(max{Wi,vi(ρ(i))})1/|𝒜|i𝒜(max{Wi,vi(π(i)),νi})1/|𝒜|.\prod_{i\in\mathcal{A}}(\max\{W_{i},v_{i}(\rho(i))\})^{1/|\mathcal{A}|}\geq\prod_{i\in\mathcal{A}}\left(\max\{W_{i},v_{i}(\pi(i)),\nu_{i}\}\right)^{1/|\mathcal{A}|}.

We apply Lemma 8 with the values Vi=Svi(S)xi,SV_{i}=\sum_{S\subseteq\mathcal{I}^{\prime}}v_{i}(S)x_{i,S}, where (xi,S)(x_{i,S}) is the fractional solution returned by Relaxation Solver. Due to our assumptions, the condition of Lemma 8 is satisfied and hence there is a matching π:𝒜\pi:\mathcal{A}\to\mathcal{H} as described in Lemma 8:

i𝒜(Vi+vi(π(i)))1/|𝒜|OPTc+1.\prod_{i\in\mathcal{A}}(V_{i}+v_{i}(\pi(i)))^{1/|\mathcal{A}|}\geq\frac{OPT}{c+1}.

From Lemma 7 and Markov’s inequality, with constant probability we find an assignment (Ti:i𝒜′′)(T_{i}:i\in\mathcal{A}^{\prime\prime}) such that

(i𝒜′′Viv(Ti)+νi)1/|𝒜′′|<200δ2<10000d2.\left(\prod_{i\in\mathcal{A}^{\prime\prime}}\frac{V_{i}}{v(T_{i})+\nu_{i}}\right)^{1/|\mathcal{A}^{\prime\prime}|}<\frac{200}{\delta^{2}}<10000\,d^{2}.

(Recall that δ=17d\delta=\frac{1}{7d}, where d>1d>1 is the parameter guaranteed by the Rounding Procedure.)

Moreover, we know that v(Ti)Viv(T_{i})\leq V_{i} and Vi6νiV_{i}\geq 6\nu_{i}, hence v(Ti)+νi2Viv(T_{i})+\nu_{i}\leq 2V_{i} for i𝒜′′i\in\mathcal{A}^{\prime\prime}. For agents in 𝒜𝒜′′\mathcal{A}\setminus\mathcal{A}^{\prime\prime}, we have Ti=T_{i}=\emptyset and Vi6νiV_{i}\leq 6\nu_{i}. From here, we have

i𝒜Vi+vi(π(i))v(Ti)+νi+vi(π(i))\displaystyle\prod_{i\in\mathcal{A}}\frac{V_{i}+v_{i}(\pi(i))}{v(T_{i})+\nu_{i}+v_{i}(\pi(i))} \displaystyle\leq i𝒜′′2Vi+vi(π(i))v(Ti)+νi+vi(π(i))i𝒜𝒜′′6νi+vi(π(i))νi+vi(π(i))\displaystyle\prod_{i\in\mathcal{A}^{\prime\prime}}\frac{2V_{i}+v_{i}(\pi(i))}{v(T_{i})+\nu_{i}+v_{i}(\pi(i))}\prod_{i\in\mathcal{A}\setminus\mathcal{A}^{\prime\prime}}\frac{6\nu_{i}+v_{i}(\pi(i))}{\nu_{i}+v_{i}(\pi(i))}
\displaystyle\leq i𝒜′′2Viv(Ti)+νii𝒜𝒜′′6(20000d2)|𝒜|.\displaystyle\prod_{i\in\mathcal{A}^{\prime\prime}}\frac{2V_{i}}{v(T_{i})+\nu_{i}}\prod_{i\in\mathcal{A}\setminus\mathcal{A}^{\prime\prime}}6\leq(20000\,d^{2})^{|\mathcal{A}|}.

Finally, we use the rematching Lemma 9, with values vi(Ti)v_{i}(T_{i}): there exists a matching ρ:𝒜\rho:\mathcal{A}\to\mathcal{H} such that

i𝒜(max{vi(Ti),vi(ρ(i))})1/|𝒜|i𝒜(max{vi(Ti),vi(π(i)),νi})1/|𝒜|13i𝒜(v(Ti)+νi+vi(π(i)))1/|𝒜|\prod_{i\in\mathcal{A}}(\max\{v_{i}(T_{i}),v_{i}(\rho(i))\})^{1/|\mathcal{A}|}\geq\prod_{i\in\mathcal{A}}(\max\{v_{i}(T_{i}),v_{i}(\pi(i)),\nu_{i}\})^{1/|\mathcal{A}|}\geq\frac{1}{3}\prod_{i\in\mathcal{A}}(v(T_{i})+\nu_{i}+v_{i}(\pi(i)))^{1/|\mathcal{A}|}
120000d2i𝒜(Vi+vi(π(i)))1/|𝒜|OPT20000(c+1)d2.\geq\frac{1}{20000\,d^{2}}\prod_{i\in\mathcal{A}}(V_{i}+v_{i}(\pi(i)))^{1/|\mathcal{A}|}\geq\frac{OPT}{20000(c+1)d^{2}}.

Recall that at the end, we find a matching σ:𝒜\sigma:\mathcal{A}\to\mathcal{H} maximizing i𝒜vi(Ti+σ(i))\prod_{i\in\mathcal{A}}v_{i}(T_{i}+\sigma(i)). Therefore, the NSW value of our solution is at least as much as the one provided by the matching ρ\rho, which is i𝒜(vi(Ti+ρ(i)))1/|𝒜|i𝒜(max{vi(Ti),vi(ρ(i))})1/|𝒜|120000(c+1)d2OPT\prod_{i\in\mathcal{A}}(v_{i}(T_{i}+\rho(i)))^{1/|\mathcal{A}|}\geq\prod_{i\in\mathcal{A}}(\max\{v_{i}(T_{i}),v_{i}(\rho(i))\})^{1/|\mathcal{A}|}\geq\frac{1}{20000(c+1)d^{2}}OPT. This proves Theorem 1.

3 Nash social welfare with subadditive valuations

Here we explain how to use the general framework described in Section 2 to obtain a constant-factor approximation for subadditive valuations, accessible by demand queries.

Theorem 10.

There is a constant-factor approximation algorithm for the Nash social welfare problem with subadditive valuations, which runs in polynomial time and makes a polynomial number of queries to a demand oracle for each agent’s valuation.

Aside from our general reduction and the ability to solve the Eisenberg-Gale relaxation with demand queries, the main component that we need here is an implementation of a Rounding Procedure for subadditive valuations, as described in Theorem 1. To our knowledge, there is only one such procedure known, which is rather intricate and forms the basis of Feige’s ingenious 1/21/2-approximation algorithm for maximizing social welfare with subadditive valuations [Fei09]. We use it here as a black-box, which can be described as follows.

Theorem 11.

For any ϵ>0\epsilon>0, there is a polynomial-time algorithm, which given a fractional solution (xi,S)(x_{i,S}) of (Configuration LP) for an instance with subadditive valuations, produces a random allocation (Ri:i𝒜)(R_{i}:i\in\mathcal{A}) such that for every agent ii, RiSiR_{i}\subseteq S_{i} for some SiS_{i} with xi,Si>0x_{i,S_{i}}>0, and

𝔼[vi(Ri)](12ϵ)Vi, where Vi=Svi(S)xi,S.{\mathbb{E}}[v_{i}(R_{i})]\geq\left(\frac{1}{2}-\epsilon\right)V_{i},\mbox{ where }V_{i}=\sum_{S\subseteq\mathcal{I}}v_{i}(S)x_{i,S}.

For the proof, we refer the reader to Section 3.2.2 of [Fei09], Theorem 3.9 and the summary of its proof which shows that every player receives expected value at least (12ϵ)Vi(\frac{1}{2}-\epsilon)V_{i}.

Now we are ready to prove Theorem 10.

Proof.

Considering Theorem 1, we want to show how to implement the Relaxation Solver and Rounding Procedure for subadditive valuations.

The Relaxation Solver can be obtained by applying standard convex optimization techniques to (Eisenberg-Gale Relaxation). As we discuss in more detail in Appendix A, we can compute the values and supergradients of the objective function using demand queries, and obtain an optimal solution satisfying the assumption of Lemma 15 (with fi=vi+f_{i}=v^{+}_{i}, α=1\alpha=1), and hence

i𝒜vi+(𝐱i)vi+(𝐱i)2|𝒜|\sum_{i\in\mathcal{A}}\frac{v^{+}_{i}(\mathbf{x}^{*}_{i})}{v^{+}_{i}(\mathbf{x}_{i})}\leq 2|\mathcal{A}|

for every feasible solution 𝐱\mathbf{x}^{*}. Another way to interpret this condition is that for Vi=vi+(𝐱i)V_{i}=v^{+}_{i}(\mathbf{x}_{i}) and modified valuations defined as wi(S)=1Vivi(S)w_{i}(S)=\frac{1}{V_{i}}v_{i}(S), there is no feasible solution 𝐱\mathbf{x}^{*} achieving value i𝒜wi+(𝐱i)>2|𝒜|\sum_{i\in\mathcal{A}}w^{+}_{i}(\mathbf{x}^{*}_{i})>2|\mathcal{A}|. In particular, the social welfare optimum with the valuations (wi:i𝒜)(w_{i}:i\in\mathcal{A}) is at most 2|𝒜|2|\mathcal{A}|. Hence, we satisfy the Relaxation Solver assumptions with c=2c=2.

Next, we implement the Rounding Procedure: Given a fractional solution (xi,S)(x_{i,S}), Theorem 11 gives a procedure which returns a random allocation (Ri:i𝒜)(R_{i}:i\in\mathcal{A}) such that 𝔼[vi(Ri)](12ϵ)Vi=(12ϵ)Sxi,Svi(S){\mathbb{E}}[v_{i}(R_{i})]\geq(\frac{1}{2}-\epsilon)V_{i}=(\frac{1}{2}-\epsilon)\sum_{S}x_{i,S}v_{i}(S). This means that for the modified valuations, wi(S)=1Vivi(S)w_{i}(S)=\frac{1}{V_{i}}v_{i}(S), we have 𝔼[wi(Ri)]12ϵ{\mathbb{E}}[w_{i}(R_{i})]\geq\frac{1}{2}-\epsilon, and i𝒜wi(Ri)(12ϵ)|𝒜|\sum_{i\in\mathcal{A}}w_{i}(R_{i})\geq(\frac{1}{2}-\epsilon)|\mathcal{A}|. Hence, we satisfy the Rounding Procedure assumptions with d=212ϵd=\frac{2}{1-2\epsilon}.

Finally, we apply Theorem 1 with c=2c=2 and d=212ϵd=\frac{2}{1-2\epsilon}. We obtain a constant-factor approximation algorithm for the Nash social welfare problem with subadditive valuations accessible via demand queries. (The constant factor ends up being 20000(c+1)d2=37500020000(c+1)d^{2}=375000 for ϵ=0.1\epsilon=0.1.) ∎
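The arithmetic behind the final constant can be double-checked directly:

```python
# With c = 2 and eps = 0.1, the Rounding Procedure gives d = 2 / (1 - 2*eps),
# and the overall factor is 20000 * (c + 1) * d**2 = 375000.
c, eps = 2, 0.1
d = 2 / (1 - 2 * eps)                 # = 2.5
factor = 20000 * (c + 1) * d ** 2
assert abs(d - 2.5) < 1e-12
assert abs(factor - 375000) < 1e-6
```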

References

  • [AGSS17] Nima Anari, Shayan Oveis Gharan, Amin Saberi, and Mohit Singh. Nash social welfare, matrix permanent, and stable polynomials. In Proceedings of the 8th Innovations in Theoretical Computer Science Conference (ITCS), 2017.
  • [AMGV18] Nima Anari, Tung Mai, Shayan Oveis Gharan, and Vijay V Vazirani. Nash social welfare for indivisible items under separable, piecewise-linear concave utilities. In Proceedings of the 29th annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 2274–2290. SIAM, 2018.
  • [Bar05] Julius B Barbanel. The geometry of efficient fair division. Cambridge University Press, 2005.
  • [BBKS20] Siddharth Barman, Umang Bhaskar, Anand Krishna, and Ranjani G. Sundaram. Tight approximation algorithms for p-mean welfare under subadditive valuations. In 28th Annual European Symposium on Algorithms, ESA 2020, volume 173, pages 11:1–11:17, 2020.
  • [BCE+16] Felix Brandt, Vincent Conitzer, Ulle Endriss, Jérôme Lang, and Ariel D. Procaccia, editors. Handbook of Computational Social Choice. Cambridge University Press, 2016.
  • [BKKN21] Siddharth Barman, Anand Krishna, Pooja Kulkarni, and Shivika Narang. Sublinear approximation algorithm for nash social welfare with XOS valuations. CoRR, abs/2110.00767, 2021.
  • [BKV18] Siddharth Barman, Sanath Kumar Krishnamurthy, and Rohit Vaish. Finding fair and efficient allocations. In Proceedings of the 2018 ACM Conference on Economics and Computation, pages 557–574, 2018.
  • [BS06] Nikhil Bansal and Maxim Sviridenko. The Santa Claus problem. In Proceedings of the 38th ACM Symposium on Theory of Computing (STOC), pages 31–40, 2006.
  • [BT96] Steven J Brams and Alan D Taylor. Fair Division: From cake-cutting to dispute resolution. Cambridge University Press, 1996.
  • [CCG+18] Bhaskar Ray Chaudhury, Yun Kuen Cheung, Jugal Garg, Naveen Garg, Martin Hoefer, and Kurt Mehlhorn. On fair division for indivisible items. In Proceedings of the 38th IARCS annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS), pages 25:1–17. Springer, 2018.
  • [CDG+17] Richard Cole, Nikhil Devanur, Vasilis Gkatzelis, Kamal Jain, Tung Mai, Vijay V Vazirani, and Sadra Yazdanbod. Convex program duality, Fisher markets, and Nash social welfare. In Proceedings of the 2017 ACM Conference on Economics and Computation (EC), pages 459–460. ACM, 2017.
  • [CG15] Richard Cole and Vasilis Gkatzelis. Approximating the Nash social welfare with indivisible items. In Proceedings of the 47th ACM Symposium on Theory of Computing (STOC), pages 371–380. ACM, 2015.
  • [CGM21] Bhaskar Ray Chaudhury, Jugal Garg, and Ruta Mehta. Fair and efficient allocations under subadditive valuations. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 5269–5276, 2021.
  • [DHHW23] Daniel Dadush, Christopher Hojny, Sophie Huiberts, and Stefan Weltge. A simple method for convex optimization in the oracle model. Mathematical Programming, 2023.
  • [DNS10] Shahar Dobzinski, Noam Nisan, and Michael Schapira. Approximation algorithms for combinatorial auctions with complement-free bidders. Math. Oper. Res., 35(1):1–13, 2010.
  • [Fei08] Uriel Feige. On allocations that maximize fairness. In SODA, volume 8, pages 287–293. Citeseer, 2008.
  • [Fei09] Uriel Feige. On maximizing welfare when utility functions are subadditive. SIAM J. Comput., 39(1):122–142, 2009.
  • [GHL+23] Jugal Garg, Edin Husic, Wenzheng Li, László A. Végh, and Jan Vondrák. Approximating Nash social welfare by matching and local search. In Barna Saha and Rocco A. Servedio, editors, Proceedings of the 55th Annual ACM Symposium on Theory of Computing, STOC 2023, Orlando, FL, USA, June 20-23, 2023, pages 1298–1310. ACM, 2023.
  • [GHM17] Jugal Garg, Martin Hoefer, and Kurt Mehlhorn. Satiation in Fisher markets and approximation of Nash social welfare. arXiv preprint arXiv:1707.04428, 2017.
  • [GHM18] Jugal Garg, Martin Hoefer, and Kurt Mehlhorn. Approximating the Nash social welfare with budget-additive valuations. In Proceedings of the 29th annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 2326–2340. SIAM, 2018.
  • [GHV20] Jugal Garg, Edin Husić, and László A. Végh. Approximating Nash social welfare under Rado valuations. arXiv preprint arXiv:2009.14793, 2020.
  • [GHV21] Jugal Garg, Edin Husić, and László Végh. Approximating Nash social welfare under Rado valuations. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pages 1412–1425, 2021.
  • [GKK20] Jugal Garg, Pooja Kulkarni, and Rucha Kulkarni. Approximating Nash social welfare under submodular valuations through (un)matchings. In Proceedings of the ACM-SIAM Symposium on Discrete Algorithms, SODA, pages 2673–2687, 2020.
  • [Kel97] Frank Kelly. Charging and rate control for elastic traffic. European transactions on Telecommunications, 8(1):33–37, 1997.
  • [KN79] Mamoru Kaneko and Kenjiro Nakamura. The Nash social welfare function. Econometrica: Journal of the Econometric Society, pages 423–435, 1979.
  • [LV21a] Wenzheng Li and Jan Vondrák. A constant-factor approximation algorithm for Nash social welfare with submodular valuations. In 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS), pages 25–36. IEEE, 2021.
  • [LV21b] Wenzheng Li and Jan Vondrák. Estimating the Nash social welfare for coverage and other submodular valuations. In Proceedings of the Thirty-First Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1119–1130. SIAM, 2021.
  • [Mou04] Hervé Moulin. Fair division and collective welfare. MIT press, 2004.
  • [Nas50] John F Nash. The bargaining problem. Econometrica: Journal of the Econometric Society, pages 155–162, 1950.
  • [Rot16] Jörg Rothe, editor. Economics and Computation, An Introduction to Algorithmic Game Theory, Computational Social Choice, and Fair Division. Springer, 2016.
  • [RW98] Jack Robertson and William Webb. Cake-cutting algorithms: Be fair if you can. CRC Press, 1998.
  • [Sch03] Gideon Schechtman. Concentration, results and applications. Handbook of the Geometry of Banach Spaces, (2):1603–1634, 2003.
  • [Tal89] Michel Talagrand. Isoperimetry and integrability of the sum of independent Banach space valued random variables. Annals of Probability, 17:1546–1570, 1989.
  • [Tal95] Michel Talagrand. Concentration of measure and isoperimetric inequalities in product spaces. Inst. Hautes Études Sci. Publ. Math., 81:73–205, 1995.
  • [Var74] Hal R Varian. Equity, envy, and efficiency. Journal of Economic Theory, 9(1):63–91, 1974.
  • [You95] H Peyton Young. Equity: in theory and practice. Princeton University Press, 1995.

Appendix A The Eisenberg-Gale Relaxation

We consider the following relaxation of the Nash Social Welfare problem similar to the relaxations in [GHV21, LV21a]. We remark that the application of (Eisenberg-Gale Relaxation) in the Nash Social Welfare algorithm excludes the items allocated in the initial matching; indeed we ignore those items for the analysis in this section.

max\displaystyle\max\quad i𝒜logfi(𝐱i)\displaystyle\sum_{i\in\mathcal{A}}\log f_{i}(\mathbf{x}_{i}) (Eisenberg-Gale Relaxation)
i𝒜xij1\displaystyle\quad\sum_{i\in\mathcal{A}}x_{ij}\leq 1 j\displaystyle\forall j\in\mathcal{I}
xij0\displaystyle\quad x_{ij}\geq 0 i𝒜,j\displaystyle\forall i\in\mathcal{A},j\in\mathcal{I}

where fif_{i} is a suitable relaxation of the valuation function viv_{i} for each ii. In particular, we will use the concave extension, fi=vi+f_{i}=v_{i}^{+}:

vi+(𝐱i):=\displaystyle v_{i}^{+}(\mathbf{x}_{i}):= maxSvi(S)xi,S:\displaystyle\max\sum_{S\subseteq\mathcal{I}}v_{i}(S)x_{i,S}: (Concave Extension)
S:jSxi,Sxij\displaystyle\sum_{S\subseteq\mathcal{I}:j\in S}x_{i,S}\leq x_{ij} j\displaystyle\forall j\in\mathcal{I}
Sxi,S=1\displaystyle\ \ \sum_{S\subseteq\mathcal{I}}x_{i,S}=1
xi,S0\displaystyle\ \ x_{i,S}\geq 0 S\displaystyle\forall S\subseteq\mathcal{I}

Note that (Concave Extension) is a linear program. The dual LP to (Concave Extension) is

vi+(𝐱i)=\displaystyle v_{i}^{+}(\mathbf{x}_{i})= minq+jpjxij:\displaystyle\min q+\sum_{j\in\mathcal{I}}p_{j}x_{ij}: (Dual LP)
q+jSpjvi(S)\displaystyle q+\sum_{j\in S}p_{j}\geq v_{i}(S) S\displaystyle\forall S\subseteq\mathcal{I}
pj0\displaystyle p_{j}\geq 0 j.\displaystyle\forall j\in\mathcal{I}.

From here, we can see that vi+(𝐱i)v^{+}_{i}(\mathbf{x}_{i}) is a minimum over a collection of linear functions, and hence a concave function.

A.1 Solving the Eisenberg-Gale Relaxation

Here we show how to solve the (Eisenberg-Gale Relaxation) using demand queries.

Lemma 12.

Given demand oracles for v1,,vnv_{1},\cdots,v_{n}, an optimal solution 𝐱\mathbf{x}^{*} for (Eisenberg-Gale Relaxation) can be found within a polynomially small error in polynomial time. Moreover, the support of 𝐱\mathbf{x}^{*} has size polynomial in nn.

Since vi+(𝐱i)v^{+}_{i}(\mathbf{x}_{i}) is a nonnegative concave function, logvi+(𝐱i)\log v^{+}_{i}(\mathbf{x}_{i}) is a concave function as well (wherever vi+(𝐱i)>0v^{+}_{i}(\mathbf{x}_{i})>0). If we implement the evaluation and supergradient oracles for logvi+(𝐱i)\log v^{+}_{i}(\mathbf{x}_{i}), then we can use standard techniques (see, e.g., [DHHW23]) to maximize i𝒜logvi+(𝐱i)\sum_{i\in\mathcal{A}}\log v^{+}_{i}(\mathbf{x}_{i}) over the convex polytope

P={𝐱+×𝒜:j,i𝒜xij1}.P=\{\mathbf{x}\in{\mathbb{R}}_{+}^{\mathcal{I}\times\mathcal{A}}:\forall j\in\mathcal{I},\sum_{i\in\mathcal{A}}x_{ij}\leq 1\}.

The function vi+(𝐱i)v^{+}_{i}(\mathbf{x}_{i}) can be evaluated with polynomially many demand queries; this is well-known [Fei08] and holds because the demand oracle happens to be the separation oracle for (Dual LP). Hence we can also evaluate logvi+(𝐱i)\log v^{+}_{i}(\mathbf{x}_{i}). We focus here on the implementation of a supergradient oracle.
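To make the separation-oracle connection concrete: a dual constraint q+jSpjvi(S)q+\sum_{j\in S}p_{j}\geq v_{i}(S) is violated iff maxS(vi(S)jSpj)>q\max_{S}(v_{i}(S)-\sum_{j\in S}p_{j})>q, and this maximization is precisely a demand query at prices 𝐩\mathbf{p}. The sketch below makes this explicit with a brute-force demand oracle (exponential time, a stand-in for the oracle assumed by the algorithm) and a toy subadditive valuation; all names here are ours, for illustration:

```python
from itertools import combinations

def demand_query(v, items, prices):
    """Brute-force demand oracle: a set S maximizing v(S) - sum of its prices."""
    best, best_util = frozenset(), v(frozenset())
    for k in range(1, len(items) + 1):
        for S in combinations(items, k):
            S = frozenset(S)
            util = v(S) - sum(prices[j] for j in S)
            if util > best_util:
                best, best_util = S, util
    return best, best_util

def separate_dual(v, items, prices, q):
    """Separation oracle for (Dual LP): return a set S whose constraint
    q + sum_{j in S} p_j >= v(S) is violated, or None if (q, p) is feasible."""
    S, util = demand_query(v, items, prices)
    return S if util > q else None

# Toy subadditive valuation on 3 items: unit values, capped at 2.
items = [0, 1, 2]
def v(S):
    return min(len(S), 2)

prices = {0: 0.3, 1: 0.4, 2: 0.9}
S, util = demand_query(v, items, prices)   # picks {0, 1}: utility 2 - 0.7
```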

A supergradient of logvi+\log v^{+}_{i} at a point 𝐳\mathbf{z} is any linear function Li(𝐲)L_{i}(\mathbf{y}) such that Li(𝐳)=logvi+(𝐳)L_{i}(\mathbf{z})=\log v^{+}_{i}(\mathbf{z}) and Li(𝐲)logvi+(𝐲)L_{i}(\mathbf{y})\geq\log v^{+}_{i}(\mathbf{y}) everywhere. Given 𝐳\mathbf{z}, as a first step, we find a supergradient of vi+v^{+}_{i} itself: This can be done by solving the dual LP and finding α\alpha and (pj:j)(p_{j}:j\in\mathcal{I}) such that vi+(𝐳)=α+jpjzj=α+𝐩𝐳v^{+}_{i}(\mathbf{z})=\alpha+\sum_{j\in\mathcal{I}}p_{j}z_{j}=\alpha+\mathbf{p}\cdot\mathbf{z}. Since vi+(𝐲)v^{+}_{i}(\mathbf{y}) for every 𝐲\mathbf{y} is the minimum over such linear functions, we also have vi+(𝐲)α+𝐩𝐲v^{+}_{i}(\mathbf{y})\leq\alpha+\mathbf{p}\cdot\mathbf{y} for all 𝐲\mathbf{y}. Hence α+𝐩𝐲\alpha+\mathbf{p}\cdot\mathbf{y} is the desired supergradient at 𝐳\mathbf{z}.

Next, we compute the gradient of log(α+𝐩𝐲)\log(\alpha+\mathbf{p}\cdot\mathbf{y}) with respect to 𝐲\mathbf{y}:

log(α+𝐩𝐲)=(α+𝐩𝐲)α+𝐩𝐲=𝐩α+𝐩𝐲.\nabla\log(\alpha+\mathbf{p}\cdot\mathbf{y})=\frac{\nabla(\alpha+\mathbf{p}\cdot\mathbf{y})}{\alpha+\mathbf{p}\cdot\mathbf{y}}=\frac{\mathbf{p}}{\alpha+\mathbf{p}\cdot\mathbf{y}}.

We claim that the linear approximation of log(α+𝐩𝐲)\log(\alpha+\mathbf{p}\cdot\mathbf{y}) obtained by evaluating this gradient at 𝐳\mathbf{z},

Li(𝐲)=log(α+𝐩𝐳)+(𝐲𝐳)(log(α+𝐩𝐲))|𝐳=log(α+𝐩𝐳)+(𝐲𝐳)𝐩α+𝐩𝐳L_{i}(\mathbf{y})=\log(\alpha+\mathbf{p}\cdot\mathbf{z})+(\mathbf{y}-\mathbf{z})\cdot\nabla(\log(\alpha+\mathbf{p}\cdot\mathbf{y}))|_{\mathbf{z}}=\log(\alpha+\mathbf{p}\cdot\mathbf{z})+(\mathbf{y}-\mathbf{z})\cdot\frac{\mathbf{p}}{\alpha+\mathbf{p}\cdot\mathbf{z}}

is a valid supergradient for logvi+(𝐲)\log v^{+}_{i}(\mathbf{y}) at 𝐳\mathbf{z}. Indeed, we have logvi+(𝐳)=log(α+𝐩𝐳)=Li(𝐳)\log v^{+}_{i}(\mathbf{z})=\log(\alpha+\mathbf{p}\cdot\mathbf{z})=L_{i}(\mathbf{z}), and for all 𝐲\mathbf{y},

logvi+(𝐲)log(α+𝐩𝐲)log(α+𝐩𝐳)+(𝐲𝐳)(log(α+𝐩𝐲))|𝐳=Li(𝐲),\log v^{+}_{i}(\mathbf{y})\leq\log(\alpha+\mathbf{p}\cdot\mathbf{y})\leq\log(\alpha+\mathbf{p}\cdot\mathbf{z})+(\mathbf{y}-\mathbf{z})\cdot\nabla(\log(\alpha+\mathbf{p}\cdot\mathbf{y}))|_{\mathbf{z}}=L_{i}(\mathbf{y}),

where the second inequality follows from the concavity of log(α+𝐩𝐲)\log(\alpha+\mathbf{p}\cdot\mathbf{y}).

Hence, (Eisenberg-Gale Relaxation) can be solved in polynomial time, within a polynomially small error, using standard convex optimization techniques [DHHW23]. In particular, we can find a point 𝐱\mathbf{x} such that i𝒜logvi+(𝐱i)i𝒜logvi+(𝐱i)+ϵ\sum_{i\in\mathcal{A}}\log v^{+}_{i}(\mathbf{x}^{*}_{i})\leq\sum_{i\in\mathcal{A}}\log v^{+}_{i}(\mathbf{x}_{i})+\epsilon for every feasible solution 𝐱\mathbf{x}^{*}.

Finally, let’s explain why the solution can be assumed to have polynomially bounded support. Given a fractional solution xijx_{ij} (which has obviously polynomially bounded support), for each agent ii, using demand queries we also obtain a solution of (Dual LP) certifying the value of vi+(𝐱i)v^{+}_{i}(\mathbf{x}_{i}). By complementary slackness, there is a matching primal solution of (Eisenberg-Gale Relaxation) which has nonzero variables corresponding to the tight constraints in (Dual LP) that define the dual solution. Since the dimension of (Dual LP) is polynomial, the number of such tight constraints is also polynomial. Hence we can assume that the number of nonzero variables in (Eisenberg-Gale Relaxation) is polynomial.

A.2 Properties of the optimal solution

Consider now the (Eisenberg-Gale Relaxation) in a general form, with objective functions fif_{i} (which could be equal to vi+v^{+}_{i} or perhaps some other extension of viv_{i}).

max\displaystyle\max\quad i𝒜logfi(𝐱i)\displaystyle\sum_{i\in\mathcal{A}}\log f_{i}(\mathbf{x}_{i}) (Eisenberg-Gale Relaxation)
i𝒜xij1\displaystyle\quad\sum_{i\in\mathcal{A}}x_{ij}\leq 1 j\displaystyle\forall j\in\mathcal{I}
xij0\displaystyle\quad x_{ij}\geq 0 i𝒜,j\displaystyle\forall i\in\mathcal{A},j\in\mathcal{I}

Suppose that 𝐱\mathbf{x} is an optimal solution of this relaxation. We will need the following property, which is also stated in [GHV20] in the context of general concave valuations (Lemma 4.1 in [GHV20]). Our proof here is much simpler. First, we consider the case of differentiable concave fif_{i} which makes the proof cleaner. (Recall however that vi+v^{+}_{i} is not differentiable everywhere.)

Lemma 13.

For an optimal solution 𝐱\mathbf{x} of (Eisenberg-Gale Relaxation) with differentiable nonnegative monotone concave functions fif_{i}, and any other feasible solution 𝐱\mathbf{x}^{*}, we have

i𝒜fi(𝐱i)fi(𝐱i)|𝒜|.\sum_{i\in\mathcal{A}}\frac{f_{i}(\mathbf{x}^{*}_{i})}{f_{i}(\mathbf{x}_{i})}\leq|\mathcal{A}|.
Proof.

Since fi(𝐱)f_{i}(\mathbf{x}) is a concave function, we have

fi(𝐱i)fi(𝐱i)+(𝐱i𝐱i)fi(𝐱i).f_{i}(\mathbf{x}^{*}_{i})\leq f_{i}(\mathbf{x}_{i})+(\mathbf{x}^{*}_{i}-\mathbf{x}_{i})\cdot\nabla f_{i}(\mathbf{x}_{i}).

From here, we get

i𝒜fi(𝐱i)fi(𝐱i)\displaystyle\sum_{i\in\mathcal{A}}\frac{f_{i}(\mathbf{x}^{*}_{i})}{f_{i}(\mathbf{x}_{i})} i𝒜fi(𝐱i)+(𝐱i𝐱i)fi(𝐱i)fi(𝐱i)=|𝒜|+i𝒜(𝐱i𝐱i)(logfi(𝐱i))|𝒜|\displaystyle\leq\sum_{i\in\mathcal{A}}\frac{f_{i}(\mathbf{x}_{i})+(\mathbf{x}^{*}_{i}-\mathbf{x}_{i})\cdot\nabla f_{i}(\mathbf{x}_{i})}{f_{i}(\mathbf{x}_{i})}=|\mathcal{A}|+\sum_{i\in\mathcal{A}}(\mathbf{x}^{*}_{i}-\mathbf{x}_{i})\cdot\nabla(\log f_{i}(\mathbf{x}_{i}))\leq|\mathcal{A}|

using the fact that 𝐱\mathbf{x}^{*} is feasible and 𝐱\mathbf{x} is an optimum for the objective function i𝒜logfi(𝐱i)\sum_{i\in\mathcal{A}}\log f_{i}(\mathbf{x}_{i}). ∎
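As a quick sanity check of Lemma 13 (a worked example of ours, not from the text): take a single item and additive functions fi(𝐱i)=vixif_{i}(\mathbf{x}_{i})=v_{i}x_{i}. The optimum of (Eisenberg-Gale Relaxation) assigns xi=1/nx_{i}=1/n to each of the nn agents, and for any feasible 𝐱\mathbf{x}^{*},

```latex
\sum_{i\in\mathcal{A}} \frac{f_i(\mathbf{x}^*_i)}{f_i(\mathbf{x}_i)}
  \;=\; \sum_{i\in\mathcal{A}} \frac{v_i\,x^*_i}{v_i/n}
  \;=\; n \sum_{i\in\mathcal{A}} x^*_i
  \;\le\; n \;=\; |\mathcal{A}|,
```

with equality whenever 𝐱\mathbf{x}^{*} allocates the whole item, so the bound of the lemma is tight.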

To deal with a more general situation where fif_{i} is not necessarily differentiable, and we don’t find an exact optimum, we prove a robust version of this lemma.

Lemma 14.

Let fi:[0,1]f_{i}:[0,1]^{\mathcal{I}}\to{\mathbb{R}} for each i𝒜i\in\mathcal{A} be nonnegative, monotone and concave. For ϵ>0\epsilon>0, let 𝐱\mathbf{x} be an ϵ4\epsilon^{4}-approximate solution of (Eisenberg-Gale Relaxation), in the sense that for every other feasible solution 𝐱\mathbf{x}^{\prime},

i𝒜logfi(𝐱i)i𝒜(logfi(𝐱i)+ϵ4).\sum_{i\in\mathcal{A}}\log f_{i}(\mathbf{x}^{\prime}_{i})\leq\sum_{i\in\mathcal{A}}(\log f_{i}(\mathbf{x}_{i})+\epsilon^{4}). (1)

Suppose further that xijϵx_{ij}\geq\epsilon for all i,ji,j. Then for every feasible solution 𝐱\mathbf{x}^{*}, we have

i𝒜fi(𝐱i)fi(𝐱i)(1+2ϵ)|𝒜|.\sum_{i\in\mathcal{A}}\frac{f_{i}(\mathbf{x}^{*}_{i})}{f_{i}(\mathbf{x}_{i})}\leq(1+2\epsilon)|\mathcal{A}|.

Note that we must necessarily have ϵ1/|𝒜|\epsilon\leq 1/|\mathcal{A}|, because 1i𝒜xijϵ|𝒜|1\geq\sum_{i\in\mathcal{A}}x_{ij}\geq\epsilon|\mathcal{A}|.

Proof.

Let 𝐱\mathbf{x} satisfy the assumptions of the lemma. For any feasible 𝐱\mathbf{x}^{*} and T1T\geq 1, using the concavity of fif_{i}, we can write

fi(𝐱i)fi(𝐱i)T(fi(𝐱i+1T(𝐱i𝐱i))fi(𝐱i)).f_{i}(\mathbf{x}^{*}_{i})-f_{i}(\mathbf{x}_{i})\leq T(f_{i}(\mathbf{x}_{i}+\frac{1}{T}(\mathbf{x}^{*}_{i}-\mathbf{x}_{i}))-f_{i}(\mathbf{x}_{i})).

From here,

i𝒜fi(𝐱i)fi(𝐱i)fi(𝐱i)Ti𝒜fi(𝐱i+1T(𝐱i𝐱i))fi(𝐱i)fi(𝐱i).\sum_{i\in\mathcal{A}}\frac{f_{i}(\mathbf{x}^{*}_{i})-f_{i}(\mathbf{x}_{i})}{f_{i}(\mathbf{x}_{i})}\leq T\sum_{i\in\mathcal{A}}\frac{f_{i}(\mathbf{x}_{i}+\frac{1}{T}(\mathbf{x}^{*}_{i}-\mathbf{x}_{i}))-f_{i}(\mathbf{x}_{i})}{f_{i}(\mathbf{x}_{i})}.

Note that since xijϵx_{ij}\geq\epsilon, we have 𝐱i+1T(𝐱i𝐱i)𝐱i+1T𝟏(1+1Tϵ)𝐱i\mathbf{x}_{i}+\frac{1}{T}(\mathbf{x}^{*}_{i}-\mathbf{x}_{i})\leq\mathbf{x}_{i}+\frac{1}{T}{\bf 1}\leq(1+\frac{1}{T\epsilon})\mathbf{x}_{i}. Also, fi(0)0f_{i}(0)\geq 0, so by monotonicity and concavity, fi(𝐱i+1T(𝐱i𝐱i))fi((1+1Tϵ)𝐱i)(1+1Tϵ)fi(𝐱i)f_{i}(\mathbf{x}_{i}+\frac{1}{T}(\mathbf{x}^{*}_{i}-\mathbf{x}_{i}))\leq f_{i}((1+\frac{1}{T\epsilon})\mathbf{x}_{i})\leq(1+\frac{1}{T\epsilon})f_{i}(\mathbf{x}_{i}). Similarly, fi(𝐱i+1T(𝐱i𝐱i))fi(𝐱i1T𝟏)(11Tϵ)fi(𝐱i)f_{i}(\mathbf{x}_{i}+\frac{1}{T}(\mathbf{x}^{*}_{i}-\mathbf{x}_{i}))\geq f_{i}(\mathbf{x}_{i}-\frac{1}{T}{\bf 1})\geq(1-\frac{1}{T\epsilon})f_{i}(\mathbf{x}_{i}). Hence the ratio ri=fi(𝐱i+1T(𝐱i𝐱i))fi(𝐱i)fi(𝐱i)r_{i}=\frac{f_{i}(\mathbf{x}_{i}+\frac{1}{T}(\mathbf{x}^{*}_{i}-\mathbf{x}_{i}))-f_{i}(\mathbf{x}_{i})}{f_{i}(\mathbf{x}_{i})} is at most δ=1Tϵ\delta=\frac{1}{T\epsilon} in absolute value, and we can use the following elementary approximation:

riδ2log(1+ri)ri.r_{i}-\delta^{2}\leq\log(1+r_{i})\leq r_{i}.
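For completeness (this step is our own elaboration), the left inequality follows from the standard bound log(1+r)rr2\log(1+r)\geq r-r^{2}, valid for r1/2r\geq-1/2; it applies here since |ri|δ|r_{i}|\leq\delta and δ=1Tϵ12\delta=\frac{1}{T\epsilon}\leq\frac{1}{2} for the eventual choice T=1/ϵ3T=1/\epsilon^{3} (recall ϵ1/|𝒜|\epsilon\leq 1/|\mathcal{A}|):

```latex
\log(1+r_i) \;\ge\; r_i - r_i^2 \;\ge\; r_i - \delta^2
\qquad \bigl(|r_i| \le \delta \le \tfrac{1}{2}\bigr),
```

while the right inequality log(1+ri)ri\log(1+r_{i})\leq r_{i} holds for all ri>1r_{i}>-1 by concavity of the logarithm.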

Plugging into the inequality above, we obtain

i𝒜fi(𝐱i)fi(𝐱i)fi(𝐱i)Ti𝒜riTi𝒜(δ2+log(1+ri))=|𝒜|Tϵ2+Ti𝒜logfi(𝐱i+1T(𝐱i𝐱i))fi(𝐱i).\sum_{i\in\mathcal{A}}\frac{f_{i}(\mathbf{x}^{*}_{i})-f_{i}(\mathbf{x}_{i})}{f_{i}(\mathbf{x}_{i})}\leq T\sum_{i\in\mathcal{A}}r_{i}\leq T\sum_{i\in\mathcal{A}}(\delta^{2}+\log(1+r_{i}))=\frac{|\mathcal{A}|}{T\epsilon^{2}}+T\sum_{i\in\mathcal{A}}\log\frac{f_{i}(\mathbf{x}_{i}+\frac{1}{T}(\mathbf{x}^{*}_{i}-\mathbf{x}_{i}))}{f_{i}(\mathbf{x}_{i})}.

Applying the assumption of the lemma to the feasible solution 𝐱\mathbf{x}^{\prime} given by 𝐱i=𝐱i+1T(𝐱i𝐱i)\mathbf{x}^{\prime}_{i}=\mathbf{x}_{i}+\frac{1}{T}(\mathbf{x}^{*}_{i}-\mathbf{x}_{i}), we have i𝒜logfi(𝐱i+1T(𝐱i𝐱i))fi(𝐱i)ϵ4|𝒜|,\sum_{i\in\mathcal{A}}\log\frac{f_{i}(\mathbf{x}_{i}+\frac{1}{T}(\mathbf{x}^{*}_{i}-\mathbf{x}_{i}))}{f_{i}(\mathbf{x}_{i})}\leq\epsilon^{4}|\mathcal{A}|, which gives

i𝒜fi(𝐱i)fi(𝐱i)=|𝒜|+i𝒜fi(𝐱i)fi(𝐱i)fi(𝐱i)|𝒜|+|𝒜|Tϵ2+Tϵ4|𝒜|.\sum_{i\in\mathcal{A}}\frac{f_{i}(\mathbf{x}^{*}_{i})}{f_{i}(\mathbf{x}_{i})}=|\mathcal{A}|+\sum_{i\in\mathcal{A}}\frac{f_{i}(\mathbf{x}^{*}_{i})-f_{i}(\mathbf{x}_{i})}{f_{i}(\mathbf{x}_{i})}\leq|\mathcal{A}|+\frac{|\mathcal{A}|}{T\epsilon^{2}}+T\epsilon^{4}|\mathcal{A}|.

We set TT to equate the last two terms: T=1/ϵ3T=1/\epsilon^{3}, which gives the statement of the lemma. ∎

Corollary 15.

Given a value oracle and a supergradient oracle for each fif_{i}, for any constant α>0\alpha>0, we can find a solution 𝐱\mathbf{x} of (Eisenberg-Gale Relaxation) in polynomial time such that for any feasible solution 𝐱\mathbf{x}^{*},

i𝒜fi(𝐱i)fi(𝐱i)(1+α)|𝒜|.\sum_{i\in\mathcal{A}}\frac{f_{i}(\mathbf{x}^{*}_{i})}{f_{i}(\mathbf{x}_{i})}\leq(1+\alpha)|\mathcal{A}|.
Proof.

For ϵ>0\epsilon>0 (to be chosen at the end), we run a convex optimization algorithm on (Eisenberg-Gale Relaxation) with the additional constraint that xijϵx_{ij}\geq\epsilon, to obtain a solution 𝐱\mathbf{x} such that for any feasible 𝐱\mathbf{x}^{\prime} satisfying the same constraint, we have

i𝒜logfi(𝐱i)i𝒜logfi(𝐱)ϵ4n.\sum_{i\in\mathcal{A}}\log f_{i}(\mathbf{x}_{i})\geq\sum_{i\in\mathcal{A}}\log f_{i}(\mathbf{x}^{\prime})-\epsilon^{4}n.

By Lemma 14, this solution also satisfies

i𝒜fi(𝐱i)fi(𝐱i)(1+2ϵ)n.\sum_{i\in\mathcal{A}}\frac{f_{i}(\mathbf{x}^{\prime}_{i})}{f_{i}(\mathbf{x}_{i})}\leq(1+2\epsilon)n.

Finally, note that every feasible solution 𝐱\mathbf{x}^{*} of (Eisenberg-Gale Relaxation) can be modified to obtain a feasible solution 𝐱=(1ϵn)𝐱+ϵn1n𝟏\mathbf{x}^{\prime}=(1-\epsilon n)\mathbf{x}^{*}+\epsilon n\cdot\frac{1}{n}{\bf 1} which satisfies the constraint xijϵx^{\prime}_{ij}\geq\epsilon, and we have fi(𝐱i)(1ϵn)fi(𝐱i)f_{i}(\mathbf{x}^{\prime}_{i})\geq(1-\epsilon n)f_{i}(\mathbf{x}^{*}_{i}) for every i𝒜i\in\mathcal{A}. Therefore, our solution also satisfies

i𝒜fi(𝐱i)fi(𝐱i)(1+2ϵ)n1ϵn.\sum_{i\in\mathcal{A}}\frac{f_{i}(\mathbf{x}^{*}_{i})}{f_{i}(\mathbf{x}_{i})}\leq\frac{(1+2\epsilon)n}{1-\epsilon n}.

For ϵ=α2+(1+α)n\epsilon=\frac{\alpha}{2+(1+\alpha)n}, we obtain the desired statement. ∎
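The final choice of ϵ\epsilon can be checked directly (our own arithmetic):

```latex
\frac{(1+2\epsilon)n}{1-\epsilon n} \le (1+\alpha)\,n
\;\Longleftrightarrow\;
1+2\epsilon \le (1+\alpha)(1-\epsilon n)
\;\Longleftrightarrow\;
\epsilon\bigl(2+(1+\alpha)n\bigr) \le \alpha,
```

which holds with equality for ϵ=α2+(1+α)n\epsilon=\frac{\alpha}{2+(1+\alpha)n}.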

Appendix B Rematching lemmas

Here we prove the rematching lemmas from Section 2.2. These are essentially identical to lemmas in previous work on Nash social welfare, only reformulated in a way convenient for our presentation. We give self-contained proofs here for completeness.

Proof of Lemma 8.

Suppose that Si=HiTiS^{*}_{i}=H^{*}_{i}\cup T^{*}_{i} where HiH^{*}_{i}\subseteq\mathcal{H} and TiT^{*}_{i}\subseteq\mathcal{I}^{\prime}. We define a matching π\pi as follows: For each nonempty HiH^{*}_{i}, let π(i)\pi(i) be the item of maximum value (as a singleton) in HiH^{*}_{i}. For Hi=H^{*}_{i}=\emptyset, let π(i)\pi(i) be an arbitrary item in \mathcal{H} not selected as π(i)\pi(i^{\prime}) for some other agent. (Since ||=|𝒜||\mathcal{H}|=|\mathcal{A}|, we can always find such items.) Recall that 𝒜\mathcal{A}^{\prime} are the agents who get positive value from \mathcal{I}^{\prime}; in particular, we can assume Ti=T^{*}_{i}=\emptyset for i𝒜i\notin\mathcal{A}^{\prime}. Then we have, using monotonicity and subadditivity

i𝒜vi(Si)max{Vi,vi(π(i))}\displaystyle\prod_{i\in\mathcal{A}}\frac{v_{i}(S^{*}_{i})}{\max\{V_{i},v_{i}(\pi(i))\}} \displaystyle\leq i𝒜vi(Ti)+vi(Hi)max{Vi,vi(π(i))}i𝒜vi(Ti)+|Hi|vi(π(i))max{Vi,vi(π(i))}\displaystyle\prod_{i\in\mathcal{A}}\frac{v_{i}(T^{*}_{i})+v_{i}(H^{*}_{i})}{\max\{V_{i},v_{i}(\pi(i))\}}\leq\prod_{i\in\mathcal{A}}\frac{v_{i}(T^{*}_{i})+|H^{*}_{i}|v_{i}(\pi(i))}{\max\{V_{i},v_{i}(\pi(i))\}}
\displaystyle\leq i𝒜(vi(Ti)Vi+|Hi|)i𝒜𝒜|Hi|.\displaystyle\prod_{i\in\mathcal{A}^{\prime}}\left(\frac{v_{i}(T^{*}_{i})}{V_{i}}+|H^{*}_{i}|\right)\prod_{i\in\mathcal{A}\setminus\mathcal{A}^{\prime}}|H^{*}_{i}|.

Here we use the AM-GM inequality:

(i𝒜(vi(Ti)Vi+|Hi|)i𝒜𝒜|Hi|)1/|𝒜|1|𝒜|(i𝒜(vi(Ti)Vi+|Hi|)+i𝒜𝒜|Hi|)c+1\displaystyle\left(\prod_{i\in\mathcal{A}^{\prime}}\left(\frac{v_{i}(T^{*}_{i})}{V_{i}}+|H^{*}_{i}|\right)\prod_{i\in\mathcal{A}\setminus\mathcal{A}^{\prime}}|H^{*}_{i}|\right)^{1/|\mathcal{A}|}\leq\frac{1}{|\mathcal{A}|}\left(\sum_{i\in\mathcal{A}^{\prime}}\left(\frac{v_{i}(T^{*}_{i})}{V_{i}}+|H^{*}_{i}|\right)+\sum_{i\in\mathcal{A}\setminus\mathcal{A}^{\prime}}|H^{*}_{i}|\right)\leq c+1

where the last inequality is by assumption and the fact that i𝒜|Hi|||=|𝒜|\sum_{i\in\mathcal{A}}|H^{*}_{i}|\leq|\mathcal{H}|=|\mathcal{A}|. ∎

Proof of Lemma 9.

Let A~={i𝒜:Wi<max{vi(π(i)),νi}}\tilde{A}=\{i\in\mathcal{A}:W_{i}<\max\{v_{i}(\pi(i)),\nu_{i}\}\}. We define a directed bipartite graph BB between A~\tilde{A} and \mathcal{H}, with two types of edges: Eτ={(τ(i),i):iA~}E_{\tau}=\{(\tau(i),i):i\in\tilde{A}\} and Eπ={(i,π(i)):iA~}E_{\pi}=\{(i,\pi(i)):i\in\tilde{A}\}. We also define:

  • Aν={iA~:νi>vi(π(i))}A_{\nu}=\{i\in\tilde{A}:\nu_{i}>v_{i}(\pi(i))\},

  • Aτ=Aν{iA~: directed path in B from i to Aν}A_{\tau}=A_{\nu}\cup\{i\in\tilde{A}:\exists\mbox{ directed path in }B\mbox{ from }i\mbox{ to }A_{\nu}\},

  • Aπ=A~AτA_{\pi}=\tilde{A}\setminus A_{\tau}.

We define a matching ρ\rho as follows:

  • For iAτi\in A_{\tau}, ρ(i):=τ(i)\rho(i):=\tau(i),

  • For iAπi\in A_{\pi}, ρ(i):=π(i)\rho(i):=\pi(i).

  • For iA~i\notin\tilde{A}, we define ρ(i)\rho(i) arbitrarily, to make ρ\rho a matching.

First, observe that this is indeed a matching: If it was the case that τ(i)=π(i)=j\tau(i)=\pi(i^{\prime})=j for some iAτ,iAπi\in A_{\tau},i^{\prime}\in A_{\pi}, then we would have edges (i,j)(i^{\prime},j) and (j,i)(j,i) in the graph, and since there is a directed path from ii to AνA_{\nu} (iAτi\in A_{\tau}), there would also be a directed path from ii^{\prime} to AνA_{\nu}, contradicting the fact that iAπi^{\prime}\in A_{\pi}. Hence, ρ\rho is a matching.

Also, it’s easy to see that the arbitrary choices for iA~i\notin\tilde{A} are always possible: since ||=|𝒜||\mathcal{H}|=|\mathcal{A}|, there are enough unmatched items in \mathcal{H} to complete ρ\rho.

Next, we analyze the value guarantee for ρ\rho:

i𝒜max{Wi,vi(ρ(i))}i𝒜A~WiiA~vi(ρ(i))=i𝒜A~WiiAτvi(τ(i))iAπvi(π(i)).\prod_{i\in\mathcal{A}}\max\{W_{i},v_{i}(\rho(i))\}\geq\prod_{i\in\mathcal{A}\setminus\tilde{A}}W_{i}\prod_{i\in\tilde{A}}v_{i}(\rho(i))=\prod_{i\in\mathcal{A}\setminus\tilde{A}}W_{i}\prod_{i\in A_{\tau}}v_{i}(\tau(i))\prod_{i\in A_{\pi}}v_{i}(\pi(i)).

We claim that iAτvi(τ(i))iAννiiAτAνvi(π(i))\prod_{i\in A_{\tau}}v_{i}(\tau(i))\geq\prod_{i\in A_{\nu}}\nu_{i}\prod_{i\in A_{\tau}\setminus A_{\nu}}v_{i}(\pi(i)). Observe that the vertices of AτA_{\tau} can be covered disjointly by directed paths that terminate in AνA_{\nu} (from each vertex of AτA_{\tau}, there is such a path and it is also unique, because the in-degrees / out-degrees in the graph are at most 11). Let PP denote the AτA_{\tau}-vertices on some directed path like this, and let ss be its last vertex (in AνA_{\nu}). If it were the case that iPvi(τ(i))<νsiP{s}vi(π(i))\prod_{i\in P}v_{i}(\tau(i))<\nu_{s}\prod_{i\in P\setminus\{s\}}v_{i}(\pi(i)), then we could modify the matching τ\tau by swapping its edges on PP for the π\pi-edges from P{s}P\setminus\{s\}, and finally an element of value νs\nu_{s} for ss (since this item is outside of \mathcal{H} and hence available). This would increase the value of the matching τ\tau, which was chosen to be optimal, so this cannot happen.

It follows that iPvi(τ(i))νsiP{s}vi(π(i))\prod_{i\in P}v_{i}(\tau(i))\geq\nu_{s}\prod_{i\in P\setminus\{s\}}v_{i}(\pi(i)) for every maximal directed path terminating in AνA_{\nu}, and since these paths cover AτA_{\tau} disjointly, by combining all these inequalities we obtain

iAτvi(τ(i))iAννiiAτAνvi(π(i)).\prod_{i\in A_{\tau}}v_{i}(\tau(i))\geq\prod_{i\in A_{\nu}}\nu_{i}\prod_{i\in A_{\tau}\setminus A_{\nu}}v_{i}(\pi(i)).

Substituting this into the inequality above,

i𝒜max{Wi,vi(ρ(i))}i𝒜A~WiiAννiiA~Aνvi(π(i))=i𝒜max{Wi,νi,vi(π(i))}.\prod_{i\in\mathcal{A}}\max\{W_{i},v_{i}(\rho(i))\}\geq\prod_{i\in\mathcal{A}\setminus\tilde{A}}W_{i}\prod_{i\in A_{\nu}}\nu_{i}\prod_{i\in\tilde{A}\setminus A_{\nu}}v_{i}(\pi(i))=\prod_{i\in\mathcal{A}}\max\{W_{i},\nu_{i},v_{i}(\pi(i))\}. ∎
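The construction of ρ\rho in this proof is easy to implement; below is a small sketch (our own code, with a hypothetical dict-based representation of the matchings) that computes AτA_{\tau} by following each agent’s unique outgoing path and returns ρ\rho on A~\tilde{A}:

```python
def rematch(agents, pi, tau, A_nu):
    """Compute the matching rho from the proof above, restricted to tilde-A.

    agents: the set tilde-A; pi, tau: dicts mapping each of these agents to an
    item of H; A_nu: the agents with nu_i > v_i(pi(i)). From each agent the
    outgoing path is unique (all out-degrees are at most 1): follow the edge
    (i, pi(i)) and then the tau-edge entering from pi(i), if any.
    """
    tau_inv = {item: i for i, item in tau.items()}
    A_tau = set()
    for i in agents:
        cur, seen = i, set()
        while cur not in seen:
            if cur in A_nu:              # reached A_nu, so i belongs to A_tau
                A_tau.add(i)
                break
            seen.add(cur)
            nxt = tau_inv.get(pi[cur])   # step i -> pi(i) -> tau^{-1}(pi(i))
            if nxt is None:              # path leaves the graph: i stays in A_pi
                break
            cur = nxt
    return {i: (tau[i] if i in A_tau else pi[i]) for i in agents}
```

On the toy instance with agents {1, 2}, pi = {1: 'a', 2: 'b'}, tau = {1: 'b', 2: 'a'} and A_nu = {1}, agent 2 reaches A_nu through item 'b', so both agents are rematched along tau and the output assignments remain pairwise distinct, as the proof guarantees.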

Appendix C Concentration of subadditive functions

Let us start with a simple lower bound on the expected value of a random set with independently sampled elements.

Lemma 16.

If f:2M+f:2^{M}\to{\mathbb{R}}_{+} is a monotone subadditive function and RR is a random subset of SS where each element appears independently with probability 1/k1/k, k1k\geq 1 integer, then

𝔼[f(R)]1kf(S).{\mathbb{E}}[f(R)]\geq\frac{1}{k}f(S).
Proof.

Consider a random coloring of SS, where every element jSj\in S receives independently a random color c(j)[k]c(j)\in[k]. Defining S={jS:c(j)=}S_{\ell}=\{j\in S:c(j)=\ell\}, we see that each set SS_{\ell} has the same distribution as the set RR in the Lemma. Therefore,

𝔼[f(R)]=𝔼[f(S1)]==𝔼[f(Sk)]=𝔼[1k=1kf(S)]1kf(S){\mathbb{E}}[f(R)]={\mathbb{E}}[f(S_{1})]=\ldots={\mathbb{E}}[f(S_{k})]={\mathbb{E}}\left[\frac{1}{k}\sum_{\ell=1}^{k}f(S_{\ell})\right]\geq\frac{1}{k}f(S)

by subadditivity. ∎
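As a small numeric illustration of Lemma 16 (ours, not from the text), take the monotone subadditive function f(T)=min(|T|,c)f(T)=\min(|T|,c) on a ground set SS of mm elements. Then 𝔼[f(R)]=𝔼[min(X,c)]{\mathbb{E}}[f(R)]={\mathbb{E}}[\min(X,c)] for XBinomial(m,1/k)X\sim\mathrm{Binomial}(m,1/k), which can be computed exactly from the pmf:

```python
from math import comb

def expected_capped_binomial(m, k, c):
    """E[min(X, c)] for X ~ Binomial(m, 1/k), computed from the exact pmf."""
    p = 1.0 / k
    return sum(min(t, c) * comb(m, t) * p ** t * (1 - p) ** (m - t)
               for t in range(m + 1))

# Lemma 16 promises E[f(R)] >= f(S)/k = min(m, c)/k.
m, k, c = 10, 3, 4
lhs = expected_capped_binomial(m, k, c)   # E[f(R)]
rhs = min(m, c) / k                       # f(S)/k = 4/3
```

Here the lemma's bound holds with plenty of room, since the cap cc only truncates the upper tail of the binomial.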

This property is similar to the properties of submodular or self-bounding functions, which satisfy very convenient concentration bounds (similar to additive functions). Unfortunately, the same bounds are not true for subadditive functions; however, some concentration results can be recovered with a loss of certain constant factors.

Here we state a powerful concentration result presented by Schechtman [Sch03], based on the “qq-point control” concentration inequality by Talagrand [Tal89, Tal95]. We state it here in a simplified form suitable for our purposes.

Theorem 17.

Let f:2M+f:2^{M}\to{\mathbb{R}}_{+} be a monotone subadditive function, where f({i})1f(\{i\})\leq 1 for every iMi\in M. Then for any real a>0a>0 and integers k,q1k,q\geq 1, and a random set RR from a product distribution,

Pr[f(R)(q+1)a+k](Pr[f(R)a])q1qk.\Pr[f(R)\geq(q+1)a+k]\ (\Pr[f(R)\leq a])^{q}\leq\frac{1}{q^{k}}.

This statement can be obtained from Corollary 12 in [Sch03] by extending the definition of ff to Ω=IM2I\Omega^{*}=\bigcup_{I\subset M}2^{I} simply by saying fI(S)=f(S)f_{I}(S)=f(S) for all SIS\subseteq I. Also, we identify 2I2^{I} with {0,1}I\{0,1\}^{I} in a natural way. The assumption f({i})1f(\{i\})\leq 1 implies that 0f(S+i)f(S)10\leq f(S+i)-f(S)\leq 1 for any set SS, by monotonicity and subadditivity. Therefore, ff is 11-Lipschitz with respect to the Hamming distance, as required in [Sch03]. The statement holds for any product distribution, i.e. a random set RR where elements appear independently.

Note that Theorem 17 refers to tails on both sides and hence is more convenient to use with the median of ff than the expectation. The next lemma shows that this is not a big issue, since the theorem also implies that the median and expectation must be within a constant factor.

Definition 18.

We define the median of a random variable ZZ as any number med(Z)=m\mbox{\sf med}(Z)=m such that

Pr[Zm]1/2,Pr[Zm]1/2.\Pr[Z\leq m]\geq 1/2,\ \ \ \Pr[Z\geq m]\geq 1/2.

For any nonnegative variable, obviously 𝔼[Z]12med(Z){\mathbb{E}}[Z]\geq\frac{1}{2}\mbox{\sf med}(Z). For subadditive functions of independent random variables, we also get a bound in the opposite direction.

Lemma 19.

Let f:2M+f:2^{M}\to{\mathbb{R}}_{+} be a monotone subadditive function, where f({i})1f(\{i\})\leq 1 for every iMi\in M. Then for a random set RR from a product distribution,

𝔼[f(R)]5(med(f(R))+1).{\mathbb{E}}[f(R)]\leq 5(\mbox{\sf med}(f(R))+1).
Proof.

Let a=med(f(R))a=\mbox{\sf med}(f(R)) be the median. We apply Theorem 17 with k=q+1,q3k=q+1,q\geq 3:

Pr[f(R)(q+1)(a+1)]2qqq+1(23)q.\Pr[f(R)\geq(q+1)(a+1)]\leq\frac{2^{q}}{q^{q+1}}\leq\left(\frac{2}{3}\right)^{q}.

We can bound the expectation as follows:

𝔼[f(R)]4(a+1)+(a+1)q=3Pr[f(R)>(q+1)(a+1)]4(a+1)+(a+1)q=3(23)q<5(a+1).{\mathbb{E}}[f(R)]\leq 4(a+1)+(a+1)\sum_{q=3}^{\infty}\Pr[f(R)>(q+1)(a+1)]\leq 4(a+1)+(a+1)\sum_{q=3}^{\infty}\left(\frac{2}{3}\right)^{q}<5(a+1). ∎
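The geometric tail here indeed sums to less than 11 (a quick check of ours):

```latex
\sum_{q=3}^{\infty}\left(\frac{2}{3}\right)^{q}
 \;=\; \frac{(2/3)^{3}}{1-2/3} \;=\; \frac{8}{9} \;<\; 1,
```

so 4(a+1)+(a+1)89<5(a+1)4(a+1)+(a+1)\cdot\frac{8}{9}<5(a+1), as claimed.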

Hence, we obtain the following as a corollary of Theorem 17 and Lemma 19. For convenience, we also introduce a parameter ν>0\nu>0 as an upper bound on singleton values.

Theorem 20.

Let f:2M+f:2^{M}\to{\mathbb{R}}_{+} be a monotone subadditive function, where f({i})νf(\{i\})\leq\nu, ν>0\nu>0, for every iMi\in M. Then for any integers k,q1k,q\geq 1, and a random set RR where elements appear independently,

Pr[f(R)𝔼[f(R)]5(q+1)(k+1)νq+1](2qk)1/q.\Pr\left[f(R)\leq\frac{{\mathbb{E}}[f(R)]}{5(q+1)}-\frac{(k+1)\nu}{q+1}\right]\leq\left(\frac{2}{q^{k}}\right)^{1/q}.
Proof.

Assume first that gg is a function satisfying the assumptions with ν=1\nu=1. We use Theorem 17 with parameter a=(med(g)k)/(q+1)a=(\mbox{\sf med}(g)-k)/(q+1). Note that Pr[g(R)(q+1)a+k]=Pr[g(R)med(g)]1/2\Pr[g(R)\geq(q+1)a+k]=\Pr[g(R)\geq\mbox{\sf med}(g)]\geq 1/2. Hence, Theorem 17 gives

12(Pr[g(R)a])q1qk.\frac{1}{2}\cdot(\Pr[g(R)\leq a])^{q}\leq\frac{1}{q^{k}}.

From Lemma 19, we have a=(med(g)k)/(q+1)(15𝔼[g(R)]1k)/(q+1)a=(\mbox{\sf med}(g)-k)/(q+1)\geq(\frac{1}{5}{\mathbb{E}}[g(R)]-1-k)/(q+1) which implies the statement for ν=1\nu=1:

Pr[g(R)𝔼[g(R)]5(q+1)k+1q+1](2qk)1/q.\Pr\left[g(R)\leq\frac{{\mathbb{E}}[g(R)]}{5(q+1)}-\frac{k+1}{q+1}\right]\leq\left(\frac{2}{q^{k}}\right)^{1/q}.

For a function ff satisfying the assumptions for ν>0\nu>0, we apply this inequality to the function g(R)=1νf(R)g(R)=\frac{1}{\nu}f(R), to obtain the statement of the theorem. ∎