A constant factor approximation for Nash social welfare with subadditive valuations
Abstract
We present a constant-factor approximation algorithm for the Nash social welfare maximization problem with subadditive valuations accessible via demand queries. More generally, we propose a template for NSW optimization by solving a configuration-type LP and using a rounding procedure for (utilitarian) social welfare as a black box, which could be applicable to other variants of the problem.
1 Introduction
We consider the problem of allocating a set $G$ of $m$ indivisible items to a set $A$ of $n$ agents, where each agent $i \in A$ has a valuation function $v_i : 2^G \to \mathbb{R}_{\ge 0}$. The Nash social welfare (NSW) problem is to find an allocation $(S_1, \ldots, S_n)$ that maximizes the geometric mean of the agents' valuations,
$$\mathrm{NSW}(S_1, \ldots, S_n) = \Big( \prod_{i=1}^{n} v_i(S_i) \Big)^{1/n}.$$
For $\alpha \ge 1$, an $\alpha$-approximate solution to the NSW problem is an allocation $(S_1, \ldots, S_n)$ with $\mathrm{NSW}(S_1, \ldots, S_n) \ge \frac{1}{\alpha}\, \mathrm{OPT}$, where $\mathrm{OPT}$ denotes the optimum value of the NSW-maximization problem.
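To make the objective concrete, the following small sketch (not from the paper; the weights are hypothetical) computes an exact NSW optimum for a toy instance with additive valuations by enumerating all allocations:

```python
from itertools import product

# Toy instance: 2 agents, 3 items, additive valuations (hypothetical weights).
weights = [
    [3.0, 1.0, 2.0],  # agent 0's value for each item
    [1.0, 4.0, 1.0],  # agent 1's value for each item
]

def nsw(alloc, weights):
    """Geometric mean of the agents' additive values; alloc[j] is the agent
    receiving item j."""
    n = len(weights)
    vals = [0.0] * n
    for j, i in enumerate(alloc):
        vals[i] += weights[i][j]
    prod = 1.0
    for v in vals:
        prod *= v
    return prod ** (1.0 / n)

# Enumerate all 2^3 allocations and keep the best one.
best = max(product(range(2), repeat=3), key=lambda a: nsw(a, weights))
print(best, round(nsw(best, weights), 3))
```

Here the optimum gives items 0 and 2 to agent 0 and item 1 to agent 1. Of course this brute force takes exponential time; the point of the paper is to approximate the optimum efficiently.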
Allocating resources to agents in a fair and efficient manner is a fundamental problem in computer science, economics, and social choice theory, with substantial prior work [Bar05, BT96, BCE+16, Mou04, RW98, Rot16, You95]. A common measure of efficiency is utilitarian social welfare, i.e., the sum of the utilities $\sum_{i=1}^{n} v_i(S_i)$ for an allocation $(S_1, \ldots, S_n)$. This objective does not take fairness into account, as all items could be allocated to one agent whose valuation function dominates the others. In order to incorporate fairness, various notions have been considered, ranging from envy-freeness and proportional fairness to various modifications of the objective function. At the end of the spectrum opposite to utilitarian social welfare, one can consider the max-min objective, $\max \min_{1 \le i \le n} v_i(S_i)$, also known as the Santa Claus problem [BS06]. This objective is somewhat extreme in considering only the happiness of the least happy agent.
Nash social welfare provides a balanced tradeoff between the requirements of fairness and efficiency. It has been introduced independently in several contexts: as a discrete variant of the Nash bargaining game [KN79, Nas50]; as a notion of competitive equilibrium with equal incomes in economics [Var74]; and also as a proportional fairness notion in networking [Kel97]. Nash social welfare has several desirable features, for example invariance under scaling of the valuation functions by independent factors $\lambda_i > 0$, i.e., each agent can express their preference in a "different currency" without changing the optimization problem (see [Mou04] for additional characteristics).
1.1 Preliminaries
The difficulty of optimizing Nash social welfare depends naturally on the class of valuation functions that we want to deal with, and how they are accessible. Various classes of valuations have been considered in the literature. For the sake of this paper, let us introduce three basic classes of valuations, and two oracle models.
Classes of valuation functions
A set function $v : 2^G \to \mathbb{R}$ is monotone if $v(S) \le v(T)$ whenever $S \subseteq T$. A monotone set function with $v(\emptyset) = 0$ is also called a valuation function or simply valuation.
A valuation $v$ is additive if $v(S) = \sum_{j \in S} w_j$ for nonnegative weights $w_1, \ldots, w_m \ge 0$.
A valuation $v$ is submodular if $v(S \cup T) + v(S \cap T) \le v(S) + v(T)$ for all $S, T \subseteq G$.
A valuation $v$ is fractionally subadditive (or XOS) if $v(S) = \max_{1 \le \ell \le t} \sum_{j \in S} w_{\ell,j}$ for nonnegative weights $w_{\ell,j} \ge 0$.
A valuation $v$ is subadditive if $v(S \cup T) \le v(S) + v(T)$ for all $S, T \subseteq G$.
We remark that these classes form a chain of inclusions: additive valuations are submodular, submodular valuations are XOS, and XOS valuations are subadditive.
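As an illustration of these definitions (not from the paper), one can verify the defining inequalities by brute force for a small coverage valuation, which is submodular and hence also subadditive:

```python
from itertools import chain, combinations

# A small coverage valuation on 3 items: v(S) is the size of the union of the
# items' (hypothetical) coverage sets. Coverage valuations are submodular.
cover = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c", "d"}}
ground = [0, 1, 2]

def v(S):
    S = set(S)
    return len(set().union(*(cover[j] for j in S))) if S else 0

def subsets(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# Submodularity: v(S u T) + v(S n T) <= v(S) + v(T) for all S, T.
submodular = all(
    v(set(S) | set(T)) + v(set(S) & set(T)) <= v(S) + v(T)
    for S in subsets(ground) for T in subsets(ground)
)
# Subadditivity: v(S u T) <= v(S) + v(T) for all S, T.
subadditive = all(
    v(set(S) | set(T)) <= v(S) + v(T)
    for S in subsets(ground) for T in subsets(ground)
)
print(submodular, subadditive)
```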
Oracle access.
Note that additive valuations can be presented explicitly as part of the input. However, for more general classes of valuations, we need to resort to oracle access, since presenting a valuation explicitly would take an exponential amount of space. Three types of oracles for accessing valuation functions have been commonly considered in the literature.
• Value oracle: Given a set $S \subseteq G$, return the value $v(S)$.
• Demand oracle: Given prices $p_j \ge 0$ for $j \in G$, return a set $S \subseteq G$ maximizing $v(S) - \sum_{j \in S} p_j$.
• XOS oracle (for an XOS valuation $v$): Given a set $S$, return an additive function $a(T) = \sum_{j \in T} w_{\ell,j}$ from the XOS representation of $v$ such that $a(S) = v(S)$.
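For intuition (this is not from the paper), a demand query is trivial to answer for an additive valuation: the utility $v(S) - \sum_{j \in S} p_j$ decomposes over items, so the answer is simply the set of items whose weight exceeds their price. For more general valuations, demand queries are strictly more powerful and in general computationally hard to answer exactly.

```python
def demand_query_additive(weights, prices):
    """Demand oracle for an additive valuation v(S) = sum_{j in S} weights[j]:
    taking exactly the items with positive margin maximizes
    v(S) - sum_{j in S} prices[j]."""
    return {j for j, (w, p) in enumerate(zip(weights, prices)) if w > p}

# Hypothetical weights and prices.
S = demand_query_additive([3.0, 1.0, 2.0], [2.5, 1.5, 0.5])
print(S)  # {0, 2}
```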
1.2 Prior work
The Nash social welfare problem is NP-hard already in the case of two agents with identical additive valuations, by a reduction from the Subset-Sum problem. For multiple agents, it is NP-hard to approximate within a factor better than $\sqrt{8/7}$ for additive valuations [GHM17], and better than $\frac{e}{e-1}$ for submodular valuations [GKK20].
The first constant-factor approximation algorithm for additive valuations, with the factor of $2 e^{1/e} \simeq 2.89$, was given by Cole and Gkatzelis [CG15] using a continuous relaxation based on a particular market equilibrium concept. Later, [CDG+17] improved the analysis of this algorithm to achieve the factor of $2$. Anari, Oveis Gharan, Saberi, and Singh [AGSS17] used a convex relaxation that relies on properties of real stable polynomials to give an elegant analysis of an algorithm with a factor of $e$. The current best factor is $e^{1/e} \simeq 1.45$ by Barman, Krishnamurthy, and Vaish [BKV18]; the algorithm uses a different market equilibrium based approach. Note also that the $\frac{e}{e-1}$ hardness bound for submodular valuations is above this factor, hence separating the additive and submodular settings.
Constant-factor approximations have been extended to some classes beyond additive functions: capped-additive [GHM18], separable piecewise-linear concave (SPLC) [AMGV18], and their common generalization, capped-SPLC [CCG+18] valuations; the approximation factor for capped-SPLC valuations matches the $e^{1/e}$ factor for additive valuations. All these valuations are special classes of submodular ones. Subsequently, Li and Vondrák [LV21b] designed an algorithm that estimates the optimal value within a constant factor for a broad class of submodular valuations, such as coverage functions and sums of matroid rank functions, by extending the techniques of [AGSS17] using real stable polynomials. However, this algorithm only estimates the optimum value and does not find a corresponding allocation in polynomial time.
An important conceptual advance was presented in [GHV20], where a relaxation combining ideas from matching theory and convex optimization was shown to give a constant factor for the class of "Rado valuations" (containing weighted matroid rank functions and some related valuations). A crucial property of this approach is that it is quite modular, and it ended up leading to multiple further advances. In [LV21a], this approach was extended to provide a constant-factor approximation algorithm for general submodular valuations, by replacing the concave extension of a valuation by the multilinear extension. The initial constant factor was rather large. Recently, a much simpler algorithm combining matching and local search was presented to give a $(4+\varepsilon)$-approximation for submodular valuations [GHL+23].
For the more general classes of XOS and subadditive valuations [BBKS20, CGM21, GKK20], however, only polynomial approximation factors were known until now, and this is the best one can hope for in the value oracle model [BBKS20], for the same reasons that this is a barrier for the utilitarian social welfare problem [DNS10]. The best known approximation factors up to now have been $O(n)$ for subadditive valuations, and $O(n^{53/54})$ for XOS valuations if we are given access to both demand and XOS oracles [BKKN21]. Constant factors for XOS valuations seemed quite out of reach prior to this work, and obtaining any sublinear factor for subadditive valuations was stated as an open problem in [BKKN21].
1.3 Our results and techniques
Our main result is the following.
Theorem.
(informal) There is an algorithm, running in polynomial time and using demand queries to the agents' valuations, that provides a constant-factor approximation for Nash social welfare with subadditive valuations.
As a special case, this also gives a constant-factor approximation for XOS valuations accessible via demand queries. (The algorithm for XOS valuations is somewhat simpler, as we discuss later in this section.) This completes the picture in the sense that now we have a constant factor approximation for Nash social welfare in the main settings where one is known for utilitarian social welfare: for submodular valuations with value queries, and for subadditive valuations with demand queries. (It is known that a stronger oracle than a value oracle is required for XOS and subadditive valuations, even for utilitarian social welfare.)
The basis of our approach is the matching+relaxation paradigm which gave a constant-factor approximation for submodular valuations [GHV20, LV21a]. Considering that the only constant-factor approximation for social welfare with subadditive valuations [Fei09] is based on the "Configuration LP", which can be solved using demand queries, it is a natural idea to use a relaxation similar to the Configuration LP. A natural variant for Nash social welfare is the Eisenberg-Gale relaxation, using the logarithm of the concave extension of each agent's valuation. We apply this relaxation on top of an initial matching, as in [GHV20].
The main obstacle with this approach is that natural rounding procedures for the Configuration LP do not satisfy any concentration properties. At a high level, without concentration some agents receive higher value and some receive lower value, leading to poor Nash social welfare even if we can maintain the expected utilitarian social welfare. More specifically, the first challenge is that, given a fractional solution $x$, we would ideally like to round it to an integral allocation by allocating set $S$ to agent $i$ with probability $x_{i,S}$. Even though this ideal rounding preserves each agent's expected value, the variance can be arbitrary, depending on the fractional solution $x$. Our first technical contribution is a procedure (see Lemma 3) for finding a new feasible solution to the Configuration LP that, for each agent, has only high-value subsets in its support (with the exception of agents who get most of their value from a single item; this case is handled separately with the matching procedure). This procedure is rather simple in hindsight. At a high level, we can think of the fractional solution as a distribution over allocations for each agent. We want to discard the part of the distribution that corresponds to low-value subsets; but this leaves uncovered probability mass. We re-cover this remaining mass by splitting high-value subsets to "stretch" over more probability mass, while allocating each item $j$ to agent $i$ with the same total probability.
The next obstacle in rounding the Configuration LP is resolving "contentions": namely, under the ideal rounding procedure described above, we may be trying to allocate the same item to multiple agents (even though in expectation it is allocated to at most one agent). For XOS valuations, a simple independent randomized contention resolution scheme guarantees a constant factor approximation and also enjoys good concentration. However, the situation is more complicated for subadditive valuations. The only known constant-factor approximation for social welfare with subadditive valuations is a rather intricate rounding procedure of Feige [Fei09], which does not seem to satisfy any useful concentration properties. In any rounded solution, there might be agents who receive very low value, which hurts Nash social welfare, and hence we cannot use it directly.
Our solution is an iterated rounding procedure, where in each stage a certain fraction of agents is "satisfied" in the sense that they receive value comparable to their fractional value. We allocate the respective items to them, subject to random filtering which ensures that enough items are still left for the remaining agents. Then we recurse on the remaining agents and remaining items. Still, some agents may receive a relatively small value, but we guarantee that the fraction of agents who receive low values is proportionally small, which means that the Nash social welfare overall is guaranteed to be good. As an example: if every agent has a fractional value of $1$, it suffices to solve for an allocation where $n/2$ agents receive value at least $1/2$, $n/4$ agents receive value at least $1/4$, $n/8$ agents receive value at least $1/8$, and so on. Then the approximation factor in terms of Nash social welfare turns out to be
$$\prod_{k=1}^{\infty} \left( \frac{1}{2^k} \right)^{1/2^k},$$
and this infinite product converges to $1/4$ (we leave this as an exercise for the reader).
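As a sanity check (not part of the paper's argument), the product above can be evaluated numerically: taking base-2 logarithms gives $\log_2$ of the product as $-\sum_{k \ge 1} k/2^k = -2$, so the product is $2^{-2} = 1/4$.

```python
# Numerically evaluate prod_{k>=1} (1/2^k)^(1/2^k).
# Taking base-2 logarithms: log2(product) = -sum_{k>=1} k / 2^k = -2.
log2_product = -sum(k / 2 ** k for k in range(1, 60))
value = 2 ** log2_product
print(round(value, 10))  # 0.25
```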
In order to guarantee the success of this rounding procedure, we need a concentration inequality (as in previous works). Concentration properties of subadditive functions are somewhat weaker and more difficult to prove than for submodular or XOS functions. Here we appeal to a powerful concentration inequality for subadditive functions presented by Schechtman [Sch03], which is based on the "$q$-point control inequality" of Talagrand [Tal89, Tal95].
Reducing Nash Welfare to Rounding of Configuration LP
Technically, we prove a reduction theorem (Theorem 1) which shows that to achieve a constant-factor approximation for Nash social welfare, it is sufficient to implement efficient subroutines for two subproblems: (1) finding a solution of the Configuration LP satisfying a certain additional property (which happens to be satisfied, for example, by an optimal solution of the Eisenberg-Gale relaxation [GHV20], or by the continuous greedy algorithm for the log-multilinear relaxation [LV21a]), and (2) rounding a fractional solution of the Configuration LP while losing only a constant factor with respect to utilitarian social welfare. The latter problem is relatively easy for XOS valuations, but non-trivial for subadditive valuations. Fortunately, a factor-$2$ rounding procedure is known due to Feige's work on welfare maximization with subadditive bidders [Fei09], which we use here as a black box.
We remark that the constant factors lost in the various stages of our proof are rather large and lead to a large final approximation factor for the Nash social welfare problem with subadditive valuations. One may hope that, as in the case of submodular valuations, an initially large constant factor can eventually be improved to a "practical" one.
Paper organization.
In Section 2, we present the main technical result, which is a reduction of Nash social welfare to a certain relaxation solver and a rounding procedure for the Configuration LP. In Section 3, we show how this implies a constant factor approximation algorithm for Nash social welfare with subadditive valuations.
2 Optimizing NSW via relaxation and rounding for social welfare
Here we describe our general approach, which allows us to derive algorithms for NSW optimization in several settings. At a high level, we reduce NSW optimization to finding a certain solution of the "Configuration LP" (for social welfare optimization), together with a rounding procedure for the Configuration LP, again with respect to social welfare.
Let us define the Configuration LP:

(Configuration LP)
$$\max \sum_{i \in A} \sum_{S \subseteq G} x_{i,S}\, v_i(S)$$
$$\text{subject to } \sum_{S \subseteq G} x_{i,S} \le 1 \ \ \forall i \in A; \qquad \sum_{i \in A} \sum_{S: j \in S} x_{i,S} \le 1 \ \ \forall j \in G; \qquad x_{i,S} \ge 0.$$

Equivalently, this can be written as
$$\max \Big\{ \sum_{i \in A} v_i^+(y_i) : \sum_{i \in A} y_{i,j} \le 1 \ \forall j \in G, \ y_i \in [0,1]^G \Big\},$$
where as before,

(Concave Extension)
$$v^+(y) = \max \Big\{ \sum_{S \subseteq G} \alpha_S\, v(S) : \sum_{S \subseteq G} \alpha_S \le 1; \ \sum_{S: j \in S} \alpha_S \le y_j \ \forall j \in G; \ \alpha_S \ge 0 \Big\}.$$
The following is our main reduction theorem, which provides an algorithm for Nash social welfare given two procedures that we call the Relaxation Solver and the Rounding Procedure. Note that the assumption on the Relaxation Solver is somewhat unusual: it is not that $x$ is an optimal or near-optimal solution of (Configuration LP), but a different condition, namely that the optimum social welfare with the rescaled valuations $v_i / \bar v_i$, where $\bar v_i = \sum_S x_{i,S} v_i(S)$, is upper-bounded by $\beta n$. (The social welfare of $x$ itself with valuations $v_i / \bar v_i$ is exactly $n$, so as a consequence $x$ is a $\beta$-approximate optimum with respect to the valuations $v_i / \bar v_i$.) This condition is required primarily for the later "rematching" step (Lemma 8). Fortunately, this condition is satisfied by natural approaches to solving the Eisenberg-Gale relaxation, which replaces the continuous valuation extensions by their logarithms. We discuss this further in Section 3.
Theorem 1.
Suppose that for a certain class of instances of Nash social welfare with subadditive valuations, we have the following procedures available, with parameters $\beta, \gamma \ge 1$:
• Relaxation Solver: Given valuations $v_1, \ldots, v_n$ on a set of items $G$, we can find a feasible solution $x$ of (Configuration LP) such that the social welfare optimum with valuations $\tilde v_i = v_i / \bar v_i$, where $\bar v_i = \sum_{S} x_{i,S} v_i(S)$, is at most $\beta n$.
• Rounding Procedure: Given a feasible solution $x$ of (Configuration LP), we can find an allocation $(R_1, \ldots, R_n)$ where each $R_i$ is a subset of some set $S_i$ such that $x_{i,S_i} > 0$ and
$$\sum_{i=1}^{n} \tilde v_i(R_i) \ge \frac{n}{\gamma}.$$
(As above, $\tilde v_i = v_i / \bar v_i$ where $\bar v_i = \sum_S x_{i,S} v_i(S)$.)
Then there is an algorithm which provides a constant-factor approximation in Nash social welfare for the same class of instances, with the factor depending only on $\beta$ and $\gamma$, using one call to the Relaxation Solver and a logarithmic number of calls to the Rounding Procedure. The running time is polynomial in $n$, $m$, and the size of the support of the fractional solution $x$.
In the following, we prove this theorem by presenting an algorithm with several phases. These phases are similar to recent matching-based algorithms for Nash social welfare [GHV20, GHV21, LV21a, GHL+23], with the exception of two phases which are new (Phases 3 and 4 below). The high-level outline is as follows.
NSW Algorithm Template.
1. We find an initial matching $\tau$, maximizing $\prod_{i \in A} v_i(\{\tau(i)\})$. Let $H = \tau(A)$ denote the matched items and $G \setminus H$ the remaining items.
2. We apply the Relaxation Solver to obtain a fractional solution $x$ and values $\bar v_i = \sum_S x_{i,S} v_i(S)$. We can view these values as "targets" for the different agents to achieve.
3. We preprocess the fractional solution $x$, removing sets of low value and partitioning sets of high value, so that every set in the support of the new fractional solution for agent $i$ has value comparable to the target $\bar v_i$ (see Lemma 3).
4. We apply the Rounding Procedure to the preprocessed solution to find an allocation satisfying its social welfare guarantee. Since each allocated set has value at most $\bar v_i$ (due to our preprocessing), it must be the case that a constant fraction of the agents receive value comparable to their targets. We allocate a random fraction of the items to this fraction of agents (each item from their respective sets independently with a fixed probability). We repeat this phase for the remaining items and agents, until there are no agents left. Agents who are never satisfied in this process receive the empty set.
5. We recompute the initial matching to obtain a new matching, optimal on top of the allocation produced in Phase 4, and we allocate the matched items accordingly.
Now we proceed to analyze the phases of this algorithm more rigorously.
2.1 Initial Matching
There is nothing new in this phase. We can find a matching $\tau$ maximizing $\prod_{i \in A} v_i(\{\tau(i)\})$ by solving a max-weight matching problem with edges $(i,j)$ for $v_i(\{j\}) > 0$, and weights $w_{ij} = \log v_i(\{j\})$.
We denote by $H = \tau(A)$ the matched items, by $G \setminus H$ the remaining items, and by $A'$ the agents who get positive value from the remaining items.
We will need the following property.
Lemma 2.
If $\tau$ is a matching maximizing $\prod_{i \in A} v_i(\{\tau(i)\})$, then for every agent $i$ and every unmatched item $j \notin \tau(A)$, we have $v_i(\{j\}) \le v_i(\{\tau(i)\})$.
Proof.
If there is an agent $i$ and an unmatched item $j$ with $v_i(\{j\}) > v_i(\{\tau(i)\})$, then we can swap $\tau(i)$ for $j$ in the matching and increase its value. ∎
For subadditive valuations, we also get $v_i(S) \le |S| \cdot v_i(\{\tau(i)\})$ for any set $S$ of unmatched items (since $v_i(S) \le \sum_{j \in S} v_i(\{j\})$).
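For a toy instance (with hypothetical singleton values, not from the paper), the initial matching can be found by brute force; in general one solves a max-weight matching problem with weights $\log v_i(\{j\})$:

```python
from itertools import permutations

# Singleton values v_i({j}) for 3 agents and 4 items (hypothetical numbers).
val = [
    [2.0, 1.0, 4.0, 1.0],
    [3.0, 2.5, 1.0, 2.0],
    [1.0, 2.0, 2.0, 5.0],
]

def product_value(assignment):
    """Product of singleton values under the matching agent i -> item assignment[i]."""
    p = 1.0
    for i, j in enumerate(assignment):
        p *= val[i][j]
    return p

# Every injection of 3 agents into 4 items is a candidate matching tau.
tau = max(permutations(range(4), 3), key=product_value)
print(tau, product_value(tau))
```

On this instance the optimal matching assigns item 2 to agent 0, item 0 to agent 1, and item 3 to agent 2. One can also check the property of Lemma 2 here: each agent values her matched item at least as much as the single unmatched item.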
2.2 Relaxation Solver
Here we assume that the Relaxation Solver is available as a black-box. We return to its implementations in specific settings in Section 3.
We apply the Relaxation Solver to the residual instance on the unmatched items and the agents who have nonzero value for some of them. The important property of the obtained solution $x$ is that, after scaling the valuations by the fractional values $\bar v_i = \sum_S x_{i,S} v_i(S)$, the social welfare optimum for the scaled valuations $v_i / \bar v_i$ is at most $\beta n$. In other words, for any feasible allocation $(T_1, \ldots, T_n)$ of the unmatched items, we have
$$\sum_{i} \frac{v_i(T_i)}{\bar v_i} \le \beta n.$$
2.3 Set Splitting
Here we describe Phase 3, the preprocessing of the fractional solution. We will work only with agents who get significant value from the fractional solution.
We prove the following.
Lemma 3.
Assume that the valuations are subadditive. Given a feasible solution $x$ of (Configuration LP) for an instance with agents $A$ and items $G$, where every single item has value at most a constant fraction of the target $\bar v_i = \sum_S x_{i,S} v_i(S)$, we can find (in running time, and with a number of value queries, polynomial in the number of nonzero coefficients $x_{i,S}$) a modified solution $x'$ such that
• For every $S$ such that $x'_{i,S} > 0$, the value $v_i(S)$ is within a constant factor of $\bar v_i$.
• For every agent $i$, $\sum_{S} x'_{i,S} = 1$.
• For every item $j$, $\sum_{i} \sum_{S: j \in S} x'_{i,S} \le 1$.
Proof.
We apply the following procedure to the fractional solution $x$.
SetSplitting().
1. Let , , and .
2. Set and for ; i.e., discard sets whose value is too low.
3. For every , let . Split into sets such that ,
Note that this is possible since, by subadditivity, the average value of a subset in any partition of into subsets is at least , and the indivisibility of items can cause the value to drop by at most .
4. For each set produced above, remove some items if necessary to ensure that its value is at most . Call the resulting set . Since removing an item can decrease the value by at most , since we start from value , and since we only remove items as long as the value is more than , we can conclude that
5. Set , and .
6. Return .
Let us now prove the desired properties of the new solution. By construction (step 5), the solution is normalized in the sense that $\sum_S x'_{i,S} = 1$ for every agent $i$. Also, as we argued above, the value bounds hold for every set participating in the support. It remains to prove that the coefficients add up to at most $1$ on each item.
Let us first consider : Since each contribution to for is inherited from some coefficient where , each coefficient contributes at most once in this way, and the coefficients for add up to at most , it is clear that . Finally, is obtained by normalizing ; so we need to be concerned about the summation , which could be possibly less than .
We have:
Observe that each coefficient for contributes coefficients of the same value to the summation , and the union of the respective sets is . So we have
considering that for , so the floor operation can decrease the ratio by at most a factor of . Also, we have from above. Hence and the coefficients for add up to at most . ∎
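To illustrate the set-splitting idea, here is a simplified sketch (not the paper's procedure: it assumes an additive valuation, uses an illustrative threshold of half the target value, and silently drops leftover items below the threshold). It discards low-value sets, greedily splits high-value sets into pieces, and renormalizes:

```python
# Simplified SetSplitting sketch for one agent with an ADDITIVE valuation.
# The paper handles general subadditive valuations; the threshold target/2
# here is illustrative. Input support: {frozenset of items: coefficient}.

def set_splitting(support, weights, target):
    """Discard sets of value < target/2, greedily split the remaining sets
    into pieces of value >= target/2, and renormalize the coefficients."""
    value = lambda S: sum(weights[j] for j in S)
    pieces = {}
    for S, coeff in support.items():
        if value(S) < target / 2:       # discard low-value sets
            continue
        chunk, chunks = [], []
        for j in sorted(S, key=weights.__getitem__, reverse=True):
            chunk.append(j)
            if value(chunk) >= target / 2:
                chunks.append(frozenset(chunk))
                chunk = []              # leftover below the threshold is dropped
        for piece in chunks:
            pieces[piece] = pieces.get(piece, 0.0) + coeff
    total = sum(pieces.values())        # renormalize to a distribution
    return {S: c / total for S, c in pieces.items()}

# Hypothetical item weights and fractional support.
weights = {0: 3.0, 1: 1.0, 2: 2.0, 3: 4.0, 4: 0.5}
support = {frozenset({0, 1, 2, 3}): 0.5, frozenset({4}): 0.5}
new = set_splitting(support, weights, target=5.0)
print(sorted(tuple(sorted(S)) for S in new))
```

In this toy run, the low-value set $\{4\}$ is discarded and the high-value set $\{0,1,2,3\}$ is split into three pieces, each of value at least half the target, which then "stretch" over the freed-up probability mass.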
2.4 Iterated Rounding
Finally, we need to round the fractional solution obtained in the previous phase. As a subroutine, we use the assumed Rounding Procedure for (utilitarian) social welfare.
Given a fractional solution obtained in the previous phase, we call the procedure NSW-ROUND with a parameter $\rho$, which depends on the approximation factor $\gamma$ guaranteed by the Rounding Procedure.
As we mentioned above, the intuition behind this rounding procedure is that it gives good value to a large fraction of agents, and exponentially small values to an exponentially decaying number of agents, so overall its Nash social welfare is good. We prove this in a sequence of lemmas.
Lemma 4.
Under our assumption on the Rounding Procedure, and setting $\rho = \frac{1}{2\gamma}$, in each round there is at least a $\rho$-fraction of agents (rounded up to the nearest integer) who receive value at least $\rho\, \bar v_i$.
Proof.
Note that the guarantee of the Rounding Procedure applies to the $n'$ agents remaining in the current round: under the valuations $\tilde v_i = v_i / \bar v_i$, the Rounding Procedure returns an allocation $(R_i)$ such that $\sum_i \tilde v_i(R_i) \ge n'/\gamma$. Also, the fractional solution has been processed so that no set in its support for agent $i$ has value more than $\bar v_i$, and the rounding only allocates subsets of sets in the support; hence, we have $\tilde v_i(R_i) \le 1$ for every agent $i$. Consider the agents who receive value $\tilde v_i(R_i) \ge \rho$; if the number of such agents is less than $\rho n'$, then the total value collected by the agents is less than $\rho n' \cdot 1 + n' \cdot \rho = \frac{n'}{\gamma}$, which is a contradiction. ∎
Lemma 5.
If and the agents are ordered by the round in which they received items (and arbitrarily within each round), then the -th agent receives each element of her set independently with probability at least .
Proof.
Consider the -th agent, and suppose that , i.e. the agent gets items in round . We claim that : In each round, we allocate items to at least a -fraction of agents, so the set of agents remaining after rounds has size at most . This set must include agent , otherwise she would have been satisfied earlier. Therefore, .
The items allocated to agent in round are , where contains each element independently with probability . By the argument above, . ∎
Lemma 6.
If is the set allocated to the -th agent in the ordering defined above (and we assume w.l.o.g. that the index of this agent is also ), and then
Proof.
By definition, the set tentatively chosen for the -th agent in the round where satisfies
(see Lemma 3). By Lemma 5, the -th agent receives a set which contains each element of independently with probability at least .
Consider now the expression , where . This is a random quantity due to the randomness in (the set is fixed here). We use concentration of subadditive functions (Theorem 20) to argue that this expression is not too large in expectation. We have . By the expectation property of subadditive functions (Lemma 16), we have
Let us denote the last expression .
Now, we apply the lower-tail inequality, Theorem 20, with . Observe that we can assume . Otherwise,
and so the desired bound holds.
Let us set and (considering that ) in Theorem 20. We get
Our goal is to bound the expectation . We distinguish two cases: When , we use the bound , which always holds. Otherwise, we use the bound . From here,
One can verify that the function is upper-bounded by for all . (We are using natural logarithms everywhere.) Hence,
∎
Lemma 7.
If is the set allocated to the -th agent in the ordering defined above (and we assume w.l.o.g. that the index of this agent is also ), and then
Proof.
Let us denote . From Lemma 6, we have
Here, we have by a standard estimate for the factorial. So we can conclude
∎
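The standard factorial estimate invoked above is presumably $k! \ge (k/e)^k$, a weak form of Stirling's formula; a quick numerical sanity check (not from the paper):

```python
import math

# Check k! >= (k/e)^k for a range of k. It holds for all k >= 1, since
# e^k = sum_i k^i / i! >= k^k / k!, which rearranges to k! >= (k/e)^k.
ok = all(math.factorial(k) >= (k / math.e) ** k for k in range(1, 150))
print(ok)
```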
This concludes the analysis of the iterated rounding phase, which allocates the set $R_i$ to each agent $i$ satisfied in some round. For the agents who are never satisfied, we set $R_i = \emptyset$.
2.5 Rematching and finishing the analysis
The last step in the algorithm is to replace the initial matching with a new matching which is optimal on top of the allocation . To analyze this step, we need two lemmas from previous work, whose proofs can be modified easily to yield the following. (We provide full proofs in the appendix.)
Lemma 8 (matching extension).
Let be the matching maximizing , , and . Let be values such that for , for and
for every allocation of the items in . Then there is a matching such that
where is an allocation of optimizing Nash social welfare.
Lemma 9 (rematching).
Let be the matching maximizing , , , another arbitrary matching, and . Let be nonnegative values. Then there is a matching such that
We apply Lemma 8 with the values , where is the fractional solution returned by Relaxation Solver. Due to our assumptions, the condition of Lemma 8 is satisfied and hence there is a matching as described in Lemma 8:
From Lemma 7, we can find with constant probability an assignment such that
(Recall that , where is the parameter guaranteed by the Rounding Procedure.)
Moreover, we know that and , hence for . For agents in , we have and . From here, we have
Finally, we use the rematching Lemma 9, with values : there exists a matching such that
Recall that at the end, we find a matching maximizing . Therefore, the NSW value of our solution is at least as much as the one provided by the matching , which is . This proves Theorem 1.
3 Nash social welfare with subadditive valuations
Here we explain how to use the general framework described in Section 2 to obtain a constant-factor approximation for subadditive valuations, accessible by demand queries.
Theorem 10.
There is a constant-factor approximation algorithm for the Nash social welfare problem with subadditive valuations, using polynomial running time and a polynomial number of queries to a demand oracle for each agent’s valuation.
Aside from our general reduction and the ability to solve the Eisenberg-Gale relaxation with demand queries, the main component that we need here is an implementation of a Rounding Procedure for subadditive valuations, as described in Theorem 1. To our knowledge, there is only one such procedure known; it is rather intricate and forms the basis of Feige's ingenious factor-$2$ approximation algorithm for maximizing social welfare with subadditive valuations [Fei09]. We use it here as a black box, which can be described as follows.
Theorem 11.
For any $\varepsilon > 0$, there is a polynomial-time algorithm which, given a fractional solution $x$ of (Configuration LP) for an instance with subadditive valuations, produces a random allocation $(R_1, \ldots, R_n)$ such that for every agent $i$, $R_i \subseteq S_i$ for some $S_i$ with $x_{i,S_i} > 0$, and
$$\mathbb{E}[v_i(R_i)] \ge \Big( \frac{1}{2} - \varepsilon \Big) \sum_{S} x_{i,S}\, v_i(S).$$
For the proof, we refer the reader to Section 3.2.2 of [Fei09], Theorem 3.9, and the summary of its proof, which shows that every player receives expected value at least $(\frac{1}{2} - \varepsilon)$ times her fractional value $\sum_S x_{i,S}\, v_i(S)$.
Now we are ready to prove Theorem 10.
Proof.
Considering Theorem 1, we want to show how to implement the Relaxation Solver and Rounding Procedure for subadditive valuations.
The Relaxation Solver can be obtained by applying standard convex optimization techniques to the Eisenberg-Gale relaxation. As we discuss in more detail in Appendix A, we can compute the values and supergradients of the objective function using demand queries, and obtain an optimal solution satisfying the assumption of Lemma 15 (with appropriate parameters), and hence
for every feasible solution . Another way to interpret this condition is that for and modified valuations defined as , there is no feasible solution achieving value . In particular, the social welfare optimum with the valuations is at most . Hence, we satisfy the Relaxation Solver assumptions with .
Next, we implement the Rounding Procedure: Given a fractional solution , Theorem 11 gives a procedure which returns a random allocation such that . This means that for the modified valuations, , we have , and . Hence, we satisfy the Rounding Procedure assumptions with .
Finally, we apply Theorem 1 with the parameters $\beta$ and $\gamma$ obtained above. We obtain a constant-factor approximation algorithm for the Nash social welfare problem with subadditive valuations accessible via demand queries. (The resulting constant factor is rather large; we have made no attempt to optimize it.) ∎
References
- [AGSS17] Nima Anari, Shayan Oveis Gharan, Amin Saberi, and Mohit Singh. Nash social welfare, matrix permanent, and stable polynomials. In Proceedings of the 8th Innovations in Theoretical Computer Science Conference (ITCS), 2017.
- [AMGV18] Nima Anari, Tung Mai, Shayan Oveis Gharan, and Vijay V Vazirani. Nash social welfare for indivisible items under separable, piecewise-linear concave utilities. In Proceedings of the 29th annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 2274–2290. SIAM, 2018.
- [Bar05] Julius B Barbanel. The geometry of efficient fair division. Cambridge University Press, 2005.
- [BBKS20] Siddharth Barman, Umang Bhaskar, Anand Krishna, and Ranjani G. Sundaram. Tight approximation algorithms for p-mean welfare under subadditive valuations. In 28th Annual European Symposium on Algorithms, ESA 2020, volume 173, pages 11:1–11:17, 2020.
- [BCE+16] Felix Brandt, Vincent Conitzer, Ulle Endriss, Jérôme Lang, and Ariel D. Procaccia, editors. Handbook of Computational Social Choice. Cambridge University Press, 2016.
- [BKKN21] Siddharth Barman, Anand Krishna, Pooja Kulkarni, and Shivika Narang. Sublinear approximation algorithm for nash social welfare with XOS valuations. CoRR, abs/2110.00767, 2021.
- [BKV18] Siddharth Barman, Sanath Kumar Krishnamurthy, and Rohit Vaish. Finding fair and efficient allocations. In Proceedings of the 2018 ACM Conference on Economics and Computation, pages 557–574, 2018.
- [BS06] Nikhil Bansal and Maxim Sviridenko. The Santa Claus problem. In Proceedings of the 38th ACM Symposium on Theory of Computing (STOC), pages 31–40, 2006.
- [BT96] Steven J Brams and Alan D Taylor. Fair Division: From cake-cutting to dispute resolution. Cambridge University Press, 1996.
- [CCG+18] Bhaskar Ray Chaudhury, Yun Kuen Cheung, Jugal Garg, Naveen Garg, Martin Hoefer, and Kurt Mehlhorn. On fair division for indivisible items. In Proceedings of the 38th IARCS annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS), pages 25:1–17. Springer, 2018.
- [CDG+17] Richard Cole, Nikhil Devanur, Vasilis Gkatzelis, Kamal Jain, Tung Mai, Vijay V Vazirani, and Sadra Yazdanbod. Convex program duality, Fisher markets, and Nash social welfare. In Proceedings of the 2017 ACM Conference on Economics and Computation (EC), pages 459–460. ACM, 2017.
- [CG15] Richard Cole and Vasilis Gkatzelis. Approximating the Nash social welfare with indivisible items. In Proceedings of the 47th ACM Symposium on Theory of Computing (STOC), pages 371–380. ACM, 2015.
- [CGM21] Bhaskar Ray Chaudhury, Jugal Garg, and Ruta Mehta. Fair and efficient allocations under subadditive valuations. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 5269–5276, 2021.
- [DHHW23] Daniel Dadush, Christopher Hojny, Sophie Huiberts, and Stefan Weltge. A simple method for convex optimization in the oracle model. Mathematical Programming, 2023.
- [DNS10] Shahar Dobzinski, Noam Nisan, and Michael Schapira. Approximation algorithms for combinatorial auctions with complement-free bidders. Math. Oper. Res., 35(1):1–13, 2010.
- [Fei08] Uriel Feige. On allocations that maximize fairness. In SODA, volume 8, pages 287–293. Citeseer, 2008.
- [Fei09] Uriel Feige. On maximizing welfare when utility functions are subadditive. SIAM J. Comput., 39(1):122–142, 2009.
- [GHL+23] Jugal Garg, Edin Husic, Wenzheng Li, László A. Végh, and Jan Vondrák. Approximating Nash social welfare by matching and local search. In Barna Saha and Rocco A. Servedio, editors, Proceedings of the 55th Annual ACM Symposium on Theory of Computing, STOC 2023, Orlando, FL, USA, June 20-23, 2023, pages 1298–1310. ACM, 2023.
- [GHM17] Jugal Garg, Martin Hoefer, and Kurt Mehlhorn. Satiation in Fisher markets and approximation of Nash social welfare. arXiv preprint arXiv:1707.04428, 2017.
- [GHM18] Jugal Garg, Martin Hoefer, and Kurt Mehlhorn. Approximating the Nash social welfare with budget-additive valuations. In Proceedings of the 29th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 2326–2340. SIAM, 2018.
- [GHV20] Jugal Garg, Edin Husić, and László A. Végh. Approximating Nash social welfare under Rado valuations. arXiv preprint arXiv:2009.14793, 2020.
- [GHV21] Jugal Garg, Edin Husić, and László Végh. Approximating Nash social welfare under Rado valuations. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pages 1412–1425, 2021.
- [GKK20] Jugal Garg, Pooja Kulkarni, and Rucha Kulkarni. Approximating Nash social welfare under submodular valuations through (un)matchings. In Proceedings of the ACM-SIAM Symposium on Discrete Algorithms, SODA, pages 2673–2687, 2020.
- [Kel97] Frank Kelly. Charging and rate control for elastic traffic. European transactions on Telecommunications, 8(1):33–37, 1997.
- [KN79] Mamoru Kaneko and Kenjiro Nakamura. The Nash social welfare function. Econometrica: Journal of the Econometric Society, pages 423–435, 1979.
- [LV21a] Wenzheng Li and Jan Vondrák. A constant-factor approximation algorithm for Nash social welfare with submodular valuations. In 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS), pages 25–36. IEEE, 2021.
- [LV21b] Wenzheng Li and Jan Vondrák. Estimating the Nash social welfare for coverage and other submodular valuations. In Proceedings of the Thirty-First Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1119–1130. SIAM, 2021.
- [Mou04] Hervé Moulin. Fair division and collective welfare. MIT press, 2004.
- [Nas50] John F Nash. The bargaining problem. Econometrica: Journal of the Econometric Society, pages 155–162, 1950.
- [Rot16] Jörg Rothe, editor. Economics and Computation, An Introduction to Algorithmic Game Theory, Computational Social Choice, and Fair Division. Springer, 2016.
- [RW98] Jack Robertson and William Webb. Cake-cutting algorithms: Be fair if you can. CRC Press, 1998.
- [Sch03] Gideon Schechtman. Concentration, results and applications. Handbook of the Geometry of Banach Spaces, (2):1603–1634, 2003.
- [Tal89] Michel Talagrand. Isoperimetry and integrability of the sum of independent Banach space valued random variables. Annals of Probability, 17:1546–1570, 1989.
- [Tal95] Michel Talagrand. Concentration of measure and isoperimetric inequalities in product spaces. Inst. Hautes Études Sci. Publ. Math., 81:73–205, 1995.
- [Var74] Hal R Varian. Equity, envy, and efficiency. Journal of Economic Theory, 9(1):63–91, 1974.
- [You95] H Peyton Young. Equity: in theory and practice. Princeton University Press, 1995.
Appendix A The Eisenberg-Gale Relaxation
We consider the following relaxation of the Nash social welfare problem, similar to the relaxations in [GHV21, LV21a]. We remark that the application of (Eisenberg-Gale Relaxation) in the Nash social welfare algorithm excludes the items allocated in the initial matching; indeed, we ignore those items for the analysis in this section.
(Eisenberg-Gale Relaxation)
$$\max \; \sum_{i} \log \bar v_i(x_i) \qquad \text{s.t.} \quad \sum_{i} x_{ij} \le 1 \;\; \forall j, \qquad x \ge 0,$$
where $\bar v_i$ is a suitable relaxation of the valuation function $v_i$ for each agent $i$. In particular, we will use the concave extension, $\hat v_i$:
(Concave Extension)
$$\hat v(x) = \max\Big\{ \sum_{S} \alpha_S \, v(S) \;:\; \sum_{S \ni j} \alpha_S \le x_j \;\;\forall j,\;\; \sum_{S} \alpha_S \le 1,\;\; \alpha \ge 0 \Big\}.$$
Note that (Concave Extension) is a linear program. The dual LP to (Concave Extension) is
(Dual LP)
$$\hat v(x) = \min\Big\{ u + \sum_{j} p_j x_j \;:\; u + \sum_{j \in S} p_j \ge v(S) \;\;\forall S,\;\; u \ge 0,\; p \ge 0 \Big\}.$$
From here, we can see that $\hat v(x)$ is a minimum over a collection of linear functions of $x$, and hence a concave function.
A.1 Solving the Eisenberg-Gale Relaxation
Here we show how to solve the (Eisenberg-Gale Relaxation) using demand queries.
Lemma 12.
Given demand oracles for the valuation functions, an optimal solution of (Eisenberg-Gale Relaxation) can be found within a polynomially small error in polynomial time. Moreover, the support of the solution has polynomial size.
Since $\bar v_i$ is a nonnegative concave function, $\log \bar v_i$ is a concave function as well (wherever $\bar v_i > 0$). If we implement the evaluation and supergradient oracles for $\log \bar v_i$, then we can use standard techniques (see, e.g., [DHHW23]) to maximize $\sum_i \log \bar v_i(x_i)$ over the convex polytope
$$P = \Big\{ x \ge 0 \;:\; \sum_{i} x_{ij} \le 1 \;\; \forall j \Big\}.$$
The function can be evaluated with polynomially many demand queries; this is well-known [Fei08] and holds because the demand oracle happens to be the separation oracle for (Dual LP). Hence we can also evaluate . We focus here on the implementation of a supergradient oracle.
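To make this concrete, here is a small sketch (with a brute-force stand-in for the demand oracle, suitable only for toy instances; all names are hypothetical) of how a demand query separates over the dual constraints: for prices $p$, the demanded set maximizes $v(S) - p(S)$, and it witnesses a violated constraint exactly when this surplus exceeds $u$.

```python
from itertools import combinations

def demand_query(v, items, prices):
    """Return a set S maximizing v(S) - p(S). Brute force over all sets here;
    a real demand oracle answers this directly for the agent's valuation."""
    best_set, best_val = frozenset(), v(frozenset())
    for r in range(1, len(items) + 1):
        for S in map(frozenset, combinations(items, r)):
            surplus = v(S) - sum(prices[j] for j in S)
            if surplus > best_val:
                best_set, best_val = S, surplus
    return best_set, best_val

def separate(v, items, u, prices, tol=1e-9):
    """Separation oracle for the dual constraints u + p(S) >= v(S):
    returns a violated set S, or None if (u, p) is feasible."""
    S, surplus = demand_query(v, items, prices)
    return S if surplus > u + tol else None

# Toy subadditive valuation: v(S) = min(|S|, 2) (budget-additive).
v = lambda S: min(len(S), 2)
items = [0, 1, 2]
print(separate(v, items, u=1.0, prices={j: 0.0 for j in items}))  # a violated set
print(separate(v, items, u=2.0, prices={j: 0.0 for j in items}))  # None
```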
A supergradient of a concave function $F$ at a point $x_0$ is any linear function $g$ such that $g(x_0) = F(x_0)$ and $g(x) \ge F(x)$ everywhere. Given $x_i$, as a first step, we find a supergradient of $\bar v_i$ itself: this can be done by solving the dual LP and finding $u^*$ and $p^*$ such that $\bar v_i(x_i) = u^* + p^* \cdot x_i$. Since $\bar v_i(x)$ is for every $x$ the minimum over such linear functions, we also have $\bar v_i(x) \le u^* + p^* \cdot x$ for all $x$. Hence $g(x) = u^* + p^* \cdot x$ is the desired supergradient at $x_i$.
Next, we compute the gradient of $\log \bar v_i$ with respect to $x$. Let $(u^*, p^*)$ be an optimal dual solution at $x_i$, so that $\bar v_i(x_i) = u^* + p^* \cdot x_i$; then the candidate gradient is
$$\frac{p^*}{\bar v_i(x_i)}.$$
We claim that the linear approximation of $\log \bar v_i$ obtained by evaluating this gradient at $x_i$,
$$h(x) = \log \bar v_i(x_i) + \frac{p^* \cdot (x - x_i)}{\bar v_i(x_i)},$$
is a valid supergradient for $\log \bar v_i$ at $x_i$. Indeed, we have $h(x_i) = \log \bar v_i(x_i)$, and for all $x$,
$$\log \bar v_i(x) \le \log\big(u^* + p^* \cdot x\big) \le \log \bar v_i(x_i) + \frac{(u^* + p^* \cdot x) - \bar v_i(x_i)}{\bar v_i(x_i)} = h(x),$$
where the first inequality uses $\bar v_i(x) \le u^* + p^* \cdot x$ and the second inequality follows from the concavity of the logarithm.
Hence, (Eisenberg-Gale Relaxation) can be solved in polynomial time, within a polynomially small error, using standard convex optimization techniques [DHHW23]. In particular, we can find a point such that for every feasible solution .
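To illustrate the optimization step, the following is a self-contained sketch, not the method used in the paper (which relies on the more careful convex-optimization machinery of [DHHW23]): projected supergradient ascent on the objective $\sum_i \log \bar v_i(x_i)$ for a toy instance with additive valuations, where the concave extension is simply the linear extension. The instance, step size, and iteration count are arbitrary illustrative choices.

```python
import math

def project_column(col):
    """Project a vector onto {y >= 0, sum(y) <= 1} (the per-item constraint)."""
    y = [max(c, 0.0) for c in col]
    if sum(y) <= 1.0:
        return y
    # otherwise project onto the simplex {y >= 0, sum(y) = 1} (sort-based rule)
    u = sorted(y, reverse=True)
    css, theta = 0.0, 0.0
    for k, uk in enumerate(u, start=1):
        css += uk
        if uk - (css - 1.0) / k > 0:
            theta = (css - 1.0) / k
    return [max(yi - theta, 0.0) for yi in y]

def eg_objective(V, x):
    return sum(math.log(max(sum(vi[j] * xi[j] for j in range(len(vi))), 1e-12))
               for vi, xi in zip(V, x))

def solve_eg(V, steps=2000, eta=0.05):
    n, m = len(V), len(V[0])
    x = [[1.0 / n] * m for _ in range(n)]            # start from the equal split
    best, best_x = eg_objective(V, x), [row[:] for row in x]
    for _ in range(steps):
        # gradient of sum_i log(v_i . x_i):  d/dx_ij = V[i][j] / (v_i . x_i)
        vals = [max(sum(V[i][j] * x[i][j] for j in range(m)), 1e-12)
                for i in range(n)]
        for i in range(n):
            for j in range(m):
                x[i][j] += eta * V[i][j] / vals[i]
        for j in range(m):                           # restore per-item feasibility
            col = project_column([x[i][j] for i in range(n)])
            for i in range(n):
                x[i][j] = col[i]
        obj = eg_objective(V, x)
        if obj > best:
            best, best_x = obj, [row[:] for row in x]
    return best_x, best

V = [[4.0, 1.0, 1.0], [1.0, 3.0, 2.0]]               # made-up additive values
x, obj = solve_eg(V)
```

Tracking the best iterate guarantees the returned objective is at least that of the equal split, which serves as a cheap sanity check.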
Finally, let us explain why the solution can be assumed to have polynomially bounded support. Given a fractional solution (which obviously has polynomially bounded support), for each agent, using demand queries we also obtain a solution of (Dual LP) certifying the value of the concave extension. By complementary slackness, there is a matching primal solution of (Eisenberg-Gale Relaxation) whose nonzero variables correspond to the tight constraints in (Dual LP) that define the dual solution. Since the dimension of (Dual LP) is polynomial, the number of such tight constraints is also polynomial. Hence we can assume that the number of nonzero variables in (Eisenberg-Gale Relaxation) is polynomial.
A.2 Properties of the optimal solution
Consider now the (Eisenberg-Gale Relaxation) in a general form, with objective functions $f_i$ (which could be equal to the concave extension $\bar v_i$, or perhaps some other extension of $v_i$).
(Eisenberg-Gale Relaxation)
$$\max \; \sum_{i} \log f_i(x_i) \qquad \text{s.t.} \quad \sum_{i} x_{ij} \le 1 \;\; \forall j, \qquad x \ge 0.$$
Suppose that $x$ is an optimal solution of this relaxation. We will need the following property, which is also stated in [GHV20] in the context of general concave valuations (Lemma 4.1 in [GHV20]). Our proof here is much simpler. First, we consider the case of differentiable concave $f_i$, which makes the proof cleaner. (Recall, however, that the concave extension is not differentiable everywhere.)
Lemma 13.
For an optimal solution $x$ of (Eisenberg-Gale Relaxation) with differentiable nonnegative monotone concave functions $f_i$, and any other feasible solution $y$, we have
$$\sum_{i} \frac{f_i(y_i)}{f_i(x_i)} \le n,$$
where $n$ is the number of agents.
Proof.
Since $f_i$ is a concave function, we have
$$f_i(y_i) \le f_i(x_i) + \nabla f_i(x_i) \cdot (y_i - x_i).$$
From here, we get
$$\sum_i \frac{f_i(y_i)}{f_i(x_i)} \le n + \sum_i \frac{\nabla f_i(x_i) \cdot (y_i - x_i)}{f_i(x_i)} \le n,$$
using the fact that $y$ is feasible and $x$ is an optimum for the objective function $\sum_i \log f_i(x_i)$: the middle expression is $n$ plus the derivative of the objective at $x$ in the direction $y - x$, which is nonpositive at an optimum over a convex feasible region. ∎
To deal with a more general situation where is not necessarily differentiable, and we don’t find an exact optimum, we prove a robust version of this lemma.
Lemma 14.
Let $f_i$ for each $i$ be nonnegative, monotone and concave. For $\epsilon > 0$, let $x$ be an $\epsilon$-approximate solution of (Eisenberg-Gale Relaxation), in the sense that for every other feasible solution $x'$,
$$\sum_i \log f_i(x_i) \ge \sum_i \log f_i(x'_i) - \epsilon. \qquad (1)$$
Suppose further that for all , . Then for every feasible solution , we have
Note that we must necessarily have , because .
Proof.
Let satisfy the assumptions of the lemma. For any feasible and , using the concavity of , we can write
From here,
Note that since , we have . Also, , so by monotonicity and concavity, . Similarly, . Hence the ratio is at most in absolute value, and we can use the following elementary approximation:
Plugging into the inequality above, we obtain
Applying the assumption of the lemma to the feasible solution , we have which gives
We set to equate the last two terms: , which gives the statement of the lemma. ∎
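The elementary approximation invoked in the proof did not survive extraction; a standard bound of this shape, valid for $|z| \le 1/2$, is $\log(1+z) \ge z - z^2$ (whether this is the exact inequality used here is an assumption). It can be checked numerically:

```python
import math

# Check log(1+z) >= z - z^2 on a fine grid of |z| <= 1/2.
# (A standard elementary bound; whether it is the exact one
# used in the proof above is an assumption.)
worst = min(math.log(1.0 + z) - (z - z * z)
            for z in (i / 10000.0 for i in range(-5000, 5001)))
print(worst >= 0.0)  # True: the bound holds on the whole range
```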
Corollary 15.
Given a value oracle and a supergradient oracle for each , for any constant , we can find a solution of (Eisenberg-Gale Relaxation) in polynomial time such that for any feasible solution ,
Proof.
For (to be chosen at the end), we run a convex optimization algorithm on (Eisenberg-Gale Relaxation) with the additional constraint that , to obtain a solution such that for any feasible satisfying the same constraint, we have
By Lemma 14, this solution also satisfies
Finally, note that every feasible solution of (Eisenberg-Gale Relaxation) can be modified to obtain a feasible solution which satisfies the constraint , and we have for every . Therefore, our solution also satisfies
For , we obtain the desired statement. ∎
Appendix B Rematching lemmas
Here we prove the rematching lemmas from Section 2.2. These are essentially identical to lemmas in previous work on Nash social welfare, only reformulated in a way convenient for our presentation. We give self-contained proofs here for completeness.
Proof of Lemma 8.
Suppose that where and . We define a matching as follows: For each nonempty , let be the item of maximum value (as a singleton) in . For , let be an arbitrary item in not selected as for some other agent. (Since , we can always find such items.) Recall that are the agents who get positive value from ; in particular, we can assume for . Then we have, using monotonicity and subadditivity
Here we use the AM-GM inequality:
where the last inequality is by assumption and the fact that . ∎
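The construction in the proof, where each agent with a nonempty bundle takes its most valuable single item and the remaining agents take arbitrary leftover items, can be sketched as follows (the instance and names are hypothetical):

```python
def rematch(bundles, value, all_items):
    """bundles: dict agent -> disjoint set of items; value(i, j): singleton value.
    Returns a matching tau: agent -> single item."""
    tau, taken = {}, set()
    for i, S in bundles.items():
        if S:  # pick the item of maximum singleton value in the bundle
            tau[i] = max(S, key=lambda j: value(i, j))
            taken.add(tau[i])
    free = [j for j in all_items if j not in taken]
    for i, S in bundles.items():
        if not S:  # empty bundle: take any item not selected by others
            # (possible since there are at least as many items as agents)
            tau[i] = free.pop()
    return tau

vals = {(1, 'a'): 5, (1, 'b'): 2, (2, 'c'): 1}
bundles = {1: {'a', 'b'}, 2: {'c'}, 3: set()}
tau = rematch(bundles, lambda i, j: vals.get((i, j), 0), ['a', 'b', 'c'])
print(tau[1])  # 'a' -- agent 1's best singleton
```

Since the bundles are disjoint, the agents with nonempty bundles never conflict, so the result is indeed a matching.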
Proof of Lemma 9.
Let . We define a directed bipartite graph between and , with two types of edges: and . We also define:
- ,
- ,
- .
We define a matching as follows:
- For , ,
- For , .
- For , we define arbitrarily, to make a matching.
First, observe that this is indeed a matching: if it were the case that for some , then we would have edges and in the graph, and since there is a directed path from to (), there would also be a directed path from to , contradicting the fact that . Hence, is a matching.
Also, it is easy to see that for every
Next, we analyze the value guarantee for :
We claim that . Observe that the vertices of can be covered disjointly by directed paths that terminate in (from each vertex of , there is such a path and it is also unique, because the in-degrees and out-degrees in the graph are at most ). Let denote the -vertices on some such directed path, and let be its last vertex (in ). If it were the case that , then we could modify the matching by swapping its edges on for the -edges from , and finally adding an element of value for (since this item is outside of and hence available). This would increase the value of the matching , which was chosen to be optimal, so this cannot happen.
It follows that for every maximal directed path terminating in , and since these paths cover disjointly, by combining all these inequalities we obtain
Substituting this into the inequality above,
∎
Appendix C Concentration of subadditive functions
Let us start with a simple lower bound on the expected value of a random set with independently sampled elements.
Lemma 16.
If $f$ is a monotone subadditive function on ground set $N$ and $R$ is a random subset of $N$ where each element appears independently with probability $p = 1/k$, $k$ integer, then
$$\mathbb{E}[f(R)] \ge \frac{1}{k}\, f(N).$$
Proof.
Consider a random coloring of $N$, where every element receives independently a uniformly random color $c \in \{1, \ldots, k\}$. Defining $R_c = \{j \in N : j \text{ has color } c\}$, we see that each set $R_c$ has the same distribution as the set $R$ in the lemma. Therefore,
$$k \, \mathbb{E}[f(R)] = \sum_{c=1}^{k} \mathbb{E}[f(R_c)] \ge \mathbb{E}\Big[f\Big(\bigcup_{c=1}^{k} R_c\Big)\Big] = f(N)$$
by subadditivity. ∎
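The bound of Lemma 16 can be sanity-checked by exact enumeration on a small coverage function (monotone submodular, hence subadditive); the instance below is made up for illustration:

```python
from itertools import combinations

# Items cover subsets of a small universe; f(S) = size of the union covered.
covers = {0: {1, 2}, 1: {2, 3}, 2: {3, 4}, 3: {4, 5}, 4: {5, 1}, 5: {1, 3}}
items = list(covers)

def f(S):
    return len(set().union(*(covers[j] for j in S))) if S else 0

k = 3
p = 1.0 / k
# Exact expectation of f(R) for R containing each item independently w.p. 1/k.
exp_f = sum(p ** len(S) * (1 - p) ** (len(items) - len(S)) * f(S)
            for r in range(len(items) + 1)
            for S in combinations(items, r))
assert exp_f >= f(items) / k  # E[f(R)] >= f(N)/k, as the lemma guarantees
```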
This property is similar to the properties of submodular or self-bounding functions, which satisfy very convenient concentration bounds (similar to additive functions). Unfortunately, the same bounds are not true for subadditive functions; however, some concentration results can be recovered with a loss of certain constant factors.
Here we state a powerful concentration result presented by Schechtman [Sch03], based on the “$q$-point control” concentration inequality by Talagrand [Tal89, Tal95]. We state it here in a simplified form suitable for our purposes.
Theorem 17.
Let be a monotone subadditive function, where for every . Then for any real and integers , and a random set from a product distribution,
This statement can be obtained from Corollary 12 in [Sch03] by extending the definition of to simply by saying for all . Also, we identify with in a natural way. Assuming means that for any set , by monotonicity and subadditivity. Therefore, is -Lipschitz with respect to the Hamming distance, as required in [Sch03]. The statement holds for any product distribution, i.e. a random set where elements appear independently.
Note that Theorem 17 refers to tails on both sides and hence is more convenient to use with the median of than the expectation. The next lemma shows that this is not a big issue, since the theorem also implies that the median and expectation must be within a constant factor.
Definition 18.
We define the median of a random variable as any number such that
For any nonnegative variable, obviously . For subadditive functions of independent random variables, we also get a bound in the opposite direction.
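The first bound follows from Markov's inequality: $\Pr[X \ge 2\,\mathbb{E}[X]] \le 1/2$, so any median is at most $2\,\mathbb{E}[X]$. A quick check on a made-up discrete distribution:

```python
def medians(dist):
    """All support values m with P(X <= m) >= 1/2 and P(X >= m) >= 1/2
    (dist: value -> probability)."""
    return [m for m in dist
            if sum(p for v, p in dist.items() if v <= m) >= 0.5
            and sum(p for v, p in dist.items() if v >= m) >= 0.5]

dist = {0: 0.25, 1: 0.5, 9: 0.25}   # a made-up nonnegative distribution
mean = sum(v * p for v, p in dist.items())
assert all(m <= 2 * mean for m in medians(dist))  # M(X) <= 2 E[X] via Markov
```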
Lemma 19.
Let be a monotone subadditive function, where for every . Then for a random set from a product distribution,
Proof.
Hence, we obtain the following as a corollary of Theorem 17 and Lemma 19. For convenience, we also introduce a parameter as an upper bound on singleton values.
Theorem 20.
Let be a monotone subadditive function, where , , for every . Then for any integers , and a random set where elements appear independently,
Proof.
Assume first is a function satisfying the assumptions with . We use Theorem 17 with parameter . Note that . Hence, Theorem 17 gives
From Lemma 19, we have which implies the statement for :
For a function satisfying the assumptions for , we apply this inequality to the function , to obtain the statement of the theorem. ∎