
Online Fair Division with Contextual Bandits

Arun Verma1,  Indrajit Saha2,  Makoto Yokoo2,  Bryan Kian Hsiang Low1
1National University of Singapore   2 Kyushu University, Japan
{arun, lowkh}@comp.nus.edu.sg, {indrajit, yokoo}@inf.kyushu-u.ac.jp
Abstract

This paper considers a novel online fair division problem involving multiple agents in which a learner observes an indivisible item that has to be irrevocably allocated to one of the agents while satisfying a fairness and efficiency constraint. Existing algorithms assume a small number of items with a sufficiently large number of copies, which ensures a good utility estimation for all item-agent pairs. However, such an assumption may not hold in many real-life applications, e.g., an online platform that has a large number of users (items) who only use the platform’s service providers (agents) a few times (a few copies of items), which makes it difficult to estimate the utility for all item-agent pairs. To overcome this challenge, we model the online fair division problem using contextual bandits, assuming the utility is an unknown function of the item-agent features. We then propose algorithms for online fair division with sub-linear regret guarantees. Our experimental results also verify the different performance aspects of the proposed algorithms.

1 Introduction

Growing economic, environmental, and social pressures require us to be efficient with limited resources (Aleksandrov and Walsh, 2020). Therefore, the fair division (Steinhaus, 1948) of limited resources among multiple parties/agents is needed to efficiently balance their competing interests in many real-life applications, e.g., Fisher market (Codenotti and Varadarajan, 2007; Vazirani, 2007), housing allocation (Benabbou et al., 2019), rent division (Edward Su, 1999; Gal et al., 2016), and many more (Demko and Hill, 1988). The fair division problem has been extensively studied in algorithmic game theory (Caragiannis et al., 2019; Codenotti and Varadarajan, 2007; Eisenberg and Gale, 1959; Vazirani, 2007), but this line of work focuses on the static setting where all information (items, agents, and their utilities) is known in advance. However, fair division problems are often online (Aleksandrov and Walsh, 2020; Gao et al., 2021; Yamada et al., 2024), referred to as online fair division, where indivisible items arrive sequentially and each item needs to be irrevocably allocated to an agent. Existing algorithms for online fair division assume a small number of items with a sufficiently large number of copies (Yamada et al., 2024), which ensures a good utility estimate for all item-agent pairs from the utilities observed in previous allocations. The estimated utilities of item-agent pairs are used to allocate each item to an agent so as to maintain a desired balance between fairness (i.e., keeping the desired level of utilities across the agents) and efficiency (i.e., maximizing the total utility) (Sinclair et al., 2022).

However, many real-life applications have a large number of items with only a few copies of each item. For example, consider an online food delivery platform that wants to recommend restaurants (agents) to its users (items) while balancing between fairly recommending restaurants to accommodate their competing interests (fairness) and maximizing its profit (efficiency). Similar scenarios also arise when ride-hailing platforms recommend cabs to users, e-commerce platforms match top sellers to buyers, network resource allocation (Lan et al., 2010), online advertisement (Li et al., 2019), allocating tasks among crowd-sourced workers (Patil et al., 2021; Yamada et al., 2024), providing humanitarian aid after natural disasters (Yamada et al., 2024), among many others (Aleksandrov and Walsh, 2020; Mehta et al., 2013; Walsh, 2011). The closest works to ours are Yamada et al. (2024) and Procaccia et al. (2024), but both assume a finite number of items with a large number of copies for each item. We consider an online fair division setting where the number of items can be large with only a few copies of each item, thus considering a more general problem setting than in existing works.

Having a large number of items with only a few copies of each item makes it impossible to estimate the utility for all item-agent pairs, rendering existing algorithms (Procaccia et al., 2024; Yamada et al., 2024) ineffective in our setting. This raises a natural question: How do we design an algorithm for online fair division problems having a large number of items with only a few copies of each item? The first challenge in designing such an algorithm is estimating the utility for any item-agent pair using the stochastic utilities observed from previous allocations. A natural solution is to assume that a correlation structure exists between the item-agent features and their utility, which can be realized via parameterization of the expected utility such that the utility depends on an unknown function of the item-agent features. In many real-life applications, the agents may also be concerned about fairness in each round of the allocation process (Benadè et al., 2023; Sim et al., 2021), e.g., on an online platform, if service providers (agents) do not get an adequate number of users (items), they may leave the platform and, even worse, may join a competing platform. Being fair in each round leads to the next challenge: quantifying how well an item's allocation to an agent maintains the desired balance between fairness and efficiency. To address this challenge, we introduce the notion of a goodness function (e.g., the weighted Gini social-evaluation function (Weymark, 1981); more details in Section 3 and Section 4.3) that measures how well an item allocation to an agent maintains the desired balance between fairness and efficiency (a larger value of the goodness function implies a better allocation). The learner's goal is to allocate each item to the agent that maximizes the goodness function value.

Given the goodness function and an estimate of the utility function, the learner must decide which agent each item should be assigned to. Since the utility is only observed for the agent to whom the item is allocated (i.e., the selected item-agent pair), the learner has to deal with the exploration-exploitation trade-off (Auer et al., 2002; Lattimore and Szepesvári, 2020; Slivkins, 2019), i.e., choosing the best agent based on the current utility estimate (exploitation) or trying a new agent that may lead to better utility estimates in the future (exploration). To deal with the exploration-exploitation trade-off, we adapt contextual bandit algorithms (Agrawal and Goyal, 2013; Chu et al., 2011; Krause and Ong, 2011; Li et al., 2010; Zhang et al., 2021; Zhou et al., 2020) to get an optimistic utility estimate for each item-agent pair (in Section 4), where the reward in contextual bandits corresponds to the utility, contexts to items, and arms to agents. The goodness function uses these optimistic utility estimates to allocate the given item to the best possible agent that maintains the desired balance between fairness and efficiency. We prove that our proposed algorithms have finite-sample sub-linear regret upper bound guarantees (Theorems 1, 2, and 3) for a class of goodness functions that are locally monotonically non-decreasing and locally Lipschitz continuous (in Section 4.3). Our experimental results in different settings also corroborate the theoretical results (in Section 5).

1.1 Related work

Fair division.

Fair division is a fundamental problem (Dubins and Spanier, 1961; Steinhaus, 1948) in which a designer allocates resources to agents with diverse preferences. There are two popular notions of ‘fairness’ used in fair division: envy-freeness (EF) (Varian, 1974), where every agent prefers his or her own allocation the most, and proportional fairness (Steinhaus, 1948), where each agent is guaranteed her proportional share of the total utility independent of other agents. Many classical studies on fair division focus on divisible resources (Dubins and Spanier, 1961; Steinhaus, 1948; Varian, 1974), which were later extended to indivisible items (Brams and Taylor, 1996; Budish, 2011; Moulin, 2004) due to their many practical applications. However, an envy-free allocation is not guaranteed to exist, e.g., in the case of two agents and one item. Therefore, appropriate relaxations of EF and proportionality, such as envy-freeness up to one good (EF1) (Moulin, 2004), envy-freeness up to any good (EFX) (Caragiannis et al., 2019), and maximin share fairness (MMS) (Budish, 2011), are used in practice. We refer the readers to Amanatidis et al. (2022) for a detailed survey on fair division.

Online fair division.

Online fair division involves problems where items arrive sequentially and must be immediately and irrevocably allocated to an agent. The nature of the items can differ, e.g., divisible (Walsh, 2011) or indivisible (Aleksandrov and Walsh, 2017; Procaccia et al., 2024; Yamada et al., 2024), homogeneous/heterogeneous (Kash et al., 2014; Walsh, 2011), multiple copies of items (Procaccia et al., 2024; Yamada et al., 2024), and agents receiving more than one item (Aleksandrov et al., 2015). Finding a fair allocation is also related to computing market or competitive equilibria, which are known to be computationally challenging tasks (Codenotti and Varadarajan, 2007; Vazirani, 2007). In a static setting, this problem can be reduced to the Eisenberg-Gale (EG) convex program (Eisenberg and Gale, 1959; Jain and Vazirani, 2010), but envy-freeness is not compatible with Pareto optimality in online settings even if items are divisible. Some works have taken a relaxed approach to envy-freeness, while others considered approximate solutions, e.g., relaxing envy-freeness (Kash et al., 2014), utilizing a stochastic approximation scheme to obtain an approximate solution for the EG convex program (Bateni et al., 2022), or achieving approximate fairness (Yamada et al., 2024). Recent works (Benadè et al., 2023; Sinclair et al., 2022) examine the trade-off between fairness and efficiency. However, existing methods cannot be used in our setting, as it is impossible to estimate the utility for all item-agent pairs due to the large number of items with only a few copies of each item. Prior work (Benadè et al., 2023) assumes that the utilities of all agents for an item are known upon its arrival (noiseless utility). In contrast, we consider the utilities unknown, and only a noisy utility is observed for an item allocation. This introduces a learning problem, i.e., estimating the unknown utility, and connects online fair division problems to multi-armed bandits (Procaccia et al., 2024; Yamada et al., 2024).

Fair multi-armed bandits (MAB).

Fair MAB is a framework in which an arm with a lower expected reward should not be selected with a higher probability than an arm with a higher expected reward (Joseph et al., 2016). A fair policy should sample arms with probability proportional to the value of a merit function of their mean rewards (Wang and Joachims, 2021). Many works also assume that preserving fairness means the probability of selecting two arms should be similar if the arms have similar quality distributions (Chen et al., 2021; Liu et al., 2017). Fairness has also been considered in the linear contextual bandit setting (Gillen et al., 2018; Grazzi et al., 2022; Schumann et al., 2019; Wang et al., 2021; Wu et al., 2023), where an unknown similarity metric imposes individual fairness constraints. Some fair MAB variants seek to optimize the cumulative reward while also ensuring that, at any round, each arm is pulled at least a specified fraction of times (Chen et al., 2020; Claure et al., 2020; Patil et al., 2021; Li et al., 2019). Hossain et al. (2021) consider a multi-agent setting where the goal is to find a fair distribution over the arms for the agents. However, all these fair MAB works only consider fairness, whereas many practical applications require a desired balance between fairness and efficiency.

2 Problem setting

Online fair division.

We consider an online fair division problem involving $N$ agents in which a learner (or central planner) only observes an indivisible item each round that needs to be allocated to one of the agents while satisfying a given fairness constraint to the greatest extent possible. We denote the set of agents by $\mathcal{N}$ and the set of indivisible items by $\mathcal{M}$. In our problem, we can have a large number of items with only a few copies of each item, and the learner has no prior knowledge about future items. At the start of round $t$, the environment selects an item $m_t \in \mathcal{M}$, and then the learner observes and allocates the item $m_t$ to an agent $n_t \in \mathcal{N}$. After that allocation, the learner observes a stochastic utility collected by the agent, denoted by $y_t \doteq f(m_t, n_t) + \varepsilon_t$, where $y_t \in \mathbb{R}^{+}$, $f:\mathcal{M}\times\mathcal{N}\rightarrow\mathbb{R}^{+}$ is an unknown utility function, and $\varepsilon_t$ is $R$-sub-Gaussian noise, i.e., $\forall\lambda\in\mathbb{R},~\mathbb{E}\left[e^{\lambda\varepsilon_{t}}\,\middle|\,\left\{m_{s},n_{s},\varepsilon_{s}\right\}_{s=1}^{t-1},m_{t},n_{t}\right]\leq\exp\left(\lambda^{2}R^{2}/2\right)$.

Allocation quality measure.

Let $U_t^n$ be the cumulative total utility collected by agent $n$ at the beginning of round $t$, where $U_t^n \doteq \sum_{s=1}^{t-1} y_s\mathbb{1}\left(n_s = n\right)$ and $\mathbb{1}\left(n_s = n\right)$ denotes the indicator function. We denote the vector of all agents' total utilities by $\bm{U}_t$, where $\bm{U}_t \doteq \left(U_t^n\right)_{n\in\mathcal{N}}$. We assume there exists a goodness function G that incorporates the desired balance between fairness and efficiency. This goodness function measures how well an item allocation to an agent maintains the desired balance between fairness and efficiency (a larger value implies a better allocation). We discuss different choices of the goodness function in Section B.1. For the given total utilities of the agents, the value of the goodness function G after allocating the item $m_t$ to an agent $n$ is denoted by $\textrm{G}\left(\bm{U}_{t,n}\right)$. Here, $\bm{U}_{t,n}$ is the same as $\bm{U}_t$ except that the total utility of the $n$-th agent is $U_t^n + f(m_t, n)$. The learner's goal is to allocate the item to an agent that maximizes the value of the goodness function or, equivalently, satisfies the given fairness constraint to the greatest extent possible.

Performance measure of allocation policy.

Let $n_t^{\star}$ denote the optimal agent for item $m_t$, i.e., the agent with the maximum goodness function value: $n_t^{\star} = \arg\!\max_{n\in\mathcal{N}} \textrm{G}\left(\bm{U}_{t,n}\right)$. Since the optimal agent is unknown, we sequentially estimate the utility function $f$ using the historical information of the stochastic utilities observed for previously allocated item-agent pairs and then use the estimated utility function to select an agent $n_t$ for the item $m_t$. After allocating the item to agent $n_t$, the learner incurs a penalty (or instantaneous regret) $r_t = \textrm{G}\left(\bm{U}_{t,n_t^{\star}}\right) - \textrm{G}\left(\bm{U}_{t,n_t}\right)$. Our aim is to learn a sequential policy that selects agents for items such that the learner's total penalty for not assigning items to the optimal agents (or cumulative regret) is as small as possible. Specifically, the cumulative regret (regret for brevity) of a sequential policy $\pi$ that selects agent $n_t$ for item $m_t$ in round $t$ over $T$ rounds is given by

$$\mathfrak{R}_{T}(\pi)\doteq\sum_{t=1}^{T}r_{t}=\sum_{t=1}^{T}\left[\textrm{G}\left(\bm{U}_{t,n_{t}^{\star}}\right)-\textrm{G}\left(\bm{U}_{t,n_{t}}\right)\right]. \qquad (1)$$

A policy $\pi$ is a good policy if it has sub-linear regret, i.e., $\lim_{T\rightarrow\infty}\mathfrak{R}_{T}(\pi)/T=0$. This implies that the policy $\pi$ will eventually start allocating items to the best possible agent.
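To make the allocation quality measure and the regret in Eq. 1 concrete, the following minimal Python sketch computes the instantaneous regret of one allocation for a generic goodness function. The helper name `instantaneous_regret` and the callable `goodness` are our own illustrative placeholders, and the true utilities are assumed known here only for the purpose of evaluating the regret.

```python
import numpy as np

def instantaneous_regret(goodness, U, true_utility, chosen_agent):
    """Regret r_t = G(U_{t,n*}) - G(U_{t,n_t}) of a single allocation.

    goodness:      callable mapping a utility vector of shape (N,) to a scalar G(.)
    U:             cumulative utilities U_t of all agents before round t, shape (N,)
    true_utility:  f(m_t, n) for every agent n, shape (N,)
    chosen_agent:  index n_t of the agent selected by the policy
    """
    N = len(U)
    values = np.empty(N)
    for n in range(N):
        U_candidate = U.copy()
        U_candidate[n] += true_utility[n]   # U_{t,n}: add f(m_t, n) only to agent n
        values[n] = goodness(U_candidate)
    return values.max() - values[chosen_agent]
```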

Remark 1.

Our regret definition differs from the one used in standard contextual bandit algorithms, as the value of the goodness function depends on the current round (as in contextual bandits) and also on the utilities collected by each agent in the past (history-dependent). This difference makes the regret analysis challenging for an arbitrary goodness function.

3 Goodness function incorporating fairness constraint

In this section, we first define the various performance measures for fairness and efficiency commonly used in the algorithmic game theory and economics literature. For brevity, let $U_n$ be the utility of agent $n$; then:

  1. Utilitarian Social Welfare (Feldman and Serrano, 2006). Utilitarian Social Welfare (USW) is defined as the sum of the utilities of all agents, i.e., $\sum_{n\in\mathcal{N}}U_{n}$. Maximizing USW maximizes the system's total utility. We refer to this as an efficiency measure.

  2. Egalitarian Social Welfare (Feldman and Serrano, 2006). Egalitarian Social Welfare (ESW) is defined as the minimum utility across all agents, i.e., $\min_{n\in\mathcal{N}}U_{n}$. It is a notion of fairness where the designer aims to maximize the utility of the least happy agent to obtain a fair outcome.

  3. Nash Social Welfare (Nash et al., 1950). Nash Social Welfare (NSW) is defined as the geometric mean of the utilities of all agents, i.e., $\left[\prod_{n\in\mathcal{N}}U_{n}\right]^{1/|\mathcal{N}|}$. By maximizing NSW, we get an approximately fair and efficient allocation.

Goodness function.

Since the goodness function is a performance indicator of the item allocation to an agent, it is used to allocate items to agents in a way that maintains the desired balance between fairness and efficiency. It is well known that there is a trade-off between these two performance measures. For instance, when a learner aims to maximize social welfare, each item is allocated to an agent with the highest utility. Such an allocation scheme achieves efficiency but sacrifices fairness. Intuitively, one way to obtain fairness is to apply a smaller weight to agents with higher cumulative utility and a larger weight to those with lower cumulative utility. Therefore, we must consider an appropriate goodness function G such that optimizing G corresponds to (a) maximizing efficiency, i.e., maximizing the individual agents' cumulative utilities, and (b) reducing the utility disparity among the agents, which ensures fairness. Our regret, as defined in Eq. 1, is formulated for a general goodness function G. However, we first consider the goodness function to be the weighted Gini social-evaluation function (Weymark, 1981), which rewards a fairer distribution of utility while balancing the trade-off between fairness and efficiency. To define this function, we introduce a non-negative and non-increasing weight vector $\bm{w}_{\mathcal{N}}=(w_{1},\ldots,w_{|\mathcal{N}|})$, where $0\leq w_{n}\leq 1$, and a sorting operator $\Phi_{n}$ that returns the $n$-th smallest cumulative utility (i.e., arranges the cumulative utilities in increasing order). The weighted Gini social-evaluation function is then given as follows:

$$\textrm{G}\left(\bm{U}_{t,n_{t}}\right)=\sum_{n\in\mathcal{N}}w_{n}\,\Phi_{n}\left(\bm{U}_{t,n_{t}}\right). \qquad (2)$$

We now consider the following cases that coincide with the well-known social welfare functions.

  1. If $w_{1}=1$ and $w_{n}=0$ for $n\geq 2$, then it is evident from Eq. 2 that the weighted Gini social-evaluation function coincides with ESW, i.e., maximizing the minimum utility among agents.

  2. If $w_{n}=1$ for all $n\in\mathcal{N}$, then the weighted Gini social-evaluation function aligns with USW, i.e., maximizing the goodness function leads towards efficiency.

  3. If $0<w_{n}\leq 1$ for all $n\in\mathcal{N}$, then by appropriately choosing the weights, we can effectively control the trade-off between efficiency and fairness.

Remark 2.

In the first two cases above, we observe that larger variability in the weights leads to fairness, and smaller variability leads to efficiency. We carry this idea forward and aim to control the trade-off with a single parameter instead of $|\mathcal{N}|$ parameters. We set the weight $w_{n}=\rho^{n-1}$ for all $n\in\mathcal{N}$, where $0<\rho\leq 1$ is a control parameter. We experimentally investigate this behavior in detail in Section 5.

We primarily focus on the weighted Gini social-evaluation function because it allows us to interpolate between fairness (ESW) and efficiency (USW) by adjusting a single parameter $\rho$, as discussed in Remark 2. We also explore other goodness functions, such as NSW and log-NSW; further details are provided in Section B.1.
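As an illustration of Eq. 2 with the weights $w_n = \rho^{n-1}$ from Remark 2, the following minimal Python sketch (our own helper, not code from the paper) computes the weighted Gini social-evaluation value and recovers USW at $\rho = 1$ and, approximately, ESW as $\rho \rightarrow 0$.

```python
import numpy as np

def weighted_gini_goodness(U, rho):
    """Weighted Gini social-evaluation function (Eq. 2) with weights w_n = rho**(n-1)."""
    U_sorted = np.sort(U)               # Phi: cumulative utilities in increasing order
    weights = rho ** np.arange(len(U))  # non-increasing weights w_1 = 1, w_2 = rho, ...
    return float(np.dot(weights, U_sorted))

U = np.array([3.0, 10.0, 6.0])
print(weighted_gini_goodness(U, rho=1.0))   # USW: 19.0 (sum of utilities)
print(weighted_gini_goodness(U, rho=1e-9))  # ~ESW: ~3.0 (minimum utility dominates)
```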

4 Online fair division with contextual bandits

The online fair division setting that we consider can have a large number of items with only a few copies of each (even just one); hence, it is impossible to get a good utility estimate for all item-agent pairs. To overcome this challenge, we assume a correlation structure exists between the utility and the item-agent features, which can be realized via parameterization of the expected utility such that the utility depends on an unknown function of the item-agent features. Such an assumption is common in the contextual bandit literature (Agrawal and Goyal, 2013; Chu et al., 2011; Krause and Ong, 2011; Li et al., 2010; Zhang et al., 2021; Zhou et al., 2020), where the reward (utility) is assumed to be an unknown function of the context-arm (item-agent) features. Estimating the utility function is the main component of our problem for achieving good performance. To bring out the key ideas and results, we first restrict our setting to linear utility functions and then extend our results to non-linear utility functions in Section 4.2.

4.1 Linear utility function

For brevity, we denote the set of all item-agent feature vectors in round $t$ by $\mathcal{M}_{t}\subset\mathbb{R}^{d}$ $(d\geq 1)$ and the item-agent feature vector for item $m_t$ and agent $n$ by $m_{t,n}$. After allocating the item $m_t$ to an agent $n_t$, the learner observes the stochastic utility $y_{n_{t}}=m_{t,n_{t}}^{\top}\theta^{\star}+\varepsilon_{t}$, where $\theta^{\star}\in\mathbb{R}^{d}$ is the unknown parameter and $\varepsilon_{t}$ is $R$-sub-Gaussian. Let $M_{t}\doteq\sum_{s=1}^{t-1}m_{s,n_{s}}m_{s,n_{s}}^{\top}+\lambda I_{d}$, where $\lambda>0$ is a regularization parameter that ensures $M_t$ is a positive definite matrix and $I_d$ is the $d\times d$ identity matrix. The weighted $l_2$-norm of a vector $m_{t,n}$ with respect to the matrix $M_t$ is denoted by $\left\|m_{t,n}\right\|_{M_{t}}$.

At the beginning of round $t$, the estimate of the unknown parameter $\theta^{\star}$ is given by $\hat{\theta}_{t}=M_{t}^{-1}\sum_{s=1}^{t-1}m_{s,n_{s}}y_{n_{s}}$. With this utility function estimator, the learner has to decide which agent to allocate the given item to. Since the utility is only observed for the selected item-agent pair, the learner needs to deal with the exploration-exploitation trade-off (Auer et al., 2002; Lattimore and Szepesvári, 2020), i.e., choosing the best agent based on the current utility estimate (exploitation) or trying a new agent that may lead to a better utility estimator in the future (exploration). The upper confidence bound (UCB) (Li et al., 2010; Chu et al., 2011; Zhou et al., 2020) and Thompson sampling (TS) (Krause and Ong, 2011; Agrawal and Goyal, 2013; Zhang et al., 2021) are widely used techniques for dealing with the exploration-exploitation trade-off.

UCB-based algorithm.

We first propose a UCB-based algorithm named OFD-UCB that works as follows. For the first $N$ rounds, the learner allocates items to agents in a round-robin fashion to ensure each agent has positive utility, i.e., $U_{t}^{n}>0,\forall n\in\mathcal{N}$. In round $t$, the environment reveals an item $m_t$ to the learner. Before selecting the agent for that item, the learner updates the utility function estimate $\hat{\theta}_{t}$ using the historical information available before round $t$ (i.e., $\{m_{s,n_{s}},y_{n_{s}}\}_{s=1}^{t-1}$). Then, the UCB utility value of allocating the item $m_t$ to an agent $n$, denoted by $u_{m_{t},n}^{\text{UCB}}$, is computed as follows:

$$u_{m_{t},n}^{\text{UCB}}=m_{t,n}^{\top}\hat{\theta}_{t}+\alpha_{t}\left\|m_{t,n}\right\|_{M_{t}^{-1}}, \qquad (3)$$

where $m_{t,n}^{\top}\hat{\theta}_{t}$ is the estimated utility for allocating item $m_t$ to agent $n$ and $\alpha_{t}\left\|m_{t,n}\right\|_{M_{t}^{-1}}$ is the confidence bonus, in which $\alpha_{t}=R\sqrt{d\log\left(\frac{1+\left(tL^{2}/\lambda\right)}{\delta}\right)}+\lambda^{\frac{1}{2}}S$ is a slowly increasing function of $t$ and the value of $\left\|m_{t,n}\right\|_{M_{t}^{-1}}$ goes to zero as $t$ increases.

OFD-UCB: UCB-based algorithm for online fair division with linear utility function
1:  Input: $\lambda>0$, number of agents $N=|\mathcal{N}|$
2:  For the first $N$ rounds: allocate items to agents in a round-robin fashion
3:  Compute $M_{N+1}=\lambda I_{d}+\sum_{s=1}^{N}m_{s,n_{s}}m_{s,n_{s}}^{\top}$ and $\hat{\theta}_{N+1}=M_{N+1}^{-1}\sum_{s=1}^{N}m_{s,n_{s}}y_{n_{s}}$
4:  for $t=N+1,N+2,\ldots$ do
5:     Receive an item $m_t$
6:     Select agent $n_{t}\in\arg\!\max_{n\in\mathcal{N}}\textrm{G}\left(\bm{U}_{t,n}^{\text{UCB}}\right)$; break ties randomly
7:     Observe stochastic utility $y_{n_{t}}$ after allocating item $m_t$ to agent $n_t$
8:     Update $M_{t+1}=M_{t}+m_{t,n_{t}}m_{t,n_{t}}^{\top}$ and re-estimate $\hat{\theta}_{t+1}=M_{t+1}^{-1}\sum_{s=1}^{t}m_{s,n_{s}}y_{n_{s}}$
9:  end for

Using these optimistic utility estimates for each agent, the algorithm selects an agent for allocating the item $m_t$ by maximizing the optimistic value of the goodness function as follows:

$$n_{t}\in\arg\!\max_{n\in\mathcal{N}}\textrm{G}\left(\bm{U}_{t,n}^{\text{UCB}}\right), \qquad (4)$$

where $\bm{U}_{t,n}^{\text{UCB}}=\left(U_{t}^{a}+u_{m_{t},a}^{\text{UCB}}\mathbb{1}\left(a=n\right)\right)_{a\in\mathcal{N}}$, in which $U_{t}^{a}$ is the total utility collected by agent $a$ before round $t$. If multiple agents achieve the same maximum value of the goodness function, one of them is chosen uniformly at random. After allocating item $m_t$ to agent $n_t$, the environment generates a stochastic utility $y_{n_{t}}$. The learner observes the utility $y_{n_{t}}$, updates $M_{t+1}=M_{t}+m_{t,n_{t}}m_{t,n_{t}}^{\top}$, and re-estimates $\hat{\theta}_{t+1}=M_{t+1}^{-1}\sum_{s=1}^{t}m_{s,n_{s}}y_{n_{s}}$. The same process is repeated to select agents for subsequent items. Our next result gives the regret upper bound of OFD-UCB when the utility function is linear.
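To make the allocation rule concrete, the following minimal Python sketch simulates the OFD-UCB loop under the linear utility model. The constant exploration coefficient `alpha` (a stand-in for $\alpha_t$), the simulated noise, and the `goodness` callable (e.g., the `weighted_gini_goodness` helper sketched in Section 3) are our own illustrative choices rather than the authors' implementation.

```python
import numpy as np

def ofd_ucb(contexts, theta_star, goodness, alpha=1.0, lam=0.01, noise_sd=0.1, seed=0):
    """Simulate the OFD-UCB loop on a stream of item-agent feature matrices.

    contexts:   list of T arrays, each of shape (N, d); row n is the feature m_{t,n}
    theta_star: true parameter, used only to simulate the stochastic utilities
    goodness:   callable mapping a utility vector of shape (N,) to a scalar G(.)
    alpha:      exploration coefficient (a constant stand-in for alpha_t)
    """
    rng = np.random.default_rng(seed)
    N, d = contexts[0].shape
    M = lam * np.eye(d)              # regularized design matrix M_t
    b = np.zeros(d)                  # sum_s m_{s,n_s} * y_{n_s}
    U = np.zeros(N)                  # cumulative utilities of the agents

    for t, X in enumerate(contexts):
        theta_hat = np.linalg.solve(M, b)
        M_inv = np.linalg.inv(M)
        # Optimistic (UCB) utility estimate for every agent (Eq. 3).
        ucb = X @ theta_hat + alpha * np.sqrt(np.einsum('nd,df,nf->n', X, M_inv, X))
        if t < N:                    # round-robin initialization for the first N rounds
            n_t = t
        else:                        # maximize the optimistic goodness value (Eq. 4)
            scores = [goodness(U + ucb[n] * (np.arange(N) == n)) for n in range(N)]
            n_t = int(np.argmax(scores))
        y = X[n_t] @ theta_star + noise_sd * rng.standard_normal()
        U[n_t] += y                  # observed utility updates the chosen agent's total
        M += np.outer(X[n_t], X[n_t])
        b += y * X[n_t]
    return U
```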

Theorem 1.

Let $\delta\in(0,1)$, $\lambda>0$, the noise in the utility be $R$-sub-Gaussian, and the goodness function be the same as defined in Eq. 2 with $w_{\max}=\max_{n\in\mathcal{N}}w_{n}$. Then, with probability at least $1-\delta$, the regret in $T>0$ rounds is

$$\mathfrak{R}_{T}\left(\textnormal{OFD-UCB}\right)\leq 2\alpha_{T}w_{\max}\sqrt{2dT\log(\lambda+TL/d)},$$

where $\alpha_{T}=R\sqrt{d\log\left(\frac{1+\left(TL^{2}/\lambda\right)}{\delta}\right)}+\lambda^{\frac{1}{2}}S$, $\left\|\theta^{\star}\right\|_{2}\leq S$, and $\left\|m_{t,n}\right\|_{2}\leq L$ for all $t\geq 1,n\in\mathcal{N}$.

The cumulative regret depends on the instantaneous regret incurred in each round. We can upper bound the instantaneous regret $r_t$ incurred in round $t$ by $2w_{\max}\left\|\hat{\theta}_{t}-\theta^{\star}\right\|_{M_{t}}\left\|m_{t,n_{t}}\right\|_{M_{t}^{-1}}$ (as shown in Appendix A). As the instantaneous regret depends on the estimation error of $\theta^{\star}$ from the observed item-agent pairs, i.e., $\left\|\hat{\theta}_{t}-\theta^{\star}\right\|_{M_{t}}$ in round $t$, we adapt results from linear contextual bandits (Abbasi-Yadkori et al., 2011; Chu et al., 2011) to our setting to get an upper bound on this estimation error. With this result, the regret upper bound given in Theorem 1 follows by upper-bounding the sum of all instantaneous regrets. The detailed proof of Theorem 1 and proofs of the other results below are given in Appendix A.

TS-based algorithm.

Due to the empirical superiority of TS-based algorithms over UCB-based bandit algorithms (Chapelle and Li, 2011; Agrawal and Goyal, 2013), we also propose a TS-based algorithm named OFD-TS, which works similarly to OFD-UCB except for the agent selection step. To get a TS-based utility estimate, the algorithm first samples a parameter vector $\tilde{\theta}_{t}$ from the distribution $\mathcal{N}\left(\hat{\theta}_{t},\beta_{t}^{2}M_{t}^{-1}\right)$, where $\mathcal{N}$ denotes the multivariate normal distribution and $\beta_{t}=R\sqrt{9d\log\left(t/\delta\right)}$ (Agrawal and Goyal, 2013). Using the sampled parameter $\tilde{\theta}_{t}$, the TS-based utility estimate $u_{m_{t},n}^{\text{TS}}=m_{t,n}^{\top}\tilde{\theta}_{t}$ replaces $u_{m_{t},n}^{\text{UCB}}$ for computing the value of the goodness function.
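A minimal sketch of the TS-based estimate (our own illustration), assuming the same `M` and `b` state as in the OFD-UCB sketch above and treating `beta` as a stand-in for $\beta_t$:

```python
import numpy as np

def ts_utility_estimates(X, M, b, beta, rng):
    """Sample theta_tilde ~ N(theta_hat, beta^2 * M^{-1}) and return X @ theta_tilde."""
    theta_hat = np.linalg.solve(M, b)            # same ridge estimate as in OFD-UCB
    cov = beta ** 2 * np.linalg.inv(M)
    theta_tilde = rng.multivariate_normal(theta_hat, cov)
    return X @ theta_tilde                       # TS-based utility estimate per agent
```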

4.2 Non-linear utility function

We now consider the setting in which the utility function can be non-linear. As shown in Section 4.1, a linear contextual bandit algorithm can be used as a sub-routine to get optimistic utility estimates in our online fair division problem. We then use these estimates to compute the value of the goodness function for allocating the received item to the agent that gives the desired balance between fairness and efficiency. We generalize this observation to the online fair division problem with a non-linear utility function by using a suitable non-linear contextual bandit algorithm, and we introduce the notion of an Online Fair Division (OFD) compatible contextual bandit algorithm.

Definition 1 (OFD Compatible Contextual Bandit Algorithm).

Let $\mathcal{O}_{t}$ denote the observations of item-agent pairs at the beginning of round $t$ and $m_{t,n}\in\mathcal{M}$. Then, any contextual bandit algorithm $\mathfrak{A}$ is OFD compatible if its estimated function $f_{t}^{\mathfrak{A}}$, with probability at least $1-\delta$, satisfies:

$$|f_{t}^{\mathfrak{A}}(m_{t,n})-f(m_{t,n})|\leq h(m_{t,n},\mathcal{O}_{t}).$$

Many contextual bandit algorithms, such as Lin-UCB (Chu et al., 2011), UCB-GLM (Li et al., 2017), IGP-UCB (Chowdhury and Gopalan, 2017), GP-TS (Chowdhury and Gopalan, 2017), Neural-UCB (Zhou et al., 2020), and Neural-TS (Zhang et al., 2021), are OFD compatible. The value of $h(m_{t,n},\mathcal{O}_{t})$ gives an upper bound on the error of the estimated utility with respect to the true utility function. Its value depends on the problem, the choice of contextual bandit algorithm $\mathfrak{A}$, and its associated hyperparameters, as shown in Table 1 at the end of Appendix A. Depending on the problem setting, we can use any suitable OFD compatible contextual bandit algorithm as a sub-routine to get optimistic utility estimates for all item-agent pairs, which are then used to compute the goodness function value and select an agent for allocating the item.
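To illustrate Definition 1 in code, the hypothetical interface below (our own abstraction, not part of the paper) captures what an OFD compatible sub-routine must expose: a point estimate $f_t^{\mathfrak{A}}(m_{t,n})$, a high-probability error bound $h(m_{t,n},\mathcal{O}_t)$, and an update from the observed utility; a UCB-style optimistic estimate then adds the two.

```python
from abc import ABC, abstractmethod
import numpy as np

class OFDCompatibleBandit(ABC):
    """Hypothetical interface for a contextual bandit sub-routine used by OFD-A."""

    @abstractmethod
    def estimate(self, features: np.ndarray) -> np.ndarray:
        """Point estimates f_t^A(m_{t,n}) for each row of `features` (shape (N, d))."""

    @abstractmethod
    def width(self, features: np.ndarray) -> np.ndarray:
        """High-probability error bounds h(m_{t,n}, O_t) for each row of `features`."""

    @abstractmethod
    def update(self, feature: np.ndarray, utility: float) -> None:
        """Record the observed utility for the allocated item-agent pair."""

    def optimistic_estimate(self, features: np.ndarray) -> np.ndarray:
        """UCB-style optimistic estimate u^A_{m_t,n} = f_t^A(m_{t,n}) + h(m_{t,n}, O_t)."""
        return self.estimate(features) + self.width(features)
```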

Let $\mathfrak{A}$ be an OFD compatible contextual bandit algorithm with $|f_{t}^{\mathfrak{A}}(m_{t,n})-f(m_{t,n})|\leq h(m_{t,n},\mathcal{O}_{t})$. Then, the agent for allocating the item $m_t$ is selected by maximizing the goodness function as follows:

$$n_{t}\in\arg\!\max_{n\in\mathcal{N}}\textrm{G}\left(\bm{U}_{t,n}^{\mathfrak{A}}\right), \qquad (5)$$

where $\bm{U}_{t,n}^{\mathfrak{A}}=\left(U_{t}^{a}+u_{m_{t},a}^{\mathfrak{A}}\mathbb{1}\left(a=n\right)\right)_{a\in\mathcal{N}}$, in which $U_{t}^{a}$ is the total utility collected by agent $a$ thus far and $u_{m_{t},a}^{\mathfrak{A}}$ is the optimistic estimate of $f(m_{t,a})$. Next, we give a regret upper bound for using any OFD compatible contextual bandit algorithm $\mathfrak{A}$ to obtain optimistic utility estimates.

Theorem 2.

Let $\mathfrak{A}$ be an OFD compatible contextual bandit algorithm with $|f_{t}^{\mathfrak{A}}(m_{t,n})-f(m_{t,n})|\leq h(m_{t,n},\mathcal{O}_{t})$ and the goodness function be the same as defined in Eq. 2 with $w_{\max}=\max_{n\in\mathcal{N}}w_{n}$. If the assumptions used in $\mathfrak{A}$ hold, then, with probability at least $1-\delta$, the regret of the corresponding online fair division algorithm OFD-$\mathfrak{A}$ in $T$ rounds is

$$\mathfrak{R}_{T}\left(\textnormal{OFD-}\mathfrak{A}\right)\leq 2w_{\max}\sqrt{T}\sqrt{\sum_{t=1}^{T}\left[h(m_{t,n_{t}},\mathcal{O}_{t})\right]^{2}}.$$

The proof follows by upper-bounding the sum of instantaneous regrets. To do this, we first show that the instantaneous regret $r_t$ incurred in round $t$ is upper bounded by $2w_{n_{t}}h(m_{t,n_{t}},\mathcal{O}_{t})$ (as shown in Appendix A), which uses the fact that the estimation error satisfies $|f_{t}^{\mathfrak{A}}(m_{t,n})-f(m_{t,n})|\leq h(m_{t,n},\mathcal{O}_{t})$ for an item-agent pair $m_{t,n}$ when using an OFD compatible contextual bandit algorithm $\mathfrak{A}$.

4.3 Regret bound for other goodness functions

The regret upper bounds given in Theorem 1 and Theorem 2 hold for a specific goodness function (defined in Eq. 2). We also derive regret upper bounds for goodness functions that satisfy the following properties.

Definition 2 (Locally monotonically non-decreasing and Lipschitz continuous properties).

A goodness function G is
(i) locally monotonically non-decreasing if, for any utility $u>0$, $\textrm{G}\left(U_{t,n_{1}},\ldots,U_{t,n_{i}},\ldots,U_{t,n_{N}}\right)\leq\textrm{G}\left(U_{t,n_{1}},\ldots,U_{t,n_{i}}+u,\ldots,U_{t,n_{N}}\right)~\forall i$, and

(ii) locally Lipschitz continuous if, for a constant $c_{i}>0$ corresponding to the $i$-th element, $|\textrm{G}\left(U_{t,n_{1}},\ldots,U_{t,n_{i}},\ldots,U_{t,n_{N}}\right)-\textrm{G}\left(U_{t,n_{1}},\ldots,U_{t,n_{i}}^{\prime},\ldots,U_{t,n_{N}}\right)|\leq c_{i}|U_{t,n_{i}}-U_{t,n_{i}}^{\prime}|~\forall i$.

Next, we provide a regret upper bound for goodness functions that are locally monotonically non-decreasing and locally Lipschitz continuous, e.g., NSW and log-NSW.

Theorem 3.

Let $\mathfrak{A}$ be an OFD compatible contextual bandit algorithm with $|f_{t}^{\mathfrak{A}}(m_{t,n})-f(m_{t,n})|\leq h(m_{t,n},\mathcal{O}_{t})$ and the goodness function G be locally monotonically non-decreasing and locally Lipschitz continuous with $c_{\max}=\max_{n\in\mathcal{N}}c_{n}$. If the assumptions used in $\mathfrak{A}$ hold, then, with probability at least $1-\delta$, the regret of the corresponding online fair division algorithm OFD-$\mathfrak{A}$ in $T$ rounds is

$$\mathfrak{R}_{T}\left(\textnormal{OFD-}\mathfrak{A},\textrm{G}\right)\leq 2c_{\max}\sqrt{T}\sqrt{\sum_{t=1}^{T}\left[h(m_{t,n_{t}},\mathcal{O}_{t})\right]^{2}}.$$

The instantaneous regret $r_t$ is upper bounded by $2c_{n_{t}}h(m_{t,n_{t}},\mathcal{O}_{t})$ (as shown in Appendix A). This uses the following three facts: (i) the locally monotonically non-decreasing property of the goodness function, (ii) its locally Lipschitz continuous property, and (iii) the estimation error bound $|f_{t}^{\mathfrak{A}}(m_{t,n})-f(m_{t,n})|\leq h(m_{t,n},\mathcal{O}_{t})$ for the contextual bandit algorithm $\mathfrak{A}$.

5 Experiments

Our goal is to corroborate our theoretical results and empirically demonstrate the performance of our proposed algorithms in different online fair division problems. We repeat all our experiments 20 times and show the regret (as defined in Eq. 1) with a 95% confidence interval (the vertical line on each curve shows the confidence interval). To demonstrate the different performance aspects of our proposed algorithms, we use different synthetic problem instances (a common experimental choice in the bandit literature), whose details are as follows.

Experiment setting.

We use a $d_m$-dimensional space to generate the sample features of each item, where item $m_t$ is represented by $m_{t}=\left(x_{m_{t},1},\ldots,x_{m_{t},d_{m}}\right)$ for $t\geq 1$. Similarly, we use a $d_n$-dimensional space to generate the sample features of each agent, where agent $n\in\mathcal{N}$ is represented by $n=\left(x_{n,1},\ldots,x_{n,d_{n}}\right)$. The value of the $i$-th feature $x_{m_{t},i}$ (or $x_{n,i}$) is sampled uniformly at random from $\left(0,10\right)$. Note that the agents remain the same across rounds, whereas the item in each round is randomly sampled from the $d_m$-dimensional space. To get the item-agent feature vectors for item $m_t$ in round $t$, we concatenate the item features $m_t$ with each agent's feature vector. For item $m_t$ and agent $n$, the concatenated feature vector is denoted by $m_{t,n}$, which is a $d$-dimensional vector with $d=d_{m}+d_{n}$. We then select a $d$-dimensional vector $\theta^{\star}$ by sampling uniformly at random from $(0,10)^{d}$ and normalize it to ensure $\theta^{\star}$ has unit $l_2$-norm. In all our experiments, we use $\lambda=0.01$, $R=0.1$, $\delta=0.05$, and $d_{m}=d_{n}$.
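The feature-generation procedure above can be sketched in a few lines of Python; this is our own re-implementation of the described setup (with an assumed random seed), not the authors' code.

```python
import numpy as np

def generate_instance(n_items, n_agents, d_m, d_n, seed=0):
    """Sample item/agent features, item-agent feature vectors, and theta_star."""
    rng = np.random.default_rng(seed)
    items = rng.uniform(0, 10, size=(n_items, d_m))    # item features (one item per round)
    agents = rng.uniform(0, 10, size=(n_agents, d_n))  # agent features, fixed across rounds
    theta_star = rng.uniform(0, 10, size=d_m + d_n)
    theta_star /= np.linalg.norm(theta_star)           # unit l2-norm
    # contexts[t, n] = concatenation of item t's and agent n's features (dimension d).
    contexts = np.concatenate(
        [np.repeat(items[:, None, :], n_agents, axis=1),
         np.repeat(agents[None, :, :], n_items, axis=0)],
        axis=2,
    )
    return contexts, theta_star

contexts, theta_star = generate_instance(n_items=1000, n_agents=10, d_m=2, d_n=2)
print(contexts.shape)  # (1000, 10, 4)
```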

Regret comparison with baselines.

We compare the regret of the proposed algorithms with two baselines: OFD-Uniform and OFD-Greedy. OFD-Uniform selects an agent uniformly at random to allocate the item. In contrast, OFD-Greedy uses an $\varepsilon$-greedy contextual algorithm, which behaves the same as OFD-UCB except that it has no confidence term (i.e., $\alpha_{t}=0$) and, in each round, selects an agent uniformly at random with probability $0.1$ and greedily otherwise. For the experiments with linear utility (i.e., $f(x)=x^{\top}\theta^{\star}$), we use $10000$ items, $10$ agents, and the control parameter $\rho=0.85$ for the goodness function. We use three different problems with the same setting except $d_{m}=d_{n}=\{2,5,10\}$, resulting in $d=\{4,10,20\}$. As expected, our algorithms based on UCB- and TS-based contextual linear bandit algorithms outperform both baselines, as shown in Fig. 1, on different problem instances with linear utility (only varying the dimension $d$ while keeping the remaining parameters unchanged). Note that we set a limit on the y-axis to show the sub-linear regret of our algorithms. Further, we observe that the TS-based algorithm performs better than the UCB-based algorithm. For the experiment with non-linear utility, we use the square function, i.e., $f(x)=10x^{2}$, with $500$ items, $10$ agents, $\rho=0.85$, and $d_{m}=d_{n}=2$ $(d=4)$. As shown in Fig. 1d, the linear contextual bandit algorithms perform poorly compared to their non-linear counterparts, which use a Gaussian process to estimate the non-linear function (Srinivas et al., 2010; Williams and Rasmussen, 2006).

Figure 1: Comparing cumulative regret of different online fair division algorithms for $\rho=0.85$. Panels: (a) Linear $(d=4)$, (b) Linear $(d=10)$, (c) Linear $(d=20)$, (d) Square $(d=4)$.

Regret vs. number of agents ($N$) and dimension ($d$).

The number of agents $N$ and the dimension $d$ of the item-agent feature vector control the difficulty of the online fair division problem. As their values increase, the problem becomes more difficult, making it harder to allocate each item to the best agent. We verify this by observing how the regret of our proposed algorithms changes as we vary $N$ and $d$. In these experiments, we use the linear utility function (i.e., $f(x)=x^{\top}\theta^{\star}$), $1000$ items, $\rho=1$, $N=10$ when varying the dimension, and $d=40$ when varying the number of agents. As shown in Fig. 2a and Fig. 2b, the regret of our UCB- and TS-based algorithms increases as we increase the number of agents over $N=\{5,10,15,20,25\}$. We observe the same trend when we increase the dimension of the item-agent feature vector over $d=\{10,20,30,40,50\}$, as shown in Fig. 2c and Fig. 2d. In all experiments, we also observe that the TS-based algorithm performs better than its UCB-based counterpart (as seen in Fig. 2 by comparing the regret of both algorithms). However, when we set $\rho=0.85$, we observe the same trend for $d$, but the trend reverses as the number of agents increases, as shown in Fig. 3. This happens because the value of the goodness function (defined in Eq. 2) decreases with an increase in the number of agents when $\rho<1$.

Figure 2: Cumulative regret of OFD-UCB and OFD-TS for different values of $N$ and $d$ with $\rho=1.0$. Panels: (a) varying agents (UCB), (b) varying agents (TS), (c) varying dimension (UCB), (d) varying dimension (TS).
Figure 3: Cumulative regret of OFD-UCB and OFD-TS for different values of $N$ and $d$ with $\rho=0.85$. Panels: (a) varying agents (UCB), (b) varying agents (TS), (c) varying dimension (UCB), (d) varying dimension (TS).

Total utility, Gini coefficient, and ratio of minimum utility to total utility vs. $\rho$.

We measure the fairness of the item allocation among agents using the Gini coefficient (Dorfman, 1979) (lower is better) and the ratio of minimum utility to total utility (increasing this ratio is the same as maximizing the maximin utility, hence a higher value is better). We use the total utility (USW) as a measure of efficiency (higher total utility is better). Since the value of $\rho$ controls the balance between fairness and efficiency, we want to see how it affects the total utility, the Gini coefficient, and the ratio of minimum utility to total utility. To do so, we use the linear utility function (i.e., $f(x)=x^{\top}\theta^{\star}$), $1000$ items, $N=10$, $d=40$, and vary $\rho$ over the set $\{0,0.1,0.2,0.4,0.6,0.8,0.85,0.88,0.89,0.9,0.91,0.92,0.93,0.94,0.95,0.96,0.97,0.98,0.99,1.0\}$. For each value of $\rho$, we record the total utility, the Gini coefficient, and the ratio of minimum utility to total utility after allocating all items. We repeat each experiment $20$ times and report the average values in Fig. 4. As we increase the value of $\rho$, efficiency is preferred over fairness. As expected, the total utility increases (Fig. 4c), the Gini coefficient increases (Fig. 4b), and the ratio of minimum utility to total utility decreases (Fig. 4a). As expected, OFD-Uniform performs the worst. Although OFD-Greedy behaves similarly to its UCB- and TS-based counterparts here, it suffers from higher regret, as shown in Fig. 1.

Figure 4: Fairness and efficiency measures (total utility, Gini coefficient, and ratio of minimum utility to total utility) as a function of the control parameter $\rho$. Panels: (a) $\min_{n\in\mathcal{N}}U_{t}^{n}/\sum_{n\in\mathcal{N}}U_{t}^{n}$, (b) Gini coefficient, (c) total utility.
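The fairness metrics reported in Fig. 4 can be computed as below (a minimal sketch; we assume the standard relative mean absolute difference form of the Gini coefficient, and the example utility vector is illustrative):

```python
import numpy as np

def gini_coefficient(U):
    """Gini coefficient of the utility vector U (0 means perfectly equal utilities)."""
    U = np.asarray(U, dtype=float)
    pairwise = np.abs(U[:, None] - U[None, :]).sum()   # sum of |U_i - U_j| over all pairs
    return pairwise / (2 * len(U) ** 2 * U.mean())

def min_to_total_ratio(U):
    """Ratio of the minimum agent utility to the total utility (higher is fairer)."""
    U = np.asarray(U, dtype=float)
    return U.min() / U.sum()

U = np.array([120.0, 95.0, 101.0, 130.0])              # illustrative cumulative utilities
print(U.sum(), gini_coefficient(U), min_to_total_ratio(U))
```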

6 Conclusion

This paper considers a novel online fair division problem involving multiple agents in which a learner must irrevocably allocate an indivisible item to one of the agents while satisfying a given fairness and efficiency constraint. Since the number of items can be very large in our setting, with only a few copies of each item, as in many real-life applications, it is difficult to estimate the utility for all item-agent pairs. To address this challenge, we use contextual bandits to get optimistic utility estimates for each item-agent pair. We then propose algorithms for online fair division that use these optimistic utility estimates for allocating items to agents and show that they enjoy sub-linear regret guarantees. Our experimental results further validate the performance of the proposed algorithms. For future research, one can explore other fairness constraints, such as envy-freeness and proportionality. Another promising direction is investigating fairness across multiple types of items in the online fair division setting with unknown utility functions.

References

  • Abbasi-Yadkori et al. (2011) Y. Abbasi-Yadkori, D. Pál, and C. Szepesvári. Improved algorithms for linear stochastic bandits. In Proc. NeurIPS, pages 2312–2320, 2011.
  • Agrawal and Goyal (2013) S. Agrawal and N. Goyal. Thompson sampling for contextual bandits with linear payoffs. In Proc. ICML, pages 127–135, 2013.
  • Aleksandrov and Walsh (2017) M. Aleksandrov and T. Walsh. Pure nash equilibria in online fair division. In Proc. IJCAI, pages 42–48, 2017.
  • Aleksandrov and Walsh (2020) M. Aleksandrov and T. Walsh. Online fair division: A survey. In Proc. AAAI, pages 13557–13562, 2020.
  • Aleksandrov et al. (2015) M. Aleksandrov, H. Aziz, S. Gaspers, and T. Walsh. Online fair division: Analysing a food bank problem. arXiv:1502.07571, 2015.
  • Amanatidis et al. (2022) G. Amanatidis, G. Birmpas, A. Filos-Ratsikas, and A. A. Voudouris. Fair division of indivisible goods: A survey. arXiv:2202.07551, 2022.
  • Auer et al. (2002) P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, pages 235–256, 2002.
  • Bateni et al. (2022) M. Bateni, Y. Chen, D. F. Ciocan, and V. Mirrokni. Fair resource allocation in a volatile marketplace. Operations Research, pages 288–308, 2022.
  • Benabbou et al. (2019) N. Benabbou, M. Chakraborty, and Y. Zick. Fairness and diversity in public resource allocation problems. Bulletin of the Technical Committee on Data Engineering, 2019.
  • Benadè et al. (2023) G. Benadè, A. M. Kazachkov, A. D. Procaccia, A. Psomas, and D. Zeng. Fair and efficient online allocations. Operations Research, 2023.
  • Brams and Taylor (1996) S. J. Brams and A. D. Taylor. Fair Division: From cake-cutting to dispute resolution. Cambridge University Press, 1996.
  • Budish (2011) E. Budish. The combinatorial assignment problem: Approximate competitive equilibrium from equal incomes. Journal of Political Economy, pages 1061–1103, 2011.
  • Caragiannis et al. (2019) I. Caragiannis, D. Kurokawa, H. Moulin, A. D. Procaccia, N. Shah, and J. Wang. The unreasonable fairness of maximum nash welfare. ACM Transactions on Economics and Computation, pages 1–32, 2019.
  • Chapelle and Li (2011) O. Chapelle and L. Li. An empirical evaluation of Thompson sampling. In Proc. NeurIPS, pages 2249–2257, 2011.
  • Chen et al. (2021) G. Chen, X. Li, and Y. Ye. Fairer lp-based online allocation via analytic center. arXiv:2110.14621, 2021.
  • Chen et al. (2020) Y. Chen, A. Cuellar, H. Luo, J. Modi, H. Nemlekar, and S. Nikolaidis. Fair contextual multi-armed bandits: Theory and experiments. In Proc. UAI, pages 181–190, 2020.
  • Chowdhury and Gopalan (2017) S. R. Chowdhury and A. Gopalan. On kernelized multi-armed bandits. In Proc. ICML, pages 844–853, 2017.
  • Chu et al. (2011) W. Chu, L. Li, L. Reyzin, and R. E. Schapire. Contextual bandits with linear payoff functions. In Proc. AISTATS, pages 208–214, 2011.
  • Claure et al. (2020) H. Claure, Y. Chen, J. Modi, M. Jung, and S. Nikolaidis. Multi-armed bandits with fairness constraints for distributing resources to human teammates. In Proc. HRI, pages 299–308, 2020.
  • Codenotti and Varadarajan (2007) B. Codenotti and K. Varadarajan. Computation of market equilibria by convex programming. Algorithmic Game Theory, pages 135–158, 2007.
  • Cole and Gkatzelis (2018) R. Cole and V. Gkatzelis. Approximating the nash social welfare with indivisible items. SIAM Journal on Computing, pages 1211–1236, 2018.
  • Demko and Hill (1988) S. Demko and T. P. Hill. Equitable distribution of indivisible objects. Mathematical Social Sciences, pages 145–158, 1988.
  • Dorfman (1979) R. Dorfman. A formula for the gini coefficient. The review of economics and statistics, pages 146–149, 1979.
  • Dubins and Spanier (1961) L. E. Dubins and E. H. Spanier. How to cut a cake fairly. The American Mathematical Monthly, pages 1–17, 1961.
  • Edward Su (1999) F. Edward Su. Rental harmony: Sperner’s lemma in fair division. The American mathematical monthly, pages 930–942, 1999.
  • Eisenberg and Gale (1959) E. Eisenberg and D. Gale. Consensus of subjective probabilities: The pari-mutuel method. The Annals of Mathematical Statistics, pages 165–168, 1959.
  • Feldman and Serrano (2006) A. M. Feldman and R. Serrano. Welfare economics and social choice theory. Springer Science & Business Media, 2006.
  • Gal et al. (2016) Y. Gal, M. Mash, A. D. Procaccia, and Y. Zick. Which is the fairest (rent division) of them all? In Proc. EC, pages 67–84, 2016.
  • Gao et al. (2021) Y. Gao, A. Peysakhovich, and C. Kroer. Online market equilibrium with application to fair division. In Proc. NeurIPS, pages 27305–27318, 2021.
  • Gillen et al. (2018) S. Gillen, C. Jung, M. Kearns, and A. Roth. Online learning with an unknown fairness metric. In Proc. NeurIPS, pages 2605–2614, 2018.
  • Grazzi et al. (2022) R. Grazzi, A. Akhavan, J. I. Falk, L. Cella, and M. Pontil. Group meritocratic fairness in linear contextual bandits. In Proc. NeurIPS, pages 24392–24404, 2022.
  • Hossain et al. (2021) S. Hossain, E. Micha, and N. Shah. Fair algorithms for multi-agent multi-armed bandits. In Proc. NeurIPS, pages 24005–24017, 2021.
  • Jain and Vazirani (2010) K. Jain and V. V. Vazirani. Eisenberg–gale markets: algorithms and game-theoretic properties. Games and Economic Behavior, pages 84–106, 2010.
  • Joseph et al. (2016) M. Joseph, M. Kearns, J. Morgenstern, S. Neel, and A. Roth. Fair algorithms for infinite and contextual bandits. arXiv:1610.09559, 2016.
  • Kash et al. (2014) I. Kash, A. D. Procaccia, and N. Shah. No agent left behind: Dynamic fair division of multiple resources. Journal of Artificial Intelligence Research, pages 579–603, 2014.
  • Krause and Ong (2011) A. Krause and C. S. Ong. Contextual Gaussian process bandit optimization. In Proc. NeurIPS, pages 2447–2455, 2011.
  • Lan et al. (2010) T. Lan, D. Kao, M. Chiang, and A. Sabharwal. An axiomatic theory of fairness in network resource allocation. In Proc. INFOCOM, pages 1–9, 2010.
  • Lattimore and Szepesvári (2020) T. Lattimore and C. Szepesvári. Bandit Algorithms. Cambridge University Press, 2020.
  • Li et al. (2019) F. Li, J. Liu, and B. Ji. Combinatorial sleeping bandits with fairness constraints. IEEE Transactions on Network Science and Engineering, pages 1799–1813, 2019.
  • Li et al. (2010) L. Li, W. Chu, J. Langford, and R. E. Schapire. A contextual-bandit approach to personalized news article recommendation. In Proc. WWW, pages 661–670, 2010.
  • Li et al. (2017) L. Li, Y. Lu, and D. Zhou. Provably optimal algorithms for generalized linear contextual bandits. In Proc. ICML, pages 2071–2080, 2017.
  • Liu et al. (2017) Y. Liu, G. Radanovic, C. Dimitrakakis, D. Mandal, and D. C. Parkes. Calibrated fairness in bandits. arXiv:1707.01875, 2017.
  • Mehta et al. (2013) A. Mehta et al. Online matching and ad allocation. Foundations and Trends® in Theoretical Computer Science, pages 265–368, 2013.
  • Moulin (2004) H. Moulin. Fair division and collective welfare. MIT press, 2004.
  • Nash et al. (1950) J. F. Nash et al. The bargaining problem. Econometrica, pages 155–162, 1950.
  • Patil et al. (2021) V. Patil, G. Ghalme, V. Nair, and Y. Narahari. Achieving fairness in the stochastic multi-armed bandit problem. Journal of Machine Learning Research, pages 1–31, 2021.
  • Procaccia et al. (2024) A. D. Procaccia, B. Schiffer, and S. Zhang. Honor among bandits: No-regret learning for online fair division. arXiv:2407.01795, 2024.
  • Schumann et al. (2019) C. Schumann, Z. Lang, N. Mattei, and J. P. Dickerson. Group fairness in bandit arm selection. arXiv:1912.03802, 2019.
  • Sim et al. (2021) R. H. L. Sim, Y. Zhang, B. K. H. Low, and P. Jaillet. Collaborative Bayesian optimization with fair regret. In Proc. ICML, pages 9691–9701, 2021.
  • Sinclair et al. (2022) S. R. Sinclair, S. Banerjee, and C. L. Yu. Sequential fair allocation: Achieving the optimal envy-efficiency tradeoff curve. ACM SIGMETRICS Performance Evaluation Review, pages 95–96, 2022.
  • Slivkins (2019) A. Slivkins. Introduction to multi-armed bandits. Foundations and Trends® in Machine Learning, 2019.
  • Srinivas et al. (2010) N. Srinivas, A. Krause, S. Kakade, and M. Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. In Proc. ICML, page 1015–1022, 2010.
  • Steinhaus (1948) H. Steinhaus. The problem of fair division. Econometrica, pages 101–104, 1948.
  • Talebi and Proutiere (2018) M. S. Talebi and A. Proutiere. Learning proportionally fair allocations with low regret. Proceedings of the ACM on Measurement and Analysis of Computing Systems, pages 1–31, 2018.
  • Varian (1974) H. R. Varian. Equity, envy, and efficiency. Journal of Economic Theory, pages 63–91, 1974.
  • Vazirani (2007) V. V. Vazirani. Combinatorial algorithms for market equilibria. Algorithmic game theory, pages 103–134, 2007.
  • Walsh (2011) T. Walsh. Online cake cutting. In Proc. ADT, pages 292–305, 2011.
  • Wang and Joachims (2021) L. Wang and T. Joachims. User fairness, item fairness, and diversity for rankings in two-sided markets. In Proc. SIGIR, pages 23–41, 2021.
  • Wang et al. (2021) L. Wang, Y. Bai, W. Sun, and T. Joachims. Fairness of exposure in stochastic bandits. In Proc. ICML, pages 10686–10696, 2021.
  • Weymark (1981) J. A. Weymark. Generalized gini inequality indices. Mathematical Social Sciences, pages 409–430, 1981.
  • Williams and Rasmussen (2006) C. K. Williams and C. E. Rasmussen. Gaussian Processes for Machine Learning. MIT press, 2006.
  • Wu et al. (2023) Y. Wu, Z. Zheng, and T. Zhu. Best arm identification with fairness constraints on subpopulations. In Proc. WSC, pages 540–551, 2023.
  • Yamada et al. (2024) H. Yamada, J. Komiyama, K. Abe, and A. Iwasaki. Learning fair division from bandit feedback. In Proc. AISTATS, pages 3106–3114, 2024.
  • Zhang et al. (2021) W. Zhang, D. Zhou, L. Li, and Q. Gu. Neural Thompson sampling. In Proc. ICLR, 2021.
  • Zhou et al. (2020) D. Zhou, L. Li, and Q. Gu. Neural contextual bandits with UCB-based exploration. In Proc. ICML, pages 11492–11502, 2020.

Appendix A Regret Analysis.

A.1 Proof of Theorem 1

We first state the following result that we will use in the proof of Theorem 1.

Lemma 1 (Theorem 2 of Abbasi-Yadkori et al. (2011)).

Let $\delta\in(0,1)$, $\lambda>0$, $\hat{\theta}_{t}=M_{t}^{-1}\sum_{s=1}^{t-1}m_{s,n_{s}}y_{n_{s}}$, $\left\|\theta^{\star}\right\|_{2}\leq S$, and $\left\|m_{t,n}\right\|_{2}\leq L$ for all $t\geq 1,n\in\mathcal{N}$. Then, with probability at least $1-\delta$,

$$\left\|\hat{\theta}_{t}-\theta^{\star}\right\|_{M_{t}}\leq R\sqrt{d\log\left(\frac{1+\left(tL^{2}/\lambda\right)}{\delta}\right)}+\lambda^{\frac{1}{2}}S=\alpha_{t}.$$

See Theorem 1.

Proof.

When a linear contextual bandit algorithm is used, $h(m_{t,n_{t}},\mathcal{O}_{t})=\left(R\sqrt{d\log\left(\frac{1+\left(tL^{2}/\lambda\right)}{\delta}\right)}+\lambda^{\frac{1}{2}}S\right)\left\|m_{t,n_{t}}\right\|_{M_{t}^{-1}}$. Using Theorem 2, we have

\begin{align*}
\mathfrak{R}_{T}(\textnormal{OFD-UCB}) &\leq 2w_{\max}\sqrt{T}\sqrt{\sum_{t=1}^{T}\left[h(m_{t,n_{t}},\mathcal{O}_{t})\right]^{2}}\\
&= 2w_{\max}\sqrt{T}\sqrt{\sum_{t=1}^{T}\left[\left(R\sqrt{d\log\left(\frac{1+\left(tL^{2}/\lambda\right)}{\delta}\right)}+\lambda^{\frac{1}{2}}S\right)\left\|m_{t,n_{t}}\right\|_{M_{t}^{-1}}\right]^{2}}\\
&= 2w_{\max}\sqrt{T}\sqrt{\sum_{t=1}^{T}\left(R\sqrt{d\log\left(\frac{1+\left(tL^{2}/\lambda\right)}{\delta}\right)}+\lambda^{\frac{1}{2}}S\right)^{2}\left\|m_{t,n_{t}}\right\|_{M_{t}^{-1}}^{2}}\\
&\leq 2w_{\max}\sqrt{T}\left(R\sqrt{d\log\left(\frac{1+\left(TL^{2}/\lambda\right)}{\delta}\right)}+\lambda^{\frac{1}{2}}S\right)\sqrt{\sum_{t=1}^{T}\left\|m_{t,n_{t}}\right\|_{M_{t}^{-1}}^{2}}\\
&= 2\alpha_{T}w_{\max}\sqrt{T}\sqrt{\sum_{t=1}^{T}\left\|m_{t,n_{t}}\right\|_{M_{t}^{-1}}^{2}}\\
&\leq 2\alpha_{T}w_{\max}\sqrt{T}\sqrt{2\log\frac{\det(M_{T})}{\det(\lambda I_{d})}}\\
&\leq 2\alpha_{T}w_{\max}\sqrt{2dT\log(\lambda+TL/d)}.
\end{align*}

The second inequality holds because the confidence radius is non-decreasing in $t$, and the last two inequalities follow from Lemma 11 and Lemma 10 of Abbasi-Yadkori et al. (2011), respectively. This completes the proof of Theorem 1. ∎
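To see how the bound of Theorem 1 scales with the horizon, the following sketch (added for illustration; all constants are arbitrary) evaluates $2\alpha_{T}w_{\max}\sqrt{2dT\log(\lambda+TL/d)}$ for a few values of $T$.

import numpy as np

def regret_bound(T, d, R, S, L, lam, delta, w_max):
    """Evaluates the Theorem 1 upper bound 2 * alpha_T * w_max * sqrt(2 d T log(lambda + T L / d))."""
    alpha_T = R * np.sqrt(d * np.log((1 + T * L**2 / lam) / delta)) + np.sqrt(lam) * S
    return 2 * alpha_T * w_max * np.sqrt(2 * d * T * np.log(lam + T * L / d))

# Arbitrary illustrative constants; the growth in T is sub-linear.
for T in [10**3, 10**4, 10**5]:
    print(T, regret_bound(T, d=5, R=0.1, S=1.0, L=1.0, lam=1.0, delta=0.05, w_max=1.0))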

A.2 Proof of Theorem 2

We first state the following results, which we will use to prove the regret upper bound in Theorem 2.

Lemma 2 (Lemma 1 of Weymark (1981)).

Let $\bm{w}=(w_{1},w_{2},\ldots,w_{N})$ and $\bm{u}=(u_{1},u_{2},\ldots,u_{N})$ be oppositely ordered. If $\bm{U}=\{\bm{u}^{1},\bm{u}^{2},\ldots,\bm{u}^{p}\}$ is the set of all permutations of $\bm{u}$, then
\[
\sum_{n=1}^{N}w_{n}u_{n}\leq\sum_{n=1}^{N}w_{n}u_{n}^{j}\quad\text{ for }j=1,2,\ldots,p.
\]
Lemma 3 (Lemma 1 of Sim et al. (2021)).

Consider weights in decreasing order, $w_{1}>\ldots>w_{N}$, and let $\textrm{G}\left(\bm{U}\right)=\sum_{n\in\mathcal{N}}w_{n}\,\Phi_{n}\left(\bm{U}\right)$, where $\bm{U}$ is an $N$-dimensional utility vector and $\Phi_{n}$ returns the $n$-th smallest element of $\bm{U}$. For any two utility vectors $\bm{U}^{a}$ and $\bm{U}^{b}$, if there exists $i$ such that $U_{i}^{a}>U_{i}^{b}$ and $U_{n}^{a}=U_{n}^{b}$ for all $n\in\mathcal{N}\setminus\{i\}$, then $\textrm{G}\left(\bm{U}^{a}\right)>\textrm{G}\left(\bm{U}^{b}\right)$.

The regret analysis of our proposed algorithms depends on bounding the instantaneous regret of each allocation. The following result gives an upper bound on the instantaneous regret when using a contextual bandit algorithm $\mathfrak{A}$.

Lemma 4.

Let G be the goodness function defined in Eq. 2 and $\mathfrak{A}$ be an OFD-compatible contextual bandit algorithm with $|f_{t}^{\mathfrak{A}}(m_{t,n})-f(m_{t,n})|\leq h(m_{t,n},\mathcal{O}_{t})$. Then, the instantaneous regret incurred by UCB-$\mathfrak{A}$ for allocating item $m_{t}$ to agent $n_{t}$ is
\[
r_{t}=\textrm{G}\left(\bm{U}_{t,n_{t}^{\star}}\right)-\textrm{G}\left(\bm{U}_{t,n_{t}}\right)\leq 2w_{n_{t}}h(m_{t,n_{t}},\mathcal{O}_{t}).
\]
Proof.

Recall the definition of the goodness function, i.e., $\textrm{G}\left(\bm{U}_{t,n_{t}}\right)=\sum_{n\in\mathcal{N}}w_{n}\,\Phi_{n}\left(\bm{U}_{t,n_{t}}\right)$. Then,

\begin{align*}
r_{t}(\mathfrak{A}) &=\textrm{G}\left(\bm{U}_{t,n_{t}^{\star}}\right)-\textrm{G}\left(\bm{U}_{t,n_{t}}\right)\\
&=\sum_{n\in\mathcal{N}}w_{n}\,\Phi_{n}\left(\bm{U}_{t,n_{t}^{\star}}\right)-\sum_{n\in\mathcal{N}}w_{n}\,\Phi_{n}\left(\bm{U}_{t,n_{t}}\right)\\
&=\sum_{n\in\mathcal{N}}w_{n}\,\Phi_{n}^{\star}\left(\bm{U}_{t,n}\right)+w_{n_{t}^{\star}}f(m_{t,n_{t}^{\star}})-\sum_{n\in\mathcal{N}}w_{n}\,\Phi_{n}\left(\bm{U}_{t,n_{t}}\right)\\
&\leq\sum_{n\in\mathcal{N}}w_{n}\,\Phi_{n}^{\star}\left(\bm{U}_{t,n}\right)+w_{n_{t}^{\star}}\left[f_{t}^{\mathfrak{A}}(m_{t,n_{t}^{\star}})+h(m_{t,n_{t}^{\star}},\mathcal{O}_{t})\right]-\sum_{n\in\mathcal{N}}w_{n}\,\Phi_{n}\left(\bm{U}_{t,n_{t}}\right)\\
&\leq\sum_{n\in\mathcal{N}}w_{n}\,\Phi_{n}\left(\bm{U}_{t,n}+\left[f_{t}^{\mathfrak{A}}(m_{t,n_{t}^{\star}})+h(m_{t,n_{t}^{\star}},\mathcal{O}_{t})\right]\mathbbmss{1}\left(n=n_{t}^{\star}\right)\right)-\sum_{n\in\mathcal{N}}w_{n}\,\Phi_{n}\left(\bm{U}_{t,n_{t}}\right)\\
&\leq\sum_{n\in\mathcal{N}}w_{n}\,\Phi_{n}\left(\bm{U}_{t,n}+\left[f_{t}^{\mathfrak{A}}(m_{t,n_{t}})+h(m_{t,n_{t}},\mathcal{O}_{t})\right]\mathbbmss{1}\left(n=n_{t}\right)\right)-\sum_{n\in\mathcal{N}}w_{n}\,\Phi_{n}\left(\bm{U}_{t,n_{t}}\right)\\
&\leq\sum_{n\in\mathcal{N}}w_{n}\,\Phi_{n}^{f_{t}}\left(\bm{U}_{t,n}+\left[f_{t}^{\mathfrak{A}}(m_{t,n_{t}})+h(m_{t,n_{t}},\mathcal{O}_{t})\right]\mathbbmss{1}\left(n=n_{t}\right)\right)-\sum_{n\in\mathcal{N}}w_{n}\,\Phi_{n}\left(\bm{U}_{t,n_{t}}\right)\\
&=\sum_{n\in\mathcal{N}}w_{n}\,\Phi_{n}^{f_{t}}\left(\bm{U}_{t,n}\right)+w_{n_{t}}\left[f_{t}^{\mathfrak{A}}(m_{t,n_{t}})+h(m_{t,n_{t}},\mathcal{O}_{t})\right]-\sum_{n\in\mathcal{N}}w_{n}\,\Phi_{n}\left(\bm{U}_{t,n_{t}}\right)\\
&=w_{n_{t}}\left[f_{t}^{\mathfrak{A}}(m_{t,n_{t}})+h(m_{t,n_{t}},\mathcal{O}_{t})\right]-w_{n_{t}}f(m_{t,n_{t}})\\
&=w_{n_{t}}\left[f_{t}^{\mathfrak{A}}(m_{t,n_{t}})-f(m_{t,n_{t}})+h(m_{t,n_{t}},\mathcal{O}_{t})\right]\\
&\leq w_{n_{t}}\left[|f_{t}^{\mathfrak{A}}(m_{t,n_{t}})-f(m_{t,n_{t}})|+h(m_{t,n_{t}},\mathcal{O}_{t})\right]\\
&\leq w_{n_{t}}\left[h(m_{t,n_{t}},\mathcal{O}_{t})+h(m_{t,n_{t}},\mathcal{O}_{t})\right]=2w_{n_{t}}h(m_{t,n_{t}},\mathcal{O}_{t}).
\end{align*}

Here, $\Phi_{n}^{\star}(\cdot)$ returns the utilities in the same order as $\Phi_{n}\left(\bm{U}_{t,n_{t}^{\star}}\right)$, while $\Phi_{n}^{f_{t}}(\cdot)$ returns the utilities in the same order as $\Phi_{n}\left(\bm{U}_{t,n_{t}}\right)$; in particular, $\sum_{n\in\mathcal{N}}w_{n}\,\Phi_{n}\left(\bm{U}_{t,n_{t}}\right)=\sum_{n\in\mathcal{N}}w_{n}\,\Phi_{n}^{f_{t}}\left(\bm{U}_{t,n}\right)+w_{n_{t}}f(m_{t,n_{t}})$, which gives the cancellation used in the chain above. The first inequality follows from applying $|f_{t}^{\mathfrak{A}}(m_{t,n})-f(m_{t,n})|\leq h(m_{t,n},\mathcal{O}_{t})$ with $m_{t,n}=m_{t,n_{t}^{\star}}$ and then using the triangle inequality to get $f(m_{t,n_{t}^{\star}})\leq f_{t}^{\mathfrak{A}}(m_{t,n_{t}^{\star}})+h(m_{t,n_{t}^{\star}},\mathcal{O}_{t})$. The second inequality follows from Lemma 3, as an additional utility of $f_{t}^{\mathfrak{A}}(m_{t,n_{t}^{\star}})+h(m_{t,n_{t}^{\star}},\mathcal{O}_{t})$ is added to the total collected utility of agent $n_{t}^{\star}$. The third inequality follows from the fact that $n_{t}$ maximizes the goodness function. The fourth inequality follows from Lemma 2. The last inequality is again due to $|f_{t}^{\mathfrak{A}}(m_{t,n})-f(m_{t,n})|\leq h(m_{t,n},\mathcal{O}_{t})$. This completes the proof, showing that $r_{t}(\mathfrak{A})\leq 2w_{n_{t}}h(m_{t,n_{t}},\mathcal{O}_{t})$. ∎
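For concreteness, the following Python sketch (our illustration, not pseudocode from the paper) implements the allocation rule analyzed in Lemma 4: each agent's estimate is inflated by its confidence width $f_{t}^{\mathfrak{A}}(m_{t,n})+h(m_{t,n},\mathcal{O}_{t})$, and the item is allocated to the agent that maximizes the weighted Gini goodness function. The array names and numeric values below are illustrative assumptions.

import numpy as np

def gini_goodness(utilities, weights):
    """Weighted Gini social-evaluation: sort utilities ascending and apply
    the (non-increasing) weights, i.e., G(U) = sum_n w_n * Phi_n(U)."""
    return np.dot(weights, np.sort(utilities))

def allocate_item(cum_utilities, ucb_estimates, weights):
    """Optimistic allocation: give the current item to the agent whose
    UCB-inflated utility maximizes the goodness function."""
    scores = []
    for n, ucb in enumerate(ucb_estimates):
        candidate = cum_utilities.copy()
        candidate[n] += ucb  # f_t(m_{t,n}) + h(m_{t,n}, O_t) for agent n
        scores.append(gini_goodness(candidate, weights))
    return int(np.argmax(scores))

# Toy usage (numbers are illustrative only).
weights = np.array([0.5, 0.3, 0.2])          # non-increasing weights
cum_utilities = np.array([1.2, 0.4, 0.9])    # agents' cumulative utilities
ucb_estimates = np.array([0.3, 0.6, 0.2])    # optimistic utility estimates
print(allocate_item(cum_utilities, ucb_estimates, weights))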

See Theorem 2.

Proof.

Let $r_{t}(\mathfrak{A})$ denote the instantaneous regret in round $t$ when using the contextual bandit algorithm $\mathfrak{A}$. Using Lemma 4, the regret of $\mathfrak{A}$ after $T$ rounds is bounded as follows:

\begin{align*}
\mathfrak{R}_{T}(\mathfrak{A}) &=\sum_{t=1}^{T}r_{t}(\mathfrak{A})\leq 2\sum_{t=1}^{T}w_{n_{t}}h(m_{t,n_{t}},\mathcal{O}_{t})\leq 2w_{\max}\sum_{t=1}^{T}h(m_{t,n_{t}},\mathcal{O}_{t})\\
&\leq 2w_{\max}\sqrt{T\sum_{t=1}^{T}\left[h(m_{t,n_{t}},\mathcal{O}_{t})\right]^{2}}=2w_{\max}\sqrt{T}\sqrt{\sum_{t=1}^{T}\left[h(m_{t,n_{t}},\mathcal{O}_{t})\right]^{2}},
\end{align*}
where the last inequality follows from the Cauchy–Schwarz inequality. ∎

A.3 Proof of Theorem 3

See Theorem 3.

Proof.

We first derive an upper bound on the instantaneous regret:

\begin{align*}
r_{t}(\mathfrak{A}) &=\textrm{G}\left(\bm{U}_{t,n_{t}^{\star}}\right)-\textrm{G}\left(\bm{U}_{t,n_{t}}\right)\\
&=\textrm{G}\left(U_{t,n_{1}},\ldots,U_{t,n_{t}^{\star}}+f(m_{t,n_{t}^{\star}}),\ldots,U_{t,n_{N}}\right)-\textrm{G}\left(\bm{U}_{t,n_{t}}\right)\\
&\leq\textrm{G}\left(U_{t,n_{1}},\ldots,U_{t,n_{t}^{\star}}+f_{t}^{\mathfrak{A}}(m_{t,n_{t}^{\star}})+h(m_{t,n_{t}^{\star}},\mathcal{O}_{t}),\ldots,U_{t,n_{N}}\right)-\textrm{G}\left(\bm{U}_{t,n_{t}}\right)\\
&\leq\textrm{G}\left(U_{t,n_{1}},\ldots,U_{t,n_{t}}+f_{t}^{\mathfrak{A}}(m_{t,n_{t}})+h(m_{t,n_{t}},\mathcal{O}_{t}),\ldots,U_{t,n_{N}}\right)-\textrm{G}\left(\bm{U}_{t,n_{t}}\right)\\
&=\textrm{G}\left(U_{t,n_{1}},\ldots,U_{t,n_{t}}+f_{t}^{\mathfrak{A}}(m_{t,n_{t}})+h(m_{t,n_{t}},\mathcal{O}_{t}),\ldots,U_{t,n_{N}}\right)\\
&\qquad\qquad-\textrm{G}\left(U_{t,n_{1}},\ldots,U_{t,n_{t}}+f(m_{t,n_{t}}),\ldots,U_{t,n_{N}}\right)\\
&\leq c_{n_{t}}\left|f_{t}^{\mathfrak{A}}(m_{t,n_{t}})+h(m_{t,n_{t}},\mathcal{O}_{t})-f(m_{t,n_{t}})\right|\\
&\leq c_{n_{t}}\left[|f_{t}^{\mathfrak{A}}(m_{t,n_{t}})-f(m_{t,n_{t}})|+h(m_{t,n_{t}},\mathcal{O}_{t})\right]\\
&\leq c_{n_{t}}\left[h(m_{t,n_{t}},\mathcal{O}_{t})+h(m_{t,n_{t}},\mathcal{O}_{t})\right]=2c_{n_{t}}h(m_{t,n_{t}},\mathcal{O}_{t}).
\end{align*}

The first inequality follows from the monotonicity property of the goodness function after applying $|f_{t}^{\mathfrak{A}}(m_{t,n})-f(m_{t,n})|\leq h(m_{t,n},\mathcal{O}_{t})$ with $m_{t,n}=m_{t,n_{t}^{\star}}$ and then using the triangle inequality to get $f(m_{t,n_{t}^{\star}})\leq f_{t}^{\mathfrak{A}}(m_{t,n_{t}^{\star}})+h(m_{t,n_{t}^{\star}},\mathcal{O}_{t})$. The second inequality follows from the fact that $n_{t}$ maximizes the goodness function. The third inequality is due to the locally Lipschitz property of the goodness function, and the remaining inequalities use the triangle inequality together with $|f_{t}^{\mathfrak{A}}(m_{t,n_{t}})-f(m_{t,n_{t}})|\leq h(m_{t,n_{t}},\mathcal{O}_{t})$. Let $c_{\max}=\max\{c_{1},c_{2},\ldots,c_{N}\}$. Then, after $T$ rounds, the regret of the contextual bandit algorithm $\mathfrak{A}$ is bounded as follows:

\begin{align*}
\mathfrak{R}_{T}(\mathfrak{A}) &=\sum_{t=1}^{T}r_{t}(\mathfrak{A})\leq 2\sum_{t=1}^{T}c_{n_{t}}h(m_{t,n_{t}},\mathcal{O}_{t})\leq 2c_{\max}\sum_{t=1}^{T}h(m_{t,n_{t}},\mathcal{O}_{t})\\
&\leq 2c_{\max}\sqrt{T\sum_{t=1}^{T}\left[h(m_{t,n_{t}},\mathcal{O}_{t})\right]^{2}}=2c_{\max}\sqrt{T}\sqrt{\sum_{t=1}^{T}\left[h(m_{t,n_{t}},\mathcal{O}_{t})\right]^{2}},
\end{align*}
where the last inequality again follows from the Cauchy–Schwarz inequality. ∎
Table 1: Values of $h(x,\mathcal{O}_{t})$ for some contextual bandit algorithms.
Contextual bandit algorithm | $h(x,\mathcal{O}_{t})$
Lin-UCB (Chu et al., 2011) | $\left(R\sqrt{d\log\left(\frac{1+TL^{2}/\lambda}{\delta}\right)}+\lambda^{\frac{1}{2}}S\right)\left\|x\right\|_{M_{t}^{-1}}$
GLM-UCB (Li et al., 2017) | $\sqrt{\frac{d}{2}\log(1+2t/d)+\log(1/\delta)}\,\frac{\left\|x\right\|_{M_{t}^{-1}}}{\kappa}$
IGP-UCB (Chowdhury and Gopalan, 2017) | $\left(\sqrt{2(\gamma_{t-1}+1+\log(1/\delta))}+B\right)\sigma_{t-1}(x)$
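The widths in Table 1 can be computed directly from the quantities each algorithm maintains. The following Python sketch (added for illustration) mirrors the three table entries; all inputs, e.g., the design matrix M, the maximum information gain gamma_prev, and the GP posterior standard deviation sigma_x, are assumed to be supplied by the corresponding bandit algorithm.

import numpy as np

def h_lin_ucb(x, M, T, d, R, S, L, lam, delta):
    """Lin-UCB width: (R*sqrt(d*log((1+T*L^2/lam)/delta)) + sqrt(lam)*S) * ||x||_{M^{-1}}."""
    radius = R * np.sqrt(d * np.log((1 + T * L**2 / lam) / delta)) + np.sqrt(lam) * S
    return radius * np.sqrt(x @ np.linalg.solve(M, x))

def h_glm_ucb(x, M, t, d, delta, kappa):
    """GLM-UCB width: sqrt(d/2*log(1+2t/d) + log(1/delta)) * ||x||_{M^{-1}} / kappa."""
    radius = np.sqrt(0.5 * d * np.log(1 + 2 * t / d) + np.log(1 / delta))
    return radius * np.sqrt(x @ np.linalg.solve(M, x)) / kappa

def h_igp_ucb(sigma_x, gamma_prev, delta, B):
    """IGP-UCB width: (sqrt(2*(gamma_{t-1}+1+log(1/delta))) + B) * sigma_{t-1}(x)."""
    return (np.sqrt(2 * (gamma_prev + 1 + np.log(1 / delta))) + B) * sigma_x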

Appendix B Auxiliary Observations

We first state some observations that will be useful for proving the monotonicity and Lipschitz continuity properties of our various choices of the goodness function.

Fact 1.

Suppose $h(x)$ and $g(x)$ are Lipschitz continuous functions on a non-negative bounded domain. Then the following statements are true:

  1. $h(x)+g(x)$ is a Lipschitz continuous function.

  2. $h(x)g(x)$ is a Lipschitz continuous function.

Proof.

Let $h(x)$ and $g(x)$ be Lipschitz continuous functions on a bounded domain with Lipschitz constants $k_{1}$ and $k_{2}$, respectively.

1. We first prove that $h(x)+g(x)$ is a Lipschitz continuous function. Consider the following difference:

\[
\left|(h(x)+g(x))-(h(y)+g(y))\right|\leq\left|h(x)-h(y)\right|+\left|g(x)-g(y)\right|\leq k_{1}\left|x-y\right|+k_{2}\left|x-y\right|=k\left|x-y\right|,
\]

where $k=k_{1}+k_{2}$. The first inequality follows from the triangle inequality, while the second inequality uses Lipschitz continuity.

2. We now prove that $h(x)g(x)$ is a Lipschitz continuous function. Consider the following difference:

\begin{align*}
\left|h(x)g(x)-h(y)g(y)\right| &=\left|h(x)g(x)-h(y)g(x)+h(y)g(x)-h(y)g(y)\right|\\
&\leq\left|h(x)g(x)-h(y)g(x)\right|+\left|h(y)g(x)-h(y)g(y)\right|\\
&\leq|g(x)|\left|h(x)-h(y)\right|+|h(y)|\left|g(x)-g(y)\right|\\
&\leq Mk_{1}\left|x-y\right|+Mk_{2}\left|x-y\right|=M^{\prime}\left|x-y\right|,\quad\text{ where }M^{\prime}=M(k_{1}+k_{2}).
\end{align*}

The last inequality uses the fact that a Lipschitz continuous function on a bounded domain is bounded, i.e., $|h(x)|\leq M$ and $|g(x)|\leq M$ for some $M>0$. ∎
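As a quick numerical sanity check of Fact 1 (added here for illustration; the two functions and the constants below are arbitrary choices), the following snippet verifies the claimed Lipschitz constants for the sum and the product on a bounded domain.

import numpy as np

# Two Lipschitz functions on the bounded domain [0, 2]:
# h(x) = 2x       (Lipschitz constant k1 = 2, bounded by M = 4 on [0, 2])
# g(x) = sin(3x)  (Lipschitz constant k2 = 3, bounded by M = 4 as well)
h = lambda x: 2.0 * x
g = lambda x: np.sin(3.0 * x)
k1, k2, M = 2.0, 3.0, 4.0

xs = np.random.default_rng(0).uniform(0.0, 2.0, size=(10000, 2))
x, y = xs[:, 0], xs[:, 1]
diff = np.abs(x - y)

# Fact 1(1): |(h+g)(x) - (h+g)(y)| <= (k1 + k2)|x - y|
assert np.all(np.abs((h(x) + g(x)) - (h(y) + g(y))) <= (k1 + k2) * diff + 1e-12)
# Fact 1(2): |(h*g)(x) - (h*g)(y)| <= M(k1 + k2)|x - y|
assert np.all(np.abs(h(x) * g(x) - h(y) * g(y)) <= M * (k1 + k2) * diff + 1e-12)
print("Fact 1 bounds hold on the sampled pairs.")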

Fact 2.

Suppose $h_{i}(x)$, $1\leq i\leq n$, are Lipschitz continuous functions on a non-negative bounded domain. Then the following statements are true:

  1. $\sum_{i=1}^{n}a_{i}h_{i}(x)$ is a Lipschitz continuous function, where all $a_{i}$'s are constants.

  2. $\prod_{i=1}^{n}h_{i}(x)$ is a Lipschitz continuous function.

Proof.

1. We first prove that $\sum_{i=1}^{n}a_{i}h_{i}(x)$ is a Lipschitz continuous function. Consider the following difference:

\begin{align*}
\left|\sum_{i=1}^{n}a_{i}h_{i}(x)-\sum_{i=1}^{n}a_{i}h_{i}(y)\right| &=\left|\sum_{i=1}^{n}a_{i}\left(h_{i}(x)-h_{i}(y)\right)\right|\\
&\leq\sum_{i=1}^{n}|a_{i}|\left|h_{i}(x)-h_{i}(y)\right|\\
&\leq\sum_{i=1}^{n}k_{i}|a_{i}|\left|x-y\right|=K\left|x-y\right|,\quad\text{ where }K=\sum_{i=1}^{n}k_{i}|a_{i}|.
\end{align*}

The first inequality is due to the triangle inequality, while the second inequality uses the Lipschitz continuity of each $h_{i}$ (with Lipschitz constant $k_{i}$).

2. We now prove that $\prod_{i=1}^{n}h_{i}(x)$ is a Lipschitz continuous function. We prove this by induction. Base case: for $n=2$, the statement is true by Fact 1. Induction hypothesis: assume the statement holds for $n=m$. Inductive step (for $n=m+1$):

\begin{align*}
\left|\prod_{i=1}^{m+1}h_{i}(x)-\prod_{i=1}^{m+1}h_{i}(y)\right| &=\left|\prod_{i=1}^{m+1}h_{i}(x)-h_{m+1}(y)\prod_{i=1}^{m}h_{i}(x)+h_{m+1}(y)\prod_{i=1}^{m}h_{i}(x)-\prod_{i=1}^{m+1}h_{i}(y)\right|\\
&=\left|\left(h_{m+1}(x)-h_{m+1}(y)\right)\prod_{i=1}^{m}h_{i}(x)+h_{m+1}(y)\left(\prod_{i=1}^{m}h_{i}(x)-\prod_{i=1}^{m}h_{i}(y)\right)\right|\\
&\leq\left|\prod_{i=1}^{m}h_{i}(x)\right|\left|h_{m+1}(x)-h_{m+1}(y)\right|+\left|h_{m+1}(y)\right|\left|\prod_{i=1}^{m}h_{i}(x)-\prod_{i=1}^{m}h_{i}(y)\right|\\
&\leq\beta\left|x-y\right|.
\end{align*}

The first inequality follows from the triangle inequality. The last inequality is due to the boundedness of the functions combined with the induction hypothesis. ∎

Fact 3.

The function $\log(x)$ is Lipschitz continuous on any domain that is bounded away from zero, i.e., on $\{x:x\geq l\}$ for some $l>0$.

Proof.

Suppose $x,y\geq l>0$ and, without loss of generality, $y\geq x$ (both sides below are symmetric in $x$ and $y$). Consider the following difference:

\[
\left|\log(x)-\log(y)\right|=\left|\log\left(\frac{y}{x}\right)\right|=\left|\log\left(1+\frac{y}{x}-1\right)\right|\leq\left|\frac{y}{x}-1\right|\leq\frac{1}{l}\left|x-y\right|.
\]

The first inequality uses $\log(1+z)\leq z$ for $z\geq 0$, and the second uses $x\geq l$. This proves our claim. ∎

Fact 4.

Suppose $h(x)$ is a Lipschitz continuous function that is bounded and bounded away from zero; then $\log(h(x))$ is a Lipschitz continuous function.

Proof.

Suppose $h(x)\geq l>0$ for all $x$, and let $k$ be the Lipschitz constant of $h$. Without loss of generality, assume $h(y)\geq h(x)$, and consider the following difference:

\[
\left|\log\left(h(x)\right)-\log\left(h(y)\right)\right|=\left|\log\left(\frac{h(y)}{h(x)}\right)\right|=\left|\log\left(1+\frac{h(y)}{h(x)}-1\right)\right|\leq\left|\frac{h(y)}{h(x)}-1\right|\leq\frac{1}{l}\left|h(x)-h(y)\right|\leq\frac{k}{l}\left|x-y\right|.
\]

The first inequality uses $\log(1+z)\leq z$ for $z\geq 0$, as in Fact 3, and the last inequality uses the Lipschitz continuity of $h$. This completes the proof. ∎

B.1 Goodness functions that are locally monotonically non-decreasing and Lipschitz continuous

We will briefly discuss the different choices of the goodness function for which we can theoretically guarantee a sub-linear regret upper bound.

Weighted Gini social-evaluation function.

The weighted Gini social-evaluation function is given as follows:

\[
\textrm{G}\left(\bm{U}_{t,n_{t}}\right)=\sum_{n\in\mathcal{N}}w_{n}\,\Phi_{n}\left(\bm{U}_{t,n_{t}}\right).
\]

Since all $w_{n}$'s are positive constants with $w_{\max}=\max_{n\in\mathcal{N}}w_{n}$, and the $\Phi_{n}\left(\bm{U}_{t,n_{t}}\right)$ are bounded, positive, continuous functions, the weighted Gini social-evaluation function satisfies the locally Lipschitz condition. Moreover, if we increase the $i$-th component while keeping the remaining components fixed, the local monotonicity (non-decreasing) condition holds (this also follows from Lemma 3). Therefore, Theorem 3 (with $c_{\max}=w_{\max}$) also holds for the weighted Gini social-evaluation function.

Targeted weights.

Let us define a fixed target weight vector in advance, with the learner's goal being to achieve this targeted weight distribution over the course of the allocations. These target weights represent the desired fraction of the total cumulative utility that each agent should receive. Let $\text{SU}_{n_{t}}=f(m_{t},n_{t})+\sum_{n\in\mathcal{N}}U_{t}^{n}$ be the total utility in the system after allocating item $m_{t}$ to agent $n_{t}$. Suppose the target fraction vector $\mathbf{r}^{*}$ is given. The learner's goal is to reach the targeted weight vector at the end of the allocation process. The goodness function for this fairness constraint is defined as follows:

\[
\textrm{G}\left(\mathbf{U}_{t,n_{t}}\right)=\sum_{n\in\mathcal{N}}w_{n}\,\Phi_{n}\left(\mathbf{U}_{t,n_{t}}^{\mathbf{p}}\right),
\]

where $w_{1}=1$ and $w_{i}=0$ for $i\geq 2$ (i.e., the egalitarian social welfare, ESW). The utility $U_{t,n}^{\mathbf{p}}$ in $\mathbf{U}_{t,n_{t}}^{\mathbf{p}}$ is defined as $U_{t,n}^{\mathbf{p}}=U_{t,n}/p_{n}$, where $p_{n}$ is the proportional ratio for the $n$-th agent. Let $\mathbf{r}^{*}=(r_{1},\ldots,r_{N})$ be the vector of targeted ratios of each agent's cumulative utility to the total utility collected by all agents (i.e., the system's total utility), where $\sum_{n=1}^{N}r_{n}=1$. Then, $p_{n}=r_{n}/\min_{i}(r_{i})$; e.g., if $N=3$ and $\mathbf{r}^{*}=(0.2,0.5,0.3)$, then $\mathbf{p}=(0.2/0.2,0.5/0.2,0.3/0.2)=(1,2.5,1.5)$.
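The following Python sketch (our illustration; the cumulative utilities are arbitrary numbers) computes the targeted-weights goodness value: it forms $p_{n}=r_{n}/\min_{i}r_{i}$, scales the cumulative utilities to $U_{n}/p_{n}$, and applies the ESW weights, which reduces to taking the minimum scaled utility.

import numpy as np

def target_weight_goodness(cum_utilities, target_ratios):
    """ESW-style goodness for targeted weights: min_n U_n / p_n with p_n = r_n / min_i r_i."""
    target_ratios = np.asarray(target_ratios, dtype=float)
    p = target_ratios / target_ratios.min()         # proportional ratios p_n
    scaled = np.asarray(cum_utilities, dtype=float) / p
    # Weights w_1 = 1 and w_i = 0 for i >= 2, applied to sorted scaled utilities,
    # reduce to taking the minimum scaled utility.
    return scaled.min()

# Example from the text: r* = (0.2, 0.5, 0.3) gives p = (1, 2.5, 1.5).
r_star = [0.2, 0.5, 0.3]
cum_utilities = [1.0, 2.0, 1.2]  # illustrative cumulative utilities
print(target_weight_goodness(cum_utilities, r_star))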

Nash Social Welfare.

The Nash social welfare (NSW) is defined as the product of the utilities of all agents, i.e., $\prod_{n\in\mathcal{N}}U_{n}$. Since we are maximizing the goodness function, we can ignore the $\frac{1}{|\mathcal{N}|}$ normalization here. First, observe that the goodness function is $\textrm{G}(\cdot):\mathbb{R}^{N}\mapsto\mathbb{R}^{+}$. Since each agent's utility is positive, the NSW satisfies the monotonically non-decreasing property. Moreover, as we assume that each agent's utility is positive and bounded, Fact 2 implies that the NSW is locally Lipschitz continuous. As a result, the properties given in 2 hold for the NSW.

Log Nash Social Welfare.

This is defined as the $\log$ of the Nash social welfare function and is commonly considered in the fair division literature (Cole and Gkatzelis, 2018; Talebi and Proutiere, 2018).

\[
\textrm{G}\left(\bm{U}_{t,n_{t}}\right)=\log\left[\prod_{n\in\mathcal{N}}\left(U_{t}^{n}+f(m_{t},n_{t})\mathbbmss{1}\left(n=n_{t}\right)\right)\right]=\sum_{n\in\mathcal{N}}\log\left(U_{t}^{n}+f(m_{t},n_{t})\mathbbmss{1}\left(n=n_{t}\right)\right).
\]

We assume that the utility function is bounded, positive, and bounded away from zero. Using Fact 1 to Fact 4, it is easy to observe that the $\log$ NSW is a Lipschitz continuous function. Also, the $\log$ NSW satisfies the monotonicity property. By virtue of these observations, we can guarantee sub-linear regret, i.e., Theorem 3 holds.
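To close, here is a minimal Python sketch (added for illustration; the utility numbers and the use of optimistic per-agent estimates are assumptions in the spirit of the UCB-based allocation) that evaluates the log NSW goodness of each candidate allocation and picks the maximizing agent.

import numpy as np

def log_nsw_goodness(cum_utilities, item_utility, agent):
    """Log Nash social welfare after tentatively allocating the current item
    (with estimated utility item_utility) to the given agent."""
    u = np.asarray(cum_utilities, dtype=float).copy()
    u[agent] += item_utility
    return np.sum(np.log(u))  # assumes all entries stay strictly positive

# Illustrative example: pick the agent whose allocation maximizes log NSW.
cum_utilities = [1.5, 0.6, 1.1]   # agents' cumulative utilities (assumed > 0)
ucb_utilities = [0.4, 0.7, 0.3]   # optimistic utility estimates per agent
scores = [log_nsw_goodness(cum_utilities, ucb_utilities[n], n) for n in range(3)]
print(int(np.argmax(scores)))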