Repeated Contracting with Multiple Non-Myopic Agents:
Policy Regret and Limited Liability
We study a repeated contracting setting in which a Principal adaptively chooses amongst $k$ Agents at each of $T$ rounds. The Agents are non-myopic, and so a mechanism for the Principal induces a $T$-round extensive form game amongst the Agents. We give several results aimed at understanding an under-explored aspect of contract theory — the game induced when choosing an Agent to contract with. First, we show that this game admits a pure-strategy non-responsive equilibrium amongst the Agents — informally, an equilibrium in which each Agent’s actions depend on the history of realized states of nature, but not on the history of the other Agents’ actions, and which therefore avoids the complexities of collusion and threats. Next, we show that if the Principal selects Agents using a monotone bandit algorithm, then for any concave contract, in any such equilibrium, the Principal obtains no regret to contracting with the best Agent in hindsight — not just given their realized actions, but also with respect to the counterfactual world in which they had offered a guaranteed $T$-round contract to the best Agent in hindsight, which would have induced a different sequence of actions. Finally, we show that if the Principal selects Agents using a monotone bandit algorithm which guarantees no swap-regret, then the Principal can additionally offer only limited liability contracts (in which the Agent never needs to pay the Principal) while getting no regret with respect to the counterfactual world in which she offered a linear contract to the best Agent in hindsight — despite the fact that linear contracts are not limited liability. We instantiate this theorem by demonstrating the existence of a monotone no swap-regret bandit algorithm, which to our knowledge has not previously appeared in the literature.
1 Introduction
Principal-Agent problems arise when a Principal seeks to hire a third party (an Agent) to perform some costly task on their behalf. Typically the action of the Agent (e.g. the effort that they put in) is unobservable to the Principal — all that the Principal observes is the resulting outcome, which is a function both of the Agent’s action and some unknown state of nature. The design problem for the Principal is to find a contract (a mapping from outcomes to payments to the Agent) that maximizes their expected utility less the payment, when the Agent best responds so as to maximize their expected payment, less their cost. Classical contract theory concerns itself with the design of the optimal contract for the Principal, in a static setting with a single Agent, in which the Principal and the Agent share a common prior belief about the distribution over states of nature. This standard setting is well understood, and linear contracts (which pay the Agent a constant fraction of the Principal’s realized value) turn out to be robustly optimal in several different senses (Carroll, 2015; Dütting et al., 2019, 2022; Castiglioni et al., 2021).
Of course, each of the standard assumptions can be subject to challenge. For example, how does the Principal choose the Agent to hire? In many markets there is an aspect of competition between Agents that is unmodelled in a standard Principal/Agent setting. What if the distribution over states of nature deviates from the expectations of the Principal and Agent—i.e. what if their beliefs are mis-specified? In this case, what guarantees can be given to the Principal? And finally, many contracts (like linear contracts) can require negative payments to the Agent in certain realizations — i.e. they can obligate the Agent to pay the Principal, which is generally unimplementable in practice. Contracts that never obligate the Agent to pay the Principal are said to satisfy the limited liability condition, and are more realistic, but complicate the design of optimal contracts. These are the problems that we focus on in this paper.
As a running example, consider a college endowment that wants to hire a fund manager to invest its capital. The goal of the endowment is to maximize the return on its investment (minus the fees it pays). There are $k$ fund managers that the endowment can choose between, each of which must be paid a fee, as well as a costless outside option (say, investing the money in an index fund). A linear contract in this setting pays the Agent a fixed percentage (say 10%) of its excess performance above that of the index fund. Since this might be negative (if the Agent does not outperform the index fund), a linear contract is not limited liability.
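For intuition, a minimal sketch of this payment rule is below (the 10% share, the index-fund baseline, and the function name are illustrative choices from the running example, not part of any formal model); it shows that the payment can be negative, which is exactly the failure of limited liability.

```python
def linear_contract_payment(fund_return: float, index_return: float, share: float = 0.10) -> float:
    """Pay the Agent a fixed share of their excess return over the costless outside option."""
    return share * (fund_return - index_return)

# The fund outperforms the index: the Agent receives a positive payment.
print(linear_contract_payment(fund_return=0.08, index_return=0.05))  # ≈ 0.003

# The fund underperforms the index: the "payment" is negative, i.e. the Agent
# would owe the Principal money, so this linear contract is not limited liability.
print(linear_contract_payment(fund_return=0.02, index_return=0.05))  # ≈ -0.003
```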
In our running example, the problem that the Principal needs to solve in the most complex setting considered in our paper is to choose a fund manager amongst the $k$ candidates, or the outside option, in each of $T$ sequential rounds. In each round, the Principal offers the chosen fund manager a contract, which must be limited liability. In fixing a method for doing this, the Principal is inducing a $T$-round extensive form game amongst the fund managers, who are not myopic, and who will play an equilibrium of this game, given their beliefs about the distribution on the sequence of states of nature to be realized. We wish to guarantee to the Principal that in the equilibrium of this induced game, their cumulative utility is at least what it would have been had they selected the best of the $k$ fund managers in hindsight and offered them a (non-limited-liability) linear contract, guaranteed for all $T$ rounds, no matter what the sequence of states of nature turns out to be (i.e. even if the beliefs of the fund managers turn out to be incorrect). Note that what we are aiming for is a form of policy regret: in the actual play-out, the fund managers are playing an equilibrium strategy in a $k$-player extensive form game, but in the counterfactual that we use as our benchmark, the chosen fund manager is guaranteed a fixed contract over all $T$ rounds, and so is simply best responding given their beliefs.
We now lay out our results in a bit more detail.
We study a repeated Principal/Agent problem over $T$ rounds. We model the action space for each Agent as a bounded set of real-valued effort levels, associated with a convex increasing cost function. In each round $t$:
1. The Principal chooses amongst a set of $k$ Agents as well as a costless outside option. If the Principal chooses one of the Agents, he offers them a concave contract (a mapping from Principal payoffs to Agent payments) for that round.
2. The chosen Agent then chooses an action (an effort level). Then, a state of nature is realized according to an arbitrary (possibly adversarial) process, which results in a payoff for the Principal and a payment to the Agent as prescribed by the contract. In that round, the Principal gets utility equal to their payoff less the payment, and the Agent gets utility equal to the payment less the cost of their chosen effort. We assume throughout that for each state of nature, the Principal's reward is a non-decreasing linear function of the Agent's effort level (and can depend on the state of nature in arbitrary ways).
At the end of each round, the Principal as well as the Agents observe the identity of the chosen Agent and the realized payoff, which can be used to inform the Principal's choice of Agents in the following rounds. Counterfactuals of the sort "what would the reward have been had the Principal chosen a different Agent" are not observed. A strategy for an Agent in this game is a mapping from transcripts (which record both the realized states of nature and the actions of the other players at all previous rounds) and current-round contracts to effort levels. The Agents each have (possibly mis-specified) beliefs over the distribution of the state of nature at each round (which can also be arbitrary functions of the transcript). An algorithm for the Principal maps their observed history to a (distribution over) Agents to choose at each round, as well as a contract to offer. Once the Principal fixes an algorithm, this induces a $T$-round extensive form game amongst the Agents. We first show the existence of a particularly simple kind of equilibrium in this game.
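To make the order of play concrete, here is a schematic sketch of the repeated interaction (all function and attribute names, such as select_agent, choose_effort, draw_return, and cost, are illustrative placeholders for the abstract objects defined formally in Section 2, not the paper's notation):

```python
def run_contracting_game(T, agents, contract, select_agent, draw_return, nature):
    """Schematic playout of the repeated contracting interaction described above.

    agents:       objects with choose_effort(history, contract), cost(effort), and a .utility field
    contract:     maps the Principal's realized payoff to a payment to the Agent
    select_agent: the Principal's selection rule, a function of the Principal's observed history only
    draw_return:  maps (agent index, effort, state of nature) to the Principal's realized payoff
    nature:       maps the round index to a state of nature (possibly adversarially chosen)
    """
    history = []            # the Principal's observations: (chosen Agent, realized payoff) per round
    principal_utility = 0.0
    for t in range(T):
        i = select_agent(history)                # 1. Principal picks an Agent (or None = outside option)
        if i is None:
            history.append((None, 0.0))
            continue
        effort = agents[i].choose_effort(history, contract)    # 2. chosen Agent exerts effort...
        payoff = draw_return(i, effort, nature(t))             #    ...and a state of nature is realized
        payment = contract(payoff)
        principal_utility += payoff - payment                  # Principal: payoff minus payment
        agents[i].utility += payment - agents[i].cost(effort)  # Agent: payment minus cost of effort
        history.append((i, payoff))                            # only the chosen Agent's payoff is observed
    return principal_utility
```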
Non-Responsive Pure Strategy Equilibrium Existence
Extensive form games can have many equilibria that support a wide range of behavior through threats between players, which limits the predictive power of Nash equilibria on their own. We say that an equilibrium is non-responsive if each player’s strategy at each round depends on the realized sequence of states of nature, but not on the history of actions of the other players. In Section 3.2, we prove a structural result: the game we study always admits a non-responsive pure-strategy Nash equilibrium. With this existence result in hand, in all of our subsequent results, we will give guarantees to the Principal under the assumption that the Agents are playing a non-responsive pure-strategy equilibrium of the extensive form game induced by the Principal’s algorithm.
Regret to the Best Fixed Agent
For our first algorithmic result (in Section 4), we fix any contract which is concave in the Principal's reward (i.e. offers the Agent a payment that is a linear or diminishing-marginal-returns function of the Principal's reward), and give an extremely simple algorithm for the Principal. At each round, the Principal selects an Agent using a monotone bandit algorithm with an external regret guarantee and offers that contract. The regret guarantee of the bandit algorithm immediately implies that the Principal has diminishing regret to the realized rewards of the best Agent in hindsight. However, these can differ from the counterfactual rewards of giving the best Agent in hindsight a fixed contract, because the realized rewards are generated by actions the Agent played in a strategic game. What we show is that in the game induced by this mechanism, the rewards realized by each Agent are only higher than what they would have been counterfactually, had that Agent been given a fixed contract for all $T$ rounds. Thus the Principal has no regret to our counterfactual benchmark as well. This result hinges on the monotonicity of the bandit algorithm, which means that unilaterally increasing the reward of a single Agent in a single round only increases the probability that this Agent is chosen in future rounds. Intuitively, in the competitive game induced by the mechanism, Agents are incentivized to spend more effort than they would have otherwise, because the payoff they get is not just their one-round payoff, but also their continuation payoff over future rounds, which is increasing in their current-round effort level. Monotone bandit algorithms with external regret guarantees are already known to exist (Babaioff et al., 2009).
Swap Regret and Limited Liability Contracts
Our first result gives a mechanism that uses the same contract that its regret benchmark is defined with respect to. Thus if the benchmark contract is not a limited liability contract, our mechanism also is not limited liability. This is problematic, because linear contracts, which are known to be robustly optimal in a variety of settings (Carroll, 2015; Dütting et al., 2019, 2022; Castiglioni et al., 2021) and so would make ideal benchmarks, are not limited liability. Our second result (Section 5) gives an incentive-preserving reduction to limited liability contracts for any linear contract. We use a construction similar to the one Chassang (2013) used to give a limited-liability mechanism for a single Agent. Informally, each Agent can open a “tab” with our Principal, which records the cumulative payment owed from the Principal to the Agent after each round. The Principal does not pay Agents throughout, and instead pays each Agent in a lump sum at the end of the game according to their tab — except that if the final tab indicates a negative payment from the Principal to that Agent, the Agent’s debt is forgiven and the Principal pays them zero. A property of this setup is that if, at the end of the $T$ rounds, the linear contract would have resulted in a net transfer from the Principal to the Agent, the cumulative payments made under the tab are identical to what they would have been under the linear contract; if on the other hand the linear contract would have resulted in a net transfer from the Agent to the Principal, the cumulative payments made under the tab deviate from those of the linear contract. The tab enforces limited liability, but serves to limit the downside of the Agent, and so can be incentive-distorting. We show that if the Principal chooses Agents using an algorithm that has diminishing swap regret (to the realized rewards), then the cumulative payment to the Agent under the tab is guaranteed to be non-negative (up to diminishing regret terms). This means that (again, up to low order regret terms) the utility for the Agent of playing any strategy is the same as it would have been had the mechanism been offering the linear contract rather than its limited liability proxy. We show that if the bandit algorithm used by the Principal obtains no swap-regret and is monotone (once again in the sense that unilaterally increasing the reward of a single Agent in a single round only increases the probability of choosing that Agent in future rounds), then in equilibrium, the Agents are incentivized to expend more effort than they would have had they been offered a guaranteed linear contract for all rounds, and hence the no (swap) regret guarantee of the Principal’s mechanism over the realized rewards extends to a no (swap) regret guarantee with respect to the counterfactual rewards that would have been obtained had the Principal picked the best fixed Agent in hindsight. We show that standard no swap-regret algorithms like Blum and Mansour (2007) are not monotone in this sense — but that the recent no swap-regret algorithms given by Dagan et al. (2023) and Peng and Rubinstein (2023) are. These algorithms operate in the full information setting, but we give a monotonicity preserving reduction to the bandit setting. Thus we can use these algorithms in our reduction to promise the Principal no counterfactual (swap) regret to choosing the best Agent in hindsight (and offering them a fixed linear contract), while needing to offer only limited liability contracts.
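To illustrate the bookkeeping behind the tab, here is a minimal sketch (the class name and the restriction to a one-parameter linear contract are our own illustrative simplifications): per-round linear payments are accumulated but never transferred, and any negative final balance is forgiven at settlement.

```python
from collections import defaultdict

class LimitedLiabilityTab:
    """Accumulate the payments a linear contract would prescribe, settling only at the end."""

    def __init__(self, share: float):
        self.share = share              # the linear contract's share of the Principal's reward
        self.tab = defaultdict(float)   # Agent index -> cumulative payment owed so far

    def record_round(self, agent: int, principal_reward: float) -> None:
        # The per-round linear payment may be negative, but no money changes hands yet.
        self.tab[agent] += self.share * principal_reward

    def settle(self) -> dict:
        # Lump-sum payments at the end of the T rounds: a negative tab (a debt owed by the
        # Agent) is forgiven, so every realized payment is non-negative (limited liability).
        return {agent: max(0.0, owed) for agent, owed in self.tab.items()}
```

When the final tab is non-negative, the settlement coincides exactly with the cumulative payments of the linear contract, and this is precisely the case that the swap-regret argument guarantees up to low order terms.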
1.1 Related Work
The origins of contract theory date to Holmström (1979) and Grossman and Hart (1992). This literature is too large to survey — we refer the reader to Bolton and Dewatripont (2004) for a textbook introduction.
Our work fits into a recent literature studying learning problems in contract theory. Ho et al. (2014) study online contract design by approaching it as a bandit problem in which an unknown distribution over myopic Agents arrive and respond to an offered contract by optimizing their expected utility with respect to a common prior. Cohen et al. (2022) extend this to the case in which the Agent has bounded risk aversion. Zhu et al. (2022) revisit this problem and characterize the sample complexity of online contract design in general (with nearly matching upper and lower bounds) and for the special case of linear contracts (with exactly matching upper and lower bounds). Camara et al. (2020) and Collina et al. (2023) study the problem of repeatedly contracting with a single non-myopic Agent in a setting in which there are no common prior assumptions, and the single Agent is assumed to follow various learning-theoretic behavioural assumptions (that their empirical play history satisfies various strengthenings of swap regret conditions). We distinguish ourselves from this line of work by studying the game induced by competition amongst multiple non-myopic Agents.
Chassang (2013) studies a repeated interaction between a Principal and a long-lived Agent, with a focus on the limited liability problem. In the context of a single Agent and an outside option (and in a fully observable setting), Chassang (2013) gives a similar “debt scheme” to simulate the incentives of a linear contract using limited liability contracts. Our work extends Chassang (2013) in several ways: we study a competitive setting amongst many Agents, operate in a partially observable setting (we do not observe the outcome of Agents we did not select), and give counterfactual policy regret bounds (rather than regret bounds to the realized rewards).
Our limited liability results hold for linear contracts, which have become focal in the contract theory literature because of their strong robustness properties. Carroll (2015) shows that linear contracts are minimax optimal for a Principal who knows some but not all of the Agent’s actions. Dütting et al. (2019) shows that if the Principal only knows the costs and expected rewards for each Agent action, then linear contracts are minimax optimal over the set of all reward distributions with the given expectation. Dütting et al. (2022) extends this robustness result to a combinatorial setting. Dütting et al. (2019) also show linear contracts are bounded approximations to optimal contracts, where the approximation factor can be bounded in terms of various quantities (e.g. the number of Agent actions, or the ratio of the largest to smallest reward, or the ratio of the largest to smallest cost, etc). Castiglioni et al. (2021) studies linear contracts in Bayesian settings (when the Principal knows a distribution over types from which the Agent’s type is drawn) and studies how well linear contracts can approximate optimal contracts.
Our results leverage the fact that in our mechanisms, the Principal is using a monotone selection algorithm. The same kind of monotonicity was used by Babaioff et al. (2009) in the context of using bandit algorithms to give truthful auctions. Babaioff et al. (2009) used a monotone algorithm for obtaining no external regret. We give a monotone algorithm for obtaining no swap-regret by giving a monotonicity preserving reduction to the full information swap regret algorithms recently given by Dagan et al. (2023); Peng and Rubinstein (2023), which we note are monotone. Prior swap regret algorithms (e.g. Blum and Mansour (2007)) even in the full information setting are (perhaps surprisingly) not monotone, as we show.
2 Preliminaries
We study a multi-round, multi-Agent contracting problem. To formalize the model, we first focus on what happens in a single round.
2.1 Single Round Contracting Model
In each round, the Principal would like to incentivize some Agent to complete a task. The Principal will offer a fixed contract to one of $k$ Agents, or pick the “outside option” of not contracting at all. The task has multiple possible outcomes, each of which corresponds to a different value for the Principal.
Definition 1 (Outcome and Return).
There are finitely many possible outcomes, each of which is associated with a return for the Principal.
Definition 2 (Contract).
A contract is a payment rule mapping the Principal’s return to a payment made to an Agent.
Throughout this paper, we will make the following natural assumption on contracts — that higher reward to the Principal corresponds to higher payment to the Agent, and that payments exhibit (weakly) decreasing marginal returns.
Assumption 1.
Contracts are concave and increasing functions.
If selected by the Principal, an Agent responds to the contract with an action from their action set.
Definition 3 (Agent Action and Cost Function).
The action space for each Agent is the set of “effort levels” that they can exert towards the Principal’s task. Each effort level has an associated cost, possibly distinct for each Agent, given by that Agent’s cost function.
Throughout the paper, we will assume that Agent cost increases in effort and exhibits (weakly) increasing marginal cost to effort. Note that increasing marginal cost is consistent with an assumption of decreasing marginal return to effort.
Assumption 2.
Agent cost functions are increasing and convex in effort level.
There is also some hidden state of nature in each round. The probability distribution over outcomes for an Agent is dictated by their chosen action and the state of nature.
Definition 4 (Outcome Distribution).
For each Agent and each outcome there is a function mapping an effort level and a state of nature to a probability, and each such function is affine in the effort level. Furthermore, for every Agent, every effort level, and every state of nature, these probabilities are non-negative and sum to one over the possible outcomes.
We assume that the probability that a given outcome occurs for an Agent, given their effort level and the state of nature, is the value of the corresponding function. By the constraints given above on these functions, this always forms a valid probability distribution, which we refer to as the Agent's outcome distribution. Additionally, we will refer explicitly to the randomness used to sample from this distribution, so that fixing the Agent, their effort level, the state of nature, and this randomness also fixes the sampled outcome. We define the expected reward as the expectation of the Principal's return under this distribution of outcomes, and the expected payment to the Agent as the expectation of the contractual payment over the same distribution.
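As a concrete illustration of these objects (the affine coefficients, outcome returns, and the 20% linear contract below are made-up numbers, and the array-based representation is ours), the following sketch computes an Agent's outcome distribution for a given effort level and state of nature, together with the induced expected reward and expected payment:

```python
import numpy as np

def outcome_distribution(slopes, intercepts, effort):
    """Outcome probabilities are affine in effort: p_j = slopes[j] * effort + intercepts[j],
    where the coefficients already encode the dependence on the state of nature."""
    probs = slopes * effort + intercepts
    assert np.all(probs >= 0) and np.isclose(probs.sum(), 1.0), "must form a valid distribution"
    return probs

def expected_reward_and_payment(slopes, intercepts, returns, contract, effort):
    probs = outcome_distribution(slopes, intercepts, effort)
    expected_reward = float(probs @ returns)                                    # Principal's expected return
    expected_payment = float(probs @ np.array([contract(r) for r in returns]))  # Agent's expected payment
    return expected_reward, expected_payment

# Two outcomes with returns 1 and 0, probabilities affine in effort (valid on [0, 1]),
# and a 20% linear contract.
slopes, intercepts, returns = np.array([0.5, -0.5]), np.array([0.2, 0.8]), np.array([1.0, 0.0])
print(expected_reward_and_payment(slopes, intercepts, returns, lambda r: 0.2 * r, effort=0.6))
# ≈ (0.5, 0.1): expected reward 0.5, expected payment 0.1
```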
Remark 1.
If the returns and the outcome distribution functions are such that effort level zero leads to zero return with probability one and the cost associated with zero effort is also zero, then we can model the ability of each Agent to leave the interaction at any time by playing effort level zero for all remaining time steps. This also ensures that Agents always have non-negative expected utility (no matter what their beliefs are) under their best response.
After a selected Agent takes an action, which induces an outcome and a corresponding Principal reward, the Principal pays them according to the contract.
Definition 5 (Expected Single Round Agent Utility).
The expected single round utility of an Agent, given that they are selected, is their expected payment under the contract less the cost of their chosen effort level.
The Principal’s total utility that round is their reward minus this payment.
Definition 6 (Expected Single Round Principal Utility).
The expected single round utility of the Principal, given that they select an Agent playing some action, is the expected reward less the expected payment to that Agent. The single round utility if they select the outside option is zero, regardless of the state of nature.
Finally, we will make the natural assumption that if the Agent expends more effort, this only improves their expected return.
Assumption 3 (Monotone Relationship between Effort and Expected Return).
For any Agent, any state of nature, and any two effort levels, if one effort level is at least as large as the other, then the expected return under the larger effort level is at least as large as the expected return under the smaller one.
2.2 Repeated Contracting Game
In the repeated version of this game, the Principal can select which Agent to contract with each round based on the history of play. However, the Principal cannot observe the history of actions or the states of nature. The Principal only observes the history of realized returns of the selected Agents (this is a crucial aspect of contract theory — the Principal must contract on the returns, and not directly on the actions of the Agent). Thus, we can think of the state of the Principal at round $t$ as being determined by the history of his selections and of the returns realized in rounds $1$ through $t-1$.
Definition 7 (Principal Transcript).
We call the history of the selected Agents in each round and their realized returns to the Principal the Principal transcript, denoted . The space of all possible Principal transcripts is denoted .
Using only this transcript, the Principal must decide which Agent to contract with in each round. It will be useful to explicitly refer to the randomness used by the Principal in their selection mechanism when choosing an Agent in each round, and to view the randomized selection of the Principal as a deterministic selection mechanism that takes this randomness as an additional input.
Definition 8 (Principal Selection Mechanism ).
In each round , the Principal uses a deterministic selection mechanism to select an Agent (or the outside option). We will refer to the entire mechanism, consisting of all components, as . The mechanism is known to the Agents.
In order to model Agent decisions, it is not enough that they know the Principal selection mechanism — they must also have beliefs over the future realizations of the states of nature. We allow each Agent to have a (potentially incorrect) belief about the future states of nature at each round, which may be a function of the previously realized states of nature. While Agent beliefs may differ, we assume that all Agents are aware of the beliefs of all other Agents at that round. The Principal does not have any knowledge of these beliefs.
Definition 9 (Belief functions ).
Each Agent has, for each round, a belief function mapping any prefix of realized states of nature to a distribution over the remaining states of nature. This represents the Agent's belief, at the start of that round, of the remaining states of nature given the prefix of states of nature realized so far. The Agent's initial belief is their belief over the full length-$T$ sequence of nature states. Additionally, for any pair of future rounds, we refer to the Agent's belief at the current round of the states of nature in between those rounds, given some input of previous states of nature. Furthermore, we refer to the set of beliefs of all Agents at a given round given the realized prefix. Agents are aware of each other's belief functions.
Assumption 4 (Full Support Beliefs).
We assume that the beliefs of each Agent have full support over all possible sequences of states of nature. We make no assumption on the magnitude of the minimum probability, only that it is non-zero on each sequence.
The selection function and the set of Agent belief functions together define a $k$-player extensive form game amongst the Agents, in which each Agent aims to maximize their utility over the entire time horizon of the interaction.
The Agents are more informed than the Principal, so they do have access to the history of states of nature and the actions of other Agents. Therefore, their state at each round is more complex, and determined by the following transcript.
Definition 10 (Agent Transcript).
We call the history of the realized states of nature, the history of the random bits used to sample the selected Agent's outcome, the history of the random bits used by the Principal, and the history of actions of the selected Agents an Agent transcript. We refer to a single element of an Agent transcript as the entry for that round. The space of all possible Agent transcripts is denoted accordingly.
Note that the Agent transcript contains strictly more information than the Principal transcript in any round. The strategy space of each Agent is the set of policies mapping Agent transcripts to actions in each round.
Definition 11 (Agent Policy).
An Agent policy is a mapping from any -length transcript to a distribution over actions. We will write as the distribution over actions output by Agent at round . Let be the set of all Agent policies.
Taken together, a Principal selection mechanism , a set of Agent policies , and a distribution over states of nature fully define a distribution over transcripts of the extensive form game. When useful, we will consider the function which generates a distribution of transcripts given these inputs. This function will generate the distribution of all possible transcripts that may occur based on a distribution over the future states of nature which is taken as an input. Namely, the additional randomness over which it is generating a distribution is the randomness in the other components of the transcript, which the Agent beliefs do not concern themselves with - the randomness used by the Principal in their selection mechanism and the randomness used to sample the outcome from the selected Agent’s outcome distribution in each round.
Definition 12 (Transcript Generating Function).
Given a Principal with a selection mechanism and a set of Agent policies, the transcript generating function takes as input a distribution over states of nature and outputs a distribution over transcripts. Given some prefix of the transcript of a given length, we can also write the corresponding function to denote the distribution over transcripts until a given round. When useful, we will also allow this function to take as input a deterministic sequence of nature states.
Now, we are ready to describe the utility of each Agent in the game defined by and belief function . We will describe this for every subgame of the game, defined by an Agent transcript prefix. While the equilibrium guarantees of our mechanisms do not require subgame perfection, it will be useful for us in our analysis to think about the payoffs in particular subgames.
Definition 13 (Agent Subgame Utility).
At a given round, given a previously realized transcript with its states of nature, an Agent's payoff function for a policy is defined as the expected sum of their single round utilities over the remaining rounds, where the expectation is taken over the Agent's belief about the remaining states of nature and over the distribution of transcripts generated by the selection mechanism and the Agent policies.
In some cases, it will also be useful to refer to an Agent's realized utility over a transcript prefix, which includes all elements relevant to the Agent's utility – namely, the Principal's selection in each round and the outcome of the selected Agent's action.
Definition 14 (Principal Utility).
The Principal's payoff function for a selection mechanism, a set of Agent policies, and a distribution over state of nature sequences is defined as the expected sum of the Principal's single round utilities over all $T$ rounds, where the expectation is taken over the induced distribution over transcripts.
We also refer to the realized utility of the Principal over some Principal transcript.
Definition 15 (Agent Best Response).
Given a set of belief functions and the policies of the other Agents, an Agent's policy is a best response if no alternative policy gives them strictly higher expected utility under their beliefs.
Definition 16 (Agent Nash Equilibrium).
Given a set of belief functions and a Principal selection mechanism, Agent policies are a Nash equilibrium if, for each Agent, their policy is a best response to the policies of the other Agents.
2.3 Non-Responsive Policies
Since effort levels are continuous, the set of policies is an infinite set, and in fact each policy is also an infinite-dimensional object. Thus, it is not immediately obvious that the game we study (Game 1) even admits a Nash equilibrium, and if so, what properties it might have. Additionally, Nash equilibria in extensive form games can lack predictive power and support a wide range of behaviors through threats and collusion. For these reasons we will want to prove the existence not only of Nash equilibria, but of non-responsive Nash equilibria, in which Agent policies do not in fact depend on the history of actions of the other players, or on the history of choices of the mechanism (which themselves depend on the history of actions)—but rather only on the history of nature states.
Definition 17 (Non-responsive Equilibrium).
Given a set of belief functions and a Principal selection function , a set of Agent policies is a non-responsive equilibrium if it is an equilibrium and all policies in the set are non-responsive.
3 Existence of Non-Responsive Equilibrium
In this section we show that the game induced by the Agent selection interaction described in Game 1 admits a pure strategy non-responsive Nash equilibrium amongst the Agents. We will refer to Game 1 as the “general game,” to distinguish it from a related game we define in this section.
Theorem 1.
There exists a pure non-responsive Nash equilibrium of the Agent Selection Game (Game 1).
The remainder of this section is devoted to the sequence of arguments that result in the proof of Theorem 1. The outline of the proof is as follows:
• First, we define the restricted game, in which Agents have a restricted strategy space and are only able to employ non-responsive strategies — i.e. strategies that are independent of the actions of other Agents.
• Next, we show that the restricted game admits a mixed strategy Nash equilibrium (Lemma 2).
• We show that we can purify any mixed strategy Nash equilibrium of the restricted game while maintaining its equilibrium properties; this establishes that the restricted game admits a pure strategy Nash equilibrium (Lemma 5).
• Finally, we show that any pure strategy Nash equilibrium of the restricted game must also be a pure strategy Nash equilibrium of the general game. This establishes that the general game admits a pure strategy Nash equilibrium in which all of the players are employing non-responsive strategies (Theorem 3).
3.1 Restricted Contracting Game
To execute the first step of the outline, we define the restricted contracting game in which Agents are restricted to playing non-responsive strategies. Unlike the general game, the strategy space in the restricted contracting game is finite-dimensional, and so it is easier to argue equilibrium existence; and of course, equilibria in the restricted contracting game are non-responsive by definition. Once we establish the existence of pure-strategy Nash equilibria in the restricted game, we will show how to “lift” them to pure strategy non-responsive equilibria of the general game. Agent policies in the restricted game will be functions of restricted transcripts. Restricted transcripts contain a history of the states of nature and randomness used in the playout, but crucially do not contain records of Agent actions or functions of Agent actions (like outcomes or Principal selections). The result is that policies that are defined as functions of restricted transcripts will be independent of the actions of other players.
Definition 18 (Restricted Transcript).
We call the collection of the history of the realized states of nature, the history of the random bits used to sample the selected Agent's outcome in each round, and the history of the random bits used by the Principal a restricted transcript. We refer to a single element of a restricted transcript as the entry for that round. The space of all possible restricted transcripts is denoted accordingly.
We will refer to Agent ’s utility over the restricted transcript prefix by .
Definition 19 (Restricted Agent Policy).
A restricted Agent policy for an Agent is a mapping from any -length restricted transcript to a distribution over actions . We will write as the distribution over actions output by Agent at round . We will call the set of all restricted policies .
Note that all restricted Agent policies are non-responsive by construction: they do not depend on other Agents' actions. In the general game, we refer to any policy that can be written as a function of the restricted transcript as a "non-responsive" strategy.
Definition 20 (Restricted Transcript Generating Function).
Given a Principal selection mechanism, the restricted transcript generating function takes as input a distribution over a sequence of states of nature and outputs a distribution over restricted transcripts. Given some prefix of the restricted transcript of a given length, we can also write the corresponding function to denote the distribution over restricted transcripts through a given round.
3.2 Equilibrium Existence
We begin by applying the following result to the restricted game.
Theorem 2 (Glicksberg (1951)).
Every continuous and compact game has a mixed strategy Nash equilibrium.
To do this, we will first show a result that will be useful throughout this paper for relating the local changes in an Agent’s policy to their global utility in the restricted game.
Lemma 1.
Fix a selection function and a strategy profile of the other Agents in the restricted game. Consider a policy for an Agent that is identical to their original policy except for the action played in a single round at some transcript prefix. Then, the difference in the Agent's utilities between these two policies can be written purely in terms of the change in their expected subgame utility at that transcript prefix.
Lemma 2.
The restricted game has a mixed strategy Nash equilibrium.
Proof.
We show that the restricted game is continuous and compact and then apply Theorem 2.
Each player's strategy space is the set of all mappings from restricted transcripts to distributions over effort levels. As we assume that the coin flips used by the Principal, the randomness used to sample an outcome, and the space of states of nature are all finite, there are only finitely many restricted transcripts, so this is a finite-dimensional game. Each Agent's strategy in the restricted game can therefore be represented as a real-valued finite-dimensional vector lying in a compact space, as effort levels are defined to lie in a closed, bounded interval.
Next, we show that the utility functions are continuous in each Agent's policy. Consider a fixed restricted transcript prefix. We show that the change in the Agent's expected payoff is continuous in the action played at this prefix. Consider perturbing this action to some new action, and let the perturbed policy denote the result. This corresponds to changing a single coordinate in the finite-dimensional vector representing the Agent's policy. As this change is confined to a single subgame, by Lemma 1, the only change to the utility is in the Agent's expected subgame utility at this prefix. So, it is sufficient to show that the change in this expression is continuous.
Recall that an Agent's utility in this subgame can be written as the sum of their expected immediate (single round) payoff and their expected continuation payoff over future rounds.
The immediate payoff changes as a result of this perturbation only through the single round utility the Agent sees: the expected payment less the cost of effort. The cost function is, by assumption, convex on a closed set, and thus continuous. It remains to be shown that the expected payment to the Agent is also continuous in the effort level.
The expected payment is continuous in the effort level because the expected outcome is continuous in the effort level and the payment is a post-processing of the outcome via the return and the contract. The expected outcome is continuous in the effort level because the probability of each outcome, for a fixed state of nature, is affine in the effort level.
Therefore, the immediate payoff for Agent is continuous in their action.
Next, observe that the continuation payoff also changes only as a function of how the perturbation affects the return Agent realizes for the Principal. This is because the effect in the continuation payoff of an Agent’s action in round is simply via the probability that they are selected by the Principal’s selection mechanism in future rounds, which takes as input the information of previously realized returns. Thus, by the same argument above, we also have that the continuation payoff is continuous in an Agent’s action. Therefore, we have established that utility functions are continuous in Agent policies.
As this game is continuous and compact, by Theorem 2, it must have a mixed strategy Nash equilibrium. ∎
Next, we show that without loss of generality, any mixed strategy Nash equilibrium in the restricted game can be converted into a pure strategy Nash equilibrium. Towards that end, the next lemma shows that if we change any Agent’s mixed strategy at any single round into a pure strategy by having them deterministically play the expected action under their mixed strategy, then the distribution over restricted transcripts is unchanged.
Lemma 3.
Fix a Principal selection algorithm and a set of non-responsive, possibly mixed, Agent strategies. Let there be an Agent who is playing a mixed strategy in some round for some restricted Agent transcript. Consider the strategy profile that is identical for all other Agents and differs for this Agent only at that transcript, where they instead deterministically play the expected effort level of their mixed strategy. Then, the distribution over future Principal transcripts induced by the original strategy profile is identical to the distribution over future Principal transcripts induced by the modified strategy profile.
Proof.
Observe that before the round of the modification, there is no difference in the game, and thus the distributions of transcripts up to that round will be identical. Since all Agents are non-responsive, their actions are not a function of other Agents' policies. Thus, the distributions of actions of all Agents other than the modified Agent will remain the same in both strategy profiles. This also means that for all such Agents, the outcome distributions they induce are the same in both worlds. Furthermore, since the randomness used by the Principal is fixed between both worlds, we know the Agent selections are the same.
It remains to be shown that the outcome distributions for the modified Agent under both strategy profiles are the same. We will show that the probability of any outcome for this Agent is identical under the original and the modified strategy. Recall that per Definition 4, the probability that an outcome occurs for an Agent given an effort level and state of nature is an affine function of the effort level. The probability of an outcome occurring under the original (mixed) policy is therefore the expectation of this affine function over the random effort level, which by linearity equals the affine function evaluated at the expected effort level; and this is exactly the probability of that outcome under the modified policy, which deterministically plays the expected effort level. Thus, any two strategies with the same expected effort level induce an equal distribution over outcomes. Therefore, we have shown that under both strategy profiles, the distribution over Principal transcripts is identical. ∎
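The key step, that an affine outcome probability depends on a mixed strategy only through its mean effort, can be checked numerically; in the toy snippet below the affine coefficients and the mixed strategy are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
slope, intercept = 0.5, 0.2           # an outcome probability p(e) = 0.5 * e + 0.2, affine in effort

efforts = rng.uniform(0.0, 1.0, size=100_000)            # draws from an arbitrary mixed strategy over effort
prob_under_mixed = np.mean(slope * efforts + intercept)  # E[p(e)] under the mixed strategy
prob_under_mean = slope * efforts.mean() + intercept     # p(E[e]) under the purified strategy

print(prob_under_mixed, prob_under_mean)  # equal (up to floating point), by linearity of the mean
```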
The next lemma shows that if we purify a single Agent’s equilibrium strategy by having them play the expectation of their mixed strategy, then their opponents are still playing best responses.
Lemma 4.
Let there be a strategy profile that is an equilibrium of the restricted game, and consider an Agent who is playing a mixed strategy at some round for some transcript prefix. Consider a modified policy for this Agent that is identical except that it plays a pure strategy at that round, specifically the mean effort level of their mixed distribution. Then, for every other Agent, their equilibrium policy remains a best response when the modified policy replaces the original one.
Proof.
Consider some other Agent. Fix some prefix of the restricted game, including its states of nature. Finally, consider the Principal transcript induced by an arbitrary restricted Agent transcript together with the original Agent policies, and the Principal transcript induced by the same restricted Agent transcript together with the profile in which the purified policy replaces the original one.
The expected payoff of this Agent over their own beliefs, given any policy they might play against the original profile, is an expectation over restricted transcripts of their realized utility; their payoff using the same policy against the modified profile has the same form.
Recall that the restricted transcripts exclude information about the Agent actions, and therefore both these expressions are evaluated over the exact same restricted transcripts.
Also note that, by Lemma 3, the two induced Principal transcripts are identically distributed. Therefore, the two payoff expressions above are the same, and the utility of this Agent is the same against either profile.
Now, assume for contradiction that there is some policy which strictly outperforms their equilibrium policy against the modified profile. Given the equivalence of the utility functions, this implies that the same policy strictly outperforms their equilibrium policy against the original profile, leading to a contradiction. ∎
By iteratively applying this kind of strategy purification, we can establish the existence of a pure strategy Nash equilibrium in the restricted game.
Lemma 5.
There exists a pure Nash equilibrium of the restricted game.
The next step is to “lift” a pure strategy equilibrium of the restricted game to a pure strategy (non-responsive) equilibrium of the general game. The difference between a strategy of the restricted game and a strategy of the general game is that a strategy of the general game can depend on a richer transcript which includes information about the actions of other Agents. We first observe, however, that when all Agents are playing deterministic, non-responsive strategies, the richer “full” transcript of the general game in fact contains no additional information beyond the transcript of the restricted game.
Lemma 6.
Whenever all Agents are playing deterministic policies, the full transcript for an arbitrary round can be computed as a deterministic function of the restricted transcript .
Proof.
Consider a deterministic Agent strategy profile and a restricted transcript. We will show that we can compute the full Agent transcript using only the restricted transcript and information known to all Agents, namely the Agent policies and belief functions of every Agent. Note that the only difference between the transcripts is that the full Agent transcript contains the action of the selected Agent in each round, so to show the claim we must simply show that in any round, any Agent can compute which Agent was selected by the Principal and what action the selected Agent took.
We proceed via induction. In the first round, any Agent can compute the selected Agent's action since the Agent transcript is empty, all Agent strategies are deterministic, and both the Agent belief functions and the initial state of the Principal's selection mechanism are known to all Agents.
Now, consider some arbitrary round $t$. Assume that the full transcript through round $t-1$ can be derived from the restricted transcript through round $t-1$. We will show that the full transcript through round $t$ can be derived from the restricted transcript through round $t$. First, observe that an arbitrary Agent can compute which Agent will be selected by the Principal in round $t$. The Principal's selection mechanism takes as input the Principal transcript, which includes the historical selections of Agents and their realized outcomes. By the inductive hypothesis, all Agents know the history of actions of selected Agents through round $t-1$. Additionally, they know, from the restricted transcript, the randomness used to sample the selected Agent's outcome in each previous round. Both of these together exactly determine the state of the Principal's selection mechanism at round $t$. Thus, using the Principal's randomness for round $t$, which is contained in the restricted transcript, Agents know which Agent was selected in round $t$.
Secondly, observe that the selected Agent's action is a known, deterministic function of their policy and the transcript so far. Therefore, once again applying the inductive hypothesis, Agents can compute the selected Agent's action in round $t$. ∎
To reason about utilities in the general game, we will show an extension of Lemma 1 to this setting.
Lemma 7.
In the general game, fix a selection function and a set of (possibly responsive) policies of the other Agents. Consider two (possibly responsive) policies for an Agent which differ only at a given transcript prefix. Then, the same decomposition as in Lemma 1 holds: the difference in the Agent's utilities between the two policies can be written purely in terms of the change in their expected subgame utility at that transcript prefix.
Proof.
This proof follows the same argument as Lemma 1. ∎
Finally, combining these lemmas lets us establish that pure strategy Nash equilibria of the restricted game are in fact also pure strategy Nash equilibria of the general game.
Lemma 8.
Every pure Nash equilibrium of the restricted game is also a pure Nash equilibrium of the general game.
Proof.
Consider any pure strategy equilibrium in the restricted game. Each Agent is playing a policy in their best response set over all restricted pure strategies. Say Agent plays policy in this equilibrium. We will show that in the general game, for any Agent , there is no beneficial responsive deviation from their restricted policy , assuming all other Agents are playing . Suppose for the sake of contradiction that there is a beneficial deviation for Agent to responsive strategy in the general game.
We can show that this is in fact a valid strategy to play in the restricted game – i.e. that it can be implemented as a non-responsive strategy – as follows. By Lemma 6, when Agents are playing deterministic, non-responsive policies – as all policies in the equilibrium profile are – any Agent can in each round compute the full transcript using only the restricted transcript, which is the exact information they would have access to in the general game. Therefore, the deviating policy is also implementable as a non-responsive strategy, as it can be implemented as a function only of the restricted transcript as follows: in each round, first compute the full transcript from the restricted transcript and then output the same action as the deviating policy. Since the policy can be written as an equivalent policy which is only a function of the restricted transcript, it is by definition a non-responsive policy. Thus, it would necessarily also be a beneficial deviation from their restricted policy in the restricted game. This gives us a contradiction to the fact that the original profile was a pure strategy equilibrium of the restricted game.
We can see this as follows: if the responsive policy were a beneficial deviation in the general game, then the equivalent restricted policy implementing it would achieve the same utility, and would therefore be a beneficial deviation within the set of restricted (non-responsive) policies, contradicting the fact that the original profile is an equilibrium of the restricted game. ∎
Putting all of these lemmas together yields the main structural result of this section:
Theorem 3.
For every Principal selection mechanism, there exists a pure, non-responsive Nash equilibrium of the Agent selection game (Game 1).
4 Monotone Selection Mechanisms And Counterfactual Regret
Having established in Section 3 that the Agent Selection Game (Game 1) always admits a pure strategy non-responsive Nash equilibrium, we now go on to study the guarantees that the Principal can obtain in such equilibria. In particular, we will compare the utility obtained by the Principal in the dynamic Agent selection game to the utility that they could have obtained counterfactually had they offered a fixed contract to a single Agent, guaranteed across all $T$ rounds. In evaluating the utility of the Principal under both their selection mechanism and the counterfactual fixed-agent mechanism, we assume that all Agents are playing a non-responsive equilibrium in the game formed by the respective selection mechanism.
The main result we show in this section is that if the Principal uses a selection mechanism that guarantees diminishing external regret (to the realized rewards) and is in addition monotone, in a sense that we will define, then the Principal in fact obtains utility at least as high as they would have in the counterfactual benchmark, up to the regret bound, for every Agent. In other words, we show that in every non-responsive equilibrium of the Agent selection game, monotone selection mechanisms suffice to convert external regret bounds into policy regret bounds.
Benchmark Comparison
In the benchmark setting, since the Agent is promised a guaranteed contract across all rounds, they can always act so as to maximize their per-round payoff, without needing to worry about the effect that their actions today will have on whether they are chosen tomorrow; that is, they will always play the action that is myopically optimal. This is formalized via the definitions below.
Definition 21 (Constant Principal Selection Mechanism ).
A constant Principal selection mechanism is defined as the mechanism which deterministically selects a fixed agent each round, regardless of the transcript.
Definition 22 (Myopic Optimal Action).
An Agent's myopic optimal action in some round, given the transcript and the corresponding previous states of nature, is an action maximizing their expected single round utility under their belief about the current state of nature. Note that for an Agent this action may differ between rounds and may not be unique within a fixed round. We denote the set of all such actions in a given round accordingly.
Definition 23 (Myopic Optimal Policy).
An Agent's myopic optimal policy is defined to be a policy in which the action at every round and every transcript prefix is a myopic optimal action. The set of all myopic optimal policies for a player is defined accordingly.
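Since expected payment is concave and cost is convex in effort, the per-round objective is concave and a myopic optimal action can be found by one-dimensional maximization; the sketch below uses a simple grid search over effort levels normalized to [0, 1] (the specific payment and cost functions are illustrative assumptions).

```python
import numpy as np

def myopic_optimal_effort(expected_payment, cost, grid_size=10_001):
    """Maximize the expected single round utility (payment minus cost) over effort levels in [0, 1].

    expected_payment: effort -> expected payment under the contract and the Agent's belief
                      about the current state of nature (concave under our assumptions)
    cost:             effort -> cost of that effort level (convex and increasing)
    """
    efforts = np.linspace(0.0, 1.0, grid_size)
    utilities = np.array([expected_payment(e) - cost(e) for e in efforts])
    return float(efforts[np.argmax(utilities)])

# Illustrative instance: expected payment 0.5 * e, quadratic cost e^2; the optimum is e = 0.25.
print(myopic_optimal_effort(lambda e: 0.5 * e, lambda e: e ** 2))
```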
Lemma 9.
In every equilibrium of the game in which the Principal plays a constant Principal selection mechanism selecting a fixed Agent, that Agent plays a myopic optimal policy.
We now define a monotone selection mechanism. Informally, a monotone selection mechanism is one in which, if the Principal's reward is increased on a round in which a given Agent was chosen, and no other changes are made to the transcript, then this change only increases the probability that this Agent is chosen at future rounds.
Definition 24 (Monotone Selection Mechanism).
Let there be a Principal selection mechanism (Definition 8) and an arbitrary Principal transcript. Consider another Principal transcript that is identical to the first except that it differs in exactly a single entry, in which the same Agent was selected but the realized return is strictly higher. The Principal's selection mechanism is said to be monotone if, in every subsequent round, the probability that it selects this Agent is at least as high under the second transcript as under the first — that is, the selection mechanism is only more likely to select this Agent in future rounds under the higher-return transcript.
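As a toy illustration of this property (this rule is not the paper's MonoBandit mechanism, just a simple example), consider selecting each Agent with probability proportional to the exponential of their cumulative observed return: raising a single past return of one Agent only raises that Agent's future selection probability.

```python
import numpy as np

def selection_probs(principal_transcript, num_agents, eta=1.0):
    """Toy selection rule: P(select i) proportional to exp(eta * cumulative observed return of i)."""
    scores = np.zeros(num_agents)
    for agent, realized_return in principal_transcript:
        scores[agent] += realized_return
    weights = np.exp(eta * scores)
    return weights / weights.sum()

transcript = [(0, 0.3), (1, 0.6), (0, 0.1)]
perturbed  = [(0, 0.3), (1, 0.6), (0, 0.9)]   # identical, except Agent 0's last return is higher

print(selection_probs(transcript, 2))  # Agent 0's selection probability...
print(selection_probs(perturbed, 2))   # ...only increases under the perturbed transcript
```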
We first establish that when the Principal employs a monotone selection mechanism, then in any non-responsive pure strategy equilibrium of the game, Agents all exert effort levels that are only higher than they would have in their myopically optimal strategies. Informally, this is because we can decompose Agent utilities into their single-round utility and their continuation utility. Any effort level that is below the myopically optimal level by definition will only improve single-round utility if raised — and because of monotonicity, will also only improve the continuation utility. By the non-responsiveness of the equilibrium, it will not affect the behavior of other Agents, and hence we can conclude that acting at any effort level below the myopically optimal level is dominated, and hence cannot be supported in equilibrium.
Lemma 10.
If the Principal is selecting Agents via a monotone algorithm, then in any non-responsive, pure strategy Nash equilibrium of the selection game (Game 1), the selected Agent's effort level in each round is at least as high as the effort level they would play under a myopic optimal policy.
With this result in hand, we are almost ready to state and prove our main result, which is that if the Principal employs a bandit learning algorithm that is both monotone and has a diminishing external regret guarantee, then if agent policies always form a non-responsive equilibrium, the Principal will have no regret to the best counterfactual world induced by picking a fixed agent across all rounds. To state the result, we first need to give some online learning preliminaries.
4.1 Online Learning Preliminaries
When the Principal selects an Agent each round, their utility is determined by an unknown sequence of states of nature and an unknown set of Agent policies. This can be embedded into an "online learning from experts" setting, in which the Principal selects one of a fixed set of actions at each step and obtains some return based upon this selection. As the Principal does not receive full feedback and only has access to the return of the Agent selected each round, our setting is in fact a special case of online learning with bandit feedback — recalling of course that we ultimately want stronger-than-typical regret bounds. In this section we will make use of algorithms with general no-regret properties to derive no-regret guarantees for our setting.
We begin with a brief overview of the general online learning from experts setting with bandit feedback and then introduce some definitions, including of notions of external regret and swap regret. In this section, to align with standard notation for no-regret algorithms, we refer to losses instead of rewards. At every round, an adaptive adversary will choose a vector of losses — one for each action. Simultaneously, the learning algorithm will choose a distribution over actions, and will experience loss equal to the loss of their chosen action. The learning algorithm will observe the loss of the action chosen, but will not observe the loss of actions not chosen. Since the adversary is allowed to be adaptive, they are able to react to the history of the game when selecting their loss vector in the current round, including to the realizations of randomness used previously by the learning algorithm (though not the algorithm’s randomness in the current round). We make this dependence explicit in our definitions, as it will be important to our later analysis—in our applications, the loss sequence will be explicitly adaptive.
Definition 25 (Transcript).
Let a transcript of a given length consist of the history of the losses observed by the algorithm, the actions selected by the algorithm, and the randomness used by the selection algorithm. Note that since we are in a setting with bandit feedback, the transcript does not include the entire loss vector in each round but only the loss of the selected action in that round.
Definition 26 (Adaptive Adversary).
An adaptive adversary plays a loss function in each round as a function of the transcript so far. We refer to the collection of these per-round choices across all rounds as the adaptive adversary.
Definition 27 (Learning Algorithm).
A learning algorithm is a mapping from transcript prefixes, together with the random bits used by the algorithm in the current round, to an action selection in that round.
Definition 28 (External Regret).
The external regret of an algorithm against an adaptive adversary over $T$ rounds is defined as the expected difference, over the algorithm's randomness, between the cumulative loss of the actions selected by the algorithm and the cumulative loss of the best fixed action in hindsight on the realized loss sequence,
where the realized transcript prefix in each round is the one deterministically resulting from the algorithm, the adversary, and the particular realization of the algorithm's randomness.
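For reference, in notation introduced here purely for illustration (write $\ell_t$ for the adversary's round-$t$ loss vector, $a_t$ for the algorithm's selected action, and $R$ for the algorithm's randomness), external regret against an adaptive adversary takes the standard form
$$ \mathbb{E}_{R}\!\left[\,\sum_{t=1}^{T} \ell_t(a_t) \;-\; \min_{a^\ast}\sum_{t=1}^{T} \ell_t(a^\ast)\right], $$
where each $\ell_t$ may itself depend on the transcript prefix determined by $R$.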
Definition 29 (Swap Function).
A swap function is a mapping from actions to actions, representing a strategy modification rule.
Definition 30 (Swap Regret).
The swap regret of an algorithm against an adaptive adversary over $T$ rounds is defined as the maximum, over all swap functions, of the expected difference between the cumulative loss of the actions selected by the algorithm and the cumulative loss that would have been incurred had each selected action been replaced by its image under the swap function, on the realized loss sequence,
where the realized transcript prefix in each round is the one deterministically resulting from the algorithm, the adversary, and the particular realization of the algorithm's randomness.
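Again in purely illustrative notation, with $\pi$ ranging over swap functions mapping actions to actions, swap regret takes the standard form
$$ \max_{\pi}\;\mathbb{E}_{R}\!\left[\,\sum_{t=1}^{T} \Big(\ell_t(a_t) - \ell_t\big(\pi(a_t)\big)\Big)\right], $$
with the same convention that each $\ell_t$ may depend on the realized transcript prefix.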
Note that in both of these definitions of regret, the benchmark comparison is to the realized loss vectors, not the counterfactual loss vectors that would have been realized had the learning algorithm made different choices. However, we will be able to show an even stronger guarantee in our setting: the learning algorithm in fact satisfies regret bounds with respect to counterfactual loss vectors that it did not see, but which would have been realized under different choices of the algorithm.
4.2 Counterfactual Guarantees in Equilibrium
We are now in a position to state the main results of this section. We first observe that if the Principal is selecting Agents using a bandit algorithm that has worst-case external regret guarantees, then it immediately follows that they obtain utility at least as high (up to the regret bound) as they would have had they consistently selected the best Agent in hindsight, fixing the actions that the Agents actually played in Game 1.
Lemma 11.
If the Principal is selecting Agents with a selection mechanism whose external regret is bounded for all adaptive adversaries, then in any non-responsive, pure strategy equilibrium of the game induced by Game 1, the Principal's regret to the sequence of returns realized by the best fixed Agent in hindsight is bounded by the same quantity.
Proof.
This follows immediately from instantiating the regret bound for the Principal in the Agent selection game. Note that the regret bound holds for an arbitrary adaptive adversary, and so in fact holds for any adversary with control over the states of nature in the Agent selection game. ∎
Now, we state stronger regret guarantees. In order to state our counterfactual guarantees, we must introduce a definition for what sorts of Principal transcripts can arise given any non-responsive equilibrium of the agents.
Definition 31 (Equilibrium Policy Set ).
Given an agent selection mechanism and a state of nature sequence , is the set of all collections of policies that Agents can play which jointly form a non-responsive equilibrium.
Now we can formally define our notion of policy regret. Informally, we compare the utility of the Principal given the mechanism they actually implemented, to the utility that the Principal would have obtained had they implemented the best fixed-agent mechanism , given the actual sequence of realized states of nature and the beliefs of the Agents. When evaluating the utility for the Principal in both cases, we assume that the Agents play according to a non-responsive pure-strategy Nash equilibrium. Because there may be multiple such equilibria, we take the minimum Principal utility over this set.
Definition 32 (Policy Regret).
For some Principal selection mechanism , agent beliefs , and state of nature sequence , the policy regret of the Principal is
Finally, we can state our main theorem: informally, if the Principal’s algorithm has an external regret guarantee and is also monotone, then not only do they obtain no regret to the realized actions of the Agents in Game 1, but in any deterministic, non-responsive equilibrium of the induced game, they obtain utility as high as they would have had they interacted with the best of the Agents every round.
Theorem 4.
If the Principal is selecting Agents with a monotone selection mechanism with external regret bounded by , the Principal has policy regret at most .
Proof.
This follows from Lemma 10, Assumption 3, and Lemma 11. By Lemma 10, any Agent's effort in the Agent selection game is only higher than their myopically optimal effort level, which is the effort they would exert if they were guaranteed to be selected. Next, by Assumption 3, the counterfactual sequence of expected returns in the game in which they were guaranteed the contract would have been only lower than the expected returns in the Agent selection game. Combined with Lemma 11, which gives that the Principal has bounded regret to the sequence of returns of the best fixed Agent in hindsight in the Agent selection game, this yields the claim. ∎
To instantiate this guarantee with a concrete mechanism, we only need a monotone bandit learning algorithm of this sort with a diminishing regret guarantee. Indeed, we show in Section 6 that MonoBandit(MW) is a monotone bandit selection mechanism with external regret bounded by (Theorem 10).
Theorem 5.
If the Principal selects agents each round via the MonoBandit algorithm instantiated with Multiplicative Weights, the Principal has policy regret at most .
5 Swap Regret and Limited Liability Contracts
In Section 4, we showed that by using a monotone selection mechanism with external regret guarantees, the Principal is able to guarantee payoff similar to what they could have obtained by using the same contract in a benchmark setting in which the contract was guaranteed to any single fixed Agent across all rounds. However, if we take the benchmark contract to be one that is not limited liability — i.e. that can sometimes obligate the Agent to pay the Principal — then our mechanism also must offer contracts which fail to satisfy the limited liability condition. This is problematic, because on the one hand, the natural, robust set of linear benchmark policies do not satisfy the limited liability condition, but on the other hand, contracts that fail to satisfy the limited liability condition can be difficult to implement in practice. In this section, we turn our attention to mechanisms for the Principal that only offer limited liability contracts, while still being able to offer the same kinds of counterfactual regret guarantees we derived in Section 4 with respect to linear contracts, which are not limited liability.
Definition 33 (Limited Liability Contract).
A limited liability contract is a payment rule mapping the Principal's return to a non-negative payment made to an Agent; in particular, the Agent is never obligated to pay the Principal.
Definition 34 (Linear Contract).
A linear contract with parameter is a payment rule mapping the Principal's return to a reward for the Agent equal to an fraction of the total return , as .
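For concreteness, here is a small Python sketch of these two payment rules; the function names are ours, and we assume (as the surrounding discussion suggests) that the realized return can be negative, which is exactly what makes a linear contract violate limited liability.

```python
def linear_contract_payment(alpha, realized_return):
    """Linear contract: the Agent is paid an alpha fraction of the realized return.
    If the realized return is negative, so is the payment, i.e. the Agent would owe
    money to the Principal, which is why a linear contract is not limited liability."""
    return alpha * realized_return

def is_limited_liability(payment_rule, sample_returns):
    """Spot-check (on a set of sample returns) that a payment rule never charges the Agent."""
    return all(payment_rule(r) >= 0 for r in sample_returns)
```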
To start, we prove that, by offering a linear contract each round and playing a monotone no swap-regret selection mechanism, the Principal guarantees that the liability of each agent is bounded by the swap regret term. Then, we prove that, if we equip this selection mechanism with a tab that ensures that the payment to each agent fulfills limited liability, it is an approximate (up to regret terms) equilibrium for the Agents to play a strategy profile corresponding to a non-responsive equilibrium of the general game. We use these results to show that, if we assume that agents will play one of these approximate non-responsive equilibria, the Principal gets low policy regret while ensuring limited liability for the Agents.
Lemma 12.
If the Principal selection mechanism has swap regret bounded by against any adaptive adversary , and the Principal is providing a fixed linear contract each round, then each agent’s cumulative payment is at least .
Proof.
Let represent the Principal utility from the rounds in which they select agent . The Principal selection mechanism has swap regret at most for any adaptive adversary . Furthermore, the Principal always has an outside option, denoted , that guarantees them return in every round. Therefore, we have that
for any swap function , and in particular, the swap function .
So we have that
(by the fact that the reward and cost of the outside option are both )
Thus, for each Agent , the total utility the Principal gains during these rounds is at most . As the Principal is providing a fixed linear contract each round, the total payment from the Principal to Agent is . ∎
We show in Section 6 that our algorithm MonoBandit, when instantiated with the algorithm TreeSwap from Dagan et al. (2023), is a monotone bandit selection mechanism with swap regret bounded by (Theorem 11).
Theorem 6.
If the Principal selects agents each round via the MonoBandit algorithm instantiated with TreeSwap, the Principal has policy regret at most .
Theorem 7.
If the Principal selects agents each round via the MonoBandit algorithm (Algorithm 5) instantiated with TreeSwap and provides a fixed linear contract, then:
1) the Principal has policy regret at most , and
2) all agents have expected liability no more than .
Thus, by running a monotone no swap-regret algorithm in the bandit setting and offering a linear contract, the Principal ensures that after the interaction, the net payment from the Principal to the Agent must be non-negative (up to a diminishing regret term).
We now introduce a limited liability mechanism. Informally, it works as follows: the Principal opens a “tab” for each Agent that allows them to accumulate debt. The Principal maintains a running total of the cumulative payments that would be made under the linear contract, but defers payment to a single lump-sum transfer at the end of the interaction. At the end of the interaction, if the tab is negative, any remaining debt owed from the Agent to the Principal is forgiven, and so no payments ever need to be made from the Agent to the Principal. Otherwise, the Principal pays the Agent the value of the tab. Introducing this tab does not affect the net payment made from the Principal to the Agent whenever the original net payment is non-negative. However, since the tab results in a zero payment when the net payment from the Principal to the Agent would have been negative, this mechanism is incentive distorting. Informally, since we have established that the net payment from the Principal to the Agent cannot have been very negative, we can show that the distortion of incentives is minimal.
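A minimal sketch of the settlement step of this tab mechanism, assuming the inputs are the per-round payments that the fixed linear contract would prescribe; the function name is ours.

```python
def settle_tab(per_round_linear_payments):
    """Settlement step of the tab mechanism: accumulate the payments the fixed linear
    contract would prescribe (possibly negative), defer them all, and at the end pay
    out the tab if it is positive, forgiving it otherwise, so the Agent never pays
    the Principal."""
    tab = sum(per_round_linear_payments)
    return max(0.0, tab)
```

For example, settle_tab([3.0, -1.0, 2.0]) pays 4.0, exactly the linear contract's total, while settle_tab([-2.0, 1.0]) pays 0.0 and forgives the Agent's remaining debt.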
We formalize the limited liability game in Game 2. It is similar to the standard game, with the exception of the addition of the three boldface lines.
We first establish that if the Principal employs a selection mechanism with a swap regret guarantee, then for any set of policies played by the Agents, the Agents’ utilities are almost identical (up to the swap regret bound) in the Limited Liability Game (Game 2) and in the original game (Game 1). Informally, this is because whenever the Agent ends the interaction without “debt”, their payments are identical in the two settings, and the swap regret guarantee that the Principal has relative to the outside option (which always guarantees payoff 0) ensures that no Agent will ever end the game with significant debt (because, by the nature of linear contracts, this would also imply significant swap regret for the Principal).
Definition 35 (Agent Utility in the Limited Liability Game ).
Let represent the total payment from the Principal to the Agent under the transcript . Furthermore, let represent the sum of the costs of all actions taken by Agent (on rounds they are selected). Then, let be defined as the utility of Agent in the limited liability game:
Lemma 13.
Fix any set of Agent beliefs , set of Agent policies , and linear contract . If the Principal selection mechanism has swap regret bounded by against any adaptive adversary , then each Agent will have expected payoff that differs by no more than between the standard Agent selection game (Game 1) and the limited liability Agent selection game (Game 2). That is, for any Agent ,
Proof.
We can write the utility for an Agent under the limited liability game as
First, observe that the right hand side of the desired inequality follows immediately from the above. This is simply because the total payment made to any Agent in the limited liability game is exactly the max of zero and the total payments made to that Agent in the non-limited liability game. Therefore, each Agent only has weakly higher utility in the limited liability game.
Now, note that by Lemma 12, any Agent’s expected cumulative payment in the non-limited liability game is bounded below by . In the non-limited liability game, this would mean a transfer from the Agent to the Principal of . Since payments to the Agents between the limited liability and non-limited liability games only differ when the latter are negative, we have established that the utility of an agent is at most higher in the limited liability game. ∎
We now establish that any equilibrium of the general Agent selection Game (Game 1) remains an approximate equilibrium of the limited liability game (Game 2) whenever the Principal employs a no swap-regret algorithm. Hence, the guarantees that we showed in Section 4 for the Principal in equilibrium (that they obtain payoff at least as high as they would have in the counterfactual world in which the Principal engaged in a constant selection mechanism with the best Agent in hindsight) continue to hold here (in approximate equilibrium), whenever the Principal employs a monotone selection algorithm with swap-regret guarantees.
Theorem 8.
If the Principal selection mechanism has swap regret bounded by against any adaptive adversary and offers a fixed linear contract each round, then any non-responsive, pure strategy equilibrium of the general Agent selection game (Game 1) is a -approximate equilibrium of the limited liability game (Game 2).
Proof.
Let be any pure non-responsive equilibrium of the Agent selection game. We show that no Agent can unilaterally deviate to improve their expected payoff by more than . Let be a strategy profile that is defined as , i.e. is identical to except for an arbitrary deviation by Agent . The claim follows by two applications of Lemma 13:
As is assumed to be an exact Nash equilibrium of the general game, we also know
Together, these give the claim:
∎
Defining the policy space over which to evaluate policy regret is slightly more delicate now; previously, we assumed that all agents were playing a non-responsive equilibrium of the full liability game. However, now that we are in a limited liability setting, the incentives of agents may have shifted by regret terms. Thus, we have strong guarantees not over all non-responsive pure-strategy equilibria of this new game but over another reasonable set of equilibria the agents could play: the set of non-responsive pure-strategy equilibria of the general, full liability game, which we have shown are all approximate equilibria of the limited liability game.
Definition 36 (Policy Regret in the Limited-Liability Space).
For some set of agent beliefs and state of nature sequence , the policy regret of a Principal implementing selection mechanism in Game 2 is
Theorem 9.
When the Principal uses the MonoBandit selection mechanism (Algorithm 5) and employs a fixed, linear contract , for any non-responsive equilibrium of the general game:
1) is a -approximate equilibrium of the limited liability game, and
2) the Principal has policy regret at most .
Proof.
By Theorem 11, MonoBandit(TreeSwap) is a monotone algorithm with swap regret bounded by . Thus, by Theorem 8, if all agents play according to an equilibrium of the full liability game, this is an -approximate equilibrium of the limited liability game. Finally, we get the policy regret bound for the Principal by Theorem 4. ∎
6 Monotone No Swap-Regret Algorithms
Finally, it remains to establish the existence of a bandit no-regret learning algorithm that is simultaneously monotone and obtains a non-trivial swap regret guarantee against an adaptive adversary. First we note that even in the full information setting (when the learning algorithm gets to observe the entire loss vector, and not just the loss of the action they choose), standard no swap-regret algorithms fail to satisfy the monotonicity property.
We verify this numerically by showing two loss sequences in which Blum-Mansour violates the monotonicity condition. This example can be found in Appendix C.1.
On the other hand, the very recent swap regret algorithm given by Dagan et al. (2023) in the full information setting is indeed monotone.
Lemma 15.
The TreeSwap algorithm of Dagan et al. (2023) is monotone in the full-information setting.
The proof of this claim is deferred to Appendix C.2.
6.1 A Monotone Bandit No Swap-Regret Algorithm in the Adaptive Setting
For our application, however, we need a no swap-regret algorithm that is monotone and operates in the bandit setting. In this section we give a monotonicity-preserving reduction from the full-information setting to the bandit learning setting, such that when paired with a monotone swap-regret algorithm in the full information setting (like that of Dagan et al. (2023)), we obtain the object we need.
The MonoBandit algorithm preserves any combination of the following appealing properties from a full-feedback learning algorithm: monotonicity, sublinear swap regret, and sublinear external regret. These proofs are all deferred to Appendix C.
Lemma 16.
If the full-feedback algorithm has swap regret bounded by , then MonoBandit() has swap regret bounded by against any adaptive adversary .
Lemma 17.
If the full-feedback algorithm has external regret bounded by , then MonoBandit() has external regret bounded by against any adaptive adversary .
Lemma 18.
If algorithm is monotone, then MonoBandit() is monotone.
Theorem 10.
MonoBandit(MW) is a monotone algorithm and has external regret bounded by against any adaptive adversary.
Proof.
This follows by combining Lemma 17 and Lemma 18 with the external regret guarantee of the full-information Multiplicative Weights algorithm. ∎
Theorem 11.
MonoBandit(TreeSwap) is a monotone algorithm and has swap regret bounded by against any adaptive adversary.
7 Conclusion and Discussion
In this paper we have studied the utility of a Principal in equilibrium in a complex -player, -stage extensive form Agent selection game, and have given mechanisms that ensure that in equilibrium, the Principal has no regret to the counterfactual world in which they had offered a guaranteed contract to the best fixed Agent in hindsight, even taking into account the Agent’s differing behavior in this world. The primary tools that we have used to derive guarantees in this complex setting are online learning algorithms and their properties. In particular, we have connected external regret guarantees for bandit learning algorithms paired with monotonicity with counterfactual policy regret guarantees in these games, and have connected swap regret guarantees with the ability to preserve the incentive properties of linear contracts in limited liability implementations. Moreover, we have established the existence of bandit learning algorithms that are simultaneously monotone and have swap regret guarantees, which instantiates our theorems with concrete mechanisms.
An objection that one can level at our limited liability mechanism is that it defers all payments until the final round. As such, the Principal can incur substantial debt to the Agent. It would be preferable if the Principal immediately paid any debt owed to the Agent, while still never needing to collect debt owed from the Agent to the Principal. We note that in this implementation, the mechanism would still induce incentives similar to those of the full liability mechanism if the Principal’s selection rule had no adaptive swap regret — i.e. no swap regret not just overall (as summed from rounds to ), but also on every subsequence simultaneously for all . Algorithms with no adaptive swap regret exist — see e.g. Lee et al. (2022) for a generic recipe for deriving such algorithms — but we are unaware of any such algorithms that are simultaneously monotone, which is a necessary ingredient for our policy regret guarantees. This is a concrete setting where a new result in online learning (the existence of a monotone bandit algorithm obtaining adaptive swap regret guarantees) would have an immediate application in mechanism design.
References
- Babaioff et al. [2009] Moshe Babaioff, Yogeshwer Sharma, and Aleksandrs Slivkins. Characterizing truthful multi-armed bandit mechanisms. In Proceedings of the 10th ACM conference on Electronic commerce, pages 79–88, 2009.
- Blum and Mansour [2007] Avrim Blum and Yishay Mansour. From external to internal regret. Journal of Machine Learning Research, 8(6), 2007.
- Bolton and Dewatripont [2004] Patrick Bolton and Mathias Dewatripont. Contract theory. MIT press, 2004.
- Camara et al. [2020] Modibo K Camara, Jason D Hartline, and Aleck Johnsen. Mechanisms for a no-regret agent: Beyond the common prior. In 2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS), pages 259–270. IEEE, 2020.
- Carroll [2015] Gabriel Carroll. Robustness and linear contracts. American Economic Review, 105(2):536–563, 2015.
- Castiglioni et al. [2021] Matteo Castiglioni, Alberto Marchesi, and Nicola Gatti. Bayesian agency: Linear versus tractable contracts. arXiv e-prints, pages arXiv–2106, 2021.
- Chassang [2013] Sylvain Chassang. Calibrated incentive contracts. Econometrica, 81(5):1935–1971, 2013.
- Cohen et al. [2022] Alon Cohen, Argyrios Deligkas, and Moran Koren. Learning approximately optimal contracts. In International Symposium on Algorithmic Game Theory, pages 331–346. Springer, 2022.
- Collina et al. [2023] Natalie Collina, Aaron Roth, and Han Shao. Efficient prior-free mechanisms for no-regret agents. arXiv preprint arXiv:2311.07754, 2023.
- Dagan et al. [2023] Yuval Dagan, Constantinos Daskalakis, Maxwell Fishelson, and Noah Golowich. From external to swap regret 2.0: An efficient reduction and oblivious adversary for large action spaces, 2023.
- Dütting et al. [2019] Paul Dütting, Tim Roughgarden, and Inbal Talgam-Cohen. Simple versus optimal contracts. In Proceedings of the 2019 ACM Conference on Economics and Computation, EC ’19, page 369–387, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450367929. doi: 10.1145/3328526.3329591. URL https://doi.org/10.1145/3328526.3329591.
- Dütting et al. [2022] Paul Dütting, Tomer Ezra, Michal Feldman, and Thomas Kesselheim. Combinatorial contracts. In 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS), pages 815–826. IEEE, 2022.
- Glicksberg [1951] I. L. Glicksberg. A further generalization of the Kakutani fixed point theorem, with applications to Nash equilibrium points, 1951.
- Grossman and Hart [1992] Sanford J Grossman and Oliver D Hart. An analysis of the principal-agent problem. In Foundations of Insurance Economics: Readings in Economics and Finance, pages 302–340. Springer, 1992.
- Ho et al. [2014] Chien-Ju Ho, Aleksandrs Slivkins, and Jennifer Wortman Vaughan. Adaptive contract design for crowdsourcing markets: Bandit algorithms for repeated principal-agent problems. In Proceedings of the fifteenth ACM conference on Economics and computation, pages 359–376, 2014.
- Holmström [1979] Bengt Holmström. Moral hazard and observability. The Bell journal of economics, pages 74–91, 1979.
- Lee et al. [2022] Daniel Lee, Georgy Noarov, Mallesh Pai, and Aaron Roth. Online minimax multiobjective optimization: Multicalibeating and other applications. Advances in Neural Information Processing Systems, 35:29051–29063, 2022.
- Peng and Rubinstein [2023] Binghui Peng and Aviad Rubinstein. Fast swap regret minimization and applications to approximate correlated equilibria. arXiv preprint arXiv:2310.19647, 2023.
- Zhu et al. [2022] Banghua Zhu, Stephen Bates, Zhuoran Yang, Yixin Wang, Jiantao Jiao, and Michael I Jordan. The sample complexity of online contract design. arXiv preprint arXiv:2211.05732, 2022.
Appendix A Proofs from Section 3.2
A.1 Proof of Lemma 1
Proof of Lemma 1.
(as is equivalent to under any transcript prefixes that are not )
Therefore,
∎
A.2 Proof of Lemma 5
Proof of Lemma 5.
By Theorem 2, we know that the restricted game has a mixed strategy Nash equilibrium. Let us call this equilibrium strategy profile . We will now show that there is also an exact pure strategy Nash equilibrium in the restricted game. To do this, we will alter until it is pure, while still ensuring that it is a Nash equilibrium. First, we will consider some transcript prefix with states of nature in which player ’s policy is outputting a distribution, and we will modify their policy to so that they instead deterministically output the expectation of their effort levels in . By Lemma 1, this change will be confined only to the Agent utility at that subgame.
Recall that an Agent ’s expected payoff of playing is as follows:
We will first prove that if Agent plays a pure strategy given by the expectation of the mixed strategy against , they will receive only higher utility in expectation. Note that player changing their actions will not affect the actions of any other players under , as all players are playing restricted policies. Thus, we can consider the actions of the other Agents as fixed when considering the payoff change for Agent . We consider the two parts of player ’s utility separately.
First, we will consider their difference in their immediate payoff in round when switching from their original mixed strategy to the deterministic strategy . Note that before round , the policies and are identical.
Recall that the probability of an outcome for Agent given action and state of nature is . We can now write the difference in the expected single round utility for Agent when deviating from to .
By the concavity of and the convexity of , this value is always non-negative. Thus, the expected immediate payoff change for Agent from moving from action to in round is non-negative.
Next, consider the change in the expected continuation payoff at round . Note that for rounds , the strategies and are identical once again.
By Lemma 3, the distributions over Principal transcripts in both expressions are equal. Since Agents are playing restricted policies, no future Agent actions will change if Agent moves from strategy to . Thus, this difference is also non-negative.
Therefore, under any mixed strategy Nash equilibrium, a player can always modify one of their actions in some subgame to be deterministic while only increasing their payoff; hence, they are still best responding. Furthermore, by Lemma 4, all other non-responsive Agents remain best responding.
Thus, we can iteratively turn a mixed strategy Nash equilibrium in the restricted game into a pure strategy Nash equilibrium. ∎
Appendix B Remainder of Section 4
B.1 Proof of Lemma 10
Proof of Lemma 10.
Suppose for contradiction that this is not the case. Then, there is some strategy profile which is a non-responsive, pure strategy Nash equilibrium where there exists some Agent such that their action given some transcript prefix is strictly less than the effort of a myopic optimal action . Also let be the state of nature sequence associated with this transcript prefix. Call the Agent policy where Agent does exert the myopic optimal effort under the prefix and otherwise behaves exactly as . As is a best response, we have that
where the first implication follows from Lemma 7 and the second follows from the assumption that has non-zero support on the realized sequence of states of nature and that the transcript generating function (Definition 12) generates a distribution over all valid transcripts – over the additional randomness of the Principal selections and the randomness used to sample from the selected Agent’s outcome distribution in each round – with the distribution over states of nature that it is given.
Thus, to derive a contradiction it is sufficient to prove that .
Observe that Agent can weakly improve their expected immediate payoff at round , by definition of a myopic optimal action.
Agent can also improve their continuation payoff for future rounds , denoted
by deviating to play in round . As all Agents are playing non-responsive strategies, the only effect Agent ’s action in round has on their continuation payoff is through its effect on the probability of their selection in future rounds. The probability of their future selection depends on the returns that Agent realizes, as the Principal selection mechanism only takes as input the history of realized returns. So, we can see that this is monotonically increasing in Agent ’s effort in round , due to the monotonicity of the Principal’s selection mechanism (Definition 24) and Assumption 3, which establishes that an increase in the effort level of Agent in a single round leads to higher expected returns.
In particular, let be the deterministic action from strategy and be the higher effort level played deterministically from strategy . Then, it is the case that
Thus, we know that Agent ’s deviation will only increase their continuation payoff via an increased selection probability in future rounds.
Therefore, we have shown that if Agent is playing a policy that in some round plays an effort level less than , then they can unilaterally deviate to strictly improve their payoff, and thus the strategy profile cannot be a non-responsive Nash equilibrium. ∎
B.2 Proof of Lemma 9
Proof of Lemma 9.
Suppose for the sake of contradiction that there is some equilibrium strategy for agent , denoted , that is not a myopic optimal policy. If agent is playing , then there is some round where they are not playing a myopic optimal action from their set for some transcript prefix . Let their payoff differ from the myopic payoff by . We will consider some policy such that , and for , . By Lemma 7, the difference in the expected utility of the agent under and is exactly
(By the fact that the action of the agent in the current round has no impact on their payoff in future rounds)
(By the fact that the agent has full support beliefs.)
As agent can strictly improve their payoff by deviating to , is in fact not in equilibrium, leading to a contradiction.
∎
Appendix C Monotone No Swap-Regret Algorithms
C.1 Proof of Lemma 14
Proof.
Below we show an example with and the number of actions . Note that, given a fixed loss sequence, the probability distributions Blum-Mansour plays in each round are deterministic. Thus, we can numerically compute these probabilities to prove non-monotonicity.
Let the first loss vectors of each be . Let the final loss vectors of each be . Let be exactly the same as , except that . Then, when running Blum-Mansour as defined in Algorithm 3 with , the probability distribution maintained over actions at rounds (truncating the middle for clarity) is
While the corresponding distributions under are
Finally, we can look at the difference in move probabilities under and :
Note that at rounds and , Blum-Mansour under plays action with a higher probability than Blum-Mansour under . This violates the monotonicity condition. ∎
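The following is a hedged sketch of the kind of numerical check used here, treating the full-information algorithm as a black box with a hypothetical play()/update() interface, and assuming that the monotonicity condition requires that decreasing one action's loss in a single round can only weakly increase the probability with which that action is played in later rounds.

```python
def check_monotonicity(alg_factory, losses, round_t, action_a, delta):
    """Decrease action_a's loss at round_t by delta and verify that the algorithm's
    probability of playing action_a never decreases at any later round. The base
    algorithm is treated as a black box created by alg_factory(), exposing play()
    (current distribution over actions) and update(loss_vector)."""
    def run(sequence):
        alg = alg_factory()
        distributions = []
        for loss_vector in sequence:
            distributions.append(list(alg.play()))  # distribution before seeing this round's loss
            alg.update(loss_vector)
        return distributions

    altered = [list(l) for l in losses]
    altered[round_t][action_a] -= delta
    original_dists, altered_dists = run(losses), run(altered)
    return all(altered_dists[t][action_a] >= original_dists[t][action_a]
               for t in range(round_t + 1, len(losses)))
```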
C.2 Proof of Lemma 15
Proof of Lemma 15.
We begin with a general description of the TreeSwap algorithm; refer to Dagan et al. [2023] for a more detailed description. TreeSwap maintains multiple instances of a base no-external regret algorithm, which we refer to as Alg. Let be the total number of rounds that TreeSwap is run. Each instance of Alg is lazily updated, meaning that each instance has some period length , and is updated only times, at the end of each period, using the average of all the loss vectors given to the algorithm since the previous update. In each round , the action output by TreeSwap is the uniform mixture over the distributions output by a fixed number of these instances. The claim follows immediately from the fact that averaging preserves monotonicity: neither the lazy updating of each instance of Alg, nor playing the average of the distributions output by multiple instances of Alg in each round, violates the monotonicity of the base algorithm. ∎
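The following Python sketch illustrates the lazy-update-and-average structure appealed to in this proof; it is not the full TreeSwap construction of Dagan et al. [2023], and the class name and the current_distribution()/update() interface of the base algorithm are assumptions of ours.

```python
import numpy as np

class LazyAveragedMixture:
    """Simplified illustration of the lazy-update-and-average structure described in the
    proof above (not the full TreeSwap construction). Each instance of the base
    full-information algorithm has its own period length and is updated only at the end
    of each of its periods, with the average of the losses accumulated since its last
    update; the distribution played each round is the uniform mixture of the instances'
    current outputs."""

    def __init__(self, base_alg_factory, periods):
        self.instances = [base_alg_factory() for _ in periods]
        self.periods = list(periods)
        self.buffers = [[] for _ in periods]

    def play(self):
        dists = [np.asarray(inst.current_distribution()) for inst in self.instances]
        return np.mean(dists, axis=0)               # uniform mixture over instance outputs

    def observe(self, loss_vector):
        loss_vector = np.asarray(loss_vector, dtype=float)
        for inst, period, buf in zip(self.instances, self.periods, self.buffers):
            buf.append(loss_vector)
            if len(buf) == period:                  # lazy update at the end of this instance's period
                inst.update(np.mean(buf, axis=0))   # average loss since the previous update
                buf.clear()
```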
C.3 Monobandit Algorithm
In addition to the definitions in Section 4.1, we introduce additional notation for a full-feedback learning algorithm, which is used in the implementation of our bandit-feedback algorithm MonoBandit.
Definition 37 (Full-Feedback Learning algorithm , loss sequence ).
A full-feedback learning algorithm consists of a deterministic function which takes as input , the realizations of randomness used by the algorithm in this round and all previous rounds, and , the -length loss vectors from all previous rounds. outputs an action in the current round .
We can now define our algorithm MonoBandit implementing this reduction from full-feedback to bandit-feedback, which preserves the monotonicity and sublinear regret (either external or swap) guarantees of some full-feedback learning algorithm . When we reason about the behavior of MonoBandit, we must be able to distinguish between the randomness inherent to and the randomness that the bandit algorithm uses in addition. The past realizations of both objects are visible to the adaptive adversary, but only has access to its own randomness. We therefore define the following distributions over randomness that are both utilized by the bandit-feedback algorithm:
Definition 38 (, and ).
Let be composed of the following two sets of randomness: and . is the randomness utilized by the full-information algorithm . is additional randomness utilized by MonoBandit, which is independent from . In particular is composed of independent random variables , where with probability and with probability for all .
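For intuition, here is a hedged Python sketch of a generic uniform-exploration reduction of this flavor; the actual MonoBandit algorithm (Algorithm 5) may differ in details, and the current_action()/update() interface of the full-information algorithm, as well as the choice to update only on explore rounds, are assumptions of ours.

```python
import random
import numpy as np

def uniform_exploration_reduction(full_info_alg, adversary, num_rounds, num_actions,
                                  explore_prob, seed=0):
    """Generic uniform-exploration reduction from full feedback to bandit feedback.
    On EXPLORE rounds (probability explore_prob) a uniformly random action is played and
    an importance-weighted loss vector (an unbiased estimate of the true loss vector,
    conditioned on exploring) is fed to the full-information algorithm; on EXPLOIT
    rounds the full-information algorithm's own action is played."""
    rng = random.Random(seed)
    transcript = []
    for _ in range(num_rounds):
        losses = adversary(transcript)
        if rng.random() < explore_prob:                      # EXPLORE round
            action = rng.randrange(num_actions)
            estimate = np.zeros(num_actions)
            estimate[action] = num_actions * losses[action]  # importance weighting
            full_info_alg.update(estimate)
        else:                                                # EXPLOIT round
            action = full_info_alg.current_action(rng.random())
        transcript.append((action, losses[action]))
    return transcript
```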
C.4 Proof of Lemma 16
Proof of Lemma 16.
Recall that we previously defined Swap Regret to be
where we use to denote the transcript prefix deterministically resulting from the algorithm , adversary and a particular realization of . Note that, given a particular and , any -length prefix of the transcript is uniquely defined by the realization of the randomness of the learning algorithm up until round . Therefore, for simplicity, throughout this section we will write swap regret as
Let represent the loss vector that is given to at round . First, we will show that is an unbiased sample of , regardless of the behavior of the adaptive adversary.
(By the independence of and )
Let us write the swap regret of MonoBandit in terms of the swap regret incurred on EXPLORE and EXPLOIT rounds. For brevity in notation, we let refer to MonoBandit.
We will upper bound both terms, starting with the first. For all swap functions , we have
(By the definition of MonoBandit)
(By the fact that is an unbiased estimator of )
(By the fact that is independent of )
(By the definition of Swap Regret, and the assumption in the lemma statement.)
As for the second term,
Putting these together, we can upper bound the swap regret of MonoBandit as
By setting , we get that
∎
C.5 Proof of Lemma 17
Proof of Lemma 17.
Recall that we previously defined external regret to be
where we use to denote the transcript prefix deterministically resulting from the algorithm , adversary and a particular realization of . Note that, given a particular and , any -length prefix of the transcript is uniquely defined by the realization of the randomness of the learning algorithm up until round . Therefore, for simplicity, throughout this section we will write external regret as
Let represent the loss vector that is given to at round . First, we will show that is an unbiased sample of , regardless of the behavior of the adaptive adversary.
(By the independence of and )
Let us write the external regret of MonoBandit in terms of the external regret incurred on EXPLORE and EXPLOIT rounds. For brevity in notation, we let refer to MonoBandit.
We will upper bound both terms, starting with the first. For all fixed actions , we have
(By the definition of MonoBandit)
(By the fact that is an unbiased estimator of )
(By the fact that is independent of )
(By the definition of External Regret, and the assumption in the lemma statement.)
As for the second term,
Putting these together, we can upper bound the external regret of MonoBandit as
By setting , we get that
∎
C.6 Proof of Lemma 18
Proof of Lemma 18.
For ease of notation, throughout this proof we write the MonoBandit algorithm as taking as input the full feedback vector , while keeping in mind that it can only change its state based on the bandit feedback it receives from sampling each round. Furthermore, we let denote the sampled loss vector fed to the internal full-information algorithm by MonoBandit at round .
For any fixed sequence of losses , the sampled loss vector that MonoBandit feeds to in round is independent of the sampled loss vector seen at round . To see this, note that it depends only on the current loss vector at round and the realization of , both of which are independent of any previous losses or realizations of randomness:
Now, consider the impact that decreasing the loss of a single action in one round has on the probability that this action is played later in the game. Let the loss sequence be the same as the loss sequence except that at round , action has decreased loss. Let the bandit feedback loss sequence be the loss sequence seen by MonoBandit under , and let be the loss sequence seen by MonoBandit under . Furthermore, let represent a -length vector which is zeroes everywhere but at index .
On any exploit round :
(By the fact that is only different between and if action is sampled in round .)
We can compare this probability with that of the unaltered sequence :
(By the monotonicity of )
∎