Decentralized Online Learning in General-Sum Stackelberg Games
Abstract
We study an online learning problem in general-sum Stackelberg games, where players act in a decentralized and strategic manner. We study two settings depending on the type of information available to the follower: (1) the limited information setting where the follower only observes its own reward, and (2) the side information setting where the follower has extra side information about the leader's reward. We show that for the follower, myopically best responding to the leader's action is the best strategy in the limited information setting, but not necessarily so in the side information setting: the follower can manipulate the leader's reward signals with strategic actions, and hence induce the leader's strategy to converge to an equilibrium that is more favorable to itself. Based on these insights, we study decentralized online learning for both players in the two settings. Our main contribution is to derive last-iterate convergence and sample complexity results in both settings. Notably, we design a new manipulation strategy for the follower in the latter setting, and show that it has an intrinsic advantage over the best response strategy. Our theoretical results are also supported by empirical results.
1 Introduction
Many real-world problems such as optimal auction [Myerson, 1981, Cole and Roughgarden, 2014] and security games [Tambe, 2011] can be modeled as a hierarchical game, where the two levels of players have asymmetric roles and can be partitioned into a leader who first takes an action and a follower who responds to the leader’s action. This class of games is called Stackelberg or leader-follower games [Stackelberg, 1934, Sherali et al., 1983]. General-sum Stackelberg games [Roughgarden, 2010] are a class of Stackelberg games where the sum of the leader’s and follower’s rewards is not necessarily zero. They have broad implications in many other real-world problems such as taxation policy making [Zheng et al., 2022], automated mechanism design [Conitzer and Sandholm, 2002], reward shaping [Leibo et al., 2017], security games [Gan et al., 2018, Blum et al., 2014], anti-poaching [Fang et al., 2015], and autonomous driving [Shalev-Shwartz et al., 2016].
Our focus is on pure strategies in repeated general-sum Stackelberg games, in a setting that follows Bai et al. [2021], where players act in an online, decentralized, and strategic manner. Online means that the players learn on the go as opposed to learning in batches, where regret is usually used as the learning objective. Decentralized means that there is no central controller, and each player acts independently. Strategic means that the players are self-interested and aim to maximize their own utility. Moreover, we take a learning perspective, meaning that the players learn from (noisy) samples by playing the game repeatedly. A comparable framework has been investigated in related studies [Kao et al., 2022] and finds widespread application in diverse real-world scenarios, such as the optimal taxation problem in the AI Economist [Zheng et al., 2022] and the optimization of auction procedures [Amin et al., 2013].
There have been extensive studies on decentralized learning in multi-agent games [Blum and Mansour, 2007, Wu et al., 2022, Jin et al., 2021, Meng et al., 2021, Wei et al., 2021, Song et al., 2021, Mao and Başar, 2023, Zhong et al., 2023, Ghosh, 2023], mostly focusing on settings where all agents act simultaneously, without a hierarchical structure. Multi-agent learning in Stackelberg games has been relatively less explored. For example, Goktas et al. [2022], Sun et al. [2023] study zero-sum games, where the sum of rewards of the two players is zero. Kao et al. [2022], Zhao et al. [2023] study cooperative games, where the leader and the follower share the same reward. Though these studies make significant contributions to understanding learning in Stackelberg games, they make limiting assumptions about the reward structures of the game. The first part of this paper studies the learning problem in a more generalized setting of general-sum Stackelberg games. Due to the lack of knowledge of the reward structures, learning in general-sum Stackelberg games is much more challenging, and thus remains open.
Furthermore, in these studies [Goktas et al., 2022, Sun et al., 2023, Kao et al., 2022, Zhao et al., 2023], a hidden assumption is that the optimal strategy of the follower is to best respond to the leader's strategy in each round. Recent studies [Gan et al., 2019a, b, Nguyen and Xu, 2019, Birmpas et al., 2020, Chen et al., 2022, 2023] show that under information asymmetry, a strategic follower can manipulate a commitment leader by misreporting its payoffs, so as to induce an equilibrium different from the Stackelberg equilibrium that is more favorable to the follower. The second part of this paper extends this intuition to an online learning setting, where the leader learns the commitment strategy via no-regret learning algorithms (e.g., EXP3 [Auer et al., 2002b] or UCB [Auer et al., 2002a]).
We use an example online Stackelberg game to further illustrate this intuition; its (expected) payoff matrix is shown in Table 1. By definition, $(a_1, b_1)$ is its Stackelberg equilibrium, obtained by assuming that the follower best responds to the leader's action. In online learning settings, when the leader uses a no-regret learning algorithm and the (strategic) follower forms the response function $b(a_1) = b_2$, $b(a_2) = b_1$, the leader will be tricked into believing that the expected payoffs of actions $a_1$ and $a_2$ are respectively $0.1$ and $0.2$. Hence, the leader will take action $a_2$ when using a no-regret learning algorithm, and the game will converge to $(a_2, b_1)$, where the follower has a higher payoff of $1$ compared to $0.1$ in the Stackelberg equilibrium.
Table 1: Payoff matrix of the example game. Each cell shows (leader reward, follower reward).

| Leader \ Follower | $b_1$ | $b_2$ |
|---|---|---|
| $a_1$ | (0.3, 0.1) | (0.1, 0.05) |
| $a_2$ | (0.2, 1) | (0.3, 0.1) |
The insights gained from the above example can be generalized. In the following, we introduce two categories of general-sum Stackelberg games, distinguished by whether the follower has access to information about the leader's reward structure: (i) limited information, where the follower only observes its own reward, and (ii) side information, where the follower has extra side information about the leader's reward in addition to its own.
In the limited information setting, the follower is not able to manipulate the game without the leader's reward information. Therefore, best responding to the leader's action is indeed the best strategy. This constitutes a typical general-sum Stackelberg game. Our contribution in this setting is to prove convergence to the general-sum Stackelberg equilibrium when both players use (variants of) no-regret learning algorithms. Note that this is a further step beyond Kao et al. [2022], who focus on cooperative Stackelberg games where the two players share the same reward. In addition, we prove last-iterate convergence [Mertikopoulos et al., 2018, Daskalakis and Panageas, 2018, Lin et al., 2020, Wu et al., 2022] results, which are stronger than average-convergence results.
In the side information setting, we first consider the case when the follower is omniscient, i.e., it knows both players’ exact true reward functions. Building on the intuition from the above example, we design FBM, a manipulation strategy for the follower and prove that it gains an intrinsic advantage compared to the best response strategy. Then, we study a more intricate case called noisy side information, where the follower needs to learn the leader’s reward information from noisy bandit feedback in the online process. We design FMUCB, a variant of FBM that finds the follower’s best manipulation strategy in this case, and derive its sample complexity as well as last-iterate convergence. Our results complement existing works [Birmpas et al., 2020, Chen et al., 2022, 2023] that focus on the learning perspective of only the follower against a commitment leader in offline settings.
To validate the theoretical results, we conduct synthetic experiments for the above settings. Empirical results show that: 1) in the limited information setting, (variants of) no-regret learning algorithms lead to convergence to the Stackelberg equilibrium in general-sum Stackelberg games, and 2) in the side information setting, our proposed follower manipulation strategy does introduce an intrinsic reward advantage compared to best responses, both in the case of an omniscient follower and in the case of noisy side information.
2 Related Work
Decentralized learning in simultaneous multi-agent games. This line of work has broad real-world applications, e.g., when multiple self-interested teams patrol over the same targets in security domains [Jiang et al., 2013] or wildlife conservation [Ministry of WWF-Pakistan, 2015], or when different countries independently plan their own actions in international waters against illegal fishing [Klein, 2017]. There have been extensive studies in this category [Blum and Mansour, 2007, Wu et al., 2022, Jin et al., 2021, Meng et al., 2021, Wei et al., 2021]. The seminal work of Blum and Mansour [2007] shows that decentralized no-regret learning in multi-agent general-sum games leads to a coarse correlated equilibrium (CCE). The result has been improved using more sophisticated methods like optimistic hedge [Daskalakis et al., 2021, Chen and Peng, 2020, Anagnostides et al., 2022]. Jin et al. [2021], Liu et al. [2021] study a similar question in Markov games, which involve sequential decision making and reinforcement learning.
Learning in Stackelberg games. While our work also focuses on decentralized learning, we study Stackelberg games where the players act sequentially. Lauffer et al. [2022] study a different setting than ours, in which the leader first commits to a randomized strategy and the follower observes the randomized strategy and best responds to it. Such a setting also appears in Balcan et al. [2015]. In our setting, the leader first plays a deterministic action, and the follower responds to the action after observing it, a setting that is closer to Bai et al. [2021], Kao et al. [2022]. Many past works on learning in Stackelberg games, such as security games [Blum et al., 2014, Peng et al., 2019, Balcan et al., 2015, Letchford et al., 2009], only consider learning from the leader's perspective, assuming access to an oracle of the follower's best response. More recently, there has been rising interest in Stackelberg games involving learning by both players [Goktas et al., 2022, Sun et al., 2023, Kao et al., 2022, Zhao et al., 2023]. These works focus on sub-classes of games with specific reward structures. Goktas et al. [2022], Sun et al. [2023] study zero-sum stochastic Stackelberg games where the sum of rewards of the two players is zero. Kao et al. [2022], Zhao et al. [2023] study cooperative Stackelberg games where the leader and the follower share the same reward. We study general-sum Stackelberg games, a more general setting without such assumptions on the reward structure. The lack of prior knowledge of the reward structure, however, makes the learning problem much more challenging; in fact, learning in repeated general-sum Stackelberg games remains an open problem. The setting of Bai et al. [2021] is closer to ours, as it also learns general-sum Stackelberg equilibria from noisy bandit feedback. However, their learning process needs a central device that queries each leader-follower action pair a sufficient number of times, making it essentially a centralized or offline learning model. We study an online learning problem in repeated general-sum Stackelberg games, where the players act in a decentralized and strategic manner. Last and most importantly, a hidden assumption in these works is that the follower's best strategy is to best respond to the leader's actions. In addition to studying Stackelberg games with a best-responding follower (Section 4), we also take a new perspective of a manipulative follower (Sections 5-6), and show that manipulative strategies can indeed yield a higher payoff for the follower in an online learning setting.
Follower manipulation in Stackelberg games. It is known that the leader has the first-mover advantage in Stackelberg games: an optimal commitment in a Stackelberg equilibrium always yields a higher (or equal) payoff for the leader than in any Nash equilibrium of the same game [Von Stengel and Zamir, 2010]. This result rests on the assumption that the leader has full information about the follower's reward, or that it can learn such information via interacting with a truthful follower [Letchford et al., 2009, Blum et al., 2014, Haghtalab et al., 2016, Roth et al., 2016, Peng et al., 2019]. In other words, these works assume that the follower always plays myopic best responses to the leader's strategy. Recent studies show that a follower can actually manipulate a commitment leader and induce an equilibrium different from the Stackelberg equilibrium by misreporting its payoffs [Gan et al., 2019a, b, Nguyen and Xu, 2019, Birmpas et al., 2020, Chen et al., 2022, 2023]. In contrast with these works, we consider an online learning setup where both players learn their best strategies from noisy bandit feedback, whereas the above existing works take the learning perspective of only the follower against a commitment leader in an offline setup.
3 Preliminary
A general-sum Stackelberg game is represented as a tuple $(\mathcal{A}, \mathcal{B}, \mu_l, \mu_f)$, where $\mathcal{A}$ is the action set of the leader, $\mathcal{B}$ is the action set of the follower, and $A = |\mathcal{A}|$, $B = |\mathcal{B}|$. We denote by $\mathcal{A} \times \mathcal{B}$ the joint action set of the leader and the follower. For any joint action $(a, b) \in \mathcal{A} \times \mathcal{B}$, we use $r_l(a, b)$ and $r_f(a, b)$ to respectively denote the noisy rewards for the leader and the follower, with expectations $\mu_l(a, b)$ and $\mu_f(a, b)$. The follower has a response set $\mathcal{B}_r = \{b(a)\}_{a \in \mathcal{A}}$ to the leader's actions, where $b(a)$ is the response to the leader's action $a$. The follower's best response to $a$ is defined as the action that maximizes the follower's true expected reward:
$$\mathrm{br}(a) = \arg\max_{b \in \mathcal{B}} \mu_f(a, b). \tag{1}$$
We denote by $\mathcal{B}_{br} = \{\mathrm{br}(a)\}_{a \in \mathcal{A}}$ the best response set. For simplicity, we assume that the best response to each action is unique and that the game has a unique Stackelberg equilibrium $(a^*, b^*)$, where
$$a^* = \arg\max_{a \in \mathcal{A}} \mu_l\big(a, \mathrm{br}(a)\big), \qquad b^* = \mathrm{br}(a^*). \tag{2}$$
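As a concrete illustration, the following minimal sketch (in Python; the array layout and variable names are ours, not the paper's) computes the best responses and the Stackelberg equilibrium of the Table 1 game by enumeration, and also verifies the manipulated outcome discussed in the introduction.

```python
import numpy as np

# Mean payoff matrices of the Table 1 game: rows index leader actions (a1, a2),
# columns index follower actions (b1, b2).
mu_l = np.array([[0.3, 0.1],
                 [0.2, 0.3]])
mu_f = np.array([[0.1, 0.05],
                 [1.0, 0.1]])

def best_response(mu_f, a):
    """br(a) = argmax_b mu_f(a, b), as in Eq. (1)."""
    return int(np.argmax(mu_f[a]))

def stackelberg_equilibrium(mu_l, mu_f):
    """Enumerate leader actions, assuming the follower best responds (Eq. (2))."""
    br = [best_response(mu_f, a) for a in range(mu_l.shape[0])]
    a_star = int(np.argmax([mu_l[a, br[a]] for a in range(mu_l.shape[0])]))
    return a_star, br[a_star]

a_star, b_star = stackelberg_equilibrium(mu_l, mu_f)
print(a_star, b_star)                      # (0, 0), i.e., (a1, b1) with payoffs (0.3, 0.1)

# Manipulative response function from the introduction: b(a1) = b2, b(a2) = b1.
manip = [1, 0]
perceived = [mu_l[a, manip[a]] for a in range(2)]          # leader perceives [0.1, 0.2]
a_induced = int(np.argmax(perceived))                      # leader converges to a2
print(a_induced, manip[a_induced], mu_f[a_induced, manip[a_induced]])  # (1, 0, 1.0)
```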
A repeated general-sum Stackelberg game is played iteratively over a total of $T$ rounds. In each round $t \in [T]$, the procedure is as follows:
- The leader plays an action $a_t \in \mathcal{A}$.
- The follower observes the leader's action $a_t$, and plays $b_t \in \mathcal{B}$ as the response.
- The leader receives a noisy reward $r_l(a_t, b_t)$.
- The follower receives reward information, which we elaborate on in the following.
In the last step, as motivated by the example in Table 1, we study two types of settings depending on the kind of reward information that is available to the follower:
- Limited information. The follower observes a (noisy) reward of its own: $r_f(a_t, b_t)$.
- Side information. The follower has extra side information, meaning that it also knows the reward of the leader in addition to its own. This setting can be further divided into the case of an omniscient follower, who knows the exact reward functions $\mu_l$ and $\mu_f$ directly, and the case of noisy side information, where the follower learns the rewards from noisy bandit feedback in each round, i.e., it observes $\big(r_f(a_t, b_t), r_l(a_t, b_t)\big)$.
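A minimal sketch of this interaction protocol is given below (the Bernoulli noise model and all function and variable names are illustrative assumptions, not the paper's implementation); the uniform-random placeholder policies only exercise the interface.

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = 2, 2                                  # leader / follower action-set sizes
mu_l = rng.uniform(size=(A, B))              # unknown mean rewards
mu_f = rng.uniform(size=(A, B))

def play_round(leader_act, follower_respond, side_info=False):
    """One round of the decentralized protocol described above."""
    a = leader_act()                          # 1. the leader plays an action
    b = follower_respond(a)                   # 2. the follower observes a and responds
    r_l = rng.binomial(1, mu_l[a, b])         # 3. the leader receives a noisy reward
    r_f = rng.binomial(1, mu_f[a, b])         # 4. the follower receives its noisy reward ...
    follower_obs = (r_f, r_l) if side_info else (r_f,)   # ... plus r_l under noisy side information
    return a, b, r_l, follower_obs

# Exercise the protocol with uniform-random placeholder policies.
a, b, r_l, obs = play_round(lambda: int(rng.integers(A)), lambda a: int(rng.integers(B)))
```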
4 A myopic follower with limited information
We first investigate the follower's learning problem with limited information, where the best response is truly the follower's best strategy. We then design appropriate online learning algorithms for the leader and prove last-iterate convergence for the entire game involving both players.
4.1 Algorithm for the myopic follower
In the limited information setting, the follower does not know the entire game's payoff matrix, and therefore does not have the ability to manipulate the game. Hence, myopically learning the best response for each action is indeed the best strategy for the follower. The learning problem for the follower is then equivalent to $A$ independent stochastic bandit problems, one for each action $a$ that the leader plays. Upper Confidence Bound (UCB) is a good candidate algorithm for the follower, as it is asymptotically optimal in stochastic bandit problems [Bubeck et al., 2012].
In each round $t$, the follower selects its response to the observed leader action $a_t$ based on UCB:
$$b_t = \arg\max_{b \in \mathcal{B}} \left\{ \hat{\mu}_f^t(a_t, b) + \sqrt{\frac{2 \log t}{n_t(a_t, b)}} \right\},$$
where $n_t(a, b)$ is the number of times that the action pair $(a, b)$ has been selected, and $\hat{\mu}_f^t(a, b)$ is the empirical mean of the follower's reward for $(a, b)$:
$$\hat{\mu}_f^t(a, b) = \frac{1}{n_t(a, b)} \sum_{s \le t:\ (a_s, b_s) = (a, b)} r_f(a_s, b_s).$$
Following the result of Auer et al. [2002a], for every stochastic bandit problem associated with a leader action $a$, UCB needs approximately $O\!\big(\log T / (\Delta_f^{\min})^2\big)$ rounds on that instance to find the best response, where $\Delta_f^{\min}$ is the minimum suboptimality gap to the best response over all $a \in \mathcal{A}$ and $b \neq \mathrm{br}(a)$:
$$\Delta_f^{\min} = \min_{a \in \mathcal{A},\, b \neq \mathrm{br}(a)} \big(\mu_f(a, \mathrm{br}(a)) - \mu_f(a, b)\big).$$
This directly leads to the following proposition:
Proposition 1.
In a repeated general-sum Stackelberg game, if the follower uses UCB as its learning algorithm, then with probability at least $1 - \delta$, the follower's regret is bounded as:
$$\sum_{t=1}^{T} \big(\mu_f(a_t, \mathrm{br}(a_t)) - \mu_f(a_t, b_t)\big) \le O\!\left(\sum_{a \in \mathcal{A}} \sum_{b \neq \mathrm{br}(a)} \frac{\log(T/\delta)}{\mu_f(a, \mathrm{br}(a)) - \mu_f(a, b)}\right).$$
This proposition shows that a myopic follower achieves no-Stackelberg-regret learning when it decomposes the Stackelberg game into $A$ stochastic bandit problems and uses the UCB algorithm to respond, no matter what learning algorithm the leader uses.
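The follower's side can be sketched as follows: one UCB instance per leader action, with a standard confidence radius (the constant in the radius is one common choice and not necessarily the paper's).

```python
import math
import numpy as np

class FollowerUCB:
    """Myopic follower: one UCB bandit per leader action a, choosing
    b_t = argmax_b { mean_f(a, b) + sqrt(2 log t / n(a, b)) }."""

    def __init__(self, n_leader_actions, n_follower_actions):
        self.n = np.zeros((n_leader_actions, n_follower_actions))      # visit counts n(a, b)
        self.total = np.zeros((n_leader_actions, n_follower_actions))  # cumulative follower reward

    def respond(self, a, t):
        counts = self.n[a]
        if counts.min() == 0:                      # play each untried response once first
            return int(np.argmin(counts))
        mean = self.total[a] / counts
        bonus = np.sqrt(2.0 * math.log(t) / counts)
        return int(np.argmax(mean + bonus))

    def update(self, a, b, r_f):
        self.n[a, b] += 1
        self.total[a, b] += r_f
```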
We emphasize that using arbitrary no-regret algorithms does not by itself ensure convergence to the Stackelberg equilibrium in a general-sum Stackelberg game. The following result supports this observation:
Theorem 1.
[Non-convergence to the SE] Applying the UCB-UCB algorithm in a repeated general-sum Stackelberg game can result in the leader suffering regret linear in $T$, which implies that the game fails to converge to the Stackelberg equilibrium.
To further understand the convergence of the entire game, we investigate two kinds of algorithms for the leader, a UCB-style algorithm and an EXP3-style algorithm, two of the most prevalent algorithms in the online learning literature. Combined with the follower's UCB algorithm, this gives two decentralized learning paradigms: EXP3-UCB and UCBE-UCB. It is worth noting that (vanilla) UCB-UCB is also studied in Kao et al. [2022], which focuses on a cooperative Stackelberg game setting. In the following, we present our first two major results on the last-iterate convergence of the two learning paradigms.
4.2 Last-iterate convergence of EXP3-UCB
We next introduce a variant of EXP3 for the leader. Let $r_l^t$ denote the reward the leader receives in round $t$, and let $\hat{r}_l^t(a) = r_l^t\, \mathbb{1}\{a_t = a\} / p_t(a)$ be its unbiased (importance-weighted) estimate for action $a$, where $p_t(a)$ is the probability of playing $a$ in round $t$. Each action $a$ carries a weight $w_t(a)$, updated as $w_{t+1}(a) = w_t(a) \exp\!\big(\eta\, \hat{r}_l^t(a)\big)$ with learning rate $\eta$. Let $w_t$ be the weight vector over all actions $a \in \mathcal{A}$, initialized as a discrete uniform distribution: $w_1(a) = 1/A$. In round $t$, the leader selects action $a_t$ according to the distribution
$$p_t(a) = (1 - \gamma)\, \frac{w_t(a)}{\sum_{a' \in \mathcal{A}} w_t(a')} + \frac{\gamma}{A}.$$
The second term on the right-hand side enforces a uniform probability of taking any action in $\mathcal{A}$ to guarantee a minimum amount of exploration. As we will show in our proof of Theorem 2, the extra exploration is essential to convergence to the Stackelberg equilibrium, as it allows the follower to do enough exploration to find its best responses using its UCB sub-routine. In the following, we will simply use EXP3 to refer to the above variant of EXP3 with explicit uniform exploration.
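A sketch of this leader routine is given below; the particular values of the step size $\eta$ and exploration rate $\gamma$ are placeholders for the schedules used in the theorems.

```python
import numpy as np

class LeaderEXP3:
    """EXP3 with explicit uniform exploration:
    p_t(a) = (1 - gamma) * w_t(a) / sum(w_t) + gamma / A."""

    def __init__(self, n_actions, eta=0.05, gamma=0.05, seed=0):
        self.w = np.ones(n_actions)          # weights, initialized uniformly
        self.eta, self.gamma = eta, gamma
        self.rng = np.random.default_rng(seed)
        self.p = self._probs()

    def _probs(self):
        return (1 - self.gamma) * self.w / self.w.sum() + self.gamma / len(self.w)

    def act(self):
        self.p = self._probs()
        return int(self.rng.choice(len(self.w), p=self.p))

    def update(self, a, r_l):
        r_hat = r_l / self.p[a]              # unbiased importance-weighted estimate
        self.w[a] *= np.exp(self.eta * r_hat)
```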
Theorem 2.
[Last-iterate convergence of EXP3-UCB under limited information] In the limited information setting of a repeated general-sum Stackelberg game with noisy bandit feedback, applying EXP3-UCB with suitably chosen learning rate $\eta$ and exploration rate $\gamma$, with probability at least $1 - \delta$, the probability $p_T(a^*)$ with which the leader plays the Stackelberg action in round $T$ satisfies $1 - p_T(a^*) \le \varepsilon(T)$,
where $\varepsilon(T) \to 0$ as $T \to \infty$ and depends on $A$, $B$, $\gamma$, $\delta$, the follower's gap $\Delta_f^{\min}$, and the leader's suboptimality reward gap to the Stackelberg equilibrium:
$$\Delta_l = \mu_l(a^*, b^*) - \max_{a \neq a^*} \mu_l\big(a, \mathrm{br}(a)\big).$$
The proof is in Appendix A.2.
4.3 Last-iterate convergence of UCBE-UCB
When the leader uses a UCB-style algorithm, to guarantee that the game converges to the Stackelberg equilibrium, it is required that the leader's UCB term always upper bound $\mu_l(a, \mathrm{br}(a))$ with high probability. This is not guaranteed with a vanilla UCB algorithm (Theorem 1), since the follower's response is not yet stabilized and is not necessarily the best response before the follower's UCB algorithm converges. Therefore, we introduce a variant of UCB, called UCB with extra Exploration (UCBE), with a bonus term that ensures this upper bounding with high probability. In UCBE, the leader chooses action $a_t$ in round $t$ as follows:
$$a_t = \arg\max_{a \in \mathcal{A}} \left\{ \hat{\mu}_l^t(a) + \sqrt{\frac{M}{N_t(a)}} \right\}. \tag{3}$$
Here $M$ is an exploration parameter controlling the extra bonus, $N_t(a)$ is the number of times the leader has played action $a$ up to round $t$, and $\hat{\mu}_l^t(a)$ is the empirical mean of the leader's reward for action $a$: $\hat{\mu}_l^t(a) = \frac{1}{N_t(a)} \sum_{s \le t:\ a_s = a} r_l(a_s, b_s)$.
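A sketch of the leader's UCBE routine is given below; the bonus $\sqrt{M / N_t(a)}$ with a tunable parameter $M$ follows the usual UCB-with-extra-exploration recipe and stands in for the exact bonus prescribed for Theorem 3.

```python
import numpy as np

class LeaderUCBE:
    """UCB with extra Exploration: the index uses an enlarged bonus so that it keeps
    upper-bounding the leader's best-response value while the follower is still learning."""

    def __init__(self, n_actions, M=100.0):
        self.n = np.zeros(n_actions)      # N_t(a): play counts
        self.total = np.zeros(n_actions)  # cumulative leader reward per action
        self.M = M                        # exploration parameter (placeholder value)

    def act(self):
        if self.n.min() == 0:             # try every action once first
            return int(np.argmin(self.n))
        mean = self.total / self.n
        return int(np.argmax(mean + np.sqrt(self.M / self.n)))

    def update(self, a, r_l):
        self.n[a] += 1
        self.total[a] += r_l
```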
Theorem 3.
[Last-iterate convergence of UCBE-UCB under limited information] In the limited information setting of a repeated general-sum Stackelberg game with noisy bandit feedback, applying UCBE-UCB with a suitably chosen exploration parameter $M$, for all $t$ larger than a threshold depending on $M$, $\delta$, and the suboptimality gaps, with probability at least $1 - \delta$, we have $(a_t, b_t) = (a^*, b^*)$.
The proof is in Appendix A.3. This result shows that the game converges in the last iterate to the Stackelberg equilibrium under the decentralized online learning paradigm UCBE-UCB. It is worth noting that this is a further step beyond Kao et al. [2022], who prove average-iterate convergence of UCB-UCB in cooperative Stackelberg games.
5 A manipulative and omniscient follower
In some real-world settings, the follower has extra side information about the leader’s reward structure. As shown in the example in Table 1, a strategic follower can use that information to manipulate the leader’s learning and induce the convergence of the game into a more desirable equilibrium. To start with, we study a simpler case where the follower is omniscient [Zhao et al., 2023], i.e., it knows the exact true rewards of the players. A manipulative follower aims to maximize its average reward:
$$\max_{\mathcal{B}_r}\ \frac{1}{T} \sum_{t=1}^{T} \mu_f\big(a_t, b(a_t)\big), \tag{4}$$
where the leader’s action sequence is generated by the leader’s no-regret learning algorithm. When the follower uses the response set , the leader (who only observes its own reward signals) will take actions that maximize its own reward given . When is fixed, the learning problem for the leader reduces to a classical stochastic bandit problem, and the true reward for each action is . The leader will eventually learn to take action when using no-regret learning. This implies that the leader can be misled by the follower’s manipulation.
Formulating the leader’s action selection as a constraint, when (i.e., at equilibrium), the above equation can be re-written in a more concrete form as:
(5) | ||||
s.t. |
For simplicity, we do not consider the tie-breaking rules for the leader. A more general formulation considering tie-breaking rules for the leader is discussed in Appendix C. A manipulation strategy $\mathcal{B}_r$ is associated with an action pair $\big(a_{\mathcal{B}_r}, b(a_{\mathcal{B}_r})\big)$. We call $\mathcal{B}_r$ a qualified manipulation if it satisfies the constraint in Eq. (5). We denote an optimal solution for the follower as $\mathcal{B}^{\dagger}$, with $a^{\dagger} = a_{\mathcal{B}^{\dagger}}$ and $b^{\dagger} = \mathcal{B}^{\dagger}(a^{\dagger})$. We assume that $(a^{\dagger}, b^{\dagger})$ is unique for simplicity. We can verify that the best response set $\mathcal{B}_{br}$ is a qualified manipulation: by the definition of the Stackelberg equilibrium in Eq. (2), $a_{\mathcal{B}_{br}} = a^*$.
Because $\mathcal{B}^{\dagger}$ corresponds to the optimal qualified manipulation (and thus the maximal follower reward among qualified manipulations), we have:
Proposition 2.
For any general-sum Stackelberg game with reward functions $\mu_l, \mu_f$, if the Stackelberg equilibrium is unique, then the reward gap between the best manipulation pair $(a^{\dagger}, b^{\dagger})$ and the Stackelberg equilibrium $(a^*, b^*)$ satisfies:
$$\mu_f(a^{\dagger}, b^{\dagger}) - \mu_f(a^*, b^*) \ge 0. \tag{6}$$
Equality holds if and only if $(a^{\dagger}, b^{\dagger}) = (a^*, b^*)$.
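For small games, the optimization problem in Eq. (5) can be solved by brute force, enumerating all $B^A$ response sets and keeping the qualified manipulation that maximizes the follower's reward. The sketch below is a reference implementation under our notation (not the paper's algorithm), ignoring tie-breaking as in the text.

```python
import itertools
import numpy as np

def best_manipulation_bruteforce(mu_l, mu_f):
    """Solve Eq. (5) by enumerating every response set b: A -> B."""
    A, B = mu_l.shape
    best_value, best = -np.inf, None
    for response in itertools.product(range(B), repeat=A):    # one candidate response set
        leader_rewards = [mu_l[a, response[a]] for a in range(A)]
        a_induced = int(np.argmax(leader_rewards))             # leader's induced action
        value = mu_f[a_induced, response[a_induced]]           # follower's reward at equilibrium
        if value > best_value:
            best_value, best = value, (response, a_induced)
    response, a_dagger = best
    return list(response), a_dagger, response[a_dagger], best_value

# On the Table 1 game this recovers the manipulation pair (a2, b1) with follower reward 1.0.
```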
5.1 Follower’s best manipulation (FBM)
Based on the insights from the example in Table 1 and Proposition 2, we propose Follower's Best Manipulation (FBM, Algorithm 1), a greedy algorithm that finds the best manipulation strategy for an omniscient follower. At a high level, the key idea is to find a response set that exaggerates the difference between the leader's reward at the target manipulation pair (a pair that yields a large follower reward) and its reward at all other actions (Line 2).

$\mathcal{S}$ is the candidate manipulation pair set, initialized as the entire joint action space $\mathcal{A} \times \mathcal{B}$. The algorithm starts by selecting the potential manipulation pair $(a_m, b_m)$ from the candidate set that maximizes the follower's reward $\mu_f(a_m, b_m)$ (Line 1). It then forms the manipulation strategy (response set) $\mathcal{B}_m$ that returns the leader's minimum reward for every action $a \neq a_m$, and returns $b_m$ when the leader plays $a_m$ (Line 2). Here, with a slight abuse of notation, $\mathrm{wr}(a) = \arg\min_{b \in \mathcal{B}} \mu_l(a, b)$ stands for the follower's worst response, the one that induces the lowest leader reward for action $a$.

Next (Line 3), the algorithm verifies whether the current $\mathcal{B}_m$ is a qualified manipulation. If it is, then $\mathcal{B}_m$ is the optimal manipulation strategy, since the associated manipulation pair $(a_m, b_m)$ leads to the maximum reward for the follower. Hence, it exits the loop and returns the final solution $\mathcal{B}_m$. Otherwise, the algorithm eliminates $(a_m, b_m)$ from $\mathcal{S}$ and repeats the process (Lines 4-5).
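A sketch of FBM under our notation is given below; the line numbering in the comments refers to the description above, and details of the paper's Algorithm 1 (e.g., tie-breaking) may differ.

```python
import numpy as np

def follower_best_manipulation(mu_l, mu_f):
    """Greedy FBM sketch: try candidate pairs in decreasing order of the follower's
    reward, pair each candidate with the leader's worst responses elsewhere, and stop
    at the first qualified manipulation."""
    A, B = mu_l.shape
    candidates = {(a, b) for a in range(A) for b in range(B)}
    wr = mu_l.argmin(axis=1)                        # worst response wr(a) = argmin_b mu_l(a, b)
    while candidates:
        a_m, b_m = max(candidates, key=lambda ab: mu_f[ab])     # Line 1
        response = wr.copy()                                    # Line 2: wr(a) for a != a_m ...
        response[a_m] = b_m                                     # ... and b_m at a_m
        # Line 3: qualified iff a_m maximizes the leader's manipulated reward
        if all(mu_l[a_m, b_m] > mu_l[a, response[a]] for a in range(A) if a != a_m):
            return list(response), (a_m, b_m)
        candidates.discard((a_m, b_m))                          # Lines 4-5: eliminate and repeat
    return None, None

# On the Table 1 game: response = [1, 0] (i.e., b(a1)=b2, b(a2)=b1), pair (a2, b1).
```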
Under the follower’s best manipulation strategy, the leader’s problem reduces to a stochastic bandit problem, where the reward for each action is , and the suboptimality gap is
Proposition 3.
In a repeated general-sum Stackelberg game with a unique Stackelberg equilibrium, if the leader uses a no-regret learning algorithm, then
$$\frac{1}{T} \sum_{t=1}^{T} \mu_f(a_t, b_t) - \frac{1}{T} \sum_{t=1}^{T} \mu_f(a_t', b_t') \;\longrightarrow\; \mu_f(a^{\dagger}, b^{\dagger}) - \mu_f(a^*, b^*) \quad \text{as } T \to \infty,$$
where $\{(a_t, b_t)\}$ is generated by the leader's no-regret algorithm and the follower's best manipulation strategy $\mathcal{B}^{\dagger}$ (Algorithm 1), and $\{(a_t', b_t')\}$ is generated by the leader's no-regret algorithm and the follower's best response strategy $\mathcal{B}_{br}$.
When the follower keeps using the best manipulation strategy, the game will eventually converge to the best manipulation pair $(a^{\dagger}, b^{\dagger})$. Similarly, if the follower uses the best response strategy, the game will eventually converge to the Stackelberg equilibrium $(a^*, b^*)$. According to Propositions 2 and 3, the best manipulation strategy gains an extra average reward of $\mu_f(a^{\dagger}, b^{\dagger}) - \mu_f(a^*, b^*)$ compared to the best response strategy.
6 Manipulative follower with noisy side information
6.1 Follower’s manipulation strategy
We now study the more intricate setting of learning the best manipulation strategy with noisy side information. We first present the follower manipulation algorithm FMUCB in Algorithm 2, a variant of FBM (Algorithm 1) that uses UCB to learn the best follower manipulation strategy with noisy side information. Since Algorithm 2 largely resembles Algorithm 1, we omit the detailed description. Beyond the idea inherited from FBM of manipulating the leader's learning, the other key ingredient of FMUCB is to design appropriate confidence-bound terms in place of the true reward terms $\mu_l$ and $\mu_f$ in FBM, balancing the trade-off between exploration and manipulation.
In Line 1, the goal is to find the candidate manipulation pair that maximizes $\mu_f(a, b)$. Therefore, we maximize the upper confidence bound $\mathrm{UCB}_f^t(a, b)$:
$$(a_m, b_m) = \arg\max_{(a, b) \in \mathcal{S}} \mathrm{UCB}_f^t(a, b), \qquad \mathrm{UCB}_f^t(a, b) = \hat{\mu}_f^t(a, b) + \sqrt{\frac{2 \log t}{n_t(a, b)}}. \tag{7}$$
A suboptimal manipulation pair will not be selected after enough exploration, given the existence of the following suboptimality gap: for any action pair $(a, b)$ satisfying $\mu_f(a, b) < \mu_f(a^{\dagger}, b^{\dagger})$, the gap is $\mu_f(a^{\dagger}, b^{\dagger}) - \mu_f(a, b) > 0$.
Conversely, in Line 2, the goal is to find the "worst response" for each action $a$. Therefore, we define the term $\mathrm{LCB}_l^t(a, b)$ to be the lower confidence bound:
$$\widehat{\mathrm{wr}}_t(a) = \arg\min_{b \in \mathcal{B}} \mathrm{LCB}_l^t(a, b), \qquad \mathrm{LCB}_l^t(a, b) = \hat{\mu}_l^t(a, b) - \sqrt{\frac{2 \log t}{n_t(a, b)}}, \tag{8}$$
where $\hat{\mu}_l^t(a, b)$ is the follower's empirical estimate of the leader's mean reward, built from the side information $r_l(a_s, b_s)$.
For each action $a$, the suboptimality gap of a response $b$ against the worst response $\mathrm{wr}(a)$ is defined as $\mu_l(a, b) - \mu_l(a, \mathrm{wr}(a))$, and the minimum such gap over all $a$ is
$$\Delta_l^{wr} = \min_{a \in \mathcal{A},\, b \neq \mathrm{wr}(a)} \big(\mu_l(a, b) - \mu_l(a, \mathrm{wr}(a))\big).$$
Given $\Delta_l^{wr} > 0$, a suboptimal (non-worst) response will not be selected as $\widehat{\mathrm{wr}}_t(a)$ after enough exploration.
When verifying whether the current $\mathcal{B}_m$ is a qualified manipulation (Line 3), we also use $\mathrm{LCB}_l^t$, as it lower bounds $\mu_l$ with high probability. This guarantees that the true best manipulation solution satisfies the (empirical) constraint, since the following holds with high probability: for any action $a \neq a^{\dagger}$,
$$\mu_l(a^{\dagger}, b^{\dagger}) > \mu_l\big(a, \mathrm{wr}(a)\big) \ge \mathrm{LCB}_l^t\big(a, \widehat{\mathrm{wr}}_t(a)\big).$$
It also eliminates infeasible solutions, since $\mathrm{LCB}_l^t$ approaches $\mu_l$ as the corresponding pairs are sampled more often, and the following suboptimality gap exists: for any action pair $(a, b)$ that does not admit a qualified manipulation, i.e., satisfying $\mu_l(a, b) < \max_{a' \neq a} \mu_l\big(a', \mathrm{wr}(a')\big)$, the gap $\max_{a' \neq a} \mu_l\big(a', \mathrm{wr}(a')\big) - \mu_l(a, b) > 0$.
Hence, Algorithm 2 can find the follower’s best manipulation with noisy side information.
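A sketch of FMUCB under our notation is given below. The confidence radii and the optimistic form of the qualification check are one plausible instantiation consistent with Eqs. (7)-(8); the paper's Algorithm 2 may use different constants or comparisons.

```python
import math
import numpy as np

class FollowerFMUCB:
    """Follower manipulation with noisy side information: UCB on its own reward to
    pick the candidate pair, LCB on the leader's reward to pick worst responses and
    to (optimistically) check whether the candidate is a qualified manipulation."""

    def __init__(self, A, B):
        self.n = np.zeros((A, B))            # joint visit counts n(a, b)
        self.sum_f = np.zeros((A, B))        # follower reward sums
        self.sum_l = np.zeros((A, B))        # leader reward sums (side information)

    def _bounds(self, t):
        n = np.maximum(self.n, 1)
        radius = np.sqrt(2.0 * math.log(max(t, 2)) / n)
        ucb_f = self.sum_f / n + radius
        ucb_l = self.sum_l / n + radius
        lcb_l = self.sum_l / n - radius
        return ucb_f, ucb_l, lcb_l

    def respond(self, a_t, t):
        A, B = self.n.shape
        ucb_f, ucb_l, lcb_l = self._bounds(t)
        wr_hat = lcb_l.argmin(axis=1)                              # Line 2: estimated worst responses
        candidates = {(a, b) for a in range(A) for b in range(B)}
        while candidates:
            a_m, b_m = max(candidates, key=lambda ab: ucb_f[ab])   # Line 1
            response = wr_hat.copy()
            response[a_m] = b_m
            # Line 3 (optimistic check): UCB of the target vs. LCB of the alternatives
            if all(ucb_l[a_m, b_m] > lcb_l[a, response[a]] for a in range(A) if a != a_m):
                return int(response[a_t])
            candidates.discard((a_m, b_m))
        return int(wr_hat[a_t])                                    # fallback if no candidate passes

    def update(self, a, b, r_f, r_l):
        self.n[a, b] += 1
        self.sum_f[a, b] += r_f
        self.sum_l[a, b] += r_l
```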
6.2 Last-iterate convergence analysis
We also provide theoretical guarantees on the sample efficiency and convergence of this algorithm. As a continuation of Section 4, we analyze the performance of our designed algorithm against the leader's two kinds of algorithms, EXP3 and UCBE.
Theorem 4 (Last-iterate convergence of EXP3-FMUCB with noisy side information).
When the leader uses EXP3 with suitably chosen parameters $\eta$ and $\gamma$ and the follower uses FMUCB, let $N_m(T)$ be the total number of rounds in which the follower did not play the best manipulation strategy within $T$ rounds. Then with probability at least $1 - \delta$, $N_m(T)$ is sublinear in $T$,
with a bound that depends on $A$, $B$, $\gamma$, $\delta$, and the suboptimality gaps; consequently, the play converges in the last iterate to the best manipulation pair $(a^{\dagger}, b^{\dagger})$.
Theorem 5 (Last-iterate convergence of UCBE-FMUCB with noisy side information).
When the leader uses UCBE with a suitably chosen exploration parameter $M$ and the follower uses FMUCB, let $N_m(T)$ be the total number of rounds in which the follower did not play the best manipulation strategy within $T$ rounds. Then with probability at least $1 - \delta$, $N_m(T)$ is sublinear in $T$,
and for all sufficiently large $t$, $(a_t, b_t) = (a^{\dagger}, b^{\dagger})$.
The proofs of Theorems 4 and 5 are respectively in Appendix B.1 and Appendix B.2. The two theorems give the sample complexity of learning the best manipulation strategy using FMUCB with noisy side information. They also guarantee that the game achieves last-iterate convergence to the best manipulation pair $(a^{\dagger}, b^{\dagger})$ under the paradigms EXP3-FMUCB and UCBE-FMUCB.
7 Empirical Results
To validate the theoretical results, we conduct experiments on a synthetic Stackelberg game. The two players have the same number of actions. The players' mean rewards $\mu_l(a, b)$ and $\mu_f(a, b)$ for each action pair are generated independently and identically by sampling from the uniform distribution over $[0, 1]$. For any action pair $(a, b)$, the noisy rewards in round $t$ are generated from Bernoulli distributions: $r_l^t \sim \mathrm{Bernoulli}\big(\mu_l(a, b)\big)$ and $r_f^t \sim \mathrm{Bernoulli}\big(\mu_f(a, b)\big)$, respectively. All experiments were run on a MacBook Pro laptop with a 1.4 GHz quad-core Intel Core i5 processor, a 1536 MB Intel Iris Plus Graphics 645, and 8 GB memory.
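A sketch of this synthetic setup is given below; the action-set size of 5 is an arbitrary illustrative choice, since the value used in the experiments is not restated here.

```python
import numpy as np

rng = np.random.default_rng(0)
A = B = 5                                      # illustrative action-set sizes
mu_l = rng.uniform(size=(A, B))                # mean rewards drawn i.i.d. from U[0, 1]
mu_f = rng.uniform(size=(A, B))

def noisy_feedback(a, b):
    """Bernoulli bandit feedback with means mu_l(a, b) and mu_f(a, b)."""
    return rng.binomial(1, mu_l[a, b]), rng.binomial(1, mu_f[a, b])

# Stackelberg benchmark used for the leader's average-regret curves.
br = mu_f.argmax(axis=1)                       # follower best responses br(a)
a_star = int(np.argmax(mu_l[np.arange(A), br]))
stackelberg_value = mu_l[a_star, br[a_star]]   # leader's reward at the Stackelberg equilibrium
```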
Our first experiment validates the convergence of the no-regret learning paradigms EXP3-UCB and UCBE-UCB (Theorems 2 and 3 in Section 4). Figure 1 shows the average regret of the leader w.r.t. the round number $t$. The blue and orange curves are respectively for EXP3-UCB and UCBE-UCB. We can see that the leader achieves no-Stackelberg-regret learning with both paradigms, implying that the game converges to the Stackelberg equilibrium in both cases.
The second experiment validates the intrinsic reward gap between FBM and BR when the follower is omniscient (Proposition 3 in Section 5). In this experiment, the leader uses EXP3, and the follower uses BR (blue curve) or FBM (red curve). The result is in Figure 1, where the x-axis is the number of rounds $t$ and the y-axis is the average follower reward. It shows that after sufficiently many rounds, the follower's reward converges, and FBM yields a significant intrinsic advantage (the gap between the two dashed lines) of approximately 0.22 compared to BR.
The third experiment validates the intrinsic reward gap between FMUCB and UCB when the follower learns the leader's reward structure from noisy bandit feedback (Theorems 4 and 5 in Section 6). The leader uses either EXP3 or UCBE, and we compare the average follower reward w.r.t. the round number $t$ when the follower uses FMUCB vs. the baseline best response learned by a vanilla UCB. This gives four curves: EXP3-UCB, EXP3-FMUCB, UCBE-UCB, and UCBE-FMUCB. We can see that no matter whether the leader uses EXP3 or UCBE, FMUCB yields a significant reward advantage of about 0.3 for the follower compared to UCB.
8 Conclusion
We study decentralized online learning in general-sum Stackelberg games under the limited information and side information settings. For both settings, we design respective online learning algorithms for both players to learn from noisy bandit feedback, prove last-iterate convergence, and derive sample complexity guarantees for our designed algorithms. Our theoretical results also show that the follower can indeed gain an advantage by using a manipulative strategy instead of the best response strategy, a result that complements existing works in offline settings. Our empirical results are consistent with our theoretical findings.
9 Acknowledgment
The authors would like to thank Haifeng Xu and Mengxiao Zhang for valuable discussions and feedback.
References
- Abe et al. [2022] Kenshi Abe, Kaito Ariu, Mitsuki Sakamoto, Kentaro Toyoshima, and Atsushi Iwasaki. Last-iterate convergence with full and noisy feedback in two-player zero-sum games. arXiv preprint arXiv:2208.09855, 2022.
- Amin et al. [2013] Kareem Amin, Afshin Rostamizadeh, and Umar Syed. Learning prices for repeated auctions with strategic buyers. Advances in Neural Information Processing Systems, 26, 2013.
- Anagnostides et al. [2022] Ioannis Anagnostides, Constantinos Daskalakis, Gabriele Farina, Maxwell Fishelson, Noah Golowich, and Tuomas Sandholm. Near-optimal no-regret learning for correlated equilibria in multi-player general-sum games. In Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing, pages 736–749, 2022.
- Auer et al. [2002a] Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine learning, 47(2):235–256, 2002a.
- Auer et al. [2002b] Peter Auer, Nicolo Cesa-Bianchi, Yoav Freund, and Robert E Schapire. The nonstochastic multiarmed bandit problem. SIAM journal on computing, 32(1):48–77, 2002b.
- Bai et al. [2021] Yu Bai, Chi Jin, Huan Wang, and Caiming Xiong. Sample-efficient learning of stackelberg equilibria in general-sum games. Advances in Neural Information Processing Systems, 34:25799–25811, 2021.
- Balcan et al. [2015] Maria-Florina Balcan, Avrim Blum, Nika Haghtalab, and Ariel D Procaccia. Commitment without regrets: Online learning in Stackelberg security games. In Proceedings of the sixteenth ACM conference on economics and computation, pages 61–78, 2015.
- Birmpas et al. [2020] Georgios Birmpas, Jiarui Gan, Alexandros Hollender, Francisco Marmolejo, Ninad Rajgopal, and Alexandros Voudouris. Optimally deceiving a learning leader in stackelberg games. Advances in Neural Information Processing Systems, 33:20624–20635, 2020.
- Blum and Mansour [2007] Avrim Blum and Yishay Mansour. From external to internal regret. Journal of Machine Learning Research, 8(6), 2007.
- Blum et al. [2014] Avrim Blum, Nika Haghtalab, and Ariel D Procaccia. Learning optimal commitment to overcome insecurity. Advances in Neural Information Processing Systems, 27, 2014.
- Bubeck et al. [2012] Sébastien Bubeck, Nicolo Cesa-Bianchi, et al. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends® in Machine Learning, 5(1):1–122, 2012.
- Cai et al. [2024] Yang Cai, Haipeng Luo, Chen-Yu Wei, and Weiqiang Zheng. Uncoupled and convergent learning in two-player zero-sum markov games with bandit feedback. Advances in Neural Information Processing Systems, 36, 2024.
- Chen and Peng [2020] Xi Chen and Binghui Peng. Hedging in games: Faster convergence of external and swap regrets. Advances in Neural Information Processing Systems, 33:18990–18999, 2020.
- Chen et al. [2022] Yurong Chen, Xiaotie Deng, and Yuhao Li. Optimal private payoff manipulation against commitment in extensive-form games. In In Proceedings of 18th International Conference on Web and Internet Economics, 2022.
- Chen et al. [2023] Yurong Chen, Xiaotie Deng, Jiarui Gan, and Yuhao Li. Learning to manipulate a commitment optimizer. arXiv preprint arXiv:2302.11829, 2023.
- Cole and Roughgarden [2014] Richard Cole and Tim Roughgarden. The sample complexity of revenue maximization. In Proceedings of the forty-sixth annual ACM symposium on Theory of computing, pages 243–252, 2014.
- Conitzer and Sandholm [2002] Vincent Conitzer and Tuomas Sandholm. Complexity of mechanism design. In Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence, pages 103–110, 2002.
- Daskalakis and Panageas [2018] Constantinos Daskalakis and Ioannis Panageas. The limit points of (optimistic) gradient descent in min-max optimization. Advances in neural information processing systems, 31, 2018.
- Daskalakis et al. [2021] Constantinos Daskalakis, Maxwell Fishelson, and Noah Golowich. Near-optimal no-regret learning in general games. Advances in Neural Information Processing Systems, 34:27604–27616, 2021.
- Fang et al. [2015] Fei Fang, Peter Stone, and Milind Tambe. When security games go green: designing defender strategies to prevent poaching and illegal fishing. In Proceedings of the 24th International Conference on Artificial Intelligence, pages 2589–2595, 2015.
- Gan et al. [2018] Jiarui Gan, Edith Elkind, and Michael Wooldridge. Stackelberg security games with multiple uncoordinated defenders. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, pages 703–711, 2018.
- Gan et al. [2019a] Jiarui Gan, Qingyu Guo, Long Tran-Thanh, Bo An, and Michael Wooldridge. Manipulating a learning defender and ways to counteract. Advances in Neural Information Processing Systems, 32, 2019a.
- Gan et al. [2019b] Jiarui Gan, Haifeng Xu, Qingyu Guo, Long Tran-Thanh, Zinovi Rabinovich, and Michael Wooldridge. Imitative follower deception in stackelberg games. In Proceedings of the 2019 ACM Conference on Economics and Computation, pages 639–657, 2019b.
- Ghosh [2023] Arnob Ghosh. Provably efficient model-free rl in leader-follower mdp with linear function approximation. In Learning for Dynamics and Control Conference, pages 1112–1124. PMLR, 2023.
- Goktas et al. [2022] Denizalp Goktas, Sadie Zhao, and Amy Greenwald. Zero-sum stochastic stackelberg games. Advances in Neural Information Processing Systems, 35:11658–11672, 2022.
- Haghtalab et al. [2016] Nika Haghtalab, Fei Fang, Thanh H Nguyen, Arunesh Sinha, Ariel D Procaccia, and Milind Tambe. Three strategies to success: learning adversary models in security games. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, pages 308–314, 2016.
- Jiang et al. [2013] Albert Xin Jiang, Ariel D Procaccia, Yundi Qian, Nisarg Shah, and Milind Tambe. Defender (mis) coordination in security games. In Twenty-Third International Joint Conference on Artificial Intelligence, 2013.
- Jin et al. [2021] Chi Jin, Qinghua Liu, Yuanhao Wang, and Tiancheng Yu. V-learning–a simple, efficient, decentralized algorithm for multiagent rl. arXiv preprint arXiv:2110.14555, 2021.
- Kao et al. [2022] Hsu Kao, Chen-Yu Wei, and Vijay Subramanian. Decentralized cooperative reinforcement learning with hierarchical information structure. In International Conference on Algorithmic Learning Theory, pages 573–605, 2022.
- Klein [2017] Natalie Klein. Can international litigation solve the india-sri lanka fishing dispute? https://researchers.mq.edu.au/en/publications/can-international-litigation-solve-the-india-sri-lanka-fishing-di, 2017. Accessed on 01 October 2023.
- Lauffer et al. [2022] Niklas Lauffer, Mahsa Ghasemi, Abolfazl Hashemi, Yagiz Savas, and Ufuk Topcu. No-regret learning in dynamic stackelberg games. arXiv preprint arXiv:2202.04786, 2022.
- Leibo et al. [2017] Joel Z Leibo, Vinicius Zambaldi, Marc Lanctot, Janusz Marecki, and Thore Graepel. Multi-agent reinforcement learning in sequential social dilemmas. In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, pages 464–473, 2017.
- Letchford et al. [2009] Joshua Letchford, Vincent Conitzer, and Kamesh Munagala. Learning and approximating the optimal strategy to commit to. In Algorithmic Game Theory: Second International Symposium, SAGT 2009, Paphos, Cyprus, October 18-20, 2009. Proceedings 2, pages 250–262. Springer, 2009.
- Lin et al. [2020] Tianyi Lin, Chi Jin, and Michael Jordan. On gradient descent ascent for nonconvex-concave minimax problems. In International Conference on Machine Learning, pages 6083–6093. PMLR, 2020.
- Liu et al. [2021] Qinghua Liu, Tiancheng Yu, Yu Bai, and Chi Jin. A sharp analysis of model-based reinforcement learning with self-play. In International Conference on Machine Learning, pages 7001–7010, 2021.
- Mao and Başar [2023] Weichao Mao and Tamer Başar. Provably efficient reinforcement learning in decentralized general-sum markov games. Dynamic Games and Applications, 13(1):165–186, 2023.
- Meng et al. [2021] Min Meng, Xiuxian Li, Yiguang Hong, Jie Chen, and Long Wang. Decentralized online learning for noncooperative games in dynamic environments. arXiv preprint arXiv:2105.06200, 2021.
- Mertikopoulos et al. [2018] Panayotis Mertikopoulos, Christos Papadimitriou, and Georgios Piliouras. Cycles in adversarial regularized learning. In Proceedings of the twenty-ninth annual ACM-SIAM symposium on discrete algorithms, pages 2703–2717. SIAM, 2018.
- Ministry of WWF-Pakistan [2015] Ministry of WWF-Pakistan. National plan of action for combating illegal wildlife trade in Pakistan, 2015.
- Myerson [1981] Roger B Myerson. Optimal auction design. Mathematics of operations research, 6(1):58–73, 1981.
- Nguyen and Xu [2019] Thanh Nguyen and Haifeng Xu. Imitative attacker deception in Stackelberg security games. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pages 528–534, 2019.
- Orabona [2019] Francesco Orabona. A modern introduction to online learning. arXiv preprint arXiv:1912.13213, 2019.
- Peng et al. [2019] Binghui Peng, Weiran Shen, Pingzhong Tang, and Song Zuo. Learning optimal strategies to commit to. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 2149–2156, 2019.
- Roth et al. [2016] Aaron Roth, Jonathan Ullman, and Zhiwei Steven Wu. Watch and learn: Optimizing from revealed preferences feedback. In Proceedings of the forty-eighth annual ACM symposium on Theory of Computing, pages 949–962, 2016.
- Roughgarden [2010] Tim Roughgarden. Algorithmic game theory. Communications of the ACM, 53(7):78–86, 2010.
- Shalev-Shwartz et al. [2016] Shai Shalev-Shwartz, Shaked Shammah, and Amnon Shashua. Safe, multi-agent, reinforcement learning for autonomous driving. arXiv preprint arXiv:1610.03295, 2016.
- Sherali et al. [1983] Hanif D Sherali, Allen L Soyster, and Frederic H Murphy. Stackelberg-nash-cournot equilibria: characterizations and computations. Operations Research, 31(2):253–276, 1983.
- Song et al. [2021] Ziang Song, Song Mei, and Yu Bai. When can we learn general-sum markov games with a large number of players sample-efficiently? In International Conference on Learning Representations, 2021.
- Stackelberg [1934] Heinrich von Stackelberg. Marktform und gleichgewicht. J. Springer, 1934.
- Sun et al. [2023] Jingrui Sun, Hanxiao Wang, and Jiaqiang Wen. Zero-sum stackelberg stochastic linear-quadratic differential games. SIAM Journal on Control and Optimization, 61(1):252–284, 2023.
- Tambe [2011] Milind Tambe. Security and game theory: algorithms, deployed systems, lessons learned. Cambridge university press, 2011.
- Von Stengel and Zamir [2010] Bernhard Von Stengel and Shmuel Zamir. Leadership games with convex strategy sets. Games and Economic Behavior, 69(2):446–457, 2010.
- Wei et al. [2021] Chen-Yu Wei, Chung-Wei Lee, Mengxiao Zhang, and Haipeng Luo. Last-iterate convergence of decentralized optimistic gradient descent/ascent in infinite-horizon competitive markov games. In Conference on learning theory, pages 4259–4299, 2021.
- Wu et al. [2022] Jibang Wu, Haifeng Xu, and Fan Yao. Multi-agent learning for iterative dominance elimination: Formal barriers and new algorithms. In Conference on Learning Theory, pages 543–543. PMLR, 2022.
- Yu et al. [2022] Yaolong Yu, Haifeng Xu, and Haipeng Chen. Learning correlated stackelberg equilibrium in general-sum multi-leader-single-follower games. arXiv preprint arXiv:2210.12470, 2022.
- Zhao et al. [2023] Geng Zhao, Banghua Zhu, Jiantao Jiao, and Michael Jordan. Online learning in Stackelberg games with an omniscient follower. In International Conference on Machine Learning, pages 42304–42316. PMLR, 2023.
- Zheng et al. [2022] Stephan Zheng, Alexander Trott, Sunil Srinivasa, Nikhil Naik, Melvin Gruesbeck, David C Parkes, and Richard Socher. The AI economist: Improving equality and productivity with ai-driven tax policies. Science Advances, 2022.
- Zhong et al. [2023] Han Zhong, Zhuoran Yang, Zhaoran Wang, and Michael I Jordan. Can reinforcement learning find stackelberg-nash equilibria in general-sum markov games with myopically rational followers? Journal of Machine Learning Research, 24(35):1–52, 2023.
Appendix A Proofs for Section 4
A.1 Proof for Theorem 1
Table 2: Payoff matrix of the counterexample game used in the proof of Theorem 1. Each cell shows (leader reward, follower reward).

| Leader \ Follower | $b_1$ | $b_2$ |
|---|---|---|
| $a_1$ | (0.95, 0.3) | (0.9, 0.2) |
| $a_2$ | (1, 0.8) | (0, 0.79) |
We apply the UCB-UCB algorithm to the specific Stackelberg game depicted in Table 2, whose Stackelberg equilibrium is $(a_2, b_1)$. The UCB indices for the leader and the follower are defined respectively as
$$\hat{\mu}_l^t(a) + \sqrt{\frac{2 \log t}{N_t(a)}} \qquad \text{and} \qquad \hat{\mu}_f^t(a, b) + \sqrt{\frac{2 \log t}{n_t(a, b)}}.$$
For simplicity in demonstrating the proof, it is assumed that players receive their actual true (mean) rewards in each round, so the empirical means equal averages of the true means. Consider the bandit instance the follower faces when the leader plays $a_2$: the gap between its two responses is only $0.8 - 0.79 = 0.01$, so the follower's UCB indices for $b_1$ and $b_2$ remain close, and the follower keeps alternating between the two responses until $a_2$ has been played on the order of $\log t / 0.01^2$ times. While the follower is still alternating, roughly half of the leader's plays of $a_2$ yield reward $0$ (from the pair $(a_2, b_2)$), so the leader's empirical mean for $a_2$ stays near $0.5$, well below the $0.95$ it obtains from $a_1$, where the follower's gap of $0.1$ is resolved quickly and the follower settles on $b_1$. Let $N_T(a_2)$ denote the number of rounds in which the leader plays $a_2$ up to round $T$, and suppose for contradiction that $N_T(a_2)$ grows linearly in $T$. Then for sufficiently large $T$, the leader's UCB index for $a_2$ is approximately $0.5 + o(1)$, which is smaller than its index for $a_1$ (approximately $0.95$), which leads to a contradiction. Thus $N_T(a_2) = O(\log T)$, indicating that the leader plays the optimal action $a_2$ at most a logarithmic number of times. Consequently, in all but $O(\log T)$ rounds the leader incurs a per-round regret of at least $1 - 0.95 = 0.05$, so for sufficiently large $T$ the leader incurs regret linear in $T$.
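The failure mode can also be observed numerically. The sketch below simulates UCB-UCB on the Table 2 game with noisy Bernoulli feedback (rather than the noiseless rewards assumed above); the leader ends up playing the Stackelberg action $a_2$ only a vanishing fraction of the time.

```python
import numpy as np

rng = np.random.default_rng(0)
mu_l = np.array([[0.95, 0.9], [1.0, 0.0]])    # Table 2 leader means
mu_f = np.array([[0.3, 0.2], [0.8, 0.79]])    # Table 2 follower means
T = 200_000

nl = np.zeros(2); sl = np.zeros(2)            # leader counts / reward sums
nf = np.zeros((2, 2)); sf = np.zeros((2, 2))  # follower counts / sums, one bandit per leader action

for t in range(1, T + 1):
    if nl.min() == 0:                         # leader: vanilla UCB over its own actions
        a = int(np.argmin(nl))
    else:
        a = int(np.argmax(sl / nl + np.sqrt(2 * np.log(t) / nl)))
    if nf[a].min() == 0:                      # follower: vanilla UCB on the instance for a
        b = int(np.argmin(nf[a]))
    else:
        b = int(np.argmax(sf[a] / nf[a] + np.sqrt(2 * np.log(t) / nf[a])))
    nl[a] += 1; sl[a] += rng.binomial(1, mu_l[a, b])
    nf[a, b] += 1; sf[a, b] += rng.binomial(1, mu_f[a, b])

print(nl)   # the count for a2 stays tiny compared to T, i.e., the leader's regret grows linearly
```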
A.2 Proof for Theorem 2
Theorem (Last-iterate convergence of EXP3-UCB under limited information).
In the limited information setting of a repeated general-sum Stackelberg game with noisy bandit feedback, applying EXP3-UCB with suitably chosen learning rate $\eta$ and exploration rate $\gamma$, with probability at least $1 - \delta$, the probability $p_T(a^*)$ with which the leader plays the Stackelberg action in round $T$ satisfies $1 - p_T(a^*) \le \varepsilon(T)$,
where $\varepsilon(T) \to 0$ as $T \to \infty$ and depends on $A$, $B$, $\gamma$, $\delta$, the follower's gap $\Delta_f^{\min}$, and the leader's suboptimality reward gap to the Stackelberg equilibrium $\Delta_l = \mu_l(a^*, b^*) - \max_{a \neq a^*} \mu_l\big(a, \mathrm{br}(a)\big)$.
We first prove the following two lemmas, which will then be used in the proof of Theorem 2. Throughout this section, we drop indices and arguments that are clear from context when there is no ambiguity.
Lemma 1.
[Wu et al., 2022] For , , with probability at least ,
Proof of Lemma 1.
Fix and any action . Let and . Since is fixed, is bounded. Moreover, we have
By definition, is a martingale. Apply Azuma’s inequality to , for any , we have
(9) |
is an upper bound of . Note that
We take and in Eq.(9) and obtain
which implies with probability at least ,
∎
Lemma 2.
We set in , and . Denote as the number of rounds that the follower did not play best response when leader played , with probability at least , for all , we have
Similarly, we denote the number of wrong recognition rounds of in Algorithm 2 for every action as , with probability at least , for all , we have
Proof.
Combining Theorem 10.14 of Orabona [2019] with the Bernstein inequality, we obtain the results. ∎
Proof of Theorem 2.
We denote in this section when there is no ambiguity. For , based on E.4 of Yu et al. [2022], we have
(10) | ||||
where the quantity denotes the largest interval between the $i$-th time and the $(i+1)$-th time that the leader chooses a given action, for any $i$. With probability at least , we have
For any , we set , , by the union bound, with probability at least ,
(11) | ||||
Note that
So, combined with Eq.(11), we have reached our conclusion.
∎
Remark: Last-iterate convergence refers to the phenomenon where the sequence of actions converges to the optimal action, or the sequence of policies converges to the optimal policy, e.g., $p_t(a^*) \to 1$ as $t \to \infty$. It is a particularly strong form of convergence and is strictly stronger than average-iterate convergence [Wu et al., 2022, Abe et al., 2022, Cai et al., 2024]: an algorithm demonstrating last-iterate convergence is also capable of achieving average-iterate convergence, whereas the reverse is not necessarily true. Our findings corroborate this view: the last-iterate guarantees above imply the corresponding average-iterate (no-regret) guarantees.
The formulations of the bounds presented in Theorems 2 and 3 are conventional within the learning theory community, and their differences are attributable to the divergent analytical approaches applied to EXP3- and UCB-type algorithms. Generally speaking, Theorem 2 yields a slower decay rate in $T$ than the decay observed in Theorem 3. This distinction suggests a comparative advantage for Theorem 3, contingent upon a specific constraint on the exploration parameter. Nevertheless, given that both bounds are influenced by the suboptimality gaps, a thorough comparison necessitates consideration of these gap values.
A.3 Proof for Theorem 3
Theorem (Last-iterate convergence of UCBE-UCB under limited information).
In the limited information setting of a repeated general-sum Stackelberg game with noisy bandit feedback, applying UCBE-UCB with a suitably chosen exploration parameter $M$, for all $t$ larger than a threshold depending on $M$, $\delta$, and the suboptimality gaps, with probability at least $1 - \delta$, we have $(a_t, b_t) = (a^*, b^*)$.
Lemma 3.
Let , , set , when , with probability at least , we have
A similar result holds for the follower.
Proof of Lemma 3.
(1) When , by the Hoeffding inequality, with probability at least , we have
(2) When ,
So combining (1) and (2), when , with probability at least , we have
The bound for the UCB term of the follower is completely analogous and thus omitted. ∎
Proof of Theorem 3.
We next only consider rounds that the leader plays . With a little abuse of notation, is referred to as -th time that the leader plays the action . We denote . We ignore the subscript when there is no ambiguity.
Noticed that when , . And when , . So when total round () is big enough, the leader will play action for at least rounds.
We next bound the difference between the cumulative reward the leader receives and the cumulative reward under best responses,
And when , by Lemma 3, with probability at least ,
So when , and , with probability at least , we have
We set , which will guarantee and the following inequality holds based on above analysis
When , , then with probability at least , for all , we have,
So . Then by the union bound, with probability at least , we have
∎
Appendix B Proof for Section 6
B.1 Proof for Theorem 4
Theorem (Last-iterate convergence of EXP3-FMUCB with noisy side information).
When the leader uses EXP3 with suitably chosen parameters $\eta$ and $\gamma$ and the follower uses FMUCB, let $N_m(T)$ be the total number of rounds in which the follower did not play the best manipulation strategy within $T$ rounds. Then with probability at least $1 - \delta$, $N_m(T)$ is sublinear in $T$,
with a bound that depends on $A$, $B$, $\gamma$, $\delta$, and the suboptimality gaps.
We study the rounds in which the follower fails to play the best manipulation. We ignore the time subscript when there is no ambiguity, and distinguish the following two cases.
Wrong recognition of the worst responses
We denote the rounds that the follower did not find correct for all as . By Lemma 2, we have
Wrong recognition of the manipulation pair when the worst responses are correctly identified for all actions
We next study the wrong-manipulation rounds when the worst response is correctly identified for every action. For any action pair $(a, b)$, we consider whether it will be selected as the best manipulation pair, or under what condition it will be eliminated by Algorithm 2 when it is not the best manipulation pair. We classify and discuss two different situations:
(i) , but there exists , . As a reminder, . When and , by Lemma 3, the following inequality holds
which means that will be eliminated by Algorithm 2 through Line 4. So it needs at most extra rounds for action pair to eliminate the wrong manipulation pair .
When the leader plays more than rounds, because of the existence of uniform exploration parameter , with probability at least , . So by the union bound, with probability at least ,
B.2 Proof for Theorem 5
Theorem (Last-iterate convergence of UCBE-FMUCB with noisy side information).
When the leader uses UCBE with a suitably chosen exploration parameter $M$ and the follower uses FMUCB, let $N_m(T)$ be the total number of rounds in which the follower did not play the best manipulation strategy within $T$ rounds. Then with probability at least $1 - \delta$, $N_m(T)$ is sublinear in $T$,
and for all sufficiently large $t$, $(a_t, b_t) = (a^{\dagger}, b^{\dagger})$.
The leader who uses UCBE performs sufficient exploration on each action according to the analysis in Appendix A.3. So by the union bound, with probability at least ,
Similar to the proof of Theorem 3, we bound the difference between the cumulative reward the leader receives and the cumulative reward under the best manipulation strategy,
Based on the proof of Theorem 4 in Appendix B.1 and Lemma 2, we have
Analogous to the proof of Theorem 3 in Appendix A.3, we can complete the proof of Theorem 5.
Appendix C Follower’s pessimistic manipulation
We denote by $\mathcal{A}^*(\mathcal{B}_r) = \arg\max_{a \in \mathcal{A}} \mu_l(a, b(a))$ the set of leader actions attaining the largest manipulated reward. Because there may be multiple actions with the same largest value of $\mu_l(a, b(a))$, this set may contain more than one element. From the follower's perspective, the pessimistic assumption is that the leader breaks the tie with the action that is worst for the follower. Assuming pessimistic tie-breaking is reasonable when the leader uses a no-regret learning algorithm, since such a leader may not stably play any single action in $\mathcal{A}^*(\mathcal{B}_r)$. It is therefore important for the follower to choose a manipulation strategy that maximizes its own reward in the long run while taking the leader's tendency into account. When the follower assumes that the leader breaks ties pessimistically, the follower's optimization problem is
$$\max_{\mathcal{B}_r}\ \min_{a \in \mathcal{A}^*(\mathcal{B}_r)} \mu_f\big(a, b(a)\big) \qquad \text{s.t.} \qquad \mathcal{A}^*(\mathcal{B}_r) = \arg\max_{a \in \mathcal{A}} \mu_l\big(a, b(a)\big). \tag{12}$$