Unsynchronized Decentralized Q-Learning: Two Timescale Analysis by Persistence
An earlier version of this work containing preliminary results was presented at the 2023 IEEE Conference on Decision and Control.
Abstract
Non-stationarity is a fundamental challenge in multi-agent reinforcement learning (MARL), where agents update their behaviour as they learn. Many theoretical advances in MARL avoid the challenge of non-stationarity by coordinating the policy updates of agents in various ways, including synchronizing times at which agents are allowed to revise their policies. Synchronization enables analysis of many MARL algorithms via multi-timescale methods, but such synchronization is infeasible in many decentralized applications. In this paper, we study an unsynchronized variant of the decentralized Q-learning algorithm, a recent MARL algorithm for stochastic games. We provide sufficient conditions under which the unsynchronized algorithm drives play to equilibrium with high probability. Our solution utilizes constant learning rates in the Q-factor update, which we show to be critical for relaxing the synchronization assumptions of earlier work. Our analysis also applies to unsynchronized generalizations of a number of other algorithms from the regret testing tradition, whose performance is analyzed by multi-timescale methods that study Markov chains obtained via policy update dynamics. This work extends the applicability of the decentralized Q-learning algorithm and its relatives to settings in which parameters are selected in an independent manner, and tames non-stationarity without imposing the coordination assumptions of prior work.
keywords:
Multi-agent reinforcement learning, independent learners, learning in games, stochastic games, decentralized systems
MSC codes: 91A15, 91A26, 60J20, 93A14
Bora Yongacoglu Department of Mathematics and Statistics, Queen’s University Gürdal Arslan Department of Electrical Engineering, University of Hawaii at Manoa Serdar Yüksel Department of Mathematics and Statistics, Queen’s University
1 Introduction
Multi-agent systems are characterized by the coexistence of several autonomous agents acting in a shared environment. In multi-agent reinforcement learning (MARL), agents in the system change their behaviour in response to feedback information received in previous interactions, and the system is non-stationary from any one agent’s perspective. In this non-stationary environment, agents attempt to optimize their performance against a moving target [12]. This non-stationarity has been identified as one of the fundamental technical challenges in MARL [11]. In contrast to the rich literature on single-agent learning theory, the theory of MARL is relatively underdeveloped, due in large part to the inherent challenges of non-stationarity and decentralized information.
This paper considers learning algorithms for stochastic games, a common framework for studying MARL in which the cost-relevant history of the system is summarized by a state variable. We focus on stochastic games in which each agent fully observes the system’s state variable but does not observe the actions of other agents, exacerbating the challenge of non-stationarity.
Early theoretical work on MARL in stochastic games avoided the problem of non-stationarity by studying applications in which joint actions were observed by all agents [22, 21, 13]. There has also been interest in the local action learner setting, where actions are not shared between agents. Several rigorous contributions have recently been made in this setting, including the works of [1, 6, 31, 36, 37], to be discussed shortly.
Of the recent advances in MARL for the local action learner setting, many of the algorithms with strong guarantees have circumvented the challenge of non-stationarity by relying, implicitly or explicitly, on coordination between agents. In particular, several works in the regret testing tradition rely on some form of synchronization, whereby agents agree on the times at which they may revise their behaviour and are constrained to fix their policies during the intervening periods. This synchronized approach allows for the use of two timescale analyses common to many single- and multi-agent reinforcement learning paradigms, in which the behaviour of value function estimates and the behaviour of the joint policy process can be effectively uncoupled and analyzed separately [18, 3, 15].
A variety of regret testing algorithms have been proposed for different classes of games, including [9, 10, 1] and [37]. These algorithms differ in their policy update rules, which are typically chosen to exploit some underlying structure of particular classes of games, from better/best-response dynamics in [1] to arbitrary $\epsilon$-satisficing policy update rules in [37]. Despite such differences, previous regret testing algorithms are united in their reliance on synchronization to facilitate the analysis of learning iterates and to guarantee accuracy of value function estimates. While this is justifiable in some settings, it can be restrictive in others, including applications where parameters are selected in a distributed or decentralized manner, as well as settings in which parameter coordination can be affected by communication noise. As such, it would be desirable to provide MARL algorithms that do not require synchronization but still come with rigorous performance guarantees in the local action learner setting.
This paper studies weakly acyclic stochastic games, where the definition of weak acyclicity is in terms of single-agent best-responding. In this context, our aim is to give guarantees of convergence to stationary deterministic equilibrium policies. We find that, with slight modification, the decentralized Q-learning algorithm of [1] can be made to tolerate unsynchronized policy updates, and that strong coordination assumptions about parameter choice and timing are not necessary. This constitutes a direct generalization of the algorithm and results in [1]. The key modification involves utilizing constant learning rates in the Q-function updates, replacing the decreasing learning rates used in [1] and in all previous regret testing algorithms. Unlike decreasing learning rates, which place relatively large weight on data encountered in the distant past, constant learning rates allow agents to rapidly correct their Q-factor estimates by discarding outdated information. Additionally, we retain the use of random inertia in the policy update, which we show acts as a decentralized stabilizing mechanism.
Although the technical exposition here focuses on weakly acyclic games, the analytic insights in this paper are not limited to this setting. In fact, the results of this paper can be generalized in various directions by studying different classes of games or different policy revision mechanisms, or by selecting different sets of policies as the target for convergence. We now list several other regret testing results that can be generalized to accommodate unsynchronized agents by employing constant learning rates and inertia in the policy update.
Teams and Optimality. Stochastic teams and their generalization, common interest games, constitute an important special case of weakly acyclic games, with applications to distributed control [36]. In common interest games, one typically seeks team optimal policies, a subset of equilibrium policies that simultaneously achieves minimum cost for each player and may not exist in more general classes of games. A modification of the decentralized Q-learning algorithm was proposed in [36], which incorporated memory of past performance as well as random exploration of players’ policy spaces in order to find team optimal policies. That algorithm, however, was developed in the synchronized setting, where the analysis of learning iterates was decoupled from policy dynamics. It can be further adapted to accommodate unsynchronized policy updates by utilizing constant learning rates.
General games and $\epsilon$-equilibrium. Rather than restricting players to revise policies via inertial best-responding, one may be interested in more general revision mechanisms, to model settings where players are more exploratory in nature. If one imposes a stopping condition, such as refraining from updating when one is already $\epsilon$-best-responding, then the policy dynamics can be studied using the theory of $\epsilon$-satisficing paths presented by [37]. In [37], it was shown that a synchronized variant of decentralized Q-learning, which used random search of the policy space when not $\epsilon$-best-responding, drove policies to $\epsilon$-equilibrium in symmetric stochastic games. With appropriate modifications, a similar result can be obtained without synchronization.
General games and “cumber" sets. In general stochastic games, the set of stationary deterministic equilibrium policies can be empty. Even if this set is not empty, in general, it is not guaranteed that inertial best-response dynamics drives play to equilibrium. The notions of cusber sets (introduced in [14]) and cumber sets ([36]) are useful for studying the dynamics of inertial best-responding in general games. Cusber sets are subsets of policies that are closed under single-agent best-responses, while cumber sets are defined as being closed under multi-agent best-responses. It was previously shown in [36] that variants of decentralized Q-learning drive play to minimal cumber sets even in general games, though play may cycle between several policies within this minimal set. The results of the present paper can be combined with the analysis of [36] to guarantee convergence of unsynchronized decentralized Q-learning to cumber sets in general games.
Contributions: In this paper, we offer a means to relax the synchronization assumptions common in the regret testing paradigm of MARL while retaining desirable convergence guarantees. Our solution combines inertial policy updating with persistent learning via non-decreasing learning rates. We provide mathematical details for a particular set-up involving the unsynchronized variant of the decentralized Q-learning algorithm of [1], a recent MARL algorithm designed for weakly acyclic stochastic games, a class of games that admits teams and potential games as important special cases. We do not impose synchronization of policy revision times, which is a common and restrictive assumption in prior work. We show that persistent learning, when combined with inertial policy updating, provides a mechanism for taming non-stationarity and mitigating the moving target problem of MARL without coordinating agents’ parameter choices ahead of play. In particular, we show that this algorithm drives policies to equilibrium with arbitrarily high probability, under appropriate parameter selection. As we discuss below in Section 3, this appears to be the first formal analysis of an unsynchronized algorithm in the regret testing tradition. We also note that the same analysis can be adapted and applied to a variety of related learning algorithms, which all impose the synchronization condition and are analyzed via multi-timescale methods. Thus, the results of this paper establish that persistent learning can address analytic challenges relating to unsynchronized algorithms in a variety of multi-agent learning environments above and beyond weakly acyclic games.
Notation: For a standard Borel space $E$, we let $\mathcal{P}(E)$ denote the set of probability measures on $E$ with its Borel $\sigma$-algebra. For standard Borel spaces $E$ and $F$, we let $\mathcal{P}(F \mid E)$ denote the set of transition kernels on $F$ given $E$. For a finite set , we let denote the set of functions from to . We let $\mathrm{Unif}(K)$ denote the uniform probability distribution on a compact set $K$. We let $\mathbb{Z}_{\geq 0}$ denote the set of non-negative integers. To denote that a random variable $Z$ has distribution $\mu$, we write $Z \sim \mu$.
Terminology: We use the terms player and agent interchangeably. When considering a particular player and referring to other players in the same game, these other players may be variously called counterparts or counterplayers.
1.1 Organization
The remainder of the paper is organized as follows. In Section 2, we present the model of stochastic games. Related work is surveyed in Section 3. In Section 4, we present the unsynchronized decentralized Q-learning algorithm and we state our main result, Theorem 4.1, on high probability guarantees for equilibrium in weakly acyclic stochastic games. The proof of Theorem 4.1 is outlined in Section 5. The details of the proof of Theorem 4.1 are presented in Section 6. The results of a simulation study are presented in Section 7, and the final section concludes. A glossary of notation is provided in Appendix A.
2 Model
A stochastic game is described by a list:
(1)  $\mathcal{G} = \left( \mathcal{N}, \mathbb{X}, \{\mathbb{A}^i\}_{i \in \mathcal{N}}, \{c^i\}_{i \in \mathcal{N}}, \{\beta^i\}_{i \in \mathcal{N}}, \nu_0, P \right)$
The components of $\mathcal{G}$ are as follows: $\mathcal{N}$ is a finite set of agents. The set $\mathbb{X}$ is a finite set of system states. For each player $i \in \mathcal{N}$, $\mathbb{A}^i$ is $i$’s finite set of actions. We write $\mathbb{A} := \times_{i \in \mathcal{N}} \mathbb{A}^i$, and refer to elements of $\mathbb{A}$ as joint actions. For each player $i$, a function $c^i : \mathbb{X} \times \mathbb{A} \to \mathbb{R}$ determines player $i$’s stage costs, which are aggregated using a discount factor $\beta^i \in (0,1)$. The initial system state $x_0$ has distribution $\nu_0 \in \mathcal{P}(\mathbb{X})$, and state transitions are governed by a transition kernel $P \in \mathcal{P}(\mathbb{X} \mid \mathbb{X} \times \mathbb{A})$.
At time $t \in \mathbb{Z}_{\geq 0}$, the state variable is denoted by $x_t$, and each player $i$ selects an action $a^i_t \in \mathbb{A}^i$ according to its policy, to be described shortly. The joint action at time $t$ is denoted $\mathbf{a}_t$. Each player $i$ then incurs a cost $c^i(x_t, \mathbf{a}_t)$, and the state variable evolves according to $x_{t+1} \sim P(\cdot \mid x_t, \mathbf{a}_t)$. This process is then repeated at time $t+1$, and so on.
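To make this interaction protocol concrete, the following minimal sketch simulates a few stage games of a small, randomly generated two-player instance. The tabular arrays `costs` and `kernel` and the uniformly random action choices are illustrative assumptions, not part of the model above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite game: two players, n_states states, n_actions actions per player.
n_states, n_actions = 2, 2
costs = [rng.integers(0, 5, size=(n_states, n_actions, n_actions)) for _ in range(2)]  # c^i(x, a^1, a^2)
kernel = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions, n_actions))       # P(. | x, a^1, a^2)

x = rng.integers(n_states)                                       # x_0 ~ nu_0 (uniform here)
for t in range(5):
    a = tuple(rng.integers(n_actions) for _ in range(2))         # each player picks an action
    stage_costs = [costs[i][x, a[0], a[1]] for i in range(2)]    # player i incurs c^i(x_t, a_t)
    x = rng.choice(n_states, p=kernel[x, a[0], a[1]])            # x_{t+1} ~ P(. | x_t, a_t)
```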
A policy for player is an action selection rule that, at each time , selects player ’s action in some (possibly random) manner that depends only on information locally available to player at time . Using to denote the information that is locally available to player at time , the following standing assumption specifies the decentralized information structure considered in this paper.
Assumption 1 (Local action learners).
For each , player ’s information variables are given by and
Under this information structure, each player observes the complete system state without noise, and additionally observes its own actions and its own cost realizations, but does not observe the actions of other players directly. This local action learner (LAL) information structure is common in the literature on MARL, with examples such as [1, 5, 6, 25, 31, 36, 37], and is sometimes called the independent learning paradigm. (This terminology is, however, not uniform, as independent learning has also recently been used to refer to learners that update their policies in a (perhaps myopic) self-interested manner [30]. In response to the changing conventions, we recommend that learners who condition only on their local action history should be called local action learners, in analogy to joint action learners.) The LAL paradigm is one of the principal alternatives to the joint action learner (JAL) paradigm, studied previously in [20] and [21] among others. The key difference is that the JAL paradigm assumes that each agent gets to observe the complete joint action profile after it is played; that is, in the JAL paradigm, past joint actions are included in each agent’s information variables.
Definition 2.1.
For and , let . A sequence is called a policy for player if for each .
We let denote player ’s set of policies. When following a policy , player selects actions according to .
In general, action selection can incorporate randomness, and players may use arbitrarily complicated, history-dependent policies. However, our analysis will focus on stationary (Markov) policies, a subset of policies that randomly select actions in a time invariant manner that conditions only on the currently observed state variable. Such a restriction entails no loss in optimality for a particular player, provided the remaining players also use stationary policies. Focusing on stationary policies is quite natural: we refer the reader to [19] for an excellent elaboration.
Definition 2.2.
For , a policy is called stationary, if the following holds for any and any :
where and denote the final components of and , respectively.
The set of stationary policies for player is denoted and we identify with , the set of transition kernels on given . Henceforth, unqualified reference to a policy shall be understood to mean a stationary policy, while non-stationary policies will be explicitly identified as such.
Definition 2.3.
For , , a policy is called -soft if for all . A policy is called soft if it is -soft for some .
Definition 2.4.
A policy is called deterministic if for each , there exists such that .
The set of deterministic stationary policies for player is denoted by and is identified with the set of functions from to .
Notation: We let denote the set of joint policies. To isolate player ’s component in a particular joint policy , we write , where is used in the agent index to represent all agents other than . Similarly, we write the joint policy set as , and so on.
For any joint policy and initial distribution , there is a unique probability measure on the set of state-action trajectories . We denote this measure by , and let denote its expectation. We use this to define player ’s value function:
When is the Dirac measure of some state , we write instead of . For , we will also write to isolate the role of .
Definition 2.5.
Let , . A policy is called an -best-response to if, for every ,
The set of -best-responses to is denoted . It is well-known that for any , player ’s set of 0-best-responses is non-empty, and the infimum above is in fact attained by some policy in : that is, .
Definition 2.6.
Let . A joint policy is called an -equilibrium if for all .
For , we let denote the set of -equilibrium policies. It is known that the set is non-empty [7]. We also let denote the subset of stationary deterministic -equilibrium policies, which may be empty in general.
2.1 Weakly Acyclic Stochastic Games
We now introduce weakly acyclic games, an important subclass of games that will be the main focus of this paper.
Definition 2.7.
A sequence in is called a strict best-response path if for any there is a unique player such that and .
Definition 2.8.
The stochastic game is weakly acyclic if (i) , and (ii) for any , there is a strict best-response path from to some .
The multi-state formulation above was stated in [1], though weak acyclicity had previously been studied in stateless repeated games [38]. An important special case is that of stochastic teams, where $c^i = c^j$ and $\beta^i = \beta^j$ for any players $i, j \in \mathcal{N}$, and the interests of all agents are perfectly aligned. Markov potential games with finitely many states constitute another special case of weakly acyclic games [28, 16, 39].
2.2 Q-Functions in Stochastic Games
In the stochastic game , when player ’s counterplayers use a stationary policy , player faces an environment that is equivalent to a single-agent MDP. The MDP in question depends on the policy as well as the game , and (stationary Markov) optimal policies for this MDP are equivalent to 0-best-responses to in the game .
Player $i$’s best-responses to a policy $\pi^{-i}$ can be characterized using an appropriately defined Q-function, $Q^{*i}_{\pi^{-i}}$. (We use the terms Q-function and (state-) action-value function interchangeably.) The function $Q^{*i}_{\pi^{-i}}$ can be defined by a fixed point equation of a Bellman operator, but here we give an equivalent definition in terms of the optimal policy of the corresponding MDP:
(2)  $Q^{*i}_{\pi^{-i}}(x, a^i) = \mathbb{E}\Big[ c^i(x_0, \mathbf{a}_0) + \sum_{t \geq 1} (\beta^i)^t c^i(x_t, \mathbf{a}_t) \Big]$
for all $(x, a^i) \in \mathbb{X} \times \mathbb{A}^i$, where $x_0 = x$, $a^i_0 = a^i$, and, for $t \geq 1$, $a^i_t = \pi^{*i}(x_t)$ with $\pi^{*i}$ an optimal policy for the MDP induced by $\pi^{-i}$, and where $a^j_t \sim \pi^j(\cdot \mid x_t)$ for every $j \neq i$ and $t \geq 0$.
Definition 2.9.
For and , we define
The set is the set of stationary deterministic policies that are -greedy with respect to . The function plays the role of an action-value function, and for , we have .
When the remaining players follow a stationary policy, player can use Q-learning to estimate its action-values, which can then be used to estimate a 0-best-response policy. In the following sections, we observe that the situation is more complicated when the remaining players employ learning algorithms that adapt to feedback and revise their action selection probabilities over time.
3 Related Work
In this paper, we are interested in MARL for stochastic games in the local action learner setting of Assumption 1. In this framework, players do not have knowledge of their cost functions, the transition kernel $P$, or perhaps even the existence or nature of other players in the system. We focus on the online setting, where players do not have access to a generative model for sampling state transitions or costs, and instead must rely on sequential interactions with the multi-agent system in order to gather data.
At a high level, our objective is to study and design learning algorithms that drive play of the game to stable and individually rational joint policies. Our objectives are closely aligned with the desiderata of MARL algorithms outlined in [4], namely convergence and rationality. In particular, we wish to establish that joint policies converge to some stationary ($\epsilon$-)equilibrium policy.
A number of MARL algorithms have been proposed with the same high-level objectives as ours. These algorithms differ from one another in terms of how and when incoming feedback data is processed and how and when the agent’s policy is accordingly modified. In this section, we survey some related work that is especially relevant to our chosen setting of MARL in stochastic games with local action learners.
3.1 Q-Learning and its Challenges
In decentralized MARL settings, where agents have minimal information about strategic interactions with other players, one natural and commonly studied approach to algorithm design is to treat each agent’s game environment as a separate single-agent RL environment. Indeed, when the remaining players use a stationary policy , then player truly does face a strategic environment equivalent to an MDP, and can approach its learning task with single-agent Q-learning.
Picking some $\hat{Q}^i_0 \in \mathbb{R}^{\mathbb{X} \times \mathbb{A}^i}$ to serve as an initial estimate of its optimal Q-function, player $i$ would iteratively apply the following update rule:
(3)  $\hat{Q}^i_{t+1}(x_t, a^i_t) = \left(1 - \alpha_t\right) \hat{Q}^i_t(x_t, a^i_t) + \alpha_t \left[ c^i_t + \beta^i \min_{b^i \in \mathbb{A}^i} \hat{Q}^i_t(x_{t+1}, b^i) \right]$
where $\alpha_t \in [0,1]$ is a random (that is, history-dependent) learning rate that depends on the state-action pair $(x_t, a^i_t)$, and $\hat{Q}^i_{t+1}(x, a^i) = \hat{Q}^i_t(x, a^i)$ for all $(x, a^i) \neq (x_t, a^i_t)$.
If the learning rate is decreased at an appropriate rate in the number of visits to the state-action pair $(x_t, a^i_t)$, and if every state-action pair is visited infinitely often, then one has $\hat{Q}^i_t \to Q^{*i}_{\pi^{-i}}$ almost surely [35]. We recall that $Q^{*i}_{\pi^{-i}}$ was defined in Section 2 as the optimal Q-function for player $i$ when playing against the stationary policy $\pi^{-i}$.
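As a concrete illustration, a minimal tabular implementation of one step of the update (3) for a single player might look as follows; the array shapes, the visit-count-based step size, and the function signature are assumptions made here for illustration.

```python
import numpy as np

def q_update(Q, counts, x, a, cost, x_next, beta):
    """One single-agent Q-learning step of the form (3), with a 1/(visit count) step size.

    Q:      (n_states, n_actions) array of player i's Q-factor estimates.
    counts: (n_states, n_actions) array of visit counts, used to set alpha = 1/n(x, a).
    """
    counts[x, a] += 1
    alpha = 1.0 / counts[x, a]                  # decreasing, visit-based learning rate
    target = cost + beta * Q[x_next].min()      # min over next actions, since costs are minimized
    Q[x, a] = (1.0 - alpha) * Q[x, a] + alpha * target
    return Q
```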
In order to asymptotically respond (nearly) optimally to the policy of its counterparts, player must exploit the information carried by its learning iterates. Since one must balance this exploitation with persistent exploration, a standard approach in algorithm design is to periodically revise an agent’s policy—either after every system interaction or less frequently—in a manner that places higher probability on actions with better action values. A widely used device for balancing exploration and exploitation is Boltzmann action selection, wherein
$\pi^i_t(a^i \mid x_t) = \dfrac{\exp\left(-\hat{Q}^i_t(x_t, a^i)/\tau_t\right)}{\sum_{b^i \in \mathbb{A}^i} \exp\left(-\hat{Q}^i_t(x_t, b^i)/\tau_t\right)},$
where $\tau_t > 0$ is a temperature parameter that is typically taken to decrease to 0 as $t \to \infty$, and player $i$’s Q-factor estimates $\hat{Q}^i_t$ are constructed from the local history observed by player $i$.
Boltzmann action selection is not the only feasible alternative for balancing the trade-off between exploration and exploitation in this context; several other greedy in the limit with infinite exploration (GLIE) mechanisms may also be used to the same effect [33].
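For reference, a Boltzmann action-selection rule of the kind just described can be sketched as below; the sign convention (lower Q-factors receive higher probability, since costs are minimized) follows the formula above, while the function name and interface are illustrative.

```python
import numpy as np

def boltzmann_action(Q_row, tau, rng):
    """Sample an action from a Boltzmann distribution over one row Q(x, .) of Q-factors."""
    logits = -np.asarray(Q_row, dtype=float) / tau   # lower cost estimates -> higher probability
    logits -= logits.max()                           # shift for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)
```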
In the preceding equations, one observes that $\pi^i_t$ is simply an alternative representation of the policy actually followed by player $i$. Since player $i$’s Q-factor estimates and other relevant parameters (e.g. the temperature sequence) change with time, any adaptive learning algorithm that exploits learned iterates will be non-stationary. If other players use analogous learning algorithms and exploit their own learned information, then other players will also follow non-stationary policies; that is, the joint policy $\pi^{-i}_t$ of the remaining players varies with $t$. In this case, the strategic environment defined by the non-stationary policy $\pi^{-i}_t$ is not generally equivalent to an MDP, and there is no guarantee that the Q-factor estimates given by (3) will converge to some meaningful quantity, if they are to converge at all.
Provable non-convergence of Q-iterates obtained by single-agent Q-learning in small, stateless games has been established [17]. Similar non-convergence had previously been observed empirically in stochastic games [4, 34]. Thus, some modification is needed in order to employ algorithms based on Q-learning in multi-agent settings. Modifications may involve changes to how and when Q-factor updates are performed or may involve changes to how and when action selection probabilities are updated.
3.2 Algorithms for Local Action Learners
As described in the previous section, myopic algorithms that repeatedly change an agent's policy during the course of interaction can be effective in single-agent settings, when an agent faces a stationary environment. By contrast, when all agents employ such algorithms in a multi-agent system, each agent faces a non-stationary environment and the same convergence guarantees need not apply. In addition to the previously described non-convergence of Q-factors, provable non-convergence of policies under particular learning algorithms has also been observed in various settings [8, 26, 27].
With the understanding that some modification of single-agent RL algorithms is needed to address the challenge of non-stationarity in MARL, several multi-agent algorithms have been proposed that use multiple timescales. The key feature of the multi-timescale approach is that certain iterates—whether policy parameters, Q-factor estimates, or otherwise—are updated slowly relative to other iterates. In effect, this allows one to analyze the rapidly changing iterates as though the slowly changing iterates were held fixed, artificially mimicking stationarity.
Some works have studied MARL algorithms in which a subset of agents update their policies slowly while the remaining agents update their policy parameters quickly. A two-timescale policy gradient algorithm was studied for two-player, zero-sum games in [6], where a performance guarantee was proven for the minimizing player. Other works, including [29], have empirically studied MARL algorithms in which the agents alternate between updating their policies slowly and quickly according to some commonly accepted schedule.
A second type of multi-timescale algorithm updates various policy-relevant parameters quickly relative to value function estimates, which are updated relatively slowly. This approach can be found in [30, 31] and [32]. This can be contrasted with a third type of multi-timescale algorithm, in which policies are updated relatively slowly while value functions are updated relatively quickly. For a recent example of this, see [23].
A further class of algorithms that retains the same essential quality of multi-timescale algorithms is the regret testing approach, outlined in the next subsection.
3.3 Regret Testers
In the MARL algorithms surveyed above, the agents update all relevant parameters after each system interaction, and the multi-timescale analysis is implemented by controlling the magnitude of these every-step updates via step-size constraints. An alternative approach is to update certain iterates after each system interaction while updating other iterates infrequently, with many system interactions between updates. Such is the approach taken by works in the tradition of regret testing, pioneered by Foster and Young in [9], which presented an algorithm for stateless games. This approach was later studied by [10] and [24] among others in the context of stateless repeated games, where impressive convergence guarantees were proven. The strong results of [10] and [24] in stateless games were enabled to a significant extent by the absence of state dynamics, which would otherwise complicate value estimation and policy evaluation in multi-state games.
In [1], the regret testing paradigm of Foster and Young was modified for multi-state stochastic games, where one must account for both the immediate cost of an action and also the cost-to-go, which depends on the evolution of the state variable. The decentralized Q-learning algorithm of [1] instructs agents to agree on an increasing sequence of policy update times and to fix their policies within the intervening intervals, called exploration phases. (Although the terminology of exploration phases first appeared in [1], the exploration phase approach was pioneered by [9].) In so doing, the joint policy process is fixed over each exploration phase, and within each exploration phase, each agent faces a Markov decision problem (MDP). Each agent then runs a standard Q-learning algorithm within the exploration phase, and updates its policy in light of its Q-factors at the end of the exploration phase. Since each agent faces an MDP, one can analyze the learning iterates using single-agent learning theory. Provided the exploration phase lengths are sufficiently long, the probability of correctly recovering one's Q-function (defined with respect to the policies of the other agents and elaborated in the coming sections) can be made arbitrarily close to 1.
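The synchronized structure just described can be summarized schematically as follows. The environment and agent interfaces (`env.reset`, `env.step`, `ag.act`, `ag.q_update`, `ag.revise_policy`) and the shared list of phase lengths are illustrative assumptions, not the exact pseudocode of [1].

```python
def synchronized_regret_testing(env, agents, phase_lengths):
    """Schematic synchronized exploration-phase loop: all agents share the same phases."""
    x = env.reset()
    for T_k in phase_lengths:                # common exploration phases, agreed on by all agents
        for _ in range(T_k):                 # baseline policies are frozen within the phase ...
            actions = [ag.act(x) for ag in agents]           # baseline policy plus experimentation
            costs, x_next = env.step(actions)
            for ag, a, c in zip(agents, actions, costs):
                ag.q_update(x, a, c, x_next)                 # ... so each agent learns in a fixed MDP
            x = x_next
        for ag in agents:
            ag.revise_policy()               # e.g. inertial best-response against learned Q-factors
```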
In many of the papers on regret testers, the proofs of convergence to equilibrium share the same core elements, which can be summarized as follows. First, one defines an equilibrium event. (In the fully synchronized regime described above, this equilibrium event is usually the event that the joint policy of the exploration phase is an equilibrium.) Second, one uniformly lower bounds the probability of transiting from non-equilibrium to equilibrium in a fixed number of steps. Third, one shows that the probability of remaining at equilibrium once at equilibrium can be made arbitrarily large relative to the aforementioned lower bound. Fourth and finally, one obtains a high probability guarantee of equilibrium by analyzing the ratio of entry and exit probabilities to equilibrium policies (see, for instance, [9, Lemma 2]).
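The fourth step can be illustrated with a crude two-state reduction of the policy process; the symbols $p$ and $q$ below are introduced only for this illustration, and the reduction is not the exact argument of [9, Lemma 2].

```latex
% Suppose the (coarse-timescale) policy process enters the equilibrium event from
% outside it with probability at least p > 0 per step, and exits the equilibrium
% event with probability at most q per step. The invariant distribution of the
% dominating two-state chain then satisfies
\[
  \Pr(\text{equilibrium}) \;\geq\; \frac{p}{p + q},
\]
% which can be driven arbitrarily close to 1 by making the exit probability q
% small relative to the (fixed) entry lower bound p.
```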
In effect, the exploration phase technique decouples learning from adaptation, and allows for separate analysis of learning iterates and policy dynamics. This allows for approximation arguments to be used, in which the dynamics of the policy process resemble those of an idealized process where players obtain noise-free learning iterates for use in their policy updates. This has led to a series of theoretical contributions in MARL that make use of the exploration phase technique, including [36] and [37].
One natural criticism of the exploration phase technique described above is its reliance on synchronization of policy updating. In the description above, agents agree on the policy update times exactly, and no agent ever updates its policy strictly within an exploration phase. Indeed, the mathematically convenient assumption of synchronization is made in various works in the regret testing tradition, including [9, 10] and [24]. This can be justified in some settings, but is demanding in decentralized settings where parameters are selected locally by each player or in distributed settings where parameter choices are communicated over noisy channels and are likely to differ across agents.
To relax the synchronization assumption of regret testers, one allows each player to select its own sequence of exploration phase times, and each player then has the opportunity to revise its policy only at the ends of its own exploration phases. We observe that an update time for one player may fall strictly within an exploration phase of some other player. In this case, the environment faced by the latter player during its exploration phase is non-stationary, having been disrupted (at least) by the former player's changing policies.
Intuitively, unsynchronized policy updating may be problematic for regret testers because the action-value estimates of a given player depend on historical data from that player’s most recent exploration phase, which in turn depends on the (time-varying) joint policies used during this period. As such, if other players change their policies during an individual’s exploration phase, the individual receives feedback from different sources, and its learning iterates may not approximate any quantity relevant to the prevailing environment at the time of the individual’s policy update. These changes of policy during an exploration phase constitute potential disruptions of a player’s learning, and analysis of the overall joint policy process is difficult when players do not reliably learn accurate action-values.
In this unsynchronized regime, learning and adaptation are not decoupled, as agents do not face stationary MDPs during each of their exploration phases. In particular, it is not guaranteed that a given player’s Q-factors will converge. Consequently, it is not clear that the second and (especially) the third steps of the standard line of proof described above can be reproduced in the unsynchronized setting.
In [24], a heuristic argument suggested that the use of inertia in policy updating may allow one to relax the assumption of perfect synchronization in regret testing algorithms for stateless repeated games. This argument offers a promising starting point, but also has two potential gaps. The first unresolved aspect pertains to the ratio of entry and exit probabilities. The heuristic argument correctly states that one can use inertia to both (i) lower bound the probability of transiting to equilibrium from out of equilibrium, and (ii) make the probability of remaining at equilibrium arbitrarily close to 1. However, the analysis of the overall joint policy process depends on the ratio of these entry and exit probabilities, which depends on the choice of inertia parameter and may be somewhat challenging to analyze.
A second unresolved aspect has to do with the learning routine and how quickly errors are corrected. Here, the specific learning rate used in each agent’s Q-learning algorithm plays an important role. Existing regret testing algorithms employ a decreasing learning rate, so as to ensure the convergence of learning iterates within a synchronized exploration phase. In the unsynchronized regime, however, agents learn against time-varying environments. If an agent uses a decreasing learning rate, it gives relatively high weight to outdated data that are no longer representative of its current environment. As such, it may take a long, uninterrupted stretch of learning against the prevailing environment to correct the outdated learning. The time required will depend on how much the learning rate parameters have been reduced, and these considerations must be quantified explicitly in the analysis. This point is somewhat subtle, and may pose a challenge if one attempts to use decreasing learning rates in the unsynchronized regime.
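The contrast between the two step-size regimes can be made explicit for a single Q-factor coordinate; the unrolled recursion below is a simplified illustration that assumes the coordinate in question is updated at every step with targets $T_1, \dots, T_k$ drawn from the new environment.

```latex
% Constant step size alpha in (0,1): unrolling Q_j = (1-alpha) Q_{j-1} + alpha T_j gives
\[
  \hat{Q}_{k} \;=\; (1-\alpha)^{k}\, \hat{Q}_{0} \;+\; \alpha \sum_{j=1}^{k} (1-\alpha)^{k-j}\, T_{j},
\]
% so the influence of the outdated estimate \hat{Q}_0 decays geometrically in the
% number k of uninterrupted post-switch updates. By contrast, with the step size
% alpha_n = 1/n the estimate is a running average of all targets seen so far, and
% the aggregate weight placed on n_0 pre-switch targets after k further updates is
% n_0 / (n_0 + k), which decays only hyperbolically.
```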
In this paper, we formalize the heuristic argument of [24] and show that it is effectively correct, with minor caveats. To rectify the unaddressed aspects mentioned above, we modify the Q-learning protocol to use a constant learning rate and conduct a somewhat involved analysis of the resulting behaviour. We focus on weakly acyclic games and best-response-based policy updating routines, but the analysis from this setting can be used to study other policy update routines and classes of games, including -satisficing policy update dynamics such as those described in [37].
4 Unsynchronized Decentralized Q-Learning
In this section, we present Algorithm 1, an unsynchronized variant of the decentralized Q-learning algorithm of [1]. Unlike the original decentralized Q-learning algorithm, Algorithm 1 allows the sequence of exploration phase lengths to vary by agent. Although this change introduces non-stationarity and thus complicates the analysis, the algorithm itself is no more intricate than that of [1].
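For readers who prefer code to pseudocode, the following compressed sketch reflects the verbal description of Algorithm 1 given in this paper: constant learning rates, per-agent exploration phase lengths, uniform experimentation around a deterministic baseline policy, and inertial policy updates. The environment interface (`env.reset`, `env.step`, `env.beta`), the parameter names, and the default values are illustrative assumptions and not the authors' exact pseudocode.

```python
import numpy as np

def unsynchronized_dq_learning(env, n_agents, n_states, n_actions, phase_lengths,
                               alpha=0.05,   # constant Q-factor step size (the key change vs. [1])
                               rho=0.15,     # experimentation probability
                               delta=0.1,    # tolerance for sub-optimality in the Q-factors
                               lam=0.35,     # inertia: probability of keeping the current policy
                               T=100_000, seed=0):
    rng = np.random.default_rng(seed)
    Q = [np.zeros((n_states, n_actions)) for _ in range(n_agents)]
    pi = [rng.integers(n_actions, size=n_states) for _ in range(n_agents)]   # deterministic baselines
    phase_idx = [0] * n_agents
    next_update = [phase_lengths[i][0] for i in range(n_agents)]             # per-agent update times
    x = env.reset()
    for t in range(T):
        # Behaviour policy: follow the baseline w.p. 1 - rho, experiment uniformly w.p. rho.
        a = [pi[i][x] if rng.random() > rho else rng.integers(n_actions) for i in range(n_agents)]
        costs, x_next = env.step(a)
        for i in range(n_agents):
            target = costs[i] + env.beta * Q[i][x_next].min()
            Q[i][x, a[i]] = (1 - alpha) * Q[i][x, a[i]] + alpha * target     # constant learning rate
            if t + 1 == next_update[i]:      # end of player i's own exploration phase
                if any(Q[i][s, pi[i][s]] > Q[i][s].min() + delta for s in range(n_states)):
                    if rng.random() > lam:   # inertial update: switch only w.p. 1 - lam
                        pi[i] = np.array([rng.choice(np.flatnonzero(Q[i][s] <= Q[i][s].min() + delta))
                                          for s in range(n_states)])
                phase_idx[i] += 1            # assumes phase_lengths[i] has enough entries
                next_update[i] += phase_lengths[i][phase_idx[i]]
        x = x_next
    return pi, Q
```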
4.1 Assumptions
In order to state our main result, Theorem 4.1, we now impose some assumptions on the underlying game , on the algorithmic parameters, and on the mutual independence of certain randomized subroutines appearing in the algorithm.
4.1.1 Assumption on the transition kernel
Assumption 2.
For any pair of states $x, x' \in \mathbb{X}$, there exists a finite integer $H \geq 0$ and a sequence of joint actions $\mathbf{a}_0, \dots, \mathbf{a}_H$ such that $\Pr\left( x_{H+1} = x' \mid x_0 = x, \mathbf{a}_0, \dots, \mathbf{a}_H \right) > 0$.
Assumption 2 requires that the state process can be driven from any initial state to any other state in finitely many steps, provided a suitable selection of joint actions is made. This is a rather weak assumption on the underlying transition kernel $P$, and is quite standard in the theory of MARL (cf. [30, Assumption 4.1, Case iv]).
4.1.2 Assumptions on algorithmic parameters
Our next assumption restricts the hyperparameter selections in Algorithm 1. Let , where
The quantity , defined originally by [1] and recalled above, is the minimum non-zero separation between two optimal Q-factors with matching states, minimized over all agents and over all policies .
For any baseline policy and fixed experimentation parameters , we use the notation to denote a corresponding behaviour policy, which is stationary but not deterministic. When using , agent follows with probability and mixes uniformly over with probability . It was previously shown in [1, Lemma B3] that the optimal Q-factors for these two environments will be close provided is sufficiently small for all . In particular, there exists such that if , then
(4) |
Assumption 3.
For all , and .
In the first part of Assumption 3, we require that player ’s tolerance for suboptimality in its Q-factor is positive to account for noise in its learning iterates, as exact convergence is unreasonably demanding during a truncated learning phase. On the other hand, the tolerance parameter is upper bounded so as to exclude truly suboptimal policies from consideration in an estimated best-response set. The second part of Assumption 3 concerns the experimentation parameter. We assume that it is positive so that player can explore all of its actions and accurately learn its Q-factors in the limit. This experimentation parameter must be kept small, however, so that player ’s behaviour policy closely approximates its baseline policy. Assumption 3 appears as Assumption 2 in [1].
Assumption 4.
There exist integers such that for each player and exploration phase index .
Assumption 4 requires that each agent spends a long time learning during any given exploration phase. The second part of Assumption 4 is assumed in order to control the relative frequency at which different agents update their policies. This is done to avoid the potentially problematic regime in which some agents update their policies an enormous number of times and repeatedly disrupt the learning of another agent who uses long exploration phase lengths.
When all players use Algorithm 1, we view the resulting sequence of policies as player ’s baseline policy process, where is player ’s baseline policy during , player ’s exploration phase. This policy process is indexed on the coarser timescale of exploration phases. For ease of reference, we also introduce a sequence of baseline policies indexed by the finer timescale of stage games. For with , we let denote player ’s baseline policy during the stage game at time . The baseline joint policy at stage game is then denoted . Furthermore, we refer to the collection of Q-factor step-size parameters as .
4.1.3 Assumptions on algorithmic randomness
Algorithm 1 is a randomized algorithm: in multiple places (such as Lines 12 and 20), a player’s decision depends on both the history of play and also on exogenous randomness. In this section, we describe our assumptions on how these random selections are made. In simple terms, we assume that random choices are independent across players and are conditionally independent of the history of the system. The intuition is elaborated below and then formalized in Assumption 5 using the language of primitive random variables.
Consider the action choice in Line 12 of Algorithm 1:
The selection of action can be expressed as a function of the state along with the outcome of two random quantities: a uniform random variable, denoted , and a uniform random action choice . That is, Line 12 of Algorithm 1 can be equivalently expressed as
(5) |
Similarly, the selection of the policy in Line 20 of Algorithm 1 can be expressed as a function of player ’s Q-factor estimates and the outcome of two random quantities:
-
•
A uniform random variable, denoted .
-
•
A uniform random policy , where .
That is, Line 20 of Algorithm 1 can be equivalently expressed as
In the analysis of the coming sections, it will be convenient to isolate the role of exogenous randomness and separate it from the realization of the Q-factor iterates above. It will therefore be convenient to consider analogs of the random choice drawn from arbitrary policy subsets .
With the preceding intuition in mind, we now introduce several collections of primitive random variables that will be used to state 5 below. For any player and , we define the following random variables:
-
•
is an identically distributed, -valued stochastic process. For some , state transitions are driven by via :
for any and ;
-
•
;
-
•
;
-
•
;
-
•
For non-empty , ;
Assumption 5.
The collection of primitive random variables is mutually independent, where
Remark: The random variables in are primitive random variables that do not depend on any player’s choice of hyperparameters or on the history of play of the game.
4.2 Theorem Statement
Theorem 4.1.
In words, Theorem 4.1 says the following: if the relative frequency of policy update times of different players is controlled by some ratio, if players use sufficiently small learning rates, and if the lower bound on each player’s exploration phase lengths is sufficiently large relative to both the learning rates and the ratio parameter, then play will be driven to equilibrium with high probability.
As we will see, the interdependence of these various parameters is significant in the analysis to follow.
5 Proof Overview
As we will see in Section 6, the proof of Theorem 4.1 is somewhat intricate. For this reason, we now offer an outline of the proof to facilitate its reading.
5.1 High-Level Overview
The proof of Theorem 4.1 involves three major steps. First, we introduce a sequence of equilibrium events, , defined in terms of a suitably defined sequence of time intervals , to be elaborated in the sequel. For , we put
In words, is the event in which the baseline policy did not change during the interval and moreover the baseline policy was an equilibrium during this time.
Second, for a particular integer (defined in the text preceding Lemma 6.8), we argue in Lemma 6.8 that we can control and lower bound the probability of remaining at the equilibrium event for steps: that is, can be made arbitrarily large by suitable selection of algorithm parameters. The proof of Lemma 6.8 uses convergence properties of Q-learning iterates (Lemma 6.3) and requires careful arguments about conditional probabilities, since conditioning on the event carries confounding information about the state-action trajectory from both before and after time and precludes simple analysis that disconnects the future evolution of Q-learning estimates from the history of the system before time . Consequently, the analysis is somewhat delicate, and is worded in terms of primitive random variables, which offers a path around the confounding effect of conditioning on .
Third, we argue in Lemma 6.10 that the probability of driving play to this equilibrium event in time steps can be lower bounded by a positive constant. After this is achieved, one can then explicitly lower bound for suitably large , as in [1] and [37].
6 Proof of Theorem 4.1
6.1 Additional Notation and Constructions
We begin by defining several objects that will be needed for the various lemmas to follow. The notation in this section builds on that of Section 4.1.3. For ease of reference, we have included a glossary of notation in Table 2, Table 3, and Table 4 of Appendix A.
Recall that in Section 4.1.3, we motivated the use of primitive random variables for describing the algorithmic randomness of Algorithm 1. The primitive random variables described earlier included , which appeared in randomized action selection, and , which appeared in randomized policy updates. We also recall that was previously a random sample from a particular policy subset , which was defined using player ’s Q-factor iterates. In this section, it will be helpful to further isolate the role of algorithmic randomness, and therefore we will also consider random samples from all non-empty subsets of . For the latter purpose and for each , we arbitrarily order non-empty subsets of as , where .
We introduce the following new quantities for each :
-
•
for all . We define to be the set in which the random quantity takes its values. That is,
-
In plain terms, consists of random samples of each of the quantities needed for player to execute the randomized subroutines in Algorithm 1 at a given time .
-
•
, where is the -valued independent and identically distributed (i.i.d.) noise process driving state transitions that was introduced in §4.1.3.
-
For each , is a collection of the randomized quantities described in along with the random noise that drives state transitions. Of note, the information contained in is sufficient for recovering the subsequent state , the current joint action , and the subsequent joint policy .
-
•
. This quantity describes all random samples required to fully describe the future of the state-action trajectory and thus of Q-factors.
-
•
.
-
•
. This quantity describes the states, joint policies, and Q-factors up to and including time .
-
•
.
-
•
. This set consists of those -histories for which the final joint policy is an equilibrium.
We note that is a random quantity taking values in , while is a random quantity taking values in . For each , a realization of the random quantity is a sequence with entries in .
When every player uses Algorithm 1 to play the game and each player’s randomization mechanism is governed by the primitive random variables of §4.1.3, we have that the sample path of play—including realizations of the state process , the action processes for each agent , the baseline joint policy process , and the Q-factor estimates —is a deterministic function of any given prefix history and its corresponding continuation, , where . With this in mind, for and , we define mappings and such that
(6) |
Next, for any and any , we introduce a function that reports the hypothetical Q-factors player would have obtained if the baseline policies had been frozen at time . That is, the history up to time is given by , the primitive random variables from onward are given by , and we obtain the hypothetical (-measurable) continuation trajectory , as ,
where for each player and time ,
Note that the index of is not , which reflects that, in this hypothetical continuation, the baseline policies were frozen at time . Each hypothetical Q-factor estimate is then (analytically) built out of this hypothetical trajectory using the regular Q-learning update with the initial condition prescribed by .
Remark: Hypothetical Q-factors feature prominently in the analysis of the coming sections, and are a somewhat subtle construction, so we explain their utilization here. To analyze the likelihood of driving play to equilibrium or the likelihood of remaining at equilibrium, we ultimately wish to make probabilistic statements about the realized Q-learning iterates, , conditional on various events, such as the event that players did not switch their baseline policies recently. However, conditioning on such events can be problematic in the analysis, since the conditioning event clearly carries information about the state-action trajectory that may be consequential for the evolution of the Q-factor estimates. Due to this confounding effect, we will avoid making statements about conditional probabilities directly, and we will instead study trajectories of the hypothetical Q-factors. In so doing, we can describe likelihoods of relevant events in terms of the primitive random variables, thereby avoiding the confounding effects of conditioning.
6.2 Supporting Results on Q-Learning Iterates
Lemma 6.1.
For some , the following holds almost surely:
Proof 6.2.
For all , we have
Defining , we have that if , then
This proves the lemma, as for each .
The following lemma says that players can learn their optimal Q-factors accurately when they use sufficiently small step sizes and when their learning is not disrupted by policy updates for a sufficiently long number of steps. It is worded in terms of primitive random variables and hypothetical continuation Q-factors for reasons described above.
Lemma 6.3.
For any , there exists and function such that if (1) for all , and (2) , then
where
Proof 6.4.
In the unsynchronized regime, where each agent faces a non-stationary, time-varying environment and uses Q-learning, Lemma 6.3 is an extremely useful result that allows one to quantify the amount of time needed to correct outdated information contained in the learning iterates. The choice to employ constant learning rates was made in large part due to this consideration.
In the sequel, we fix and with the properties outlined in Lemma 6.3.
6.3 A Supporting Result Controlling Update Frequencies
A core challenge of analyzing unsynchronized multi-agent learning is that policy updates for one player constitute potential destabilizations of learning for others. We now recall the sequence of time intervals previously described in Section 5. These time intervals will be useful in quantifying and analyzing the effects of such disruptions. In particular, we wish to upper bound the number of potential disruptions to a given player’s learning, which can be upper bounded by the number of times other players finish an exploration phase during the given individual’s current exploration phase. Producing an upper bound of this sort will be important for quantifying the role that inertia plays in stabilizing the environment, as in the heuristic argument of [24].
Definition 6.5.
The intervals represent active phases, during which players may change their policies. The active phase starts at , which is defined as the first time after at which some agent has an opportunity to revise its policy. As a consequence of these definitions, no policy updates occur in .
The definition of is slightly more involved: is the minimal stage game time at which (a) each agent has an opportunity to switch policies in and (b) that no agent finishes its current exploration phase in the next stage games. The latter point is critical, as it guarantees that each player’s learning is allowed to proceed uninterrupted for at least stages after the final (potential) disruption.
Lemma 6.6.
The sequences and are well-defined (i.e. the infimum defining each term is achieved by a finite integer), and the following holds for any :
-
(a)
.
-
(b)
For each , we have
In words, part (a) of Lemma 6.6 guarantees that any successive active phases are separated by at least stage games, while part (b) guarantees that any individual player has at most opportunities to revise its policy during the active phase.
Proof 6.7.
For some , suppose that (a) and (b) hold for all . That is,
-
(a)
, for all , and
-
(b)
for each , we have
Note that this holds for . We will show the following: (1) ; (2) ; (3) For all , we have
For each , let denote the index such that
By minimality of , we have that . By Assumption 4, we have and thus
This implies , since by hypothesis.
We put , and let denote an agent with .
For each , we define as
Observe that . Moreover, if there exists some such that , then , and thus . We now argue that such exists.
Suppose that no such exists:
(7) | ||||
One concludes that the minima defining each are attained by distinct minimizing agents, one of whom is . But then, for some , we have
which implies , contradicting Assumption 4.
We have thus shown that the set , where is given by
Let , and note that, if , then for all . It follows that (1) and (2) .
We conclude by showing that (3) holds. That is,
By Assumption 4, it suffices to show that
We have already shown , which handles the case where , and we focus on . Since for all and , we have and
which concludes the proof.
Note that the proof above also shows that for any active phase , there exists some agent that has exactly one opportunity to update its policy in .
For , we define to be the event that the baseline policy is a fixed equilibrium policy throughout the active phase, . That is,
We define , where and denotes the length of a shortest strict best-response path from to an equilibrium policy in .
Lemma 6.8.
Let , and define
Remark: We note that is some constant, and is a -history,
with .
Proof 6.9.
We put and recursively define
We define to be the index achieving , and note that is possible when .
Note that counts the number of stage games after and on/before at which any player ends an exploration phase. From the proof of Lemma 6.6 we have that
where the strict inequality obtains since at least one player (player in the proof of Lemma 6.6) has exactly one exploration phase ending in , while the remaining players have at most .
For each , let denote the set of players who have an opportunity to switch policies at time . That is,
From our choices of , and and the fact that for all , we have, by Lemma 6.3, that
where, for any , we recall that is given by
Note that , and the minimality defining implies that no player updates its policy during the interval . It follows that the hypothetical continuation trajectory defining each player ’s hypothetical Q-factors coincides with the sample trajectory defining , since action selections and state transitions are decided by and . Thus, by our choice of ,
For , each player recovers its estimated Q-factors within of , since
Since we have hypothesized that and chosen , it follows that agent does not change its policy at time when given the opportunity to revise its policy. In words, the agent appraises its policy as being sufficiently close to optimal with respect to its learned Q-factors, and does not switch its policy. In symbols, that is to say
which in turn implies that
Repeating this logic, one has that if
then for each . The probability of this intersection is lower bounded using the union bound and Lemma 6.3:
as desired.
Lemma 6.10.
Let , and define
Proof 6.11.
Let be a (shortest) strict best-response path from to , and , where was defined along with immediately before Lemma 6.8. For each pair of neighbouring joint policies and , there exists exactly one player who switches its policy. That is, for all .
Consider the event that no agent updates its policy during due to inertia except player , who updates its policy exactly once to and remains inert at all other update opportunities in .
By Lemma 6.3 and Lemma 6.6, the conditional probability of this event given is lower bounded by
The same lower bound similarly applies to each transition along , which leads to
Next, consider the event that no agent updates its policy during due to inertia. The conditional probability of this event given is lower bounded by
This results in which proves the lemma since .
6.4 Proof of Theorem 4.1
Given , let be the unique solution to
and
Suppose that
Let for all . The subsequent details lower bounding for large and every are omitted, as they resemble the proofs of [1] or [37].
Remark: In the interest of clarity, Theorem 4.1 was stated and proved for a special case of deterministic exploration phase lengths . The analysis above can be immediately extended to handle random exploration phase lengths, provided these are primitive random variables (as opposed to being history dependent) and provided they satisfy the requisite assumptions appearing in Theorem 4.1 almost surely.
7 Simulations
In this section, we present empirical findings of a simulation study where Algorithm 1 was applied to the stochastic game whose stage game costs and transition probabilities are presented below.
We have chosen to simulate play of a game with two players, a two-element state space, and two actions per player. Each player’s discount factor is set to 0.8, and the transition kernel is described as follows:
In words, the state dynamics out of the low-cost state do not depend on the choice of joint action, while the dynamics out of the high-cost state do. In the high-cost state, if players select matching actions, then the state transitions to the low-cost state with (low) probability 0.25. Otherwise, if players select distinct actions in the high-cost state, play transitions to the low-cost state with (high) probability 0.9.
The stage game’s symmetric cost structure is summarized by the table below; in each state, the cost pair depends only on whether the players select matching or distinct actions.

| State | Matching actions | Distinct actions |
|---|---|---|
| Low-cost state | 0, 0 | 2, 2 |
| High-cost state | 10, 10 | 11, 11 |
In either state, the stage cost to a given player is minimized by selecting one’s action to match the action choice of the opponent. We note that the stage costs in the low-cost state are considerably lower than those of the high-cost state. Despite the apparent coordination flavour of the stage game payoffs, the transition probabilities dictate that an optimizing agent should not match its counterpart’s action in the high-cost state, and should instead attempt to drive play to the low-cost state.
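For reproducibility, the known elements of this example can be encoded as below. The state labels LOW and HIGH and the exit probability out of the low-cost state are placeholders introduced here for illustration (the action-independent dynamics out of the low-cost state are not reported above), while the cost entries and the two escape probabilities are taken directly from the description above.

```python
import numpy as np

LOW, HIGH = 0, 1          # illustrative state labels (low-cost and high-cost states)
MATCH_ESCAPE_P = 0.25     # P(HIGH -> LOW) when the players' actions match
MISMATCH_ESCAPE_P = 0.9   # P(HIGH -> LOW) when the players' actions differ
LOW_EXIT_P = 0.2          # placeholder: the action-independent exit rate from LOW is not reported

# Symmetric stage costs c(x, a1, a2), identical for both players.
costs = np.array([[[0, 2], [2, 0]],        # LOW state: 0 if actions match, 2 otherwise
                  [[10, 11], [11, 10]]])   # HIGH state: 10 if actions match, 11 otherwise

def next_state(x, a1, a2, rng):
    """Sample the next state under the transition structure described in the text."""
    if x == HIGH:
        p_escape = MATCH_ESCAPE_P if a1 == a2 else MISMATCH_ESCAPE_P
        return LOW if rng.random() < p_escape else HIGH
    return HIGH if rng.random() < LOW_EXIT_P else LOW   # action-independent dynamics out of LOW
```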
The challenge that non-stationarity poses to multi-agent learning in this game is the following: if the players use a joint policy under which they mismatch actions in state  or match actions in state , then either player has an incentive to switch its policy in the relevant state. If both players switch policies in an attempt to respond to the other, cyclic behaviour ensues: each player finds itself responding to an outdated policy, since its counterpart will have switched behaviour by the time the player is ready to update its own policy.
With the cost structure, transition probabilities, and discount factors described above, the joint policies in which the players' actions match in state  and do not match in state  constitute 0-equilibrium policies.
Parameter Choices
We conducted 500 independent trials of Algorithm 1. Each trial consisted of stage games. Each player’s exploration phase lengths, , were selected uniformly at random from the interval , with set to and the ratio parameter set to .
For our gameplay parameters, we set the stage game action experimentation parameters to be . The parameter controlling inertia in the policy update was chosen as . The parameter controlling tolerance for sub-optimality in Q-factor assessment was chosen as . The learning rate parameters were selected to be .
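As a companion to the parameter description, the snippet below sketches how each player's unsynchronized exploration-phase schedule can be generated: every player independently draws its own exploration phase lengths from a common interval, so the players' policy-update times need not coincide. The numerical values `T_MIN`, `RATIO`, and `NUM_PHASES` are placeholders for illustration and are not the values used in our experiments.

```python
import numpy as np

# Placeholder values, chosen only for illustration; the experiments reported in
# this section use their own (different) parameter values.
T_MIN = 1_000        # lower bound on exploration phase lengths
RATIO = 2.0          # factor bounding exploration phase lengths from above
NUM_PHASES = 50      # number of exploration phases generated per player
NUM_PLAYERS = 2

rng = np.random.default_rng(1)

# Each player independently samples its own exploration phase lengths uniformly
# from [T_MIN, RATIO * T_MIN]; no coordination between players is needed.
phase_lengths = {
    i: rng.integers(T_MIN, int(RATIO * T_MIN) + 1, size=NUM_PHASES)
    for i in range(NUM_PLAYERS)
}

# The start times of each player's exploration phases are the cumulative sums
# of its own phase lengths; because the draws are independent across players,
# the resulting policy-update times are generally unsynchronized.
phase_start_times = {
    i: np.concatenate(([0], np.cumsum(lengths)[:-1]))
    for i, lengths in phase_lengths.items()
}

print(phase_start_times[0][:5])
print(phase_start_times[1][:5])
```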
Results
The results of our simulations are summarized below in Figure 2 and Table 1. Starting from randomly selected initial policies, Algorithm 1 rapidly drove play to equilibrium, with the relative frequency of equilibrium play stabilizing above 98% of trials after approximately 40,000 stage games.

Table 1: Relative frequency of equilibrium across the 500 trials, at selected stage game indices.

Stage game index | 0 | 10,000 | 20,000 | 30,000 | 40,000 |
---|---|---|---|---|---|
Relative frequency of equilibrium | 0.264 | 0.620 | 0.728 | 0.974 | 0.984 |
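For reference, the relative-frequency statistic reported in Table 1 can be computed from per-trial indicator data as in the sketch below; the array name `at_equilibrium`, its dimensions, and its placeholder contents are hypothetical and serve only to illustrate the computation.

```python
import numpy as np

# at_equilibrium[trial, t] is True when, in the given trial, the baseline joint
# policy in force at stage game t is a 0-equilibrium (hypothetical array).
at_equilibrium = np.zeros((500, 50_000), dtype=bool)  # placeholder data

# Relative frequency of equilibrium across trials at selected stage game indices.
checkpoints = [0, 10_000, 20_000, 30_000, 40_000]
relative_frequency = at_equilibrium[:, checkpoints].mean(axis=0)
print(dict(zip(checkpoints, relative_frequency)))
```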
8 Conclusions
In this paper, we considered an unsynchronized variant of the decentralized Q-learning algorithm of [1]. We formalized an argument presented in [24], which contended that inertia alone would stabilize the learning process and tame non-stationarity. Formalizing this argument required some modifications to handle technical nuances relating to the dynamics of the policy process and the convergence of the learning iterates. To accommodate unsynchronized policy updating and the non-stationarity of each agent's learning environment, we introduced a constant learning rate, which rapidly overcomes errors in the learned estimates that are artifacts of outdated information. In so doing, we showed that decentralized Q-learning can still drive policies to equilibrium in weakly acyclic stochastic games without strong coordination assumptions such as synchronizing the schedules on which players update their policies.
References
- [1] Gürdal Arslan and Serdar Yüksel. Decentralized Q-learning for stochastic teams and games. IEEE Transactions on Automatic Control, 62(4):1545–1558, 2017.
- [2] Carolyn L Beck and Rayadurgam Srikant. Error bounds for constant step-size Q-learning. Systems & Control Letters, 61(12):1203–1208, 2012.
- [3] Vivek S Borkar. Reinforcement learning in Markovian evolutionary games. Advances in Complex Systems, 5(01):55–72, 2002.
- [4] Michael Bowling and Manuela Veloso. Multiagent learning using a variable learning rate. Artificial Intelligence, 136(2):215–250, 2002.
- [5] Caroline Claus and Craig Boutilier. The dynamics of reinforcement learning in cooperative multiagent systems. In Proceedings of the Tenth Innovative Applications of Artificial Intelligence Conference, Madison, Wisconsin, pages 746–752, 1998.
- [6] Constantinos Daskalakis, Dylan J Foster, and Noah Golowich. Independent policy gradient methods for competitive reinforcement learning. Advances in Neural Information Processing Systems, 33:5527–5540, 2020.
- [7] Arlington M. Fink. Equilibrium in a stochastic n-person game. Journal of Science of the Hiroshima University, Series A-I (Mathematics), 28(1):89–93, 1964.
- [8] Dean P. Foster and H. Peyton Young. On the nonconvergence of fictitious play in coordination games. Games and Economic Behavior, 25:79–96, 1998.
- [9] Dean Foster and H. Peyton Young. Regret testing: Learning to play Nash equilibrium without knowing you have an opponent. Theoretical Economics, 1:341–367, 2006.
- [10] Fabrizio Germano and Gabor Lugosi. Global Nash convergence of Foster and Young’s regret testing. Games and Economic Behavior, 60(1):135–154, 2007.
- [11] Pablo Hernandez-Leal, Michael Kaisers, Tim Baarslag, and Enrique Munoz de Cote. A survey of learning in multiagent environments: Dealing with non-stationarity. arXiv preprint arXiv:1707.09183, 2017.
- [12] Pablo Hernandez-Leal, Bilal Kartal, and Matthew E Taylor. A survey and critique of multiagent deep reinforcement learning. Autonomous Agents and Multi-Agent Systems, 33(6):750–797, 2019.
- [13] Junling Hu and Michael P. Wellman. Nash Q-learning for general-sum stochastic games. Journal of Machine Learning Research, 4(Nov):1039–1069, 2003.
- [14] Jens Josephson and Alexander Matros. Stochastic imitation in finite games. Games and Economic Behavior, 49(2):244–259, 2004.
- [15] Vijaymohan R Konda and Vivek S Borkar. Actor-critic–type learning algorithms for Markov decision processes. SIAM Journal on Control and Optimization, 38(1):94–123, 1999.
- [16] Stefanos Leonardos, Will Overman, Ioannis Panageas, and Georgios Piliouras. Global convergence of multi-agent policy gradient in Markov potential games. In International Conference on Learning Representations, 2022.
- [17] David S. Leslie and Edmund J. Collins. Individual Q-learning in normal form games. SIAM Journal on Control and Optimization, 44:495–514, 2005.
- [18] David S Leslie and Edmund J Collins. Convergent multiple-timescales reinforcement learning algorithms in normal form games. The Annals of Applied Probability, 13(4):1231–1251, 2003.
- [19] Yehuda Levy. Discounted stochastic games with no stationary Nash equilibrium: two examples. Econometrica, 81(5):1973–2007, 2013.
- [20] Michael L. Littman. Markov games as a framework for multi-agent reinforcement learning. In Machine Learning Proceedings 1994, pages 157–163. Elsevier, 1994.
- [21] Michael L. Littman. Friend-or-foe Q-learning in general-sum games. In ICML, volume 1, pages 322–328, 2001.
- [22] Michael L. Littman and Csaba Szepesvári. A generalized reinforcement-learning model: Convergence and applications. In ICML, volume 96, pages 310–318. Citeseer, 1996.
- [23] Chinmay Maheshwari, Manxi Wu, Druv Pai, and Shankar Sastry. Independent and decentralized learning in Markov potential games. arXiv preprint arXiv:2205.14590, 2022.
- [24] Jason R. Marden, H. Peyton Young, Gürdal Arslan, and Jeff S. Shamma. Payoff-based dynamics for multiplayer weakly acyclic games. SIAM Journal on Control and Optimization, 48(1):373–396, 2009.
- [25] Laetitia Matignon, Guillaume J. Laurent, and Nadine Le Fort-Piat. Independent reinforcement learners in cooperative Markov games: a survey regarding coordination problems. Knowledge Engineering Review, 27(1):1–31, 2012.
- [26] Eric Mazumdar, Lillian J Ratliff, Michael I Jordan, and S Shankar Sastry. Policy-gradient algorithms have no guarantees of convergence in linear quadratic games. arXiv preprint arXiv:1907.03712, 2019.
- [27] Panayotis Mertikopoulos, Christos Papadimitriou, and Georgios Piliouras. Cycles in adversarial regularized learning. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 2703–2717. SIAM, 2018.
- [28] David H Mguni, Yutong Wu, Yali Du, Yaodong Yang, Ziyi Wang, Minne Li, Ying Wen, Joel Jennings, and Jun Wang. Learning in nonzero-sum stochastic games with potentials. In International Conference on Machine Learning, pages 7688–7699. PMLR, 2021.
- [29] Hadi Nekoei, Akilesh Badrinaaraayanan, Amit Sinha, Mohammad Amini, Janarthanan Rajendran, Aditya Mahajan, and Sarath Chandar. Dealing with non-stationarity in decentralized cooperative multi-agent deep reinforcement learning via multi-timescale learning. In Conference on Lifelong Learning Agents, pages 376–398. PMLR, 2023.
- [30] Asuman Ozdaglar, Muhammed O Sayin, and Kaiqing Zhang. Independent learning in stochastic games. In International Congress of Mathematicians, 2021.
- [31] Muhammed Sayin, Kaiqing Zhang, David Leslie, Tamer Başar, and Asuman Ozdaglar. Decentralized Q-learning in zero-sum Markov games. Advances in Neural Information Processing Systems, 34:18320–18334, 2021.
- [32] Muhammed O Sayin and Onur Unlu. Logit-Q learning in Markov games. arXiv preprint arXiv:2205.13266, 2022.
- [33] Satinder Singh, Tommi Jaakkola, Michael L. Littman, and Csaba Szepesvári. Convergence results for single-step on-policy reinforcement-learning algorithms. Machine Learning, 38(3):287–308, 2000.
- [34] Ming Tan. Multi-agent reinforcement learning: Independent vs. cooperative agents. In Proceedings of the Tenth International Conference on Machine Learning, pages 330–337, 1993.
- [35] J. N. Tsitsiklis. Asynchronous stochastic approximation and Q-learning. Machine Learning, 16(3):185–202, 1994.
- [36] Bora Yongacoglu, Gürdal Arslan, and Serdar Yüksel. Decentralized learning for optimality in stochastic dynamic teams and games with local control and global state information. IEEE Transactions on Automatic Control, 67(10):5230–5245, 2022.
- [37] Bora Yongacoglu, Gürdal Arslan, and Serdar Yüksel. Satisficing paths and independent multi-agent reinforcement learning in stochastic games. SIAM Journal on Mathematics of Data Science, to appear.
- [38] H. Peyton Young. Strategic Learning and its Limits. Oxford University Press, 2004.
- [39] Runyu Zhang, Zhaolin Ren, and Na Li. Gradient play in multi-agent Markov stochastic games: Stationary points and convergence. arXiv preprint arXiv:2106.00198, 2021.
Appendix A Glossary of Notation
Notation | Description | Reference(s) |
---|---|---|
Q-function when facing | (2) | |
Player ’s Q-factor estimates at time | (3), Algorithm 1 | |
p. 6.1 | ||
EP | Exploration Phase | |
Length of player ’s EP | Algorithm 1 | |
Start time of player ’s EP | Algorithm 1 | |
Player ’s action experimentation parameter | Algorithm 1 | |
Player ’s policy inertia parameter | Algorithm 1 | |
Player ’s “tolerance for suboptimality” | Algorithm 1 | |
Player ’s learning rate/step size | Algorithm 1 | |
Tuple of learning rates: | p. 4 | |
Player ’s baseline policy during its EP | Algorithm 1 | |
Player ’s baseline policy for stage game at time | p. 4 |
Notation | Description | Reference(s) |
---|---|---|
An upper bound on each player’s experimentation parameters | 3 | |
An upper bound on each player’s | 3 | |
Player ’s behaviour policy during EP: selects according to w.p. and selects from w.p. . | p. 4 | |
Lower bound on EP lengths: | 4 | |
Factor bounding EP lengths: | 4 | |
Active phase of policy updating | Definition 6.5 | |
Equilibrium event defined using active phase | p. 6.8 |
Notation | Description | Reference(s) |
---|---|---|
An i.i.d. noise process taking values in . | p. 4.1.3 | |
A function for expressing state transitions as a deterministic function driven by exogenous noise: | p. 4.1.3 | |
An i.i.d. sequence of actions for player , with | p. 4.1.3 | |
An i.i.d. sequence with | p. 4.1.3 | |
An i.i.d. sequence with | p. 4.1.3 | |
An i.i.d. sequence of policies for player , with , where , | p. 4.1.3 | |
Random variables for player at time : | p. 6.1 | |
Codomain of : | p. 6.1 | |
Collected state noise and player noise for time : | p. 6.1 | |
Present and future of the noise processes: | p. 6.1 | |
History of states, policies, and Q-iterates to time : | p. 6.1 | |
Codomain of : | p. 6.1 | |
p. 6.1 | ||
Mapping that reports Q-iterates at time using historical data up to time and future random variables from time onward in : for all . | (6) | |
Mapping that reports baseline policies at time using historical data up to time and future random variables from time onward in : for all . | (6) |
Action chosen by player at time in hypothetical scenario where (i) baseline policies were frozen at at time , and (ii) action selection and state transition noise was described by | p. 6.2 | |
State at time in hypothetical scenario where (i) and (ii) above hold. | p. 6.2 | |
Hypothetical Q-factors that would have been obtained at time in hypothetical scenario where (i) and (ii) above hold. | p. 6.2 |