Refined Sample Complexity for Markov Games with
Independent Linear Function Approximation
Abstract
Markov Games (MGs) are an important model for Multi-Agent Reinforcement Learning (MARL). It was long believed that the "curse of multi-agents" (i.e., the sample complexity grows exponentially with the number of agents) is unavoidable, until several recent works (Daskalakis et al., 2023, Cui et al., 2023, Wang et al., 2023). While these works resolved the curse of multi-agents, when the state spaces are prohibitively large and (linear) function approximations are deployed, they either had a slower convergence rate of $\mathcal{O}(\epsilon^{-4})$ or brought a polynomial dependency on the number of actions – a dependency that is avoidable in single-agent cases even when the loss functions can arbitrarily vary with time. This paper first refines the AVLPR framework of Wang et al. (2023), with the insight of designing a data-dependent (i.e., stochastic) pessimistic estimation of the sub-optimality gap, allowing a broader choice of plug-in algorithms. When specialized to MGs with independent linear function approximations, we propose novel action-dependent bonuses to cover occasionally extreme estimation errors. With the help of state-of-the-art techniques from the single-agent RL literature, we give the first algorithm that tackles the curse of multi-agents, attains the optimal $\mathcal{O}(\epsilon^{-2})$ convergence rate, and avoids polynomial dependency on the number of actions simultaneously. (Accepted for presentation at the Conference on Learning Theory (COLT) 2024.)
1 Introduction
Multi-Agent Reinforcement Learning (MARL) studies decision-making under uncertainty in a multi-agent system. Many practical MARL systems demonstrate impressive performance in games like Go (Silver et al., 2017), Poker (Brown and Sandholm, 2019), Starcraft II (Vinyals et al., 2019), Hide-and-Seek (Baker et al., 2020), and Autonomous Driving (Shalev-Shwartz et al., 2016). While MARL exhibits huge success in practice, it is still far from being understood theoretically.
A core challenge in theoretically studying MARL is the "curse of multi-agents": when many agents are involved in the game, the joint state and action space is prohibitively large. Thus, early algorithms for multi-agent games (Liu et al., 2021) usually have a sample complexity (the number of samples the algorithm requires to attain a given accuracy) that depends exponentially on the number of agents $m$, for example, scaling with $\prod_{i=1}^{m} A_i$, where $A_i$ is the size of the action space of the $i$-th agent.
Later, many efforts were made to resolve this issue. Jin et al. (2021) gave the first algorithm avoiding the curse of multi-agents, and Daskalakis et al. (2023) made the output policy Markovian (i.e., non-history-dependent, which is the setting we focus on in this paper). Such results only depend on $\max_i A_i$ but not $\prod_i A_i$. While these algorithms work well in tabular Markov Games (Shapley, 1953) – Markov Games where the numbers of states and actions are finite and small – they cannot handle the case where the state space is prohibitively large.
However, in many real-world applications, tabular models are insufficient. For example, Go has on the order of $10^{170}$ possible states. In single-agent RL, function approximation is used to model large state spaces (Jiang et al., 2017, Wen and Van Roy, 2017). While this idea naturally generalizes to MARL (Xie et al., 2020, Chen et al., 2022b), unfortunately, it usually re-introduces the curse of multi-agents.
Two recent works by Cui et al. (2023) and Wang et al. (2023) investigated this issue. They concurrently proposed that instead of the global function approximations previously used in the literature, independent function approximations should be developed. By assuming independent linear function approximations, they designed the first algorithms that can avoid the curse of multi-agents when the state spaces are prohibitively large and function approximations are deployed.
While their algorithms succeeded in yielding polynomial dependencies on $m$, they were sub-optimal in other terms: the sample complexity of Cui et al. (2023) scales with $\epsilon^{-4}$, while that of Wang et al. (2023) scales with $\epsilon^{-2}$ but depends polynomially on $A_{\max}$. (Throughout, $\widetilde{\mathcal{O}}$ hides logarithmic factors; $m$ is the number of agents, $H$ is the length of each episode, $d$ is the dimension of the feature space, $A_{\max}$ is the largest size of the action sets, and $\epsilon$ is the desired accuracy.) The former has a sub-optimal convergence rate, whereas the latter has a polynomial dependency on the number of actions. However, no polynomial dependency on the number of actions is necessary for single-agent RL with linear function approximation, even when the losses can arbitrarily vary with time, i.e., in the so-called adversarial regime (Dai et al., 2023).
As linear MGs are generalizations of linear MDPs, we aim to generalize this property to linear MGs. In this paper, we propose an algorithm for multi-player general-sum Markov Games with independent linear function approximations that i) retains only polynomial dependency on $m$, ii) ensures the optimal $\epsilon^{-2}$ convergence rate, and iii) has only logarithmic dependency on $A_{\max}$.
1.1 Key Insights and Technical Overview of This Paper
The key insight of our paper is to develop data-dependent (i.e., stochastic) pessimistic sub-optimality gap estimators instead of deterministic ones. For more context, the AVLPR framework designed and used by Wang et al. (2023) requires a deterministic estimation of the sub-optimality gap of the current policy during execution, so that the agents can collaborate to further improve it. Unfortunately, yielding such a deterministic estimation corresponds to an open problem called "high-probability regret bounds for adversarial contextual linear bandits" in the literature (Olkhovskaya et al., 2023). Thus, Wang et al. (2023) used a uniform exploration strategy to avoid proving regret bounds; however, this approach unavoidably brings polynomial factors in $A_{\max}$. On the other hand, the framework of Cui et al. (2023) is intrinsically incapable of an $\epsilon^{-2}$ convergence rate, as it uses the epoching technique (i.e., fixing the policy for many episodes so the environment is almost stationary).
Hence, existing ideas in the literature cannot give favorable sample complexities, and we therefore propose using stochastic sub-optimality gap estimators. To fully deploy this insight, we make the following technical contributions:
1.
Based on the AVLPR framework of Wang et al. (2023), which required deterministic sub-optimality gap estimations, we propose a refined framework that accepts data-dependent (i.e., non-deterministic) estimators in Algorithm 1. As we show in Theorem 3.2, a stochastic gap estimation with bounded expectation already suffices for finding an $\epsilon$-CCE. This innovation, as we shall see shortly, gives more flexibility in algorithm design and allows techniques from the expected-regret-minimization literature.
Slightly more formally, suppose that we would like to evaluate a joint policy $\pi$. Its actual sub-optimality gap cannot be accurately calculated during runtime since the "optimal" (best-response) policy is unknown. The original approach requires a deterministic constant upper-bounding this gap w.h.p. However, the approach we use to generate $\pi$ (which uses the famous regret-to-sample-complexity reduction and translates the problem into regret minimization in an adversarial contextual linear bandit; see Section 4 for more) does not allow such a deterministic bound. Instead of crafting $\pi$ in another way like Wang et al. (2023), we propose calculating a random variable Gap such that i) Gap upper-bounds the actual sub-optimality gap w.h.p., and ii) the expectation of Gap is bounded by some deterministic constant. More details can be found in Theorem 3.2.
2.
Existing expected-regret-minimization algorithms cannot be directly used, as they only guarantee the second condition on Gap – but in addition to this, we also want Gap to be a high-probability pessimistic estimation of the actual sub-optimality gap. Meanwhile, previous algorithms for high-probability RL also do not directly work due to the aforementioned open problem. Technically, this is because existing bonus-design mechanisms cannot cover estimation errors that occasionally have extreme magnitudes, albeit with well-bounded expectations; see Section 4.2 for a more formal description.
To tackle this issue, we propose a novel technique called action-dependent bonuses, partially inspired by the Adaptive Freedman Inequality proposed by Lee et al. (2020) and improved by Zimmert and Lattimore (2022). As we detail in Section 4.2, this technique applies when a) we want to use the bonus technique to cancel some error evaluated at the optimal action in hindsight; b) this error can be prohibitively large, so the classical Freedman inequality fails; while c) its expectation under the player's own policy is small. Going beyond this paper, we expect the action-dependent bonus technique to be applicable elsewhere whenever high-probability bounds are desired but the error to be covered can sometimes be prohibitively large, as long as its expectation w.r.t. the player's policy is well controlled.
3.
Finally, because of the unknown transitions and multiple agents, it is also non-trivial to attain an expected regret guarantee of the optimal $\sqrt{T}$ order. Towards this, we incorporate several state-of-the-art techniques from the recent adversarial RL literature, e.g., the Magnitude-Reduced Estimators proposed by Dai et al. (2023), the Adaptive Freedman Inequality of Zimmert and Lattimore (2022), and a new covariance matrix estimation technique introduced by Liu et al. (2023a).
1.2 Related Work
Tabular Markov Games.
Markov Games date back to Shapley (1953). For tabular Markov Games, where the numbers of states and actions are finite and small, Bai and Jin (2020) gave the first sample-efficient algorithm for two-player zero-sum games. For the harder multi-player general-sum case, the first provably sample-efficient algorithm was developed by Liu et al. (2021), albeit with a sample complexity depending on $\prod_i A_i$ (i.e., the "curse of multi-agents"). When non-Markovian policies are allowed, various algorithms based on the V-learning algorithm (Bai et al., 2020) were proposed (Jin et al., 2021, Song et al., 2022, Mao and Başar, 2023); otherwise, the first algorithm for multi-player general-sum games avoiding the curse of multi-agents is due only to Daskalakis et al. (2023), although with a sub-optimal convergence rate. Cui et al. (2023) and Wang et al. (2023) recently attained the optimal convergence rate, but their dependencies on other problem parameters remained improvable.
Markov Games with Function Approximation.
When the state space can be prohibitively large, as in the single-agent case, it is often modeled via function approximation. Early works in this line (Xie et al., 2020, Chen et al., 2022b) considered global function approximation, where the function class captures the joint value functions of all the agents, making it hard to avoid the curse of multi-agents. The idea of independent function approximation, i.e., letting the function class of each agent only encode its own value function, was concurrently proposed by Cui et al. (2023) and Wang et al. (2023). However, as mentioned, their sample complexities were sub-optimal in either $\epsilon$ or $A_{\max}$, while ours is optimal in both. Notably, while this paper only focuses on linear function approximation, more general approximation schemes have already been studied in the literature, both globally (see, e.g., (Huang et al., 2022, Jin et al., 2022, Xiong et al., 2022, Chen et al., 2022a, Ni et al., 2022, Zhan et al., 2023)) and independently (Wang et al., 2023).
Markov Decision Processes with Linear Function Approximation.
With only one agent, Markov Games become Markov Decision Processes (MDPs), whose linear function approximation schemes have been extensively studied. When losses are fixed across episodes, the problem was solved by Jin et al. (2020) and Yang and Wang (2020). When losses are adversarial (i.e., can arbitrarily vary with time), some types of linear approximation have recently been tackled, e.g., linear mixture MDPs (Zhao et al., 2023) and linear-Q MDPs equipped with simulators (Dai et al., 2023). In contrast, other approximation schemes, e.g., linear MDPs, remain open. More detailed discussions can be found in recent papers such as (Dai et al., 2023, Kong et al., 2023, Sherman et al., 2023b; a, Liu et al., 2023b).
Concurrent Work by Fan et al. (2024).
After the submission of this paper, we became aware of a concurrent and independent work by Fan et al. (2024), which also studies the sample complexity of finding a CCE in Markov Games with independent linear function approximations. Different from the online model studied in this paper and in previous works by Cui et al. (2023) and Wang et al. (2023), they assumed a local access model (i.e., there exists a simulator that the learner can query for samples at any previously visited state). Under this stronger assumption, Fan et al. (2024) achieved a sample complexity that resolves the curse of multi-agents, has optimal dependency on $\epsilon$, and avoids polynomial dependency on $A_{\max}$. Technically, by maintaining a core set of well-covered state-action pairs, each agent can independently perform policy learning (via an FTRL-based subroutine), thus avoiding the curse of multi-agents. Moreover, as core sets have sizes independent of $A_{\max}$ (Yin et al., 2022), their approach avoids polynomial $A_{\max}$ factors as well. Further making our results and those of Fan et al. (2024) completely independent of $A_{\max}$, or endowing them with better dependency on other parameters, remains a valuable direction for future research. (Throughout this paper, we omit a logarithmic term, arising from Lemma A.2, in the sample complexity bound; however, as discussed by Fan et al. (2024), it would be more favorable to have this factor removed entirely.)
2 Preliminaries
Markov Games.
In (multi-agent general-sum) Markov Games, there are $m$ agents sharing a common state space $\mathcal{S}$, but each agent $i$ has its own action space $\mathcal{A}_i$ (of size $A_i$). The game repeats for several episodes, each of length $H$. Without loss of generality, assume that $\mathcal{S}$ is layered as $\mathcal{S} = \mathcal{S}_1 \cup \mathcal{S}_2 \cup \cdots \cup \mathcal{S}_H$ such that transitions only occur from one layer to the next. At the beginning of each episode, the state resets to an initial state $s_1 \in \mathcal{S}_1$. At the $h$-th step, each agent $i$ observes the current state $s_h$ and takes its own action $a_i \in \mathcal{A}_i$.
Let the joint action be $\boldsymbol{a} = (a_1, a_2, \ldots, a_m)$. The new state $s_{h+1}$ is then independently sampled from a transition distribution $\mathbb{P}_h(\cdot \mid s_h, \boldsymbol{a})$ (hidden from the agents). Meanwhile, each agent $i$ observes and suffers a loss $\ell_{i,h}(s_h, \boldsymbol{a})$. (Adopting notation from the adversarial MDP literature, this paper works with losses instead of rewards, i.e., agents minimize total loss instead of maximizing total reward.) Following the convention (Cui et al., 2023, Wang et al., 2023), we assume the loss functions are deterministic (though kept secret from the agents). The objective of each agent is to minimize the expectation of its total loss, i.e., agent $i$ minimizes $\mathbb{E}[\sum_{h=1}^{H} \ell_{i,h}(s_h, \boldsymbol{a}_h)]$.
Policies and Value Functions.
A (Markov joint) policy $\pi$ is a joint strategy of the agents, formally a collection of mappings from each state to a distribution over joint actions, $\pi = \{\pi_h : \mathcal{S}_h \to \Delta(\mathcal{A}_1 \times \cdots \times \mathcal{A}_m)\}_{h \in [H]}$. Note that this allows the policies of different agents to be correlated. For a Markov joint policy $\pi$, let $\pi_i$ be the policy induced by agent $i$, and let $\pi_{-i}$ be the joint policy induced by all agents except $i$. Define $\Pi$ as the set of all Markov joint policies; similarly, define $\Pi_i$ and $\Pi_{-i}$.
Given a joint policy $\pi$, one can define the following state-value function (V-function in short) induced by $\pi$ for each agent $i$ and state $s$ (where $h$ is the layer that $s$ lies in):
\[
V^{\pi}_{i,h}(s) = \mathbb{E}^{\pi}\Big[\sum_{h'=h}^{H} \ell_{i,h'}(s_{h'}, \boldsymbol{a}_{h'}) \;\Big|\; s_h = s\Big].
\]
Here, the expectation is over a trajectory $(s_1, \boldsymbol{a}_1, s_2, \boldsymbol{a}_2, \ldots, s_H, \boldsymbol{a}_H)$ generated by following $\pi$, i.e., $\boldsymbol{a}_{h'} \sim \pi_{h'}(\cdot \mid s_{h'})$ and $s_{h'+1} \sim \mathbb{P}_{h'}(\cdot \mid s_{h'}, \boldsymbol{a}_{h'})$. If we are only interested in the $h$-th state in such a trajectory, we write $s_h \sim \pi$ to denote a layer-$h$ state generated by following $\pi$.
Similarly to the V-function, the state-action value function (Q-function) can be defined as follows:
\[
Q^{\pi}_{i,h}(s, \boldsymbol{a}) = \ell_{i,h}(s, \boldsymbol{a}) + \mathbb{E}_{s' \sim \mathbb{P}_h(\cdot \mid s, \boldsymbol{a})}\big[V^{\pi}_{i,h+1}(s')\big], \qquad \text{where } V^{\pi}_{i,H+1} \equiv 0.
\]
Slightly abusing the notation $\mathbb{P}_h$, we define an operator $(\mathbb{P}_h V)(s, \boldsymbol{a}) = \mathbb{E}_{s' \sim \mathbb{P}_h(\cdot \mid s, \boldsymbol{a})}[V(s')]$. Then we can simply rewrite the Q-function as $Q^{\pi}_{i,h} = \ell_{i,h} + \mathbb{P}_h V^{\pi}_{i,h+1}$.
Coarse Correlated Equilibrium.
As all agents have different objectives, we can only hope to find an equilibrium policy from which no agent can gain much by unilaterally deviating. In general, computing the well-known Nash equilibrium is intractable even in normal-form general-sum games, i.e., MGs with $H = 1$ and $|\mathcal{S}| = 1$ (Daskalakis et al., 2009). Hence, one usually considers the (Markov) Coarse Correlated Equilibrium (Daskalakis et al., 2023, Cui et al., 2023, Wang et al., 2023) instead.
Formally, for each agent $i$, we fix the strategy of all remaining agents as $\pi_{-i}$ and consider agent $i$'s best response. We define its best-response V-function against $\pi_{-i}$ as the smallest value agent $i$ can attain by unilaterally switching to any policy of its own while the others keep playing $\pi_{-i}$.
A Markov joint policy $\pi$ is a (Markov) $\epsilon$-CCE if, for every agent, the gap between its value under $\pi$ and its best-response value against $\pi_{-i}$ is at most $\epsilon$ (see the formal condition below). We measure an algorithm's performance by the number of samples needed to learn an $\epsilon$-CCE, namely its sample complexity. When the state space is finite and small (which we call a tabular MG), the best-known sample complexities for finding an $\epsilon$-CCE (Wang et al., 2023, Cui et al., 2023) scale polynomially in $H$, the number of states, $\max_i A_i$, and $\epsilon^{-1}$.
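In symbols, writing $V^{\pi}_{i,1}$ for agent $i$'s value of the joint policy and $V^{\dagger,\pi_{-i}}_{i,1}$ for its best-response value against $\pi_{-i}$ (our own generic notation, which may differ cosmetically from the paper's exact symbols), the condition reads:

```latex
% epsilon-CCE: no agent can reduce its expected total loss by more than epsilon
% through a unilateral deviation to its best response.
\[
  \max_{i \in [m]} \Bigl\{ V^{\pi}_{i,1}(s_1)
      \;-\; \min_{\pi^{\dagger}_i} V^{\pi^{\dagger}_i \times \pi_{-i}}_{i,1}(s_1) \Bigr\}
  \;=\;
  \max_{i \in [m]} \Bigl\{ V^{\pi}_{i,1}(s_1) - V^{\dagger,\pi_{-i}}_{i,1}(s_1) \Bigr\}
  \;\le\; \epsilon .
\]
```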
MGs with Independent Linear Function Approximation.
When the state space is infinite, Cui et al. (2023) and Wang et al. (2023) concurrently propose to use independent linear function approximation (though with different Bellman completeness assumptions; see Appendix D of Wang et al. (2023)). Their sample complexities scale with $\epsilon^{-2}$ together with a polynomial dependency on $A_{\max}$ (Wang et al., 2023) and with $\epsilon^{-4}$ (Cui et al., 2023), respectively.
Our paper also considers Markov Games with independent linear function approximation. Inspired by linear MDPs in single-agent RL (Jin et al., 2020), we assume the transitions and losses are linear. For a detailed discussion on the connection between linear MDPs and Bellman completeness, we refer the reader to Section 5 of (Jin et al., 2020). Formally, we assume the following:
Assumption 2.1.
For any agent $i$, there exists a known $d$-dimensional feature mapping $\phi_i : \mathcal{S} \times \mathcal{A}_i \to \mathbb{R}^d$ such that, for any state $s$, action $a_i \in \mathcal{A}_i$, and any policy $\pi_{-i}$ of the other agents (ranging over the set of all policies that the algorithm may produce, similarly to Cui et al. (2023) and Wang et al. (2023)), the marginalized loss and transition of agent $i$ are linear in $\phi_i(s, a_i)$ (spelled out after this assumption),
where the corresponding loss parameters and transition measures are both unknown to the agent. Following the convention (Jin et al., 2020, Luo et al., 2021), we assume $\|\phi_i(s, a_i)\| \le 1$ for all $s$ and $a_i$, and that the unknown parameters have appropriately bounded norms.
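Concretely, the assumed structure can be written as follows; the symbols $\theta^{\pi_{-i}}_{i,h}$ and $\mu^{\pi_{-i}}_h$ below are our own generic notation for the unknown parameters and may differ cosmetically from the paper's.

```latex
% Independent linear structure for agent i: after marginalizing over the other
% agents' policy pi_{-i}, both the loss and the transition of agent i are
% linear in the feature phi_i(s, a_i).
\[
  \mathbb{E}_{\boldsymbol{a}_{-i} \sim \pi_{-i}(\cdot \mid s)}
      \bigl[\ell_{i,h}(s, a_i, \boldsymbol{a}_{-i})\bigr]
    = \bigl\langle \phi_i(s, a_i),\, \theta^{\pi_{-i}}_{i,h} \bigr\rangle,
  \qquad
  \mathbb{E}_{\boldsymbol{a}_{-i} \sim \pi_{-i}(\cdot \mid s)}
      \bigl[\mathbb{P}_h(s' \mid s, a_i, \boldsymbol{a}_{-i})\bigr]
    = \bigl\langle \phi_i(s, a_i),\, \mu^{\pi_{-i}}_h(s') \bigr\rangle .
\]
% Plugging the second identity into the Bellman equation immediately yields the
% linear-Q property stated next.
```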
This assumption also implies that the Q-functions of all agents, marginalized over the other agents' actions, are linear in the features: for each agent $i$, layer $h$, and policy $\pi$, there exists an unknown $d$-dimensional parameter $\theta^{\pi}_{i,h}$ with bounded norm such that $\mathbb{E}_{\boldsymbol{a}_{-i} \sim \pi_{-i}}\big[Q^{\pi}_{i,h}(s, a_i, \boldsymbol{a}_{-i})\big] = \langle \phi_i(s, a_i), \theta^{\pi}_{i,h} \rangle$ for all $s \in \mathcal{S}_h$ and $a_i \in \mathcal{A}_i$.
3 Improved AVLPR Framework
(3.1)
Our framework, presented in Algorithm 1, is based on the AVLPR framework proposed by Wang et al. (2023). The main differences are marked in violet. Before introducing these differences, we first overview the original AVLPR framework, which is almost the same as Algorithm 1 except for the construction of the Gap estimator in Equation 3.1, one of the most crucial innovations of our framework, which we will describe later.
Overview of the AVLPR Framework by Wang et al. (2023).
The original framework starts from an arbitrary policy and then gradually improves it: in the $t$-th epoch, all agents together make the next policy an $\epsilon_t$-CCE, where the target accuracy $\epsilon_t$ is halved from epoch to epoch. Thus, the number of epochs needed for an $\epsilon$-CCE is only $\mathcal{O}(\log(1/\epsilon))$.
For each epoch $t$, the agents determine their new policies layer by layer in reverse order, i.e., $h = H, H-1, \ldots, 1$. Suppose that we are at layer $h$ and want to find the joint policy on this layer. As the policies of layers $h+1, \ldots, H$ have already been calculated, the next-layer V-functions can be estimated (these estimates appear in Algorithm 1). Thus, the problem of deciding the layer-$h$ policy becomes a contextual bandit problem: the context (a state in layer $h$) is sampled by rolling in with a fixed policy (which only depends on the earlier layers), and the loss of every action is the Q-function induced by the current policy, which can be estimated from the observed losses and the next-layer V-estimates.
Hence, Wang et al. (2023) propose to deploy a contextual bandit algorithm on this layer $h$. This is abstracted as the plug-in subroutine CCE-Approx in Algorithm 1. (In the CCE-Approx subroutine, an iterative approach is used, which means that the returned policy is the average of several intermediate policies; as the Q-functions induced by these intermediate policies vary over iterations, each agent actually faces an adversarial, i.e., non-stationary, contextual bandit problem. We shall see more details in Section 4.) As we briefly mentioned in the technical overview, we should ensure that i) the joint policy has a calculable sub-optimality gap on all layer-$h$ states for all agents w.h.p. (see Equation 3.2), and ii) this sub-optimality gap estimation has a bounded expectation w.r.t. the roll-in distribution (see Equation 3.4).
Afterward, Wang et al. (2023) estimate the V-function for the current layer $h$, which is needed when we move on to the previous layer $h-1$ and invoke CCE-Approx again. This is done by another subroutine called V-Approx, which must ensure an "optimistic" estimation (see Equation 3.3).
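To make the epoch and layer structure concrete, here is a minimal Python-style sketch of the outer loop; the callables `cce_approx`, `v_approx`, and `line5_violated` are hypothetical stand-ins for the plug-in subroutines and the Line-5 potential check, not the paper's exact interface.

```python
# Schematic sketch of the (improved) AVLPR outer loop of Algorithm 1.

def improved_avlpr(num_epochs, H, num_candidates, cce_approx, v_approx, line5_violated):
    policy, gap = {}, {}                          # per-layer policies and (stochastic) Gap estimates
    v_est = {h: None for h in range(1, H + 2)}    # None stands for the all-zero layer-(H+1) V-function
    for t in range(1, num_epochs + 1):
        for h in range(H, 0, -1):                 # layers are processed in reverse order
            if t == 1 or line5_violated(t, h, gap[h]):
                # Several independent CCE-Approx runs; keep the candidate whose
                # (scalar summary of the) Gap estimate is smallest, cf. Equation (3.1).
                candidates = [cce_approx(t, h, v_est[h + 1]) for _ in range(num_candidates)]
                policy[h], gap[h] = min(candidates, key=lambda cand: cand[1])
            # Otherwise, the previous layer-h policy is lazily re-used.
            v_est[h] = v_approx(t, h, policy[h], v_est[h + 1])   # optimistic V-estimate (Condition 2)
    return policy
```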
By repeating this process for all layers $h = H, \ldots, 1$, the current epoch terminates. It can be inferred from Equations 3.2, 3.3 and 3.4 that the resulting policy is an $\epsilon_t$-CCE. To further reduce the sample complexity, Wang et al. (2023) propose another trick to "lazily" update the policies. Informally, if the Gap functions remain similar (measured by the increments in the potential function, c.f. Line 5), then we directly re-use the previous policy. By ensuring that such updates only happen a bounded number of times via a properly chosen potential, the claimed sample complexity is obtained. The main theorem of the original AVLPR framework can be summarized as follows.
Theorem 3.1 (Main Theorem of AVLPR (Wang et al., 2023, Theorem 18); Informal).
Suppose that
1.
(Per-state no-regret) If we call the subroutine CCE-Approx, then the returned policy and gap estimation shall ensure the following w.p. $1-\delta$:
(3.2) where Gap must be a deterministic (i.e., non-stochastic) function.
2.
(Optimistic V-function) If we call the subroutine V-Approx, then the returned V-function estimation should ensure the following w.p. $1-\delta$:
(3.3)
3.
(Pigeon-hole condition) There exists a deterministic complexity measure such that, w.p. $1-\delta$,
(3.4) where Gap is the deterministic gap function defined in Equation 3.2. Informally, if we execute the whole policy, its expected sub-optimality gap must be of the order dictated by this complexity measure.
4.
(Potential function) The potential function is chosen s.t. there exists a constant ensuring that Line 5 is violated at most a bounded number of times. Meanwhile,
(3.5)
Then, by setting the parameters appropriately, an $\epsilon$-CCE can be obtained within the claimed number of samples.
One can see that Wang et al. (2023) require the Gap function to be deterministic (highlighted in Theorem 3.1). However, as we noted when describing the CCE-Approx subroutine above, this corresponds to crafting high-probability regret bounds for adversarial linear contextual bandits, which is still open in the literature (Olkhovskaya et al., 2023). Hence, when facing MGs with (independent) linear function approximation, it is highly non-trivial to construct a non-stochastic high-probability upper bound like Equation 3.2 by deploying a regret-minimization algorithm at each state. Consequently, Wang et al. (2023) adopt a pure-exploration algorithm (i.e., using uniform policies in the subroutine CCE-Approx), which brings undesirable polynomial dependencies on $A_{\max}$.
Loosened High-Probability Bound Requirement.
To bypass this obstacle, instead of forcing each agent to do pure exploration like Wang et al. (2023), our Improved AVLPR framework allows Gap to be a stochastic (i.e., data-dependent) upper bound on the actual sub-optimality gap, as shown in Equation 3.6. The differences between Theorems 3.1 and 3.2 are highlighted in violet.
Theorem 3.2 (Main Theorem of Improved AVLPR; Informal).
Suppose that
1.
(Per-state no-regret) If we call the subroutine CCE-Approx, then the returned policy and gap estimation shall ensure the following w.p. $1-\delta$:
(3.6) where Gap is a random variable whose randomness comes from the environment (when generating the trajectories), the agents (when playing the policies), and internal randomness.
2.
(Optimistic V-function) If we call the subroutine V-Approx, then the returned V-function estimation should ensure Equation 3.3 w.p. $1-\delta$.
3.
(Pigeon-hole condition & Potential Function) The potential function is chosen s.t. there exists a constant ensuring that Line 5 is violated at most a bounded number of times. Meanwhile, there also exists a deterministic quantity ensuring the following w.p. $1-\delta$:
(3.7) where the expectation is taken w.r.t. the randomness in calculating the random variable Gap.
Then we have a conclusion similar to Theorem 3.1: the claimed number of samples suffices to produce an $\epsilon$-CCE.
We defer the formal proof of this theorem to Appendix A and only sketch the idea here.
Proof Sketch of Theorem 3.2.
The idea is to show that our selected policy-gap pair nearly satisfies the conditions in Theorem 3.1. We can select a different candidate for each state because states in the same layer do not influence one another. However, on a single state $s$, all agents must share the same candidate. This is because the expectation in Equation 3.6 is w.r.t. the opponents' policies: if any single agent deviated to a different candidate, all other agents would observe a different sequence of losses, and Equation 3.2 would break.
Now we focus on a single state $s$. By Equation 3.6, every candidate's Gap upper-bounds the actual sub-optimality gap on $s$ w.h.p. Moreover, Equation 3.3 ensures the optimism of the V-function estimates. So we only need to find a candidate whose Gap is "small" for all agents – more precisely, a candidate whose Gap is within a constant factor of the deterministic bound on its expectation.
From Markov's inequality, each candidate achieves this with probability at least $1/2$; thus, with enough independent candidates, the Gap-minimizing one (i.e., the candidate selected in Equation 3.1) achieves it with high probability.
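Spelling out this selection argument (with $n$ denoting the number of independently generated candidates per state and $\bar{G}$ the deterministic bound on $\mathbb{E}[\mathrm{Gap}]$, both in our own generic notation):

```latex
% Each independent candidate j satisfies E[Gap^{(j)}] <= \bar{G} (Condition 3), so
\[
  \Pr\bigl[\mathrm{Gap}^{(j)} > 2\bar{G}\bigr]
    \;\le\; \frac{\mathbb{E}\bigl[\mathrm{Gap}^{(j)}\bigr]}{2\bar{G}} \;\le\; \tfrac12
  \qquad\text{(Markov's inequality)},
\]
% and by independence of the n candidates,
\[
  \Pr\Bigl[\min_{j \in [n]} \mathrm{Gap}^{(j)} > 2\bar{G}\Bigr] \;\le\; 2^{-n},
\]
% so n = O(log(1/delta)) repetitions make the selected candidate satisfy
% Gap <= 2 \bar{G} with probability at least 1 - delta, while every candidate
% remains a pessimistic (upper) estimate of the true gap by Condition 1.
```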
As states in the same layer are independent, in the sense that the policy chosen at one state does not affect the Q-function at another state of the same layer, we can combine the per-state selections into a single layer-wide policy. Equation 3.6 then ensures Condition 1 of Theorem 3.1, and Condition 3 is closely related to our Equation 3.7. Besides the different Gap, the potential part is also slightly different from Equation 3.5. This is because the original version is mainly tailored to uniform exploration policies – as we will see in Section 5.2, we instead design a potential function similar to that of Cui et al. (2023), since both works use the current policy iterates as the exploration policy in CCE-Approx. ∎
4 Improved CCE-Approx Subroutine
(4.1)
(4.2)
(4.3)
(4.4)
To improve the sample complexity for MGs with independent linear function approximation, changing the framework alone is insufficient. This section introduces our improved implementation of the CCE-Approx subroutine, presented in Algorithm 2. While our framework remains mostly the same as AVLPR, the subroutine CCE-Approx is very different from that of Wang et al. (2023). Below, we briefly overview the main technical innovations made in Algorithm 2.
4.1 Magnitude-Reduction Loss Estimators
In this regret-minimization task, we deploy EXP3 on each state, similarly to Cui et al. (2023). Motivated by recent progress on linear MDPs (Dai et al., 2023), we aggressively set the regularization parameter $\gamma$ in the covariance estimation (Equation 4.1) to be much smaller than the usual choice (Cui et al., 2023). This is because the regret analysis exhibits a bias term growing with $\gamma$ – to get the $\sqrt{T}$-style regret necessary for an $\epsilon^{-2}$ sample complexity, $\gamma$ must be chosen very small.
However, the downside of this aggressive tuning of $\gamma$ is that the estimated Q-function can have an arbitrarily large magnitude and, in particular, can be far below zero. To comply with the EXP3 requirement that all loss estimators are at least $-1/\eta$ (c.f. Lemma E.10) while still using a large learning rate $\eta$, we adopt the Magnitude-Reduced Estimator proposed by Dai et al. (2023) to "move" the Q-estimators into the valid range (as we did in Equation 4.5). More details can be found in Lemma E.1.
4.2 Action-Dependent Bonuses
As we use estimated quantities instead of the actual ones when defining the loss estimators in Equation 4.5, an estimation error is incurred. Focusing on a single state $s$, this estimation error appears twice in the analysis: once evaluated under the player's own policy and once evaluated at the best action in hindsight. These two terms are called Bias-1 and Bias-2 in our analysis (see Equation 5.2).
The former term (Bias-1) is relatively easy to control: because the player's own policy is known, as we elaborate in Section 5.1, we can use variants of the Freedman inequality to bound it. For the latter term (Bias-2), due to the unknown best action, the aforementioned approach no longer works. One common technique in the literature is to design bonuses: suppose that we would like to cover an error of the form $\sum_t \mathrm{err}_t(s, a^\star)$, where $a^\star$ is the optimal action at $s$ in hindsight and $\mathrm{err}_t$ is an abstract error quantity (e.g., the estimation error); we then design bonuses $b_t$ such that
(4.6)
In this way, if we feed the bonus-corrected estimators $\widehat\ell_t - b_t$ instead of $\widehat\ell_t$ into the EXP3 algorithm deployed at $s$, we can roughly get (see Lemma E.10 for the original EXP3 guarantee and Theorem B.2 for this form)
and conclude that (the second term on the LHS also appears in the regret decomposition in Equation 5.2)
(4.7)
which means that we have replaced the unknown comparator-dependent quantity with a known one, namely the expected bonus under the player's own policy.
A traditional way of designing $b_t$ is to use the classical Freedman Inequality (Freedman, 1975), which roughly states that if each martingale difference is bounded by some constant $b$ almost surely, then the sum concentrates at the scale of its conditional variance plus an additive $b\log(1/\delta)$ term (with the expectations conditioned on the natural filtration, as the summands form a stochastic process).
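For reference, the classical inequality and the general shape of its adaptive, data-dependent counterpart are as follows, stated only up to universal constants; the precise versions used in this paper are Lemma E.7 and Lemma E.8.

```latex
% Classical Freedman: for a martingale difference sequence X_1, ..., X_T with
% |X_t| <= b almost surely, with probability at least 1 - delta,
\[
  \sum_{t=1}^{T} X_t \;\lesssim\;
    \sqrt{\Bigl(\textstyle\sum_{t=1}^{T} \mathbb{E}_t[X_t^2]\Bigr) \log\tfrac1\delta}
    \;+\; b \log\tfrac1\delta ,
  \qquad \mathbb{E}_t[\,\cdot\,] := \mathbb{E}[\,\cdot \mid \mathcal{F}_{t-1}].
\]
% Adaptive variants (Lee et al., 2020; Zimmert and Lattimore, 2022) replace the
% deterministic range b by data-dependent quantities, roughly of the form
\[
  \sum_{t=1}^{T} X_t \;\lesssim\;
    \sqrt{\Bigl(\textstyle\sum_{t=1}^{T} X_t^2\Bigr) \log\tfrac1\delta}
    \;+\; \Bigl(\max_{t \in [T]} X_t\Bigr) \log\tfrac1\delta ,
\]
% so occasionally huge errors only enter through their realized magnitude
% rather than through a worst-case almost-sure bound.
```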
In problems like linear bandits, the almost-sure bound $b$ is typically small (see, e.g., (Zimmert and Lattimore, 2022)); therefore, directly picking the bonus to match the RHS of the Freedman bound ensures Equation 4.6, as the resulting extra term can be absorbed directly into the regret bound. However, in our case, due to the aggressive choice of $\gamma$, $b$ can be prohibitively large, and such an approach fails.
To tackle this, we observe that if we can find a function $f(s,a)$ that dominates the per-step error at every action while its expectation under the player's own policy remains controlled,
we can set an action-dependent bonus of the form
(4.8)
Using the Adaptive Freedman Inequality (see Lemma E.7) given by Zimmert and Lattimore (2022), the error to be covered is bounded by data-dependent quantities that the bonus in Equation 4.8 dominates,
which implies the bonus condition in Equation 4.6. Meanwhile, the extra cost (see the RHS of Equation 4.7) is still bounded because we assume the expectation of $f$ under the player's policy can be controlled. Finally, the bonus function itself remains small enough, which is necessary as the EXP3 regret guarantee requires the fed-in losses (and hence the bonuses) to stay within the valid range.
In a nutshell, our action-dependent bonus technique allows the estimation errors to take more extreme values on rarely-visited state-action pairs: compared to the classical approach, which requires a uniform bound over all state-action pairs, our error can occasionally be large on some state-action pairs, as long as its expectation under the player's policy remains small. We expect this technique to be useful in other problems where high-probability bounds are required.
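To summarize the mechanism, the generic bonus-cancellation calculation behind Equations 4.6 and 4.7 goes as follows; it is written for a single state $s$ with EXP3 iterates $\pi_t$, loss estimates $\widehat\ell_t$, non-negative bonuses $b_t$, and comparator action $a^\star$, all in our own notation.

```latex
% Feeding the corrected losses (\hat\ell_t - b_t) into EXP3 yields a regret bound
% Reg with respect to the corrected losses; rearranging the terms gives
\[
  \sum_{t=1}^{T} \bigl\langle \pi_t, \widehat\ell_t \bigr\rangle
    - \sum_{t=1}^{T} \widehat\ell_t(a^\star)
  \;\le\; \mathrm{Reg}
    \;+\; \sum_{t=1}^{T} \bigl\langle \pi_t, b_t \bigr\rangle
    \;-\; \sum_{t=1}^{T} b_t(a^\star).
\]
% If the bonuses dominate the error evaluated at the comparator, i.e.
%   sum_t err_t(s, a*) <= sum_t b_t(s, a*)        (Equation 4.6),
% then the unknown comparator-dependent error is traded for the known quantity
%   sum_t <pi_t, b_t>                              (Equation 4.7),
% which only needs to be small in expectation under the player's own policy;
% this is exactly the property that the action-dependent choice (roughly,
% b_t(s, a) proportional to f(s, a)/T plus lower-order terms) is designed to have.
```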
4.3 Covariance Matrix Estimation
When analyzing linear regression, one critical step is to investigate the quality of the covariance matrix estimation. Typically, we would like to estimate the covariance matrix $\Sigma = \mathbb{E}_{\phi \sim \mathcal{D}}[\phi\phi^\top]$ of a $d$-dimensional feature distribution $\mathcal{D}$. Various approaches try to control either the additive error (Neu and Olkhovskaya, 2020, Luo et al., 2021) or the multiplicative error (Dai et al., 2023, Sherman et al., 2023b), but the achievable convergence rate is limited by the number of samples $n$ drawn from $\mathcal{D}$. Recently, Liu et al. (2023a) bypassed this limitation by considering a different error measure and obtained a faster convergence rate; this technique is also adopted in our analysis to achieve the desired regret. See Section E.2 for more details.
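A minimal numerical sketch of the objects involved is given below: the regularized empirical covariance $\widehat\Sigma = \gamma I + \frac{1}{n}\sum_j \phi_j\phi_j^\top$ (as in Equation 4.1) and an inverse-covariance-weighted quantity of the kind the analysis tracks. The sampling routine is a stand-in for drawing features under the exploration policy, and the specific error metric of Liu et al. (2023a) is not reproduced here.

```python
# Sketch: regularized empirical covariance estimation for a feature distribution.
import numpy as np

rng = np.random.default_rng(0)
d, n, gamma = 5, 2000, 1e-4     # gamma is the (aggressively small) regularizer of Section 4.1

def sample_features(size):
    """Stand-in for drawing phi(s, a) under the exploration policy; ||phi|| <= 1."""
    x = rng.normal(size=(size, d))
    return x / np.maximum(1.0, np.linalg.norm(x, axis=1, keepdims=True))

phis = sample_features(n)
ref_samples = sample_features(200_000)                    # large sample as a proxy for Sigma
Sigma = ref_samples.T @ ref_samples / len(ref_samples)
Sigma_hat = gamma * np.eye(d) + phis.T @ phis / n         # regularized empirical covariance

# The analysis cares about quantities like phi^T Sigma_hat^{-1} phi, whose error
# relative to phi^T (gamma I + Sigma)^{-1} phi shrinks as n grows.
phi = sample_features(1)[0]
est = phi @ np.linalg.solve(Sigma_hat, phi)
ref = phi @ np.linalg.solve(gamma * np.eye(d) + Sigma, phi)
print(f"estimated: {est:.4f}, reference: {ref:.4f}")
```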
4.4 Main Guarantee of Algorithm 2
Incorporating all these techniques, our Algorithm 2 ensures Equations 3.6 and 3.7. We make the following claims, whose formal statements are presented in Theorems B.1 and C.1.
Theorem 4.1 (Gap is w.h.p. Pessimistic; Informal).
When Algorithm 2 is configured properly, for each execution of CCE-Approx, the condition in Equation 3.6 holds with high probability.
Theorem 4.2 (Algorithm 2 Allows a Potential Function; Informal).
Consider the following potential:
(4.9)
where the covariance matrix appearing in Equation 4.9 is defined as in Equation 4.1 if Line 5 is violated in epoch $t$, and is inherited from the previous epoch otherwise (a recursive definition, since Line 5 may not be violated in the previous epoch either). Then, when Algorithm 2 is configured properly, the condition in Equation 3.7 holds with an appropriately chosen deterministic bound.
Putting our improved CCE-Approx subroutine (Algorithm 2) and the original V-Approx subroutine (Wang et al., 2023, Algorithm 3) together, we obtain our final algorithm for multi-player general-sum Markov Games with independent linear function approximations. As claimed, this algorithm enjoys only polynomial dependency on $m$, the optimal convergence rate $\epsilon^{-2}$, and avoids polynomial $A_{\max}$ factors. Formally, we give the following theorem.
Theorem 4.3 (Main Theorem of the Overall Algorithm).
Under a proper configuration of parameters, by picking CCE-Approx as Algorithm 2 and V-Approx as Algorithm 3 (Wang et al., 2023, Algorithm 3), under the independent linear function approximation assumption (Assumption 2.1), Algorithm 1 enjoys a sample complexity that is polynomial in $m$, $d$, and $H$, scales with $\epsilon^{-2}$, and depends only logarithmically on $A_{\max}$.
The formal proofs of Theorems 4.1, 4.2 and 4.3 are in Appendices B, C and D, respectively. In the following section, we give an overview of the high-level idea of proving Theorems 4.1 and 4.2.
5 Analysis Outline of the Improved CCE-Approx Subroutine
Analyzing the improved CCE-Approx subroutine is highly technical, and the full proofs of Theorems 4.1 and 4.2 are deferred to Appendices B and C. In this section, we highlight several key steps in verifying Equations 3.6 and 3.7 for Theorems 4.1 and 4.2.
5.1 Gap is w.h.p. Pessimistic (Proof Sketch of Theorem 4.1)
In this section, we fix an agent $i$ and a state $s$. We need to verify Equation 3.6 for this specific agent-state pair. By the construction of Gap in Line 12, Equation 3.6 is equivalent to
(5.1)
where the comparator is evaluated w.r.t. the "true" Q-function kernel induced by the transition and the next-layer V-function (defined formally in Equation B.3).
By (almost) standard regret decomposition, the LHS of Equation 5.1 can be rewritten as
(5.2)
In the remaining part of this section, we overview the main steps in controlling these terms.
5.1.1 Controlling Reg-Term via Magnitude-Reduced Estimator
The Reg-Term is the regret of the EXP3 instance deployed on each state $s$, measured w.r.t. the estimated losses fed into it.
As we feed the bonus-corrected loss estimators into the EXP3 procedure on each state (see Equation 4.3), this term can usually be controlled directly by the EXP3 regret guarantee stated in Lemma E.10, which roughly gives a bound of $\frac{\log A_i}{\eta} + \eta \sum_t \mathbb{E}_{a \sim \pi_t}\big[\widehat\ell_t(a)^2\big]$, provided that every fed-in loss is at least $-1/\eta$.
To verify the requirement that the losses are bounded from below, one usually controls the magnitude of the estimated Q-function via the regularization, as in (Luo et al., 2021):
(5.3)
While this works for the usual (larger) choices of $\gamma$, the resulting bound becomes prohibitively large under our aggressive choice of $\gamma$. Fortunately, thanks to the Magnitude-Reduced Estimator of Dai et al. (2023), the loss estimators are bounded from below by a controlled quantity (Dai et al., 2023, Theorem 4.1). As the bonus part is also small, we can apply the standard EXP3 guarantee in Lemma E.10 with the desired learning rate.
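A minimal exponential-weights (EXP3-style) step illustrating the constraint discussed above: the standard second-order analysis, and numerical stability, require $\eta\,\widehat\ell_t(a) \ge -1$, i.e., losses bounded below by $-1/\eta$, which is exactly what the Magnitude-Reduced Estimator provides. This is a generic sketch, not the subroutine of Algorithm 2.

```python
# Exponential-weights update on a single state with (possibly negative) loss
# estimates.  The standard analysis uses exp(-x) <= 1 - x + x^2, which is valid
# only for x >= -1, hence the requirement eta * loss >= -1.
import numpy as np

def exp_weights_step(pi, loss_hat, eta):
    """One multiplicative-weights step; pi is the current distribution over actions."""
    assert np.all(eta * loss_hat >= -1.0), "losses must be bounded below by -1/eta"
    w = pi * np.exp(-eta * loss_hat)
    return w / w.sum()

# Toy run: uniform start, random loss estimates clipped to stay above -1/eta.
rng = np.random.default_rng(1)
num_actions, eta = 4, 0.1
pi = np.full(num_actions, 1.0 / num_actions)
for _ in range(100):
    loss_hat = np.clip(rng.normal(scale=3.0, size=num_actions), -1.0 / eta, None)
    pi = exp_weights_step(pi, loss_hat, eta)
print(np.round(pi, 3))
```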
5.1.2 Controlling Bias-1 Term via Adaptive Freedman Inequality
The Bias-1 term corresponds to the cost of only knowing the estimated Q-function kernel instead of the true one when evaluating the player's own policy. As the estimator is biased due to the regularizer in Equation 4.1, Bias-1 can be further decomposed into an intrinsic bias term and an estimation error term:
where the intrinsic bias collects the conditional expectations of the errors (w.r.t. the natural filtration) and the estimation error collects the remaining martingale differences.
The intrinsic bias term is standard in analyzing (expected-)regret-minimization algorithms for linear MDPs – see, e.g., Lemma D.2 of Luo et al. (2021) – and can thus be controlled analogously.
The estimation error term is a martingale. Consequently, regret-minimization papers in single-agent RL usually drop it owing to its zero-mean nature. However, it plays an important role here since Gap must be a high-probability upper bound. Existing high-probability results for adversarial linear bandits usually apply the famous Freedman inequality (Freedman, 1975), which requires an almost-sure bound on the martingale differences.
However, as calculated in Equation 5.3, due to our choice of $\gamma$, this almost-sure bound must be huge. Hence, a direct application of the traditional Freedman inequality results in an unacceptably large additive factor.
Fortunately, as our strengthened Theorem 3.2 only requires a stochastic Gap with bounded expectation, we are allowed to use a data-dependent variant of this term. Inspired by the Adaptive Freedman Inequality proposed by Lee et al. (2020) and improved by Zimmert and Lattimore (2022), we prove a variant of the Freedman inequality in Lemma E.8, which roughly reads
(5.4)
As all quantities on the RHS of Equation 5.4 are available to the learner during execution, we can directly put them into Gap and ensure Equation 3.6. A more detailed calculation can be found in the proof of Theorem B.3.
5.1.3 Cancelling Bias-2 Using Bonus-2
While the Bias-2 term looks almost the same as Bias-1, we cannot directly put the RHS of Equation 5.4 into Gap, as the best-response action is unknown. Following the classical idea of designing bonuses, we make use of the Bonus-2 term to cancel Bias-2.
Although bonuses are standard in the literature, we remark again that our action-dependent bonus is novel. Following the notation of Section 4.2, we would like to cover the estimation error evaluated at the optimal action in hindsight
using bonuses such that Equation 4.7 holds. Because of the aggressive regularization, each martingale difference term can occasionally be huge – thus, directly picking a uniform, worst-case bonus would make Gap too large, which means the classical Freedman-inequality approach described in Section 4.2 again fails.
To tackle this issue, as motivated in Section 4.2, we introduce an action-dependent function $f(s,a)$ and average it over the episodes, resulting in the bonus definition of Equation 4.8 (one part corresponds to the $f$-related term, while the other corresponds to the data-dependent term of the Adaptive Freedman bound).
Thus, the Bias-2 term is indeed cancelled by Bonus-2. Moreover, let us briefly discuss why this approach is favorable when proving Theorem 4.2: regarding the extra-cost term in Equation 4.7, we no longer need to pay for the worst-case magnitude of the error, but only for its expectation under the player's policy. As this expectation is controlled (see Lemma C.6),
the extra cost is indeed nicely bounded using the arguments presented shortly in Section 5.2.
To conclude, action-dependent bonuses allow us to cancel a random variable that can exhibit extreme values but has a small expectation. We expect this technique to be of independent interest.
5.1.4 Directly Putting Bonus-1 into Gap
The Bonus-1 term is easy to handle when designing Gap, as its ingredients are known to the learner. Still, to ensure Equation 3.7, we need to control its expectation – the same arguments as in the previous paragraph then apply.
5.1.5 Controlling Mag-Reduce Like Bias-1 and Bias-2
This term is a challenge unique to our paper, arising from the Magnitude-Reduced Estimator of Dai et al. (2023): since the shifted estimator in Equation 4.5 differs from the unshifted one, an additional discrepancy term, which we call Mag-Reduce, shows up.
However, this term vanishes in the original paper because the estimator is unbiased (see Lemma E.1) and Dai et al. (2023) studied expected-regret minimization. Fortunately, because the samples used in Equation 4.2 and Equation 4.5 are i.i.d., we can decompose Mag-Reduce into the sum of a few martingales and apply the Adaptive Freedman Inequality to each of them. Informally, we observe that the resulting concentration terms are smaller than those of Bias-1 and Bias-2, and thus Mag-Reduce is automatically controlled – more detailed arguments are presented in the proof of Theorem B.9.
5.2 Controlling the Sum of ’s via Potential Function
The potential function is defined in Equation 4.9.
Below we briefly explain why it ensures Theorem 4.2. After taking expectation w.r.t. the randomness in Gap, we get (omitting dependencies on all other parameters; see Theorem C.2 for the formal statements and calculations)
Therefore, the definition of the potential ensures that if the current covariance is not so different from an earlier one, we can pretend that the current Gap expectation is also similar to the earlier one, i.e., re-using the previous policy will not violate the above inequality by much. Hence, suppose that the "lazy" update mechanism re-uses a policy for several consecutive epochs (i.e., Line 5 is not violated during this stretch, so the policies and covariances stay the same). We roughly have
(5.5)
where the matrix appearing in Equation 5.5 is the covariance matrix of agent $i$ in layer $h$ under the corresponding roll-in policy.
By the definition of this covariance, Equation 5.5 roughly reduces to a sum of inverse-covariance-weighted feature norms. Therefore,
Recalling that the scalar version of this sum is only logarithmic when each increment is bounded, one can expect the matrix version to be small as well. Indeed, from Lemma 11 of Zanette and Wainwright (2022), we can conclude that this sum is bounded by a dimension-dependent logarithmic quantity,
and thus the accumulated Gap expectations are controlled (again, omitting dependencies on all other parameters). Theorem 4.2 then follows.
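The logarithmic-sum bounds invoked here are, schematically and up to constants, as follows; the precise (expectation-based) statement used in the proof is Lemma 11 of Zanette and Wainwright (2022).

```latex
% Scalar version: for x_k in [0, 1] and gamma > 0,
%   sum_{k=1}^{K} x_k / (gamma + sum_{j<k} x_j)  <=  O( log(1 + K / gamma) ).
% Matrix (elliptical-potential) version: for feature vectors with ||phi_k|| <= 1,
\[
  \sum_{k=1}^{K} \min\Bigl\{1,\;
      \phi_k^{\top} \Bigl(\gamma I + \textstyle\sum_{j<k} \phi_j \phi_j^{\top}\Bigr)^{-1} \phi_k
  \Bigr\}
  \;\lesssim\; d \,\log\Bigl(1 + \tfrac{K}{d\gamma}\Bigr).
\]
% Bounds of this type are what keep the accumulated Gap expectations across the
% "lazy" epochs controlled by a dimension-dependent logarithmic quantity,
% which is the content of Theorem 4.2.
```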
6 Conclusion
In this paper, we consider multi-player general-sum Markov Games with independent linear function approximations. By enhancing the AVLPR framework recently proposed by Wang et al. (2023) with data-dependent pessimistic gap estimations, proposing novel action-dependent bonuses, and incorporating several state-of-the-art techniques from recent advances in the adversarial RL literature (Zimmert and Lattimore, 2022, Dai et al., 2023, Liu et al., 2023a), we give the first algorithm that i) bypasses the curse of multi-agents, ii) attains the optimal convergence rate, and iii) avoids polynomial dependencies on the number of actions. Our two main technical contributions, namely a) the data-dependent pessimistic sub-optimality gap estimations (Section 3) and b) the action-dependent bonus technique covering extreme estimation errors (Section 4.2), may be of independent interest.
Acknowledgement
We thank Haipeng Luo (University of Southern California), Chen-Yu Wei (University of Virginia), and Zihan Zhang (University of Washington) for their insightful discussions. We thank Fan et al. (2024) for pointing out a mistake in the original proof of Lemma A.2. We greatly acknowledge the anonymous reviewers for their comments. SSD acknowledges the support of NSF IIS 2110170, NSF DMS 2134106, NSF CCF 2212261, NSF IIS 2143493, NSF CCF 2019844, NSF IIS 2229881.
References
- Bai and Jin (2020) Yu Bai and Chi Jin. Provable self-play algorithms for competitive reinforcement learning. In International Conference on Machine Learning, pages 551–560. PMLR, 2020.
- Bai et al. (2020) Yu Bai, Chi Jin, and Tiancheng Yu. Near-optimal reinforcement learning with self-play. Advances in Neural Information Processing Systems, 33:2159–2170, 2020.
- Baker et al. (2020) Bowen Baker, Ingmar Kanitscheider, Todor Markov, Yi Wu, Glenn Powell, Bob McGrew, and Igor Mordatch. Emergent tool use from multi-agent autocurricula. In International Conference on Learning Representations, 2020.
- Brown and Sandholm (2019) Noam Brown and Tuomas Sandholm. Superhuman ai for multiplayer poker. Science, 365(6456):885–890, 2019.
- Chen et al. (2022a) Fan Chen, Song Mei, and Yu Bai. Unified algorithms for rl with decision-estimation coefficients: No-regret, pac, and reward-free learning. arXiv preprint arXiv:2209.11745, 2022a.
- Chen et al. (2022b) Zixiang Chen, Dongruo Zhou, and Quanquan Gu. Almost optimal algorithms for two-player zero-sum linear mixture markov games. In International Conference on Algorithmic Learning Theory, pages 227–261. PMLR, 2022b.
- Cui et al. (2023) Qiwen Cui, Kaiqing Zhang, and Simon Du. Breaking the curse of multiagents in a large state space: Rl in markov games with independent linear function approximation. In Proceedings of Thirty Sixth Conference on Learning Theory, volume 195, pages 2651–2652. PMLR, 2023.
- Dai et al. (2023) Yan Dai, Haipeng Luo, Chen-Yu Wei, and Julian Zimmert. Refined regret for adversarial mdps with linear function approximation. In Proceedings of the 40th International Conference on Machine Learning, volume 202, pages 6726–6759. PMLR, 2023.
- Daskalakis et al. (2009) Constantinos Daskalakis, Paul W Goldberg, and Christos H Papadimitriou. The complexity of computing a nash equilibrium. Communications of the ACM, 52(2):89–97, 2009.
- Daskalakis et al. (2023) Constantinos Daskalakis, Noah Golowich, and Kaiqing Zhang. The complexity of markov equilibrium in stochastic games. In Proceedings of Thirty Sixth Conference on Learning Theory, volume 195, pages 4180–4234. PMLR, 2023.
- Fan et al. (2024) Junyi Fan, Yuxuan Han, Jialin Zeng, Jian-Feng Cai, Yang Wang, Yang Xiang, and Jiheng Zhang. Rl in markov games with independent function approximation: Improved sample complexity bound under the local access model. In International Conference on Artificial Intelligence and Statistics, pages 2035–2043. PMLR, 2024.
- Freedman (1975) David A Freedman. On tail probabilities for martingales. the Annals of Probability, pages 100–118, 1975.
- Huang et al. (2022) Baihe Huang, Jason D. Lee, Zhaoran Wang, and Zhuoran Yang. Towards general function approximation in zero-sum markov games. In International Conference on Learning Representations, 2022.
- Jiang et al. (2017) Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford, and Robert E Schapire. Contextual decision processes with low bellman rank are pac-learnable. In International Conference on Machine Learning, pages 1704–1713. PMLR, 2017.
- Jin et al. (2020) Chi Jin, Zhuoran Yang, Zhaoran Wang, and Michael I Jordan. Provably efficient reinforcement learning with linear function approximation. In Conference on Learning Theory, pages 2137–2143. PMLR, 2020.
- Jin et al. (2021) Chi Jin, Qinghua Liu, Yuanhao Wang, and Tiancheng Yu. V-learning–a simple, efficient, decentralized algorithm for multiagent rl. arXiv preprint arXiv:2110.14555, 2021.
- Jin et al. (2022) Chi Jin, Qinghua Liu, and Tiancheng Yu. The power of exploiter: Provable multi-agent rl in large state spaces. In International Conference on Machine Learning, pages 10251–10279. PMLR, 2022.
- Kong et al. (2023) Fang Kong, Xiangcheng Zhang, Baoxiang Wang, and Shuai Li. Improved regret bounds for linear adversarial mdps via linear optimization. arXiv preprint arXiv:2302.06834, 2023.
- Lee et al. (2020) Chung-Wei Lee, Haipeng Luo, Chen-Yu Wei, and Mengxiao Zhang. Bias no more: high-probability data-dependent regret bounds for adversarial bandits and mdps. Advances in Neural Information Processing Systems, 33:15522–15533, 2020.
- Liu et al. (2023a) Haolin Liu, Chen-Yu Wei, and Julian Zimmert. Bypassing the simulator: Near-optimal adversarial linear contextual bandits. arXiv preprint arXiv:2309.00814, 2023a.
- Liu et al. (2023b) Haolin Liu, Chen-Yu Wei, and Julian Zimmert. Towards optimal regret in adversarial linear mdps with bandit feedback. arXiv preprint arXiv:2310.11550, 2023b.
- Liu et al. (2021) Qinghua Liu, Tiancheng Yu, Yu Bai, and Chi Jin. A sharp analysis of model-based reinforcement learning with self-play. In International Conference on Machine Learning, pages 7001–7010. PMLR, 2021.
- Luo et al. (2021) Haipeng Luo, Chen-Yu Wei, and Chung-Wei Lee. Policy optimization in adversarial mdps: Improved exploration via dilated bonuses. Advances in Neural Information Processing Systems, 34:22931–22942, 2021.
- Mao and Başar (2023) Weichao Mao and Tamer Başar. Provably efficient reinforcement learning in decentralized general-sum markov games. Dynamic Games and Applications, 13(1):165–186, 2023.
- Neu and Olkhovskaya (2020) Gergely Neu and Julia Olkhovskaya. Efficient and robust algorithms for adversarial linear contextual bandits. In Conference on Learning Theory, pages 3049–3068. PMLR, 2020.
- Ni et al. (2022) Chengzhuo Ni, Yuda Song, Xuezhou Zhang, Chi Jin, and Mengdi Wang. Representation learning for general-sum low-rank markov games. arXiv preprint arXiv:2210.16976, 2022.
- Olkhovskaya et al. (2023) Julia Olkhovskaya, Jack Mayo, Tim van Erven, Gergely Neu, and Chen-Yu Wei. First-and second-order bounds for adversarial linear contextual bandits. arXiv preprint arXiv:2305.00832, 2023.
- Shalev-Shwartz et al. (2016) Shai Shalev-Shwartz, Shaked Shammah, and Amnon Shashua. Safe, multi-agent, reinforcement learning for autonomous driving. arXiv preprint arXiv:1610.03295, 2016.
- Shapley (1953) Lloyd S Shapley. Stochastic games. Proceedings of the National Academy of Sciences, 39(10):1095–1100, 1953.
- Sherman et al. (2023a) Uri Sherman, Alon Cohen, Tomer Koren, and Yishay Mansour. Rate-optimal policy optimization for linear markov decision processes. arXiv preprint arXiv:2308.14642, 2023a.
- Sherman et al. (2023b) Uri Sherman, Tomer Koren, and Yishay Mansour. Improved regret for efficient online reinforcement learning with linear function approximation. In Proceedings of the 40th International Conference on Machine Learning, volume 202, pages 31117–31150. PMLR, 2023b.
- Silver et al. (2017) David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. nature, 550(7676):354–359, 2017.
- Song et al. (2022) Ziang Song, Song Mei, and Yu Bai. When can we learn general-sum markov games with a large number of players sample-efficiently? In International Conference on Learning Representations, 2022.
- Vinyals et al. (2019) Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in starcraft ii using multi-agent reinforcement learning. Nature, 575(7782):350–354, 2019.
- Wang et al. (2023) Yuanhao Wang, Qinghua Liu, Yu Bai, and Chi Jin. Breaking the curse of multiagency: Provably efficient decentralized multi-agent rl with function approximation. In Proceedings of Thirty Sixth Conference on Learning Theory, volume 195, pages 2793–2848. PMLR, 2023.
- Wen and Van Roy (2017) Zheng Wen and Benjamin Van Roy. Efficient reinforcement learning in deterministic systems with value function generalization. Mathematics of Operations Research, 42(3):762–782, 2017.
- Xie et al. (2020) Qiaomin Xie, Yudong Chen, Zhaoran Wang, and Zhuoran Yang. Learning zero-sum simultaneous-move markov games using function approximation and correlated equilibrium. In Conference on Learning Theory, pages 3674–3682. PMLR, 2020.
- Xiong et al. (2022) Wei Xiong, Han Zhong, Chengshuai Shi, Cong Shen, and Tong Zhang. A self-play posterior sampling algorithm for zero-sum markov games. In International Conference on Machine Learning, pages 24496–24523. PMLR, 2022.
- Yang and Wang (2020) Lin Yang and Mengdi Wang. Reinforcement learning in feature space: Matrix bandit, kernels, and regret bound. In International Conference on Machine Learning, pages 10746–10756. PMLR, 2020.
- Yin et al. (2022) Dong Yin, Botao Hao, Yasin Abbasi-Yadkori, Nevena Lazić, and Csaba Szepesvári. Efficient local planning with linear function approximation. In International Conference on Algorithmic Learning Theory, pages 1165–1192. PMLR, 2022.
- Zanette and Wainwright (2022) Andrea Zanette and Martin Wainwright. Stabilizing q-learning with linear architectures for provable efficient learning. In International Conference on Machine Learning, pages 25920–25954. PMLR, 2022.
- Zhan et al. (2023) Wenhao Zhan, Jason D. Lee, and Zhuoran Yang. Decentralized optimistic hyperpolicy mirror descent: Provably no-regret learning in markov games. In The Eleventh International Conference on Learning Representations, 2023.
- Zhao et al. (2023) Canzhe Zhao, Ruofeng Yang, Baoxiang Wang, and Shuai Li. Learning adversarial linear mixture markov decision processes with bandit feedback and unknown transition. In The Eleventh International Conference on Learning Representations, 2023.
- Zimmert and Lattimore (2022) Julian Zimmert and Tor Lattimore. Return of the bias: Almost minimax optimal high probability bounds for adversarial linear bandits. In Conference on Learning Theory, pages 3285–3312. PMLR, 2022.
Appendix A Analysis of the Improved AVLPR Framework
This section proves the main theorem of our Improved AVLPR framework, restated as follows.
Theorem A.1 (Main Theorem of Improved AVLPR; Restatement of Theorem 3.2).
Suppose that
1.
(Per-state no-regret) CCE-Approx ensures the following w.p. $1-\delta$:
(A.1) where Gap is a random variable whose randomness comes from the environment (when generating the trajectories), the agents (when playing the policies), and internal randomness.
2.
(Optimistic V-function) V-Approx ensures the following w.p. $1-\delta$:
(A.2)
3.
(Pigeon-hole condition & Potential Function) The potential function ensures that Line 5 in Algorithm 1 is violated at most a bounded number of times. Moreover, there exists a deterministic quantity ensuring the following w.p. $1-\delta$:
(A.3) where the expectation is taken w.r.t. the randomness in Gap.
Then with probability , the sum of CCE-Regret over all agents satisfies
where is the policy for the -th epoch (which is the policy with smallest Gap; see Equation 3.1).
Suppose that one call to will cost samples ( is the last parameter to the subroutine), and, similarly, one call to will cost samples. Then by picking , the roll-out policy is an -CCE and the sample complexity is bounded by
Proof.
The proof mostly inherits the proof of Theorem 18 from (Wang et al., 2023); all the main differences are sketched in the proof sketch of Theorem 3.2 (in the main text).
Let be the iterations where 5 is violated. For any and , we first pass a “next-layer” V-function to and then calculate the “current-layer” V-function via . By Equation A.1 of Condition 1 and Equation A.2 of Condition 2, for any “next-layer” ,
Thus, by induction over all , we know that for all and ,
In words, this indicates that our is indeed an optimistic estimation of the best-response V-function. Again applying induction by invoking the other part of Equation A.2, we know for all ,
Putting these two inequalities together, the following holds for all :
Hence, for each epoch, denote the policy it actually uses as the one computed in the most recent epoch where Line 5 was violated. By the above calculation, we can write the sum of CCE-Regret over all agents as
From Lemma A.2, we conclude that the following holds w.p. :
Moreover, recall from Equation A.3 that
We then have
which means that our choice indeed rolls out an $\epsilon$-CCE.
It only remains to calculate the total sample complexity. From Condition 3, . Thus, the total sample complexity is no more than
as claimed. ∎
Lemma A.2.
For any , w.p. , we have , .
Proof.
Denote the policy-Gap pairs yielded as . By definition, where is defined as (in Equation 3.1)
By Markov inequality, for each , . Thus, as and defined as the with smallest in Equation 3.1, we have with probability . Taking a union bound over all gives our conclusion that for all w.p. . ∎
Appendix B Analysis of the Improved CCE-Approx Subroutine
Theorem B.1 (Gap is w.h.p. Pessimistic; Formal Version of Theorem 4.1).
Suppose that , , , and . For each execution of ,
(B.1)
w.p. , where is defined as (the names are in correspondence to those in Equation 5.2)
(B.2)
Proof.
To make the proof easy to read, we first restate Equation B.1 using the notations of CCE-Approx. Let be the Q-function kernel induced by the next-layer V-function and , i.e.,
(B.3)
Then is a sample with mean .
Let be the best-response policy of agent when facing (which is the average policy for episodes, i.e., ). We need to ensure the following with probability :
while Gap is allowed to involve data-dependent quantities that are available during run-time. By plugging in the definition and decomposing the right-hand side, this inequality becomes
(B.4)
for all player and state , with probability .
In Theorems B.2, B.3, B.6, B.8 and B.9, we control each term in the RHS of Equation B.4 and show that Equation B.4 indeed holds with probability when Gap is defined in Equation B.2. ∎
B.1 Bounding Reg-Term via EXP3 Regret Guarantee
In EXP3, one typical requirement is that the loss vector fed into EXP3 should satisfy (see Lemma E.10). To comply with the condition, people usually control the Q-estimate via (Luo et al., 2021) and set , suffering loss of order .
However, can be prohibitively large when setting , which Theorem E.5 requires. Fortunately, thanks to the Magnitude-Reduced Estimator by Dai et al. (2023), (defined in Equation 4.5) can be bounded from below by and thus we can pick the standard learning rate of . The other component in is the bonuses, which is for all .
Theorem B.2.
When , with probability , for all and :
Proof.
The only thing we need to verify before invoking Lemma E.10 is that . We first show . Recall the definition of in Equation 4.5:
As , we know as holds for any . Thus, to lower bound , it suffices to lower bound . Notice that
where (a) used the Cauchy–Schwarz inequality and (b) used Jensen's inequality together with the boundedness of the features. Let the true covariance matrix of layer $h$ and agent $i$ be
(B.5)
The average matrix in the middle, namely
is an empirical estimation of the true covariance . Hence, by stochastic matrix concentration results stated as Corollary E.4 of Section E.2, we know
Meanwhile, the following matrix from Equation 4.1 (where we defined ) is yet another empirical estimation of that is independent to :
(B.6)
By similar arguments stated as Corollary E.3 of Section E.2, we have
Taking a union bound over all , the following holds for all with probability :
Therefore, we have with probability , which is at least with our choice of .
For the bonus term, we consider its two parts separately. For the first part, we have the following upper bound:
For the -related term, notice that for any , we have
where is the one-hot vector at the -th coordinate. By Cauchy-Schwartz, this is further bounded by . We also have for any . Thus
(B.7)
So the -related term is controlled by , which means .
Thus the condition that holds once , i.e., . Applying the EXP3 regret bound (Lemma E.10) gives
Reg-Term
where the last step uses . All these terms are available during run-time, so the algorithm can include them into . ∎
B.2 Bounding Bias-1 via Adaptive Freedman Inequality
Theorem B.3.
With probability , we have the following for all and :
Bias-1
Proof.
Let be a sequence of random variables adapted to filtration where
Let be the conditional expectations. Then forms a martingale difference sequence. We divide Bias-1 into two parts, one for the intrinsic bias of (how differs from ) and the other for the estimation error (how differs from ). Namely,
(B.8)
The first term is a standard term appearing in regret-minimization analyses of single-agent RL. In Lemma B.4, we control it analogously to Lemma D.2 of Luo et al. (2021), but invoking the new covariance estimation analysis by Liu et al. (2023a) (which we restate in Section E.2).
The second term is the main obstacle preventing high-probability regret bounds for adversarial contextual linear bandits. While we are also unable to provide a deterministic high-probability upper bound, thanks to our Improved AVLPR framework (see the discussions after Theorem 3.2), data-dependent high-probability bounds are allowed. This is achieved in Lemma B.5 by developing a variant of the Adaptive Freedman Inequality proposed by Lee et al. (2020) and improved by Zimmert and Lattimore (2022) (the variant can be found in Section E.3). ∎
B.2.1 Controlling Intrinsic Bias
Lemma B.4.
With probability , for any and , we have
Proof.
The conditional expectation can be directly calculated as
where (a) uses the independence between and the trajectory , and (b) uses the definition of in Equation B.5.
To handle the first term of Equation B.8, we use the Cauchy-Schwarz inequality, the triangle inequality, and the AM-GM inequality (the calculation follows Lemma D.2 of Luo et al. (2021)).
where is defined in Equation B.6 such that . The first term directly goes to Gap as it is available during run-time. The second term can be controlled by the following inequality (Liu et al., 2023a, Lemma 14) which we include as Theorem E.5:
(B.9)
where we plugged in the definition of that . Conditioning on the good events in Equation B.9 and taking a union bound over all , our conclusion follows. ∎
B.2.2 Controlling Estimation Error
Lemma B.5.
With probability , for all and , we have
Proof.
From the Adaptive Freedman Inequality (Lemma E.8 in Section E.3), we have
(B.10)
where . By definition of ,
where we used . Hence, .
As is available during run-time, it only remains to control to make Equation B.10 calculable. By definition of , we have
We focus on the expectation in the middle, i.e., . Plugging in the definition of :
From Corollary E.3, w.p. when . Hence,
Putting this into gives
(B.11)
Our conclusion follows by combining Equations B.11 and B.10. ∎
B.3 Cancelling Bias-2 Using Bonus-2
Bias-2 is similar to Bias-1, except that we now have instead of . This subtle difference actually prevents us from handling Bias-2 analogously to Bias-1, as is unknown to the agent. As sketched in the main text, we also adopt the classical idea of using bonuses to cancel biases. However, as the maximum among can be as large as , it can no longer be neglected as in previous papers.
As mentioned in the main text, we use a state-action-wise bonus to cancel the maximum martingale difference term induced by the Adaptive Freedman Inequality. As Equation B.4 is linear in , we only need to consider the ’s that are one-hot on some action . For notational simplicity, we abbreviate when is clear from the context.
Theorem B.6.
When and , w.p. , for all and ,
Proof.
Imitating the analysis in Section B.2 but applying the original Adaptive Freedman Inequality (Lemma E.7) instead of our Lemma E.8 gives Lemma B.7, i.e.,
(B.12)
By definition of Bonus-2, we have
Bonus-2
(B.13)
So we only need to control Equation B.12 using . The first term in Equation B.12 is already contained in Equation B.13, while the second term is a constant. For the third term, we would like to control it using the remaining , i.e., we show
In other words, we would like to control by . Equivalently,
As , (recall our assumption that ). As , this inequality is ensured so long as
For the last term, by the definition of , it is covered by the second part of Equation B.13 once , as where , which means the term in Equation B.13 covers . Hence our conclusion follows given that and . ∎
Lemma B.7.
With probability , for all and , we have
Bias-2
Proof.
Imitating Section B.2, we also decompose Bias-2 into intrinsic bias and estimation error. Letting and , we have
The first term is handled in the same way as in Lemma B.4: concluding that and then applying the Cauchy-Schwarz, triangle, and AM-GM inequalities gives
For the second term, the proof is also similar to Lemma B.5. The only difference is that instead of our Lemma E.8, we now apply the original Adaptive Freedman Inequality (in Lemma E.7). We get
where . Following the calculations in Lemma B.5, is bounded by and . The maximum part is directly contained in our conclusion by noticing that . ∎
B.4 Putting Bonus-1 into Gap Directly
The two components of the Bonus-1 term, namely and , are both known during run-time, so we trivially have the following theorem:
Theorem B.8.
For all and , we have
Proof.
This is the definition of Bonus-1. ∎
B.5 Bounding Mag-Reduce via Martingale Properties
Theorem B.9.
Mag-Reduce is bounded by the sum of the RHSs of Lemmas B.5 and B.7 w.p. .
Proof.
By definition of in Equation 4.5, we have
As and are both sampled from , all these ’s have a common mean. Thus, by telescoping, we can decompose Mag-Reduce into a sum of martingales.
It suffices to consider only one of them, for example,
(B.14)
where denotes the -th layer state and the -th layer action of the -th agent, both sampled from .
Again, there are two components in Equation B.14, one related to and the other related to . Fortunately, they can be handled similarly to Theorem B.3 and Theorem B.6: for the part, applying Lemma E.8 as in Section B.2.2, we have
Noting that , this becomes identical to the conclusion of Lemma B.5. Thus, this component only causes a contribution to the final Gap and can be neglected.
Appendix C Controlling the Expectation of Gap Using Potentials
In this section, we verify Equation A.3, i.e., prove Theorem 4.2.
Theorem C.1 (Algorithm 2 Allows a Potential Function; Formal Version of Theorem 4.2).
With probability , under the conditions of Theorem B.1, it is possible to give a tuning of Algorithm 2 such that
In other words, Equation A.3 is ensured by picking .
Proof.
The proof is divided into two parts. In Theorem C.2, we calculate for any roll-in policy (although we only use as the roll-in policy, it is unknown at the point when is executed, because we iterated ). Then, in Theorem C.7, we control their summation using the definition of potential functions in Equation 4.9, also borrowing techniques from Zanette and Wainwright (2022) and Cui et al. (2023). ∎
C.1 Calculating the Expectation of Gap w.r.t. Any
Theorem C.2.
Consider a single agent , epoch , and layer . Fix any outcome of with set as . Then for any “roll-in” policy (which is chosen as in Equation A.3), under the conditions in Theorem B.1, i.e., setting , , , and for the execution of CCE-Approx with , we have
Note that, although Algorithm 1 mixes Gap’s from different ’s, all Gap’s are i.i.d. samples from the same distribution, and thus does not depend on the choice of in Equation 3.1.
Proof.
Recalling the definition of from Equation B.2, we have the following decomposition:
(C.1)
We then bound these terms one by one in Lemmas C.3, C.4, C.5, and C.6, which together give the conclusion. ∎
Lemma C.3.
For the Reg-Term part in Equation C.1, we have
where stands for the state-action pair that agent observes in layer .
Proof.
The first component is already a constant and only contributes to the final bound. For the second component, we invoke the property of the Magnitude-Reduced Estimator by Dai et al. (2023) (which we summarize as Lemma E.1) and conclude , where the expectation is only taken w.r.t. the randomness in Equation 4.5. Thus
where (a) uses the fact that are sampled from (recall the definition of in Equation 4.1), and (b) uses Corollary E.3 from Liu et al. (2023a) (which gives ) together with the fact that (for those states in ). Recalling the configuration that in CCE-Approx, the above quantity is . ∎
Lemma C.4.
For the Bias-1 part in Equation C.1, we have
Proof.
As , we can write (ignoring constants and logarithmic factors)
According to the calculations in Lemma B.5 on , the second term is exactly bounded by the first one. Utilizing the fact that , we have
Bias-1
where the last line again uses the configuration and the definition of . ∎
Lemma C.5.
The Bias-2 + Bonus-2 part in Equation C.1 is of order .
Proof.
This part is already a constant in Equation A.3. ∎
Lemma C.6.
The Bonus-1 term in Equation C.1 is bounded by
Proof.
For the -part, we have
which directly becomes the first part in the conclusion (where, again, we used the configuration that and also the definition of ).
For the -part, from the calculations in Equation B.7, we know
Utilizing the Cauchy-Schwarz inequality and the fact that , we get
Putting the two parts together gives our conclusion. ∎
C.2 Summing Up ’s Using Potentials
Theorem C.7.
Under the conditions of Theorem B.1, i.e., setting , , , and for CCE-Approx executions with parameter (which is set to in each epoch where 5 is violated), we have
Proof.
For any , let the last time that 5 was violated be . Then and by 5. For any , we denote as the number of indices such that . Throughout the proof, we use to denote the used by CCE-Approx in the -th epoch, respectively. Recall the conclusion from Theorem C.2 (note that the LHS of Theorem C.2 is ),
We focus on a single agent . For notational simplicity, we denote as the state-action pair that agent observes in layer . The first two terms are bounded by . For the third term, we replace the coefficients with a sup:
where, for simplicity, we define as zero for those where 5 isn’t violated. Similarly, for the last term, we replace the coefficients with a sup and then apply Cauchy-Schwarz. We get
Thus the only thing we need to do before concluding the proof is to show that
(C.2)
Indeed, this quantity is of order for all and w.p. : In Lemma C.8, we imitate Lemma 10 of Cui et al. (2023) and conclude that Equation C.2 is of order for any fixed and w.p. ; taking a union bound then gives the aforementioned fact.
To summarize, we have
and we need to ensure that , , , and . Setting , , , and gives
as claimed. ∎
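As general background for this kind of potential-based summation (the actual potential in Equation 4.9 is defined in the main text and is not reproduced here), the following Python sketch illustrates the classic elliptical-potential computation from the linear bandit literature, which bounds the cumulative "new direction" mass by a logarithmic quantity. The constants and the specific construction below are illustrative assumptions only, not the potential used by our algorithm.

```python
import numpy as np

def elliptical_potential_demo(T=2000, d=6, lam=1.0, seed=0):
    """Classic elliptical-potential computation: sum_t min(1, ||x_t||^2_{Lambda_{t-1}^{-1}})
    grows only logarithmically in T when ||x_t|| <= 1 and Lambda_0 = lam * I.
    Shown purely as generic background for potential-based summations."""
    rng = np.random.default_rng(seed)
    Lam = lam * np.eye(d)
    total = 0.0
    for _ in range(T):
        x = rng.uniform(-1, 1, size=d)
        x /= max(1.0, np.linalg.norm(x))          # keep ||x|| <= 1
        total += min(1.0, x @ np.linalg.solve(Lam, x))
        Lam += np.outer(x, x)
    bound = 2 * d * np.log(1 + T / (d * lam))     # standard upper bound
    return total, bound

print(elliptical_potential_demo())
```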
Lemma C.8.
For any agent and layer , w.p. , we have
Proof.
For a where 5 is violated, recalling the definition of in Equation 4.1 and the definition of in Equation B.6, we know . Also, recalling the choice of in Algorithm 2 and Corollary E.3 of Liu et al. (2023a), which says , we have
Let be the true covariance of , i.e., . Equation C.2 becomes
Note that for any where 5 is violated; thus, Equation C.2 can be viewed as a matrix version of where . We use the following lemma by Zanette and Wainwright (2022, Lemma 11):
Lemma C.9 (Lemma 11 by Zanette and Wainwright (2022)).
For a random vector , scalar , and PSD matrix , if where , then
In our case, we apply Lemma C.9 to all with , , and the distribution as with . We first calculate , which is bounded by our potential construction and is analogous to Lemma 4 of Cui et al. (2023):
Lemma C.10.
For any where 5 is violated, we have the following w.p. :
The proof of Lemma C.10 is presented shortly after.
Therefore, when defining , we can conclude from Lemma C.9:
Proof of Lemma C.10.
Recall the potential function definition in Equation 4.9, which says . Suppose that the first time after that 5 is violated again is epoch ; then is the first potential to reach , which means
Appendix D Proof of the Resulting Sample Complexity (Theorem 4.3)
Proof of Theorem 4.3.
We verify the conditions in Theorem 3.2. The first condition, i.e., Equation A.1, is formalized as Theorem B.1, whose proof is in Appendix B. The second condition, i.e., Equation A.2, is postponed to Section D.1. The third condition, i.e., Equation A.3, is justified in Theorem C.1.
We can then invoke Theorem 3.2 to conclude a sample complexity of
where (see Algorithms 2 and 3), (from Theorem C.1), and (from Section E.5 of Wang et al. (2023)). ∎
D.1 V-Approx Procedure by Wang et al. (2023) and Its Guarantee
In this section, we introduce the V-Approx procedure by Wang et al. (2023). In Algorithm 3, we present the algorithm for linear Markov Games (i.e., the Optimistic-Regress procedure in their Algorithm 3 is replaced by that in Section E.1).
Similar to Section E.3 of Wang et al. (2023), we have the following lemma:
Lemma D.1.
Consider using Algorithm 3 with some roll-in policy and episode length . Let . Then Algorithm 3 with parameters ensures that its output satisfies the following with probability :
Proof.
Consider a fixed and . By Assumption 2.1, there exists such that
Let and . From Lemma 21 of Wang et al. (2023), we know when , with probability ,
(D.1)
where (which is also the expectation of ). We also conclude from Lemma 22 of Wang et al. (2023) that . Therefore, for any and , we have
where the last step uses both Theorem E.5 and Equation D.1. Noticing that is also an empirical covariance of policy , we can conclude that from Corollaries E.3 and E.4. Consequently, the error in is no more than
where the last step used Cauchy-Schwarz. Recalling the definition of Gap from Equation B.2, we conclude that the total estimation error of V-Approx is no more than . Our claim then follows from imitating the remaining arguments of Wang et al. (2023, Section E.3). ∎
Appendix E Auxiliary Lemmas
E.1 Magnitude-Reduced Estimators
The following lemma characterizes the Magnitude-Reduced Estimator (Dai et al., 2023).
Lemma E.1 (Magnitude-Reduced Estimators (Dai et al., 2023)).
For a random variable , its magnitude-reduced estimator where satisfies
Proof.
The first conclusion follows from .
For the second conclusion, by definition of , . As , . By Jensen’s inequality, . Hence, we arrive at the conclusion that .
The last inequality follows from the fact that is if and if . Therefore, , as desired. ∎
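Since the displayed formula above did not survive extraction, the following Monte Carlo sketch only illustrates the generic pattern of magnitude reduction. The specific form `max(X, 0) + E[min(X, 0)]`, i.e., replacing the negative part of the estimator by its expectation, is a hypothetical stand-in chosen to match the qualitative properties used in this paper (a deterministic lower bound and an unchanged mean); it is not necessarily the exact construction of Dai et al. (2023).

```python
import numpy as np

# Hypothetical stand-in for a magnitude-reduced estimator (NOT the exact
# construction of Dai et al. (2023), whose formula is not reproduced above):
# keep the positive part of X and replace the negative part by its mean, so
# the estimator is lower-bounded by E[min(X, 0)] while its mean equals E[X].
rng = np.random.default_rng(1)
X = rng.normal(loc=0.3, scale=2.0, size=200_000)   # raw (possibly very negative) estimates
mean_neg = np.minimum(X, 0).mean()                 # Monte Carlo proxy for E[min(X, 0)]
X_reduced = np.maximum(X, 0) + mean_neg            # magnitude-reduced version

print("E[X]           =", X.mean())
print("E[X_reduced]   =", X_reduced.mean())        # matches E[X] up to Monte Carlo error
print("min(X)         =", X.min())                 # can be far below zero
print("min(X_reduced) =", X_reduced.min())         # lower-bounded by mean_neg
```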
E.2 Stochastic Matrix Concentration
We then present some stochastic matrix concentration results.
Lemma E.2 (Lemma A.4 by Dai et al. (2023)).
If are i.i.d. -dimensional PSD matrices such that for all : i) , ii) a.s., and iii) , then
Corollary E.3.
If are i.i.d. -dimensional PSD matrices such that for all : i) , and ii) a.s. for some positive constant , then
The second corollary gives the opposite direction of Corollary E.3.
Corollary E.4.
For i.i.d. PSD with expectation such that a.s. for all ,
Proof.
The proof mostly follows Corollary 10 of Liu et al. (2023a). Using the fact that for any , we know from Lemma E.2 that, under the same conditions,
In other words,
(E.1)
Now we show Corollary E.4. For the case where , define . Then . Moreover, also ensures . Hence, applying Equation E.1 to gives
By the definitions of and , we further have the following, which shows our claim:
The case of is trivial because . ∎
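As a purely illustrative numerical check of the two-sided covariance concentration above (the constants below are not those of Lemma E.2 or Corollaries E.3/E.4, and the feature distribution is an arbitrary bounded example), the following numpy sketch compares an empirical covariance of i.i.d. bounded feature vectors against the true covariance via the eigenvalues of their difference.

```python
import numpy as np

def loewner_gap(n=4000, d=5, seed=0):
    """Compare hat_Sigma = (1/n) sum_i x_i x_i^T with Sigma = E[x x^T].

    Results like Corollaries E.3/E.4 assert that, with high probability, the
    empirical and true covariances sandwich each other up to constant factors
    (plus a small additive term); here we simply report the extreme
    eigenvalues of their difference, which shrink as n grows.
    """
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(d, d))
    Z = rng.uniform(-1.0, 1.0, size=(n, d))   # bounded raw features
    X = Z @ A.T                                # x_i = A z_i, so ||x_i|| is bounded a.s.
    Sigma = A @ A.T / 3.0                      # E[z z^T] = I/3 for Unif[-1, 1]
    hat_Sigma = X.T @ X / n
    eigs = np.linalg.eigvalsh(hat_Sigma - Sigma)
    return eigs.min(), eigs.max()

print(loewner_gap(n=1_000))
print(loewner_gap(n=100_000))                  # gap shrinks roughly like 1/sqrt(n)
```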
A key fact from the above corollaries is Lemma 14 of Liu et al. (2023a), which we include below.
Theorem E.5 (Lemma 14 of Liu et al. (2023a)).
For a -dimensional distribution , let be i.i.d. samples from . Define and . If where , then with probability , we have
E.3 Adaptive Freedman Inequality
In this paper, we will make use of the Adaptive Freedman Inequality proposed by Lee et al. (2020, Theorem 2.2) and improved by Zimmert and Lattimore (2022, Theorem 9), stated as follows.
Lemma E.6 (Theorem 9 of Zimmert and Lattimore (2022)).
For a sequence of martingale differences adapted to the filtration , suppose that a.s. Then
where
A direct corollary of Lemma E.6 is the following lemma:
Lemma E.7.
For a sequence of random variables adapted to the filtration , let the conditional expectation of be . Suppose that a.s. Then
where
Proof.
We apply Lemma E.6 to the martingale difference sequence , giving
It is clear that . Furthermore, we know that . Putting these two parts together gives our conclusion. ∎
We also give the following variant of Lemma E.7:
Lemma E.8.
For a sequence of random variables adapted to the filtration , let the conditional expectation of be . Suppose that a.s. Then
where
Proof.
Applying Lemma E.6 to the martingale difference sequence , we see that the following inequality holds with probability :
where
Utilizing the fact that for all , we have
Hence, we can write
Similarly, . By exactly the same arguments, the same inequality also holds for . Our conclusion then follows. ∎
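To give a feel for the shape of these bounds, the following Python sketch simulates a bounded martingale difference sequence and compares its partial sum against a generic "square root of predictable variance plus range" envelope. The constants and logarithmic factors are placeholders for illustration, not those of Lemmas E.6–E.8.

```python
import numpy as np

def freedman_style_check(T=5000, delta=0.01, seed=0, trials=200):
    """Monte Carlo illustration of a Freedman-type envelope.

    For martingale differences X_t with |X_t| <= 1, bounds of the flavour of
    Lemmas E.6-E.8 control sum_t X_t by roughly
        C1 * sqrt(V_T * log(1/delta)) + C2 * log(1/delta),
    where V_T = sum_t E[X_t^2 | F_{t-1}] is the (data-dependent) predictable
    variance. The constants C1 = C2 = 2 below are placeholders.
    """
    rng = np.random.default_rng(seed)
    violations = 0
    for _ in range(trials):
        scale = rng.uniform(0.1, 1.0, size=T)        # conditional scales, independent of the signs
        X = scale * rng.choice([-1.0, 1.0], size=T)  # mean-zero differences with |X_t| <= 1
        V = np.sum(scale ** 2)                       # sum of conditional variances
        envelope = 2.0 * np.sqrt(V * np.log(1 / delta)) + 2.0 * np.log(1 / delta)
        violations += np.sum(X) > envelope
    return violations / trials                       # should be well below delta

print(freedman_style_check())
```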
E.4 Relative Concentration Bounds
Lemma E.9 (Lemma 48 by Cui et al. (2023)).
Let be i.i.d. random variables supported in and let . Let be the stopping time that . Suppose that , then w.p. , .
E.5 EXP3 Regret Guarantee
The following lemma is a classical result for the EXP3 algorithm. For completeness, we also include the proof by Dai et al. (2023, Lemma C.1) here.
Lemma E.10.
Let be defined as
where is the loss corresponding to the -th iteration. Suppose that for all and . Then
holds for any distribution when .
Proof.
By linearity, it suffices to prove the inequality for all one-hot ’s. Without loss of generality, let where . Define as the prefix sum of . Let
then by definition of , we have
where (a) used for all and (b) used (again for all ). Therefore, summing over gives
Moving to the LHS shows the inequality for ; the result then extends to all by linearity. ∎