
Mean-Field Sampling for Cooperative Multi-Agent Reinforcement Learning

Emile Anand    Ishani Karmarkar    Guannan Qu
Abstract

Designing efficient algorithms for multi-agent reinforcement learning (MARL) is fundamentally challenging because the size of the joint state and action spaces grows exponentially in the number of agents. These difficulties are exacerbated when balancing sequential global decision-making with local agent interactions. In this work, we propose a new algorithm SUBSAMPLE-MFQ (Subsample-Mean-Field-Q-learning) and a decentralized randomized policy for a system with $n$ agents. For $k\leq n$, our algorithm learns a policy for the system in time polynomial in $k$. We show that this learned policy converges to the optimal policy on the order of $\tilde{O}(1/\sqrt{k})$ as the number of subsampled agents $k$ increases. We empirically validate our method in Gaussian squeeze and global exploration settings.


1 Introduction

Reinforcement Learning (RL) has become a popular framework to solve sequential decision making problems in unknown environments and has achieved tremendous success in a wide array of domains such as playing the game of Go (Silver et al., 2016), robotic control (Kober et al., 2013), and autonomous driving (Kiran et al., 2022; Lin et al., 2023). A key feature of most real-world systems is their uncertain nature, and thus, RL has emerged as a powerful tool for learning optimal policies for multi-agent systems to operate in unknown environments (Kim & Giannakis, 2017; Zhang et al., 2021; Lin et al., 2024; Anand & Qu, 2024). While early RL works focused on the single-agent setting, recently, multi-agent RL (MARL) has achieved impressive success for many applications, such as coordinating robotic swarms (Preiss et al., 2017), self-driving vehicles (DeWeese & Qu, 2024), real-time bidding (Jin et al., 2018), ride-sharing (Li et al., 2019), and stochastic games (Jin et al., 2020).

Despite growing interest in multi-agent RL (MARL), extending RL to multi-agent settings poses significant computational challenges due to the curse of dimensionality (Sayin et al., 2021): even if the individual agents' state or action spaces are small, the global state space or action space can take values from a set with size that is exponentially large as a function of the number of agents. For example, even model-free RL algorithms such as temporal difference (TD) learning (Sutton et al., 1999) or tabular $Q$-learning require computing and storing a $Q$-function (Bertsekas & Tsitsiklis, 1996) that is as large as the state-action space, which is exponential in the number of agents. These scalability issues have been observed in the multi-agent reinforcement learning literature in a variety of settings (Blondel & Tsitsiklis, 2000; Papadimitriou & Tsitsiklis, 1999; Littman, 1994). To overcome this challenge, Tan (1997) proposes to perform independent $Q$-learning, where each agent models other agents as part of the environment to decouple decision-making and reduce the state-action space; however, this simplification often fails to capture a fundamental feature of MARL: inter-agent interactions.

MARL is fundamentally difficult because agents in the real world not only interact with the environment but also with each other (Shapley, 1953). An exciting line of work that addresses this intractability is mean-field MARL (Lasry & Lions, 2007; Yang et al., 2018; Gu et al., 2021, 2022; Hu et al., 2023). The mean-field approach assumes all agents are homogeneous in their state and action spaces, enabling their interactions to be approximated by a two-agent setting: here, each agent interacts with a representative "mean agent" which evolves as the empirical distribution of states of all other agents. Under these assumptions, mean-field MARL allows learning of optimal policies with a sample complexity that is polynomial in the number of agents. However, if the number of homogeneous agents is large, storing a polynomially large $Q$-table (where the polynomial's degree depends on the size of the state space of a single agent) may remain infeasible. In this paper, we study the cooperative setting, where agents work collaboratively to maximize a structured global reward, and ask: Can we design an efficient and approximately optimal MARL algorithm for policy-learning in a cooperative multi-agent system as the number of agents increases asymptotically?

Contributions. We answer this question affirmatively. Our key contributions are outlined below.

Subsampling Algorithm. We propose SUBSAMPLE-MFQ to address the challenge of MARL with a large number of local agents. We model the problem as a Markov Decision Process (MDP) with a global agent and $n$ local agents. SUBSAMPLE-MFQ selects $k\leq n$ local agents to learn a deterministic policy $\hat{\pi}^{\mathrm{est}}_{k}$. It does this by applying mean-field value iteration on the $k$-local-agent subsystem to learn $\hat{Q}_{k}^{\mathrm{est}}$, which can be viewed as a smaller $Q$-function. It then deploys a stochastic policy ${\pi}^{\mathrm{est}}_{k}$, where the global agent samples $k$ local agents uniformly at each step and uses $\hat{\pi}^{\mathrm{est}}_{k}$ to determine its action, while each local agent samples $k-1$ other local agents and uses $\hat{\pi}^{\mathrm{est}}_{k}$ to determine its action.

Sample Complexity and Theoretical Guarantee. As the number of local agents increases, the size of $\hat{Q}_{k}$ scales polynomially with $k$, rather than polynomially with $n$ as in mean-field MARL. Analogously, when the size of the local agent's state space grows, the size of $\hat{Q}_{k}$ scales exponentially with $k$, rather than exponentially with $n$, as in traditional $Q$-learning. The key analytic technique underlying our results is a novel MDP sampling result. Through it, we show that the performance gap between ${\pi}_{k}^{\mathrm{est}}$ and the optimal policy $\pi^{*}$ is $\tilde{O}(1/\sqrt{k})$. The choice of $k$ reveals a fundamental trade-off between the size of the $Q$-table and the optimality of ${\pi}_{k}^{\mathrm{est}}$. When $k=O(\log n)$, SUBSAMPLE-MFQ is the first centralized MARL algorithm to achieve a polylogarithmic run-time in $n$, representing an exponential speedup over the previously best-known polynomial-time mean-field MARL methods, while maintaining a decaying optimality gap.

Numerical Simulations. We evaluate the effectiveness of SUBSAMPLE-MFQ in a bounding-box exploration problem and a Gaussian squeeze problem. Our experiments reveal a monotonic improvement in the learned policies as $k\to n$, while providing a substantial speedup over mean-field $Q$-learning.

While our results are theoretical in nature, it is our hope that SUBSAMPLE-MFQ will lead to a further exploration of sampling in Markov games, and inspire the development of practical algorithms for multi-agent settings.

2 Preliminaries

Notation.

For $k,n\in\mathbb{N}$ where $k\leq n$, let $\binom{[n]}{k}$ denote the set of $k$-sized subsets of $[n]=\{1,\dots,n\}$. For any vector $z\in\mathbb{R}^{d}$, let $\|z\|_{1}$ and $\|z\|_{\infty}$ denote the standard $\ell_{1}$ and $\ell_{\infty}$ norms of $z$, respectively. Let $\|\mathbf{A}\|_{1}$ denote the matrix $\ell_{1}$-norm of $\mathbf{A}\in\mathbb{R}^{n\times m}$. Given a collection of variables $s_{1},\dots,s_{n}$, the shorthand $s_{\Delta}$ denotes the set $\{s_{i}:i\in\Delta\}$ for $\Delta\subseteq[n]$. We use $\tilde{O}(\cdot)$ to suppress polylogarithmic factors in all problem parameters except $n$. For a discrete measurable space $(\mathcal{X},\mathcal{F})$, the total variation distance between probability measures $\rho_{1},\rho_{2}$ is given by $\mathrm{TV}(\rho_{1},\rho_{2})=\frac{1}{2}\sum_{x\in\mathcal{X}}|\rho_{1}(x)-\rho_{2}(x)|$. Next, $x\sim\mathcal{D}(\cdot)$ denotes that $x$ is a random element sampled from a distribution $\mathcal{D}$, and we denote that $x$ is a random sample from the uniform distribution over a finite set $\Omega$ by $x\sim\mathcal{U}(\Omega)$. For the reader's convenience, we include a detailed notation table in Table 1 in Appendix A.

Problem Formulation. We consider a system of $n+1$ agents, where agent $g$ is a "global decision-making agent" and the remaining $n$ agents, denoted by $[n]$, are "local agents." At time $t$, the agents are in state $s(t)=(s_{g}(t),s_{1}(t),\dots,s_{n}(t))\in\mathcal{S}:=\mathcal{S}_{g}\times\mathcal{S}_{l}^{n}$, where $s_{g}(t)\in\mathcal{S}_{g}$ denotes the global agent's state, and for each $i\in[n]$, $s_{i}(t)\in\mathcal{S}_{l}$ denotes the state of the $i$'th local agent. The agents cooperatively select actions $a(t)=(a_{g}(t),a_{1}(t),\dots,a_{n}(t))\in\mathcal{A}$, where $a_{g}(t)\in\mathcal{A}_{g}$ denotes the global agent's action and $a_{i}(t)\in\mathcal{A}_{l}$ denotes the $i$'th local agent's action. At time $t+1$, the next state for all the agents is independently generated by stochastic transition kernels $P_{g}:\mathcal{S}_{g}\times\mathcal{S}_{g}\times\mathcal{A}_{g}\to[0,1]$ and $P_{l}:\mathcal{S}_{l}\times\mathcal{S}_{l}\times\mathcal{S}_{g}\times\mathcal{A}_{l}\to[0,1]$ as follows:

$$s_{g}(t+1)\sim P_{g}(\cdot\mid s_{g}(t),a_{g}(t)), \tag{1}$$
$$s_{i}(t+1)\sim P_{l}(\cdot\mid s_{i}(t),s_{g}(t),a_{i}(t)),\quad \forall i\in[n]. \tag{2}$$

The system then collects a structured stage reward $r(s(t),a(t))$, where the reward $r:\mathcal{S}\times\mathcal{A}\to\mathbb{R}$ depends on $s(t)$ and $a(t)$ through Equation 3, and where the choice of functions $r_{g}$ and $r_{l}$ is typically application-specific.

$$r(s,a)=\underbrace{r_{g}(s_{g},a_{g})}_{\text{global component}}+\frac{1}{n}\sum_{i\in[n]}\underbrace{r_{l}(s_{i},s_{g},a_{i})}_{\text{local component}} \tag{3}$$

We define a policy $\pi:\mathcal{S}\to\mathcal{P}(\mathcal{A})$ as a map from states to distributions of actions such that $a\sim\pi(\cdot|s)$. We seek to learn a policy $\pi$ that maximizes the value function, which is defined for each $s\in\mathcal{S}$ as the expected discounted reward

$$V^{\pi}(s)=\mathbb{E}_{a(t)\sim\pi(\cdot|s)}\left[\sum_{t=0}^{\infty}\gamma^{t}r(s(t),a(t))\,\bigg|\,s(0)=s\right], \tag{4}$$

where $\gamma\in(0,1)$ is a discounting factor.

Notably, the cardinality of the search space simplex for the optimal policy is $|\mathcal{S}_{g}||\mathcal{S}_{l}|^{n}|\mathcal{A}_{g}||\mathcal{A}_{l}|^{n}$, which is exponential in the number of agents. Noting that the local agents are all homogeneous, and therefore permutation-invariant with respect to the rewards of the system (the order of the other agents does not matter to any single decision-making agent), techniques from mean-field MARL restrict the cardinality of this search space to $|\mathcal{S}_{g}||\mathcal{A}_{g}||\mathcal{S}_{l}||\mathcal{A}_{l}|n^{|\mathcal{S}_{l}||\mathcal{A}_{l}|}$, reducing the complexity's exponential dependence on $n$ to a polynomial dependence on $n$. In practical systems, when $n$ is large, this $\mathrm{poly}(n)$ sample complexity may still be computationally infeasible. Therefore, the goal of this work is to learn an approximately optimal policy with subpolynomial sample complexity, further overcoming the curse of dimensionality.

Example 2.1 (Gaussian Squeeze).

In this task, $n$ homogeneous agents determine individual actions $a_{i}$ to jointly maximize the objective $r(x)=x e^{-(x-\mu)^{2}/\sigma^{2}}$, where $x=\sum_{i=1}^{n}a_{i}$, and $\mu$ and $\sigma$ are the pre-defined mean and variance of the system. In a traffic-congestion scenario, each agent $i\in[n]$ is a controller trying to send $a_{i}$ vehicles onto the main road, where controllers coordinate with each other to avoid congestion, hence avoiding either over-use or under-use of the road, thereby contributing to the entire system. This Gaussian squeeze (GS) problem was previously studied in Yang et al. (2018) and serves as an ablation study on the impact of subsampling for MARL.
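As a reading aid, the following minimal Python sketch evaluates the GS objective for a joint action; the concrete values of $\mu$ and $\sigma$ are hypothetical placeholders, not parameters from the paper.

```python
import math

def gaussian_squeeze(actions, mu=400.0, sigma=200.0):
    """GS objective r(x) = x * exp(-(x - mu)^2 / sigma^2), with x = sum of all actions."""
    x = sum(actions)
    return x * math.exp(-((x - mu) ** 2) / sigma ** 2)

# Hypothetical example: 100 controllers each admitting 4 vehicles hits the mean exactly.
print(gaussian_squeeze([4] * 100))  # 400.0, since exp(0) = 1
```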

Example 2.2 (Constrained Exploration).

Consider an $M\times M$ grid. Each agent's state is a coordinate in $[M]\times[M]$. The global agent's state represents the center of a $d\times d$ box within which it wishes to constrain the local agents' movements. Initially, all agents are in the same location. At each time-step, the local agents take actions $a_{i}(t)\in\mathbb{R}^{2}$ (e.g., up, down, left, right) to transition between states and collect rewards. The transition kernel ensures that local agents remain within the $d\times d$ box dictated by the global agent, using knowledge of $a_{i}(t)$, $s_{g}(t)$, and $s_{i}(t)$. In warehouse settings where some shelves have collapsed, creating hazardous or inaccessible areas, we want agents to clean these areas. However, exploration in these regions may be challenging due to physical constraints or safety concerns. Therefore, through an appropriate design of the reward and transition functions, the global agent can guide the local agents to focus on specific $d\times d$ grids, allowing efficient cleanup while avoiding unnecessary risk or inefficiency.

Figure 1: Constrained exploration for warehouse accidents.
Figure 2: Traffic congestion settings with Gaussian squeeze.

To efficiently learn policies that maximize the objective, we make the following standard assumptions:

Assumption 2.3 (Finite state/action spaces).

We assume that the state and action spaces of all the agents in the MARL game are finite: $|\mathcal{S}_{l}|,|\mathcal{S}_{g}|,|\mathcal{A}_{g}|,|\mathcal{A}_{l}|<\infty$. Appendix H of the supplementary material weakens this assumption to the non-tabular setting with infinite sets.

Assumption 2.4 (Bounded rewards).

The global and local components of the reward function are bounded. Specifically, $\|r_{g}(\cdot,\cdot)\|_{\infty}\leq\tilde{r}_{g}$ and $\|r_{l}(\cdot,\cdot,\cdot)\|_{\infty}\leq\tilde{r}_{l}$. Together, these imply that $\|r(\cdot,\cdot)\|_{\infty}\leq\tilde{r}_{g}+\tilde{r}_{l}:=\tilde{r}$.

Definition 2.5 (ϵ\epsilon-optimal policy).

Given an objective function $V$ and policy simplex $\Pi$, a policy $\pi\in\Pi$ is $\epsilon$-optimal if $V(\pi)\geq\sup_{\pi^{*}\in\Pi}V(\pi^{*})-\epsilon$.

Remark 2.6.

Following Mondal et al. (2022) and Xu & Klabjan (2023), our model captures heterogeneity among the local agents by modeling agent types as part of the agent state: assign a type $\varepsilon_{i}\in\mathcal{E}$ to each local agent $i\in[n]$ by letting $\mathcal{S}_{l}=\mathcal{E}\times\mathcal{S}^{\prime}_{l}$, where $\mathcal{E}$ is a set of possible types that are treated as a fixed part of the agent's state. The transition and reward functions can vary depending on the agent's type. The global agent can provide unique signals to local agents of each type by letting $s_{g}\in\mathcal{S}_{g}$ and $a_{g}\in\mathcal{A}_{g}$ denote a state/action vector in which each element corresponds to a type.

Related Literature. MARL. MARL has a rich history, starting with early works on Markov games (Littman, 1994; Sutton et al., 1999), which can be regarded as a multi-agent extension of the MDP. MARL has since been actively studied (Zhang et al., 2021) in a broad range of settings. MARL is most similar to the category of “succinctly described” MDPs (Blondel & Tsitsiklis, 2000), where the state/action space is a product space formed by the individual state/action spaces of multiple agents, and where the agents interact to maximize an objective. A promising line of research that has emerged over recent years constrains the problem to sparse networked instances to enforce local interactions between agents (Qu et al., 2020a; Lin et al., 2020; Mondal et al., 2022). In this formulation, the agents correspond to vertices on a graph who only interact with nearby agents. By exploiting Gamarnik’s correlation decay property from combinatorial optimization (Gamarnik et al., 2009), they overcome the curse of dimensionality by simplifying the problem to only search over the policy space derived from the truncated graph to learn approximately optimal solutions. However, as the underlying network structure becomes dense with many local interactions, the neighborhood of each agent gets large, causing these algorithms to become intractable.

Mean-Field RL. Under assumptions of homogeneity in the state/action spaces of the agents, the problem of densely networked multi-agent RL was addressed in Yang et al. (2018) and Gu et al. (2021) by approximating the solution with a mean-field approach, where the approximation error scales as $O(1/\sqrt{n})$. To avoid designing algorithms on probability spaces, they study MARL under Pareto optimality and consider a lifted space with a mean agent that aggregates the system's rewards and dynamics. They then apply kernel regression on $\epsilon$-nets of the lifted space to design policies in time polynomial in $n$. In contrast, our work achieves subpolynomial runtimes by directly sampling from this mean-field distribution. Cui & Koeppl (2022) introduce heterogeneity to mean-field MARL by modeling non-uniform interactions through graphons; however, these methods make critical assumptions on the existence of a sequence of graphons converging in cut-norm to the finite instance. In the cooperative setting, Cui et al. (2023) consider a mean-field setting with $q$ types of homogeneous agents; however, their learned policy does not provably converge to the optimal policy.

Structured RL. Our work is related to factored MDPs and exogenous MDPs. In factored MDPs, there is a global action affecting every agent, whereas in our case, each agent has its own action (Min et al., 2023; Lauer & Riedmiller, 2000). Our result has a similar flavor to MDPs with exogenous inputs from learning theory (Dietterich et al., 2018; Foster et al., 2022; Anand & Qu, 2024), wherein our subsampling algorithm treats each sampled state as an endogenous state, but where the exogenous dependencies can be dynamic.

Other related works. Jin et al. (2020) reduce the dependence on the product action space to an additive dependence using V-learning. In contrast, our work further reduces the complexity of the joint state space, which has not been previously accomplished. Our work adds to the growing literature on the Centralized Training with Decentralized Execution regime (Zhou et al., 2023), as our algorithm learns a provably approximately optimal policy using centralized information, but makes decisions using only local information during execution. Finally, one can efficiently approximate the $Q$-table through function approximation (Jin et al., 2020). However, achieving theoretical bounds on the performance loss due to function approximation is intractable without making strong assumptions such as linear Bellman completeness (Golowich & Moitra, 2024) or low Bellman-Eluder dimension (Jin et al., 2021a). While our work primarily considers the finite tabular setting, we extend it to the non-tabular linear MDP setting in Appendix H.

2.1 Background in Reinforcement Learning

$Q$-learning. At the core of the standard $Q$-learning framework (Watkins & Dayan, 1992) for offline RL is the $Q$-function $Q:\mathcal{S}\times\mathcal{A}\to\mathbb{R}$. $Q$-learning seeks to produce a policy $\pi^{*}(\cdot|s)$ that maximizes the expected infinite-horizon discounted reward. For any policy $\pi$, $Q^{\pi}(s,a)=\mathbb{E}^{\pi}[\sum_{t=0}^{\infty}\gamma^{t}r(s(t),a(t))|s(0)=s,a(0)=a]$. One approach to learning the optimal policy $\pi^{*}(\cdot|s)$ is dynamic programming, where the $Q$-function is iteratively updated using value iteration: $Q^{0}(s,a)=0$ for all $(s,a)\in\mathcal{S}\times\mathcal{A}$, and then, for all $t\in[T]$, $Q^{t+1}(s,a)=\mathcal{T}Q^{t}(s,a)$, where $\mathcal{T}$ is the Bellman operator defined as

$$\mathcal{T}Q^{t}(s,a)=r(s,a)+\gamma\,\mathbb{E}_{s_{g}^{\prime}\sim P_{g}(\cdot|s_{g},a_{g}),\;s_{i}^{\prime}\sim P_{l}(\cdot|s_{i},s_{g},a_{i}),\,\forall i\in[n]}\;\max_{a^{\prime}\in\mathcal{A}}Q^{t}(s^{\prime},a^{\prime}).$$

The Bellman operator $\mathcal{T}$ is $\gamma$-contractive, which ensures the existence of a unique fixed point $Q^{*}$ such that $\mathcal{T}Q^{*}=Q^{*}$, by the Banach fixed-point theorem (Banach, 1922). Here, the optimal policy is the deterministic greedy map $\pi^{*}:\mathcal{S}\to\mathcal{A}$, where $\pi^{*}(s)=\arg\max_{a\in\mathcal{A}}Q^{*}(s,a)$. However, the update complexity for the $Q$-function is $O(|\mathcal{S}||\mathcal{A}|)=O(|\mathcal{S}_{g}||\mathcal{S}_{l}|^{n}|\mathcal{A}_{g}||\mathcal{A}_{l}|^{n})$, which grows exponentially with $n$. As the number of local agents increases ($n\gg|\mathcal{S}_{l}|$), this update complexity renders $Q$-learning impractical.

Mean-field transformation. To address this, mean-field MARL (Yang et al., 2018) (under homogeneity assumptions) studies the distribution function $F_{z_{[n]}}:\mathcal{Z}_{l}\to\mathbb{R}$, where $\mathcal{Z}_{l}:=\mathcal{S}_{l}\times\mathcal{A}_{l}$, defined for all $z:=(z_{s},z_{a})\in\mathcal{S}_{l}\times\mathcal{A}_{l}$ by

$$F_{z_{[n]}}(z):=\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}\{s_{i}=z_{s},a_{i}=z_{a}\}. \tag{5}$$
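For intuition, the following minimal Python sketch (with hypothetical state and action labels) computes the empirical distribution of Equation 5 as a table of occupancy fractions over $\mathcal{Z}_{l}=\mathcal{S}_{l}\times\mathcal{A}_{l}$.

```python
from collections import Counter

def empirical_distribution(states, actions, S_l, A_l):
    """F_{z_[n]}: fraction of local agents occupying each (state, action) pair."""
    n = len(states)
    counts = Counter(zip(states, actions))
    return {(s, a): counts[(s, a)] / n for s in S_l for a in A_l}

# Hypothetical 4-agent example with S_l = {0, 1} and A_l = {"stay", "move"}.
F = empirical_distribution([0, 0, 1, 1], ["stay", "move", "move", "move"],
                           [0, 1], ["stay", "move"])
print(F)  # {(0,'stay'): 0.25, (0,'move'): 0.25, (1,'stay'): 0.0, (1,'move'): 0.5}
```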

Let $\mu_{n}(\mathcal{Z}_{l})=\{\frac{b}{n}\,|\,b\in\{0,\dots,n\}\}^{|\mathcal{S}_{l}|\times|\mathcal{A}_{l}|}$ be the space of $|\mathcal{S}_{l}|\times|\mathcal{A}_{l}|$-sized tables, where each entry is an element of $\{0,\frac{1}{n},\frac{2}{n},\dots,1\}$. Then, $F_{z_{[n]}}\in\mu_{n}(\mathcal{Z}_{l})$ represents the proportion of agents in each state/action pair. The $Q$-function is permutation-invariant in the local agents, since permuting the labels of homogeneous local agents with the same state will not change the action of the decision-making agent. So, $Q(s_{g},s_{[n]},a_{g},a_{[n]})=\hat{Q}(s_{g},s_{1},a_{g},a_{1},F_{z_{[n]\setminus 1}})$. Here, $\hat{Q}:\mathcal{S}_{g}\times\mathcal{S}_{l}\times\mathcal{A}_{g}\times\mathcal{A}_{l}\times\mu_{n-1}(\mathcal{Z}_{l})\to\mathbb{R}$ is an equivalent reparameterized $Q$-function learned by mean-field value iteration: one initializes $\hat{Q}^{0}(s_{g},s_{1},a_{g},a_{1},F_{z_{[n]\setminus 1}})=0$. At time $t$, we update $\hat{Q}$ as $\hat{Q}^{t+1}=\hat{\mathcal{T}}\hat{Q}^{t}$, where $\hat{\mathcal{T}}$ is the Bellman operator in distribution space, given by:

$$\hat{\mathcal{T}}\hat{Q}^{t}(s_{g},s_{1},a_{g},a_{1},F_{z_{[n]\setminus 1}})=r(s,a)+\gamma\,\mathbb{E}_{\substack{s_{g}^{\prime}\sim P_{g}(\cdot\mid s_{g},a_{g})\\ s_{i}^{\prime}\sim P_{l}(\cdot\mid s_{i},s_{g},a_{i}),\,\forall i\in[n]}}\;\max_{\substack{(a_{g}^{\prime},a_{1}^{\prime},a_{[n]\setminus 1}^{\prime})\\ \in\mathcal{A}_{g}\times\mathcal{A}_{l}\times\mathcal{A}_{l}^{n-1}}}\hat{Q}^{t}(s^{\prime}_{g},s^{\prime}_{1},a^{\prime}_{g},a^{\prime}_{1},F_{z^{\prime}_{[n]\setminus 1}})$$

Since $\mathcal{T}$ is a $\gamma$-contraction, so is $\hat{\mathcal{T}}$. Hence, $\hat{\mathcal{T}}$ has a unique fixed point $\hat{Q}^{*}$ such that $\hat{Q}^{*}(s_{g},s_{1},a_{g},a_{1},F_{z_{[n]\setminus 1}})=Q^{*}(s_{g},s_{[n]},a_{g},a_{[n]})$. The optimal policy is the deterministic greedy policy given by

$$\hat{\pi}^{*}(s_{g},s_{1},F_{s_{[n]\setminus 1}})=\mathop{\arg\max}_{\substack{(a_{g},a_{1},a_{[n]\setminus 1})\\ \in\mathcal{A}_{g}\times\mathcal{A}_{l}\times\mathcal{A}_{l}^{n-1}}}\hat{Q}^{*}(s_{g},s_{1},a_{g},a_{1},F_{z_{[n]\setminus 1}}),$$
where $z_{[n]\setminus 1}=(s_{[n]\setminus 1},a_{[n]\setminus 1})$.

Here, the update complexity of $\hat{Q}$ is $O(|\mathcal{S}_{g}||\mathcal{A}_{g}||\mathcal{Z}_{l}|n^{|\mathcal{Z}_{l}|})$, which scales polynomially with the number of agents $n$.

Remark 2.7.

Existing solutions require a sample complexity of $\min\{\tilde{O}(|\mathcal{S}_{g}||\mathcal{A}_{g}||\mathcal{Z}_{l}|^{n}),\tilde{O}(|\mathcal{S}_{g}||\mathcal{A}_{g}||\mathcal{Z}_{l}|n^{|\mathcal{Z}_{l}|})\}$. Here, one uses $Q$-learning if $|\mathcal{Z}_{l}|^{n-1}<n^{|\mathcal{Z}_{l}|}$, and mean-field value iteration otherwise. In each regime, as $n$ scales, the update complexity becomes computationally infeasible.
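As a rough numerical illustration with hypothetical sizes $|\mathcal{S}_{l}|=|\mathcal{A}_{l}|=3$ (so $|\mathcal{Z}_{l}|=9$) and $n=20$ agents, and ignoring the common $|\mathcal{S}_{g}||\mathcal{A}_{g}|$ prefactor, the two regimes compare as
$$|\mathcal{Z}_{l}|^{n}=9^{20}\approx 1.2\times 10^{19},\qquad |\mathcal{Z}_{l}|\,n^{|\mathcal{Z}_{l}|}=9\cdot 20^{9}\approx 4.6\times 10^{12},$$
so even the cheaper mean-field table is far too large to store or update; this motivates the subsampled complexity of Section 3, which for the same sizes is $\min\{|\mathcal{Z}_{l}|^{k},k^{|\mathcal{Z}_{l}|}\}=\min\{9^{5},5^{9}\}=59{,}049$ at $k=5$.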

3 Method and Theoretical Results

In this section, we propose the SUBSAMPLE-MFQ algorithm to overcome the polynomial (in $n$) sample complexity of mean-field value iteration and the exponential (in $n$) sample complexity of traditional $Q$-learning. In the algorithm, the global agent randomly samples a subset of local agents $\Delta\subseteq[n]$ such that $|\Delta|=k$, for $k\leq n$. It ignores all other local agents $[n]\setminus\Delta$, and performs value iteration to learn the $Q$-function $\hat{Q}_{k,m}^{\mathrm{est}}$ and policy $\hat{\pi}_{k,m}^{\mathrm{est}}$ for this surrogate subsystem of $k$ local agents, where $m$ is the number of samples used to update the $Q$-function's estimates of the unknown system. When $|\mathcal{Z}_{l}|^{k-1}<k^{|\mathcal{Z}_{l}|}$, the algorithm uses traditional value iteration (Algorithm 1), and when $|\mathcal{Z}_{l}|^{k-1}>k^{|\mathcal{Z}_{l}|}$, it uses mean-field value iteration (Algorithm 2). We denote the surrogate reward gained by this subsystem at each time step by $r_{\Delta}:\mathcal{S}\times\mathcal{A}\to\mathbb{R}$, where

$$r_{\Delta}(s,a)=r_{g}(s_{g},a_{g})+\frac{1}{|\Delta|}\sum_{i\in\Delta}r_{l}(s_{g},s_{i},a_{i}). \tag{6}$$

To convert the optimality of each agent's action in the $k$-local-agent subsystem to an approximate optimality guarantee on the full $n$-agent system, we use a randomized policy $\tilde{\pi}^{\mathrm{est}}_{k,m}$ (Algorithm 3), where the global agent samples $\Delta\sim\mathcal{U}\binom{[n]}{k}$ at each time-step to derive an action $a_{g}\leftarrow\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{\Delta})$, and where each local agent $i$ samples $k-1$ other local agents $\Delta_{i}$ to derive an action $\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{i},s_{\Delta_{i}})$. Finally, Theorem 3.4 shows that the policy $\tilde{\pi}_{k,m}^{\mathrm{est}}$ converges to the optimal policy $\pi^{*}$ as $k\to n$. We first present Algorithms 1 and 2 (SUBSAMPLE-MFQ: Learning) and Algorithm 3 (SUBSAMPLE-MFQ: Execution), which we describe below. For this, a crucial characterization is the notion of the empirical distribution function:

Definition 3.1 (Empirical Distribution Function).

For any population $(z_{1},\dots,z_{n})\in\mathcal{Z}_{l}^{n}$, where $\mathcal{Z}_{l}:=\mathcal{S}_{l}\times\mathcal{A}_{l}$, define the empirical distribution function $F_{z_{\Delta}}:\mathcal{Z}_{l}\to\mathbb{R}_{+}$ for all $z:=(z_{s},z_{a})\in\mathcal{S}_{l}\times\mathcal{A}_{l}$ and for all $\Delta\in\binom{[n]}{k}$ by:

$$F_{z_{\Delta}}(z):=\frac{1}{k}\sum_{i\in\Delta}\mathbf{1}\{s_{i}=z_{s},a_{i}=z_{a}\}. \tag{7}$$

Let $\mu_{k}(\mathcal{Z}_{l}):=\{\frac{b}{k}\,|\,b\in\{0,\dots,k\}\}^{|\mathcal{S}_{l}|\times|\mathcal{A}_{l}|}$ be the space of $|\mathcal{S}_{l}|\times|\mathcal{A}_{l}|$-sized tables where each entry is an element of $\{0,\frac{1}{k},\frac{2}{k},\dots,1\}$. Here, $F_{z_{\Delta}}\in\mu_{k}(\mathcal{Z}_{l})$ is the proportion of agents of the $k$-local-agent subsystem in each state/action pair.

Algorithms 1 and 2 (Offline learning). Let $m\in\mathbb{N}$ denote the sample size for the learning algorithm with sampling parameter $k\leq n$. When $|\mathcal{Z}_{l}|^{k-1}\leq k^{|\mathcal{Z}_{l}|}$, we iteratively learn the optimal $Q$-function for a subsystem with $k$ local agents, denoted by $\hat{Q}_{k,m}^{t}:\mathcal{S}_{g}\times\mathcal{S}_{l}^{k}\times\mathcal{A}_{g}\times\mathcal{A}_{l}^{k}\to\mathbb{R}$, which is initialized to $0$. At time $t$, we update $\hat{Q}_{k,m}^{t+1}(s_{g},s_{\Delta},a_{g},a_{\Delta})=\tilde{\mathcal{T}}_{k,m}\hat{Q}_{k,m}^{t}(s_{g},s_{\Delta},a_{g},a_{\Delta})$, where $\tilde{\mathcal{T}}_{k,m}$ is the empirically adapted Bellman operator in Equation 8.

Algorithm 1 SUBSAMPLE-MFQ: Learning (if $|\mathcal{Z}_{l}|^{k-1}\leq k^{|\mathcal{Z}_{l}|}$)
Require: A multi-agent system. Parameter $T$ for the number of iterations in the initial value-iteration step. Sampling parameters $k\in[n]$ and $m\in\mathbb{N}$. Discount parameter $\gamma\in(0,1)$. Oracle $\mathcal{O}$ to sample $s_{g}^{\prime}\sim P_{g}(\cdot|s_{g},a_{g})$ and $s_{i}^{\prime}\sim P_{l}(\cdot|s_{i},s_{g},a_{i})$ for all $i\in[n]$.
1: Uniformly sample $\Delta\subseteq[n]$ such that $|\Delta|=k$.
2: Initialize $\hat{Q}^{0}_{k,m}(s_{g},s_{\Delta},a_{g},a_{\Delta})=0$, $\forall(s_{g},s_{\Delta},a_{g},a_{\Delta})$.
3: for $t=1$ to $T$ do
4:   for $(s_{g},s_{\Delta},a_{g},a_{\Delta})\in\mathcal{S}_{g}\times\mathcal{S}_{l}^{k}\times\mathcal{A}_{g}\times\mathcal{A}_{l}^{k}$ do
5:     $\hat{Q}^{t+1}_{k,m}(s_{g},s_{\Delta},a_{g},a_{\Delta})=\tilde{\mathcal{T}}_{k,m}\hat{Q}^{t}_{k,m}(s_{g},s_{\Delta},a_{g},a_{\Delta})$
6:   end for
7: end for
8: Return $\hat{Q}_{k,m}^{T}$.
9: For all $s_{g}\in\mathcal{S}_{g}$ and $s_{\Delta}\in\mathcal{S}_{l}^{k}$, define the greedy $\arg\max$ policy $\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{\Delta})=\arg\max_{a_{g}\in\mathcal{A}_{g},a_{\Delta}\in\mathcal{A}_{l}^{k}}\hat{Q}_{k,m}^{T}(s_{g},s_{\Delta},a_{g},a_{\Delta})$.

When $|\mathcal{Z}_{l}|^{k-1}>k^{|\mathcal{Z}_{l}|}$, we learn the optimal mean-field $Q$-function for a $k$-local-agent system, denoted by $\hat{Q}_{k,m}^{t}:\mathcal{S}_{g}\times\mathcal{S}_{l}\times\mu_{k-1}(\mathcal{Z}_{l})\times\mathcal{A}_{g}\times\mathcal{A}_{l}\to\mathbb{R}$, which is initialized to $0$. At time $t$, we update $\hat{Q}_{k,m}^{t+1}(s_{g},s_{1},F_{z_{\tilde{\Delta}}},a_{1},a_{g})=\hat{\mathcal{T}}_{k,m}\hat{Q}_{k,m}^{t}(s_{g},s_{1},F_{z_{\tilde{\Delta}}},a_{1},a_{g})$, where $\hat{\mathcal{T}}_{k,m}$ is the empirically adapted mean-field Bellman operator in Equation 9.

Algorithm 2 SUBSAMPLE-MFQ: Learning (if $|\mathcal{Z}_{l}|^{k-1}>k^{|\mathcal{Z}_{l}|}$)
Require: A multi-agent system. Parameter $T$ for the number of iterations in the initial value-iteration step. Sampling parameters $k\in[n]$ and $m\in\mathbb{N}$. Discount parameter $\gamma\in(0,1)$. Oracle $\mathcal{O}$ to sample $s_{g}^{\prime}\sim P_{g}(\cdot|s_{g},a_{g})$ and $s_{i}^{\prime}\sim P_{l}(\cdot|s_{i},s_{g},a_{i})$ for all $i\in[n]$.
1: Set $\tilde{\Delta}=\{2,\dots,k\}$.
2: Set $\mu_{k-1}(\mathcal{Z}_{l})=\{\frac{b}{k-1}:b\in\{0,1,\dots,k-1\}\}^{|\mathcal{S}_{l}|\times|\mathcal{A}_{l}|}$.
3: Set $\hat{Q}^{0}_{k,m}(s_{g},s_{1},F_{z_{\tilde{\Delta}}},a_{1},a_{g})=0$, $\forall(s_{g},s_{[k]},a_{[k]},a_{g})$.
4: for $t=1$ to $T$ do
5:   for $(s_{g},s_{1},F_{z_{\tilde{\Delta}}},a_{1},a_{g})\in\mathcal{S}_{g}\times\mathcal{S}_{l}\times\mu_{k-1}(\mathcal{Z}_{l})\times\mathcal{A}_{l}\times\mathcal{A}_{g}$ do
6:     $\hat{Q}^{t+1}_{k,m}(s_{g},s_{1},F_{z_{\tilde{\Delta}}},a_{1},a_{g})=\hat{\mathcal{T}}_{k,m}\hat{Q}^{t}_{k,m}(s_{g},s_{1},F_{z_{\tilde{\Delta}}},a_{1},a_{g})$
7:   end for
8: end for
9: $\forall(s_{g},s_{i},F_{s_{\tilde{\Delta}}})\in\mathcal{S}_{g}\times\mathcal{S}_{l}\times\mu_{k-1}(\mathcal{S}_{l})$, let
$$\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{i},F_{s_{\tilde{\Delta}}}):=\mathop{\arg\max}_{a_{g}\in\mathcal{A}_{g},\,a_{i}\in\mathcal{A}_{l},\,F_{a_{\tilde{\Delta}}}\in\mu_{k-1}(\mathcal{A}_{l})}\hat{Q}_{k,m}^{T}(s_{g},s_{i},F_{z_{\tilde{\Delta}}},a_{i},a_{g})$$

Since the system is unknown, $\tilde{\mathcal{T}}_{k,m}$ and $\hat{\mathcal{T}}_{k,m}$ cannot directly perform the Bellman update and instead use $m$ random samples $s_{g}^{j}\sim P_{g}(\cdot|s_{g},a_{g})$ and $s_{i}^{j}\sim P_{l}(\cdot|s_{i},s_{g},a_{i})$ for each $j\in[m]$ and $i\in\Delta$ to approximate the system:

$$\tilde{\mathcal{T}}_{k,m}\hat{Q}_{k,m}^{t}(s_{g},s_{\Delta},a_{g},a_{\Delta})=r_{\Delta}(s,a)+\frac{\gamma}{m}\sum_{j\in[m]}\max_{a_{g}^{\prime}\in\mathcal{A}_{g},\,a_{\Delta}^{\prime}\in\mathcal{A}_{l}^{k}}\hat{Q}_{k,m}^{t}(s_{g}^{j},s_{\Delta}^{j},a_{g}^{\prime},a_{\Delta}^{\prime}) \tag{8}$$
$$\hat{\mathcal{T}}_{k,m}\hat{Q}_{k,m}^{t}(s_{g},s_{1},F_{z_{\tilde{\Delta}}},a_{1},a_{g})=r_{\Delta}(s,a)+\frac{\gamma}{m}\sum_{j\in[m]}\max_{\substack{a_{g}^{\prime}\in\mathcal{A}_{g},\,a_{1}^{\prime}\in\mathcal{A}_{l},\\ F_{a_{\tilde{\Delta}}^{\prime}}\in\mu_{k-1}(\mathcal{A}_{l})}}\hat{Q}_{k,m}^{t}(s_{g}^{j},s_{1}^{j},F_{s_{\tilde{\Delta}}^{j},a_{\tilde{\Delta}}^{\prime}},a_{1}^{\prime},a_{g}^{\prime}). \tag{9}$$
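To make Equation 8 concrete, here is a minimal Python sketch of one sweep of the empirically adapted Bellman operator over a dictionary-based $Q$-table for the $k$-local-agent subsystem. The sampling oracles, reward handles, and state/action sets passed in are hypothetical placeholders standing in for the generative model assumed by Algorithm 1.

```python
import itertools

GAMMA = 0.9  # discount factor (hypothetical)

def empirical_bellman_sweep(Q, S_g, S_l, A_g, A_l, k, m, r_g, r_l, sample_Pg, sample_Pl):
    """One sweep of the empirically adapted Bellman operator of Eq. 8 over a
    dict-based Q-table keyed by (s_g, s_local, a_g, a_local); the table is
    assumed to be initialized to 0 on every key, as in Algorithm 1."""
    joint_actions = [(ag, al) for ag in A_g for al in itertools.product(A_l, repeat=k)]
    joint_states = [(sg, sl) for sg in S_g for sl in itertools.product(S_l, repeat=k)]
    Q_new = {}
    for s_g, s_loc in joint_states:
        for a_g, a_loc in joint_actions:
            # Surrogate reward of the k-agent subsystem (Eq. 6).
            reward = r_g(s_g, a_g) + sum(r_l(s_g, si, ai) for si, ai in zip(s_loc, a_loc)) / k
            # Empirical average of m one-step lookaheads drawn from the generative oracle.
            lookahead = 0.0
            for _ in range(m):
                sg_next = sample_Pg(s_g, a_g)
                sl_next = tuple(sample_Pl(si, s_g, ai) for si, ai in zip(s_loc, a_loc))
                lookahead += max(Q[(sg_next, sl_next, ag2, al2)] for ag2, al2 in joint_actions)
            Q_new[(s_g, s_loc, a_g, a_loc)] = reward + GAMMA * lookahead / m
    return Q_new
```

Iterating this sweep $T$ times and taking the greedy $\arg\max$ recovers Algorithm 1; Algorithm 2 proceeds analogously, but indexes the table by the empirical distribution $F_{z_{\tilde{\Delta}}}$ of Equation 9 instead of the ordered tuple.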

$\hat{Q}_{k,m}^{t}$ depends on $s_{\Delta}$ and $a_{\Delta}$ only through $F_{z_{\Delta}}$, and $\tilde{\mathcal{T}}_{k,m}$ and $\hat{\mathcal{T}}_{k,m}$ are $\gamma$-contractive. So, Algorithms 1 and 2 apply value iteration with their respective Bellman operators $\tilde{\mathcal{T}}_{k,m}$ and $\hat{\mathcal{T}}_{k,m}$ until $\hat{Q}_{k,m}$ converges to a fixed point satisfying $\tilde{\mathcal{T}}_{k,m}\hat{Q}_{k,m}^{\mathrm{est}}=\hat{Q}_{k,m}^{\mathrm{est}}$ and $\hat{\mathcal{T}}_{k,m}\hat{Q}_{k,m}^{\mathrm{est}}=\hat{Q}_{k,m}^{\mathrm{est}}$, respectively, yielding equivalent deterministic policies $\hat{\pi}^{\mathrm{est}}_{k,m}(s_{g},s_{\Delta})$ and $\hat{\pi}^{\mathrm{est}}_{k,m}(s_{g},s_{1},F_{s_{\tilde{\Delta}}})$:

$$\hat{\pi}^{\mathrm{est}}_{k,m}(s_{g},s_{\Delta})=\mathop{\arg\max}_{a_{g}\in\mathcal{A}_{g},\,a_{\Delta}\in\mathcal{A}_{l}^{k}}\hat{Q}^{\mathrm{est}}_{k,m}(s_{g},s_{\Delta},a_{g},a_{\Delta})$$
$$\hat{\pi}^{\mathrm{est}}_{k,m}(s_{g},s_{1},F_{s_{\tilde{\Delta}}})=\mathop{\arg\max}_{\substack{a_{g}\in\mathcal{A}_{g},\,a_{1}\in\mathcal{A}_{l},\\ F_{a_{\tilde{\Delta}}}\in\mu_{k-1}(\mathcal{A}_{l})}}\hat{Q}^{\mathrm{est}}_{k,m}(s_{g},s_{1},F_{z_{\tilde{\Delta}}},a_{1},a_{g})$$

For $\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{i},s_{\Delta\setminus i})=(a_{g}^{*},a_{i}^{*},a_{\Delta\setminus i}^{*})$, let $[\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{\Delta})]_{g}=a_{g}^{*}$ and $[\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{i},s_{\Delta\setminus i})]_{l}=a_{i}^{*}$.

Algorithm 3 (Online implementation). In Algorithm 3 (SUBSAMPLE-MFQ: Execution), the global agent samples local agents $\Delta(t)\sim\mathcal{U}\binom{[n]}{k}$ at each time step to derive the action $a_{g}(t)=[\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{\Delta(t)})]_{g}$, and each local agent $i$ samples other local agents $\Delta_{i}(t)\sim\mathcal{U}\binom{[n]\setminus i}{k-1}$ to derive the action $a_{i}(t)=[\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{i},s_{\Delta_{i}(t)})]_{l}$. The system then incurs a reward $r(s,a)$. This procedure of first sampling agents and then applying $\hat{\pi}^{\mathrm{est}}_{k,m}$ is denoted by a stochastic policy $\tilde{\pi}_{k,m}^{\mathrm{est}}(a|s)$, where $\tilde{\pi}_{k,m}^{\mathrm{est}}(a_{g}|s)$ is the global agent's action distribution and $\tilde{\pi}_{k,m}^{\mathrm{est}}(a_{i}|s)$ is local agent $i$'s action distribution:

$$\tilde{\pi}_{k,m}^{\mathrm{est}}(a_{g}|s)=\frac{1}{\binom{n}{k}}\sum_{\Delta\in\binom{[n]}{k}}\mathbf{1}\big([\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{\Delta})]_{g}=a_{g}\big) \tag{10}$$
$$\tilde{\pi}_{k,m}^{\mathrm{est}}(a_{i}|s)=\frac{1}{\binom{n-1}{k-1}}\sum_{\tilde{\Delta}\in\binom{[n]\setminus i}{k-1}}\mathbf{1}\big([\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{i},F_{s_{\tilde{\Delta}}})]_{l}=a_{i}\big) \tag{11}$$

The agents then transition to their next states.

Algorithm 3 SUBSAMPLE-MFQ: Execution
Require: A multi-agent system as described in Section 2. Parameter $T^{\prime}$ for the number of iterations of the decision-making sequence. Hyperparameter $k\in[n]$. Discount parameter $\gamma$. Policy $\hat{\pi}^{\mathrm{est}}_{k,m}$.
1: If $|\mathcal{Z}_{l}|^{k-1}\leq k^{|\mathcal{Z}_{l}|}$, learn $\hat{\pi}_{k,m}^{\mathrm{est}}$ from Algorithm 1.
2: If $|\mathcal{Z}_{l}|^{k-1}>k^{|\mathcal{Z}_{l}|}$, learn $\hat{\pi}_{k,m}^{\mathrm{est}}$ from Algorithm 2.
3: Sample $(s_{g}(0),s_{[n]}(0))\sim s_{0}$, where $s_{0}$ is a distribution on the initial global state $(s_{g},s_{[n]})$.
4: Initialize the total reward $R_{0}=0$.
5: The policy $\pi_{k}^{\mathrm{est}}(s)$ is defined as follows:
6: for $t=0$ to $T^{\prime}$ do
7:   Choose $\Delta$ uniformly at random from $\binom{[n]}{k}$ and let $a_{g}(t)=[\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g}(t),s_{\Delta}(t))]_{g}$.
8:   for $i=1$ to $n$ do
9:     Choose $\Delta_{i}$ uniformly at random from $\binom{[n]\setminus i}{k-1}$ and let $a_{i}(t)=[\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g}(t),s_{i}(t),s_{\Delta_{i}}(t))]_{l}$.
10:   end for
11:   Let $s_{g}(t+1)\sim P_{g}(\cdot|s_{g}(t),a_{g}(t))$.
12:   Let $s_{i}(t+1)\sim P_{l}(\cdot|s_{i}(t),s_{g}(t),a_{i}(t))$, $\forall i\in[n]$.
13:   $R_{t+1}=R_{t}+\gamma^{t}\,r(s(t),a(t))$
14: end for
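The execution loop can be summarized by the following minimal Python sketch: at every step the global agent and each local agent independently resample their own subsets and act greedily through the learned deterministic policy. The interface `pi_hat(s_g, s_subset) -> (a_g, a_subset)`, the transition samplers, and the reward handle are hypothetical placeholders for the outputs of Algorithms 1 and 2 and for the environment.

```python
import random

def execute(pi_hat, n, k, T_prime, gamma, s_g, s_local, sample_Pg, sample_Pl, reward):
    """Roll out the randomized policy of Algorithm 3 for T' steps and return the
    discounted return; pi_hat acts on a k-agent subsystem and each caller keeps
    only its own coordinate of the returned joint action."""
    total, agents = 0.0, list(range(n))
    for t in range(T_prime):
        # Global agent: sample k local agents and take the global coordinate of pi_hat.
        Delta = random.sample(agents, k)
        a_g = pi_hat(s_g, tuple(s_local[j] for j in Delta))[0]
        # Each local agent i: sample k-1 *other* agents and take its own coordinate.
        a_local = []
        for i in agents:
            Delta_i = random.sample([j for j in agents if j != i], k - 1)
            _, a_sub = pi_hat(s_g, tuple([s_local[i]] + [s_local[j] for j in Delta_i]))
            a_local.append(a_sub[0])  # agent i occupies the first local slot
        total += (gamma ** t) * reward(s_g, s_local, a_g, a_local)
        # Transition every agent using the (unknown) kernels via sampling handles.
        s_local = [sample_Pl(s_local[i], s_g, a_local[i]) for i in agents]
        s_g = sample_Pg(s_g, a_g)
    return total
```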

3.1 Theoretical Guarantee

We now show that the expected discounted cumulative reward produced by $\tilde{\pi}^{\mathrm{est}}_{k,m}$ is approximately optimal, where the optimality gap decays as $k\to n$ and as $m$ grows.

Bellman noise. We introduce the notion of Bellman noise, which is used in the main theorem. Note that $\hat{\mathcal{T}}_{k,m}$ is an unbiased estimator of the adapted Bellman operator $\hat{\mathcal{T}}_{k}$, given by

$$\hat{\mathcal{T}}_{k}\hat{Q}_{k}(s_{g},s_{\Delta},a_{g},a_{\Delta})=r_{\Delta}(s,a)+\gamma\,\mathbb{E}_{\substack{s_{g}^{\prime}\sim P_{g}(\cdot\mid s_{g},a_{g}),\\ s_{i}^{\prime}\sim P_{l}(\cdot\mid s_{i},s_{g},a_{i}),\,\forall i\in\Delta}}\;\max_{a_{g}^{\prime}\in\mathcal{A}_{g},\,a_{\Delta}^{\prime}\in\mathcal{A}_{l}^{k}}\hat{Q}_{k}(s_{g}^{\prime},s_{\Delta}^{\prime},a_{g}^{\prime},a_{\Delta}^{\prime}). \tag{12}$$

Initialize $\hat{Q}^{0}_{k}(s_{g},s_{\Delta},a_{g},a_{\Delta})=0$. For $t\in\mathbb{N}$, let $\hat{Q}_{k}^{t+1}=\hat{\mathcal{T}}_{k}\hat{Q}_{k}^{t}$, where $\hat{\mathcal{T}}_{k}$ is defined for $k\leq n$ in Equation 12. Then, $\hat{\mathcal{T}}_{k}$ is also a $\gamma$-contraction with fixed point $\hat{Q}_{k}^{*}$. By the law of large numbers, $\lim_{m\to\infty}\hat{\mathcal{T}}_{k,m}=\hat{\mathcal{T}}_{k}$ and $\|\hat{Q}_{k,m}^{\mathrm{est}}-\hat{Q}_{k}^{*}\|_{\infty}\to 0$ as $m\to\infty$. For finite $m$, $\epsilon_{k,m}:=\|\hat{Q}_{k,m}^{\mathrm{est}}-\hat{Q}_{k}^{*}\|_{\infty}$ is the well-studied Bellman noise.

Lemma 3.2.

For $k\in[n]$ and $m\in\mathbb{N}$, where $m$ is the number of samples in Equation 9, if $T\geq\frac{2}{1-\gamma}\log\frac{\tilde{r}\sqrt{k}}{1-\gamma}$, then $\|\hat{Q}_{k,m}^{\mathrm{est}}-\hat{Q}_{k}^{*}\|_{\infty}\leq\epsilon_{k,m}\leq\tilde{O}(1/\sqrt{k})$ when $m=m^{*}=\frac{2|\mathcal{S}_{g}||\mathcal{A}_{g}||\mathcal{S}_{l}||\mathcal{A}_{l}|k^{2.5+|\mathcal{S}_{l}||\mathcal{A}_{l}|}}{(1-\gamma)^{5}}\log(|\mathcal{S}_{g}||\mathcal{A}_{g}||\mathcal{A}_{l}||\mathcal{S}_{l}|)\log\frac{1}{(1-\gamma)^{2}}$.

We defer the proof of Lemma 3.2 to Appendix F.1.

Let ${\pi}_{k}^{\mathrm{est}}:=\tilde{\pi}_{k,m^{*}}^{\mathrm{est}}$. To compare the performance between $\pi^{*}$ and ${\pi}_{k}^{\mathrm{est}}$, we define the value function of a policy $\pi$:

Definition 3.3.

For a given policy $\pi$, the value function $V^{\pi}:\mathcal{S}\to\mathbb{R}$, for $\mathcal{S}:=\mathcal{S}_{g}\times\mathcal{S}_{l}^{n}$, is given by:

$$V^{\pi}(s)=\mathbb{E}_{a(t)\sim\pi(\cdot|s(t))}\left[\sum_{t=0}^{\infty}\gamma^{t}r(s(t),a(t))\,\bigg|\,s(0)=s\right]. \tag{13}$$

Intuitively, $V^{\pi}(s)$ is the expected discounted cumulative reward when starting from state $s$ and applying actions from the policy $\pi$ across an infinite horizon.

With the above preparations, we are primed to present our main result: a bound on the optimality gap for our learned policy ${\pi}_{k}^{\mathrm{est}}$ that decays with rate $\tilde{O}(1/\sqrt{k})$.

Theorem 3.4.

Let ${\pi}^{\mathrm{est}}_{k}$ denote the learned policy from SUBSAMPLE-MFQ, and suppose $T\geq\frac{2}{1-\gamma}\log\frac{\tilde{r}\sqrt{k}}{1-\gamma}$. Then, for all $s\in\mathcal{S}:=\mathcal{S}_{g}\times\mathcal{S}_{l}^{n}$, we have:

$$V^{\pi^{*}}(s)-V^{{\pi}_{k}^{\mathrm{est}}}(s)\leq\frac{\tilde{r}}{(1-\gamma)^{2}}\sqrt{\frac{n-k+1}{2nk}\ln\frac{40\tilde{r}|\mathcal{S}_{l}||\mathcal{A}_{l}||\mathcal{A}_{g}|k^{|\mathcal{A}_{l}|+\frac{1}{2}}}{(1-\gamma)^{2}}}+\frac{4}{\sqrt{k}}.$$

We defer the proof of Theorem 3.4 to Appendix C, and generalize the result to stochastic rewards in Appendix G.
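To visualize how the guarantee tightens as $k\to n$, the short script below evaluates the right-hand side of Theorem 3.4 for hypothetical problem sizes ($\tilde{r}=1$, $\gamma=0.9$, $|\mathcal{S}_{l}|=|\mathcal{A}_{l}|=|\mathcal{A}_{g}|=5$, $n=100$); it is purely illustrative and not part of the algorithm.

```python
import math

def gap_bound(k, n=100, r_tilde=1.0, gamma=0.9, S_l=5, A_l=5, A_g=5):
    """Right-hand side of Theorem 3.4 for hypothetical problem sizes."""
    log_term = math.log(40 * r_tilde * S_l * A_l * A_g * k ** (A_l + 0.5) / (1 - gamma) ** 2)
    return (r_tilde / (1 - gamma) ** 2) * math.sqrt((n - k + 1) / (2 * n * k) * log_term) \
        + 4 / math.sqrt(k)

for k in (5, 10, 25, 50, 100):
    print(k, round(gap_bound(k), 2))  # the bound shrinks as k grows toward n
```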

Remark 3.5.

Between Algorithms 1 and 2, the asymptotic sample complexity to learn $\hat{\pi}^{\mathrm{est}}_{k}$ for a fixed $k$ is $\min\{\tilde{O}(|\mathcal{Z}_{l}|^{k}),\tilde{O}(k^{|\mathcal{Z}_{l}|})\}$. By Theorem 3.4, as $k\to n$, the optimality gap decays, revealing a fundamental trade-off in the choice of $k$: increasing $k$ improves the performance of the policy, but increases the size of the $Q$-function. We explore this trade-off further in our experiments. For $k=O(\log n)$, this leads to an asymptotic runtime of $\min\{\tilde{O}(n^{\log|\mathcal{Z}_{l}|}),\tilde{O}((\log n)^{|\mathcal{Z}_{l}|})\}$. This is an exponential speedup over the complexity of mean-field value iteration (from $\mathrm{poly}(n)$ to $\mathrm{poly}(\log n)$), as well as over traditional value iteration (from $\exp(n)$ to $\mathrm{poly}(n)$), where the optimality gap decays to $0$ with rate $O(1/\sqrt{\log n})$.

Remark 3.6.

If $k=O(\log n)$, SUBSAMPLE-MFQ handles $|\mathcal{E}|\leq O(\log n/\log\log n)$ types of local agents, since the run-time of the learning algorithm becomes $\mathrm{poly}(n)$. This surpasses the previous heterogeneity capacity of Mondal et al. (2022), which only handles a constant number of types, $|\mathcal{E}|\leq O(1)$.

In the non-tabular setting with infinite state/action spaces, one could replace the $Q$-learning algorithm with any value-based RL method that learns $\hat{Q}_{k}$ with function approximation (Sutton et al., 1999), such as deep $Q$-networks (Silver et al., 2016). Doing so introduces an additional error that factors into Theorem 3.4. We formalize this below.

Assumption 3.7 (Linear MDP with infinite state spaces).

Suppose $\mathcal{S}_{g}$ and $\mathcal{S}_{l}$ are infinite compact sets. Furthermore, suppose there exists a feature map $\phi:\mathcal{S}\times\mathcal{A}\to\mathbb{R}^{d}$, $d$ unknown (signed) measures $\mu=(\mu^{1},\dots,\mu^{d})$ over $\mathcal{S}$, and a vector $\theta\in\mathbb{R}^{d}$ such that for any $(s,a)\in\mathcal{S}\times\mathcal{A}$, we have $\mathbb{P}(\cdot|s,a)=\langle\phi(s,a),\mu(\cdot)\rangle$ and $r(s,a)=\langle\phi(s,a),\theta\rangle$.

By assuming the existence of $\phi:\mathcal{S}\times\mathcal{A}\to\mathbb{R}^{d}$, we can approximate the $Q$-function of any policy as a linear function. This assumption is commonly used in conjunction with policy iteration algorithms (Lattimore et al., 2020; Wang et al., 2023) and allows one to obtain sample complexity bounds that are independent of the cardinality of the state and action spaces of the agents. We also make an assumption of bounded feature norm, which is standard in the RL literature (Tkachuk et al., 2023; Abbasi Yadkori et al., 2013):

Assumption 3.8 (Bounded features).

We assume that $\|\phi(s,a)\|_{2}\leq 1$ for all $(s,a)\in\mathcal{S}\times\mathcal{A}$.

Then, through a reduction from Ren et al. (2024) that uses function approximation to learn the spectral features $\phi_{k}$ for $\hat{Q}_{k}$, we derive a performance guarantee for the learned policy $\pi_{k}^{\mathrm{est}}$, where the optimality gap decays with $k$.

Theorem 3.9.

When ${\pi}^{\mathrm{est}}_{k}$ is derived from the spectral features $\phi_{k}$ learned in $\hat{Q}_{k}$, and $M$ is the number of samples used in the function approximation, then

$$\Pr\bigg[V^{\pi^{*}}(s)-V^{{\pi}^{\mathrm{est}}_{k}}(s)\leq\tilde{O}\bigg(\frac{1}{\sqrt{k}}+\frac{\|\phi_{k}\|^{5}\log 2k^{2}}{\sqrt{M}}+\frac{2\gamma\tilde{r}}{(1-\gamma)\sqrt{k}}\|\phi_{k}\|\bigg)\bigg]\geq\left(1-\frac{1}{100\sqrt{k}}\right)\left(1-\frac{2}{\sqrt{k}}\right)$$

We defer the proof of Theorem 3.9 to Appendix H in the supplementary material, under assumptions of a linear MDP.

4 Numerical Experiments

This section provides numerical simulations for the examples outlined in Section 2. All experiments were run on a 2-core CPU server with 12GB RAM. We chose a parameter complexity for each simulation that was sufficient to emphasize characteristics of the theory, such as the complexity improvement and the decaying optimality gap.

Figure 3: a) Reward optimality gap (log scale) with ${\pi}_{k,m}^{\mathrm{est}}$ running $300$ iterations. b) Computation time (in minutes) against sampling parameter $k$, for $k\leq n=8$, to learn policy $\hat{\pi}_{k,m}^{\mathrm{est}}$. c) Discounted cumulative rewards for $k\leq n=50$.

4.1 Constrained Exploration

Let $\mathcal{S}_{g}=\mathcal{S}_{l}=[6]^{2}$, where for $s_{a}\in\{\mathcal{S}_{g},\mathcal{S}_{l}\}$, $s_{a}^{(i)}$ denotes the projection of $s_{a}$ onto its $i$'th coordinate. Further, let $\mathcal{A}_{l}=\mathcal{A}_{g}=\{\text{"up", "down", "right", "left"}\}$, given formally by $\{\binom{1}{0},\binom{-1}{0},\binom{0}{1},\binom{0}{-1}\}$. Let $\Pi_{D}(x)$ denote the $\ell_{1}$-projection of $x$ onto the set $D$. Then, let $s_{g}^{t+1}(s_{g}^{t},a_{g}^{t})=\Pi_{\mathcal{S}_{g}}(s_{g}^{t}+a_{g}^{t})$ and $s_{i}^{t+1}(s_{i}^{t},s_{g}^{t},a_{i}^{t})=\Pi_{\mathcal{S}_{l}}(s_{i}^{t}+|s_{i}^{t}-s_{g}^{t}|+a_{i}^{t})$. We let the global agent's reward be $r_{g}(s_{g},a_{g})=2\sum_{i=1}^{3}\mathbb{1}\{a_{g}=\binom{1}{0},s_{g}^{(1)}\leq i\}+2\sum_{i=3}^{6}\mathbb{1}\{a_{g}=\binom{-1}{0},s_{g}^{(1)}>i\}$, and the local agents' rewards be $r_{l}(s_{i},s_{g},a_{i})=\mathbb{1}\{a_{i}^{t}\neq 0\}+6\cdot\mathbb{1}\{s_{i}^{(1)}=s_{g}^{(1)}\}+2\cdot\|s_{i}^{t}-s_{g}^{t}\|_{1}$.

Intuitively, the reward function is designed to force the global agent to oscillate vertically in the grid, while forcing the local agents to oscillate horizontally around the global agent, thereby simulating a constrained exploration.
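A minimal sketch of these environment dynamics (Python, using NumPy) is given below; the $\ell_{1}$-projection onto the grid is implemented as coordinate-wise clipping, which is exact for a box but should otherwise be read as an assumption, and the string encoding of the actions is one possible choice.

```python
import numpy as np

GRID = 6  # S_g = S_l = [6]^2
ACTIONS = {"up": np.array([1, 0]), "down": np.array([-1, 0]),
           "right": np.array([0, 1]), "left": np.array([0, -1])}

def project(x):
    """l1-projection onto [1,6]^2; for a box this reduces to coordinate-wise clipping."""
    return np.clip(x, 1, GRID)

def step_global(s_g, a_g):
    return project(s_g + ACTIONS[a_g])

def step_local(s_i, s_g, a_i):
    return project(s_i + np.abs(s_i - s_g) + ACTIONS[a_i])

def reward_global(s_g, a_g):
    r = 2 * sum(int(a_g == "up" and s_g[0] <= i) for i in (1, 2, 3))
    return r + 2 * sum(int(a_g == "down" and s_g[0] > i) for i in (3, 4, 5, 6))

def reward_local(s_i, s_g, a_i):
    moved = 1.0  # indicator {a_i != 0}: every action in ACTIONS is a nonzero unit step
    return moved + 6.0 * float(s_i[0] == s_g[0]) + 2.0 * float(np.abs(s_i - s_g).sum())
```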

For this task, we ran a simulation with $n=8$ agents and $m=20$ samples in the empirically adapted Bellman operator. We provide simulation results in Figure 3a. We observe monotonic improvements in the cumulative discounted rewards as $k\to n$. Since $k=n$ recovers the value-iteration and mean-field MARL algorithms, the reward at $k=n$ is the baseline against which we compare our algorithm. When $k<n$, we observe that the reward accrued by SUBSAMPLE-MFQ is only marginally less than the reward gained by value iteration.

4.2 Gaussian Squeeze

In this task, $n$ homogeneous agents determine their individual actions $a_{i}$ to jointly maximize the objective $r(x)=xe^{-(x-\mu)^{2}/\sigma^{2}}$, where $x=\sum_{i=1}^{n}a_{i}$, $a_{i}\in\{0,\dots,9\}$, and $\mu$ and $\sigma$ are the pre-defined mean and variance of the system. The task itself has no global decision-maker, matching the mean-field setting in Yang et al. (2018), which the richness of our model can still express: we use the global agent to model the state of the system, given by the number of vehicles in each controller's lane.

We set $\mathcal{S}_{l}=[4]$, $\mathcal{A}_{l}=\{0,\dots,4\}$, and $\mathcal{S}_{g}=\mathcal{S}_{l}$ such that $s_{g}=\lceil\frac{1}{n}\sum_{i=1}^{n}s_{i}\rceil$, and $\mathcal{A}_{g}=\{0\}$. The transition functions are given by $s_{i}(t+1)=s_{i}(t)-\mathbb{1}\{s_{i}(t)>s_{g}(t)\}+\mathrm{Ber}(p)$ and $s_{g}(t+1)=\lceil\frac{1}{n}\sum_{i=1}^{n}s_{i}(t+1)\rceil$, where $\mathrm{Ber}(p)$ is a Bernoulli random variable with parameter $p>0$. Finally, the global agent's reward function is given by $r_{g}(s_{g},a_{g})=-s_{g}$ and the local agents' reward function is given by $r_{l}(s_{i},s_{g},a_{i})=4\cdot\mathbb{1}\{s_{i}>s_{g}\}-2\cdot\mathbb{1}\{a_{i}>s_{g}\}$.
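For concreteness, a minimal Python sketch of these dynamics and the structured reward follows; the Bernoulli parameter $p$ is a hypothetical placeholder, and clipping back to $\mathcal{S}_{l}=[4]$ is left implicit, as in the description above.

```python
import math, random

P = 0.3  # Bernoulli arrival rate (hypothetical)

def step(s_local, s_g, p=P):
    """One transition: a lane above the average load s_g sheds one vehicle,
    then receives a Bernoulli(p) arrival; s_g tracks the rounded-up average."""
    s_next = [s_i - int(s_i > s_g) + int(random.random() < p) for s_i in s_local]
    return s_next, math.ceil(sum(s_next) / len(s_next))

def reward(s_local, s_g, a_local):
    """Structured reward of Eq. 3 with r_g = -s_g and the local terms above."""
    r_local = sum(4 * int(s_i > s_g) - 2 * int(a_i > s_g)
                  for s_i, a_i in zip(s_local, a_local))
    return -s_g + r_local / len(s_local)
```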

For this task, we ran a small-scale simulation with $n=8$ agents and a large-scale simulation with $n=50$ agents, using $m=20$ samples in the empirical Bellman operator. We provide simulation results in Figures 3b and 3c, where Figure 3b demonstrates the exponential improvement in the computational complexity of SUBSAMPLE-MFQ, and Figure 3c demonstrates a monotonic improvement in the cumulative rewards, consistent with Theorem 3.4. On both metrics, our method beats the mean-field value-iteration benchmark.

5 Conclusion and Future Works

This work develops subsampling for mean-field MARL in a cooperative system with a global decision-making agent and $n$ homogeneous local agents. We propose SUBSAMPLE-MFQ, which learns each agent's best response to the mean effect of a sample of its neighbors, allowing an exponential reduction in the sample complexity of approximating a solution to the MDP. We provide a theoretical analysis of the optimality gap of the learned policy, showing that the learned policy converges to the optimal policy at rate $\tilde{O}(1/\sqrt{k})$ in the number of sampled agents $k$, and we validate our theoretical results through numerical experiments. We further extend this result to the non-tabular setting with infinite state and action spaces.

We recognize several future directions. One direction would be to extend this subsampling framework to general networks. We believe expander-graph decompositions (Anand & Umans, 2023) are amenable for this. A second direction would be to apply the subsampling framework to federated learning algorithms. Another direction would be to extend the subsampling framework to the model-free actor-critic algorithm (Schulman et al., 2016). A limitation of our work is that it only incorporates mild heterogeneity among the agents; so, a third future direction would be to consider the setting of truly heterogeneous local agents or competitive settings. Lastly, it would be very exciting to generalize this work to the online setting without a generative oracle: we conjecture that tools from recent works on stochastic approximation (Chen & Maguluri, 2022) and no-regret RL (Jin et al., 2021b; Pasztor et al., 2021) might be valuable.

6 Impact Statement

This paper contributes to the theoretical foundations of multi-agent reinforcement learning, with the goal of developing mean-field tools that can apply to the control of networked systems. The work can potentially lead to RL-based algorithms for the adaptive control of cyber-physical systems, such as the power grid, smart traffic systems, and other smart infrastructure systems (Bichuch et al., 2024). While the subsampling approach we describe is promising, it is limited by its assumptions. Furthermore, any applications of the proposed algorithm in its current form should be considered cautiously since the analysis here focuses on efficiency and optimality, and does not consider the issue of fairness (Jusup et al., 2023).

References

  • Abbasi Yadkori et al. (2013) Abbasi Yadkori, Y., Bartlett, P. L., Kanade, V., Seldin, Y., and Szepesvari, C. Online learning in markov decision processes with adversarially chosen transition probability distributions. In Burges, C., Bottou, L., Welling, M., Ghahramani, Z., and Weinberger, K. (eds.), Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc., 2013. URL https://proceedings.neurips.cc/paper_files/paper/2013/file/4f284803bd0966cc24fa8683a34afc6e-Paper.pdf.
  • Anand & Qu (2024) Anand, E. and Qu, G. Efficient reinforcement learning for global decision making in the presence of local agents at scale. arXiV, 2024. URL https://arxiv.org/abs/2403.00222.
  • Anand & Umans (2023) Anand, E. and Umans, C. Pseudorandomness of the Sticky Random Walk. Caltech Undergraduate Thesis, 2023. URL https://arxiv.org/abs/2307.11104.
  • Anand et al. (2024) Anand, E., van den Brand, J., Ghadiri, M., and Zhang, D. J. The Bit Complexity of Dynamic Algebraic Formulas and Their Determinants. In Bringmann, K., Grohe, M., Puppis, G., and Svensson, O. (eds.), 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024), volume 297 of Leibniz International Proceedings in Informatics (LIPIcs), pp.  10:1–10:20, Dagstuhl, Germany, 2024. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. ISBN 978-3-95977-322-5. doi: 10.4230/LIPIcs.ICALP.2024.10.
  • Banach (1922) Banach, S. Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales. Fundamenta Mathematicae, 3(1):133–181, 1922. URL http://eudml.org/doc/213289.
  • Bertsekas & Tsitsiklis (1996) Bertsekas, D. P. and Tsitsiklis, J. N. Neuro-Dynamic Programming. Athena Scientific, 1st edition, 1996. ISBN 1886529108.
  • Bichuch et al. (2024) Bichuch, M., Dayanıklı, G., and Lauriere, M. A stackelberg mean field game for green regulator with a large number of prosumers. Available at SSRN 4985557, 2024.
  • Blondel & Tsitsiklis (2000) Blondel, V. D. and Tsitsiklis, J. N. A Survey of Computational Complexity Results in Systems and Control. Automatica, 36(9):1249–1274, 2000. ISSN 0005-1098. doi: https://doi.org/10.1016/S0005-1098(00)00050-9. URL https://www.sciencedirect.com/science/article/abs/pii/S0005109800000509.
  • Chaudhari et al. (2024) Chaudhari, S., Pranav, S., Anand, E., and Moura, J. M. F. Peer-to-peer learning dynamics of wide neural networks. IEEE International Conference on Acoustics, Speech, and Signal Processing 2025, 2024. URL https://arxiv.org/abs/2409.15267.
  • Chen & Maguluri (2022) Chen, Z. and Maguluri, S. T. Sample complexity of policy-based methods under off-policy sampling and linear function approximation. In Camps-Valls, G., Ruiz, F. J. R., and Valera, I. (eds.), Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, volume 151 of Proceedings of Machine Learning Research, pp.  11195–11214. PMLR, 28–30 Mar 2022.
  • Cui & Koeppl (2022) Cui, K. and Koeppl, H. Learning Graphon Mean Field Games and Approximate Nash Equilibria. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=0sgntlpKDOz.
  • Cui et al. (2023) Cui, K., Fabian, C., and Koeppl, H. Multi-Agent Reinforcement Learning via Mean Field Control: Common Noise, Major Agents and Approximation Properties, 2023. URL https://arxiv.org/abs/2303.10665.
  • DeWeese & Qu (2024) DeWeese, A. and Qu, G. Locally interdependent multi-agent MDP: Theoretical framework for decentralized agents with dynamic dependencies. In Forty-first International Conference on Machine Learning, 2024.
  • Dietterich et al. (2018) Dietterich, T. G., Trimponias, G., and Chen, Z. Discovering and removing exogenous state variables and rewards for reinforcement learning, 2018. URL https://arxiv.org/abs/1806.01584.
  • Dvoretzky et al. (1956) Dvoretzky, A., Kiefer, J., and Wolfowitz, J. Asymptotic Minimax Character of the Sample Distribution Function and of the Classical Multinomial Estimator. The Annals of Mathematical Statistics, 27(3):642 – 669, 1956. doi: 10.1214/aoms/1177728174. URL https://doi.org/10.1214/aoms/1177728174.
  • Foster et al. (2022) Foster, D. J., Rakhlin, A., Sekhari, A., and Sridharan, K. On the Complexity of Adversarial Decision Making. In Oh, A. H., Agarwal, A., Belgrave, D., and Cho, K. (eds.), Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=pgBpQYss2ba.
  • Gamarnik et al. (2009) Gamarnik, D., Goldberg, D., and Weber, T. Correlation Decay in Random Decision Networks, 2009.
  • Ghosal & van der Vaart (2017) Ghosal, S. and van der Vaart, A. Fundamentals of Nonparametric Bayesian Inference: Space of Probability Densities, pp.  516–527. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 2017.
  • Golowich & Moitra (2024) Golowich, N. and Moitra, A. The role of inherent bellman error in offline reinforcement learning with linear function approximation, 2024. URL https://arxiv.org/abs/2406.11686.
  • Gu et al. (2021) Gu, H., Guo, X., Wei, X., and Xu, R. Mean-Field Controls with Q-Learning for Cooperative MARL: Convergence and Complexity Analysis. SIAM Journal on Mathematics of Data Science, 3(4):1168–1196, 2021. doi: 10.1137/20M1360700. URL https://doi.org/10.1137/20M1360700.
  • Gu et al. (2022) Gu, H., Guo, X., Wei, X., and Xu, R. Mean-Field Multi-Agent Reinforcement Learning: A Decentralized Network Approach, 2022. URL https://arxiv.org/pdf/2108.02731.pdf.
  • Hu et al. (2023) Hu, Y., Wei, X., Yan, J., and Zhang, H. Graphon Mean-Field Control for Cooperative Multi-Agent Reinforcement Learning. Journal of the Franklin Institute, 360(18):14783–14805, 2023. ISSN 0016-0032. URL https://doi.org/10.1016/j.jfranklin.2023.09.002.
  • Jin et al. (2020) Jin, C., Yang, Z., Wang, Z., and Jordan, M. I. Provably efficient reinforcement learning with linear function approximation. In Abernethy, J. and Agarwal, S. (eds.), Proceedings of Thirty Third Conference on Learning Theory, volume 125 of Proceedings of Machine Learning Research, pp.  2137–2143. PMLR, 09–12 Jul 2020. URL https://proceedings.mlr.press/v125/jin20a.html.
  • Jin et al. (2021a) Jin, C., Liu, Q., and Miryoosefi, S. Bellman eluder dimension: New rich classes of rl problems, and sample-efficient algorithms, 2021a. URL https://arxiv.org/abs/2102.00815.
  • Jin et al. (2021b) Jin, C., Liu, Q., Wang, Y., and Yu, T. V-learning – a simple, efficient, decentralized algorithm for multiagent rl, 2021b. URL https://arxiv.org/abs/2110.14555.
  • Jin et al. (2018) Jin, J., Song, C., Li, H., Gai, K., Wang, J., and Zhang, W. Real-time bidding with multi-agent reinforcement learning in display advertising. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM ’18, pp.  2193–2201, New York, NY, USA, 2018. Association for Computing Machinery. ISBN 9781450360142. doi: 10.1145/3269206.3272021.
  • Jin et al. (2024) Jin, Y., Karmarkar, I., Sidford, A., and Wang, J. Truncated variance reduced value iteration. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=BiikUm6pLu.
  • Jusup et al. (2023) Jusup, M., Pásztor, B., Janik, T., Zhang, K., Corman, F., Krause, A., and Bogunovic, I. Safe model-based multi-agent mean-field reinforcement learning. arXiv preprint arXiv:2306.17052, 2023.
  • Kakade & Langford (2002) Kakade, S. and Langford, J. Approximately Optimal Approximate Reinforcement Learning. In Sammut, C. and Hoffman, A. (eds.), Proceedings of the Nineteenth International Conference on Machine Learning (ICML 2002), pp.  267–274, San Francisco, CA, USA, 2002. Morgan Kauffman. ISBN 1-55860-873-7. URL http://ttic.uchicago.edu/~sham/papers/rl/aoarl.pdf.
  • Kim & Giannakis (2017) Kim, S.-J. and Giannakis, G. B. An Online Convex Optimization Approach to Real-time Energy Pricing for Demand Response. IEEE Transactions on Smart Grid, 8(6):2784–2793, 2017. doi: 10.1109/TSG.2016.2539948. URL https://ieeexplore.ieee.org/document/7438918.
  • Kiran et al. (2022) Kiran, B. R., Sobh, I., Talpaert, V., Mannion, P., Sallab, A. A. A., Yogamani, S., and Pérez, P. Deep Reinforcement Learning for Autonomous Driving: A Survey. IEEE Transactions on Intelligent Transportation Systems, 23(6):4909–4926, 2022. doi: 10.1109/TITS.2021.3054625. URL https://ieeexplore.ieee.org/document/9351818.
  • Kleinberg (2005) Kleinberg, R. A multiple-choice secretary algorithm with applications to online auctions. In Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’05, pp.  630–631, USA, 2005. Society for Industrial and Applied Mathematics. ISBN 0898715857.
  • Kober et al. (2013) Kober, J., Bagnell, J. A., and Peters, J. Reinforcement Learning in Robotics: A Survey. The International Journal of Robotics Research, 32(11):1238–1274, 2013. doi: 10.1177/0278364913495721. URL https://doi.org/10.1177/0278364913495721.
  • Larsen et al. (2024) Larsen, K. G., Montasser, O., and Zhivotovskiy, N. Derandomizing multi-distribution learning. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=twYE75Mnkt.
  • Lasry & Lions (2007) Lasry, J.-M. and Lions, P.-L. Mean Field Games. Japanese Journal of Mathematics, 2(1):229–260, March 2007. ISSN 1861-3624. doi: 10.1007/s11537-007-0657-8. URL https://link.springer.com/article/10.1007/s11537-007-0657-8.
  • Lattimore et al. (2020) Lattimore, T., Szepesvari, C., and Weisz, G. Learning with good feature representations in bandits and in RL with a generative model. In III, H. D. and Singh, A. (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp.  5662–5670. PMLR, 13–18 Jul 2020. URL https://proceedings.mlr.press/v119/lattimore20a.html.
  • Lauer & Riedmiller (2000) Lauer, M. and Riedmiller, M. A. An algorithm for distributed reinforcement learning in cooperative multi-agent systems. In Proceedings of the Seventeenth International Conference on Machine Learning, ICML ’00, pp.  535–542, San Francisco, CA, USA, 2000. Morgan Kaufmann Publishers Inc. ISBN 1558607072.
  • Li et al. (2022) Li, G., Wei, Y., Chi, Y., Gu, Y., and Chen, Y. Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction. IEEE Transactions on Information Theory, 68(1):448–473, 2022. doi: 10.1109/TIT.2021.3120096. URL https://arxiv.org/abs/2006.03041.
  • Li et al. (2019) Li, M., Qin, Z., Jiao, Y., Yang, Y., Wang, J., Wang, C., Wu, G., and Ye, J. Efficient ridesharing order dispatching with mean field multi-agent reinforcement learning. In The World Wide Web Conference, WWW ’19, pp.  983–994, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450366748. doi: 10.1145/3308558.3313433. URL https://doi.org/10.1145/3308558.3313433.
  • Lin et al. (2020) Lin, Y., Qu, G., Huang, L., and Wierman, A. Distributed Reinforcement Learning in Multi-Agent Networked Systems. CoRR, abs/2006.06555, 2020. URL https://arxiv.org/abs/2006.06555.
  • Lin et al. (2023) Lin, Y., Preiss, J. A., Anand, E. T., Li, Y., Yue, Y., and Wierman, A. Online Adaptive Policy Selection in Time-Varying Systems: No-Regret via Contractive Perturbations. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=hDajsofjRM.
  • Lin et al. (2024) Lin, Y., Preiss, J. A., Xie, F., Anand, E., Chung, S.-J., Yue, Y., and Wierman, A. Online policy optimization in unknown nonlinear systems. In Agrawal, S. and Roth, A. (eds.), Proceedings of Thirty Seventh Conference on Learning Theory, volume 247 of Proceedings of Machine Learning Research, pp.  3475–3522. PMLR, 30 Jun–03 Jul 2024. URL https://proceedings.mlr.press/v247/lin24a.html.
  • Littman (1994) Littman, M. L. Markov Games as a Framework for Multi-Agent Reinforcement Learning. In Machine learning proceedings, Elsevier, pp.  157–163, 1994.
  • Massart (1990) Massart, P. The Tight Constant in the Dvoretzky-Kiefer-Wolfowitz Inequality. The Annals of Probability, 18(3):1269 – 1283, 1990. doi: 10.1214/aop/1176990746.
  • Min et al. (2023) Min, Y., He, J., Wang, T., and Gu, Q. Cooperative Multi-Agent Reinforcement Learning: Asynchronous Communication and Linear Function Approximation. In Krause, A., Brunskill, E., Cho, K., Engelhardt, B., Sabato, S., and Scarlett, J. (eds.), Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pp.  24785–24811. PMLR, 23–29 Jul 2023. URL https://proceedings.mlr.press/v202/min23a.html.
  • Mondal et al. (2022) Mondal, W. U., Agarwal, M., Aggarwal, V., and Ukkusuri, S. V. On the Approximation of Cooperative Heterogeneous Multi-Agent Reinforcement Learning (MARL) Using Mean Field Control (MFC). Journal of Machine Learning Research, 23(1), jan 2022. ISSN 1532-4435.
  • Naaman (2021) Naaman, M. On the Tight Constant in the Multivariate Dvoretzky–Kiefer–Wolfowitz Inequality. Statistics & Probability Letters, 173:109088, 2021. ISSN 0167-7152. doi: https://doi.org/10.1016/j.spl.2021.109088. URL https://www.sciencedirect.com/science/article/pii/S016771522100050X.
  • Papadimitriou & Tsitsiklis (1999) Papadimitriou, C. H. and Tsitsiklis, J. N. The Complexity of Optimal Queuing Network Control. Mathematics of Operations Research, 24(2):293–305, 1999. ISSN 0364765X, 15265471.
  • Pasztor et al. (2021) Pasztor, B., Bogunovic, I., and Krause, A. Efficient model-based multi-agent mean-field reinforcement learning. arXiv preprint arXiv:2107.04050, 2021.
  • Preiss et al. (2017) Preiss, J. A., Honig, W., Sukhatme, G. S., and Ayanian, N. Crazyswarm: A large nano-quadcopter swarm. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pp.  3299–3304, 2017. doi: 10.1109/ICRA.2017.7989376.
  • Qu et al. (2020a) Qu, G., Lin, Y., Wierman, A., and Li, N. Scalable Multi-Agent Reinforcement Learning for Networked Systems with Average Reward. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS’20, Red Hook, NY, USA, 2020a. Curran Associates Inc. ISBN 9781713829546.
  • Qu et al. (2020b) Qu, G., Wierman, A., and Li, N. Scalable Reinforcement Learning of Localized Policies for Multi-Agent Networked Systems. In Bayen, A. M., Jadbabaie, A., Pappas, G., Parrilo, P. A., Recht, B., Tomlin, C., and Zeilinger, M. (eds.), Proceedings of the 2nd Conference on Learning for Dynamics and Control, volume 120 of Proceedings of Machine Learning Research, pp.  256–266. PMLR, 10–11 Jun 2020b.
  • Ren et al. (2024) Ren, Z., Runyu, Zhang, Dai, B., and Li, N. Scalable spectral representations for network multiagent control, 2024. URL https://arxiv.org/abs/2410.17221.
  • Sayin et al. (2021) Sayin, M. O., Zhang, K., Leslie, D. S., Basar, T., and Ozdaglar, A. E. Decentralized Q-learning in Zero-sum Markov Games. In Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems, 2021. URL https://openreview.net/forum?id=nhkbYh30Tl.
  • Schulman et al. (2016) Schulman, J., Moritz, P., Levine, S., Jordan, M., and Abbeel, P. High-dimensional continuous control using generalized advantage estimation. In Proceedings of the International Conference on Learning Representations (ICLR), 2016.
  • Shachter (2013) Shachter, R. D. Bayes-ball: The rational pastime (for determining irrelevance and requisite information in belief networks and influence diagrams), 2013.
  • Shapley (1953) Shapley, L. S. A value for n-person games. In Contributions to the Theory of Games, 1953.
  • Silver et al. (2016) Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T., and Hassabis, D. Mastering the Game of Go with Deep Neural Networks and Tree Search. Nature, 529(7587):484–489, January 2016. ISSN 1476-4687. doi: 10.1038/nature16961. URL https://www.nature.com/articles/nature16961.
  • Sutton et al. (1999) Sutton, R. S., McAllester, D., Singh, S., and Mansour, Y. Policy gradient methods for reinforcement learning with function approximation. In Solla, S., Leen, T., and Müller, K. (eds.), Advances in Neural Information Processing Systems, volume 12. MIT Press, 1999. URL https://proceedings.neurips.cc/paper_files/paper/1999/file/464d828b85b0bed98e80ade0a5c43b0f-Paper.pdf.
  • Tan (1997) Tan, M. Multi-agent reinforcement learning: independent vs. cooperative agents, pp.  487–494. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1997. ISBN 1558604952.
  • Tkachuk et al. (2023) Tkachuk, V., Bakhtiari, S. A., Kirschner, J., Jusup, M., Bogunovic, I., and Szepesvári, C. Efficient planning in combinatorial action spaces with applications to cooperative multi-agent reinforcement learning. arXiv, 2023. URL https://arxiv.org/abs/2302.04376.
  • Tsybakov (2008) Tsybakov, A. B. Introduction to Nonparametric Estimation. Springer Publishing Company, Incorporated, 1st edition, 2008. ISBN 0387790519. URL https://link.springer.com/book/10.1007/b13794.
  • Wang et al. (2023) Wang, J., Ye, D., and Lu, Z. More centralized training, still decentralized execution: Multi-agent conditional policy factorization, 2023. URL https://arxiv.org/abs/2209.12681.
  • Watkins & Dayan (1992) Watkins, C. J. C. H. and Dayan, P. Q-learning. Machine Learning, 8(3):279–292, May 1992. ISSN 1573-0565. doi: 10.1007/BF00992698. URL https://link.springer.com/article/10.1007/BF00992698.
  • Xu & Klabjan (2023) Xu, M. and Klabjan, D. Decentralized randomly distributed multi-agent multi-armed bandit with heterogeneous rewards. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=DqfdhM64LI.
  • Yang et al. (2018) Yang, Y., Luo, R., Li, M., Zhou, M., Zhang, W., and Wang, J. Mean Field Multi-Agent Reinforcement Learning. In Dy, J. and Krause, A. (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp.  5571–5580. PMLR, 10–15 Jul 2018. URL https://proceedings.mlr.press/v80/yang18d.html.
  • Zhang et al. (2021) Zhang, K., Yang, Z., and Başar, T. Multi-Agent Reinforcement Learning: A Selective Overview of Theories and Algorithms, 2021. URL https://arxiv.org/abs/1911.10635.
  • Zhou et al. (2023) Zhou, Y., Liu, S., Qing, Y., Chen, K., Zheng, T., Huang, Y., Song, J., and Song, M. Is centralized training with decentralized execution framework centralized enough for marl?, 2023.

Appendix A Mathematical Background and Additional Remarks

Outline of the Appendices.

  • Appendix D presents the proof of the Lipschitz continuity bound between \hat{Q}_{k} and Q^{*};

  • Appendix E presents the total variation (TV) distance bound;

  • Appendix F proves the bound on the optimality gap between the learned policy \tilde{\pi}_{k,m}^{\mathrm{est}} and the optimal policy \pi^{*}.

Table 1: Important notations in this paper.
Notation : Meaning
\|\cdot\|_{1} : the \ell_{1} (Manhattan) norm
\|\cdot\|_{\infty} : the \ell_{\infty} norm
\mathbb{Z}_{+} : the set of strictly positive integers
\mathbb{R}^{d} : the set of d-dimensional reals
[m] : the set \{1,\dots,m\}, where m\in\mathbb{Z}_{+}
\binom{[m]}{k} : the set of k-sized subsets of \{1,\dots,m\}
a_{g} : a_{g}\in\mathcal{A}_{g} is the action of the global agent
s_{g} : s_{g}\in\mathcal{S}_{g} is the state of the global agent
a_{1},\dots,a_{n} : a_{1},\dots,a_{n}\in\mathcal{A}_{l}^{n} are the actions of the local agents 1,\dots,n
s_{1},\dots,s_{n} : s_{1},\dots,s_{n}\in\mathcal{S}_{l}^{n} are the states of the local agents 1,\dots,n
a : a=(a_{g},a_{1},\dots,a_{n})\in\mathcal{A}_{g}\times\mathcal{A}_{l}^{n} is the tuple of actions of all agents
s : s=(s_{g},s_{1},\dots,s_{n})\in\mathcal{S}_{g}\times\mathcal{S}_{l}^{n} is the tuple of states of all agents
z_{i} : z_{i}=(s_{i},a_{i})\in\mathcal{Z}_{l}, for i\in[n]
\mu_{k}(\mathcal{Z}_{l}) : \mu_{k}(\mathcal{Z}_{l})=\{0,1/k,2/k,\dots,1\}^{|\mathcal{Z}_{l}|}
\mu(\mathcal{Z}_{l}) : \mu(\mathcal{Z}_{l})\coloneq\mu_{n}(\mathcal{Z}_{l})=\{0,1/n,2/n,\dots,1\}^{|\mathcal{Z}_{l}|}
s_{\Delta} : for \Delta\subseteq[n] and a collection of variables \{s_{1},\dots,s_{n}\}, s_{\Delta}\coloneq\{s_{i}:i\in\Delta\}
\sigma(z_{\Delta},z^{\prime}_{\Delta}) : the product sigma-algebra generated by the sequences z_{\Delta} and z^{\prime}_{\Delta}
\pi^{*} : the optimal deterministic policy function such that a=\pi^{*}(s)
\hat{\pi}_{k}^{*} : the optimal deterministic policy function on a constrained system of |\Delta|=k local agents
\tilde{\pi}^{\mathrm{est}}_{k} : the stochastic policy mapping learned with parameter k, such that a\sim\tilde{\pi}_{k}^{\mathrm{est}}(s)
P_{g}(\cdot|s_{g},a_{g}) : the stochastic transition kernel for the state of the global agent
P_{l}(\cdot|a_{i},s_{i},s_{g}) : the stochastic transition kernel for the state of any local agent i\in[n]
r_{g}(s_{g},a_{g}) : the global agent's component of the reward
r_{l}(s_{i},s_{g},a_{i}) : the component of the reward for local agent i\in[n]
r(s,a) : r(s,a)\coloneq r_{[n]}(s,a)=r_{g}(s_{g},a_{g})+\frac{1}{n}\sum_{i\in[n]}r_{l}(s_{i},s_{g},a_{i}) is the reward of the system
r_{\Delta}(s,a) : r_{\Delta}(s,a)=r_{g}(s_{g},a_{g})+\frac{1}{|\Delta|}\sum_{i\in\Delta}r_{l}(s_{i},s_{g},a_{i}) is the reward of the constrained system with |\Delta|=k local agents
\mathcal{T} : the centralized Bellman operator
\hat{\mathcal{T}}_{k} : the Bellman operator on a constrained system of |\Delta|=k local agents
\Pi^{\Theta}(y) : the \ell_{1} projection of y onto the set \Theta
Definition A.1 (Lipschitz continuity).

Given two metric spaces (𝒳,d𝒳)(\mathcal{X},d_{\mathcal{X}}) and (𝒴,d𝒴)(\mathcal{Y},d_{\mathcal{Y}}) and a constant L+L\in\mathbb{R}_{+}, a mapping f:𝒳𝒴f:\mathcal{X}\to\mathcal{Y} is LL-Lipschitz continuous if for all x,y𝒳x,y\in\mathcal{X}, d𝒴(f(x),f(y))Ld𝒳(x,y)d_{\mathcal{Y}}(f(x),f(y))\leq L\cdot d_{\mathcal{X}}(x,y).

Theorem A.2 (Banach-Caccioppoli fixed point theorem (Banach, 1922)).

Consider the metric space (𝒳,d𝒳)(\mathcal{X},d_{\mathcal{X}}), and T:𝒳𝒳T:\mathcal{X}\to\mathcal{X} such that TT is a γ\gamma-Lipschitz continuous mapping for γ(0,1)\gamma\in(0,1). Then, by the Banach-Cacciopoli fixed-point theorem, there exists a unique fixed point x𝒳x^{*}\in\mathcal{X} for which T(x)=xT(x^{*})=x^{*}. Additionally, x=limsTs(x0)x^{*}=\lim_{s\to\infty}T^{s}(x_{0}) for any x0𝒳x_{0}\in\mathcal{X}.
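For intuition, the following minimal Python sketch (purely illustrative; the one-dimensional map T(q)=r+\gamma q is a hypothetical stand-in for a Bellman operator, not anything defined in this paper) iterates a \gamma-contraction and recovers its unique fixed point, as the theorem guarantees.

import numpy as np

def fixed_point_iteration(T, q0, tol=1e-10, max_iters=100_000):
    # Iterate a contraction mapping T until successive iterates differ by at most tol.
    q = q0
    for _ in range(max_iters):
        q_next = T(q)
        if np.max(np.abs(q_next - q)) < tol:
            return q_next
        q = q_next
    return q

# Hypothetical scalar example: T(q) = r + gamma * q is a gamma-contraction whose
# unique fixed point is r / (1 - gamma).
gamma, r = 0.9, 1.0
q_star = fixed_point_iteration(lambda q: r + gamma * q, np.float64(0.0))
print(q_star, r / (1 - gamma))  # both approximately 10.0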

Appendix B Notation and Basic Lemmas

For convenience, we restate below the various Bellman operators under consideration.

Definition B.1 (Bellman Operator 𝒯\mathcal{T}).
𝒯Qt(s,a):=r(s,a)+γ𝔼sgPg(|sg,ag),siPl(|si,sg,ai),i[n]maxa𝒜Qt(s,a)\mathcal{T}Q^{t}(s,a):=r(s,a)+\gamma\mathbb{E}_{\begin{subarray}{c}s_{g}^{\prime}\sim P_{g}(\cdot|s_{g},a_{g}),\\ s_{i}^{\prime}\sim P_{l}(\cdot|s_{i},s_{g},a_{i}),\forall i\in[n]\end{subarray}}\max_{a^{\prime}\in\mathcal{A}}Q^{t}(s^{\prime},a^{\prime}) (14)
Definition B.2 (Adapted Bellman Operator 𝒯^k\hat{\mathcal{T}}_{k}).

The adapted Bellman operator updates a smaller QQ function (which we denote by Q^k\hat{Q}_{k}), for a surrogate system with the global agent and k[n]k\in[n] local agents denoted by Δ\Delta, using mean-field value iteration and jΔj\in\Delta such that:

𝒯^kQ^kt(sg,sj,FzΔj,aj,ag):=rΔ(s,a)+γ𝔼sgPg(|sg,ag),siPl(|si,sg,ai),iΔmaxa𝒜Q^kt(sg,sj,FzΔj,aj,ag)\hat{\mathcal{T}}_{k}\hat{Q}_{k}^{t}(s_{g},s_{j},F_{z_{\Delta\setminus j}},a_{j},a_{g}):=r_{\Delta}(s,a)+\gamma\mathbb{E}_{\begin{subarray}{c}s_{g}^{\prime}\sim P_{g}(\cdot|s_{g},a_{g}),\\ s_{i}^{\prime}\sim P_{l}(\cdot|s_{i},s_{g},a_{i}),\forall i\in\Delta\end{subarray}}\max_{a^{\prime}\in\mathcal{A}}\hat{Q}_{k}^{t}(s_{g}^{\prime},s_{j}^{\prime},F_{z^{\prime}_{\Delta\setminus j}},a^{\prime}_{j},a^{\prime}_{g}) (15)
Definition B.3 (Empirical Adapted Bellman Operator 𝒯^k,m\hat{\mathcal{T}}_{k,m}).

The empirical adapted Bellman operator 𝒯^k,m\hat{\mathcal{T}}_{k,m} empirically estimates the adapted Bellman operator update using mean-field value iteration by drawing mm random samples of sgPg(|sg,ag)s_{g}\sim P_{g}(\cdot|s_{g},a_{g}) and siPl(|si,sg,ai)s_{i}\sim P_{l}(\cdot|s_{i},s_{g},a_{i}) for iΔi\in\Delta, where for [m]\ell\in[m], the \ell’th random sample is given by sgs_{g}^{\ell} and sΔs_{\Delta}^{\ell}, and jΔj\in\Delta:

𝒯^k,mQ^k,mt(sg,sj,FzΔj,aj,ag):=rΔ(s,a)+γm[m]maxa𝒜Q^k,mt(sg,sj,FzΔj,aj,ag)\hat{\mathcal{T}}_{k,m}\hat{Q}_{k,m}^{t}(s_{g},s_{j},F_{z_{\Delta\setminus j}},a_{j},a_{g}):=r_{\Delta}(s,a)+\frac{\gamma}{m}\sum_{\ell\in[m]}\max_{a^{\prime}\in\mathcal{A}}\hat{Q}_{k,m}^{t}(s_{g}^{\ell},s_{j}^{\ell},F_{z_{\Delta\setminus j}^{\ell}},a_{j}^{\ell},a_{g}^{\ell}) (16)
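To make Equation 16 concrete, the following minimal Python sketch performs one application of \hat{\mathcal{T}}_{k,m} on a tabular \hat{Q}_{k,m} stored as a dictionary. The toy state/action spaces, the transition samplers P_g and P_l, and the reward components r_g and r_l are hypothetical placeholders standing in for the generative oracle; the sketch illustrates the update, and is not the paper's implementation.

import itertools, random
from collections import Counter

S_g, S_l, A_g, A_l = [0, 1], [0, 1], [0, 1], [0, 1]   # toy spaces (placeholders)
gamma = 0.9

def P_g(s_g, a_g): return random.choice(S_g)           # draw s_g' ~ P_g(. | s_g, a_g)
def P_l(s_i, s_g, a_i): return random.choice(S_l)      # draw s_i' ~ P_l(. | s_i, s_g, a_i)
def r_g(s_g, a_g): return float(s_g == a_g)            # placeholder global reward
def r_l(s_i, s_g, a_i): return float(s_i == s_g)       # placeholder local reward

def F(pairs):
    # Empirical distribution of (state, action) pairs, encoded as a hashable count tuple.
    return tuple(sorted(Counter(pairs).items()))

def empirical_adapted_bellman(Q, s_g, z_Delta, a_g, j, m):
    # Return one empirical adapted Bellman backup at (s_g, s_j, F of z over Delta without j, a_j, a_g).
    # Missing dictionary keys read as 0, matching the initialization Q^0_{k,m} := 0.
    k = len(z_Delta)
    r_Delta = r_g(s_g, a_g) + sum(r_l(s_i, s_g, a_i) for s_i, a_i in z_Delta) / k
    acc = 0.0
    for _ in range(m):                                  # m draws from the generative oracle
        s_g2 = P_g(s_g, a_g)
        s2 = [P_l(s_i, s_g, a_i) for s_i, a_i in z_Delta]
        s_rest = s2[:j] + s2[j + 1:]
        acc += max(                                     # maximize over the next joint action
            Q.get((s_g2, s2[j], F(list(zip(s_rest, a_rest))), a_j2, a_g2), 0.0)
            for a_g2 in A_g for a_j2 in A_l
            for a_rest in itertools.product(A_l, repeat=k - 1)
        )
    return r_Delta + gamma * acc / m
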
Remark B.4.

Since the local agents 1,,n1,\dots,n are all homogeneous in their state/action spaces, the Q^k\hat{Q}_{k}-function only depends on them through their empirical distribution FzΔF_{z_{\Delta}}. Therefore, throughout the remainder of the paper, we will use Q^k(sg,FzΔ,ag):-Q^k(sg,si,FzΔi,ag,ai):-Q^k(sg,sΔ,ag,aΔ)\hat{Q}_{k}(s_{g},F_{z_{\Delta}},a_{g})\coloneq\hat{Q}_{k}(s_{g},s_{i},F_{z_{\Delta\setminus i}},a_{g},a_{i})\coloneq\hat{Q}_{k}(s_{g},s_{\Delta},a_{g},a_{\Delta}) interchangeably, unless making a remark about the computational complexity of learning each function.

Lemma B.5.

For any Δ[n]\Delta\subseteq[n] such that |Δ|=k|\Delta|=k, suppose 0rΔ(s,a)r~0\leq r_{\Delta}(s,a)\leq\tilde{r}. Then, for all tt\in\mathbb{N}, Q^ktr~1γ\hat{Q}_{k}^{t}\leq\frac{\tilde{r}}{1-\gamma}.

Proof.

The proof follows by induction on tt. The base case follows from Q^k0:=0\hat{Q}_{k}^{0}:=0. For the induction, note that by the triangle inequality Q^kt+1rΔ+γQ^ktr~+γr~1γ=r~1γ\|\hat{Q}_{k}^{t+1}\|_{\infty}\leq\|r_{\Delta}\|_{\infty}+\gamma\|\hat{Q}_{k}^{t}\|_{\infty}\leq\tilde{r}+\gamma\frac{\tilde{r}}{1-\gamma}=\frac{\tilde{r}}{1-\gamma}. ∎

Remark B.6.

By the law of large numbers, limm𝒯^k,m=𝒯^k\lim_{m\to\infty}\hat{\mathcal{T}}_{k,m}=\hat{\mathcal{T}}_{k}, where the error decays in O(1/m)O(1/\sqrt{m}) by the Chernoff bound. Also, 𝒯^n:-𝒯\hat{\mathcal{T}}_{n}\coloneq\mathcal{T}. Further, Lemma B.5 is independent of the choice of kk. Therefore, for k=nk=n, this implies an identical bound on QtQ^{t}. An identical argument implies the same bound on Q^k,mt\hat{Q}_{k,m}^{t}.
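To make the O(1/\sqrt{m}) rate concrete (a standard Hoeffding/Chernoff instantiation, stated here only for illustration): since Lemma B.5 bounds every iterate in [0,\tilde{r}/(1-\gamma)], for any fixed (s_{g},s_{j},F_{z_{\Delta\setminus j}},a_{j},a_{g}) and \delta\in(0,1], with probability at least 1-\delta,

\left|\frac{1}{m}\sum_{\ell\in[m]}\max_{a^{\prime}\in\mathcal{A}}\hat{Q}_{k,m}^{t}(s_{g}^{\ell},s_{j}^{\ell},F_{z_{\Delta\setminus j}^{\ell}},a_{j}^{\prime},a_{g}^{\prime})-\mathbb{E}\left[\max_{a^{\prime}\in\mathcal{A}}\hat{Q}_{k,m}^{t}(s_{g}^{\prime},s_{j}^{\prime},F_{z_{\Delta\setminus j}^{\prime}},a_{j}^{\prime},a_{g}^{\prime})\right]\right|\leq\frac{\tilde{r}}{1-\gamma}\sqrt{\frac{\ln(2/\delta)}{2m}}.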

𝒯\mathcal{T} satisfies a γ\gamma-contractive property under the infinity norm (Watkins & Dayan, 1992). We similarly show that 𝒯^k\hat{\mathcal{T}}_{k} and 𝒯^k,m\hat{\mathcal{T}}_{k,m} satisfy a γ\gamma-contractive property under infinity norm in Lemmas B.7 and B.8.

Lemma B.7.

𝒯^k\hat{\mathcal{T}}_{k} satisfies the γ\gamma-contractive property under infinity norm:

𝒯^kQ^k𝒯^kQ^kγQ^kQ^k\|\hat{\mathcal{T}}_{k}\hat{Q}_{k}^{\prime}-\hat{\mathcal{T}}_{k}\hat{Q}_{k}\|_{\infty}\leq\gamma\|\hat{Q}_{k}^{\prime}-\hat{Q}_{k}\|_{\infty} (17)
Proof.

Suppose we apply 𝒯^k\hat{\mathcal{T}}_{k} to Q^k(sg,FzΔ,ag)\hat{Q}_{k}(s_{g},F_{z_{\Delta}},a_{g}) and Q^k(sg,FzΔ,ag)\hat{Q}^{\prime}_{k}(s_{g},F_{z_{\Delta}},a_{g}) for |Δ|=k|\Delta|=k. Then:

𝒯^kQ^k𝒯^kQ^k\displaystyle\|\hat{\mathcal{T}}_{k}\hat{Q}_{k}^{\prime}-\hat{\mathcal{T}}_{k}\hat{Q}_{k}\|_{\infty} =γmaxsg𝒮g,ag𝒜g,FzΔμk(𝒵l)|𝔼sgPg(|sg,ag),siPl(|si,sg,ai),iΔ,maxa𝒜Q^k(sg,FzΔ,ag)𝔼sgPg(|sg,ag),siPl(|si,sg,ai),iΔmaxa𝒜Q^k(sg,FzΔ,ag)|\displaystyle=\gamma\max_{\begin{subarray}{c}s_{g}\in\mathcal{S}_{g},\\ a_{g}\in\mathcal{A}_{g},\\ F_{z_{\Delta}}\in\mu_{k}{(\mathcal{Z}_{l})}\end{subarray}}\!\left|\mathbb{E}_{\begin{subarray}{c}s_{g}^{\prime}\sim P_{g}(\cdot|s_{g},a_{g}),\\ s_{i}^{\prime}\sim P_{l}(\cdot|s_{i},s_{g},a_{i}),\\ \forall i\in\Delta,\\ \end{subarray}}\max_{a^{\prime}\in\mathcal{A}}\hat{Q}_{k}^{\prime}(s_{g}^{\prime},F_{z_{\Delta}^{\prime}},a_{g}^{\prime})-\mathbb{E}_{\begin{subarray}{c}s_{g}^{\prime}\sim P_{g}(\cdot|s_{g},a_{g}),\\ s_{i}^{\prime}\sim P_{l}(\cdot|s_{i},s_{g},a_{i}),\\ \forall i\in\Delta^{\prime}\end{subarray}}\max_{a^{\prime}\in\mathcal{A}}\hat{Q}_{k}(s_{g}^{\prime},F_{z_{\Delta}^{\prime}},a_{g}^{\prime})\right|
γmaxsg𝒮g,FzΔμk(𝒵l),a𝒜|Q^k(sg,FzΔ,ag)Q^k(sg,FzΔ,ag)|=γQ^kQ^k\displaystyle\leq\gamma\max_{\begin{subarray}{c}s_{g}^{\prime}\in\mathcal{S}_{g},F_{z_{\Delta}}\in\mu_{k}{(\mathcal{Z}_{l})},a^{\prime}\in\mathcal{A}\end{subarray}}\left|\hat{Q}_{k}^{\prime}(s_{g}^{\prime},F_{z_{\Delta}^{\prime}},a_{g}^{\prime})-\hat{Q}_{k}(s_{g}^{\prime},F_{z_{\Delta}^{\prime}},a_{g}^{\prime})\right|=\gamma\|\hat{Q}_{k}^{\prime}-\hat{Q}_{k}\|_{\infty}

The first equality cancels the common r_{\Delta}(s,a) terms in the two operators. The second line uses Jensen's inequality, maximizes over actions, and bounds the expectation by the maximum of the random variable.∎

Lemma B.8.

𝒯^k,m\hat{\mathcal{T}}_{k,m} satisfies the γ\gamma-contractive property under infinity norm.

Proof.

Similarly to Lemma B.7, suppose we apply 𝒯^k,m\hat{\mathcal{T}}_{k,m} to Q^k,m(sg,FzΔ,ag)\hat{Q}_{k,m}(s_{g},F_{z_{\Delta}},a_{g}) and Q^k,m(sg,FzΔ,ag)\hat{Q}_{k,m}^{\prime}(s_{g},F_{z_{\Delta}},a_{g}). Then:

𝒯^k,mQ^k𝒯^k,mQ^k\displaystyle\|\hat{\mathcal{T}}_{k,m}\hat{Q}_{k}-\hat{\mathcal{T}}_{k,m}\hat{Q}^{\prime}_{k}\|_{\infty} =γm[m](maxa𝒜Q^k(sg,FzΔ,ag)maxa𝒜Q^k(sg,FzΔ,ag))\displaystyle=\frac{\gamma}{m}\left\|\sum_{\ell\in[m]}\left(\max_{a^{\prime}\in\mathcal{A}}\hat{Q}_{k}(s_{g}^{\ell},F_{z_{\Delta}^{\ell}},a_{g}^{\prime})-\max_{a^{\prime}\in\mathcal{A}}\hat{Q}^{\prime}_{k}(s_{g}^{\ell},F_{z_{\Delta}^{\ell}},a_{g}^{\prime})\right)\right\|_{\infty}
γmaxag𝒜g,sg𝒮g,zΔ𝒵lk|Q^k(sg,FzΔ,ag)Q^k(sg,FzΔ,ag)|=γQ^kQ^k\displaystyle\leq\gamma\max_{\begin{subarray}{c}a_{g}^{\prime}\in\mathcal{A}_{g},s_{g}^{\prime}\in\mathcal{S}_{g},z_{\Delta}\in\mathcal{Z}_{l}^{k}\end{subarray}}|\hat{Q}_{k}(s_{g}^{\prime},F_{z_{\Delta}^{\prime}},a_{g}^{\prime})-\hat{Q}^{\prime}_{k}(s_{g}^{\prime},F_{z_{\Delta}^{\prime}},a_{g}^{\prime})|=\gamma\|\hat{Q}_{k}-\hat{Q}^{\prime}_{k}\|_{\infty}

The first inequality uses the triangle inequality and the elementary property |\max_{a\in A}f(a)-\max_{a\in A}g(a)|\leq\max_{a\in A}|f(a)-g(a)|. In the last line, we recover the definition of the infinity norm.∎
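As a quick numerical sanity check of the elementary max-difference property used above (a toy illustration, not part of the proof):

import numpy as np

rng = np.random.default_rng(0)
f, g = rng.normal(size=100), rng.normal(size=100)   # two arbitrary functions on a finite set
lhs = abs(f.max() - g.max())                        # |max_a f(a) - max_a g(a)|
rhs = np.abs(f - g).max()                           # max_a |f(a) - g(a)|
assert lhs <= rhs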

Remark B.9.

The \gamma-contractivity of \hat{\mathcal{T}}_{k} and \hat{\mathcal{T}}_{k,m} shrinks the distance between any two \hat{Q}_{k} (respectively, \hat{Q}_{k,m}) functions, evaluated at the same state-action tuple, by a factor of \gamma at each step. Repeatedly applying each Bellman operator therefore converges to a unique fixed point by the Banach fixed-point theorem (Theorem A.2); we introduce these fixed points in Definitions B.10 and B.11.

Definition B.10 (Q^k\hat{Q}_{k}^{*}-function).

Suppose Q^k0:=0\hat{Q}_{k}^{0}:=0 and let Q^kt+1(sg,FzΔ,ag)=𝒯^kQ^kt(sg,FzΔ,ag)\hat{Q}_{k}^{t+1}(s_{g},F_{z_{\Delta}},a_{g})=\hat{\mathcal{T}}_{k}\hat{Q}_{k}^{t}(s_{g},F_{z_{\Delta}},a_{g}) for tt\in\mathbb{N}. Denote the fixed-point of 𝒯^k\hat{\mathcal{T}}_{k} by Q^k\hat{Q}^{*}_{k} such that 𝒯^kQ^k(sg,FzΔ,ag)=Q^k(sg,FzΔ,ag)\hat{\mathcal{T}}_{k}\hat{Q}^{*}_{k}(s_{g},F_{z_{\Delta}},a_{g})=\hat{Q}^{*}_{k}(s_{g},F_{z_{\Delta}},a_{g}).

Definition B.11 (Q^k,mest\hat{Q}_{k,m}^{\mathrm{est}}-function).

Suppose Q^k,m0:=0\hat{Q}_{k,m}^{0}:=0 and let Q^k,mt+1(sg,FzΔ,ag)=𝒯^k,mQ^k,mt(sg,FzΔ,ag)\hat{Q}_{k,m}^{t+1}(s_{g},F_{z_{\Delta}},a_{g})=\hat{\mathcal{T}}_{k,m}\hat{Q}_{k,m}^{t}(s_{g},F_{z_{\Delta}},a_{g}) for tt\in\mathbb{N}. Denote the fixed-point of 𝒯^k,m\hat{\mathcal{T}}_{k,m} by Q^k,mest\hat{Q}^{\mathrm{est}}_{k,m} such that 𝒯^k,mQ^k,mest(sg,FzΔ,ag)=Q^k,mest(sg,FzΔ,ag)\hat{\mathcal{T}}_{k,m}\hat{Q}^{\mathrm{est}}_{k,m}(s_{g},F_{z_{\Delta}},a_{g})=\hat{Q}^{\mathrm{est}}_{k,m}(s_{g},F_{z_{\Delta}},a_{g}).

Corollary B.12.

Observe that, by repeatedly applying the \gamma-contractive property over T time steps:

Q^kQ^kTγTQ^kQ^k0,Q^k,mestQ^k,mTγTQ^k,mestQ^k,m0\|\hat{{Q}}_{k}^{*}-\hat{{Q}}_{k}^{T}\|_{\infty}\leq\gamma^{T}\cdot\|\hat{Q}_{k}^{*}-\hat{Q}_{k}^{0}\|_{\infty},\quad\quad\quad\|\hat{{Q}}_{k,m}^{\mathrm{est}}-\hat{{Q}}_{k,m}^{T}\|_{\infty}\leq\gamma^{T}\cdot\|\hat{Q}_{k,m}^{\mathrm{est}}-\hat{Q}^{0}_{k,m}\|_{\infty} (18)

Further, noting that Q^k0=Q^k,m0:=0\hat{Q}_{k}^{0}=\hat{Q}_{k,m}^{0}:=0, Q^kr~1γ\|\hat{Q}_{k}^{*}\|_{\infty}\leq\frac{\tilde{r}}{1-\gamma}, and Q^k,mestr~1γ\|\hat{Q}_{k,m}^{\mathrm{est}}\|_{\infty}\leq\frac{\tilde{r}}{1-\gamma} from Lemma B.5:

Q^kQ^kTγTr~1γ,Q^k,mestQ^k,mTγTr~1γ\|\hat{Q}_{k}^{*}-\hat{Q}_{k}^{T}\|_{\infty}\leq\gamma^{T}\frac{\tilde{r}}{1-\gamma},\quad\quad\quad\|\hat{Q}_{k,m}^{\mathrm{est}}-\hat{Q}_{k,m}^{T}\|_{\infty}\leq\gamma^{T}\frac{\tilde{r}}{1-\gamma} (19)
Remark B.13.

Corollary B.12 characterizes the error between \hat{Q}_{k}^{T} and \hat{Q}_{k}^{*}, showing that it decays geometrically in the number of Bellman iterations, by a multiplicative factor of \gamma^{T}.
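For instance, Equation 19 immediately gives a sufficient number of Bellman iterations for \epsilon-accuracy: requiring \gamma^{T}\tilde{r}/(1-\gamma)\leq\epsilon and solving for T yields

T\geq\frac{1}{\ln(1/\gamma)}\ln\left(\frac{\tilde{r}}{(1-\gamma)\epsilon}\right),

so the iteration count grows only logarithmically in 1/\epsilon.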

Furthermore, we characterize the greedy policies obtained from Q^{*}, \hat{Q}_{k}^{*}, and \hat{Q}_{k,m}^{\mathrm{est}}.

Definition B.14 (Optimal policy π\pi^{*}).

The greedy policy derived from QQ^{*} is

π(s):=argmaxa𝒜Q(s,a).\pi^{*}(s):=\arg\max_{a\in\mathcal{A}}Q^{*}(s,a). (20)
Definition B.15 (Optimal subsampled policy π^k\hat{\pi}_{k}^{*}).

The greedy policy from Q^k\hat{Q}_{k}^{*} is

π^k(sg,si,FsΔi):-argmax(ag,ai,FaΔi)𝒜g×𝒜l×μk1(𝒜l)Q^k(sg,si,FzΔi,ai,ag).\hat{\pi}_{k}^{*}(s_{g},s_{i},F_{s_{\Delta\setminus i}})\coloneq\mathop{\operatorname*{arg\,max}}_{(a_{g},a_{i},F_{a_{\Delta\setminus i}})\in\mathcal{A}_{g}\times\mathcal{A}_{l}\times\mu_{k-1}{(\mathcal{A}_{l})}}\hat{Q}_{k}^{*}(s_{g},s_{i},F_{z_{\Delta\setminus i}},a_{i},a_{g}). (21)
Definition B.16 (Optimal empirically subsampled policy π^k,mest\hat{\pi}_{k,m}^{\mathrm{est}}).

The greedy policy from Q^k,mest\hat{Q}_{k,m}^{\mathrm{est}} is given by

π^k,mest(sg,FsΔ):=argmax(ag,ai,FaΔi)𝒜g×𝒜l×μk1(𝒜l)Q^k,mest(sg,si,FzΔi,ai,ag).\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},F_{s_{\Delta}}):=\mathop{\operatorname*{arg\,max}}_{(a_{g},a_{i},F_{a_{\Delta\setminus i}})\in\mathcal{A}_{g}\times\mathcal{A}_{l}\times\mu_{k-1}{(\mathcal{A}_{l})}}\hat{Q}_{k,m}^{\mathrm{est}}(s_{g},s_{i},F_{z_{\Delta\setminus i}},a_{i},a_{g}). (22)
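For concreteness, the following minimal Python sketch performs the greedy action selection of Definitions B.15 and B.16 over a tabular \hat{Q} stored as a dictionary. All names are illustrative; for readability, the sketch enumerates joint actions of the other k-1 agents (rather than action distributions in \mu_{k-1}(\mathcal{A}_{l})) and pairs them with the known states to form the empirical distribution F_{z_{\Delta\setminus i}} that indexes the table.

import itertools
from collections import Counter

def F(pairs):
    # Empirical distribution of (state, action) pairs, encoded as a hashable count tuple.
    return tuple(sorted(Counter(pairs).items()))

def greedy_policy(Q, s_g, s_i, s_rest, A_g, A_l):
    # Greedy selection of (a_g, a_i, actions of the other agents) from a tabular Q-hat;
    # missing table entries read as 0.
    best, best_val = None, float("-inf")
    for a_g in A_g:
        for a_i in A_l:
            for a_rest in itertools.product(A_l, repeat=len(s_rest)):
                key = (s_g, s_i, F(list(zip(s_rest, a_rest))), a_i, a_g)
                val = Q.get(key, 0.0)
                if val > best_val:
                    best, best_val = (a_g, a_i, a_rest), val
    return best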

Figure 5 details the analytic flow of how we use the empirical adapted Bellman operator to perform value iteration on \hat{Q}_{k,m}, obtaining \hat{Q}_{k,m}^{\mathrm{est}}, which approximates Q^{*}.

Algorithm 4 gives a stable (practical) implementation of Algorithm 2 with learning rates \{\eta_{t}\}_{t\in[T]}, and is provably numerically stable under fixed-point arithmetic (Anand et al., 2024). The iterates \hat{Q}_{k,m}^{t} in Algorithm 4 retain the \gamma-contractivity of Lemma B.7, given an appropriately conditioned sequence of learning rates \eta_{t}:

Algorithm 4 Stable (Practical) Implementation of Algorithm 2: SUBSAMPLE-Q: Learning
0:  A multi-agent system as described in Section 2. Parameter TT for the number of iterations in the initial value iteration step. Hyperparameter k[n]k\in[n]. Discount parameter γ(0,1)\gamma\in(0,1). Oracle 𝒪\mathcal{O} to sample sgPg(|sg,ag)s_{g}^{\prime}\sim{P}_{g}(\cdot|s_{g},a_{g}) and siPl(|si,sg,ai)s_{i}\sim{P}_{l}(\cdot|s_{i},s_{g},a_{i}) for all i[n]i\in[n]. Learning rate sequence {ηt}t[T]\{\eta_{t}\}_{t\in[T]} where ηt(0,1]\eta_{t}\in(0,1].
1:  Let Δ=[k]\Delta=[k].
2:  Set \hat{Q}^{0}_{k,m}(s_{g},s_{1},F_{z_{\Delta\setminus 1}},a_{1},a_{g})=0 for all (s_{g},s_{1},F_{z_{\Delta\setminus 1}},a_{1},a_{g})\in\mathcal{S}_{g}\times\mathcal{S}_{l}\times\mu_{k-1}(\mathcal{Z}_{l})\times\mathcal{A}_{l}\times\mathcal{A}_{g}.
3:  for t=1t=1 to TT do
4:     for (s_{g},s_{1},F_{z_{\Delta\setminus 1}})\in\mathcal{S}_{g}\times\mathcal{S}_{l}\times\mu_{k-1}(\mathcal{Z}_{l}) do
5:        for (ag,a1)𝒜g×𝒜l(a_{g},a_{1})\in\mathcal{A}_{g}\times\mathcal{A}_{l} do
6:           Q^k,mt+1(sg,s1,FzΔ1,ag,a1)(1ηt)Q^k,mt(sg,s1,FzΔ1,ag,a1)+ηt𝒯^k,mQ^k,mt(sg,s1,FzΔ1,ag,a1)\hat{Q}^{t+1}_{k,m}(s_{g},s_{1},F_{z_{\Delta\setminus 1}},a_{g},a_{1})\leftarrow(1-\eta_{t})\hat{Q}_{k,m}^{t}(s_{g},s_{1},F_{z_{\Delta\setminus 1}},a_{g},a_{1})+\eta_{t}\hat{\mathcal{T}}_{k,m}\hat{Q}^{t}_{k,m}(s_{g},s_{1},F_{z_{\Delta\setminus 1}},a_{g},a_{1})
7:        end for
8:     end for
9:  end for
10:  Let the approximate policy be
π^k,mT(sg,s1,FsΔ1)=argmax(ag,a1,aΔ1)𝒜g×𝒜l×μk1(𝒜l)Q^k,mT(sg,s1,FzΔ1,ag,a1).\hat{\pi}_{k,m}^{T}(s_{g},s_{1},F_{s_{\Delta\setminus 1}})=\mathop{\operatorname*{arg\,max}}_{(a_{g},a_{1},a_{\Delta\setminus 1})\in\mathcal{A}_{g}\times\mathcal{A}_{l}\times\mu_{k-1}{(\mathcal{A}_{l})}}\hat{Q}_{k,m}^{T}(s_{g},s_{1},F_{z_{\Delta\setminus 1}},a_{g},a_{1}).
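A minimal Python sketch of the loop in Algorithm 4 is given below, assuming a user-supplied helper bellman_update that implements one application of \hat{\mathcal{T}}_{k,m} (Equation 16) and a user-supplied enumeration table_keys of the tuples (s_{g},s_{1},F_{z_{\Delta\setminus 1}},a_{g},a_{1}); both names are hypothetical, and the sketch is illustrative rather than the paper's implementation.

def subsample_q_learning(T, eta, table_keys, bellman_update):
    # Stable learning loop of Algorithm 4: each Q-table entry moves by a convex
    # combination (learning rate eta(t)) toward one application of the empirical
    # adapted Bellman operator.
    Q = {}                                    # Q^0_{k,m} := 0; missing keys read as 0
    for t in range(1, T + 1):
        Q_next = {}
        for key in table_keys:                # key = (s_g, s_1, F over z of Delta \ {1}, a_g, a_1)
            target = bellman_update(Q, *key)  # one backup of T_hat_{k,m} Q at this entry
            Q_next[key] = (1 - eta(t)) * Q.get(key, 0.0) + eta(t) * target
        Q = Q_next
    return Q
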
Theorem B.17.

As TT\to\infty, if t=1Tηt=\sum_{t=1}^{T}\eta_{t}=\infty, and t=1Tηt2<\sum_{t=1}^{T}\eta_{t}^{2}<\infty, then QQ-learning converges to the optimal QQ function asymptotically with probability 11.
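For example, the schedule \eta_{t}=1/t satisfies both conditions, since \sum_{t\geq 1}1/t diverges while \sum_{t\geq 1}1/t^{2}=\pi^{2}/6<\infty; more generally, any \eta_{t}=t^{-\alpha} with \alpha\in(1/2,1] suffices.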

Furthermore, finite-time guarantees relating the learning rate to the sample complexity have recently been shown in (Chen & Maguluri, 2022); adapted to our \hat{Q}_{k,m} framework in Algorithm 4, they yield:

Theorem B.18 ((Chen & Maguluri, 2022)).

For all t\in\{1,\dots,T\} and for any \epsilon>0, if the learning rate sequence \eta_{t} satisfies \eta_{t}=(1-\gamma)^{4}\epsilon^{2} and T=|\mathcal{S}_{l}||\mathcal{A}_{l}|k^{|\mathcal{S}_{l}||\mathcal{A}_{l}|}|\mathcal{S}_{g}||\mathcal{A}_{g}|/(1-\gamma)^{5}\epsilon^{2}, then

Q^k,mTQ^k,mestϵ.\displaystyle\|\hat{Q}_{k,m}^{T}-\hat{Q}_{k,m}^{\mathrm{est}}\|\leq\epsilon.
Definition B.19 (Star Graph SnS_{n}).

For nn\in\mathbb{N}, the star graph SnS_{n} is the complete bipartite graph K1,nK_{1,n}.

S_{n} captures a notion of graph density: the central node's neighborhood saturates the entire vertex set. Such settings find applications beyond reinforcement learning as well (Chaudhari et al., 2024; Anand & Qu, 2024; Li et al., 2019). The cardinality of the search-space simplex for the optimal policy is exponential in n, so the problem cannot be naively modeled as a tabular MDP: we must exploit the symmetry of the local agents. This intuition is what allows our subsampling algorithm to run in time polylogarithmic in n. Some prior works leverage an exponential decay property that truncates the search space for policies to the immediate neighborhoods of agents, but this relies on the assumption that each agent's graph neighborhood is sparse (Qu et al., 2020a, b). Since S_{n} is not locally sparse, these methods do not apply to this problem instance.

Figure 4: The star graph S_{n}.

Appendix C Proof Sketch

This section details an outline for the proof of Theorem 3.4, as well as some key ideas. At a high level, our SUBSAMPLE-MFQ framework recovers exact mean-field QQ learning and traditional value iteration when k=nk=n and as mm\to\infty. Further, as knk\!\to\!n, Q^k\hat{Q}_{k}^{*} should intuitively get closer to QQ^{*} from which the optimal policy is derived. Thus, the proof is divided into three major steps: firstly, we prove a Lipschitz continuity bound between Q^k\hat{Q}_{k}^{*} and Q^n\hat{Q}_{n}^{*} in terms of the total variation (TV) distance between FzΔF_{z_{\Delta}} and Fz[n]F_{z_{[n]}}. Next, we bound the TV distance between FzΔF_{z_{\Delta}} and Fz[n]F_{z_{[n]}}. Finally, we bound the value differences between πkest{\pi}_{k}^{\mathrm{est}} and π\pi^{*} by bounding Q(s,π(s))Q(s,πkest(s))Q^{*}(s,\pi^{*}(s))-Q^{*}(s,{\pi}_{k}^{\mathrm{est}}(s)) and then using the performance difference lemma from Kakade & Langford (2002).

Step 1: Lipschitz Continuity Bound. To compare Q^k(sg,FsΔ,ag)\hat{Q}_{k}^{*}(s_{g},F_{s_{\Delta}},a_{g}) with Q(s,ag)Q^{*}(s,a_{g}), we prove a Lipschitz continuity bound between Q^k(sg,FsΔ,ag)\hat{Q}^{*}_{k}(s_{g},F_{s_{\Delta}},a_{g}) and Q^k(sg,FsΔ,ag)\hat{Q}^{*}_{k^{\prime}}(s_{g},F_{s_{\Delta^{\prime}}},a_{g}) with respect to the TV distance measure between sΔ(s[n]k)s_{\Delta}\in\binom{s_{[n]}}{k} and sΔ(s[n]k)s_{\Delta^{\prime}}\in\binom{s_{[n]}}{k^{\prime}}:

Theorem C.1 (Lipschitz continuity in Q^k\hat{Q}_{k}^{*}).

For all (s,a)𝒮×𝒜(s,a)\in\mathcal{S}\times\mathcal{A}, Δ([n]k)\Delta\in\binom{[n]}{k} and Δ([n]k)\Delta^{\prime}\in\binom{[n]}{k^{\prime}},

|Q^k(sg,FzΔ,ag)\displaystyle|\hat{Q}^{*}_{k}(s_{g},F_{z_{\Delta}},a_{g}) Q^k(sg,FzΔ,ag)|21γrl(,)TV(FzΔ,FzΔ)\displaystyle-\hat{Q}^{*}_{k^{\prime}}(s_{g},F_{z_{\Delta^{\prime}}},a_{g})|\leq\frac{2}{1-\gamma}\|r_{l}(\cdot,\cdot)\|_{\infty}\cdot\mathrm{TV}\left(F_{z_{\Delta}},F_{z_{\Delta^{\prime}}}\right)

We defer the proof of Theorem C.1 to Appendix D. See Figure 5 for a comparison between the Q^k\hat{Q}_{k}^{*} learning and estimation process, and the exact Q{Q}-learning framework.

\hat{Q}^{0}_{k,m}(s_{g},F_{z_{\Delta}},a_{g})\;\xrightarrow{(1)}\;\hat{Q}_{k,m}^{\mathrm{est}}(s_{g},F_{z_{\Delta}},a_{g})\;\xrightarrow{(2)}\;\hat{Q}_{k}^{*}(s_{g},F_{z_{\Delta}},a_{g})\;\overset{(3)}{\approx}\;\hat{Q}_{n}^{*}(s_{g},F_{z_{[n]}},a_{g})\;\overset{(4)}{=}\;Q^{*}(s_{g},s_{[n]},a_{g},a_{[n]})
Figure 5: Flow of the algorithm and relevant analyses in learning QQ^{*}. Here, (1) follows by performing Algorithm 2 (SUBSAMPLE-MFQ: Learning) on Q^k,m0\hat{Q}_{k,m}^{0}. (2) follows from Lemma 3.2. (3) follows from the Lipschitz continuity and total variation distance bounds in Theorems C.1 and C.2. Finally, (4) follows from noting that Q^n=Q\hat{Q}_{n}^{*}=Q^{*}.

Step 2: Bounding Total Variation (TV) Distance.

We bound the TV distance between F_{z_{\Delta}} and F_{z_{[n]}}, where \Delta is drawn uniformly at random from \binom{[n]}{k}. This task is equivalent to bounding the discrepancy between the empirical distribution of the subsample and the distribution of the underlying finite population. Since each i\in\Delta is sampled uniformly without replacement, we use Lemma E.3 from (Anand & Qu, 2024), which generalizes the Dvoretzky-Kiefer-Wolfowitz (DKW) concentration inequality for empirical distribution functions. Using this, we show:

Theorem C.2.

Given a finite population 𝒵=(z1,,zn)\mathcal{Z}=(z_{1},\dots,z_{n}) for 𝒵𝒵ln\mathcal{Z}\in\mathcal{Z}_{l}^{n}, let Δ[n]\Delta\subseteq[n] be a uniformly random sample from 𝒵\mathcal{Z} of size kk chosen without replacement. Fix ϵ>0\epsilon>0. Then, for all x𝒵lx\in\mathcal{Z}_{l}:

Pr[supx𝒵l|1|Δ|iΔ𝟙{zi=x}\displaystyle\Pr\bigg{[}\sup_{x\in\mathcal{Z}_{l}}\bigg{|}\frac{1}{|\Delta|}\sum_{i\in\Delta}\mathbbm{1}{\{z_{i}=x\}} 1ni[n]𝟙{zi=x}|ϵ]12|𝒵l|e2knϵ2nk+1.\displaystyle-\frac{1}{n}\sum_{i\in[n]}\mathbbm{1}{\{z_{i}=x\}}\bigg{|}\leq\epsilon\bigg{]}\geq 1-2|\mathcal{Z}_{l}|e^{-\frac{2kn\epsilon^{2}}{n-k+1}}.

Then, by Theorem C.2 and the definition of total variation distance from Section 2, we have that for δ(0,1]\delta\in(0,1], with probability at least 1δ1-\delta,

TV(FsΔ,Fs[n])nk+18nkln2|𝒵l|δ\mathrm{TV}(F_{s_{\Delta}},F_{s_{[n]}})\leq\sqrt{\frac{n-k+1}{8nk}\ln\frac{2|\mathcal{Z}_{l}|}{\delta}} (23)

We then apply this result to our MARL setting by studying the gap in objective value between the learned policy {\pi}_{k}^{\mathrm{est}} and the optimal policy \pi^{*}, and its rate of decay as k grows (Theorem 3.4).
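As an illustrative check of Equation 23 (a toy Monte Carlo sketch; the population, n, k, and \delta below are arbitrary choices, not values from the paper), one can sample \Delta without replacement many times and compare the realized TV distance to the bound at \delta=0.05:

import numpy as np

rng = np.random.default_rng(0)
n, k, delta = 1000, 50, 0.05
Z_l = np.arange(4)                          # toy local state-action space with |Z_l| = 4
z = rng.choice(Z_l, size=n)                 # a fixed finite population z_1, ..., z_n

def tv_to_population(sample):
    # TV distance between the subsample and full-population empirical distributions.
    f_sub = np.bincount(sample, minlength=len(Z_l)) / len(sample)
    f_pop = np.bincount(z, minlength=len(Z_l)) / n
    return 0.5 * np.abs(f_sub - f_pop).sum()

tvs = [tv_to_population(rng.choice(z, size=k, replace=False)) for _ in range(2000)]
bound = np.sqrt((n - k + 1) / (8 * n * k) * np.log(2 * len(Z_l) / delta))
print(np.mean(np.array(tvs) <= bound))      # empirical coverage; should be at least 1 - delta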

Step 3: Performance Difference Lemma to Complete the Proof. As a consequence of the prior two steps and Lemma 3.2, Q^{*}(s,a^{\prime}) and \hat{Q}_{k,m}^{\mathrm{est}}(s_{g},F_{z_{\Delta}},a_{g}^{\prime}) become close as k\to n. We further prove that the values generated by the policies \pi^{*} and {\pi}_{k}^{\mathrm{est}} must also be close, with a residual gap that shrinks as k\to n. We then use the well-known performance difference lemma (Kakade & Langford, 2002), which we restate in Appendix F.1. A crucial ingredient for applying the performance difference lemma is a bound on Q^{*}(s^{\prime},\pi^{*}(s^{\prime}))-Q^{*}(s^{\prime},\hat{\pi}_{k}^{\mathrm{est}}(s_{g}^{\prime},F_{s_{\Delta}^{\prime}})). Therefore, we formulate and prove Theorem C.3, which yields a probabilistic bound on this difference, where the randomness is over the choice of \Delta\in\binom{[n]}{k}:

Theorem C.3.

For a fixed s^{\prime}\in\mathcal{S}:=\mathcal{S}_{g}\times\mathcal{S}_{l}^{n} and for \delta\in(0,1], with probability at least 1-2|\mathcal{A}_{g}|k^{|\mathcal{A}_{l}|}\delta:

Q(s,π(s))Q(s,π^k,mest(sg,FsΔ))2rl(,)1γnk+12nkln(2|𝒵l|δ)+2ϵk,m.\displaystyle Q^{*}(s^{\prime},\pi^{*}(s^{\prime}))-Q^{*}(s^{\prime},\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g}^{\prime},F_{s_{\Delta}^{\prime}}))\leq\frac{2\|r_{l}(\cdot,\cdot)\|_{\infty}}{1-\gamma}\sqrt{\frac{n-k+1}{2nk}\ln\left(\frac{2|\mathcal{Z}_{l}|}{\delta}\right)}+2\epsilon_{k,m}.

We defer the proof of Theorem C.3, and the choice of the optimal value of \delta, to Lemma F.8 in the Appendix. Combining Theorem C.3 with the performance difference lemma then yields Theorem 3.4.
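We also note an elementary simplification: since \frac{n-k+1}{2nk}\leq\frac{1}{2k} for every 1\leq k\leq n, the bound in Theorem C.3 is at most

\frac{2\|r_{l}(\cdot,\cdot)\|_{\infty}}{1-\gamma}\sqrt{\frac{1}{2k}\ln\left(\frac{2|\mathcal{Z}_{l}|}{\delta}\right)}+2\epsilon_{k,m},

which makes the \tilde{O}(1/\sqrt{k}) dependence on the number of subsampled agents explicit.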

Appendix D Proof of Lipschitz-continuity Bound

This section proves a Lipschitz-continuity bound between Q^k\hat{Q}_{k}^{*} and QQ^{*} and includes a framework to compare 1(nk)Δ([n]k)Q^k(sg,sΔ,ag)\frac{1}{{n\choose k}}\sum_{\Delta\in{[n]\choose k}}\hat{Q}^{*}_{k}(s_{g},s_{\Delta},a_{g}) and Q(s,ag)Q^{*}(s,a_{g}) in Lemma D.11.

Let zi:=(si,ai)𝒵l:-𝒮l×𝒜lz_{i}:=(s_{i},a_{i})\in\mathcal{Z}_{l}\coloneq\mathcal{S}_{l}\times\mathcal{A}_{l} and zΔ={zi:iΔ}𝒵lkz_{\Delta}=\{z_{i}:i\in\Delta\}\in\mathcal{Z}_{l}^{k}. For zi=(si,ai)z_{i}=(s_{i},a_{i}), let zi(s)=siz_{i}(s)=s_{i}, and zi(a)=aiz_{i}(a)=a_{i}. With abuse of notation, note that Q^kT(sg,ag,sΔ,aΔ)\hat{Q}_{k}^{T}(s_{g},a_{g},s_{\Delta},a_{\Delta}) is equal to Q^kT(sg,ag,zΔ)\hat{Q}_{k}^{T}(s_{g},a_{g},z_{\Delta}).

The following definitions will be relevant to the proof of Theorem D.3.

Definition D.1 (Empirical Distribution Function).

For all z𝒵l|Δ|z\in\mathcal{Z}_{l}^{|\Delta|} and s,a𝒮l×𝒜ls^{\prime},a^{\prime}\in\mathcal{S}_{l}\times\mathcal{A}_{l}, where Δ[n]\Delta\subseteq[n],

FzΔ(s,a)=1|Δ|iΔ𝟙{zi(s)=s,zi(a)=a}F_{z_{\Delta}}(s^{\prime},a^{\prime})=\frac{1}{|\Delta|}\sum_{i\in\Delta}\mathbbm{1}\{z_{i}(s)=s^{\prime},z_{i}(a)=a^{\prime}\}
Definition D.2 (Total Variation Distance).

Let PP and QQ be discrete probability distribution over some domain Ω\Omega. Then,

TV(P,Q)=12PQ1=supEΩ|PrP(E)PrQ(E)|\mathrm{TV}(P,Q)=\frac{1}{2}\left\|P-Q\right\|_{1}=\sup_{E\subseteq\Omega}\left|\Pr_{P}(E)-\Pr_{Q}(E)\right|
Theorem D.3 (Q^kT\hat{Q}_{k}^{T} is 21γrl(,)\frac{2}{1-\gamma}\|r_{l}(\cdot,\cdot)\|_{\infty}-Lipschitz continuous with respect to FzΔF_{z_{\Delta}} in total variation distance).

Suppose Δ,Δ[n]\Delta,\Delta^{\prime}\subseteq[n] such that |Δ|=k|\Delta|=k and |Δ|=k|\Delta^{\prime}|=k^{\prime}. Then:

|Q^kT(sg,ag,FzΔ)Q^kT(sg,ag,FzΔ)|(t=0T12γt)rl(,,)TV(FzΔ,FzΔ)\left|\hat{Q}^{T}_{k}(s_{g},a_{g},F_{z_{\Delta}})-\hat{Q}^{T}_{k^{\prime}}(s_{g},a_{g},F_{z_{\Delta^{\prime}}})\right|\leq\left(\sum_{t=0}^{T-1}2\gamma^{t}\right)\|r_{l}(\cdot,\cdot,\cdot)\|_{\infty}\cdot\mathrm{TV}\left(F_{z_{\Delta}},F_{z_{\Delta^{\prime}}}\right)
Proof.

We prove this inductively. First, note that Q^k0(,,)=Q^k0(,,)=0\hat{Q}_{k}^{0}(\cdot,\cdot,\cdot)=\hat{Q}_{k^{\prime}}^{0}(\cdot,\cdot,\cdot)=0 from the initialization step, which proves the lemma for T=0T=0, since TV(,)0\mathrm{TV}(\cdot,\cdot)\geq 0. At T=1T=1:

|Q^k1(sg,ag,FzΔ)Q^k1(sg,ag,FzΔ)|\displaystyle|\hat{Q}_{k}^{1}(s_{g},a_{g},F_{z_{\Delta}})-\hat{Q}_{k^{\prime}}^{1}(s_{g},a_{g},F_{z_{\Delta^{\prime}}})| =|𝒯^kQ^k0(sg,ag,FzΔ)𝒯^kQ^k0(sg,ag,FzΔ)|\displaystyle=\left|\hat{\mathcal{T}}_{k}\hat{Q}_{k}^{0}(s_{g},a_{g},F_{z_{\Delta}})-\hat{\mathcal{T}}_{k^{\prime}}\hat{Q}_{k^{\prime}}^{0}(s_{g},a_{g},F_{z_{\Delta^{\prime}}})\right|
=|r(sg,FsΔ,ag)+γ𝔼sg,sΔmaxag𝒜gQ^k0(sg,FsΔ,ag)\displaystyle=\bigg{|}r(s_{g},F_{s_{\Delta}},a_{g})+\gamma\mathbb{E}_{s_{g}^{\prime},s^{\prime}_{\Delta}}\max_{a_{g}^{\prime}\in\mathcal{A}_{g}}\hat{Q}_{k}^{0}(s_{g}^{\prime},F_{s^{\prime}_{\Delta}},a^{\prime}_{g})
r(sg,FsΔ,ag)γ𝔼sg,sΔmaxag𝒜gQ^k0(sg,FsΔ,ag)|\displaystyle\quad\quad\quad\quad-r(s_{g},F_{s_{\Delta^{\prime}}},a_{g})-\gamma\mathbb{E}_{s_{g}^{\prime},s^{\prime}_{\Delta^{\prime}}}\max_{a_{g}^{\prime}\in\mathcal{A}_{g}}\hat{Q}_{k^{\prime}}^{0}(s^{\prime}_{g},F_{s^{\prime}_{\Delta^{\prime}}},a^{\prime}_{g})\bigg{|}
=|r(sg,ag,FzΔ)r(sg,ag,FzΔ)|\displaystyle=|r(s_{g},a_{g},F_{z_{\Delta}})-r(s_{g},a_{g},F_{z_{\Delta^{\prime}}})|
=|1kiΔrl(sg,zi)1kiΔrl(sg,zi)|\displaystyle=\left|\frac{1}{k}\sum_{i\in\Delta}r_{l}(s_{g},z_{i})-\frac{1}{k^{\prime}}\sum_{i\in\Delta^{\prime}}r_{l}(s_{g},z_{i})\right|
=|𝔼zlFzΔrl(sg,zl)𝔼zlFzΔrl(sg,zl)|\displaystyle=\bigg{|}\mathbb{E}_{z_{l}\sim F_{z_{\Delta}}}r_{l}(s_{g},z_{l})-\mathbb{E}_{z_{l}^{\prime}\sim F_{z_{\Delta^{\prime}}}}r_{l}(s_{g},z_{l}^{\prime})\bigg{|}

In the first and second equalities, we use the time evolution property of Q^k1\hat{Q}_{k}^{1} and Q^k1\hat{Q}_{k^{\prime}}^{1} by applying the adapted Bellman operators 𝒯^k\hat{\mathcal{T}}_{k} and 𝒯^k\hat{\mathcal{T}}_{k^{\prime}} to Q^k0\hat{Q}_{k}^{0} and Q^k0\hat{Q}_{k^{\prime}}^{0}, respectively, and expanding. In the third and fourth equalities, we note that Q^k0(,,)=Q^k0(,,)=0\hat{Q}_{k}^{0}(\cdot,\cdot,\cdot)=\hat{Q}_{k^{\prime}}^{0}(\cdot,\cdot,\cdot)=0, and subtract the common ‘global component’ of the reward function.

Then, noting the general property that for any function f:𝒳𝒴f:\mathcal{X}\to\mathcal{Y} for |𝒳|<|\mathcal{X}|<\infty we can write f(x)=y𝒳f(y)𝟙{y=x}f(x)=\sum_{y\in\mathcal{X}}f(y)\mathbbm{1}\{y=x\}, we have:

|Q^k1(sg,ag,FzΔ)Q^k1(sg,ag,FzΔ)|\displaystyle|\hat{Q}^{1}_{k}(s_{g},a_{g},F_{z_{\Delta}})-\hat{Q}_{k^{\prime}}^{1}(s_{g},a_{g},F_{z_{\Delta^{\prime}}})| =|𝔼zlFzΔ[z𝒵rl(sg,z)𝟙{zl=z}]𝔼zlFzΔ[z𝒵rl(sg,z)𝟙{zl=z}]|\displaystyle=\left|\mathbb{E}_{z_{l}\sim F_{z_{\Delta}}}\left[\sum_{z\in\mathcal{Z}}r_{l}(s_{g},z)\mathbbm{1}\{z_{l}=z\}\right]-\mathbb{E}_{z_{l}^{\prime}\sim F_{z_{\Delta^{\prime}}}}\left[\sum_{z\in\mathcal{Z}}r_{l}(s_{g},z)\mathbbm{1}\{z_{l}^{\prime}=z\}\right]\right|
=|z𝒵rl(sg,z)(𝔼zlFzΔ𝟙{zl=z}𝔼zlFzΔ𝟙{zl=z})|\displaystyle=\bigg{|}\sum_{z\in\mathcal{Z}}r_{l}(s_{g},z)\cdot(\mathbb{E}_{z_{l}\sim F_{z_{\Delta}}}\mathbbm{1}\{z_{l}=z\}-\mathbb{E}_{z_{l}^{\prime}\sim F_{z_{\Delta^{\prime}}}}\mathbbm{1}\{z_{l}^{\prime}=z\})\bigg{|}
=|z𝒵rl(sg,z)(FzΔ(z)FzΔ(z))|\displaystyle=\bigg{|}\sum_{z\in\mathcal{Z}}r_{l}(s_{g},z)\cdot(F_{z_{\Delta}}(z)-F_{z_{\Delta^{\prime}}}(z))\bigg{|}
|maxz𝒵rl(sg,z)|z𝒵|FzΔ(z)FzΔ(z)|\displaystyle\leq\bigg{|}\max_{z\in\mathcal{Z}}r_{l}(s_{g},z)|\cdot\sum_{z\in\mathcal{Z}}|F_{z_{\Delta}}(z)-F_{z_{\Delta^{\prime}}}(z)\bigg{|}
2rl(,)TV(FzΔ,FzΔ)\displaystyle\leq 2\|r_{l}(\cdot,\cdot)\|_{\infty}\cdot\mathrm{TV}(F_{z_{\Delta}},F_{z_{\Delta^{\prime}}})

The second equality follows from the linearity of expectation, and the third equality follows by noting that, for any random variable X, \mathbb{E}_{X}\mathbbm{1}[X=x]=\Pr[X=x]. The first inequality follows from the triangle inequality together with Hölder's inequality (pairing the \ell_{\infty} and \ell_{1} norms), and the second inequality uses the definition of TV distance. Thus, at T=1, \hat{Q} is (2\|r_{l}(\cdot,\cdot)\|_{\infty})-Lipschitz continuous in TV distance, proving the base case.

Assume, as the inductive hypothesis, that the claim holds for some T\in\mathbb{N}:

|Q^kT(sg,ag,FzΔ)Q^kT(sg,ag,FzΔ)|(t=0T12γt)rl(,)TV(FzΔ,FzΔ)\displaystyle\left|\hat{Q}^{T}_{k}(s_{g},a_{g},F_{z_{\Delta}})-\hat{Q}_{k^{\prime}}^{T}(s_{g},a_{g},F_{z_{\Delta^{\prime}}})\right|\leq\left(\sum_{t=0}^{T-1}2\gamma^{t}\right)\|r_{l}(\cdot,\cdot)\|_{\infty}\cdot\mathrm{TV}\left(F_{z_{\Delta}},F_{z_{\Delta^{\prime}}}\right)

Then, inductively:

|Q^kT+1(sg,ag,FzΔ)\displaystyle|\hat{Q}^{T+1}_{k}(s_{g},a_{g},F_{z_{\Delta}}) Q^kT+1(sg,ag,FzΔ)|\displaystyle-\hat{Q}_{k^{\prime}}^{T+1}(s_{g},a_{g},F_{z_{\Delta^{\prime}}})|
|1|Δ|iΔrl(sg,zi)1|Δ|iΔrl(sg,zi)|\displaystyle\leq\bigg{|}\frac{1}{|\Delta|}\sum_{i\in\Delta}r_{l}(s_{g},z_{i})-\frac{1}{|\Delta^{\prime}|}\sum_{i\in\Delta^{\prime}}r_{l}(s_{g},z_{i})\bigg{|}
+γ|𝔼sg,sΔmaxag𝒜g,aΔ𝒜lkQ^kT(sg,ag,FzΔ)𝔼sg,sΔmaxag𝒜g,aΔ𝒜lkQ^kT(sg,ag,FzΔ)|\displaystyle\quad\quad\quad+\gamma\bigg{|}\mathbb{E}_{s_{g}^{\prime},s^{\prime}_{\Delta}}\max_{\begin{subarray}{c}a_{g}^{\prime}\in\mathcal{A}_{g},\\ a_{\Delta}^{\prime}\in\mathcal{A}_{l}^{k}\end{subarray}}\hat{Q}_{k}^{T}(s_{g}^{\prime},a_{g}^{\prime},F_{z_{\Delta}^{\prime}})-\mathbb{E}_{s_{g}^{\prime},s^{\prime}_{\Delta^{\prime}}}\max_{\begin{subarray}{c}a_{g}^{\prime}\in\mathcal{A}_{g},\\ a_{\Delta^{\prime}}^{\prime}\in\mathcal{A}_{l}^{k^{\prime}}\end{subarray}}\hat{Q}_{k^{\prime}}^{T}(s_{g}^{\prime},a_{g}^{\prime},F_{z_{\Delta^{\prime}}^{\prime}})\bigg{|}
2rl(,)TV(FzΔ,FzΔ)\displaystyle\leq 2\|r_{l}(\cdot,\cdot)\|_{\infty}\cdot\mathrm{TV}(F_{z_{\Delta}},F_{z_{\Delta^{\prime}}})
+γ|𝔼(sg,sΔ)𝒥kmaxag𝒜g,aΔ𝒜lkQ^kT(sg,ag,FzΔ)𝔼(sg,sΔ)𝒥kmaxag𝒜g,aΔ𝒜lkQ^kT(sg,ag,FzΔ)|\displaystyle\quad\quad\quad+\gamma\bigg{|}\mathbb{E}_{(s_{g}^{\prime},s^{\prime}_{\Delta})\sim\mathcal{J}_{k}}\max_{\begin{subarray}{c}a_{g}^{\prime}\in\mathcal{A}_{g},\\ a_{\Delta}^{\prime}\in\mathcal{A}_{l}^{k}\end{subarray}}\hat{Q}_{k}^{T}(s_{g}^{\prime},a_{g}^{\prime},F_{z_{\Delta}^{\prime}})-\mathbb{E}_{(s_{g}^{\prime},s^{\prime}_{\Delta^{\prime}})\sim\mathcal{J}_{k^{\prime}}}\max_{\begin{subarray}{c}a_{g}^{\prime}\in\mathcal{A}_{g},\\ a_{\Delta^{\prime}}^{\prime}\in\mathcal{A}_{l}^{k^{\prime}}\end{subarray}}\hat{Q}_{k^{\prime}}^{T}(s_{g}^{\prime},a_{g}^{\prime},F_{z_{\Delta^{\prime}}^{\prime}})\bigg{|}
2rl(,)TV(FzΔ,FzΔ)+γ(τ=0T12γτ)rl(,)TV(FzΔ,FzΔ)\displaystyle\leq 2\|r_{l}(\cdot,\cdot)\|_{\infty}\cdot\mathrm{TV}(F_{z_{\Delta}},F_{z_{\Delta^{\prime}}})+\gamma\left(\sum_{\tau=0}^{T-1}2\gamma^{\tau}\right)\|r_{l}(\cdot,\cdot)\|_{\infty}\cdot\mathrm{TV}(F_{z_{\Delta}},F_{z_{\Delta^{\prime}}})
=(τ=0T2γτ)rl(,)TV(FzΔ,FzΔ)\displaystyle=\left(\sum_{\tau=0}^{T}2\gamma^{\tau}\right)\|r_{l}(\cdot,\cdot)\|_{\infty}\cdot\mathrm{TV}(F_{z_{\Delta}},F_{z_{\Delta^{\prime}}})

In the first inequality, we rewrite the expectations over the next states as expectations over the joint transition kernels. The second inequality then follows from Lemma D.13; to apply Lemma D.13, we combine the joint expectation over (s_{g}^{\prime},s^{\prime}_{\Delta\cup\Delta^{\prime}}) and then reduce it back to the original form of the expectation. Finally, the third inequality follows from Lemma D.5.

By the inductive hypothesis, the claim is proven.∎

Definition D.4 (Joint Stochastic Kernels).

The joint stochastic kernel on (s_{g},s_{\Delta}) for \Delta\subseteq[n] where |\Delta|=k is defined as \mathcal{J}_{k}:\mathcal{S}_{g}\times\mathcal{S}_{l}^{k}\times\mathcal{S}_{g}\times\mathcal{A}_{g}\times\mathcal{S}_{l}^{k}\times\mathcal{A}_{l}^{k}\to[0,1], where

𝒥k(sg,sΔ|sg,ag,sΔ,aΔ):=Pr[(sg,sΔ)|sg,ag,sΔ,aΔ]\mathcal{J}_{k}(s_{g}^{\prime},s_{\Delta}^{\prime}|s_{g},a_{g},s_{\Delta},a_{\Delta}):=\Pr[(s_{g}^{\prime},s_{\Delta}^{\prime})|s_{g},a_{g},s_{\Delta},a_{\Delta}] (24)
Lemma D.5.

For all TT\in\mathbb{N}, for any ag𝒜g,sg𝒮g,sΔ𝒮lk,aΔ𝒜lk,aΔ𝒜lka_{g}\in\mathcal{A}_{g},s_{g}\in\mathcal{S}_{g},s_{\Delta}\in\mathcal{S}_{l}^{k},a_{\Delta}\in\mathcal{A}_{l}^{k},a^{\prime}_{\Delta}\in\mathcal{A}_{l}^{k}, and for all joint stochastic kernels 𝒥k\mathcal{J}_{k} as defined in Definition D.4:

|𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)maxag,aΔQ^kT(sg,ag,FzΔ)\displaystyle\bigg{|}\mathbb{E}_{(s_{g}^{\prime},s_{\Delta}^{\prime})\sim\mathcal{J}_{k}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}\max_{a_{g}^{\prime},a_{\Delta}^{\prime}}\hat{Q}_{k}^{T}(s^{\prime}_{g},a_{g}^{\prime},F_{z^{\prime}_{\Delta}}) 𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)maxag,aΔQ^kT(sg,ag,FzΔ)|\displaystyle-\mathbb{E}_{(s_{g}^{\prime},s_{\Delta^{\prime}}^{\prime})\sim\mathcal{J}_{k^{\prime}}(\cdot,\cdot|s_{g},a_{g},s_{\Delta^{\prime}},a_{\Delta^{\prime}})}\max_{a_{g}^{\prime},a_{\Delta^{\prime}}}\hat{Q}_{k^{\prime}}^{T}(s^{\prime}_{g},a_{g}^{\prime},F_{z^{\prime}_{\Delta^{\prime}}})\bigg{|}
(τ=0T12γτ)rl(,)TV(FzΔ,FzΔ)\displaystyle\quad\quad\quad\quad\quad\leq\left(\sum_{\tau=0}^{T-1}2\gamma^{\tau}\right)\|r_{l}(\cdot,\cdot)\|_{\infty}\cdot\mathrm{TV}\left(F_{z_{\Delta}},F_{z_{\Delta^{\prime}}}\right)
Proof.

We prove this inductively. At T=0T=0, the statement is true since Q^k0(,,)=Q^k0(,,)=0\hat{Q}_{k}^{0}(\cdot,\cdot,\cdot)=\hat{Q}_{k^{\prime}}^{0}(\cdot,\cdot,\cdot)=0 and TV(,)0\mathrm{TV}(\cdot,\cdot)\geq 0.

At T=1T=1,

|𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)maxag,aΔQ^k1(sg,ag,sΔ,aΔ)𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)maxag,aΔQ^k1(sg,ag,sΔ,aΔ)|\displaystyle\bigg{|}\mathbb{E}_{(s_{g}^{\prime},s_{\Delta}^{\prime})\sim\mathcal{J}_{k}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}\max_{a_{g}^{\prime},a_{\Delta}^{\prime}}\hat{Q}_{k}^{1}(s^{\prime}_{g},a_{g}^{\prime},s_{\Delta}^{\prime},a_{\Delta}^{\prime})-\mathbb{E}_{(s_{g}^{\prime},s_{\Delta^{\prime}}^{\prime})\sim\mathcal{J}_{k^{\prime}}(\cdot,\cdot|s_{g},a_{g},s_{\Delta^{\prime}},a_{\Delta^{\prime}})}\max_{a_{g}^{\prime},a^{\prime}_{\Delta^{\prime}}}\hat{Q}_{k^{\prime}}^{1}(s^{\prime}_{g},a_{g}^{\prime},s^{\prime}_{\Delta^{\prime}},a^{\prime}_{\Delta^{\prime}})\bigg{|}
=|𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)maxag,aΔ[rg(sg,ag)+iΔrl(si,ai,sg)k]\displaystyle=\bigg{|}\mathbb{E}_{(s_{g}^{\prime},s_{\Delta}^{\prime})\sim\mathcal{J}_{k}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}\max_{a_{g}^{\prime},a_{\Delta}^{\prime}}\bigg{[}r_{g}(s_{g}^{\prime},a_{g}^{\prime})+\frac{\sum_{i\in\Delta}r_{l}(s_{i}^{\prime},a_{i}^{\prime},s_{g}^{\prime})}{k}\bigg{]}
𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)maxag,aΔ[rg(sg,ag)+iΔrl(si,ai,sg)k]|\displaystyle-\mathbb{E}_{(s_{g}^{\prime},s_{\Delta^{\prime}}^{\prime})\sim\mathcal{J}_{k^{\prime}}(\cdot,\cdot|s_{g},a_{g},s_{\Delta^{\prime}},a_{\Delta^{\prime}})}\max_{a_{g}^{\prime},a^{\prime}_{\Delta^{\prime}}}\bigg{[}r_{g}(s_{g}^{\prime},a_{g}^{\prime})+\frac{\sum_{i\in\Delta^{\prime}}r_{l}(s_{i}^{\prime},a_{i}^{\prime},s_{g}^{\prime})}{k^{\prime}}\bigg{]}\bigg{|}
=|𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)maxaΔiΔrl(si,ai,sg)k𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)maxaΔiΔrl(si,ai,sg)k|\displaystyle=\bigg{|}\mathbb{E}_{(s_{g}^{\prime},s_{\Delta}^{\prime})\sim\mathcal{J}_{k}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}\!\max_{a_{\Delta}^{\prime}}\frac{\sum_{i\in\Delta}r_{l}(s_{i}^{\prime},a_{i}^{\prime},s_{g}^{\prime})}{k}-\mathbb{E}_{(s_{g}^{\prime},s_{\Delta^{\prime}}^{\prime})\sim\mathcal{J}_{k^{\prime}}(\cdot,\cdot|s_{g},a_{g},s_{\Delta^{\prime}},a_{\Delta^{\prime}})}\max_{a^{\prime}_{\Delta^{\prime}}}\frac{\sum_{i\in\Delta^{\prime}}r_{l}(s_{i}^{\prime},a_{i}^{\prime},s_{g}^{\prime})}{k^{\prime}}\bigg{|}
=|𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)1kiΔr~l(si,sg)𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)1kiΔr~l(si,sg)|\displaystyle=\bigg{|}\mathbb{E}_{(s_{g}^{\prime},s_{\Delta}^{\prime})\sim\mathcal{J}_{k}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}\frac{1}{k}\sum_{i\in\Delta}\tilde{r}_{l}(s_{i}^{\prime},s_{g}^{\prime})-\mathbb{E}_{(s_{g}^{\prime},s_{\Delta^{\prime}}^{\prime})\sim\mathcal{J}_{k^{\prime}}(\cdot,\cdot|s_{g},a_{g},s_{\Delta^{\prime}},a_{\Delta^{\prime}})}\frac{1}{k^{\prime}}\sum_{i\in\Delta^{\prime}}\tilde{r}_{l}(s_{i}^{\prime},s_{g}^{\prime})\bigg{|}

In the last equality, we note that

maxaΔiΔrl(si,ai,sg)\displaystyle\max_{a_{\Delta}^{\prime}}\sum_{i\in\Delta}r_{l}(s_{i}^{\prime},a_{i}^{\prime},s_{g}^{\prime}) =iΔmaxaΔrl(si,ai,sg)\displaystyle=\sum_{i\in\Delta}\max_{a_{\Delta}^{\prime}}r_{l}(s_{i}^{\prime},a_{i}^{\prime},s_{g}^{\prime})
=iΔmaxairl(si,ai,sg)\displaystyle=\sum_{i\in\Delta}\max_{a^{\prime}_{i}}r_{l}(s_{i}^{\prime},a_{i}^{\prime},s_{g}^{\prime})
=iΔr~l(si,sg)\displaystyle=\sum_{i\in\Delta}\tilde{r}_{l}(s_{i}^{\prime},s_{g}^{\prime})

where r~l(si,sg):=maxairl(si,ai,sg)\tilde{r}_{l}(s_{i}^{\prime},s_{g}^{\prime}):=\max_{a_{i}^{\prime}}r_{l}(s_{i}^{\prime},a_{i}^{\prime},s_{g}^{\prime}).
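As a sanity check on this separability step, the following minimal Python sketch (purely illustrative; the agent count, action-set size, and rewards are hypothetical) verifies that maximizing the summed local rewards over the joint action coincides with summing the per-agent maxima.

    import itertools
    import numpy as np

    # Because the summed local reward is separable across the k subsampled agents,
    # maximizing over the joint action a_Delta equals summing per-agent maxima.
    rng = np.random.default_rng(0)
    k, n_actions = 3, 4
    # r[i, a] plays the role of r_l(s_i', a_i', s_g') with the next states held fixed.
    r = rng.normal(size=(k, n_actions))

    joint_max = max(
        sum(r[i, a[i]] for i in range(k))
        for a in itertools.product(range(n_actions), repeat=k)
    )
    per_agent_max = sum(r[i].max() for i in range(k))  # = sum_i max_{a_i'} r_l(s_i', a_i', s_g')
    assert np.isclose(joint_max, per_agent_max)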

Then, we have:

|𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)1kiΔr~l(si,sg)𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)1kiΔr~l(si,sg)|\displaystyle\bigg{|}\mathbb{E}_{(s_{g}^{\prime},s_{\Delta}^{\prime})\sim\mathcal{J}_{k}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}\frac{1}{k}\sum_{i\in\Delta}\tilde{r}_{l}(s_{i}^{\prime},s_{g}^{\prime})-\mathbb{E}_{(s_{g}^{\prime},s_{\Delta^{\prime}}^{\prime})\sim\mathcal{J}_{k^{\prime}}(\cdot,\cdot|s_{g},a_{g},s_{\Delta^{\prime}},a_{\Delta^{\prime}})}\frac{1}{k^{\prime}}\sum_{i\in\Delta^{\prime}}\tilde{r}_{l}(s_{i}^{\prime},s_{g}^{\prime})\bigg{|}
=|𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)1kx𝒮lr~l(x,sg)FsΔ(x)𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)1kx𝒮lr~l(x,sg)FsΔ(x)|\displaystyle=\bigg{|}\mathbb{E}_{(s_{g}^{\prime},s_{\Delta}^{\prime})\sim\mathcal{J}_{k}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}\frac{1}{k}\sum_{x\in\mathcal{S}_{l}}\tilde{r}_{l}(x,s_{g}^{\prime})F_{s^{\prime}_{\Delta}}(x)-\mathbb{E}_{(s_{g}^{\prime},s_{\Delta^{\prime}}^{\prime})\sim\mathcal{J}_{k^{\prime}}(\cdot,\cdot|s_{g},a_{g},s_{\Delta^{\prime}},a_{\Delta^{\prime}})}\frac{1}{k^{\prime}}\sum_{x\in\mathcal{S}_{l}}\tilde{r}_{l}(x,s_{g}^{\prime})F_{s^{\prime}_{\Delta^{\prime}}}(x)\bigg{|}
=|𝔼sgsΔΔ𝒮l|ΔΔ|𝒥|ΔΔ|(,sΔΔ|sg,ag,sΔΔ,aΔΔ)\displaystyle=\bigg{|}\mathbb{E}_{s_{g}^{\prime}\sim\sum_{s^{\prime}_{\Delta\cup\Delta^{\prime}}\in\mathcal{S}_{l}^{|\Delta\cup\Delta^{\prime}|}}\mathcal{J}_{|\Delta\cup\Delta^{\prime}|(\cdot,s^{\prime}_{\Delta\cup\Delta^{\prime}}|s_{g},a_{g},s_{\Delta\cup\Delta^{\prime}},a_{\Delta\cup\Delta^{\prime}})}}
x𝒮lrl~(x,sg)𝔼sΔΔ𝒥|ΔΔ|(|sg,sg,ag,sΔΔ,aΔΔ)[FsΔ(x)FsΔ(x)]|\displaystyle\quad\quad\quad\quad\quad\sum_{x\in\mathcal{S}_{l}}\tilde{r_{l}}(x,s_{g}^{\prime})\mathbb{E}_{s^{\prime}_{\Delta\cup\Delta^{\prime}}\sim\mathcal{J}_{|\Delta\cup\Delta^{\prime}|}(\cdot|s_{g}^{\prime},s_{g},a_{g},s_{\Delta\cup\Delta^{\prime}},a_{\Delta\cup\Delta^{\prime}})}[F_{s^{\prime}_{\Delta}}(x)-F_{s^{\prime}_{\Delta^{\prime}}}(x)]\bigg{|}
rl~(,)𝔼sgsΔΔ𝒮l|ΔΔ|𝒥|ΔΔ|(,sΔΔ|sg,ag,sΔΔ,aΔΔ)\displaystyle\leq\|\tilde{r_{l}}(\cdot,\cdot)\|_{\infty}\cdot\mathbb{E}_{s_{g}^{\prime}\sim\sum_{s^{\prime}_{\Delta\cup\Delta^{\prime}}\in\mathcal{S}_{l}^{|\Delta\cup\Delta^{\prime}|}}\mathcal{J}_{|\Delta\cup\Delta^{\prime}|(\cdot,s^{\prime}_{\Delta\cup\Delta^{\prime}}|s_{g},a_{g},s_{\Delta\cup\Delta^{\prime}},a_{\Delta\cup\Delta^{\prime}})}}
x𝒮l|𝔼sΔΔ|sgFsΔ(x)𝔼sΔΔ|sgFsΔ(x)|\displaystyle\quad\quad\quad\quad\quad\sum_{x\in\mathcal{S}_{l}}|\mathbb{E}_{s^{\prime}_{\Delta\cup\Delta^{\prime}}|s_{g}^{\prime}}F_{s_{\Delta}^{\prime}}(x)-\mathbb{E}_{s^{\prime}_{\Delta\cup\Delta^{\prime}}|s_{g}^{\prime}}F_{s_{\Delta^{\prime}}^{\prime}}(x)|
rl(,)𝔼sgsΔΔ𝒮l|ΔΔ|𝒥|ΔΔ|(,sΔΔ|sg,ag,sΔΔ,aΔΔ)\displaystyle\leq\|{r_{l}}(\cdot,\cdot)\|_{\infty}\cdot\mathbb{E}_{s_{g}^{\prime}\sim\sum_{s^{\prime}_{\Delta\cup\Delta^{\prime}}\in\mathcal{S}_{l}^{|\Delta\cup\Delta^{\prime}|}}\mathcal{J}_{|\Delta\cup\Delta^{\prime}|(\cdot,s^{\prime}_{\Delta\cup\Delta^{\prime}}|s_{g},a_{g},s_{\Delta\cup\Delta^{\prime}},a_{\Delta\cup\Delta^{\prime}})}}
x𝒮l|𝔼sΔΔ|sgFsΔ(x)𝔼sΔΔ|sgFsΔ(x)|\displaystyle\quad\quad\quad\quad\quad\sum_{x\in\mathcal{S}_{l}}|\mathbb{E}_{s^{\prime}_{\Delta\cup\Delta^{\prime}}|s_{g}^{\prime}}F_{s_{\Delta}^{\prime}}(x)-\mathbb{E}_{s^{\prime}_{\Delta\cup\Delta^{\prime}}|s_{g}^{\prime}}F_{s_{\Delta^{\prime}}^{\prime}}(x)|
2rl(,)𝔼sgsΔΔ𝒮l|ΔΔ|𝒥|ΔΔ|(,sΔΔ|sg,ag,sΔΔ,aΔΔ)TV(𝔼sΔΔ|sgFsΔ,𝔼sΔΔ|sgFsΔ)\displaystyle\leq 2\|r_{l}(\cdot,\cdot)\|_{\infty}\cdot\mathbb{E}_{s_{g}^{\prime}\sim\sum_{s^{\prime}_{\Delta\cup\Delta^{\prime}}\in\mathcal{S}_{l}^{|\Delta\cup\Delta^{\prime}|}}\mathcal{J}_{|\Delta\cup\Delta^{\prime}|(\cdot,s^{\prime}_{\Delta\cup\Delta^{\prime}}|s_{g},a_{g},s_{\Delta\cup\Delta^{\prime}},a_{\Delta\cup\Delta^{\prime}})}}\mathrm{TV}(\mathbb{E}_{s^{\prime}_{\Delta\cup\Delta^{\prime}}|s_{g}^{\prime}}F_{s_{\Delta}^{\prime}},\mathbb{E}_{s^{\prime}_{\Delta\cup\Delta^{\prime}}|s_{g}^{\prime}}F_{s_{\Delta^{\prime}}^{\prime}})
\displaystyle\leq 2\|r_{l}(\cdot,\cdot)\|_{\infty}\cdot\mathrm{TV}(F_{z_{\Delta}},F_{z_{\Delta^{\prime}}})

The first equality follows from noting that f(x)=\sum_{x^{\prime}\in\mathcal{X}}f(x^{\prime})\mathbbm{1}\{x=x^{\prime}\} and from the Fubini–Tonelli theorem, which allows us to swap the order of summation since the summand is finite. The second equality uses the law of total expectation. The first inequality uses Jensen's inequality and the triangle inequality. The second inequality uses \|\tilde{r}_{l}(\cdot,\cdot)\|_{\infty}\leq\|r_{l}(\cdot,\cdot)\|_{\infty}, which holds because \|r_{l}\|_{\infty} is the infinity norm (sup-norm) of the local reward function and is therefore at least as large as any other element in the image of r_{l}. The third inequality follows from the definition of total variation distance, and the final inequality follows from Lemma D.7. This proves the base case.
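The base case rests on the elementary bound |\mathbb{E}_{F}[f]-\mathbb{E}_{G}[f]|\leq 2\|f\|_{\infty}\cdot\mathrm{TV}(F,G) for distributions F,G on a finite set. The following minimal Python sketch (purely illustrative; the support size and random draws are hypothetical) checks this inequality numerically.

    import numpy as np

    # For any bounded f and distributions F, G on a finite set:
    # |sum_x f(x) (F(x) - G(x))| <= 2 * ||f||_inf * TV(F, G).
    rng = np.random.default_rng(1)
    m = 10
    F = rng.dirichlet(np.ones(m))
    G = rng.dirichlet(np.ones(m))
    f = rng.uniform(-5, 5, size=m)

    lhs = abs(np.dot(f, F - G))
    tv = 0.5 * np.abs(F - G).sum()
    assert lhs <= 2 * np.abs(f).max() * tv + 1e-12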

Then, assume that for TtT\leq t^{\prime}\in\mathbb{N}, for all joint stochastic kernels 𝒥k\mathcal{J}_{k} and 𝒥k\mathcal{J}_{k^{\prime}}, and for all ag𝒜g,aΔ𝒜lka_{g}^{\prime}\in\mathcal{A}_{g},a_{\Delta}^{\prime}\in\mathcal{A}_{l}^{k}:

|𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)maxag,aΔQ^kT(sg,ag,sΔ,aΔ)\displaystyle\bigg{|}\mathbb{E}_{(s_{g}^{\prime},s_{\Delta}^{\prime})\sim\mathcal{J}_{k}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}\max_{a_{g}^{\prime},a_{\Delta}^{\prime}}\hat{Q}_{k}^{T}(s^{\prime}_{g},a_{g}^{\prime},s_{\Delta}^{\prime},a_{\Delta}^{\prime}) 𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)maxag,aΔQ^kT(sg,ag,sΔ,aΔ)|\displaystyle-\mathbb{E}_{(s_{g}^{\prime},s_{\Delta^{\prime}}^{\prime})\sim\mathcal{J}_{k^{\prime}}(\cdot,\cdot|s_{g},a_{g},s_{\Delta^{\prime}},a_{\Delta^{\prime}})}\max_{a_{g}^{\prime},a^{\prime}_{\Delta^{\prime}}}\hat{Q}_{k^{\prime}}^{T}(s^{\prime}_{g},a_{g}^{\prime},s^{\prime}_{\Delta^{\prime}},a^{\prime}_{\Delta^{\prime}})\bigg{|}
2(t=0T1γt)rl(,)TV(FzΔ,FzΔ)\displaystyle\leq 2\left(\sum_{t=0}^{T-1}\!\gamma^{t}\right)\!\|r_{l}(\cdot,\cdot)\|_{\infty}\cdot\mathrm{TV}(F_{z_{\Delta}},F_{z_{\Delta^{\prime}}})

For the remainder of the proof, we adopt the shorthand 𝔼(sg,sΔ)𝒥\mathbb{E}_{(s_{g}^{\prime},s_{\Delta}^{\prime})\sim{\mathcal{J}}} to denote 𝔼(sg,sΔ)𝒥|Δ|(,|sg,ag,sΔ,aΔ)\mathbb{E}_{(s_{g}^{\prime},s_{\Delta}^{\prime})\sim\mathcal{J}_{|\Delta|}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}, and 𝔼(sg′′,sΔ′′)𝒥\mathbb{E}_{(s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime})\sim{\mathcal{J}}} to denote 𝔼(sg′′,sΔ′′)𝒥|Δ|(,|sg,ag,sΔ,aΔ)\mathbb{E}_{(s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime})\sim\mathcal{J}_{|\Delta|}(\cdot,\cdot|s_{g}^{\prime},a_{g}^{\prime},s_{\Delta}^{\prime},a_{\Delta}^{\prime})}.

Then, inductively, we have:

|𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)maxag,aΔQ^kT+1(sg,ag,sΔ,aΔ)𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)maxag,aΔQ^kT+1(sg,ag,sΔ,aΔ)|\displaystyle\bigg{|}\mathbb{E}_{(s_{g}^{\prime},s_{\Delta}^{\prime})\sim{\mathcal{J}_{k}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}}\max_{a_{g}^{\prime},a_{\Delta}^{\prime}}\hat{Q}_{k}^{T+1}(s^{\prime}_{g},a_{g}^{\prime},s_{\Delta}^{\prime},a_{\Delta}^{\prime})-\mathbb{E}_{(s_{g}^{\prime},s_{\Delta^{\prime}}^{\prime})\sim{\mathcal{J}_{k^{\prime}}(\cdot,\cdot|s_{g},a_{g},s_{\Delta^{\prime}},a_{\Delta^{\prime}})}}\max_{a_{g}^{\prime},a^{\prime}_{\Delta^{\prime}}}\hat{Q}_{k^{\prime}}^{T+1}(s^{\prime}_{g},a_{g}^{\prime},s^{\prime}_{\Delta^{\prime}},a^{\prime}_{\Delta^{\prime}})\bigg{|}
=|𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)maxag,aΔ[rΔ(sg,ag,sΔ,aΔ)+γ𝔼(sg′′,sΔ′′)𝒥k(,|sg,ag,sΔ,aΔ)maxag′′,aΔ′′Q^kT(sg′′,sΔ′′,ag′′,aΔ′′)](I)\displaystyle=\bigg{|}\underbrace{\mathbb{E}_{(s_{g}^{\prime},s_{\Delta}^{\prime})\sim{\mathcal{J}_{k}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}}\max_{a_{g}^{\prime},a_{\Delta}^{\prime}}\bigg{[}r_{\Delta}(s_{g}^{\prime},a_{g}^{\prime},s^{\prime}_{\Delta},a^{\prime}_{\Delta})+\gamma\mathbb{E}_{(s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime})\sim\mathcal{J}_{k}(\cdot,\cdot|s_{g}^{\prime},a_{g}^{\prime},s_{\Delta}^{\prime},a_{\Delta}^{\prime})}\max_{a_{g}^{\prime\prime},a_{\Delta}^{\prime\prime}}\hat{Q}_{k}^{T}(s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime},a_{g}^{\prime\prime},a_{\Delta}^{\prime\prime})\bigg{]}}_{\text{(I)}}
𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)maxag,aΔ[rΔ(sg,ag,sΔ,aΔ)+γ𝔼(sg′′,sΔ′′)𝒥k(,|sg,ag,sΔ,aΔ)maxag′′,aΔ′′Q^kT(sg′′,sΔ′′,ag′′,aΔ′′)](II)|\displaystyle-\underbrace{\mathbb{E}_{(s_{g}^{\prime},s_{\Delta^{\prime}}^{\prime})\sim{\mathcal{J}_{k^{\prime}}(\cdot,\cdot|s_{g},a_{g},s_{\Delta^{\prime}},a_{\Delta^{\prime}})}}\max_{a_{g}^{\prime},a_{\Delta^{\prime}}^{\prime}}\bigg{[}r_{\Delta^{\prime}}(s_{g}^{\prime},a_{g}^{\prime},s^{\prime}_{\Delta^{\prime}},a^{\prime}_{\Delta^{\prime}})+\gamma\mathbb{E}_{(s_{g}^{\prime\prime},s_{\Delta^{\prime}}^{\prime\prime})\sim\mathcal{J}_{k^{\prime}}(\cdot,\cdot|s_{g}^{\prime},a_{g}^{\prime},s_{\Delta^{\prime}}^{\prime},a_{\Delta^{\prime}}^{\prime})}\max_{a_{g}^{\prime\prime},a_{\Delta^{\prime}}^{\prime\prime}}\hat{Q}_{k^{\prime}}^{T}(s_{g}^{\prime\prime},s_{\Delta^{\prime}}^{\prime\prime},a_{g}^{\prime\prime},a_{\Delta^{\prime}}^{\prime\prime})\bigg{]}}_{\text{(II)}}\bigg{|}

Let

a~g,a~Δ=argmaxag𝒜g,aΔ𝒜lk[rΔ(sg,ag,sΔ,aΔ)+γ𝔼(sg′′,sΔ′′)𝒥k(,|sg,ag,sΔ,aΔ)maxag′′,aΔ′′Q^kT(sg′′,sΔ′′,ag′′,aΔ′′)],\tilde{a}_{g},\tilde{a}_{\Delta}=\arg\max_{a_{g}^{\prime}\in\mathcal{A}_{g},a_{\Delta}^{\prime}\in\mathcal{A}_{l}^{k}}\bigg{[}r_{\Delta}(s_{g}^{\prime},a_{g}^{\prime},s^{\prime}_{\Delta},a^{\prime}_{\Delta})+\gamma\mathbb{E}_{(s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime})\sim\mathcal{J}_{k}(\cdot,\cdot|s_{g}^{\prime},a_{g}^{\prime},s_{\Delta}^{\prime},a_{\Delta}^{\prime})}\max_{a_{g}^{\prime\prime},a_{\Delta}^{\prime\prime}}\hat{Q}_{k}^{T}(s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime},a_{g}^{\prime\prime},a_{\Delta}^{\prime\prime})\bigg{]},

and

\hat{a}_{g},\hat{a}_{\Delta^{\prime}}=\arg\max_{a_{g}^{\prime}\in\mathcal{A}_{g},a_{\Delta^{\prime}}^{\prime}\in\mathcal{A}_{l}^{k^{\prime}}}\bigg{[}r_{\Delta^{\prime}}(s_{g}^{\prime},a_{g}^{\prime},s^{\prime}_{\Delta^{\prime}},a^{\prime}_{\Delta^{\prime}})+\gamma\mathbb{E}_{(s_{g}^{\prime\prime},s_{\Delta^{\prime}}^{\prime\prime})\sim\mathcal{J}_{k^{\prime}}(\cdot,\cdot|s_{g}^{\prime},a_{g}^{\prime},s_{\Delta^{\prime}}^{\prime},a_{\Delta^{\prime}}^{\prime})}\max_{a_{g}^{\prime\prime},a_{\Delta^{\prime}}^{\prime\prime}}\hat{Q}_{k^{\prime}}^{T}(s_{g}^{\prime\prime},s_{\Delta^{\prime}}^{\prime\prime},a_{g}^{\prime\prime},a_{\Delta^{\prime}}^{\prime\prime})\bigg{]}

Then, define a~ΔΔ𝒜l|ΔΔ|\tilde{a}_{\Delta\cup\Delta^{\prime}}\in\mathcal{A}_{l}^{|\Delta\cup\Delta^{\prime}|} by

a~i={a~i,if iΔ,a^i,if iΔΔ\tilde{a}_{i}=\begin{cases}\tilde{a}_{i},&\text{if $i\in\Delta$},\\ \hat{a}_{i},&\text{if $i\in\Delta^{\prime}\setminus\Delta$}\end{cases}

Similarly, define a^ΔΔ𝒜l|ΔΔ|\hat{a}_{\Delta\cup\Delta^{\prime}}\in\mathcal{A}_{l}^{|\Delta\cup\Delta^{\prime}|} by

a^i={a^i,iΔa~i,iΔΔ\hat{a}_{i}=\begin{cases}\hat{a}_{i},&i\in\Delta^{\prime}\\ \tilde{a}_{i},&i\in\Delta\setminus\Delta^{\prime}\end{cases}

Suppose (I) \geq (II). Then,

|𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)maxag,aΔQ^kT+1(sg,ag,sΔ,aΔ)𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)maxag,aΔQ^kT+1(sg,ag,sΔ,aΔ)|\displaystyle\bigg{|}\mathbb{E}_{(s_{g}^{\prime},s_{\Delta}^{\prime})\sim{\mathcal{J}_{k}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}}\max_{a_{g}^{\prime},a_{\Delta}^{\prime}}\hat{Q}_{k}^{T+1}(s^{\prime}_{g},a_{g}^{\prime},s_{\Delta}^{\prime},a_{\Delta}^{\prime})-\mathbb{E}_{(s_{g}^{\prime},s_{\Delta^{\prime}}^{\prime})\sim{\mathcal{J}_{k^{\prime}}(\cdot,\cdot|s_{g},a_{g},s_{\Delta^{\prime}},a_{\Delta^{\prime}})}}\max_{a_{g}^{\prime},a^{\prime}_{\Delta^{\prime}}}\hat{Q}_{k^{\prime}}^{T+1}(s^{\prime}_{g},a_{g}^{\prime},s^{\prime}_{\Delta^{\prime}},a^{\prime}_{\Delta^{\prime}})\bigg{|}
=|𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)Q^kT+1(sg,a~g,sΔ,a~Δ)𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)Q^kT+1(sg,a^g,sΔ,a^Δ)|\displaystyle=\bigg{|}\mathbb{E}_{(s_{g}^{\prime},s_{\Delta}^{\prime})\sim{\mathcal{J}_{k}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}}\hat{Q}_{k}^{T+1}(s^{\prime}_{g},\tilde{a}_{g},s_{\Delta}^{\prime},\tilde{a}_{\Delta})-\mathbb{E}_{(s_{g}^{\prime},s_{\Delta^{\prime}}^{\prime})\sim{\mathcal{J}_{k^{\prime}}(\cdot,\cdot|s_{g},a_{g},s_{\Delta^{\prime}},a_{\Delta^{\prime}})}}\hat{Q}_{k^{\prime}}^{T+1}(s^{\prime}_{g},\hat{a}_{g},s^{\prime}_{\Delta^{\prime}},\hat{a}_{\Delta^{\prime}})\bigg{|}
|𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)Q^kT+1(sg,a~g,sΔ,a~Δ)𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)Q^kT+1(sg,a~g,sΔ,a~Δ)|\displaystyle\leq\bigg{|}\mathbb{E}_{(s_{g}^{\prime},s_{\Delta}^{\prime})\sim{\mathcal{J}_{k}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}}\hat{Q}_{k}^{T+1}(s^{\prime}_{g},\tilde{a}_{g},s_{\Delta}^{\prime},\tilde{a}_{\Delta})-\mathbb{E}_{(s_{g}^{\prime},s_{\Delta^{\prime}}^{\prime})\sim{\mathcal{J}_{k^{\prime}}(\cdot,\cdot|s_{g},a_{g},s_{\Delta^{\prime}},a_{\Delta^{\prime}})}}\hat{Q}_{k^{\prime}}^{T+1}(s^{\prime}_{g},\tilde{a}_{g},s^{\prime}_{\Delta^{\prime}},\tilde{a}_{\Delta^{\prime}})\bigg{|}
=|𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)[rΔ(sg,sΔ,a~g,a~Δ)+γ𝔼(sg,sΔ)𝒥k(,|sg,a~g,sΔ,a~Δ)maxag,aΔQ^kT(sg,ag,sΔ,aΔ)]\displaystyle=\bigg{|}\mathbb{E}_{(s_{g}^{\prime},s_{\Delta}^{\prime})\sim{\mathcal{J}_{k}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}}\bigg{[}r_{\Delta}(s_{g}^{\prime},s_{\Delta}^{\prime},\tilde{a}_{g},\tilde{a}_{\Delta})+\gamma\mathbb{E}_{(s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime})\sim\mathcal{J}_{k}(\cdot,\cdot|s_{g}^{\prime},\tilde{a}_{g},s_{\Delta}^{\prime},\tilde{a}_{\Delta})}\max_{a_{g}^{\prime\prime},a_{\Delta}^{\prime\prime}}\hat{Q}_{k}^{T}(s^{\prime\prime}_{g},a_{g}^{\prime\prime},s_{\Delta}^{\prime\prime},{a}^{\prime\prime}_{\Delta})\bigg{]}
𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)[rΔ(sg,sΔ,a~g,a~Δ)+γ𝔼(sg,sΔ)𝒥k(,|sg,a~g,sΔ,a~Δ)maxag,aΔQ^kT(sg,ag,sΔ,saΔ)]|\displaystyle-\mathbb{E}_{(s_{g}^{\prime},s_{\Delta^{\prime}}^{\prime})\sim{\mathcal{J}_{k^{\prime}}(\cdot,\cdot|s_{g},a_{g},s_{\Delta^{\prime}},a_{\Delta^{\prime}})}}\bigg{[}r_{\Delta^{\prime}}(s_{g}^{\prime},s^{\prime}_{\Delta^{\prime}},\tilde{a}_{g},\tilde{a}_{\Delta^{\prime}})+\gamma\mathbb{E}_{(s_{g}^{\prime\prime},s_{\Delta^{\prime}}^{\prime\prime})\sim\mathcal{J}_{k^{\prime}}(\cdot,\cdot|s_{g}^{\prime},\tilde{a}_{g},s^{\prime}_{\Delta^{\prime}},\tilde{a}_{\Delta^{\prime}})}\max_{a_{g}^{\prime\prime},a_{\Delta^{\prime}}^{\prime\prime}}\hat{Q}_{k^{\prime}}^{T}(s^{\prime\prime}_{g},{a}^{\prime\prime}_{g},s^{\prime\prime}_{\Delta^{\prime}},s{a}^{\prime\prime}_{\Delta^{\prime}})\bigg{]}\bigg{|}
|𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)rΔ(sg,sΔ,a~g,a~Δ)𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)rΔ(sg,sΔ,a~g,a~Δ)|\displaystyle\leq\bigg{|}\mathbb{E}_{(s_{g}^{\prime},s_{\Delta}^{\prime})\sim{\mathcal{J}_{k}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}}r_{\Delta}(s_{g}^{\prime},s_{\Delta}^{\prime},\tilde{a}_{g},\tilde{a}_{\Delta})-\mathbb{E}_{(s_{g}^{\prime},s_{\Delta^{\prime}}^{\prime})\sim{\mathcal{J}_{k^{\prime}}(\cdot,\cdot|s_{g},a_{g},s_{\Delta^{\prime}},a_{\Delta^{\prime}})}}r_{\Delta^{\prime}}(s_{g}^{\prime},s^{\prime}_{\Delta^{\prime}},\tilde{a}_{g},\tilde{a}_{\Delta^{\prime}})\bigg{|}
+γ|𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)𝔼(sg,sΔ)𝒥k(,|sg,a~g,sΔ,a~Δ)maxag,aΔQ^kT(sg,ag,sΔ,aΔ)\displaystyle+\gamma\bigg{|}\mathbb{E}_{(s_{g}^{\prime},s_{\Delta}^{\prime})\sim{\mathcal{J}_{k}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}}\mathbb{E}_{(s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime})\sim\mathcal{J}_{k}(\cdot,\cdot|s_{g}^{\prime},\tilde{a}_{g},s_{\Delta}^{\prime},\tilde{a}_{\Delta})}\max_{a_{g}^{\prime\prime},a_{\Delta}^{\prime\prime}}\hat{Q}_{k}^{T}(s^{\prime\prime}_{g},{a}^{\prime\prime}_{g},s_{\Delta}^{\prime\prime},{a}^{\prime\prime}_{\Delta})
𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)𝔼(sg,sΔ)𝒥k(,|sg,a~g,sΔ,a~Δ)maxag,aΔQ^kT(sg,ag,sΔ,aΔ)|\displaystyle-\mathbb{E}_{(s_{g}^{\prime},s_{\Delta^{\prime}}^{\prime})\sim{\mathcal{J}_{k^{\prime}}(\cdot,\cdot|s_{g},a_{g},s_{\Delta^{\prime}},a_{\Delta^{\prime}})}}\mathbb{E}_{(s_{g}^{\prime\prime},s_{\Delta^{\prime}}^{\prime\prime})\sim\mathcal{J}_{k^{\prime}}(\cdot,\cdot|s_{g}^{\prime},\tilde{a}_{g},s^{\prime}_{\Delta^{\prime}},\tilde{a}_{\Delta^{\prime}})}\max_{a_{g}^{\prime\prime},a_{\Delta^{\prime}}^{\prime\prime}}\hat{Q}_{k^{\prime}}^{T}(s^{\prime\prime}_{g},{a}^{\prime\prime}_{g},s^{\prime\prime}_{\Delta^{\prime}},{a}^{\prime\prime}_{\Delta^{\prime}})\bigg{|}
=|𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)1kiΔrl(si,sg,a~i)𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)1kiΔrl(si,sg,a~i)|\displaystyle=\bigg{|}\mathbb{E}_{(s_{g}^{\prime},s_{\Delta}^{\prime})\sim{\mathcal{J}_{k}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}}\frac{1}{k}\sum_{i\in\Delta}r_{l}(s_{i}^{\prime},s_{g}^{\prime},\tilde{a}_{i})-\mathbb{E}_{(s_{g}^{\prime},s_{\Delta^{\prime}}^{\prime})\sim{\mathcal{J}_{k^{\prime}}(\cdot,\cdot|s_{g},a_{g},s_{\Delta^{\prime}},a_{\Delta^{\prime}})}}\frac{1}{k^{\prime}}\sum_{i\in\Delta^{\prime}}r_{l}(s_{i}^{\prime},s_{g}^{\prime},\tilde{a}_{i})\bigg{|}
+γ|𝔼(sg,sΔ)𝒥~k(,|sg,ag,sΔ,aΔ)maxag,aΔQ^kT(sg,ag,sΔ,aΔ)𝔼(sg,sΔ)𝒥~k(,|sg,ag,sΔ,aΔ)maxag,aΔQ^kT(sg,ag,sΔ,aΔ)|\displaystyle+\gamma\bigg{|}\mathbb{E}_{(s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime})\sim{\tilde{\mathcal{J}}_{k}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}}\max_{a_{g}^{\prime\prime},a_{\Delta}^{\prime\prime}}\hat{Q}_{k}^{T}(s^{\prime\prime}_{g},{a}^{\prime\prime}_{g},s_{\Delta}^{\prime\prime},{a}^{\prime\prime}_{\Delta})-\mathbb{E}_{(s_{g}^{\prime\prime},s_{\Delta^{\prime\prime}}^{\prime})\sim{\tilde{\mathcal{J}}_{k^{\prime}}(\cdot,\cdot|s_{g},a_{g},s_{\Delta^{\prime}},a_{\Delta^{\prime}})}}\max_{a_{g}^{\prime\prime},a_{\Delta^{\prime}}^{\prime\prime}}\hat{Q}_{k^{\prime}}^{T}(s^{\prime\prime}_{g},{a}^{\prime\prime}_{g},s^{\prime\prime}_{\Delta^{\prime}},{a}^{\prime\prime}_{\Delta^{\prime}})\bigg{|}
\displaystyle\leq 2\|r_{l}(\cdot,\cdot)\|_{\infty}\cdot\mathrm{TV}(F_{z_{\Delta}},F_{z_{\Delta^{\prime}}})+\gamma\left(\sum_{t=0}^{T-1}2\gamma^{t}\right)\|r_{l}(\cdot,\cdot)\|_{\infty}\cdot\mathrm{TV}(F_{z_{\Delta}},F_{z_{\Delta^{\prime}}})
\displaystyle=\left(\sum_{t=0}^{T}2\gamma^{t}\right)\|r_{l}(\cdot,\cdot)\|_{\infty}\cdot\mathrm{TV}(F_{z_{\Delta}},F_{z_{\Delta^{\prime}}})

The first equality rewrites each term with its respective maximizing actions. The first inequality upper-bounds the difference by requiring both terms to use the common action \tilde{a}. Using the Bellman equation, the second equality expands \hat{Q}_{k}^{T+1} and \hat{Q}_{k^{\prime}}^{T+1}. The second inequality follows from the triangle inequality. The third equality follows by subtracting the common r_{g}(s_{g}^{\prime},\tilde{a}_{g}) terms from the rewards and noting that the nested expectations \mathbb{E}_{(s_{g}^{\prime},s_{\Delta}^{\prime})\sim\mathcal{J}_{k}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}\mathbb{E}_{(s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime})\sim\mathcal{J}_{k}(\cdot,\cdot|s_{g}^{\prime},\tilde{a}_{g},s_{\Delta}^{\prime},\tilde{a}_{\Delta})} can be combined into the single expectation \mathbb{E}_{(s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime})\sim\tilde{\mathcal{J}}_{k}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}.

In the special case where aΔ=a~Δa_{\Delta}=\tilde{a}_{\Delta}, we derive a closed form expression for 𝒥~k\tilde{\mathcal{J}}_{k} in Lemma D.10. To justify the third inequality, the second term follows from the induction hypothesis and the first term follows from the following derivation:

|𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)1kiΔrl(si,sg,a~i)𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)1kiΔrl(si,sg,a~i)|\displaystyle\bigg{|}\mathbb{E}_{(s_{g}^{\prime},s_{\Delta}^{\prime})\sim\mathcal{J}_{k}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}\frac{1}{k}\sum_{i\in\Delta}r_{l}(s_{i}^{\prime},s_{g}^{\prime},\tilde{a}_{i})-\mathbb{E}_{(s_{g}^{\prime},s_{\Delta^{\prime}}^{\prime})\sim\mathcal{J}_{k^{\prime}}(\cdot,\cdot|s_{g},a_{g},s_{\Delta^{\prime}},a_{\Delta^{\prime}})}\frac{1}{k^{\prime}}\sum_{i\in\Delta^{\prime}}r_{l}(s_{i}^{\prime},s_{g}^{\prime},\tilde{a}_{i})\bigg{|}
=|𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)1kiΔra~l(si,sg)𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)1kiΔrla~(si,sg)|\displaystyle\quad\quad\quad=\bigg{|}\mathbb{E}_{(s_{g}^{\prime},s_{\Delta}^{\prime})\sim\mathcal{J}_{k}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}\frac{1}{k}\sum_{i\in\Delta}r^{\tilde{a}}_{l}(s_{i}^{\prime},s_{g}^{\prime})-\mathbb{E}_{(s_{g}^{\prime},s_{\Delta^{\prime}}^{\prime})\sim\mathcal{J}_{k^{\prime}}(\cdot,\cdot|s_{g},a_{g},s_{\Delta^{\prime}},a_{\Delta^{\prime}})}\frac{1}{k^{\prime}}\sum_{i\in\Delta^{\prime}}r_{l}^{\tilde{a}}(s_{i}^{\prime},s_{g}^{\prime})\bigg{|}
rla~(,)𝔼sgsΔΔ𝒮l|ΔΔ|𝒥|ΔΔ|(,sΔΔ|sg,ag,sΔΔ,aΔΔ)TV(𝔼sΔΔ|sgFsΔ,𝔼sΔΔ|sgFsΔ)\displaystyle\quad\quad\quad\leq\|r_{l}^{\tilde{a}}(\cdot,\cdot)\|_{\infty}\cdot\mathbb{E}_{s_{g}^{\prime}\sim\sum_{s^{\prime}_{\Delta\cup\Delta^{\prime}}\in\mathcal{S}_{l}^{|\Delta\cup\Delta^{\prime}|}}\mathcal{J}_{|\Delta\cup\Delta^{\prime}|}(\cdot,s^{\prime}_{\Delta\cup\Delta^{\prime}}|s_{g},a_{g},s_{\Delta\cup\Delta^{\prime}},a_{\Delta\cup\Delta^{\prime}})}\mathrm{TV}(\mathbb{E}_{s^{\prime}_{\Delta\cup\Delta^{\prime}}|s_{g}^{\prime}}F_{s_{\Delta}},\mathbb{E}_{s^{\prime}_{\Delta\cup\Delta^{\prime}}|s_{g}^{\prime}}F_{s_{\Delta^{\prime}}})
2rl(,)TV(FzΔ,FzΔ)\displaystyle\quad\quad\quad\leq 2\|r_{l}(\cdot,\cdot)\|_{\infty}\cdot\mathrm{TV}(F_{z_{\Delta}},F_{z_{\Delta^{\prime}}})

The above derivation follows from the same argument as in the base case, where r_{l}^{\tilde{a}}(s_{i}^{\prime},s_{g}^{\prime}):=r_{l}(s_{i}^{\prime},s_{g}^{\prime},\tilde{a}_{i}) for any i\in\Delta\cup\Delta^{\prime}. Similarly, if (I) < (II), an analogous argument that replaces \tilde{a}_{\Delta} with \hat{a}_{\Delta} yields the same result.

Therefore, by induction, the claim is proven.∎

Remark D.6.

Given a joint transition probability function \mathcal{J}_{|\Delta\cup\Delta^{\prime}|} as defined in Definition D.4, we can recover the transition function \mathcal{J}_{1} for a single agent i\in\Delta\cup\Delta^{\prime} using the law of total probability and the conditional independence of s_{i}^{\prime} from the other agents' next states (Lemma D.9), as shown in Equation 25. This characterization is crucial in Lemma D.7 and Lemma D.5:

\mathcal{J}_{1}(\cdot|s_{g}^{\prime},s_{g},a_{g},s_{i},a_{i})=\sum_{s^{\prime}_{\Delta\cup\Delta^{\prime}\setminus i}\in\mathcal{S}_{l}^{|\Delta\cup\Delta^{\prime}|-1}}\mathcal{J}_{|\Delta\cup\Delta^{\prime}|}(s^{\prime}_{\Delta\cup\Delta^{\prime}\setminus i}\circ s^{\prime}_{i}|s_{g}^{\prime},s_{g},a_{g},s_{\Delta\cup\Delta^{\prime}},a_{\Delta\cup\Delta^{\prime}}) (25)

Here, we use a conditional independence property proven in Lemma D.9.
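The following minimal Python sketch (purely illustrative; the number of agents, local state-space size, and kernels are hypothetical) illustrates Equation 25: under the conditional independence of Lemma D.9, marginalizing the joint kernel over the other agents' next states recovers the single-agent kernel \mathcal{J}_{1}.

    import itertools
    import numpy as np

    # Under the conditional independence of Lemma D.9, the joint next-state law
    # factorizes across agents; summing it over the other agents' next states
    # recovers the single-agent kernel J_1, as in Equation 25.
    rng = np.random.default_rng(2)
    m, n_s = 3, 4                            # m = |Delta ∪ Delta'|, n_s = |S_l|
    P = rng.dirichlet(np.ones(n_s), size=m)  # P[i] = J_1(. | s_g', s_g, a_g, s_i, a_i)

    joint = np.zeros((n_s,) * m)             # joint kernel for the fixed conditioning
    for s_next in itertools.product(range(n_s), repeat=m):
        joint[s_next] = np.prod([P[i, s_next[i]] for i in range(m)])

    i = 0
    marginal_i = joint.sum(axis=tuple(j for j in range(m) if j != i))
    assert np.allclose(marginal_i, P[i])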

Lemma D.7.
TV(𝔼sΔΔ|sg𝒥|ΔΔ|(|sg,sg,ag,sΔΔ,aΔΔ)FsΔ,𝔼sΔΔ|sg𝒥|ΔΔ|(|sg,sg,ag,sΔΔ,aΔΔ)FsΔ)TV(FzΔ,FzΔ)\mathrm{TV}\bigg{(}\mathbb{E}_{s^{\prime}_{\Delta\cup\Delta^{\prime}}|s_{g}^{\prime}\sim\mathcal{J}_{|\Delta\cup\Delta^{\prime}|}(\cdot|s_{g}^{\prime},s_{g},a_{g},s_{\Delta\cup\Delta^{\prime}},a_{\Delta\cup\Delta^{\prime}})}F_{s^{\prime}_{\Delta}},\mathbb{E}_{s^{\prime}_{\Delta\cup\Delta^{\prime}}|s_{g}^{\prime}\sim\mathcal{J}_{|\Delta\cup\Delta^{\prime}|}(\cdot|s_{g}^{\prime},s_{g},a_{g},s_{\Delta\cup\Delta^{\prime}},a_{\Delta\cup\Delta^{\prime}})}F_{s^{\prime}_{\Delta^{\prime}}}\bigg{)}\leq\mathrm{TV}(F_{z_{\Delta}},F_{z_{\Delta^{\prime}}})
Proof.

From the definition of total variation distance, we have:

TV(𝔼sΔΔ|sg𝒥|ΔΔ|(|sg,sg,ag,sΔΔ,aΔΔFsΔ,𝔼sΔΔ|sg𝒥|ΔΔ|(|sg,sg,ag,sΔΔ,aΔΔFsΔ)\displaystyle\mathrm{TV}\bigg{(}\mathbb{E}_{s^{\prime}_{\Delta\cup\Delta^{\prime}}|s_{g}^{\prime}\sim\mathcal{J}_{|\Delta\cup\Delta^{\prime}|}(\cdot|s_{g}^{\prime},s_{g},a_{g},s_{\Delta\cup\Delta^{\prime}},a_{\Delta\cup\Delta^{\prime}}}F_{s^{\prime}_{\Delta}},\mathbb{E}_{s^{\prime}_{\Delta\cup\Delta^{\prime}}|s_{g}^{\prime}\sim\mathcal{J}_{|\Delta\cup\Delta^{\prime}|}(\cdot|s_{g}^{\prime},s_{g},a_{g},s_{\Delta\cup\Delta^{\prime}},a_{\Delta\cup\Delta^{\prime}}}F_{s^{\prime}_{\Delta^{\prime}}}\bigg{)}
=12x𝒮l|𝔼sΔΔ|sg𝒥|ΔΔ|(|sg,sg,ag,sΔΔ,aΔΔ)FsΔ(x)𝔼sΔΔ|sg𝒥|ΔΔ|(|sg,sg,ag,sΔΔ,aΔΔ)FsΔ(x)|\displaystyle\quad\quad=\frac{1}{2}\sum_{x\in\mathcal{S}_{l}}\bigg{|}\mathbb{E}_{s^{\prime}_{\Delta\cup\Delta^{\prime}}|s_{g}^{\prime}\sim\mathcal{J}_{|\Delta\cup\Delta^{\prime}|}(\cdot|s_{g}^{\prime},s_{g},a_{g},s_{\Delta\cup\Delta^{\prime}},a_{\Delta\cup\Delta^{\prime}})}F_{s^{\prime}_{\Delta}}(x)-\mathbb{E}_{s^{\prime}_{\Delta\cup\Delta^{\prime}}|s_{g}^{\prime}\sim\mathcal{J}_{|\Delta\cup\Delta^{\prime}|}(\cdot|s_{g}^{\prime},s_{g},a_{g},s_{\Delta\cup\Delta^{\prime}},a_{\Delta\cup\Delta^{\prime}})}F_{s^{\prime}_{\Delta^{\prime}}}(x)\bigg{|}
=12x𝒮l|1kiΔ𝒥1(x|sg,sg,ag,si,ai)1kiΔ𝒥1(x|sg,sg,ag,si,ai)|\displaystyle\quad\quad=\frac{1}{2}\sum_{x\in\mathcal{S}_{l}}\left|\frac{1}{k}\sum_{i\in\Delta}\mathcal{J}_{1}(x|s_{g}^{\prime},s_{g},a_{g},s_{i},a_{i})-\frac{1}{k^{\prime}}\sum_{i\in\Delta^{\prime}}\mathcal{J}_{1}(x|s_{g}^{\prime},s_{g},a_{g},s_{i},a_{i})\right|
=12x𝒮l|1kal𝒜lsl𝒮liΔ𝒥1(x|sg,sg,ag,si,ai)𝟙{ai=alsi=sl}1kal𝒜lsl𝒮liΔ𝒥1(x|sg,sg,ag,si,ai)𝟙{ai=alsi=sl}|\displaystyle\quad\quad=\frac{1}{2}\sum_{x\in\mathcal{S}_{l}}\left|\frac{1}{k}\sum_{a_{l}\in\mathcal{A}_{l}}\sum_{s_{l}\in\mathcal{S}_{l}}\sum_{i\in\Delta}\mathcal{J}_{1}(x|s_{g}^{\prime},s_{g},a_{g},s_{i},a_{i})\mathbbm{1}\left\{\begin{subarray}{c}a_{i}=a_{l}\\ s_{i}=s_{l}\end{subarray}\right\}-\frac{1}{k^{\prime}}\sum_{a_{l}\in\mathcal{A}_{l}}\sum_{s_{l}\in\mathcal{S}_{l}}\sum_{i\in\Delta^{\prime}}\mathcal{J}_{1}(x|s_{g}^{\prime},s_{g},a_{g},s_{i},a_{i})\mathbbm{1}\left\{\begin{subarray}{c}a_{i}=a_{l}\\ s_{i}=s_{l}\end{subarray}\right\}\right|
=12x𝒮l|1kal𝒜lsl𝒮liΔ𝒥1(x|sg,sg,ag,,)𝟙{ai=alsi=sl}1kal𝒜lsl𝒮liΔ𝒥1(x|sg,sg,ag,,)𝟙{ai=alsi=sl}|\displaystyle\quad\quad=\frac{1}{2}\sum_{x\in\mathcal{S}_{l}}\left|\frac{1}{k}\sum_{a_{l}\in\mathcal{A}_{l}}\sum_{s_{l}\in\mathcal{S}_{l}}\sum_{i\in\Delta}\mathcal{J}_{1}(x|s_{g}^{\prime},s_{g},a_{g},\cdot,\cdot)\vec{\mathbbm{1}}_{\left\{\begin{subarray}{c}a_{i}=a_{l}\\ s_{i}=s_{l}\end{subarray}\right\}}-\frac{1}{k^{\prime}}\sum_{a_{l}\in\mathcal{A}_{l}}\sum_{s_{l}\in\mathcal{S}_{l}}\sum_{i\in\Delta^{\prime}}\mathcal{J}_{1}(x|s_{g}^{\prime},s_{g},a_{g},\cdot,\cdot)\vec{\mathbbm{1}}_{\left\{\begin{subarray}{c}a_{i}=a_{l}\\ s_{i}=s_{l}\end{subarray}\right\}}\right|

The second equality uses Lemma D.8. The third equality uses the fact that each local agent occupies exactly one state/action pair, and the fourth equality vectorizes the indicator random variables.

Then,

TV(𝔼sΔΔ|sg𝒥|ΔΔ|(|sg,sg,ag,sΔΔ,aΔΔFsΔ,𝔼sΔΔ|sg𝒥|ΔΔ|(|sg,sg,ag,sΔΔ,aΔΔFsΔ)\displaystyle\mathrm{TV}\bigg{(}\mathbb{E}_{s^{\prime}_{\Delta\cup\Delta^{\prime}}|s_{g}^{\prime}\sim\mathcal{J}_{|\Delta\cup\Delta^{\prime}|}(\cdot|s_{g}^{\prime},s_{g},a_{g},s_{\Delta\cup\Delta^{\prime}},a_{\Delta\cup\Delta^{\prime}}}F_{s^{\prime}_{\Delta}},\mathbb{E}_{s^{\prime}_{\Delta\cup\Delta^{\prime}}|s_{g}^{\prime}\sim\mathcal{J}_{|\Delta\cup\Delta^{\prime}|}(\cdot|s_{g}^{\prime},s_{g},a_{g},s_{\Delta\cup\Delta^{\prime}},a_{\Delta\cup\Delta^{\prime}}}F_{s^{\prime}_{\Delta^{\prime}}}\bigg{)}
12x𝒮lal𝒜lsl𝒮l|1kiΔ𝒥1(x|sg,sg,ag,,)𝟙{ai=alsi=sl}1kiΔ𝒥1(x|sg,sg,ag,,)𝟙{ai=alsi=sl}|\displaystyle\quad\quad\quad\quad\leq\frac{1}{2}\sum_{x\in\mathcal{S}_{l}}\sum_{a_{l}\in\mathcal{A}_{l}}\sum_{s_{l}\in\mathcal{S}_{l}}\left|\frac{1}{k}\sum_{i\in\Delta}\mathcal{J}_{1}(x|s_{g}^{\prime},s_{g},a_{g},\cdot,\cdot)\vec{\mathbbm{1}}_{\left\{\begin{subarray}{c}a_{i}=a_{l}\\ s_{i}=s_{l}\end{subarray}\right\}}-\frac{1}{k^{\prime}}\sum_{i\in\Delta^{\prime}}\mathcal{J}_{1}(x|s_{g}^{\prime},s_{g},a_{g},\cdot,\cdot)\vec{\mathbbm{1}}_{\left\{\begin{subarray}{c}a_{i}=a_{l}\\ s_{i}=s_{l}\end{subarray}\right\}}\right|
=12al𝒜lsl𝒮l1kiΔ𝒥1(|sg,sg,ag,,)𝟙{ai=alsi=sl}1kiΔ𝒥1(|sg,sg,ag,,)𝟙{ai=alsi=sl}1\displaystyle\quad\quad\quad\quad=\frac{1}{2}\sum_{a_{l}\in\mathcal{A}_{l}}\sum_{s_{l}\in\mathcal{S}_{l}}\left\|\frac{1}{k}\sum_{i\in\Delta}\mathcal{J}_{1}(\cdot|s_{g}^{\prime},s_{g},a_{g},\cdot,\cdot)\vec{\mathbbm{1}}_{\left\{\begin{subarray}{c}a_{i}=a_{l}\\ s_{i}=s_{l}\end{subarray}\right\}}-\frac{1}{k^{\prime}}\sum_{i\in\Delta^{\prime}}\mathcal{J}_{1}(\cdot|s_{g}^{\prime},s_{g},a_{g},\cdot,\cdot)\vec{\mathbbm{1}}_{\left\{\begin{subarray}{c}a_{i}=a_{l}\\ s_{i}=s_{l}\end{subarray}\right\}}\right\|_{1}
12al𝒜lsl𝒮l1kiΔ𝟙{ai=alsi=sl}1kiΔ𝟙{ai=alsi=sl}1𝒥1(|sg,sg,ag,,)1\displaystyle\quad\quad\quad\quad\leq\frac{1}{2}\sum_{a_{l}\in\mathcal{A}_{l}}\sum_{s_{l}\in\mathcal{S}_{l}}\left\|\frac{1}{k}\sum_{i\in\Delta}\vec{\mathbbm{1}}_{\left\{\begin{subarray}{c}a_{i}=a_{l}\\ s_{i}=s_{l}\end{subarray}\right\}}-\frac{1}{k^{\prime}}\sum_{i\in\Delta^{\prime}}\vec{\mathbbm{1}}_{\left\{\begin{subarray}{c}a_{i}=a_{l}\\ s_{i}=s_{l}\end{subarray}\right\}}\right\|_{1}\cdot\|\mathcal{J}_{1}(\cdot|s_{g}^{\prime},s_{g},a_{g},\cdot,\cdot)\|_{1}
=12al𝒜lsl𝒮l1kiΔ𝟙{ai=alsi=sl}1kiΔ𝟙{ai=alsi=sl}1\displaystyle\quad\quad\quad\quad=\frac{1}{2}\sum_{a_{l}\in\mathcal{A}_{l}}\sum_{s_{l}\in\mathcal{S}_{l}}\left\|\frac{1}{k}\sum_{i\in\Delta}\mathbbm{1}\{\begin{subarray}{c}a_{i}=a_{l}\\ s_{i}=s_{l}\end{subarray}\}-\frac{1}{k^{\prime}}\sum_{i\in\Delta^{\prime}}\mathbbm{1}\{\begin{subarray}{c}a_{i}=a_{l}\\ s_{i}=s_{l}\end{subarray}\}\right\|_{1}
=12(sl,al)𝒮l×𝒜l|1kiΔ𝟙{ai=alsi=sl}1kiΔ𝟙{ai=alsi=sl}|\displaystyle\quad\quad\quad\quad=\frac{1}{2}\sum_{(s_{l},a_{l})\in\mathcal{S}_{l}\times\mathcal{A}_{l}}\left|\frac{1}{k}\sum_{i\in\Delta}\mathbbm{1}\{\begin{subarray}{c}a_{i}=a_{l}\\ s_{i}=s_{l}\end{subarray}\}-\frac{1}{k^{\prime}}\sum_{i\in\Delta^{\prime}}\mathbbm{1}\{\begin{subarray}{c}a_{i}=a_{l}\\ s_{i}=s_{l}\end{subarray}\}\right|
=12zl𝒮l×𝒜l|FzΔ(zl)FzΔ(zl)|\displaystyle\quad\quad\quad\quad=\frac{1}{2}\sum_{z_{l}\in\mathcal{S}_{l}\times\mathcal{A}_{l}}\left|F_{z_{\Delta}}(z_{l})-F_{z_{\Delta^{\prime}}}(z_{l})\right|
:=TV(FzΔ,FzΔ)\displaystyle\quad\quad\quad\quad:=\mathrm{TV}(F_{z_{\Delta}},F_{z_{\Delta^{\prime}}})

The first inequality uses the triangle inequality. The second inequality, together with the subsequent equality, follows from Hölder's inequality and from the probabilities of the stochastic transition function \mathcal{J}_{1} summing to 1, i.e., \|\mathcal{J}_{1}(\cdot|s_{g}^{\prime},s_{g},a_{g},\cdot,\cdot)\|_{1}=1. The next equality uses the Fubini–Tonelli theorem, which applies since the total variation distance is bounded above by 1. The final equality recovers the total variation distance for the variable z=(s,a)\in\mathcal{S}_{l}\times\mathcal{A}_{l} across the agents in \Delta and \Delta^{\prime}, which proves the claim.∎
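The essence of Lemma D.7 is a data-processing-style contraction: pushing two empirical distributions over z=(s_{l},a_{l}) through the same single-agent kernel \mathcal{J}_{1} cannot increase their total variation distance. The following minimal Python sketch (purely illustrative; all sizes and samples are hypothetical) checks this numerically.

    import numpy as np

    # Pushing two empirical distributions over z = (s_l, a_l) through the same
    # single-agent kernel J_1 cannot increase their total variation distance.
    rng = np.random.default_rng(3)
    n_z, n_s = 6, 5
    J1 = rng.dirichlet(np.ones(n_s), size=n_z)   # row z: J_1(. | s_g', s_g, a_g, z)

    def tv(p, q):
        return 0.5 * np.abs(p - q).sum()

    F_delta = np.bincount(rng.integers(n_z, size=4), minlength=n_z) / 4    # k = 4
    F_delta_p = np.bincount(rng.integers(n_z, size=7), minlength=n_z) / 7  # k' = 7

    assert tv(F_delta @ J1, F_delta_p @ J1) <= tv(F_delta, F_delta_p) + 1e-12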

Lemma D.8.
𝔼sΔΔ|sg𝒥|ΔΔ|(|sg,ag,sΔΔ,aΔΔ)FsΔ(x)=1kiΔ𝒥1(x|sg,sg,ag,si,ai)\mathbb{E}_{s^{\prime}_{\Delta\cup\Delta^{\prime}}|s_{g}^{\prime}\sim\mathcal{J}_{|\Delta\cup\Delta^{\prime}|}(\cdot|s_{g},a_{g},s_{\Delta\cup\Delta^{\prime}},a_{\Delta\cup\Delta^{\prime}})}F_{s^{\prime}_{\Delta}}(x)=\frac{1}{k}\sum_{i\in\Delta}\mathcal{J}_{1}(x|s_{g}^{\prime},s_{g},a_{g},s_{i},a_{i})
Proof.

By expanding the definition of the empirical distribution function F_{s_{\Delta}}(x)=\frac{1}{|\Delta|}\sum_{i\in\Delta}\mathbbm{1}\{s_{i}=x\}, we have:

𝔼sΔΔ|sg𝒥|ΔΔ|(|sg,sg,ag,sΔΔ,aΔΔ)FsΔ(x)\displaystyle\mathbb{E}_{s_{\Delta\cup\Delta^{\prime}}|s_{g}^{\prime}\sim\mathcal{J}_{|\Delta\cup\Delta^{\prime}|}(\cdot|s_{g}^{\prime},s_{g},a_{g},s_{\Delta\cup\Delta^{\prime}},a_{\Delta\cup\Delta^{\prime}})}F_{s^{\prime}_{\Delta}}(x) =1kiΔ𝔼sΔΔ|sg𝒥|ΔΔ|(|sg,sg,ag,sΔΔ,aΔΔ)𝟙{si=x}\displaystyle=\frac{1}{k}\sum_{i\in\Delta}\mathbb{E}_{s_{\Delta\cup\Delta^{\prime}}|s_{g}^{\prime}\sim\mathcal{J}_{|\Delta\cup\Delta^{\prime}|}(\cdot|s_{g}^{\prime},s_{g},a_{g},s_{\Delta\cup\Delta^{\prime}},a_{\Delta\cup\Delta^{\prime}})}\mathbbm{1}\{s_{i}^{\prime}=x\}
=1kiΔ𝔼sΔΔ|sg𝒥1(|sg,sg,ag,si,ai)𝟙{si=x}\displaystyle=\frac{1}{k}\sum_{i\in\Delta}\mathbb{E}_{s_{\Delta\cup\Delta^{\prime}}|s_{g}^{\prime}\sim\mathcal{J}_{1}(\cdot|s_{g}^{\prime},s_{g},a_{g},s_{i},a_{i})}\mathbbm{1}\{s_{i}^{\prime}=x\}
=1kiΔ𝒥1(x|sg,sg,ag,si,ai)\displaystyle=\frac{1}{k}\sum_{i\in\Delta}\mathcal{J}_{1}(x|s_{g}^{\prime},s_{g},a_{g},s_{i},a_{i})

The second equality follows from the conditional independence of s_{i}^{\prime} from s^{\prime}_{\Delta\cup\Delta^{\prime}\setminus i} (Lemma D.9), and the final equality uses the fact that the expectation of an indicator random variable equals the probability of the corresponding event. ∎

Lemma D.9.

The random variable s^{\prime}_{\Delta\cup\Delta^{\prime}\setminus i}\,|\,s_{g}^{\prime},s_{g},a_{g},s_{\Delta\cup\Delta^{\prime}},a_{\Delta\cup\Delta^{\prime}} is conditionally independent of s_{i}^{\prime}\,|\,s_{g}^{\prime},s_{g},a_{g},s_{i},a_{i} for any i\in\Delta\cup\Delta^{\prime}.

Proof.

We direct the interested reader to the Bayes-Ball theorem in (Shachter, 2013) for a general treatment of proving conditional independence. For ease of exposition, we restate the two rules from (Shachter, 2013) that introduce the notion of d-separation, which implies conditional independence. Suppose we have a causal graph G=(V,E), where the vertex set V=[p] for p\in\mathbb{N} is a set of variables and the edge set E\subseteq V\times V encodes dependence through connectivity. Then, the two rules that establish d-separation are as follows:

  1.

    For x,yVx,y\in V, we say that x,yx,y are dd-connected if there exists a path (x,,y)(x,\dots,y) that can be traced without traversing a pair of arrows that point at the same vertex.

  2.

    We say that x,y\in V are d-connected conditioned on a set of variables Z\subseteq V if there is a path (x,\dots,y) that can be traced without traversing a pair of arrows pointing at the same vertex and that does not pass through any vertex z\in Z.

If x,y\in V are not d-connected through any such path, then x and y are d-separated, which implies conditional independence.

Let Z=\{s_{g}^{\prime},s_{g},a_{g},s_{i},a_{i}\} be the set of variables we condition on. The figure below depicts the causal graph for the variables of interest.

[Causal graph over a_{g}, s_{g}, s_{i}, a_{i}, s_{\Delta\cup\Delta^{\prime}\setminus i}, a_{\Delta\cup\Delta^{\prime}\setminus i} and their primed next-step counterparts.]
Figure 6: Causal graph to demonstrate the dependencies between variables.
  1.

    Observe that all paths (traced through undirected edges) that begin with the edge s^{\prime}_{\Delta\cup\Delta^{\prime}\setminus i}\to s_{\Delta\cup\Delta^{\prime}\setminus i} pass through s_{i}\in Z, and are therefore blocked.

  2.

    All other paths from s^{\prime}_{\Delta\cup\Delta^{\prime}\setminus i} to s_{i}^{\prime} pass through s_{g} or s_{g}^{\prime}, both of which are in Z.

Therefore, sΔΔis^{\prime}_{\Delta\cup\Delta^{\prime}\setminus i} and sis_{i}^{\prime} are dd-separated by ZZ. Hence, by (Shachter, 2013), sΔΔis^{\prime}_{\Delta\cup\Delta^{\prime}\setminus i} and sis_{i}^{\prime} are conditionally independent.∎
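The d-separation claim can also be checked mechanically. The following minimal Python sketch (purely illustrative) encodes a simplified version of the one-step dependencies in Figure 6 and queries d-separation through networkx; we assume networkx >= 2.8, where nx.d_separated is available (newer releases expose the same check as nx.is_d_separator), and the node names are ours.

    import networkx as nx

    # Simplified one-step dynamics of Figure 6: the global state drives every
    # local transition, and each local transition depends on its own (s, a).
    G = nx.DiGraph([
        ("s_g", "s_g_next"), ("a_g", "s_g_next"),
        ("s_i", "s_i_next"), ("a_i", "s_i_next"), ("s_g", "s_i_next"),
        ("s_rest", "s_rest_next"), ("a_rest", "s_rest_next"), ("s_g", "s_rest_next"),
    ])
    Z = {"s_g_next", "s_g", "a_g", "s_i", "a_i"}
    # d-separation of the other agents' next states from s_i' given Z
    assert nx.d_separated(G, {"s_rest_next"}, {"s_i_next"}, Z)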

Lemma D.10.

For any joint transition probability function 𝒥k\mathcal{J}_{k} on sg𝒮g,sΔ𝒮lks_{g}\in\mathcal{S}_{g},s_{\Delta}\in\mathcal{S}_{l}^{k}, aΔ𝒜lka_{\Delta}\in\mathcal{A}_{l}^{k} where |Δ|=k|\Delta|=k, given by 𝒥k:𝒮g×𝒮l|Δ|×𝒮g×𝒜g×𝒮l|Δ|×𝒜l|Δ|[0,1]\mathcal{J}_{k}:\mathcal{S}_{g}\times\mathcal{S}_{l}^{|\Delta|}\times\mathcal{S}_{g}\times\mathcal{A}_{g}\times\mathcal{S}_{l}^{|\Delta|}\times\mathcal{A}_{l}^{|\Delta|}\to[0,1], we have:

𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)[𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)maxag𝒜g,aΔ𝒜lkQ^kT(sg,sΔ,ag,aΔ)]\displaystyle\mathbb{E}_{(s_{g}^{\prime},s_{\Delta}^{\prime})\sim\mathcal{J}_{k}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}\left[\mathbb{E}_{(s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime})\sim\mathcal{J}_{k}(\cdot,\cdot|s_{g}^{\prime},a_{g},s_{\Delta}^{\prime},a_{\Delta})}\max_{a_{g}^{\prime\prime}\in\mathcal{A}_{g},a_{\Delta}^{\prime\prime}\in\mathcal{A}_{l}^{k}}\hat{Q}_{k}^{T}(s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime},a_{g}^{\prime\prime},a_{\Delta}^{\prime\prime})\right]
\displaystyle\quad\quad\quad=\mathbb{E}_{(s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime})\sim\mathcal{J}_{k}^{2}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}\max_{a_{g}^{\prime\prime}\in\mathcal{A}_{g},a_{\Delta}^{\prime\prime}\in\mathcal{A}_{l}^{k}}\hat{Q}_{k}^{T}(s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime},a_{g}^{\prime\prime},a_{\Delta}^{\prime\prime})
Proof.

We start by expanding the expectations:

𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)[𝔼(sg,sΔ)𝒥k(,|sg,ag,sΔ,aΔ)maxag𝒜g,aΔ𝒜lkQ^kT(sg,sΔ,ag,aΔ)]\displaystyle\mathbb{E}_{(s_{g}^{\prime},s_{\Delta}^{\prime})\sim\mathcal{J}_{k}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}\left[\mathbb{E}_{(s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime})\sim\mathcal{J}_{k}(\cdot,\cdot|s_{g}^{\prime},a_{g},s_{\Delta}^{\prime},a_{\Delta})}\max_{a_{g}^{\prime\prime}\in\mathcal{A}_{g},a_{\Delta}^{\prime\prime}\in\mathcal{A}_{l}^{k}}\hat{Q}_{k}^{T}(s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime},a_{g}^{\prime\prime},a_{\Delta}^{\prime\prime})\right]
=(sg,sΔ)𝒮g×𝒮l|Δ|(sg,sΔ)𝒮g×𝒮l|Δ|𝒥k[sg,sΔ,sg,ag,sΔ,aΔ]𝒥k[sg,sΔ,sg,ag,sΔ,aΔ]maxag𝒜g,aΔ𝒜lkQ^kT(sg,sΔ,ag,aΔ)\displaystyle\quad\quad\quad=\sum_{(s_{g}^{\prime},s_{\Delta}^{\prime})\in\mathcal{S}_{g}\times\mathcal{S}_{l}^{|\Delta|}}\sum_{(s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime})\in\mathcal{S}_{g}\times\mathcal{S}_{l}^{|\Delta|}}\mathcal{J}_{k}[s_{g}^{\prime},s_{\Delta}^{\prime},s_{g},a_{g},s_{\Delta},a_{\Delta}]\mathcal{J}_{k}[s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime},s_{g}^{\prime},a_{g},s_{\Delta}^{\prime},a_{\Delta}]\max_{\begin{subarray}{c}a_{g}^{\prime\prime}\in\mathcal{A}_{g},\\ a_{\Delta}^{\prime\prime}\in\mathcal{A}_{l}^{k}\end{subarray}}\hat{Q}_{k}^{T}(s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime},a_{g}^{\prime\prime},a_{\Delta}^{\prime\prime})
=(sg,sΔ)𝒮g×𝒮l|Δ|𝒥k2[sg,sΔ,sg,ag,sΔ,aΔ]maxag𝒜g,aΔ𝒜lkQ^Tk(sg,sΔ,ag,aΔ)\displaystyle\quad\quad\quad=\sum_{(s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime})\in\mathcal{S}_{g}\times\mathcal{S}_{l}^{|\Delta|}}\mathcal{J}_{k}^{2}[s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime},s_{g},a_{g},s_{\Delta},a_{\Delta}]\max_{a_{g}^{\prime\prime}\in\mathcal{A}_{g},a_{\Delta}^{\prime\prime}\in\mathcal{A}_{l}^{k}}\hat{Q}^{T}_{k}(s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime},a_{g}^{\prime\prime},a_{\Delta}^{\prime\prime})
=𝔼(sg,sΔ)𝒥k2(,|sg,ag,sΔ,aΔ)maxag𝒜g,aΔ𝒜lkQ^kT(sg,sΔ,ag,aΔ)\displaystyle\quad\quad\quad=\mathbb{E}_{(s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime})\sim\mathcal{J}_{k}^{2}(\cdot,\cdot|s_{g},a_{g},s_{\Delta},a_{\Delta})}\max_{a_{g}^{\prime\prime}\in\mathcal{A}_{g},a_{\Delta}^{\prime\prime}\in\mathcal{A}_{l}^{k}}\hat{Q}_{k}^{T}(s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime},a_{g}^{\prime\prime},a_{\Delta}^{\prime\prime})

In the second equality, we note that the right-stochasticity of 𝒥k\mathcal{J}_{k} implies the right-stochasticity of 𝒥k2\mathcal{J}_{k}^{2}. Further, observe that 𝒥k[sg,sΔ,sg,ag,sΔ,aΔ]𝒥k[sg,sΔ,sg,ag,sΔ,aΔ]\mathcal{J}_{k}[s_{g}^{\prime},s_{\Delta}^{\prime},s_{g},a_{g},s_{\Delta},a_{\Delta}]\mathcal{J}_{k}[s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime},s_{g}^{\prime},a_{g},s_{\Delta}^{\prime},a_{\Delta}] denotes the probability of the transitions (sg,sΔ)(sg,sΔ)(sg,sΔ)(s_{g},s_{\Delta})\to(s_{g}^{\prime},s_{\Delta}^{\prime})\to(s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime}) with actions ag,aΔa_{g},a_{\Delta} at each step, where the joint state evolution is governed by 𝒥k\mathcal{J}_{k}. Thus,

\sum_{(s_{g}^{\prime},s_{\Delta}^{\prime})\in\mathcal{S}_{g}\times\mathcal{S}_{l}^{|\Delta|}}\mathcal{J}_{k}[s_{g}^{\prime},s_{\Delta}^{\prime},s_{g},a_{g},s_{\Delta},a_{\Delta}]\mathcal{J}_{k}[s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime},s_{g}^{\prime},a_{g},s_{\Delta}^{\prime},a_{\Delta}]=\mathcal{J}_{k}^{2}[s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime},s_{g},a_{g},s_{\Delta},a_{\Delta}]

since \sum_{(s_{g}^{\prime},s_{\Delta}^{\prime})\in\mathcal{S}_{g}\times\mathcal{S}_{l}^{|\Delta|}}\mathcal{J}_{k}[s_{g}^{\prime},s_{\Delta}^{\prime},s_{g},a_{g},s_{\Delta},a_{\Delta}]\mathcal{J}_{k}[s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime},s_{g}^{\prime},a_{g},s_{\Delta}^{\prime},a_{\Delta}] is the stochastic transition function corresponding to the two-step evolution of the joint state from (s_{g},s_{\Delta}) to (s_{g}^{\prime\prime},s_{\Delta}^{\prime\prime}) under actions a_{g},a_{\Delta} at each step. This is analogous to the fact that a 1 in the (i,j)'th entry of the square of a 0/1 adjacency matrix of a graph indicates a walk of length 2 between vertices i and j. Finally, the third equality recovers the definition of the expectation with respect to the joint probability function \mathcal{J}_{k}^{2}. ∎
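For a fixed action, this is the familiar fact that the two-step transition law is the square of the one-step kernel viewed as a row-stochastic matrix. The following minimal Python sketch (purely illustrative; the state-space size is hypothetical) verifies this, along with the preserved right-stochasticity.

    import numpy as np

    # For a fixed action, the one-step kernel is a row-stochastic matrix over
    # joint states; its matrix square gives the two-step transition probabilities.
    rng = np.random.default_rng(4)
    n_states = 8
    J = rng.dirichlet(np.ones(n_states), size=n_states)  # J[s, s'] = Pr[s' | s, a]

    J2 = J @ J
    assert np.allclose(J2.sum(axis=1), 1.0)               # right-stochasticity preserved
    s, s2 = 0, 3
    assert np.isclose(J2[s, s2], sum(J[s, sp] * J[sp, s2] for sp in range(n_states)))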

We next show that limT𝔼Δ([n]k)Q^kT(sg,sΔ,ag,aΔ)=𝔼Δ([n]k)Q^k(sg,sΔ,ag,aΔ)=Q(s,a)\lim_{T\to\infty}\mathbb{E}_{\Delta\in{[n]\choose k}}\hat{Q}_{k}^{T}(s_{g},s_{\Delta},a_{g},a_{\Delta})=\mathbb{E}_{\Delta\in{[n]\choose k}}\hat{Q}_{k}^{*}(s_{g},s_{\Delta},a_{g},a_{\Delta})=Q^{*}(s,a).

Lemma D.11.
Q(s,a)1(nk)Δ([n]k)Q^kT(sg,sΔ,ag,aΔ)γTr~1γQ^{*}(s,a)-\frac{1}{{n\choose k}}\sum_{\Delta\in{[n]\choose k}}\hat{Q}_{k}^{T}(s_{g},s_{\Delta},a_{g},a_{\Delta})\leq\gamma^{T}\cdot\frac{\tilde{r}}{1-\gamma}
Proof.

We bound the difference between Q^{*} and the averaged approximation \frac{1}{{n\choose k}}\sum_{\Delta\in{[n]\choose k}}\hat{Q}_{k}^{T} at each Bellman iteration.

Note that:

Q(s,a)1(nk)Δ([n]k)Q^kT(sg,sΔ,ag,aΔ)\displaystyle Q^{*}(s,a)-\frac{1}{{n\choose k}}\sum_{\Delta\in{[n]\choose k}}\hat{Q}_{k}^{T}(s_{g},s_{\Delta},a_{g},a_{\Delta})
=𝒯Q(s,a)1(nk)Δ([n]k)𝒯^kQ^kT1(sg,sΔ,ag,aΔ)\displaystyle=\mathcal{T}Q^{*}(s,a)-\frac{1}{{n\choose k}}\sum_{\Delta\in{[n]\choose k}}\hat{\mathcal{T}}_{k}\hat{Q}_{k}^{T-1}(s_{g},s_{\Delta},a_{g},a_{\Delta})
\displaystyle=r_{[n]}(s_{g},s_{[n]},a_{g},a_{[n]})+\gamma\mathbb{E}_{\begin{subarray}{c}s_{g}^{\prime}\sim P_{g}(\cdot|s_{g},a_{g}),\\ s_{i}^{\prime}\sim P_{l}(\cdot|s_{i},a_{i},s_{g}),\forall i\in[n]\end{subarray}}\max_{a_{g}^{\prime}\in\mathcal{A}_{g},a_{[n]}^{\prime}\in\mathcal{A}_{l}^{n}}Q^{*}(s^{\prime},a^{\prime})
\displaystyle\quad\quad\quad\quad-\frac{1}{{n\choose k}}\sum_{\Delta\in{[n]\choose k}}\bigg{[}r_{\Delta}(s_{g},s_{\Delta},a_{g},a_{\Delta})+\gamma\mathbb{E}_{\begin{subarray}{c}s_{g}^{\prime}\sim{P}_{g}(\cdot|s_{g},a_{g})\\ s_{i}^{\prime}\sim{P}_{l}(\cdot|s_{i},a_{i},s_{g}),\forall i\in\Delta\end{subarray}}\max_{a_{g}^{\prime}\in\mathcal{A}_{g},a_{\Delta}^{\prime}\in\mathcal{A}_{l}^{k}}\hat{Q}_{k}^{T-1}(s_{g}^{\prime},s_{\Delta}^{\prime},a_{g}^{\prime},a_{\Delta}^{\prime})\bigg{]}

Next, observe that r[n](sg,s[n],ag,a[n])=1(nk)Δ([n]k)r[Δ](sg,sΔ,ag,aΔ)r_{[n]}(s_{g},s_{[n]},a_{g},a_{[n]})=\frac{1}{{n\choose k}}\sum_{\Delta\in{[n]\choose k}}r_{[\Delta]}(s_{g},s_{\Delta},a_{g},a_{\Delta}). To prove this, we write:

1(nk)Δ([n]k)r[Δ](sg,sΔ,ag,aΔ)\displaystyle\frac{1}{{n\choose k}}\sum_{\Delta\in{[n]\choose k}}r_{[\Delta]}(s_{g},s_{\Delta},a_{g},a_{\Delta}) =1(nk)Δ([n]k)(rg(sg,ag)+1kiΔrl(si,ai,sg))\displaystyle=\frac{1}{{n\choose k}}\sum_{\Delta\in{[n]\choose k}}(r_{g}(s_{g},a_{g})+\frac{1}{k}\sum_{i\in\Delta}r_{l}(s_{i},a_{i},s_{g}))
=rg(sg,ag)+(n1k1)k(nk)i[n]rl(si,ai,sg)\displaystyle=r_{g}(s_{g},a_{g})+\frac{{n-1\choose k-1}}{k{n\choose k}}\sum_{i\in[n]}r_{l}(s_{i},a_{i},s_{g})
=rg(sg,ag)+1ni[n]rl(si,ai,sg):=r[n](sg,s[n],ag,a[n])\displaystyle=r_{g}(s_{g},a_{g})+\frac{1}{n}\sum_{i\in[n]}r_{l}(s_{i},a_{i},s_{g}):=r_{[n]}(s_{g},s_{[n]},a_{g},a_{[n]})

In the second equality, we reparameterize the sum to count the number of times each r_{l}(s_{i},a_{i},s_{g}) is added across the subsets \Delta containing i. To do this, we count the ways of choosing the (k-1) other agents that form a k-subset with agent i: there are (n-1) candidates from which we choose the (k-1) agents, giving {n-1\choose k-1} subsets per agent. In the last equality, we expanded and simplified the binomial coefficients.
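The following minimal Python sketch (purely illustrative; n, k, and the rewards are hypothetical) verifies this counting identity by brute force over all k-subsets.

    import itertools
    import numpy as np

    # Averaging the subsampled local-reward means over all k-subsets Delta of [n]
    # recovers the full average (1/n) * sum_i r_l(s_i, a_i, s_g).
    rng = np.random.default_rng(5)
    n, k = 6, 3
    r = rng.uniform(size=n)                    # r[i] stands in for r_l(s_i, a_i, s_g)

    subset_avg = np.mean([
        r[list(Delta)].mean()                  # (1/k) * sum_{i in Delta} r_l(...)
        for Delta in itertools.combinations(range(n), k)
    ])
    assert np.isclose(subset_avg, r.mean())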

So, we have that:

\displaystyle\sup_{(s,a)\in\mathcal{S}\times\mathcal{A}}\bigg{[}Q^{*}(s,a)-\frac{1}{{n\choose k}}\sum_{\Delta\in{[n]\choose k}}\hat{Q}_{k}^{T}(s_{g},s_{\Delta},a_{g},a_{\Delta})\bigg{]}
=sup(s,a)𝒮×𝒜[𝒯Q(s,a)1(nk)Δ([n]k)𝒯^kQ^kT1(sg,sΔ,ag,aΔ)]\displaystyle=\sup_{(s,a)\in\mathcal{S}\times\mathcal{A}}\bigg{[}\mathcal{T}Q^{*}(s,a)-\frac{1}{{n\choose k}}\sum_{\Delta\in{[n]\choose k}}\hat{\mathcal{T}}_{k}\hat{Q}_{k}^{T-1}(s_{g},s_{\Delta},a_{g},a_{\Delta})\bigg{]}
=γsup(s,a)𝒮×𝒜[𝔼sgP(|sg,ag),siPl(|si,ai,sg),i[n]maxa𝒜Q(s,a)1(nk)Δ([n]k)𝔼sgPg(|sg,ag),siPl(|si,ai,sg),iΔmaxag𝒜g,aΔ𝒜lkQ^kT1(sg,sΔ,ag,aΔ)]\displaystyle=\gamma\sup_{(s,a)\in\mathcal{S}\times\mathcal{A}}\bigg{[}\mathbb{E}_{\begin{subarray}{c}s_{g}^{\prime}\sim{P}(\cdot|s_{g},a_{g}),\\ s_{i}^{\prime}\sim P_{l}(\cdot|s_{i},a_{i},s_{g}),\forall i\in[n]\end{subarray}}\max_{a^{\prime}\in\mathcal{A}}Q^{*}(s^{\prime},a^{\prime})-\frac{1}{{n\choose k}}\sum_{\Delta\in{[n]\choose k}}\mathbb{E}_{\begin{subarray}{c}s_{g}^{\prime}\sim{P}_{g}(\cdot|s_{g},a_{g}),\\ s_{i}^{\prime}\sim{P}_{l}(\cdot|s_{i},a_{i},s_{g}),\forall i\in\Delta\end{subarray}}\max_{\begin{subarray}{c}a_{g}^{\prime}\in\mathcal{A}_{g},\\ a_{\Delta}^{\prime}\in\mathcal{A}_{l}^{k}\end{subarray}}\hat{Q}_{k}^{T-1}(s_{g}^{\prime},s_{\Delta}^{\prime},a_{g}^{\prime},a_{\Delta}^{\prime})\bigg{]}
=γsup(s,a)𝒮×𝒜𝔼sgPg(|sg,ag),siPl(|si,ai,sg),i[n][maxa𝒜Q(s,a)1(nk)Δ([n]k)maxag𝒜g,aΔ𝒜lkQ^kT1(sg,sΔ,ag,aΔ)]\displaystyle=\gamma\sup_{(s,a)\in\mathcal{S}\times\mathcal{A}}\mathbb{E}_{\begin{subarray}{c}s_{g}^{\prime}\sim{P}_{g}(\cdot|s_{g},a_{g}),\\ s_{i}^{\prime}\sim P_{l}(\cdot|s_{i},a_{i},s_{g}),\forall i\in[n]\end{subarray}}\bigg{[}\max_{a^{\prime}\in\mathcal{A}}Q^{*}(s^{\prime},a^{\prime})-\frac{1}{{n\choose k}}\sum_{\Delta\in{[n]\choose k}}\max_{\begin{subarray}{c}a_{g}^{\prime}\in\mathcal{A}_{g},a_{\Delta}^{\prime}\in\mathcal{A}_{l}^{k}\end{subarray}}\hat{Q}_{k}^{T-1}(s_{g}^{\prime},s_{\Delta}^{\prime},a_{g}^{\prime},a_{\Delta}^{\prime})\bigg{]}
γsup(s,a)𝒮×𝒜𝔼sgPg(|sg,ag),siPl(|si,ai,sg),i[n]maxag𝒜g,a[n]𝒜ln[Q(s,a)1(nk)Δ([n]k)Q^kT1(sg,sΔ,ag,aΔ)]\displaystyle\leq\gamma\sup_{(s,a)\in\mathcal{S}\times\mathcal{A}}\mathbb{E}_{\begin{subarray}{c}s_{g}^{\prime}\sim{P}_{g}(\cdot|s_{g},a_{g}),\\ s_{i}^{\prime}\sim P_{l}(\cdot|s_{i},a_{i},s_{g}),\forall i\in[n]\end{subarray}}\max_{a_{g}^{\prime}\in\mathcal{A}_{g},a_{[n]}^{\prime}\in\mathcal{A}_{l}^{n}}\bigg{[}Q^{*}(s^{\prime},a^{\prime})-\frac{1}{{n\choose k}}\sum_{\Delta\in{[n]\choose k}}\hat{Q}_{k}^{T-1}(s_{g}^{\prime},s_{\Delta}^{\prime},a_{g}^{\prime},a_{\Delta}^{\prime})\bigg{]}
γsup(s,a)𝒮×𝒜[Q(s,a)1(nk)Δ([n]k)Q^kT1(sg,sΔ,ag,aΔ)]\displaystyle\leq\gamma\sup_{(s^{\prime},a^{\prime})\in\mathcal{S}\times\mathcal{A}}\bigg{[}Q^{*}(s^{\prime},a^{\prime})-\frac{1}{{n\choose k}}\sum_{\Delta\in{[n]\choose k}}\hat{Q}_{k}^{T-1}(s^{\prime}_{g},s^{\prime}_{\Delta},a^{\prime}_{g},a^{\prime}_{\Delta})\bigg{]}

We justify the first inequality by noting the following general property for nonnegative vectors v,v^{\prime} with v\succeq v^{\prime}, which follows from the triangle inequality:

v1(nk)Δ([n]k)v\displaystyle\bigg{\|}v-\frac{1}{{n\choose k}}\sum_{\Delta\in{[n]\choose k}}v^{\prime}\bigg{\|}_{\infty} |v1(nk)Δ([n]k)v|\displaystyle\geq\bigg{|}\|v\|_{\infty}-\bigg{\|}\frac{1}{{n\choose k}}\sum_{\Delta\in{[n]\choose k}}v^{\prime}\bigg{\|}_{\infty}\bigg{|}
=v1(nk)Δ([n]k)v\displaystyle=\|v\|_{\infty}-\bigg{\|}\frac{1}{{n\choose k}}\sum_{\Delta\in{[n]\choose k}}v^{\prime}\bigg{\|}_{\infty}
v1(nk)Δ([n]k)v\displaystyle\geq\|v\|_{\infty}-\frac{1}{{n\choose k}}\sum_{\Delta\in{[n]\choose k}}\|v^{\prime}\|_{\infty}

Thus, applying this bound recursively, we get:

Q(s,a)1(nk)Δ([n]k)Q^kT(sg,sΔ,ag,aΔ)\displaystyle Q^{*}(s,a)-\frac{1}{{n\choose k}}\sum_{\Delta\in{[n]\choose k}}\hat{Q}_{k}^{T}(s_{g},s_{\Delta},a_{g},a_{\Delta}) γTsup(s,a)𝒮×𝒜[Q(s,a)1(nk)Δ([n]k)Q^k0(sg,sΔ,ag,aΔ)]\displaystyle\leq\gamma^{T}\sup_{(s^{\prime},a^{\prime})\in\mathcal{S}\times\mathcal{A}}\bigg{[}Q^{*}(s^{\prime},a^{\prime})-\frac{1}{{n\choose k}}\sum_{\Delta\in{[n]\choose k}}\hat{Q}_{k}^{0}(s^{\prime}_{g},s^{\prime}_{\Delta},a^{\prime}_{g},a^{\prime}_{\Delta})\bigg{]}
=γTsup(s,a)𝒮×𝒜Q(s,a)\displaystyle=\gamma^{T}\sup_{(s^{\prime},a^{\prime})\in\mathcal{S}\times\mathcal{A}}Q^{*}(s^{\prime},a^{\prime})
=γTr~1γ\displaystyle=\gamma^{T}\cdot\frac{\tilde{r}}{1-\gamma}

The inequality follows by applying the \gamma-contraction bound above recursively for T steps. The first equality uses \hat{Q}_{k}^{0}:=0, and the final equality uses the bound on the maximum possible value of Q^{*} from Lemma B.5.
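The recursion above is the standard \gamma-contraction argument. The following minimal Python sketch (purely illustrative; the random MDP, its sizes, and its rewards are hypothetical and unrelated to the paper's setting) demonstrates the resulting \gamma^{T}-decay numerically.

    import numpy as np

    # T synchronous Bellman backups from Q_0 = 0 leave a sup-norm gap to Q* of at
    # most gamma^T * r_max / (1 - gamma) on any MDP with rewards in [0, r_max].
    rng = np.random.default_rng(6)
    n_s, n_a, gamma, r_max, T = 5, 3, 0.9, 1.0, 15
    P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))   # P[s, a, s']
    R = rng.uniform(0, r_max, size=(n_s, n_a))

    def backup(Q):
        return R + gamma * np.einsum("sap,p->sa", P, Q.max(axis=1))

    Q_star = np.zeros((n_s, n_a))
    for _ in range(10_000):                            # effectively the fixed point
        Q_star = backup(Q_star)

    Q = np.zeros((n_s, n_a))
    for _ in range(T):
        Q = backup(Q)

    assert np.abs(Q_star - Q).max() <= gamma**T * r_max / (1 - gamma) + 1e-8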

Therefore, as TT\to\infty,

Q^{*}(s,a)-\frac{1}{{n\choose k}}\sum_{\Delta\in{[n]\choose k}}\hat{Q}_{k}^{T}(s_{g},s_{\Delta},a_{g},a_{\Delta})\to 0,

which proves the lemma.∎

Corollary D.12.

Since 1(nk)Δ([n]k)Q^kT(sg,sΔ,ag,aΔ):=𝔼Δ([n]k)Q^kT(sg,sΔ,ag,aΔ)\frac{1}{{n\choose k}}\sum_{\Delta\in{[n]\choose k}}\hat{Q}_{k}^{T}(s_{g},s_{\Delta},a_{g},a_{\Delta}):=\mathbb{E}_{\Delta\in{[n]\choose k}}\hat{Q}_{k}^{T}(s_{g},s_{\Delta},a_{g},a_{\Delta}), we therefore get:

Q(s,a)limT𝔼Δ([n]k)Q^kT(sg,sΔ,ag,aΔ)limTγTr~1γQ^{*}(s,a)-\lim_{T\to\infty}\mathbb{E}_{\Delta\in{[n]\choose k}}\hat{Q}_{k}^{T}(s_{g},s_{\Delta},a_{g},a_{\Delta})\leq\lim_{T\to\infty}\gamma^{T}\cdot\frac{\tilde{r}}{1-\gamma}

Consequently,

Q(s,a)𝔼Δ([n]k)Q^k(sg,sΔ,ag,aΔ)0Q^{*}(s,a)-\mathbb{E}_{\Delta\in{[n]\choose k}}\hat{Q}_{k}^{*}(s_{g},s_{\Delta},a_{g},a_{\Delta})\leq 0

Further, we have that

Q(s,a)𝔼Δ([n]k)Q^k(sg,sΔ,ag,aΔ)=0,Q^{*}(s,a)-\mathbb{E}_{\Delta\in{[n]\choose k}}\hat{Q}_{k}^{*}(s_{g},s_{\Delta},a_{g},a_{\Delta})=0,

since \hat{Q}_{k}^{*}(s_{g},s_{\Delta},a_{g},a_{\Delta})\leq Q^{*}(s,a) for every \Delta\in{[n]\choose k}, which gives the reverse inequality Q^{*}(s,a)-\mathbb{E}_{\Delta\in{[n]\choose k}}\hat{Q}_{k}^{*}(s_{g},s_{\Delta},a_{g},a_{\Delta})\geq 0.

Lemma D.13.

The absolute difference between the expected maxima of \hat{Q}_{k} and \hat{Q}_{k^{\prime}} is at most the maximum of the absolute difference between the expectations of \hat{Q}_{k} and \hat{Q}_{k^{\prime}}, where the expectations are taken over any joint distribution of states \mathcal{J} and the maxima are taken over the actions.

|𝔼(sg,sΔΔ)𝒥|ΔΔ|(,|sg,ag,sΔΔ,aΔΔ)[maxag𝒜g,aΔ𝒜lkQ^kT(sg,sΔ,ag,aΔ)\displaystyle\bigg{|}\mathbb{E}_{(s_{g}^{\prime},s_{\Delta\cup\Delta^{\prime}}^{\prime})\sim\mathcal{J}_{|\Delta\cup\Delta^{\prime}|}(\cdot,\cdot|s_{g},a_{g},s_{\Delta\cup\Delta^{\prime}},a_{\Delta\cup\Delta^{\prime}})}\bigg{[}\max_{\begin{subarray}{c}a_{g}^{\prime}\in\mathcal{A}_{g},\\ a_{\Delta}^{\prime}\in\mathcal{A}_{l}^{k}\end{subarray}}\hat{Q}_{k}^{T}(s_{g}^{\prime},s_{\Delta}^{\prime},a_{g}^{\prime},a_{\Delta}^{\prime}) maxag𝒜g,aΔ𝒜lkQ^kT(sg,sΔ,ag,aΔ)]|\displaystyle-\max_{\begin{subarray}{c}a_{g}^{\prime}\in\mathcal{A}_{g},\\ a_{\Delta^{\prime}}^{\prime}\in\mathcal{A}_{l}^{k^{\prime}}\end{subarray}}\hat{Q}_{k^{\prime}}^{T}(s_{g}^{\prime},s_{\Delta^{\prime}}^{\prime},a_{g}^{\prime},a^{\prime}_{\Delta^{\prime}})\bigg{]}\bigg{|}
maxag𝒜g,aΔΔ𝒜l|ΔΔ||𝔼(sg,sΔΔ)𝒥|ΔΔ|(,|sg,ag,sΔΔ,aΔΔ)[Q^kT(sg,sΔ,ag,aΔ)\displaystyle\leq\max_{\begin{subarray}{c}a_{g}^{\prime}\in\mathcal{A}_{g},\\ a^{\prime}_{\Delta\cup\Delta^{\prime}}\in\mathcal{A}_{l}^{|\Delta\cup\Delta^{\prime}|}\end{subarray}}\bigg{|}\mathbb{E}_{(s_{g}^{\prime},s_{\Delta\cup\Delta^{\prime}}^{\prime})\sim\mathcal{J}_{|\Delta\cup\Delta^{\prime}|}(\cdot,\cdot|s_{g},a_{g},s_{\Delta\cup\Delta^{\prime}},a_{\Delta\cup\Delta^{\prime}})}\bigg{[}\hat{Q}_{k}^{T}(s_{g}^{\prime},s_{\Delta}^{\prime},a_{g}^{\prime},a_{\Delta}^{\prime}) Q^kT(sg,sΔ,ag,aΔ)]|\displaystyle-\hat{Q}_{k^{\prime}}^{T}(s_{g}^{\prime},s_{\Delta^{\prime}}^{\prime},a_{g}^{\prime},a_{\Delta^{\prime}}^{\prime})\bigg{]}\bigg{|}
Proof.

Denote:

ag,aΔ:=argmaxag𝒜g,aΔ𝒜lkQ^kT(sg,FsΔ,ag,aΔ)a_{g}^{*},a_{\Delta}^{*}:=\arg\max_{\begin{subarray}{c}a_{g}^{\prime}\in\mathcal{A}_{g},\\ a_{\Delta}^{\prime}\in\mathcal{A}_{l}^{k}\end{subarray}}\hat{Q}_{k}^{T}(s_{g}^{\prime},F_{s^{\prime}_{\Delta}},a_{g}^{\prime},a_{\Delta}^{\prime})
a~g,a~Δ:=argmaxag𝒜g,aΔ𝒜lkQ^kT(sg,FsΔ,ag,aΔ)\tilde{a}_{g}^{*},\tilde{a}_{\Delta^{\prime}}^{*}:=\arg\max_{\begin{subarray}{c}a_{g}^{\prime}\in\mathcal{A}_{g},\\ a_{\Delta^{\prime}}^{\prime}\in\mathcal{A}_{l}^{k^{\prime}}\end{subarray}}\hat{Q}_{k^{\prime}}^{T}(s_{g}^{\prime},F_{s^{\prime}_{\Delta^{\prime}}},a_{g}^{\prime},a_{\Delta^{\prime}}^{\prime})

We extend a_{\Delta}^{*} to a_{\Delta^{\prime}}^{*} by letting a_{\Delta^{\prime}\setminus\Delta}^{*} be the corresponding \Delta^{\prime}\setminus\Delta coordinates of \tilde{a}_{\Delta^{\prime}}^{*} in \mathcal{A}_{l}^{|\Delta^{\prime}\setminus\Delta|}. For the remainder of this proof, we adopt the shorthand \mathbb{E}_{s_{g}^{\prime},s_{\Delta\cup\Delta^{\prime}}^{\prime}} to refer to \mathbb{E}_{(s_{g}^{\prime},s_{\Delta\cup\Delta^{\prime}}^{\prime})\sim\mathcal{J}_{|\Delta\cup\Delta^{\prime}|}(\cdot,\cdot|s_{g},a_{g},s_{\Delta\cup\Delta^{\prime}},a_{\Delta\cup\Delta^{\prime}})}. Then, if \mathbb{E}_{s_{g}^{\prime},s_{\Delta\cup\Delta^{\prime}}^{\prime}}\max_{a_{g}^{\prime}\in\mathcal{A}_{g},a_{\Delta}^{\prime}\in\mathcal{A}_{l}^{k}}\hat{Q}_{k}^{T}(s_{g}^{\prime},F_{z^{\prime}_{\Delta}},a_{g}^{\prime})-\mathbb{E}_{s_{g}^{\prime},s_{\Delta\cup\Delta^{\prime}}^{\prime}}\max_{a_{g}^{\prime}\in\mathcal{A}_{g},a_{\Delta^{\prime}}^{\prime}\in\mathcal{A}_{l}^{k^{\prime}}}\hat{Q}_{k^{\prime}}^{T}(s_{g}^{\prime},F_{z^{\prime}_{\Delta^{\prime}}},a_{g}^{\prime})>0, we have:

|𝔼sg,sΔΔmaxag𝒜g,aΔ𝒜lkQ^kT(sg,FzΔ,ag)\displaystyle\bigg{|}\mathbb{E}_{s_{g}^{\prime},s_{\Delta\cup\Delta^{\prime}}^{\prime}}\max_{a_{g}^{\prime}\in\mathcal{A}_{g},a_{\Delta}^{\prime}\in\mathcal{A}_{l}^{k}}\hat{Q}_{k}^{T}(s_{g}^{\prime},F_{z^{\prime}_{\Delta}},a_{g}^{\prime}) 𝔼sg,sΔΔmaxag𝒜g,aΔ𝒜lkQ^kT(sg,FzΔ,ag)|\displaystyle-\mathbb{E}_{s_{g}^{\prime},s_{\Delta\cup\Delta^{\prime}}^{\prime}}\max_{a_{g}^{\prime}\in\mathcal{A}_{g},a_{\Delta^{\prime}}^{\prime}\in\mathcal{A}_{l}^{k^{\prime}}}\hat{Q}_{k^{\prime}}^{T}(s_{g}^{\prime},F_{z^{\prime}_{\Delta^{\prime}}},a_{g}^{\prime})\bigg{|}
=𝔼sg,sΔΔQ^kT(sg,sΔ,ag,aΔ)𝔼sg,sΔΔQ^kT(sg,sΔ,a~g,a~Δ)\displaystyle=\mathbb{E}_{s_{g}^{\prime},s_{\Delta\cup\Delta^{\prime}}^{\prime}}\hat{Q}_{k}^{T}(s_{g}^{\prime},s_{\Delta}^{\prime},a_{g}^{*},a^{*}_{\Delta})-\mathbb{E}_{s_{g}^{\prime},s_{\Delta\cup\Delta^{\prime}}^{\prime}}\hat{Q}_{k^{\prime}}^{T}(s_{g}^{\prime},s_{\Delta^{\prime}}^{\prime},\tilde{a}_{g}^{*},\tilde{a}^{*}_{\Delta^{\prime}})
𝔼sg,sΔΔQ^Tk(sg,sΔ,ag,aΔ)𝔼sg,sΔΔQ^kT(sg,sΔ,ag,aΔ)\displaystyle\leq\mathbb{E}_{s_{g}^{\prime},s_{\Delta\cup\Delta^{\prime}}^{\prime}}\hat{Q}^{T}_{k}(s_{g}^{\prime},s^{\prime}_{\Delta},a_{g}^{*},a_{\Delta}^{*})-\mathbb{E}_{s_{g}^{\prime},s_{\Delta\cup\Delta^{\prime}}^{\prime}}\hat{Q}_{k^{\prime}}^{T}(s_{g}^{\prime},s_{\Delta^{\prime}}^{\prime},a_{g}^{*},a_{\Delta^{\prime}}^{*})
maxag𝒜g,aΔΔ𝒜l|ΔΔ||𝔼sg,sΔΔQ^kT(sg,sΔ,ag,aΔ)𝔼sg,sΔΔQ^kT(sg,sΔ,ag,aΔ)|\displaystyle\leq\max_{\begin{subarray}{c}a_{g}^{\prime}\in\mathcal{A}_{g},\\ a_{\Delta\cup\Delta^{\prime}}\in\mathcal{A}_{l}^{|\Delta\cup\Delta^{\prime}|}\end{subarray}}\bigg{|}\mathbb{E}_{s_{g}^{\prime},s_{\Delta\cup\Delta^{\prime}}^{\prime}}\hat{Q}_{k}^{T}(s_{g}^{\prime},s^{\prime}_{\Delta},a_{g}^{\prime},a_{\Delta}^{\prime})-\mathbb{E}_{s_{g}^{\prime},s_{\Delta\cup\Delta^{\prime}}^{\prime}}\hat{Q}_{k^{\prime}}^{T}(s_{g}^{\prime},s^{\prime}_{\Delta^{\prime}},a_{g}^{\prime},a_{\Delta^{\prime}}^{\prime})\bigg{|}

Similarly, if 𝔼sg,sΔΔmaxag𝒜g,aΔ𝒜lkQ^kT(sg,sΔ,ag,aΔ)𝔼sg,sΔΔmaxag𝒜g,aΔ𝒜lkQ^kT(sg,sΔ,ag,aΔ)<0\mathbb{E}_{s_{g}^{\prime},s_{\Delta\cup\Delta^{\prime}}^{\prime}}\max_{a_{g}^{\prime}\in\mathcal{A}_{g},a_{\Delta}^{\prime}\in\mathcal{A}_{l}^{k}}\hat{Q}_{k}^{T}(s_{g}^{\prime},s^{\prime}_{\Delta},a_{g}^{\prime},a_{\Delta}^{\prime})-\mathbb{E}_{s_{g}^{\prime},s_{\Delta\cup\Delta^{\prime}}^{\prime}}\max_{a_{g}^{\prime}\in\mathcal{A}_{g},a_{\Delta^{\prime}}^{\prime}\in\mathcal{A}_{l}^{k^{\prime}}}\hat{Q}_{k^{\prime}}^{T}(s_{g}^{\prime},s_{\Delta^{\prime}}^{\prime},a_{g}^{\prime},a_{\Delta^{\prime}}^{\prime})<0, an analogous argument by replacing aga_{g}^{*} with a~g\tilde{a}_{g}^{*} and aΔa_{\Delta}^{*} with a~Δ\tilde{a}_{\Delta}^{*} yields an identical bound. ∎
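The following minimal Python sketch (purely illustrative; the action-set size and values are hypothetical) checks the elementary deterministic fact underlying Lemma D.13 and Lemma D.14: over a shared action set, a difference of maxima is at most the maximum pointwise difference.

    import numpy as np

    # Over a shared finite action set, a difference of maxima is bounded by the
    # maximum pointwise difference.
    rng = np.random.default_rng(7)
    f = rng.normal(size=20)   # stands in for one expected Q-hat, indexed by actions
    g = rng.normal(size=20)   # stands in for the other expected Q-hat
    assert abs(f.max() - g.max()) <= np.abs(f - g).max() + 1e-12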

Lemma D.14.

Suppose $z,z^{\prime}\geq 1$. Consider functions $\Gamma:\Theta_{1}\times\Theta_{2}\times\dots\times\Theta_{z}\times\Theta^{*}\to\mathbb{R}$ and $\Gamma^{\prime}:\Theta_{1}^{\prime}\times\Theta_{2}^{\prime}\times\dots\times\Theta^{\prime}_{z^{\prime}}\times\Theta^{*}\to\mathbb{R}$, where $\Theta_{1},\dots,\Theta_{z}$ and $\Theta_{1}^{\prime},\dots,\Theta^{\prime}_{z^{\prime}}$ are finite sets. Consider probability distributions $\mu_{\Theta_{i}}$ on $\Theta_{i}$ for each $i\in[z]$ and $\mu^{\prime}_{\Theta^{\prime}_{i}}$ on $\Theta^{\prime}_{i}$ for each $i\in[z^{\prime}]$. Then:

|𝔼θ1μΘ1θzμΘzmaxθΘΓ(θ1,,θz,θ)\displaystyle\bigg{|}\mathbb{E}_{\begin{subarray}{c}\theta_{1}\sim\mu_{\Theta_{1}}\\ \dots\\ \theta_{z}\sim\mu_{\Theta_{z}}\end{subarray}}\max_{\theta^{*}\in\Theta^{*}}\Gamma(\theta_{1},\dots,\theta_{z},\theta^{*}) 𝔼θ1μΘ1θzμΘzmaxθΘΓ(θ1,,θz,θ)|\displaystyle-\mathbb{E}_{\begin{subarray}{c}\theta_{1}\sim\mu^{\prime}_{\Theta_{1}}\\ \dots\\ \theta_{z^{\prime}}\sim\mu^{\prime}_{\Theta_{z^{\prime}}}\end{subarray}}\max_{{\theta}^{*}\in\Theta^{*}}\Gamma^{\prime}(\theta_{1},\dots,\theta_{z^{\prime}},\theta^{*})\bigg{|}
maxθΘ|𝔼θ1μΘ1θzμΘzΓ(θ1,,θz,θ)𝔼θ1μΘ1θzμΘzΓ(θ1,,θz,θ)|\displaystyle\leq\max_{\theta^{*}\in\Theta^{*}}\bigg{|}\mathbb{E}_{\begin{subarray}{c}\theta_{1}\sim\mu_{\Theta_{1}}\\ \dots\\ \theta_{z}\sim\mu_{\Theta_{z}}\end{subarray}}\Gamma(\theta_{1},\dots,\theta_{z},\theta^{*})-\mathbb{E}_{\begin{subarray}{c}\theta_{1}\sim\mu^{\prime}_{\Theta_{1}}\\ \dots\\ \theta_{z^{\prime}}\sim\mu^{\prime}_{\Theta_{z^{\prime}}}\end{subarray}}\Gamma^{\prime}(\theta_{1},\dots,\theta_{z^{\prime}},\theta^{*})\bigg{|}
Proof.

Let θ^:=argmaxθΘΓ(θ1,,θz,θ)\hat{\theta}^{*}:=\arg\max_{\theta^{*}\in\Theta^{*}}\Gamma(\theta_{1},\dots,\theta_{z},\theta^{*}) and θ~:=argmaxθΘΓ(θ1,,θz,θ)\tilde{\theta}^{*}:=\arg\max_{\theta^{*}\in\Theta^{*}}\Gamma^{\prime}(\theta_{1},\dots,\theta_{z^{\prime}},\theta^{*}).

If 𝔼θ1μΘ1,,θzμΘzmaxθΘΓ(θ1,,θz,θ)𝔼θ1μΘ1,,θzμΘzmaxθΘΓ(θ1,,θz,θ)>0\mathbb{E}_{\begin{subarray}{c}\theta_{1}\sim\mu_{\Theta_{1}},\dots,\theta_{z}\sim\mu_{\Theta_{z}}\end{subarray}}\max_{\theta^{*}\in\Theta^{*}}\Gamma(\theta_{1},\dots,\theta_{z},\theta^{*})-\mathbb{E}_{\begin{subarray}{c}\theta_{1}\sim\mu^{\prime}_{\Theta^{\prime}_{1}},\dots,\theta_{z^{\prime}}\sim\mu^{\prime}_{\Theta^{\prime}_{z^{\prime}}}\end{subarray}}\max_{{\theta}^{*}\in\Theta^{*}}\Gamma^{\prime}(\theta_{1},\dots,\theta_{z^{\prime}},\theta^{*})>0, then:

|𝔼θ1μΘ1θzμΘzmaxθΘΓ(θ1,,θz,θ)\displaystyle\bigg{|}\mathbb{E}_{\begin{subarray}{c}\theta_{1}\sim\mu_{\Theta_{1}}\\ \dots\\ \theta_{z}\sim\mu_{\Theta_{z}}\end{subarray}}\max_{\theta^{*}\in\Theta^{*}}\Gamma(\theta_{1},\dots,\theta_{z},\theta^{*}) 𝔼θ1μΘ1θzμΘzmaxθΘΓ(θ1,,θz,θ)|\displaystyle-\mathbb{E}_{\begin{subarray}{c}\theta_{1}\sim\mu^{\prime}_{\Theta^{\prime}_{1}}\\ \dots\\ \theta_{z^{\prime}}\sim\mu^{\prime}_{\Theta^{\prime}_{z^{\prime}}}\end{subarray}}\max_{{\theta}^{*}\in\Theta^{*}}\Gamma^{\prime}(\theta_{1},\dots,\theta_{z^{\prime}},\theta^{*})\bigg{|}
=𝔼θ1μΘ1θzμΘzΓ(θ1,,θz,θ^)𝔼θ1μΘ1θzμΘzΓ(θ1,,θz,θ~)\displaystyle=\mathbb{E}_{\begin{subarray}{c}\theta_{1}\sim\mu_{\Theta_{1}}\\ \dots\\ \theta_{z}\sim\mu_{\Theta_{z}}\end{subarray}}\Gamma(\theta_{1},\dots,\theta_{z},\hat{\theta}^{*})-\mathbb{E}_{\begin{subarray}{c}\theta_{1}\sim\mu^{\prime}_{\Theta^{\prime}_{1}}\\ \dots\\ \theta_{z^{\prime}}\sim\mu^{\prime}_{\Theta^{\prime}_{z^{\prime}}}\end{subarray}}\Gamma^{\prime}(\theta_{1},\dots,\theta_{z^{\prime}},\tilde{\theta}^{*})
𝔼θ1μΘ1θzμΘzΓ(θ1,,θz,θ^)𝔼θ1μΘ1θzμΘzΓ(θ1,,θz,θ^)\displaystyle\leq\mathbb{E}_{\begin{subarray}{c}\theta_{1}\sim\mu_{\Theta_{1}}\\ \dots\\ \theta_{z}\sim\mu_{\Theta_{z}}\end{subarray}}\Gamma(\theta_{1},\dots,\theta_{z},\hat{\theta}^{*})-\mathbb{E}_{\begin{subarray}{c}\theta_{1}\sim\mu^{\prime}_{\Theta^{\prime}_{1}}\\ \dots\\ \theta_{z^{\prime}}\sim\mu^{\prime}_{\Theta^{\prime}_{z^{\prime}}}\end{subarray}}\Gamma^{\prime}(\theta_{1},\dots,\theta_{z^{\prime}},\hat{\theta}^{*})
maxθΘ|𝔼θ1μΘ1θzμΘzΓ(θ1,,θz,θ)𝔼θ1μΘ1θzμΘzΓ(θ1,,θz,θ)|\displaystyle\leq\max_{\theta^{*}\in\Theta^{*}}\bigg{|}\mathbb{E}_{\begin{subarray}{c}\theta_{1}\sim\mu_{\Theta_{1}}\\ \dots\\ \theta_{z}\sim\mu_{\Theta_{z}}\end{subarray}}\Gamma(\theta_{1},\dots,\theta_{z},{\theta}^{*})-\mathbb{E}_{\begin{subarray}{c}\theta_{1}\sim\mu^{\prime}_{\Theta^{\prime}_{1}}\\ \dots\\ \theta_{z^{\prime}}\sim\mu^{\prime}_{\Theta^{\prime}_{z^{\prime}}}\end{subarray}}\Gamma^{\prime}(\theta_{1},\dots,\theta_{z^{\prime}},{\theta}^{*})\bigg{|}

In the first step, we substitute the maximizers $\hat{\theta}^{*}$ and $\tilde{\theta}^{*}$ into their respective terms. In the second step, we replace $\tilde{\theta}^{*}$ with $\hat{\theta}^{*}$; since $\tilde{\theta}^{*}$ maximizes the subtracted term, this substitution can only increase the difference. Finally, we bound the resulting difference, evaluated at the common choice $\hat{\theta}^{*}$, by the maximum over $\theta^{*}\in\Theta^{*}$ of the absolute difference.

If 𝔼θ1μΘ1,,θzμΘzmaxθΘΓ(θ1,,θz,θ)𝔼θ1μΘ1,,θzμΘzmaxθΘΓ(θ1,,θz,θ)<0\mathbb{E}_{\begin{subarray}{c}\theta_{1}\sim\mu_{\Theta_{1}},\dots,\theta_{z}\sim\mu_{\Theta_{z}}\end{subarray}}\max_{\theta^{*}\in\Theta^{*}}\Gamma(\theta_{1},\dots,\theta_{z},\theta^{*})-\mathbb{E}_{\begin{subarray}{c}\theta_{1}\sim\mu^{\prime}_{\Theta^{\prime}_{1}},\dots,\theta_{z^{\prime}}\sim\mu^{\prime}_{\Theta^{\prime}_{z^{\prime}}}\end{subarray}}\max_{{\theta}^{*}\in\Theta^{*}}\Gamma^{\prime}(\theta_{1},\dots,\theta_{z^{\prime}},\theta^{*})<0, then an analogous argument that replaces θ^\hat{\theta}^{*} with θ~\tilde{\theta}^{*} yields the same result. ∎

Appendix E Bounding Total Variation Distance

As $|\Delta|\to n$, we prove that the total variation (TV) distance between the empirical distributions of $z_{\Delta}$ and of $z_{[n]}$ goes to $0$. Here, recall that $z_{i}\in\mathcal{Z}=\mathcal{S}_{l}\times\mathcal{A}_{l}$, and $z_{\Delta}=\{z_{i}:i\in\Delta\}$ for $\Delta\in{[n]\choose k}$. Before bounding the total variation distance between $F_{z_{\Delta}}$ and $F_{z_{\Delta^{\prime}}}$, we first introduce Lemma C.5 of Anand & Qu (2024), which can be viewed as a generalization of the Dvoretzky-Kiefer-Wolfowitz concentration inequality to sampling without replacement. We first make an important remark.

Remark E.1.

First, observe that if $\Delta$ is an independent random variable uniformly supported on ${[n]\choose k}$, then $s_{\Delta}$ and $a_{\Delta}$ are also independent random variables, uniformly supported on ${s_{[n]}\choose k}$ and ${a_{[n]}\choose k}$, respectively. To see this, let $\psi_{1}:[n]\to\mathcal{S}_{l}$ with $\psi_{1}(i)=s_{i}$ and $\xi_{1}:[n]\to\mathcal{A}_{l}$ with $\xi_{1}(i)=a_{i}$. These maps extend naturally to $\psi_{k}:[n]^{k}\to\mathcal{S}_{l}^{k}$, where $\psi_{k}(i_{1},\dots,i_{k})=(s_{i_{1}},\dots,s_{i_{k}})$, and $\xi_{k}:[n]^{k}\to\mathcal{A}_{l}^{k}$, where $\xi_{k}(i_{1},\dots,i_{k})=(a_{i_{1}},\dots,a_{i_{k}})$, for all $k\in[n]$. The independence of $\Delta$ implies the independence of the $\sigma$-algebra it generates. Further, since $\psi_{k}$ and $\xi_{k}$ are measurable, the $\sigma$-algebras generated by $s_{\Delta}=\psi_{k}(\Delta)$ and $a_{\Delta}=\xi_{k}(\Delta)$ are sub-$\sigma$-algebras of the one generated by $\Delta$, implying that $s_{\Delta}$ and $a_{\Delta}$ must also be independent random variables.

For reference, we present the multidimensional Dvoretzky-Kiefer-Wolfowitz (DKW) inequality (Dvoretzky et al., 1956; Massart, 1990; Naaman, 2021), which bounds the deviation between the empirical distribution of a sample $B_{\Delta}$ and that of the population $B_{[n]}$, when each element of $\Delta$, for $|\Delta|=k$, is sampled uniformly at random from $[n]$ with replacement.

Theorem E.2 (Multi-dimensional Dvoretzky-Kiefer-Wolfowitz (DKW) inequality (Dvoretzky et al., 1956; Naaman, 2021)).

Suppose $B\subset\mathbb{R}^{d}$ and $\epsilon>0$. If $\Delta$ is sampled uniformly at random from $[n]$ with replacement, then

\Pr\left[\sup_{x\in B}\left|\frac{1}{|\Delta|}\sum_{i\in\Delta}\mathbbm{1}\{B_{i}=x\}-\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}\{B_{i}=x\}\right|<\epsilon\right]\geq 1-d(n+1)e^{-2|\Delta|\epsilon^{2}}.

Lemma C.5 of (Anand & Qu, 2024) generalizes the DKW inequality for sampling without replacement:

Lemma E.3 (Sampling without replacement analogue of the DKW inequality, Lemma C.5 in (Anand & Qu, 2024)).

Consider a finite population $\mathcal{X}=(x_{1},\dots,x_{n})\in\mathcal{B}_{l}^{n}$, where $\mathcal{B}_{l}$ is a finite set. Let $\Delta\subseteq[n]$ be a random sample of size $k$ chosen uniformly and without replacement. Then:

Pr[supxl|1|Δ|iΔ𝟙{xi=x}1ni[n]𝟙{xi=x}|<ϵ]12|l|e2|Δ|nϵ2n|Δ|+1\Pr\left[\sup_{x\in\mathcal{B}_{l}}\left|\frac{1}{|\Delta|}\sum_{i\in\Delta}\mathbbm{1}{\{x_{i}=x\}}-\frac{1}{n}\sum_{i\in[n]}\mathbbm{1}{\{x_{i}=x\}}\right|<\epsilon\right]\geq 1-2|\mathcal{B}_{l}|e^{-\frac{2|\Delta|n\epsilon^{2}}{n-|\Delta|+1}}
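To make Lemma E.3 concrete, the following self-contained snippet estimates the deviation probability by Monte Carlo and compares it with the stated bound; the population, the alphabet size $|\mathcal{B}_{l}|$, and the parameters $(n,k,\epsilon)$ are hypothetical choices for illustration only.

```python
# Monte Carlo illustration of Lemma E.3 (sampling without replacement).
# The population, alphabet size |B_l|, and parameters (n, k, eps) are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

n, k, eps, trials = 200, 40, 0.15, 20_000
alphabet = 4                                     # |B_l|
population = rng.integers(alphabet, size=n)      # x_1, ..., x_n
pop_freq = np.bincount(population, minlength=alphabet) / n

deviations = np.empty(trials)
for t in range(trials):
    sample = rng.choice(population, size=k, replace=False)   # Delta, without replacement
    samp_freq = np.bincount(sample, minlength=alphabet) / k
    deviations[t] = np.abs(samp_freq - pop_freq).max()       # sup_x deviation

empirical_failure = (deviations >= eps).mean()
bound = 2 * alphabet * np.exp(-2 * k * n * eps**2 / (n - k + 1))
print(f"empirical Pr[sup deviation >= {eps}] ~ {empirical_failure:.4f}")
print(f"Lemma E.3 bound on this probability:  {min(bound, 1.0):.4f}")
```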
Lemma E.4.

With probability at least $1-2|\mathcal{S}_{l}||\mathcal{A}_{l}|e^{-\frac{8kn\epsilon^{2}}{n-k+1}}$,

TV(FzΔ,Fz[n])ϵ\mathrm{TV}(F_{z_{\Delta}},F_{z_{[n]}})\leq\epsilon
Proof.

Recall that z:=(sl,al)𝒮l×𝒜lz:=(s_{l},a_{l})\in\mathcal{S}_{l}\times\mathcal{A}_{l}. From Lemma E.3, substituting l=𝒵l\mathcal{B}_{l}=\mathcal{Z}_{l} yields:

Pr[supzl𝒵l|FzΔ(zl)Fz[n](zl)|2ϵ]\displaystyle\Pr\left[\sup_{z_{l}\in\mathcal{Z}_{l}}\left|F_{z_{\Delta}}(z_{l})-F_{z_{[n]}}(z_{l})\right|\leq 2\epsilon\right] 12|𝒵l|e8knϵ2nk+1\displaystyle\geq 1-2|\mathcal{Z}_{l}|e^{-\frac{8kn\epsilon^{2}}{n-k+1}}
=12|𝒮l||𝒜l|e8knϵ2nk+1,\displaystyle=1-2|\mathcal{S}_{l}||\mathcal{A}_{l}|e^{-\frac{8kn\epsilon^{2}}{n-k+1}},

which yields the proof.∎

We now present an alternate bound for the total variation distance, where the distance actually goes to 0 as |Δ|n|\Delta|\to n. For this, we use the fact that the total variation distance between two product distributions is subadditive.

Lemma E.5 (Lemma B.8.1 of (Ghosal & van der Vaart, 2017). Subadditivity of TV distance for Product Distributions).

Let $P$ and $Q$ be product distributions over some domain $S$. Let $\alpha_{1},\dots,\alpha_{d}$ be the marginal distributions of $P$ and $\beta_{1},\dots,\beta_{d}$ be the marginal distributions of $Q$. Then,

PQ1i=1dαiβi1\|P-Q\|_{1}\leq\sum_{i=1}^{d}\|\alpha_{i}-\beta_{i}\|_{1}
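As a quick sanity check of Lemma E.5, the snippet below verifies the subadditivity bound numerically for randomly generated product distributions with two marginals each; all distributions are synthetic toys used only for illustration.

```python
# Numerical check of Lemma E.5 for product distributions with two marginals:
# the L1 distance between the products is at most the sum of the marginal L1 distances.
import numpy as np

rng = np.random.default_rng(2)

def rand_dist(m):
    p = rng.random(m)
    return p / p.sum()

for _ in range(1000):
    a1, a2 = rand_dist(4), rand_dist(5)
    b1, b2 = rand_dist(4), rand_dist(5)
    P, Q = np.outer(a1, a2), np.outer(b1, b2)     # product distributions
    lhs = np.abs(P - Q).sum()
    rhs = np.abs(a1 - b1).sum() + np.abs(a2 - b2).sum()
    assert lhs <= rhs + 1e-12
print("subadditivity verified on all random trials")
```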

Furthermore, we will need the following lemma.

Lemma E.6 (Data Processing Inequality).

Let $A$ and $B$ be random variables over some domain $S$. Let $f$ be a (not necessarily deterministic) function mapping $S$ to any codomain $T$. Then,

TV(f(A),f(B))TV(A,B)\mathrm{TV}(f(A),f(B))\leq\mathrm{TV}(A,B)
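The data processing inequality of Lemma E.6 can also be checked numerically in the special case of a deterministic map $f$; the distributions and the map in the sketch below are random toys, not objects from our construction.

```python
# Numerical check of the data processing inequality (Lemma E.6) for a deterministic map f:
# pushing two distributions through f never increases their total variation distance.
import numpy as np

rng = np.random.default_rng(3)

def rand_dist(m):
    p = rng.random(m)
    return p / p.sum()

for _ in range(1000):
    m, t = 8, 3
    p, q = rand_dist(m), rand_dist(m)
    f = rng.integers(t, size=m)                      # deterministic map [m] -> [t]
    fp = np.bincount(f, weights=p, minlength=t)      # pushforward of p under f
    fq = np.bincount(f, weights=q, minlength=t)
    assert 0.5 * np.abs(fp - fq).sum() <= 0.5 * np.abs(p - q).sum() + 1e-12
print("data processing inequality verified on all random trials")
```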
Lemma E.7.
TV(FzΔ,Fz[n])1|Δ|n\mathrm{TV}(F_{z_{\Delta}},F_{z_{[n]}})\leq\sqrt{1-\frac{|\Delta|}{n}}
Proof.

By the symmetry of the total variation distance metric, we have that

TV(Fz[n],FzΔ)=TV(FzΔ,Fz[n]).\mathrm{TV}(F_{z_{[n]}},F_{z_{\Delta}})=\mathrm{TV}(F_{z_{\Delta}},F_{z_{[n]}}).

From the Bretagnolle-Huber inequality (Tsybakov, 2008), we have that

\mathrm{TV}(f,g)\leq\sqrt{1-e^{-D_{\mathrm{KL}}(f\|g)}}.

Here, $D_{\mathrm{KL}}(f\|g)$ is the Kullback-Leibler (KL) divergence between probability distributions $f$ and $g$ over the sample space, which we denote by $\mathcal{X}$, and is given by

DKL(fg):=x𝒳f(x)lnf(x)g(x)D_{\mathrm{KL}}(f\|g):=\sum_{x\in\mathcal{X}}f(x)\ln\frac{f(x)}{g(x)} (26)

Thus, from Equation 26:

DKL(FzΔFz[n])\displaystyle D_{\mathrm{KL}}(F_{z_{\Delta}}\|F_{z_{[n]}}) =z𝒵l(1|Δ|iΔ𝟙{zi=z})lnniΔ𝟙{zi=z}|Δ|i[n]𝟙{zi=z}\displaystyle=\sum_{z\in\mathcal{Z}_{l}}\left(\frac{1}{|\Delta|}\sum_{i\in\Delta}\mathbbm{1}\{z_{i}=z\}\right)\ln\frac{n\sum_{i\in\Delta}\mathbbm{1}\{z_{i}=z\}}{|\Delta|\sum_{i\in[n]}\mathbbm{1}\{z_{i}=z\}}
=1|Δ|z𝒵l(iΔ𝟙{zi=z})lnn|Δ|\displaystyle=\frac{1}{|\Delta|}\sum_{z\in\mathcal{Z}_{l}}\left(\sum_{i\in\Delta}\mathbbm{1}\{z_{i}=z\}\right)\ln\frac{n}{|\Delta|}
+1|Δ|z𝒵l(iΔ𝟙{zi=z})lniΔ𝟙{zi=z}i[n]𝟙{zi=z}\displaystyle\quad\quad\quad\quad\quad\quad+\frac{1}{|\Delta|}\sum_{z\in\mathcal{Z}_{l}}\left(\sum_{i\in\Delta}\mathbbm{1}\{z_{i}=z\}\right)\ln\frac{\sum_{i\in\Delta}\mathbbm{1}\{z_{i}=z\}}{\sum_{i\in[n]}\mathbbm{1}\{z_{i}=z\}}
=lnn|Δ|+1|Δ|z𝒵l(iΔ𝟙{zi=z})lniΔ𝟙{zi=z}i[n]𝟙{zi=z}\displaystyle=\ln\frac{n}{|\Delta|}+\frac{1}{|\Delta|}\sum_{z\in\mathcal{Z}_{l}}\left(\sum_{i\in\Delta}\mathbbm{1}\{z_{i}=z\}\right)\ln\frac{\sum_{i\in\Delta}\mathbbm{1}\{z_{i}=z\}}{\sum_{i\in[n]}\mathbbm{1}\{z_{i}=z\}}
ln(n|Δ|)\displaystyle\leq\ln\left(\frac{n}{|\Delta|}\right)

In the third line, we note that $\sum_{z\in\mathcal{Z}_{l}}\sum_{i\in\Delta}\mathbbm{1}\{z_{i}=z\}=|\Delta|$, since each local agent contained in $\Delta$ has exactly one state-action pair in $\mathcal{Z}_{l}$. In the last line, we note that $\sum_{i\in\Delta}\mathbbm{1}\{z_{i}=z\}\leq\sum_{i\in[n]}\mathbbm{1}\{z_{i}=z\}$ for all $z\in\mathcal{Z}_{l}$, and thus the sum of logarithmic terms in the third line is nonpositive.

Finally, substituting this bound into the Bretagnolle-Huber inequality gives $\mathrm{TV}(F_{z_{\Delta}},F_{z_{[n]}})\leq\sqrt{1-e^{-\ln(n/|\Delta|)}}=\sqrt{1-|\Delta|/n}$, which yields the lemma.∎
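The bound of Lemma E.7 can be checked exactly on finite populations: the snippet below computes the total variation distance between the empirical distribution of a uniformly drawn subset $\Delta$ and that of the full population, and verifies that it never exceeds $\sqrt{1-|\Delta|/n}$. The population of local state-action pairs is a hypothetical example.

```python
# Exact check of Lemma E.7: TV between the empirical distributions of z_Delta and z_[n]
# never exceeds sqrt(1 - |Delta|/n). The population below is a hypothetical example.
import numpy as np

rng = np.random.default_rng(4)

n, zl_size = 120, 6                        # |Z_l| = |S_l| x |A_l|, say 6
z = rng.integers(zl_size, size=n)          # z_i for i in [n]
full = np.bincount(z, minlength=zl_size) / n

for k in (10, 30, 60, 90, 120):
    for _ in range(1000):
        delta = rng.choice(n, size=k, replace=False)
        sub = np.bincount(z[delta], minlength=zl_size) / k
        tv = 0.5 * np.abs(sub - full).sum()
        assert tv <= np.sqrt(1 - k / n) + 1e-12
    print(f"k = {k:3d}: bound sqrt(1 - k/n) = {np.sqrt(1 - k / n):.3f}")
```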

Corollary E.8.

From Lemma E.7, setting Δ=[n]\Delta=[n] also recovers TV(FzΔ,Fz[n])=0\mathrm{TV}(F_{z_{\Delta}},F_{z_{[n]}})=0.

Theorem E.9.

With probability at least $1-\delta$ for $\delta\in(0,1)$:

|Q^k(sg,FsΔ,ag,FaΔ)Q^n(sg,Fs[n],ag,Fa[n])|ln2|𝒮l||𝒜l|δ1γnk+18knrl(,)\left|\hat{Q}_{k}^{*}(s_{g},F_{s_{\Delta}},a_{g},F_{a_{\Delta}})-\hat{Q}_{n}^{*}(s_{g},F_{s_{[n]}},a_{g},F_{a_{[n]}})\right|\leq\frac{\ln\frac{2|\mathcal{S}_{l}||\mathcal{A}_{l}|}{\delta}}{1-\gamma}\sqrt{\frac{n-k+1}{8kn}}\cdot\|r_{l}(\cdot,\cdot)\|_{\infty}
Proof.

From combining the total variation distance bound in Lemma E.4 and the Lipschitz continuity bound in Theorem D.3 with $\sum_{t=0}^{T}\gamma^{t}\leq\frac{1}{1-\gamma}$ for $\gamma\in(0,1)$, we have:

Pr[|Q^k(sg,FsΔ,ag,FaΔ)Q^n(sg,Fs[n],ag,Fa[n])|2ϵ1γrl(,)]12|𝒮l||𝒜l|e8knϵ2nk+1\Pr\left[|\hat{Q}_{k}^{*}(s_{g},F_{s_{\Delta}},a_{g},F_{a_{\Delta}})-\hat{Q}_{n}^{*}(s_{g},F_{s_{[n]}},a_{g},F_{a_{[n]}})|\leq\frac{2\epsilon}{1-\gamma}\cdot\|r_{l}(\cdot,\cdot)\|_{\infty}\right]\geq 1-2|\mathcal{S}_{l}||\mathcal{A}_{l}|e^{-\frac{8kn\epsilon^{2}}{n-k+1}}

We then set $2|\mathcal{S}_{l}||\mathcal{A}_{l}|e^{-\frac{8kn\epsilon^{2}}{n-k+1}}=\delta$, so that the success probability becomes $1-\delta$ and $\epsilon=\sqrt{\frac{n-k+1}{8kn}\ln\left(\frac{2|\mathcal{S}_{l}||\mathcal{A}_{l}|}{\delta}\right)}$, which gives:

Pr[|Q^k(sg,FsΔ,ag,FaΔ)Q^n(sg,Fs[n],ag,Fa[n])|ln2|𝒮l||𝒜l|δ1γnk+18knrl(,)]1δ,\Pr\left[\left|\hat{Q}_{k}^{*}(s_{g},F_{s_{\Delta}},a_{g},F_{a_{\Delta}})-\hat{Q}_{n}^{*}(s_{g},F_{s_{[n]}},a_{g},F_{a_{[n]}})\right|\leq\frac{\ln\frac{2|\mathcal{S}_{l}||\mathcal{A}_{l}|}{\delta}}{1-\gamma}\sqrt{\frac{n-k+1}{8kn}}\cdot\|r_{l}(\cdot,\cdot)\|_{\infty}\right]\geq 1-\delta,

proving the claim.∎
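For intuition on the scale of Theorem E.9, the following snippet evaluates the high-probability bound as a function of the subsample size $k$; all parameter values ($n$, $\delta$, $\gamma$, $|\mathcal{S}_{l}|$, $|\mathcal{A}_{l}|$, $\|r_{l}\|_{\infty}$) are hypothetical placeholders.

```python
# Evaluating the high-probability bound of Theorem E.9 for several subsample sizes k.
# All parameter values are hypothetical placeholders.
import math

def q_gap_bound(n, k, delta, gamma, S_l, A_l, r_inf):
    return (math.log(2 * S_l * A_l / delta) / (1 - gamma)
            * math.sqrt((n - k + 1) / (8 * k * n)) * r_inf)

n, delta, gamma, S_l, A_l, r_inf = 100, 0.05, 0.9, 5, 3, 1.0
for k in (5, 10, 25, 50, 100):
    print(f"k = {k:3d}: bound = {q_gap_bound(n, k, delta, gamma, S_l, A_l, r_inf):.3f}")
```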

Appendix F Using the Performance Difference Lemma

In general, convergence analysis requires the guarantee that a stationary optimal policy exists. Fortunately, when working with the empirical distribution function, the existence of a stationary optimal policy is guaranteed when the state/action spaces are finite or countably infinite. However, lifting the knowledge of states onto the continuous space of empirical distribution functions, and designing a policy on the lifted space, is still analytically challenging. To circumvent this, Gu et al. (2021) construct lifted $\epsilon$-nets and perform kernel regression to obtain convergence guarantees.

Our analytic approach differs starkly: we analyze the sampled structure of the mean-field empirical distribution function, rather than the structure of the lifted space. For this, we leverage the classical performance difference lemma, which we restate below for completeness.

Lemma F.1 (Performance Difference Lemma, (Kakade & Langford, 2002)).

Given policies π1,π2\pi_{1},\pi_{2}, with corresponding value functions Vπ1V^{\pi_{1}}, Vπ2V^{\pi_{2}}:

Vπ1(s)Vπ2(s)\displaystyle V^{\pi_{1}}(s)-V^{\pi_{2}}(s) =11γ𝔼sdπ1saπ1(|s)[Aπ2(s,a)]\displaystyle=\frac{1}{1-\gamma}\mathbb{E}_{\begin{subarray}{c}s^{\prime}\sim d^{\pi_{1}}_{s}\\ a^{\prime}\sim\pi_{1}(\cdot|s^{\prime})\end{subarray}}[A^{\pi_{2}}(s^{\prime},a^{\prime})]

Here, Aπ2(s,a):=Qπ2(s,a)Vπ2(s)A^{\pi_{2}}(s^{\prime},a^{\prime}):=Q^{\pi_{2}}(s^{\prime},a^{\prime})-V^{\pi_{2}}(s^{\prime}) and dsπ1(s)=(1γ)h=0γhPrhπ1[s,s]d_{s}^{\pi_{1}}(s^{\prime})=(1-\gamma)\sum_{h=0}^{\infty}\gamma^{h}\Pr_{h}^{\pi_{1}}[s^{\prime},s] where Prhπ1[s,s]\Pr_{h}^{\pi_{1}}[s^{\prime},s] is the probability of π1\pi_{1} reaching state ss^{\prime} at time step hh starting from state ss.
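Since our analysis leans on Lemma F.1, the following self-contained snippet verifies the identity numerically on a small randomly generated single-agent MDP, solving for the value functions, advantages, and discounted occupancy measure in closed form; the MDP is a toy illustration unrelated to our multi-agent setting.

```python
# Numerical verification of the performance difference lemma (Lemma F.1) on a
# small randomly generated single-agent MDP. Sizes and policies are toy choices.
import numpy as np

rng = np.random.default_rng(5)
S, A, gamma = 5, 3, 0.9

P = rng.random((S, A, S)); P /= P.sum(axis=-1, keepdims=True)   # P(s'|s,a)
r = rng.random((S, A))                                          # r(s,a)

def random_policy():
    pi = rng.random((S, A))
    return pi / pi.sum(axis=-1, keepdims=True)

def value_and_q(pi):
    P_pi = np.einsum('sa,sap->sp', pi, P)         # state kernel under pi
    r_pi = (pi * r).sum(axis=-1)
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
    Q = r + gamma * P @ V                         # Q(s,a) = r(s,a) + gamma E[V(s')]
    return V, Q

pi1, pi2 = random_policy(), random_policy()
V1, _ = value_and_q(pi1)
V2, Q2 = value_and_q(pi2)
A2 = Q2 - V2[:, None]                             # advantage A^{pi2}(s,a)

P_pi1 = np.einsum('sa,sap->sp', pi1, P)
occ = (1 - gamma) * np.linalg.inv(np.eye(S) - gamma * P_pi1)    # d_s^{pi1}(s')

lhs = V1 - V2
rhs = occ @ (pi1 * A2).sum(axis=-1) / (1 - gamma)
print("performance difference lemma verified:", np.allclose(lhs, rhs))
```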

We denote our learned policy by ${\pi}_{k,m}^{\mathrm{est}}$, where:

πk,mest(sg,s[n])=(πk,mg,est(sg,s[n]),πk,m1,est(sg,s[n]),,πk,mn,est(sg,s[n]))𝒫(𝒜g)×𝒫(𝒜l)n,{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{[n]})=({\pi}_{k,m}^{g,\mathrm{est}}(s_{g},s_{[n]}),{\pi}_{k,m}^{1,\mathrm{est}}(s_{g},s_{[n]}),\dots,{\pi}_{k,m}^{n,\mathrm{est}}(s_{g},s_{[n]}))\in\mathcal{P}(\mathcal{A}_{g})\times\mathcal{P}(\mathcal{A}_{l})^{n},

where πk,mg,est(sg,s[n])=π^k,mg,est(sg,su,FsΔu){\pi}_{k,m}^{g,\mathrm{est}}(s_{g},s_{[n]})=\hat{\pi}_{k,m}^{g,\mathrm{est}}(s_{g},s_{u},F_{s_{\Delta\setminus u}}) is the global agent’s action and πk,mi,est(sg,s[n]):=π^k,mest(sg,si,FsΔi){\pi}_{k,m}^{i,\mathrm{est}}(s_{g},s_{[n]}):=\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{i},F_{s_{\Delta_{i}}}) is the action of the ii’th local agent. Here, Δi\Delta_{i} is a random variable supported on ([n]ik1){[n]\setminus i\choose k-1}, uu is a random variable uniformly distributed on [n][n], and Δ\Delta is a random variable uniformly distributed on [n]u[n]\setminus u. Then, denote the optimal policy π\pi^{*} given by

π(s)=(πg(sg,s[n]),π1(sg,s[n]),,πn(sg,s[n]))𝒫(𝒜g)×𝒫(𝒜l)n,\pi^{*}(s)=(\pi^{*}_{g}(s_{g},s_{[n]}),\pi_{1}^{*}(s_{g},s_{[n]}),\dots,\pi_{n}^{*}(s_{g},s_{[n]}))\in\mathcal{P}(\mathcal{A}_{g})\times\mathcal{P}(\mathcal{A}_{l})^{n},

where πg(sg,s[n])\pi^{*}_{g}(s_{g},s_{[n]}) is the global agent’s action, and πi(sg,s[n]):=π(sg,si,FsΔi)\pi_{i}^{*}(s_{g},s_{[n]}):=\pi^{*}(s_{g},s_{i},F_{s_{\Delta_{i}}}) is the action of the ii’th local agent. Next, to compare the difference in the performance of π(s)\pi^{*}(s) and πk,mest(sg,s[n]){\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{[n]}), we define the value function of a policy π\pi to be the infinite-horizon γ\gamma-discounted rewards, (denoted VπV^{\pi}) as follows:

Definition F.2.

The value function Vπ:𝒮V^{\pi}:\mathcal{S}\to\mathbb{R} of a given policy π\pi, for 𝒮:=𝒮g×𝒮ln\mathcal{S}:=\mathcal{S}_{g}\times\mathcal{S}_{l}^{n} is:

Vπ(s)=𝔼a(t)π(|s(t))[t=0γtr(s(t),a(t))|s(0)=s].V^{\pi}(s)=\mathbb{E}_{\begin{subarray}{c}a(t)\sim\pi(\cdot|s(t))\end{subarray}}\left[\sum_{t=0}^{\infty}\gamma^{t}r(s(t),a(t))|s(0)=s\right]. (27)
Theorem F.3.

For the optimal policy π\pi^{*} and the learned policy πk,mest{\pi}_{k,m}^{\mathrm{est}}, for any state s0𝒮s_{0}\in\mathcal{S}, we have:

V^{\pi^{*}}(s_{0})-V^{{\pi}^{\mathrm{est}}_{k,m}}(s_{0})\leq\frac{\tilde{r}}{(1-\gamma)^{2}}\sqrt{\frac{n-k+1}{2nk}}\sqrt{\ln\frac{2|\mathcal{S}_{l}||\mathcal{A}_{l}|}{\delta}}+2\epsilon_{k,m}+\frac{2\tilde{r}}{(1-\gamma)^{2}}|\mathcal{A}_{g}|k^{|\mathcal{A}_{l}|}\delta
Proof.

Applying the performance difference lemma to the policies gives us:

Vπ(s0)Vπ~estk,m(s0)\displaystyle V^{\pi^{*}}(s_{0})-V^{\tilde{\pi}^{\mathrm{est}}_{k,m}}(s_{0}) =11γ𝔼sds0πk,mest𝔼aπk,mest(|s)[Vπ(s)Qπ(s,a)]\displaystyle=\frac{1}{1-\gamma}\mathbb{E}_{s\sim d_{s_{0}}^{{\pi}_{k,m}^{\mathrm{est}}}}\mathbb{E}_{a\sim{\pi}_{k,m}^{\mathrm{est}}(\cdot|s)}[V^{\pi^{*}}(s)-Q^{\pi^{*}}(s,a)]
=11γ𝔼sds0πk,mest[𝔼aπ(|s)Qπ(s,a)𝔼aπk,mest(|s)Qπ(s,a)]\displaystyle=\frac{1}{1-\gamma}\mathbb{E}_{s\sim d_{s_{0}}^{{\pi}_{k,m}^{\mathrm{est}}}}\bigg{[}\mathbb{E}_{a^{\prime}\sim\pi^{*}(\cdot|s)}Q^{\pi^{*}}(s,a^{\prime})-\mathbb{E}_{a\sim{\pi}_{k,m}^{\mathrm{est}}(\cdot|s)}Q^{\pi^{*}}(s,a)\bigg{]}
=11γ𝔼sds0π~k,mest[Qπ(s,π(|s))𝔼aπk,mest(|s)Qπ(s,a)]\displaystyle=\frac{1}{1-\gamma}\mathbb{E}_{s\sim d_{s_{0}}^{\tilde{\pi}_{k,m}^{\mathrm{est}}}}\bigg{[}Q^{\pi^{*}}(s,\pi^{*}(\cdot|s))-\mathbb{E}_{a\sim{\pi}_{k,m}^{\mathrm{est}}(\cdot|s)}Q^{\pi^{*}}(s,a)\bigg{]}

Next, by the law of total expectation,

𝔼aπk,mest(|s)[Q(s,a)]\displaystyle\mathbb{E}_{a\sim{\pi}_{k,m}^{\mathrm{est}}(\cdot|s)}\left[Q^{*}(s,a)\right]
=Δ([n]k)Δ1([n]1k1)Δ2([n]2k1)Δn([n]nk1)1(nk)1(n1k1)nQ(s,π^k,mest(sg,FsΔ)g,π^k,mest(sg,s1,FsΔ1),,π^k,mest(sg,sn,FsΔn))\displaystyle=\sum_{\Delta\in{[n]\choose k}}\sum_{\Delta^{1}\in{[n]\setminus 1\choose k-1}}\sum_{\Delta^{2}\in{[n]\setminus 2\choose k-1}}\dots\sum_{\Delta^{n}\in{[n]\setminus n\choose k-1}}\frac{1}{{n\choose k}}\frac{1}{{n-1\choose k-1}^{n}}Q^{*}(s,\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},F_{s_{\Delta}})_{g},\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{1},F_{s_{\Delta^{1}}}),\dots,\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{n},F_{s_{\Delta^{n}}}))

Therefore,

Qπ(s,π(|s))\displaystyle Q^{\pi^{*}}(s,\pi^{*}(\cdot|s)) 𝔼aπk,mest(|s)Q(s,a)\displaystyle-\mathbb{E}_{a\sim{\pi}_{k,m}^{\mathrm{est}}(\cdot|s)}Q^{*}(s,a)
=Δ([n]k)Δ1([n]1k1)Δ2([n]2k1)Δn([n]nk1)1(nk)1(n1k1)n(Q(s,π(|s))\displaystyle=\sum_{\Delta\in{[n]\choose k}}\sum_{\Delta^{1}\in{[n]\setminus 1\choose k-1}}\sum_{\Delta^{2}\in{[n]\setminus 2\choose k-1}}\dots\sum_{\Delta^{n}\in{[n]\setminus n\choose k-1}}\frac{1}{{n\choose k}}\frac{1}{{n-1\choose k-1}^{n}}\bigg{(}Q^{*}(s,\pi^{*}(\cdot|s))
Q(s,π^k,mest(sg,FsΔ)g,π^k,mest(sg,s1,FsΔ1),,π^k,mest(sg,sn,FsΔn)))\displaystyle-Q^{*}(s,\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},F_{s_{\Delta}})_{g},\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{1},F_{s_{\Delta^{1}}}),\dots,\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{n},F_{s_{\Delta^{n}}}))\bigg{)}

Therefore, by grouping the equations above, we have:

Vπ(s0)\displaystyle V^{\pi^{*}}(s_{0}) Vπ~k,mest(s0)\displaystyle-V^{\tilde{\pi}_{k,m}^{\mathrm{est}}}(s_{0})
11γ𝔼sds0π~k,mest𝔼aπ~k,mest(|s)Δ([n]k)Δi([n]ik1),i[n]1n(nk)(n1k1)n(i[n]|Q(s,π(s))\displaystyle\leq\frac{1}{1-\gamma}\mathbb{E}_{s\sim d_{s_{0}}^{\tilde{\pi}_{k,m}^{\mathrm{est}}}}\mathbb{E}_{a\sim\tilde{\pi}_{k,m}^{\mathrm{est}}(\cdot|s)}\sum_{\Delta\in{[n]\choose k}}\sum_{\begin{subarray}{c}\Delta^{i}\in{[n]\setminus i\choose k-1},\\ \forall i\in[n]\end{subarray}}\frac{1}{n{n\choose k}{n-1\choose k-1}^{n}}\bigg{(}\sum_{i\in[n]}\bigg{|}Q^{*}(s,\pi^{*}(s))
Q^estk,m(sg,si,FsΔi,π(s)g,{π(s)j}j{i,Δi})|\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad-\hat{Q}^{\mathrm{est}}_{k,m}(s_{g},s_{i},F_{s_{\Delta^{i}}},\pi^{*}(s)_{g},\{\pi^{*}(s)_{j}\}_{j\in\{i,\Delta^{i}\}})\bigg{|}
+1n(nk)(n1k1)ni[n]|Q^k,mest(sg,si,FsΔi,π^k,mest(sg,FsΔ)g,{π^k,mest(sg,sj,Δj)}j{i,Δi})\displaystyle+\frac{\frac{1}{n}}{{n\choose k}{n-1\choose k-1}^{n}}\sum_{i\in[n]}\bigg{|}\hat{Q}_{k,m}^{\mathrm{est}}(s_{g},s_{i},F_{s_{\Delta^{i}}},\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},F_{s_{\Delta}})_{g},\{\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{j,\Delta^{j}})\}_{j\in\{i,\Delta^{i}\}})
Q(s,π^k,mest(sg,sΔ)g,{π^k,mest(sg,sj,FsΔj)}j[n]|)\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad-Q^{*}(s,\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{\Delta})_{g},\{\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{j},F_{s_{\Delta^{j}}})\}_{j\in[n]}\bigg{|}\bigg{)}

Lemma F.4 shows a uniform bound on

Q(s,π(|s))Q(s,π^k,mest(sg,FsΔ)g,π^k,mest(sg,s1,FsΔ1),,π^k,mest(sg,sn,FsΔn))Q^{*}(s,\pi^{*}(\cdot|s))-Q^{*}(s,\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},F_{s_{\Delta}})_{g},\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{1},F_{s_{\Delta^{1}}}),\dots,\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{n},F_{s_{\Delta^{n}}}))

(independent of $\Delta^{i}$ for all $i\in[n]$), allowing the sums over the subsets and the normalizing counts in the denominator to cancel. Observe that $\hat{\pi}_{k,m}^{*}:\mathcal{S}\to\mathcal{A}$ and $\pi^{*}:\mathcal{S}\to\mathcal{A}$ are deterministic functions; therefore, denote $a=\pi^{*}(s)$. Then, from Lemma F.6,

11γ𝔼sds0πk,mest𝔼aπk,mest(|s)Δ([n]k)Δ1([n]1k1)Δn([n]nk1)1n(nk)(n1k1)ni[n]|Q(s,π(s))\displaystyle\frac{1}{1-\gamma}\mathbb{E}_{s\sim d_{s_{0}}^{{\pi}_{k,m}^{\mathrm{est}}}}\mathbb{E}_{a\sim{\pi}_{k,m}^{\mathrm{est}}(\cdot|s)}\sum_{\Delta\in{[n]\choose k}}\sum_{{\Delta^{1}\in{[n]\setminus 1\choose k-1}}}\dots\sum_{{\Delta^{n}\in{[n]\setminus n\choose k-1}}}\frac{1}{n{n\choose k}{n-1\choose k-1}^{n}}\sum_{i\in[n]}\bigg{|}Q^{*}(s,\pi^{*}(s))
Q^estk,m(sg,si,FsΔi,π(s)g,{π(s)j}j{i,Δi})|\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad-\hat{Q}^{\mathrm{est}}_{k,m}(s_{g},s_{i},F_{s_{\Delta^{i}}},\pi^{*}(s)_{g},\{\pi^{*}(s)_{j}\}_{j\in\{i,\Delta^{i}\}})\bigg{|}
=11γ𝔼sds0πk,mestΔ([n]k)Δ1([n]1k1)Δn([n]nk1)1n(nk)(n1k1)ni[n]|Q^n(sg,Fs[n],π^n(sg,Fs[n])g,π^n(sg,Fs[n])1:n)\displaystyle=\frac{1}{1-\gamma}\mathbb{E}_{s\sim d_{s_{0}}^{{\pi}_{k,m}^{\mathrm{est}}}}\sum_{\Delta\in{[n]\choose k}}\sum_{{\Delta^{1}\in{[n]\setminus 1\choose k-1}}}\dots\sum_{{\Delta^{n}\in{[n]\setminus n\choose k-1}}}\frac{\frac{1}{n{n\choose k}}}{{n-1\choose k-1}^{n}}\sum_{i\in[n]}\bigg{|}\hat{Q}_{n}^{*}(s_{g},F_{s_{[n]}},\hat{\pi}_{n}^{*}(s_{g},F_{s_{[n]}})_{g},\hat{\pi}_{n}^{*}(s_{g},F_{s_{[n]}})_{1:n})
Q^estk,m(sg,si,FsΔi,π(s)g,{π(s)j}j{i,Δi})|\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad-\hat{Q}^{\mathrm{est}}_{k,m}(s_{g},s_{i},F_{s_{\Delta^{i}}},\pi^{*}(s)_{g},\{\pi^{*}(s)_{j}\}_{j\in\{i,\Delta^{i}\}})\bigg{|}
r~(1γ)2nk+18nkln2|𝒮l||𝒜l|δ+ϵk,m+r~(1γ)2|𝒜g|k|𝒜l|δ\displaystyle\leq\frac{\tilde{r}}{(1-\gamma)^{2}}\sqrt{\frac{n-k+1}{8nk}}\sqrt{\ln\frac{2|\mathcal{S}_{l}||\mathcal{A}_{l}|}{\delta}}+\epsilon_{k,m}+\frac{\tilde{r}}{(1-\gamma)^{2}}|\mathcal{A}_{g}|k^{|\mathcal{A}_{l}|}\delta

Similarly, from Corollary F.7, we have that

11γ𝔼sds0πk,mestΔ([n]k)Δ1([n]1k1)Δn([n]nk1)1n(nk)(n1k1)n|Q^estk,m(sg,si,FsΔi,π^estk,m(sg,sΔ)g,{π^k,mest(sg,sj,FsΔj)}j{i,Δi})\displaystyle\frac{1}{1-\gamma}\mathbb{E}_{s\sim d_{s_{0}}^{{\pi}_{k,m}^{\mathrm{est}}}}\sum_{\Delta\in{[n]\choose k}}\sum_{{\Delta^{1}\in{[n]\setminus 1\choose k-1}}}\dots\sum_{{\Delta^{n}\in{[n]\setminus n\choose k-1}}}\frac{\frac{1}{n{n\choose k}}}{{n-1\choose k-1}^{n}}\bigg{|}\hat{Q}^{\mathrm{est}}_{k,m}(s_{g},s_{i},F_{s_{\Delta^{i}}},\hat{\pi}^{\mathrm{est}}_{k,m}(s_{g},s_{\Delta})_{g},\{\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{j},F_{s_{\Delta^{j}}})\}_{j\in\{i,\Delta^{i}\}})
Q(s,π^estk,m(sg,sΔ)g,{π^estk,m(sg,sj,FsΔj)}j[n]|\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad-Q^{*}(s,\hat{\pi}^{\mathrm{est}}_{k,m}(s_{g},s_{\Delta})_{g},\{\hat{\pi}^{\mathrm{est}}_{k,m}(s_{g},s_{j},F_{s_{\Delta^{j}}})\}_{j\in[n]}\bigg{|}
r~(1γ)2nk+18nkln2|𝒮l||𝒜l|δ+ϵk,m+r~(1γ)2|𝒜g|k|𝒜l|δ\displaystyle\leq\frac{\tilde{r}}{(1-\gamma)^{2}}\sqrt{\frac{n-k+1}{8nk}}\sqrt{\ln\frac{2|\mathcal{S}_{l}||\mathcal{A}_{l}|}{\delta}}+\epsilon_{k,m}+\frac{\tilde{r}}{(1-\gamma)^{2}}|\mathcal{A}_{g}|k^{|\mathcal{A}_{l}|}\delta

Hence, combining the above inequalities, we get:

\displaystyle V^{\pi^{*}}(s_{0})-V^{{\pi}_{k,m}^{\mathrm{est}}}(s_{0})\leq\frac{2\tilde{r}}{(1-\gamma)^{2}}\sqrt{\frac{n-k+1}{8nk}}\sqrt{\ln\frac{2|\mathcal{S}_{l}||\mathcal{A}_{l}|}{\delta}}+2\epsilon_{k,m}+\frac{2\tilde{r}}{(1-\gamma)^{2}}|\mathcal{A}_{g}|k^{|\mathcal{A}_{l}|}\delta

which yields the claim. We defer parameter optimization to Lemma F.8.∎

Lemma F.4 (Uniform Bound on QQ^{*} with different actions).

For all sS,Δ([n]k)s\in S,\Delta\in{[n]\choose k} and Δi([n]ik1)\Delta^{i}\in{[n]\setminus i\choose k-1},

\displaystyle Q^{*}(s,\pi^{*}(\cdot|s))-Q^{*}(s,\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},F_{s_{\Delta}})_{g},\{\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{i},F_{s_{\Delta^{i}}})\}_{i\in[n]})
\leq\frac{1}{n}\sum_{i\in[n]}\left|Q^{*}(s,\pi^{*}(s))-\hat{Q}^{\mathrm{est}}_{k,m}(s_{g},s_{i},F_{s_{\Delta^{i}}},\pi^{*}(s)_{g},\{\pi^{*}(s)_{j}\}_{j\in\{i,\Delta^{i}\}})\right|
+\frac{1}{n}\sum_{i\in[n]}\left|\hat{Q}_{k,m}^{\mathrm{est}}(s_{g},s_{i},F_{s_{\Delta^{i}}},\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},F_{s_{\Delta}})_{g},\{\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{j},F_{s_{\Delta^{j}}})\}_{j\in\{i,\Delta^{i}\}})-Q^{*}(s,\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},F_{s_{\Delta}})_{g},\{\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{j},F_{s_{\Delta^{j}}})\}_{j\in[n]})\right|
Proof.

Observe that

Q(s,π(|s))\displaystyle Q^{*}(s,\pi^{*}(\cdot|s)) Q(s,π^k,mest(sg,FsΔ)g,{π^k,mest(sg,si,FsΔi)}i[n])\displaystyle-Q^{*}(s,\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},F_{s_{\Delta}})_{g},\{\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{i},F_{s_{\Delta^{i}}})\}_{i\in[n]})
1ni[n]Q^k,mest(sg,si,FsΔi,π^k,mest(sg,FsΔ)g,π^k,mest(sg,si,FsΔi),{π^k,mest(sg,sj,FsΔj)}jΔi)\displaystyle\leq\frac{1}{n}\sum_{i\in[n]}\hat{Q}_{k,m}^{\mathrm{est}}(s_{g},s_{i},F_{s_{\Delta^{i}}},\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},F_{s_{\Delta}})_{g},\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{i},F_{s_{\Delta^{i}}}),\{\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{j},F_{s_{\Delta^{j}}})\}_{j\in\Delta^{i}})
1ni[n]Q^k,mest(sg,si,FsΔi,π^k,mest(sg,FsΔ)g,π^k,mest(sg,si,FsΔi),{π^k,mest(sg,sj,FsΔj)}jΔi)\displaystyle\quad\quad-\frac{1}{n}\sum_{i\in[n]}\hat{Q}_{k,m}^{\mathrm{est}}(s_{g},s_{i},F_{s_{\Delta^{i}}},\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},F_{s_{\Delta}})_{g},\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{i},F_{s_{\Delta^{i}}}),\{\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{j},F_{s_{\Delta^{j}}})\}_{j\in\Delta^{i}})
\displaystyle\quad\quad+\frac{1}{n}\sum_{i\in[n]}\hat{Q}_{k,m}^{\mathrm{est}}(s_{g},s_{i},F_{s_{\Delta^{i}}},\pi^{*}(s)_{g},\pi^{*}(s)_{i},\{\pi^{*}(s)_{j}\}_{j\in\Delta^{i}})
\displaystyle\quad\quad-\frac{1}{n}\sum_{i\in[n]}\hat{Q}_{k,m}^{\mathrm{est}}(s_{g},s_{i},F_{s_{\Delta^{i}}},\pi^{*}(s)_{g},\pi^{*}(s)_{i},\{\pi^{*}(s)_{j}\}_{j\in\Delta^{i}})
|Q(s,π(s))1ni[n]Q^k,mest(sg,si,FsΔi,π(s)g,π(s)i,{π(s)j}jΔi)|\displaystyle\leq\left|Q^{*}(s,\pi^{*}(s))-\frac{1}{n}\sum_{i\in[n]}\hat{Q}_{k,m}^{\mathrm{est}}(s_{g},s_{i},F_{s_{\Delta^{i}}},\pi^{*}(s)_{g},\pi^{*}(s)_{i},\{\pi^{*}(s)_{j}\}_{j\in\Delta^{i}})\right|
+|1ni[n]Q^k,mest(sg,si,FsΔi,π^k,mest(sg,FsΔ)g,{π^k,mest(sg,sj,FsΔj)}j{i,Δi})\displaystyle\quad\quad+\bigg{|}\frac{1}{n}\sum_{i\in[n]}\hat{Q}_{k,m}^{\mathrm{est}}(s_{g},s_{i},F_{s_{\Delta^{i}}},\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},F_{s_{\Delta}})_{g},\{\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{j},F_{s_{\Delta^{j}}})\}_{j\in\{i,\Delta^{i}\}})
Q(s,π^k,mest(sg,FsΔ)g,{π^k,mest(sg,sj,FsΔj)}j[n]|\displaystyle\quad\quad\quad\quad\quad-Q^{*}(s,\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},F_{s_{\Delta}})_{g},\{\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{j},F_{s_{\Delta^{j}}})\}_{j\in[n]}\bigg{|}
\displaystyle\leq\frac{1}{n}\sum_{i\in[n]}\left|Q^{*}(s,\pi^{*}(s))-\hat{Q}^{\mathrm{est}}_{k,m}(s_{g},s_{i},F_{s_{\Delta^{i}}},\pi^{*}(s)_{g},\{\pi^{*}(s)_{j}\}_{j\in\{i,\Delta^{i}\}})\right|
\displaystyle\quad\quad+\frac{1}{n}\sum_{i\in[n]}\left|\hat{Q}_{k,m}^{\mathrm{est}}(s_{g},s_{i},F_{s_{\Delta^{i}}},\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},F_{s_{\Delta}})_{g},\{\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{j},F_{s_{\Delta^{j}}})\}_{j\in\{i,\Delta^{i}\}})-Q^{*}(s,\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},F_{s_{\Delta}})_{g},\{\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{j},F_{s_{\Delta^{j}}})\}_{j\in[n]})\right|,

which proves the claim.∎

Lemma F.5.

Fix s𝒮:=𝒮g×𝒮lns\in\mathcal{S}:=\mathcal{S}_{g}\times\mathcal{S}_{l}^{n}. For each j[n]j\in[n], suppose we are given TT-length sequences of random variables {Δj1,,ΔjT}j[n]\{\Delta^{j}_{1},\dots,\Delta^{j}_{T}\}_{j\in[n]}, distributed uniformly over the support ([n]jk1){[n]\setminus j\choose k-1}. Further, suppose we are given a fixed sequence δ1,,δT\delta_{1},\dots,\delta_{T}, where each δt(0,1]\delta_{t}\in(0,1] for t[T]t\in[T]. Then, for each action a=(ag,a[n])=π(s)a=(a_{g},a_{[n]})=\pi^{*}(s), for t[T]t\in[T] and j[n]j\in[n], define deviation events Btag,aj,ΔtjB_{t}^{a_{g},a_{j,\Delta_{t}^{j}}} such that:

Btag,aj,Δtj:={|Q(sg,sj,Fz[n]j,aj,ag)Q^k,mest(sg,sj,FzΔtj,aj,ag)|>nk+18knln2|𝒮l||𝒜l|δt1γrl(,)+ϵk,m}B_{t}^{a_{g},a_{j,\Delta_{t}^{j}}}\!:=\!\left\{\left|{Q}^{*}(s_{g},s_{j},F_{z_{[n]\setminus j}},a_{j},a_{g})\!-\!\hat{Q}_{k,m}^{\mathrm{est}}(s_{g},s_{j},F_{z_{\Delta_{t}^{j}}},a_{j},a_{g})\right|\!>\!\frac{\sqrt{\frac{n-k+1}{8kn}\ln\frac{2|\mathcal{S}_{l}||\mathcal{A}_{l}|}{\delta_{t}}}}{1-\gamma}\cdot\|r_{l}(\cdot,\cdot)\|_{\infty}+\epsilon_{k,m}\right\} (28)

For i[T]i\in[T], we define bad-events BtB_{t} (which unifies the deviation events) such that

Bt=j[n]ag𝒜gaj,Δtj𝒜lkBtag,aj,ΔtjB_{t}=\bigcup_{j\in[n]}\bigcup_{\begin{subarray}{c}a_{g}\in\mathcal{A}_{g}\end{subarray}}\bigcup_{a_{j,\Delta_{t}^{j}}\in\mathcal{A}_{l}^{k}}B_{t}^{a_{g},a_{j,\Delta_{t}^{j}}}

Next, denote B=i=1TBiB=\bigcup_{i=1}^{T}B_{i}. Then, the probability that no bad event BtB_{t} occurs is:

Pr[B¯]:-1Pr[B]1|𝒜g|k|𝒜l|i=1Tδi\Pr\left[\bar{B}\right]\coloneq 1-\Pr[B]\geq 1-|\mathcal{A}_{g}|k^{|\mathcal{A}_{l}|}\sum_{i=1}^{T}\delta_{i}
Proof.
|Q(sg,sj,Fz[n]j,ag,aj)\displaystyle|{Q}^{*}(s_{g},s_{j},F_{z_{[n]\setminus j}},a_{g},a_{j}) Q^k,mest(sg,sj,FzΔtj,ag,aj)|\displaystyle-\hat{Q}_{k,m}^{\mathrm{est}}(s_{g},s_{j},F_{z_{\Delta_{t}^{j}}},a_{g},a_{j})|
=|Q(sg,sj,Fz[n]j,ag,aj)Q^k(sg,sj,FzΔtj,ag,aj)\displaystyle=\bigg{|}{Q}^{*}(s_{g},s_{j},F_{z_{[n]\setminus j}},a_{g},a_{j})-\hat{Q}_{k}^{*}(s_{g},s_{j},F_{z_{\Delta_{t}^{j}}},a_{g},a_{j})
+Q^k(sg,sj,FzΔtj,ag,aj)Q^k,mest(sg,sj,FzΔtj,ag,aj)|\displaystyle\quad\quad+\hat{Q}_{k}^{*}(s_{g},s_{j},F_{z_{\Delta_{t}^{j}}},a_{g},a_{j})-\hat{Q}_{k,m}^{\mathrm{est}}(s_{g},s_{j},F_{z_{\Delta_{t}^{j}}},a_{g},a_{j})\bigg{|}
|Q(sg,sj,Fz[n]j,ag,aj)Q^k(sg,sj,FzΔtj,ag,aj)|\displaystyle\leq\left|{Q}^{*}(s_{g},s_{j},F_{z_{[n]\setminus j}},a_{g},a_{j})-\hat{Q}_{k}^{*}(s_{g},s_{j},F_{z_{\Delta_{t}^{j}}},a_{g},a_{j})\right|
+|Q^k(sg,sj,FzΔtj,ag,aj)Q^k,mest(sg,sj,FzΔtj,ag,aj)|\displaystyle\quad\quad+\left|\hat{Q}_{k}^{*}(s_{g},s_{j},F_{z_{\Delta_{t}^{j}}},a_{g},a_{j})-\hat{Q}_{k,m}^{\mathrm{est}}(s_{g},s_{j},F_{z_{\Delta_{t}^{j}}},a_{g},a_{j})\right|
|Q(sg,sj,Fz[n]j,ag,aj)Q^k(sg,sj,FzΔtj,ag,aj)|+ϵk,m\displaystyle\leq\left|{Q}^{*}(s_{g},s_{j},F_{z_{[n]\setminus j}},a_{g},a_{j})-\hat{Q}_{k}^{*}(s_{g},s_{j},F_{z_{\Delta_{t}^{j}}},a_{g},a_{j})\right|+\epsilon_{k,m}

The first inequality above follows from the triangle inequality, and the second inequality uses

|Q^k(sg,sj,FzΔtj,ag,aj)Q^k,mest(sg,sj,FzΔtj,ag,aj)|\displaystyle\left|\hat{Q}_{k}^{*}(s_{g},s_{j},F_{z_{\Delta_{t}^{j}}},a_{g},a_{j})-\hat{Q}_{k,m}^{\mathrm{est}}(s_{g},s_{j},F_{z_{\Delta_{t}^{j}}},a_{g},a_{j})\right| Q^k(sg,sj,FzΔtj,ag,aj)Q^k,mest(sg,sj,FzΔtj,ag,aj)\displaystyle\leq\left\|\hat{Q}_{k}^{*}(s_{g},s_{j},F_{z_{\Delta_{t}^{j}}},a_{g},a_{j})-\hat{Q}_{k,m}^{\mathrm{est}}(s_{g},s_{j},F_{z_{\Delta_{t}^{j}}},a_{g},a_{j})\right\|_{\infty}
ϵk,m,\displaystyle\leq\epsilon_{k,m},

where the ϵk,m\epsilon_{k,m} follows from Lemma 3.2. From Theorem E.9, we have that with probability at least 1δt1-\delta_{t},

|Q(sg,sj,Fz[n]j,ag,aj)Q^k(sg,sj,FzΔtj,ag,aj)|ln2|𝒮l||𝒜l|δt1γnk+18knrl(,)\left|{Q}^{*}(s_{g},s_{j},F_{z_{[n]\setminus j}},a_{g},a_{j})-\hat{Q}_{k}^{*}(s_{g},s_{j},F_{z_{\Delta_{t}^{j}}},a_{g},a_{j})\right|\leq\frac{\ln\frac{2|\mathcal{S}_{l}||\mathcal{A}_{l}|}{\delta_{t}}}{1-\gamma}\sqrt{\frac{n-k+1}{8kn}}\cdot\|r_{l}(\cdot,\cdot)\|_{\infty}

So, event $B_{t}^{a_{g},a_{j,\Delta_{t}^{j}}}$ occurs with probability at most $\delta_{t}$.

Observe that if we union bound across all events parameterized by $a_{g}\in\mathcal{A}_{g}$ and by the empirical distributions $F_{a_{\Delta_{t}^{j}}}$ of $a_{j,\Delta_{t}^{j}}\in\mathcal{A}_{l}^{k}$, this also covers every choice of the variables $F_{s_{\Delta_{t}^{j}}}$ made by agents $j\in[n]$, and therefore every choice of $\Delta_{t}^{1},\dots,\Delta_{t}^{n}$ (subject to the permutation invariance of the local agents) for a fixed $t$.

Thus, from the union bound, we get:

Pr[Bt]ag𝒜gFaΔtμk(𝒜l)Pr[Btag,a1,Δt1]|𝒜g|k|𝒜l|Pr[Btag,a1,Δt1]\Pr[B_{t}]\leq\sum_{a_{g}\in\mathcal{A}_{g}}\sum_{F_{a_{\Delta_{t}}}\in\mu_{k}(\mathcal{A}_{l})}\Pr[B_{t}^{a_{g},a_{1,\Delta_{t}^{1}}}]\leq|\mathcal{A}_{g}|k^{|\mathcal{A}_{l}|}\Pr[B_{t}^{a_{g},a_{1,\Delta_{t}^{1}}}]

Applying the union bound again proves the lemma:

Pr[B¯]\displaystyle\Pr[\bar{B}] 1t=1TPr[Bt]1|𝒜g|k|𝒜l|t=1Tδt,\displaystyle\geq 1-\sum_{t=1}^{T}\Pr[B_{t}]\geq 1-|\mathcal{A}_{g}|k^{|\mathcal{A}_{l}|}\sum_{t=1}^{T}\delta_{t},

which proves the claim.∎

Lemma F.6.

For any arbitrary distribution 𝒟\mathcal{D} of states 𝒮:=𝒮g×𝒮ln\mathcal{S}:=\mathcal{S}_{g}\times\mathcal{S}_{l}^{n}, for any Δi([n]ik1)\Delta^{i}\in{[n]\setminus i\choose k-1} for i[n]i\in[n] and for δ(0,1]\delta\in(0,1] we have:

𝔼s𝒟[Δ([n]k)Δ1([n]1k1)Δn([n]nk1)1(nk)1(n1k1)ni[n]1n|Q(s,π(s))Q^estk,m(sg,si,FsΔi,π(s)g,{π(s)j}j{i,Δi})|]\displaystyle\mathbb{E}_{s\sim\mathcal{D}}\left[\sum_{\Delta\in{[n]\choose k}}\sum_{\Delta^{1}\in{[n]\setminus 1\choose k-1}}\dots\sum_{\Delta^{n}\in{[n]\setminus n\choose k-1}}\frac{1}{{n\choose k}}\frac{1}{{n-1\choose k-1}^{n}}\sum_{i\in[n]}\frac{1}{n}\left|Q^{*}(s,\pi^{*}(s))-\hat{Q}^{\mathrm{est}}_{k,m}(s_{g},s_{i},F_{s_{\Delta^{i}}},\pi^{*}(s)_{g},\{\pi^{*}(s)_{j}\}_{j\in\{i,\Delta^{i}\}})\right|\right]
r~1γnk+18nkln2|𝒮l||𝒜l|δ+ϵk,m+r~1γ|𝒜g|k|𝒜l|δ\displaystyle\quad\quad\quad\quad\leq\frac{\tilde{r}}{1-\gamma}\sqrt{\frac{n-k+1}{8nk}}\sqrt{\ln\frac{2|\mathcal{S}_{l}||\mathcal{A}_{l}|}{\delta}}+\epsilon_{k,m}+\frac{\tilde{r}}{1-\gamma}|\mathcal{A}_{g}|k^{|\mathcal{A}_{l}|}\delta
Proof.

By the linearity of expectations, observe that:

𝔼s𝒟[Δ([n]k)Δ1([n]1k1)Δn([n]nk1)1(nk)1(n1k1)ni[n]1n|Q(s,π(s))Q^estk,m(sg,si,FsΔi,π(s)g,{π(s)j}j{i,Δi})|]\displaystyle\mathbb{E}_{s\sim\mathcal{D}}\left[\sum_{\Delta\in{[n]\choose k}}\sum_{\Delta^{1}\in{[n]\setminus 1\choose k-1}}\dots\sum_{\Delta^{n}\in{[n]\setminus n\choose k-1}}\frac{1}{{n\choose k}}\frac{1}{{n-1\choose k-1}^{n}}\sum_{i\in[n]}\frac{1}{n}\left|Q^{*}(s,\pi^{*}(s))-\hat{Q}^{\mathrm{est}}_{k,m}(s_{g},s_{i},F_{s_{\Delta^{i}}},\pi^{*}(s)_{g},\{\pi^{*}(s)_{j}\}_{j\in\{i,\Delta^{i}\}})\right|\right]
=Δ([n]k)Δ1([n]1k1)Δn([n]nk1)1(nk)1(n1k1)ni[n]1n𝔼s𝒟|Q(s,π(s))Q^estk,m(sg,si,FsΔi,π(s)g,{π(s)j}j{i,Δi})|\displaystyle=\sum_{\Delta\in{[n]\choose k}}\sum_{\Delta^{1}\in{[n]\setminus 1\choose k-1}}\dots\sum_{\Delta^{n}\in{[n]\setminus n\choose k-1}}\frac{1}{{n\choose k}}\frac{1}{{n-1\choose k-1}^{n}}\sum_{i\in[n]}\frac{1}{n}\mathbb{E}_{s\sim\mathcal{D}}\left|Q^{*}(s,\pi^{*}(s))-\hat{Q}^{\mathrm{est}}_{k,m}(s_{g},s_{i},F_{s_{\Delta^{i}}},\pi^{*}(s)_{g},\{\pi^{*}(s)_{j}\}_{j\in\{i,\Delta^{i}\}})\right|

Then, define the indicator function :[n]×𝒮××(0,1]{0,1}\mathcal{I}:[n]\times\mathcal{S}\times\mathbb{N}\times(0,1]\to\{0,1\} by:

(i,s,k,δ):=\displaystyle\mathcal{I}(i,s,k,\delta):=
𝟙{|Q(s,π(s))Q^k,mest(sg,si,FsΔi,π(s)g,{π(s)j}j{i,Δi})|rl(,)1γnk+18nkln2|𝒮l||𝒜l|δ+ϵk,m}\displaystyle\quad\mathbbm{1}\left\{\left|Q^{*}(s,\pi^{*}(s))-\hat{Q}_{k,m}^{\mathrm{est}}(s_{g},s_{i},F_{s_{\Delta^{i}}},\pi^{*}(s)_{g},\{\pi^{*}(s)_{j}\}_{j\in\{i,\Delta^{i}\}})\right|\leq\frac{\|r_{l}(\cdot,\cdot)\|_{\infty}}{1-\gamma}\sqrt{\frac{n-k+1}{8nk}}\sqrt{\ln\frac{2|\mathcal{S}_{l}||\mathcal{A}_{l}|}{\delta}}+\epsilon_{k,m}\right\}

We then study the expected difference between Q(s,π(s))Q^{*}(s^{\prime},\pi^{*}(s^{\prime})) and Q^k,mest(sg,si,FsΔi,π(s)g,{π(s)j}j{i,Δi})\hat{Q}_{k,m}^{\mathrm{est}}(s_{g},s_{i},F_{s_{\Delta^{i}}},\pi^{*}(s)_{g},\{\pi^{*}(s)_{j}\}_{j\in\{i,\Delta^{i}\}}).

Observe that:

𝔼s𝒟|Q(s,π(s))Q^k,mest(sg,si,FsΔi,π(s)g,{π(s)j}j{i,Δi})|\displaystyle\mathbb{E}_{s\sim\mathcal{D}}\left|Q^{*}(s^{\prime},\pi^{*}(s^{\prime}))-\hat{Q}_{k,m}^{\mathrm{est}}(s_{g},s_{i},F_{s_{\Delta^{i}}},\pi^{*}(s)_{g},\{\pi^{*}(s)_{j}\}_{j\in\{i,\Delta^{i}\}})\right|
=𝔼s𝒟[(i,s,k,δ)|Q(s,π(s))Q^k,mest(sg,si,FsΔi,π(s)g,{π(s)j}j{i,Δi})|]\displaystyle\quad\quad\quad\quad\quad\quad=\mathbb{E}_{s\sim\mathcal{D}}\left[\mathcal{I}(i,s,k,\delta)\left|Q^{*}(s^{\prime},\pi^{*}(s^{\prime}))-\hat{Q}_{k,m}^{\mathrm{est}}(s_{g},s_{i},F_{s_{\Delta^{i}}},\pi^{*}(s)_{g},\{\pi^{*}(s)_{j}\}_{j\in\{i,\Delta^{i}\}})\right|\right]
+𝔼s𝒟[(1(i,s,k,δ))|Q(s,π(s))Q^k,mest(sg,si,FsΔi,π(s)g,{π(s)j}j{i,Δi})|]\displaystyle\quad\quad\quad\quad\quad\quad+\mathbb{E}_{s\sim\mathcal{D}}\left[(1-\mathcal{I}(i,s,k,\delta))\left|Q^{*}(s^{\prime},\pi^{*}(s^{\prime}))-\hat{Q}_{k,m}^{\mathrm{est}}(s_{g},s_{i},F_{s_{\Delta^{i}}},\pi^{*}(s)_{g},\{\pi^{*}(s)_{j}\}_{j\in\{i,\Delta^{i}\}})\right|\right]

Here, we have used the general property for a random variable XX and constant cc that 𝔼[X]=𝔼[X𝟙{Xc}]+𝔼[(1𝟙{Xc})X]\mathbb{E}[X]=\mathbb{E}[X\mathbbm{1}\{X\leq c\}]+\mathbb{E}[(1-\mathbbm{1}\{X\leq c\})X]. Then,

𝔼s𝒟|Q(s,π(s))Q^k,mest(sg,si,Δi,π(s)g,π(s)i,Δi)|\displaystyle\mathbb{E}_{s\sim\mathcal{D}}\left|Q^{*}(s^{\prime},\pi^{*}(s^{\prime}))-\hat{Q}_{k,m}^{\mathrm{est}}(s_{g},s_{i,\Delta^{i}},\pi^{*}(s)_{g},\pi^{*}(s)_{i,\Delta^{i}})\right|
r~1γnk+18nkln2|𝒮l||𝒜l|δ+ϵk,m+r~1γ(1𝔼s𝒟(s,k,δ)))\displaystyle\quad\leq\frac{\tilde{r}}{1-\gamma}\sqrt{\frac{n-k+1}{8nk}}\sqrt{\ln\frac{2|\mathcal{S}_{l}||\mathcal{A}_{l}|}{\delta}}+\epsilon_{k,m}+\frac{\tilde{r}}{1-\gamma}\left(1-\mathbb{E}_{s^{\prime}\sim\mathcal{D}}\mathcal{I}(s^{\prime},k,\delta)\right))
r~1γnk+18nkln2|𝒮l||𝒜l|δ+ϵk,m+r~1γ|𝒜g|k|𝒜l|δ\displaystyle\quad\leq\frac{\tilde{r}}{1-\gamma}\sqrt{\frac{n-k+1}{8nk}}\sqrt{\ln\frac{2|\mathcal{S}_{l}||\mathcal{A}_{l}|}{\delta}}+\epsilon_{k,m}+\frac{\tilde{r}}{1-\gamma}|\mathcal{A}_{g}|k^{|\mathcal{A}_{l}|}\delta

For the first term in the first inequality, we use $\mathbb{E}[X\mathbbm{1}\{X\leq c\}]\leq c$. For the second term, we trivially bound $Q^{*}(s^{\prime},\pi^{*}(s^{\prime}))-\hat{Q}_{k}^{*}(s_{g},s_{i,\Delta^{i}},\pi^{*}(s)_{g},\pi^{*}(s)_{i,\Delta^{i}})$ by the maximum value $Q^{*}$ can take, which is $\frac{\tilde{r}}{1-\gamma}$ by Lemma B.5. In the second inequality, we use the fact that the expectation of the indicator function is the probability of the underlying event, which is bounded using Lemma F.5.

Since this is a uniform bound that is independent of Δj\Delta^{j} for j[n]j\in[n] and i[n]i\in[n], we have:

Δ([n]k)Δ1([n]1k1)Δn([n]nk1)1(nk)1(n1k1)ni[n]1n𝔼s𝒟|Q(s,π(s))Q^k,mest(sg,si,FsΔi,π(s)g,{π(s)j}j{i,Δi})|\displaystyle\sum_{\Delta\in{[n]\choose k}}\sum_{\Delta^{1}\in{[n]\setminus 1\choose k-1}}\dots\sum_{\Delta^{n}\in{[n]\setminus n\choose k-1}}\frac{1}{{n\choose k}}\frac{1}{{n-1\choose k-1}^{n}}\sum_{i\in[n]}\frac{1}{n}\mathbb{E}_{s\sim\mathcal{D}}\left|Q^{*}(s,\pi^{*}(s))-\hat{Q}_{k,m}^{\mathrm{est}}(s_{g},s_{i},F_{s_{\Delta^{i}}},\pi^{*}(s)_{g},\{\pi^{*}(s)_{j}\}_{j\in\{i,\Delta^{i}\}})\right|
r~1γnk+18nkln2|𝒮l||𝒜l|δ+ϵk,m+r~1γ|𝒜g|k|𝒜l|δ\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\leq\frac{\tilde{r}}{1-\gamma}\sqrt{\frac{n-k+1}{8nk}}\sqrt{\ln\frac{2|\mathcal{S}_{l}||\mathcal{A}_{l}|}{\delta}}+\epsilon_{k,m}+\frac{\tilde{r}}{1-\gamma}|\mathcal{A}_{g}|k^{|\mathcal{A}_{l}|}\delta

This proves the claim.∎

Corollary F.7.

Repeating the proofs of Lemmas F.5 and F.6 with the learned policy ${\pi}_{k,m}^{\mathrm{est}}$ in place of $\pi^{*}$, so that $a\sim{\pi}_{k,m}^{\mathrm{est}}(\cdot|s)$, yields:

\displaystyle\mathbb{E}_{s\sim\mathcal{D}}\left[\sum_{\Delta\in{[n]\choose k}}\sum_{\Delta^{1}\in{[n]\setminus 1\choose k-1}}\sum_{\Delta^{2}\in{[n]\setminus 2\choose k-1}}\dots\sum_{\Delta^{n}\in{[n]\setminus n\choose k-1}}\frac{1}{{n\choose k}}\frac{1}{{n-1\choose k-1}^{n}}\sum_{i\in[n]}\frac{1}{n}\left|Q^{*}(s,a)-\hat{Q}^{\mathrm{est}}_{k,m}(s_{g},s_{i},F_{s_{\Delta^{i}}},a_{g},\{a_{j}\}_{j\in\{i,\Delta^{i}\}})\right|\right]
r~1γnk+18nkln2|𝒮l||𝒜l|δ+ϵk,m+r~1γ|𝒜g|k|𝒜l|δ\displaystyle\quad\quad\quad\quad\leq\frac{\tilde{r}}{1-\gamma}\sqrt{\frac{n-k+1}{8nk}}\sqrt{\ln\frac{2|\mathcal{S}_{l}||\mathcal{A}_{l}|}{\delta}}+\epsilon_{k,m}+\frac{\tilde{r}}{1-\gamma}|\mathcal{A}_{g}|k^{|\mathcal{A}_{l}|}\delta
Lemma F.8 (Optimizing Parameters).
Vπ(s0)Vπk,m(s0)O~(1k)V^{\pi^{*}}(s_{0})-V^{{\pi}_{k,m}}(s_{0})\leq\tilde{O}\left(\frac{1}{\sqrt{k}}\right)
Proof.

Setting δ=(1γ)22r~ε|𝒜g|k|𝒜l|\delta=\frac{(1-\gamma)^{2}}{2\tilde{r}\varepsilon|\mathcal{A}_{g}|k^{|\mathcal{A}_{l}|}} in the bound for the optimality gap in Theorem F.3 gives:

\displaystyle V^{\pi^{*}}(s_{0})-V^{\tilde{\pi}_{k,m}^{\mathrm{est}}}(s_{0})\leq\frac{\tilde{r}}{(1-\gamma)^{2}}\sqrt{\frac{n-k+1}{2nk}}\sqrt{\ln\frac{2|\mathcal{S}_{l}||\mathcal{A}_{l}|}{\delta}}+2\epsilon_{k,m}+\frac{2\tilde{r}}{(1-\gamma)^{2}}|\mathcal{A}_{g}|k^{|\mathcal{A}_{l}|}\delta
=r~(1γ)2nk+12nkln4r~ε|𝒮l||𝒜l||𝒜g|k|𝒜l|(1γ)2+1ε+2ϵk,m\displaystyle=\frac{\tilde{r}}{(1-\gamma)^{2}}\sqrt{\frac{n-k+1}{2nk}}\sqrt{\ln\frac{4\tilde{r}\varepsilon|\mathcal{S}_{l}||\mathcal{A}_{l}||\mathcal{A}_{g}|k^{|\mathcal{A}_{l}|}}{(1-\gamma)^{2}}}+\frac{1}{\varepsilon}+2\epsilon_{k,m}

Setting $\varepsilon=10\sqrt{k}$ recovers a decaying optimality gap of the order

Vπ(s0)Vπ~k,mest(s0)r~(1γ)2nk+12nkln40r~|𝒮l||𝒜l||𝒜g|k|𝒜l|+12(1γ)2+110k+2ϵk,mV^{\pi^{*}}(s_{0})-V^{\tilde{\pi}_{k,m}^{\mathrm{est}}}(s_{0})\leq\frac{\tilde{r}}{(1-\gamma)^{2}}\sqrt{\frac{n-k+1}{2nk}}\sqrt{\ln\frac{40\tilde{r}|\mathcal{S}_{l}||\mathcal{A}_{l}||\mathcal{A}_{g}|k^{|\mathcal{A}_{l}|+\frac{1}{2}}}{(1-\gamma)^{2}}}+\frac{1}{10\sqrt{k}}+2\epsilon_{k,m}

Finally, setting ϵk,mO~(1k)\epsilon_{k,m}\leq\tilde{O}(\frac{1}{\sqrt{k}}) from Lemma F.10 yields that

Vπ(s0)Vπ~k,m(s0)O~(1k),V^{\pi^{*}}(s_{0})-V^{\tilde{\pi}_{k,m}}(s_{0})\leq\tilde{O}\left(\frac{1}{\sqrt{k}}\right),

which proves the claim.∎
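To visualize the rate in Lemma F.8, the snippet below evaluates the right-hand side of the optimality-gap bound for increasing $k$, using $\epsilon_{k,m}=1/\sqrt{k}$ (the rate guaranteed by Lemma F.10) and hypothetical constants.

```python
# Evaluating the optimality-gap bound of Lemma F.8 for increasing k, with
# eps_{k,m} = 1/sqrt(k) and hypothetical constants, to illustrate the O~(1/sqrt(k)) decay.
import math

def gap_bound(n, k, gamma, r_tilde, S_l, A_l, A_g):
    log_term = math.log(40 * r_tilde * S_l * A_l * A_g * k ** (A_l + 0.5) / (1 - gamma) ** 2)
    return (r_tilde / (1 - gamma) ** 2 * math.sqrt((n - k + 1) / (2 * n * k)) * math.sqrt(log_term)
            + 1 / (10 * math.sqrt(k)) + 2 / math.sqrt(k))

n, gamma, r_tilde, S_l, A_l, A_g = 500, 0.9, 1.0, 5, 3, 4
for k in (10, 50, 100, 250, 500):
    print(f"k = {k:3d}: optimality-gap bound = {gap_bound(n, k, gamma, r_tilde, S_l, A_l, A_g):.3f}")
```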

F.1 Bounding the Bellman Error

This section is devoted to the proof of Lemma 3.2.

Theorem F.9 (Theorem 2 of (Li et al., 2022)).

If mm\in\mathbb{N} is the number of samples in the Bellman update, there exists a universal constant 0<c0<20<c_{0}<2 and a Bellman noise 0<ϵk,m11γ0<\epsilon_{k,m}\leq\frac{1}{1-\gamma} such that

𝒯^k,mQ^k,mest𝒯^kQ^k=Q^k,mestQ^kϵk,m,\|\hat{\mathcal{T}}_{k,m}\hat{Q}_{k,m}^{\mathrm{est}}-\hat{\mathcal{T}}_{k}\hat{Q}_{k}^{*}\|_{\infty}=\|\hat{Q}_{k,m}^{\mathrm{est}}-\hat{Q}_{k}^{*}\|_{\infty}\leq\epsilon_{k,m},

where ϵk,m\epsilon_{k,m} satisfies

m=c0r~tcover(1γ)5ϵk,m2log2(|𝒮||𝒜|)log(1(1γ)2),m=\frac{c_{0}\cdot\tilde{r}\cdot t_{\text{cover}}}{(1-\gamma)^{5}\epsilon_{k,m}^{2}}\log^{2}(|\mathcal{S}||\mathcal{A}|)\log\left(\frac{1}{(1-\gamma)^{2}}\right), (29)

where tcovert_{\text{cover}} stands for the cover time, and is the time taken for the trajectory to visit all state-action pairs at least once. Formally,

tcover:-min{t|min(s0,a0)𝒮×𝒜(t|s0,a0)12},t_{\text{cover}}\coloneq\min\left\{t\bigg{|}\min_{(s_{0},a_{0})\in\mathcal{S}\times\mathcal{A}}\mathbb{P}(\mathcal{B}_{t}|s_{0},a_{0})\geq\frac{1}{2}\right\}, (30)

where t\mathcal{B}_{t} denotes the event that all (s,a)𝒮×𝒜(s,a)\in\mathcal{S}\times\mathcal{A} have been visited at least once between time 0 and time tt, and (t|s0,a0)\mathbb{P}(\mathcal{B}_{t}|s_{0},a_{0}) denotes the probability of t\mathcal{B}_{t} conditioned on the initial state (s0,a0)(s_{0},a_{0}).

Lemma F.10.

Suppose T=21γlogr~k1γT=\frac{2}{1-\gamma}\log\frac{\tilde{r}\sqrt{k}}{1-\gamma}. Then, the SUB-SAMPLE-MFQ: Learning algorithm runs in time O~(T|𝒮g|2|𝒜g|2|𝒜l|2|𝒮l|2k2.5+2|𝒮l||𝒜l|r~)\tilde{O}(T|\mathcal{S}_{g}|^{2}|\mathcal{A}_{g}|^{2}|\mathcal{A}_{l}|^{2}|\mathcal{S}_{l}|^{2}k^{2.5+2|\mathcal{S}_{l}||\mathcal{A}_{l}|}\tilde{r}), while attaining a Bellman noise ϵk,m\epsilon_{k,m} of O~(1/k)\tilde{O}(1/\sqrt{k}).

Proof.

We first prove that $\|\hat{Q}_{k}^{T}-\hat{Q}_{k}^{*}\|_{\infty}\leq\frac{1}{\sqrt{k}}$. For this, it suffices to show $\gamma^{T}\frac{\tilde{r}}{1-\gamma}\leq\frac{1}{\sqrt{k}}$, or equivalently, $\gamma^{T}\leq\frac{1-\gamma}{\tilde{r}\sqrt{k}}$. Then, using $\gamma=1-(1-\gamma)\leq e^{-(1-\gamma)}$, it suffices to show $e^{-(1-\gamma)T}\leq\frac{1-\gamma}{\tilde{r}\sqrt{k}}$. Taking logarithms, we have

exp(T(1γ))\displaystyle\exp(-T(1-\gamma)) 1γr~k\displaystyle\leq\frac{1-\gamma}{\tilde{r}\sqrt{k}}
T(1γ)\displaystyle-T(1-\gamma) log1γr~k\displaystyle\leq\log\frac{1-\gamma}{\tilde{r}\sqrt{k}}
T\displaystyle T 11γlogr~k1γ\displaystyle\geq\frac{1}{1-\gamma}\log\frac{\tilde{r}\sqrt{k}}{1-\gamma}

Since T=21γlogr~k1γ>11γlogr~k1γT=\frac{2}{1-\gamma}\log\frac{\tilde{r}\sqrt{k}}{1-\gamma}>\frac{1}{1-\gamma}\log\frac{\tilde{r}\sqrt{k}}{1-\gamma}, the condition holds and Q^kTQ^k1k\|\hat{Q}_{k}^{T}-\hat{Q}_{k}^{*}\|_{\infty}\leq\frac{1}{\sqrt{k}}. Then, rearranging Equation 29 and incorporating the convergence error of the Q^k\hat{Q}_{k}-function, one has

ϵk,m1k+k2r~tcover(1γ)2.5mlog(|𝒮g||𝒜g||𝒜l||𝒮l|)log(1(1γ)2)\epsilon_{k,m}\leq\frac{1}{\sqrt{k}}+\frac{k\sqrt{2\tilde{r}t_{\text{cover}}}}{(1-\gamma)^{2.5}\sqrt{m}}\log\left(|\mathcal{S}_{g}||\mathcal{A}_{g}||\mathcal{A}_{l}||\mathcal{S}_{l}|\right)\log\left(\frac{1}{(1-\gamma)^{2}}\right) (31)

Then, using the following naïve bound on $t_{\text{cover}}$ (since we are doing offline learning), we have

tcover|𝒮g||𝒜g||𝒮l||𝒜l|k|𝒮l||𝒜l|.t_{\text{cover}}\leq|\mathcal{S}_{g}||\mathcal{A}_{g}||\mathcal{S}_{l}||\mathcal{A}_{l}|k^{|\mathcal{S}_{l}||\mathcal{A}_{l}|}.

Substituting this in Equation 31 gives

ϵk,m1k+k2r~|𝒮g||𝒜g||𝒮l||𝒜l|k|𝒮l||𝒜l|(1γ)2.5mlog(|𝒮g||𝒜g||𝒜l||𝒮l|)log(1(1γ)2)\epsilon_{k,m}\leq\frac{1}{\sqrt{k}}+\frac{k\sqrt{2\tilde{r}|\mathcal{S}_{g}||\mathcal{A}_{g}||\mathcal{S}_{l}||\mathcal{A}_{l}|k^{|\mathcal{S}_{l}||\mathcal{A}_{l}|}}}{(1-\gamma)^{2.5}\sqrt{m}}\log\left(|\mathcal{S}_{g}||\mathcal{A}_{g}||\mathcal{A}_{l}||\mathcal{S}_{l}|\right)\log\left(\frac{1}{(1-\gamma)^{2}}\right) (32)

Therefore, setting

m=2r~|𝒮g||𝒜g||𝒮l||𝒜l|k2.5+|𝒮l||𝒜l|(1γ)5log(|𝒮g||𝒜g||𝒜l||𝒮l|)log(1(1γ)2)m=\frac{2\tilde{r}|\mathcal{S}_{g}||\mathcal{A}_{g}||\mathcal{S}_{l}||\mathcal{A}_{l}|k^{2.5+|\mathcal{S}_{l}||\mathcal{A}_{l}|}}{(1-\gamma)^{5}}\log(|\mathcal{S}_{g}||\mathcal{A}_{g}||\mathcal{A}_{l}||\mathcal{S}_{l}|)\log\left(\frac{1}{(1-\gamma)^{2}}\right) (33)

attains a Bellman error of ϵk,mO~(1/k)\epsilon_{k,m}\leq\tilde{O}(1/\sqrt{k}). Finally, the runtime of our learning algorithm is

O(mT|𝒮g||𝒮l||𝒜g||𝒜l|k|𝒮l||𝒜l|)=O~(|𝒮g|2|𝒜g|2|𝒜l|2|𝒮l|2k2.5+2|𝒮l||𝒜l|r~),O(mT|\mathcal{S}_{g}||\mathcal{S}_{l}||\mathcal{A}_{g}||\mathcal{A}_{l}|k^{|\mathcal{S}_{l}||\mathcal{A}_{l}|})=\tilde{O}(|\mathcal{S}_{g}|^{2}|\mathcal{A}_{g}|^{2}|\mathcal{A}_{l}|^{2}|\mathcal{S}_{l}|^{2}k^{2.5+2|\mathcal{S}_{l}||\mathcal{A}_{l}|}\tilde{r}),

which is still polynomial in kk, proving the claim.∎
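The polynomial-in-$k$ scaling of Lemma F.10 can be made concrete by plugging hypothetical problem sizes into Equation 33; the snippet below computes $T$, the sample size $m$, and the resulting runtime bound.

```python
# Making the scaling of Lemma F.10 concrete: compute T, the sample size m of
# Equation 33, and the resulting runtime bound for hypothetical problem sizes.
import math

def sample_size_m(k, gamma, r_tilde, S_g, A_g, S_l, A_l):
    return (2 * r_tilde * S_g * A_g * S_l * A_l * k ** (2.5 + S_l * A_l)
            / (1 - gamma) ** 5
            * math.log(S_g * A_g * A_l * S_l) * math.log(1 / (1 - gamma) ** 2))

k, gamma, r_tilde = 10, 0.9, 1.0
S_g, A_g, S_l, A_l = 3, 2, 2, 2
T = 2 / (1 - gamma) * math.log(r_tilde * math.sqrt(k) / (1 - gamma))
m = sample_size_m(k, gamma, r_tilde, S_g, A_g, S_l, A_l)
runtime = m * T * S_g * S_l * A_g * A_l * k ** (S_l * A_l)
print(f"T = {T:.1f}, m = {m:.3e}, runtime bound = {runtime:.3e}")
```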

Appendix G Generalization to Stochastic Rewards

Suppose we are given two families of distributions, {𝒢sg,ag}sg,ag𝒮g×𝒜g\{\mathcal{G}_{s_{g},a_{g}}\}_{s_{g},a_{g}\in\mathcal{S}_{g}\times\mathcal{A}_{g}} and {si,sg,ai}si,sg,ai𝒮l×𝒮g×𝒜l\{\mathcal{L}_{s_{i},s_{g},a_{i}}\}_{s_{i},s_{g},a_{i}\in\mathcal{S}_{l}\times\mathcal{S}_{g}\times\mathcal{A}_{l}}. Let R(s,a)R(s,a) denote a stochastic reward of the form

R(s,a)=rg(sg,ag)+i[n]rl(si,sg,ai),R(s,a)=r_{g}(s_{g},a_{g})+\sum_{i\in[n]}r_{l}(s_{i},s_{g},a_{i}), (34)

where the rewards of the global agent $r_{g}$ are drawn from the distribution $r_{g}(s_{g},a_{g})\sim\mathcal{G}_{s_{g},a_{g}}$, and the rewards of the local agents $r_{l}$ are drawn from the distribution $r_{l}(s_{i},s_{g},a_{i})\sim\mathcal{L}_{s_{i},s_{g},a_{i}}$. For $\Delta\subseteq[n]$, let $R_{\Delta}(s,a)$ be defined as:

RΔ(s,a)=rg(sg,ag)+iΔrl(si,sg,ai)R_{\Delta}(s,a)=r_{g}(s_{g},a_{g})+\sum_{i\in\Delta}r_{l}(s_{i},s_{g},a_{i}) (35)

We make some standard assumptions of boundedness on 𝒢sg,ag\mathcal{G}_{s_{g},a_{g}} and si,sg,ai\mathcal{L}_{s_{i},s_{g},a_{i}}.

Assumption G.1.

Define

𝒢¯=(sg,ag)𝒮g×𝒜gsupp(𝒢sg,ag),¯=(si,sg,ai)𝒮l×𝒮g×𝒜lsupp(si,sg,ai),\bar{\mathcal{G}}=\bigcup_{(s_{g},a_{g})\in\mathcal{S}_{g}\times\mathcal{A}_{g}}\mathrm{supp}\left(\mathcal{G}_{s_{g},a_{g}}\right),\quad\quad\bar{\mathcal{L}}=\bigcup_{(s_{i},s_{g},a_{i})\in\mathcal{S}_{l}\times\mathcal{S}_{g}\times\mathcal{A}_{l}}\mathrm{supp}\left(\mathcal{L}_{s_{i},s_{g},a_{i}}\right), (36)

where for any distribution $\mathcal{D}$, $\mathrm{supp}(\mathcal{D})$ is the support of $\mathcal{D}$ (the set of all values that can be sampled from $\mathcal{D}$ with probability strictly larger than $0$). Then, let $\hat{\mathcal{G}}=\sup(\bar{\mathcal{G}})$, $\hat{\mathcal{L}}=\sup(\bar{\mathcal{L}})$, $\check{\mathcal{G}}=\inf(\bar{\mathcal{G}})$, and $\check{\mathcal{L}}=\inf(\bar{\mathcal{L}})$. We assume that $\hat{\mathcal{G}}<\infty$, $\hat{\mathcal{L}}<\infty$, $\check{\mathcal{G}}>-\infty$, $\check{\mathcal{L}}>-\infty$, and that $\hat{\mathcal{G}},\hat{\mathcal{L}},\check{\mathcal{G}},\check{\mathcal{L}}$ are all known in advance.

Definition G.2.

Let the randomized empirical adapted Bellman operator be 𝒯^randomk,m\hat{\mathcal{T}}^{\text{random}}_{k,m} such that:

𝒯^randomk,mQ^tk,m(sg,s1,FzΔ,ag,a1)=RΔ(s,a)+γm[m]maxa𝒜Q^k,mt(sg,sj,FzΔj,aj,ag),\hat{\mathcal{T}}^{\text{random}}_{k,m}\hat{Q}^{t}_{k,m}(s_{g},s_{1},F_{z_{\Delta}},a_{g},a_{1})=R_{\Delta}(s,a)+\frac{\gamma}{m}\sum_{\ell\in[m]}\max_{a^{\prime}\in\mathcal{A}}\hat{Q}_{k,m}^{t}(s_{g}^{\ell},s_{j}^{\ell},F_{z_{\Delta\setminus j}^{\ell}},a_{j}^{\ell},a_{g}^{\ell}), (37)
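To make Definition G.2 concrete, the following Python sketch (illustrative only, not the authors' implementation) computes one application of the randomized empirical adapted Bellman operator for a tabular Q-function stored as a dictionary; the callables sample_reward and sample_next_state are hypothetical stand-ins for one draw of the stochastic reward R_\Delta(s,a) and for the oracle \mathcal{O}.

def empirical_adapted_bellman(Q_hat, state, action, actions, m, gamma,
                              sample_reward, sample_next_state):
    # One application of the randomized empirical adapted Bellman operator
    # (Equation (37)): a single stochastic reward draw plus the average, over
    # m sampled next states, of the greedy value under the current table.
    value = sample_reward(state, action)                  # one draw of R_Delta(s, a)
    for _ in range(m):
        next_state = sample_next_state(state, action)     # oracle draw of the next state
        value += (gamma / m) * max(Q_hat.get((next_state, a), 0.0) for a in actions)
    return value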

SUBSAMPLE-MFQ: Learning with Stochastic Rewards.

The proposed algorithm averages Ξ\Xi samples of the randomized empirical adapted Bellman operator 𝒯^k,mrandom\hat{\mathcal{T}}_{k,m}^{\text{random}} and updates the Q^k,m\hat{Q}_{k,m} function with this average. One can show that 𝒯^k,mrandom\hat{\mathcal{T}}_{k,m}^{\text{random}} is a contraction operator with modulus γ\gamma.

By Banach’s fixed point theorem, 𝒯^k,mrandom\hat{\mathcal{T}}_{k,m}^{\text{random}} therefore admits a unique fixed point Q^k,mrandom\hat{Q}_{k,m}^{\text{random}}.

Algorithm 5 SUBSAMPLE-MFQ: Learning with Stochastic Rewards
0:  A multi-agent system as described in Section 2. Parameter TT for the number of iterations in the initial value iteration step. Sampling parameters k[n]k\in[n] and mm\in\mathbb{N}. Discount parameter γ(0,1)\gamma\in(0,1). Oracle 𝒪\mathcal{O} to sample sgPg(|sg,ag)s_{g}^{\prime}\sim{P}_{g}(\cdot|s_{g},a_{g}) and siPl(|si,sg,ai)s_{i}^{\prime}\sim{P}_{l}(\cdot|s_{i},s_{g},a_{i}) for all i[n]i\in[n].
1:  Set Δ~={2,,k}\tilde{\Delta}=\{2,\dots,k\}.
2:  Set μk1(Zl)={0,1k1,2k1,,1}|𝒮l|×|𝒜l|\mu_{k-1}(Z_{l})=\{0,\frac{1}{k-1},\frac{2}{k-1},\dots,1\}^{|\mathcal{S}_{l}|\times|\mathcal{A}_{l}|}.
3:  Set Q^0k,m(sg,s1,FzΔ~,ag,a1)=0\hat{Q}^{0}_{k,m}(s_{g},s_{1},F_{z_{\tilde{\Delta}}},a_{g},a_{1})=0, for (sg,s1,FzΔ~,ag,a1)𝒮g×𝒮l×μk1(𝒵l)×𝒜g×𝒜l(s_{g},s_{1},F_{z_{\tilde{\Delta}}},a_{g},a_{1})\in\mathcal{S}_{g}\times\mathcal{S}_{l}\times\mu_{k-1}(\mathcal{Z}_{l})\times\mathcal{A}_{g}\times\mathcal{A}_{l}.
4:  for t=1t=1 to TT do
5:     for (sg,s1,FzΔ~,ag,a1)𝒮g×𝒮l×μk1(𝒵l)×𝒜g×𝒜l(s_{g},s_{1},F_{z_{\tilde{\Delta}}},a_{g},a_{1})\in\mathcal{S}_{g}\times\mathcal{S}_{l}\times\mu_{k-1}{(\mathcal{Z}_{l})}\times\mathcal{A}_{g}\times\mathcal{A}_{l} do
6:        ρ=0\rho=0
7:        for ξ{1,,Ξ}\xi\in\{1,\dots,\Xi\} do
8:           ρ=ρ+𝒯^randomk,mQ^tk,m(sg,s1,FzΔ~,ag,a1)\rho=\rho+\hat{\mathcal{T}}^{\text{random}}_{k,m}\hat{Q}^{t}_{k,m}(s_{g},s_{1},F_{z_{\tilde{\Delta}}},a_{g},a_{1})
9:        end for
10:        Q^t+1k,m(sg,s1,FzΔ~,ag,a1)=ρ/Ξ\hat{Q}^{t+1}_{k,m}(s_{g},s_{1},F_{z_{\tilde{\Delta}}},a_{g},a_{1})=\rho/\Xi
11:     end for
12:  end for
13:  For all (sg,si,FsΔi)𝒮g×𝒮l×μk1(𝒮l)(s_{g},s_{i},F_{s_{\Delta\setminus i}})\in\mathcal{S}_{g}\times\mathcal{S}_{l}\times\mu_{k-1}{(\mathcal{S}_{l})}, let
π^k,mest(sg,si,FsΔi)=argmax(ag,ai,FaΔi)𝒜g×𝒜l×μk1(𝒵l)Q^k,mT(sg,si,FzΔi,ag,ai).\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g},s_{i},F_{s_{\Delta\setminus i}})=\mathop{\operatorname*{arg\,max}}_{(a_{g},a_{i},F_{a_{\Delta\setminus i}})\in\mathcal{A}_{g}\times\mathcal{A}_{l}\times\mu_{k-1}{(\mathcal{Z}_{l})}}\hat{Q}_{k,m}^{T}(s_{g},s_{i},F_{z_{\Delta\setminus i}},a_{g},a_{i}).
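For concreteness, a condensed Python sketch of the value-iteration loop in Algorithm 5 is given below; it treats each abstracted tuple (s_g, s_1, F_{z_{\tilde{\Delta}}}) as an opaque "state" key and each (a_g, a_1) pair as an opaque "action" key, and it reuses the empirical_adapted_bellman helper sketched after Definition G.2. This is a simplified illustration of the averaging over Ξ\Xi operator draws, not the authors' code.

def subsample_mfq_stochastic(states, actions, T, Xi, m, gamma,
                             sample_reward, sample_next_state):
    # Algorithm 5 (sketch): tabular value iteration in which each Bellman
    # backup is the average of Xi draws of the randomized empirical operator.
    Q_hat = {(s, a): 0.0 for s in states for a in actions}        # line 3: initialization
    for _ in range(T):                                            # line 4: T iterations
        Q_next = {}
        for (s, a) in Q_hat:                                      # line 5: sweep the table
            rho = 0.0
            for _ in range(Xi):                                   # lines 7-9: accumulate draws
                rho += empirical_adapted_bellman(Q_hat, s, a, actions, m, gamma,
                                                 sample_reward, sample_next_state)
            Q_next[(s, a)] = rho / Xi                             # line 10: average
        Q_hat = Q_next
    # line 13: greedy policy extraction from the learned table
    policy = {s: max(actions, key=lambda a: Q_hat[(s, a)]) for s in states}
    return Q_hat, policy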
Theorem G.3 (Hoeffding’s Theorem (Tsybakov, 2008)).

Let X1,,XnX_{1},\dots,X_{n} be independent random variables such that aiXibia_{i}\leq X_{i}\leq b_{i} almost surely, and let Sn=X1++XnS_{n}=X_{1}+\dots+X_{n}. Then, for all ϵ>0\epsilon>0,

[|Sn𝔼[Sn]|ϵ]2exp(2ϵ2i=1n(biai)2)\mathbb{P}[|S_{n}-\mathbb{E}[S_{n}]|\geq\epsilon]\leq 2\exp\left(-\frac{2\epsilon^{2}}{\sum_{i=1}^{n}(b_{i}-a_{i})^{2}}\right) (38)
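As a quick sanity check on Equation (38), the short numpy simulation below (illustrative only) compares the empirical deviation frequency of a sum of bounded i.i.d. variables against the Hoeffding bound.

import numpy as np

rng = np.random.default_rng(1)
n_vars, trials, eps = 50, 20_000, 5.0
X = rng.uniform(0.0, 1.0, size=(trials, n_vars))     # independent X_i with a_i = 0, b_i = 1
S = X.sum(axis=1)                                    # S_n, with E[S_n] = n/2
empirical = np.mean(np.abs(S - 0.5 * n_vars) >= eps)
bound = 2.0 * np.exp(-2.0 * eps ** 2 / n_vars)       # right-hand side of Equation (38)
print(empirical, bound)                              # the empirical frequency stays below the bound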
Lemma G.4.

When πestk,m{\pi}^{\mathrm{est}}_{k,m} is derived from the randomized empirical value iteration in Algorithm 5 and the learned policy is executed with our online subsampling procedure in Algorithm 3, we get

Pr[Vπ(s0)Vπ~k,m(s0)O~(1k)]11100k.\Pr\left[V^{\pi^{*}}(s_{0})-V^{\tilde{\pi}_{k,m}}(s_{0})\leq\tilde{O}\left(\frac{1}{\sqrt{k}}\right)\right]\geq 1-\frac{1}{100\sqrt{k}}. (39)
Proof.

Applying Theorem G.3 to the Ξ\Xi independent reward samples accumulated in ρ\rho, each of which lies in an interval of width at most |\hat{\mathcal{G}}+\hat{\mathcal{L}}-\check{\mathcal{G}}-\check{\mathcal{L}}|, gives

\Pr\left[\left|\frac{\rho}{\Xi}-\mathbb{E}[R_{\Delta}(s,a)]\right|\geq\frac{\epsilon}{\Xi}\right]\leq 2\exp\left(-\frac{2\epsilon^{2}}{\sum_{i=1}^{\Xi}|\hat{\mathcal{G}}+\hat{\mathcal{L}}-\check{\mathcal{G}}-\check{\mathcal{L}}|^{2}}\right)=2\exp\left(-\frac{2\epsilon^{2}}{\Xi|\hat{\mathcal{G}}+\hat{\mathcal{L}}-\check{\mathcal{G}}-\check{\mathcal{L}}|^{2}}\right) (40)

Rearranging this, we get:

\Pr\left[\left|\frac{\rho}{\Xi}-\mathbb{E}[R_{\Delta}(s,a)]\right|\leq\sqrt{\ln\left(\frac{2}{\delta}\right)\frac{|\hat{\mathcal{G}}+\hat{\mathcal{L}}-\check{\mathcal{G}}-\check{\mathcal{L}}|^{2}}{2\Xi^{2}}}\right]\geq 1-\delta (41)

Then, setting δ=1100k\delta=\frac{1}{100\sqrt{k}} and \Xi=10|\hat{\mathcal{G}}+\hat{\mathcal{L}}-\check{\mathcal{G}}-\check{\mathcal{L}}|k^{1/4}\sqrt{\ln(200\sqrt{k})} gives:

Pr[|ρΞ𝔼[RΔ(s,a)]|1200k]11100k\Pr\left[\left|\frac{\rho}{\Xi}-\mathbb{E}[R_{\Delta}(s,a)]\right|\leq\frac{1}{\sqrt{200k}}\right]\geq 1-\frac{1}{100\sqrt{k}}

Applying the triangle inequality to ϵk,m\epsilon_{k,m} then yields a probabilistic bound on the optimality gap between Q^k,mest\hat{Q}_{k,m}^{\mathrm{est}} and QQ^{*}, where the gap increases by at most 1200k\frac{1}{\sqrt{200k}} and where the randomness is over the stochasticity of the rewards. Consequently, when the policy π^estk,m\hat{\pi}^{\mathrm{est}}_{k,m} is learned using Algorithm 5 in the presence of stochastic rewards satisfying Assumption G.1, the optimality gap between VπV^{\pi^{*}} and Vπ~k,mV^{\tilde{\pi}_{k,m}} satisfies

Pr[Vπ(s0)Vπ~k,m(s0)O~(1k)]11100k,\Pr\left[V^{\pi^{*}}(s_{0})-V^{\tilde{\pi}_{k,m}}(s_{0})\leq\tilde{O}\left(\frac{1}{\sqrt{k}}\right)\right]\geq 1-\frac{1}{100\sqrt{k}}, (42)

which proves the lemma.∎
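For intuition about the size of the sample budget implied by this choice, the short Python helper below (hypothetical function and argument names; it only assumes the reward bounds from Assumption G.1 are known) evaluates the δ\delta and Ξ\Xi used in the proof above.

import math

def sample_budget(k, G_hi, L_hi, G_lo, L_lo):
    # delta and Xi as chosen in the proof of Lemma G.4; G_hi, L_hi (G_lo, L_lo)
    # denote the known upper (lower) bounds on the global and local rewards.
    width = abs(G_hi + L_hi - G_lo - L_lo)
    delta = 1.0 / (100.0 * math.sqrt(k))
    Xi = 10.0 * width * k ** 0.25 * math.sqrt(math.log(200.0 * math.sqrt(k)))
    return math.ceil(Xi), delta

print(sample_budget(k=100, G_hi=1.0, L_hi=1.0, G_lo=0.0, L_lo=0.0))   # roughly (175, 0.001)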

Remark G.5.

Note that, even with this naïve averaging argument, the optimality gap above still decays to 0 with probability tending to 11 as kk increases.

Remark G.6.

This method of analysis could be strengthened by obtaining estimates of \hat{\mathcal{G}},\hat{\mathcal{L}},\check{\mathcal{G}},\check{\mathcal{L}}, using methods from order statistics to bound the errors of these estimates, and using the deviations between the estimates to determine an optimal online stopping time (Kleinberg, 2005), as in the online secretary problem. This more sophisticated argument would be an essential step in converting the method into a truly online learning scheme with stochastic approximation. Furthermore, one could incorporate a variance-based analysis in the algorithm, using information derived from higher-order moments to perform a weighted Bellman update (Jin et al., 2024), thereby taking advantage of the skewness of the reward distribution and allowing us to assign an optimism/pessimism score to our estimate of the reward.

Appendix H Extension to Continuous State/Action Spaces

Multi-agent settings where each agent handles a continuous state/action space find many applications in optimization, control, and synchronization.

Example H.1 (Quadcopter Swarm Drone (Preiss et al., 2017)).

Consider a system of drones with a global controller, where each drone has to chase the controller, and the controller is designed to follow a bounded trajectory. Here, the state of each local agent i[n]i\in[n] is its position and velocity in the bounded region, and the state of the global agent gg is a signal on its position and direction. The action of each local agent is a velocity vector al𝒜l3a_{l}\in\mathcal{A}_{l}\subset\mathbb{R}^{3} which is a bounded subset of 3\mathbb{R}^{3}, and the action of the global agent is a vector ag𝒜g2a_{g}\in\mathcal{A}_{g}\subset\mathbb{R}^{2} which is a bounded subset of 2\mathbb{R}^{2}.

Hence, this section is devoted to extending our algorithm and theoretical results to the case where the state and action spaces can be continuous (and therefore have infinite cardinality).

Preliminaries.

For a measurable space (𝒮,)(\mathcal{S},\mathcal{B}), where \mathcal{B} is a σ\sigma-algebra on 𝒮\mathcal{S}, let 𝒮\mathbb{R}^{\mathcal{S}} denote the set of all real-valued \mathcal{B}-measurable functions on 𝒮\mathcal{S}. Let (X,d)(X,d) be a metric space, where XX is a set and dd is a metric on XX. A set SXS\subseteq X is dense in XX if every element of XX is either in SS or a limit point of SS. A set is nowhere dense in XX if the interior of its closure in XX is empty. A set SXS\subseteq X is of Baire first category if SS is a union of countably many nowhere dense sets. A set SXS\subseteq X is of Baire second category if XSX\setminus S is of first category.

Theorem H.2 (Baire Category Theorem (Jin et al., 2020)).

Let (X,d)(X,d) be a complete metric space. Then any countable intersection of dense open subsets of XX is dense.

Definition H.3.

MDP(S,A,,r)\mathrm{MDP}(S,A,\mathbb{P},r) is a linear MDP with feature map ϕ:𝒮×𝒜d\phi:\mathcal{S}\times\mathcal{A}\to\mathbb{R}^{d} if there exist dd unknown (signed) measures μ=(μ1,,μd)\mu=(\mu^{1},\dots,\mu^{d}) over 𝒮\mathcal{S} and a vector θd\theta\in\mathbb{R}^{d} such that for any (s,a)𝒮×𝒜(s,a)\in\mathcal{S}\times\mathcal{A}, we have

(|s,a)=ϕ(s,a),μ(),r(s,a)=ϕ(s,a),θ\mathbb{P}(\cdot|s,a)=\langle\phi(s,a),\mu(\cdot)\rangle,\quad\quad r(s,a)=\langle\phi(s,a),\theta\rangle (43)

Without loss of generality, assume ϕ(s,a)1,(s,a)S×A\|\phi(s,a)\|\leq 1,\forall(s,a)\in S\times A and max{μ(S),θ}d\max\{\|\mu(S)\|,\|\theta\|\}\leq\sqrt{d}.

We motivate our analysis by reviewing representation learning in RL via spectral decompositions. For instance, if P(s|s,a)P(s^{\prime}|s,a) admits a linear decomposition in terms of some spectral features ϕ(s,a)\phi(s,a) and μ(s)\mu(s^{\prime}), then the Q(s,a)Q(s,a)-function can be linearly represented in terms of the spectral features ϕ(s,a)\phi(s,a).
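To make this concrete, the small numpy example below (synthetic and illustrative, not taken from the paper) builds a toy linear MDP in which P and r factor through random features ϕ\phi, and checks via least squares that the Q-function obtained by value iteration is linear in ϕ\phi.

import numpy as np

rng = np.random.default_rng(0)
S, A, d = 6, 3, 4
phi = rng.random((S, A, d)); phi /= phi.sum(-1, keepdims=True)   # phi(s, a) on the simplex
mu = rng.random((d, S)); mu /= mu.sum(-1, keepdims=True)         # each mu^j a distribution over S
theta = rng.random(d)
P = phi @ mu                  # P(s'|s,a) = <phi(s,a), mu(s')>; rows are valid distributions
r = phi @ theta               # r(s,a)   = <phi(s,a), theta>

gamma, Q = 0.9, np.zeros((S, A))
for _ in range(500):                                  # value iteration on the toy MDP
    Q = r + gamma * P @ Q.max(axis=1)
w, *_ = np.linalg.lstsq(phi.reshape(S * A, d), Q.ravel(), rcond=None)
print(np.abs(phi.reshape(S * A, d) @ w - Q.ravel()).max())       # ~0: Q is linear in phi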

Then, through a reduction from Ren et al. (2024) that uses function approximation to learn the spectral features ϕk\phi_{k} for Q^k\hat{Q}_{k}, we derive a performance guarantee for the learned policy πkest\pi_{k}^{\mathrm{est}}, where the optimality gap decays with kk.

Theorem H.4.

When πestk{\pi}^{\mathrm{est}}_{k} is derived from the spectral features ϕk\phi_{k} learned in Q^k\hat{Q}_{k}, and MM is the number of samples used in the function approximation, then

Pr[Vπ(s)Vπestk(s)O~(1k+ϕk5log2k2M+2γr~(1γ)kϕk)](11100k)(12k)\displaystyle\Pr\bigg{[}V^{\pi^{*}}(s)-V^{{\pi}^{\mathrm{est}}_{k}}(s)\leq\tilde{O}\bigg{(}\frac{1}{\sqrt{k}}+\frac{\|{\phi}_{k}\|^{5}\log 2k^{2}}{\sqrt{M}}+\frac{2\gamma\tilde{r}}{(1-\gamma)\sqrt{k}}\|\phi_{k}\|\bigg{)}\bigg{]}\geq\left(1-\frac{1}{100\sqrt{k}}\right)\cdot\left(1-\frac{2}{\sqrt{k}}\right)

Specifically, suppose the system is a linear MDP, where 𝒮g\mathcal{S}_{g} and 𝒮l\mathcal{S}_{l} are infinite compact sets; the guarantee in Theorem H.4 then follows from the reduction from (Ren et al., 2024) described above.

Assumption H.5.

Suppose 𝒮gσg,𝒮lσl,𝒜gαg,𝒜lαl\mathcal{S}_{g}\subset\mathbb{R}^{\sigma_{g}},\mathcal{S}_{l}\subset\mathbb{R}^{\sigma_{l}},\mathcal{A}_{g}\subset\mathbb{R}^{\alpha_{g}},\mathcal{A}_{l}\subset\mathbb{R}^{\alpha_{l}} are bounded compact sets. By the Baire category theorem, the underlying field \mathbb{R} can be replaced with any set of Baire first category which satisfies the property that there exists a dense open subset. In particular, we replace Assumption 2.3 with a boundedness assumption.


Lemma H.6 (Proposition 2.3 in (Jin et al., 2020)).

For any linear MDP, for any policy π\pi, there exist weights {wπ}\{w^{\pi}\} such that for any (s,a)𝒮×𝒜(s,a)\in\mathcal{S}\times\mathcal{A}, we have Qπ(s,a)=ϕ(s,a),𝐰πQ^{\pi}(s,a)=\langle\phi(s,a),\mathbf{w}^{\pi}\rangle.

Lemma H.7 (Proposition A.1 in (Jin et al., 2020)).

For any linear MDP, for any (s,a)𝒮×𝒜(s,a)\in\mathcal{S}\times\mathcal{A}, and for any measurable subset 𝒮\mathcal{B}\subseteq\mathcal{S}, we have that

ϕ(s,a)μ(𝒮)=1,ϕ(s,a)μ()0.\phi(s,a)^{\top}\mu(\mathcal{S})=1,\quad\quad\phi(s,a)^{\top}\mu(\mathcal{B})\geq 0.
Property H.8.

Suppose that there exist spectral representations μg(sg)d\mu_{g}(s_{g}^{\prime})\in\mathbb{R}^{d} and μl(si)d\mu_{l}(s_{i}^{\prime})\in\mathbb{R}^{d} such that the probability transitions Pg(sg|sg,ag)P_{g}(s_{g}^{\prime}|s_{g},a_{g}) and Pl(si|si,sg,ai)P_{l}(s_{i}^{\prime}|s_{i},s_{g},a_{i}) can be linearly decomposed as

Pg(sg|sg,ag)=ϕg(sg,ag)μg(sg)P_{g}(s_{g}^{\prime}|s_{g},a_{g})=\phi_{g}(s_{g},a_{g})^{\top}\mu_{g}(s_{g}^{\prime}) (44)
Pl(si|si,sg,ai)=ϕl(si,sg,ai)μl(si)P_{l}(s_{i}^{\prime}|s_{i},s_{g},a_{i})=\phi_{l}(s_{i},s_{g},a_{i})^{\top}\mu_{l}(s_{i}^{\prime}) (45)

for some features ϕg(sg,ag)d\phi_{g}(s_{g},a_{g})\in\mathbb{R}^{d} and ϕl(si,sg,ai)d\phi_{l}(s_{i},s_{g},a_{i})\in\mathbb{R}^{d}. Then, the dynamics admit the spectral factorization of (Ren et al., 2024), as the following lemma shows.

Lemma H.9.

Q^k\hat{Q}_{k} admits a linear representation.

Proof.

Given the factorization of the dynamics of the multi-agent system, we have:

(s|s,a)\displaystyle\mathbb{P}(s^{\prime}|s,a) =Pg(sg|sg,ag)i=1nPl(si|si,sg,ai)\displaystyle=P_{g}(s_{g}^{\prime}|s_{g},a_{g})\cdot\prod_{i=1}^{n}P_{l}(s_{i}^{\prime}|s_{i},s_{g},a_{i})
=ϕg(sg,ag),μg(sg)i=1nϕl(si,sg,ai),μl(si)\displaystyle=\langle\phi_{g}(s_{g},a_{g}),\mu_{g}(s_{g}^{\prime})\rangle\cdot\prod_{i=1}^{n}\langle\phi_{l}(s_{i},s_{g},a_{i}),\mu_{l}(s_{i}^{\prime})\rangle
=ϕg(sg,ag),μg(sg)i=1nϕl(si,sg,ai),i=1nμl(si):-ϕ¯n(s,a),μ¯n(s)\displaystyle=\langle\phi_{g}(s_{g},a_{g}),\mu_{g}(s_{g}^{\prime})\rangle\cdot\left\langle\otimes_{i=1}^{n}\phi_{l}(s_{i},s_{g},a_{i}),\otimes_{i=1}^{n}\mu_{l}(s_{i}^{\prime})\right\rangle\coloneq\langle\bar{\phi}_{n}(s,a),\bar{\mu}_{n}(s^{\prime})\rangle

Similarly, for any Δ[n]\Delta\subseteq[n] where |Δ|=k|\Delta|=k, the subsystem consisting of kk local agents Δ\Delta has a subsystem dynamics given by

(sΔ,sg|sΔ,sg,ag,aΔ)=ϕ¯k(sg,sΔ,ag,aΔ),μ¯k(sΔ).\mathbb{P}(s_{\Delta}^{\prime},s_{g}|s_{\Delta},s_{g},a_{g},a_{\Delta})=\langle\bar{\phi}_{k}(s_{g},s_{\Delta},a_{g},a_{\Delta}),\bar{\mu}_{k}(s_{\Delta}^{\prime})\rangle.

Therefore, Q^k\hat{Q}_{k} admits the linear representation:

Q^kπk(sg,FzΔ,ag)\displaystyle\hat{Q}_{k}^{\pi_{k}}(s_{g},F_{z_{\Delta}},a_{g}) =rg(sg,ag)+1kiΔrl(si,sg,ai)+γ𝔼sg,sΔmaxag,aΔQ^kπk(sg,FzΔ,ag)\displaystyle=r_{g}(s_{g},a_{g})+\frac{1}{k}\sum_{i\in\Delta}r_{l}(s_{i},s_{g},a_{i})+\gamma\mathbb{E}_{s_{g}^{\prime},s_{\Delta^{\prime}}}\max_{a_{g}^{\prime},a_{\Delta}^{\prime}}\hat{Q}_{k}^{\pi_{k}}(s_{g}^{\prime},F_{z_{\Delta}^{\prime}},a_{g}^{\prime})
=rg(sg,ag)+1kiΔrl(si,sg,ai)+(Vπk(sg,FsΔ,ag))\displaystyle=r_{g}(s_{g},a_{g})+\frac{1}{k}\sum_{i\in\Delta}r_{l}(s_{i},s_{g},a_{i})+(\mathbb{P}V^{\pi_{k}}(s_{g},F_{s_{\Delta}},a_{g}))
=[rΔ(s,a)ϕ¯k(sg,sΔ,ag,aΔ)][1γsΔμk(sΔ)Vπk(sΔ)dsΔ],\displaystyle=\begin{bmatrix}r_{\Delta}(s,a)\\ \bar{\phi}_{k}(s_{g},s_{\Delta},a_{g},a_{\Delta})\end{bmatrix}^{\top}\begin{bmatrix}1\\ \gamma\int_{s_{\Delta}^{\prime}}\mu_{k}(s_{\Delta}^{\prime})V^{\pi_{k}}(s_{\Delta}^{\prime})\mathrm{d}s_{\Delta}^{\prime}\end{bmatrix},

proving the claim. ∎

Therefore, ϕ¯k\bar{\phi}_{k} serves as a good representation for the Q^k\hat{Q}_{k}-function, making the problem amenable to classic linear function approximation algorithms consisting of feature generation and policy gradients.
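The tensor-product construction of ϕ¯\bar{\phi} from the proof of Lemma H.9 can be written down directly; the snippet below (illustrative only, and exponential in kk, so suitable only for tiny examples) forms the joint feature as the Kronecker product of a global feature with the local features.

import numpy as np
from functools import reduce

def joint_feature(phi_g, phi_l_list):
    # bar{phi}: Kronecker (tensor) product of the global feature with each local
    # feature, mirroring the factorization in the proof of Lemma H.9. The
    # dimension grows as d^(k+1), so this is only for illustrating the structure.
    return reduce(np.kron, phi_l_list, phi_g)

phi_g = np.array([0.6, 0.4])                                  # toy global feature, d = 2
phi_l = [np.array([0.7, 0.3]), np.array([0.2, 0.8])]          # k = 2 toy local features
print(joint_feature(phi_g, phi_l).shape)                      # (8,) = 2 ** 3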

In feature generation, we generate the appropriate features ϕ\phi, comprising the local reward functions and the spectral features coming from the factorization of the dynamics. In the policy-gradient step, we perform a gradient step to update the local policy weights θi\theta_{i} and obtain the new policy. For this, we update the weight ww via the TD(0) target semi-gradient method (with step size α\alpha), to get

wt+1=wt+α(rΔ(st,at)+γϕ¯k(st+1Δ)wtϕ¯k(sΔt)wt)ϕ¯k(sΔt)w_{t+1}=w_{t}+\alpha(r_{\Delta}(s^{t},a^{t})+\gamma\bar{\phi}_{k}(s^{t+1}_{\Delta})^{\top}w_{t}-\bar{\phi}_{k}(s_{\Delta}^{t})^{\top}w_{t})\bar{\phi}_{k}(s_{\Delta}^{t})\\ (46)
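A direct transcription of Equation (46) as a single TD(0) semi-gradient step might look as follows (a sketch; phi_t and phi_tp1 denote the features ϕ¯k\bar{\phi}_{k} of the current and next subsystem states).

import numpy as np

def td0_semi_gradient_step(w, phi_t, phi_tp1, reward, gamma, alpha):
    # One TD(0) semi-gradient update of the linear value weights w, as in
    # Equation (46): the TD error uses the bootstrapped target r + gamma * phi'^T w.
    td_error = reward + gamma * phi_tp1 @ w - phi_t @ w
    return w + alpha * td_error * phi_t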
Definition H.10.

Let the complete feature map be denoted by Φ|𝒮|×d\Phi\in\mathbb{R}^{|\mathcal{S}|\times d}, where

Φ=[ϕ(1)ϕ(2)ϕ(S)]S×d\Phi=\begin{bmatrix}\phi(1)\\ \phi(2)\\ \vdots\\ \phi(S)\end{bmatrix}\in\mathbb{R}^{S\times d} (47)

Similarly, let the subsampled feature map be denoted by Φ^k|𝒮g|×|μk1(𝒮l)|×d\hat{\Phi}_{k}\in\mathbb{R}^{|\mathcal{S}_{g}|\times|\mu_{k-1}{(\mathcal{S}_{l})}|\times d}, where

Φ^k=[ϕ¯k(1)ϕ¯k(2)ϕ¯k(|𝒮g|×|μk1(𝒮l)|)]|𝒮g|×|μk1(𝒮l)|×d\hat{\Phi}_{k}=\begin{bmatrix}\bar{\phi}_{k}(1)\\ \bar{\phi}_{k}(2)\\ \vdots\\ \bar{\phi}_{k}(|\mathcal{S}_{g}|\times|\mu_{k-1}{(\mathcal{S}_{l})}|)\end{bmatrix}\in\mathbb{R}^{|\mathcal{S}_{g}|\times|\mu_{k-1}{(\mathcal{S}_{l})}|\times d}\\ (48)

Here, the system’s stage rewards are given by:

r=[r(1)r(S)]S,r=\begin{bmatrix}r(1)&\dots&r(S)\end{bmatrix}\in\mathbb{R}^{S}, (49)
rk=[rk(1),,rk(|𝒮g|×|μk1(𝒮l)|)]|𝒮g|×|μk1(𝒮l)|,r_{k}=[r_{k}(1),\dots,r_{k}(|\mathcal{S}_{g}|\times|\mu_{k-1}{(\mathcal{S}_{l})}|)]\in\mathbb{R}^{|\mathcal{S}_{g}|\times|\mu_{k-1}{(\mathcal{S}_{l})}|}, (50)

where dd is the dimension of the low-dimensional embedding we wish to learn.

The goal is to approximate the value function as

VwΦ^kw=[Φ^k(1)Φ^k(|𝒮g|×|μk1(𝒮l)|)]wspan(Φ^k)V_{w}\approx\hat{\Phi}_{k}w=\begin{bmatrix}\hat{\Phi}_{k}(1)\\ \vdots\\ \hat{\Phi}_{k}(|\mathcal{S}_{g}|\times|\mu_{k-1}{(\mathcal{S}_{l})}|)\end{bmatrix}w\in\mathrm{span}(\hat{\Phi}_{k})

This manner of updating the weights can be viewed through the projected Bellman equation Φ^kw=Πμ𝒯(Φ^kw)\hat{\Phi}_{k}w=\Pi_{\mu}\mathcal{T}(\hat{\Phi}_{k}w), where Πμ(v)=argminzspan(Φ^k)zvμ2\Pi_{\mu}(v)=\arg\min_{z\in\mathrm{span}(\hat{\Phi}_{k})}\|z-v\|_{\mu}^{2}. Notably, the fixed point of the projected Bellman equation satisfies

w=(Φ^kDμΦ^k)1Φ^kDμ(r+γPπΦ^kw),w=(\hat{\Phi}_{k}^{\top}D_{\mu}\hat{\Phi}_{k})^{-1}\hat{\Phi}_{k}^{\top}D_{\mu}(r+\gamma P^{\pi}\hat{\Phi}_{k}w),

where DμD_{\mu} is the diagonal matrix with the entries of μ\mu on its diagonal, i.e., Dμ=diag(μ1,,μn)D_{\mu}=\mathrm{diag}(\mu_{1},\dots,\mu_{n}). Then,

(Φ^kDμΦ^k)w=Φ^kDμ(r+γPπΦ^kw)(\hat{\Phi}_{k}^{\top}D_{\mu}\hat{\Phi}_{k})w=\hat{\Phi}_{k}^{\top}D_{\mu}(r+\gamma P^{\pi}\hat{\Phi}_{k}w)

In turn, this implies

Φ^kDμ(IγPπ)Φ^kw=Φ^kDμr.\hat{\Phi}_{k}^{\top}D_{\mu}(I-\gamma P^{\pi})\hat{\Phi}_{k}w=\hat{\Phi}_{k}^{\top}D_{\mu}r.

Therefore, the problem is amenable to Algorithm 1 in Ren et al. (2024). To bound the error of using linear function approximation to learn Q^k\hat{Q}_{k}, we use a result from Ren et al. (2024).
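In the population (known-model) limit, the displayed fixed-point condition is a d×dd\times d linear system that can be solved directly; the numpy sketch below (an LSTD-style solve under the assumption that PπP^{\pi}, rr, and μ\mu are available, which they are not in the learning setting) is only meant to illustrate the equation.

import numpy as np

def projected_bellman_weights(Phi_k, P_pi, r, mu, gamma):
    # Solve  Phi_k^T D_mu (I - gamma P_pi) Phi_k w = Phi_k^T D_mu r  for w.
    # Phi_k: (num_states x d) feature matrix, P_pi: (num_states x num_states)
    # transition matrix under pi, r: per-state reward vector, mu: state distribution.
    D = np.diag(mu)
    A = Phi_k.T @ D @ (np.eye(len(mu)) - gamma * P_pi) @ Phi_k
    b = Phi_k.T @ D @ r
    return np.linalg.solve(A, b)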

Lemma H.11 (Policy Evaluation Error, Theorem 6 of Ren et al. (2024)).

Suppose the sample size Mlog(4nδ)M\geq\log\left(\frac{4n}{\delta}\right), where nn is the number of agents and δ(0,1)\delta\in(0,1) is an error parameter. Then, with probability at least 12δ1-2\delta, the ground-truth Q^πk(s,a)\hat{Q}^{\pi}_{k}(s,a) function and the approximated Q^k\hat{Q}_{k}-function Q^LFAk(s,a)\hat{Q}^{\mathrm{LFA}}_{k}(s,a) satisfy, for any (s,a)𝒮×𝒜,(s,a)\in\mathcal{S}\times\mathcal{A},

𝔼[|Q^kπ(s,a)Q^kLFA(s,a)|]O(log(2kδ)ϕ¯k5Mstatistical error+2ϵPγr~1γϕ¯kapproximation error),\mathbb{E}\left[\left|\hat{Q}_{k}^{\pi}(s,a)-\hat{Q}_{k}^{\mathrm{LFA}}(s,a)\right|\right]\leq O\left(\underbrace{\log\left(\frac{2k}{\delta}\right)\frac{\|\bar{\phi}_{k}\|^{5}}{\sqrt{M}}}_{\emph{statistical error}}+\underbrace{2\epsilon_{P}\gamma\cdot\frac{\tilde{r}}{1-\gamma}\cdot\|\bar{\phi}_{k}\|}_{\emph{approximation error}}\right),

where ϵP\epsilon_{P} is the error in approximating the spectral features ϕg,ϕl\phi_{g},\phi_{l}.

Corollary H.12.

Therefore, when πk,m{\pi}_{k,m} is derived from the spectral features learned in Q^LFAk\hat{Q}^{\mathrm{LFA}}_{k}, applying the triangle inequality on the Bellman noise and setting δ=ϵP=1k\delta=\epsilon_{P}=\frac{1}{\sqrt{k}} yields the following bound. (Following (Ren et al., 2024), the result easily generalizes to any positive-definite transition kernel noise, e.g., Laplacian, Cauchy, or Matérn.)

Pr[Vπ(s0)Vπk,m(s0)O~(1k+log(2k2)ϕ¯k5M+2kγr~1γϕ¯k)](11100k)(12k)\Pr\left[V^{\pi^{*}}(s_{0})-V^{{\pi}_{k,m}}(s_{0})\leq\tilde{O}\left(\frac{1}{\sqrt{k}}+\log\left(2k^{2}\right)\frac{\|\bar{\phi}_{k}\|^{5}}{\sqrt{M}}+\frac{2}{\sqrt{k}}\cdot\frac{\gamma\tilde{r}}{1-\gamma}\|\bar{\phi}_{k}\|\right)\right]\geq\left(1-\frac{1}{100\sqrt{k}}\right)\left(1-\frac{2}{\sqrt{k}}\right) (51)

Using ϕ¯k1\|\bar{\phi}_{k}\|\leq 1 gives

Pr[Vπ(s0)Vπk,m(s0)O~(1k+log(2k2)M+2γr~k(1γ))](11100k)(12k)\displaystyle\Pr\left[V^{\pi^{*}}(s_{0})-V^{{\pi}_{k,m}}(s_{0})\leq\tilde{O}\left(\frac{1}{\sqrt{k}}+\frac{\log\left(2k^{2}\right)}{\sqrt{M}}+\frac{2\gamma\tilde{r}}{\sqrt{k}(1-\gamma)}\right)\right]\geq\left(1-\frac{1}{100\sqrt{k}}\right)\left(1-\frac{2}{\sqrt{k}}\right)
Remark H.13.

Hence, in the continuous state/action space setting, as knk\to n and MM\to\infty, Vπ~k(s0)Vπ~n(s0)=Vπ(s0)V^{\tilde{\pi}_{k}}(s_{0})\to V^{\tilde{\pi}_{n}}(s_{0})=V^{\pi^{*}}(s_{0}). Intuitively, as knk\to n, the optimality gap diminishes following the arguments in Corollary F.7. As MM\to\infty, the number of samples obtained allows for more effective learning of the spectral features.

Appendix I Towards Deterministic Algorithms: Sharing Randomness

In this section, we propose the algorithms RANDOM-SUB-SAMPLE-MFQ and RANDOM-SUB-SAMPLE-MFQ+, which share randomness between agents through careful groupings, using shared randomness within each group. Through this, we establish progress towards the optimal partial derandomization of inherently stochastic and approximately optimal policies.

Algorithm 6 (Execution with weakly shared randomness). The local agents are heuristically divided into arbitrary groups of size kk. For each group hih_{i}, the k1k-1 sampled local agents are the same for every agent in the group at each iteration. The global agent’s action is then the majority vote over the global-agent actions recommended by the groups. At each round, this requires O(n2/k(nk))O(\left\lceil n^{2}/k\right\rceil(n-k)) random numbers.

Algorithm 6 SUB-SAMPLE-MFQ: Execution with weakly shared randomness
0:  A multi-agent system as described in Section 2. A distribution s0s_{0} on the initial global state s0=(sg,s[n])s_{0}=(s_{g},s_{{[n]}}). Parameter TT^{\prime} for the number of iterations for the decision-making sequence. Hyperparameter k[n]k\in[n]. Discount parameter γ\gamma. Policy π^estk,m(sg,sΔ)\hat{\pi}^{\mathrm{est}}_{k,m}(s_{g},s_{\Delta}).
1:  Sample (sg(0),s[n](0))s0(s_{g}(0),s_{[n]}(0))\sim s_{0}.
2:  Define groups h1,,hxh_{1},\dots,h_{x} of agents where x:-nkx\coloneq\left\lceil\frac{n}{k}\right\rceil and |h1|=|h2|==|hx1|=k|h_{1}|=|h_{2}|=\dots=|h_{x-1}|=k and |hx|=nmodk|h_{x}|=n\mod k.
3:  for t=0t=0 to T1T^{\prime}-1 do
4:     for i[x]i\in[x] do
5:        Let Δi\Delta_{i} be a uniform sample of ([n]hik1){[n]\setminus h_{i}\choose k-1}, and let ahi(t)=[π^k,mest(sg(t),sΔi(t))]1:ka_{h_{i}}(t)=[\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g}(t),s_{\Delta_{i}}(t))]_{1:k}.
6:     end for
7:     Let ag(t)=majority({[π^k,mest(sg(t),sΔi(t))]g}i[x])a_{g}(t)=\mathrm{majority}\left(\left\{[\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g}(t),s_{\Delta_{i}}(t))]_{g}\right\}_{i\in[x]}\right).
8:     Let sg(t+1)Pg(|sg(t),ag(t))s_{g}(t+1)\sim P_{g}(\cdot|s_{g}(t),a_{g}(t))
9:     Let si(t+1)Pl(|si(t),sg(t),ai(t))s_{i}(t+1)\sim P_{l}(\cdot|s_{i}(t),s_{g}(t),a_{i}(t)), for all i[n]i\in[n].
10:     Get reward r(s,a)=rg(sg,ag)+1ni[n]rl(si,ai,sg)r(s,a)=r_{g}(s_{g},a_{g})+\frac{1}{n}\sum_{i\in[n]}r_{l}(s_{i},a_{i},s_{g})
11:  end for

Algorithm 7 (Execution with strongly shared randomness). The local agents are randomly divided into groups of size kk. For each group, each agent uses the other agents in the group as the k1k-1 other sampled states. Similarly, the global agent’s action is the majority vote over the global-agent actions recommended by the groups. Here, at each round, this requires O(n2/k)O(\left\lceil n^{2}/k\right\rceil) random numbers.

Algorithm 7 SUB-SAMPLE-MFQ: Execution with strongly shared randomness
0:  A multi-agent system as described in Section 2. A distribution s0s_{0} on the initial global state s0=(sg,s[n])s_{0}=(s_{g},s_{{[n]}}). Parameter TT^{\prime} for the number of iterations for the decision-making sequence. Hyperparameter k[n]k\in[n]. Discount parameter γ\gamma. Policy π^estk,m(sg,sΔ)\hat{\pi}^{\mathrm{est}}_{k,m}(s_{g},s_{\Delta}).
1:  Sample (sg(0),s[n](0))s0(s_{g}(0),s_{[n]}(0))\sim s_{0}.
2:  Define groups h1,,hxh_{1},\dots,h_{x} of agents where x:-nkx\coloneq\left\lceil\frac{n}{k}\right\rceil and |h1|=|h2|==|hx1|=k|h_{1}|=|h_{2}|=\dots=|h_{x-1}|=k and |hx|=nmodk|h_{x}|=n\mod k.
3:  for t=0t=0 to T1T^{\prime}-1 do
4:     for i[x1]i\in[x-1] do
5:        Let ahi(t)=[π^k,mest(sg(t),shi(t))]1:ka_{h_{i}}(t)=[\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g}(t),s_{h_{i}}(t))]_{1:k}.
6:        Let aghi(t)=[π^k,mest(sg(t),shi(t))]ga^{g}_{h_{i}}(t)=[\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g}(t),s_{h_{i}}(t))]_{g}.
7:     end for
8:     Let Δresidual\Delta_{\mathrm{residual}} be a uniform sample of ([n]hxk(nmodk)){[n]\setminus h_{x}\choose k-(n\bmod k)}.
9:     Let ahx(t)=[π^k,mest(sg(t),shxΔresidual(t))]1:(nmodk)a_{h_{x}}(t)=[\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g}(t),s_{h_{x}\cup\Delta_{\mathrm{residual}}}(t))]_{1:(n\bmod k)}
10:     Let aghx(t)=[π^k,mest(sg(t),shxΔresidual(t))]ga^{g}_{h_{x}}(t)=[\hat{\pi}_{k,m}^{\mathrm{est}}(s_{g}(t),s_{h_{x}\cup\Delta_{\mathrm{residual}}}(t))]_{g}
11:     Let ag(t)=majority({ahig(t):i[x]})a_{g}(t)=\mathrm{majority}\left(\{a_{h_{i}}^{g}(t):i\in[x]\}\right).
12:     Let sg(t+1)Pg(|sg(t),ag(t)),si(t+1)Pl(|si(t),sg(t),ai(t))s_{g}(t+1)\sim P_{g}(\cdot|s_{g}(t),a_{g}(t)),s_{i}(t+1)\sim P_{l}(\cdot|s_{i}(t),s_{g}(t),a_{i}(t)), for all i[n]i\in[n].
13:     Get reward r(s,a)=rg(sg,ag)+1ni[n]rl(si,ai,sg)r(s,a)=r_{g}(s_{g},a_{g})+\frac{1}{n}\sum_{i\in[n]}r_{l}(s_{i},a_{i},s_{g})
14:  end for
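The grouping and majority-vote mechanism shared by Algorithms 6 and 7 can be sketched in a few lines of Python. The version below follows the strongly shared variant (Algorithm 7) and assumes a hypothetical callable policy(s_g, states) that returns kk local actions and one global action, as π^estk,m\hat{\pi}^{\mathrm{est}}_{k,m} does; it is a sketch of the execution mechanism, not the authors' implementation.

import random

def strongly_shared_round(policy, s_g, s_local, n, k):
    # One decision round in the spirit of Algorithm 7: partition agents into
    # groups of size k, let each group evaluate the k-agent policy on its own
    # members' states, and take the majority vote for the global action.
    agents = list(range(n))
    random.shuffle(agents)
    groups = [agents[i:i + k] for i in range(0, n, k)]
    if len(groups[-1]) < k:                                   # pad the residual group with
        others = [j for j in agents if j not in groups[-1]]   # a uniform sample of other agents
        groups[-1] = groups[-1] + random.sample(others, k - len(groups[-1]))
    local_actions, votes = {}, []
    for group in groups:
        a_local, a_global = policy(s_g, [s_local[j] for j in group])
        votes.append(a_global)
        for idx, j in enumerate(group):
            local_actions.setdefault(j, a_local[idx])         # padding agents keep their own group's action
    a_g = max(set(votes), key=votes.count)                    # majority vote over group recommendations
    return a_g, local_actions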

The probability of a bad event (the policy causing the QQ-function to exceed the Lipschitz bound) scales with the number of independent random variables, which is O(nk)O(n^{k}) and hence polynomially large in nn for fixed kk. Some randomness is always needed to avoid the periodic dynamics that may occur when the underlying Markov chain is not irreducible: in this case, an adversarial reward function could be designed such that the system does not minimize the optimality gap, thereby penalizing excessive determinism.

This ties into an open problem of interest, which has been recently explored (Larsen et al., 2024): what is the minimum amount of randomness required for SUBSAMPLE-MFQ (or mean-field and multi-agent RL algorithms more generally) to perform well? Can we derive a theoretical result that characterizes the tradeoff between the number of random bits and the performance of the subsampled policy when using the greedy action from the kk-agent subsystem derived from Q^k\hat{Q}_{k}^{*}? We leave this problem as an exciting direction for future research.
