Centralized Model and Exploration Policy for Multi-Agent RL
Abstract.
Reinforcement learning (RL) in partially observable, fully cooperative multi-agent settings (Dec-POMDPs) can in principle be used to address many real-world challenges such as controlling a swarm of rescue robots or a team of quadcopters. However, Dec-POMDPs are significantly harder to solve than single-agent problems, with the former being NEXP-complete and the latter, MDPs, being just P-complete. Hence, current RL algorithms for Dec-POMDPs suffer from poor sample complexity, which greatly reduces their applicability to practical problems where environment interaction is costly. Our key insight is that, using just a polynomial number of samples, one can learn a centralized model that generalizes across different policies. We can then optimize the policy within the learned model instead of the true system, without requiring additional environment interactions. We also learn a centralized exploration policy within our model that learns to collect additional data in state-action regions with high model uncertainty. We empirically evaluate the proposed model-based algorithm, MARCO (short for Multi-Agent RL with Centralized Models and Exploration; code available at https://github.com/irenezhang30/MARCO/), in three cooperative communication tasks, where it improves sample efficiency by up to 20x. Finally, to investigate the theoretical sample complexity, we adapt an existing model-based method for tabular MDPs to Dec-POMDPs and prove that it achieves polynomial sample complexity.
Key words and phrases:
Model-Based Multi-Agent Reinforcement Learning; Dec-POMDPs
1. Introduction
Decentralized partially observable Markov decision processes (Dec-POMDPs) describe many real-world problems (Liu et al., 2017; Ragi and Chong, 2014), but they are significantly harder to solve than Markov decision processes (MDPs). This is because the policy of one agent in a Dec-POMDP effectively serves as part of the observation function of the other agents and, hence, agents need to explore over policies rather than actions. As a consequence, solving Dec-POMDPs involves searching through the space of tuples of decentralized policies that map individual action-observation histories to actions. This space is doubly exponential Oliehoek et al. (2008a):
(1)   $O\left( |A|^{\, n (|\Omega|^h - 1) / (|\Omega| - 1)} \right)$
where $|\Omega|$ is the size of the individual observation space, $|A|$ is the size of the individual action space, $h$ is the horizon, and $n$ is the number of agents. Since finding an optimal solution is doubly exponential in the horizon, the problem falls into the class of non-deterministic exponential time (NEXP)-complete problems Bernstein et al. (2002).
Solving problems in this class is much harder than solving MDPs, which is just P-complete Papadimitriou and Tsitsiklis (1987). Indeed, current deep multi-agent RL algorithms for learning approximate solutions in Dec-POMDPs, which mostly extend model-free approaches such as independent learning Tan (1993), suffer from high sample complexity (Bard et al., 2020; Samvelyan et al., 2019b). This limits their applicability in real world problems and other settings where interactions with the environment are costly.
To address the problem of high environment sample complexity, MARCO uses a model-based approach. This is motivated by two observations: 1) We can take advantage of centralized training in Dec-POMDPs to learn a model of the environment that generalizes across different policies in just a polynomial number of samples (in the size of the state space and joint-action space), as in single-agent RL Strehl et al. (2009). In contrast, as mentioned above, the sample complexity for learning an optimal policy is much larger in Dec-POMDPs (NEXP-complete vs. P-complete). And 2) Dec-POMDPs commonly admit many optimal policies, each of which only visits a small part of all possible state-action pairs during self-play. For learning an optimal policy, it is sufficient for the model to cover the state-action space associated with any one of these equilibria. Therefore, in multi-agent settings, it is usually unnecessary to learn a good model of the entire environment.



Summary of contributions. This work makes four key contributions:
(1) First, we propose MARCO, a model-based learning algorithm for Dec-POMDPs. MARCO leverages centralized training to learn a model of the environment that generalizes across different policies. Within this model, we optimize the agents' policies using standard model-free methods, without using additional samples from the true environment.
(2) In most existing Dyna-style RL algorithms Sutton (1991, 1990), data for model learning is collected using the agent's current policy. This is inefficient when data from already well-modelled parts of the state-action space is re-collected. To further improve sample complexity, MARCO uses a centralized exploration policy. This policy specifically collects data in parts of the state-action space with high model uncertainty and is trained inside the model to avoid consuming additional environment samples.
(3) To analyze the theoretical sample efficiency of model-based RL methods in Dec-POMDPs, we adapt R-MAX Brafman and Tennenholtz (2002), a model-based dynamic programming algorithm for MDPs, to tabular Dec-POMDPs. Like MARCO, our adapted R-MAX also learns centralized models and performs exploration through "optimism in the face of uncertainty" (Brafman and Tennenholtz, 2002). We prove that this adapted R-MAX algorithm achieves a sample complexity polynomial in the size of the state and joint-action space.
(4) Finally, we conduct empirical studies comparing MARCO with state-of-the-art model-free multi-agent RL (MARL) algorithms in three cooperative communication tasks, where MARCO improves sample efficiency by up to 20x.
2. Related Work
Single Agent Model-Based RL
In RL problems, we generally do not assume prior knowledge of the environment. Model-free methods learn policies from interacting with the environment. In contrast, model-based RL (MBRL) methods first learn a model of the environment and use the model in turn for decision making. MBRL is well explored in the context of single-agent RL, and has recently shown promising results Ha and Schmidhuber (2018); Hafner et al. (2019); Schrittwieser et al. (2020) across a variety of tasks Todorov et al. (2012); Bellemare et al. (2013).
One problem with MBRL is that learning a perfect model is rarely possible, especially in environments with complex dynamics. In these settings, overfitting to model errors often hurts test-time performance. A popular remedy is to learn an ensemble of models Kurutach et al. (2018) and to select one model randomly for each rollout step. Furthermore, the variance across the different models is a proxy for model uncertainty, which is used by, e.g., Kalweit and Boedecker (2017); Buckman et al. (2018); Yu et al. (2020). During training, these methods either penalize the agent in areas of high model uncertainty or fall back to the real environment. A different approach is to actively explore parts of the state-action space with high model uncertainty in order to learn better models Ball et al. (2020); Sekar et al. (2020). Our work uses the latter approach of active exploration and aligns closely with Sekar et al. (2020), in that we also explicitly learn an exploration policy within the learned model to perform data collection.
Model-free MARL for Dec-POMDPs
Most deep RL work on learning in Dec-POMDPs uses model-free approaches. These methods can be roughly divided into two classes: value-based methods, e.g., Rashid et al. (2018); Tampuu et al. (2017); Sunehag et al. (2017), which build on DQN Mnih et al. (2015) and DRQN Hausknecht and Stone (2015), and actor-critic methods, such as Lowe et al. (2017); Foerster et al. (2018). These methods show good results in many tasks, but the number of samples required often goes into the millions or billions, even for environments with discrete or semantically abstracted state-action spaces Samvelyan et al. (2019a); Bard et al. (2020). Orthogonal to our approach, a few works address the sample complexity problem of MARL algorithms using off-policy learning methods Vasilev et al. (2021); Jeon et al. (2020).
Model-based work in MARL
A popular branch of work in the multi-agent setting studies opponent modelling. Instead of learning the dynamics of the environment, agents infer the opponents' policies from observing their behaviour to aid decision making Brown (1951); Foerster et al. (2017); Mealing and Shapiro (2015). Along this line of work, Wang et al. (2020) train an explicit model for each agent that predicts the goal-conditioned motion of all agents in the partially observable environment.
There is little research in MARL that learns a model of the environment dynamics, as we do in our work (i.e., predicting the successor state from a state-action pair). Zhang et al. (2020) theoretically analyse the sample complexity of model-based two-player zero-sum discounted Markov games, but do not present empirical studies. Krupnik et al. (2020) propose a multi-step generative model for two-player games, which does not predict the successor state but rather the sequence of future joint-observations and joint-actions of all agents. Concurrent to our work, MAMBPO Willemsen et al. (2021) most closely aligns with ours. Here, the authors learn a model of the environment within which they perform policy optimization. While this work is concurrent to ours, there are two key differences: 1) it does not learn a centralized exploration policy, and 2) MAMBPO uses the joint-observation and joint-action at the current timestep to predict the next joint-observation and reward. In contrast to our fully centralized model, which conditions on the central state, their model is not Markovian (see Figure 2). We illustrate this with a simple example: suppose other agents in the environment can flip a light switch, but the lights only turn on after a delay of 10 timesteps, which is reflected by a count-down value in the central state (not observed by any of the agents). The joint-observation and joint-action at the current timestep alone are insufficient for predicting the next joint-observation; in this example, a history of at least the 10 past joint-observations and joint-actions is required.
3. Background
3.1. Dec-POMDPs
We consider a fully cooperative, partially observable task that is formalized as a decentralized partially observable Markov decision process (Dec-POMDP) Oliehoek and Amato (2016), given by a tuple $\langle n, S, \rho_0, A, T, R, \Omega, O, \gamma \rangle$. Here $s \in S$ describes the central state of the environment, and $\rho_0$ is the initial state distribution. At each timestep, each agent $i \in \{1, \dots, n\}$ draws an individual observation $o^i \in \Omega$ according to the observation function $O(s, i)$. Each agent then chooses an action $a^i \in A$, forming a joint-action $\mathbf{a} \in \mathbf{A} \equiv A^n$ (bold notation indicates a joint quantity over all agents). This causes a transition in the environment according to the state transition function $T(s' \mid s, \mathbf{a})$. All agents share the same reward function $R(s, \mathbf{a})$, and $\gamma \in [0, 1)$ is a discount factor.
Each agent $i$ has an action-observation history (AOH) $\tau^i$ on which it conditions a stochastic policy $\pi^i(a^i \mid \tau^i)$. The joint-policy $\boldsymbol{\pi}$ induces a joint action-value function $Q^{\boldsymbol{\pi}}(s_t, \mathbf{a}_t) = \mathbb{E}\left[ G_t \mid s_t, \mathbf{a}_t \right]$, where $G_t = \sum_{k=0}^{\infty} \gamma^{k} r_{t+k}$ is the discounted return.
3.2. Dyna Style Model-Based RL
RL algorithms fall into two classes: model-free methods, where we directly learn value functions and/or policies by interacting with the environment, and model-based methods, where we use interactions with the environment to learn a model of it, which is then used for decision making. Dyna-style algorithms Sutton (1991, 1990) are a family of model-based algorithms for single-agent RL in which training alternates between two steps: model learning and policy optimization. During model learning, data is collected from the environment using the current policy and is used to learn the transition function. During policy optimization, the policy is improved using a model-free RL algorithm of choice on data generated by the learned model.
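To make the alternation concrete, here is a minimal single-agent Dyna-style training loop in Python; the `env`, `model`, and `policy` interfaces are hypothetical placeholders rather than any specific library API.

```python
def dyna_style_training(env, model, policy, num_iterations,
                        env_steps_per_iter, model_steps_per_iter):
    """Illustrative Dyna-style loop: alternate real-data model learning
    and policy optimization on model-generated rollouts only."""
    dataset = []
    for _ in range(num_iterations):
        # 1) Model learning: collect real transitions with the current policy.
        s = env.reset()
        for _ in range(env_steps_per_iter):
            a = policy.act(s)
            s_next, r, done = env.step(a)
            dataset.append((s, a, r, s_next, done))
            s = env.reset() if done else s_next
        model.fit(dataset)  # supervised learning on (s, a) -> (s', r, done)

        # 2) Policy optimization: improve the policy inside the learned model,
        #    without consuming additional environment samples.
        for _ in range(model_steps_per_iter):
            s = model.sample_initial_state()
            a = policy.act(s)
            s_next, r, done = model.step(s, a)
            policy.update((s, a, r, s_next, done))
    return policy
```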
3.3. Model-Free Multi-Agent Approaches
Most MARL methods for approximately solving Dec-POMDPs fall into the category of model-free methods. Many use the centralized training for decentralized execution (CTDE) framework Foerster et al. (2018); Kraemer and Banerjee (2016); Oliehoek et al. (2008b), i.e., the learning algorithm has access to all global information, such as the joint-actions and the central state, but, at test time, each agent's learned policy conditions only on its own AOH $\tau^i$.
A popular branch of multi-agent methods for partially observable, fully cooperative settings is based on independent Q-learning (IQL) Tan (1993); Tampuu et al. (2017). IQL treats the Dec-POMDP problem as $n$ simultaneous single-agent problems. Each agent learns its own Q-value function that conditions only on the agent's own observation and action history, treating other agents as part of the environment. MAPPO (Chao et al., 2021), another independent learning algorithm, extends PPO (Schulman et al., 2017) to Dec-POMDPs. The advantage of independent learning is that it factorizes the exponentially large joint-action space. However, due to the non-stationarity of the environment induced by the learning of other agents, convergence is no longer guaranteed. Works like VDN and QMIX Sunehag et al. (2017); Rashid et al. (2018) partially address this issue by learning joint Q-values. The former uses the sum of the individual agents' value functions as the joint Q-value, while the latter learns a mixing function, parameterized by a neural network and conditioned on the central state, that maps individual Q-values to the joint Q-value. A sketch of these two factorizations follows.
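As an illustration of the two factorizations, the following PyTorch sketch shows a VDN-style sum and a simplified QMIX-style mixer; the layer sizes and single-layer hypernetworks are simplifying assumptions, not the exact architecture of Rashid et al. (2018).

```python
import torch
import torch.nn as nn

def vdn_mix(individual_qs):
    # individual_qs: [batch, n_agents] chosen Q-values -> joint Q is their sum.
    return individual_qs.sum(dim=-1, keepdim=True)

class QMixer(nn.Module):
    """Simplified QMIX mixer: a monotonic combination of individual Q-values,
    with mixing weights generated from the central state by hypernetworks."""
    def __init__(self, n_agents, state_dim, embed_dim=32):
        super().__init__()
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(nn.Linear(state_dim, embed_dim),
                                      nn.ReLU(),
                                      nn.Linear(embed_dim, 1))

    def forward(self, individual_qs, state):
        # individual_qs: [batch, n_agents], state: [batch, state_dim]
        b, n = individual_qs.shape
        w1 = torch.abs(self.hyper_w1(state)).view(b, n, -1)  # non-negative weights -> monotonicity
        b1 = self.hyper_b1(state).view(b, 1, -1)
        hidden = torch.relu(torch.bmm(individual_qs.view(b, 1, n), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(b, -1, 1)
        b2 = self.hyper_b2(state).view(b, 1, 1)
        return (torch.bmm(hidden, w2) + b2).view(b, 1)       # joint Q-value
```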

4. Methods
Optimally solving, or even finding an $\epsilon$-approximate solution for, Dec-POMDPs is NEXP-complete Bernstein et al. (2002); Rabinovich et al. (2003), which is significantly harder than solving MDPs, which are P-complete Papadimitriou and Tsitsiklis (1987). This provides a strong motivation for using a model-based approach in Dec-POMDPs, as the number of samples required for learning a centralized model is polynomial in the size of the state-action space, as in single-agent RL Strehl et al. (2009).
To learn a policy in Dec-POMDPs, MARCO (see Figure 1) adapts Dyna-style model-based RL to the multi-agent setting, as shown in Algorithm 1 (for ease of notation, the input to all model components in Algorithm 1 is written as the state-action pair; the actual inputs vary with the model component). MARCO alternates between learning an approximation $\hat{M}$ of the Dec-POMDP and optimizing the policy within $\hat{M}$. We refer to $\hat{M}$ as the model for the remainder of the paper. The two key contributions of MARCO are 1) learning a stationary model via centralized training, and 2) actively collecting data using a separate centralized exploration policy that is trained inside the model and therefore requires no additional environment samples. A high-level sketch of this alternation is given below.
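The following Python sketch makes the MARCO alternation explicit; `collect_episodes`, `train_in_model`, `random_policy`, and `ensemble_disagreement` are hypothetical placeholder helpers, and the sketch omits many details of Algorithm 1.

```python
def marco(env, model_ensemble, agent_policies, explore_policy,
          num_rounds, env_samples_per_round, model_train_steps):
    """Illustrative MARCO outer loop: alternate between (i) learning a
    centralized model from data gathered by the exploration policy and
    (ii) optimizing both the agents' policies and the exploration policy
    purely inside the learned model."""
    dataset = collect_episodes(env, policy=random_policy, n=env_samples_per_round)
    for _ in range(num_rounds):
        # 1) Centralized model learning on (central state, joint-action) pairs.
        model_ensemble.fit(dataset)   # maximum-likelihood, supervised

        # 2) Policy optimization inside the model (no real environment samples).
        train_in_model(model_ensemble, agent_policies, steps=model_train_steps)

        # 3) Train the centralized exploration policy inside the model, using
        #    the model reward plus an ensemble-disagreement bonus.
        train_in_model(model_ensemble, explore_policy, steps=model_train_steps,
                       reward_bonus=ensemble_disagreement)

        # 4) Collect fresh real data with the exploration policy.
        dataset += collect_episodes(env, policy=explore_policy,
                                    n=env_samples_per_round)
    return agent_policies
```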
4.1. Model-Based MARL with Centralized Models
The model $\hat{M}$ is composed of the following components, each of which is a parameterized, learned approximation of the corresponding part of the original Dec-POMDP: a dynamics model $\hat{T}$, a reward model $\hat{R}$, a termination model, and an observation model $\hat{O}$.
In the single-agent, fully observable MBRL setting, the agent learns a dynamics model and sometimes a reward and termination model; in MARCO, we additionally learn an observation model.
MARCO takes advantage of CTDE by letting all model components condition on the central state as well as the joint-action. The importance of centralized model learning can be illustrated as follows (see Figure 2): if the observation model is learned in a decentralized fashion (i.e., agent $i$'s next observation is predicted from only its own action-observation history), then the actions of other agents, even when unobserved, effectively change the transition function of agent $i$ (e.g., if another agent turns on a light, agent $i$ will observe that the light transitioned from "off" to "on"). As agents' policies change throughout training, such a decentralized observation model is non-stationary and would have to be re-learned at various stages of training. In contrast, MARCO learns a fully centralized model which approximates the stationary ground-truth Dec-POMDP. Crucially, while in Dec-POMDPs agents have to explore over policies, MARCO allows agents to learn a single stationary model that is simultaneously accurate for all the different policies being explored.
Each component of the model is parameterized by a separate neural network and is trained using supervised learning by maximizing the likelihood of the collected data. Similar to Kurutach et al. (2018), we train an ensemble of models to prevent the policy from overfitting to and exploiting model errors. When generating rollout data with the model, we randomly sample, at each step, which ensemble member to generate from.
(2)   $u(s, \mathbf{a}) = \mathrm{Var}\big[ \{\hat{T}_{\theta_e}(s, \mathbf{a})\}_{e=1}^{E} \big], \qquad r^{\mathrm{exp}}(s, \mathbf{a}) = \hat{R}(s, \mathbf{a}) + \beta\, u(s, \mathbf{a}),$ where $E$ is the ensemble size.
4.2. Model-Based MARL with Centralized Exploration Policy
MARCO collects data for model learning with a separate exploration policy. Ideally, we want to collect data in regions of the state-action space with high model uncertainty. To quantify this epistemic uncertainty, we use the variance across the models in the ensemble, which we denote as $u(s, \mathbf{a})$ in Equation 2. To prevent the exploration policy from wandering off to regions irrelevant to the search space of the agents' policies, the exploration policy should also optimize the original objective. Hence, we set the reward of the exploration policy to a linear combination of $u$ and the reward produced by the reward model $\hat{R}$. The hyper-parameter $\beta$ controls the trade-off between exploration and exploitation. The exploration policy is learned entirely inside the model, without using additional samples from the ground-truth environment. To train the exploration policy, we use VDN (or QMIX), and again fully exploit CTDE by using the central state available inside the model. By conditioning on the central state, the exploration policy can more quickly return to the frontier where high model uncertainty starts to occur. This avoids repeated data collection in already known regions of the state-action space, allowing more sample-efficient model learning.
Note that although MARCO's exploration policy uses centralized information, each agent's exploration policy outputs only its own action. This is opposed to learning a single exploration policy that outputs the joint-action, which becomes intractable due to the exponentially large joint-action space.
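A minimal sketch of how the exploration reward of Equation 2 could be computed from an ensemble of dynamics models in PyTorch; the tensor shapes and the choice of averaging the variance over state dimensions are illustrative assumptions.

```python
import torch

def exploration_reward(ensemble_next_state_preds, model_reward, beta):
    """ensemble_next_state_preds: [n_models, batch, state_dim] predictions of
    the next state by each ensemble member for the same (state, joint-action).
    model_reward: [batch] rewards from the learned reward model.
    Returns the exploration policy's reward: model reward plus a bonus
    proportional to the disagreement (variance) across ensemble members."""
    # Epistemic-uncertainty proxy: variance across ensemble members,
    # averaged over state dimensions.
    u = ensemble_next_state_preds.var(dim=0).mean(dim=-1)   # [batch]
    return model_reward + beta * u
```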
5. Sample Complexity in Tabular Dec-POMDPs
We investigate the theoretical sample complexity of model-based Dec-POMDP methods. To do so, we make four additional assumptions that are not required for MARCO: a) discrete and finite state, observation, and action spaces; b) at each timestep, the reward is bounded, $r \in [0, R_{\max}]$; c) a finite horizon; and d) a deterministic observation function. Under these assumptions, we show that an idealized model-based method achieves a sample complexity polynomial in the size of the state and joint-action space.
We modify R-MAX (Brafman and Tennenholtz, 2002), an MBRL algorithm for MDPs, to the Dec-POMDP setting (see Algorithm 2). We refer to our modified algorithm as the Adapted R-MAX for the remainder of the paper. Like MARCO, the Adapted R-MAX aims to learn a near-optimal decentralized joint-policy for a given Dec-POMDP $M$. Our adaptation of R-MAX also takes full advantage of CTDE by learning centralized models $\hat{T}$, $\hat{R}$, and $\hat{O}$ from empirical estimates. Using these centralized models, the Adapted R-MAX constructs an approximate K-known Dec-POMDP $\hat{M}_K$ (see Definition 5.1), where $K$ is the set of state-action pairs that have been visited at least $m$ times. Within the model $\hat{M}_K$, we then evaluate all possible joint-policies and choose the best one. Both MARCO and our adaptation of R-MAX encourage exploration in parts of the state-action space with high model uncertainty: the former performs exploration through a separate centralized exploration policy, while the latter does so by optimistically setting the reward of under-visited state-action pairs (i.e., those not in $K$).
Definition 5.1 (K-Known Dec-POMDP).
$M_K$ is the expected version of $\hat{M}_K$, where: for state-action pairs $(s, \mathbf{a}) \in K$, the transition, reward, and observation functions are those of the true Dec-POMDP $M$; for $(s, \mathbf{a}) \notin K$, the transition is a self-loop and the reward is the maximum reward $R_{\max}$.
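For intuition, the following tabular sketch assembles the transition and reward components of an approximate K-known Dec-POMDP $\hat{M}_K$ from visitation counts; the self-loop with maximum reward for under-visited pairs follows Definition 5.1, while the data layout and helper names are illustrative assumptions.

```python
import numpy as np

def build_k_known_model(counts, reward_sums, next_state_counts, m, r_max):
    """counts[s, a]: visit count of joint state-action pair (s, a);
    reward_sums[s, a]: sum of observed rewards; next_state_counts[s, a, s'].
    Pairs visited at least m times use empirical estimates; under-visited
    pairs become optimistic self-loops with maximum reward."""
    n_s, n_a = counts.shape
    T_hat = np.zeros((n_s, n_a, n_s))
    R_hat = np.zeros((n_s, n_a))
    for s in range(n_s):
        for a in range(n_a):
            if counts[s, a] >= m:                      # (s, a) in the known set K
                T_hat[s, a] = next_state_counts[s, a] / counts[s, a]
                R_hat[s, a] = reward_sums[s, a] / counts[s, a]
            else:                                      # optimism under uncertainty
                T_hat[s, a, s] = 1.0                   # self-loop
                R_hat[s, a] = r_max                    # maximum reward
    return T_hat, R_hat
```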
To study the sample complexity of the Adapted R-MAX, we define the value of a joint-policy as follows:
Definition 5.2.
Given a decentralized joint-policy $\boldsymbol{\pi}$, its value in a Dec-POMDP $M$ is defined as the expected discounted reward obtained by following $\boldsymbol{\pi}$ in $M$:
(3)   $V^{\boldsymbol{\pi}}_{M} = \mathbb{E}\!\left[ \textstyle\sum_{t=1}^{h} \gamma^{t-1} r_t \;\middle|\; \boldsymbol{\pi}, M \right]$
Theorem 5.3.
Suppose that $\epsilon$ and $\delta$ are two real numbers with $0 < \epsilon, \delta < 1$, and let $M$ be any Dec-POMDP. There exist inputs $m = m\!\left(\tfrac{1}{\epsilon}, \tfrac{1}{\delta}\right)$ and $\epsilon_1$, with $m$ polynomial in the relevant quantities, such that if the Adapted R-MAX algorithm is executed on $M$ with inputs $m$ and $\epsilon_1$, then the following holds: Let $\boldsymbol{\pi}_t$ denote the Adapted R-MAX's policy at episode $t$. With probability at least $1 - \delta$, $V^{\boldsymbol{\pi}_t}_{M} \geq V^{*}_{M} - \epsilon$ is true for all but a number of episodes that is polynomial in $|S|$, $|\mathbf{A}|$, $V_{\max}$, $\tfrac{1}{\epsilon}$, and $\tfrac{1}{\delta}$
episodes.
Theorem 5.3 shows that the Adapted R-MAX acts near-optimally on all but a polynomial number of episodes. This result confirms the motivation of MARCO, i.e., that an idealized MBRL method for Dec-POMDPs can indeed achieve polynomial sample complexity. The proof (see the appendix) builds heavily on results from Jiang (2020) and Strehl et al. (2009), but for the first time extends them to the Dec-POMDP setting.
6. Experiments
6.1. Environments


We evaluate the sample efficiency of MARCO against model-free MARL algorithms on three fully cooperative, partially observable communication tasks, the switch riddle Foerster et al. (2016), a variant of the switch riddle, and the simple reference game from the multi-agent particle environment (MPE) Lowe et al. (2017). We explicitly chose communication tasks because one agent’s belief is directly affected by other agents’ policies, resulting in the larger policy search spaces typical for Dec-POMDPs.
Switch without Bridge Foerster et al. (2016)
At each timestep $t$, a random agent is sent into the interrogation room for one timestep. Each agent observes whether it is currently in the room, but only the agent currently in the room observes whether the light switch in the room is "On" or "Off". If an agent is in the interrogation room, its available actions are "None", "Tell", "Turn on lights", and "Turn off lights"; otherwise its only action is "None". The episode ends when an agent chooses "Tell" or when the maximum number of timesteps is reached. The reward is 0 unless an agent chooses "Tell", in which case it is 1 if all agents have been to the interrogation room, and $-1$ otherwise. Finally, to keep the experiments computationally tractable, we cap the time horizon at a small fixed value.
Switch with Bridge
To make the first task more challenging, we modify it as follows (see Figure 3). All agents start on the left side of a bridge, and the switch riddle only starts once all agents have crossed a bridge of fixed length $L$. If at timestep $t$ not all agents have crossed the bridge, each agent observes its position on the bridge, and its available actions are "Left", "Right", and "End episode". Selecting "Left" or "Right" changes the agent's position by $-1$ or $+1$ respectively. The episode ends and agents receive a reward of 0 if "End episode" is chosen. When all agents have crossed the bridge (i.e., all agents are at position $L$ on the bridge), the switch riddle starts. The task then proceeds as above, except that the agent currently in the room has access to the full action set "None", "Tell", "Turn on lights", "Turn off lights", "Left", "Right", and "End episode". Selecting "Left" or "Right" is now equivalent to selecting "None", and selecting "End episode" terminates the episode early with a return of 0. We again cap the time horizon.
The Simple Reference Game
This task is a part of the multi-agent particle environment (MPE) Lowe et al. (2017) and consists of two agents that are placed in an environment with three landmarks of differing colors. At the beginning of every episode, each agent is assigned to a landmark of a particular color. The closer the agents are to their assigned landmarks, the higher their reward. However, agents themselves do not observe their own assigned color, only the other agent's assigned color. At each timestep, each agent chooses two actions: a movement action ("Left", "Right", "Up", "Down", or "Do nothing") and one of 10 possible messages, which is sent to the other agent.
6.2. Experiment Details (see the appendix of Zhang et al. (2021) for detailed experiment descriptions)
Model Learning
The dynamics model in the two switch tasks is an auto-regressive model implemented using GRUs Chung et al. (2014). The remaining components of the model are implemented using fully connected neural networks. All model components are trained in a supervised manner via maximum likelihood.
Dataset Collection
The initial dataset is gathered with a random policy for all MARCO experiments. In the switch without bridge task, 5k samples are collected from the environment after every 10k training steps in the model. No further data is collected beyond 10k samples. For the switch with bridge and the MPE tasks, 10k samples are collected from the environment after every 50k training steps in the model. No further data is collected beyond 50k samples.
Policy Optimization
The model-free baseline for each task is chosen as the most sample-efficient algorithm among IQL, VDN, and QMIX. For each task, MARCO uses the same algorithm for policy optimization inside the model as the corresponding model-free baseline.

6.3. Results
The top row in Figure 4 plots results against policy training steps to show that MARCO matches model-free performance, while the middle row plots results against the number of real environment interactions to show MARCO's sample efficiency. The bottom row shows performance against the number of real environment interactions on a log scale, where we see a one to two orders of magnitude improvement in sample efficiency of MARCO over model-free methods.
The left column in Figure 4 shows results for the switch without bridge task. Model-free IQL learns the optimal policy in roughly 200k samples, whereas MARCO learns it with 10k samples, a sample efficiency increase of 20x.
The middle column in Figure 4 shows results for the MPE task. Model-free QMIX learns the optimal policy with roughly 1m samples, while MARCO learns it in 50k, an efficiency increase of 20x.
The right column in Figure 4 shows results for the switch with bridge task. MARCO learns the optimal policy with only 50k samples. In comparison, model-free VDN requires roughly 600k samples, about 12x more than MARCO.











6.4. Ablation Studies
We evaluate the centralized exploration policy with two ablation studies in the switch with bridge task.
Effect of using exploration policy
Figure 5(a) (the state-action space is displayed in 2D for ease of visualization) shows results comparing MARCO with an ablation without the exploration policy. In the latter case, the model is learned from a dataset collected with the agents' current policy instead of a dedicated exploration policy. An $\epsilon$-greedy policy is used for data collection: with probability $\epsilon$ the agent selects a random action from the actions available at the current step, and it follows the data-collection policy otherwise.
MARCO without an exploration policy fails to solve this task because random exploration from the $\epsilon$-greedy data-collection policy is insufficient to cover the relevant state-action space. With a random policy, at every timestep during the bridge-crossing phase there is a probability of $1 - (2/3)^n$ (for $n$ agents choosing uniformly among the three available actions) that at least one agent chooses "End episode". Hence, the episode is very likely to terminate before the switch-riddle phase even begins, so collecting data for the switch riddle with random exploration is difficult in the initial learning phase. In contrast, MARCO with a centralized exploration policy overcomes this problem by actively exploring the parts of the state-action space with high model uncertainty, which is where agents cross the bridge and play the switch riddle. This is illustrated in Figure 6, where we show that MARCO agents cover much more of the state-action space when using an exploration policy (Figures 6(e)-6(h)) than otherwise (Figures 6(a)-6(d)). The columns show the log of the uncertainty term for 5000 state-action pairs visited by MARCO's agent policies in the task environment after training for 50k, 100k, 150k, and 200k timesteps in the model. The rollouts are performed within the model available to the agents' policies at that time. Darker color indicates lower model uncertainty, lighter color indicates higher uncertainty, and no color (white) indicates an unvisited state-action pair.
Effect of using exploration bonus
Figure 5(b) shows that when we do not use the exploration bonus term, by setting $\beta = 0$, MARCO no longer learns the optimal policy. This suggests that the exploration bonus term is essential.
7. Conclusion and Future Work
We presented MARCO, a model-based RL method adapted from the Dyna-style framework for sample-efficient learning in Dec-POMDPs. MARCO learns a centralized stationary model that in principle is entirely independent of the agents' policies. Within this model, policy optimization is performed using a model-free MARL algorithm of choice at no additional cost in environment samples (exploiting access to the simulated central state). To further improve sample complexity, MARCO also learns a centralized exploration policy to collect samples in parts of the state-action space with high model uncertainty. In addition, to investigate the theoretical sample efficiency of model-based Dec-POMDP methods, we introduced the Adapted R-MAX algorithm for Dec-POMDPs and showed that it achieves polynomial sample complexity. Finally, we showed on three cooperative communication tasks that MARCO matches the performance of state-of-the-art model-free MARL methods while requiring significantly fewer samples.
Our work has two main limitations. First, the centralized exploration policy may deviate from the agents' decentralized policies because it has access to additional information in the central state. This may cause data to be collected in parts of the state-action space that are inaccessible to the agents due to partial observability. Second, model-based methods commonly require more wall-clock time than model-free methods due to the additional model-learning step. However, by assumption, in our setting compute time is cheap compared to environment interactions.
Despite these limitations, we hope that this work brings MARL one step closer to being applicable to real-world problems. In future work, we will investigate how to learn better centralized exploration policies, as well as how to transfer existing work on image-based single-agent MBRL to MARL settings, to enable good performance even on complex, image-based environments.
8. Acknowledgements
The authors thank Wendelin Böhmer, Amir-massoud Farahmand, Keiran Paster, Claas Voelcker, and Stephen Zhao for insightful discussions and feedback on drafts of the paper. QZ was supported by the Ontario Graduate Scholarship. AG and JF would like to acknowledge the CIFAR AI Chair award.
References
- Ball et al. (2020) Philip Ball, Jack Parker-Holder, Aldo Pacchiano, Krzysztof Choromanski, and Stephen Roberts. 2020. Ready policy one: World building through active learning. In International Conference on Machine Learning. PMLR, 591–601.
- Bard et al. (2020) Nolan Bard, Jakob N Foerster, Sarath Chandar, Neil Burch, Marc Lanctot, H Francis Song, Emilio Parisotto, Vincent Dumoulin, Subhodeep Moitra, Edward Hughes, et al. 2020. The hanabi challenge: A new frontier for ai research. Artificial Intelligence 280 (2020), 103216.
- Bellemare et al. (2013) Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. 2013. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research 47 (2013), 253–279.
- Bernstein et al. (2002) Daniel S Bernstein, Robert Givan, Neil Immerman, and Shlomo Zilberstein. 2002. The complexity of decentralized control of Markov decision processes. Mathematics of operations research 27, 4 (2002), 819–840.
- Brafman and Tennenholtz (2002) Ronen I Brafman and Moshe Tennenholtz. 2002. R-max-a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research 3, Oct (2002), 213–231.
- Brown (1951) George W Brown. 1951. Iterative solution of games by fictitious play. Activity analysis of production and allocation 13, 1 (1951), 374–376.
- Buckman et al. (2018) Jacob Buckman, Danijar Hafner, George Tucker, Eugene Brevdo, and Honglak Lee. 2018. Sample-efficient reinforcement learning with stochastic ensemble value expansion. arXiv preprint arXiv:1807.01675 (2018).
- Chao et al. (2021) Chao Yu, Akash Velu, Eugene Vinitsky, et al. 2021. The Surprising Effectiveness of PPO in Cooperative, Multi-Agent Games. arXiv preprint arXiv:2103.01955 (2021).
- Chung et al. (2014) Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 (2014).
- Foerster et al. (2018) Jakob Foerster, Gregory Farquhar, Triantafyllos Afouras, Nantas Nardelli, and Shimon Whiteson. 2018. Counterfactual multi-agent policy gradients. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 32.
- Foerster et al. (2016) Jakob N Foerster, Yannis M Assael, Nando De Freitas, and Shimon Whiteson. 2016. Learning to communicate with deep multi-agent reinforcement learning. arXiv preprint arXiv:1605.06676 (2016).
- Foerster et al. (2017) Jakob N Foerster, Richard Y Chen, Maruan Al-Shedivat, Shimon Whiteson, Pieter Abbeel, and Igor Mordatch. 2017. Learning with opponent-learning awareness. arXiv preprint arXiv:1709.04326 (2017).
- Ha and Schmidhuber (2018) David Ha and Jürgen Schmidhuber. 2018. World models. arXiv preprint arXiv:1803.10122 (2018).
- Hafner et al. (2019) Danijar Hafner, Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. 2019. Dream to control: Learning behaviors by latent imagination. arXiv preprint arXiv:1912.01603 (2019).
- Hausknecht and Stone (2015) Matthew Hausknecht and Peter Stone. 2015. Deep recurrent q-learning for partially observable mdps. arXiv preprint arXiv:1507.06527 (2015).
- Jeon et al. (2020) Wonseok Jeon, Paul Barde, Derek Nowrouzezahrai, and Joelle Pineau. 2020. Scalable and sample-efficient multi-agent imitation learning. In Proceedings of the Workshop on Artificial Intelligence Safety, co-located with 34th AAAI Conference on Artificial Intelligence, SafeAI@ AAAI.
- Jiang (2020) Nan Jiang. 2020. Notes on Rmax exploration. (2020).
- Kalweit and Boedecker (2017) Gabriel Kalweit and Joschka Boedecker. 2017. Uncertainty-driven imagination for continuous deep reinforcement learning. In Conference on Robot Learning. PMLR, 195–206.
- Kraemer and Banerjee (2016) Landon Kraemer and Bikramjit Banerjee. 2016. Multi-agent reinforcement learning as a rehearsal for decentralized planning. Neurocomputing 190 (2016), 82–94.
- Krupnik et al. (2020) Orr Krupnik, Igor Mordatch, and Aviv Tamar. 2020. Multi-agent reinforcement learning with multi-step generative models. In Conference on Robot Learning. PMLR, 776–790.
- Kurutach et al. (2018) Thanard Kurutach, Ignasi Clavera, Yan Duan, Aviv Tamar, and Pieter Abbeel. 2018. Model-ensemble trust-region policy optimization. arXiv preprint arXiv:1802.10592 (2018).
- Liu et al. (2017) Miao Liu, Kavinayan Sivakumar, Shayegan Omidshafiei, Christopher Amato, and Jonathan P How. 2017. Learning for multi-robot cooperation in partially observable stochastic environments with macro-actions. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 1853–1860.
- Lowe et al. (2017) Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, and Igor Mordatch. 2017. Multi-agent actor-critic for mixed cooperative-competitive environments. arXiv preprint arXiv:1706.02275 (2017).
- Mealing and Shapiro (2015) Richard Mealing and Jonathan L Shapiro. 2015. Opponent modeling by expectation–maximization and sequence prediction in simplified poker. IEEE Transactions on Computational Intelligence and AI in Games 9, 1 (2015), 11–24.
- Mnih et al. (2015) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. 2015. Human-level control through deep reinforcement learning. nature 518, 7540 (2015), 529–533.
- Oliehoek and Amato (2016) Frans A Oliehoek and Christopher Amato. 2016. A concise introduction to decentralized POMDPs. Springer.
- Oliehoek et al. (2008a) Frans A Oliehoek, Julian FP Kooij, and Nikos Vlassis. 2008a. The cross-entropy method for policy search in decentralized POMDPs. Informatica 32, 4 (2008).
- Oliehoek et al. (2008b) Frans A Oliehoek, Matthijs TJ Spaan, and Nikos Vlassis. 2008b. Optimal and approximate Q-value functions for decentralized POMDPs. Journal of Artificial Intelligence Research 32 (2008), 289–353.
- Papadimitriou and Tsitsiklis (1987) Christos H Papadimitriou and John N Tsitsiklis. 1987. The complexity of Markov decision processes. Mathematics of operations research 12, 3 (1987), 441–450.
- Rabinovich et al. (2003) Zinovi Rabinovich, Claudia V Goldman, and Jeffrey S Rosenschein. 2003. The complexity of multiagent systems: The price of silence. In Proceedings of the second international joint conference on Autonomous agents and multiagent systems. 1102–1103.
- Ragi and Chong (2014) Shankarachary Ragi and Edwin KP Chong. 2014. Decentralized guidance control of UAVs with explicit optimization of communication. Journal of Intelligent & Robotic Systems 73, 1 (2014), 811–822.
- Rashid et al. (2018) Tabish Rashid, Mikayel Samvelyan, Christian Schroeder, Gregory Farquhar, Jakob Foerster, and Shimon Whiteson. 2018. Qmix: Monotonic value function factorisation for deep multi-agent reinforcement learning. In International Conference on Machine Learning. PMLR, 4295–4304.
- Samvelyan et al. (2019a) Mikayel Samvelyan, Tabish Rashid, Christian Schroeder De Witt, Gregory Farquhar, Nantas Nardelli, Tim GJ Rudner, Chia-Man Hung, Philip HS Torr, Jakob Foerster, and Shimon Whiteson. 2019a. The starcraft multi-agent challenge. arXiv preprint arXiv:1902.04043 (2019).
- Samvelyan et al. (2019b) Mikayel Samvelyan, Tabish Rashid, Christian Schroeder de Witt, Gregory Farquhar, Nantas Nardelli, Tim G. J. Rudner, Chia-Man Hung, Philiph H. S. Torr, Jakob Foerster, and Shimon Whiteson. 2019b. The StarCraft Multi-Agent Challenge. CoRR abs/1902.04043 (2019).
- Schrittwieser et al. (2020) Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, et al. 2020. Mastering atari, go, chess and shogi by planning with a learned model. Nature 588, 7839 (2020), 604–609.
- Schulman et al. (2017) John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 (2017).
- Sekar et al. (2020) Ramanan Sekar, Oleh Rybkin, Kostas Daniilidis, Pieter Abbeel, Danijar Hafner, and Deepak Pathak. 2020. Planning to explore via self-supervised world models. In International Conference on Machine Learning. PMLR, 8583–8592.
- Strehl et al. (2009) Alexander L Strehl, Lihong Li, and Michael L Littman. 2009. Reinforcement Learning in Finite MDPs: PAC Analysis. Journal of Machine Learning Research 10, 11 (2009).
- Sunehag et al. (2017) Peter Sunehag, Guy Lever, Audrunas Gruslys, Wojciech Marian Czarnecki, Vinicius Zambaldi, Max Jaderberg, Marc Lanctot, Nicolas Sonnerat, Joel Z Leibo, Karl Tuyls, et al. 2017. Value-decomposition networks for cooperative multi-agent learning. arXiv preprint arXiv:1706.05296 (2017).
- Sutton (1990) Richard S Sutton. 1990. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Machine learning proceedings 1990. Elsevier, 216–224.
- Sutton (1991) Richard S Sutton. 1991. Dyna, an integrated architecture for learning, planning, and reacting. ACM Sigart Bulletin 2, 4 (1991), 160–163.
- Tampuu et al. (2017) Ardi Tampuu, Tambet Matiisen, Dorian Kodelja, Ilya Kuzovkin, Kristjan Korjus, Juhan Aru, Jaan Aru, and Raul Vicente. 2017. Multiagent cooperation and competition with deep reinforcement learning. PloS one 12, 4 (2017), e0172395.
- Tan (1993) Ming Tan. 1993. Multi-agent reinforcement learning: Independent vs. cooperative agents. In Proceedings of the tenth international conference on machine learning. 330–337.
- Todorov et al. (2012) Emanuel Todorov, Tom Erez, and Yuval Tassa. 2012. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 5026–5033.
- Vasilev et al. (2021) Bozhidar Vasilev, Tarun Gupta, Bei Peng, and Shimon Whiteson. 2021. Semi-On-Policy Training for Sample Efficient Multi-Agent Policy Gradients. arXiv preprint arXiv:2104.13446 (2021).
- Wang et al. (2020) Rose E Wang, Chase Kew, Dennis Lee, Edward Lee, Brian Andrew Ichter, Tingnan Zhang, Jie Tan, and Aleksandra Faust. 2020. Model-based Reinforcement Learning for Decentralized Multiagent Rendezvous. In Conference on robot learning. PMLR.
- Willemsen et al. (2021) Daniël Willemsen, Mario Coppola, and Guido CHE de Croon. 2021. MAMBPO: Sample-efficient multi-robot reinforcement learning using learned world models. arXiv preprint arXiv:2103.03662 (2021).
- Yu et al. (2020) Tianhe Yu, Garrett Thomas, Lantao Yu, Stefano Ermon, James Zou, Sergey Levine, Chelsea Finn, and Tengyu Ma. 2020. Mopo: Model-based offline policy optimization. arXiv preprint arXiv:2005.13239 (2020).
- Zhang et al. (2020) Kaiqing Zhang, Sham M Kakade, Tamer Başar, and Lin F Yang. 2020. Model-based multi-agent rl in zero-sum markov games with near-optimal sample complexity. arXiv preprint arXiv:2007.07461 (2020).
- Zhang et al. (2021) Qizhen Zhang, Chris Lu, Animesh Garg, and Jakob Foerster. 2021. Centralized Model and Exploration Policy for Multi-Agent RL. arXiv preprint arXiv:2107.06434 (2021).
Appendix A Theoretical Results
A.1. Setting
We operate in the same Dec-POMDP setup as described in Section 3.1, but with four additional assumptions:
• Bounded rewards: $0 \leq r \leq R_{\max}$. This allows us to introduce the notions of maximum reward and maximum value function, denoted $R_{\max}$ and $V_{\max}$ respectively.
• Finite horizon setting: all trajectories end within $h$ steps.
• Tabular setting: discrete and finite numbers of states, actions, and observations.
• Deterministic observation function.
A.2. Notation
In addition to the setting described in Section 3.1, we introduce a few more notations for our sample complexity analysis:
• $x_t$ is the joint state-action history, i.e., $x_t = (s_1, \mathbf{a}_1, \dots, s_{t-1}, \mathbf{a}_{t-1}, s_t)$.
• $\xi$ is the trajectory of joint state-action histories over an episode, i.e., $\xi = (x_1, x_2, \dots, x_h)$.
• $\boldsymbol{\pi}$ is a deterministic "joint-policy" (see Figure 7). For simplicity, we absorb the deterministic observation function within the "joint-policy" (i.e., the first mapping in $\boldsymbol{\pi}$ is the deterministic observation function, and the second mapping in $\boldsymbol{\pi}$ is the deterministic joint-policy we actually learn). The second mapping is a set of decentralized policies that each maps from an individual AOH to an individual action. For the remainder of this section, when we talk about learning a near-optimal joint-policy, we are referring to this set of decentralized policies. Hence, even though we denote $\boldsymbol{\pi}$ as a function that takes the joint state-action history as input, we are still learning decentralized policies.
Figure 7. Illustration of the joint-policy at timestep $t$ for a Dec-POMDP with $n$ agents.
• $M$ denotes the Dec-POMDP, and $\bar{M}$ denotes its corresponding centralized MDP (whose states are joint state-action histories).
• The relationship between the reward functions in a Dec-POMDP and in its corresponding centralized MDP can be expressed as follows, due to the Markov property in MDPs: $\bar{R}(x_t, \mathbf{a}_t) = R(s_t, \mathbf{a}_t)$, where $s_t$ is the last state in the history $x_t$.
• Similarly, the relationship between the transition functions in a Dec-POMDP and in its corresponding centralized MDP can be expressed as follows, due to the Markov property in MDPs:
$\bar{T}(x_{t+1} \mid x_t, \mathbf{a}_t) = T(s_{t+1} \mid s_t, \mathbf{a}_t)\, \mathbb{1}\!\left[ x_{t+1} = (x_t, \mathbf{a}_t, s_{t+1}) \right].$
The indicator function checks whether $x_{t+1}$'s previous joint state-action history is the same as $x_t$, and whether $x_{t+1}$'s last joint-action is the same as $\mathbf{a}_t$. If the two conditions are not satisfied, the transition cannot happen, because we can neither alter nor erase history (a rule that applies to both RL and life in general). For example, if $x_{t+1}$ does not extend $x_t$ with the joint-action $\mathbf{a}_t$, then $\bar{T}(x_{t+1} \mid x_t, \mathbf{a}_t) = 0$.
• The value of a joint-policy $\boldsymbol{\pi}$ in a Dec-POMDP $M$ is defined as the expected reward obtained by following the joint-policy in $M$: $V^{\boldsymbol{\pi}}_{M} = \mathbb{E}\!\left[ \sum_{t=1}^{h} \gamma^{t-1} r_t \mid \boldsymbol{\pi}, M \right]$.
For the remainder of the derivation, we overload notation for simplicity. When $R$ takes a history $x_t$ as input, we mean $\bar{R}(x_t, \mathbf{a}_t)$, and when $R$ takes a state $s_t$ as input, we mean $R(s_t, \mathbf{a}_t)$. The same convention follows for $T$ and $V$.
A.3. Algorithm
R-MAX (Brafman and Tennenholtz, 2002) is a model-based RL (MBRL) algorithm for learning in MDPs. In this work, we adapt R-MAX to Dec-POMDPs (see Algorithm 2). We learn centralized models $\hat{T}$, $\hat{R}$, and $\hat{O}$ using empirical estimates. Using the learned models, we construct an approximate K-known Dec-POMDP $\hat{M}_K$ (see Definition 5.1). Within $\hat{M}_K$, we evaluate all possible joint-policies and choose the best-performing one. Our adapted R-MAX algorithm optimistically assigns maximum reward to all under-visited state-action pairs to encourage exploration.
A.4. Basic Lemmas and Definitions
The rest of the derivation is heavily based on results from Jiang (2020) and Strehl et al. (2009), but for the first time extends them to the Dec-POMDP setting.
Lemma A.1 (Coin Flip (Strehl et al., 2009)).
Suppose a weighted coin, when flipped, has probability $p > 0$ of landing heads up. Then, for any positive integer $k$ and real number $\delta \in (0, 1)$, there exists a number $m = O\!\left(\frac{k}{p}\ln\frac{1}{\delta}\right)$ such that after $m$ tosses, with probability at least $1 - \delta$, we will observe $k$ or more heads.
Proof.
Results follow from the Chernoff-Hoeffding bound. ∎
Definition A.2 (K-Known Dec-POMDP).
$M_K$ is the expected version of $\hat{M}_K$, where:
(4)   $T_K(\cdot \mid s, \mathbf{a}) = T(\cdot \mid s, \mathbf{a})$ if $(s, \mathbf{a}) \in K$, and $T_K(s \mid s, \mathbf{a}) = 1$ (a self-loop) otherwise;
(5)   $R_K(s, \mathbf{a}) = R(s, \mathbf{a})$ if $(s, \mathbf{a}) \in K$, and $R_K(s, \mathbf{a}) = R_{\max}$ otherwise;
(6)   $O_K$ agrees with the deterministic observation function $O$ on states reached through $K$, and is arbitrary otherwise.
Fact A.1.
Because the observation function is deterministic, we only need to see the observation once for a given state to learn an accurate model of it. When a given state-action pair is not in $K$, the observation model can be anything, since it does not affect the outcome. This is because, whatever action the agent performs, the K-known Dec-POMDP transition function is a self-loop and the reward is maximal.
(Table: construction of the K-known Dec-POMDP for known and unknown state-action pairs; pairs not in $K$ receive a self-loop transition, the maximum reward, and a random observation.)
Lemma A.3 (Reward Model Error (Strehl et al., 2009)).
Suppose $n$ rewards are drawn independently from the reward distribution $R(s, \mathbf{a})$ for state-action pair $(s, \mathbf{a})$. Let $\hat{R}(s, \mathbf{a})$ be the empirical (maximum-likelihood) estimate of $R(s, \mathbf{a})$, and let $\delta$ be any positive real number less than 1. Then, with probability at least $1 - \delta$, we have $|\hat{R}(s, \mathbf{a}) - R(s, \mathbf{a})| \leq \epsilon^{R}_{\delta}$, where $\epsilon^{R}_{\delta} = R_{\max}\sqrt{\frac{\ln(2/\delta)}{2n}}$.
Proof.
The result follows directly from Hoeffding's bound. ∎
Lemma A.4 (Transition Model Error (Strehl et al., 2009)).
Suppose $n$ transitions are drawn independently from the transition distribution $T(\cdot \mid s, \mathbf{a})$ for state-action pair $(s, \mathbf{a})$. Let $\hat{T}(\cdot \mid s, \mathbf{a})$ be the empirical (maximum-likelihood) estimate of $T(\cdot \mid s, \mathbf{a})$, and let $\delta$ be any positive real number less than 1. Then, with probability at least $1 - \delta$, we have $\|\hat{T}(\cdot \mid s, \mathbf{a}) - T(\cdot \mid s, \mathbf{a})\|_1 \leq \epsilon^{T}_{\delta}$, where $\epsilon^{T}_{\delta} = \sqrt{\frac{2\left[\ln\left(2^{|S|} - 2\right) - \ln\delta\right]}{n}}$.
Proof.
The result follows immediately from an application of Theorem 2.1 of Weissman et al. (2003). ∎
Corollary A.5.
Suppose that for all state-action pairs $(s, \mathbf{a})$, $n$ transitions and $n$ rewards are drawn independently from the transition distribution and the reward distribution respectively. Let $\delta$ be any positive real number less than 1. Then, with probability at least $1 - \delta$, for all state-action pairs $(s, \mathbf{a})$, we have $|\hat{R}(s, \mathbf{a}) - R(s, \mathbf{a})| \leq \epsilon^{R}$ and $\|\hat{T}(\cdot \mid s, \mathbf{a}) - T(\cdot \mid s, \mathbf{a})\|_1 \leq \epsilon^{T}$, where:
(7)   $\epsilon^{R} = \epsilon^{R}_{\delta'}$ with $\delta' = \frac{\delta}{2|S||\mathbf{A}|}$,
and
(8)   $\epsilon^{T} = \epsilon^{T}_{\delta'}$ with $\delta' = \frac{\delta}{2|S||\mathbf{A}|}$.
Proof.
The result is obtained directly by applying the union bound to Lemma A.3 and Lemma A.4, where we set $\delta' = \frac{\delta}{2|S||\mathbf{A}|}$. We first split the failure probability evenly between the reward estimation events and the transition estimation events, which results in the division by a factor of 2. Then, for both the transition and the reward, we split the failure probability evenly among all state-action pairs, which results in a further division by a factor of $|S||\mathbf{A}|$. ∎
Fact A.2 (Optimism).
By construction of the Adapted R-MAX algorithm, $V^{*}_{M_K} \geq V^{*}_{M}$.
Lemma A.6 (Simulation Lemma for Dec-POMDPs, Part 1).
Let $M_1$ and $M_2$ be two Dec-POMDPs that differ only in their reward functions $R_1, R_2$ and transition functions $T_1, T_2$.
Let $\epsilon^{R} = \max_{s, \mathbf{a}} |R_1(s, \mathbf{a}) - R_2(s, \mathbf{a})|$ and $\epsilon^{T} = \max_{s, \mathbf{a}} \|T_1(\cdot \mid s, \mathbf{a}) - T_2(\cdot \mid s, \mathbf{a})\|_1$. Then, for all joint-policies $\boldsymbol{\pi}$,
$\big\| V^{\boldsymbol{\pi}}_{M_1} - V^{\boldsymbol{\pi}}_{M_2} \big\|_{\infty} \leq \frac{1}{1 - \gamma}\Big( \epsilon^{R} + \frac{\gamma\, \epsilon^{T} V_{\max}}{2} \Big).$
Proof.
For all $(s, \mathbf{a})$,
(9)   $\big| Q^{\boldsymbol{\pi}}_{M_1}(s, \mathbf{a}) - Q^{\boldsymbol{\pi}}_{M_2}(s, \mathbf{a}) \big| = \big| R_1(s, \mathbf{a}) + \gamma \langle T_1(\cdot \mid s, \mathbf{a}), V^{\boldsymbol{\pi}}_{M_1} \rangle - R_2(s, \mathbf{a}) - \gamma \langle T_2(\cdot \mid s, \mathbf{a}), V^{\boldsymbol{\pi}}_{M_2} \rangle \big| \leq \epsilon^{R} + \gamma \big| \langle T_1(\cdot \mid s, \mathbf{a}) - T_2(\cdot \mid s, \mathbf{a}), V^{\boldsymbol{\pi}}_{M_1} \rangle \big| + \gamma \big\| Q^{\boldsymbol{\pi}}_{M_1} - Q^{\boldsymbol{\pi}}_{M_2} \big\|_{\infty}.$
Because of Fact A.1, we need not deal with the observation function when writing out the value function, as we simply absorb the observation function as part of the deterministic joint-policy.
Since Equation 9 holds for all $(s, \mathbf{a})$, we can take the infinity-norm on the left-hand side:
(10)   $\big\| Q^{\boldsymbol{\pi}}_{M_1} - Q^{\boldsymbol{\pi}}_{M_2} \big\|_{\infty} \leq \epsilon^{R} + \gamma \max_{s, \mathbf{a}} \big| \langle T_1(\cdot \mid s, \mathbf{a}) - T_2(\cdot \mid s, \mathbf{a}), V^{\boldsymbol{\pi}}_{M_1} \rangle \big| + \gamma \big\| Q^{\boldsymbol{\pi}}_{M_1} - Q^{\boldsymbol{\pi}}_{M_2} \big\|_{\infty}.$
We then expand the middle term as follows:
(11)   $\big| \langle T_1(\cdot \mid s, \mathbf{a}) - T_2(\cdot \mid s, \mathbf{a}), V^{\boldsymbol{\pi}}_{M_1} \rangle \big| = \big| \langle T_1(\cdot \mid s, \mathbf{a}) - T_2(\cdot \mid s, \mathbf{a}), V^{\boldsymbol{\pi}}_{M_1} - \tfrac{V_{\max}}{2}\mathbf{1} \rangle \big| \leq \big\| T_1(\cdot \mid s, \mathbf{a}) - T_2(\cdot \mid s, \mathbf{a}) \big\|_1 \big\| V^{\boldsymbol{\pi}}_{M_1} - \tfrac{V_{\max}}{2}\mathbf{1} \big\|_{\infty} \leq \frac{\epsilon^{T} V_{\max}}{2}.$
In Equation 11, the first step shifts the range of $V^{\boldsymbol{\pi}}_{M_1}$ from $[0, V_{\max}]$ to $[-\tfrac{V_{\max}}{2}, \tfrac{V_{\max}}{2}]$ to obtain a bound that is tighter by a factor of 2. The equality in the first line holds because of the following, where $c$ is any constant:
(14)   $\langle T_1(\cdot \mid s, \mathbf{a}) - T_2(\cdot \mid s, \mathbf{a}), c\,\mathbf{1} \rangle = c \Big( \sum_{s'} T_1(s' \mid s, \mathbf{a}) - \sum_{s'} T_2(s' \mid s, \mathbf{a}) \Big) = 0.$
Substituting Equation 11 into Equation 10 and rearranging yields $\big\| Q^{\boldsymbol{\pi}}_{M_1} - Q^{\boldsymbol{\pi}}_{M_2} \big\|_{\infty} \leq \frac{1}{1-\gamma}\big( \epsilon^{R} + \frac{\gamma\, \epsilon^{T} V_{\max}}{2} \big)$, which implies the claim for the values. ∎
Lemma A.7 (Simulation Lemma for Dec-POMDPs, Part 2).
Let $M_1$ and $M_2$ be two Dec-POMDPs that differ only in their reward functions $R_1, R_2$ and transition functions $T_1, T_2$, and let $\epsilon^{R} = \max_{s, \mathbf{a}} |R_1(s, \mathbf{a}) - R_2(s, \mathbf{a})|$ and $\epsilon^{T} = \max_{s, \mathbf{a}} \|T_1(\cdot \mid s, \mathbf{a}) - T_2(\cdot \mid s, \mathbf{a})\|_1$. Then
$\big\| V^{*}_{M_1} - V^{*}_{M_2} \big\|_{\infty} \leq \frac{1}{1 - \gamma}\Big( \epsilon^{R} + \frac{\gamma\, \epsilon^{T} V_{\max}}{2} \Big).$
Proof.
Let $\mathcal{T}_1$ and $\mathcal{T}_2$ be the Bellman update operators of $M_1$ and $M_2$ respectively. By the same argument as in Equation 11,
(15)   $\big\| \mathcal{T}_1 V - \mathcal{T}_2 V \big\|_{\infty} \leq \epsilon^{R} + \frac{\gamma\, \epsilon^{T} V_{\max}}{2}$ for any value function $V$ with range in $[0, V_{\max}]$.
Therefore,
(16)   $\big\| V^{*}_{M_1} - V^{*}_{M_2} \big\|_{\infty} = \big\| \mathcal{T}_1 V^{*}_{M_1} - \mathcal{T}_2 V^{*}_{M_2} \big\|_{\infty} \leq \big\| \mathcal{T}_1 V^{*}_{M_1} - \mathcal{T}_2 V^{*}_{M_1} \big\|_{\infty} + \big\| \mathcal{T}_2 V^{*}_{M_1} - \mathcal{T}_2 V^{*}_{M_2} \big\|_{\infty} \leq \epsilon^{R} + \frac{\gamma\, \epsilon^{T} V_{\max}}{2} + \gamma \big\| V^{*}_{M_1} - V^{*}_{M_2} \big\|_{\infty},$
and rearranging yields the claim. ∎
Corollary A.8.
Following from the two Simulation Lemmas for Dec-POMDPs, we have the following:
(1) For all joint-policies $\boldsymbol{\pi}$: $\big\| V^{\boldsymbol{\pi}}_{M_K} - V^{\boldsymbol{\pi}}_{\hat{M}_K} \big\|_{\infty} \leq \frac{1}{1-\gamma}\big( \epsilon^{R} + \frac{\gamma\, \epsilon^{T} V_{\max}}{2} \big)$;
(2) $\big\| V^{*}_{M_K} - V^{*}_{\hat{M}_K} \big\|_{\infty} \leq \frac{1}{1-\gamma}\big( \epsilon^{R} + \frac{\gamma\, \epsilon^{T} V_{\max}}{2} \big)$.
Proof.
We give a proof for (2) as follows.
By definition, $M_K$ and $\hat{M}_K$ only differ in their reward and transition functions on $K$, and by Corollary A.5 these differ by at most $\epsilon^{R}$ and $\epsilon^{T}$ respectively with probability at least $1 - \delta$. Hence, by Lemma A.7,
(17)   $\big\| V^{*}_{M_K} - V^{*}_{\hat{M}_K} \big\|_{\infty} \leq \frac{1}{1-\gamma}\Big( \epsilon^{R} + \frac{\gamma\, \epsilon^{T} V_{\max}}{2} \Big).$
The proof for (1) follows from the same steps as above using Lemma A.6.
∎
Lemma A.9 (Induced Inequality).
Suppose Dec-POMDPs $M$ and $M'$ have centralized transition/reward/observation functions that agree exactly on $K$. Let escape$(\xi)$ be 1 if the trajectory $\xi$ contains at least one state-action pair $(s, \mathbf{a}) \notin K$, and 0 otherwise. Then, for all joint-policies $\boldsymbol{\pi}$,
$V^{\boldsymbol{\pi}}_{M'} \;\geq\; V^{\boldsymbol{\pi}}_{M} - V_{\max} \Pr(\text{escape}).$
Proof. Let $U_{M}(\xi)$ denote the sum of discounted rewards of trajectory $\xi$ according to the reward function of $M$. We can write $V^{\boldsymbol{\pi}}_{M} = \sum_{\xi} \Pr_{M}(\xi \mid \boldsymbol{\pi})\, U_{M}(\xi)$ (and similarly for $M'$). For $\xi$ such that escape$(\xi) = 1$, define pre$(\xi)$ as the prefix of $\xi$ in which every state-action pair is in $K$ and whose next state-action pair is one that escapes. Similarly, define suf$(\xi)$ as the remainder of the episode. See Figure 8 for an illustration of an example. Let $U(\text{pre})$ (resp. $U(\text{suf})$) be the sum of discounted rewards within the prefix (resp. suffix), and let $\Pr_{M}(\text{pre} \mid \boldsymbol{\pi})$ be the marginal probability of the prefix assigned by $M$ under the joint-policy $\boldsymbol{\pi}$. It suffices to upper bound $V^{\boldsymbol{\pi}}_{M}$ and lower bound $V^{\boldsymbol{\pi}}_{M'}$ in terms of these quantities.

First, we give an upper bound on $V^{\boldsymbol{\pi}}_{M}$, using the fact that the cumulative discounted reward of any suffix is upper-bounded by $V_{\max}$:
(18)   $V^{\boldsymbol{\pi}}_{M} \;=\; \sum_{\xi:\, \text{escape}(\xi)=0} \Pr_{M}(\xi \mid \boldsymbol{\pi})\, U_{M}(\xi) \;+\; \sum_{\xi:\, \text{escape}(\xi)=1} \Pr_{M}(\xi \mid \boldsymbol{\pi})\, U_{M}(\xi) \;\leq\; \sum_{\xi:\, \text{escape}(\xi)=0} \Pr_{M}(\xi \mid \boldsymbol{\pi})\, U_{M}(\xi) \;+\; \sum_{\text{pre}} \Pr_{M}(\text{pre} \mid \boldsymbol{\pi}) \big[ U(\text{pre}) + V_{\max} \big].$
The first equality uses the fact that every trajectory falls into one of two categories: those that contain an escape and those that do not. The inequality follows because each trajectory containing at least one escape can be split into a prefix and a suffix; the prefix behaves exactly as in the true Dec-POMDP, the value of the suffix cannot exceed $V_{\max}$, and, for all trajectories sharing the same prefix, we can sum their probabilities (the prefix does not depend on the suffix) to obtain the marginal probability of the prefix.
Similarly, we give a lower bound on $V^{\boldsymbol{\pi}}_{M'}$, this time using the fact that the cumulative discounted reward of any suffix is lower-bounded by 0:
(19)   $V^{\boldsymbol{\pi}}_{M'} \;\geq\; \sum_{\xi:\, \text{escape}(\xi)=0} \Pr_{M'}(\xi \mid \boldsymbol{\pi})\, U_{M'}(\xi) \;+\; \sum_{\text{pre}} \Pr_{M'}(\text{pre} \mid \boldsymbol{\pi})\, U(\text{pre}).$
By the definitions of pre and suf and the fact that $M$ and $M'$ agree exactly on $K$, we have the following:
(1) $\Pr_{M}(\xi \mid \boldsymbol{\pi}) = \Pr_{M'}(\xi \mid \boldsymbol{\pi})$ for every trajectory $\xi$ with escape$(\xi) = 0$;
(2) $U_{M}(\xi) = U_{M'}(\xi)$ for every such trajectory;
(3) $\Pr_{M}(\text{pre} \mid \boldsymbol{\pi}) = \Pr_{M'}(\text{pre} \mid \boldsymbol{\pi})$ for every prefix;
(4) $\sum_{\text{pre}} \Pr_{M}(\text{pre} \mid \boldsymbol{\pi}) = \Pr(\text{escape})$.
Combining the above, we obtain the bound:
(20)   $V^{\boldsymbol{\pi}}_{M} - V^{\boldsymbol{\pi}}_{M'} \;\leq\; V_{\max} \sum_{\text{pre}} \Pr_{M}(\text{pre} \mid \boldsymbol{\pi}) \;=\; V_{\max}\Pr(\text{escape}). \;\; ∎$
A.5. Sample Complexity Analysis
Theorem A.10.
Suppose that $\epsilon$ and $\delta$ are two real numbers with $0 < \epsilon, \delta < 1$, and let $M$ be any Dec-POMDP. There exist inputs $m = m\!\left(\tfrac{1}{\epsilon}, \tfrac{1}{\delta}\right)$ and $\epsilon_1$, with $m$ polynomial in the relevant quantities, such that if the Adapted R-MAX is executed on $M$ with inputs $m$ and $\epsilon_1$, then the following holds: Let $\boldsymbol{\pi}_t$ denote the Adapted R-MAX's joint-policy at episode $t$. With probability at least $1 - \delta$, $V^{\boldsymbol{\pi}_t}_{M} \geq V^{*}_{M} - \epsilon$ is true for all but a number of episodes that is polynomial in $|S|$, $|\mathbf{A}|$, $V_{\max}$, $\tfrac{1}{\epsilon}$, and $\tfrac{1}{\delta}$
episodes.
Proof.
We lower-bound the value of the Adapted R-MAX's joint-policy $\boldsymbol{\pi}_t$ in the true Dec-POMDP $M$:
(21)   $V^{\boldsymbol{\pi}_t}_{M} \;\geq\; V^{\boldsymbol{\pi}_t}_{M_K} - V_{\max}\Pr(\text{escape}) \;\geq\; V^{\boldsymbol{\pi}_t}_{\hat{M}_K} - \epsilon_1 - V_{\max}\Pr(\text{escape}) \;=\; V^{*}_{\hat{M}_K} - \epsilon_1 - V_{\max}\Pr(\text{escape}) \;\geq\; V^{*}_{M_K} - 2\epsilon_1 - V_{\max}\Pr(\text{escape}) \;\geq\; V^{*}_{M} - 2\epsilon_1 - V_{\max}\Pr(\text{escape}),$
where $\epsilon_1 = \frac{1}{1-\gamma}\big(\epsilon^{R} + \frac{\gamma\,\epsilon^{T} V_{\max}}{2}\big)$. The first inequality is the Induced Inequality (Lemma A.9), applied to the true Dec-POMDP and the K-known Dec-POMDP, which agree exactly on $K$. In Corollary A.8 we essentially say that joint-policy values in $M_K$ and $\hat{M}_K$ differ by at most $\epsilon_1$; this gives the second and fourth steps, while the equality in the third step uses the fact that $\boldsymbol{\pi}_t$ is optimal in $\hat{M}_K$. The last inequality follows from optimism (Fact A.2). Whenever we bound the contribution of escaping trajectories in Equation 21, we apply the upper bound $V \leq V_{\max}$.
To choose $m$ large enough such that $2\epsilon_1 \leq \epsilon$ with high probability, by Corollary A.5 we need the following two conditions to hold:
(22)   $\epsilon^{R} \leq \frac{(1-\gamma)\,\epsilon}{4}$,
(23)   $\frac{\gamma\, \epsilon^{T} V_{\max}}{2} \leq \frac{(1-\gamma)\,\epsilon}{4}$.
Solving for $m$, we see that both conditions are satisfied when $m \geq c \cdot \frac{\big(|S| + \ln\frac{|S||\mathbf{A}|}{\delta}\big) V_{\max}^2}{(1-\gamma)^2\, \epsilon^2}$, where $c$ is a constant; in particular, $m$ is polynomial in the relevant quantities.
There are two possible cases for the remaining term $V_{\max}\Pr(\text{escape})$:
(1) Suppose that $V_{\max}\Pr(\text{escape}) \leq \epsilon$; then, combined with $2\epsilon_1 \leq \epsilon$, the joint-policy $\boldsymbol{\pi}_t$ is $2\epsilon$-optimal.
(2) Suppose that $V_{\max}\Pr(\text{escape}) > \epsilon$; then the following occurs: each such episode escapes the known set with probability greater than $\frac{\epsilon}{V_{\max}}$. Because successful exploration (visiting an under-visited state-action pair) can occur at most $m\,|S|\,|\mathbf{A}|$ times before all pairs are known, by Lemma A.1 we need at most
(24)   $O\!\Big( \frac{m\,|S|\,|\mathbf{A}|\, V_{\max}}{\epsilon} \ln\frac{1}{\delta} \Big)$
episodes with such attempts at exploration.
∎
Appendix B Experiment Details
B.1. Policy Optimization
Implementation of model-free baseline and MARCO is based on the pyMARL codebase Samvelyan et al. (2019b).
In each experiment, all agents share the same network. To distinguish between agents, a one-hot agent ID vector is concatenated onto each agent's observation. The architecture of all agent networks is a DRQN with a recurrent layer consisting of a fully-connected layer, followed by a GRU with a hidden dimension of 64 units, and then another fully-connected layer. We use the same discount factor $\gamma$ for all experiments. The replay buffer contains the most recent 5000 episodes. During training, we sample batches of 32 episodes uniformly from the buffer, and the target network is updated every 200 episodes. When training the agents' policies, we encourage exploration with an $\epsilon$-greedy policy, where $\epsilon$ is annealed linearly over the first 100k training timesteps and remains constant at its final value afterwards. All neural networks used during policy optimization are trained using RMSprop with a fixed learning rate.
The exploration policy is trained in the same manner as described above. Whenever a new model is learned, the exploration policy is trained for 100k timesteps within the model in the switch tasks, and for 50k in the MPE task. The input to the exploration policy uses the state instead of the agent’s observation.
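A sketch of the agent network described in this subsection (fully-connected layer, 64-unit GRU, fully-connected output layer); anything beyond those layer sizes is an assumption rather than the exact pyMARL implementation.

```python
import torch
import torch.nn as nn

class DRQNAgent(nn.Module):
    """Fully-connected layer -> GRU (64 hidden units) -> fully-connected layer,
    mapping an agent's observation (plus one-hot agent ID) to per-action Q-values."""
    def __init__(self, input_dim, n_actions, hidden_dim=64):
        super().__init__()
        self.fc_in = nn.Linear(input_dim, hidden_dim)
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)
        self.fc_out = nn.Linear(hidden_dim, n_actions)

    def init_hidden(self, batch_size):
        # Zero-initialized recurrent state at the start of each episode.
        return torch.zeros(batch_size, self.gru.hidden_size)

    def forward(self, obs, hidden):
        x = torch.relu(self.fc_in(obs))
        h = self.gru(x, hidden)
        return self.fc_out(h), h   # Q-values and next hidden state
```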
B.2. Data Collection
The dataset for model learning is always initialized with data collected using a randomly behaving policy. After the initialization, we use an $\epsilon$-greedy data-collection policy with a constant $\epsilon$.
B.3. Model Learning
The dynamics model for both switch tasks is an auto-regressive model. An encoder with a single hidden layer of 500 units encodes the past state-action pair. The encoded state-action pair is then used to generate the next state using a recurrent layer composed of a fully connected layer, followed by a GRU with hidden dimension of 500 units, and two more fully connected layers. The dynamics model for the MPE task uses a fully connected neural network of two hidden layers of 500 units to predict the categorical state features and continuous state features separately. All other model components are parameterized by neural networks with two fully connected layers of 500 units.
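A sketch of an auto-regressive GRU dynamics model of the kind described above, generating the next state one categorical feature at a time; the exact feature encoding and start token are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AutoregressiveDynamicsModel(nn.Module):
    """Encodes the previous (state, joint-action) pair, then generates the next
    state feature-by-feature with a GRU, conditioning each step on the features
    generated so far (simplified: one categorical distribution per feature)."""
    def __init__(self, state_action_dim, n_state_features, n_values, hidden=500):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_action_dim, hidden), nn.ReLU())
        self.feat_in = nn.Linear(n_values, hidden)
        self.gru = nn.GRUCell(hidden, hidden)
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n_values))
        self.n_state_features = n_state_features
        self.n_values = n_values

    def forward(self, state_action):
        h = self.encoder(state_action)                       # initial hidden state
        prev = state_action.new_zeros(state_action.size(0), self.n_values)  # start token
        logits = []
        for _ in range(self.n_state_features):
            h = self.gru(torch.relu(self.feat_in(prev)), h)
            step_logits = self.head(h)                       # distribution over this feature
            logits.append(step_logits)
            prev = torch.softmax(step_logits, dim=-1)
        return torch.stack(logits, dim=1)                    # [batch, features, values]
```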
All model components are trained with the Adam optimizer, a fixed learning rate, and a batch size of 1000. To prevent overfitting, we use early stopping with a maximum of 700 epochs. The dataset is split between training and validation in a 3:7 ratio.
Switch Tasks
We learn an ensemble of 5 models over the dynamics model, and a single model for each of the other model components. We also learn a model of the available-actions function to narrow down the joint-policy search space. At each timestep, the available-actions function outputs the set of available actions.
The model-free baselines have access to the ground-truth available-actions function during both training and testing for a fair comparison.
MPE
We learn an ensemble of 3 models over the dynamics model and the reward model.
B.4. Compute
All experiments are run on a single GPU, either an NVIDIA T4 or an NVIDIA P100. Each experiment took on the order of hours to run.