TAPE: Leveraging Agent Topology for Cooperative Multi-Agent Policy Gradient
Abstract
Multi-Agent Policy Gradient (MAPG) has made significant progress in recent years. However, centralized critics in state-of-the-art MAPG methods still face the centralized-decentralized mismatch (CDM) issue, which means that sub-optimal actions by some agents will affect other agents' policy learning. While using individual critics for policy updates can avoid this issue, it severely limits cooperation among agents. To address this issue, we propose an agent topology framework, which decides whether other agents should be considered in the policy gradient and achieves a compromise between facilitating cooperation and alleviating the CDM issue. The agent topology allows agents to use a coalition utility as the learning objective instead of the global utility provided by centralized critics or the local utility provided by individual critics. Various models are studied to constitute the agent topology. We propose Topology-based multi-Agent Policy gradiEnt (TAPE) for both stochastic and deterministic MAPG methods. We prove the policy improvement theorem for stochastic TAPE and give a theoretical explanation for the improved cooperation among agents. Experiment results on several benchmarks show that the agent topology improves the performance of TAPE by facilitating agent cooperation and alleviating the CDM issue. Finally, multiple ablation studies and a heuristic graph search algorithm are devised to show the efficacy of the agent topology.
1 Introduction
Recent years have witnessed dramatic progress of reinforcement learning (RL) and multi-agent reinforcement learning (MARL) in real-life applications, such as unmanned vehicles (Liu et al. 2022), traffic signal control (Noaeen et al. 2022) and on-demand delivery (Wang et al. 2023). Taking advantage of the centralized training decentralized execution (CTDE) (Oliehoek, Spaan, and Vlassis 2008; Kraemer and Banerjee 2016) paradigm, current cooperative MARL methods (Du et al. 2023; Wang et al. 2020a, b; Peng et al. 2021; Zhang et al. 2021; Zhou, Lan, and Aggarwal 2022) adopt value function factorization or a centralized critic to provide centralized learning signals that promote cooperation and achieve implicit or explicit credit assignment. Multi-agent policy gradient (MAPG) (Lowe et al. 2017; Foerster et al. 2018; Zhou et al. 2020; Zhang et al. 2021; Zhou, Lan, and Aggarwal 2022; Du et al. 2019) applies RL policy gradient techniques (Sutton and Barto 2018; Silver et al. 2014; Lillicrap et al. 2015) to the multi-agent context. In CTDE, MAPG methods adopt centralized critics or value-mixing networks (Rashid et al. 2020b, a; Wang et al. 2020a) over individual critics so that agents can directly update their policies to maximize the global value in their policy gradient. As a result, agents cooperate more effectively and obtain better expected team rewards.
The centralized critic approach has an inherent problem known as centralized-decentralized mismatch (CDM) (Wang et al. 2020c; Chen et al. 2022). The CDM issue refers to sub-optimal or explorative actions of some agents negatively affecting the policy learning of others, causing catastrophic miscoordination. The CDM issue arises because sub-optimal or explorative actions may lead to a small or negative centralized global value $Q_{tot}$, even if the other agents take good or optimal actions. In turn, the small $Q_{tot}$ will make the other agents mistake their good actions for bad ones and interrupt their policy learning. The Decomposed Off-Policy (DOP) approach (Wang et al. 2020c) deals with sub-optimal actions of other agents by linearly decomposed individual critics, which ignore the other agents' actions in the policy gradient. But the use of individual critics severely limits agent cooperation.
We give an example to illustrate the issues of learning with centralized critics and individual critics, respectively. Consider a one-step matrix game with two agents, where each agent has two actions. The team reward is maximal only when both agents choose the optimal action, while choosing the optimal action alone incurs a large penalty. Assume agent 1 has a near-optimal policy that chooses the non-optimal action with a small probability, and agent 0 is using the COMA centralized critic (Foerster et al. 2018) for policy learning. If agent 0 takes the optimal action and agent 1 takes the non-optimal action, agent 0's counterfactual advantage becomes negative, which means agent 0 will mistakenly regard the optimal action as a bad one. Consequently, the sub-optimal action of agent 1 causes agent 0 to decrease the probability of taking the optimal action and deviate from the optimal policy. Similar problems occur with other centralized critics. If we employ individual critics, however, cooperation will be limited. Assume both agents' policies are initialized as random policies and learn with individual critics. For each agent, the expected individual utility of the risky optimal action is lower than that of a safe non-optimal action, so both agents increase the probability of the safe action. The post-update joint policy therefore settles on a safe joint action and receives a reward that is clearly sub-optimal.
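To make this example concrete, the following minimal sketch computes both quantities for a hypothetical 2x2 common-payoff matrix. The payoff values are ours and chosen only to mirror the structure described above, not taken from the paper: it evaluates the COMA-style counterfactual advantage of the optimal action when the teammate explores, and the individually greedy joint action under individual critics.

```python
import numpy as np

# Hypothetical 2x2 common-payoff game (values are ours, not the paper's):
# joint action (0, 0) is optimal, but agent 0 is punished for playing 0 alone.
R = np.array([[10.0, -20.0],
              [ 0.0,   2.0]])   # R[a0, a1]: shared team reward
uniform = np.array([0.5, 0.5])  # both agents start with random policies

# --- Centralized critic (COMA-style) view: the CDM issue -------------------
# Counterfactual advantage of agent 0's optimal action when agent 1 explores
# with the non-optimal action a1 = 1.
a1 = 1
baseline = uniform @ R[:, a1]            # E_{a0 ~ pi_0}[ R(a0, a1) ]
adv_opt = R[0, a1] - baseline            # advantage of a0 = 0 given a1 = 1
print("counterfactual advantage of the optimal action:", adv_opt)  # -11.0 < 0

# --- Individual critic view: limited cooperation ---------------------------
# Each agent's individual value ignores the other agent's action by averaging
# over it; the individually greedy joint action is then sub-optimal.
q0 = R @ uniform                         # Q_0(a0) = E_{a1}[ R(a0, a1) ]
q1 = uniform @ R                         # Q_1(a1) = E_{a0}[ R(a0, a1) ]
greedy = (int(q0.argmax()), int(q1.argmax()))
print("individually greedy joint action:", greedy, "reward:", R[greedy])  # (1, 0) -> 0.0
```

The first print shows the optimal action receiving a negative advantage because of the teammate's explorative action (the CDM issue); the second shows individually greedy agents landing on a sub-optimal joint action (limited cooperation).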
In this paper, we aim to alleviate the CDM issue without hindering agents' cooperation capacity by proposing an agent topology framework that describes the relationships between agents' policy updates. Under the agent topology framework, agents connected in the topology consider and maximize each other's utilities. Thus, the shared objective makes each individual agent form a coalition with its connected neighbors. Agents only consider the utilities of agents in the same coalition, which facilitates in-coalition cooperation and avoids the influence of out-of-coalition agents. Based on the agent topology, we propose Topology-based multi-Agent Policy gradiEnt (TAPE) for both stochastic and deterministic MAPG, where the agent topology alleviates the bad influence of other agents' sub-optimality without hindering cooperation among agents. Theoretically, we prove the policy improvement theorem for stochastic TAPE and give a theoretical explanation for the improved cooperation achieved by exploiting the agent topology, from the perspective of parameter-space exploration.
Empirically, we use three prevalent random graph models (Erdős, Rényi et al. 1960; Watts and Strogatz 1998; Albert and Barabási 2002) to constitute the agent topology. Results show that the Erdős–Rényi (ER) model (Erdős, Rényi et al. 1960) is able to generate the most diverse topologies. With diverse coalitions, agents are able to explore different cooperation patterns and achieve strong cooperation performance. Evaluation results on a matrix game, Level-based foraging (Papoudakis et al. 2021) and SMAC (Samvelyan et al. 2019) show that TAPE outperforms all baselines and the agent topology is able to improve base methods’ performance by both facilitating cooperation among agents and alleviating the CDM issue. Moreover, to show the efficacy of the agent topology, we conduct multiple studies and devise a heuristic graph search algorithm.
The contributions of this paper are three-fold. First, we propose an agent topology framework and Topology-based multi-Agent Policy gradiEnt (TAPE) to achieve a compromise between facilitating cooperation and alleviating the CDM issue. Second, we theoretically establish the policy improvement theorem for stochastic TAPE and elaborate on the cause of the improved cooperation brought by the agent topology. Finally, empirical results demonstrate that the agent topology is able to alleviate the CDM issue without hindering cooperation among agents, resulting in strong performance of TAPE.
2 Preliminaries
The cooperative multi-agent task in this paper is modelled as a Decentralized Partially Observable Markov Decision Process (Dec-POMDP) (Oliehoek and Amato 2016). A Dec-POMDP is a tuple $\langle \mathcal{N}, S, A, P, r, \Omega, O, \gamma \rangle$, where $\mathcal{N} = \{1, \dots, n\}$ is a finite set of agents, $S$ is the state space, $A$ is the agent action space and $\gamma$ is a discount factor. At each timestep, every agent $i$ picks an action $a_i \in A$ to form the joint action $\boldsymbol{a}$ that interacts with the environment. Then a state transition occurs according to the state transition function $P(s' \mid s, \boldsymbol{a})$. All agents receive a shared reward given by the reward function $r(s, \boldsymbol{a})$. During execution, every agent draws a local observation $o_i \in \Omega$ from the observation function $O(s, i)$. Every agent stores an observation-action history $\tau_i$, based on which agent $i$ derives a policy $\pi_i(a_i \mid \tau_i)$. The joint policy $\boldsymbol{\pi}$ consists of the policies of all agents. The global value function $Q_{tot}(s, \boldsymbol{a}) = \mathbb{E}_{\boldsymbol{\pi}}\big[\sum_{t} \gamma^t r_t\big]$ is the expectation of the discounted future reward sum under the joint policy $\boldsymbol{\pi}$.
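For readers who prefer code, a rough sketch of the Dec-POMDP ingredients and the discounted objective follows; the field names are our own shorthand, not a standard API.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

# Rough illustration (our own naming) of the Dec-POMDP tuple used above:
# n agents, shared reward, per-agent observations, and a discount factor.
@dataclass
class DecPOMDP:
    n_agents: int
    states: Sequence        # state space S
    actions: Sequence       # per-agent action space A
    transition: Callable    # P(s' | s, joint_action) -> next-state distribution
    reward: Callable        # r(s, joint_action) -> shared scalar reward
    observe: Callable       # O(s, agent_id) -> local observation
    gamma: float = 0.99     # discount factor

def discounted_return(rewards, gamma):
    """Objective the joint policy maximizes in expectation: sum_t gamma^t * r_t."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

print(discounted_return([1.0, 0.0, 1.0], gamma=0.9))  # 1 + 0.9^2 = 1.81
```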
The policy gradient in the stochastic MAPG method DOP is $\nabla_{\theta} J = \mathbb{E}_{\boldsymbol{\pi}}\big[\sum_{i} k_i(s)\, \nabla_{\theta_i} \log \pi_i(a_i \mid \tau_i)\, Q_i(\tau_i, a_i)\big]$, where $k_i(s)$ is the positive coefficient provided by the mixing network, and the policy gradient in deterministic MAPG methods is $\nabla_{\theta_i} J = \mathbb{E}_{\mathcal{D}}\big[\nabla_{\theta_i} \mu_i(\tau_i)\, \nabla_{a_i} Q_{tot}(s, \boldsymbol{a}) \big|_{a_j = \mu_j(\tau_j)}\big]$, where $Q_{tot}$ is the centralized critic and $\mu_i$ is the policy of agent $i$ parameterized by $\theta_i$.
3 Related Work
Multi-Agent Policy Gradient The policy gradient in stochastic MAPG methods has the form $\nabla_{\theta_i} J = \mathbb{E}_{\boldsymbol{\pi}}\big[\nabla_{\theta_i} \log \pi_i(a_i \mid \tau_i)\, U_i\big]$ (Foerster et al. 2018; Wang et al. 2020c; Lou et al. 2023b; Chen et al. 2022), where the objective $U_i$ varies across methods, such as the counterfactual advantage (Foerster et al. 2018) and the polarized joint-action value (Chen et al. 2022). The objective in DOP is the individual aristocrat utility (Wolpert and Tumer 2001), which ignores other agents' utilities to avoid the CDM issue, but cooperation is also limited by this objective. It is worth noting that the polarized joint-action value (Chen et al. 2022) also aims to address the CDM issue, but it only applies to stochastic MAPG methods, and the polarized global value can be very unstable. Deterministic MAPG methods use gradient ascent to directly maximize the centralized global value $Q_{tot}$. Lowe et al. (2017) model the global value with a centralized critic. Current deterministic MAPG methods (Zhang et al. 2021; Peng et al. 2021; Zhou, Lan, and Aggarwal 2022) adopt value factorization to mix individual values into $Q_{tot}$. As the global value is determined by the centralized critic for all agents, sub-optimal actions of one agent easily influence all others.
Topology in Reinforcement Learning Adjodah et al. (2019) discuss the communication topology issue in parallel-running RL algorithms such as A3C (Mnih et al. 2016). Their results show that a centralized learner implicitly yields a fully-connected communication topology among parallel workers, which harms performance. In MARL with decentralized training, a communication topology is adopted to enable inter-agent communication among networked agents (Zhang et al. 2018; Wang et al. 2019; Konan, Seraj, and Gombolay 2022; Du et al. 2021). The communication topology allows agents to share local information with each other during both training and execution and even achieve local consensus, which further leads to better cooperation performance. In MARL with centralized training, the deep coordination graph (DCG) (Böhmer, Kurin, and Whiteson 2020) factorizes the joint value function according to a coordination graph to achieve a trade-off between representational capacity and generalization. The deep implicit coordination graph (Li et al. 2020) infers the coordination graph dynamically from agent interactions instead of relying on domain expertise as in DCG. Ruan et al. (2022) learn an action coordination graph to represent agents' decision dependencies, which further coordinates the dependent behaviors among agents.
4 Topology-based Multi-Agent Policy Gradient
In this section, we propose Topology-based multi-Agent Policy gradiEnt (TAPE), which exploits the agent topology for both stochastic and deterministic MAPG. This use of the agent topology provides a compromise between facilitating cooperation and alleviating CDM. The primary purpose of the agent topology is to indicate relationships between agents’ policy updates, so we focus on policy gradients of TAPE here and cover the remainder in supplementary material. First, we will define the agent topology.
The agent topology $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ describes how agents should consider others' utilities during policy updates. Each agent is a vertex in $\mathcal{V}$ and $\mathcal{E}$ is the set of edges. For a given topology, if $(i, j) \in \mathcal{E}$, the source agent $i$ should consider the utility of the destination agent $j$ in its policy gradient. The only constraint we place on a topology is that $(i, i) \in \mathcal{E}$ for every agent $i$, because agents should at least consider their own utility in the policy gradient. The topology captures the relationships between agents' policy updates, not their communication network at test time (Foerster et al. 2016; Das et al. 2019; Wang et al. 2019; Ding, Huang, and Lu 2020). Connected agents consider and maximize each other's utilities together. Thus, the shared objective makes each individual agent form a coalition with its connected neighbors. We use the adjacency matrix $G$, with $G_{ij} = 1$ if $(i, j) \in \mathcal{E}$ and $G_{ij} = 0$ otherwise, to refer to the agent topology in what follows.
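A minimal sketch of how such a topology can be represented follows; the helper names are ours and the structure is only illustrative.

```python
import numpy as np

# Illustrative sketch of the agent-topology data structure (names are ours,
# not from the paper): a boolean adjacency matrix G with forced self-loops.
def make_topology(edges, n_agents):
    """Build an adjacency matrix from a list of (source, dest) edges.
    Self-loops are always added: every agent considers its own utility."""
    G = np.zeros((n_agents, n_agents), dtype=bool)
    for i, j in edges:
        G[i, j] = True
    np.fill_diagonal(G, True)          # constraint: (i, i) is always an edge
    return G

def coalition(G, i):
    """Agents whose utilities agent i considers in its policy gradient."""
    return np.flatnonzero(G[i])

G = make_topology(edges=[(0, 1), (2, 0)], n_agents=3)
print(coalition(G, 0))  # -> [0 1]: agent 0 maximizes its own and agent 1's utility
```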
In our agent topology framework, DOP (Wang et al. 2020c) (policy gradient given in Section 2) and other independent learning algorithms have an edgeless agent topology: the adjacency matrix is the identity matrix and no edge between distinct agents exists in the topology. With no coalition, a DOP agent only maximizes its own individual utility and hence is poor at cooperation. Although DOP adopts a mixing network for the individual utilities to enhance cooperation, an agent's ability to cooperate is still limited, which we show empirically in the matrix game experiments. Methods with a centralized critic such as COMA (Foerster et al. 2018), FACMAC (Peng et al. 2021) and PAC (Zhou, Lan, and Aggarwal 2022) yield the fully-connected agent topology. In these methods, there is only one coalition with all of the agents in it (all edges exist in the topology), and all agents update their policies based on the centralized critic. Consequently, they suffer severely from the CDM issue, because the influence of an agent's sub-optimal behavior spreads to the entire multi-agent system.
4.1 Stochastic TAPE
Instead of a global centralized critic (Foerster et al. 2018), we use the agent topology to aggregate individual utilities and critics to facilitate cooperation among agents for stochastic MAPG (Wang et al. 2020c). To this end, a new learning objective, the coalition utility, is defined below for the policy gradient.
Definition 1 (Coalition Utility).
The coalition utility $u_i$ for agent $i$ is the sum of the individual utilities of the agents connected to $i$ in the agent topology $G$, i.e. $u_i = \sum_{j=1}^{n} G_{ij}\, U_j$, where $U_j = Q_{tot}(s, \boldsymbol{a}) - \mathbb{E}_{a_j' \sim \pi_j}\big[Q_{tot}\big(s, (a_j', \boldsymbol{a}_{-j})\big)\big]$.
$U_j$ is the aristocrat utility from (Wang et al. 2020c; Wolpert and Tumer 2001), $G_{ij} = 1$ only if agent $j$ is connected to agent $i$ in $G$, and $Q_{tot}$ is the global value function. The coalition utility only depends on in-coalition agents because $G_{ij} = 0$ if agent $j$ is not in agent $i$'s coalition. With the coalition utility, we propose stochastic TAPE with the policy gradient given by
$$\nabla_{\theta_i} J = \mathbb{E}_{\boldsymbol{\pi}}\Big[\nabla_{\theta_i} \log \pi_i(a_i \mid \tau_i)\, \sum_{j=1}^{n} G_{ij}\, U_j\Big] \qquad (1)$$
$$= \mathbb{E}_{\boldsymbol{\pi}}\Big[\nabla_{\theta_i} \log \pi_i(a_i \mid \tau_i)\, \sum_{j=1}^{n} G_{ij}\, k_j(s)\, Q_j(\tau_j, a_j)\Big] \qquad (2)$$
where $k_j(s)$ is the weight for agent $j$'s local value provided by the mixing network. The derivation from Eq. 1 to Eq. 2 is provided in Appendix A. Since the local utilities of the other in-coalition agents are maximized by the policy updates, cooperation among agents is facilitated. Pseudo-code and more details of stochastic TAPE are provided in Appendix D.1.
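The sketch below illustrates how a surrogate objective corresponding to Eq. 2 could be assembled from the mixing-network coefficients, local critic values and the adjacency matrix; the tensor names and shapes are our assumptions, and this is not the authors' implementation.

```python
import torch

def stochastic_tape_pg_loss(log_pi, k, q_local, G):
    """Hypothetical sketch of a surrogate loss for the Eq. 2 policy gradient.

    log_pi:  [n_agents] log pi_i(a_i | tau_i) for the sampled joint action
    k:       [n_agents] positive mixing-network coefficients k_j(s)
    q_local: [n_agents] local critic values Q_j(tau_j, a_j)
    G:       [n_agents, n_agents] agent-topology adjacency matrix (0/1 floats)
    """
    # coalition utility seen by each agent i: sum_j G[i, j] * k_j * Q_j
    coalition_util = (G * (k * q_local).unsqueeze(0)).sum(dim=1).detach()
    # minimizing this surrogate pushes each policy toward its coalition's utility
    return -(log_pi * coalition_util).mean()

# toy usage with three agents
loss = stochastic_tape_pg_loss(
    log_pi=torch.log(torch.tensor([0.5, 0.4, 0.6])),
    k=torch.tensor([0.3, 0.5, 0.2]),
    q_local=torch.tensor([1.0, -0.5, 2.0]),
    G=torch.tensor([[1., 1., 0.], [0., 1., 0.], [1., 0., 1.]]))
print(loss)
```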
4.2 Deterministic TAPE
Current deterministic MAPG methods (Peng et al. 2021; Zhang et al. 2021; Zhou, Lan, and Aggarwal 2022) yield a fully-connected agent topology, which makes agents vulnerable to the bad influence of other agents' sub-optimal actions. A mixing network is adopted to mix the local Q-value functions $Q_i$ into the global value $Q_{tot}$. Each agent uses the deterministic policy gradient to update its parameters and directly maximize $Q_{tot}$. We use the agent topology to drop out the values of out-of-coalition agents, so that the influence of their sub-optimal actions will not spread to in-coalition agents. To this end, the coalition value $Q_{tot}^i$ is defined below.
Definition 2 (Coalition $Q$).
The coalition value $Q_{tot}^i$ for agent $i$ is the mixture of its in-coalition agents' local values produced by the mixing network $f_{mix}$, i.e.
$$Q_{tot}^i(s, \boldsymbol{a}) = f_{mix}\Big(s,\ \mathbb{1}\big[(i,1) \in \mathcal{E}\big]\, Q_1(\tau_1, a_1),\ \dots,\ \mathbb{1}\big[(i,n) \in \mathcal{E}\big]\, Q_n(\tau_n, a_n)\Big), \qquad (3)$$
where $\mathbb{1}[\cdot]$ is the indicator function and $\mathbb{1}\big[(i,j) \in \mathcal{E}\big] = 1$ only when the edge $(i,j)$ exists in the topology.
During the policy update, out-of-coalition agents' values are always masked out, so agent $i$'s policy learning will not be affected by out-of-coalition agents. Based on the coalition value $Q_{tot}^i$, we propose deterministic TAPE, whose policy gradient is given by
$$\nabla_{\theta_i} J = \mathbb{E}_{\mathcal{D}}\Big[\nabla_{\theta_i} \mu_i(\tau_i)\, \nabla_{a_i} Q_{tot}^i(s, a_1, \dots, a_n) \big|_{a_j = \mu_j(\tau_j)}\Big] \qquad (4)$$
where $Q_{tot}^i$ is the coalition value defined in Eq. 3 and $Q_j$ is the local soft value (Zhang et al. 2021) augmented with assistive information, which contains information to assist policy learning towards the optimal policy as in (Zhou, Lan, and Aggarwal 2022). After dropping out agents not in the coalition, the bad influence of out-of-coalition sub-optimal actions will not affect in-coalition agents. More details and pseudo-code are provided in Appendix D.2.
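As an illustration of the masking step, the sketch below mixes local values into a coalition value for one agent with out-of-coalition values zeroed out. It uses a simplified state-conditioned linear mixer rather than the hypernetwork mixer of the base method; all names are ours.

```python
import torch
import torch.nn as nn

class MaskedMixer(nn.Module):
    """Illustrative sketch (not the authors' code): mix local values into a
    coalition value for agent i, masking out-of-coalition agents' values."""

    def __init__(self, n_agents, state_dim, hidden=32):
        super().__init__()
        self.weights = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_agents))

    def forward(self, state, q_local, G, i):
        # state: [batch, state_dim]; q_local: [batch, n_agents]
        # G: [n_agents, n_agents] adjacency matrix (0/1)
        w = torch.abs(self.weights(state))             # non-negative mixing weights
        masked_q = q_local * G[i].float()              # drop out-of-coalition values
        return (w * masked_q).sum(dim=-1, keepdim=True)  # coalition value Q_tot^i
```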
5 Analysis
5.1 Agent Topology
Although the agent topology can be arbitrary, a proper agent topology should be able to explore diverse cooperation patterns, which is essential for robust cooperation (Li et al. 2021; Strouse et al. 2021; Lou et al. 2023a). We study three prevalent random graph models: the Barabási–Albert (BA) model (Albert and Barabási 2002), the Watts–Strogatz (WS) model (Watts and Strogatz 1998) and the Erdős–Rényi (ER) model (Erdős, Rényi et al. 1960). The BA model is a scale-free network commonly used for citation and signaling biological networks (Barabási and Albert 1999). The WS model is known as the small-world network, where each node can be reached from any other through a small number of intermediate nodes, resulting in the six degrees of separation (Travers and Milgram 1977). In the ER model, each edge between any two nodes is present with an independent probability. Formally, the adjacency matrix $G$ of the ER agent topology for $n$ agents is defined as $G_{ii} = 1$ for all $i$; for $i \neq j$, $G_{ij} = 1$ with probability $p$ and $G_{ij} = 0$ otherwise.
In research question 1 of Section 6.3, we find that the ER model is able to generate the most diverse topologies, which in turn helps the agents explore diverse cooperation patterns and achieve the strongest performance. Thus, we use the ER model to constitute the agent topology in the experiments.
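A sketch of sampling such an ER agent topology with forced self-loops is given below; whether off-diagonal entries are additionally symmetrized is an implementation detail we leave open.

```python
import numpy as np

def sample_er_topology(n_agents, p, rng=None):
    """Sketch of sampling an Erdos-Renyi agent topology: each off-diagonal
    entry is 1 with probability p, and self-loops are always present."""
    rng = rng or np.random.default_rng()
    G = (rng.random((n_agents, n_agents)) < p).astype(np.int64)
    np.fill_diagonal(G, 1)
    return G

G = sample_er_topology(n_agents=5, p=0.3)
print(G.sum() - 5, "edges between distinct agents were sampled")
```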
5.2 Theoretical Results
We now establish the policy improvement theorem of stochastic TAPE, and prove a theorem on the cooperation improvement from the perspective of exploring the parameter space, which is a common motivation in RL research (Schulman, Chen, and Abbeel 2017; Haarnoja et al. 2018; Zhang et al. 2021; Adjodah et al. 2019). We assume the policies have tabular expressions.
The following theorem states that stochastic TAPE updates can monotonically improve the objective function $J$.
Theorem 1. [stochastic TAPE policy improvement theorem] With tabular expressions for policies, for any pre-update joint policy $\boldsymbol{\pi}_{old}$ and updated joint policy $\boldsymbol{\pi}_{new}$ obtained by the policy gradient in Eq. 2 that satisfy $|\pi_i^{new}(a_i \mid \tau_i) - \pi_i^{old}(a_i \mid \tau_i)| \le \epsilon$ for every agent $i$, where $\epsilon$ is a sufficiently small number, we have $J(\boldsymbol{\pi}_{new}) \ge J(\boldsymbol{\pi}_{old})$, i.e. the joint policy is improved by the update.
Please refer to Appendix B for the proof of Theorem 1. Although this policy improvement theorem is established for policies with tabular expressions, we provide conditions in the proof, under which policy improvement is guaranteed even with function approximators.
Next, we provide a theoretical insight that, compared to using individual critics, stochastic TAPE can better explore the parameter space to find more effective cooperation patterns. One heuristic for measuring such capacity is the diversity of parameter updates in each iteration (Adjodah et al. 2019), measured by the variance of the parameter updates.
Given a state $s$ and action $a_i$, let $\Delta\theta_i^{TAPE}$ and $\Delta\theta_i^{DOP}$ denote the stochastic TAPE and DOP parameter updates, respectively. The following theorem states that the stochastic TAPE policy update is more diverse, so that it can explore the parameter space more effectively.
Theorem 2. For any agent $i$ and state-action pair $(s, a_i)$, the stochastic TAPE policy update $\Delta\theta_i^{TAPE}$ and the DOP policy update $\Delta\theta_i^{DOP}$ satisfy $\mathrm{Var}\big[\Delta\theta_i^{TAPE}\big] \ge \mathrm{Var}\big[\Delta\theta_i^{DOP}\big]$, and the difference is in proportion to $p$, where $p$ is the probability of edges being present in the Erdős–Rényi model.
Theorem 2 shows that, compared to solely using individual critics, our agent topology provides larger diversity in policy updates to find better cooperation patterns. More details and the proof are provided in Appendix C. It is worth noting that although a large hyperparameter $p$ in the agent topology means larger diversity in parameter updates, the CDM issue also becomes more severe because the connections among agents become denser. Thus, $p$ must be set properly to achieve a compromise between facilitating cooperation and avoiding the CDM issue, which we show later in the experiments.
6 Experiment
In this section, we first demonstrate that ignoring other agents in the policy gradient to avoid the bad influence of their sub-optimal actions severely harms cooperation among agents. To this end, three one-step matrix games that require strong cooperation are proposed. Then, we evaluate the efficacy of the proposed methods on (a) Level-Based Foraging (LBF) (Papoudakis et al. 2021) and (b) the StarCraft Multi-Agent Challenge (SMAC) (Samvelyan et al. 2019), and answer several research questions via various ablations and a heuristic graph search technique. Our code is available at github.com/LxzGordon/TAPE.
6.1 Matrix Game
We propose three one-step matrix games, which are harder versions of the example in the introduction. The matrix games are given in Fig. 1(a). We use different colors to show the rewards in different games (blue for Easy, green for Medium and red for Hard). The optimal joint policy is for both agents to take the optimal action. But agent 0 lacks motivation to choose it because doing so is very likely to incur a large penalty. Thus, this game requires strong cooperation among agents. In the Medium game, we further increase the penalty for agent 0 to choose the optimal action. In the Hard game, we keep the large penalty and add a locally optimal reward. Note that these matrix games are not monotonic games (Rashid et al. 2020b), as the optimal action for each agent depends on the other agents. The evaluation results are given in Fig. 1.
With the agent topology to encourage cooperation, stochastic TAPE outperforms the other methods by a large margin and is able to learn the optimal joint policy even in the Hard game. DOP agents optimize individual utilities and ignore the utilities of other agents to avoid the influence of their sub-optimal actions, which results in severe miscoordination in these games. Since DOP agents adopt stochastic policies, they may receive some large rewards after enough exploration, but their learning efficiency is much lower than that of stochastic TAPE. COMA is a weak baseline on complex tasks (Samvelyan et al. 2019) (0% win rate in all maps in Section 6.3). But since COMA agents optimize the global value (expected team reward sum) instead of the individual utility as in DOP, COMA achieves better results on these tasks requiring strong cooperation. These matrix games demonstrate the importance of considering the utility of other agents in cooperative tasks. With the agent topology, stochastic TAPE can facilitate cooperation among agents and alleviate the CDM issue simultaneously.
6.2 Level-Based Foraging
In Level-Based Foraging (LBF) (Papoudakis et al. 2021), agents navigate a grid-world and collect randomly scattered food items. Agents and food items are assigned levels. A food item can only be collected when the level sum of the nearby agents is larger than the food level. Reward is only given when a food item is collected, which makes the environment sparse-reward. The test return reaches its maximum when all food items are collected. The compared baselines include value-based methods, QMIX (Rashid et al. 2020b) and QPLEX (Wang et al. 2020a), and policy-based methods, DOP (Wang et al. 2020c), FACMAC (Peng et al. 2021) and PAC (Zhou, Lan, and Aggarwal 2022). Scenario illustrations and results are given in Fig. 2.
To make 8x8-2p-3f more difficult, food items can only be collected when all agents participate. In this simple, sparse-reward task, with its stochastic policy and enhanced cooperation, stochastic TAPE outperforms all other methods in convergence speed and final performance. In 15x15-4p-5f, only the state-of-the-art method PAC and deterministic TAPE learn to collect food items. With the agent topology keeping out the bad influence of other agents' sub-optimal actions, deterministic TAPE achieves the best performance.
6.3 StarCraft Multi-Agent Challenge
The StarCraft Multi-Agent Challenge (SMAC) (Samvelyan et al. 2019) is a challenging benchmark built on StarCraft II, where agents must cooperate with each other to defeat enemy teams controlled by the built-in AI. We evaluate the proposed methods and baselines with the recommended evaluation protocol and metric in six maps, including three hard maps (3s_vs_4z, 5m_vs_6m and 2c_vs_64zg) and three super hard maps (corridor, MMM2 and 6h_vs_8z). All algorithms are run four times with different random seeds, and each run lasts for the same number of environment steps. During training, each algorithm uses four parallel environments to collect training data.
Overall results The overall results in the six maps are provided in Fig. 3. Deterministic TAPE outperforms all other methods in terms of performance and convergence speed. In 6h_vs_8z, one of the most difficult maps in SMAC, deterministic TAPE achieves noticeably better performance than its base method PAC and the other baselines. It is worth noting that after integrating the agent topology, both stochastic TAPE and deterministic TAPE outperform their base methods. This demonstrates the efficacy of the proposed agent topology in facilitating cooperation for DOP and alleviating the CDM issue for PAC. Notably, in 2c_vs_64zg, stochastic TAPE outperforms all of the baselines except our deterministic TAPE, while its base method DOP struggles to perform well.
Next, we answer three research questions by ablations and additional experiments. The research questions are: Q1. What is the proper model to constitute the agent topology? Q2. Is there indeed a compromise between facilitating cooperation and suffering from the CDM issue? Q3. Is the agent topology capable of compromising between facilitating cooperation and the CDM issue to achieve best performance?
Q1. We study three prevalent random graph models, the Barabási–Albert (BA) model (Albert and Barabási 2002), the Watts–Strogatz (WS) model (Watts and Strogatz 1998) and the Erdős–Rényi (ER) model (Erdős, Rényi et al. 1960), via visualization and an ablation study. First, we generate 1000 topologies for 12 agents with each model and give the visualization result in Fig. 4(a), where the x-axis is the average degree and the y-axis is the connectivity (the minimum number of edges whose removal splits the graph into two sub-graphs). Average degree and connectivity are two essential factors for the agent topology, as they reflect the level of the CDM issue and of cooperation. Compared to the other two models, the ER model generates much more diverse topologies, covering the area from the edgeless topology to the fully-connected topology. Then, we evaluate stochastic TAPE with each model on MMM2, a super hard map in SMAC. Results are given in Fig. 4(b). For the random graph models, the larger the graph diversity in Fig. 4(a), the stronger the performance. Thus, we constitute the agent topology with the ER model in the other experiments. For the fully-connected topology, the performance exhibits very large variance, because once a sub-optimal action occurs, its bad influence easily spreads through the centralized critic to all other agents. It is worth noting that the graphs can also be generated via Bayesian optimization, but this may result in limited graph diversity, causing unstable or even worse performance. Thus, how to generate the agent topology via optimization-based methods remains a challenge.
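The diversity visualization can be reproduced along the following lines with networkx; the ER case is shown, and nx.barabasi_albert_graph / nx.watts_strogatz_graph can be substituted for the BA and WS models. The sampling parameters here are illustrative.

```python
import networkx as nx
import numpy as np

# Sketch of the topology-diversity study: sample graphs from a model and
# record (average degree, connectivity) for each sample.
def topology_stats(n_agents=12, n_samples=1000, p=0.3, seed=0):
    stats = []
    for k in range(n_samples):
        g = nx.erdos_renyi_graph(n_agents, p, seed=seed + k)
        avg_degree = 2 * g.number_of_edges() / n_agents
        # edge connectivity: minimum number of edges to remove to split the graph
        connectivity = nx.edge_connectivity(g) if g.number_of_edges() else 0
        stats.append((avg_degree, connectivity))
    return np.array(stats)

print(topology_stats(n_samples=10)[:3])
```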
Q2. The compromise here means that the more connections there are among agents to improve performance, the more severe the CDM issue becomes, and when it is too severe, it will in turn hurt performance. To answer this research question, we devise a heuristic graph search technique. During the policy training of agent $i$, we generate several candidate topologies with the ER model at each step and use each of them to update the agent's policy. After obtaining the updated policies, we evaluate their post-update global values and choose the policy with the largest global value as the updated policy. The motivation of this heuristic is that the global value is the expected future reward sum, which reflects the post-update performance. Using this technique, we can find the topology with better performance. Then, we apply the graph search technique when $p$ is small and when $p$ is large, respectively, and visualize the preferred topologies in Fig. 5. The results confirm that the compromise does exist, because (1) facilitating cooperation by building more agent connections when there is little CDM issue (Fig. 5(a)), and (2) removing connections to stop the bad influence of sub-optimal actions from spreading when the CDM issue is severe (Fig. 5(c)), can both improve performance.
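A sketch of this heuristic graph search is given below; the callables and candidate count are placeholders of ours, standing in for the actual topology sampler, TAPE policy-gradient update and global-value estimator.

```python
import copy

def heuristic_graph_search(agent, sample_topology, pg_update, eval_global_value,
                           n_candidates=8):
    """Sketch of the heuristic graph search described above (names are ours):
    try several sampled topologies, keep the update with the best post-update
    global value estimate."""
    best_policy, best_value, best_G = None, float("-inf"), None
    for _ in range(n_candidates):
        G = sample_topology()                     # e.g. an ER topology for some p
        candidate = copy.deepcopy(agent.policy)   # start from the current policy
        pg_update(candidate, G)                   # one TAPE policy-gradient step
        value = eval_global_value(candidate)      # estimated post-update global value
        if value > best_value:
            best_policy, best_value, best_G = candidate, value, G
    agent.policy = best_policy
    return best_G
```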
Q3. We answer this research question by reporting the performance with different values of the hyperparameter $p$, as it controls the level of the CDM issue and of cooperation. The results are given in Fig. 4(c), (d). A large $p$ stands for dense connections, where agents are easily affected by sub-optimal actions of other agents but cooperation is strongly encouraged. A small $p$ means sparse connections, where the influence of sub-optimal actions does not spread easily but cooperation among agents is limited. Fig. 4(c) and (d) are drawn at the end of training and at the halfway point of training to show the convergence performance and speed, respectively. Performance first increases while $p$ is small and later decreases when $p$ becomes too large. The best performance appears at the point where cooperation is strong and the CDM issue is acceptable. From these results, we can say that our ER agent topology is able to compromise between cooperation and alleviating the CDM issue to achieve the best performance.
7 Conclusion and Future Work
In this paper, we propose an agent topology framework, which aims to alleviate the CDM issue without limiting agents' cooperation capacity. Based on the agent topology, we propose TAPE for both stochastic and deterministic MAPG methods. Theoretically, we prove the policy improvement theorem for stochastic TAPE and give a theoretical explanation for the improved cooperation among agents. Empirically, we evaluate the proposed methods on several benchmarks. Experiment results show that our methods outperform their base methods and other baselines in terms of convergence speed and performance. A heuristic graph search algorithm is devised and various studies are conducted, which validate the efficacy of the proposed agent topology.
Limitation and Future Work In this work, we construct the agent topology with existing random graph models rather than learning-based methods. Our future work is to adaptively learn an agent topology that can simultaneously facilitate agent cooperation and alleviate the CDM issue.
References
- Adjodah et al. (2019) Adjodah, D.; Calacci, D.; Dubey, A.; Goyal, A.; Krafft, P.; Moro, E.; and Pentland, A. 2019. Leveraging Communication Topologies Between Learning Agents in Deep Reinforcement Learning. arXiv preprint arXiv:1902.06740.
- Albert and Barabási (2002) Albert, R.; and Barabási, A.-L. 2002. Statistical mechanics of complex networks. Reviews of modern physics, 74(1): 47.
- Alemi et al. (2016) Alemi, A. A.; Fischer, I.; Dillon, J. V.; and Murphy, K. 2016. Deep variational information bottleneck. arXiv preprint arXiv:1612.00410.
- Barabási and Albert (1999) Barabási, A.-L.; and Albert, R. 1999. Emergence of scaling in random networks. science, 286(5439): 509–512.
- Böhmer, Kurin, and Whiteson (2020) Böhmer, W.; Kurin, V.; and Whiteson, S. 2020. Deep coordination graphs. In International Conference on Machine Learning, 980–991. PMLR.
- Chen et al. (2022) Chen, W.; Li, W.; Liu, X.; and Yang, S. 2022. Learning Credit Assignment for Cooperative Reinforcement Learning. arXiv preprint arXiv:2210.05367.
- Das et al. (2019) Das, A.; Gervet, T.; Romoff, J.; Batra, D.; Parikh, D.; Rabbat, M.; and Pineau, J. 2019. Tarmac: Targeted multi-agent communication. In International Conference on Machine Learning, 1538–1546. PMLR.
- Degris, White, and Sutton (2012) Degris, T.; White, M.; and Sutton, R. S. 2012. Off-policy actor-critic. arXiv preprint arXiv:1205.4839.
- Ding, Huang, and Lu (2020) Ding, Z.; Huang, T.; and Lu, Z. 2020. Learning individually inferred communication for multi-agent cooperation. Advances in Neural Information Processing Systems, 33: 22069–22079.
- Du et al. (2019) Du, Y.; Han, L.; Fang, M.; Dai, T.; Liu, J.; and Tao, D. 2019. LIIR: learning individual intrinsic reward in multi-agent reinforcement learning. In Proceedings of the 33rd International Conference on Neural Information Processing Systems (NeurIPS), 4403–4414.
- Du et al. (2023) Du, Y.; Leibo, J. Z.; Islam, U.; Willis, R.; and Sunehag, P. 2023. A Review of Cooperation in Multi-agent Learning. arXiv preprint arXiv:2312.05162.
- Du et al. (2021) Du, Y.; Liu, B.; Moens, V.; Liu, Z.; Ren, Z.; Wang, J.; Chen, X.; and Zhang, H. 2021. Learning Correlated Communication Topology in Multi-Agent Reinforcement learning. In International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 456–464.
- Erdős, Rényi et al. (1960) Erdős, P.; Rényi, A.; et al. 1960. On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci, 5(1): 17–60.
- Feinberg et al. (2018) Feinberg, V.; Wan, A.; Stoica, I.; Jordan, M. I.; Gonzalez, J. E.; and Levine, S. 2018. Model-based value estimation for efficient model-free reinforcement learning. arXiv preprint arXiv:1803.00101.
- Foerster et al. (2016) Foerster, J.; Assael, I. A.; De Freitas, N.; and Whiteson, S. 2016. Learning to communicate with deep multi-agent reinforcement learning. Advances in neural information processing systems, 29.
- Foerster et al. (2018) Foerster, J.; Farquhar, G.; Afouras, T.; Nardelli, N.; and Whiteson, S. 2018. Counterfactual multi-agent policy gradients. In Proceedings of the AAAI conference on artificial intelligence, volume 32.
- Haarnoja et al. (2018) Haarnoja, T.; Zhou, A.; Hartikainen, K.; Tucker, G.; Ha, S.; Tan, J.; Kumar, V.; Zhu, H.; Gupta, A.; Abbeel, P.; et al. 2018. Soft actor-critic algorithms and applications. arXiv preprint arXiv:1812.05905.
- Kingma and Ba (2014) Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
- Konan, Seraj, and Gombolay (2022) Konan, S.; Seraj, E.; and Gombolay, M. 2022. Iterated reasoning with mutual information in cooperative and byzantine decentralized teaming. arXiv preprint arXiv:2201.08484.
- Kraemer and Banerjee (2016) Kraemer, L.; and Banerjee, B. 2016. Multi-agent reinforcement learning as a rehearsal for decentralized planning. Neurocomputing, 190: 82–94.
- Li et al. (2021) Li, C.; Wang, T.; Wu, C.; Zhao, Q.; Yang, J.; and Zhang, C. 2021. Celebrating diversity in shared multi-agent reinforcement learning. Advances in Neural Information Processing Systems, 34: 3991–4002.
- Li et al. (2020) Li, S.; Gupta, J. K.; Morales, P.; Allen, R.; and Kochenderfer, M. J. 2020. Deep implicit coordination graphs for multi-agent reinforcement learning. arXiv preprint arXiv:2006.11438.
- Lillicrap et al. (2015) Lillicrap, T. P.; Hunt, J. J.; Pritzel, A.; Heess, N.; Erez, T.; Tassa, Y.; Silver, D.; and Wierstra, D. 2015. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971.
- Liu et al. (2022) Liu, H.; Kiumarsi, B.; Kartal, Y.; Koru, A. T.; Modares, H.; and Lewis, F. L. 2022. Reinforcement learning applications in unmanned vehicle control: A comprehensive overview. Unmanned Systems, 1–10.
- Lou et al. (2023a) Lou, X.; Guo, J.; Zhang, J.; Wang, J.; Huang, K.; and Du, Y. 2023a. PECAN: Leveraging Policy Ensemble for Context-Aware Zero-Shot Human-AI Coordination. arXiv preprint arXiv:2301.06387.
- Lou et al. (2023b) Lou, X.; Zhang, J.; Du, Y.; Yu, C.; He, Z.; and Huang, K. 2023b. Leveraging Joint-action Embedding in Multi-agent Reinforcement Learning for Cooperative Games. IEEE Transactions on Games.
- Lowe et al. (2017) Lowe, R.; Wu, Y. I.; Tamar, A.; Harb, J.; Pieter Abbeel, O.; and Mordatch, I. 2017. Multi-agent actor-critic for mixed cooperative-competitive environments. Advances in neural information processing systems, 30.
- Mnih et al. (2016) Mnih, V.; Badia, A. P.; Mirza, M.; Graves, A.; Lillicrap, T.; Harley, T.; Silver, D.; and Kavukcuoglu, K. 2016. Asynchronous methods for deep reinforcement learning. In International conference on machine learning, 1928–1937. PMLR.
- Mnih et al. (2013) Mnih, V.; Kavukcuoglu, K.; Silver, D.; Graves, A.; Antonoglou, I.; Wierstra, D.; and Riedmiller, M. 2013. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.
- Munos et al. (2016) Munos, R.; Stepleton, T.; Harutyunyan, A.; and Bellemare, M. 2016. Safe and efficient off-policy reinforcement learning. Advances in neural information processing systems, 29.
- Noaeen et al. (2022) Noaeen, M.; Naik, A.; Goodman, L.; Crebo, J.; Abrar, T.; Abad, Z. S. H.; Bazzan, A. L.; and Far, B. 2022. Reinforcement learning in urban network traffic signal control: A systematic literature review. Expert Systems with Applications, 116830.
- Oliehoek and Amato (2016) Oliehoek, F. A.; and Amato, C. 2016. A concise introduction to decentralized POMDPs. Springer.
- Oliehoek, Spaan, and Vlassis (2008) Oliehoek, F. A.; Spaan, M. T.; and Vlassis, N. 2008. Optimal and approximate Q-value functions for decentralized POMDPs. Journal of Artificial Intelligence Research, 32: 289–353.
- Papoudakis et al. (2021) Papoudakis, G.; Christianos, F.; Schäfer, L.; and Albrecht, S. V. 2021. Benchmarking Multi-Agent Deep Reinforcement Learning Algorithms in Cooperative Tasks. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks (NeurIPS).
- Peng et al. (2021) Peng, B.; Rashid, T.; Schroeder de Witt, C.; Kamienny, P.-A.; Torr, P.; Böhmer, W.; and Whiteson, S. 2021. Facmac: Factored multi-agent centralised policy gradients. Advances in Neural Information Processing Systems, 34: 12208–12221.
- Precup (2000) Precup, D. 2000. Eligibility traces for off-policy policy evaluation. Computer Science Department Faculty Publication Series, 80.
- Rashid et al. (2020a) Rashid, T.; Farquhar, G.; Peng, B.; and Whiteson, S. 2020a. Weighted qmix: Expanding monotonic value function factorisation for deep multi-agent reinforcement learning. Advances in neural information processing systems, 33: 10199–10210.
- Rashid et al. (2020b) Rashid, T.; Samvelyan, M.; De Witt, C. S.; Farquhar, G.; Foerster, J.; and Whiteson, S. 2020b. Monotonic value function factorisation for deep multi-agent reinforcement learning. The Journal of Machine Learning Research, 21(1): 7234–7284.
- Ruan et al. (2022) Ruan, J.; Du, Y.; Xiong, X.; Xing, D.; Li, X.; Meng, L.; Zhang, H.; Wang, J.; and Xu, B. 2022. GCS: graph-based coordination strategy for multi-agent reinforcement learning. arXiv preprint arXiv:2201.06257.
- Samvelyan et al. (2019) Samvelyan, M.; Rashid, T.; de Witt, C. S.; Farquhar, G.; Nardelli, N.; Rudner, T. G. J.; Hung, C.-M.; Torr, P. H. S.; Foerster, J.; and Whiteson, S. 2019. The StarCraft Multi-Agent Challenge. CoRR, abs/1902.04043.
- Schulman, Chen, and Abbeel (2017) Schulman, J.; Chen, X.; and Abbeel, P. 2017. Equivalence between policy gradients and soft q-learning. arXiv preprint arXiv:1704.06440.
- Silver et al. (2014) Silver, D.; Lever, G.; Heess, N.; Degris, T.; Wierstra, D.; and Riedmiller, M. 2014. Deterministic policy gradient algorithms. In International conference on machine learning, 387–395. PMLR.
- Strouse et al. (2021) Strouse, D.; McKee, K.; Botvinick, M.; Hughes, E.; and Everett, R. 2021. Collaborating with humans without human data. Advances in Neural Information Processing Systems, 34: 14502–14515.
- Sutton and Barto (2018) Sutton, R. S.; and Barto, A. G. 2018. Reinforcement learning: An introduction. MIT press.
- Tishby, Pereira, and Bialek (2000) Tishby, N.; Pereira, F. C.; and Bialek, W. 2000. The information bottleneck method. arXiv preprint physics/0004057.
- Travers and Milgram (1977) Travers, J.; and Milgram, S. 1977. An experimental study of the small world problem. In Social networks, 179–197. Elsevier.
- Wang et al. (2020a) Wang, J.; Ren, Z.; Liu, T.; Yu, Y.; and Zhang, C. 2020a. Qplex: Duplex dueling multi-agent q-learning. arXiv preprint arXiv:2008.01062.
- Wang et al. (2023) Wang, S.; Hu, S.; Guo, B.; and Wang, G. 2023. Cross-Region Courier Displacement for On-Demand Delivery With Multi-Agent Reinforcement Learning. IEEE Transactions on Big Data.
- Wang et al. (2020b) Wang, T.; Gupta, T.; Mahajan, A.; Peng, B.; Whiteson, S.; and Zhang, C. 2020b. Rode: Learning roles to decompose multi-agent tasks. arXiv preprint arXiv:2010.01523.
- Wang et al. (2019) Wang, T.; Wang, J.; Zheng, C.; and Zhang, C. 2019. Learning nearly decomposable value functions via communication minimization. arXiv preprint arXiv:1910.05366.
- Wang et al. (2020c) Wang, Y.; Han, B.; Wang, T.; Dong, H.; and Zhang, C. 2020c. Off-policy multi-agent decomposed policy gradients. arXiv preprint arXiv:2007.12322.
- Watts and Strogatz (1998) Watts, D. J.; and Strogatz, S. H. 1998. Collective dynamics of 'small-world' networks. Nature, 393(6684): 440–442.
- Wolpert and Tumer (2001) Wolpert, D. H.; and Tumer, K. 2001. Optimal payoff functions for members of collectives. Advances in Complex Systems, 4(02n03): 265–279.
- Zhang et al. (2018) Zhang, K.; Yang, Z.; Liu, H.; Zhang, T.; and Basar, T. 2018. Fully decentralized multi-agent reinforcement learning with networked agents. In International Conference on Machine Learning, 5872–5881. PMLR.
- Zhang et al. (2021) Zhang, T.; Li, Y.; Wang, C.; Xie, G.; and Lu, Z. 2021. Fop: Factorizing optimal joint policy of maximum-entropy multi-agent reinforcement learning. In International Conference on Machine Learning, 12491–12500. PMLR.
- Zhou, Lan, and Aggarwal (2022) Zhou, H.; Lan, T.; and Aggarwal, V. 2022. PAC: Assisted Value Factorisation with Counterfactual Predictions in Multi-Agent Reinforcement Learning. arXiv preprint arXiv:2206.11420.
- Zhou et al. (2020) Zhou, M.; Liu, Z.; Sui, P.; Li, Y.; and Chung, Y. Y. 2020. Learning implicit credit assignment for cooperative multi-agent reinforcement learning. Advances in neural information processing systems, 33: 11853–11864.
A Derivation of Policy Gradient
In this section, we give the derivation of the following policy gradient for stochastic TAPE
$$\nabla_{\theta_i} J = \mathbb{E}_{\boldsymbol{\pi}}\Big[\nabla_{\theta_i} \log \pi_i(a_i \mid \tau_i)\, \sum_{j=1}^{n} G_{ij}\, k_j(s)\, Q_j(\tau_j, a_j)\Big] \qquad (5)$$
where $u_i = \sum_{j=1}^{n} G_{ij}\, U_j$ is the coalition utility of agent $i$ with the other agents connected in the agent topology, $U_j$ is the aristocrat utility of agent $j$ from (Wolpert and Tumer 2001; Wang et al. 2020c), and $G_{ii} = 1$, i.e. all agents are connected with themselves.
Proof. First, we reformulate the aristocrat utility. With the linearly decomposed critic $Q_{tot}(s, \boldsymbol{a}) = \sum_{l=1}^{n} k_l(s)\, Q_l(\tau_l, a_l) + b(s)$, the aristocrat utility of agent $j$ becomes
$$U_j = Q_{tot}(s, \boldsymbol{a}) - \mathbb{E}_{a_j' \sim \pi_j}\big[Q_{tot}\big(s, (a_j', \boldsymbol{a}_{-j})\big)\big] = k_j(s)\Big(Q_j(\tau_j, a_j) - \mathbb{E}_{a_j' \sim \pi_j}\big[Q_j(\tau_j, a_j')\big]\Big).$$
So the policy gradient in Eq. 1 is
$$\nabla_{\theta_i} J = \mathbb{E}_{\boldsymbol{\pi}}\Big[\nabla_{\theta_i} \log \pi_i(a_i \mid \tau_i)\, \sum_{j=1}^{n} G_{ij}\, k_j(s)\Big(Q_j(\tau_j, a_j) - \mathbb{E}_{a_j' \sim \pi_j}\big[Q_j(\tau_j, a_j')\big]\Big)\Big].$$
Consider a given agent $j$. The baseline term $k_j(s)\, \mathbb{E}_{a_j' \sim \pi_j}\big[Q_j(\tau_j, a_j')\big]$ does not depend on $a_i$. Since
$$\mathbb{E}_{a_i \sim \pi_i}\big[\nabla_{\theta_i} \log \pi_i(a_i \mid \tau_i)\big] = \sum_{a_i} \nabla_{\theta_i} \pi_i(a_i \mid \tau_i) = \nabla_{\theta_i} \sum_{a_i} \pi_i(a_i \mid \tau_i) = 0,$$
the expectation of the product of the score function and the baseline term vanishes. Thus,
$$\nabla_{\theta_i} J = \mathbb{E}_{\boldsymbol{\pi}}\Big[\nabla_{\theta_i} \log \pi_i(a_i \mid \tau_i)\, \sum_{j=1}^{n} G_{ij}\, k_j(s)\, Q_j(\tau_j, a_j)\Big]$$
is the policy gradient of stochastic TAPE.
B Policy Improvement Theorem
In this section, we give the proof of the policy improvement theorem of stochastic TAPE. As in previous works (Degris, White, and Sutton 2012; Wang et al. 2020c; Feinberg et al. 2018), we relax the requirement that $Q_{tot}$ is a good estimate of the true value and simplify the $Q$-function learning process as the following MSE problem to make it tractable.
$$\min_{\phi}\; \mathbb{E}_{\boldsymbol{a} \sim \boldsymbol{\pi}}\Big[\big(Q_{tot}(s, \boldsymbol{a}; \phi) - q(s, \boldsymbol{a})\big)^2\Big] \qquad (6)$$
where $\boldsymbol{\pi}$ is the joint policy, $q$ is the true value and $Q_{tot}$ is the estimated value.
To make this proof self-contained, we first borrow Lemma 1 and Lemma 2 from (Wang et al. 2020c). Lemma 1 states that the learning of the centralized critic preserves the order of local action values. Without loss of generality, we consider a given state $s$.
Lemma 1. We consider the following optimization problem:
(7) |
Here, , and is a vector with the entry being . satisfies that .
Then, for any local optimal solution, it holds that
Proof. A necessary condition for a local optimal is
This implies that, for , we have
Let denote . We have
Therefore, if , then any local minimal of satisfies .
The mixer module of our method satisfies this condition, so Lemma 1 holds after the policy evaluation converges.
Lemma 2. For two sequences , listed in an increasing order. if , then .
Proof. We denote , then , where . Without loss of generality, we assume that , . and which and . Since are symmetric, we assume . Then, we have
As , we have
Thus .
The next lemma states that for any policies with tabular expressions updated by the stochastic TAPE policy gradient, the larger the local critic's value $Q_i(\tau_i, a_i)$, the larger the update stepsize of $\pi_i(a_i \mid \tau_i)$ under state $s$, and vice versa.
Lemma 3. For any pre-update joint policy $\boldsymbol{\pi}_{old}$ and updated joint policy $\boldsymbol{\pi}_{new}$ obtained by the stochastic TAPE policy gradient with tabular expressions that satisfy $|\pi_i^{new}(a_i \mid \tau_i) - \pi_i^{old}(a_i \mid \tau_i)| \le \epsilon$ for any agent $i$, where $\epsilon$ is a sufficiently small number, it holds that
$$Q_i(\tau_i, a_i) \ge Q_i(\tau_i, a_i') \;\Rightarrow\; \pi_i^{new}(a_i \mid \tau_i) - \pi_i^{old}(a_i \mid \tau_i) \ge \pi_i^{new}(a_i' \mid \tau_i) - \pi_i^{old}(a_i' \mid \tau_i). \qquad (8)$$
Proof. We start by showing the connection between the update stepsize of $\pi_i(a_i \mid \tau_i)$ and the local value $Q_i(\tau_i, a_i)$. With a slight abuse of notation, we let $\boldsymbol{\tau}$ denote the joint agent action-observation history and $\boldsymbol{a}_{-i}$ the joint action excluding agent $i$. Without loss of generality, consider a given state $s$ and joint action $\boldsymbol{a}$; the sampled policy gradient for agent $i$ under Eq. 2 is
$$\nabla_{\theta_i} \log \pi_i(a_i \mid \tau_i)\, \sum_{j=1}^{n} G_{ij}\, k_j(s)\, Q_j(\tau_j, a_j). \qquad (9)$$
Since the policies have tabular expressions and the update is in proportion to the gradient, the update stepsize of $\pi_i(a_i \mid \tau_i)$ is a linear function of $k_i(s)\, Q_i(\tau_i, a_i)$, and the remaining terms, contributed by the other in-coalition agents $j \neq i$, do not depend on $a_i$. Since $k_i(s)$ is a positive coefficient given by the mixing network, the stepsize is a linear function of $Q_i(\tau_i, a_i)$ with a positive coefficient. Thus, for any actions $a_i$ and $a_i'$, the monotone condition in Eq. 8 holds.
Now, we are ready to provide proof for the policy improvement theorem.
Theorem 1. With tabular expressions for policies, for any pre-update joint policy $\boldsymbol{\pi}_{old}$ and updated joint policy $\boldsymbol{\pi}_{new}$ obtained by the policy gradient of stochastic TAPE that satisfy $|\pi_i^{new}(a_i \mid \tau_i) - \pi_i^{old}(a_i \mid \tau_i)| \le \epsilon$, where $\epsilon$ is a sufficiently small number, we have $J(\boldsymbol{\pi}_{new}) \ge J(\boldsymbol{\pi}_{old})$, i.e. the joint policy is improved by the update.
Proof. Given a good value estimate $Q_{tot}$, from Lemma 1 and Lemma 3 we have
(10) |
Given , we have
Since $\epsilon$ is sufficiently small, we use $O(\epsilon^2)$ to represent the summation of the components in which the exponent of $\epsilon$ is greater than one. $O(\epsilon^2)$ is omitted in the further analysis since it is sufficiently small.
Because , we have . From Lemma 2 and Eq. 10, we have . Thus
The rest of the proof follows the policy improvement theorem for tabular MDPs from (Sutton and Barto 2018).
So we have
Which is equivalent to , since
Remark We prove that with the coalition utility in the policy gradient, the objective function $J$ is monotonically maximized. The monotone condition (Eq. 8) guarantees the monotonic improvement of stochastic TAPE policy updates in tabular cases. In cases where function approximators (such as neural networks) are used, the policy improvement is still guaranteed as long as the monotone condition holds (actions with larger values have larger update stepsizes). In the experiment section, we empirically demonstrate that policies parameterized by deep neural networks have steady performance improvement as training goes on, and the agents achieve better performance compared to the baselines.
C Policy Update Diversity
In this section, we give the detailed proof of Theorem 2. We define the diversity of exploration in the parameter space as the variance of the parameter updates $\mathrm{Var}\big[\Delta\theta_i\big]$, where $\Delta\theta_i$ is the parameter update induced by the policy gradient given the state $s$, the joint action $\boldsymbol{a}$, the learning rate and the Erdős–Rényi topology $G$ in stochastic TAPE. Without loss of generality, we set the learning rate to 1.
For clarity, we first restate Theorem 2.
Theorem 2. For any agent $i$ and state-action pair $(s, a_i)$, the stochastic TAPE policy update $\Delta\theta_i^{TAPE}$ and the DOP policy update $\Delta\theta_i^{DOP}$ satisfy $\mathrm{Var}\big[\Delta\theta_i^{TAPE}\big] \ge \mathrm{Var}\big[\Delta\theta_i^{DOP}\big]$, and the difference is in proportion to $p$, where $p$ is the probability of edges being present in the Erdős–Rényi model.
Proof. From the proof of Lemma 3, we have the explicit form of the stochastic TAPE parameter update for agent $i$ under the topology $G$. By replacing the adjacency matrix $G$ with the identity matrix $I$, we obtain the corresponding DOP update. Substituting the Erdős–Rényi entries $G_{ij}$ (which equal 1 with probability $p$ for $j \neq i$) into the update and comparing the variances, the additional variance contributed by the randomly connected in-coalition agents is non-negative and grows with $p$. Since variance is always non-negative, we have $\mathrm{Var}\big[\Delta\theta_i^{TAPE}\big] \ge \mathrm{Var}\big[\Delta\theta_i^{DOP}\big]$, and the difference is in proportion to $p$.
Remark The above theorem states that stochastic TAPE explores the parameter space more effectively. This provides a theoretical insight into and explanation of why stochastic TAPE agents are more capable of finding good cooperation patterns with other agents. The variance difference is in proportion to $p$, which means that as $p$ increases, the connections among agents in the topology become denser and enhance their capability to cooperate. But denser connections also introduce the CDM issue, as the sub-optimality of some agents will more easily affect other agents. Thus, $p$ serves as a hyperparameter to compromise between avoiding the CDM issue and the capability to explore cooperation patterns.
D Algorithm
In this section, we give the pseudo-code of the proposed methods. As we only modify the policy gradients, the rest of the structure remains the same as in the base methods (Wang et al. 2020c; Zhou, Lan, and Aggarwal 2022).
D.1 Stochastic TAPE
We first give the pseudo-code of stochastic TAPE and then give the details.
The policy gradient of stochastic TAPE is given by
$$\nabla_{\theta_i} J = \mathbb{E}_{\boldsymbol{\pi}}\Big[\nabla_{\theta_i} \log \pi_i(a_i \mid \tau_i)\, \sum_{j=1}^{n} G_{ij}\, k_j(s)\, Q_j(\tau_j, a_j)\Big] \qquad (11)$$
where $G$ is the ER agent topology, defined as
$$G_{ii} = 1 \;\; \forall i; \qquad G_{ij} = \begin{cases} 1 & \text{with probability } p \\ 0 & \text{otherwise} \end{cases} \;\; (i \neq j). \qquad (12)$$
As in the base method DOP, stochastic TAPE adopts an off-policy critic to improve sample efficiency, where the critic's training loss is a mixture of an off-policy loss with a target $y^{off}$ based on the tree-backup technique (Precup 2000; Munos et al. 2016) and an on-policy loss with a TD($\lambda$) target $y^{on}$, i.e.
$$\mathcal{L}(\phi) = \kappa\, \mathcal{L}^{off}(\phi) + (1 - \kappa)\, \mathcal{L}^{on}(\phi), \qquad (13)$$
where
$$\mathcal{L}^{off}(\phi) = \mathbb{E}_{\mathcal{D}}\Big[\mathrm{MSE}\big(Q_{tot}(s, \boldsymbol{a}; \phi),\, y^{off}\big)\Big], \qquad (14)$$
$$\mathcal{L}^{on}(\phi) = \mathbb{E}_{\boldsymbol{\pi}}\Big[\mathrm{MSE}\big(Q_{tot}(s, \boldsymbol{a}; \phi),\, y^{on}\big)\Big], \qquad (15)$$
where MSE is the mean-squared-error loss function, $\kappa$ is a parameter controlling the importance of off-policy learning, $\mathcal{D}$ is the off-policy replay buffer, $\boldsymbol{\pi}$ is the joint policy, the targets $y^{off}$ and $y^{on}$ are computed with the coefficients provided by the mixing network and a target network used to stabilize training (Mnih et al. 2013), and $\lambda$ is the TD($\lambda$) hyperparameter.
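At a high level, the mixed critic objective of Eq. 13 can be written as the following sketch, assuming the targets are precomputed; this is not DOP's exact implementation.

```python
import torch.nn.functional as F

def critic_loss(q_values, y_off, y_on, kappa=0.5):
    """Hypothetical sketch of the mixed critic objective: a convex combination
    of an off-policy loss (tree-backup target y_off) and an on-policy loss
    (TD(lambda) target y_on), both against the same predicted Q_tot values."""
    loss_off = F.mse_loss(q_values, y_off.detach())
    loss_on = F.mse_loss(q_values, y_on.detach())
    return kappa * loss_off + (1.0 - kappa) * loss_on
```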
With all the equations above, the pseudo-code of stochastic TAPE is given in Algorithm 1.
D.2 Deterministic TAPE
We first give the pseudo-code of deterministic TAPE and then give the details.
The policy gradient of deterministic TAPE is given by
$$\nabla_{\theta_i} J = \mathbb{E}_{\mathcal{D}}\Big[\nabla_{\theta_i} \mu_i(\tau_i)\, \nabla_{a_i} Q_{tot}^i(s, a_1, \dots, a_n) \big|_{a_j = \mu_j(\tau_j)}\Big] \qquad (16)$$
where the local soft $Q$-network is augmented with assistive information $c_i$. The assistive information $c_i$ is encoded from the observation of agent $i$; it provides information about the optimal joint action and assists the value factorization.
PAC follows the design of WQMIX (Rashid et al. 2020a) and keeps two mixing networks: the monotonic mixing network $Q_{tot}$ as in QMIX (Rashid et al. 2020b), and an unrestricted function $\hat{Q}^*$ that makes sure the joint-action values are correctly estimated as in WQMIX. The loss functions for $Q_{tot}$ and $\hat{Q}^*$ are given by
$$\mathcal{L}_{Q_{tot}} = \sum_{b=1}^{B} w(s, \boldsymbol{a})\big(Q_{tot}(s, \boldsymbol{a}, \boldsymbol{c}) - y\big)^2, \qquad (17)$$
$$\mathcal{L}_{\hat{Q}^*} = \sum_{b=1}^{B} \big(\hat{Q}^*(s, \boldsymbol{a}, \boldsymbol{c}) - y\big)^2, \qquad (18)$$
where $B$ is the batch size, $w(s, \boldsymbol{a})$ is the weighting function in WQMIX, $\boldsymbol{c}$ is the joint assistive information, $Q_{tot}$ is the mixture of the local values, and $y$ is the bootstrapped target computed with the target-network parameters $\theta^-$. PAC adopts two auxiliary losses to assist value factorization in MAPG, which we briefly introduce next.
Counterfactual Assistance Loss: The counterfactual assistance loss is proposed to directly guide each individual agent's policy towards the action suggested by $\hat{Q}^*$. To this end, PAC uses an advantage function with a counterfactual baseline that marginalizes out agent $i$'s action while keeping all other agents' actions fixed; the counterfactual assistance loss (Eq. 19) is built from this advantage.
Information Bottleneck Loss: Inspired by the information bottleneck method (Tishby, Pereira, and Bialek 2000), the information bottleneck loss encodes the optimal joint action as the assistive information $c_i$ for the local value functions, so that the assistive information is maximally informative about the optimal action. With the deep variational information bottleneck (Alemi et al. 2016), a variational lower bound of this objective (Eq. 20) is optimized; it involves an entropy term, a KL-divergence term and a variational posterior estimator with its own parameters. The information encoder is trained to output the assistive information $c_i$ as a sample from a normal distribution.
With all the loss terms defined above, pseudo-code of deterministic TAPE is given in Algorithm 2.
E Experiment and Implementation Details
We run the experiments on Nvidia RTX Titan graphics cards with an Intel Xeon Gold 6240R CPU. The curves in our experiments are smoothed by a sliding window with window size 4, i.e. the result at each timestep is the average of the past 4 timesteps. More experiment and implementation details for each environment are given below. It is worth noting that the only hyperparameter that needs tuning in our methods is $p$, the probability of edges being present.
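For reference, the sliding-window smoothing described above corresponds to the following small helper (our own code, not part of the released implementation).

```python
import numpy as np

def smooth(curve, window=4):
    """Sliding-window smoothing as described above: each point becomes the
    average of the last `window` recorded values (fewer at the start)."""
    curve = np.asarray(curve, dtype=float)
    return np.array([curve[max(0, t - window + 1): t + 1].mean()
                     for t in range(len(curve))])

print(smooth([0.0, 0.2, 0.4, 0.8], window=4))  # [0.  0.1  0.2  0.35]
```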
E.1 Matrix Game
The evaluation metric in the matrix game is the average return of the last 100 episodes, and the training goes on for 10000 episodes in total. The results are drawn with four random seeds, which are randomly initialized at the beginning of each experiment. For this simple matrix game, we use tabular expressions for the policies in stochastic TAPE, DOP and COMA. The critics are parameterized by a three-layer feed-forward neural network with hidden size 32. The mixing network for QMIX and the linearly decomposed critics in stochastic TAPE and DOP is also a three-layer feed-forward neural network with hidden size 32, where the coefficients for the local values are always non-negative. Hyperparameter $p$ for the ER-based agent topology in stochastic TAPE is . All algorithms are trained with the Adam optimizer (Kingma and Ba 2014) and the learning rate is . We use batch size 32 for QMIX. Since we wish to compare the differences between the policy gradients, we omit the off-policy critic target in Eq. 14 for stochastic TAPE and DOP for simplicity.
E.2 Level-Based Foraging
We use the official implementation of Level-Based Foraging (LBF) (Papoudakis et al. 2021) and keep the default settings of the environment, e.g. the reward function and the randomly scattered food items and agents. The time limit is 25 for 8x8-2p-3f and 120 for 15x15-4p-5f. For the 8x8-2p-3f scenario, we use the '-coop' option of the environment to force the agents to collect all food items together and make the task more difficult. The training goes on for 2 million timesteps in 8x8-2p-3f and 5 million timesteps in 15x15-4p-5f. Each algorithm is tested for 100 episodes every 50k timesteps. The evaluation metric is the average return of the test episodes.
We use the official implementations for all baseline algorithms and implement our methods on top of the official implementations of the base methods without changing the default hyperparameters. For example, there are two hidden layers with hidden size 64 in the hyper-network of QMIX, and the target networks in DOP are updated periodically. For a fair comparison, we only change the number of parallel-running environments to 4 for all algorithms and keep the other hyperparameters recommended by the official implementations. The same hyperparameter $p$ is used for both stochastic and deterministic TAPE in LBF.
E.3 Starcraft Multi-Agent Challenge
In the StarCraft Multi-Agent Challenge (SMAC), agents and players do not play the full StarCraft II game. Instead, SMAC proposes a series of decentralized micromanagement tasks in StarCraft II to test a group of cooperative agents. The agents must cooperate to fight against enemy units controlled by the StarCraft II built-in game AI. We keep the default settings of SMAC in our experiments, such as game AI difficulty 7, observation range and unit health points. We run 32 test episodes every 20k timesteps for evaluation and report the median test win rate across all individual runs, as recommended.
As in LBF, we use the official implementations, the recommended hyperparameters and 4 parallel-running environments for all baseline algorithms. The hyperparameter $p$ controlling the probability of edges being present is tuned separately for stochastic TAPE and deterministic TAPE; the value of $p$ for each map and algorithm is given in Table 1.
Map | $p$ (stochastic TAPE) | $p$ (deterministic TAPE)
---|---|---
5m_vs_6m | 0.1 | 0.7 |
2c_vs_64zg | 0.1 | 0.7 |
6h_vs_8z | 0.1 | 0.5 |
3s_vs_4z | 0.3 | 0.5 |
corridor | 0.3 | 0.5 |
MMM2 | 0.3 | 0.5 |
F Additional Experiments
The concept of topology is also adopted in fully decentralized MARL methods with networked agents (Zhang et al. 2018; Konan, Seraj, and Gombolay 2022). In fully decentralized methods, the CDM issue does not exist since all agents are trained independently. Agents are networked together according to the topology, so that they can consider each other during decision making to cooperate better and even achieve local consensus. However, although these methods use an agent topology to gather and utilize the local information of neighboring agents, the decentralized training paradigm cannot coordinate agents' behavior as well as our methods, which are based on the centralized-training-decentralized-execution paradigm.
InfoPG (Konan, Seraj, and Gombolay 2022) is the state-of-the-art fully decentralized method with networked agents, where each agent's policy is conditioned on the policies of its neighboring teammates. We run InfoPG on all six maps of our SMAC experiments and compare the mean test return with our proposed stochastic TAPE and deterministic TAPE.
The results demonstrate that our agent topology is effective in facilitating cooperation and filtering out bad influence from other agents during centralized training, which makes it outperform the fully-decentralized method InfoPG.