Network reconstruction from asynchronously updated evolutionary game
Abstract
The interactions between players of prisoner’s dilemma (PD) game are reconstructed with evolutionary game data. All participants play the game with their counterparts and gain corresponding rewards during each round of the game. However, their strategies are updated asynchronously during the evolutionary PD game. Two inference methods of the interactions between players are derived with naive mean-field (nMF) approximation and maximum log-likelihood estimation (MLE) respectively. The two methods are tested numerically also for fully connected asymmetric Sherrington-Kirkpatrick (SK) models, varying the data length, asymmetric degree, payoff and system noise (coupling strength). We find that the reconstruction mean square error (MSE) of MLE method is proportional to the inverse of data length and typically half (benefit from the extra information of update times) of that by nMF. Both methods are robust to the asymmetric degree but works better for large payoff. Compared with MLE, nMF is more sensitive to the couplings strength which prefers weak couplings.
pacs:
Valid PACS appear hereI Introduction
Networks can be generally used for describing systems composed by interacting elements. Such systems usually generate big time-series data based on the underlying dynamics or mechanics. Uncovering the network structures from observations is crucial to understand the collective dynamics of the systems and has attracted great interest in different fields, ranging from biology system Schneidman et al. (2006); Nguyen et al. (2016); Roudi and Hertz (2011a); Zeng and Aurell (2020a) to social science Pincus and Kalman (2004); Bury (2013); Zeng et al. (2014).
Meanwhile, the Prisoner’s Dilemma (PD) game has long been studied for the emergence of cooperation among connected (links of the network) selfish individuals (nodes of the network). With the PD game, the criminals could take cooperation (confessing the crime to the judge) or defection (denying the crime) strategy. A large number of studies have focused on the collective behaviors with different network structures, e.g., small-world network Masuda and Aihara (2003), two-dimensional lattice Szabó and Töke (1998) and scale-free network Chen et al. (2007), etc.. Furthermore, the couplings between players are typically symmetric and binary (with / without interactions) between players in those studies. To quantify the strength of interactions between players, here we studied fully connected asymmetric Sherrington-Kirkpatrick (SK) Kirkpatrick and Sherrington (1978) models, with an asymmetric degree parameter could be varying between 0 and 1. Thus the couplings between players are Gaussians and not necessarily symmetric, which are more closer to the realities in natural social systems. We are more interested in the reconstruction of the couplings between players instead of the collective behaviours of the system.
For an evolutionary PD game, the players’ strategies follow a binary-state dynamics, which could be naturally mapped into binary state, i.e., for cooperation and for defection. Hence, it is straightforward to pose the coupling inference problem for an evolutionary game into the inverse Ising problem. There are many related algorithms to recovering the couplings for the inverse Ising problems, as reviewed in Nguyen et al. (2017) and references therein.
Generally, the couplings can be reconstructed by equilibrium Aurell and Ekeberg (2012); Cocco et al. (2009a); Weigt et al. (2009) or non-equilibrium Pillow et al. (2008); Roudi and Hertz (2011b); Zhang (2012); Zeng et al. (2011, 2013) model given the time-dependent/independent system data. However, in many applications, equilibrium Ising models are typically not the ideal choice for network inference and statistical modeling of the data. A mismatch between the real couplings and the inferred ones can be seen for large systems Roudi et al. (2009); Roudi and Hertz (2011a); Zeng and Aurell (2020b). This is mainly because these systems have the out-of-equilibrium nature and asymmetric couplings between elements. Thus, researchers have moved to the kinetic Ising models, developed exact and approximate methods for network reconstruction Cocco et al. (2009b); Roudi and Hertz (2011a); Zeng et al. (2011, 2013), which has wider generality.
With kinetic Ising models, spins could update synchronously or asynchronously. Similarly, with evolutionary games, synchronous means all players update their strategies simultaneously at every time step Nowak and May (1992). This can be mathematically described by a finite difference equation, but rarely exists in social or natural systems. The players, or organisms usually act at different and uncorrelated times with the coming information which may be delayed and imperfect Huberman and Glance (1993). For instance, a neuron will be in a refractory period after releasing a spike, it cannot respond to an input signal because it is still processing or recovering from the previous input signal Greil et al. (2007); Zeng et al. (2020). The dynamics of such asynchronous update is usually expressed by differential equations, whose solutions are not always the same with those of their finite-difference counterparts. Thus players will update their strategies asynchronously here, even though all of them get their corresponding rewards through the game with and from all neighbors. With asynchronous updates of strategies, only one player will be assigned to update the strategy in one game round. For finite number of players this is not exactly the asynchronous update described above, but we do not see any differences at least for kinetic Ising models Zeng et al. (2011, 2013).
For the network reconstruction with evolutionary game data, some inference methods Wang et al. (2016, 2011) have been developed based on compressive sensing technique Foucart and Rauhut (2017), which is a paradigm for sparse-signal reconstruction. However, the compressive sensing methodology works with assumptions that the interactions between individuals should be explicit and stationary and the system could be handled linearly in some way. For example, in the studies of Wang et al. (2011); Guo et al. (2016); Wu et al. (2016), the observations are the strategies and the total payoff of each player in the evolutionary game. Furthermore, all agents are assumed to play the same game with a given payoff matrix. One player’s total payoff is hence the sum of the payoff from all neighbors. In studies Steinke et al. (2007); Chang et al. (2014); Li et al. (2017), the dynamical functions are employed, but which can also be decomposed into a linear sum of scalar basis functions.
Be distinct from compressive sensing based inference methods, we present here an exact method which is intuitive and based on maximum-log-likelihood estimation (MLE). Another approximate reconstruction method is presented also which is based on naive mean-field (nMF) approximation. Both of the derived inference formula for evolutionary PD game data is enlighten by the asynchronously updated Glauber dynamics Glauber (1963), which could converge to a stationary state described by Gibbs-Boltzman distribution when the couplings are symmetric. However, this is not necessarily true for synchronous case. Hence, we are interested in using a kinetic Ising-like model to reconstruct an evolutionary game network dynamically.
With evolutionary PD game, the update times , history of strategies and rewards for each player are the full model data generated during the game process. MLE needs all of these data while nMF does not need the update times, neither the rewards for each game round. nMF method relies on means, correlations and the temptation of defection parameter parameter in the payoff matrix. With massive numerical simulations, we find that the nMF performs very close to MLE method while avoid long-time iteration process. One equilibrium inference method also tested for the sake of completeness, which shows poor reconstruction for the couplings of our non-equilibrium processes.
The paper is organized as follows. In Sec. II we present the testing network structure as well as the dynamics for the PD evolutionary game. In Sec. III, we derive the MLE and nMF inference formulae from evolutionary game data. Sec. IV, presents how these two methods perform for the network reconstruction, while Sec. V summarizes and discusses the results.
II Asymmetric SK Model and Glauber-Like Game Dynamics
The quenched aSK model is composed by vertices, which represents game player with two kinds of strategies ( for cooperation while for defection). This is a fully connected model, i.e., all players are connected to each other. The interactions between each pair of players have the form of
(1) |
with the asymmetric degree of these interactions, while and are symmetric and asymmetric matrices, respectively. They are Gaussian random variables with means and variances
(2) |
The players do not play game with themselves. This indicates and the diagonal elements of and are as well.
We now describe the dynamics for evolutionary game on the above aSK network model. Within an evolutionary PD game, the judge knows the players are criminal but without evidences. Then a player has two possible strategies (S): cooperation (confess their crime to the judge) or defection (deny their offense to the judge). They will get different payoff when they take different strategies. The rewards are usually expressed by a payoff matrix , for the prisoner dilemma (PD) game:
(3) |
with . If both players choose cooperation, then each of them will get a reward of ; if one takes cooperation while the counterpart defects, then the cooperated one will get a reward of while the defected one gets (temptation of defection); if both players choose defection, then none of them will get any rewards.
During each time step , all players play games simultaneously and get corresponding rewards according to the payoff matrix in matrix (3). Thus, for each player, the rewards come form the games played with as well as from the neighbours. Once they get the rewards from the game, one randomly chosen player will be assigned to update his strategy according to the following formula:
(4) |
where
is the normalization factor (local partition function), which sums over both cooperation and defection states of player .
We denote the cooperative and defective strategy of player at time as and respectively
(5a) | |||
(5b) |
Then the partition function can be rewritten as
(6) |
Define
(7) | |||||
and substitute in (4) with Eq. (6), then we have
(8) | |||||
During each time step, all players join the prisoners’ dilemma game and obtain corresponding payoff according to the payoff matrix (3) from each counterpart. The averaged gain during this time step for player is derived in Eq. (25). However, only one of the players is going to update his strategy, which is not necessarily change his current choice. This is why we refer this dynamics as asynchronous PD game.
III INFERENCES FOR GAME NETWORKS
III.1 Maximum Log-likelihood Estimation (MSE)
Suppose we have the following data: history of players’ strategies , updating times and payoff of each player , of length steps. We reconstruct the game network based on these data. This can be done by maximizing the likelihood over these parameters. As mentioned in Zeng et al. (2013), for each player , the chance of updating their strategy is an Poisson process. This indicates the probability of the update history is independent of the model parameters, thus we can take the objective function as safely from Eq. (8),
(9) |
The sum only over the update times instead of all time steps as in synchronous case Roudi and Hertz (2011a). Then it leads to a learning rule for couplings
(10) | |||||
We refer this algorithm as the “MLE”. It is an exact method as no approximation is introduced to the derivation of the inference formula. Besides, it is notable that if the rewards for each player equal to and the same with each other, then the PD game process degenerates to the original Glauber dynamics Glauber (1963).
III.2 naive Mean Field (nMF) approximation
We start from the master equation
(11) | |||||
and the flipping rate is:
(12) |
which could be obtained through detailed balanced condition Van Kampen (1992):
(13a) | |||
(13b) |
With strategy history for each player , the time-dependent means and correlations can be defined as
(14a) | |||
(14b) |
Thus the external field (influence) for player at time in (7) could be rewritten as
(16) | |||||
where and are two non-fluctuating term depending on parameter and is the population average. The detailed derivation of Eq. (16) is shown in Appendix A.
The nMF approximation hence is formally the same with that for original asynchronously updated kinetic Ising model in Zeng et al. (2011),
(17) |
but with
(18) |
which is obtained by taking the stationary solution of Eq. (15a) with naive mean field approximation. The details are illustrated in Appendix B.
With the time-derivative of correlations in Eq. (15b), the function can be expanded with respect to , yields
(19) | |||||
(20) |
Denoting , we get
(21) |
With the limit , we get the nMF inference formula for the PD game network
(22) |
with , and . With full data history, it is safe to replace with the time average .
IV Simulation Results
We infer the evolutionary game network by MLE (10) and nMF (22) formula respectively. In Fig. 1, the scatter plots for reconstructed structure against the true ones are presented for both methods. Fig. 1(a) is for the data length and Fig. 1(b) for . It shows that both nMF and MLE perform better with larger data length . Furthermore, MLE works slightly better than nMF in both cases. The nMF method over estimates the couplings especially for , as shown in Fig. 1(b).

We introduce the mean square error (MSE) to measure the distance between the reconstructed and the true tested network structure.
(23) |
where are the tested network couplings and for the inferred ones.
With the reconstruction error MSE defined by (23), we compared the performance of MLE and nMF method for fully connected SK model (1). We study both methods for different data length , the asymmetric degree , the temptation of defection parameter in the payoff matrix (3) as well as the coupling strength . The inverse of corresponds to the temperature in spin glass system, which can be explained as the noise (say, fake information or rumors etcetera) in the evolutionary PD game.

Figure 2 shows the performance of both methods. Similar to the results for asynchronous Ising case Zeng et al. (2013), the MSE for MLE is here. However, with the data length we tested here, nMF does not approach to a saturated value as happened for asynchronous Ising case. Instead, the reconstruction error MSE for nMF is about twice of that for MLE, see Fig. 2(a). This could be an indication for game network, nMF inference formula can be obtained also by taken the average of update times in MLE formula (10), which has been proved so in Zeng et al. (2013) for asynchronous Ising models. Figure 2(b) shows the MSE for both methods is not sensitive to the asymmetric degree , no matter the network is symmetric () or fully asymmetric (). Here, the data length with and . The difference between two methods is effected mainly by the data length . The MSE for MLE is about half of that for nMF approach, which agrees well with that shown in Fig. 2(a). Figure 2(c) shows the performance of MLE and nMF algorithm for different reward parameter in the payoff matrix (3), which is a specific parameter for game networks. Both algorithms recover the network better with larger reward . However, the proportion of MSE between nMF and MLE increases slightly for larger . Finally, the effects of inverse over network inference is illustrated in Fig. 2(d). For strong couplings, nMF works much worse than MLE. However, for , their performances are not effected by anymore. MLE still has its advantage of the factor 2 (conferred by the information of update times) compared with nMF.
To illustrate how the equilibrium inference method works for the non-equilibrium process, we use the naive mean-filed equilibrium inference Kappen and Spanjers (2000) algorithm which depends on the equal time correlations only:
(24) |
We refer this as equilibrium inference to avoid the confusion with the nMF method for asynchronous case we derived here. Then, we present the inferred networks through scatter-plots in Fig. 3 with both equilibrium and non-equilibrium method for evolutionary data generated by PD games.
Fig. 3(a) and (b) presents the reconstructed couplings for symmetric () and fully asymmetric () SK model respectively. Our asynchronous nMF (blue open dots) and MLE (red upper triangle) methods recover the couplings nicely while the equilibrium nMF (24) (open black squares) works much worse, even for , the symmetric case. It is not surprising as equilibrium nMF (24) we tested here does not take into account the game process at all. For the original asynchronous Ising model without game process, the equilibrium nMF should work as good as the non-equilibrium methods Zeng and Aurell (2020b) since they are for the same dynamics.

V Discussion
We studied the network reconstruction problem with asynchronous updated evolutionary game data. The gamblers play the PD game with their counterparts simultaneously but update their strategies asynchronously. Two methods (exact and iterative MLE and approximate nMF) are introduced to infer the fully connected and Gaussian distributed couplings between players.
Comparing with the original asynchronous Ising model Zeng et al. (2011, 2013), we combine the PD games with Glauber dynamics, where the rewards during the game for each player is considered. The MLE and nMF reconstruction formulae for asynchronous Ising models in Zeng et al. (2011, 2013) hence have to be generalized for the asynchronous evolutionary game case here. MLE is derived from the log-likelihood of the system while nMF from the derivative of the equations of motion of means and correlations. However, they are not independent with each other. MLE utilizes the full model histories, , and . The update histories in Eq. (10) could be averaged and yields the nMF inference formula (22) by taking the linear term of the averaged function.
With the derived MLE and nMF methods for asynchronously updates PD game, our numerical results show that the inference performance of nMF is comparable with that of MLE. Both methods are not sensitive to the asymmetric degree , but proportional to the inverse of data length. For fixed data length, the reconstruction error (MSE) of MLE is half (benefit from update times) of that by nMF method. Furthermore, nMF prefers small/weak coupling strength () while MLE holds for much wider range of . Both methods recover better for large payoff parameter . This is reasonable as when the rewards of each players equal to , then the external field Eq. (7) in the dynamics degenerates to the original asynchronous Ising model. With increases, the rewards of each players will also close to which show better performance of network reconstruction.
However, comparing with the data that MLE method used for inference, nMF needs only the parameter and the moments of strategy history . Furthermore, nMF infers much fast than MLE as no iterative procedure included for network inference. The scatter-plots in Fig. 1 and Fig. 3 show that the s are obviously comparable with s, even though the MSE for nMF is roughly twice of that for MLE. Thus, we expect that for the network inference with data from asynchronous PD game, nMF works well enough. The higher orders of tanh function (say, TAP) are not necessary.
Besides, we have tested a widely used equilibrium inference method (24) blindly for our dynamic data generated from evolutionary PD games, which recover much worse than that by non-equilibrium inference methods, even for symmetric interactions between players, as illustrated in Fig. 3(a). This bring us a message that if we have time series data generated from some certain dynamics, we should try non-equilibrium methods first. Furthermore, we start from the detailed balance condition to derive the asynchronous nMF formula for the network inference, but as shown by the numerical results, the inference formula holds for asymmetric case also where the system is not in the equilibrium realm any more.
This work focuses on a simple and idealized prisoner’s game with fully connected and Gaussian distributed interactions, which could be extended to certain type of structures or different distributions (say, random power-law, where strong couplings between few players while weak ones between most of them) for the couplings. Besides, we use a constant value of defection temptation which could be different for each players. Meanwhile, non-zero abut small in the payoff matrix (3) could also be tested. These will be studied by further work.
Appendix A Reformulating the external field
According to the payoff matrix (3) for prisoner’s dilemma game, the average reward that player will gain at time is (for simplicity, is assumed to be equal to 0):
(25) | |||||
The factor outside the bracket comes from the fact that players obtained rewards from the game with neighbors as well as the games originated from neighbors within each time step of synchronous PD games. Then in Eq. (7) is:
(26) |
with the cooperation density in the neighborhood of player :
(27) |
Hence we rewrite Eq. (26) as:
(28) |
with
(29a) | |||
(29b) |
and .
Thus, we reformulated eq. (7) as:
(30) |
Appendix B nMF approximation
Substituting with non-fluctuating and fluctuating terms as:
(31) |
and define
(32) |
then
expanding the function with respect to to the first order, we get:
(34) | |||||
where the second term on the right hand-side goes to zero. Hence we have naive mean-field approximation for as:
(35) |
which is the stationary solution of the time derivative of magnetization in equation (15a).
Acknowledgements.
We sincerely thank John Hertz, Erik Aurell and Pan Zhang for heuristic and fruitful discussions and suggestions. H.-L. Zeng was supported by NSFC Project No. 11705097. HLZ was also supported by Natural Science Foundation of Jiangsu Province (BK20170895), Natural Science Fund for Colleges and Universities in Jiangsu Province (17KJB140015), Jiangsu Government Scholarship for Overseas Studies of 2018, and Scientific Research Foundation of Nanjing University of Posts and Telecommunications (NY217013). S.-M. Qin was supported by Project NSFC No. 11705279.References
- Schneidman et al. (2006) E. Schneidman, M. J. Berry II, R. Segev, and W. Bialek, Nature 440, 1007 (2006).
- Nguyen et al. (2016) J. P. Nguyen, F. B. Shipley, A. N. Linder, G. S. Plummer, M. Liu, S. U. Setru, J. W. Shaevitz, and A. M. Leifer, Proceedings of the National Academy of Sciences 113, E1074 (2016).
- Roudi and Hertz (2011a) Y. Roudi and J. Hertz, Physical review letters 106, 048702 (2011a).
- Zeng and Aurell (2020a) H.-L. Zeng and E. Aurell, “Inferring genetic fitness from genomic data,” (2020a), arXiv:2001.02173 [q-bio.PE] .
- Pincus and Kalman (2004) S. Pincus and R. E. Kalman, Proceedings of the National Academy of Sciences 101, 13709 (2004), https://www.pnas.org/content/101/38/13709.full.pdf .
- Bury (2013) T. Bury, Physica A: Statistical Mechanics and its Applications 392, 1375 (2013).
- Zeng et al. (2014) H.-L. Zeng, R. Lemoy, and M. Alava, Journal of Statistical Mechanics: Theory and Experiment 2014, P07008 (2014).
- Masuda and Aihara (2003) N. Masuda and K. Aihara, Physics Letters, Section A: General, Atomic and Solid State Physics 313, 55 (2003).
- Szabó and Töke (1998) G. Szabó and C. Töke, Physical Review E 58, 69 (1998).
- Chen et al. (2007) Y. S. Chen, H. Lin, and C. X. Wu, Physica A: Statistical Mechanics and its Applications 385, 379 (2007).
- Kirkpatrick and Sherrington (1978) S. Kirkpatrick and D. Sherrington, Physical Review B 17, 4384 (1978).
- Nguyen et al. (2017) H. C. Nguyen, R. Zecchina, and J. Berg, Advances in Physics 66, 197 (2017).
- Aurell and Ekeberg (2012) E. Aurell and M. Ekeberg, Phys. Rev. Lett. 108, 090201 (2012).
- Cocco et al. (2009a) S. Cocco, S. Leibler, and R. Monasson, Proceedings of the National Academy of Sciences 106, 14058 (2009a).
- Weigt et al. (2009) M. Weigt, R. A. White, H. Szurmant, J. A. Hoch, and T. Hwa, Proceedings of the National Academy of Sciences 106, 67 (2009).
- Pillow et al. (2008) J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, E. Chichilnisky, and E. P. Simoncelli, Nature 454, 995 (2008).
- Roudi and Hertz (2011b) Y. Roudi and J. Hertz, Phys. Rev. Lett. 106, 048702 (2011b).
- Zhang (2012) P. Zhang, Journal of Statistical Physics 148, 502 (2012).
- Zeng et al. (2011) H.-L. Zeng, E. Aurell, M. Alava, and H. Mahmoudi, Physical Review E 83, 041135 (2011).
- Zeng et al. (2013) H.-L. Zeng, M. Alava, E. Aurell, J. Hertz, and Y. Roudi, Physical review letters 110, 210601 (2013).
- Roudi et al. (2009) Y. Roudi, S. Nirenberg, and P. E. Latham, PLoS computational biology 5, e1000380 (2009).
- Zeng and Aurell (2020b) H.-L. Zeng and E. Aurell, arXiv:2002.05222 (2020b).
- Cocco et al. (2009b) S. Cocco, S. Leibler, and R. Monasson, Proceedings of the National Academy of Sciences 106, 14058 (2009b).
- Nowak and May (1992) M. A. Nowak and R. M. May, Nature 359, 826 (1992).
- Huberman and Glance (1993) B. A. Huberman and N. S. Glance, Proceedings of the National Academy of Sciences 90, 7716 (1993).
- Greil et al. (2007) F. Greil, B. Drossel, and J. Sattler, New Journal of Physics 9, 373 (2007).
- Zeng et al. (2020) H.-L. Zeng, C.-P. Zhu, S.-X. Wang, Y.-D. Guo, Z.-M. Gu, and C.-K. Hu, Physica A: Statistical Mechanics and its Applications 540, 123191 (2020).
- Wang et al. (2016) W.-X. Wang, Y.-C. Lai, and C. Grebogi, Physics Reports 644, 1 (2016).
- Wang et al. (2011) W.-X. Wang, Y.-C. Lai, C. Grebogi, and J. Ye, Phys. Rev. X 1, 021021 (2011).
- Foucart and Rauhut (2017) S. Foucart and H. Rauhut, Bull. Am. Math 54, 151 (2017).
- Guo et al. (2016) Q. Guo, G. Liang, J.-Q. Fu, J.-T. Han, and J.-G. Liu, Physical Review E 94, 052303 (2016).
- Wu et al. (2016) K. Wu, J. Liu, and S. Wang, Scientific reports 6, 37771 (2016).
- Steinke et al. (2007) F. Steinke, M. Seeger, and K. Tsuda, BMC systems biology 1, 51 (2007).
- Chang et al. (2014) Y. H. Chang, J. W. Gray, and C. J. Tomlin, BMC bioinformatics 15, 400 (2014).
- Li et al. (2017) J. Li, Z. Shen, W.-X. Wang, C. Grebogi, and Y.-C. Lai, Phys. Rev. E 95, 032303 (2017).
- Glauber (1963) R. J. Glauber, Journal of mathematical physics 4, 294 (1963).
- Van Kampen (1992) N. G. Van Kampen, Stochastic processes in physics and chemistry, Vol. 1 (Elsevier, 1992).
- Kappen and Spanjers (2000) H. J. Kappen and J. J. Spanjers, Phys. Rev. E 61, 5658 (2000).