
Score-Based Equilibrium Learning in Multi-Player Finite Games with Imperfect Information

Runyu Lu$^{1}$, Yuanheng Zhu$^{2}$, Dongbin Zhao$^{3}$
Abstract

Real-world games, which involve imperfect information, multiple players, and simultaneous moves, receive comparatively little attention in the existing game theory literature. While reinforcement learning (RL) provides a general framework for extending game-theoretic algorithms, the assumptions that guarantee their convergence towards Nash equilibria may no longer hold in real-world games. Starting from the definition of the Nash distribution, we construct a continuous-time dynamic named imperfect-information exponential-decay score-based learning (IESL) to find approximate Nash equilibria in games with the above-mentioned features. Theoretical analysis demonstrates that IESL yields equilibrium-approaching policies in imperfect information simultaneous games under the basic assumption of concavity. Experimental results show that IESL manages to find approximate Nash equilibria in four canonical poker scenarios and significantly outperforms three other representative algorithms in 3-player Leduc poker, manifesting its equilibrium-finding ability even in practical sequential games. Furthermore, related to the concept of game hypomonotonicity, a trade-off between the convergence of the IESL dynamic and the ultimate NashConv of the convergent policies is observed from the perspectives of both theory and experiment.

$^{1}$School of Artificial Intelligence, University of Chinese Academy of Sciences. $^{2,3}$State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences.

1 Introduction

1.1 Background

Allowing players to bluff and probe, imperfect information games have become an interesting topic in recent research involving both game theory and machine learning. This class of games covers a wider range of real-life problems, such as poker (see [24][6]), and is considerably harder to solve than its perfect information counterpart. Despite this complexity, research in game theory (see [29][25]) has established various equilibrium-finding dynamics that have shown the potential to solve large-scale imperfect information games with the help of deep reinforcement learning (DRL).

As two typical equilibrium-learning dynamics, fictitious play (FP) [4] and regret matching (RM) [13] yield average policies that converge to Nash equilibria in two-player zero-sum games (see [28][33]). In sequential imperfect information games, they can be extended into extensive-form fictitious play (XFP) [14] and counterfactual regret minimization (CFR) [36], respectively. Using DRL techniques, Neural Fictitious Self-Play (NFSP) [15] and Policy-Space Response Oracles (PSRO) [20] further extend FP to large-scale games, while Deep CFR [5] extends Monte Carlo CFR [19] in a similar manner. As a score-based learning dynamic originating from evolutionary game theory [34][17], Follow the Regularized Leader (FTRL) [22] generalizes replicator dynamics (RD) [32] and likewise guarantees average-policy convergence to Nash equilibria in zero-sum games. NeuRD [16] further extends RD to imperfect information games via policy-based reinforcement learning.

1.2 Recent Works

Although the methods above are capable of finding Nash equilibria in zero-sum scenarios, maintaining average policies can be difficult when function approximators like neural networks are introduced. Perolat et al. [27] showed that the instantaneous policy in FTRL may cycle in imperfect information games and further proposed an algorithm with guaranteed last-iterate convergence towards exact Nash equilibrium in zero-sum or monotone games. Borrowing the one-step policy-update idea from NeuRD, this algorithm, named Regularized Nash Dynamics (R-NaD), was combined with DRL to build DeepNash [26], which achieved human expert-level performance in Stratego.

While R-NaD applies reward regularization to ensure the convergence of instantaneous policies, another route is to study a broader class of score-based learning dynamics. Gao et al. [11] showed that a modified score function, whose integral expression contains an exponentially decaying term, induces instantaneous policies that approach Nash distributions in normal-form games characterized by hypomonotonicity. The corresponding dynamic, EXP-D-RL, was generalized to Markov games by Zhu et al. [35]. In this paper, by proposing the continuous-time dynamic named imperfect-information exponential-decay score-based learning (IESL), we show that a proper score construction can give rise to direct policy convergence towards approximate Nash equilibrium even in multi-player imperfect information games, both theoretically and empirically.

1.3 Motivations & Contributions

While the above-mentioned methods for dealing with imperfect information rely heavily on the formulation of extensive-form games, which requires players to make decisions in turn, many real-world games (even real-time video games) allow players to move simultaneously. Since reinforcement learning is primarily built upon Markov decision processes (MDPs), which are characterized by simultaneous moves, constructing learning dynamics in the language of RL naturally handles this problem and allows further extensions with theoretical guarantees. While partially observable stochastic games (POSGs) [12] directly extend MDPs to imperfect information games, existing research on POSGs usually requires additional strong assumptions when constructing equilibrium-learning algorithms (see PO-POSG [18]). In this paper, we restate the POSG as a general framework for describing imperfect information simultaneous games and show that IESL yields approximate Nash equilibria with a minimum of further restrictions on the game. While our theoretical analysis is based on this POSG-like framework, IESL can be readily implemented in sequential games like poker and practically compared with algorithms like CFR.

The contributions of this paper include:

  • We theoretically analyze the relationships among the two variants of the Nash distribution, the rest points of IESL, and approximate Nash equilibrium in finite games with imperfect information and simultaneous moves. An important property is that every rest point of IESL induces an approximate Nash equilibrium when the utility is concave.

  • We theoretically demonstrate that IESL rest points attract other points in nearby regions restricted by policy hypomonotonicity. Since we have proved that the rest points of IESL yield approximate Nash equilibria, a novel convergence result with regard to Nash equilibrium in multi-player imperfect information games is established.

  • We empirically verify that the instantaneous policies in IESL do converge to approximate Nash equilibria in multi-player poker games. In fact, IESL significantly outperforms CFR, FP, and RD in moderate-scale 3-player Leduc poker.

  • From both theory and experiment, we observe that a trade-off exists between the convergence of the IESL dynamic and the NashConv of the convergent policies. This induces an efficient parameter selection method when we seek the lowest NashConv of a convergent policy.

2 Preliminaries

2.1 Game Description & Useful Notations

To describe general finite games with imperfect information and simultaneous moves, we first restate POSG under the restriction of a finite horizon. (See Appendix A.2 for a more detailed description.)

Definition 1.

An $n$-player finite-horizon imperfect information game with simultaneous moves is a tuple $\left\langle N,S,Z,A,Tr,O,\{R^{i}\},\rho,\gamma,T\right\rangle$. $N=\{1,2,\cdots,n\}$ is the set of players. $S$ is the set of global states $s$. $Z$ is the set of joint observations with $\vec{z}=(z^{i})_{i\in N}\in Z$. $A$ is the set of actions with $\vec{a}=(a^{i})_{i\in N}\in A^{n}$. $Tr$ is the tuple of state transition distributions that generate subsequent states with $s_{t+1}\sim Tr(s_{t},\vec{a}_{t})$. $O$ is the tuple of joint observation distributions that generates observations with $\vec{z}_{t+1}\sim O(s_{t},\vec{a}_{t},s_{t+1})$. $R^{i}$ is the tuple of reward functions that generate player $i$'s instant rewards with $r_{t+1}^{i}=R^{i}(s_{t},\vec{a}_{t},s_{t+1})$. $\rho$ is the initial state distribution that generates the initial state $s_{0}$ with $s_{0}\sim\rho$. $\gamma\in(0,1]$ is the discount factor. $T$ is the termination time.

In addition to the basic elements in the definition above, some extra notations are useful in analysis.

Borrowing an important idea from extensive-form games, we can use trajectories to describe an arbitrary situation in a POSG. A global trajectory at step $t$ ($t<T$) can be expressed as a history $h_{t}=\left\{s_{0},\vec{z}_{0},\vec{a}_{0},\cdots,s_{t-1},\vec{z}_{t-1},\vec{a}_{t-1},s_{t},\vec{z}_{t}\right\}$. In comparison, a local trajectory for player $i$ can be expressed as a reduced version $x_{t}^{i}=\left\{z_{0}^{i},a_{0}^{i},\cdots,z_{t-1}^{i},a_{t-1}^{i},z_{t}^{i}\right\}$ that corresponds to an information set defined over all possible $h_{t}$, as in extensive-form games. We write $h_{t}\in x_{t}^{i}$ if $h_{t}$ belongs to the corresponding information set of $x_{t}^{i}$, or, equivalently, if $x_{t}^{i}$ is compatible with $h_{t}$ in reduction. For $k\geq t$, we simply write $h_{k}\sqsupset h_{t}$ if $h_{t}$ is compatible with $h_{k}$, and $x_{k}^{i}\sqsupset x_{t}^{i}$ if $x_{t}^{i}$ is compatible with $x_{k}^{i}$.

A policy is the core concept when we analyze a single-player decision process or a multi-player game. In an imperfect information simultaneous game, player $i$'s policy $\pi^{i}$ at each local trajectory (information set) $x^{i}$ is a probability distribution over $A$, assigning each possible action $a^{i}$ a probability $\pi^{i}(x^{i},a^{i})$. The set of all complete policies for player $i$ is expressed as $\Pi^{i}$. We use the joint policy $\vec{\pi}=(\pi^{i})_{i\in N}$ to denote a combination of all players' policies, and $\vec{\pi}^{-i}$ to denote a combination of all players' policies except player $i$'s.

Following single-agent Markov decision processes, we can define global-trajectory-based value functions $V_{\vec{\pi}}^{i}(h_{t})=\mathbb{E}\left[\sum\nolimits_{k=t}^{T-1}\gamma^{k-t}r_{k+1}^{i}\,\middle|\,h_{t},\vec{\pi}\right]$ and $Q_{\vec{\pi}}^{i}(h_{t},a_{t}^{i})=\mathbb{E}\left[r_{t+1}^{i}+\gamma V_{\vec{\pi}}^{i}(h_{t+1})\,\middle|\,h_{t},a_{t}^{i},\vec{\pi}\right]$, as well as the advantage function $A_{\vec{\pi}}^{i}(h_{t},a_{t}^{i})=Q_{\vec{\pi}}^{i}(h_{t},a_{t}^{i})-V_{\vec{\pi}}^{i}(h_{t})$.

2.2 Nash Equilibrium & Nash Distribution

By constructing utility functions in different forms, various rest points in policy space can be defined. Nash equilibrium is among the most well-studied classes of rest points, reflecting a stable condition with respect to individual payoff maximization.

Definition 2.

A Nash equilibrium $\vec{\pi}_{nash}$ in an $n$-player finite-horizon imperfect information game with simultaneous moves is described using the utility function $u^{i}(\vec{\pi})=\mathbb{E}_{h_{0}}\left[V_{\vec{\pi}}^{i}(h_{0})\right]$. For any $i\in N$ and for any possible policy $\pi^{i}\in\Pi^{i}$, it holds that:

\[
u^{i}(\vec{\pi}_{nash})\geq u^{i}(\pi^{i},\vec{\pi}_{nash}^{-i})
\]

NashConv is a commonly used measure of the closeness of an arbitrary policy $\vec{\pi}$ to a Nash equilibrium: $\mathrm{NashConv}(\vec{\pi})=\sum_{i=1}^{n}\left(\max_{\pi_{\dagger}^{i}\in\Pi^{i}}\left\{u^{i}(\pi_{\dagger}^{i},\vec{\pi}^{-i})\right\}-u^{i}(\vec{\pi})\right)$. An exact Nash equilibrium has zero NashConv, and its computation in non-zero-sum games is believed to be computationally hard (see [8]). Therefore, it is reasonable to consider approximate Nash equilibria in multi-player games.
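To make the metric concrete, the following minimal sketch computes NashConv given two hypothetical helpers, `utility(policies, i)` and `best_response_value(policies, i)` (standing in for $u^{i}(\vec{\pi})$ and $\max_{\pi_{\dagger}^{i}}u^{i}(\pi_{\dagger}^{i},\vec{\pi}^{-i})$); both names are placeholders for game-specific evaluation routines, not part of the paper's formalism.

```python
from typing import Callable, Sequence

def nash_conv(
    policies: Sequence,                                     # joint policy (pi^1, ..., pi^n)
    utility: Callable[[Sequence, int], float],              # u^i(pi)
    best_response_value: Callable[[Sequence, int], float],  # max over pi'^i of u^i(pi'^i, pi^{-i})
) -> float:
    """Sum of per-player exploitabilities; zero exactly when `policies` is an exact Nash equilibrium."""
    total = 0.0
    for i in range(len(policies)):
        total += best_response_value(policies, i) - utility(policies, i)
    return total
```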

Definition 3.

An $\epsilon^{\prime}$-Nash equilibrium $\vec{\pi}_{nash(\epsilon^{\prime})}$ in an $n$-player finite-horizon imperfect information game with simultaneous moves is described using the utility function $u^{i}(\vec{\pi})=\mathbb{E}_{h_{0}}\left[V_{\vec{\pi}}^{i}(h_{0})\right]$. For any $i\in N$ and for any possible policy $\pi^{i}\in\Pi^{i}$, it holds that:

\[
u^{i}(\pi^{i},\vec{\pi}_{nash(\epsilon^{\prime})}^{-i})-u^{i}(\vec{\pi}_{nash(\epsilon^{\prime})})\leq\epsilon^{\prime}
\]

In the following paragraphs, we refer to the infimum of $\epsilon^{\prime}$ as the deviation. Simply by definition, an approximate Nash equilibrium has NashConv upper-bounded by its deviation multiplied by $n$. As finding even an approximate Nash equilibrium is proved to be PPAD-complete [7], constructing dynamics directly related to this concept is still non-trivial. The Nash distribution [21] describes another rest point class with utility regularization, which is helpful in building convergent learning dynamics.
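Since each summand in NashConv is exactly the left-hand side of the $\epsilon^{\prime}$-Nash condition maximized over $\pi^{i}$, the bound mentioned above follows in one line:
\[
\mathrm{NashConv}(\vec{\pi}_{nash(\epsilon^{\prime})})
=\sum_{i=1}^{n}\left(\max_{\pi_{\dagger}^{i}\in\Pi^{i}}u^{i}(\pi_{\dagger}^{i},\vec{\pi}_{nash(\epsilon^{\prime})}^{-i})-u^{i}(\vec{\pi}_{nash(\epsilon^{\prime})})\right)
\leq\sum_{i=1}^{n}\epsilon^{\prime}=n\epsilon^{\prime}.
\]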

Definition 4.

A Nash distribution $\vec{\pi}_{*}$ characterized by a non-negative parameter $\epsilon$ in an $n$-player finite-horizon imperfect information game with simultaneous moves is described using the regularized utility function $\tilde{u}^{i}(\vec{\pi})=\mathbb{E}_{h_{0}}\left[V_{\vec{\pi}}^{i}(h_{0})\right]-\epsilon\sum_{t=0}^{T-1}\gamma^{t}\sum_{x_{t}^{i}}\left(H(\pi^{i}(x_{t}^{i}))\sum_{h_{t}\in x_{t}^{i}}\Pr\left(h_{t}|\vec{\pi}\right)\right)$, where $H(\pi^{i}(x_{t}^{i}))=\sum_{a_{t}^{i}}\pi^{i}(x_{t}^{i},a_{t}^{i})\log\pi^{i}(x_{t}^{i},a_{t}^{i})$ is the negative Gibbs entropy. For any $i\in N$ and for any possible policy $\pi^{i}\in\Pi^{i}$, it holds that:

\[
\tilde{u}^{i}(\vec{\pi}_{*})\geq\tilde{u}^{i}(\pi^{i},\vec{\pi}_{*}^{-i})\tag{1}
\]

In Definition 4, the expression of $\tilde{u}^{i}(\vec{\pi})$ contains the same expected-value term $\mathbb{E}_{h_{0}}\left[V_{\vec{\pi}}^{i}(h_{0})\right]$ as in the definition of Nash equilibrium. Alternatively, we can write $\tilde{u}^{i}(\vec{\pi})=\mathbb{E}_{h_{0}}\left[\tilde{V}_{\vec{\pi}}^{i}(h_{0})\right]$, where $\tilde{V}$ is the value defined under the regularized reward function $\tilde{R}^{x_{t}^{i}}(s_{t},\vec{a}_{t},s_{t+1})=R^{i}(s_{t},\vec{a}_{t},s_{t+1})-\epsilon H(\pi^{i}(x_{t}^{i}))$. This yields the same utility function as in the formal definition above.

Apparently, when we set $\epsilon^{\prime}=0$ for approximate Nash equilibrium and $\epsilon=0$ for Nash distribution, the two concepts coincide and both describe exact Nash equilibrium. However, because the expression of $\tilde{u}^{i}(\vec{\pi})$ contains a reaching probability $\Pr\left(h_{t}|\vec{\pi}\right)$ determined by the policy $\vec{\pi}$ itself, the deviation of a Nash distribution is not tightly bounded. Besides, the probability term makes it hard to prove the relationship between a Nash distribution and the rest point of a learning dynamic in an imperfect information game. To make practical use of this concept, we consider modifying the original definition of the Nash distribution. In Section 3, two variants of the Nash distribution are proposed, and they eventually serve to construct our equilibrium-finding dynamic.

3 Dynamic Construction

3.1 Variants of Nash Distribution

Definition 5.

Characterized by a non-negative parameter $\epsilon$, a Nash distribution with modification-L is a policy $\vec{\pi}_{L}$. For any $i\in N$ and for any possible policy $\pi^{i}\in\Pi^{i}$, it holds that:

\[
\begin{aligned}
&u^{i}(\vec{\pi}_{L})-\epsilon\sum_{t=0}^{T-1}\gamma^{t}\sum_{x_{t}^{i}}\left(H(\pi_{L}^{i}(x_{t}^{i}))\sum_{h_{t}\in x_{t}^{i}}\Pr\left(h_{t}|\vec{\pi}_{L}\right)\right)\geq\\
&u^{i}(\pi^{i},\vec{\pi}_{L}^{-i})-\epsilon\sum_{t=0}^{T-1}\gamma^{t}\sum_{x_{t}^{i}}\left(H(\pi^{i}(x_{t}^{i}))\sum_{h_{t}\in x_{t}^{i}}\Pr\left(h_{t}|\vec{\pi}_{L}\right)\right)
\end{aligned}
\]

where $u^{i}(\vec{\pi})=\mathbb{E}_{h_{0}}\left[V_{\vec{\pi}}^{i}(h_{0})\right]$.

Note that modification-L replaces the RHS probability $\Pr\left(h_{t}|\pi^{i},\vec{\pi}_{L}^{-i}\right)$ in the original definition of the Nash distribution (1) with the same $\Pr\left(h_{t}|\vec{\pi}_{L}\right)$ term in the LHS. Through transposition, we have:

\[
u^{i}(\pi^{i},\vec{\pi}_{L}^{-i})-u^{i}(\vec{\pi}_{L})\leq\epsilon\sum_{t=0}^{T-1}\gamma^{t}\sum_{x_{t}^{i}}\left(\left(H(\pi^{i}(x_{t}^{i}))-H(\pi_{L}^{i}(x_{t}^{i}))\right)\sum_{h_{t}\in x_{t}^{i}}\Pr\left(h_{t}|\vec{\pi}_{L}\right)\right)
\]

Since $H(\cdot)$ is the negative Gibbs entropy, $H(\pi^{i}(x_{t}^{i}))-H(\pi_{L}^{i}(x_{t}^{i}))$ has an upper bound $\log|A|$. Since the probability $\Pr\left(h_{t}|\vec{\pi}_{L}\right)$ sums up to $1$ over all global trajectories $h_{t}$ with the same $t$, the deviation of $\vec{\pi}_{L}$ is bounded by $\epsilon\log\left|A\right|\sum\nolimits_{k=0}^{T-1}\gamma^{k}$.
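Spelling these two observations out in one chain (the entropy-difference bound, followed by the fact that each $h_{t}$ reduces to exactly one $x_{t}^{i}$):
\[
u^{i}(\pi^{i},\vec{\pi}_{L}^{-i})-u^{i}(\vec{\pi}_{L})
\leq\epsilon\sum_{t=0}^{T-1}\gamma^{t}\sum_{x_{t}^{i}}\log|A|\sum_{h_{t}\in x_{t}^{i}}\Pr\left(h_{t}|\vec{\pi}_{L}\right)
=\epsilon\log|A|\sum_{t=0}^{T-1}\gamma^{t}.
\]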

Definition 6.

Characterized by a non-negative parameter $\epsilon$, a Nash distribution with modification-R is a policy $\vec{\pi}_{R}$. For any $i\in N$ and for any possible policy $\pi^{i}\in\Pi^{i}$, it holds that:

\[
\begin{aligned}
&u^{i}(\vec{\pi}_{R})-\epsilon\sum_{t=0}^{T-1}\gamma^{t}\sum_{x_{t}^{i}}\left(H(\pi_{R}^{i}(x_{t}^{i}))\sum_{h_{t}\in x_{t}^{i}}\Pr\left(h_{t}|\pi^{i},\vec{\pi}_{R}^{-i}\right)\right)\geq\\
&u^{i}(\pi^{i},\vec{\pi}_{R}^{-i})-\epsilon\sum_{t=0}^{T-1}\gamma^{t}\sum_{x_{t}^{i}}\left(H(\pi^{i}(x_{t}^{i}))\sum_{h_{t}\in x_{t}^{i}}\Pr\left(h_{t}|\pi^{i},\vec{\pi}_{R}^{-i}\right)\right)
\end{aligned}
\]

where $u^{i}(\vec{\pi})=\mathbb{E}_{h_{0}}\left[V_{\vec{\pi}}^{i}(h_{0})\right]$.

Note that modification-R replaces the LHS probability $\Pr\left(h_{t}|\vec{\pi}_{R}\right)$ in the original definition of the Nash distribution (1) with the same $\Pr\left(h_{t}|\pi^{i},\vec{\pi}_{R}^{-i}\right)$ term in the RHS. Similarly, we can show that the deviation of $\vec{\pi}_{R}$ is bounded by $\epsilon\log\left|A\right|\sum\nolimits_{k=0}^{T-1}\gamma^{k}$, as a common reaching probability is used on both sides to construct the policy-entropy regularization terms.

The two variants of the Nash distribution guide us to build a related learning dynamic whose rest points also yield policies with a bounded deviation.

3.2 Main Dynamic

Our main dynamic, named imperfect-information exponential-decay score-based learning (IESL), is expressed by (2), (4), and (5). Note that both the score $\vec{y}$ and the policy $\vec{\pi}$ are functions of a continuous time variable $t$, and the dot over $\vec{y}$ denotes its derivative with respect to $t$. Since the game description has used the same variable as the discrete timestep, to avoid ambiguity, we will not explicitly write out the subscript $t$ in IESL unless necessary.

\[
\left\{\begin{aligned}
\dot{\vec{y}}&=\eta\left(\vec{w}(\vec{\pi})-\vec{y}\right)\\
\vec{\pi}&=\vec{\sigma}(\vec{y})
\end{aligned}\right.\tag{2}
\]

In comparison to replicator dynamics, where $\dot{\vec{y}}=\vec{w}(\vec{\pi})$, IESL introduces an exponential decay in the integral form of the score:

\[
y_{t}^{i}(x^{i},a^{i})=e^{-\eta t}y_{0}^{i}(x^{i},a^{i})+\eta\int_{0}^{t}e^{-\eta(t-\tau)}w^{i}(\vec{\pi}_{\tau})(x^{i},a^{i})\,d\tau\tag{3}
\]

In the context of imperfect information games, $\vec{w}(\cdot)$ in (2) can be viewed as a vectorized local advantage function from the perspective of each individual player. To be specific, $w^{i}(\cdot)$ averages globally defined advantage values over all histories $h^{i}$ that belong to a given information set $x^{i}$:

\[
w^{i}(\vec{\pi})(x^{i},a^{i})=\frac{\sum_{h^{i}\in x^{i}}\Pr\left(h^{i}|\vec{\pi}\right)A_{\vec{\pi}}^{i}(h^{i},a^{i})}{\sum_{h^{i}\in x^{i}}\Pr\left(h^{i}|\vec{\pi}\right)}\tag{4}
\]

The regularized policy choice function $\vec{\sigma}(\cdot)$ in (2) maps the score $\vec{y}$ to a corresponding policy $\vec{\pi}$:

\[
\sigma^{i}(y^{i})(x^{i})=\underset{\pi^{i}(x^{i})\in\Pi^{i}(x^{i})}{\arg\max}\left\{\sum_{a^{i}}\pi^{i}(x^{i},a^{i})y^{i}(x^{i},a^{i})-\epsilon H(\pi^{i}(x^{i}))\right\}\tag{5}
\]

Since $H$ is the negative Gibbs entropy, $\vec{\sigma}(\cdot)$ has a closed-form expression, which is known as softmax:

\[
\sigma^{i}(y^{i})(x^{i},a^{i})=\frac{\exp\left(\frac{1}{\epsilon}y^{i}(x^{i},a^{i})\right)}{\sum_{b}\exp\left(\frac{1}{\epsilon}y^{i}(x^{i},b)\right)}\tag{6}
\]

Note that it generates the pure strategy with the largest score when $\epsilon\to 0^{+}$ (fully exploit) and the uniformly random policy when $\epsilon\to+\infty$ (fully explore). Also note that $\epsilon H(\cdot)$ is an $\epsilon$-strongly convex penalty function (see [23], Section 3.2), which is essential in the subsequent convergence analysis of IESL.

We can see that in (3), the exponentially decaying factor leads to a provably bounded score. On the one hand, $\vec{w}(\cdot)$ is naturally bounded since the game is finite. On the other hand, $\int_{0}^{t}e^{-\eta(t-\tau)}d\tau=\frac{1}{\eta}\left.e^{-\eta(t-\tau)}\right|_{0}^{t}=\frac{1}{\eta}(1-e^{-\eta t})\in[0,\frac{1}{\eta}]$. The basic property of integrals shows that $\vec{y}$ is bounded.

Besides, since $\vec{\sigma}(\cdot)$ is continuous, $\vec{w}(\vec{\sigma}(\cdot))$ is also continuous. Therefore, a fixed point of $\vec{w}(\vec{\sigma}(\cdot))$ always exists by Brouwer's fixed-point theorem [10]. The existence of a rest point of (2) in score space is thus guaranteed. In Section 3.3, we will further show that any Nash distribution with either modification described in Section 3.1 induces a rest point with regard to IESL.
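As an illustration only (not the implementation used in our experiments; see Appendix D.1 for the actual pseudocode), the sketch below applies an Euler discretization of (2), (5), and (6) to a toy two-player normal-form game, where the local advantage $w^{i}$ reduces to the payoff of each action minus the current expected payoff. The function names, the payoff matrices, and the constants `eps`, `eta`, `dt` are all illustrative choices.

```python
import numpy as np

def softmax(y, eps):
    """Choice map sigma in (6): softmax of the score with temperature eps."""
    z = np.exp((y - y.max()) / eps)   # subtract the max for numerical stability
    return z / z.sum()

def local_advantage(payoff, pi_self, pi_other):
    """w^i for a one-shot game: action values minus the current expected value."""
    q = payoff @ pi_other             # Q^i(a) = E_{a^-i ~ pi_other}[R^i(a, a^-i)]
    return q - pi_self @ q            # A^i(a) = Q^i(a) - V^i

# Toy zero-sum game (matching pennies); rows index player 1's actions.
R1 = np.array([[1.0, -1.0], [-1.0, 1.0]])
R2 = -R1

eps, eta, dt, steps = 0.1, 1.0, 0.05, 4000
y1, y2 = np.zeros(2), np.zeros(2)     # initial scores

for _ in range(steps):
    pi1, pi2 = softmax(y1, eps), softmax(y2, eps)
    w1 = local_advantage(R1, pi1, pi2)
    w2 = local_advantage(R2.T, pi2, pi1)
    # Euler step of y_dot = eta * (w(pi) - y) from (2).
    y1 += eta * dt * (w1 - y1)
    y2 += eta * dt * (w2 - y2)

# Both policies are expected to approach the uniform equilibrium (0.5, 0.5).
print(softmax(y1, eps), softmax(y2, eps))
```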

3.3 Intrinsic Relationship

As we have mentioned, the construction of the IESL dynamic is derived from the definitions of the two variants of the Nash distribution. Once we relate the two concepts to the rest point of our learning dynamic, the direct bound on their deviation may suggest a similar bound on that of the corresponding policy at a rest point in IESL.

A value discrepancy arising from a policy difference can be decomposed as a sum of per-timestep advantages (see [30]). Since the property derived from single-agent Markov decision processes is quite useful in analysis, we first introduce it into multi-player imperfect information games.

Lemma 1.

Suppose a policy $\pi_{\dagger}^{i}$ that differs from $\pi^{i}$ for player $i$. Then the policy difference yields:

\[
V_{\pi_{\dagger}^{i},\vec{\pi}^{-i}}^{i}(h_{t})-V_{\vec{\pi}}^{i}(h_{t})=\mathbb{E}\left[\sum_{k=t}^{T-1}\gamma^{k-t}A_{\vec{\pi}}^{i}(h_{k},a_{k}^{i})\,\middle|\,h_{t},\pi_{\dagger}^{i},\vec{\pi}^{-i}\right]
\]
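As a reading aid, the telescoping step at the heart of this identity can be compressed as follows (using $V_{\vec{\pi}}^{i}(h_{T})=0$, which follows from the definition of $V$):
\[
\begin{aligned}
\mathbb{E}\left[\sum_{k=t}^{T-1}\gamma^{k-t}A_{\vec{\pi}}^{i}(h_{k},a_{k}^{i})\,\middle|\,h_{t},\pi_{\dagger}^{i},\vec{\pi}^{-i}\right]
&=\mathbb{E}\left[\sum_{k=t}^{T-1}\gamma^{k-t}\left(r_{k+1}^{i}+\gamma V_{\vec{\pi}}^{i}(h_{k+1})-V_{\vec{\pi}}^{i}(h_{k})\right)\,\middle|\,h_{t},\pi_{\dagger}^{i},\vec{\pi}^{-i}\right]\\
&=\mathbb{E}\left[\sum_{k=t}^{T-1}\gamma^{k-t}r_{k+1}^{i}\,\middle|\,h_{t},\pi_{\dagger}^{i},\vec{\pi}^{-i}\right]-V_{\vec{\pi}}^{i}(h_{t})
=V_{\pi_{\dagger}^{i},\vec{\pi}^{-i}}^{i}(h_{t})-V_{\vec{\pi}}^{i}(h_{t}),
\end{aligned}
\]
where the first step expands $A_{\vec{\pi}}^{i}$ by its definition (the other players' actions and the state transition at step $k$ are distributed identically under $\vec{\pi}$ and $(\pi_{\dagger}^{i},\vec{\pi}^{-i})$ given $h_{k}$ and $a_{k}^{i}$), and the second step telescopes the value terms.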

Two proofs for Lemma 1 are provided in Appendix B.1 & B.2. The first proof is concise, while the second proof is more detailed by showing the exact process of state transition and reward generation with the basic elements in Definition 1.

Using the property of policy difference, we can prove an important inequality for both of the two variants of the Nash distribution.

Lemma 2.

Given a Nash distribution $\vec{\pi}_{L}$ with modification-L, the following inequality holds for any policy $\pi^{i}$ and information set $x_{t}^{i}$ related to player $i$:

\[
\begin{aligned}
&\sum_{a_{t}^{i}}\pi_{L}^{i}(x_{t}^{i},a_{t}^{i})\sum_{h_{t}\in x_{t}^{i}}\Pr\left(h_{t}|\vec{\pi}_{L}\right)A_{\vec{\pi}_{L}}^{i}(h_{t},a_{t}^{i})-\epsilon H(\pi_{L}^{i}(x_{t}^{i}))\sum_{h_{t}\in x_{t}^{i}}\Pr\left(h_{t}|\vec{\pi}_{L}\right)\\
\geq\ &\sum_{a_{t}^{i}}\pi^{i}(x_{t}^{i},a_{t}^{i})\sum_{h_{t}\in x_{t}^{i}}\Pr\left(h_{t}|\vec{\pi}_{L}\right)A_{\vec{\pi}_{L}}^{i}(h_{t},a_{t}^{i})-\epsilon H(\pi^{i}(x_{t}^{i}))\sum_{h_{t}\in x_{t}^{i}}\Pr\left(h_{t}|\vec{\pi}_{L}\right)
\end{aligned}
\]
Lemma 3.

Given a Nash distribution $\vec{\pi}_{R}$ with modification-R, the following inequality holds for any policy $\pi^{i}$ and information set $x_{t}^{i}$ related to player $i$:

\[
\begin{aligned}
&\sum_{a_{t}^{i}}\pi_{R}^{i}(x_{t}^{i},a_{t}^{i})\sum_{h_{t}\in x_{t}^{i}}\Pr\left(h_{t}|\vec{\pi}_{R}\right)A_{\vec{\pi}_{R}}^{i}(h_{t},a_{t}^{i})-\epsilon H(\pi_{R}^{i}(x_{t}^{i}))\sum_{h_{t}\in x_{t}^{i}}\Pr\left(h_{t}|\vec{\pi}_{R}\right)\\
\geq\ &\sum_{a_{t}^{i}}\pi^{i}(x_{t}^{i},a_{t}^{i})\sum_{h_{t}\in x_{t}^{i}}\Pr\left(h_{t}|\vec{\pi}_{R}\right)A_{\vec{\pi}_{R}}^{i}(h_{t},a_{t}^{i})-\epsilon H(\pi^{i}(x_{t}^{i}))\sum_{h_{t}\in x_{t}^{i}}\Pr\left(h_{t}|\vec{\pi}_{R}\right)
\end{aligned}
\]

Formal proofs for Lemma 2 and Lemma 3 are provided in Appendix B.3 & B.4.

With this inequality, we can instantly observe a relationship between the two variants of the Nash distribution and the rest points (specifically, fixed points with regard to $\vec{w}(\vec{\sigma}(\cdot))$) in IESL.

Theorem 1.

Every Nash distribution $\vec{\pi}_{*}$ with either modification-L or modification-R induces a rest point $\vec{y}_{*}=\vec{w}(\vec{\pi}_{*})$ in IESL.

Although the proof for Theorem 1 can be completed directly using the definition of IESL, we still leave it in Appendix B.5 to save space.

While the two variants of Nash distribution yield approximate Nash equilibria and IESL rest points by definition, we need a further assumption on the concavity of the game to bound the deviation of the policies of the IESL rest points. By introducing the concept of local hypomonotonicity, we will also show how IESL yields convergent policies with bounded deviation in both theory (see Section 4) and practice (see Section 5).

4 Convergence Analysis

4.1 Deviation Bound for IESL Rest Point

Concavity (or convexity) is a basic assumption commonly used in game theory (see [9][3]) and general optimization (see [2]). Now we first show that under the concavity of the individual utility functions, every rest point of IESL induces an approximate Nash equilibrium.

Theorem 2.

Assume that for any player $i$ and any fixed $\vec{\pi}^{-i}$, the individual utility function $u^{i}(\pi^{i},\vec{\pi}^{-i})$ is concave over $\Pi^{i}$. Then every rest point $\vec{y}_{\epsilon}$ in IESL induces an $\epsilon^{\prime}$-Nash equilibrium $\vec{\pi}_{\epsilon}=\vec{\sigma}(\vec{y}_{\epsilon})$, where $\epsilon^{\prime}=\epsilon\log\left|A\right|\sum_{k=0}^{T-1}\gamma^{k}$.

A formal proof for Theorem 2 is provided in Appendix B.6.

Similar to the variants of the Nash distribution, the deviation of the IESL rest points is bounded by $\epsilon^{\prime}=\epsilon\log\left|A\right|\sum\nolimits_{k=0}^{T-1}\gamma^{k}$, which is at most $\epsilon T\log\left|A\right|$ since $\gamma\in(0,1]$. To further demonstrate that the IESL dynamic yields policies converging to approximate Nash equilibria, we only need to show that the policy in IESL converges to the corresponding policy at a rest point.
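As a purely illustrative plug-in (the values below are chosen for arithmetic convenience and are not the parameters of any game in Section 5), taking $\epsilon=0.025$, $|A|=3$, $T=4$, and $\gamma=1$ gives
\[
\epsilon^{\prime}=\epsilon\log|A|\sum_{k=0}^{T-1}\gamma^{k}=0.025\times\log 3\times 4\approx 0.11,
\]
so the NashConv of the induced approximate equilibrium would be at most $n\epsilon^{\prime}\approx 0.33$ in a three-player game.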

4.2 Convergence of IESL Dynamic

Owing to the policy-entropy regularization term in the choice map $\vec{\sigma}(\cdot)$ (5), the convergence of IESL only requires certain assumptions on the concept of local hypomonotonicity. Appendix A.4 compares the other definitions related to game hypomonotonicity and shows that the restriction below is the minimum among them.

Definition 7.

We say the game is locally $\bm{\mu}$-hypomonotone about $\bm{\pi_{\dagger}}$ ($\mu\geq 0$) on a policy set $\Pi_{local}$ containing $\pi_{\dagger}$, if the following inequality holds for any $\vec{\pi}\in\Pi_{local}$:

\[
\sum_{i\in N}\left\langle\pi^{i}-\pi_{\dagger}^{i},\,w^{i}(\vec{\pi})-w^{i}(\vec{\pi}_{\dagger})\right\rangle\leq\mu\sum_{i\in N}\left\|\pi^{i}-\pi_{\dagger}^{i}\right\|^{2}
\]

where the inner product sums over all $(k,h_{k},a_{k}^{i})$ triples.
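As a hedged illustration of how one might probe this condition numerically (not a procedure used in this paper), the sketch below estimates the smallest $\mu$ satisfying the inequality over a finite sample of policies, given flattened policy vectors and a black-box advantage map `w`, which is a hypothetical stand-in for $\vec{w}$:

```python
import numpy as np
from typing import Callable, Sequence

def estimate_local_mu(
    pi_dagger: Sequence[np.ndarray],                        # reference joint policy, one flat vector per player
    samples: Sequence[Sequence[np.ndarray]],                # sampled joint policies from a candidate Pi_local
    w: Callable[[Sequence[np.ndarray], int], np.ndarray],   # hypothetical: returns w^i(pi) flattened like pi^i
) -> float:
    """Smallest mu such that the local hypomonotonicity inequality holds on the given sample."""
    w_dagger = [w(pi_dagger, i) for i in range(len(pi_dagger))]
    mu = 0.0
    for pi in samples:
        num = sum(np.dot(pi[i] - pi_dagger[i], w(pi, i) - w_dagger[i]) for i in range(len(pi)))
        den = sum(np.sum((pi[i] - pi_dagger[i]) ** 2) for i in range(len(pi)))
        if den > 1e-12:
            mu = max(mu, num / den)
    return max(mu, 0.0)   # Definition 7 requires mu >= 0
```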

In [23], Mertikopoulos et al. examined a concept called Fenchel coupling, whose Lyapunov properties are useful in the convergence analysis of equilibrium-learning dynamics based on duality (regarding the policy space as the primal space and the score space as the dual space). Since the Fenchel coupling can be lower-bounded by the policy distance when using the strongly convex penalty function $H(\cdot)$, we can use this concept to prove local convergence of (2) in the context of imperfect information games. To be specific, we can construct an energy function and demonstrate its non-increasing property under sufficiently strong local hypomonotonicity.

Theorem 3.

Assume that a rest point $\vec{y}_{\epsilon}$ of IESL is contained in a positively invariant compact set $\Omega$ that induces a local policy set $\Pi_{local}=\left\{\vec{\sigma}(\vec{y})\,|\,\vec{y}\in\Omega\right\}$, and that local $\mu$-hypomonotonicity about $\vec{\pi}_{\epsilon}=\vec{\sigma}(\vec{y}_{\epsilon})$ is satisfied on $\Pi_{local}$. When $\mu<\epsilon$, the policy yielded by IESL starting from an arbitrary $\vec{y}_{0}\in\Omega$ converges to $\vec{\pi}_{\epsilon}$.

A formal proof for Theorem 3 is provided in Appendix B.7.

For ease of understanding, Figure 1 reviews the relationships among the concepts involved in our theoretical analysis, showing the rationale behind the construction of the IESL dynamic.

Figure 1: Overview of our theoretical analysis

Note that Theorem 3 suggests that whenever the score $\vec{y}$ in IESL falls into a region that is close enough to a rest point and satisfies certain local hypomonotonicity, the corresponding policy will inevitably converge to that of the rest point. Therefore, a trade-off exists with respect to the selection of $\epsilon$: a smaller $\epsilon$ guarantees lower NashConv for the convergent policies (under the concavity assumption in Theorem 2) but makes the convergence of the dynamic more difficult (as it requires $\mu<\epsilon$). An intuitive explanation of the influence of selecting different $\epsilon$ is provided in Appendix C.

In practice, using binary search as an efficient parameter selection method for $\epsilon$ in IESL, we are able to obtain policies approaching approximate Nash equilibria with competitively low NashConv. In Section 5, we will empirically show that our dynamic manages to find approximate Nash equilibria in multi-player poker scenarios and outperforms existing algorithms in moderate-scale Leduc poker games, demonstrating the equilibrium-finding capability of IESL even beyond theory (in practical sequential games).

5 Empirical Results

5.1 Comparative Experiments

Table 1: Problem scale (His. denotes the number of histories; Info. denotes the number of information sets)

Game      His.     Info.
Kuhn-2    54       12
Kuhn-3    600      48
Leduc-2   9450     936
Leduc-3   396120   13878

Since CFR [36], FP [14], and RD [16] are the major equilibrium-learning dynamics with theoretical guarantees in imperfect information games (though requiring average policies and the zero-sum condition), we compare the empirical results of IESL with them. We choose four canonical poker game scenarios (2-player Kuhn, 3-player Kuhn, 2-player Leduc, and 3-player Leduc) to show the effectiveness of our dynamic in multi-player imperfect information games. To facilitate practical comparison, we first discretize the continuous time in IESL (2) with iteration steps. While the theoretical analysis of IESL is based on POSG-like simultaneous games, we can simply adapt it to sequential games by computing the required advantage terms in the same way as the counterfactual values in CFR [36]. As for the evaluation metric NashConv, it essentially requires best-response computation, which can be completed in linear time through tree-form dynamic programming in the same way as in extensive-form FP [14].
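For concreteness, a high-level sketch of one discretized IESL iteration in a sequential game is shown below (the actual pseudocode appears in Appendix D.1). Here `counterfactual_advantage` is a hypothetical stand-in for the tree traversal that computes $w^{i}$ in the same way as the counterfactual values in CFR, and `scores` and `infosets` are illustrative tabular data structures.

```python
import numpy as np

def softmax(y, eps):
    z = np.exp((y - y.max()) / eps)
    return z / z.sum()

def iesl_iteration(scores, infosets, counterfactual_advantage, eps, eta, dt):
    """One Euler step of (2): move every information set's score towards w(pi), then re-derive pi."""
    # Current policy at every information set via the choice map (6).
    policy = {x: softmax(scores[x], eps) for x in infosets}
    # Hypothetical tree walk returning w^i(pi)(x, .) for every information set x.
    w = counterfactual_advantage(policy)
    for x in infosets:
        scores[x] += eta * dt * (w[x] - scores[x])
    return scores, policy
```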

Table 2: NashConv of policies derived from the four dynamics

Dynamic       Kuhn-2     Kuhn-3     Leduc-2    Leduc-3
IESL (Ours)   0.000245   0.000318   0.016365   0.052198
CFR [36]      0.000240   0.000399   0.019648   0.466209
FP [14]       0.000130   0.000067   0.113065   0.198394
RD [16]       0.000147   0.030282   0.047568   0.540433

Table 1 shows the scale of the four poker game scenarios we selected as test environments. Table 2 shows the NashConv of the policies derived from the four dynamics (the instantaneous policy for IESL and the average policy for CFR, FP, and RD). The relevant parameters of each dynamic are tuned so as to report its near-optimal performance under the same number of iterations. FP performs excellently in the simple Kuhn poker environments while struggling to close the NashConv gap as the problem scales up. RD achieves a rather low NashConv in the two-player zero-sum scenarios, as guaranteed in theory, but performs poorly with three players. Consistent with the theoretical results, IESL manages to find approximate Nash equilibria in all four poker game scenarios. As one of the most influential simple dynamics for solving imperfect information games, CFR performs close to IESL in the three small-scale scenarios but fails to further reduce NashConv in the 3-player Leduc poker game (also observed in [1]).

5.2 Effect of $\epsilon$ in IESL

For IESL itself, it is worth noting that a smaller $\epsilon$ leads to a less deviated converged policy but harder dynamic convergence, as demonstrated in Theorem 2 and Theorem 3. Figure 2 compares the results of selecting different $\epsilon\in[0.02,0.04]$ in 2-player Leduc. While IESL finds rest points under $\epsilon\in\{0.04,0.03,0.025\}$, it suffers from severe oscillation when we further reduce $\epsilon$ to $0.02$. For the three convergent curves, a smaller $\epsilon$ slows down the convergence but guarantees an ultimate policy with relatively low NashConv. This result is consistent with our theoretical analysis in Section 4, suggesting the generality of the theory even in games with real-world utility settings and non-simultaneous moves. Furthermore, when combined with the theory, the result implies that the invariant areas restricted by local hypomonotonicity with $\mu<0.02$ around rest points (with regard to IESL under $\epsilon=0.02$) are significantly harder to find in the score space. This threshold phenomenon may reflect certain structures of the 2-player Leduc poker game.

Figure 2: Learning curves of IESL under different $\epsilon$

In view of the results from both theory and experiment, an efficient method of optimal parameter selection is binary search. For example, given an initial interval of $(0.02,0.04]$ (with $\epsilon=0.02$ diverging and $\epsilon=0.04$ converging), we can first narrow the interval to $(0.02,0.03]$ by checking the convergence under $\epsilon=0.03$, and further narrow it to $(0.02,0.025]$ by checking the convergence under $\epsilon=0.025$. With a precision threshold of $0.005$, we derive the optimal parameter $\epsilon=0.025$ within $(0.02,0.04]$. For larger intervals like $(0,1]$, while the number of candidate $\epsilon$ values is $\frac{1-0}{0.005}=200$, we can find the optimal $\epsilon$ within $\left\lceil\log_{2}200\right\rceil=8$ repetitions of IESL when using binary search to balance between policy convergence and ultimate NashConv.
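A minimal sketch of this selection procedure is given below, assuming a hypothetical predicate `converges(eps)` that runs IESL under a given $\epsilon$ and reports whether the learning curve settles (e.g., by checking the oscillation of NashConv over the last iterations); the predicate and the default precision are illustrative, not part of the paper's formal results.

```python
def select_epsilon(eps_lo: float, eps_hi: float, converges, precision: float = 0.005) -> float:
    """Binary search for the smallest convergent epsilon in (eps_lo, eps_hi].

    Assumes IESL diverges at eps_lo, converges at eps_hi, and that convergence is
    monotone in epsilon on this interval (consistent with the mu < epsilon condition).
    """
    while eps_hi - eps_lo > precision:
        mid = (eps_lo + eps_hi) / 2
        if converges(mid):
            eps_hi = mid   # mid converges: a smaller epsilon may still converge
        else:
            eps_lo = mid   # mid diverges: the threshold lies above mid
    return eps_hi

# With the interval from the text, select_epsilon(0.02, 0.04, converges) is expected
# to return 0.025 when epsilon = 0.03 and 0.025 converge but 0.02 does not.
```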

More detailed information on experiments is provided in Appendix D, including the pseudocode of IESL implemented in sequential games (D.1), discussions on time complexity and learning curves (D.2), and a simple comparison with DRL algorithms (D.3).

6 Conclusion

6.1 Overview

In this paper, we covered three practical aspects not fully addressed in current studies on imperfect information games: multiple players, simultaneous moves, and convergent instantaneous policies. By restating the POSG as a general framework and proposing a corresponding learning dynamic, IESL, we examined the relationships among different classes of rest points in imperfect information games and demonstrated the equilibrium-finding capability of our simple dynamic, both theoretically and empirically. While computing approximate Nash equilibrium is generally hard (PPAD-complete), we showed that an efficient learning algorithm is possible when further restricting games with properties like local hypomonotonicity, whose presence appears consistent with the experimental phenomena observed in practical poker games. The theoretical foundation established in this paper may ultimately facilitate real-life imperfect information game solving, which requires adaptability under a more general class of game rules. Besides, the manner in which our dynamic is constructed may also inspire researchers to propose new methods for dealing with multi-player games.

6.2 Limitation & Future Work

A limitation of this work is that the IESL dynamic cannot be directly applied to solving large-scale games, as it requires traversing the entire game tree (expensive in time) and recording values and policies for all possible trajectories in tabular form (expensive in space). Therefore, our next research step is to construct a DRL extension of IESL in a similar way as Deep CFR [5] (from CFR), NFSP [15] (from FP), and NeuRD [16] (from RD). Since the game description is compatible with the MDP formulation, we expect the extension of our theory to a DRL implementation with function approximation and randomization to be smooth. Besides, since IESL shows a comparatively strong convergence result with regard to instantaneous policies in moderate-scale poker games, it is highly possible that an efficient implementation of equilibrium learning exists at large scale, with the help of neural networks and RL techniques like generalized advantage estimation (see [31]).

References

  • [1] Nicholas Abou Risk, Duane Szafron, et al. Using counterfactual regret minimization to create competitive multiplayer poker agents. In AAMAS, pages 159–166, 2010.
  • [2] Stephen P Boyd and Lieven Vandenberghe. Convex optimization. Cambridge University Press, 2004.
  • [3] Mario Bravo, David Leslie, and Panayotis Mertikopoulos. Bandit learning in concave N-person games. Advances in Neural Information Processing Systems, 31, 2018.
  • [4] George W Brown. Iterative solution of games by fictitious play. Act. Anal. Prod Allocation, 13(1):374, 1951.
  • [5] Noam Brown, Adam Lerer, Sam Gross, and Tuomas Sandholm. Deep counterfactual regret minimization. In International Conference on Machine Learning, pages 793–802. PMLR, 2019.
  • [6] Noam Brown and Tuomas Sandholm. Superhuman AI for heads-up no-limit poker: Libratus beats top professionals. Science, 359(6374):418–424, 2018.
  • [7] Constantinos Daskalakis, Paul W Goldberg, and Christos H Papadimitriou. The complexity of computing a Nash equilibrium. Communications of the ACM, 52(2):89–97, 2009.
  • [8] Kousha Etessami and Mihalis Yannakakis. On the complexity of Nash equilibria and other fixed points. SIAM Journal on Computing, 39(6):2531–2597, 2010.
  • [9] Eyal Even-Dar, Yishay Mansour, and Uri Nadav. On the convergence of regret minimization dynamics in concave games. In Proceedings of the forty-first annual ACM symposium on Theory of computing, pages 523–532, 2009.
  • [10] Monique Florenzano. General equilibrium analysis: Existence and optimality properties of equilibria. Springer Science & Business Media, 2003.
  • [11] Bolin Gao and Lacra Pavel. On passivity, reinforcement learning, and higher order learning in multiagent finite games. IEEE Transactions on Automatic Control, 66(1):121–136, 2020.
  • [12] Eric A Hansen, Daniel S Bernstein, and Shlomo Zilberstein. Dynamic programming for partially observable stochastic games. In AAAI, volume 4, pages 709–715, 2004.
  • [13] Sergiu Hart and Andreu Mas-Colell. A simple adaptive procedure leading to correlated equilibrium. Econometrica, 68(5):1127–1150, 2000.
  • [14] Johannes Heinrich, Marc Lanctot, and David Silver. Fictitious self-play in extensive-form games. In International Conference on Machine Learning, pages 805–813. PMLR, 2015.
  • [15] Johannes Heinrich and David Silver. Deep reinforcement learning from self-play in imperfect-information games. ArXiv preprint arXiv:1603.01121, 2016.
  • [16] Daniel Hennes, Dustin Morrill, Shayegan Omidshafiei, Rémi Munos, Julien Perolat, Marc Lanctot, Audrunas Gruslys, Jean-Baptiste Lespiau, Paavo Parmas, Edgar Duéñez-Guzmán, et al. Neural replicator dynamics: Multiagent learning via hedging policy gradients. In Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems, pages 492–501, 2020.
  • [17] Josef Hofbauer, Karl Sigmund, et al. Evolutionary games and population dynamics. Cambridge University Press, 1998.
  • [18] Karel Horák and Branislav Bošanskỳ. Solving partially observable stochastic games with public observations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33(01), pages 2029–2036, 2019.
  • [19] Marc Lanctot. Monte Carlo sampling and regret minimization for equilibrium computation and decision-making in large extensive form games. PhD Thesis, University of Alberta, 2013.
  • [20] Marc Lanctot, Vinicius Zambaldi, Audrunas Gruslys, Angeliki Lazaridou, Karl Tuyls, Julien Pérolat, David Silver, and Thore Graepel. A unified game-theoretic approach to multiagent reinforcement learning. Advances in Neural Information Processing Systems, 30, 2017.
  • [21] David S Leslie and Edmund J Collins. Individual Q-learning in normal form games. SIAM Journal on Control and Optimization, 44(2):495–514, 2005.
  • [22] Brendan McMahan. Follow-the-regularized-leader and mirror descent: Equivalence theorems and L1 regularization. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 525–533. JMLR Workshop and Conference Proceedings, 2011.
  • [23] Panayotis Mertikopoulos and Zhengyuan Zhou. Learning in games with continuous action sets and unknown payoff functions. Mathematical Programming, 173:465–507, 2019.
  • [24] Matej Moravčík, Martin Schmid, Neil Burch, Viliam Lisỳ, Dustin Morrill, Nolan Bard, Trevor Davis, Kevin Waugh, Michael Johanson, and Michael Bowling. Deepstack: Expert-level artificial intelligence in heads-up no-limit poker. Science, 356(6337):508–513, 2017.
  • [25] Noam Nisan, Tim Roughgarden, Eva Tardos, and Vijay V Vazirani. Algorithmic game theory. Cambridge University Press, 2007.
  • [26] Julien Perolat, Bart De Vylder, Daniel Hennes, Eugene Tarassov, Florian Strub, Vincent de Boer, Paul Muller, Jerome T Connor, Neil Burch, Thomas Anthony, et al. Mastering the game of Stratego with model-free multiagent reinforcement learning. Science, 378(6623):990–996, 2022.
  • [27] Julien Perolat, Remi Munos, Jean-Baptiste Lespiau, Shayegan Omidshafiei, Mark Rowland, Pedro Ortega, Neil Burch, Thomas Anthony, David Balduzzi, Bart De Vylder, et al. From Poincaré recurrence to convergence in imperfect information games: Finding equilibrium via regularization. In International Conference on Machine Learning, pages 8525–8535. PMLR, 2021.
  • [28] Julia Robinson. An iterative method of solving a game. Annals of Mathematics, pages 296–301, 1951.
  • [29] Tim Roughgarden. Twenty lectures on algorithmic game theory. Cambridge University Press, 2016.
  • [30] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In International Conference on Machine Learning, pages 1889–1897. PMLR, 2015.
  • [31] John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. ArXiv preprint arXiv:1506.02438, 2015.
  • [32] Peter D Taylor and Leo B Jonker. Evolutionary stable strategies and game dynamics. Mathematical Biosciences, 40(1-2):145–156, 1978.
  • [33] Kevin Waugh. Abstraction in large extensive games. Master’s Thesis, University of Alberta, 2009.
  • [34] E Christopher Zeeman. Population dynamics from game theory. In Global Theory of Dynamical Systems: Proceedings of an International Conference Held at Northwestern University, Evanston, Illinois, June 18–22, 1979, pages 471–497. Springer, 2006.
  • [35] Yuanheng Zhu, Weifan Li, Mengchen Zhao, Jianye Hao, and Dongbin Zhao. Empirical policy optimization for $n$-player Markov games. IEEE Transactions on Cybernetics, 2022.
  • [36] Martin Zinkevich, Michael Johanson, Michael Bowling, and Carmelo Piccione. Regret minimization in games with incomplete information. Advances in Neural Information Processing Systems, 20, 2007.