
On Reinforcement Learning for Turn-based Zero-sum
Markov Games

Devavrat Shah
MIT
[email protected]
   Varun Somani
Cornell University
[email protected]
   Qiaomin Xie
Cornell University
[email protected]
   Zhi Xu
MIT
[email protected]
Abstract

We consider the problem of finding Nash equilibrium for two-player turn-based zero-sum games. Inspired by the AlphaGo Zero (AGZ) algorithm (Silver et al., 2017b), we develop a Reinforcement Learning based approach. Specifically, we propose the Explore-Improve-Supervise (EIS) method that combines “exploration”, “policy improvement” and “supervised learning” to find the value function and policy associated with Nash equilibrium. We identify sufficient conditions for convergence and correctness for such an approach. For a concrete instance of EIS where a random policy is used for “exploration”, Monte-Carlo Tree Search is used for “policy improvement” and Nearest Neighbors is used for “supervised learning”, we establish that this method finds an $\varepsilon$-approximate value function of Nash equilibrium in $\widetilde{O}(\varepsilon^{-(d+4)})$ steps when the underlying state space of the game is continuous and $d$-dimensional. This is nearly optimal, as we establish a lower bound of $\widetilde{\Omega}(\varepsilon^{-(d+2)})$ for any policy.

1 Introduction

In 2016, AlphaGo (Silver et al., 2016) became the first program to defeat the world champion in the game of Go. Soon after, another program, AlphaGo Zero (AGZ) (Silver et al., 2017b), achieved even stronger performance despite learning the game from scratch given only the rules. Starting tabula rasa, AGZ mastered the game of Go entirely through self-play using a new reinforcement learning algorithm. The same algorithm was shown to achieve superhuman performance in Chess and Shogi (Silver et al., 2017a).

One key innovation of AGZ is to learn a policy and value function using supervised learning from samples generated via Monte-Carlo Tree Search. Motivated by the remarkable success of this method, in this work we study the problem of finding Nash Equilibrium for two-player turn-based zero-sum games and in particular consider a reinforcement learning based approach.

Our Contributions. The central contribution of this work is the Explore-Improve-Supervise (EIS) method for finding Nash Equilibrium for two-player turn-based zero-sum games with continuous state space, modeled through the framework of Markov games. It is an iterative method in which three components are carefully intertwined in each iteration: “explore”, which allows for measured exploration of the state space; “improve”, which improves the current value and policy estimates at the explored states; and “supervise”, which learns the improved value and policy over the explored states so as to generalize over the entire state space.

Importantly, we identify sufficient conditions, in terms of each of the “explore”, “improve” and “supervise” modules, under which convergence to the value function of the Nash equilibrium is guaranteed. In particular, we establish finite sample complexity bounds for such a generic method to find the $\varepsilon$-approximate value function of Nash equilibrium. See Theorem 2 and Proposition 3 for the precise statements.

We establish that when random sampling is used for “exploration”, Monte-Carlo Tree Search (MCTS) is used for “policy improvement” and Nearest Neighbor is used for “supervised learning”, the theoretical conditions identified for convergence of EIS are satisfied. Using our finite sample bound for EIS, together with the quantification of the conditions stated above, we conclude that such an instance of the EIS method finds an $\varepsilon$-approximate value function of Nash equilibrium in $\widetilde{O}\big(\varepsilon^{-(d+4)}\big)$ steps, where $d$ is the dimension of the state space of the game (cf. Theorem 8). We also establish a minimax lower bound of $\widetilde{\Omega}\big(\varepsilon^{-(d+2)}\big)$ on the number of steps required for learning an $\varepsilon$-approximate value function of Nash equilibrium by any method (cf. Theorem 4). This establishes near-optimality of this instance of EIS.

Related Work. The Markov Decision Processes (MDP) provide a canonical framework to study the single-agent setting. Its natural extension, the Markov Games, provide a canonical framework to study multi-agent settings (Littman, 1994). In this work, we consider an instance of it—turn-based two players or agents with zero-sum rewards. Analogous to learning the optimal policy in MDP setting, here we consider finding the Nash Equilibrium in the setting of Markov Games. There has been a rich literature on existence, uniqueness as well as algorithms for finding the Nash Equilibrium. In what follows, we describe the most relevant literature in that regard.

To start with, for finite state and action spaces where the game terminates in a finite number of stages with positive probability, Shapley (1953) established the existence of optimal stationary strategies and the uniqueness of the optimal value function. For generic state spaces, the existence of Nash Equilibrium has been established for Markov Games with discounted rewards. In particular, when the state space is a compact metric space, Maitra & Parthasarathy (1970, 1971) and Parthasarathy (1973) show the uniqueness of the value function and the existence of an optimal stationary policy. The same result has been established by Kumar & Shiau (1981) when the state space is a complete, separable metric space. For two-player zero-sum discounted Markov games, the Bellman operator corresponding to the Nash equilibrium is a contraction; hence the value function is unique and there exists a deterministic stationary optimal policy (Szepesvári & Littman, 1996; Hansen et al., 2013). We also note that the existence of Nash equilibrium for a general class of games (stochastic shortest path) is established by Patek (1997), who argues that the optimal value function is unique and can be achieved by mixed stationary strategies.

For computing the optimal value function and policy associated with the Nash equilibrium, two settings are considered in the literature: (i) when the system model is entirely known, and (ii) when the model is not known but one can sample from the underlying model. In the first setting, classical approaches for MDPs such as value/policy iteration are adapted to find the optimal value function or policy associated with the Nash equilibrium (Patek, 1997; Hansen et al., 2013). In the second setting, which is the one considered here, various approximate dynamic programming algorithms have been proposed (Szepesvári & Littman, 1996; Bowling & Veloso, 2001; Littman, 2001a, b; Hu et al., 1998; Hu & Wellman, 2003; Lagoudakis & Parr, 2002; Perolat et al., 2015; Pérolat et al., 2016). More recently, with the advances in deep reinforcement learning (Mnih et al., 2015; Lillicrap et al., 2016; Schulman et al., 2015, 2017; Yang et al., 2019a), the value function/policy has been approximated by deep neural networks (Silver et al., 2016, 2017a, 2017b).

In terms of theoretical results, there has been work establishing asymptotic convergence to the optimal value function when the state space is finite. For example, Q-learning for MDPs adapted to the setting of two-player zero-sum games converges asymptotically (Szepesvári & Littman, 1996). Non-asymptotic results are available for model-based algorithms developed for Markov games with finite states, including the R-max algorithm (Brafman & Tennenholtz, 2002) and an algorithm that extends the upper confidence reinforcement learning algorithm (Wei et al., 2017). Recent work by Sidford et al. (2019) provides an algorithm that computes an $\varepsilon$-optimal strategy with near-optimal sample complexity for Markov games with finite states. For Markov games where the transition function can be embedded in a given feature space, the work by Jia et al. (2019) analyzes the sample complexity of a Q-learning algorithm. However, non-asymptotic or finite sample analysis for continuous state space without special structure, such as that considered in this work, has received less attention in the literature.

Comparison with Prior Work. In this work, we develop the Explore-Improve-Supervise (EIS) policy for the setting where the model is unknown but one is able to sample from the underlying model, and we study the convergence and sample complexity of our approach. Our goal is to provide a formal study of the general framework of EIS. The overall framework is inspired by AlphaGo Zero and inherits similar components. However, we take an important step towards bridging the gap between sound intuitions and theoretical guarantees, which is valuable for a better understanding of how to apply or extend this framework with different instantiations. We note that EIS bears certain similarities with another AlphaGo-inspired study (Shah et al., 2019). Both works follow the main idea of coupling improvements with supervised learning. However, there are major differences. Shah et al. (2019) focus on approximating the value function in deterministic MDPs and only study a particular instance of the modules. In contrast, we focus on a broader class of algorithms, formulating general principles and studying the guarantees. This poses different challenges and requires generic formulations of the properties of the modules that are technically precise and practically implementable.

Finally, as mentioned previously, non-asymptotic analysis for continuous state space, considered in this work, is scarce for Markov games. While there are some results for finite states, the bounds are not directly comparable. For example, the complexity in Perolat et al. (2015) depends on some oracle complexities for linear programming and regression.

For the setting with continuous state space, the sample complexity results in Jia et al. (2019) for Q-learning rely on the assumption of a linear structure of the transition kernel. The recent work by Yang et al. (2019b) studies the finite-sample performance of minimax deep Q-learning for two-player zero-sum games, where the convergence rate depends on the family of neural networks. We remark that these belong to a different class of algorithms. We also derive a fundamental minimax lower bound on the sample complexity of any method (cf. Theorem 4). The lower bound is interesting in its own right. Moreover, it shows that an instance of our EIS framework has near-optimal dependence on the dimension.

Organization. The remainder of the paper is organized as follows. We formally introduce the framework of Markov Games and Nash equilibrium in Section 2. Section 3 describes a generic Explore-Improve-Supervise (EIS) algorithm. The precise technical properties for the modules of EIS are then stated in Section 4, under which we establish our main results, convergence and sample complexity of EIS, in Section 5. Finally, a concrete instantiation is provided in Section 6, demonstrating the applicability of the generic EIS algorithm. All the proofs are presented in Appendices.

2 Two-Player Markov Games and Nash Equilibrium

We introduce the framework of Markov Games (MGs) (also called Stochastic Games (Shapley, 1953)) with two players and zero-sum rewards. The goal in this setting is to learn the Nash equilibrium.

2.1 Two-player Zero-sum Markov Game

We consider two-player turn-based Markov games like Go and Chess, where players take turns to make decisions. We denote the two players as $\mathsf{P1}$ and $\mathsf{P2}$. Formally, a Markov game can be expressed as a tuple $(\mathcal{S}_1, \mathcal{S}_2, \mathcal{A}_1, \mathcal{A}_2, r, P, \gamma)$, where $\mathcal{S}_1$ and $\mathcal{S}_2$ are the sets of states controlled by $\mathsf{P1}$ and $\mathsf{P2}$ respectively, $\mathcal{A}_1$ and $\mathcal{A}_2$ are the sets of actions the players can take, $r$ represents the reward function, $P$ represents the transition kernel, and $\gamma \in [0,1)$ is the discount factor. Specifically, for $i = 1, 2,$ let $\mathcal{A}_i(s)$ be the set of feasible actions for player $i$ in a given state $s \in \mathcal{S}_i$. We assume that $\mathcal{S}_1 \cap \mathcal{S}_2 = \emptyset$. (This assumption is typical for turn-based games: one can incorporate the “turn” of the player as part of the state, and thus $\mathsf{P1}$'s state space, i.e., the states where it is player 1's turn, is disjoint from that of $\mathsf{P2}$'s turn.)

Let $\mathcal{S} = \mathcal{S}_1 \cup \mathcal{S}_2$. For each state $s \in \mathcal{S},$ let $I(s) \in \{1, 2\}$ indicate the current player to play. At state $s$, upon the corresponding player $I(s)$ taking action $a$, player $i \in \{1,2\}$ receives a reward $r^i(s,a)$. In zero-sum games, $r^1(s,a) = -r^2(s,a)$. Without loss of generality, we let $\mathsf{P1}$ be our reference and use the notation $r(s,a) \triangleq r^1(s,a)$ for the definitions of value functions.

Let $P(s,a)$ be the distribution of the new state after playing action $a$ in state $s$ by player $I(s)$. In this paper, we focus on the setting where the state transitions are deterministic. This means that $P(s,a)$ is supported on a single state, $s \circ a$, where $s \circ a$ denotes the state reached after taking action $a$ at state $s$.

For each $i \in \{1,2\}$, let $\pi_i$ be the policy for player $i$, where $\pi_i(\cdot|s)$ is a probability distribution over $\mathcal{A}_i(s)$. Denote by $\Pi_i$ the set of all stationary policies of player $i$, and let $\Pi = \Pi_1 \times \Pi_2$ be the set of all policies for the game. A two-player zero-sum game can be seen as player $\mathsf{P1}$ aiming to maximize the accumulated discounted reward while $\mathsf{P2}$ attempts to minimize it. The value function and Q-function for a zero-sum Markov game can be defined in a manner analogous to the MDP setting:

$$V_{\pi_1,\pi_2}(s) = \mathbb{E}_{a_t, s_{t+1}, a_{t+1}, \ldots}\Big[\sum_{k=0}^{\infty}\gamma^k r(s_{t+k}, a_{t+k}) \,\Big|\, s_t = s\Big],$$
$$Q_{\pi_1,\pi_2}(s,a) = \mathbb{E}_{s_{t+1}, a_{t+1}, \ldots}\Big[\sum_{k=0}^{\infty}\gamma^k r(s_{t+k}, a_{t+k}) \,\Big|\, s_t = s,\, a_t = a\Big],$$

where $a_l \sim \pi_{I(s_l)}(\cdot|s_l)$ and $s_{l+1} \sim P(s_l, a_l)$. That is, $V_{\pi_1,\pi_2}(s)$ is the expected total discounted reward for $\mathsf{P1}$ if the game starts from state $s$ and players $\mathsf{P1}$ and $\mathsf{P2}$ use the policies $\pi_1$ and $\pi_2$, respectively. The interpretation of the Q-value is similar.

To simplify the notation, we assume that $\mathcal{A}_1 = \mathcal{A}_2 \triangleq \mathcal{A}$, where $\mathcal{A}$ is a finite set. We consider $\mathcal{S}$ to be a compact subset of $\mathbb{R}^d$ ($d \geq 1$). The rewards $r(s,a)$ are independent random variables taking values in $[-R_{\max}, R_{\max}]$ for some $R_{\max} > 0$. Define $V_{\max} \triangleq R_{\max}/(1-\gamma)$. It follows that the absolute values of the value function and Q-function of any policy are bounded by $V_{\max}$.

Regarding Deterministic Transitions. Let us clarify this assumption. In fact, our approach and the main results for the EIS framework (i.e., Sections 4 and 5) apply to general non-deterministic settings as well. However, the example in Section 6 considers the deterministic case. In particular, the improvement module there is instantiated by a variant of Monte Carlo Tree Search, for which a clean non-asymptotic analysis has only been established in the deterministic case (Shah et al., 2019). To facilitate a coherent exposition, we focus on deterministic transitions here. Indeed, many games, such as Go and Chess, are deterministic. Additionally, note that one could instantiate our EIS framework with other methods for the non-deterministic case, for instance by adapting the sparse sampling oracle (Kearns et al., 2002) as the improvement module, to obtain a similar analysis. As a proof of concept, we provide empirical results in Appendix J on a non-deterministic game with the sparse sampling oracle.
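To make the setup concrete, the following toy example (our own illustration; the game, its name and its dynamics are hypothetical and not taken from the paper) implements a deterministic, turn-based, zero-sum Markov game with a one-dimensional continuous position, a finite action set and bounded rewards, matching the assumptions above.

import numpy as np

# A hypothetical deterministic turn-based zero-sum game on the interval [-1, 1].
# The state is (position, turn), with turn = +1 for P1 and -1 for P2, so the two
# players' state spaces are disjoint as assumed above. Rewards are P1's rewards.
class TugOfWarGame:
    def __init__(self, step=0.1, gamma=0.95):
        self.actions = [0, 1]            # 0: small push, 1: large push
        self.step = step
        self.gamma = gamma

    def player(self, s):
        """I(s): whose turn it is at state s."""
        return 1 if s[1] > 0 else 2

    def transition(self, s, a):
        """Deterministic next state s o a: P1 pushes toward +1, P2 toward -1."""
        direction = 1.0 if self.player(s) == 1 else -1.0
        x = float(np.clip(s[0] + direction * self.step * (1 + a), -1.0, 1.0))
        return np.array([x, -s[1]])      # the turn alternates

    def reward(self, s, a):
        """Reward to P1, bounded in [-1, 1]; P2 receives the negative (zero-sum)."""
        return float(self.transition(s, a)[0])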

2.2 Nash Equilibrium

Algorithm 1 The Generic EIS Algorithm
1:  Initialization: a supervised learning model $f_0(s) = (V_0(s), \pi_0(\cdot|s))$ for every $s \in \mathcal{S}$.
2:  for $l = 1, 2, \dots, L$ do
3:     initialize $s_1$ to be a state (e.g., the starting state of the game).
4:     /*  data generation via improvement and exploration  */
5:     for $i = 1, 2, \dots, n_l$ do
6:        query the improvement module, which takes as input the current model $f_{l-1}$ and outputs estimates $\hat{V}(s_i)$ for the optimal value $V^*(s_i)$ and $\hat{\pi}(\cdot|s_i)$ for the optimal policy $\pi^*(\cdot|s_i)$:
$$\big(\hat{V}(s_i),\ \hat{\pi}(\cdot|s_i)\big) = \textrm{Improvement Module}(f_{l-1}, s_i) \tag{1}$$
7:        query the exploration module for the next state $s_{i+1}$:
$$s_{i+1} = \textrm{Exploration Module}(s_i) \tag{2}$$
8:     end for
9:     /*  model update via supervised learning  */
10:     query the supervised learning module with the collected data $\mathcal{D}^{(l)} = \{(s_i, \hat{V}(s_i), \hat{\pi}(\cdot|s_i))\}_{i=1}^{n_l}$ and obtain an updated model $f_l(s)$ for every $s \in \mathcal{S}$:
$$f_l = \textrm{Supervised Learning Module}\big(\{(s_i, \hat{V}(s_i), \hat{\pi}(\cdot|s_i))\}_{i=1}^{n_l}\big) \tag{3}$$
11:  end for
12:  Output: final model $f_L$.
Definition 1 (Optimal Counter Policy).

Given a policy $\pi_2 \in \Pi_2$, a policy $\pi_1 \in \Pi_1$ for $\mathsf{P1}$ is said to be an optimal counter-policy against $\pi_2$ if and only if for every $s \in \mathcal{S}$, we have $V_{\pi_1,\pi_2}(s) \geq V_{\pi_1',\pi_2}(s)$ for all $\pi_1' \in \Pi_1$. Similarly, a policy $\pi_2 \in \Pi_2$ for $\mathsf{P2}$ is said to be an optimal counter-policy against a given policy $\pi_1 \in \Pi_1$ for $\mathsf{P1}$ if and only if for every $s \in \mathcal{S}$, $V_{\pi_1,\pi_2}(s) \leq V_{\pi_1,\pi_2'}(s)$ for all $\pi_2' \in \Pi_2$.

In a two-player zero-sum game, it has been shown that the pair of optimal counter-policies coincides with the Nash equilibrium of the game (Maitra & Parthasarathy, 1970; Parthasarathy, 1973; Kumar & Shiau, 1981). In particular, a pair of policies $(\pi_1^*, \pi_2^*)$ is called an equilibrium solution of the game if $\pi_1^*$ is an optimal counter-policy against $\pi_2^*$ and $\pi_2^*$ is an optimal counter-policy against $\pi_1^*$. The value function of the optimal policy, $V_{\pi_1^*,\pi_2^*}$, is the unique fixed point of a $\gamma$-contraction operator. In the sequel, we will simply refer to the strategy $\pi^* = (\pi_1^*, \pi_2^*)$ as the optimal policy. Finally, we use the concise notation $V^*$ and $Q^*$ to denote the optimal value function and the optimal Q-value, respectively, i.e., $V^*(s) = V_{\pi_1^*,\pi_2^*}(s)$ and $Q^*(s,a) = Q_{\pi_1^*,\pi_2^*}(s,a)$.
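For concreteness, with rewards measured from $\mathsf{P1}$'s perspective and the deterministic transitions of Section 2.1, this contraction is the standard Bellman (Shapley) operator sketched below, whose unique fixed point is $V^*$:
$$(\mathcal{T}V)(s) = \begin{cases} \max_{a\in\mathcal{A}_1(s)} \big[\,\mathbb{E}[r(s,a)] + \gamma\, V(s \circ a)\,\big], & I(s) = 1,\\ \min_{a\in\mathcal{A}_2(s)} \big[\,\mathbb{E}[r(s,a)] + \gamma\, V(s \circ a)\,\big], & I(s) = 2, \end{cases} \qquad \|\mathcal{T}V - \mathcal{T}V'\|_\infty \leq \gamma\,\|V - V'\|_\infty.$$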

3 EIS: Explore-Improve-Supervise

We describe the Explore-Improve-Supervise (EIS) algorithm for learning the optimal value function $V^*$ and optimal policy $\pi^*$. The algorithm consists of three separate, but intertwined, modules: exploration, improvement and supervised learning. Below is a brief summary of these modules. The precise, formal description of the properties desired from these modules is stated in Section 4, which will lead to convergence and correctness of the EIS algorithm as stated in Theorem 2. Section 6 provides a concrete example of the modules of EIS satisfying the properties stated in Section 4.

Exploration Module. To extract meaningful information about the entire game, sufficient exploration is required so that enough representative states are visited. This is commonly achieved by an appropriate exploration policy, such as an $\epsilon$-greedy policy or a Boltzmann policy. We require the existence of an exploration module guaranteeing sufficient exploration.

Improvement Module. For the overall learning to make any progress, the improvement module improves the existing estimates of the optimal solution. In particular, given the current estimates $\hat{V}$ for $V^*$ and $\hat{\pi}$ for $\pi^*$, for a state $s$, a query of the improvement module produces better estimates $\hat{V}'(s)$ and $\hat{\pi}'(\cdot|s)$ that are closer to the optimal $V^*(s)$ and $\pi^*(\cdot|s)$.

Supervised Learning Module. The previous two modules can be collectively viewed as a data generation process: the exploration module samples sufficiently representative states, while a query of the improvement module provides improved estimates for the optimal value and policy. With these as training data, the supervised learning module learns and generalizes the improvement over the training data to the entire state space. Subsequently, the trained supervised learning module produces better estimates for $V^*$ and $\pi^*$.

Taken together, the three modules naturally lead to the following iterative algorithm, whose pseudo-code is provided in Algorithm 1. Initially, the algorithm starts with an arbitrary model for the value function and policy. In each iteration $l \geq 1$, it performs two steps:

Step 1. Data Generation. Given the current model $f_{l-1} = (V_{l-1}, \pi_{l-1})$: for the current state $s$, query the improvement module to obtain better estimates $\hat{V}(s)$ and $\hat{\pi}(\cdot|s)$ than the current estimates $f_{l-1}(s)$; then query the exploration module to arrive at the next state $s'$; repeat the above process to obtain training data of $n$ samples, $\{(s_i, \hat{V}(s_i), \hat{\pi}(\cdot|s_i))\}_{i=1}^{n}$.

Step 2. Supervised Learning. Given the improved estimates $\{(s_i, \hat{V}(s_i), \hat{\pi}(\cdot|s_i))\}_{i=1}^{n}$, use the supervised learning module to build a new model $f_l = (V_l, \pi_l)$.

Intuitively, the iterative algorithm keeps improving our estimates after each iteration, and eventually converges to the optimal solution. The focus of this paper is to formally understand under what conditions on the exploration, improvement and supervised learning modules the algorithm works. Of course, the proof is in the pudding: we provide examples of the existence of such modules in Section 6.
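As a reading aid, a minimal sketch of this loop is given below (our own illustration; the callables and their signatures mirror Eqs. (1)–(3) of Algorithm 1 but are otherwise assumptions on our part).

def eis(initial_model, improvement_module, exploration_module,
        supervised_learning_module, initial_state, num_iterations, num_samples):
    """A minimal sketch of Algorithm 1; module signatures are illustrative."""
    model = initial_model                                   # f_0 = (V_0, pi_0)
    for l in range(1, num_iterations + 1):
        # Step 1: data generation via improvement and exploration.
        s, data = initial_state, []
        for _ in range(num_samples):                        # n_l samples, Eqs. (1)-(2)
            v_hat, pi_hat = improvement_module(model, s)    # improved estimates at s
            data.append((s, v_hat, pi_hat))
            s = exploration_module(s)                       # next state to explore
        # Step 2: model update via supervised learning on D^(l), Eq. (3).
        model = supervised_learning_module(data)            # f_l = (V_l, pi_l)
    return model                                            # f_L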

4 Properties of Modules

In this section we formally state the desired properties of each of the three modules of EIS. With these properties, we establish convergence and correctness of the EIS algorithm in Section 5. We remark that these properties are not imposed merely for the ease of technical analysis: examples satisfying them are provided in Section 6.

4.1 Improvement Module

This module improves both the value function and the policy. The value function is real-valued, whereas the policy for each given state can be viewed as a probability distribution over all possible actions. This requires a careful choice of metric for quantifying improvement. Let $\hat{V}(s)$ and $\hat{\pi}(\cdot|s)$ be the estimates output by the improvement module in the $l$-th iteration of EIS. Improvement of the value function means $|\hat{V}(s) - V^*(s)| < |V_{l-1}(s) - V^*(s)|$. Improvement of the policy is measured by the KL divergence between $\hat{\pi}(\cdot|s)$ and $\pi^*(\cdot|s)$. Here some care is needed, as the KL divergence becomes infinite if the supports of the distributions mismatch.

Note that the optimal policy $\pi^*$ only assigns positive probability to the optimal actions. On the other hand, there is no guarantee that $\hat{\pi}(\cdot|s)$ always has full support on $\mathcal{A}$. To overcome these challenges, we instead measure the KL divergence with respect to an alternative “optimal policy” that is guaranteed to have full support on $\mathcal{A}$. This naturally leads to the optimal Boltzmann policy: given a temperature $\tau > 0$, the optimal Boltzmann policy is given by

$$P^*_\tau(a|s) = \frac{e^{Q^*(s,a)/\tau}}{\sum_{a'\in\mathcal{A}} e^{Q^*(s,a')/\tau}}, \quad \text{for } a \in \mathcal{A}. \tag{4}$$

If $I(s) = 2$, i.e., it is $\mathsf{P2}$'s turn, use $-Q^*(s,a)$ in place of $Q^*(s,a)$ in the above equation to construct the Boltzmann policy (recall that player $\mathsf{P1}$ is set to be our reference in Section 2). By definition, $D_{\textup{KL}}\big(\hat{\pi}(\cdot|s)\,\|\,P^*_\tau(\cdot|s)\big)$ is guaranteed to be finite for any estimate $\hat{\pi}(\cdot|s)$. Furthermore, $P^*_\tau$ converges to $\pi^*$ as $\tau \rightarrow 0$. Therefore, we can use the KL divergence $D_{\textup{KL}}\big(\hat{\pi}(\cdot|s)\,\|\,P^*_\tau(\cdot|s)\big)$ with a small enough $\tau$ to measure the improvement of the estimates.
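As a small illustration (with an estimate $\hat{Q}$ standing in for the unknown $Q^*$ in practice), the Boltzmann policy of Eq. (4), including the sign flip for $\mathsf{P2}$, can be computed as follows.

import numpy as np

def boltzmann_policy(q_values, tau, player):
    """Boltzmann policy of Eq. (4) from Q-values given from P1's perspective."""
    q = np.asarray(q_values, dtype=float)
    if player == 2:
        q = -q                            # P2 minimizes, so use -Q in the softmax
    logits = q / tau
    logits -= logits.max()                # numerical stability; softmax is shift-invariant
    p = np.exp(logits)
    return p / p.sum()                    # full support over A for any tau > 0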

Finally, it makes sense to take into account the number of samples (i.e., observed state transitions) required by the module to improve the policy and value function. We now formally lay down the following property for the improvement module.

Property 1.

(Improvement Property) Suppose the current model $f(s) = (V(s), \pi(\cdot|s))$ (potentially random) has estimation errors $\varepsilon_{0,v} > 0$ and $\varepsilon_{0,p} > 0$ for the value and policy estimates, respectively, i.e.,

$$\mathbb{E}\Big[\|V - V^*\|_\infty\Big] \leq \varepsilon_{0,v},$$
$$\mathbb{E}\Big[D_{\textup{KL}}\big(\pi(\cdot|s)\,\|\,P^*_\tau(\cdot|s)\big)\Big] \leq \varepsilon_{0,p}, \quad \forall\, s \in \mathcal{S},$$

where the expectations are taken with respect to the randomness of the model $f = (V, \pi)$.

Fix any state $s \in \mathcal{S}$, and query the improvement module on $s$ via $\big(\hat{V}(s), \hat{\pi}(\cdot|s)\big) = \textrm{Improvement Module}(f, s)$. Let the temperature be $\tau > 0$, and the improvement factors be $0 < \zeta_v < 1$ and $0 < \zeta_p < 1$. Then, there exists a function $\kappa(\tau, \varepsilon_{0,v}, \varepsilon_{0,p}, \zeta_v, \zeta_p)$ such that if $\kappa(\tau, \varepsilon_{0,v}, \varepsilon_{0,p}, \zeta_v, \zeta_p)$ samples are used, the new estimates satisfy

$$\mathbb{E}\Big[\big|\hat{V}(s) - V^*(s)\big|\Big] \leq \zeta_v \cdot \varepsilon_{0,v},$$
$$\mathbb{E}\Big[D_{\textup{KL}}\big(\hat{\pi}(\cdot|s)\,\|\,P^*_\tau(\cdot|s)\big)\Big] \leq \zeta_p \cdot \varepsilon_{0,p},$$

where the expectations are with respect to the randomness in the model $f$ and the improvement module.

Property 1 allows for a randomized improvement module, but requires that, on average, the errors of the value and policy estimates strictly shrink.

4.2 Supervised Learning Module

To direct the model update in an improving manner, the supervised learning step (line 10 of Algorithm 1) should be able to learn from the training data, $\hat{V}$ and $\hat{\pi}$, and generalize to unseen states while preserving the same order of error as the training data. Generically speaking, generalization requires two conditions: (1) sufficiently many training samples that are “representative” of the underlying state space; (2) the model itself is expressive enough to capture the characteristics of the function to be learned.

Before specifying the generalization property, let us provide a few remarks on the above conditions. Condition (1) is typically ensured by using an effective exploration module. Recall that the state space is continuous. The exploration module should be capable of navigating the space until sufficiently many different states are visited. Intuitively, these finitely many states should properly cover the entire space, i.e., they are representative of the entire space so that learning from them provides enough information about the other states. Formally, this means that given the current estimation errors $\varepsilon_{1,v}$ and $\varepsilon_{1,p}$ for the optimal value and policy, there exists a sufficiently large set of $N(\varepsilon_{1,v}, \varepsilon_{1,p})$ training states such that supervised learning applied to those training data generalizes to the entire state space with the same order of accuracy. The precise definition of representative states may depend on the particular supervised learning algorithm.

Regarding condition (2), the generalization performance of traditional models has been well studied in classical statistical learning theory. More recently, deep neural networks have exhibited superior empirical generalization ability, although a complete rigorous explanation seems beyond the reach of existing techniques. Our goal is to seek the general principle underlying the supervised learning step and, as such, we do not limit ourselves to specific models: the learning model could be a parametric model that learns by minimizing empirical squared loss and cross-entropy loss, or it could be a non-parametric model such as nearest neighbor regression. With the above conditions in mind, we state the following general property for the supervised learning module:

Property 2.

(Generalization Property) Let the temperature $\tau > 0$ and estimation errors $\varepsilon_{1,v} > 0$ and $\varepsilon_{1,p} > 0$ be given. There exists at least one set of finitely many states, denoted by $\mathcal{S}(\tau, \varepsilon_{1,v}, \varepsilon_{1,p})$, with size $N_{\mathcal{S}}(\tau, \varepsilon_{1,v}, \varepsilon_{1,p})$, so that the following generalization bound holds:

Suppose that a training dataset $\big\{\big(s_i, \hat{V}(s_i), \hat{\pi}(\cdot|s_i)\big)\big\}_{i=1}^{n}$ satisfies $\mathcal{S}(\tau, \varepsilon_{1,v}, \varepsilon_{1,p}) \subset \{s_i\}_{i=1}^{n}$ and the following error guarantees:

$$\mathbb{E}\Big[|\hat{V}(s_i) - V^*(s_i)|\Big] \leq \varepsilon_{1,v},$$
$$\mathbb{E}\Big[D_{\textup{KL}}\big(\hat{\pi}(\cdot|s_i)\,\|\,P^*_\tau(\cdot|s_i)\big)\Big] \leq \varepsilon_{1,p}, \quad \forall\, i \in [n],$$

where the expectation is taken with respect to the randomness of the estimates $\hat{V}(s_i)$ and $\hat{\pi}(\cdot|s_i)$. Then, there exist non-negative universal constants $c_p$ and $c_v$ such that after querying the supervised learning module, i.e., $(V, \pi) = \textrm{Supervised Learning Module}\big(\{(s_i, \hat{V}(s_i), \hat{\pi}(\cdot|s_i))\}_{i=1}^{n}\big)$, the resulting $(V, \pi)$ satisfy

$$\mathbb{E}\Big[\big\|V - V^*\big\|_\infty\Big] \leq c_v \cdot \varepsilon_{1,v};$$
$$\mathbb{E}\Big[D_{\textup{KL}}\big(\pi(\cdot|s)\,\|\,P^*_\tau(\cdot|s)\big)\Big] \leq c_p \cdot \varepsilon_{1,p}, \quad \forall\, s \in \mathcal{S}.$$

4.3 Exploration Module

With the above development, it is now straightforward to identify the desired property of the exploration module. In particular, as part of the data generation step, it should be capable of exploring the space so that a set of representative states $\mathcal{S}(\tau, \varepsilon_{1,v}, \varepsilon_{1,p})$ is visited. Consequently, the supervised learning module can then leverage the training data to generalize. Formally, let $\mathcal{E}$ be the set of all possible representative sets that satisfy the Generalization Property:

$$\mathcal{E}(\tau, \varepsilon_{1,v}, \varepsilon_{1,p}) = \Big\{\mathcal{S}(\tau, \varepsilon_{1,v}, \varepsilon_{1,p}) \subset \mathcal{S} \,:\, \textrm{Property 2 is satisfied with } \mathcal{S}(\tau, \varepsilon_{1,v}, \varepsilon_{1,p})\Big\}.$$

Denote by $\mathcal{T}(t) \triangleq \{s_i\}_{i=1}^{t}$ the set of states explored by querying the exploration module up to time $t$, with $s_1$ being the initial state and $s_{i+1} = \textrm{Exploration Module}(s_i)$ (cf. line 7 of Algorithm 1). We now state the exploration property, which stipulates that starting at an arbitrary state $s$, the explored states should contain one of the representative sets in $\mathcal{E}$ within a finite number of steps.

Property 3.

(Exploration Property) Given the temperature $\tau > 0$, and estimation errors $\varepsilon_{1,v} > 0$ and $\varepsilon_{1,p} > 0$ for the value and policy, define

$$T(\tau, \varepsilon_{1,v}, \varepsilon_{1,p}, s) \triangleq \min\Big\{\, t \geq 1 \,:\, s_1 = s;\ \exists\, \hat{\mathcal{S}} \in \mathcal{E}(\tau, \varepsilon_{1,v}, \varepsilon_{1,p}) \textrm{ such that } \hat{\mathcal{S}} \subset \mathcal{T}(t)\Big\}.$$

Then, the exploration module satisfies that for all $s \in \mathcal{S}$,

$$\mathbb{E}\Big[T(\tau, \varepsilon_{1,v}, \varepsilon_{1,p}, s)\Big] < B(\tau, \varepsilon_{1,v}, \varepsilon_{1,p}),$$

for some $B(\tau, \varepsilon_{1,v}, \varepsilon_{1,p}) < \infty$ independent of $s$. The above expectation is taken with respect to the randomness in the exploration module and the environment (i.e., the state transitions).

In the sequel, when the context is clear or the initial state does not matter, we usually drop the dependence on $s$ to simplify the notation, i.e., we write $T(\tau, \varepsilon_{1,v}, \varepsilon_{1,p})$.

5 Main Results: Convergence Guarantees and Sample Complexity

5.1 Convergence Guarantees

As the main result of this paper, we establish convergence of the EIS algorithm under the three desired properties given in Section 4, and quantify the corresponding finite sample complexity. We also provide an algorithm-independent minimax lower bound; in Section 6 we introduce an instance of EIS that essentially matches this lower bound.

Theorem 2.

Given a small enough $\tau > 0$, let Properties 1, 2 and 3 hold. Let $C_{0,v} = \|V_0 - V^*\|_\infty$ and $C_{0,p} = \sup_{s\in\mathcal{S}} D_{\textup{KL}}\big(\pi_0(\cdot|s)\,\|\,P^*_\tau(\cdot|s)\big)$ be the initialization errors. Then for a given $\rho \in (0,1)$, with appropriate parameters for Algorithm 1, the output $f_L = (V_L, \pi_L)$ after the $L$-th iteration satisfies

$$\mathbb{E}\Big[\big\|V_L - V^*\big\|_\infty\Big] \leq C_{0,v}\,\rho^L, \tag{5}$$
$$\mathbb{E}\Big[D_{\textup{KL}}\big(\pi_L(\cdot|s)\,\|\,P_\tau^*(\cdot|s)\big)\Big] \leq C_{0,p}\,\rho^L, \quad \forall\, s \in \mathcal{S}. \tag{6}$$

Theorem 2 implies that the model $(V_L, \pi_L)$ learned by the generic EIS algorithm converges to the optimal value function $V^*$ and the optimal Boltzmann policy $P^*_\tau$ exponentially fast in the number of iterations. In particular, after

$$L = \bigg\lceil \frac{\log\frac{\varepsilon}{\max\{C_{0,v}, C_{0,p}\}}}{\log\rho} \bigg\rceil = \Theta\Big(\log\frac{1}{\varepsilon}\Big)$$

iterations, we obtain estimates of both $V^*$ and $P_\tau^*$ that are within $\varepsilon$ estimation error. We note that with a sufficiently small temperature, $P^*_\tau$ is close to the optimal policy $\pi^*$. Therefore, the model $f_L = (V_L, \pi_L)$ is close to $(V^*, \pi^*)$ for large $L$.

5.2 Sample Complexity

We can also characterize the sample complexity of the EIS algorithm. Recall that the sample complexity is defined as the total number of state transitions required for the algorithm to learn an $\varepsilon$-approximate value/policy function. The sample complexity of Algorithm 1 comes from two parts: the improvement module and the exploration module. Recall that the improvement module requires $\kappa(\tau, \varepsilon_{0,v}, \varepsilon_{0,p}, \zeta_v, \zeta_p)$ samples for each call (cf. Property 1). The sample complexity of the exploration module is proportional to $T(\tau, \varepsilon_{1,v}, \varepsilon_{1,p})$, which satisfies $\mathbb{E}[T(\tau, \varepsilon_{1,v}, \varepsilon_{1,p})] \leq B(\tau, \varepsilon_{1,v}, \varepsilon_{1,p})$ (cf. Property 3). The following proposition bounds the sample complexity in terms of these quantities.

Proposition 3.

Consider the setting of Theorem 2. Then, with probability at least $1-\delta$, the convergence result (i.e., Eqs. (5) and (6)) is achieved with sample complexity $M$ such that

$$M \leq \sum_{l=1}^{L}\Bigg[\kappa\Big(\tau, C_{0,v}\rho^{l-1}, C_{0,p}\rho^{l-1}, \frac{\rho}{c_v}, \frac{\rho}{c_p}\Big) \times B\Big(\tau, \frac{C_{0,v}\rho^{l}}{c_v}, \frac{C_{0,p}\rho^{l}}{c_p}\Big) \cdot e \cdot \log\frac{L}{\delta}\Bigg].$$

In Section 6, we provide a concrete instance of EIS that finds an $\varepsilon$-approximate value function and policy of Nash equilibrium with $\widetilde{O}(\varepsilon^{-(d+4)})$ transitions.

5.3 A Generic Lower Bound

To understand how good the above sample complexity upper bound is, we establish a lower bound for any algorithm under any sampling policy. In particular, we leverage the minimax lower bound for the problem of non-parametric regression (Tsybakov, 2009; Stone, 1982), as stated in the following theorem.

Theorem 4.

Given an algorithm, let $\hat{V}_T$ be the estimate of $V^*$ after $T$ sampled transitions of the given Markov game. Then, for each $\delta \in (0,1)$, there exists a two-player zero-sum Markov game such that in order to achieve $\mathbb{P}\Big[\big\|\hat{V}_T - V^*\big\|_\infty < \varepsilon\Big] \geq 1-\delta$, it must be that

$$T \geq C' d \cdot \varepsilon^{-(d+2)} \cdot \log(\varepsilon^{-1}),$$

where $C' > 0$ is an algorithm-independent constant.

6 Implementation: A Concrete Instantiation of the Key Modules

In this section, we demonstrate the applicability of the generic EIS algorithm by giving a concrete instantiation. Specifically, we use a variant of Monte Carlo Tree Search (MCTS) as the improvement module, nearest neighbor regression as the supervised learning module, and random sampling as the exploration module. We prove that all properties in Section 4 are satisfied. This shows that these properties are reasonable and hence gives strong support for the generic recipe developed in this paper. Due to space constraints, we provide high-level discussions here with informal technical results, and defer precise statements to Appendix E.

Improvement Module: MCTS. Recall that the improvement module should be able to provide improved estimates for the value and policy functions at the queried state $s$. Since both the value and the policy are related to the $Q$-function, one approach to improving the estimates is to first obtain better estimates $\hat{Q}$ for $Q^*$ and then construct the improved estimates of the value and policy from $\hat{Q}$. We take this approach in this example and use MCTS to obtain the estimates of $Q^*$ (see Algorithm 2 in Appendix E). We assume the existence of a generative model (i.e., a simulator). The following theorem states the property of this specific improvement module, which directly implies the desired improvement property, i.e., Property 1.

Theorem 5 (Informal Statement, Theorem 11, Appendix E).

Suppose that the state transitions are deterministic, and that the current model $f = (V, \pi)$ is such that the value estimate $V$ satisfies $\mathbb{E}\big[\|V - V^*\|_\infty\big] \leq \varepsilon_{0,v}$. Then, with appropriately chosen parameters for MCTS, for each query state $s_0 \in \mathcal{S}$, the output $\big(\hat{V}(s_0), \hat{\pi}(\cdot|s_0)\big) = \textrm{MCTS}(f, s_0)$ satisfies

$$\mathbb{E}\Big[\big|\hat{V}(s_0) - V^*(s_0)\big|\Big] \leq \zeta_v \cdot \varepsilon_{0,v},$$
$$\mathbb{E}\Big[D_{\textup{KL}}\big(\hat{\pi}(\cdot|s_0)\,\|\,P^*_\tau(\cdot|s_0)\big)\Big] \leq \zeta_p \cdot \varepsilon_{0,v}.$$

The above is achieved with a sample complexity of

$$O\Big(\big(\tau \cdot \min\{\zeta_v, \zeta_p\} \cdot \varepsilon_{0,v}\big)^{-2} \cdot \log\big(\tau \cdot \min\{\zeta_v, \zeta_p\}\big) \cdot (\log\gamma)^{-1}\Big).$$
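The MCTS variant itself is given as Algorithm 2 in Appendix E. Purely as an illustration of the interface an improvement module exposes, and explicitly not the paper's MCTS, the following simplified sketch performs an exhaustive depth-limited minimax lookahead over the deterministic game tree, bootstrapping leaf values from the current value model; the game object is assumed to follow the toy interface sketched in Section 2.1.

import numpy as np

def lookahead_improvement(game, value_model, s0, depth, tau):
    """A simplified improvement module: depth-limited minimax with bootstrapped leaves.
    Returns improved estimates (V_hat(s0), pi_hat(.|s0)). Not the paper's MCTS."""

    def value(s, d):
        if d == 0:
            return value_model(s)                 # bootstrap with the current estimate
        qs = [q_value(s, a, d) for a in game.actions]
        return max(qs) if game.player(s) == 1 else min(qs)   # P1 maximizes, P2 minimizes

    def q_value(s, a, d):
        s_next = game.transition(s, a)            # deterministic transition s o a
        return game.reward(s, a) + game.gamma * value(s_next, d - 1)

    root_q = np.array([q_value(s0, a, depth) for a in game.actions])
    v_hat = root_q.max() if game.player(s0) == 1 else root_q.min()
    signed_q = root_q if game.player(s0) == 1 else -root_q   # sign flip for P2, Eq. (4)
    logits = signed_q / tau - (signed_q / tau).max()
    pi_hat = np.exp(logits) / np.exp(logits).sum()
    return v_hat, pi_hat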

Supervised Learning Module: Nearest Neighbor Regression. We employ a nearest neighbor algorithm to learn the optimal value function and policy. Intuitively, if the optimal value function and the Boltzmann policy are Lipschitz in the state space, then this algorithm generalizes provided there are sufficiently many (say $K$) training data points around each state in the state space $\mathcal{S}$. Quantitatively, consider covering $\mathcal{S}$ with balls of diameter $h > 0$. We call the training data $(h, K)$-representative if each covering ball contains at least $K$ training data points. Here, $h$ and $K$ depend on the temperature $\tau$ and the estimation errors of the training data.

Proposition 6 (Informal Statement, Proposition 12, Appendix E).

Under appropriate regularity conditions, if the training data is representative with respect to appropriately chosen $h > 0$ and $K > 0$, the nearest neighbor supervised learning satisfies Property 2. In particular, given training data with estimation errors $\varepsilon_v$ and $\varepsilon_p$, we have

$$\mathbb{E}\Big[\|V_{\textup{NN}} - V^*\|_\infty\Big] \leq 4 \cdot \varepsilon_v,$$
$$\mathbb{E}\Big[D_{\textup{KL}}\big(\pi_{\textup{NN}}(\cdot|s)\,\|\,P^*_\tau(\cdot|s)\big)\Big] \leq c \cdot \varepsilon_p, \quad \forall\, s \in \mathcal{S},$$

where the constant $c > 0$ is independent of $n$, the size of the training data.

As discussed in Appendix E, the number of representative data points required for the above generalization depends on the properties of the state space. For example, if the state space is the unit ball in $d$ dimensions, then for a generalization error scaling with $\varepsilon$ we require a number of representative data points scaling as $\varepsilon^{-(2+d)}$.
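For intuition, the following sketch is a simplified fixed-radius variant of such a nearest-neighbor module (our own illustration; the precise estimator and constants are given in Appendix E): it predicts at a query state by averaging the training targets of all samples within distance $h$, which is well defined whenever the training data are $(h, K)$-representative.

import numpy as np

def nearest_neighbor_model(states, v_hats, pi_hats, h):
    """Fit a simple fixed-radius nearest-neighbor model on training data
    {(s_i, V_hat(s_i), pi_hat(.|s_i))}; returns a callable s -> (V(s), pi(.|s))."""
    states = np.asarray(states, dtype=float)      # shape (n, d)
    v_hats = np.asarray(v_hats, dtype=float)      # shape (n,)
    pi_hats = np.asarray(pi_hats, dtype=float)    # shape (n, |A|)

    def model(s):
        dist = np.linalg.norm(states - np.asarray(s, dtype=float), axis=1)
        nbrs = dist <= h
        if not nbrs.any():                        # fall back to the closest point
            nbrs = dist == dist.min()
        v = float(v_hats[nbrs].mean())
        pi = pi_hats[nbrs].mean(axis=0)
        return v, pi / pi.sum()                   # averaged policy, renormalized

    return model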

Exploration Module: Random Sampling Policy. In the above supervised learning module, the sampled states for nearest neighbor regression should be $(h, K)$-representative. In other words, to satisfy the exploration property, the exploration module must visit a set of $(h, K)$-representative states within a finite expected number of steps. We show that a uniformly random sampling policy achieves this. Let $N(h)$ be the $h/2$-covering number of the compact state space.

Proposition 7 (Informal Statement, Proposition 13, Appendix E).

Under appropriate regularity conditions, with $h$ and $K$ chosen according to the desired estimation errors $\varepsilon_v$ and $\varepsilon_p$ for the value and policy, the expected number of steps to obtain a set of $(h, K)$-representative states under the random sampling policy is upper bounded by

$$B(\tau, \varepsilon_v, \varepsilon_p) = O\big(K N(h) \log N(h)\big).$$
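To illustrate the coupon-collector flavor of this bound, the small simulation below (our own illustration, with a grid covering of the unit cube $[0,1]^d$ standing in for the covering by balls) counts how many uniformly random draws are needed until every covering cell holds at least $K$ samples.

import numpy as np

def steps_until_representative(d=2, h=0.25, K=3, seed=0, max_steps=1_000_000):
    """Uniformly sample states in [0, 1]^d until every grid cell of side h has >= K samples."""
    rng = np.random.default_rng(seed)
    cells_per_axis = int(np.ceil(1.0 / h))
    counts = np.zeros((cells_per_axis,) * d, dtype=int)    # visit counts per covering cell
    for t in range(1, max_steps + 1):
        s = rng.random(d)                                  # one uniformly random state
        idx = tuple(np.minimum((s / h).astype(int), cells_per_axis - 1))
        counts[idx] += 1
        if counts.min() >= K:                              # (h, K)-representative reached
            return t
    return max_steps

# e.g. steps_until_representative(d=2, h=0.25, K=3) typically returns on the order of a hundred.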

Convergence Guarantees and Sample Complexity of the Instance. For this instance of EIS, we have shown that each module satisfies the desired properties. Therefore, the convergence result stated in Theorem 2 holds for this specific instance. Below we make this result explicit, providing concrete bounds on the estimation errors and sample complexity. In the following, the $c$'s denote appropriate constants. Please refer to Appendix E for details.

Theorem 8 (Informal Statement, Theorem 14, Appendix E).

For a given $\rho \in (0,1)$, there exist appropriately chosen parameters for this instance such that:

  1. The output $f_L = (V_L, \pi_L)$ at the end of the $L$-th iteration satisfies
$$\mathbb{E}\Big[\big\|V_L - V^*\big\|_\infty\Big] \leq V_{\max}\,\rho^L,$$
$$\mathbb{E}\Big[D_{\textup{KL}}\big(\pi_L(\cdot|s)\,\|\,P_\tau^*(\cdot|s)\big)\Big] \leq \frac{cV_{\max}}{4}\,\rho^L, \quad \forall\, s \in \mathcal{S}.$$
  2. With probability at least $1-\delta$, the above result is achieved with a sample complexity of
$$\sum_{l=1}^{L} c'\,\log\frac{1}{\tau\rho} \cdot \frac{1}{\tau^{2}\rho^{4l}} \cdot \log\frac{N(\rho^{l})}{\rho^{l}} \cdot N(\rho^{l}) \cdot \log N(\rho^{l}) \cdot \log\frac{L}{\delta}.$$
  3. In particular, if $\mathcal{S}$ is a unit volume hypercube in $\mathbb{R}^d$, then the total sample complexity to achieve an $\varepsilon$-error value function and policy is given by
$$O\Big(\log\frac{1}{\tau\rho} \cdot \frac{1}{\tau^{2}\varepsilon^{d+4}} \cdot \log\big(\tfrac{1}{\varepsilon}\big)^{4} \cdot \log\frac{1}{\delta}\Big).$$

Theorem 8 states that for a unit hypercube, the sample complexity of this instance of EIS scales as $\widetilde{O}\big(\varepsilon^{-(4+d)}\big)$ (omitting logarithmic factors). Note that the minimax lower bound in Theorem 4 scales as $\widetilde{\Omega}\big(\varepsilon^{-(2+d)}\big)$. Hence, in terms of the dependence on the dimension, the instance we consider here is nearly optimal. We note that the $\widetilde{O}\big(\varepsilon^{-(4+d)}\big)$ sample complexity results from two parts: MCTS contributes a factor scaling as $\varepsilon^{-2}$ due to simulating the search tree, while nearest neighbor regression requires $\varepsilon^{-(2+d)}$ samples due to the need for sufficiently many good neighbors. Obtaining tighter bounds with a potentially more powerful improvement module or supervised learning module, such as neural networks, is an interesting future avenue.

7 Conclusion

In this paper, we take theoretical steps towards understanding reinforcement learning for zero-sum turn-based Markov games. We develop the Explore-Improve-Supervise (EIS) method with three intuitive modules intertwined carefully. Such an abstraction of three key modules allows us to isolate the fundamental principles from the implementation details. Importantly, we identify conditions for successfully finding the optimal solutions, backed by a concrete instance satisfying those conditions. Overall, the abstraction and the generic properties developed in this paper could serve as some guidelines, with the potential of finding broader applications with different instantiations. Finally, it would be interesting to extend this framework to general Markov games with simultaneous moves. We believe the generic modeling techniques in Section 4 could be applied, but a key challenge is to develop an improvement module with rigorous non-asymptotic guarantees that satisfies the desired property. We believe that addressing this challenge and formally establishing the framework is a fruitful future direction.

References

  • Bowling & Veloso (2001) Bowling, M. and Veloso, M. Rational and convergent learning in stochastic games. In International joint conference on artificial intelligence, volume 17, pp.  1021–1026. Lawrence Erlbaum Associates Ltd, 2001.
  • Brafman & Tennenholtz (2002) Brafman, R. I. and Tennenholtz, M. R-max-a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3(Oct):213–231, 2002.
  • Hansen et al. (2013) Hansen, T. D., Miltersen, P. B., and Zwick, U. Strategy iteration is strongly polynomial for 2-player turn-based stochastic games with a constant discount factor. J. ACM, 60(1):1:1–1:16, February 2013. ISSN 0004-5411.
  • Hu & Wellman (2003) Hu, J. and Wellman, M. P. Nash q-learning for general-sum stochastic games. Journal of machine learning research, 4(Nov):1039–1069, 2003.
  • Hu et al. (1998) Hu, J., Wellman, M. P., et al. Multiagent reinforcement learning: theoretical framework and an algorithm. In ICML, volume 98, pp.  242–250. Citeseer, 1998.
  • Jia et al. (2019) Jia, Z., Yang, L. F., and Wang, M. Feature-based q-learning for two-player stochastic games, 2019.
  • Kaufmann & Koolen (2017) Kaufmann, E. and Koolen, W. M. Monte-carlo tree search by best arm identification. In Advances in Neural Information Processing Systems, pp. 4897–4906, 2017.
  • Kearns et al. (2002) Kearns, M., Mansour, Y., and Ng, A. Y. A sparse sampling algorithm for near-optimal planning in large markov decision processes. Machine learning, 49(2-3):193–208, 2002.
  • Kocsis et al. (2006) Kocsis, L., Szepesvári, C., and Willemson, J. Improved Monte-Carlo search. Univ. Tartu, Estonia, Tech. Rep, 2006.
  • Kumar & Shiau (1981) Kumar, P. R. and Shiau, T.-H. Existence of value and randomized strategies in zero-sum discrete-time stochastic dynamic games. SIAM Journal on Control and Optimization, 19(5):617–634, 1981.
  • Lagoudakis & Parr (2002) Lagoudakis, M. G. and Parr, R. Value function approximation in zero-sum markov games. In Proceedings of the Eighteenth conference on Uncertainty in artificial intelligence, pp.  283–292. Morgan Kaufmann Publishers Inc., 2002.
  • Lillicrap et al. (2016) Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., and Wierstra, D. Continuous control with deep reinforcement learning. 2016.
  • Littman (1994) Littman, M. L. Markov games as a framework for multi-agent reinforcement learning. In Machine Learning Proceedings 1994, pp.  157–163. Elsevier, 1994.
  • Littman (2001a) Littman, M. L. Friend-or-foe q-learning in general-sum games. In ICML, volume 1, pp.  322–328, 2001a.
  • Littman (2001b) Littman, M. L. Value-function reinforcement learning in markov games. Cognitive Systems Research, 2(1):55–66, 2001b.
  • Maitra & Parthasarathy (1970) Maitra, A. and Parthasarathy, T. On stochastic games. Journal of Optimization Theory and Applications, 5(4):289–300, 1970.
  • Maitra & Parthasarathy (1971) Maitra, A. and Parthasarathy, T. On stochastic games, ii. Journal of Optimization Theory and Applications, 8(2):154–160, 1971.
  • Mnih et al. (2015) Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015.
  • Parthasarathy (1973) Parthasarathy, T. Discounted, positive, and noncooperative stochastic games. International Journal of Game Theory, 2(1):25–37, 1973.
  • Patek (1997) Patek, S. D. Stochastic Shortest Path Games: Theory and Algorithms. PhD dissertation, Massachusetts Institute of Technology, 1997.
  • Perolat et al. (2015) Perolat, J., Scherrer, B., Piot, B., and Pietquin, O. Approximate dynamic programming for two-player zero-sum markov games. In Proceedings of the 32nd International Conference on Machine Learning, pp.  1321–1329, 2015.
  • Pérolat et al. (2016) Pérolat, J., Piot, B., Geist, M., Scherrer, B., and Pietquin, O. Softened approximate policy iteration for markov games. In ICML 2016-33rd International Conference on Machine Learning, 2016.
  • Schulman et al. (2015) Schulman, J., Levine, S., Abbeel, P., Jordan, M., and Moritz, P. Trust region policy optimization. In International Conference on Machine Learning, pp. 1889–1897, 2015.
  • Schulman et al. (2017) Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
  • Shah & Xie (2018) Shah, D. and Xie, Q. Q-learning with nearest neighbors. In Advances in Neural Information Processing Systems 31, pp. 3115–3125. 2018.
  • Shah et al. (2019) Shah, D., Xie, Q., and Xu, Z. On reinforcement learning using Monte Carlo tree search with supervised learning: Non-asymptotic analysis. arXiv preprint arXiv:1902.05213, 2019.
  • Shapley (1953) Shapley, L. S. Stochastic games. Proceedings of the National Academy of Sciences, 39(10):1095–1100, 1953.
  • Sidford et al. (2019) Sidford, A., Wang, M., Yang, L. F., and Ye, Y. Solving discounted stochastic two-player games with near-optimal time and sample complexity, 2019.
  • Silver et al. (2016) Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
  • Silver et al. (2017a) Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., et al. Mastering chess and Shogi by self-play with a general reinforcement learning algorithm. arXiv preprint arXiv:1712.01815, 2017a.
  • Silver et al. (2017b) Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., et al. Mastering the game of Go without human knowledge. Nature, 550(7676):354, 2017b.
  • Stone (1982) Stone, C. J. Optimal global rates of convergence for nonparametric regression. The Annals of Statistics, pp.  1040–1053, 1982.
  • Szepesvári & Littman (1996) Szepesvári, C. and Littman, M. L. Generalized Markov decision processes: Dynamic-programming and reinforcement-learning algorithms. In Proceedings of International Conference of Machine Learning, volume 96, 1996.
  • Tsybakov (2009) Tsybakov, A. B. Introduction to Nonparametric Estimation. Springer Series in Statistics. Springer, 2009.
  • Wainwright (2019) Wainwright, M. J. High-dimensional statistics: A non-asymptotic viewpoint, volume 48. Cambridge University Press, 2019.
  • Wei et al. (2017) Wei, C.-Y., Hong, Y.-T., and Lu, C.-J. Online reinforcement learning in stochastic games. In Advances in Neural Information Processing Systems, pp. 4987–4997, 2017.
  • Yang et al. (2019a) Yang, Y., Zhang, G., Xu, Z., and Katabi, D. Harnessing structures for value-based planning and reinforcement learning. arXiv preprint arXiv:1909.12255, 2019a.
  • Yang et al. (2019b) Yang, Z., Xie, Y., and Wang, Z. A theoretical analysis of deep q-learning, 2019b.

Appendix A Preliminary Facts

The following inequalities are used for developing our technical results:

Jensen’s Inequality: Let $X$ be a random variable and $\phi$ a convex function; then $\phi(\mathbb{E}[X]) \leq \mathbb{E}[\phi(X)]$.

Pinsker’s Inequality: Let $\mu$ and $\nu$ be two probability distributions; then the total variation distance $\textup{TV}(\mu,\nu)$ and the KL divergence $D_{\text{KL}}(\mu\|\nu)$ satisfy the bound

$$\textup{TV}(\mu,\nu) \leq \sqrt{\frac{1}{2}D_{\text{KL}}(\mu\|\nu)}.$$

Note that if $\mu$ and $\nu$ are discrete distributions, then $\textup{TV}(\mu,\nu) = \frac{1}{2}\sum_{\omega\in\Omega}|\mu(\omega)-\nu(\omega)| = \frac{1}{2}\|\mu-\nu\|_1$, where $\|\cdot\|_1$ denotes the total variation (or $\ell_1$) norm.

Reverse Pinsker’s Inequality: If $\mu$ and $\nu$ are discrete distributions defined on a common finite set $\Omega$, then

$$\textup{TV}(\mu,\nu) \geq \sqrt{\frac{\alpha_\nu}{2}D_{\text{KL}}(\mu\|\nu)},$$

where $\alpha_\nu := \min_{\omega\in\Omega}\nu(\omega)$.
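As a quick numerical sanity check of the two bounds (our own example, with the KL divergence measured in nats), take $\mu = \mathrm{Bernoulli}(1/2)$ and $\nu = \mathrm{Bernoulli}(1/4)$:
$$\textup{TV}(\mu,\nu) = \tfrac{1}{4}, \qquad D_{\text{KL}}(\mu\|\nu) = \tfrac{1}{2}\log 2 + \tfrac{1}{2}\log\tfrac{2}{3} = \tfrac{1}{2}\log\tfrac{4}{3} \approx 0.144,$$
$$\sqrt{\tfrac{1}{2}D_{\text{KL}}(\mu\|\nu)} \approx 0.268 \geq \tfrac{1}{4} \geq \sqrt{\tfrac{\alpha_\nu}{2}D_{\text{KL}}(\mu\|\nu)} \approx 0.134, \quad \text{with } \alpha_\nu = \tfrac{1}{4}.$$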

Log-sum inequality: Suppose that $a_i \geq 0$ and $b_i \geq 0$ for all $i$. We have

$$\Big(\sum_i a_i\Big)\log\frac{\sum_i a_i}{\sum_i b_i} \leq \sum_i a_i \log\frac{a_i}{b_i}.$$

Appendix B Proof of Theorem 2

Proof.

With the three detailed properties, the proof is conceptually straightforward. At each iteration, the improvement module produces better estimates for the explored states, by factors of $\zeta_v$ and $\zeta_p$. The exploration continues until one of the desired representative sets in $\mathcal{E}$ has been visited, and the exploration property guarantees that the exploration time is finite in expectation. The current iteration then ends by calling the supervised learning module to generalize the improvement to the entire state space. In what follows, we make these statements formal.

Let us first introduce some notation. We will use the term iteration to refer to a complete round of improvement, exploration and supervised learning (cf. Line 2 of Algorithm 1). In general, at each iteration, we use a superscript $(l)$ to denote quantities relevant to the $l$-th iteration, except that for the supervised learning module, we follow the convention in the paper and use a subscript $l$, i.e., $f_{l}=(V_{l},\pi_{l})$. We denote by $Z^{(l)}$ all the information generated during the $l$-th iteration. Let $\{\mathcal{F}^{(l)}\}$ be the filtration generated by the stochastic process $\{Z^{(l)}\}$, where the randomness comes from the environment and any randomness that may be used in the three modules. Let $\omega_{v}^{(l)}$ and $\omega_{p}^{(l)}$ be the estimation errors of the model at the beginning of the iteration, i.e.,

ωv(l)\displaystyle\omega_{v}^{(l)} 𝔼[Vl1V],\displaystyle\triangleq\mathbb{E}\Big{[}\big{\|}V_{{l-1}}-V^{*}\big{\|}_{\infty}\big{]},
ωp(l)\displaystyle\omega_{p}^{(l)} sups𝒮𝔼[DKL(πl1(|s)Pτ(|s))].\displaystyle\triangleq\sup_{s\in\mathcal{S}}\mathbb{E}\Big{[}D_{\textup{KL}}\big{(}{\pi}_{l-1}(\cdot|s)\|P_{\tau}^{*}(\cdot|s)\big{)}\Big{]}.

We use $\mathcal{D}^{(l)}=\big\{\big(s_{i},\hat{V}^{(l)}(s_{i}),\hat{\pi}^{(l)}(\cdot|s_{i})\big)\big\}_{i=1}^{n_{l}}$ to denote the set of training data generated by the exploration module and querying the improvement module during the $l$-th iteration. Let $S^{(l)}=\{s_{i}\}_{i=1}^{n_{l}}$ be the set of states visited by the exploration module. Correspondingly, the estimation errors for the value function and the optimal policy after querying the improvement module are denoted by $\varepsilon_{v}^{(l)}$ and $\varepsilon_{p}^{(l)}$, respectively:

εv(l)\displaystyle\varepsilon_{v}^{(l)} =supsS(l)𝔼[|V^(l)(s)V(s)|],\displaystyle=\sup_{s\in S^{(l)}}\mathbb{E}\Big{[}|\hat{V}^{(l)}(s)-V^{*}(s)|\Big{]},
εp(l)\displaystyle\varepsilon_{p}^{(l)} =supsS(l)𝔼[DKL(π^(l)(|s)Pτ(|s))].\displaystyle=\sup_{s\in S^{(l)}}\mathbb{E}\Big{[}D_{\textup{KL}}\big{(}\hat{\pi}^{(l)}(\cdot|s)\|P_{\tau}^{*}(\cdot|s)\big{)}\Big{]}.

At the ll-th iteration, the supervised learning modules takes the outputs of the improvement module, 𝒟(l)\mathcal{D}^{(l)}, as the training data. Let ξv(l)\xi_{v}^{(l)} and ξp(l)\xi_{p}^{(l)} denote the estimation errors for the new model fl=(Vl,πl)f_{l}=(V_{l},\pi_{l}), after querying the supervised learning module:

ξv(l)\displaystyle\xi_{v}^{(l)} =𝔼[VlV],\displaystyle=\mathbb{E}\Big{[}\big{\|}V_{{l}}-V^{*}\big{\|}_{\infty}\Big{]},
ξp(l)\displaystyle\xi_{p}^{(l)} =supsS𝔼[DKL(πl(|s)Pτ(|s))].\displaystyle=\sup_{s\in S}\mathbb{E}\Big{[}D_{\textup{KL}}\big{(}\pi_{{l}}(\cdot|s)\|P_{\tau}^{*}(\cdot|s)\big{)}\Big{]}.

Note that by definition, ξv(l)=ωv(l+1)\xi_{v}^{(l)}=\omega_{v}^{(l+1)} and ξp(l)=ωp(l+1)\xi_{p}^{(l)}=\omega_{p}^{(l+1)}.

First, the improvement property of the improvement module (cf. Property 1) implies that

εv(l)\displaystyle\varepsilon^{(l)}_{v} ζvωv(l),\displaystyle\leq\zeta_{v}\omega_{v}^{(l)}, (7)
εp(l)\displaystyle\varepsilon^{(l)}_{p} ζpωp(l).\displaystyle\leq\zeta_{p}\omega_{p}^{(l)}. (8)

For the supervised learning module, according to the generalization property (cf. Property 2), when the size of the training set $n_{l}$ is sufficiently large and the sampled states $S^{(l)}=\{s_{i}\}_{i=1}^{n_{l}}$ are representative of the state space, the same order of accuracy as in the training data is generalized to the entire state space. For now, let us assume that this is the case; we will come back to verify that the generalization requirement in Property 2 can indeed be satisfied by $S^{(l)}$. Then, the following bounds hold:

ξv(l)\displaystyle\xi_{v}^{(l)} cvεv(l),\displaystyle\leq c_{v}\varepsilon^{(l)}_{v},
ξp(l)\displaystyle\xi_{p}^{(l)} cpεp(l).\displaystyle\leq c_{p}\varepsilon^{(l)}_{p}.

Hence

ωv(l+1)=ξv(l)cvζvωv(l),\displaystyle\omega_{v}^{(l+1)}=\xi_{v}^{(l)}\leq c_{v}\zeta_{v}\omega_{v}^{(l)},
ωp(l+1)=ξp(l)cpζpωp(l).\displaystyle\omega_{p}^{(l+1)}=\xi_{p}^{(l)}\leq c_{p}\zeta_{p}\omega_{p}^{(l)}.

Therefore, when querying the improvement module, if we select the improvement factors to be

ζv=ρ/cv and ζp=ρ/cp,\zeta_{v}=\rho/c_{v}\textrm{ and }\zeta_{p}=\rho/c_{p}, (9)

then we have

ωv(l+1)=ξv(l)ρωv(l),\displaystyle\omega_{v}^{(l+1)}=\xi_{v}^{(l)}\leq\rho\omega_{v}^{(l)}, (10)
ωp(l+1)=ξp(l)ρωp(l).\displaystyle\omega_{p}^{(l+1)}=\xi_{p}^{(l)}\leq\rho\omega_{p}^{(l)}. (11)

It is worth noting that $c_{v}$ and $c_{p}$ are typically larger than $1$ (cf. Property 2): a reasonable supervised learning model may generalize the same order of accuracy as the training data, but is unlikely to achieve a smaller error; hence, $\zeta_{v}$ and $\zeta_{p}$ are required to be smaller than $1$ in Property 1 so that $\rho<1$.

By definition, ωv(1)=C0,v\omega_{v}^{(1)}=C_{0,v} and ωp(1)=C0,p\omega_{p}^{(1)}=C_{0,p}. Therefore, we have the desired inequalities:

𝔼[VLV]C0,vρL\displaystyle\mathbb{E}\Big{[}\big{\|}V_{{L}}-V^{*}\big{\|}_{\infty}\big{]}\leq C_{0,v}\rho^{L}
𝔼[DKL(πL(|s)Pτ(|s))]C0,pρL,s𝒮.\displaystyle\mathbb{E}\Big{[}D_{\textup{KL}}\big{(}{\pi}_{L}(\cdot|s)\|P_{\tau}^{*}(\cdot|s)\big{)}\Big{]}\leq C_{0,p}\rho^{L},\quad\forall s\in\mathcal{S}.

Finally, to complete the proof, as mentioned before, we need to verify that enough representative states can be sampled at each iteration within finitely many time steps. This is indeed guaranteed by the exploration property. In particular, note that at the $l$-th iteration, we would like to sample enough representative states whose value and policy estimates have errors $\zeta_{v}\omega_{v}^{(l)}$ and $\zeta_{p}\omega_{p}^{(l)}$, respectively (cf. Eqs. (7) and (8)). By a recursive argument (cf. Eqs. (10) and (11)), it is not hard to see that at the $l$-th iteration, we need to query the exploration module until the sampled states, $S^{(l)}=\{s_{i}\}_{i=1}^{n_{l}}$, contain one of the representative sets in $\mathcal{E}(\tau,\zeta_{v}C_{0,v}\rho^{l-1},\zeta_{p}C_{0,p}\rho^{l-1})$, i.e., we stop querying the exploration module at the first time $n_{l}$ when the following holds:

S^(τ,ζvC0,vρl1,ζpC0,pρl1) such that S^{si}i=1nl,\exists\>\hat{S}\in\mathcal{E}(\tau,\zeta_{v}C_{0,v}\rho^{l-1},\zeta_{p}C_{0,p}\rho^{l-1})\textrm{ such that }\hat{S}\subset\{s_{i}\}_{i=1}^{n_{l}},

where $\zeta_{v}$ and $\zeta_{p}$ are given by Eq. (9). From the exploration property, we know that $\mathbb{E}[T(\tau,\zeta_{v}C_{0,v}\rho^{l-1},\zeta_{p}C_{0,p}\rho^{l-1})]$ is finite, which implies that $n_{l}$ is finite with probability one (and is small with high probability, as quantified in Lemma 9). Therefore, we are guaranteed that the training data $\mathcal{D}^{(l)}$ contains one of the representative sets, and hence the supervised learning module generalizes at each iteration. This completes the proof of Theorem 2. ∎
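To make the contraction argument concrete, the following minimal sketch simulates the recursion in Eqs. (7)–(11) under assumed illustrative constants $C_{0,v}$, $C_{0,p}$, $c_{v}$, $c_{p}$ and $\rho$; it illustrates that $L=O(\log(1/\varepsilon)/\log(1/\rho))$ iterations suffice for $\varepsilon$-accuracy.

```python
# Sketch of the error-contraction recursion in the proof of Theorem 2,
# with assumed illustrative constants (not values from the paper).
import math

C0_v, C0_p = 1.0, 1.0                   # assumed initial errors omega^{(1)}
c_v, c_p = 2.0, 3.0                     # assumed generalization constants (Property 2)
rho = 0.5                               # target per-iteration contraction factor
zeta_v, zeta_p = rho / c_v, rho / c_p   # improvement factors, Eq. (9)

eps = 1e-3
L = math.ceil(math.log(C0_v / eps) / math.log(1.0 / rho))

omega_v, omega_p = C0_v, C0_p
for _ in range(L):
    # improvement module (Property 1): errors shrink by zeta on explored states
    eps_v, eps_p = zeta_v * omega_v, zeta_p * omega_p        # Eqs. (7)-(8)
    # supervised learning (Property 2): errors grow by at most c when generalizing
    omega_v, omega_p = c_v * eps_v, c_p * eps_p              # = rho * previous error

assert omega_v <= eps and omega_p <= eps
print(f"after L={L} iterations: omega_v={omega_v:.2e}, omega_p={omega_p:.2e}")
```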

Appendix C Proof of Proposition 3

To prove Proposition 3, we first establish the following useful lemma:

Lemma 9.

Consider the exploration module and suppose that 𝔼[T(τ,ε1,v,ε1,p)]B(τ,ε1,v,ε1,p)\mathbb{E}[T(\tau,\varepsilon_{1,v},\varepsilon_{1,p})]\leq B(\tau,\varepsilon_{1,v},\varepsilon_{1,p}). Then, with probability at least 1δ1-\delta,

T(τ,ε1,v,ε1,p)eB(τ,ε1,v,ε1,p)log1δ.T(\tau,\varepsilon_{1,v},\varepsilon_{1,p})\leq e\cdot B(\tau,\varepsilon_{1,v},\varepsilon_{1,p})\cdot\log\frac{1}{\delta}. (12)
Proof of Lemma 9.

Consider a total of $n=eB(\tau,\varepsilon_{1,v},\varepsilon_{1,p})\log\frac{1}{\delta}$ time steps. All the states, $\{s_{i}\}_{i=1}^{n}$, are sampled by querying the exploration module. Let us divide the $n$ time steps into $M\triangleq\log(1/\delta)$ segments, each consisting of $h\triangleq eB(\tau,\varepsilon_{1,v},\varepsilon_{1,p})$ states. Denote by $\mathcal{S}(m)$ the set of states in the $m$-th segment, i.e., $\mathcal{S}(m)=\{s_{i}\}_{i=(m-1)h+1}^{mh}$. The key idea of the proof is to argue that, with high probability, at least one of the sets $\mathcal{S}(m)$, $m=1,2,\dots,M$, contains a representative set in $\mathcal{E}(\tau,\varepsilon_{1,v},\varepsilon_{1,p})$.

Denote by $E_{m}$ the event that the $m$-th segment does not contain any of the representative sets, i.e.,

E_{m}=\{\>\nexists\>\hat{S}\in\mathcal{E}(\tau,\varepsilon_{1,v},\varepsilon_{1,p})\textrm{ such that }\hat{S}\subseteq\mathcal{S}(m)\}.

Let $\mathcal{F}_{m}$ be the filtration containing all information up to the end of segment $m$. Since $\mathbb{E}[T(\tau,\varepsilon_{1,v},\varepsilon_{1,p})]\leq B(\tau,\varepsilon_{1,v},\varepsilon_{1,p})$, by Markov's inequality, we have

(T(τ,ε1,v,ε1,p)h+1)𝔼[T(τ,ε1,v,ε1,p)]h+1B(τ,ε1,v,ε1,p)h=1e.\mathbb{P}\big{(}T(\tau,\varepsilon_{1,v},\varepsilon_{1,p})\geq h+1\big{)}\leq\frac{\mathbb{E}[T(\tau,\varepsilon_{1,v},\varepsilon_{1,p})]}{h+1}\leq\frac{B(\tau,\varepsilon_{1,v},\varepsilon_{1,p})}{h}=\frac{1}{e}.

This then implies that

(Em|m1)1e,m[M].\mathbb{P}(E_{m}|\mathcal{F}_{m-1})\leq\frac{1}{e},\quad m\in[M].

Therefore,

\mathbb{P}\Big(T(\tau,\varepsilon_{1,v},\varepsilon_{1,p})>e\cdot B(\tau,\varepsilon_{1,v},\varepsilon_{1,p})\cdot\log\frac{1}{\delta}\Big)\leq\mathbb{P}(\>\forall\>m\in[M],\>E_{m}\textrm{ occurs})\leq\Big(\frac{1}{e}\Big)^{M}=\delta,

which completes the proof of Lemma 9. ∎
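The following minimal Monte Carlo sketch illustrates Lemma 9 for an assumed exploration process in which the hitting time $T$ is geometric with mean $B$ (e.g., i.i.d. uniform sampling until a designated cell is hit); the specific process is an illustrative assumption, not the one analyzed above.

```python
# Monte Carlo illustration of Lemma 9: for an assumed geometric hitting time T
# with E[T] = B, check that T <= e * B * log(1/delta) with prob. at least 1 - delta.
import math
import numpy as np

rng = np.random.default_rng(1)

B = 50                          # assumed bound on E[T] (here E[T] = B exactly)
delta = 0.05
threshold = math.e * B * math.log(1.0 / delta)

trials = 20000
T = rng.geometric(1.0 / B, size=trials)   # geometric hitting times on {1, 2, ...}

failure_rate = np.mean(T > threshold)
print(f"threshold = {threshold:.1f}, empirical P(T > threshold) = {failure_rate:.4f}"
      f" (target <= {delta})")
assert failure_rate <= delta
```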

Proof of Proposition 3.

With Lemma 9 in hand, we are now ready to prove Proposition 3. This is achieved by counting the sample complexity of each of the $L$ iterations. As discussed in the proof of Theorem 2, at the $l$-th iteration, we query the exploration module until the sampled states, $S^{(l)}=\{s_{i}\}_{i=1}^{n_{l}}$, contain one of the representative sets in $\mathcal{E}(\tau,C_{0,v}\rho^{l}/c_{v},C_{0,p}\rho^{l}/c_{p})$. For each explored state, a query of the improvement module incurs a deterministic sample complexity of $\kappa\big(\tau,C_{0,v}\rho^{l-1},C_{0,p}\rho^{l-1},\frac{\rho}{c_{v}},\frac{\rho}{c_{p}}\big)$, for the required improvement factors $\zeta_{v}=\rho/c_{v}$ and $\zeta_{p}=\rho/c_{p}$. Applying Lemma 9 with confidence parameter $\delta/L$, we know

(nleB(τ,C0,vρlcv,C0,pρlcp)logLδ)1δL.\mathbb{P}\Big{(}n_{l}\leq e\cdot B\Big{(}\tau,\frac{C_{0,v}\rho^{l}}{c_{v}},\frac{C_{0,p}\rho^{l}}{c_{p}}\Big{)}\cdot\log\frac{L}{\delta}\Big{)}\geq 1-\frac{\delta}{L}.

That is, with probability at most δ/L{\delta}/{L}, the sample complexity of the ll-th iteration is larger than

κ(τ,C0,vρl1,C0,pρl1,ρcv,ρcp)B(τ,C0,vρlcv,C0,pρlcp)elogLδ.\kappa\Big{(}\tau,C_{0,v}\rho^{l-1},C_{0,p}\rho^{l-1},\frac{\rho}{c_{v}},\frac{\rho}{c_{p}}\Big{)}\cdot B\Big{(}\tau,\frac{C_{0,v}\rho^{l}}{c_{v}},\frac{C_{0,p}\rho^{l}}{c_{p}}\Big{)}\cdot e\cdot\log\frac{L}{\delta}.

Finally, applying union bound over the LL iterations, we have

(l[L] such that nl>eB(τ,C0,vρlcv,C0,pρlcp)logLδ)\displaystyle\mathbb{P}\Big{(}\>\exists\>l\in[L]\textrm{ such that }n_{l}>e\cdot B\Big{(}\tau,\frac{C_{0,v}\rho^{l}}{c_{v}},\frac{C_{0,p}\rho^{l}}{c_{p}}\Big{)}\cdot\log\frac{L}{\delta}\Big{)}
\displaystyle\leq l=1L(nl>eB(τ,C0,vρlcv,C0,pρlcp)logLδ)\displaystyle\sum_{l=1}^{L}\mathbb{P}\Big{(}n_{l}>e\cdot B\Big{(}\tau,\frac{C_{0,v}\rho^{l}}{c_{v}},\frac{C_{0,p}\rho^{l}}{c_{p}}\Big{)}\cdot\log\frac{L}{\delta}\Big{)}
\displaystyle\leq LδL\displaystyle L\cdot\frac{\delta}{L}
=\displaystyle= δ.\displaystyle\delta.

Therefore, with probability at least 1δ1-\delta, for every l[L]l\in[L], nleB(τ,C0,vρlcv,C0,pρlcp)logLδn_{l}\leq e\cdot B\Big{(}\tau,\frac{C_{0,v}\rho^{l}}{c_{v}},\frac{C_{0,p}\rho^{l}}{c_{p}}\Big{)}\cdot\log\frac{L}{\delta}. Equivalently, with probability at least 1δ1-\delta, the total sample complexity is upper bounded by

l=1Lκ(τ,C0,vρl1,C0,pρl1,ρcv,ρcp)B(τ,C0,vρlcv,C0,pρlcp)elogLδ.\sum_{l=1}^{L}\kappa\Big{(}\tau,C_{0,v}\rho^{l-1},C_{0,p}\rho^{l-1},\frac{\rho}{c_{v}},\frac{\rho}{c_{p}}\Big{)}\cdot B\Big{(}\tau,\frac{C_{0,v}\rho^{l}}{c_{v}},\frac{C_{0,p}\rho^{l}}{c_{p}}\Big{)}\cdot e\cdot\log\frac{L}{\delta}.

Appendix D Proof of Theorem 4

The recent work of Shah & Xie (2018) establishes a lower bound on the sample complexity of reinforcement learning algorithms for MDPs. We follow a similar argument to establish a lower bound on the sample complexity for two-player zero-sum Markov games, and provide the proof for completeness. The key idea is to connect the problem of estimating the value function to the problem of non-parametric regression, and then leverage known minimax lower bounds for the latter. In particular, we show that a class of non-parametric regression problems can be embedded in Markov game problems, so any algorithm for the latter can be used to solve the former. Prior work on non-parametric regression (Tsybakov, 2009; Stone, 1982) establishes that a certain number of observations is necessary for any algorithm to achieve a given accuracy, hence leading to a corresponding necessary condition on the sample size for estimating the value function of a Markov game. We now provide the details.

Step 1. Non-parametric regression

Consider the following non-parametric regression problem: Let 𝒮:=[0,1]d\mathcal{S}:=[0,1]^{d} and assume that we have TT independent pairs of random variables (x1,y1),,(xT,yT)(x_{1},y_{1}),\ldots,(x_{T},y_{T}) such that

𝔼[yt|xt]=f(xt),xt𝒮\mathbb{E}\left[y_{t}|x_{t}\right]=f(x_{t}),\qquad x_{t}\in\mathcal{S} (13)

where $x_{t}\sim\text{uniform}(\mathcal{S})$ and $f:\mathcal{S}\to\mathbb{R}$ is the unknown regression function. Suppose that the conditional distribution of $y_{t}$ given $x_{t}=x$ is a Bernoulli distribution with mean $f(x)$. We also assume that $f$ is $1$-Lipschitz continuous with respect to the Euclidean norm, i.e.,

|f(x)-f(x_{0})|\leq\|x-x_{0}\|_{2},\quad\forall x,x_{0}\in\mathcal{S}.

Let $\mathcal{F}$ be the collection of all $1$-Lipschitz continuous functions on $\mathcal{S}$, i.e.,

\mathcal{F}=\left\{h\;|\;\text{$h$ is a $1$-Lipschitz function on $\mathcal{S}$}\right\}.

The goal is to estimate ff given the observations (x1,y1),,(xT,yT)(x_{1},y_{1}),\ldots,(x_{T},y_{T}) and the prior knowledge that ff\in\mathcal{F}.

It is easy to verify that the above problem is a special case of the non-parametric regression problem considered in the work by Stone (1982) (in particular, Example 2 therein). Let f^T\hat{f}_{T} denote an arbitrary (measurable) estimator of ff based on the training samples (x1,y1),,(xT,yT)(x_{1},y_{1}),\ldots,(x_{T},y_{T}). By Theorem 1 in Stone (1982), we have the following result: there exists a c>0c>0 such that

limTinff^Tsupf(f^Tfc(logTT)12+d)=1,\displaystyle\lim_{T\to\infty}\inf_{\hat{f}_{T}}\sup_{f\in\mathcal{F}}\mathbb{P}\bigg{(}\big{\|}\hat{f}_{T}-f\big{\|}_{\infty}\geq c\Big{(}\frac{\log T}{T}\Big{)}^{\frac{1}{2+d}}\bigg{)}=1, (14)

where infimum is over all possible estimators f^T\hat{f}_{T}.

Translating this result to the non-asymptotic regime, we obtain the following theorem.

Theorem 10.

Under the above assumptions, for any $\delta\in(0,1)$, there exist constants $c>0$ and $T_{\delta}$ such that

inff^Tsupf(f^Tfc(logTT)12+d)δ,for all TTδ.\inf_{\hat{f}_{T}}\sup_{f\in\mathcal{F}}\mathbb{P}\bigg{(}\big{\|}\hat{f}_{T}-f\big{\|}_{\infty}\geq c\Big{(}\frac{\log T}{T}\Big{)}^{\frac{1}{2+d}}\bigg{)}\geq\delta,\qquad\text{for all $T\geq T_{\delta}$}.

Step 2. Two-player zero-sum Markov game

Consider a class of (degenerate) two-player zero-sum discounted Markov game (𝒮1,𝒮2,(\mathcal{S}_{1},\mathcal{S}_{2}, 𝒜1,𝒜2,r,P,γ)\mathcal{A}_{1},\mathcal{A}_{2},r,P,\gamma), where

𝒮1=[0,1/2]d,𝒮2=(1/2,1]d,\displaystyle\mathcal{S}_{1}=[0,1/2]^{d},\mathcal{S}_{2}=(1/2,1]^{d},
𝒜1=𝒜2=𝒜 is finite,\displaystyle\mathcal{A}_{1}=\mathcal{A}_{2}=\mathcal{A}\text{ is finite},
P(s,a) is supported on a single state,\displaystyle P(s,a)\text{ is supported on a single state,}
r(x,a)=r(x) for all a,\displaystyle r(x,a)=r(x)\text{ for all $a$},
γ=0.\displaystyle\gamma=0.

In words, the transition is deterministic, and the expected reward is independent of the action taken and depends only on the current state.

Let RtR_{t} be the observed reward at step tt. We assume that the distribution of RtR_{t} given xtx_{t} is Bernoulli(r(xt))\text{Bernoulli}\big{(}r(x_{t})\big{)}, independently of (x1,x2,,xt1)(x_{1},x_{2},\ldots,x_{t-1}). The expected reward function r(xt)=𝔼[R(xt)|xt]r(x_{t})=\mathbb{E}\left[R(x_{t})|x_{t}\right] is assumed to be 11-Lipschitz and bounded. It is easy to see that for all x𝒮x\in\mathcal{S}, a𝒜a\in\mathcal{A},

V(x)\displaystyle V^{*}(x) =r(x).\displaystyle=r(x). (15)

Step 3. Reduction from regression to a Markov game

Given a non-parametric regression problem as described in Step 1, we may reduce it to the problem of estimating the value function $V^{*}$ of the Markov game described in Step 2. To do this, we set

r(x)\displaystyle r(x) =f(x),x𝒮\displaystyle=f(x),\qquad\forall x\in\mathcal{S}

and

Rt\displaystyle R_{t} =yt,t=1,2,,T.\displaystyle=y_{t},\qquad t=1,2,\ldots,T.

In this case, it follows from equation (15) that the value function is given by $V^{*}=f$. Moreover, the expected reward function $r(\cdot)$ is $1$-Lipschitz, so the assumptions of the Markov game in Step 2 are satisfied. This reduction shows that the Markov game problem is at least as hard as the non-parametric regression problem, so a lower bound for the latter is also a lower bound for the former.
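The following minimal sketch makes the relabeling concrete: regression data $(x_{t},y_{t})$ are treated as the states and Bernoulli rewards of the degenerate game, so that $V^{*}=r=f$. The specific $1$-Lipschitz function $f$ below is an illustrative assumption.

```python
# Sketch of the Step-3 reduction: a regression instance relabelled as a
# degenerate Markov game with gamma = 0 and Bernoulli(r(x)) rewards, so V* = f.
import numpy as np

rng = np.random.default_rng(2)
d, T = 2, 200_000

def f(x):
    # an assumed 1-Lipschitz regression function on [0,1]^d with values in [0.25, 0.75]
    return 0.5 + 0.25 * np.sin(np.sum(x, axis=-1))

# regression data relabelled as a game trajectory: states x_t, observed rewards R_t = y_t
x = rng.uniform(0.0, 1.0, size=(T, d))
R = rng.binomial(1, f(x)).astype(float)       # Bernoulli(r(x_t)) rewards, gamma = 0

# V*(x) = r(x) = f(x); check the identity empirically on a small cell of the state space
cell = np.all((x > 0.4) & (x < 0.5), axis=1)
print(f"empirical mean reward on the cell: {R[cell].mean():.3f}")
print(f"f at the cell centre:              {f(np.array([0.45, 0.45])):.3f}")
```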

Applying Theorem 10 yields the following result: for any number δ(0,1)\delta\in(0,1), there exist some numbers c>0c>0 and Tδ>0T_{\delta}>0, such that

infV^TsupV[V^TVc(logTT)12+d]δ,for all TTδ.\inf_{\hat{V}_{T}}\sup_{V^{*}\in\mathcal{F}}\mathbb{P}\bigg{[}\big{\|}\hat{V}_{T}-V^{*}\big{\|}_{\infty}\geq c\left(\frac{\log T}{T}\right)^{\frac{1}{2+d}}\bigg{]}\geq\delta,\qquad\text{for all $T\geq T_{\delta}$}.

Consequently, for any reinforcement learning algorithm V^T\hat{V}_{T} and any sufficiently small ε>0\varepsilon>0, there exists a Markov game problem such that in order to achieve

[V^TV<ε]1δ,\mathbb{P}\Big{[}\big{\|}\hat{V}_{T}-V^{*}\big{\|}_{\infty}<\varepsilon\Big{]}\geq 1-\delta,

one must have

TCd(1ε)2+dlog(1ε),T\geq C^{\prime}d\left(\frac{1}{\varepsilon}\right)^{2+d}\log\left(\frac{1}{\varepsilon}\right),

where C>0C^{\prime}>0 is a constant.

Appendix E Details: A Concrete Instantiation of the Key Modules

In Section 6, we provide a sketch of our instantiation of the key modules and their informal properties. In this appendix, we close the gap by giving a detailed treatment of each of the three modules. We discuss each instantiation and its formal property in detail. Combining these results, we provide a precise statement of the sample complexity of the overall EIS algorithm.

E.1 Improvement Module: MCTS

Recall that the improvement module should be capable of providing improved estimates for both the value and policy functions, at the queried state ss. Since both the value and the policy are closely related to the QQ function, one simple approach to simultaneously produce improved estimates is to obtain better estimates of QQ^{*} first and then construct the improved estimates of value and policy from Q^\hat{Q}. We will take this approach in this example and use MCTS to obtain the QQ estimates.

MCTS is a class of popular search algorithms for sequential decision-making that builds search trees by randomly sampling the state space. It is also one of the key components underlying the success of AlphaGo Zero. Most variants of MCTS in the literature use some form of upper confidence bound (UCB) algorithm to select actions at each depth of the search tree. Since our focus is to demonstrate the improvement property, we employ a fixed $H$-depth MCTS, which takes the current model of the value function $V_{l}$ as input and outputs a value estimate $\hat{V}(s)$ for the root node $s$. The current value model $V_{l}$ is used to evaluate the leaf nodes at depth $H$ during the Monte Carlo simulations. This fixed-depth MCTS has been rigorously analyzed in Shah et al. (2019), with a non-asymptotic error bound for the root node when the state transitions are deterministic.

We refer readers to Shah et al. (2019) (precisely, Algorithm 2 therein) for the detailed pseudocode. We remark that, in principle, any other variant of MCTS with a precise error guarantee could be used instead; we choose the fixed $H$-depth variant here to provide a concrete example.

We now present the overall improvement module in Algorithm 2 below. Recall that the state transitions are deterministic and the reward $r(s,a)$ could be random (cf. Section 2). Given the queried state $s$, note that the Q-value estimate $\hat{Q}(s,a)$ for each $a\in\mathcal{A}$ is given by the sum of two components: (1) the empirical average of the reward $r(s,a)$; (2) the estimated value $\hat{V}(s\circ a)$ of the next state, returned by calling the fixed-depth MCTS algorithm with $s\circ a$ as the root node. Further recall that we use player $\mathsf{P1}$ as the reference (i.e., $r(s,a)\triangleq r^{1}(s,a)$). The module then obtains an improved value estimate $\hat{V}(s)$ by taking the appropriate max or min of the Q-value estimates $\hat{Q}(s,a)$ (max if $I(s)$ is player $\mathsf{P1}$, the maximizer, and min if $I(s)$ is player $\mathsf{P2}$, the minimizer), and an improved policy estimate $\hat{\pi}(\cdot|s)$ as the Boltzmann policy based on $\hat{Q}(s,a)$. It is worth mentioning that the fixed-depth MCTS algorithm was designed for discounted MDPs in Shah et al. (2019), but extending it to the game setting is straightforward, as in the literature (Kocsis et al., 2006; Kaufmann & Koolen, 2017): one alternates between max nodes (i.e., $\mathsf{P1}$ plays) and min nodes (i.e., $\mathsf{P2}$ plays) at each depth of the tree. We defer details to Appendix F, where we prove the key theorem of our improvement module, Theorem 11.

Algorithm 2 Improvement Module
1:  Input: (1) Current model f=(V,π)f=(V,\pi); (2) root node ss.
2:  /*  Q-value estimates*/
3:  for each a𝒜a\in\mathcal{A} do
4:     call the fixed depth MCTS (Algorithm 2 of Shah et al. (2019)) with: (1) depth HH; (2) root node sas\circ a; (3) current model VV for evaluating value of the leaf nodes at depth HH; (4) number of MCTS simulation mm. That is,
V^(sa)=Fixed Depth MCTS(H,sa,V,m)\hat{V}(s\circ a)=\textrm{Fixed Depth MCTS}(H,s\circ a,V,m)\vspace{-0.15in} (16)
5:     simulate action aa for mm times to obtain an empirical average, r^(s,a)\hat{r}(s,a) , of the reward r(s,a)r(s,a).
6:     form Q-value estimate for Q(s,a)Q^{*}(s,a):
Q^(s,a)=r^(s,a)+γV^(sa)\hat{Q}(s,a)=\hat{r}(s,a)+\gamma\hat{V}(s\circ a) (17)
7:  end for
8:  /*  Improved value and policy estimates*/
9:  form value and policy estimates for V(s)V^{*}(s) and Pτ(s)P^{*}_{\tau}(s) based on the Q-value estimates, i.e.,
V^(s)=maxa𝒜Q^(s,a) and π^(a|s)=eQ^(s,a)/τa𝒜eQ^(s,a)/τ for every a𝒜.\hat{V}(s)=\max_{a\in\mathcal{A}}\hat{Q}(s,a)\textrm{ and }\hat{\pi}(a|s)=\frac{e^{\hat{Q}(s,a)/\tau}}{\sum_{a^{\prime}\in\mathcal{A}}e^{\hat{Q}(s,a^{\prime})/\tau}}\textrm{ for every $a\in\mathcal{A}$}. (18)
if I(s)I(s) is player 𝖯𝟤\mathsf{P2}, then replace max\max with min\min in the value estimate and replace Q^\hat{Q} with Q^-\hat{Q} in the policy estimate (recall that the maximizer 𝖯𝟣\mathsf{P1} is the reference).
10:  Output: improved estimates V^(s)\hat{V}(s) and π^(|s)\hat{\pi}(\cdot|s).
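For concreteness, the following minimal Python sketch mirrors the flow of Algorithm 2; the fixed-depth MCTS call and the game interface (`next_state`, `sample_reward`, `turn`) are passed in as assumed black boxes, and this is a sketch of the control flow rather than an implementation of the algorithm of Shah et al. (2019).

```python
# Sketch of the improvement module (Algorithm 2) with black-box dependencies.
import numpy as np

def improvement_module(V, s, actions, next_state, sample_reward, turn,
                       fixed_depth_mcts, H, m, gamma, tau):
    Q_hat = {}
    for a in actions:
        # Line 4: value of the next state via fixed-depth MCTS, with the current
        # model V evaluating leaf nodes at depth H
        sa = next_state(s, a)
        V_next = fixed_depth_mcts(H, sa, V, m)
        # Line 5: empirical average of the immediate reward r(s, a)
        r_hat = np.mean([sample_reward(s, a) for _ in range(m)])
        # Line 6: Q-value estimate, Eq. (17)
        Q_hat[a] = r_hat + gamma * V_next

    q = np.array([Q_hat[a] for a in actions])
    if turn(s) == "P1":            # maximizer
        V_hat = q.max()
        logits = q / tau
    else:                          # minimizer: min for the value, -Q for the policy
        V_hat = q.min()
        logits = -q / tau

    # Line 9: Boltzmann policy, Eq. (18), computed with a stabilizing shift
    logits = logits - logits.max()
    pi_hat = np.exp(logits) / np.exp(logits).sum()
    return V_hat, dict(zip(actions, pi_hat))
```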

The following theorem states the property of this specific improvement module (Algorithm 2). It is not hard to see that it directly implies the desired improvement property, i.e., Property 1.

Theorem 11.

Suppose that the state transitions are deterministic. Let $f=(V,\pi)$ be the current model, let $\tau<1$ be a small temperature, and let $0<\zeta_{v}<1$ and $0<\zeta_{p}<1$ be the improvement factors. Suppose that the current value model $V$ satisfies

𝔼[VV]\displaystyle\mathbb{E}\Big{[}||V-V^{*}||_{\infty}\Big{]} ε0,v.\displaystyle\leq\varepsilon_{0,v}.

Then, with appropriately chosen parameters for Algorithm 2, for each query state s0𝒮s_{0}\in\mathcal{S}, (V^(s0),π^(|s0))=Improvement Module(f,s0)\big{(}\hat{V}(s_{0}),\hat{\pi}(\cdot|s_{0})\big{)}=\textrm{Improvement Module}(f,s_{0}), we have:

  1. 1.

\mathbb{E}\Big[\big|\hat{V}(s_{0})-V^{*}(s_{0})\big|\Big]\leq\zeta_{v}\cdot\varepsilon_{0,v},\>\textrm{ and }\>\mathbb{E}\Big[D_{\textup{KL}}\big(\hat{\pi}(\cdot|s_{0})\,\|\,P^{*}_{\tau}(\cdot|s_{0})\big)\Big]\leq\zeta_{p}\cdot\varepsilon_{0,v}.

  2. 2.

    The above is achieved with a sample complexity of

    O((τmin{ζv,ζp}ε0,v)2log(τmin{ζv,ζp})logγ).O\Big{(}\big{(}{\tau\cdot\min\{\zeta_{v},\zeta_{p}\}\cdot\varepsilon_{0,v}}\big{)}^{-2}\cdot\frac{\log({\tau\cdot\min\{\zeta_{v},\zeta_{p}\}})}{\log\gamma}\Big{)}.

E.2 Supervised Learning Module: Nearest Neighbor Regression.

To establish the generalization property of nearest neighbor supervised learning algorithm for estimating the optimal value function and policy, we make the following structural assumption about the Markov game. Specifically, we assume that the optimal solutions (i.e., true regression function) are smooth in some sense.

Assumption 1.

(A1.) The state space 𝒮\mathcal{S} is a compact subset of d\mathbb{R}^{d}. The chosen distance metric d:𝒮×𝒮+d:\mathcal{S}\times\mathcal{S}\rightarrow\mathbb{R}_{+} associated with the state space 𝒮\mathcal{S} satisfies that (𝒮,d)(\mathcal{S},d) forms a compact metric space. (A2.) The optimal value function V:𝒮V^{*}:\mathcal{S}\rightarrow\mathbb{R} is bounded by VmaxV_{\max} and satisfies Lipschitz continuity with parameter LvL_{v}, i.e., s,s𝒮,\forall s,s^{\prime}\in\mathcal{S}, |V(s)V(s)|Lvd(s,s).|V^{*}(s)-V^{*}(s^{\prime})|\leq L_{v}d(s,s^{\prime}). (A3.) The optimal Boltzmann policy PτP^{*}_{\tau} defined in Eq. (4) is Lipschitz continuous with parameter Lp(τ)L_{p}(\tau), i.e., s,s𝒮,\forall s,s^{\prime}\in\mathcal{S}, aA,\forall a\in A, |Pτ(a|s)Pτ(a|s)|Lp(τ)d(s,s).|P^{*}_{\tau}(a|s)-P^{*}_{\tau}(a|s^{\prime})|\leq L_{p}(\tau)d(s,s^{\prime}).

For each h>0h>0, the compact 𝒮\mathcal{S} has a finite h/2h/2-covering number N(h)N(h). There exists a partition of 𝒮\mathcal{S}, {Bj,j[N(h)]}\left\{B_{j},j\in[N(h)]\right\}, such that each BjB_{j} has a diameter at most hh, that is, supx,yBjd(x,y)h\sup_{x,y\in B_{j}}d(x,y)\leq h. We assume that states in the training set, T:={si,i[n]}T:=\left\{s_{i},i\in[n]\right\}, are sufficiently representative in the sense that for any given hh and KK, the sample size nn can be chosen large enough to ensure that |BjT|K\left|B_{j}\cap T\right|\geq K for all j[N(h)]j\in[N(h)]. If TT satisfies this condition, we call it (h,K)(h,K)-representative.

Given the training data, we fit a value function VNN:𝒮V_{\textup{NN}}:\mathcal{S}\to\mathbb{R} using the following Nearest Neighbor type algorithm: set

VNN(s)=1|BjT|siBjTV^(si),sBj.V_{\textup{NN}}(s)=\frac{1}{\left|B_{j}\cap T\right|}\sum_{s_{i}\in B_{j}\cap T}\hat{V}(s_{i}),\quad\forall s\in B_{j}.

For each a𝒜a\in\mathcal{A}, a similar algorithm can be used to fit the action probability πNN(a|):𝒮[0,1]\pi_{\textup{NN}}(a|\cdot):\mathcal{S}\to[0,1]. The proposition below, proved in Appendix G, shows that this algorithm has the desired generalization property.

To simplify the notation, we use εv\varepsilon_{v} and εp\varepsilon_{p} to represent the estimation errors of the value function and the policy, respectively, for the training data. That is, εvε1,v\varepsilon_{v}\triangleq\varepsilon_{1,v} and εpε1,p\varepsilon_{p}\triangleq\varepsilon_{1,p} in Property 2.

Proposition 12.

Suppose Assumption 1 holds. If the training data is representative with respect to appropriate h>0h>0 and K>0K>0, the above regression algorithm satisfies Property 2. In particular, if

h=min{εvLv,εp/2Lp},K=max{Vmax22εv2log(4VmaxNεv),1εplog(4Nεp)},\displaystyle h=\min\Big{\{}\frac{\varepsilon_{v}}{L_{v}},\frac{\sqrt{\varepsilon_{p}/2}}{L_{p}}\Big{\}},K=\max\Big{\{}\frac{V_{\max}^{2}}{2\varepsilon_{v}^{2}}\log\Big{(}\frac{4V_{\max}N}{\varepsilon_{v}}\Big{)},\frac{1}{\varepsilon_{p}}\log\Big{(}\frac{4N}{\varepsilon_{p}}\Big{)}\Big{\}}, (19)

we have

𝔼[VNNV]\displaystyle\mathbb{E}\left[\left\|V_{\textup{NN}}-V^{*}\right\|_{\infty}\right] 4εv,\displaystyle\leq 4\cdot\varepsilon_{v},
𝔼[DKL(πNN(|s)Pτ(|s))]\displaystyle\mathbb{E}\left[D_{\textup{KL}}\big{(}\pi_{\textup{NN}}(\cdot|s)\,\|\,P^{*}_{\tau}(\cdot|s)\big{)}\right] cεp,s𝒮,\displaystyle\leq c\cdot\varepsilon_{p},\quad\forall s\in\mathcal{S},

where the constant c>0c>0 is independent of nn, the size of the training data.

The size of a representative data set should scale at least as $KN(h)$. Consider a simple setting where the state space is a unit-volume hypercube in $\mathbb{R}^{d}$ with the $\ell_{\infty}$ metric. By (Wainwright, 2019, Lemma 5.7), the covering number $N(h)$ of $\mathcal{S}$ scales as $\Theta\big((1/h)^{d}\big)$. Let $\varepsilon=\min\{\varepsilon_{v},\sqrt{\varepsilon_{p}}\}$ and note that $h=\Theta(\varepsilon)$. Therefore, to achieve the desired generalization property, the size of the training data set should satisfy

nKN(h)=O(Khd)=O(dεd+2log1ε).\displaystyle n\geq KN(h)=O\Big{(}\frac{K}{h^{d}}\Big{)}=O\bigg{(}\frac{d}{\varepsilon^{d+2}}\log\frac{1}{\varepsilon}\bigg{)}.
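For concreteness, the following minimal sketch implements the partition-based estimator $V_{\textup{NN}}$ above for a unit hypercube state space, using an axis-aligned grid of cells of $\ell_{\infty}$-diameter at most $h$ as an illustrative choice of the partition $\{B_{j}\}$.

```python
# Sketch of the partition-based nearest-neighbor value estimator V_NN on [0,1]^d,
# with a regular grid as an assumed instance of the partition {B_j}.
import numpy as np

class NearestNeighborValue:
    def __init__(self, h):
        self.h = h

    def _cell(self, s):
        # index of the grid cell B_j containing s in [0, 1]^d
        return tuple(np.minimum((np.asarray(s) / self.h).astype(int),
                                int(np.ceil(1.0 / self.h)) - 1))

    def fit(self, states, values):
        # V_NN on B_j is the average of the training targets V_hat(s_i), s_i in B_j
        self.sums, self.counts = {}, {}
        for s, v in zip(states, values):
            j = self._cell(s)
            self.sums[j] = self.sums.get(j, 0.0) + v
            self.counts[j] = self.counts.get(j, 0) + 1
        return self

    def predict(self, s):
        j = self._cell(s)
        # assumes the training set is (h, K)-representative, so B_j is non-empty
        return self.sums[j] / self.counts[j]

# usage sketch: model = NearestNeighborValue(h=0.1).fit(train_states, train_values)
```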

E.3 Exploration Module: Random Sampling Policy

As stated in Proposition 12, the sampled states for nearest neighbor regression should be (h,K)(h,K)-representative, where h,Kh,K are given by Eq. (19). We will show that a random sampling policy—uniformly sampling the state space—is able to visit a set of (h,K)(h,K)-representative states within a finite expected number of steps. We need to assume that the state space 𝒮\mathcal{S} is sufficiently regular near the boundary. In particular, we impose the following assumption which is naturally satisfied by convex compact sets in d\mathbb{R}^{d}, for example.

Assumption 2.

The partition {Bj,j[N(h)]}\{B_{j},j\in[N(h)]\} of 𝒮\mathcal{S} satisfies

λ(Bj)c0λ(𝒮)N(h),j[N(h)],\lambda(B_{j})\geq c_{0}\frac{\lambda(\mathcal{S})}{N(h)},\quad\forall j\in[N(h)], (20)

for some constant $c_{0}>0$, where $\lambda$ is the Lebesgue measure on $\mathbb{R}^{d}$.

Proposition 13.

Suppose that the state space 𝒮\mathcal{S} is a compact subset of d\mathbb{R}^{d} satisfying Assumption 2. Given temperature τ>0\tau>0, and estimation errors εv>0\varepsilon_{v}>0 and εp>0\varepsilon_{p}>0 for the value and policy respectively, define h,Kh,K as in Eq. (19). Then the expected number of steps to obtain a set of (h,K)(h,K)-representative states under the random sampling policy is upper bounded by

B(τ,εv,εp)=O(KN(h)c0logN(h)).B(\tau,\varepsilon_{v},\varepsilon_{p})=O\left(\frac{KN(h)}{c_{0}}\log N(h)\right).
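The following minimal simulation illustrates Proposition 13 in the uniform case ($c_{0}=1$): it counts the steps needed for uniform random sampling over $N$ equally likely cells to visit every cell at least $K$ times and compares the empirical mean with the $KN(h)\log N(h)$ scaling. The uniform cell probabilities are an illustrative assumption matching Assumption 2 with $c_{0}=1$.

```python
# Coupon-collector style simulation of the exploration bound in Proposition 13.
import numpy as np

rng = np.random.default_rng(3)

def steps_until_representative(N, K):
    # steps of uniform sampling until every one of the N cells has >= K samples
    counts = np.zeros(N, dtype=int)
    t = 0
    while counts.min() < K:
        counts[rng.integers(N)] += 1
        t += 1
    return t

N, K, trials = 100, 5, 50
T = [steps_until_representative(N, K) for _ in range(trials)]
print(f"mean steps: {np.mean(T):.0f},  K*N*log(N) = {K * N * np.log(N):.0f}")
```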

E.4 Convergence Guarantees of the Instance

For the instance of the EIS algorithm with MCTS, the random sampling policy and nearest neighbor supervised learning, we have shown that each module satisfies the desired properties (cf. Theorem 11 and Propositions 12-13). Therefore, the convergence result stated in Theorem 2 holds for the specific instance considered here. Moreover, the non-asymptotic analysis of these three methods provides an explicit upper bound on the sample complexity of this instance. The following theorem states the precise result.

Theorem 14.

Suppose that Assumptions 1 and 2 hold. For a given ρ(0,1)\rho\in(0,1), and a small τ<1,\tau<1, there exist appropriately chosen parameters for the instance of Algorithm 1 with MCTS, random sampling and nearest neighbor supervised learning, such that:

  1. 1.

    The output fL=(VL,πL)f_{L}=(V_{L},\pi_{L}) at the end of LL-th iteration satisfies

    𝔼[VLV]\displaystyle\mathbb{E}\Big{[}\big{\|}V_{L}-V^{*}\big{\|}_{\infty}\Big{]} VmaxρL,\displaystyle\leq V_{\max}\rho^{L},
    𝔼[DKL(πL(|s)Pτ(|s))]\displaystyle\mathbb{E}\Big{[}D_{\textup{KL}}\big{(}\pi_{L}(\cdot|s)\,\|\,P_{\tau}^{*}(\cdot|s)\big{)}\Big{]} cVmax4ρL,s𝒮,\displaystyle\leq\frac{cV_{\max}}{4}\rho^{L},\quad\forall s\in\mathcal{S},

where $c$ is the generalization constant for the policy in Proposition 12.

  2. 2.

    With probability at least 1δ,1-\delta, the above result is achieved with sample complexity of

\sum_{l=1}^{L}c^{\prime}\log\frac{1}{\tau\rho}\cdot\frac{1}{\tau^{2}\rho^{4l}}\cdot\log\frac{N(c_{4}\rho^{l})}{\rho^{l}}\cdot N(c_{4}\rho^{l})\cdot\log N(c_{4}\rho^{l})\cdot\log\frac{L}{\delta},

    where c>0c^{\prime}>0 and c4>0c_{4}>0 are constants independent of ρ\rho and l.l.

  3. 3.

    In particular, if 𝒮\mathcal{S} is a unit volume hypercube in d\mathbb{R}^{d}, then the total sample complexity to achieve ε\varepsilon-error value function and policy is given by

    O(log1τρ1τ2εd+4log(1ε)4log1δ).\displaystyle O\bigg{(}\log\frac{1}{\tau\rho}\cdot\frac{1}{\tau^{2}\varepsilon^{d+4}}\cdot\log\big{(}\frac{1}{\varepsilon}\big{)}^{4}\cdot\log\frac{1}{\delta}\bigg{)}.

Theorem 14 states that the sample complexity of this instance of the EIS algorithm scales as $\widetilde{\Theta}\big(\frac{1}{\varepsilon^{4+d}}\big)$ (where the tilde hides logarithmic factors). Note that Theorem 4 implies that, for any policy to learn the optimal value function within $\varepsilon$ approximation error, the number of samples required must scale as $\widetilde{\Omega}\big(\frac{1}{\varepsilon^{2+d}}\big)$. Hence, in terms of the dependence on the dimension $d$, the instance considered here is nearly optimal.

Appendix F Example Improvement Module and Proof of Theorem 11

In this section, we formally show the improvement property of the specific example in Section E.1. To this end, we first elaborate some details regarding Algorithm 2 in Appendix F.1. We then state two useful lemmas in Appendix F.2 and finally, we complete the proof of Theorem 11 in Appendix F.3.

F.1 Details of the Improvement Module Example

Before proving the theorem, let us first discuss some details of the improvement module (i.e., Algorithm 2). It is worth mentioning the modifications necessary for applying the fixed $H$-depth MCTS algorithm (Shah et al., 2019) in Algorithm 2. In particular, the original algorithm is introduced and analyzed for infinite-horizon discounted MDPs, but extending it to the game setting is straightforward, similar to the literature (Kocsis et al., 2006; Kaufmann & Koolen, 2017). We now elaborate on both the algorithmic and the technical extensions of the fixed $H$-depth MCTS algorithm for Markov games.

Algorithmically, for turn-based zero-sum games, each layer in the tree would alternate between max nodes (i.e., player 𝖯𝟣\mathsf{P1}’s turn) and min nodes (i.e., player 𝖯𝟤\mathsf{P2}’s turn). For max nodes, the algorithm proceeds as usual by selecting the action with the maximum sum of the empirical average and the upper confidence term. For min nodes, the algorithm could choose the action with the minimum value of the empirical average minus the upper confidence term. More precisely, if the current node is a min node, then Line 6 (action selection) of the fixed depth MCTS algorithm in Shah et al. (2019) should be modified to (using the notation in Shah et al. (2019) to be consistent):

a(h+1)=argmina𝒜q(h+1)(s(h),a)+γv~(h+1)(s(h)a)N(h+1)(s(h)a)(β(h+1))1/ξ(h+1)(N(h)(s(h)))α(h+1)/ξ(h+1)(N(h+1)(s(h)a))1η(h+1).a^{(h+1)}=\arg\min_{a\in\mathcal{A}}\frac{q^{(h+1)}(s^{(h)},a)+\gamma\tilde{v}^{(h+1)}(s^{(h)}\circ a)}{N^{(h+1)}(s^{(h)}\circ a)}-\frac{\big{(}\beta^{(h+1)}\big{)}^{1/\xi^{(h+1)}}\cdot\big{(}N^{(h)}(s^{(h)})\big{)}^{\alpha^{(h+1)}/\xi^{(h+1)}}}{\big{(}N^{(h+1)}(s^{(h)}\circ a)\big{)}^{1-\eta^{(h+1)}}}.

Alternatively, the algorithm could first negate the empirical average and then choose the action that maximizes the sum of the negated empirical average and the upper confidence term. With these modifications, the fixed depth MCTS algorithm could be used to estimate values for the game setting considered in this paper.

Technically, we note that one can readily obtain the same guarantees for the fixed-depth MCTS algorithm in the game setting as for the MDP algorithm in Shah et al. (2019), by following essentially the same proof as in Appendix A of Shah et al. (2019). We only remark on a few technical points below:

  1. 1.

    The concentration results (cf. Appendix A.4 of Shah et al. (2019)) still hold in the game setting. The original concentration inequalities in Shah et al. (2019) are two-sided. Therefore, they apply to both the max nodes and min nodes.

  2. 2.

    The technical results were derived for rewards that are bounded in [0,1][0,1] for convenience. It is not hard to see (cf. Remark 1 in Appendix A.3 of Shah et al. (2019)) that the same proof applies seamlessly for our setting, i.e., bounded rewards in [Rmax,Rmax][-R_{\max},R_{\max}].

  3. 3.

    Since the original derivation was for MDPs, Lemma 4 in Appendix A.8 of Shah et al. (2019) used the Bellman equation for MDPs. In the game setting, it is straightforward to replace it with the Bellman equation for the Markov games:

    V(s)={maxa𝒜𝔼[r(s,a)+γV(sa)],if s is a max node;mina𝒜𝔼[r(s,a)+γV(sa)],if s is a min node.V^{*}(s)=\begin{cases}\max_{a\in\mathcal{A}}\mathbb{E}[r(s,a)+\gamma V^{*}(s\circ a)],&\text{if $s$ is a max node;}\\ \min_{a\in\mathcal{A}}\mathbb{E}[r(s,a)+\gamma V^{*}(s\circ a)],&\text{if $s$ is a min node.}\end{cases}

    A similar result of Lemma 4 then readily follows.

F.2 Two Useful Lemmas

We first state two useful lemmas. The first lemma bounds the difference between the two Boltzmann policies in terms of the difference of the underlying QQ values that are used to construct the policies.

Lemma 15.

Fix a state s𝒮s\in\mathcal{S}. Suppose that the Q-value estimates satisfy

𝔼[|Q^(s,a)Q(s,a)|]ε,a𝒜.\displaystyle\mathbb{E}\Big{[}\big{|}\hat{Q}(s,a)-Q^{*}(s,a)\big{|}\Big{]}\leq\varepsilon,\quad\forall\>a\in\mathcal{A}.

Consider the two Boltzmann policies with temperature τ>0\tau>0:

P^τ(a)=eQ^(s,a)/τaeQ^(s,a)/τ,Pτ(a)=eQ(s,a)/τaeQ(s,a)/τ.\displaystyle\hat{P}_{\tau}(a)=\frac{e^{\hat{Q}(s,a)/\tau}}{\sum_{a^{\prime}}e^{\hat{Q}(s,a^{\prime})/\tau}},\quad P^{*}_{\tau}(a)=\frac{e^{Q^{*}(s,a)/\tau}}{\sum_{a^{\prime}}e^{Q^{*}(s,a^{\prime})/\tau}}.

We have that

𝔼[DKL(P^τPτ)]2|𝒜|ε/τ.\mathbb{E}\big{[}D_{\textup{KL}}(\hat{P}_{\tau}\|P^{*}_{\tau})\big{]}\leq 2|\mathcal{A}|\varepsilon/\tau.
Proof.

Since $s$ is fixed, we drop it for ease of exposition. Let $C=\sum_{a^{\prime}}e^{\hat{Q}(a^{\prime})/\tau}$ and $C^{*}=\sum_{a^{\prime}}e^{{Q}^{*}(a^{\prime})/\tau}$. Then

DKL(P^P)\displaystyle D_{\textup{KL}}(\hat{P}\|P^{*}) =aP^(a)logP^(a)P(a)\displaystyle=\sum_{a}\hat{P}(a)\log\frac{\hat{P}(a)}{P^{*}(a)}
=1CaeQ^(a)/τ(logeQ^(a)/τeQ(a)/τ+logCC)\displaystyle=\frac{1}{C}\sum_{a}e^{\hat{Q}(a)/\tau}\left(\log\frac{e^{\hat{Q}(a)/\tau}}{e^{Q^{*}(a)/\tau}}+\log\frac{C^{*}}{C}\right)
=1CaeQ^(a)/τ1τ(Q^(a)Q(a))+1CaeQ^(a)/τ=1logCC\displaystyle=\frac{1}{C}\sum_{a}e^{\hat{Q}(a)/\tau}\cdot\frac{1}{\tau}\left(\hat{Q}(a)-Q^{*}(a)\right)+\underbrace{\frac{1}{C}\sum_{a}e^{\hat{Q}(a)/\tau}}_{=1}\log\frac{C^{*}}{C}
1τ1CaeQ^(a)/τ=1Q^Q+1CClogCC.\displaystyle\leq\frac{1}{\tau}\cdot\underbrace{\frac{1}{C}\sum_{a}e^{\hat{Q}(a)/\tau}}_{=1}\cdot\left\|\hat{Q}-Q^{*}\right\|_{\infty}+\frac{1}{C^{*}}\cdot C^{*}\log\frac{C^{*}}{C}.

The second term above can be bounded using the log-sum inequality (cf. Appendix A), which gives ClogCCaeQ(a)/τlogeQ(a)/τeQ^(a)/τC^{*}\log\frac{C^{*}}{C}\leq\sum_{a}e^{Q^{*}(a)/\tau}\log\frac{e^{Q^{*}(a)/\tau}}{e^{\hat{Q}(a)/\tau}}. We then continue the above chain of inequalities to obtain

D_{\textup{KL}}(\hat{P}\|P^{*})\leq\frac{1}{\tau}\left\|\hat{Q}-Q^{*}\right\|_{\infty}+\frac{1}{C^{*}}\cdot\sum_{a}e^{Q^{*}(a)/\tau}\log\frac{e^{Q^{*}(a)/\tau}}{e^{\hat{Q}(a)/\tau}}
=\frac{1}{\tau}\left\|\hat{Q}-Q^{*}\right\|_{\infty}+\frac{1}{C^{*}}\cdot\sum_{a}e^{Q^{*}(a)/\tau}\cdot\frac{1}{\tau}\left(Q^{*}(a)-\hat{Q}(a)\right)
\leq\frac{1}{\tau}\left\|\hat{Q}-Q^{*}\right\|_{\infty}+\underbrace{\frac{1}{C^{*}}\cdot\sum_{a}e^{Q^{*}(a)/\tau}}_{=1}\cdot\frac{\left\|\hat{Q}-Q^{*}\right\|_{\infty}}{\tau}
=\frac{2}{\tau}\left\|\hat{Q}-Q^{*}\right\|_{\infty}.

Taking expectation, we have the bound

𝔼[DKL(P^P)]\displaystyle\mathbb{E}\big{[}D_{\textup{KL}}(\hat{P}\|P^{*})\big{]} 2τ𝔼[Q^Q]\displaystyle\leq\frac{2}{\tau}\mathbb{E}\big{[}\left\|\hat{Q}-Q^{*}\right\|_{\infty}\big{]}
2τ𝔼[Q^Q1]\displaystyle\leq\frac{2}{\tau}\mathbb{E}\big{[}\left\|\hat{Q}-Q^{*}\right\|_{1}\big{]}
2τa𝒜𝔼[|Q^(s,a)Q(s,a)|]2|𝒜|ετ,\displaystyle\leq\frac{2}{\tau}\sum_{a\in\mathcal{A}}\mathbb{E}\big{[}|\hat{Q}(s,a)-Q^{*}(s,a)|\big{]}\leq\frac{2|\mathcal{A}|\varepsilon}{\tau},

as desired. ∎
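The following minimal numeric check illustrates the deterministic inequality established in the chain above, $D_{\textup{KL}}(\hat{P}_{\tau}\|P^{*}_{\tau})\leq\frac{2}{\tau}\|\hat{Q}-Q^{*}\|_{\infty}$, which yields the bound of Lemma 15 after taking expectations; the random $Q$-values are illustrative assumptions.

```python
# Numeric check of the bound D_KL(P_hat || P*) <= (2/tau) * ||Q_hat - Q*||_inf
# for Boltzmann (softmax) policies, on randomly perturbed Q-values.
import numpy as np

rng = np.random.default_rng(4)

def boltzmann(q, tau):
    z = q / tau - np.max(q / tau)          # stabilized softmax
    p = np.exp(z)
    return p / p.sum()

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

A, tau, eps = 6, 0.5, 0.05
for _ in range(1000):
    q_star = rng.uniform(-1.0, 1.0, size=A)
    q_hat = q_star + rng.uniform(-eps, eps, size=A)   # |Q_hat - Q*| <= eps entrywise
    err = np.max(np.abs(q_hat - q_star))
    assert kl(boltzmann(q_hat, tau), boltzmann(q_star, tau)) <= 2 * err / tau + 1e-12
```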

The following lemma states a generic result regarding the maximum difference of two vectors.

Lemma 16.

Consider two nn-dimensional vectors X=(x1,x2,,xn)X=(x_{1},x_{2},\ldots,x_{n}) and Y=(y1,y2,,yn)Y=(y_{1},y_{2},\ldots,y_{n}). We have

|maxi[n]{xi}maxj[n]{yj}|\displaystyle\big{|}\max_{i\in[n]}\{x_{i}\}-\max_{j\in[n]}\{y_{j}\}\big{|} maxk[n]|xkyk|,\displaystyle\leq\max_{k\in[n]}|x_{k}-y_{k}|,
|mini[n]{xi}minj[n]{yj}|\displaystyle\big{|}\min_{i\in[n]}\{x_{i}\}-\min_{j\in[n]}\{y_{j}\}\big{|} maxk[n]|xkyk|.\displaystyle\leq\max_{k\in[n]}|x_{k}-y_{k}|.
Proof.

Assume that iargmaxi[n]{xi},i^{*}\in\arg\max_{i\in[n]}\{x_{i}\}, and jargmaxj[n]{yj}j^{*}\in\arg\max_{j\in[n]}\{y_{j}\}. Then

maxi[n]{xi}maxj[n]{yj}=xiyjxiyi,\max_{i\in[n]}\{x_{i}\}-\max_{j\in[n]}\{y_{j}\}=x_{i^{*}}-y_{j^{*}}\leq x_{i^{*}}-y_{i^{*}},

and

xiyjxjyj.x_{i^{*}}-y_{j^{*}}\geq x_{j^{*}}-y_{j^{*}}.

Thus,

|maxi[n]{xi}maxj[n]{yj}|\displaystyle\Big{|}\max_{i\in[n]}\{x_{i}\}-\max_{j\in[n]}\{y_{j}\}\Big{|} =|xiyj|\displaystyle=\big{|}x_{i^{*}}-y_{j^{*}}\big{|}
max{|xiyi|,|xjyj|}\displaystyle\leq\max\big{\{}\big{|}x_{i^{*}}-y_{i^{*}}\big{|},\big{|}x_{j^{*}}-y_{j^{*}}\big{|}\big{\}}
maxk[n]|xkyk|.\displaystyle\leq\max_{k\in[n]}|x_{k}-y_{k}|.

The same argument holds for the other inequality, and this completes the proof of Lemma 16. ∎

F.3 Improvement Property: Proof of Theorem 11

We are now ready to prove Theorem 11. As discussed in Appendix F.1, the same guarantees of the modified fixed HH-depth MCTS algorithm for the game setting can be established as in Shah et al. (2019). In the sequel, we extend these guarantees, together with the previous two lemmas, to analyze the improvement module example. In particular, we derive error bounds for the outputs of Algorithm 2, V^\hat{V} and π^\hat{\pi}, and analyze the corresponding sample complexity.

Consider deterministic state transitions. The complete proof proceeds in three steps. The first step analyzes the outputs of querying the fixed $H$-depth MCTS algorithm. Based on those outputs, the second step analyzes the outputs of Algorithm 2, $\hat{V}$ and $\hat{\pi}$. Finally, the third step characterizes the corresponding sample complexity of the overall process. Throughout the first two steps, since the current model $f=(V,\pi)$ may be random, we fix a realization; we take expectations at the end to arrive at the desired results of Theorem 11.

Step 1: Error bounds for outputs of the fixed depth MCTS algorithm. In Shah et al. (2019), the authors establish the following concentration result for the estimated value function of the root node ss, V^m(s)\hat{V}_{m}(s), under the fixed HH-depth MCTS algorithm with mm simulations (cf. Proof of Theorem 1 in Shah et al. (2019)): there exist constants β>1\beta>1 and ξ>0\xi>0 and η[1/2,1)\eta\in[1/2,1) such that for every z1z\geq 1,

(|V^m(s)μs(0)|mη1z)2βzξ,\mathbb{P}\Big{(}\Big{|}\hat{V}_{m}(s)-\mu_{s}^{(0)}\Big{|}\geq m^{\eta-1}z\Big{)}\leq\frac{2\beta}{z^{\xi}},

where the probability is measured with respect to the randomness in the MCTS algorithm, and μs(0)\mu_{s}^{(0)} satisfies the following condition

|μs(0)V(s)|γHε0.|\mu_{s}^{(0)}-V^{*}(s)|\leq\gamma^{H}\varepsilon_{0}.

Here, ε0\varepsilon_{0} denotes the error when evaluating the leaf nodes using current model, i.e., ε0=VV\varepsilon_{0}=\|V-V^{*}\|_{\infty}, where VV is the current value model. Note that η[1/2,1)\eta\in[1/2,1) is a hyper-parameter for the MCTS algorithm that could be freely chosen. Throughout the proof, we set

η12.\eta\triangleq\frac{1}{2}.

Recall that γ(0,1)\gamma\in(0,1) is the discount factor. In addition, ξ\xi is larger than 11 (cf. Section A.6.4 of Shah et al. (2019)). Therefore, for every tmη1t\geq m^{\eta-1},

\mathbb{P}\Big(\Big|\hat{V}_{m}(s)-\mu_{s}^{(0)}\Big|\geq t\Big)\leq\frac{2\beta}{t^{\xi}}m^{\xi(\eta-1)}.

It follows that

𝔼[|V^m(s)μs(0)|]\displaystyle\mathbb{E}\Big{[}\Big{|}\hat{V}_{m}(s)-\mu_{s}^{(0)}\Big{|}\Big{]} =0(|V^m(s)μs(0)|t)𝑑t\displaystyle=\int_{0}^{\infty}\mathbb{P}\Big{(}\Big{|}\hat{V}_{m}(s)-\mu_{s}^{(0)}\Big{|}\geq t\Big{)}dt
0mη11𝑑t+mη12βtξmξ(η1)𝑑t\displaystyle\leq\int_{0}^{m^{\eta-1}}1\cdot dt+\int_{m^{\eta-1}}^{\infty}\frac{2\beta}{t^{\xi}}m^{\xi(\eta-1)}\cdot dt
=mη1+2βmξ(η1)m(1ξ)(η1)ξ1\displaystyle=m^{\eta-1}+2\beta m^{\xi(\eta-1)}\frac{m^{(1-\xi)(\eta-1)}}{\xi-1} ξ>1\displaystyle\xi>1
=(1+2βξ1)mη1.\displaystyle=(1+\frac{2\beta}{\xi-1})m^{\eta-1}.

Thus

\mathbb{E}\big[\big|\hat{V}_{m}(s)-V^{*}(s)\big|\big]\leq\mathbb{E}\Big[\big|\hat{V}_{m}(s)-\mu_{s}^{(0)}\big|\Big]+\big|\mu_{s}^{(0)}-V^{*}(s)\big|
γHε0+O(mη1).\displaystyle\leq\gamma^{H}\varepsilon_{0}+O(m^{\eta-1}).

This leads to a variant of Theorem 1 in Shah et al. (2019) for the performance of the fixed-depth MCTS, stated below.

Proposition 17.

Let f=(V,π)f=(V,\pi) be the current model and consider the fixed depth MCTS algorithm (with depth HH) employed in Algorithm 2. Then, for each query state sSs\in S, the following claim holds for the output V^m\hat{V}_{m} of the fixed HH-depth MCTS with mm simulations:

𝔼[|V^m(s)V(s)|]γHVV+O(mη1),\mathbb{E}\big{[}\big{|}\hat{V}_{m}(s)-V^{*}(s)\big{|}\big{]}\leq\gamma^{H}\|V-V^{*}\|_{\infty}+O\Big{(}m^{\eta-1}\Big{)},

where η[1/2,1)\eta\in[1/2,1) is a constant and the expectation is taken with respect to the randomness in the MCTS simulations.

Step 2: Error bounds for outputs of the improvement module. We are now ready to obtain a non-asymptotic analysis of the outputs of Algorithm 2. Consider Line 4 of Algorithm 2, where we call the fixed-depth MCTS algorithm on the state $s_{0}\circ a$. For each $a\in\mathcal{A}$, since $m$ simulations are performed with root node $s_{0}\circ a$, Proposition 17 implies that

𝔼[|V^(s0a)V(s0a)|]γHVV+O(mη1).\mathbb{E}\big{[}\big{|}\hat{V}(s_{0}\circ a)-V^{*}(s_{0}\circ a)\big{|}\big{]}\leq\gamma^{H}\|V-V^{*}\|_{\infty}+O\Big{(}m^{\eta-1}\Big{)}. (21)

Recall that state transitions are assumed to be deterministic and note that the estimated Q-value for (s0,a)(s_{0},a) is given by (i.e., Line 6 of Algorithm 2)

Q^(s0,a)=r^(s0,a)+γV^(s0a),\hat{Q}(s_{0},a)=\hat{r}(s_{0},a)+\gamma\hat{V}(s_{0}\circ a),

where $\hat{r}(s_{0},a)=\frac{1}{m}\sum_{i=1}^{m}r_{i}(s_{0},a)$ is the empirical average of the immediate rewards for playing action $a$ in state $s_{0}$. Note that $\{r_{i}(s_{0},a)\}_{i}$ are independent random variables satisfying $|r_{i}(s_{0},a)|\leq R_{\max}$. By Hoeffding's inequality, it holds that

(|r^(s0,a)𝔼[r(s0,a)]|>t)2exp(mt22Rmax2).\mathbb{P}\big{(}\big{|}\hat{r}(s_{0},a)-\mathbb{E}[r(s_{0},a)]\big{|}>t\big{)}\leq 2\exp\Big{(}\frac{-mt^{2}}{2R_{\max}^{2}}\Big{)}.

Thus

𝔼[|r^(s0,a)𝔼[r(s0,a)]|]\displaystyle\mathbb{E}\big{[}\big{|}\hat{r}(s_{0},a)-\mathbb{E}[r(s_{0},a)]\big{|}\big{]} =0(|r^(s0,a)𝔼[r(s0,a)]|>t)𝑑t\displaystyle=\int_{0}^{\infty}\mathbb{P}\big{(}\big{|}\hat{r}(s_{0},a)-\mathbb{E}[r(s_{0},a)]\big{|}>t\big{)}dt
02exp(mt22Rmax2)𝑑t\displaystyle\leq\int_{0}^{\infty}2\exp\Big{(}\frac{-mt^{2}}{2R_{\max}^{2}}\Big{)}dt
=Rmax2πm.\displaystyle=R_{\max}\sqrt{\frac{2\pi}{m}}.

It follows that

𝔼[|Q^(s0,a)Q(s0,a)|]\displaystyle\mathbb{E}\Big{[}\big{|}\hat{Q}(s_{0},a)-Q^{*}(s_{0},a)\big{|}\Big{]}
=\displaystyle= 𝔼[|r^(s0,a)+γV^(s0a)𝔼[r(s0,a)]γV(s0a)|]\displaystyle\mathbb{E}\Big{[}\big{|}\hat{r}(s_{0},a)+\gamma\hat{V}(s_{0}\circ a)-\mathbb{E}[r(s_{0},a)]-\gamma V^{*}(s_{0}\circ a)\big{|}\Big{]}
\displaystyle\leq 𝔼[|r^(s0,a)𝔼[r(s0,a)]|]+γ𝔼[|V^(s0a)V(s0a)|]\displaystyle\mathbb{E}\Big{[}\big{|}\hat{r}(s_{0},a)-\mathbb{E}[r(s_{0},a)]\big{|}\Big{]}+\gamma\mathbb{E}\Big{[}\big{|}\hat{V}(s_{0}\circ a)-V^{*}(s_{0}\circ a)\big{|}\Big{]}
\displaystyle\leq Rmax2πm1/2+γHVV+O(mη1)\displaystyle R_{\max}\sqrt{{2\pi}{}}m^{-1/2}+\gamma^{H}\|V-V^{*}\|_{\infty}+O\Big{(}m^{\eta-1}\Big{)} γ<1\displaystyle\gamma<1
\displaystyle\leq γHVV+O(mη1),\displaystyle\gamma^{H}\|V-V^{*}\|_{\infty}+O\Big{(}m^{\eta-1}\Big{)}, (22)

where the last inequality follows from the fact that η[1/2,1).\eta\in[1/2,1).

We now consider the value estimate V^(s0)\hat{V}(s_{0}) and the policy estimate π^(|s0)\hat{\pi}(\cdot|s_{0}) (Line 9 of Algorithm 2) separately:

(a) Value estimate $\hat{V}(s_{0})$: In order to obtain an error bound for the value estimate of the query state $s_{0}$, i.e., $\hat{V}(s_{0})=\max_{a\in\mathcal{A}}\hat{Q}(s_{0},a)$, we apply Lemma 16 from the previous section. For the query state $s_{0}$, if $I(s_{0})$ is player $\mathsf{P1}$, applying Lemma 16 yields

|V^(s0)V(s0)|\displaystyle\big{|}\hat{V}(s_{0})-V^{*}(s_{0})\big{|} =|maxa𝒜{Q^(s0,a)}maxa𝒜{Q(s0,a)}|maxa𝒜|Q^(s0,a)Q(s0,a)|.\displaystyle=\big{|}\max_{a\in\mathcal{A}}\{\hat{Q}(s_{0},a)\}-\max_{a\in\mathcal{A}}\{Q^{*}(s_{0},a)\}\big{|}\leq\max_{a\in\mathcal{A}}\big{|}\hat{Q}(s_{0},a)-Q^{*}(s_{0},a)\big{|}.

Therefore,

𝔼[|V^(s0)V(s0)|]\displaystyle\mathbb{E}\big{[}\big{|}\hat{V}(s_{0})-V^{*}(s_{0})\big{|}\big{]} 𝔼[maxa𝒜|Q^(s0,a)Q(s0,a)|]\displaystyle\leq\mathbb{E}\big{[}\max_{a\in\mathcal{A}}\big{|}\hat{Q}(s_{0},a)-Q^{*}(s_{0},a)\big{|}\big{]}
a𝒜𝔼[|Q^(s0,a)Q(s0,a)|]\displaystyle\leq\sum_{a\in\mathcal{A}}\mathbb{E}\big{[}\big{|}\hat{Q}(s_{0},a)-Q^{*}(s_{0},a)\big{|}\big{]}
|𝒜|[γHVV+O(mη1)].\displaystyle\leq|\mathcal{A}|\Big{[}\gamma^{H}\|V-V^{*}\|_{\infty}+O\Big{(}m^{\eta-1}\Big{)}\Big{]}. (23)

Similarly, if I(s0)I(s_{0}) is player 𝖯𝟤\mathsf{P2}, applying Lemma 16 also yields the same desired result, Eq. (23).

(b) Policy estimate π^(|s0)\hat{\pi}(\cdot|s_{0}): In order to obtain an error bound for the policy estimate of the query state s0s_{0}, i.e., π^(a|s0)=eQ^(s0,a)/τ/a𝒜eQ^(s0,a)/τ\hat{\pi}(a|s_{0})=e^{\hat{Q}(s_{0},a)/\tau}/\sum_{a^{\prime}\in\mathcal{A}}e^{\hat{Q}(s_{0},a^{\prime})/\tau}, we apply Lemma 15 from the previous section. Together with Eq. (22), Lemma 15 yields the following bound:

\mathbb{E}\Big[D_{\textup{KL}}\big(\hat{\pi}(\cdot|s_{0})\|P_{\tau}^{*}(\cdot|s_{0})\big)\Big]\leq\frac{2|\mathcal{A}|}{\tau}\Big[\gamma^{H}\|V-V^{*}\|_{\infty}+O\Big(m^{\eta-1}\Big)\Big]. (24)

Step 3: Completing the proof of Theorem 11. Recall that by the assumption of Theorem 11,

𝔼[VV]ε0,v.\mathbb{E}\Big{[}||V-V^{*}||_{\infty}\Big{]}\leq\varepsilon_{0,v}.

Now, taking expectation of Eqs. (23) and (24) over the randomness in the current model ff, we have

\mathbb{E}\big[\big|\hat{V}(s_{0})-V^{*}(s_{0})\big|\big]\leq|\mathcal{A}|\Big[\gamma^{H}\varepsilon_{0,v}+O\Big(m^{\eta-1}\Big)\Big], (25)
\mathbb{E}\Big[D_{\textup{KL}}\big(\hat{\pi}(\cdot|s_{0})\|P_{\tau}^{*}(\cdot|s_{0})\big)\Big]\leq\frac{2|\mathcal{A}|}{\tau}\Big[\gamma^{H}\varepsilon_{0,v}+O\Big(m^{\eta-1}\Big)\Big]. (26)

Recall that in Theorem 11, our goal for improvement is as follows:

\mathbb{E}\Big[\big|\hat{V}(s_{0})-V^{*}(s_{0})\big|\Big]\leq\zeta_{v}\cdot\varepsilon_{0,v},\>\textrm{ and }\>\mathbb{E}\Big[D_{\textup{KL}}\big(\hat{\pi}(\cdot|s_{0})\,\|\,P^{*}_{\tau}(\cdot|s_{0})\big)\Big]\leq\zeta_{p}\cdot\varepsilon_{0,v}. (27)

It is not hard to see that we can choose the parameters of the fixed depth MCTS algorithm, in particular, the depth HH and the number of simulations mm, in an appropriate way such that Eqs. (25) and (26) satisfy the desired improvement bound. In particular, we could choose the depth HH such that

max{|𝒜|γH,2|𝒜|τγH}min{ζv,ζp}2,\max\Big{\{}|\mathcal{A}|\gamma^{H},\frac{2|\mathcal{A}|}{\tau}\gamma^{H}\Big{\}}\leq\frac{\min\{\zeta_{v},\zeta_{p}\}}{2}, (28)

and choose the number of simulations, mm, to be large enough such that the term O(mη1)O(m^{\eta-1}) is less than γHε0,v\gamma^{H}\varepsilon_{0,v}. For small temperature τ<1\tau<1, choosing the depth

H=logτmin{ζv,ζp}4|𝒜|logγH=\frac{\log\frac{\tau\min\{\zeta_{v},\zeta_{p}\}}{4|\mathcal{A}|}}{\log\gamma}

would satisfy the condition in Eq. (28). Recall that the tunable hyper-parameter $\eta$ is set to $\eta=1/2$. With the above $H$, this implies that the number of simulations should be

m=O\Big(\Big(\frac{\tau\min\{\zeta_{v},\zeta_{p}\}\varepsilon_{0,v}}{4|\mathcal{A}|}\Big)^{-\frac{1}{1-\eta}}\Big)=O\Big(\big({\tau\cdot\min\{\zeta_{v},\zeta_{p}\}\cdot\varepsilon_{0,v}}\big)^{-2}\Big).

To summarize, we have shown that the desired improvement, Eq. (27), can indeed be achieved with appropriate algorithmic parameters. Finally, regarding the sample complexity, note that with an $H$-depth tree, each simulation of the fixed-depth MCTS algorithm incurs $H$ state transitions. Therefore, the total sample complexity of querying the improvement module is $m\cdot H$, which is equal to

O((τmin{ζv,ζp}ε0,v)2log(τmin{ζv,ζp})logγ).O\Big{(}\big{(}{\tau\cdot\min\{\zeta_{v},\zeta_{p}\}\cdot\varepsilon_{0,v}}\big{)}^{-2}\cdot\frac{\log({\tau\cdot\min\{\zeta_{v},\zeta_{p}\}})}{\log\gamma}\Big{)}.
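As an illustration of these parameter choices, the following minimal sketch computes a depth $H$ satisfying Eq. (28) and a simulation count $m$ making the $O(m^{\eta-1})$ term no larger than $\gamma^{H}\varepsilon_{0,v}$, taking the constant hidden in the $O(\cdot)$ to be $1$ as an assumption.

```python
# Sketch of the MCTS parameter choices at the end of the proof of Theorem 11
# (eta = 1/2; the hidden constant in O(m^{eta-1}) is assumed to be 1).
import math

def mcts_parameters(num_actions, tau, zeta_v, zeta_p, eps_0v, gamma):
    zeta = min(zeta_v, zeta_p)
    # depth H: max{|A|, 2|A|/tau} * gamma^H <= zeta / 2, i.e. Eq. (28)
    H = math.ceil(math.log(tau * zeta / (4 * num_actions)) / math.log(gamma))
    # simulations m: m^{-1/2} <= gamma^H * eps_{0,v}
    m = math.ceil((gamma ** H * eps_0v) ** (-2))
    return H, m

H, m = mcts_parameters(num_actions=4, tau=0.1, zeta_v=0.2, zeta_p=0.1,
                       eps_0v=0.5, gamma=0.9)
print(f"H = {H}, m = {m}, per-query sample complexity m*H = {m * H}")
```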

Appendix G Proof of Proposition 12

Proof.

Let $h$, $K$ and $\Delta$ be positive numbers to be chosen later, and recall that $N\equiv N(h)$ is the $h/2$-covering number of $\mathcal{S}$. Let $T_{j}:=B_{j}\cap T$ and $K_{j}:=\left|T_{j}\right|$. For each $j\in[N]$, we have

|1KjxTj(V^(x)V(x))|\displaystyle\left|\frac{1}{K_{j}}\sum_{x\in T_{j}}\left(\hat{V}(x)-V^{*}(x)\right)\right| |1KjxTj(V^(x)𝔼V^(x))|+|1KjxTj(𝔼V^(x)V(x))|\displaystyle\leq\left|\frac{1}{K_{j}}\sum_{x\in T_{j}}\left(\hat{V}(x)-\mathbb{E}\hat{V}(x)\right)\right|+\left|\frac{1}{K_{j}}\sum_{x\in T_{j}}\left(\mathbb{E}\hat{V}(x)-V^{*}(x)\right)\right|
|1KjxTj(V^(x)𝔼V^(x))|+1KjxTj𝔼|V^(x)V(x)|\displaystyle\leq\left|\frac{1}{K_{j}}\sum_{x\in T_{j}}\left(\hat{V}(x)-\mathbb{E}\hat{V}(x)\right)\right|+\frac{1}{K_{j}}\sum_{x\in T_{j}}\mathbb{E}\left|\hat{V}(x)-V^{*}(x)\right|
|1KjxTj(V^(x)𝔼V^(x))|+εv,\displaystyle\leq\left|\frac{1}{K_{j}}\sum_{x\in T_{j}}\left(\hat{V}(x)-\mathbb{E}\hat{V}(x)\right)\right|+\varepsilon_{v},

where the second step follows from Jensen’s inequality, and the last step follows from the premise on the training error of the value function. To bound the first term on the RHS above, we note that the $K_{j}$ random variables $\{\hat{V}(x),x\in T_{j}\}$ are independent and bounded by $V_{\max}$. Thus, Hoeffding’s inequality ensures that

(|1KjxTj(V^(x)𝔼V^(x))|>Δ)\displaystyle\mathbb{P}\left(\left|\frac{1}{K_{j}}\sum_{x\in T_{j}}\left(\hat{V}(x)-\mathbb{E}\hat{V}(x)\right)\right|>\Delta\right) 2exp(2KjΔ2Vmax2)2exp(2KΔ2Vmax2).\displaystyle\leq 2\exp\left(-\frac{2K_{j}\Delta^{2}}{V_{\max}^{2}}\right)\leq 2\exp\left(-\frac{2K\Delta^{2}}{V_{\max}^{2}}\right).

Combining the last two equations and applying a union bound over j[N]j\in[N], we obtain

(maxj[N]|1KjxTj(V^(x)V(x))|>Δ+εv)2Nexp(2KΔ2Vmax2).\mathbb{P}\left(\max_{j\in[N]}\left|\frac{1}{K_{j}}\sum_{x\in T_{j}}\left(\hat{V}(x)-V^{*}(x)\right)\right|>\Delta+\varepsilon_{v}\right)\leq 2N\exp\left(-\frac{2K\Delta^{2}}{V_{\max}^{2}}\right).

Since the random variable Z:=maxj[N]|1KjxTj(V^(x)V(x))|Z:=\max_{j\in[N]}\left|\frac{1}{K_{j}}\sum_{x\in T_{j}}\left(\hat{V}(x)-V^{*}(x)\right)\right| satisfies |Z|2Vmax\left|Z\right|\leq 2V_{\max}, we may convert the above inequality into an expectation bound:

𝔼Z\displaystyle\mathbb{E}Z =𝔼[Z𝕀{ZΔ+εv}]+𝔼[Z𝕀{Z>Δ+εv}]\displaystyle=\mathbb{E}\left[Z\mathbb{I}_{\{Z\leq\Delta+\varepsilon_{v}\}}\right]+\mathbb{E}\left[Z\mathbb{I}_{\{Z>\Delta+\varepsilon_{v}\}}\right]
Δ+εv+2Vmax(Z>Δ+εv)\displaystyle\leq\Delta+\varepsilon_{v}+2V_{\max}\mathbb{P}\left(Z>\Delta+\varepsilon_{v}\right)
Δ+εv+4VmaxNexp(2KΔ2Vmax2).\displaystyle\leq\Delta+\varepsilon_{v}+4V_{\max}N\exp\left(-\frac{2K\Delta^{2}}{V_{\max}^{2}}\right).

We are now ready to bound the quantity of interest:

𝔼supsS|VNN(s)V(s)|\displaystyle\mathbb{E}\sup_{s\in S}\left|V_{\textup{NN}}(s)-V^{*}(s)\right|
=𝔼maxj[N]supsBj|1KjxTj(V^(x)V(s))|\displaystyle=\mathbb{E}\max_{j\in[N]}\sup_{s\in B_{j}}\left|\frac{1}{K_{j}}\sum_{x\in T_{j}}\left(\hat{V}(x)-V^{*}(s)\right)\right|
𝔼[maxj[N]supsBj|1KjxTj(V^(x)V(x))|+maxj[N]supsBj|1KjxTj(V(x)V(s))|]\displaystyle\leq\mathbb{E}\left[\max_{j\in[N]}\sup_{s\in B_{j}}\left|\frac{1}{K_{j}}\sum_{x\in T_{j}}\left(\hat{V}(x)-V^{*}(x)\right)\right|\right.+\left.\max_{j\in[N]}\sup_{s\in B_{j}}\left|\frac{1}{K_{j}}\sum_{x\in T_{j}}\left({V}^{*}(x)-V^{*}(s)\right)\right|\right]
(i)𝔼[maxj[N]|1KjxTj(V^(x)V(x))|]+maxj[N]supsBjLv1KjxTjd(x,s)\displaystyle\overset{(i)}{\leq}\mathbb{E}\left[\max_{j\in[N]}\left|\frac{1}{K_{j}}\sum_{x\in T_{j}}\left(\hat{V}(x)-V^{*}(x)\right)\right|\right]+\max_{j\in[N]}\sup_{s\in B_{j}}L_{v}\frac{1}{K_{j}}\sum_{x\in T_{j}}d(x,s)
(ii)Δ+εv+4VmaxNexp(2KΔ2Vmax2)+Lvh,\displaystyle\overset{(ii)}{\leq}\Delta+\varepsilon_{v}+4V_{\max}N\exp\left(-\frac{2K\Delta^{2}}{V_{\max}^{2}}\right)+L_{v}h,

where step (i)(i) holds because VV^{*} is LvL_{v}-Lipschitz, and step (ii)(ii) holds because the sets TjBjT_{j}\subseteq B_{j} have diameter at most hh. Now taking Δ=εv\Delta=\varepsilon_{v}, we have

𝔼supsS|VNN(s)V(s)|\displaystyle\mathbb{E}\sup_{s\in S}\left|V_{\textup{NN}}(s)-V^{*}(s)\right| 2εv+4VmaxNexp(2Kεv2Vmax2)+Lvh.\displaystyle\leq 2\varepsilon_{v}+4V_{\max}N\exp\left(-\frac{2K\varepsilon_{v}^{2}}{V_{\max}^{2}}\right)+L_{v}h. (29)

We now turn to the second inequality. Under the premise on the training error of the policy, we have, for each i[n]i\in[n],

εp\displaystyle\varepsilon_{p} 𝔼[DKL(π^(|si)Pτ(|si))]\displaystyle\geq\mathbb{E}\left[D_{\textup{KL}}\big{(}\hat{\pi}(\cdot|s_{i})\,\|\,P^{*}_{\tau}(\cdot|s_{i})\big{)}\right]
2𝔼[TV(π^(|si),Pτ(|si))2]\displaystyle\geq 2\mathbb{E}\left[\textup{TV}\left(\hat{\pi}(\cdot|s_{i}),P^{*}_{\tau}(\cdot|s_{i})\right)^{2}\right] Pinsker’s inequality
2𝔼[maxa|π^(a|si)Pτ(a|si)|2].\displaystyle\geq 2\mathbb{E}\left[\max_{a}\left|\hat{\pi}(a|s_{i})-P^{*}_{\tau}(a|s_{i})\right|^{2}\right].

For each a𝒜a\in\mathcal{A}, let us fit the action probability πNN(a|):𝒮[0,1]{\pi_{\textup{NN}}}(a|\cdot):\mathcal{S}\rightarrow[0,1] using a Nearest Neighbor-type algorithm analogous to that for VNNV_{\textup{NN}}:

πNN(a|s)=1|BjT|xBjTπ^(a|x),sBj.\pi_{\textup{NN}}(a|s)=\frac{1}{\left|B_{j}\cap T\right|}\sum_{x\in B_{j}\cap T}\hat{\pi}(a|x),\quad\forall s\in B_{j}.

Note that xT,\forall x\in T, a𝒜,\forall a\in\mathcal{A}, π^(a|x)[0,1]\hat{\pi}(a|x)\in[0,1] and Pτ(a|s)[0,1]P^{*}_{\tau}(a|s)\in[0,1]. Applying an argument similar to the one above to the fitted action probability function πNN(a|)\pi_{\textup{NN}}(a|\cdot) w.r.t. the squared error, we have

𝔼[supsS|πNN(a|s)Pτ(a|s)|2]\displaystyle\mathbb{E}\left[\sup_{s\in S}\left|\pi_{\textup{NN}}(a|s)-P^{*}_{\tau}(a|s)\right|^{2}\right] 2Δ2+εp+4Nexp(2KΔ2)+2Lp2h2.\displaystyle\leq 2\Delta^{2}+\varepsilon_{p}+4N\exp(-2K\Delta^{2})+2L^{2}_{p}h^{2}.

Choosing Δ=εp/2,\Delta=\sqrt{\varepsilon_{p}/2}, we obtain that

𝔼[supsS|πNN(a|s)Pτ(a|s)|2]\displaystyle\mathbb{E}\left[\sup_{s\in S}\left|\pi_{\textup{NN}}(a|s)-P^{*}_{\tau}(a|s)\right|^{2}\right] 2εp+4Nexp(Kεp)+2Lp2h2.\displaystyle\leq 2\varepsilon_{p}+4N\exp(-K\varepsilon_{p})+2L^{2}_{p}h^{2}. (30)

Taking

h=min{εvLv,εp/2Lp},K=max{Vmax22εv2log(4VmaxNεv),1εplog(4Nεp)},\displaystyle h=\min\Big{\{}\frac{\varepsilon_{v}}{L_{v}},\frac{\sqrt{\varepsilon_{p}/2}}{L_{p}}\Big{\}},K=\max\Big{\{}\frac{V_{\max}^{2}}{2\varepsilon_{v}^{2}}\log\Big{(}\frac{4V_{\max}N}{\varepsilon_{v}}\Big{)},\frac{1}{\varepsilon_{p}}\log\Big{(}\frac{4N}{\varepsilon_{p}}\Big{)}\Big{\}}, (31)

from Eqs. (29)-(30), we have the following bounds:

𝔼supsS\displaystyle\mathbb{E}\sup_{s\in S} |VNN(s)V(s)|4εv,\displaystyle\left|V_{\textup{NN}}(s)-V^{*}(s)\right|\leq 4\varepsilon_{v},
𝔼supsS\displaystyle\mathbb{E}\sup_{s\in S} |πNN(a|s)Pτ(a|s)|24εp.\displaystyle\left|\pi_{\textup{NN}}(a|s)-P^{*}_{\tau}(a|s)\right|^{2}\leq 4\varepsilon_{p}.

This proves the first inequality of the proposition.

We now focus on the inequality for the policy function. Since the supremum of expectations is at most the expectation of the supremum, we have

supsS𝔼[|πNN(a|s)Pτ(a|s)|2]𝔼[supsS|πNN(a|s)Pτ(a|s)|2]4εp.\displaystyle\sup_{s\in S}\mathbb{E}\left[\left|\pi_{\textup{NN}}(a|s)-P^{*}_{\tau}(a|s)\right|^{2}\right]\leq\mathbb{E}\left[\sup_{s\in S}\left|\pi_{\textup{NN}}(a|s)-P^{*}_{\tau}(a|s)\right|^{2}\right]\leq 4\varepsilon_{p}.

In particular, for each sSs\in S,

4εp\displaystyle 4\varepsilon_{p} maxa𝔼[|πNN(a|s)Pτ(a|s)|2]\displaystyle\geq\max_{a}\mathbb{E}\left[\left|\pi_{\textup{NN}}(a|s)-P^{*}_{\tau}(a|s)\right|^{2}\right]
1|𝒜|𝔼[maxa|πNN(a|s)Pτ(a|s)|2].\displaystyle\geq\frac{1}{\left|\mathcal{A}\right|}\mathbb{E}\left[\max_{a}\left|\pi_{\textup{NN}}(a|s)-P^{*}_{\tau}(a|s)\right|^{2}\right]. (32)

On the other hand, for each s𝒮s\in\mathcal{S} and each a𝒜a\in\mathcal{A}, we have the bound

P^{*}_{\tau}(a|s)=\frac{\exp\left(Q^{*}(s,a)/\tau\right)}{\sum_{a^{\prime}}\exp\left(Q^{*}(s,a^{\prime})/\tau\right)}\geq\frac{\exp\left(-V_{\max}/\tau\right)}{\left|\mathcal{A}\right|\exp\left(V_{\max}/\tau\right)}=:\alpha,

where α>0\alpha>0. Therefore, by the reverse Pinsker’s inequality (cf. Appendix A) we have

DKL(πNN(|s)Pτ(|s))\displaystyle D_{\textup{KL}}\big{(}\pi_{\textup{NN}}(\cdot|s)\,\|\,P^{*}_{\tau}(\cdot|s)\big{)} 2αTV(πNN(|s),Pτ(|s))2\displaystyle\leq\frac{2}{\alpha}\textup{TV}\left(\pi_{\textup{NN}}(\cdot|s),P^{*}_{\tau}(\cdot|s)\right)^{2}
|𝒜|2αmaxa|πNN(a|s)Pτ(a|s)|2,\displaystyle\leq\frac{\left|\mathcal{A}\right|^{2}}{\alpha}\max_{a}\left|\pi_{\textup{NN}}(a|s)-P^{*}_{\tau}(a|s)\right|^{2},

Combining this with the bound (32), we obtain

𝔼[DKL(πNN(|s)Pτ(|s))]|𝒜|2α|𝒜|4=:cεp.\mathbb{E}\left[D_{\textup{KL}}\big{(}\pi_{\textup{NN}}(\cdot|s)\,\|\,P^{*}_{\tau}(\cdot|s)\big{)}\right]\leq\underbrace{\frac{\left|\mathcal{A}\right|^{2}}{\alpha}\cdot\left|\mathcal{A}\right|\cdot 4}_{=:c}\cdot\varepsilon_{p}.

This completes the proof of the second inequality. ∎

Appendix H Proof of Proposition 13

In the sequel, we use the shorthand NN(h)N\equiv N(h) and refer to each BjB_{j} as a ball. For each integer t1t\geq 1, let WtW_{t} be the number of balls visited up to time tt. Let Tinf{t1:Wt=N}T\triangleq\inf\left\{t\geq 1:W_{t}=N\right\} be the first time when all balls are visited. For each w{1,2,,N}w\in\{1,2,\ldots,N\}, let Twinf{t1:Wt=w}T_{w}\triangleq\inf\left\{t\geq 1:W_{t}=w\right\} be the first time when the ww-th ball is visited, and let DwTwTw1D_{w}\triangleq T_{w}-T_{w-1} be the time to visit the ww-th ball after (w1)(w-1) balls have been visited. We use the convention that T0=D0=0T_{0}=D_{0}=0. By definition, we have T=w=1NDw.T=\sum_{w=1}^{N}D_{w}.

When w1w-1 balls have been visited, the probability of visiting a new ball at the next step is at least

minI[N],|I|=Nw+1{sTw1+1iIBi|sTw1}\displaystyle\min_{I\subseteq[N],\left|I\right|=N-w+1}\mathbb{P}\Big{\{}s_{T_{w-1}+1}\in\bigcup_{i\in I}B_{i}|s_{T_{w-1}}\Big{\}}
=\displaystyle= minI[N],|I|=Nw+1iI{sTw1+1Bi|sTw1}\displaystyle\min_{I\subseteq[N],\left|I\right|=N-w+1}\sum_{i\in I}\mathbb{P}\left\{s_{T_{w-1}+1}\in B_{i}|s_{T_{w-1}}\right\}
\displaystyle\geq (Nw+1)mini[N]{sTw1+1Bi|sTw1}\displaystyle(N-w+1)\min_{i\in[N]}\mathbb{P}\left\{s_{T_{w-1}+1}\in B_{i}|s_{T_{w-1}}\right\}
\displaystyle\geq (Nw+1)c0N,\displaystyle(N-w+1)\cdot\frac{c_{0}}{N},

where the last inequality follows from the regularity assumption. Therefore, the time to visit the ww-th ball after w1w-1 balls have been visited, Dw,D_{w}, is stochastically dominated by a geometric random variable with mean at most N(Nw+1)c0.\frac{N}{(N-w+1)c_{0}}. It follows that

𝔼T\displaystyle\mathbb{E}T =w=1N𝔼Dww=1NN(Nw+1)c0=O(Nc0logN).\displaystyle=\sum_{w=1}^{N}\mathbb{E}D_{w}\leq\sum_{w=1}^{N}\frac{N}{(N-w+1)c_{0}}=O\left(\frac{N}{c_{0}}\log N\right).
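
Explicitly, substituting k=Nw+1k=N-w+1, the sum above is a scaled harmonic series:

\sum_{w=1}^{N}\frac{N}{(N-w+1)c_{0}}=\frac{N}{c_{0}}\sum_{k=1}^{N}\frac{1}{k}\leq\frac{N}{c_{0}}\big{(}1+\log N\big{)}=O\Big{(}\frac{N}{c_{0}}\log N\Big{)}.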

This proves that the expected time to sample an (h,1)(h,1)-representative set is upper bounded by O(Nc0logN)O\left(\frac{N}{c_{0}}\log N\right). Note that if the trajectory samples KK (h,1)(h,1)-representative sets, then each ball must be visited at least KK times. Therefore, K𝔼TK\cdot\mathbb{E}T gives an upper bound on the expected time to sample an (h,K)(h,K)-representative set, hence

B(τ,εv,εp)=O(KNc0logN).\displaystyle B(\tau,\varepsilon_{v},\varepsilon_{p})=O\left(\frac{KN}{c_{0}}\log N\right).

Appendix I Proof of Theorem 14

We reuse the notation introduced in the proof of Theorem 2. We initialize the value model V0(s)=0,V_{0}(s)=0, s𝒮\forall s\in\mathcal{S}. Hence V0VVmax.\big{\|}V_{{0}}-V^{*}\big{\|}_{\infty}\leq V_{\max}. Consider the ll-th iteration. Let ωv(l)\omega_{v}^{(l)} and ωp(l)\omega_{p}^{(l)} denote the estimation errors for the model fl1=(Vl1,πl1)f_{l-1}=(V_{l-1},\pi_{l-1}) at the beginning of the ll-th iteration:

ωv(l)\displaystyle\omega_{v}^{(l)} =𝔼[VlV],\displaystyle=\mathbb{E}\Big{[}\big{\|}V_{{l}}-V^{*}\big{\|}_{\infty}\Big{]},
ωp(l)\displaystyle\omega_{p}^{(l)} =supsS𝔼[DKL(πl(|s)Pτ(|s))].\displaystyle=\sup_{s\in S}\mathbb{E}\Big{[}D_{\text{KL}}\big{(}\pi_{{l}}(\cdot|s)\|P_{\tau}^{*}(\cdot|s)\big{)}\Big{]}.

Let S(l)={si}i=1nlS^{(l)}=\{s_{i}\}_{i=1}^{n_{l}} be the set of states visited by the random sampling policy during the ll-th iteration. We require S(l)S^{(l)} to be (h(l),K(l))(h^{(l)},K^{(l)})-representative, where

ξ(l)\displaystyle\xi^{(l)} Vmaxρl4\displaystyle\triangleq\frac{V_{\max}\rho^{l}}{4} (33)
h(l)\displaystyle h^{(l)} =min{ξ(l)Lv,ξ(l)/2Lp}\displaystyle=\min\Big{\{}\frac{\xi^{(l)}}{L_{v}},\frac{\sqrt{\xi^{(l)}/2}}{L_{p}}\Big{\}} (34)
K(l)\displaystyle K^{(l)} =max{Vmax22(ξ(l))2log(4VmaxN(h(l))ξ(l)),1ξ(l)log(4N(h(l))ξ(l))}.\displaystyle=\max\Big{\{}\frac{V_{\max}^{2}}{2\big{(}\xi^{(l)}\big{)}^{2}}\log\Big{(}\frac{4V_{\max}N(h^{(l)})}{\xi^{(l)}}\Big{)},\frac{1}{\xi^{(l)}}\log\Big{(}\frac{4N(h^{(l)})}{\xi^{(l)}}\Big{)}\Big{\}}. (35)

We use 𝒟(l)={(si,V^(l)(si),π^(l)(|si))}i=1nl\mathcal{D}^{(l)}=\Big{\{}\big{(}s_{i},\hat{V}^{(l)}(s_{i}),\hat{\pi}^{(l)}(\cdot|s_{i})\big{)}\Big{\}}_{i=1}^{n_{l}} to denote the set of training data generated by querying MCTS. We choose the depth H(l)H^{(l)} and the number of simulations m(l)m^{(l)} for MCTS at the ll-th iteration as follows:

H(l)\displaystyle H^{(l)} =logτρ16|𝒜|logγ,\displaystyle=\frac{\log\frac{\tau\rho}{16|\mathcal{A}|}}{\log\gamma}, (36)
m(l)\displaystyle m^{(l)} =c1(τVmaxρl16|𝒜|)2,\displaystyle=c_{1}\Big{(}\frac{\tau V_{\max}\rho^{l}}{16|\mathcal{A}|}\Big{)}^{-2}, (37)

where c1c_{1} is a sufficiently large constant. By Theorem 11, for each query state sis_{i}, the output (V^(si),π^(|si))(\hat{V}(s_{i}),\hat{\pi}(\cdot|s_{i})) from MCTS satisfies

𝔼[|V^(si)V(si)|]ρ4ωv(l), and 𝔼[DKL(π^(|si)Pτ(|si))]ρ4ωv(l).\displaystyle\mathbb{E}\Big{[}\big{|}\hat{V}(s_{i})-V^{*}(s_{i})\big{|}\Big{]}\leq\frac{\rho}{4}\omega_{v}^{(l)},\>\textrm{ and }\>\mathbb{E}\Big{[}D_{\textup{KL}}\big{(}\hat{\pi}(\cdot|s_{i})\,\|\,P^{*}_{\tau}(\cdot|s_{i})\big{)}\Big{]}\leq\frac{\rho}{4}\omega_{v}^{(l)}.

That is, the improvement factors of MCTS for the value function and the policy, ζv\zeta_{v} and ζp\zeta_{p}, are as follows:

ζv=ζp=ρ4.\displaystyle\zeta_{v}=\zeta_{p}=\frac{\rho}{4}.

Note that the training set 𝒟(l)\mathcal{D}^{(l)} has estimation error ρ4ωv(l)\frac{\rho}{4}\omega_{v}^{(l)} for both the value and the policy, and the sampled states S(l)S^{(l)} of the training set are (h(l),K(l))(h^{(l)},K^{(l)})-representative. By Proposition 12, the output of the nearest neighbor supervised learning at the end of the ll-th iteration satisfies the following generalization property:

𝔼[VNNV]\displaystyle\mathbb{E}\left[\left\|V_{\textup{NN}}-V^{*}\right\|_{\infty}\right] ρωv(l),\displaystyle\leq\rho\omega_{v}^{(l)},
𝔼[DKL(πNN(|s)Pτ(|s))]\displaystyle\mathbb{E}\left[D_{\textup{KL}}\big{(}\pi_{\textup{NN}}(\cdot|s)\,\|\,P^{*}_{\tau}(\cdot|s)\big{)}\right] cρωv(l)4,s𝒮.\displaystyle\leq\frac{c\rho\omega_{v}^{(l)}}{4},\quad\forall s\in\mathcal{S}.

Therefore,

ωv(l+1)\displaystyle\omega_{v}^{(l+1)} ρωv(l),andωp(l+1)c4ρωv(l).\displaystyle\leq\rho\omega_{v}^{(l)},\quad\mbox{and}\quad\omega_{p}^{(l+1)}\leq\frac{c}{4}\rho\omega_{v}^{(l)}.

Since ωv(1)=Vmax,\omega_{v}^{(1)}=V_{\max}, we have

ωv(l+1)\displaystyle\omega_{v}^{(l+1)} Vmaxρl,andωp(l+1)cVmax4ρl.\displaystyle\leq V_{\max}\rho^{l},\quad\mbox{and}\quad\omega_{p}^{(l+1)}\leq\frac{cV_{\max}}{4}\rho^{l}.

That is,

𝔼[Vl+1V]Vmaxρl,\displaystyle\mathbb{E}\Big{[}\big{\|}V_{{l+1}}-V^{*}\big{\|}_{\infty}\Big{]}\leq V_{\max}\rho^{l},
𝔼[DKL(πl+1(|s)Pτ(|s))]cVmax4ρl,s𝒮.\displaystyle\mathbb{E}\Big{[}D_{\textup{KL}}\big{(}{\pi}_{l+1}(\cdot|s)\|P_{\tau}^{*}(\cdot|s)\big{)}\Big{]}\leq\frac{cV_{\max}}{4}\rho^{l},\quad\forall s\in\mathcal{S}.

During the ll-th iteration, the total sample complexity M(l)M^{(l)} is given by M(l)=H(l)m(l)nl.M^{(l)}=H^{(l)}m^{(l)}n_{l}. From Eqs. (33)-(37), we have

H(l)\displaystyle H^{(l)} =c2log1τρ,\displaystyle=c_{2}\log\frac{1}{\tau\rho},
m(l)\displaystyle m^{(l)} =c3τ2ρ2l,\displaystyle=\frac{c_{3}}{\tau^{2}\rho^{2l}},
h(l)\displaystyle h^{(l)} =c4ρl,\displaystyle=c_{4}\rho^{l},
K(l)\displaystyle K^{(l)} =c5ρ2llogN(c4ρl)ρl,\displaystyle=\frac{c_{5}}{\rho^{2l}}\log\frac{N(c_{4}\rho^{l})}{\rho^{l}},

where c2,c3,c4,c5c_{2},c_{3},c_{4},c_{5} are positive constants independent of ρ\rho and ll. By Proposition 13, we have

𝔼[nl]\displaystyle\mathbb{E}[n_{l}] O(K(l)N(h(l))c0logN(h(l)))=c6K(l)N(h(l))logN(h(l)),\displaystyle\leq O\Big{(}\frac{K^{(l)}N(h^{(l)})}{c_{0}}\log N(h^{(l)})\Big{)}=c_{6}K^{(l)}N(h^{(l)})\log N(h^{(l)}),

where c6>0c_{6}>0 is a constant.
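
For concreteness, the per-iteration schedule above can be summarized in a short, purely illustrative Python sketch; the constants c1,c4,c5,c6c_{1},c_{4},c_{5},c_{6} and the covering-number proxy N(h)hdN(h)\approx h^{-d} for the unit hypercube are assumptions made only for this sketch, not part of the analysis.

import math

def eis_schedule(l, rho, tau, gamma, V_max, num_actions, d,
                 c1=1.0, c4=1.0, c5=1.0, c6=1.0):
    # Illustrative per-iteration parameters for EIS with MCTS + nearest neighbor,
    # following Eqs. (33)-(37); c1, c4, c5, c6 are placeholder constants.
    xi = V_max * rho ** l / 4                                          # target error xi^(l), Eq. (33)
    H = math.log(tau * rho / (16 * num_actions)) / math.log(gamma)     # MCTS depth, Eq. (36)
    m = c1 * (tau * V_max * rho ** l / (16 * num_actions)) ** (-2)     # MCTS simulations, Eq. (37)
    h = c4 * rho ** l                                                  # covering radius h^(l)
    N = h ** (-d)                                                      # covering-number proxy N(h^(l))
    K = (c5 / rho ** (2 * l)) * math.log(N / rho ** l)                 # visits per ball K^(l)
    n_l = c6 * K * N * math.log(N)                                     # E[n_l], via Proposition 13
    return {"xi": xi, "H": H, "m": m, "h": h, "K": K,
            "n_l": n_l, "M_l": H * m * n_l}                            # M^(l) = H^(l) m^(l) n_l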

Following the argument in the proof of Proposition 3, we have: with probability at least 1δ,1-\delta,

l=1LM(l)\displaystyle\sum_{l=1}^{L}M^{(l)} =l=1LH(l)m(l)nl\displaystyle=\sum_{l=1}^{L}H^{(l)}m^{(l)}n_{l}
l=1LH(l)m(l)𝔼[nl]elogLδ\displaystyle\leq\sum_{l=1}^{L}H^{(l)}m^{(l)}\mathbb{E}[n_{l}]e\log\frac{L}{\delta}
=l=1Lc2log1τρc3τ2ρ2lc6c5ρ2llogN(c4ρl)ρlN(c4ρl)logN(c4ρl)elogLδ\displaystyle=\sum_{l=1}^{L}c_{2}\log\frac{1}{\tau\rho}\cdot\frac{c_{3}}{\tau^{2}\rho^{2l}}\cdot c_{6}\frac{c_{5}}{\rho^{2l}}\log\frac{N(c_{4}\rho^{l})}{\rho^{l}}\cdot N(c_{4}\rho^{l})\log N(c_{4}\rho^{l})\cdot e\log\frac{L}{\delta}
=l=1Lclog1τρ1τ2ρ4llogN(c4ρl)ρlN(c4ρl)logN(c4ρl)logLδ.\displaystyle=\sum_{l=1}^{L}c^{\prime}\log\frac{1}{\tau\rho}\cdot\frac{1}{\tau^{2}\rho^{4l}}\cdot\log\frac{N(c_{4}\rho^{l})}{\rho^{l}}\cdot N(c_{4}\rho^{l})\cdot\log N(c_{4}\rho^{l})\cdot\log\frac{L}{\delta}.

Since 0<ρ<10<\rho<1, each summand above is nondecreasing in ll and hence at most the l=Ll=L term; thus

l=1LM(l)\displaystyle\sum_{l=1}^{L}M^{(l)} =O(Llog1τρ1τ2ρ4LlogN(ρL)ρLN(ρL)logN(ρL)logLδ).\displaystyle=O\Big{(}L\cdot\log\frac{1}{\tau\rho}\cdot\frac{1}{\tau^{2}\rho^{4L}}\cdot\log\frac{N(\rho^{L})}{\rho^{L}}\cdot N(\rho^{L})\cdot\log N(\rho^{L})\cdot\log\frac{L}{\delta}\Big{)}.

Given 0<ρ<10<\rho<1, for L=Θ(log1ε)L=\Theta(\log\frac{1}{\varepsilon}), i.e., ρLε\rho^{L}\asymp\varepsilon, we have

l=1LM(l)\displaystyle\sum_{l=1}^{L}M^{(l)} =O(log1εlog1τρ1τ2ε4logN(ε)εN(ε)logN(ε)loglog1εδ).\displaystyle=O\Big{(}\log\frac{1}{\varepsilon}\cdot\log\frac{1}{\tau\rho}\cdot\frac{1}{\tau^{2}\varepsilon^{4}}\cdot\log\frac{N(\varepsilon)}{\varepsilon}\cdot N(\varepsilon)\cdot\log N(\varepsilon)\cdot\log\frac{\log\frac{1}{\varepsilon}}{\delta}\Big{)}.

In particular, if the state space 𝒮\mathcal{S} is the unit hypercube in d\mathbb{R}^{d}, we have N(ε)=O(εd)N(\varepsilon)=O\big{(}\varepsilon^{-d}\big{)}. Therefore,

l=1LM(l)\displaystyle\sum_{l=1}^{L}M^{(l)} =O(log1τρ1τ2εd+4log4(1ε)log1δ).\displaystyle=O\bigg{(}\log\frac{1}{\tau\rho}\cdot\frac{1}{\tau^{2}\varepsilon^{d+4}}\cdot\log^{4}\Big{(}\frac{1}{\varepsilon}\Big{)}\cdot\log\frac{1}{\delta}\bigg{)}.

Appendix J Empirical Results: A Case-Study

To understand how well the EIS method performs for turn-based Markov games, we consider a simple, non-deterministic game as a proof of concept and apply our EIS framework to it. As mentioned in Section 2, the sparse sampling oracle (Kearns et al., 2002) can be used for the improvement module in this case. This oracle is simple but suffices to convey the insights. In what follows, we demonstrate the effectiveness of EIS by comparing its final estimates of the value function with the (approximately) optimal one obtained via standard value iteration for games.

Setup. Consider a two-player turn-based Markov game (𝒮1,𝒮2,𝒜1,𝒜2,r,P,γ),(\mathcal{S}_{1},\mathcal{S}_{2},\mathcal{A}_{1},\mathcal{A}_{2},r,P,\gamma), where 𝒮1=[0.1,1.1]\mathcal{S}_{1}=[0.1,1.1] and 𝒮2=[1.1,0.1]\mathcal{S}_{2}=[-1.1,-0.1] are the sets of states controlled by 𝖯𝟣\mathsf{P1} and 𝖯𝟤,\mathsf{P2}, respectively; 𝒜1=𝒜2={0.1,0.2,0.3,0.4,0.5}\mathcal{A}_{1}=\mathcal{A}_{2}=\{0.1,0.2,0.3,0.4,0.5\} are the action sets; and r(s,a)=3(|s|0.5)2ar(s,a)=3(|s|-0.5)^{2}-a is the reward received by 𝖯𝟣\mathsf{P1} when the corresponding player I(s)I(s) takes action aa at state ss. For each real number u,u, define two clipping operators Π𝒮i(u)=min{max{min𝒮i,u},max𝒮i},\Pi_{\mathcal{S}_{i}}(u)=\min\{\max\{\min{\mathcal{S}_{i}},u\},\max{\mathcal{S}_{i}}\}, i{1,2}.i\in\{1,2\}. That is, Π𝒮i(u)\Pi_{\mathcal{S}_{i}}(u) projects uu onto the state space of player i.i. At state s,s, upon the corresponding player I(s)I(s) taking an action a,a, the system transitions to the next state sΠ𝒮i(|s|+𝒩(a,1)),s^{\prime}\sim\Pi_{\mathcal{S}_{i}}\big{(}-|s|+\mathcal{N}(a,1)\big{)}, where 𝒩(,)\mathcal{N}(\cdot,\cdot) denotes the Gaussian distribution, i=1i=1 if I(s)=2,I(s)=2, and i=2i=2 if I(s)=1.I(s)=1. We consider the case γ=0.8.\gamma=0.8.
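
To make the setup concrete, the following is a minimal Python sketch of this game; it is illustrative only (not the implementation used for the reported experiments), and names such as step and player are ours.

import numpy as np

ACTIONS = [0.1, 0.2, 0.3, 0.4, 0.5]   # A1 = A2
S1 = (0.1, 1.1)                        # states controlled by P1 (maximizer)
S2 = (-1.1, -0.1)                      # states controlled by P2 (minimizer)
GAMMA = 0.8

def player(s):
    # I(s): the player acting at state s.
    return 1 if S1[0] <= s <= S1[1] else 2

def reward(s, a):
    # Reward received by P1 when the acting player takes action a at state s.
    return 3.0 * (abs(s) - 0.5) ** 2 - a

def step(s, a, rng=None):
    # One transition: s' ~ Pi_{S_i}(-|s| + N(a, 1)), where the next state is
    # clipped onto the *other* player's state space (turns alternate).
    rng = rng or np.random.default_rng()
    lo, hi = S1 if player(s) == 2 else S2
    s_next = float(np.clip(-abs(s) + rng.normal(a, 1.0), lo, hi))
    return s_next, reward(s, a)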

Before proceeding, let us build intuition for what the solution of this game should look like. Recall that 𝖯𝟣\mathsf{P1} is the reward maximizer (rewards are referenced to 𝖯𝟣\mathsf{P1}). Since the reward is r(s,a)=3(|s|0.5)2ar(s,a)=3(|s|-0.5)^{2}-a, by design, 𝖯𝟣\mathsf{P1} would prefer to stay at states that are far from 0.50.5. That is, we expect the value function for 𝖯𝟣\mathsf{P1} over the range [0.1,1.1][0.1,1.1] (recall that 𝒮1=[0.1,1.1]\mathcal{S}_{1}=[0.1,1.1]) to be larger at states far from 0.5. On the contrary, as the reward minimizer, 𝖯𝟤\mathsf{P2} would like to stay at state 0.5-0.5 (recall that 𝒮2=[1.1,0.1]\mathcal{S}_{2}=[-1.1,-0.1]) and potentially take a large action aa to minimize the reward. As such, we expect the value function over [1.1,0.1][-1.1,-0.1] to be small around state 0.5-0.5. We will confirm this intuition with both value iteration and our EIS method.

Algorithm. We apply the EIS algorithm to learn the Nash equilibrium of the Markov game described above. Specifically, we use nearest neighbor regression as the supervised learning module and random sampling as the exploration module (cf. Section 6). As the Markov game is stochastic, we use a variant of the sparse sampling method of Kearns et al. (2002) as the improvement module. This algorithm is simple to describe and analyze, yet suffices to convey the essential insights.
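
For the supervised learning module, a minimal sketch of a ball-averaging nearest-neighbor fit, in the spirit of the VNNV_{\textup{NN}} and πNN\pi_{\textup{NN}} estimators analyzed in Appendix G, is given below (illustrative only and reusing numpy from the sketch above; the choice of centers and the handling of empty balls are our assumptions).

def fit_nearest_neighbor(centers, states, v_hat, pi_hat):
    # centers: (N, d) ball centers; states: (n, d) training states from exploration;
    # v_hat: (n,) value targets from the improvement module; pi_hat: (n, |A|) policy targets.
    dists = np.linalg.norm(states[:, None, :] - centers[None, :, :], axis=-1)
    assign = np.argmin(dists, axis=1)               # T_j: training states assigned to ball B_j
    N, A = centers.shape[0], pi_hat.shape[1]
    v_fit = np.zeros(N)
    pi_fit = np.full((N, A), 1.0 / A)               # fall back to uniform on empty balls
    for j in range(N):
        mask = assign == j
        if mask.any():
            v_fit[j] = v_hat[mask].mean()           # V_NN on B_j: average of value targets
            pi_fit[j] = pi_hat[mask].mean(axis=0)   # pi_NN(.|s) on B_j: average of policy targets
    return v_fit, pi_fit

def predict(s, centers, v_fit, pi_fit):
    # Evaluate the fitted model at a query state s via its nearest ball center.
    j = int(np.argmin(np.linalg.norm(centers - np.asarray(s), axis=-1)))
    return v_fit[j], pi_fit[j]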

Sparse Sampling Oracle. The sparse sampling algorithm can be viewed as a form of non-adaptive tree search for estimating the value of a given state. In particular, each node on the tree represents a state and each edge is labeled by an action and a reward. Starting from the root node, i.e., the queried state s0s_{0} of the oracle, the tree is built in a simple manner: consider a node (state) ss; for each action a𝒜a\in\mathcal{A}, call the generative model CC times on the state-action pair (s,a)(s,a) and obtain CC children of the action aa; the children nodes are the states returned by the generative model, and each edge is labeled by the action aa and the reward returned by the generative model. The process is repeated for all nodes of each level, and then moves on to the next level until reaching a depth of HH. In essence, this process builds a |𝒜|C|\mathcal{A}|C-ary tree of depth HH. It represents a partial HH-step look-ahead tree starting from the queried state s0s_{0}, and hence the term sparse sampling oracle.

To obtain an estimate for the value of the queried state, estimates from the supervised learning module are assigned to the leaf nodes at depth HH. These values, together with the associated rewards on the edges, are then backed up to obtain value estimates for their parents, i.e., nodes at depth H1H-1. The backup is a simple average over the children, followed by the appropriate max or min operation depending on who is the acting player at that layer of the tree. The process is applied recursively from the leaves up to the root to obtain the estimate V^(s0)\hat{V}(s_{0}) at the root node s0s_{0}. In the experiments, we set H=2,C=30.H=2,C=30.
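
A minimal sketch of this oracle, building on the environment sketch above, is as follows; V_sl stands for the current supervised-learning value estimate and is our naming convention, not the authors'.

def sparse_sampling_value(s, V_sl, depth=2, C=30, rng=None):
    # Non-adaptive look-ahead: at a leaf, fall back to the supervised-learning estimate.
    rng = rng or np.random.default_rng()
    if depth == 0:
        return V_sl(s)
    q_values = []
    for a in ACTIONS:
        # Call the generative model C times for (s, a); back up by averaging
        # reward + gamma * (estimated value of the child).
        total = 0.0
        for _ in range(C):
            s_next, r = step(s, a, rng)
            total += r + GAMMA * sparse_sampling_value(s_next, V_sl, depth - 1, C, rng)
        q_values.append(total / C)
    # Max for the maximizer P1, min for the minimizer P2 at this node.
    return max(q_values) if player(s) == 1 else min(q_values)

# Example query at a root state s0 with H = 2, C = 30, as in the experiments:
# v_hat_s0 = sparse_sampling_value(s0, V_sl=lambda s: 0.0, depth=2, C=30)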

Figure 1: Results of EIS for various iterations. Left: Approximate optimal V^\hat{V}^{*} and the value function estimation VtV_{t} of EIS obtained at various iterations. Right: Average distance and maximum distance between V^\hat{V}^{*} and VtV_{t} at each EIS iteration.

Evaluation. We first use value iteration on a discretized version of the game to approximate the optimal value function VV^{*}. The value iteration operator 𝒯\mathcal{T} is defined as follows:

(𝒯V)(s)={maxa𝒜1{r(s,a)+γ𝔼[V(s)|s0=s,a0=a]},if s𝒮1mina𝒜2{r(s,a)+γ𝔼[V(s)|s0=s,a0=a]},if s𝒮2(\mathcal{T}V)(s)=\begin{cases}\max_{a\in\mathcal{A}_{1}}\big{\{}r(s,a)+\gamma\mathbb{E}[V(s^{\prime})|s_{0}=s,a_{0}=a]\big{\}},&\text{if }s\in\mathcal{S}_{1}\\ \min_{a\in\mathcal{A}_{2}}\big{\{}r(s,a)+\gamma\mathbb{E}[V(s^{\prime})|s_{0}=s,a_{0}=a]\big{\}},&\text{if }s\in\mathcal{S}_{2}\end{cases}

For the continuous game considered here, we discretize the state space of each player into 1,5001,500 equally spaced states. The above value iteration operator is then applied to obtain an approximately optimal value function V^\hat{V}^{*}. The approximation V^\hat{V}^{*} generated by 3030 iterations of value iteration is plotted in red in Figure 1 (Left). As expected, the result is consistent with our intuition that 𝖯𝟣\mathsf{P1} attempts to stay away from 0.50.5 and 𝖯𝟤\mathsf{P2} tries to minimize the reward by staying around 0.5-0.5.
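
A minimal sketch of this discretized value iteration, reusing the environment sketch above and estimating the expectation over the Gaussian transition by Monte Carlo sampling (the sample size n_mc is our choice for illustration), could look as follows.

def approximate_value_iteration(n_grid=1500, n_iters=30, n_mc=200, seed=0):
    rng = np.random.default_rng(seed)
    # 1,500 equally spaced states per player.
    grid = np.concatenate([np.linspace(*S2, n_grid), np.linspace(*S1, n_grid)])
    V = np.zeros(len(grid))

    def V_of(s):
        # Read off the current value at the nearest grid point.
        return V[np.argmin(np.abs(grid - s))]

    for _ in range(n_iters):
        V_new = np.empty_like(V)
        for i, s in enumerate(grid):
            q = []
            for a in ACTIONS:
                draws = [step(s, a, rng) for _ in range(n_mc)]
                q.append(np.mean([r + GAMMA * V_of(s_next) for s_next, r in draws]))
            # Bellman backup: max for P1 states, min for P2 states.
            V_new[i] = max(q) if player(s) == 1 else min(q)
        V = V_new
    return grid, V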

Next, we evaluate our EIS method and compare its outputs against the approximation V^\hat{V}^{*}. In particular, let VtV_{t} denote the value function obtained by EIS after tt iterations. Figure 1 (Left) shows the progress of EIS (i.e., VtV_{t}) at various iterations. It is clear that EIS gradually improves its estimate of the value function. On the right of Figure 1, we plot the average distance as well as the maximum distance between VtV_{t} and V^\hat{V}^{*} over 1515 iterations. We remark that there is an inevitable gap between V^\hat{V}^{*} and VV^{*} due to discretization. As can be seen, the error of the EIS output decays steadily.