On Reinforcement Learning for Turn-based Zero-sum
Markov Games
Abstract
We consider the problem of finding Nash equilibrium for two-player turn-based zero-sum games. Inspired by the AlphaGo Zero (AGZ) algorithm (Silver et al., 2017b), we develop a Reinforcement Learning based approach. Specifically, we propose the Explore-Improve-Supervise (EIS) method that combines "exploration", "policy improvement" and "supervised learning" to find the value function and policy associated with Nash equilibrium. We identify sufficient conditions for convergence and correctness of such an approach. For a concrete instance of EIS where a random policy is used for "exploration", Monte-Carlo Tree Search is used for "policy improvement" and Nearest Neighbors is used for "supervised learning", we establish that this method finds an $\varepsilon$-approximate value function of Nash equilibrium in $\widetilde{O}\big(\varepsilon^{-(d+4)}\big)$ steps when the underlying state space of the game is continuous and $d$-dimensional. This is nearly optimal, as we establish a lower bound of $\widetilde{\Omega}\big(\varepsilon^{-(d+2)}\big)$ for any policy.
1 Introduction
In 2016, AlphaGo (Silver et al., 2016) became the first program to defeat the world champion in the game of Go. Soon after, another program, AlphaGo Zero (AGZ) (Silver et al., 2017b), achieved even stronger performance despite learning the game from scratch given only the rules. Starting tabula rasa, AGZ mastered the game of Go entirely through self-play using a new reinforcement learning algorithm. The same algorithm was shown to achieve superhuman performance in Chess and Shogi (Silver et al., 2017a).
One key innovation of AGZ is to learn a policy and value function using supervised learning from samples generated via Monte-Carlo Tree Search. Motivated by the remarkable success of this method, in this work we study the problem of finding Nash Equilibrium for two-player turn-based zero-sum games and in particular consider a reinforcement learning based approach.
Our Contributions. The central contribution of this work is the Explore-Improve-Supervise (EIS) method for finding Nash Equilibrium for two-player turn-based zero-sum games with continuous state space, modeled through the framework of Markov game. It is an iterative method where in each iteration three components are intertwined carefully: “explore” that allows for measured exploration of the state space, “improve” which allows for improving the current value and policy for the state being explored, and “supervise” which learns the improved value and policy over the explored states so as to generalize over the entire state space.
Importantly, we identify sufficient conditions, in terms of each of the "explore", "improve" and "supervise" modules, under which convergence to the value function of the Nash equilibrium is guaranteed. In particular, we establish finite sample complexity bounds for such a generic method to find an $\varepsilon$-approximate value function of Nash equilibrium. See Theorem 2 and Proposition 3 for the precise statements.
We establish that when random sampling is used for "exploration", Monte-Carlo Tree Search (MCTS) is used for "policy improvement" and Nearest Neighbor regression is used for "supervised learning", the theoretical conditions identified for convergence of EIS are satisfied. Using our finite sample bound for EIS, together with the quantification of these conditions, we conclude that such an instance of the EIS method finds an $\varepsilon$-approximate value function of Nash equilibrium in $\widetilde{O}\big(\varepsilon^{-(d+4)}\big)$ steps, where $d$ is the dimension of the state space of the game (cf. Theorem 8). We also establish a minimax lower bound of $\widetilde{\Omega}\big(\varepsilon^{-(d+2)}\big)$ on the number of steps required for learning an $\varepsilon$-approximate value function of Nash equilibrium by any method (cf. Theorem 4). This establishes near-optimality of this instance of EIS.
Related Work. The Markov Decision Processes (MDP) provide a canonical framework to study the single-agent setting. Its natural extension, the Markov Games, provide a canonical framework to study multi-agent settings (Littman, 1994). In this work, we consider an instance of it—turn-based two players or agents with zero-sum rewards. Analogous to learning the optimal policy in MDP setting, here we consider finding the Nash Equilibrium in the setting of Markov Games. There has been a rich literature on existence, uniqueness as well as algorithms for finding the Nash Equilibrium. In what follows, we describe the most relevant literature in that regard.
To start with, Shapley (1953) established, for finite state and action spaces where the game terminates in a finite number of stages with positive probability, the existence of optimal stationary strategies and the uniqueness of the optimal value function. For generic state spaces, the existence of Nash Equilibrium has been established for Markov Games with discounted rewards. In particular, when the state space is a compact metric space, Maitra & Parthasarathy (1970, 1971) and Parthasarathy (1973) show the uniqueness of the value function and the existence of an optimal stationary policy. The same result has been established by Kumar & Shiau (1981) when the state space is a complete, separable metric space. For two-player zero-sum discounted Markov games, the Bellman operator corresponding to the Nash equilibrium is a contraction and hence the value function is unique and there exists a deterministic stationary optimal policy (Szepesvári & Littman, 1996; Hansen et al., 2013). We also note that the existence of Nash equilibrium for a general class of games (stochastic shortest path) is established by Patek (1997), who argues that the optimal value function is unique and can be achieved by mixed stationary strategies.
For computing or finding the optimal value function and policy associated with the Nash equilibrium, two settings have been considered in the literature: (i) when the system model is entirely known, and (ii) when the model is not known but one can sample from it. In the first setting, classical approaches from the MDP literature, such as value/policy iteration, are adapted to find the optimal value function or policy associated with the Nash equilibrium (Patek, 1997; Hansen et al., 2013). In the second setting, which is the one considered here, various approximate dynamic programming algorithms have been proposed (Szepesvári & Littman, 1996; Bowling & Veloso, 2001; Littman, 2001a, b; Hu et al., 1998; Hu & Wellman, 2003; Lagoudakis & Parr, 2002; Perolat et al., 2015; Pérolat et al., 2016). More recently, with the advance of deep reinforcement learning (Mnih et al., 2015; Lillicrap et al., 2016; Schulman et al., 2015, 2017; Yang et al., 2019a), the value function/policy has been approximated by deep neural networks (Silver et al., 2016, 2017a, 2017b).
In terms of theoretical results, there has been work establishing asymptotic convergence to the optimal value function when the state space is finite. For example, Q-learning for MDPs adapted to the setting of two-player zero-sum games converges asymptotically (Szepesvári & Littman, 1996). Non-asymptotic results are available for model-based algorithms developed for Markov games with finite states, including the R-max algorithm (Brafman & Tennenholtz, 2002) and an algorithm that extends the upper confidence reinforcement learning algorithm (Wei et al., 2017). Recent work by Sidford et al. (2019) provides an algorithm that computes an $\varepsilon$-optimal strategy with near-optimal sample complexity for Markov games with finite states. For Markov games where the transition function can be embedded in a given feature space, the work by Jia et al. (2019) analyzes the sample complexity of a Q-learning algorithm. However, non-asymptotic or finite sample analysis for continuous state spaces without special structure, such as that considered in this work, has received less attention in the literature.
Comparison with Prior Work. In this work, we develop the Explore-Improve-Supervise (EIS) policy when the model is unknown but one is able to sample from the underlying model. We study the convergence and sample complexity of our approach. Our goal is to provide a formal study of the general framework of EIS. The overall framework is inspired by AlphaGo Zero and inherits similar components. However, we take an important step towards bridging the gap between sound intuitions and theoretical guarantees, which is valuable for a better understanding of applying or extending this framework with different instantiations. We note that EIS bears certain similarities with another AlphaGo-inspired study (Shah et al., 2019). Both works follow the main idea of coupling improvements with supervised learning. However, there are major differences. Shah et al. (2019) focuses on approximating the value function in deterministic MDPs and only studies a particular instance of the modules. In contrast, we focus on a broader class of algorithms, formulating general principles and studying the resulting guarantees. This poses different challenges and requires generic formulations of the module properties that are both technically precise and practically implementable.
Finally, as mentioned previously, non-asymptotic analysis for continuous state space, considered in this work, is scarce for Markov games. While there are some results for finite states, the bounds are not directly comparable. For example, the complexity in Perolat et al. (2015) depends on some oracle complexities for linear programming and regression.
For the setting with continuous state space, the sample complexity results in Jia et al. (2019) for Q-learning rely on the assumption of linear structure of the transition kernel. The recent work by Yang et al. (2019b) studies the finite-sample performance of minimax deep Q-learning for two-player zero-sum games, where the convergence rate depends on the family of neural networks. We remark that these belong to a different class of algorithms. We also derive a fundamental minimax lower bound on the sample complexity of any method (cf. Theorem 4). The lower bound is of interest in its own right. Moreover, it shows that an instance of our EIS framework has near-optimal dependence on the dimension.
Organization. The remainder of the paper is organized as follows. We formally introduce the framework of Markov Games and Nash equilibrium in Section 2. Section 3 describes a generic Explore-Improve-Supervise (EIS) algorithm. The precise technical properties for the modules of EIS are then stated in Section 4, under which we establish our main results, convergence and sample complexity of EIS, in Section 5. Finally, a concrete instantiation is provided in Section 6, demonstrating the applicability of the generic EIS algorithm. All the proofs are presented in Appendices.
2 Two-Player Markov Games and Nash Equilibrium
We introduce the framework of Markov Games (MGs) (also called Stochastic Games (Shapley, 1953)) with two players and zero-sum rewards. The goal in this setting is to learn the Nash equilibrium.
2.1 Two-player Zero-sum Markov Game
We consider two-player turn-based Markov games like Go and Chess, where players take turns to make decisions. We denote the two players as $P_1$ and $P_2$. Formally, a Markov game can be expressed as a tuple $(\mathcal{S}_1, \mathcal{S}_2, \mathcal{A}_1, \mathcal{A}_2, r, P, \gamma)$, where $\mathcal{S}_1$ and $\mathcal{S}_2$ are the sets of states controlled by $P_1$ and $P_2$ respectively, $\mathcal{A}_1$ and $\mathcal{A}_2$ are the sets of actions the players can take, $r$ represents the reward function, $P$ represents the transition kernel and $\gamma \in (0,1)$ is the discount factor. Specifically, for $i \in \{1, 2\}$, let $\mathcal{A}_i(s) \subseteq \mathcal{A}_i$ be the set of feasible actions for player $P_i$ in a given state $s \in \mathcal{S}_i$. We assume that $\mathcal{S}_1 \cap \mathcal{S}_2 = \emptyset$.¹ (¹This assumption is typical for turn-based games. In particular, one can incorporate the "turn" of the player as part of the state, and thus $P_1$'s state space (i.e., states where it is player 1's turn) is disjoint from that of $P_2$'s turn.)
Let $\mathcal{S} = \mathcal{S}_1 \cup \mathcal{S}_2$. For each state $s \in \mathcal{S}$, let $I(s) \in \{1, 2\}$ indicate the current player to play. At state $s$, upon action $a$ taken by the corresponding player $P_{I(s)}$, player $P_i$ receives a reward $r_i(s, a)$. In zero-sum games, $r_1(s, a) + r_2(s, a) = 0$. Without loss of generality, we let $P_1$ be our reference and use the notation $r(s, a) \equiv r_1(s, a)$ for the definitions of the value functions.
Let $P(\cdot \mid s, a)$ be the distribution of the new state after playing action $a$, in state $s$, by player $P_{I(s)}$. In this paper, we focus on the setting where the state transitions are deterministic. This means that $P(\cdot \mid s, a)$ is supported on a single state $f(s, a)$, where $f(s, a)$ denotes the state reached after taking action $a$ at state $s$.
For each $i \in \{1, 2\}$, let $\pi_i$ be the policy for player $P_i$, where $\pi_i(\cdot \mid s)$ is a probability distribution over $\mathcal{A}_i(s)$. Denote by $\Pi_i$ the set of all stationary policies of player $P_i$, and let $\Pi = \Pi_1 \times \Pi_2$ be the set of all policies for the game. A two-player zero-sum game can be seen as player $P_1$ aiming to maximize the accumulated discounted reward while $P_2$ attempts to minimize it. The value function and Q-function for a zero-sum Markov game can be defined in a manner analogous to the MDP setting:
$$V^{\pi_1, \pi_2}(s) = \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \,\Big|\, s_0 = s\Big], \qquad Q^{\pi_1, \pi_2}(s, a) = \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \,\Big|\, s_0 = s,\ a_0 = a\Big],$$
where $a_t \sim \pi_{I(s_t)}(\cdot \mid s_t)$ and $s_{t+1} = f(s_t, a_t)$. That is, $V^{\pi_1, \pi_2}(s)$ is the expected total discounted reward for $P_1$ if the game starts from state $s$ and players $P_1$ and $P_2$ use the policies $\pi_1$ and $\pi_2$, respectively. The interpretation of the Q-value is similar.
To simplify the notation, we assume that the feasible action sets are identical across states, i.e., $\mathcal{A}_i(s) \equiv \mathcal{A}$, where $\mathcal{A}$ is a finite set. We consider $\mathcal{S}$ to be a compact subset of $\mathbb{R}^d$ ($d \geq 1$). The rewards are independent random variables taking values in $[-R_{\max}, R_{\max}]$ for some $R_{\max} > 0$. Define $V_{\max} := R_{\max}/(1-\gamma)$. It follows that the absolute value of the value function and Q-function of any policy is bounded by $V_{\max}$.
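For completeness, the bound follows from the geometric series (with $s_0 = s$ and any action sequence):
$$\big|V^{\pi_1, \pi_2}(s)\big| \;=\; \Big|\mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)\Big]\Big| \;\leq\; \sum_{t=0}^{\infty} \gamma^t R_{\max} \;=\; \frac{R_{\max}}{1-\gamma} \;=\; V_{\max},$$
and similarly for the Q-function.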
Regarding Deterministic Transitions. Let us clarify this assumption. In fact, our approach and the main results of the EIS framework (i.e., Sections 4 and 5) apply to general non-deterministic cases as well. However, the example in Section 6 considers the deterministic case. In particular, the improvement module is instantiated by a variant of Monte Carlo Tree Search, for which a clean non-asymptotic analysis has so far been established only for the deterministic case (Shah et al., 2019). To facilitate a coherent exposition, we focus on deterministic transitions here. Indeed, many games, such as Go and Chess, are deterministic. Additionally, note that one could instantiate our EIS framework with other methods for the non-deterministic case, for instance by adapting the sparse sampling oracle (Kearns et al., 2002) as the improvement module, to obtain a similar analysis. As a proof of concept, we provide empirical results in Appendix J on a non-deterministic game with the sparse sampling oracle.
2.2 Nash Equilibrium
$$\overline{V}(s) := \min_{\pi_2 \in \Pi_2} \max_{\pi_1 \in \Pi_1} V^{\pi_1, \pi_2}(s), \qquad (1)$$
$$\underline{V}(s) := \max_{\pi_1 \in \Pi_1} \min_{\pi_2 \in \Pi_2} V^{\pi_1, \pi_2}(s). \qquad (2)$$
For the discounted two-player zero-sum Markov games considered here, the upper value $\overline{V}$ and the lower value $\underline{V}$ coincide, and the common quantity is the value of the game:
$$\overline{V}(s) = \underline{V}(s), \qquad \forall\, s \in \mathcal{S}. \qquad (3)$$
Definition 1 (Optimal Counter Policy).
Given a policy $\pi_2 \in \Pi_2$ for $P_2$, a policy $\pi_1^{\dagger} \in \Pi_1$ for $P_1$ is said to be an optimal counter-policy against $\pi_2$ if and only if, for every $s \in \mathcal{S}$, we have $V^{\pi_1^{\dagger}, \pi_2}(s) \geq V^{\pi_1, \pi_2}(s)$ for all $\pi_1 \in \Pi_1$. Similarly, a policy $\pi_2^{\dagger} \in \Pi_2$ for $P_2$ is said to be an optimal counter-policy against a given policy $\pi_1 \in \Pi_1$ for $P_1$ if and only if, for every $s \in \mathcal{S}$, $V^{\pi_1, \pi_2^{\dagger}}(s) \leq V^{\pi_1, \pi_2}(s)$ for all $\pi_2 \in \Pi_2$.
In a two-player zero-sum game, it has been shown that the pairs of mutually optimal counter-policies coincide with the Nash equilibria of the game (Maitra & Parthasarathy, 1970; Parthasarathy, 1973; Kumar & Shiau, 1981). In particular, a pair of policies $(\pi_1^*, \pi_2^*)$ is called an equilibrium solution of the game if $\pi_1^*$ is an optimal counter-policy against $\pi_2^*$ and $\pi_2^*$ is an optimal counter-policy against $\pi_1^*$. The value function of the optimal policy, $V^{\pi_1^*, \pi_2^*}$, is the unique fixed point of a $\gamma$-contraction operator. In the sequel, we simply refer to the strategy $\pi^* = (\pi_1^*, \pi_2^*)$ as the optimal policy. Finally, we use the concise notation $V^*$ and $Q^*$ to denote the optimal value function and the optimal Q-value, respectively, i.e., $V^* := V^{\pi_1^*, \pi_2^*}$ and $Q^* := Q^{\pi_1^*, \pi_2^*}$.
3 EIS: Explore-Improve-Supervise
We describe the Explore-Improve-Supervise (EIS) algorithm for learning the optimal value function $V^*$ and the optimal policy $\pi^*$. The algorithm consists of three separate, but intertwined, modules: exploration, improvement and supervised learning. Below is a brief summary of these modules. The precise, formal description of the properties desired from these modules is stated in Section 4, which will lead to convergence and correctness of the EIS algorithm as stated in Theorem 2. Section 6 provides a concrete example of EIS modules satisfying the properties stated in Section 4.
Exploration Module. To extract meaningful information about the entire game, sufficient exploration is required so that enough representative states are visited. This is commonly achieved by an appropriate exploration policy, such as an $\varepsilon$-greedy policy or a Boltzmann policy. We require the existence of an exploration module guaranteeing sufficient exploration.
Improvement Module. For the overall learning to make progress, the improvement module improves the existing estimates of the optimal solution. In particular, given the current estimates $\hat{V}$ for $V^*$ and $\hat{\pi}$ for $\pi^*$, for a state $s$, a query of the improvement module produces better estimates $\tilde{V}(s)$ and $\tilde{\pi}(\cdot \mid s)$ that are closer to the optimal $V^*(s)$ and $\pi^*(\cdot \mid s)$.
Supervised Learning Module. The previous two modules can be collectively viewed as a data generation process: the exploration module samples sufficiently representative states, while a query of the improvement module provides improved estimates of the optimal value and policy at those states. With these as training data, the supervised learning module learns and generalizes the improvement from the training data to the entire state space. Subsequently, the trained supervised learning module produces better estimates of $V^*$ and $\pi^*$ everywhere.
Combining these together, the three modules naturally lead to the following iterative algorithm, whose pseudo-code is provided in Algorithm 1. Initially, the algorithm starts with an arbitrary model for the value function and policy. In each iteration $t \geq 1$, it performs two steps:
Step 1. Data Generation. Given the current model $f_{t-1} = (\hat{V}_{t-1}, \hat{\pi}_{t-1})$: for the current state $s_i$, query the improvement module to obtain better estimates $\tilde{V}(s_i)$ and $\tilde{\pi}(\cdot \mid s_i)$ than the current estimates $\hat{V}_{t-1}(s_i)$ and $\hat{\pi}_{t-1}(\cdot \mid s_i)$; then query the exploration module to arrive at the next state $s_{i+1}$; repeat the above process to obtain a training set of $n$ samples, $\{(s_i, \tilde{V}(s_i), \tilde{\pi}(\cdot \mid s_i))\}_{i=1}^{n}$.
Step 2. Supervised Learning. Given the improved estimates $\{(s_i, \tilde{V}(s_i), \tilde{\pi}(\cdot \mid s_i))\}_{i=1}^{n}$, use the supervised learning module to build a new model $f_t = (\hat{V}_t, \hat{\pi}_t)$.
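For readability, the iteration can be summarized by the following minimal Python sketch. The interfaces and names (`explore`, `improve`, `supervise`, `Model`) are illustrative placeholders rather than the paper's implementation; any instantiation satisfying the properties in Section 4 can be plugged in.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

State = Tuple[float, ...]   # a point in the continuous state space S
Policy = List[float]        # a distribution over the finite action set A

@dataclass
class Model:
    """Current estimates (V_hat, pi_hat) produced by the supervised learner."""
    value: Callable[[State], float]
    policy: Callable[[State], Policy]

def eis(initial_model: Model,
        explore: Callable[[State], State],                                # exploration module
        improve: Callable[[Model, State], Tuple[float, Policy]],          # improvement module
        supervise: Callable[[List[Tuple[State, float, Policy]]], Model],  # supervised learning
        s0: State,
        num_iterations: int,
        samples_per_iteration: int) -> Model:
    model, s = initial_model, s0
    for _ in range(num_iterations):
        # Step 1: data generation -- interleave improvement queries with exploration.
        data = []
        for _ in range(samples_per_iteration):
            v_improved, pi_improved = improve(model, s)   # better estimates at state s
            data.append((s, v_improved, pi_improved))
            s = explore(s)                                # move to the next state
        # Step 2: supervised learning -- generalize the improved estimates to all of S.
        model = supervise(data)
    return model
```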
Intuitively, the iterative algorithm keeps improving the estimates after each iteration, and eventually converges to the optimal solution. The focus of this paper is to formally understand under what conditions on the exploration, improvement and supervised learning modules the algorithm works. Of course, the proof is in the pudding: we provide examples of such modules in Section 6.
4 Properties of Modules
In this section we formally state the desired properties of each of the three modules of EIS. With these properties, we establish convergence and correctness of the EIS algorithm in Section 5. We remark that the properties are not imposed merely for the ease of technical analysis: examples satisfying them are provided in Section 6.
4.1 Improvement Module
This module improves both the value function and the policy. The value function is real-valued, whereas the policy at each given state can be viewed as a probability distribution over all possible actions. This requires a careful choice of metric for quantifying improvement. Let $\tilde{V}(s)$ and $\tilde{\pi}(\cdot \mid s)$ be the estimates output by the improvement module in the $t$-th iteration of EIS. Improvement of the value function means that $|\tilde{V}(s) - V^*(s)|$ is smaller than the error of the current estimate. Improvement of the policy is measured by the KL divergence between $\tilde{\pi}(\cdot \mid s)$ and a suitable target optimal policy. Here some care is needed, as the KL divergence becomes infinite if the supports of the two distributions mismatch.
Note that the optimal policy $\pi^*$ only assigns positive probability to the optimal actions. On the other hand, there is no guarantee that an estimated policy always has full support on $\mathcal{A}$. To overcome these challenges, we instead measure the KL divergence with respect to an alternative "optimal policy" that is guaranteed to have full support on $\mathcal{A}$. This naturally leads to the optimal Boltzmann policy: given a temperature $\tau > 0$, the optimal Boltzmann policy is given by
$$\pi^*_{\tau}(a \mid s) = \frac{\exp\big(Q^*(s, a)/\tau\big)}{\sum_{a' \in \mathcal{A}} \exp\big(Q^*(s, a')/\tau\big)}, \qquad \forall\, a \in \mathcal{A}. \qquad (4)$$
If $I(s)$ is player $P_2$, use $-Q^*(s, a)$ instead in the above equation to construct the Boltzmann policy (recall that player $P_1$ is set to be our reference in Section 2). By definition, the KL divergence $D_{\mathrm{KL}}\big(\tilde{\pi}(\cdot \mid s) \,\|\, \pi^*_{\tau}(\cdot \mid s)\big)$ is guaranteed to be finite for any estimate $\tilde{\pi}$. Furthermore, $\pi^*_{\tau}$ converges to $\pi^*$ as $\tau \to 0$. Therefore, we can use the KL divergence with a small enough $\tau$ to measure the improvement of the estimates.
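As a small illustration, the Boltzmann target of Eq. (4) can be computed from Q-value estimates as follows; the sign flip for the minimizing player mirrors the remark above, and the function name is our own.

```python
import numpy as np

def boltzmann_policy(q_values: np.ndarray, temperature: float, is_max_player: bool) -> np.ndarray:
    """Softmax over Q-values with temperature tau; the minimizer uses -Q (cf. Eq. (4))."""
    signed_q = q_values if is_max_player else -q_values
    logits = signed_q / temperature
    logits -= logits.max()              # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

# Example: as tau -> 0 the distribution concentrates on the optimal action.
q = np.array([0.1, 0.5, 0.3])
print(boltzmann_policy(q, temperature=0.05, is_max_player=True))  # approximately [0, 1, 0]
```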
Finally, it makes sense to take into account the number of samples (i.e., observed state transitions) required by the module to improve the policy and value function. We now formally lay down the following property for the improvement module.
Property 1.
(Improvement Property) Suppose the current model $f = (\hat{V}, \hat{\pi})$ (potentially random) has estimation errors $\varepsilon_v$ and $\varepsilon_p$ for the value and policy estimates, respectively, i.e.,
$$\sup_{s \in \mathcal{S}} \mathbb{E}\big[\big|\hat{V}(s) - V^*(s)\big|\big] \leq \varepsilon_v, \qquad \sup_{s \in \mathcal{S}} \mathbb{E}\big[D_{\mathrm{KL}}\big(\hat{\pi}(\cdot \mid s) \,\|\, \pi^*_{\tau}(\cdot \mid s)\big)\big] \leq \varepsilon_p,$$
where the expectations are taken with respect to the randomness of the model $f$.
Fix any state $s \in \mathcal{S}$ and query the improvement module at $s$. Let the temperature be $\tau$, and let the improvement factors be $\alpha_v \in (0, 1)$ and $\alpha_p \in (0, 1)$. Then, there exists a function $m(\alpha_v, \alpha_p)$ such that, if $m(\alpha_v, \alpha_p)$ samples are used, the new estimates $\tilde{V}(s)$ and $\tilde{\pi}(\cdot \mid s)$ satisfy
$$\mathbb{E}\big[\big|\tilde{V}(s) - V^*(s)\big|\big] \leq \alpha_v\, \varepsilon_v, \qquad \mathbb{E}\big[D_{\mathrm{KL}}\big(\tilde{\pi}(\cdot \mid s) \,\|\, \pi^*_{\tau}(\cdot \mid s)\big)\big] \leq \alpha_p\, \varepsilon_p,$$
where the expectations are with respect to the randomness in the model $f$ and the improvement module.
Property 1 allows for a randomized improvement module, but requires that on average, the errors for the value and policy estimates should strictly shrink.
4.2 Supervised Learning Module
To direct the model update in an improving manner, the supervised learning step (line 10 of Algorithm 1) should be able to learn from the training data, $\{(s_i, \tilde{V}(s_i))\}$ and $\{(s_i, \tilde{\pi}(\cdot \mid s_i))\}$, and generalize to unseen states while preserving the same order of error as the training data. Generically speaking, generalization requires two conditions: (1) sufficiently many training data that are "representative" of the underlying state space; (2) a model that is expressive enough to capture the characteristics of the function to be learned.
Before specifying the generalization property, let us provide a few remarks on the above conditions. Condition (1) is typically ensured by using an effective exploration module. Recall that the state space is continuous. The exploration module should be capable of navigating the space until sufficiently many different states are visited. Intuitively, these finitely many states should properly cover the entire space, i.e., they are representative of the entire space, so that learning from them provides enough information about all other states. Formally, this means that given the current estimation errors $\varepsilon_v$ and $\varepsilon_p$ for the optimal value and policy, there exists a sufficiently large set of training states such that supervised learning applied to those training data generalizes to the entire state space with the same order of accuracy. The precise definition of representative states may depend on the particular supervised learning algorithm.
Regarding condition (2), the generalization performance of traditional models has been well studied in classical statistical learning theory. More recently, deep neural networks have exhibited superior empirical generalization ability, although a complete rigorous proof seems beyond the reach of existing techniques. Our goal is to seek the general principle underlying the supervised learning step and, as such, we do not limit ourselves to specific models: the learning model could be a parametric model trained by minimizing the empirical squared loss and cross-entropy loss, or it could be a non-parametric model such as nearest neighbor regression. With the above conditions in mind, we state the following general property for the supervised learning module:
Property 2.
(Generalization Property) Let the temperature $\tau$ and estimation errors $\varepsilon_v$ and $\varepsilon_p$ be given. There exists at least one set of finitely many states, denoted by $S_{\mathrm{rep}} \subset \mathcal{S}$, with size $|S_{\mathrm{rep}}| < \infty$, so that the following generalization bound holds:
Suppose that a training dataset $\{(s_i, \tilde{V}(s_i), \tilde{\pi}(\cdot \mid s_i))\}_{i=1}^{n}$ satisfies $\{s_1, \dots, s_n\} \supseteq S_{\mathrm{rep}}$ and the following error guarantees:
$$\mathbb{E}\big[\big|\tilde{V}(s_i) - V^*(s_i)\big|\big] \leq \varepsilon_v, \qquad \mathbb{E}\big[D_{\mathrm{KL}}\big(\tilde{\pi}(\cdot \mid s_i) \,\|\, \pi^*_{\tau}(\cdot \mid s_i)\big)\big] \leq \varepsilon_p, \qquad i = 1, \dots, n,$$
where the expectation is taken with respect to the randomness of $\tilde{V}$ and $\tilde{\pi}$. Then, there exist non-negative universal constants $c_v$ and $c_p$ such that, after querying the supervised learning module, the new estimates $\hat{V}$ and $\hat{\pi}$ satisfy
$$\sup_{s \in \mathcal{S}} \mathbb{E}\big[\big|\hat{V}(s) - V^*(s)\big|\big] \leq c_v\, \varepsilon_v, \qquad \sup_{s \in \mathcal{S}} \mathbb{E}\big[D_{\mathrm{KL}}\big(\hat{\pi}(\cdot \mid s) \,\|\, \pi^*_{\tau}(\cdot \mid s)\big)\big] \leq c_p\, \varepsilon_p.$$
4.3 Exploration Module
With the above development, it is now straightforward to identify the desired property of the exploration module. In particular, as part of the data generation step, it should be capable of exploring the space so that a set of representative states is visited. Consequently, the supervised learning module can then leverage the training data to generalize. Formally, let $\mathcal{R}(\tau, \varepsilon_v, \varepsilon_p)$ be the set of all possible representative sets that satisfy the Generalization Property:
$$\mathcal{R}(\tau, \varepsilon_v, \varepsilon_p) := \big\{\, S_{\mathrm{rep}} \subset \mathcal{S} \;:\; S_{\mathrm{rep}} \text{ satisfies the Generalization Property with } (\tau, \varepsilon_v, \varepsilon_p) \,\big\}.$$
Denote by $S^{(T)} = \{s_0, s_1, \dots, s_T\}$ the set of states explored by querying the exploration module up to time $T$, with $s_0$ being the initial state and $s_{t+1}$ obtained by querying the exploration module at $s_t$ (cf. line 7 of Algorithm 1). We now state the exploration property, which stipulates that, starting from an arbitrary state $s_0$, the explored states should contain one of the representative sets in $\mathcal{R}(\tau, \varepsilon_v, \varepsilon_p)$ within a finite number of steps.
Property 3.
(Exploration Property) Given the temperature $\tau$ and estimation errors $\varepsilon_v$ and $\varepsilon_p$ for the value and policy, define
$$T(s_0) := \min\big\{\, T \geq 0 \;:\; \exists\, S_{\mathrm{rep}} \in \mathcal{R}(\tau, \varepsilon_v, \varepsilon_p) \text{ such that } S_{\mathrm{rep}} \subseteq S^{(T)} \,\big\}.$$
Then, the exploration module satisfies that, for every initial state $s_0 \in \mathcal{S}$,
$$\mathbb{E}\big[T(s_0)\big] \leq h(\tau, \varepsilon_v, \varepsilon_p) < \infty,$$
for some $h(\tau, \varepsilon_v, \varepsilon_p)$ independent of $s_0$. The above expectation is taken with respect to the randomness in the exploration module and the environment (i.e., the state transitions).
In the sequel, when the context is clear or the initial state does not matter, we usually drop the dependence on $s_0$ to simplify the notation, i.e., we write $T$ for $T(s_0)$.
5 Main Results: Convergence Guarantees and Sample Complexity
5.1 Convergence Guarantees
As the main result of this paper, we establish convergence of the EIS algorithm under the three desired properties given in Section 4, and quantify the corresponding finite sample complexity. We also provide an algorithm-independent minimax lower bound; in Section 6 we introduce an instance of EIS that essentially matches this lower bound.
Theorem 2.
Theorem 2 implies that the model learned by the generic EIS algorithm converges to the optimal value function $V^*$ and the optimal Boltzmann policy $\pi^*_{\tau}$ exponentially fast in the number of iterations. In particular, after
$O\big(\log(1/\varepsilon)\big)$
iterations, we obtain estimates of both $V^*$ and $\pi^*_{\tau}$ that are within $\varepsilon$ estimation error. We note that with a sufficiently small temperature, $\pi^*_{\tau}$ is close to the optimal policy $\pi^*$. Therefore, the learned model can be made close to $(V^*, \pi^*)$ for a large number of iterations.
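To make the iteration count explicit, here is a short sketch of the geometric-decay calculation (the symbols follow our notation; $\kappa \in (0, 1)$ denotes the per-iteration contraction factor, e.g., the product of the improvement factor and the generalization constant):
$$\varepsilon^{(t)} \;\leq\; \kappa\, \varepsilon^{(t-1)} \;\leq\; \kappa^{t}\, \varepsilon^{(0)} \quad\Longrightarrow\quad \varepsilon^{(t)} \leq \varepsilon \ \text{ as soon as }\ t \;\geq\; \frac{\log\big(\varepsilon^{(0)}/\varepsilon\big)}{\log(1/\kappa)} \;=\; O\big(\log(1/\varepsilon)\big).$$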
5.2 Sample Complexity
We can also characterize the sample complexity of the EIS algorithm. Recall that the sample complexity is defined as the total number of state transitions required for the algorithm to learn $\varepsilon$-approximate value/policy functions. The sample complexity of Algorithm 1 comes from two parts: the improvement module and the exploration module. Recall that the improvement module requires $m(\alpha_v, \alpha_p)$ samples for each call (cf. Property 1). The sample complexity of the exploration module is proportional to the exploration time $T$, which satisfies $\mathbb{E}[T] \leq h(\tau, \varepsilon_v, \varepsilon_p)$ (cf. Property 3). The following proposition bounds the sample complexity in terms of these quantities.
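Schematically, the pieces compose as follows (a sketch in our notation; the precise statement, including the failure probability, is Proposition 3):
$$\text{total number of transitions} \;\lesssim\; \underbrace{K}_{\text{iterations}} \;\times\; \underbrace{\mathbb{E}[T]}_{\text{exploration steps per iteration}} \;\times\; \underbrace{m(\alpha_v, \alpha_p)}_{\text{samples per improvement query}}, \qquad K = O\big(\log(1/\varepsilon)\big).$$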
Proposition 3.
In Section 6, we provide a concrete instance of EIS that finds an $\varepsilon$-approximate value function and policy of Nash equilibrium with $\widetilde{O}\big(\varepsilon^{-(d+4)}\big)$ transitions.
5.3 A Generic Lower Bound
To understand how good the above sample complexity upper bound is, we establish a lower bound that holds for any algorithm under any sampling policy. In particular, we leverage the minimax lower bound for the problem of non-parametric regression (Tsybakov, 2009; Stone, 1982) to establish the lower bound, as stated in the following theorem.
Theorem 4.
Given an algorithm, let $\hat{V}_n$ be its estimate of $V^*$ after $n$ samples of transitions for the given Markov game. Then, for each $\varepsilon \in (0, 1)$, there exists a two-player zero-sum Markov game such that, in order to achieve $\sup_{s \in \mathcal{S}} \mathbb{E}\big[\big|\hat{V}_n(s) - V^*(s)\big|\big] \leq \varepsilon$, it must be that
$$n \;\geq\; C\, \varepsilon^{-(d+2)},$$
where $C > 0$ is an algorithm-independent constant.
6 Implementation: A Concrete Instantiation of the Key Modules
In this section, we demonstrate the applicability of the generic EIS algorithm by giving a concrete instantiation. Specifically, we will use a variant of Monte Carlo Tree Search (MCTS) as the improvement module, nearest neighbor regression as the supervised learning module, and random sampling as the exploration module. We prove that all properties in Section 4 are satisfied. This shows that these properties are reasonable and hence gives a strong support for the generic recipe developed in this paper. Due to space limit, we provide high-level discussions here with informal technical results, and defer precise statements to Appendix E.
Improvement Module: MCTS. Recall that the improvement module should be able to provide improved estimates of both the value and policy functions at the queried state $s$. Since both the value and the policy are related to the $Q$-function, one approach to improving the estimates is to first obtain better estimates of $Q^*$ and then construct the improved value and policy estimates from them. We take this approach in this example and use MCTS to obtain the estimates of $Q^*$ (see Algorithm 2 in Appendix E). We assume the existence of a generative model (i.e., a simulator). The following theorem states the property of this specific improvement module, which directly implies the desired improvement property, i.e., Property 1.
Theorem 5 (Informal Statement, Theorem 11, Appendix E).
Suppose that the state transitions are deterministic, and that the value estimate $\hat{V}$ of the current model satisfies $\sup_{s \in \mathcal{S}} \mathbb{E}\big[|\hat{V}(s) - V^*(s)|\big] \leq \varepsilon_v$. Then, with appropriately chosen parameters for MCTS, for each query state $s$, the output $\big(\tilde{V}(s), \tilde{\pi}(\cdot \mid s)\big)$ satisfies the guarantees of the Improvement Property (Property 1) with improvement factors $\alpha_v$ and $\alpha_p$.
The above is achieved with a per-query sample complexity that is polynomial in $1/\varepsilon_v$; the precise form is given in Theorem 11 in Appendix E.
Supervised Learning Module: Nearest Neighbor Regression. We employ a nearest neighbor algorithm to learn the optimal value function and policy. Intuitively, if the optimal value function and the optimal Boltzmann policy are Lipschitz in the state, then this algorithm generalizes provided there are sufficiently many (say $n_0$) training data points around each state in the state space $\mathcal{S}$. Quantitatively, consider covering $\mathcal{S}$ with balls of diameter $\rho$. We call the training data $(n_0, \rho)$-representative if each covering ball contains at least $n_0$ training data points. Here, $n_0$ and $\rho$ depend on the temperature and the estimation errors of the training data.
Proposition 6 (Informal Statement, Proposition 12, Appendix E).
Under appropriate regularity conditions, if the training data is $(n_0, \rho)$-representative with respect to appropriately chosen $n_0$ and $\rho$, then nearest neighbor supervised learning satisfies Property 2. In particular, given training data with estimation errors $\varepsilon_v$ and $\varepsilon_p$, we have
$$\sup_{s \in \mathcal{S}} \mathbb{E}\big[\big|\hat{V}(s) - V^*(s)\big|\big] \leq c_v\, \varepsilon_v, \qquad \sup_{s \in \mathcal{S}} \mathbb{E}\big[D_{\mathrm{KL}}\big(\hat{\pi}(\cdot \mid s) \,\|\, \pi^*_{\tau}(\cdot \mid s)\big)\big] \leq c_p\, \varepsilon_p,$$
where the constants $c_v$ and $c_p$ are independent of $n$, the size of the training data.
As discussed in Appendix E, the number of representative data points required for generalization depends on the geometry of the state space. For example, if the state space is the unit ball in $d$ dimensions, then for a generalization error scaling with $\varepsilon$ we require a number of representative data points scaling as $(1/\varepsilon)^d$, up to constants and logarithmic factors.
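A minimal sketch of such a nearest-neighbor value fit, assuming a Euclidean state space and averaging training labels within a ball of diameter $\rho$ around the query (falling back to the single nearest neighbor when that ball is empty); the estimator actually analyzed in Appendix E may differ in its details:

```python
import numpy as np

def fit_nearest_neighbor(train_states: np.ndarray,   # shape (n, d)
                         train_values: np.ndarray,   # shape (n,)
                         rho: float):
    """Return V_hat(s): the average label over training states within distance rho of s."""
    def v_hat(s: np.ndarray) -> float:
        dists = np.linalg.norm(train_states - s, axis=1)
        mask = dists <= rho
        if mask.any():
            return float(train_values[mask].mean())    # average over the rho-neighborhood
        return float(train_values[np.argmin(dists)])   # fallback: single nearest neighbor
    return v_hat

# Example usage on a 2-D state space with a smooth (Lipschitz) surrogate target.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1]
V_hat = fit_nearest_neighbor(X, y, rho=0.1)
print(V_hat(np.array([0.5, 0.5])))
```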
Exploration Module: Random Sampling Policy. In the above supervised learning module, the sampled states for nearest neighbor regression should be $(n_0, \rho)$-representative. In other words, to satisfy the exploration property, the exploration module must visit a set of $(n_0, \rho)$-representative states within a finite expected number of steps. We show that a uniformly random sampling policy achieves this. Let $N(\rho)$ be the $\rho$-covering number of the compact state space $\mathcal{S}$.
Proposition 7 (Informal Statement, Proposition 13, Appendix E).
Under appropriate regularity conditions, with $\rho$ and $n_0$ chosen according to the desired estimation errors $\varepsilon_v$ and $\varepsilon_p$ for the value and policy, the expected number of steps to obtain a set of $(n_0, \rho)$-representative states under the random sampling policy is upper bounded by a finite quantity depending only on $N(\rho)$ and $n_0$ (see Proposition 13 in Appendix E for the explicit bound).
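For intuition, the following illustrative simulation (our own, not part of the paper's analysis) samples the unit square uniformly and counts the steps until every cell of a $\rho$-grid holds at least $n_0$ points; the grid cells play the role of the covering balls:

```python
import numpy as np

def steps_until_representative(rho: float, n_per_cell: int, d: int = 2, seed: int = 0) -> int:
    """Uniformly sample [0,1]^d until every rho-cell holds at least n_per_cell samples."""
    rng = np.random.default_rng(seed)
    cells_per_axis = int(np.ceil(1.0 / rho))
    counts = np.zeros((cells_per_axis,) * d, dtype=int)
    steps = 0
    while counts.min() < n_per_cell:
        s = rng.uniform(size=d)
        idx = tuple(np.minimum((s / rho).astype(int), cells_per_axis - 1))
        counts[idx] += 1
        steps += 1
    return steps

# Grows roughly like (number of cells) * (n_per_cell + log number of cells).
print(steps_until_representative(rho=0.2, n_per_cell=5))
```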
Convergence Guarantees and Sample Complexity of the Instance. For this instance of EIS, we have shown that each module satisfies the desired properties. Therefore, the convergence result stated in Theorem 2 holds for this specific instance. Below we make this result explicit, providing concrete bounds on the estimation errors and sample complexity. In the following, the s denote appropriate constants. Please refer to Appendix E for details.
Theorem 8 (Informal Statement, Theorem 14, Appendix E).
For a given $\varepsilon > 0$, there exist appropriately chosen parameters for this instance such that:
1. The output $(\hat{V}_t, \hat{\pi}_t)$ at the end of the $t$-th iteration has value and policy estimation errors that decay geometrically in $t$.
2. With probability at least $1 - \delta$, the above is achieved with a sample complexity that is polynomial in $1/\varepsilon$ and $\log(1/\delta)$; the explicit bound is given in Theorem 14 in Appendix E.
3. In particular, if $\mathcal{S}$ is a unit volume hypercube in $\mathbb{R}^d$, then the total sample complexity to achieve an $\varepsilon$-error value function and policy is $\widetilde{O}\big(\varepsilon^{-(d+4)}\big)$.
Theorem 8 states that for a unit hypercube, the sample complexity of this instance of EIS scales as $\widetilde{O}\big(\varepsilon^{-(d+4)}\big)$ (omitting logarithmic factors). Note that the minimax lower bound in Theorem 4 scales as $\varepsilon^{-(d+2)}$. Hence, in terms of the dependence on the dimension, the instance considered here is nearly optimal. We note that the sample complexity comes from two parts: MCTS contributes a factor scaling as $\varepsilon^{-4}$ due to simulating the search tree, while nearest neighbor regression requires a number of training states scaling as $\varepsilon^{-d}$ due to the need for sufficiently many good neighbors. Obtaining tighter bounds with a more powerful improvement module or supervised learning module, such as neural networks, is an interesting future avenue.
7 Conclusion
In this paper, we take theoretical steps towards understanding reinforcement learning for zero-sum turn-based Markov games. We develop the Explore-Improve-Supervise (EIS) method with three intuitive modules intertwined carefully. Such an abstraction of three key modules allows us to isolate the fundamental principles from the implementation details. Importantly, we identify conditions for successfully finding the optimal solutions, backed by a concrete instance satisfying those conditions. Overall, the abstraction and the generic properties developed in this paper could serve as some guidelines, with the potential of finding broader applications with different instantiations. Finally, it would be interesting to extend this framework to general Markov games with simultaneous moves. We believe the generic modeling techniques in Section 4 could be applied, but a key challenge is to develop an improvement module with rigorous non-asymptotic guarantees that satisfies the desired property. We believe that addressing this challenge and formally establishing the framework is a fruitful future direction.
References
- Bowling & Veloso (2001) Bowling, M. and Veloso, M. Rational and convergent learning in stochastic games. In International joint conference on artificial intelligence, volume 17, pp. 1021–1026. Lawrence Erlbaum Associates Ltd, 2001.
- Brafman & Tennenholtz (2002) Brafman, R. I. and Tennenholtz, M. R-max-a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3(Oct):213–231, 2002.
- Hansen et al. (2013) Hansen, T. D., Miltersen, P. B., and Zwick, U. Strategy iteration is strongly polynomial for 2-player turn-based stochastic games with a constant discount factor. J. ACM, 60(1):1:1–1:16, February 2013. ISSN 0004-5411.
- Hu & Wellman (2003) Hu, J. and Wellman, M. P. Nash q-learning for general-sum stochastic games. Journal of machine learning research, 4(Nov):1039–1069, 2003.
- Hu et al. (1998) Hu, J., Wellman, M. P., et al. Multiagent reinforcement learning: theoretical framework and an algorithm. In ICML, volume 98, pp. 242–250. Citeseer, 1998.
- Jia et al. (2019) Jia, Z., Yang, L. F., and Wang, M. Feature-based q-learning for two-player stochastic games, 2019.
- Kaufmann & Koolen (2017) Kaufmann, E. and Koolen, W. M. Monte-carlo tree search by best arm identification. In Advances in Neural Information Processing Systems, pp. 4897–4906, 2017.
- Kearns et al. (2002) Kearns, M., Mansour, Y., and Ng, A. Y. A sparse sampling algorithm for near-optimal planning in large markov decision processes. Machine learning, 49(2-3):193–208, 2002.
- Kocsis et al. (2006) Kocsis, L., Szepesvári, C., and Willemson, J. Improved Monte-Carlo search. Univ. Tartu, Estonia, Tech. Rep, 2006.
- Kumar & Shiau (1981) Kumar, P. R. and Shiau, T.-H. Existence of value and randomized strategies in zero-sum discrete-time stochastic dynamic games. SIAM Journal on Control and Optimization, 19(5):617–634, 1981.
- Lagoudakis & Parr (2002) Lagoudakis, M. G. and Parr, R. Value function approximation in zero-sum markov games. In Proceedings of the Eighteenth conference on Uncertainty in artificial intelligence, pp. 283–292. Morgan Kaufmann Publishers Inc., 2002.
- Lillicrap et al. (2016) Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., and Wierstra, D. Continuous control with deep reinforcement learning. 2016.
- Littman (1994) Littman, M. L. Markov games as a framework for multi-agent reinforcement learning. In Machine Learning Proceedings 1994, pp. 157–163. Elsevier, 1994.
- Littman (2001a) Littman, M. L. Friend-or-foe q-learning in general-sum games. In ICML, volume 1, pp. 322–328, 2001a.
- Littman (2001b) Littman, M. L. Value-function reinforcement learning in markov games. Cognitive Systems Research, 2(1):55–66, 2001b.
- Maitra & Parthasarathy (1970) Maitra, A. and Parthasarathy, T. On stochastic games. Journal of Optimization Theory and Applications, 5(4):289–300, 1970.
- Maitra & Parthasarathy (1971) Maitra, A. and Parthasarathy, T. On stochastic games, ii. Journal of Optimization Theory and Applications, 8(2):154–160, 1971.
- Mnih et al. (2015) Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015.
- Parthasarathy (1973) Parthasarathy, T. Discounted, positive, and noncooperative stochastic games. International Journal of Game Theory, 2(1):25–37, 1973.
- Patek (1997) Patek, S. D. Stochastic Shortest Path Games: Theory and Algorithms. PhD dissertation, Massachusetts Institute of Technology, 1997.
- Perolat et al. (2015) Perolat, J., Scherrer, B., Piot, B., and Pietquin, O. Approximate dynamic programming for two-player zero-sum markov games. In Proceedings of the 32nd International Conference on Machine Learning, pp. 1321–1329, 2015.
- Pérolat et al. (2016) Pérolat, J., Piot, B., Geist, M., Scherrer, B., and Pietquin, O. Softened approximate policy iteration for markov games. In ICML 2016-33rd International Conference on Machine Learning, 2016.
- Schulman et al. (2015) Schulman, J., Levine, S., Abbeel, P., Jordan, M., and Moritz, P. Trust region policy optimization. In International Conference on Machine Learning, pp. 1889–1897, 2015.
- Schulman et al. (2017) Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
- Shah & Xie (2018) Shah, D. and Xie, Q. Q-learning with nearest neighbors. In Advances in Neural Information Processing Systems 31, pp. 3115–3125. 2018.
- Shah et al. (2019) Shah, D., Xie, Q., and Xu, Z. On reinforcement learning using Monte Carlo tree search with supervised learning: Non-asymptotic analysis. arXiv preprint arXiv:1902.05213, 2019.
- Shapley (1953) Shapley, L. S. Stochastic games. Proceedings of the National Academy of Sciences, 39(10):1095–1100, 1953.
- Sidford et al. (2019) Sidford, A., Wang, M., Yang, L. F., and Ye, Y. Solving discounted stochastic two-player games with near-optimal time and sample complexity, 2019.
- Silver et al. (2016) Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
- Silver et al. (2017a) Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., et al. Mastering chess and Shogi by self-play with a general reinforcement learning algorithm. arXiv preprint arXiv:1712.01815, 2017a.
- Silver et al. (2017b) Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., et al. Mastering the game of Go without human knowledge. Nature, 550(7676):354, 2017b.
- Stone (1982) Stone, C. J. Optimal global rates of convergence for nonparametric regression. The Annals of Statistics, pp. 1040–1053, 1982.
- Szepesvári & Littman (1996) Szepesvári, C. and Littman, M. L. Generalized Markov decision processes: Dynamic-programming and reinforcement-learning algorithms. In Proceedings of International Conference of Machine Learning, volume 96, 1996.
- Tsybakov (2009) Tsybakov, A. B. Introduction to Nonparametric Estimation. Springer Series in Statistics. Springer, 2009.
- Wainwright (2019) Wainwright, M. J. High-dimensional statistics: A non-asymptotic viewpoint, volume 48. Cambridge University Press, 2019.
- Wei et al. (2017) Wei, C.-Y., Hong, Y.-T., and Lu, C.-J. Online reinforcement learning in stochastic games. In Advances in Neural Information Processing Systems, pp. 4987–4997, 2017.
- Yang et al. (2019a) Yang, Y., Zhang, G., Xu, Z., and Katabi, D. Harnessing structures for value-based planning and reinforcement learning. arXiv preprint arXiv:1909.12255, 2019a.
- Yang et al. (2019b) Yang, Z., Xie, Y., and Wang, Z. A theoretical analysis of deep q-learning, 2019b.
Appendix A Preliminary Facts
The following inequalities are used for developing our technical results:
Jensen’s Inequality: Let $X$ be a random variable and $g$ be a convex function; then $g\big(\mathbb{E}[X]\big) \leq \mathbb{E}\big[g(X)\big]$.
Pinsker’s Inequality: Let $P$ and $Q$ be two probability distributions; then the total variation distance and the KL divergence satisfy the bound
$$\delta(P, Q) \;\leq\; \sqrt{\tfrac{1}{2}\, D_{\mathrm{KL}}(P \,\|\, Q)}.$$
Note that if $P$ and $Q$ are discrete distributions, then $\delta(P, Q) = \tfrac{1}{2}\|P - Q\|_1$, where $\|\cdot\|_1$ denotes the total variation (or $\ell_1$) norm.
Reverse Pinsker’s Inequality: If $P$ and $Q$ are discrete distributions defined on a common finite set $\mathcal{X}$, then
$$D_{\mathrm{KL}}(P \,\|\, Q) \;\leq\; \frac{2}{Q_{\min}}\, \delta(P, Q)^2,$$
where $Q_{\min} = \min_{x \in \mathcal{X}} Q(x)$.
Log-sum inequality: Suppose that $a_1, \dots, a_n$ and $b_1, \dots, b_n$ are non-negative numbers. We have
$$\sum_{i=1}^{n} a_i \log\frac{a_i}{b_i} \;\geq\; \Big(\sum_{i=1}^{n} a_i\Big) \log\frac{\sum_{i=1}^{n} a_i}{\sum_{i=1}^{n} b_i}.$$
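As a purely illustrative sanity check, the snippet below evaluates Pinsker's inequality and the reverse form above on a pair of small discrete distributions:

```python
import numpy as np

def kl(p: np.ndarray, q: np.ndarray) -> float:
    return float(np.sum(p * np.log(p / q)))          # KL divergence in nats

def tv(p: np.ndarray, q: np.ndarray) -> float:
    return 0.5 * float(np.abs(p - q).sum())          # total variation distance

p = np.array([0.6, 0.3, 0.1])
q = np.array([0.4, 0.4, 0.2])
print(tv(p, q) <= np.sqrt(kl(p, q) / 2))             # Pinsker's inequality
print(kl(p, q) <= 2.0 / q.min() * tv(p, q) ** 2)     # reverse Pinsker (discrete case)
```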
Appendix B Proof of Theorem 2
Proof.
With the three detailed properties, the proof is conceptually straightforward. At each iteration, the improvement module produces better estimates for the explored states, by factors of $\alpha_v$ and $\alpha_p$. The exploration continues until one of the desired representative sets in $\mathcal{R}$ has been visited, and the exploration property guarantees that the exploration time is finite. The current iteration then ends by calling the supervised learning module to generalize the improvement to the entire state space. In what follows, we make these statements formal.
Let us first introduce some notation. We use the term iteration to refer to a complete round of improvement, exploration and supervised learning (cf. Line 2 of Algorithm 1). In general, we use a superscript $(t)$ to denote quantities relevant to the $t$-th iteration, except that for the supervised learning module we follow the convention in the paper and use a subscript, i.e., $f_t = (\hat{V}_t, \hat{\pi}_t)$. We denote by $\mathcal{I}^{(t)}$ all the information generated during the $t$-th iteration, and let $\mathcal{G}^{(t)}$ be the sigma-algebra generated by the stochastic process $\{\mathcal{I}^{(1)}, \dots, \mathcal{I}^{(t)}\}$, where the randomness comes from the environment and any randomness used by the three modules. Let $\varepsilon_v^{(t)}$ and $\varepsilon_p^{(t)}$ be the estimation errors of the model at the beginning of the $t$-th iteration, defined as in Property 1.
We use $D^{(t)}$ to denote the set of training data generated by the exploration module and by querying the improvement module during the $t$-th iteration, and let $S^{(t)}$ be the set of states visited by the exploration module during that iteration. Correspondingly, the estimation errors for the value function and the optimal policy after querying the improvement module are denoted by $\tilde{\varepsilon}_v^{(t)}$ and $\tilde{\varepsilon}_p^{(t)}$, respectively.
At the $t$-th iteration, the supervised learning module takes the outputs of the improvement module, $D^{(t)}$, as the training data. Let $\varepsilon_v^{(t+1)}$ and $\varepsilon_p^{(t+1)}$ denote the estimation errors of the new model $f_t$ after querying the supervised learning module; by definition, these are also the errors at the beginning of iteration $t+1$.
First, the improvement property of the improvement module (cf. Property 1) implies that, for every state $s$ in the training set,
$$\mathbb{E}\big[\big|\tilde{V}^{(t)}(s) - V^*(s)\big|\big] \;\leq\; \alpha_v\, \varepsilon_v^{(t)}, \qquad (7)$$
$$\mathbb{E}\big[D_{\mathrm{KL}}\big(\tilde{\pi}^{(t)}(\cdot \mid s) \,\|\, \pi^*_{\tau}(\cdot \mid s)\big)\big] \;\leq\; \alpha_p\, \varepsilon_p^{(t)}. \qquad (8)$$
For the supervised learning module, according to the generalization property (cf. Property 2), when the size of the training set is sufficiently large and the sampled states are representative of the state space, the same order of accuracy as the training data is generalized to the entire state space. For now, let us assume that this is the case; we will come back to verify that the generalization bound in Property 2 can indeed be satisfied by $D^{(t)}$. Then, the following bounds hold:
$$\varepsilon_v^{(t+1)} \;\leq\; c_v\, \alpha_v\, \varepsilon_v^{(t)}, \qquad \varepsilon_p^{(t+1)} \;\leq\; c_p\, \alpha_p\, \varepsilon_p^{(t)}.$$
Therefore, when querying the improvement module, if we select the improvement factors such that
$$c_v\, \alpha_v < 1 \quad \text{and} \quad c_p\, \alpha_p < 1, \qquad (9)$$
then we have
$$\varepsilon_v^{(t+1)} < \varepsilon_v^{(t)}, \qquad (10)$$
$$\varepsilon_p^{(t+1)} < \varepsilon_p^{(t)}. \qquad (11)$$
It is worth noting that $c_v$ and $c_p$ may be larger than $1$ (cf. Property 2): a reasonable supervised learning model may generalize the same order of accuracy as the training data, but it is unlikely to achieve a smaller error; hence, $\alpha_v$ and $\alpha_p$ are required to be small enough in Property 1 so that $c_v \alpha_v < 1$ and $c_p \alpha_p < 1$.
By definition, $\varepsilon_v^{(t+1)}$ and $\varepsilon_p^{(t+1)}$ are the estimation errors of the model at the beginning of iteration $t+1$. Therefore, we have the desired inequalities: the estimation errors contract by constant factors at every iteration, and hence decay geometrically.
Finally, to complete the proof, as mentioned before, we need to verify that we can sample enough representative states at each iteration within a finite number of time steps. This is indeed guaranteed by the exploration property. In particular, note that at the $t$-th iteration, we would like to sample enough representative states whose improved estimates have errors $\alpha_v \varepsilon_v^{(t)}$ and $\alpha_p \varepsilon_p^{(t)}$ for the value and policy functions (cf. Eqs. (7) and (8)). By a recursive argument (cf. Eqs. (10) and (11)), it is not hard to see that at the $t$-th iteration, we need to query the exploration module until the sampled states $S^{(t)}$ contain one of the representative sets in $\mathcal{R}\big(\tau, \alpha_v \varepsilon_v^{(t)}, \alpha_p \varepsilon_p^{(t)}\big)$, i.e., we stop querying the exploration module at the first time $T^{(t)}$ at which such a representative set is contained in the explored states,
where $\alpha_v$ and $\alpha_p$ are chosen as in Eq. (9). From the exploration property, we know that $\mathbb{E}[T^{(t)}]$ is finite, which implies that $T^{(t)}$ is also finite with high probability. Therefore, we are guaranteed that the training data contains one of the representative sets, and hence the supervised learning module generalizes at each iteration. This completes the proof of Theorem 2.
∎
Appendix C Proof of Proposition 3
To prove Proposition 3, we first establish the following useful lemma:
Lemma 9.
Consider the exploration module and suppose that $\mathbb{E}\big[T(s_0)\big] \leq h < \infty$ for every initial state $s_0$ (cf. Property 3). Then, with probability at least $1 - \delta$,
$$T \;\leq\; 2h\, \big\lceil \log_2(1/\delta) \big\rceil. \qquad (12)$$
Proof of Lemma 9.
Consider a total of $n = 2h\lceil\log_2(1/\delta)\rceil$ time steps. All the states, $s_1, \dots, s_n$, are sampled by querying the exploration module. Let us divide the total time steps into $\lceil\log_2(1/\delta)\rceil$ segments, each consisting of $2h$ states. Denote by $B_j$ the set of states in the $j$-th segment. The key idea of the proof is to argue that, with high probability, at least one of the sets $B_j$ contains a representative set in $\mathcal{R}$.
Denote by $E_j$ the event that the $j$-th segment does not contain any of the representative sets, i.e., no $S_{\mathrm{rep}} \in \mathcal{R}$ satisfies $S_{\mathrm{rep}} \subseteq B_j$.
Let $\mathcal{F}_j$ be the filtration containing the information until the end of segment $j$. Since $\mathbb{E}[T] \leq h$ from any starting state, by Markov's inequality we have, for every $j$,
$$\mathbb{P}\big(E_j \mid \mathcal{F}_{j-1}\big) \;\leq\; \frac{1}{2}.$$
This then implies that
$$\mathbb{P}\Big(\bigcap_{j} E_j\Big) \;\leq\; 2^{-\lceil\log_2(1/\delta)\rceil} \;\leq\; \delta.$$
Therefore, with probability at least $1 - \delta$, at least one segment, and hence the first $n$ explored states, contains a representative set,
which completes the proof of Lemma 9. ∎
Proof of Proposition 3.
With Lemma 9, we are now ready to prove Proposition 3. This is achieved by simply counting the sample complexity of each of the iterations. As discussed in the convergence proof of Theorem 2, at the $t$-th iteration, we need to query the exploration module until the sampled states, $S^{(t)}$, contain one of the representative sets in $\mathcal{R}$. For each of the explored states, a query of the improvement module incurs a deterministic sample complexity of $m(\alpha_v, \alpha_p)$, for the required improvement factors $\alpha_v$ and $\alpha_p$. Let us now apply Lemma 9 with failure probability $\delta/K$, where $K$ is the total number of iterations.
Then we know that, with probability at most $\delta/K$, the sample complexity of the $t$-th iteration is larger than
$$2h\, \big\lceil \log_2(K/\delta) \big\rceil \cdot m(\alpha_v, \alpha_p).$$
Finally, applying a union bound over the $K$ iterations, we conclude that, with probability at least $1 - \delta$, the above bound holds simultaneously for every $t \leq K$. Equivalently, with probability at least $1 - \delta$, the total sample complexity is upper bounded by
$$K \cdot 2h\, \big\lceil \log_2(K/\delta) \big\rceil \cdot m(\alpha_v, \alpha_p).$$
∎
Appendix D Proof of Theorem 4
The recent work Shah & Xie (2018) establishes a lower bound on the sample complexity for reinforcement learning algorithms on MDPs. We follow a similar argument to establish a lower bound on the sample complexity for two-player zero-sum Markov games. We provide the proof for completeness. The key idea is to connect the problem of estimating the value function to the problem of non-parametric regression, and then leveraging known minimax lower bound for the latter. In particular, we show that a class of non-parametric regression problem can be embedded in a Markov game problem, so any algorithm for the latter can be used to solve the former. Prior work on non-parametric regression (Tsybakov, 2009; Stone, 1982) establishes that a certain number of observations is necessary to achieve a given accuracy using any algorithms, hence leading to a corresponding necessary condition for the sample size of estimating the value function in a Markov game problem. We now provide the details.
Step 1. Non-parametric regression
Consider the following non-parametric regression problem. Assume that we have $n$ independent pairs of random variables $(X_i, Y_i)$, $i = 1, \dots, n$, such that
$$\mathbb{E}\big[Y_i \mid X_i\big] = f(X_i), \qquad (13)$$
where $X_i$ takes values in a compact subset $\mathcal{X} \subset \mathbb{R}^d$ and $f: \mathcal{X} \to [0, 1]$ is the unknown regression function. Suppose that the conditional distribution of $Y_i$ given $X_i$ is a Bernoulli distribution with mean $f(X_i)$. We also assume that $f$ is $L$-Lipschitz continuous with respect to the Euclidean norm, i.e., $|f(x) - f(x')| \leq L\,\|x - x'\|_2$ for all $x, x' \in \mathcal{X}$.
Let $\mathcal{F}_L$ be the collection of all $L$-Lipschitz continuous functions on $\mathcal{X}$, i.e., $\mathcal{F}_L = \big\{ f : |f(x) - f(x')| \leq L\,\|x - x'\|_2,\ \forall\, x, x' \in \mathcal{X} \big\}$.
The goal is to estimate $f$ given the observations $\{(X_i, Y_i)\}_{i=1}^{n}$ and the prior knowledge that $f \in \mathcal{F}_L$.
It is easy to verify that the above problem is a special case of the non-parametric regression problem considered in the work of Stone (1982) (in particular, Example 2 therein). Let $\hat{f}_n$ denote an arbitrary (measurable) estimator of $f$ based on the training samples $\{(X_i, Y_i)\}_{i=1}^{n}$. By Theorem 1 in Stone (1982), we have the following result: the minimax rate of convergence for estimating $f \in \mathcal{F}_L$ is of order
$$n^{-\frac{1}{d+2}}, \qquad (14)$$
where the minimax risk is taken over all possible estimators $\hat{f}_n$.
where infimum is over all possible estimators .
Translating this result to the non-asymptotic regime, we obtain the following theorem.
Theorem 10.
Under the above assumptions, for any number $\varepsilon \in (0, 1)$, there exist numbers $n_0$ and $c > 0$ such that any estimator achieving $\sup_{f \in \mathcal{F}_L} \mathbb{E}\big[\|\hat{f}_n - f\|\big] \leq \varepsilon$ must use at least $n \geq c\, \varepsilon^{-(d+2)}$ samples.
Step 2. Two-player zero-sum Markov game
Consider a class of (degenerate) two-player zero-sum discounted Markov games constructed as follows.
In words, the transition is deterministic, and the expected reward is independent of the action taken and the current player making the move.
Let $R_t$ be the observed reward at step $t$. We assume that the distribution of $R_t$ given the visited state is Bernoulli with mean given by the expected reward function, independently of everything else. The expected reward function is assumed to be Lipschitz and bounded. It is easy to see that, for every pair of policies and every state $s$,
$$V^{\pi_1, \pi_2}(s) = \sum_{t=0}^{\infty} \gamma^t\, \bar{r}(s_t), \qquad (15)$$
where $s_0 = s$, $s_{t+1}$ denotes the (deterministic) next state along the trajectory, and $\bar{r}$ denotes the expected reward function.
Step 3. Reduction from regression to a Markov game
Given a non-parametric regression problem as described in Step 1, we may reduce it to the problem of estimating the value function of the Markov game described in Step 2. To do this, we identify the states of the game with the covariates and set the expected reward function to be a (suitably scaled) version of the regression function $f$.
In this case, it follows from Eq. (15) that the value function is determined by $f$, so estimating the value function to accuracy $\varepsilon$ requires estimating $f$ to accuracy proportional to $\varepsilon$. Moreover, the expected reward function is Lipschitz, so the assumptions of the Markov game in Step 2 are satisfied. This reduction shows that the Markov game problem is at least as hard as the non-parametric regression problem, so a lower bound for the latter is also a lower bound for the former.
Applying Theorem 10 yields the following result: for any number $\varepsilon \in (0, 1)$, there exist numbers $n_0$ and $c > 0$ such that estimating the value function of such a game to accuracy $\varepsilon$ requires at least $c\, \varepsilon^{-(d+2)}$ observed transitions.
Consequently, for any reinforcement learning algorithm and any sufficiently small $\varepsilon$, there exists a Markov game problem such that, in order to achieve
$$\sup_{s \in \mathcal{S}} \mathbb{E}\big[\big|\hat{V}_n(s) - V^*(s)\big|\big] \leq \varepsilon,$$
one must have
$$n \geq C\, \varepsilon^{-(d+2)},$$
where $C > 0$ is a constant.
Appendix E Details: A Concrete Instantiation of the Key Modules
In Section 6, we provide a sketch of our instantiation of the key modules and their informal properties. In this appendix, we close the gap by giving a detailed treatment on each of the three modules. We discuss in details each instantiation and its formal property. Combining together, we provide a precise statement on the sample complexity of the overall EIS algorithm.
E.1 Improvement Module: MCTS
Recall that the improvement module should be capable of providing improved estimates for both the value and policy functions, at the queried state . Since both the value and the policy are closely related to the function, one simple approach to simultaneously produce improved estimates is to obtain better estimates of first and then construct the improved estimates of value and policy from . We will take this approach in this example and use MCTS to obtain the estimates.
MCTS is a class of popular search algorithms for sequential decision-making that builds search trees by randomly sampling the state space. It is also one of the key components underlying the success of AlphaGo Zero. Most variants of MCTS in the literature use some form of upper confidence bound (UCB) rule to select actions at each depth of the search tree. Since our focus is to demonstrate the improvement property, we employ a fixed-depth MCTS, which takes the current model of the value function as input and outputs a value estimate of the root node $s$. The current model of the value function is used to evaluate the leaf nodes at the maximum depth during the Monte Carlo simulation. This fixed-depth MCTS has been rigorously analyzed in Shah et al. (2019), with a non-asymptotic error bound for the root node, when the state transitions are deterministic.
We refer readers to Shah et al. (2019) (precisely, Algorithm 2) for the details of the pseudo code. We remark that in principle, many other variants of MCTS that has a precise error guarantee could be used instead; we choose the fixed -depth variant here to provide a concrete example.
We now lay down the overall algorithm of the improvement module in Algorithm 2 below. Recall that the state transitions are deterministic and the reward may be random (cf. Section 2). Given the queried state $s$, the Q-value estimate for each action $a$ is the sum of two components: (1) the empirical average of the reward $r(s, a)$; (2) the discounted estimated value of the next state $f(s, a)$, returned by calling the fixed-depth MCTS algorithm with $f(s, a)$ as the root node. Further recall that we use player $P_1$ as the reference (i.e., the maximizer). The module then obtains an improved value estimate by taking the appropriate max or min of the Q-value estimates, depending on whether $I(s)$ is player $P_1$ (maximizer) or player $P_2$ (minimizer), and an improved policy estimate as the Boltzmann policy based on the Q-value estimates. It is worth mentioning that the fixed-depth MCTS algorithm was designed for discounted MDPs in Shah et al. (2019), but extending it to the game setting is straightforward, as in the literature (Kocsis et al., 2006; Kaufmann & Koolen, 2017), i.e., by alternating between max nodes (i.e., $P_1$ plays) and min nodes (i.e., $P_2$ plays) at each depth of the tree. We defer the details to Appendix F, where we prove the key theorem, Theorem 11, of our improvement module.
$$\hat{Q}(s, a) = \frac{1}{n_r}\sum_{j=1}^{n_r} r^{(j)}(s, a) + \gamma\, \hat{V}^{\mathrm{MCTS}}\big(f(s, a)\big), \qquad \forall\, a \in \mathcal{A}, \qquad (16)$$
$$\tilde{V}(s) = \begin{cases} \max_{a \in \mathcal{A}} \hat{Q}(s, a), & I(s) = 1, \\ \min_{a \in \mathcal{A}} \hat{Q}(s, a), & I(s) = 2, \end{cases} \qquad (17)$$
$$\tilde{\pi}(a \mid s) \propto \exp\big(\sigma(s)\, \hat{Q}(s, a)/\tau\big), \qquad \sigma(s) = \begin{cases} +1, & I(s) = 1, \\ -1, & I(s) = 2, \end{cases} \qquad (18)$$
where $r^{(1)}(s, a), \dots, r^{(n_r)}(s, a)$ are i.i.d. reward samples and $\hat{V}^{\mathrm{MCTS}}$ denotes the value estimate returned by the fixed-depth MCTS.
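To make the construction concrete, here is a schematic Python sketch of this improvement step. The helper `mcts_value` stands in for the fixed-depth MCTS value estimate of Shah et al. (2019) and is assumed rather than implemented; the function and argument names are ours, and the sketch mirrors the reward averaging, max/min, and Boltzmann steps described above.

```python
import numpy as np

def improve(s, actions, is_max_player, sample_reward, next_state,
            mcts_value, gamma, temperature, num_reward_samples=16):
    """Improved (value, policy) estimates at state s, mirroring Eqs. (16)-(18)."""
    q_hat = np.empty(len(actions))
    for i, a in enumerate(actions):
        # Empirical average of the (possibly random) immediate reward r(s, a).
        r_bar = np.mean([sample_reward(s, a) for _ in range(num_reward_samples)])
        # Fixed-depth MCTS estimate of the value of the deterministic next state f(s, a).
        q_hat[i] = r_bar + gamma * mcts_value(next_state(s, a))
    # Improved value: max over actions for the maximizer P1, min for the minimizer P2.
    v_improved = q_hat.max() if is_max_player else q_hat.min()
    # Improved policy: Boltzmann distribution over the Q estimates (negated for P2).
    logits = (q_hat if is_max_player else -q_hat) / temperature
    logits -= logits.max()
    pi_improved = np.exp(logits) / np.exp(logits).sum()
    return v_improved, pi_improved
```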
The following theorem states the property of this specific improvement module (Algorithm 2). It is not hard to see that it directly implies the desired improvement property, i.e., Property 1.
Theorem 11.
Suppose that the state transitions are deterministic. Let the current model $f = (\hat{V}, \hat{\pi})$, a small temperature $\tau > 0$, and the improvement factors $\alpha_v, \alpha_p \in (0, 1)$ be given, and suppose that the current value model $\hat{V}$ satisfies
$$\sup_{s \in \mathcal{S}} \mathbb{E}\big[\big|\hat{V}(s) - V^*(s)\big|\big] \leq \varepsilon_v.$$
Then, with appropriately chosen parameters for Algorithm 2, for each query state $s \in \mathcal{S}$, we have:
1. $\mathbb{E}\big[\big|\tilde{V}(s) - V^*(s)\big|\big] \leq \alpha_v\, \varepsilon_v$;
2. $\tilde{\pi}(\cdot \mid s)$ satisfies the policy improvement guarantee of Property 1 with factor $\alpha_p$.
The above is achieved with a per-query sample complexity that is polynomial in $1/\varepsilon_v$ and depends on $\tau$, $\alpha_v$ and $\alpha_p$; the explicit expression follows from the analysis in Appendix F.
E.2 Supervised Learning Module: Nearest Neighbor Regression.
To establish the generalization property of nearest neighbor supervised learning algorithm for estimating the optimal value function and policy, we make the following structural assumption about the Markov game. Specifically, we assume that the optimal solutions (i.e., true regression function) are smooth in some sense.
Assumption 1.
(A1.) The state space $\mathcal{S}$ is a compact subset of $\mathbb{R}^d$. The chosen distance metric $\mathrm{dist}$ associated with the state space is such that $(\mathcal{S}, \mathrm{dist})$ forms a compact metric space. (A2.) The optimal value function $V^*$ is bounded by $V_{\max}$ and satisfies Lipschitz continuity with parameter $L_V$, i.e., $|V^*(s) - V^*(s')| \leq L_V\, \mathrm{dist}(s, s')$ for all $s, s' \in \mathcal{S}$. (A3.) The optimal Boltzmann policy $\pi^*_{\tau}$ defined in Eq. (4) is Lipschitz continuous with parameter $L_{\pi}$, i.e., $\|\pi^*_{\tau}(\cdot \mid s) - \pi^*_{\tau}(\cdot \mid s')\|_1 \leq L_{\pi}\, \mathrm{dist}(s, s')$ for all $s, s' \in \mathcal{S}$.
For each $\rho > 0$, the compact set $\mathcal{S}$ has a finite $\rho$-covering number $N(\rho)$. There exists a partition of $\mathcal{S}$, $\{B_1, \dots, B_{N(\rho)}\}$, such that each $B_k$ has diameter at most $\rho$, that is, $\sup_{s, s' \in B_k} \mathrm{dist}(s, s') \leq \rho$. We assume that the states in the training set, $\{s_1, \dots, s_n\}$, are sufficiently representative in the sense that, for any given $n_0$ and $\rho$, the sample size $n$ can be chosen large enough to ensure that each $B_k$ contains at least $n_0$ training states. If the training set satisfies this condition, we call it $(n_0, \rho)$-representative.
Given the training data, we fit a value function using the following nearest-neighbor-type algorithm: for each query state $s$, set $\hat{V}(s)$ to be the average of the improved value estimates $\tilde{V}(s_i)$ over the training states $s_i$ lying in the same partition cell $B_k$ as $s$.
For each action $a \in \mathcal{A}$, a similar algorithm is used to fit the action probability $\hat{\pi}(a \mid s)$. The proposition below, proved in Appendix G, shows that this algorithm has the desired generalization property.
To simplify the notation, we use and to represent the estimation errors of the value function and the policy, respectively, for the training data. That is, and in Property 2.
Proposition 12.
The size of a representative data set should at least scale as $n_0\, N(\rho)$. Consider a simple setting where the state space is a unit volume hypercube in $\mathbb{R}^d$ with the $\ell_\infty$ metric. By (Wainwright, 2019, Lemma 5.7), the covering number of $\mathcal{S}$ scales as $N(\rho) \asymp (1/\rho)^d$. Note that each of the $N(\rho)$ covering sets must contain at least $n_0$ training points. Therefore, to achieve the desired generalization property, the size of the training dataset should satisfy $n \gtrsim n_0\, (1/\rho)^d$.
E.3 Exploration Module: Random Sampling Policy
As stated in Proposition 12, the sampled states for nearest neighbor regression should be $(n_0, \rho)$-representative, where $n_0$ and $\rho$ are given by Eq. (19). We will show that a random sampling policy, which samples the state space uniformly, is able to visit a set of $(n_0, \rho)$-representative states within a finite expected number of steps. We need to assume that the state space is sufficiently regular near the boundary. In particular, we impose the following assumption, which is naturally satisfied, for example, by convex compact sets in $\mathbb{R}^d$.
Assumption 2.
The partition $\{B_1, \dots, B_{N(\rho)}\}$ of $\mathcal{S}$ satisfies
$$\mu(B_k) \geq c_0\, \rho^d, \qquad k = 1, \dots, N(\rho), \qquad (20)$$
for some constant $c_0 > 0$, where $\mu$ is the Lebesgue measure in $\mathbb{R}^d$.
Proposition 13.
Suppose that the state space $\mathcal{S}$ is a compact subset of $\mathbb{R}^d$ satisfying Assumption 2. Given the temperature $\tau$ and estimation errors $\varepsilon_v$ and $\varepsilon_p$ for the value and policy respectively, define $n_0$ and $\rho$ as in Eq. (19). Then the expected number of steps to obtain a set of $(n_0, \rho)$-representative states under the random sampling policy is upper bounded by a finite quantity depending only on $N(\rho)$, $n_0$ and the constant $c_0$ of Assumption 2.
E.4 Convergence Guarantees of the Instance
For the instance of the EIS algorithm with MCTS, random sampling policy and nearest neighbor supervised learning, we have shown that each module satisfies the desired properties (cf. Theorem 11 and Propositions 12-13). Therefore, the convergence result stated in Theorem 2 holds for the specific instance considered here. Moreover, the non-asymptotic analysis of these three modules provides an explicit upper bound on the sample complexity of this instance. The following theorem states the precise result.
Theorem 14.
Suppose that Assumptions 1 and 2 hold. For a given  and a small , there exist appropriately chosen parameters for the instance of Algorithm 1 with MCTS, random sampling and nearest neighbor supervised learning, such that:
1. The output at the end of the -th iteration satisfies
where  is the generalization constant for the policy in Proposition 12.
2. With probability at least , the above result is achieved with a sample complexity of
where  and  are constants independent of  and .
3. In particular, if  is a unit volume hypercube in , then the total sample complexity to achieve an -error value function and policy is given by
Theorem 14 states that the sample complexity of this instance of the EIS algorithm scales as  (omitting logarithmic factors). Note that Theorem 4 implies that for any policy to learn the optimal value function within  approximation error, the number of samples required must scale as . Hence, in terms of the dependence on the dimension, the instance we consider here is nearly optimal.
Appendix F Example Improvement Module and Proof of Theorem 11
In this section, we formally show the improvement property of the specific example in Section E.1. To this end, we first elaborate on some details of Algorithm 2 in Appendix F.1. We then state two useful lemmas in Appendix F.2, and finally we complete the proof of Theorem 11 in Appendix F.3.
F.1 Details of the Improvement Module Example
Before proving the theorem, let us first discuss some details of the improvement module (i.e., Algorithm 2). It is worth mentioning some necessary modifications for applying the fixed -depth MCTS algorithm (Shah et al., 2019) in Algorithm 2. In particular, the original algorithm is introduced and analyzed for infinite-horizon discounted MDPs, but extending it to the game setting is straightforward, similar to the literature (Kocsis et al., 2006; Kaufmann & Koolen, 2017). We now elaborate on both the algorithmic and the technical extensions of the fixed -depth MCTS algorithm for Markov games.
Algorithmically, for turn-based zero-sum games, each layer in the tree would alternate between max nodes (i.e., player ’s turn) and min nodes (i.e., player ’s turn). For max nodes, the algorithm proceeds as usual by selecting the action with the maximum sum of the empirical average and the upper confidence term. For min nodes, the algorithm could choose the action with the minimum value of the empirical average minus the upper confidence term. More precisely, if the current node is a min node, then Line 6 (action selection) of the fixed depth MCTS algorithm in Shah et al. (2019) should be modified to (using the notation in Shah et al. (2019) to be consistent):
Alternatively, the algorithm could first negate the empirical average and then choose the action that maximizes the sum of the negated empirical average and the upper confidence term. With these modifications, the fixed depth MCTS algorithm could be used to estimate values for the game setting considered in this paper.
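The modification can be summarized in a few lines. The sketch below assumes a node object with per-action empirical means and visit counts, and uses a UCB-style bonus of the form c*sqrt(log N / n) purely for illustration; the fixed depth MCTS of Shah et al. (2019) uses a different (polynomial) confidence term.

```python
import math

def select_action(node, c=2.0):
    """Action selection at a tree node (sketch of the game modification).

    node.stats[a] = (mean_value, visit_count); node.visits = total visits;
    node.is_max is True when the maximizing player acts at this node.
    The bonus c*sqrt(log N / n) is illustrative, not the exact confidence
    term of the fixed depth MCTS algorithm.
    """
    best_action, best_score = None, None
    for a, (mean, n) in node.stats.items():
        bonus = float("inf") if n == 0 else c * math.sqrt(math.log(node.visits) / n)
        # max node: empirical mean plus bonus; min node: empirical mean minus bonus
        score = mean + bonus if node.is_max else -(mean - bonus)
        if best_score is None or score > best_score:
            best_action, best_score = a, score
    return best_action
```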
Technically, we note that one could readily obtain the same guarantees for the fixed depth MCTS algorithm in the game setting as for the MDP algorithm in Shah et al. (2019), by following essentially the same proof in Appendix A of Shah et al. (2019). We only remark on some technical points in the following:
1.
2. The technical results were derived for rewards bounded in  for convenience. It is not hard to see (cf. Remark 1 in Appendix A.3 of Shah et al. (2019)) that the same proof applies seamlessly to our setting, i.e., rewards bounded in .
3. Since the original derivation was for MDPs, Lemma 4 in Appendix A.8 of Shah et al. (2019) used the Bellman equation for MDPs. In the game setting, it is straightforward to replace it with the Bellman equation for Markov games:
An analogue of Lemma 4 then readily follows.
F.2 Two Useful Lemmas
We first state two useful lemmas. The first lemma bounds the difference between two Boltzmann policies in terms of the difference between the underlying values used to construct them.
Lemma 15.
Fix a state . Suppose that the Q-value estimates satisfy
Consider the two Boltzmann policies with temperature :
We have that
Proof.
Since  is fixed, we drop it for ease of exposition. Let  and . Then
The second term above can be bounded using the log-sum inequality (cf. Appendix A), which gives . We then continue the above chain of inequalities to obtain
Taking expectation, we have the bound
as desired. ∎
The following lemma states a generic result relating the difference of the maxima of two vectors to their largest entrywise difference.
Lemma 16.
Consider two -dimensional vectors and . We have
Proof.
Assume that and . Then
and
Thus,
The same argument holds for the other inequality, and this completes the proof of Lemma 16.
∎
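As a quick numerical illustration of Lemma 16, the snippet below checks on random vectors that the difference of the maxima never exceeds the largest entrywise difference; the dimensions and perturbation scale are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    u = rng.normal(size=5)
    v = u + rng.normal(scale=0.1, size=5)   # a small perturbation of u
    # Lemma 16: |max_i u_i - max_i v_i| is at most max_i |u_i - v_i|
    assert abs(u.max() - v.max()) <= np.abs(u - v).max() + 1e-12
print("Lemma 16 check passed")
```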
F.3 Improvement Property: Proof of Theorem 11
We are now ready to prove Theorem 11. As discussed in Appendix F.1, the same guarantees of the modified fixed -depth MCTS algorithm for the game setting can be established as in Shah et al. (2019). In the sequel, we use these guarantees, together with the previous two lemmas, to analyze the improvement module example. In particular, we derive error bounds for the outputs of Algorithm 2,  and , and analyze the corresponding sample complexity.
Consider deterministic state transitions. The complete proof proceeds in three steps. The first step analyzes the outputs of querying the fixed -depth MCTS algorithm. Based on those outputs, the second step analyzes the outputs of Algorithm 2,  and . The final step characterizes the corresponding sample complexity of the overall process. Throughout the first two steps, since the current model  may be random, we fix a realization; we take expectations at the end to arrive at the desired results in Theorem 11.
Step 1: Error bounds for outputs of the fixed depth MCTS algorithm. In Shah et al. (2019), the authors establish the following concentration result for the estimated value of the root node , , under the fixed -depth MCTS algorithm with  simulations (cf. proof of Theorem 1 in Shah et al. (2019)): there exist constants such that for every ,
where the probability is measured with respect to the randomness in the MCTS algorithm, and satisfies the following condition
Here,  denotes the error when evaluating the leaf nodes using the current model, i.e., , where  is the current value model. Note that  is a hyper-parameter of the MCTS algorithm that can be freely chosen. Throughout the proof, we set
Recall that is the discount factor. In addition, is larger than (cf. Section A.6.4 of Shah et al. (2019)). Therefore, for every ,
It follows that
Thus
This leads to a variant of Theorem 1 in Shah et al. (2019) for the performance of the fixed depth MCTS, as stated below.
Proposition 17.
Let be the current model and consider the fixed depth MCTS algorithm (with depth ) employed in Algorithm 2. Then, for each query state , the following claim holds for the output of the fixed -depth MCTS with simulations:
where is a constant and the expectation is taken with respect to the randomness in the MCTS simulations.
Step 2: Error bounds for outputs of the improvement module. Now, we are ready to obtain a non-asymptotic analysis for the outputs of Algorithm 2. Consider Line 4 of Algorithm 2, where we call the fixed depth MCTS algorithm on the state . For each ,  simulations are performed with the corresponding next state as the root node, so Proposition 17 implies that
(21)
Recall that state transitions are assumed to be deterministic and note that the estimated Q-value for is given by (i.e., Line 6 of Algorithm 2)
where  is the empirical average of the immediate rewards for playing action  in state . Note that  are independent random variables that satisfy . By Hoeffding's inequality, it holds that
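For reference, the textbook two-sided form of Hoeffding's inequality for  independent reward samples, writing the reward range as an interval of length  (a notational assumption here, not necessarily the constants used in this proof), reads
\[
\Pr\Big(\big|\bar r(s,a) - \mathbb{E}[r(s,a)]\big| \ge t\Big) \;\le\; 2\exp\!\Big(-\frac{m t^2}{2 R_{\max}^2}\Big), \qquad t > 0 .
\]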
Thus
It follows that
(22)
where the last inequality follows from the fact that
We now consider the value estimate and the policy estimate (Line 9 of Algorithm 2) separately:
(a) Value estimate : In order to obtain an error bound for the value estimate of the query state , i.e., , we apply Lemma 16 from the previous section. For the query state , if is player , applying Lemma 16 yields
Therefore,
(23)
Similarly, if is player , applying Lemma 16 also yields the same desired result, Eq. (23).
(b) Policy estimate : In order to obtain an error bound for the policy estimate of the query state , i.e., , we apply Lemma 15 from the previous section. Together with Eq. (22), Lemma 15 yields the following bound:
(24)
Step 3: Completing the proof of Theorem 11. Recall that by the assumption of Theorem 11,
Now, taking expectation of Eqs. (23) and (24) over the randomness in the current model , we have
(25)
(26)
Recall that in Theorem 11, our goal for improvement is as follows:
(27)
It is not hard to see that we can choose the parameters of the fixed depth MCTS algorithm, in particular the depth  and the number of simulations , in an appropriate way so that Eqs. (25) and (26) satisfy the desired improvement bound. Specifically, we could choose the depth  such that
(28)
and choose the number of simulations, , to be large enough such that the term is less than . For small temperature , choosing the depth
would satisfy the condition in Eq. (28). Recall that the tunable hyper-parameter  is set to . With the above , this implies that the number of simulations should be
To summarize, we have shown that the desired improvement, Eq. (27), can indeed be achieved with appropriate algorithmic parameters. Finally, regarding the sample complexity, we note that with a -depth tree, each simulation of the fixed depth MCTS algorithm incurs  state transitions. Therefore, the total sample complexity of querying the improvement module is , which is equal to
Appendix G Proof of Proposition 12
Proof.
Let , and be positive numbers to be chosen later, and recall that is the -covering number of . Let and . For each , we have
where the second step follows from Jensen's inequality, and the last step follows from the premise on the training error of the value function. To bound the first term on the RHS above, we note that the random variables are independent and bounded by . Hoeffding's inequality then ensures that
Combining the last two equations and applying a union bound over , we obtain
Since the random variable satisfies , we may convert the above inequality into an expectation bound:
We are now ready to bound the quantity of interest:
where step holds because is -Lipschitz, and step holds because the sets have diameter at most . Now taking , we have
(29)
We now turn to the second inequality. Under the premise on the training error of policy, we have for each ,
Pinsker’s inequality | ||||
For each , let us fit the action probability using a similar Nearest Neighbor type algorithm as :
Note that and . Applying a similar argument as above for the fitted action probability function w.r.t. the squared error, we have
Choosing we obtain that
(30)
Taking
(31)
from Eqs. (29)-(30), we have the following bounds
This proves the first inequality of the proposition.
We now focus on the inequality for the policy function. By Jensen’s inequality, we have
This is equivalent to saying that for each ,
(32)
On the other hand, for each  and each , we have the bound
where . Therefore, by the reverse Pinsker’s inequality (cf. Appendix A) we have
Combining with the bound (32), we obtain that
This completes the proof of the second inequality. ∎
Appendix H Proof of Proposition 13
In the sequel, we use the shorthand  and refer to each  as a ball. For each integer , let  be the number of balls visited up to time . Let  be the first time when all balls are visited. For each , let  be the first time when the -th ball is visited, and let  be the time to visit the next ball after  balls have been visited. We use the convention that . By definition, we have
When balls have been visited, the probability of visiting a new ball is at least
where the last inequality follows from the regularity assumption. Therefore, the time to visit the -th ball after  balls have been visited is stochastically dominated by a geometric random variable with mean at most . It follows that
This proves that the expected time to sample a -representative set is upper bounded by . Note that if the trajectory samples  -representative sets, then each ball must be visited at least  times. Therefore,  gives an upper bound on the expected time to sample a -representative set, hence
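The argument above is a coupon-collector bound. As a purely illustrative check (uniform cells, arbitrary sizes, not the constants of Assumption 2), the following sketch compares the simulated time to visit all cells with the harmonic-sum bound used in the proof.

```python
import random

def time_to_collect(num_cells, rng):
    """Simulate uniform sampling until every cell has been visited once."""
    seen, steps = set(), 0
    while len(seen) < num_cells:
        seen.add(rng.randrange(num_cells))
        steps += 1
    return steps

N, trials = 200, 500
rng = random.Random(0)
avg = sum(time_to_collect(N, rng) for _ in range(trials)) / trials
bound = N * sum(1.0 / k for k in range(1, N + 1))   # coupon-collector bound N * H_N
print(f"average steps = {avg:.1f}, coupon-collector bound = {bound:.1f}")
```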
Appendix I Proof of Theorem 14
We will reuse the notation introduced in the proof of Theorem 2. We initialize the value model . Hence . Consider the -th iteration. Let  and  denote the estimation errors for the model  at the beginning of the -th iteration:
Let be the set of states visited by the random sampling policy during the -th iteration. We require to be -representative, where
(33)
(34)
(35)
We use to denote the set of training data generated by querying MCTS. Consider choosing the depth and simulation number parameters for MCTS at the -th iteration as follows:
(36)
(37)
where is a sufficiently large constant. By Theorem 11, for each query state , the output from MCTS satisfies
That is, the improvement factors for the value function and policy,  and , of MCTS are as follows:
Note that the training set  has estimation error  for both the value and the policy, and the sampled states of the training set are -representative. By Proposition 12, the output of nearest neighbor supervised learning at the end of the -th iteration satisfies the following generalization property:
Therefore,
Since we have
That is,
During the -th iteration, the total sample complexity is given by . From Eqs. (33)-(37), we have
where are positive constants independent of and . By Proposition 13, we have
where is a constant.
Given , for , i.e., , we have
In particular, if the state space  is a unit volume hypercube in , we have . Therefore,
Appendix J Empirical Results: A Case-Study
To understand how well the EIS method performs for turn-based Markov games, we consider a simple game as a proof of concept. We shall design a simple non-deterministic game and apply our EIS framework. As mentioned in Section 2, the sparse sampling oracle (Kearns et al., 2002) could be used for the improvement module in this case. This oracle is simple but suffices to convey the insights. In what follows, we shall demonstrate the effectiveness of EIS by comparing its final estimate of the value function with the optimal one obtained via standard value iteration for games.
Setup. Consider a two-player turn-based Markov game where  and  are the sets of states controlled by  and  respectively,  are the sets of actions, and  is the reward received by  when the corresponding player takes action  at state . For each real number , define two clipping operators . That is,  projects  to the state space of player . At state , upon taking an action  by the corresponding player, the system transitions to the next state , where  is the Gaussian distribution, and  if  and  if . We consider the case
Before proceeding, let us intuitively understand what the solution of this game should look like. Recall that  is the reference player, i.e., the reward maximizer. Given the form of the reward, by design,  would prefer to stay at states that are far from . That is, we expect the value function for  in the range  (recall that ) to be larger at states far from 0.5. On the contrary, as the reward minimizer,  would like to stay at state  (recall that ) and potentially take large actions to minimize the reward. As such, we expect the value function in  to be small around state . We will confirm this intuition with both value iteration and our EIS method.
Algorithm. We apply the EIS algorithm to learn the Nash equilibrium of the Markov game described above. Specifically, we use nearest neighbor regression as the supervised learning module and random sampling as the exploration module (cf. Section 6). As the Markov game is stochastic, we use a variant of the sparse sampling method of Kearns et al. (2002) as the improvement module. This algorithm is simple to describe and analyze, yet suffices to convey the essential insights.
Sparse Sampling Oracle. The sparse sampling algorithm can be viewed as a form of non-adaptive tree search for estimating the value of a given state. In particular, each node in the tree represents a state and each edge is labeled by an action and a reward. Starting from the root node, i.e., the queried state of the oracle, the tree is built in a simple manner: consider a node (state) ; for each action , call the generative model  times on the state-action pair and obtain  children of the action . The children nodes are the states returned by the generative model, and each edge is labeled by the action and the reward returned by the generative model. The process is repeated for all nodes of each level, and then moves on to the next level until reaching a depth of . In essence, this process builds a -ary tree of depth . It represents a partial -step look-ahead tree starting from the queried state , hence the term sparse sampling oracle.
To obtain an estimate for the value of the queried state, estimates from the supervised learning module are assigned to the leaf nodes at depth . These values, together with the associated rewards on the edges, are then backed up to find value estimates for their parents, i.e., nodes at depth . The backup is a simple average over the children, followed by the appropriate max or min operation, depending on which player acts at this layer of the tree. The process is applied recursively from the leaves up to the root level to find the value estimate for the root node . In the experiments, we set
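A minimal recursive sketch of this oracle is given below. `generative_model` and `leaf_value` are hypothetical callables for the simulator and the supervised-learning value estimate, the inclusion of a discount factor `gamma` in the backup is an assumption, and all parameter values are illustrative.

```python
def sparse_sampling_value(state, player, depth, C, actions,
                          generative_model, leaf_value, gamma):
    """Sparse sampling oracle (sketch): estimate the value of `state`.

    generative_model(s, a) -> (next_state, reward, next_player), one random sample
    leaf_value(s)          -> value estimate from the supervised learning module
    player                 -> +1 for the maximizer, -1 for the minimizer
    """
    if depth == 0:
        # leaf nodes are evaluated with the current supervised-learning estimate
        return leaf_value(state)

    q_values = []
    for a in actions:
        total = 0.0
        for _ in range(C):
            # each edge carries a sampled reward and leads to a sampled child state
            next_state, reward, next_player = generative_model(state, a)
            total += reward + gamma * sparse_sampling_value(
                next_state, next_player, depth - 1, C, actions,
                generative_model, leaf_value, gamma)
        q_values.append(total / C)   # back-up: average over the C sampled children

    # max or min depending on which player acts at this node
    return max(q_values) if player == +1 else min(q_values)
```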
Figure 1: (Left) The approximate optimal value function computed by value iteration (plotted in red) together with the EIS estimates at various iterations. (Right) The average and maximum distance between the EIS estimate and the approximate optimal value function over iterations.
Evaluation. We first use approximate value iteration to compute the optimal value function . The value iteration operator is defined as follows:
For the continuous game considered here, we discretize the state space of each player into  equally spaced states. The above value iteration operator is then applied to obtain an approximate optimal value function . The approximate  generated by  iterations of value iteration is plotted in red in Figure 1 (Left). As expected, the result is consistent with our intuition:  attempts to stay away from  and  tries to minimize the reward by staying around .
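A sketch of the discretized value-iteration baseline is shown below, assuming the rewards and transition probabilities have already been tabulated on the grid (`R`, `P`, and `is_max_state` are illustrative placeholders): the backup takes a max over actions at the maximizer's states and a min at the minimizer's states.

```python
import numpy as np

def value_iteration(R, P, is_max_state, gamma, num_iters=100):
    """Approximate value iteration for a discretized turn-based zero-sum game.

    R[s, a]         : reward for taking action a at (discretized) state s
    P[s, a, :]      : transition probabilities over next (discretized) states
    is_max_state[s] : True if the maximizer acts at state s
    """
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    for _ in range(num_iters):
        Q = R + gamma * P @ V                 # Q[s, a] = r(s, a) + gamma * E[V(s')]
        V = np.where(is_max_state, Q.max(axis=1), Q.min(axis=1))
    return V
```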
Next, we evaluate our EIS method and compare its outputs against the approximate . In particular, let  denote the value function obtained by EIS after  iterations. Figure 1 (Left) shows the progress of EIS (i.e., ) at various iterations. It is clear that EIS gradually improves its estimate of the value function. On the right of Figure 1, we plot the average distance as well as the maximum distance between  and  over iterations. We remark that there is an inevitable gap between  and  due to discretization. As can be seen, the error of the EIS output decays gradually.