Sample Efficient Reinforcement Learning with REINFORCE
Abstract
Policy gradient methods are among the most effective methods for large-scale reinforcement learning, and their empirical success has prompted several works that develop the foundation of their global convergence theory. However, prior works have either required exact gradients or state-action visitation measure based mini-batch stochastic gradients with a diverging batch size, which limit their applicability in practical scenarios. In this paper, we consider classical policy gradient methods that compute an approximate gradient with a single trajectory or a fixed size mini-batch of trajectories under soft-max parametrization and log-barrier regularization, along with the widely-used REINFORCE gradient estimation procedure. By controlling the number of “bad” episodes and resorting to the classical doubling trick, we establish an anytime sub-linear high probability regret bound as well as almost sure global convergence of the average regret with an asymptotically sub-linear rate. These provide the first set of global convergence and sample efficiency results for the well-known REINFORCE algorithm and contribute to a better understanding of its performance in practice.
1 Introduction
In this paper, we study the global convergence rates of the REINFORCE algorithm (Williams 1992) for episodic reinforcement learning. REINFORCE is a vanilla policy gradient method that computes a stochastic approximate gradient with a single trajectory or a fixed-size mini-batch of trajectories with a particular choice of gradient estimator, where we use ‘vanilla’ here to disambiguate the method from more exotic variants such as natural policy gradient methods. REINFORCE and its variants are among the most widely used policy gradient methods in practice due to their good empirical performance and implementation simplicity (Mnih and Gregor 2014; Gu et al. 2015; Zoph and Le 2016; Rennie et al. 2017; Guu et al. 2017; Johnson et al. 2017; Yi et al. 2018; Kool, van Hoof, and Welling 2018, 2020). Related methods include the actor-critic family (Konda and Tsitsiklis 2003; Mnih et al. 2016) and deterministic and trust-region based variants (Silver et al. 2014; Schulman et al. 2017, 2015).
The theoretical results for policy gradient methods have, up to recently, been restricted to convergence to local stationary points (Agarwal et al. 2019). Lately, a series of works have established global convergence results. These recent developments cover a broad range of issues including global optimality characterization (Fazel et al. 2018; Bhandari and Russo 2019), convergence rates (Zhang et al. 2019; Mei et al. 2020; Bhandari and Russo 2020; Cen et al. 2020), the use of function approximation (Agarwal et al. 2019; Wang et al. 2019; Fu, Yang, and Wang 2020), and efficient exploration (Agarwal et al. 2020) (for more details, see the related work section, which we defer to Appendix E due to space limits). Nevertheless, prior work on vanilla policy gradient methods either requires exact and deterministic policy gradients or only guarantees convergence up to an $O(1/B^{\alpha})$ error with a fixed mini-batch size of $B$ trajectories collected when performing a single update (where $\alpha$ is $1/2$ in most cases), while global convergence is only achieved when the batch size $B$ goes to infinity. By contrast, practical implementations of policy gradient methods typically use either a single or a fixed number of sample trajectories, which tends to perform well. In addition, prior theoretical results (for general MDPs) have used state-action visitation measure based gradient estimation (see e.g., (Wang et al. 2019, (3.10))), which is typically not used in practice.
The main purpose of this paper is to bridge this gap between theory and practice. We do this in two major ways. First, we derive performance bounds for the case of a fixed mini-batch size, rather than requiring diverging size. Second, we remove the need for the state-action visitation measure based gradient, instead using the REINFORCE gradient estimator. It is nontrivial to go from a diverging mini-batch size to a fixed one. In fact, by allowing for an arbitrarily large batch size, existing works in the literature were able to make use of IID samples to decouple the analysis into deterministic gradient descent/ascent and error control of stochastic gradient estimations. In contrast, with a single trajectory or a fixed batch size, such a decoupling is no longer feasible. In addition, the state-action visitation measure based gradient estimations are unbiased and unbounded, while REINFORCE gradient estimations are biased and bounded. Hence a key to the analysis is to deal with the bias while making better use of the boundedness. Our analysis not only addresses these challenges, but also leads to convergence results in almost sure and high probability senses, which are stronger than the expected convergence results that dominate the literature (for vanilla policy gradient methods). We also emphasize that the goal of this work is to provide a deeper understanding of a widely used algorithm, REINFORCE, with little or no modifications, rather than tweaking it to achieve near-optimal performance bounds. Lastly, our analysis is not the complete picture and several open questions about the performance of policy gradient methods remain. We discuss these issues in the conclusion.
1.1 Contribution
Our major contribution can be summarized as follows. We establish the first set of global convergence results for the REINFORCE algorithm. In particular, we establish an anytime sub-linear high probability regret bound as well as almost sure global convergence of the average regret with an asymptotically sub-linear rate for REINFORCE, showing that the algorithm is sample efficient (i.e., with polynomial/non-exponential complexity). To our knowledge, these (almost sure and high probability) results are stronger than existing global convergence results for (vanilla) policy gradient methods in the literature. Moreover, our convergence results remove the non-vanishing $O(1/B^{\alpha})$ term (with $B$ being the mini-batch size of the trajectories and $\alpha > 0$ being some constant exponent) and hence show for the first time that policy gradient estimations with a single or finite number of trajectories also enjoy global convergence properties. Finally, the widely-used REINFORCE gradient estimation procedure is studied, as opposed to the state-action visitation measure based estimators typically studied in the literature but rarely used in practice.
2 Problem setting and preliminaries
Below we begin with our problem setting and some preliminaries on MDPs and policy optimization. For brevity we restrict ourselves to the stationary infinite-horizon discounted setting. We briefly discuss potential extensions beyond this setting in §6.
2.1 Problem setting
We consider a finite MDP $\mathcal{M} = (\mathcal{S}, \mathcal{A}, \mathcal{P}, r, \gamma, \rho)$, which is characterized by a finite state space $\mathcal{S}$, a finite action space $\mathcal{A}$, a transition probability $\mathcal{P}$ (with $\mathcal{P}(s'|s,a)$ being the probability of transitioning to state $s'$ given the current state $s$ and action $a$), a reward function $r$ (with $r(s,a)$ being the instantaneous reward when taking action $a$ at state $s$), a discount factor $\gamma \in (0,1)$ and an initial state distribution $\rho \in \Delta(\mathcal{S})$. Here $\Delta(\mathcal{X})$ denotes the probability simplex over a finite set $\mathcal{X}$. A (stationary, stochastic) policy $\pi$ is a mapping from $\mathcal{S}$ to $\Delta(\mathcal{A})$. We will use $\pi(a|s)$, or alternatively $\pi_{s,a}$, to denote the probability of taking action $a$ at state $s$ following policy $\pi$. The policy $\pi$ can also be viewed as an $|\mathcal{S}||\mathcal{A}|$-dimensional vector in
$$\Pi := \big\{ \pi \in \mathbb{R}^{|\mathcal{S}||\mathcal{A}|} : \pi_s \in \Delta(\mathcal{A}) \text{ for all } s \in \mathcal{S} \big\}. \qquad (1)$$
Notice that here we use the double indices $s$ and $a$ for notational convenience. We use $\pi_s$ to denote the sub-vector $(\pi_{s,a})_{a \in \mathcal{A}}$. We also assume that the reward $r(s,a)$ is deterministic for any $s \in \mathcal{S}$ and $a \in \mathcal{A}$ for simplicity, although our results hold for any reward with an almost sure uniform bound. Here $r$ can be similarly viewed as an $|\mathcal{S}||\mathcal{A}|$-dimensional vector. Without loss of generality, we assume that $r(s,a) \in [0,1]$ for all $s$ and $a$, which is a common assumption (Jaksch, Ortner, and Auer 2010; Agarwal et al. 2019; Mei et al. 2020; Even-Dar and Mansour 2003; Jin et al. 2018). We also assume that $\rho$ is component-wise positive, as is assumed in (Bhandari and Russo 2019).
Given a policy $\pi$, the expected cumulative reward of the MDP is defined as
$$V(\pi) := \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\right], \qquad (2)$$
where $s_0 \sim \rho$, $a_t \sim \pi(\cdot|s_t)$, $s_{t+1} \sim \mathcal{P}(\cdot|s_t, a_t)$ for all $t \geq 0$, and the goal is to find a policy $\pi^\star$ which solves the following optimization problem:
$$\underset{\pi \in \Pi}{\text{maximize}} \quad V(\pi). \qquad (3)$$
Any policy $\pi^\star$ attaining the maximum is said to be optimal, and the corresponding optimal objective value is denoted as $V^\star$. Note that in the literature, $V(\pi)$ is also commonly written as $V^{\pi}(\rho)$ and referred to as the value function. Here we hide the dependency on $\rho$ as it is fixed throughout the paper.
2.2 Vanilla policy gradient method and REINFORCE algorithm
When the transition probability $\mathcal{P}$ and reward $r$ are fully known, problem (3) reduces to solving an MDP, in which case various classical algorithms are available, including value iteration and policy iteration (Bertsekas 2017). In this paper, we consider the episodic reinforcement learning setting in which the agent accesses $\mathcal{P}$ and $r$ by interacting with the environment over successive episodes, i.e., the agent accesses the environment in the form of a $\rho$-restart model (Shani, Efroni, and Mannor 2019), which is commonly adopted in the policy gradient literature (Kakade et al. 2003). In addition, we focus on the REINFORCE algorithm, a representative policy gradient method.
Policy parametrization and surrogate objectives.
Here we consider parametrizing the policy with a parameter $\theta$, i.e., $\pi = \pi_\theta$, and take the following (regularized) optimization problem as an approximation to (3):
$$\underset{\theta}{\text{maximize}} \quad L_{\lambda}(\theta) := V(\pi_\theta) + \lambda R(\theta), \qquad (4)$$
where $\lambda \geq 0$ and $R(\theta)$ is a differentiable regularization term that improves convergence, to be specified later. Although our ultimate goal is still to solve the original problem (3), this regularized optimization problem is a useful surrogate and our approach will be to tackle problem (4) with progressively smaller regularization penalties, thereby converging to solving the actual problem we care about.
Policy gradient method.
In each episode $k = 1, 2, \dots$, the policy gradient method directly performs an online stochastic gradient ascent update on the surrogate objective $L_{\lambda_k}$, i.e.,
$$\theta_{k+1} = \theta_k + \eta_k\, \widehat{\nabla} L_{\lambda_k}(\theta_k), \qquad (5)$$
where $\eta_k > 0$ is the step-size and $\lambda_k \geq 0$ is the regularization parameter. Here the stochastic gradient $\widehat{\nabla} L_{\lambda_k}(\theta_k)$ is computed by sampling a single trajectory $\tau_k$ following policy $\pi_{\theta_k}$ from $\mathcal{M}$ with the initial state distribution $\rho$. Here $\tau_k = (s_0^k, a_0^k, \dots, s_{H_k}^k, a_{H_k}^k)$, where $H_k$ is a finite (and potentially random) stopping time of the trajectory (to be specified below), $s_0^k \sim \rho$, $a_t^k \sim \pi_{\theta_k}(\cdot|s_t^k)$, and $s_{t+1}^k \sim \mathcal{P}(\cdot|s_t^k, a_t^k)$ for all $t \geq 0$. We summarize the generic policy gradient method (with single trajectory gradient estimates) in Algorithm 1. An extension to mini-batch scenarios will be discussed in §5. As is always (implicitly) assumed in the literature of episodic reinforcement learning (e.g., cf. (Marbach and Tsitsiklis 2001)), given the current policy, we assume that the sampled trajectory is conditionally independent of all previous policies and trajectories.
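To make the generic scheme concrete, the following minimal sketch implements the single-trajectory update loop described above (and summarized in Algorithm 1). It is only an illustration: the helpers `sample_trajectory` and `grad_estimator` are placeholders supplied by the caller, not objects defined in the paper.

```python
import numpy as np

def policy_gradient(sample_trajectory, grad_estimator, theta0, step_sizes, lambdas):
    """Generic single-trajectory policy gradient loop (a sketch of Algorithm 1).

    sample_trajectory(theta): rolls out one episode under pi_theta and returns a
        list of (state, action, reward) tuples.
    grad_estimator(theta, tau, lam): returns a stochastic gradient of the
        regularized objective L_lambda evaluated at theta.
    """
    theta = np.array(theta0, dtype=float)
    for eta, lam in zip(step_sizes, lambdas):
        tau = sample_trajectory(theta)           # one episode following pi_theta
        g = grad_estimator(theta, tau, lam)      # stochastic gradient of L_lambda
        theta = theta + eta * g                  # online stochastic gradient ascent
    return theta
```

A REINFORCE-style choice of `grad_estimator` is sketched in §3.2 below.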
REINFORCE algorithm.
There are several ways of choosing the stochastic gradient operator in the policy gradient method, and the well-known REINFORCE algorithm (Williams 1992) corresponds to a specific family of estimators based on the policy gradient theorem (Sutton et al. 2000) (cf. §3). Other common alternatives include zeroth order/random search (Fazel et al. 2018; Malik et al. 2018) and actor-critic (Konda and Tsitsiklis 2003) approximations. One may also choose to parametrize the policy as a mapping from the parameter space to a specific action, which would then result in deterministic policy gradient approximations (Silver et al. 2014).
Although our main goal is to study the REINFORCE algorithm, our analysis indeed holds for rather generic stochastic gradient estimates. In the next section, we introduce the (mild) assumptions needed for our convergence analysis and the detailed gradient estimation procedures in the REINFORCE algorithm, and then verify that the assumptions do hold for these gradient estimations.
2.3 Phased learning and performance criteria
Phased learning.
To facilitate the exposition below, we divide the optimization in Algorithm 1 into successive phases $m = 1, 2, \dots$, each with length $T_m$. We then fix the regularization coefficient $\lambda_m$ within each phase $m$. In addition, a post-processing step is enforced at the end of each phase to produce the initialization of the next phase. The resulting algorithm is described in Algorithm 2. Here the trajectory sampled in episode $t$ of phase $m$ is denoted as $\tau_t^m$, and we will refer to the corresponding parameter $\theta_t^m$ as the $(m,t)$-th iterate hereafter. The post-processing function is required to guarantee that the resulting policy is lower bounded by a pre-specified tolerance to ensure that the regularization is bounded (cf. Algorithm 3 for a formal description and §3.1 for an example realization).
Note that here the $t$-th episode in phase $m$ corresponds to the $\iota(m,t)$-th episode in the original indexing, where $\iota$ maps the double index $(m,t)$ to the corresponding original episode number, with $\iota(m,t) := \sum_{m'=1}^{m-1} T_{m'} + t$. For notational compactness below, for $k = \iota(m,t)$, we define $\theta_k := \theta_t^m$. The mapping $\iota$ is a bijection and we denote its inverse by $\iota^{-1}$.
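The index bookkeeping above is simple but easy to get wrong; the following small helper (with an assumed 1-indexed convention for phases and episodes) makes it explicit. For example, with phase lengths (4, 8, 16), the 3rd episode of phase 2 is episode 7 overall.

```python
def episode_index(m, t, phase_lengths):
    """Map the double index (phase m, episode t within phase m) to the original
    episode number: the lengths of all earlier phases plus t.
    Phases and within-phase episodes are 1-indexed (an assumed convention)."""
    return sum(phase_lengths[:m - 1]) + t

assert episode_index(2, 3, [4, 8, 16]) == 7
```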
Performance criteria.
The criterion we adopt to evaluate the performance of Algorithm 2 is regret. For any $K \geq 1$, the regret up to episode $K$ is defined as the cumulative sub-optimality of the policy over the first $K$ episodes. Formally, we define
(6) |
Here the summation is over all $(m,t)$-th iterates whose corresponding original episode numbers are smaller than or equal to $K$, and the summand is the conditional sub-optimality (given the $(m,t)$-th iterate) incurred by the corresponding truncated trajectory. Notice that the regret defined above takes into account the fact that the trajectories are stopped/truncated to have finite horizons, which characterizes the actual loss caused by sampling the trajectories in line 5 of Algorithm 2. A similar regret definition for the episodic (discounted) reinforcement learning setting considered here is adopted in (Fu, Yang, and Wang 2020). We remark that all our regret bounds remain correct up to lower order terms when we replace the conditional expectation with the full expectation or an expectation-free version.
Similarly, we also define the single-phase version of the regret as follows. The regret up to episode $t$ in phase $m$ is defined as
(7) |
3 Assumptions and REINFORCE gradients
3.1 Assumptions
Here we list a few fundamental assumptions that we require for our analysis.
Assumption 1 (Setting).
The regularization term is a log-barrier, i.e.,
$$R(\theta) = \frac{1}{|\mathcal{S}||\mathcal{A}|} \sum_{s \in \mathcal{S}} \sum_{a \in \mathcal{A}} \log \pi_\theta(a|s),$$
and the policy is parametrized to be a soft-max, i.e., $\pi_\theta(a|s) = \exp(\theta_{s,a}) / \sum_{a' \in \mathcal{A}} \exp(\theta_{s,a'})$, with $\theta \in \mathbb{R}^{|\mathcal{S}||\mathcal{A}|}$.
The first assumption concerns the form of the policy parametrization and the regularization. Notice that the regularization term here can also be seen as a relative entropy/KL regularization (with a uniform reference policy) (Agarwal et al. 2019). Such regularization terms are also widely adopted in practice (although typically with variations) (Peters, Mülling, and Altun 2010; Schulman, Chen, and Abbeel 2017).
With Assumption 1, the post-processing function in Algorithm 3 can, for example, be realized by first mixing the current soft-max policy with the uniform policy so that every action probability is at least the prescribed tolerance, and then returning the component-wise logarithm of the resulting policy as the new parameters, shifted by an arbitrary constant within each state (the soft-max parametrization is invariant to such per-state shifts).
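The following minimal sketch shows the soft-max parametrization of Assumption 1 together with one possible realization of the PostProcess step along the lines just described; the mixing construction is an assumption for illustration, not necessarily the paper's exact map.

```python
import numpy as np

def softmax_policy(theta):
    """pi_theta(a|s): row-wise soft-max of the |S| x |A| parameter table theta."""
    z = theta - theta.max(axis=1, keepdims=True)   # shift for numerical stability
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

def post_process(theta, eps):
    """A possible PostProcess: mix pi_theta with the uniform policy so that every
    action probability is at least eps / |A|, then return log-probabilities as the
    new parameters (valid since the soft-max is invariant to per-state shifts)."""
    pi = softmax_policy(theta)
    num_actions = pi.shape[1]
    pi_mixed = (1.0 - eps) * pi + eps / num_actions
    return np.log(pi_mixed)
```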
Assumption 2 (Policy gradient estimator).
There exist constants , such that for all , , we have almost surely and that
where , . In addition, , .
The second assumption requires that the gradient estimates are almost surely bounded, nearly unbiased and satisfy a bounded second-order moment growth condition. This is a slight generalization of standard assumptions in the stochastic gradient descent literature (Bottou, Curtis, and Nocedal 2018). Additionally, we also require that the trajectory lengths grow at least logarithmically in the episode index, to control the loss of rewards due to truncation. For notational simplicity, hereafter we omit the explicit dependence on the sampled trajectory when we write down the stochastic gradients.
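To make the verbal description above concrete, one generic way to write such conditions is displayed below; the symbols $G$, $b_k$, $M_1$, $M_2$ are illustrative placeholders rather than the paper's own constants. Writing $\mathbb{E}_k[\cdot]$ for the conditional expectation given the $k$-th iterate,
$$\big\|\widehat{\nabla} L_{\lambda_k}(\theta_k)\big\| \le G \ \ \text{a.s.}, \qquad \Big\|\mathbb{E}_k\big[\widehat{\nabla} L_{\lambda_k}(\theta_k)\big] - \nabla L_{\lambda_k}(\theta_k)\Big\| \le b_k, \qquad \mathbb{E}_k\Big[\big\|\widehat{\nabla} L_{\lambda_k}(\theta_k)\big\|^2\Big] \le M_1 + M_2 \big\|\nabla L_{\lambda_k}(\theta_k)\big\|^2,$$
typically together with a summability condition on the bias terms $b_k$ and trajectory lengths growing at least logarithmically in $k$.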
Notice that Assumption 2 immediately holds if $\widehat{\nabla} L_{\lambda_k}(\theta_k)$ is unbiased and has a bounded second-order moment. We have implicitly assumed that $L_\lambda$ is differentiable, which is justified by the smoothness result referenced below (cf. Proposition 1).
3.2 REINFORCE gradient estimations
Now we introduce REINFORCE gradient estimation with baselines, and specify the hyper-parameters under which the technical Assumption 2 holds, when operating under the setting Assumption 1.
REINFORCE gradient estimation with log-barrier regularization takes the following form:
$$\widehat{\nabla} L_\lambda(\theta) = \sum_{t=0}^{H'} \gamma^{t}\, \big(\widehat{Q}_t - b\big)\, \nabla_\theta \log \pi_\theta(a_t | s_t) + \lambda \nabla R(\theta), \qquad \widehat{Q}_t := \sum_{t'=t}^{H} \gamma^{t'-t}\, r(s_{t'}, a_{t'}), \qquad (9)$$
where $(s_0, a_0, \dots, s_H, a_H)$ is the sampled trajectory, $\widehat{Q}_t$ is the discounted reward-to-go from step $t$, and the second term above corresponds to the gradient of the regularization $\lambda R(\theta)$. Notice that here the outer summation is only up to $H' \leq H$, which ensures that $\widehat{Q}_t$ is sufficiently accurate. Here $b$ is called the baseline, and is required to be independent of the trajectory (Agarwal, Jiang, and Kakade 2019, §4.1). The purpose of subtracting $b$ from the approximate $Q$-values is to (potentially) reduce the variance of the “plain” REINFORCE gradient estimation, which corresponds to the case $b = 0$.
With this we have the following result, the proof of which can be found in the Appendix.
Lemma 2.
This result extends without modification to non-stationary baselines $b_k$, as long as each $b_k$ is independent of the trajectory $\tau_k$ and uniformly bounded in $k$. Note that the explicit upper bound on the variance in Lemma 2 is pessimistic, and in practice the variance of the baseline-augmented estimator is usually much smaller than that of the plain estimator with appropriate choices of baselines (e.g., the adaptive reinforcement baselines (Williams 1992; Zhao et al. 2011)), although the latter has a smaller upper bound as stated in Lemma 2.
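To make the estimator (9) concrete, here is a minimal sketch for the tabular soft-max setting, reusing `softmax_policy` from the earlier sketch. The details are assumptions made for illustration: trajectories are lists of (state, action, reward) tuples, `trunc` plays the role of the truncation index $H'$, and the log-barrier is normalized by $1/(|\mathcal{S}||\mathcal{A}|)$ as in §3.1.

```python
import numpy as np

def reinforce_gradient(theta, trajectory, lam, gamma, baseline=0.0, trunc=None):
    """REINFORCE gradient of the log-barrier-regularized objective under the
    soft-max parametrization (a sketch of (9), not the paper's exact estimator)."""
    pi = softmax_policy(theta)                    # row-stochastic |S| x |A| matrix
    num_states, num_actions = pi.shape
    grad = np.zeros_like(theta, dtype=float)

    # Reward-to-go estimates \hat{Q}_t = sum_{t' >= t} gamma^(t' - t) * r_{t'}.
    rewards = [r for (_, _, r) in trajectory]
    q_hat = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        q_hat[t] = running

    # Policy-gradient term: sum_t gamma^t * (Q_t - b) * grad_theta log pi(a_t | s_t).
    h_prime = len(trajectory) if trunc is None else min(trunc, len(trajectory))
    for t in range(h_prime):
        s, a, _ = trajectory[t]
        score = -pi[s].copy()
        score[a] += 1.0                           # grad wrt theta[s, :] of log pi(a|s)
        grad[s] += (gamma ** t) * (q_hat[t] - baseline) * score

    # Gradient of the log-barrier lam / (|S||A|) * sum_{s,a} log pi(a|s).
    grad += lam * (1.0 - num_actions * pi) / (num_states * num_actions)
    return grad
```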
4 Main convergence results
4.1 Preliminary tools
We first present some preliminary tools for our analysis.
Non-convexity and control of “bad” episodes.
One of the key difficulties in applying policy gradient methods to solve an MDP problem towards global optimality is that problem (3) is in general non-convex (Agarwal et al. 2019). Fortunately, we have the following result, which connects the gradient of the surrogate objective with the global optimality gap of the original optimization problem (3).
Proposition 3 ((Agarwal et al. 2019, Theorem 5.3)).
Under Assumption 1, for any $\lambda > 0$, suppose that we have $\|\nabla_\theta L_\lambda(\theta)\|_2 \leq \epsilon_{\mathrm{opt}}$ and that $\epsilon_{\mathrm{opt}} \leq \lambda / (2|\mathcal{S}||\mathcal{A}|)$. Then $V(\pi_\theta) \geq V^\star - \frac{2\lambda}{1-\gamma} \left\| \frac{d^{\pi^\star}}{\rho} \right\|_{\infty}$.
Here for any policy $\pi$, $d^{\pi}(s) := (1-\gamma) \sum_{t=0}^{\infty} \gamma^{t} \Pr(s_t = s)$ is the discounted state visitation distribution (with $s_0 \sim \rho$), where $\Pr(s_t = s)$ is the probability of arriving at $s$ in step $t$ starting from $s_0 \sim \rho$ following policy $\pi$ in $\mathcal{M}$. In addition, the division in $d^{\pi^\star}/\rho$ is component-wise.
Now motivated by Proposition 3, when analyzing the regret up to episode $t$ in phase $m$, we define the set of “bad episodes” as those episodes in the phase whose iterates violate the gradient-norm condition of Proposition 3 (for the tolerance associated with $\lambda_m$).
Then one can show that for any episode outside this bad set, if we choose $\lambda_m$ proportional to the target tolerance, the per-episode sub-optimality is bounded by a constant multiple of that tolerance, while a trivial per-episode bound holds for the bad episodes due to the assumption that the rewards are between $0$ and $1$. We then establish a sub-linear (in $t$) bound on the size of the bad set, which serves as the key stepping stone for the overall sub-linear regret bound. The details of these arguments can be found in the Appendix.
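Schematically, and with problem-dependent constants suppressed, the decomposition reads
$$\mathrm{Regret}_m(t) \;\lesssim\; \underbrace{|\mathcal{B}_m(t)| \cdot \frac{1}{1-\gamma}}_{\text{bad episodes: trivial bound}} \;+\; \underbrace{\big(t - |\mathcal{B}_m(t)|\big) \cdot C\,\epsilon}_{\text{good episodes: Proposition 3}},$$
where $\mathcal{B}_m(t)$ denotes the bad set, $\epsilon$ is the target tolerance and $C$ is a problem-dependent constant; the notation here is ours for illustration, and the precise inequality is given in the Appendix.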
Doubling trick.
The second tool is a classical doubling trick that is commonly adopted in the design of online learning algorithms (Besson and Kaufmann 2018; Basei, Guo, and Hu 2020), which can be used to stitch together the regret over multiple learning phases in Algorithm 2.
Notice that Proposition 3 suggests that for any pre-specified tolerance, one can select the regularization parameter $\lambda$ proportional to that tolerance and then run (stochastic) gradient ascent to drive the gradient norm below the threshold in Proposition 3. To obtain the eventual convergence and regret bound in the long run, we apply the doubling trick, which specifies a growing phase length sequence with $T_{m+1} = 2T_m$ in Algorithm 2 and a suitably decaying sequence of regularization parameters $\lambda_m$.
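A minimal sketch of such a schedule is shown below; the halving of the regularization parameter is a placeholder choice, since the precise decay rate is the one prescribed in Theorem 6.

```python
def doubling_schedule(num_phases, t0, lam0, lam_decay=0.5):
    """Phase lengths double each phase, regularization shrinks geometrically.
    `lam_decay` is an illustrative placeholder for the decay rate in Theorem 6."""
    return [{"length": t0 * 2 ** m, "lam": lam0 * lam_decay ** m}
            for m in range(num_phases)]
```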
From high probability to almost sure convergence.
The last tool is an observation that an arbitrary anytime sub-linear high probability regret bound with logarithmic dependency on the confidence level immediately leads to almost sure convergence of the average regret with a corresponding asymptotic rate. Although such an observation seems to be informally well-known in the theoretical computer science community, we provide a compact formal discussion below to keep the paper self-contained.
Lemma 4.
Suppose that for any $\delta \in (0,1)$, with probability at least $1-\delta$, for all $K \geq 1$, we have
$$\mathrm{Regret}(K) \leq C_1\, K^{1-\beta} \log\!\big(C_2 K / \delta\big) \qquad (11)$$
for some constants $C_1, C_2 > 0$ and $\beta \in (0,1)$. Then we also have
$$\Pr\left(\limsup_{K \to \infty} \frac{\mathrm{Regret}(K)}{K^{1-\beta} \log K} < \infty\right) = 1.$$
To put it another way, we have
$$\frac{\mathrm{Regret}(K)}{K} \to 0 \quad \text{almost surely as } K \to \infty,$$
with an asymptotic rate of $O(K^{-\beta} \log K)$.
Notice that here we restrict the right-hand side of (11) to a rather specific form simply because our bounds below are all of this form. However, similar results hold for much more general forms of bounds.
4.2 Regret analysis
In this section, we establish the regret bound of Algorithm 2, when used with the REINFORCE gradient estimator from §3.2. We begin by bounding the regret of a single phase and then use the doubling trick to combine these into the overall regret bound.
Single phase analysis.
We begin by bounding the regret defined in (7) of each phase in Algorithm 2. Note that a single phase $m$ in Algorithm 2 is exactly Algorithm 1 terminated in episode $T_m$, with the step-size and the regularization parameter $\lambda_m$ held fixed throughout the phase. Also notice that for a given phase $m$, in order for Theorem 5 below to hold, we actually only need the conditions in Assumption 2 to be satisfied for this specific phase.
Theorem 5.
The proof as well as a more formal statement of Theorem 5 with details of the constants (cf. Theorem 9) are deferred to the Appendix. Here the constant in (12) is the smoothness constant from Proposition 1. We remark that when the tolerance is fixed, the regret bound (12) can be seen as a sub-linear (in $t$, as $t \to \infty$) regret term plus an error term that grows linearly in $t$ but is proportional to the tolerance. Alternatively, one can interpret it as follows:
Namely, the average regret in episode $t$ converges to a constant multiple of the pre-specified tolerance at a sub-linear rate (as $t \to \infty$).
Overall regret bound.
Now we stitch together the single phase regret bounds established above to obtain the overall regret bound of Algorithm 2, with the help of the doubling trick. This leads to the following theorem.
Theorem 6 (Regret for REINFORCE).
Under Assumption 1, suppose that for each , we choose , with and , and choose , , and . In addition, suppose that (9) is adopted to evaluate , with , for any (where is a constant), and that (10) holds for for all . Then we have for any , with probability at least , for any , we have
(13) |
In addition, we have
(14) |
with asymptotic rate .
Note that the almost sure convergence (14) is immediately implied by the high probability bound (13) via Lemma 4. Here for clarity, we have restricted the statement to the case when we use the REINFORCE gradient estimation from §3.2. A more general counterpart result can be found in Appendix B.3, from which Theorem 6 is immediately implied. See also Appendix C for a more formal statement of the regret bound (cf. Corollary 11) for REINFORCE with detailed constants.
Notice that compared to the single phase regret bound in (12), the overall regret bound in (13) now gets rid of the dependency on a pre-specified tolerance. This should be attributed to the adaptivity in the regularization parameter sequence. Also notice that here we have followed the convention of the reinforcement learning literature to make all the problem dependent quantities (e.g., $|\mathcal{S}|$, $|\mathcal{A}|$, $1/(1-\gamma)$, etc.) explicit in the big-$O$ notation.
One crucial difference between our regret bound and those in the existing literature of vanilla policy gradient methods in the general MDP settings (which are sometimes not stated in the form of regret, but can be easily deduced from their proofs in those cases) is that the previous results either require exact and deterministic updates or contain a non-vanishing $O(1/B^{\alpha})$ term, with $B$ being the mini-batch size (of the trajectories) and $\alpha > 0$ being some exponent (with a typical value of $1/2$). By removing such non-vanishing terms, we obtain the first sub-linear regret bound for model-free vanilla policy gradient methods with finite mini-batch sizes.
5 Extension to mini-batch updates
We now consider extending our previous results to mini-batch settings, by modifying Algorithm 2 as follows. Firstly, in each inner iteration, instead of sampling only one trajectory in line 5, we sample $B$ independent trajectories from $\mathcal{M}$ following the current policy and then compute an approximate gradient $\widehat{\nabla}^{(b)} L_\lambda(\theta)$ ($b = 1, \dots, B$) using each of these trajectories. We then modify the update in line 6 to use the averaged estimate, i.e.,
$$\theta \leftarrow \theta + \eta \cdot \frac{1}{B} \sum_{b=1}^{B} \widehat{\nabla}^{(b)} L_\lambda(\theta).$$
See Algorithm 4 in Appendix D for a formal description of the modified algorithm.
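A minimal sketch of the averaged gradient computation used by this update, reusing `reinforce_gradient` from §3.2 (the name and signature are ours, for illustration):

```python
def minibatch_reinforce_gradient(theta, trajectories, lam, gamma, baseline=0.0):
    """Average single-trajectory REINFORCE gradients over a fixed batch of B
    independent rollouts, as in the modified update of line 6."""
    grads = [reinforce_gradient(theta, tau, lam, gamma, baseline)
             for tau in trajectories]
    return sum(grads) / len(grads)
```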
Regret with mini-batches.
Notice that since each inner iteration (in Algorithm 4) now consists of $B$ episodes, we need to slightly modify the definition of the regret up to episode $K$ ($K \geq 1$) as follows:
(15) |
where the first term sums over the completed inner iterations/steps and the summand is the same as in (6). The above definition accounts for the fact that each of the $B$ episodes in an inner iteration/step corresponds to the same iterate and hence has the same contribution to the regret. The second term on the right-hand side accounts for the contribution of the remaining episodes (among a total of $B$ episodes) in the last, partially completed inner iteration/step.
Then the following regret bound can be established.
Corollary 7 (Regret for mini-batch REINFORCE).
Under Assumption 1, suppose that for each , we choose , with and , and choose , , and . In addition, suppose that the assumptions in Lemma 12 hold (note that Assumption 1 and already automatically hold by the other assumptions). Then we have for any , with probability at least , jointly for all episodes , we have (for the mini-batch Algorithm 4)
In addition, we also have
with an asymptotic rate of
Again, we note that the almost sure convergence above is directly implied by the high probability bound via Lemma 4. The proof and a more formal statement of this corollary (cf. Corollary 13) can be found in Appendix D. In particular, when $B = 1$, the bound above reduces to (13). In addition, we can see that there might be a trade-off between the terms of the bound that shrink as the batch size $B$ grows and those that grow with $B$. The intuition behind this is a trade-off between lower variance with larger batch sizes and more frequent updates with smaller batch sizes.
6 Conclusion and open problems
In this work, we establish the global convergence rates of practical policy gradient algorithms with a fixed size mini-batch of trajectories combined with REINFORCE gradient estimation.
Although in §4 and §5, we only instantiate the bounds for the REINFORCE gradient estimators, we note that our general results (in particular, Theorem 10 in Appendix B.3) can be easily applied to other gradient estimators (e.g., actor-critic and state-action visitation measure based estimators) as well, as long as one can verify the existence of the constants in Assumption 2 in a similar way to Lemma 2. In addition, one can also easily derive sample complexity results as by-products of our analysis. In fact, our proof of Theorem 5 immediately implies a sample complexity bound (for Algorithm 1 with REINFORCE gradient estimators and a constant regularization parameter) for any pre-specified tolerance $\epsilon$, where we use $\widetilde{O}(\cdot)$ to indicate the big-$O$ notation with logarithmic terms suppressed. We have focused only on regret in this paper mainly for clarity purposes.
It is also relatively straightforward to extend our results to finite horizon non-stationary settings, in which the soft-max policy parametrization will have a dimension of $H|\mathcal{S}||\mathcal{A}|$ and different policy gradient estimators can be adopted (without trajectory truncation), with $H$ being the horizon of each episode. In this case, it's also easy to rewrite the regret bound as a function of the total number of time steps $T = KH$, where $K$ is the total number of episodes. Other straightforward extensions include refined convergence to stationary points (in both almost sure and high probability senses and with no requirement on large batch sizes), and inexact convergence results when the bias sequence (cf. Assumption 2) is not square summable (e.g., when the trajectory length is fixed or not growing sufficiently fast).
There are also several open problems that may be resolved by combining the techniques introduced in this paper with existing results in the literature. Firstly, it would be desirable to remove the “exploration” assumption that the initial distribution is component-wise positive. This may be achieved by combining our results with the policy cover technique in (Agarwal et al. 2020) or the optimistic bonus tricks in (Cai et al. 2019; Efroni et al. 2020). Secondly, the bounds in our paper are likely far from optimal (i.e., sharp). Hence it would be desirable to either refine our analysis or apply our techniques to accelerated policy gradient methods (e.g., IS-MBPG (Huang et al. 2020)) to obtain better global convergence rates and/or last-iterate convergence. Thirdly, it would be very interesting to see if global convergence results still hold for REINFORCE when the relative entropy regularization term used in this paper is replaced with the practically adopted entropy regularization term in the literature. The answer is affirmative when exact gradient estimations are available (Mei et al. 2020; Cen et al. 2020), but it remains unknown how these results might be generalized to the stochastic settings in our paper. We conjecture that entropy regularization leads to better global convergence rates and can help us remove the necessity of the PostProcess steps in Algorithm 2 as they are uniformly bounded. Finally, one may also consider relaxing the uniform bound assumption on the rewards to instead being sub-Gaussian, introducing function approximation, and extending our results to natural policy gradient and actor-critic methods as well as more modern policy gradient methods like DPG, PPO and TRPO.
Acknowledgment
We would like to thank Anran Hu for pointing out a mistake in the proof of an early draft of this paper. We thank Guillermo Angeris, Shane Barratt, Haotian Gu, Xin Guo, Yusuke Kikuchi, Bennet Meyers, Xinyue Shen, Mahan Tajrobehkar, Jonathan Tuck, Jiaming Wang and Xiaoli Wei for their feedback on some preliminary results in this paper. We thank Junyu Zhang for several detailed and fruitful discussions of the draft. We also thank the anonymous (meta-)reviewers for the great comments and suggestions. Jongho Kim is supported by Samsung Scholarship.
References
- Abbasi-Yadkori et al. (2019) Abbasi-Yadkori, Y.; Bartlett, P.; Bhatia, K.; Lazic, N.; Szepesvari, C.; and Weisz, G. 2019. Politex: Regret bounds for policy iteration using expert prediction. In International Conference on Machine Learning, 3692–3702.
- Agarwal et al. (2020) Agarwal, A.; Henaff, M.; Kakade, S.; and Sun, W. 2020. PC-PG: Policy Cover Directed Exploration for Provable Policy Gradient Learning. arXiv preprint arXiv:2007.08459 .
- Agarwal, Jiang, and Kakade (2019) Agarwal, A.; Jiang, N.; and Kakade, S. 2019. Reinforcement Learning: Theory and Algorithms. Technical report, Department of Computer Science, University of Washington.
- Agarwal et al. (2019) Agarwal, A.; Kakade, S.; Lee, J.; and Mahajan, G. 2019. Optimality and approximation with policy gradient methods in Markov decision processes. arXiv preprint arXiv:1908.00261 .
- Basei, Guo, and Hu (2020) Basei, M.; Guo, X.; and Hu, A. 2020. Linear Quadratic Reinforcement Learning: Sublinear Regret in the Episodic Continuous-Time Framework. arXiv preprint arXiv:2006.15316 .
- Baxter and Bartlett (2001) Baxter, J.; and Bartlett, P. 2001. Infinite-horizon policy-gradient estimation. Journal of Artificial Intelligence Research 15: 319–350.
- Bertsekas (2017) Bertsekas, D. 2017. Dynamic programming and optimal control, volume II. Athena scientific Belmont, MA, 4th edition.
- Besson and Kaufmann (2018) Besson, L.; and Kaufmann, E. 2018. What doubling tricks can and can’t do for multi-armed bandits. arXiv preprint arXiv:1803.06971 .
- Bhandari and Russo (2019) Bhandari, J.; and Russo, D. 2019. Global optimality guarantees for policy gradient methods. arXiv preprint arXiv:1906.01786 .
- Bhandari and Russo (2020) Bhandari, J.; and Russo, D. 2020. A Note on the Linear Convergence of Policy Gradient Methods. arXiv preprint arXiv:2007.11120 .
- Bottou, Curtis, and Nocedal (2018) Bottou, L.; Curtis, F.; and Nocedal, J. 2018. Optimization methods for large-scale machine learning. SIAM Review 60(2): 223–311.
- Cai et al. (2019) Cai, Q.; Yang, Z.; Jin, C.; and Wang, Z. 2019. Provably efficient exploration in policy optimization. arXiv preprint arXiv:1912.05830 .
- Carmona, Laurière, and Tan (2019) Carmona, R.; Laurière, M.; and Tan, Z. 2019. Linear-quadratic mean-field reinforcement learning: convergence of policy gradient methods. arXiv preprint arXiv:1910.04295 .
- Cen et al. (2020) Cen, S.; Cheng, C.; Chen, Y.; Wei, Y.; and Chi, Y. 2020. Fast Global Convergence of Natural Policy Gradient Methods with Entropy Regularization. arXiv preprint arXiv:2007.06558 .
- Efroni et al. (2020) Efroni, Y.; Shani, L.; Rosenberg, A.; and Mannor, S. 2020. Optimistic Policy Optimization with Bandit Feedback. arXiv preprint arXiv:2002.08243 .
- Even-Dar, Kakade, and Mansour (2004) Even-Dar, E.; Kakade, S.; and Mansour, Y. 2004. Experts in a Markov decision process. Advances in neural information processing systems 17: 401–408.
- Even-Dar, Kakade, and Mansour (2009) Even-Dar, E.; Kakade, S.; and Mansour, Y. 2009. Online Markov decision processes. Mathematics of Operations Research 34(3): 726–736.
- Even-Dar and Mansour (2003) Even-Dar, E.; and Mansour, Y. 2003. Learning rates for Q-learning. Journal of machine learning Research 5(Dec): 1–25.
- Fazel et al. (2018) Fazel, M.; Ge, R.; Kakade, S.; and Mesbahi, M. 2018. Global convergence of policy gradient methods for the linear quadratic regulator. arXiv preprint arXiv:1801.05039 .
- Fei et al. (2020) Fei, Y.; Yang, Z.; Wang, Z.; and Xie, Q. 2020. Dynamic regret of policy optimization in non-stationary environments. arXiv preprint arXiv:2007.00148 .
- Fu et al. (2019) Fu, Z.; Yang, Z.; Chen, Y.; and Wang, Z. 2019. Actor-critic provably finds Nash equilibria of linear-quadratic mean-field games. arXiv preprint arXiv:1910.07498 .
- Fu, Yang, and Wang (2020) Fu, Z.; Yang, Z.; and Wang, Z. 2020. Single-Timescale Actor-Critic Provably Finds Globally Optimal Policy. arXiv preprint arXiv:2008.00483 .
- Glynn (1986) Glynn, P. 1986. Stochastic approximation for Monte Carlo optimization. In Proceedings of the 18th conference on Winter simulation, 356–365.
- Gu et al. (2015) Gu, S.; Levine, S.; Sutskever, I.; and Mnih, A. 2015. MuProp: Unbiased backpropagation for stochastic neural networks. arXiv preprint arXiv:1511.05176 .
- Guo et al. (2020) Guo, X.; Hu, A.; Xu, R.; and Zhang, J. 2020. A General Framework for Learning Mean-Field Games. arXiv preprint arXiv:2003.06069 .
- Guu et al. (2017) Guu, K.; Pasupat, P.; Liu, E.; and Liang, P. 2017. From language to programs: Bridging reinforcement learning and maximum marginal likelihood. arXiv preprint arXiv:1704.07926 .
- Huang et al. (2020) Huang, F.; Gao, S.; Pei, J.; and Huang, H. 2020. Momentum-Based Policy Gradient Methods. arXiv preprint arXiv:2007.06680 .
- Jaksch, Ortner, and Auer (2010) Jaksch, T.; Ortner, R.; and Auer, P. 2010. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research 11(Apr): 1563–1600.
- Jin et al. (2018) Jin, C.; Allen-Zhu, Z.; Bubeck, S.; and Jordan, M. 2018. Is Q-learning provably efficient? In Advances in Neural Information Processing Systems, 4863–4873.
- Jin, Schmitt, and Wen (2020) Jin, Z.; Schmitt, J.; and Wen, Z. 2020. On the Analysis of Model-free Methods for the Linear Quadratic Regulator. arXiv preprint arXiv:2007.03861 .
- Johnson et al. (2017) Johnson, J.; Hariharan, B.; Van Der Maaten, L.; Hoffman, J.; Fei-Fei, L.; Lawrence Zitnick, C.; and Girshick, R. 2017. Inferring and executing programs for visual reasoning. In Proceedings of the IEEE International Conference on Computer Vision, 2989–2998.
- Kakade et al. (2003) Kakade, S.; et al. 2003. On the sample complexity of reinforcement learning. Ph.D. thesis, University of London London, England.
- Klenke (2013) Klenke, A. 2013. Probability Theory: A Comprehensive Course. Springer Science & Business Media.
- Konda and Tsitsiklis (2003) Konda, V.; and Tsitsiklis, J. 2003. On actor-critic algorithms. SIAM journal on Control and Optimization 42(4): 1143–1166.
- Kool, van Hoof, and Welling (2018) Kool, W.; van Hoof, H.; and Welling, M. 2018. Attention, learn to solve routing problems! arXiv preprint arXiv:1803.08475 .
- Kool, van Hoof, and Welling (2020) Kool, W.; van Hoof, H.; and Welling, M. 2020. Estimating gradients for discrete random variables by sampling without replacement. arXiv preprint arXiv:2002.06043 .
- Liu et al. (2019) Liu, B.; Cai, Q.; Yang, Z.; and Wang, Z. 2019. Neural proximal/trust region policy optimization attains globally optimal policy. arXiv preprint arXiv:1906.10306 .
- Ma et al. (2016) Ma, Y.; Zhao, T.; Hatano, K.; and Sugiyama, M. 2016. An online policy gradient algorithm for Markov decision processes with continuous states and actions. Neural Computation 28(3): 563–593.
- Malik et al. (2018) Malik, D.; Pananjady, A.; Bhatia, K.; Khamaru, K.; Bartlett, P.; and Wainwright, M. 2018. Derivative-free methods for policy optimization: Guarantees for linear quadratic systems. arXiv preprint arXiv:1812.08305 .
- Marbach and Tsitsiklis (2001) Marbach, P.; and Tsitsiklis, J. 2001. Simulation-based optimization of Markov reward processes. IEEE Transactions on Automatic Control 46(2): 191–209.
- Mazumdar et al. (2019) Mazumdar, E.; Ratliff, L.; Jordan, M.; and Sastry, S. 2019. Policy-Gradient Algorithms Have No Guarantees of Convergence in Linear Quadratic Games. arXiv preprint arXiv:1907.03712 .
- Mei et al. (2020) Mei, J.; Xiao, C.; Szepesvari, C.; and Schuurmans, D. 2020. On the Global Convergence Rates of Softmax Policy Gradient Methods. arXiv preprint arXiv:2005.06392 .
- Mnih and Gregor (2014) Mnih, A.; and Gregor, K. 2014. Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030 .
- Mnih et al. (2016) Mnih, V.; Badia, A.; Mirza, M.; Graves, A.; Lillicrap, T.; Harley, T.; Silver, D.; and Kavukcuoglu, K. 2016. Asynchronous methods for deep reinforcement learning. In International conference on machine learning, 1928–1937.
- Neu et al. (2010) Neu, G.; Antos, A.; György, A.; and Szepesvári, C. 2010. Online Markov decision processes under bandit feedback. In Advances in Neural Information Processing Systems, 1804–1812.
- Neu, Jonsson, and Gómez (2017) Neu, G.; Jonsson, A.; and Gómez, V. 2017. A unified view of entropy-regularized Markov decision processes. arXiv preprint arXiv:1705.07798 .
- Papini et al. (2018) Papini, M.; Binaghi, D.; Canonaco, G.; Pirotta, M.; and Restelli, M. 2018. Stochastic variance-reduced policy gradient. arXiv preprint arXiv:1806.05618 .
- Peters, Mülling, and Altun (2010) Peters, J.; Mülling, K.; and Altun, Y. 2010. Relative entropy policy search. In AAAI, volume 10, 1607–1612. Atlanta.
- Pham et al. (2020) Pham, N.; Nguyen, L.; Phan, D.; Nguyen, P.; van Dijk, M.; and Tran-Dinh, Q. 2020. A Hybrid Stochastic Policy Gradient Algorithm for Reinforcement Learning. arXiv preprint arXiv:2003.00430 .
- Rennie et al. (2017) Rennie, S.; Marcheret, E.; Mroueh, Y.; Ross, J.; and Goel, V. 2017. Self-critical sequence training for image captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 7008–7024.
- Ryu and Boyd (2016) Ryu, E.; and Boyd, S. 2016. Primer on monotone operator methods. Appl. Comput. Math 15(1): 3–43.
- Schulman, Chen, and Abbeel (2017) Schulman, J.; Chen, X.; and Abbeel, P. 2017. Equivalence between policy gradients and soft Q-learning. arXiv preprint arXiv:1704.06440 .
- Schulman et al. (2015) Schulman, J.; Levine, S.; Abbeel, P.; Jordan, M.; and Moritz, P. 2015. Trust region policy optimization. In International conference on machine learning, 1889–1897.
- Schulman et al. (2017) Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; and Klimov, O. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 .
- Shani, Efroni, and Mannor (2019) Shani, L.; Efroni, Y.; and Mannor, S. 2019. Adaptive trust region policy optimization: Global convergence and faster rates for regularized MDPs. arXiv preprint arXiv:1909.02769 .
- Shen et al. (2019) Shen, Z.; Ribeiro, A.; Hassani, H.; Qian, H.; and Mi, C. 2019. Hessian aided policy gradient. In International Conference on Machine Learning, 5729–5738.
- Silver et al. (2014) Silver, D.; Lever, G.; Heess, N.; Degris, T.; Wierstra, D.; and Riedmiller, M. 2014. Deterministic Policy Gradient Algorithms. In Proceedings of the 31st International Conference on Machine Learning, volume 32(1), 387–395.
- Sutton and Barto (2018) Sutton, R.; and Barto, A. 2018. Reinforcement Learning: An Introduction. MIT press.
- Sutton et al. (2000) Sutton, R.; McAllester, D.; Singh, S.; and Mansour, Y. 2000. Policy gradient methods for reinforcement learning with function approximation. In Advances in neural information processing systems, 1057–1063.
- Wang et al. (2019) Wang, L.; Cai, Q.; Yang, Z.; and Wang, Z. 2019. Neural policy gradient methods: Global optimality and rates of convergence. arXiv preprint arXiv:1909.01150 .
- Williams (1992) Williams, R. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning 8(3-4): 229–256.
- Xiong et al. (2020) Xiong, H.; Xu, T.; Liang, Y.; and Zhang, W. 2020. Non-asymptotic Convergence of Adam-type Reinforcement Learning Algorithms under Markovian Sampling. arXiv preprint arXiv:2002.06286 .
- Xu, Gao, and Gu (2019) Xu, P.; Gao, F.; and Gu, Q. 2019. Sample efficient policy gradient methods with recursive variance reduction. arXiv preprint arXiv:1909.08610 .
- Xu, Gao, and Gu (2020) Xu, P.; Gao, F.; and Gu, Q. 2020. An improved convergence analysis of stochastic variance-reduced policy gradient. In Uncertainty in Artificial Intelligence, 541–551. PMLR.
- Yi et al. (2018) Yi, K.; Wu, J.; Gan, C.; Torralba, A.; Kohli, P.; and Tenenbaum, J. 2018. Neural-symbolic VQA: Disentangling reasoning from vision and language understanding. In Advances in neural information processing systems, 1031–1042.
- Yuan et al. (2020) Yuan, H.; Lian, X.; Liu, J.; and Zhou, Y. 2020. Stochastic Recursive Momentum for Policy Gradient Methods. arXiv preprint arXiv:2003.04302 .
- Zhang et al. (2020) Zhang, J.; Koppel, A.; Bedi, A.; Szepesvari, C.; and Wang, M. 2020. Variational Policy Gradient Method for Reinforcement Learning with General Utilities. arXiv preprint arXiv:2007.02151 .
- Zhang et al. (2019) Zhang, K.; Koppel, A.; Zhu, H.; and Başar, T. 2019. Global convergence of policy gradient methods to (almost) locally optimal policies. arXiv preprint arXiv:1906.08383 .
- Zhang, Yang, and Basar (2019) Zhang, K.; Yang, Z.; and Basar, T. 2019. Policy optimization provably converges to Nash equilibria in zero-sum linear quadratic games. In Advances in Neural Information Processing Systems, 11602–11614.
- Zhao, Li, and Wen (2019) Zhao, M.; Li, Y.; and Wen, Z. 2019. A Stochastic Trust-Region Framework for Policy Optimization. arXiv preprint arXiv:1911.11640 .
- Zhao et al. (2011) Zhao, T.; Hachiya, H.; Niu, G.; and Sugiyama, M. 2011. Analysis and improvement of policy gradient estimation. In Advances in Neural Information Processing Systems, 262–270.
- Zoph and Le (2016) Zoph, B.; and Le, Q. 2016. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578 .
Appendix
In this appendix, we provide detailed proofs and formal statements of the results in the main text. For notational simplicity, we sometimes abbreviate “almost sure” as “a.s.” or even omit “a.s.” whenever it is clear from the context. Also notice that as is always implicitly assumed in the literature of episodic reinforcement learning (e.g., cf. (Marbach and Tsitsiklis 2001)), given the current policy, the sampled trajectory is conditionally independent of all previous policies and trajectories.
Big- notation.
We first clarify the precise definition of the big-$O$ notation used in our statements. Let $f(K, p)$ be a function of the total number of episodes $K$ and all problem and algorithm dependent quantities (written jointly as a vector) $p$. Similarly, let $g(K, q)$ be a function of $K$ and some (subset of) problem and algorithm dependent quantities $q$, with $q$ a sub-vector of $p$. Then we write $f = O(g)$ to indicate that there exist constants $C > 0$ and $K_0 \geq 1$ (independent of $K$ and $p$), such that $f(K, p) \leq C\, g(K, q)$ for all $K \geq K_0$.
Appendix A Proofs for REINFORCE gradient estimations
A.1 Proof of Lemma 2 (with $b = 0$)
Proof.
We validate the three groups of conditions in Assumption 2 in order. For simplicity of exposition, we first restrict to the case $b = 0$, i.e., no baseline is incorporated.
Gradient estimation boundedness. Firstly, notice that since , we have . And by the soft-max parametrization in Assumption 1, we have
where the vector has all zero entries apart from the one corresponding to the state-action pair . Hence for any , and we see that
(16) |
Hence we can take .
Validation of near-unbiasedness. Secondly, notice that
where
And since , we have
and similarly
Hence for any , by taking
we have , and that
(18) |
which implies that
where the last two steps used Cauchy inequality, (18) and the fact that by (17),
Hence we can take and . Thus we have
and hence we can take . Notice that for notational simplicity, we have taken in the statement of the proposition.
Validation of bounded second-order moment growth. Finally, we bound the second-order moment of the policy gradient. In the following, for a random vector , we define , and similarly , where denotes the conditional variance given the -th iteration . Now define the constant as the uniform upper bound on the variance of the policy gradient vector, i.e.,
where is sampled from following policy , and .
Then we have for any by definition. In addition, since for any random vector ,
we have by the same argument as (16) that
Finally, since for any random vector ,
we have
and hence we can take and . This completes our proof. ∎
A.2 Proof of Lemma 2
Proof.
The proof is nearly identical to the case $b = 0$ above. Hence we only outline the proof while highlighting the differences.
Secondly, by the proof of (Agarwal, Jiang, and Kakade 2019, Lemma 4.10), we have
(19) |
Hence this term is the same as in the proof for the case $b = 0$ above, and hence we can take
Finally, by definition, can be written explicitly as
where is sampled from following policy , and .
Hence, similarly to the proof for the case $b = 0$ above, we have , and . ∎
Appendix B Proofs for convergence analysis
Firstly, we state a simple result from elementary analysis, which will be used repeatedly in our proof below.
Lemma 8.
Let for . Then we have
Proof.
The first inequality immediately comes from the fact that is monotonically decreasing in . The second inequality can be derived by noticing that for any , and that
This completes the proof. ∎
B.1 Proof of Lemma 4
Proof.
The proof is a direct application of the well-known Borel-Cantelli lemma (Klenke 2013). Let and define the events as
Then , and hence . Hence by the Borel-Cantelli lemma, we have
Finally, by noticing that the complement of is a subset of , the proof is complete. ∎
B.2 Proof of Theorem 5
Theorem 9 (Formal statement of Theorem 5).
Proof.
The proof consists of two parts. In the first part, we establish an upper bound on the weighted gradient norms sum of previous iterates in the current phase. The second part then utilizes this bound to establish an upper bound on the phase regret.
Bounding the weighted gradient norms sum.
By Proposition 1 and an equivalent definition of strong smoothness (cf. (Ryu and Boyd 2016, Appendix)), we have
Let . Then the above inequality implies that
(21) |
Now define (with ), then
(22) |
Here is the filtration up to episode in phase , i.e., the -algebra generated by all iterations up to the -th one. Notice that the second equality makes use of the fact that given the current policy, the correspondingly sampled trajectory is conditionally independent of all previous policies and trajectories.
In addition, for any ,
Here we use the fact that
which follows from the same argument as (16). The above inequality on also implies that , which, together with (22), implies that is a martingale. Notice that although is only defined for , we can augment the sequence by setting and for all , and it’s obvious that (22) and also hold for . And by saying that is a martingale, we are indeed referring to this (infinite length) augmented sequence of random variables.
Now by the definition of , it’s easy to see that , where
(23) |
Hence by Azuma-Hoeffding inequality, for any ,
(24) |
Then by summing up the inequalities (21) from to , we obtain that
(25) |
where we use the fact that the regularization term for all .
Hence we have
(26) |
Bounding the phase regret.
We now establish the regret bound in phase using (26).
Fix and . Let
For simplicity, assume for now that . Then since is decreasing in , we have
Hence we have (by the simple fact that for any )
(27) |
Now by Proposition 3 and the choice of , we have that for any ,
Since for any , , we have . Hence by (27), we have
This immediately implies that
(28) |
B.3 Overall regret bound for general policy gradient estimators
In this section, we state and prove the overall regret bound for general policy gradient estimators, which generalizes Theorem 6 for REINFORCE gradient estimators.
Theorem 10 (General regret bound).
Under Assumptions 1 and 2, suppose that for each , we choose for some , with and . In addition, suppose that we specify , choose , and for each . Then we have for any , with probability at least , for any , we have
(29) |
where
(30) |
Here the constants ,
with .
In addition, we also have
with an asymptotic rate of .
Remark 1.
Notice that the constant is a uniform lower bound of (), while the constants and are uniform upper bounds of and (), respectively. Here the constants are those defined in Theorem 9.
Remark 2.
In the big- notation above, we have (temporarily) hidden the problem dependent quantities, which will be made explicit when we specialize the results to the REINFORCE gradient estimation below.
Proof.
We first prove the high probability result. By (20) and the choices of and , we have that for any phase , with probability at least , for all ,
where ,
with and . Here we used the fact that , which then implies that and
We also used the fact that for any and that by the definition of PostProcess, .
Now recall that for any , we have
where . In addition, by the choices of , we have that for any ,
Hence for any , we have .
Thus we have that with probability at least , for any ,
where
Finally, noticing that , we have
which immediately imply (29) and (30). Notice that here we used the fact that (since ), and that for all and .
Finally, by invoking Lemma 4, we immediately obtain the almost sure convergence result. This completes our proof. ∎
Appendix C Formal statement of REINFORCE regret bound
Here we provide a slightly more formal restatement of Theorem 6, with details about the constants in the big-$O$ notation in the main text. Recall that our goal there is to specialize (and slightly simplify) the regret bound in Theorem 10 to the case when the REINFORCE gradient estimation in §3.2 is adopted to evaluate the stochastic gradients. In particular, we have the following corollary. The proof is done by simply combining Lemma 2 (with the constants given by their definitions in Theorem 6 or Corollary 11 below) and Theorem 10, with the specific choices of the hyper-parameters as well as the constants in Lemma 2 plugged in and some elementary algebraic simplifications, and is hence omitted.
Corollary 11 (Formal statement of Theorem 6).
Under Assumption 1, suppose that for each , we choose , with , and , and choose , , and . In addition, suppose that (9) is adopted to evaluate , with , for any (where is a constant), and that (10) holds for for all . Then we have for any , with probability at least , for any , we have
where
Here the constants are , and
Here is the variance bound defined in Lemma 2.
Suppose in addition that we specify , then we can simplify the regret bound into the following simple form:
In addition, we also have
with an asymptotic rate of .
Remark 3.
Notice that here (and below), with the specific choices of algorithm hyper-parameters and gradient estimators, we are finally able to make all the problem dependent quantities (e.g., $|\mathcal{S}|$, $|\mathcal{A}|$, $1/(1-\gamma)$, etc.) explicit in the big-$O$ notation, which is consistent with the convention of the reinforcement learning literature. Here the only hidden quantities are some absolute constants.
Appendix D Mini-batch phased policy gradient method
Here we formalize the mini-batch version of Algorithm 2 described at the beginning of §5 as Algorithm 4, and provide a formal statement as well as a proof for Corollary 7.
Firstly, we have the following lemma, which transfers the guarantees on the individual gradient estimates to the averaged gradient estimation. The proof follows directly from the fact that the variance of the sum of independent random variables is the sum of the variances, and is thus omitted.
Lemma 12.
Now we are ready to state and prove (a more formal version of) Corollary 7.
Corollary 13 (Formal statement of Corollary 7).
Under Assumption 1, suppose that for each , we choose , with , and , and choose , , and . In addition, suppose that the assumptions in Lemma 12 hold (note that Assumption 1 and already automatically hold by the other assumptions). Then we have for any , with probability at least , for any , we have that (for the mini-batch Algorithm 4)
where
Here the constants and are the same as in Corollary 11, while
Here is the variance bound defined in Lemma 2.
Suppose in addition that we specify . Then we can simplify the regret bound into the following simple form:
In addition, we also have
with an asymptotic rate of
Proof of Corollary 7.
By the definition of , we immediately see that
(31) |
where the regret on the right-hand side is the original regret (6) in the mini-batch setting, which is defined only for the total number of inner iterations/steps (instead of episodes, and so is not magnified by a factor of $B$). More precisely, we have that for any ,
Appendix E Related work
Policy gradient methods are a large family of algorithms for reinforcement learning that directly operate on the agent policy, rather than on the action-value function (Glynn 1986; Sutton and Barto 2018). Examples of policy gradient methods include REINFORCE (Williams 1992), A3C (Mnih et al. 2016), DPG (Silver et al. 2014), PPO (Schulman et al. 2017), and TRPO (Schulman et al. 2015), to name just a few. These methods seek to directly maximize the cumulative reward as a function of the policy; they are straightforward to implement and are amenable to function approximation. The asymptotic convergence of (some) policy gradient methods to a stationary point has long been established (Sutton et al. 2000; Konda and Tsitsiklis 2003; Marbach and Tsitsiklis 2001; Baxter and Bartlett 2001). The rate of such convergence is also known and has been improved more recently, with the help of variance reduction (Papini et al. 2018; Xu, Gao, and Gu 2020, 2019), Hessian information (Shen et al. 2019) and momentum techniques (Xiong et al. 2020; Yuan et al. 2020; Pham et al. 2020; Huang et al. 2020). In contrast, until recently, the global convergence of (vanilla) policy gradient methods (like REINFORCE) had not been established unless unrealistic assumptions like concavity of the expected cumulative reward function (Ma et al. 2016) are imposed. The only exceptions are TRPO (Neu, Jonsson, and Gómez 2017) and the soft-max natural policy gradient method with fully known models (Agarwal et al. 2019), which happen to be equivalent to the MDP Expert algorithms (Even-Dar, Kakade, and Mansour 2004, 2009; Neu et al. 2010).
In the past two years, a line of research on the global convergence theory for (both vanilla and natural) policy gradient methods has emerged. By using a gradient domination property of the cumulative reward, global convergence of (both vanilla and natural) policy gradient methods is first established for linear-quadratic regulators (Fazel et al. 2018). For general Markov Decision Processes (MDPs), (Zhang et al. 2019) establishes convergence to approximately locally optimal (i.e., second-order stationary) solutions for vanilla policy gradient methods. The global optimality of stationary points for general MDPs is shown in (Bhandari and Russo 2019), and rates of convergence towards globally optimal solutions for (both vanilla and natural) policy gradient methods with (neural network) function approximation are derived in (Agarwal et al. 2019; Wang et al. 2019). These convergence results are then improved by several very recent works focusing on exact gradient settings. In particular, (Mei et al. 2020) focuses on the more practically relevant soft-max parametrization and vanilla policy gradient methods and improves the results of (Agarwal et al. 2019) by removing the requirement of the relative entropy regularization and obtaining faster convergence rates; (Bhandari and Russo 2020) obtains linear convergence for a general class of policy gradient methods; (Cen et al. 2020) shows local quadratic convergence of natural policy gradient methods; and (Zhang et al. 2020) extends the results to reinforcement learning with general utilities. For more modern policy gradient methods, (Zhao, Li, and Wen 2019) establishes the asymptotic global convergence of TRPO, while (Liu et al. 2019) further derives the global convergence rates for PPO and TRPO. These rates are then improved in (Shani, Efroni, and Mannor 2019) for TRPO with adaptive regularization terms. Very recently, (Fu, Yang, and Wang 2020) extends these results to obtain the global convergence rates of single-timescale actor-critic methods with PPO actor updates, and (Agarwal et al. 2020) derives global convergence rates of a new policy gradient algorithm combining natural policy gradient methods with a policy cover technique, and shows that the algorithm entails better exploration behavior and hence removes the necessity of access to a fully supported initial distribution $\rho$, which is assumed in most other works on global convergence of policy gradient methods (including our work). All the above works either require exact and deterministic updates or mini-batch updates with a diverging mini-batch size.
Lately, (Jin, Schmitt, and Wen 2020) studies vanilla policy gradient methods using the REINFORCE gradient estimators computed with a single trajectory in each episode and obtains high probability sample complexity results, but the setting is restricted to linear-quadratic regulators and their bounds have polynomial dependency on $1/\delta$ (in contrast to our logarithmic dependency on $1/\delta$), where $\delta$ is the probability that the bounds are violated. The authors of (Abbasi-Yadkori et al. 2019) study natural policy gradient methods with a general high probability estimation oracle for state-action value functions (i.e., $Q$-functions) in the average reward settings, and establish high probability regret bounds for these algorithms. Finally, we remark that there are also some recent results on the global convergence rates of natural policy gradient methods in adversarial settings (with full-information feedback) (Cai et al. 2019), model-based natural policy gradient methods (Efroni et al. 2020) as well as extensions to non-stationary (Fei et al. 2020) and multi-agent game settings (Zhang, Yang, and Basar 2019; Mazumdar et al. 2019; Carmona, Laurière, and Tan 2019; Fu et al. 2019; Guo et al. 2020), which are beyond the scope of this paper.