
ACPO: A Policy Optimization Algorithm for Average MDPs with Constraints

Akhil Agnihotri    Rahul Jain    Haipeng Luo
Abstract

Reinforcement Learning (RL) for constrained MDPs (CMDPs) is an increasingly important problem for various applications. Often, the average criterion is more suitable than the discounted criterion. Yet, RL for average-CMDPs (ACMDPs) remains a challenging problem. Algorithms designed for discounted constrained RL problems often do not perform well in the average-CMDP setting. In this paper, we introduce a new policy optimization algorithm with function approximation for constrained MDPs with the average criterion. The Average-Constrained Policy Optimization (ACPO) algorithm is inspired by trust region-based policy optimization algorithms. We develop basic sensitivity theory for average CMDPs and then use the corresponding bounds in the design of the algorithm. We provide theoretical guarantees on its performance, and through extensive experimental work in various challenging OpenAI Gym environments, show its superior empirical performance compared to other state-of-the-art algorithms adapted for ACMDPs.

Machine Learning, ICML

1 Introduction

Over the last decade, we have seen an enormous impact of RL techniques on a variety of problems, from mastering complex games like Go (Silver et al., 2017) and StarCraft (Vinyals et al., 2019) to robotic control (Levine et al., 2016; Akkaya et al., 2019; Aractingi et al., 2023). Many of these have used RL policy optimization algorithms such as that of Schulman et al. (2017) for discounted MDPs (DMDPs). These have come in handy even in generative AI, e.g., for training large language models (LLMs) (Achiam et al., 2023). However, applications often require the satisfaction of constraints, e.g., physical safety of mobile robots (Hu et al., 2022), or safe language, image, or multi-modal output generation. Furthermore, when long-term rewards and safety are of concern, the average criterion is more suitable. Using discounted cost formulations (as a proxy for safety) incentivizes policy optimization algorithms to search for policies that are safe in the short term but not in the long term, because future costs are discounted.

The planning problem for MDPs with constraints is often formulated as a Constrained MDP (CMDP) model (Manne, 1960; Hordijk & Kallenberg, 1979; Altman, 1999). Unfortunately, CMDP models do not satisfy Bellman's principle of optimality, and hence dynamic programming (DP)-style algorithms cannot be developed for this setting. Instead, an alternative approach, the convex analytic approach (Borkar, 1988; Altman, 1999), is used: introducing occupation measures leads to optimization formulations. This can be done for both discounted (DCMDPs) and average-criterion (ACMDPs) constrained MDPs.

Theory and algorithms for RL deal with settings in which the MDP model is unknown. While DP-inspired RL algorithms such as DQN work remarkably well when combined with deep learning architectures for function approximation (Mnih et al., 2015), policy optimization algorithms such as TRPO (Schulman et al., 2015) and PPO (Schulman et al., 2017) have proven even more effective at solving high-dimensional problems. Since the discounted criterion is sometimes not suitable, policy optimization algorithms such as ATRPO (Zhang et al., 2021; Wan et al., 2021; Liao et al., 2022) have been developed for AMDPs. Furthermore, as already mentioned, certain RL applications have multiple objectives, one of which is to be optimized and the rest constrained. Thus, the Constrained Policy Optimization (CPO) algorithm (Achiam et al., 2017) was introduced for infinite-horizon DCMDP problems. Unfortunately, as motivated above, not all such applications fit the discounted-criterion formulation: there are settings, for example those with long-run safety requirements, where the average-CMDP model is a better fit. No scalable RL algorithms are currently available for such settings.

We note that the RL problem is usually harder than the corresponding planning problem; average-MDPs are more challenging than discounted MDPs; and constrained MDPs are more challenging than unconstrained ones. In this paper, we present the first practical policy optimization-based RL algorithm for average-constrained MDPs. We propose ACPO, a policy optimization algorithm for average-CMDPs with deep learning for function approximation. Our approach is motivated by theoretical guarantees that bound the difference between the long-run average rewards or costs of different policies. It draws inspiration from CPO (Achiam et al., 2017) (see also Tessler et al. (2019) for DCMDPs), which uses a policy improvement theorem for the discounted setting based on the trust region methods of Schulman et al. (2015). Unfortunately, this result trivializes in the average setting and hence cannot be used. Instead, we derive a new bound that depends on the worst-case mixing of the irreducible Markov chain associated with a policy. Our proposed algorithm, ACPO, is based on these theoretical developments. For experimental evaluation, we use several OpenAI Gym environments from Todorov et al. (2012), train large neural network policies, and demonstrate the effectiveness and superior performance of the ACPO algorithm compared to others.

Main Contributions and Novelty.

Algorithmic: We introduce the first practical policy optimization-based RL algorithm for average-constrained MDPs, with new and tight performance bounds and constraint violation guarantees. The algorithm draws inspiration from CPO (for discounted-CMDPs) and ATRPO (for average-MDPs) but is not a straightforward extension of either. One may posit that setting the discount factor $\gamma=1$ in CPO may suffice, but CPO does not perform well on average-CMDPs even with a large discount factor. Further, the constraint violation and policy degradation bounds of CPO do not hold in the average setting, and hence we develop novel bounds (in Corollary 3.5). In fact, the advantage function estimation routine in our algorithm (lines 4 and 6 in Algorithm 1) is also different from that in CPO, since the discounted-setting procedure cannot be used for the average setting (see Appendix A.3): we first approximate the average-reward bias and then use a one-step TD backup to estimate the action-bias function. Furthermore, policy optimization algorithms for the average case (Zhang & Ross, 2020; Wan et al., 2021; Liao et al., 2022) cannot incorporate constraints. We enable this by introducing sublevel sets of cost constraints. We also introduce an approximate but novel line search procedure that improves the empirical performance of our algorithm, an idea that may also help improve the performance of other policy optimization algorithms such as PPO.
Technical: Since ACPO is a trust region method, one can expect some overlap in analysis techniques with other similar algorithms. Nevertheless, our analysis has several novel elements: Lemma 3.3, where we use eigenvalues of the transition matrix to relate total variation of stationary distributions with that of the policies, and in Lemma A.3, we use the sublevel sets of constraints and projection inequality of Bregman divergence. Furthermore, several important results from CPO and ATRPO papers cannot be applied to the analysis of our algorithm.
Empirical: We evaluate the empirical performance of ACPO on the OpenAI Gym (MuJoCo) environments, a standard benchmark. We find that ACPO outperforms state-of-the-art deep RL algorithms such as CPO (Achiam et al., 2017), PCPO (Yang et al., 2020), PPO (Schulman et al., 2017), BVF-PPO (Satija et al., 2020), and ATRPO (Zhang et al., 2021). We use a large discount factor if the algorithm is not designed for the average setting, and a Lagrangian objective if it is not designed for the constrained setting, and in some cases both.
Significance: ACPO is the first practical trust region-style policy optimization algorithm for ACMDPs with excellent empirical performance. ACMDPs are important models because they allow the incorporation of long-term safety constraints, which matter not only in the context of safe robot learning and control, but also in safety-constrained RLHF fine-tuning and inference for LLMs (Moskovitz et al., 2023) and diffusion models. In the absence of suitable policy optimization algorithms for ACMDPs, researchers have resorted to using adaptations of PPO and the like.

Related Work. Learning constraint-satisfying policies has been explored in the deep RL literature (Agnihotri et al., 2019; Garcia & Fernandez, 2015). This can be done either (1) through expert annotations and demonstrations, as in (Rajeswaran et al., 2017; Gao et al., 2018), or (2) by exploration with constraint satisfaction, as in (Achiam et al., 2017; Tessler et al., 2019). While the former approach is not scalable since it requires human intervention, current state-of-the-art algorithms for the latter are not applicable to the average reward setting.

Previous work on RL with the average reward criterion has mostly attempted to extend stochastic approximation schemes for the tabular setting, such as Q-learning in (Abounadi et al., 2001; Wan et al., 2021), to the non-tabular setting with function approximation in (Wei et al., 2021; Zhang & Ross, 2020). (Chen et al., 2022) deals with online learning in a constrained MDP setting, but their aim is regret minimization or exploration, both in tabular settings. We are inspired by the work of (Zhang et al., 2021) to develop techniques required to derive the policy degradation and constraint violation bounds in Section 3.

The more recent works of (Bhatnagar & Lakshmanan, 2012) and (Calvo-Fullana et al., 2023) also fail to address our problem setting: the former tests on a 2x4 queueing network with a maximum state space of 128, while the latter tests on a grid of size 10x10 (at most 100 states). Moreover, they incorporate constraints during training only via a Lagrangian formulation. In this paper we show that simply doing this (for PPO and ATRPO, for example) leads to performance much inferior to that of ACPO, which outperforms current state-of-the-art algorithms in state spaces of up to $10^{96}$.

2 Preliminaries

A Markov decision process (MDP) is a tuple $(S,A,r,P,\mu)$, where $S$ is the set of states, $A$ is the set of actions, $r:S\times A\times S\to\mathbb{R}$ is the reward function, $P:S\times A\times S\to[0,1]$ is the transition probability function such that $P(s'|s,a)$ is the probability of transitioning to state $s'$ from state $s$ by taking action $a$, and $\mu:S\to[0,1]$ is the initial state distribution. A stationary policy $\pi:S\to\Delta(A)$ is a mapping from states to probability distributions over the actions, with $\pi(a|s)$ denoting the probability of selecting action $a$ in state $s$, and $\Delta(A)$ is the probability simplex over the action space $A$. We denote the set of all stationary policies by $\Pi$. For the average setting, we make the standard assumption that the MDP is ergodic and unichain.

In reinforcement learning, we aim to select a policy $\pi$ which maximizes a performance measure $J(\pi)$, which, for continuous control tasks, is either the discounted reward or the average reward criterion. Below, we briefly discuss both formulations.

2.1 Discounted criterion

For a given discount factor $\gamma\in(0,1)$, the discounted reward objective is defined as

$$J_{\gamma}(\pi) := \mathbb{E}_{\tau\sim\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t},s_{t+1})\right] = \frac{1}{1-\gamma}\;\mathbb{E}_{\substack{s\sim d_{\pi,\gamma}\\ a\sim\pi\\ s'\sim P(\cdot|s,a)}}[r(s,a,s')],$$

where $\tau$ refers to a sample trajectory $(s_{0},a_{0},s_{1},\cdots)$ generated by following policy $\pi$, that is, $a_{t}\sim\pi(\cdot|s_{t})$ and $s_{t+1}\sim P(\cdot|s_{t},a_{t})$; $d_{\pi,\gamma}$ is the discounted occupation measure defined by $d_{\pi,\gamma}(s)=(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}P_{\tau\sim\pi}(s_{t}=s)$, which is the discounted fraction of time spent in state $s$ while following policy $\pi$.

2.2 Average criterion

The average-reward objective is given by:

$$J(\pi) := \lim_{N\to\infty}\frac{1}{N}\;\mathbb{E}_{\tau\sim\pi}\left[\sum_{t=0}^{N-1}r(s_{t},a_{t},s_{t+1})\right] = \mathbb{E}_{\substack{s\sim d_{\pi}\\ a\sim\pi(\cdot|s)\\ s'\sim P(\cdot|s,a)}}[r(s,a,s')], \qquad (1)$$

where $d_{\pi}(s):=\lim_{N\to\infty}\frac{1}{N}\sum_{t=0}^{N-1}P_{\tau\sim\pi}(s_{t}=s)$ is the stationary state distribution under policy $\pi$. The limits in $J(\pi)$ and $d_{\pi}(s)$ are guaranteed to exist under our ergodic assumption. Since the MDP is aperiodic, it can also be shown that $d_{\pi}(s)=\lim_{t\to\infty}P_{\tau\sim\pi}(s_{t}=s)$. Since $d_{\pi,\gamma}(s)\to d_{\pi}(s)$ as $\gamma\to 1$ for all $s$, it can be shown that $\lim_{\gamma\to 1}(1-\gamma)J_{\gamma}(\pi)=J(\pi)$.
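To make these definitions concrete, the following minimal sketch computes $d_{\pi}$ and $J(\pi)$ for a small hypothetical chain and numerically illustrates that $(1-\gamma)J_{\gamma}(\pi)$ approaches $J(\pi)$ as $\gamma\to 1$; the transition matrix, rewards, and initial distribution below are illustrative only and are not taken from the paper.

```python
import numpy as np

# Hypothetical 3-state ergodic chain induced by a fixed policy pi:
# P_pi[s, s'] = sum_a pi(a|s) P(s'|s, a);  r_pi[s] = E_{a, s'}[r(s, a, s')].
P_pi = np.array([[0.6, 0.3, 0.1],
                 [0.2, 0.5, 0.3],
                 [0.1, 0.4, 0.5]])
r_pi = np.array([1.0, 0.0, 2.0])
mu = np.array([1.0, 0.0, 0.0])                     # initial state distribution

# Stationary distribution d_pi: solve d^T (I - P_pi) = 0 with sum(d) = 1.
A = np.vstack([(np.eye(3) - P_pi).T, np.ones(3)])
d_pi, *_ = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)

J_avg = d_pi @ r_pi                                # average-reward objective J(pi)

# Discounted objective J_gamma(pi) = mu^T (I - gamma P_pi)^{-1} r_pi;
# (1 - gamma) J_gamma(pi) tends to J(pi) as gamma -> 1.
for gamma in (0.9, 0.99, 0.999):
    J_gamma = mu @ np.linalg.solve(np.eye(3) - gamma * P_pi, r_pi)
    print(gamma, (1 - gamma) * J_gamma, J_avg)
```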

In the average setting, we seek to keep the estimate of the state value function unbiased, and hence introduce the average-reward bias function as

$$\widebar{V}^{\pi}(s) := \mathbb{E}_{\tau\sim\pi}\left[\sum_{t=0}^{\infty}\big(r(s_{t},a_{t},s_{t+1})-J(\pi)\big)\;\Big|\;s_{0}=s\right]$$

and the average-reward action-bias function as

$$\widebar{Q}^{\pi}(s,a) := \mathbb{E}_{\tau\sim\pi}\left[\sum_{t=0}^{\infty}\big(r(s_{t},a_{t},s_{t+1})-J(\pi)\big)\;\Big|\;s_{0}=s,\,a_{0}=a\right].$$

Finally, define the average-reward advantage function as $\widebar{A}^{\pi}(s,a):=\widebar{Q}^{\pi}(s,a)-\widebar{V}^{\pi}(s)$.
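As an illustration of how these quantities can be estimated from samples (the paper's own estimation routine is described in Appendix A.3), the hedged sketch below uses the one-step relation $\widebar{A}^{\pi}(s_{t},a_{t})\approx r_{t}-J(\pi)+\widebar{V}^{\pi}(s_{t+1})-\widebar{V}^{\pi}(s_{t})$, given an approximation of the bias function and an estimate of $J(\pi)$; the function and its inputs are hypothetical.

```python
import numpy as np

def average_reward_advantages(rewards, values, J_hat):
    """One-step TD estimates of the average-reward advantage along a sampled
    trajectory. `values` holds an approximation of the bias function V_bar at
    the T + 1 visited states, and `J_hat` is an estimate of J(pi):
        A_bar(s_t, a_t) ~= r_t - J_hat + V_bar(s_{t+1}) - V_bar(s_t).
    """
    r = np.asarray(rewards, dtype=float)       # length T
    v = np.asarray(values, dtype=float)        # length T + 1
    return r - J_hat + v[1:] - v[:-1]

# Hypothetical usage on a short trajectory of length 4.
adv = average_reward_advantages(rewards=[1.0, 0.0, 2.0, 1.0],
                                values=[0.3, -0.1, 0.4, 0.2, 0.0],
                                J_hat=0.9)
```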

2.3 Constrained MDPs

A constrained Markov decision process (CMDP) is an MDP augmented with constraints that restrict the set of allowable policies for that MDP. Specifically, we augment the MDP with a set $C$ of auxiliary cost functions $C_{1},\cdots,C_{m}$ (each function $C_{i}:S\times A\times S\to\mathbb{R}$ mapping transition tuples to costs, just like the reward function) and bounds $l_{1},\cdots,l_{m}$. Analogous to the value functions defined for the average reward criterion, we define the average cost objective with respect to the cost function $C_{i}$ as

$$J_{C_{i}}(\pi) := \lim_{N\to\infty}\frac{1}{N}\;\mathbb{E}_{\tau\sim\pi}\left[\sum_{t=0}^{N-1}C_{i}(s_{t},a_{t},s_{t+1})\right] = \mathbb{E}_{\substack{s\sim d_{\pi}\\ a\sim\pi\\ s'\sim P(\cdot|s,a)}}[C_{i}(s,a,s')], \qquad (2)$$

where $J_{C_{i}}$ is referred to as the average cost for constraint $C_{i}$. The set of feasible stationary policies for a CMDP is then given by $\Pi_{C}:=\left\{\pi\in\Pi : J_{C_{i}}(\pi)\leq l_{i},\ \forall\, i\in\{1,\cdots,m\}\right\}$. The goal is to find a policy $\pi^{\star}$ such that $\pi^{\star}\in\arg\max_{\pi\in\Pi_{C}}J(\pi)$.

However, finding an exact $\pi^{\star}$ is infeasible for large-scale problems. Instead, we aim to derive an iterative policy improvement algorithm that, given a current policy, improves upon it by approximately maximizing the increase in reward, while not violating the constraints by too much and not being too different from the current policy.

Lastly, analogous to $\widebar{V}^{\pi}$, $\widebar{Q}^{\pi}$, and $\widebar{A}^{\pi}$, we define similar quantities for the cost functions $C_{i}(\cdot)$, and denote them by $\widebar{V}_{C_{i}}^{\pi}$, $\widebar{Q}_{C_{i}}^{\pi}$, and $\widebar{A}_{C_{i}}^{\pi}$.

2.4 Policy Improvement for discounted CMDPs

In many on-policy constrained RL problems, we improve policies iteratively by maximizing a predefined function within a local region of the current best policy as in (Tessler et al., 2019; Achiam et al., 2017; Yang et al., 2020; Song et al., 2020). (Achiam et al., 2017) derived a policy improvement bound for the discounted CMDP setting as:

$$J_{\gamma}(\pi_{k+1})-J_{\gamma}(\pi_{k}) \geq \frac{1}{1-\gamma}\;\mathbb{E}_{\substack{s\sim d^{\pi_{k}}\\ a\sim\pi_{k+1}}}\left[A_{\gamma}^{\pi_{k}}(s,a)-\frac{2\gamma\epsilon^{\pi_{k+1}}}{1-\gamma}D_{TV}(\pi_{k+1}\,\|\,\pi_{k})[s]\right], \qquad (3)$$

where $A_{\gamma}^{\pi_{k}}$ is the discounted version of the advantage function, $\epsilon^{\pi_{k+1}}:=\max_{s}|\mathbb{E}_{a\sim\pi_{k+1}}[A_{\gamma}^{\pi_{k}}(s,a)]|$, and $D_{TV}(\pi_{k+1}\,\|\,\pi_{k})[s]=(1/2)\sum_{a}\left|\pi_{k+1}(a|s)-\pi_{k}(a|s)\right|$ is the total variation divergence between $\pi_{k+1}$ and $\pi_{k}$ at $s$. These results laid the foundations for on-policy constrained RL algorithms as in (Wu et al., 2017; Vuong et al., 2019).

However, Equation (3) does not generalize to the average setting ($\gamma\to 1$); see Appendix A.1. In the next section, we will derive a policy improvement bound for the average case and present an algorithm based on trust region methods, which will generate almost-monotonically improving policy iterates. Proofs of theorems and lemmas, if not already given, are available in Appendix A.

3 ACPO: The Average-Constrained Policy Optimization Algorithm

In this section, we present the main results of our work. For conciseness, we denote by $d_{\pi}\in\mathbb{R}^{|S|}$ the column vector whose components are $d_{\pi}(s)$, and by $P_{\pi}\in\mathbb{R}^{|S|\times|S|}$ the state transition probability matrix under policy $\pi$.

3.1 Policy Improvement for the Average-CMDP

Let $\pi'$ be the policy obtained via some update rule from the current policy $\pi$. Analogous to the discounted setting of a CMDP, we would like to characterize the performance difference $J(\pi')-J(\pi)$ by an expression which depends on $\pi$ and some divergence metric between the two policies.

Lemma 3.1.

(Zhang & Ross, 2020) Under the unichain assumption on the underlying Markov chain, for any stochastic policies $\pi$ and $\pi'$:

$$J(\pi')-J(\pi)=\mathbb{E}_{\substack{s\sim d_{\pi'}\\ a\sim\pi'}}\left[\widebar{A}^{\pi}(s,a)\right]. \qquad (4)$$

Note that this difference depends on the stationary state distribution of the new policy, $d_{\pi'}$. This is computationally impractical as we do not have access to $d_{\pi'}$. Fortunately, using the following lemma, we can show that if $d_{\pi}$ and $d_{\pi'}$ are "close" with respect to some metric, we can approximate Eq. (4) using samples from $d_{\pi}$.

Lemma 3.2.

Under the unichain assumption, for any stochastic policies $\pi$ and $\pi'$ we have:

$$\left|J(\pi')-J(\pi)-\mathbb{E}_{\substack{s\sim d_{\pi}\\ a\sim\pi'}}\left[\widebar{A}^{\pi}(s,a)\right]\right| \leq 2\epsilon\, D_{\text{TV}}(d_{\pi'}\,\|\,d_{\pi}), \qquad (5)$$

where $\epsilon=\max_{s}\big|\mathbb{E}_{a\sim\pi'}[\widebar{A}^{\pi}(s,a)]\big|$. See Appendix A.1 for the proof. Lemma 3.2 implies $J(\pi')\approx J(\pi)+\mathbb{E}_{s\sim d_{\pi},\,a\sim\pi'}[\widebar{A}^{\pi}(s,a)]$ when $d_{\pi}$ and $d_{\pi'}$ are "close". Having established this approximation, we next need to understand how a change in the policy affects the corresponding stationary state distribution. For this, we turn to standard analysis of the underlying Markov chain of the CMDP.

Under the ergodic assumption, $P_{\pi}$ is irreducible and hence its eigenvalues $\{\lambda_{\pi,i}\}_{i=1}^{|S|}$ satisfy $\lambda_{\pi,1}=1$ and $\lambda_{\pi,i\neq 1}<1$. For our analysis, we define $\sigma^{\pi}=\max_{i\neq 1}\,(1-\lambda_{\pi,i})^{-1/2}$, and using results from (Levene & Loizou, 2002) and (Doyle, 2009), we connect $\{\lambda_{\pi,i}\}_{i=1}^{|S|}$ to the sensitivity of the stationary distribution to changes in the policy via the result below.

Lemma 3.3.

Under the ergodic assumption, the divergence between the stationary distributions $d_{\pi}$ and $d_{\pi'}$ is upper bounded as:

$$D_{\text{TV}}(d_{\pi'}\,\|\,d_{\pi}) \leq \sigma^{\star}\,\mathbb{E}_{s\sim d_{\pi}}\big[D_{\text{TV}}(\pi'\,\|\,\pi)[s]\big], \qquad (6)$$

where $\sigma^{\star}=\max_{\pi}\sigma^{\pi}$. See Appendix A.1 for the proof. This bound is tighter and easier to compute than the one given by (Zhang et al., 2021), which replaces $\sigma^{\star}$ by $\kappa^{\star}=\max_{\pi}\kappa^{\pi}$, where $\kappa^{\pi}$ is Kemeny's constant (Kemeny & Snell, 1960), interpreted as the expected number of steps to reach any goal state, with the expectation taken with respect to the stationary distribution over those states.
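Since $\sigma^{\pi}$ is defined directly from the spectrum of $P_{\pi}$, it can be computed for a small, fully specified MDP. The sketch below (a hypothetical 3-state, 2-action example) computes $\sigma^{\pi}$ from the non-unit eigenvalues, using their magnitudes since the eigenvalues of a non-reversible chain may be complex, and compares the two sides of Eq. (6). Because the true $\sigma^{\star}$ is a maximum over all policies, taking the larger of the two policies' values gives only an illustrative check, not a verification of the lemma.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state, 2-action MDP: P[s, a, s'] is the transition kernel.
P = rng.dirichlet(np.ones(3), size=(3, 2))        # shape (|S|, |A|, |S|)
pi = rng.dirichlet(np.ones(2), size=3)            # pi(a|s),  shape (|S|, |A|)
pi_new = rng.dirichlet(np.ones(2), size=3)        # pi'(a|s)

def induced_chain(policy):
    return np.einsum('sa,sat->st', policy, P)     # P_pi[s, s']

def stationary(P_pi):
    A = np.vstack([(np.eye(3) - P_pi).T, np.ones(3)])
    d, *_ = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)
    return d

def sigma(P_pi):
    # sigma^pi = max_{i != 1} (1 - lambda_i)^{-1/2}; magnitudes are used here
    # because eigenvalues of a non-reversible chain may be complex.
    lam = np.linalg.eigvals(P_pi)
    lam = lam[np.argsort(-np.abs(lam))][1:]       # drop the unit eigenvalue
    return np.max(np.abs(1.0 - lam) ** -0.5)

d_old, d_new = stationary(induced_chain(pi)), stationary(induced_chain(pi_new))
lhs = 0.5 * np.abs(d_new - d_old).sum()           # D_TV(d_pi' || d_pi)
tv_pol = 0.5 * np.abs(pi_new - pi).sum(axis=1)    # D_TV(pi' || pi)[s]
rhs = max(sigma(induced_chain(pi)), sigma(induced_chain(pi_new))) * (d_old @ tv_pol)
print(lhs, rhs)
```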

Combining the bounds in Lemma 3.2 and Lemma 3.3 gives us the following result:

Proposition 3.4.

Under the ergodic assumption, the following bounds hold for any stochastic policies $\pi$ and $\pi'$:

$$L_{\pi}^{-}(\pi')\leq J(\pi')-J(\pi)\leq L_{\pi}^{+}(\pi') \qquad (7)$$

where

$$L_{\pi}^{\pm}(\pi') = \mathbb{E}_{\substack{s\sim d_{\pi}\\ a\sim\pi'}}\left[\widebar{A}^{\pi}(s,a)\right] \pm 2\nu\,\mathbb{E}_{s\sim d_{\pi}}\big[D_{\text{TV}}(\pi'\,\|\,\pi)[s]\big] \quad\text{and}\quad \nu=\sigma^{\star}\max_{s}\big|\mathbb{E}_{a\sim\pi'}[\widebar{A}^{\pi}(s,a)]\big|.$$

It is interesting to compare the inequalities of Equation (7) with Equation (4). The term $\mathbb{E}_{s\sim d_{\pi},\,a\sim\pi'}[\widebar{A}^{\pi}(s,a)]$ in Prop. 3.4 is a surrogate approximation to $J(\pi')-J(\pi)$, in the sense that it uses $d_{\pi}$ instead of $d_{\pi'}$. As discussed before, we do not have access to $d_{\pi'}$, since trajectories of the new policy are not available until the policy itself is updated. This surrogate is a first-order approximation to $J(\pi')-J(\pi)$ in the parameters of $\pi'$ in a neighborhood around $\pi$, as in (Kakade & Langford, 2002). Hence, Eq. (7) can be viewed as bounding the worst-case approximation error.

Extending this discussion to the cost function of our CMDP, similar expressions follow immediately.

Corollary 3.5.

For any policies $\pi',\pi$ and any cost function $C_{i}$, the following bound holds:

$$M_{\pi}^{-}(\pi')\leq J_{C_{i}}(\pi')-J_{C_{i}}(\pi)\leq M_{\pi}^{+}(\pi') \qquad (8)$$

where

$$M_{\pi}^{\pm}(\pi') = \mathbb{E}_{\substack{s\sim d_{\pi}\\ a\sim\pi'}}\left[\widebar{A}_{C_{i}}^{\pi}(s,a)\right] \pm 2\nu_{C_{i}}\,\mathbb{E}_{s\sim d_{\pi}}\big[D_{\text{TV}}(\pi'\,\|\,\pi)[s]\big] \quad\text{and}\quad \nu_{C_{i}}=\sigma^{\star}\max_{s}\big|\mathbb{E}_{a\sim\pi'}[\widebar{A}_{C_{i}}^{\pi}(s,a)]\big|.$$

Until now, we have been dealing with bounds stated in terms of the TV divergence between policies. In practice, however, bounds with respect to the KL divergence are more commonly used, as in (Schulman et al., 2015, 2016; Ma et al., 2021). From Pinsker's and Jensen's inequalities, we have

$$\mathbb{E}_{s\sim d_{\pi}}\big[D_{\text{TV}}(\pi'\,\|\,\pi)[s]\big] \leq \sqrt{\mathbb{E}_{s\sim d_{\pi}}\big[D_{\text{KL}}(\pi'\,\|\,\pi)[s]\big]/2}. \qquad (9)$$

We can thus use Eq. (9) in the bounds of Proposition 3.4 and Corollary 3.5 to make policy improvement guarantees: if we find updates such that $\pi_{k+1}\in\operatorname{arg\,max}_{\pi}L_{\pi_{k}}^{-}(\pi)$, then the policies improve monotonically, since at iteration $k$ we have $\mathbb{E}_{s\sim d_{\pi_{k}},\,a\sim\pi}[\widebar{A}^{\pi_{k}}(s,a)]=0$ and $\mathbb{E}_{s\sim d_{\pi_{k}}}[D_{\text{KL}}(\pi\,\|\,\pi_{k})[s]]=0$ for $\pi=\pi_{k}$, implying $J(\pi_{k+1})-J(\pi_{k})\geq 0$. However, this sequence does not guarantee constraint satisfaction at each iteration, so we now turn to trust region methods to incorporate constraints, perform policy improvement, and provide safety guarantees.

3.2 Trust Region Based Approach

For large or continuous state and action CMDPs, solving for the exact optimal policy is impractical. However, trust region-based policy optimization algorithms have proven effective for such problems, as in (Schulman et al., 2015, 2016, 2017; Achiam, 2017). In these approaches, we usually consider a parameterized policy class $\Pi_{\Theta}=\{\pi_{\theta}:\theta\in\Theta\}$ for tractability. In addition, for CMDPs, we also require the policy iterates to be feasible, so instead of optimizing just over $\Pi_{\Theta}$, we optimize over $\Pi_{\Theta}\cap\Pi_{C}$. However, the problem is much easier to solve if we introduce hard constraints rather than restricting the search set to $\Pi_{\Theta}\cap\Pi_{C}$. We therefore introduce the ACPO update, inspired by the trust region formulations above, as the following optimization problem:

$$\begin{aligned}
\underset{\pi\in\Pi_{\Theta}}{\text{maximize}}\quad & \mathbb{E}_{\substack{s\sim d_{\pi_{\theta_{k}}}\\ a\sim\pi}}[\widebar{A}^{\pi_{\theta_{k}}}(s,a)] \qquad (10)\\
\text{s.t.}\quad & J_{C_{i}}(\pi_{\theta_{k}})+\mathbb{E}_{\substack{s\sim d_{\pi_{\theta_{k}}}\\ a\sim\pi}}[\widebar{A}_{C_{i}}^{\pi_{\theta_{k}}}(s,a)]\leq l_{i}, \quad \forall\, i\\
& \bar{D}_{\text{KL}}(\pi\,\|\,\pi_{\theta_{k}})\leq\delta
\end{aligned}$$

where $\bar{D}_{\text{KL}}(\pi\,\|\,\pi_{\theta_{k}}):=\mathbb{E}_{s\sim d_{\pi_{\theta_{k}}}}[D_{\text{KL}}(\pi\,\|\,\pi_{\theta_{k}})[s]]$, $\widebar{A}^{\pi_{\theta_{k}}}(s,a)$ is the average-reward advantage function defined earlier, and $\delta>0$ is a step size. We use this form of update as it approximates the lower bound given in Proposition 3.4 and the upper bound given in Corollary 3.5.

In most cases, the trust region threshold in formulations like Eq. (10) is heuristically motivated. We now show that it can be quantitatively motivated and comes with worst-case performance degradation and constraint violation guarantees. The proof is in Appendix A.2.

Theorem 3.6.

Let $\pi_{\theta_{k+1}}$ be the optimal solution to Eq. (10) for some $\pi_{\theta_{k}}\in\Pi_{\Theta}$. Then, we have

$$J(\pi_{\theta_{k+1}})-J(\pi_{\theta_{k}}) \geq -\sqrt{2(\delta+V_{max})}\,\nu^{\pi_{\theta_{k+1}}} \qquad (11)$$

$$\text{and}\quad J_{C_{i}}(\pi_{\theta_{k+1}}) \leq l_{i} + \sqrt{2(\delta+V_{max})}\,\nu^{\pi_{\theta_{k+1}}}_{C_{i}} \quad \forall\, i, \qquad (12)$$

where $\nu^{\pi_{\theta_{k+1}}}=\sigma^{\pi_{\theta_{k+1}}}\max_{s}\big|\mathbb{E}_{a\sim\pi_{\theta_{k+1}}}[\widebar{A}^{\pi_{\theta_{k}}}(s,a)]\big|$, $\nu^{\pi_{\theta_{k+1}}}_{C_{i}}=\sigma^{\pi_{\theta_{k+1}}}\max_{i,s}\big|\mathbb{E}_{a\sim\pi_{\theta_{k+1}}}[\widebar{A}_{C_{i}}^{\pi_{\theta_{k}}}(s,a)]\big|$, $V_{max}=\max_{i}\beta_{i}^{2}$, and $\beta_{i}=[J_{C_{i}}(\pi_{\theta_{k}})-l_{i}]_{+}$.

Remark 3.7. Note that if the constraints are ignored (by setting $V_{max}=0$), then this bound is tighter than the one given in (Zhang et al., 2021) for the unconstrained average-reward setting.

However, the update rule of Eq. (10) is difficult to implement in practice as it takes steps that are too small, which degrades convergence. In addition, it requires exact knowledge of $\widebar{A}^{\pi_{\theta_{k}}}(s,a)$, which is computationally infeasible for large-scale problems. In the next section, we will introduce a specific sampling-based practical algorithm to alleviate these concerns.

4 Practical Implementation of ACPO

In this section, we introduce a practical version of the ACPO algorithm with a principled recovery method. With a small step size $\delta$, we can approximate the reward function and constraints with a first-order expansion, and approximate the KL divergence constraint with a second-order expansion. This gives us a new optimization problem which can be solved exactly using Lagrangian duality.

4.1 An Implementation of ACPO

Since we are working with a parameterized class, we shall now overload notation to use $\theta_{k}$ as the policy at iteration $k$, i.e., $\theta_{k}\equiv\pi_{\theta_{k}}$. In addition, we use $g$ to denote the gradient of the advantage function objective, $a_{i}$ to denote the gradient of the advantage function of the cost $C_{i}$, and $H$ the Hessian of the KL divergence. Formally,

$$\begin{aligned}
g &:= \nabla_{\theta}\,\mathbb{E}_{\substack{s\sim d_{\theta_{k}}\\ a\sim\theta}}[\widebar{A}^{\theta_{k}}(s,a)]\Big|_{\theta=\theta_{k}},\\
a_{i} &:= \nabla_{\theta}\,\mathbb{E}_{\substack{s\sim d_{\theta_{k}}\\ a\sim\theta}}[\widebar{A}^{\theta_{k}}_{C_{i}}(s,a)]\Big|_{\theta=\theta_{k}},\\
H &:= \nabla^{2}_{\theta}\,\mathbb{E}_{s\sim d_{\theta_{k}}}\big[D_{\text{KL}}(\theta\,\|\,\theta_{k})[s]\big]\Big|_{\theta=\theta_{k}}.
\end{aligned}$$

In addition, let $c_{i}:=J_{C_{i}}(\theta_{k})-l_{i}$. The approximation to the problem in Eq. (10) is:

$$\begin{aligned}
\max_{\theta}\quad & g^{T}(\theta-\theta_{k}) \qquad (13)\\
\text{s.t.}\quad & c_{i}+a_{i}^{T}(\theta-\theta_{k})\leq 0, \quad \forall\, i\\
& \tfrac{1}{2}(\theta-\theta_{k})^{T}H(\theta-\theta_{k})\leq\delta.
\end{aligned}$$

This is a convex optimization problem in which strong duality holds, and hence it can be solved using a Lagrangian method. The update rule for the dual problem then takes the form

$$\theta_{k+1}=\theta_{k}+\frac{1}{\lambda^{\star}}H^{-1}\left(g-A\mu^{\star}\right), \qquad (14)$$

where $\lambda^{\star}$ and $\mu^{\star}$ are the Lagrange multipliers satisfying the dual

$$\max_{\substack{\lambda\geq 0\\ \mu\succeq 0}}\;\frac{-1}{2\lambda}\left(g^{T}H^{-1}g-2r^{T}\mu+\mu^{T}S\mu\right)+\mu^{T}c-\frac{\lambda\delta}{2}, \qquad (15)$$

with $r:=g^{T}H^{-1}A$, $S:=A^{T}H^{-1}A$, $A:=[a_{1},\cdots,a_{m}]$, and $c:=[c_{1},\cdots,c_{m}]^{T}$.
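For intuition only, the following sketch solves the local approximation of Eq. (13) directly with a generic constrained solver instead of the dual update of Eqs. (14)-(15); in the actual algorithm $H$ is never formed explicitly and products $H^{-1}v$ are obtained with conjugate gradient iterations (Section 4.2). The dimensions and inputs in the example are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def acpo_local_update(theta_k, g, A, c, H, delta):
    """Solve the linear-quadratic approximation of Eq. (13) for the step
    x = theta - theta_k with a generic solver (illustrative sketch only).
    g: objective gradient; A: matrix with columns a_i; c: vector of c_i;
    H: KL Hessian (assumed positive definite); delta: trust-region radius."""
    cons = [{'type': 'ineq', 'fun': lambda x, i=i: -(c[i] + A[:, i] @ x)}
            for i in range(A.shape[1])]
    cons.append({'type': 'ineq', 'fun': lambda x: delta - 0.5 * x @ H @ x})
    res = minimize(lambda x: -(g @ x), np.zeros_like(theta_k),
                   method='SLSQP', constraints=cons)
    return theta_k + res.x

# Hypothetical 4-dimensional example with a single, currently satisfied constraint.
rng = np.random.default_rng(1)
theta_next = acpo_local_update(theta_k=np.zeros(4), g=rng.normal(size=4),
                               A=rng.normal(size=(4, 1)), c=np.array([-0.05]),
                               H=np.eye(4), delta=1e-4)
```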

4.2 Feasibility and Recovery

The approximation regime described in Eq. (13) requires $H$ to be invertible. For large parametric policies, products of the form $H^{-1}v$ are computed with the conjugate gradient method, as in (Schulman et al., 2015), rather than by forming $H$ explicitly. However, in practice, this approximation together with the associated statistical sampling errors may lead to violations of the approximate constraints, and hence to infeasible policies.
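For completeness, a minimal conjugate gradient sketch of the kind used to obtain $H^{-1}v$ from Hessian-vector products is shown below; the explicit $H$ in the usage example is only for illustration and is not how $H$ would be represented in practice.

```python
import numpy as np

def conjugate_gradient(hvp, b, iters=10, tol=1e-10):
    """Approximately solve H x = b given only Hessian-vector products hvp(v);
    H itself is never formed. Standard CG for a positive-definite H."""
    x = np.zeros_like(b)
    r = b.copy()                       # residual b - Hx with x = 0
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Hp = hvp(p)
        alpha = rs / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Usage with an explicit (hypothetical) H for illustration: x ~= H^{-1} g.
H = np.diag([1.0, 2.0, 4.0])
g = np.array([1.0, 1.0, 1.0])
x = conjugate_gradient(lambda v: H @ v, g)
```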

To rectify this potential infeasibility, in the case where we have only one constraint, one can recover a feasible policy by applying a recovery step, inspired by the TRPO update, on the cost surrogate:

$$\theta_{k+1/2}=\theta_{k}-\sqrt{2\delta}\left[t\cdot\frac{H^{-1}a}{\sqrt{a^{T}H^{-1}a}}+(1-t)\cdot\frac{H^{-1}g}{\sqrt{g^{T}H^{-1}g}}\right] \qquad (16)$$

where $t\in[0,1]$. In contrast to the policy recovery update of (Achiam et al., 2017), which uses only the cost advantage function gradient $a$, we also introduce the reward advantage function gradient $g$. This choice ensures recovery while simultaneously balancing the "regret", in terms of objective value, relative to the current policy $\pi_{k}$; in other words, we wish to find a policy $\pi_{k+1/2}$ that is as close as possible to $\pi_{k}$ in terms of objective value. We follow this step with a simple line search to find a feasible $\pi_{k+1}$ (see the sketch below). Based on this, Algorithm 1 provides a basic outline of ACPO. For more details of the algorithm, see Appendix A.3.
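A minimal sketch of this recovery step and of a simplified backtracking line search is given below, assuming access to a routine for $H^{-1}v$ (e.g., the conjugate gradient sketch above); the acceptance test and the explicit $H$ in the usage example are hypothetical placeholders, and the full procedure is in Appendix A.3.

```python
import numpy as np

def recovery_step(theta_k, g, a, solve_H, delta, t):
    """Recovery update of Eq. (16): a convex combination of the natural-gradient
    directions of the cost surrogate (weight t) and the reward surrogate
    (weight 1 - t). `solve_H(v)` returns H^{-1} v."""
    Hinv_a, Hinv_g = solve_H(a), solve_H(g)
    direction = (t * Hinv_a / np.sqrt(a @ Hinv_a)
                 + (1 - t) * Hinv_g / np.sqrt(g @ Hinv_g))
    return theta_k - np.sqrt(2 * delta) * direction

def backtracking_linesearch(theta_k, full_step, acceptable, max_backtracks=10):
    """Halve the step until `acceptable(theta)` holds; a simplified stand-in
    for the approximate-constraint-satisfaction line search of Algorithm 1."""
    for j in range(max_backtracks):
        theta = theta_k + (0.5 ** j) * full_step
        if acceptable(theta):
            return theta
    return theta_k                      # fall back to the current parameters

# Hypothetical usage with an explicit H for illustration.
H = np.diag([1.0, 2.0, 3.0])
g, a = np.array([1.0, 0.0, 0.5]), np.array([0.2, 0.4, 0.1])
theta_half = recovery_step(np.zeros(3), g, a,
                           solve_H=lambda v: np.linalg.solve(H, v),
                           delta=1e-4, t=0.75)
```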

Algorithm 1 Average-Constrained Policy Optimization (ACPO)
1:  Input: Initial random policy $\pi_{0}\in\Pi_{\theta}$
2:  for $k=0,1,2,\ldots,K$ do
3:     Sample a set of trajectories $\Omega$ using $\pi_{k}=\pi_{\theta_{k}}$
4:     Find estimates of $g, a, H, c$ using $\Omega$
5:     if a feasible solution to Equation (13) exists then
6:        Solve the dual problem in Equation (15) for $\lambda^{\star}_{k},\mu^{\star}_{k}$
7:        Find the policy update $\pi_{k+1}$ with Equation (14)
8:     else
9:        Find the recovery policy $\pi_{k+1/2}$ with Equation (16)
10:        Obtain $\pi_{k+1}$ by line search until approximate constraint satisfaction of Equation (13)
11:     end if
12:  end for

[Figure 1 panels: Average Rewards (top row) and Average Constraint values (bottom row) for (a) Ant Gather, (b) Bottleneck, and (c) Grid.]
Figure 1: Learning curves of the average reward and average constraint cost values vs. iterations (in $10^{4}$) for some algorithm-task pairs. Solid lines in each figure are the empirical means, while the shaded area represents one standard deviation, all over 5 runs. The dashed line in the constraint plots is the constraint threshold $l$. ATRPO and PPO are tested with constraints, which are included in their Lagrangian formulation. Additional results are available in Appendix A.6.

5 Empirical Results

We conducted a series of experiments to evaluate the relative performance of the ACPO algorithm and answer the following questions: (i) Does ACPO learn a sequence of constraint satisfying policies while maximizing the average reward in the long run? (ii) How does ACPO compare with the already existing constraint policy optimization algorithms which are applied with a large discount factor? (iii) What are the factors that affect the performance of ACPO?

We work with OpenAI Gym environments to train the learning agents on the following tasks: Gather, Circle, Grid, and Bottleneck (see Figure 3 in Appendix A.6.1 for more details on the environments). For our experiments we work with a single constraint, with policy recovery using Eq. (16) (this is only a computational limitation; ACPO can in principle handle multiple constraints). We compare ACPO with the following baseline algorithms: CPO (Achiam et al., 2017), ATRPO (Zhang et al., 2021), PCPO (Yang et al., 2020) (a close variant of CPO), BVF-PPO (Satija et al., 2020), and PPO (Schulman et al., 2017).

Although ATRPO and PPO originally do not incorporate constraints, for a fair comparison we introduce constraints using a Lagrangian. Also, CPO, PCPO, and PPO are run with $\gamma=0.999$. See Appendix A.5 for more details.

5.1 Evaluation Details and Protocol

For the Gather and Circle tasks we test two distinct agents: a point-mass ($S\subseteq\mathbb{R}^{9}$, $A\subseteq\mathbb{R}^{2}$) and an ant robot ($S\subseteq\mathbb{R}^{32}$, $A\subseteq\mathbb{R}^{8}$). The agent in the Bottleneck task has $S\subseteq\mathbb{R}^{71}$, $A\subseteq\mathbb{R}^{16}$, and in the Grid task $S\subseteq\mathbb{R}^{96}$, $A\subseteq\mathbb{R}^{4}$. We use two-hidden-layer neural networks to represent Gaussian policies for the tasks. For Gather and Circle the layer sizes are (64, 32), and for Grid and Bottleneck they are (16, 16) and (50, 25), respectively. We set the step size $\delta$ to $10^{-4}$, and for each task we conduct 5 runs to obtain the mean and standard deviation of the reward objective and cost constraint values during training. We train CPO, PCPO, and PPO with the discounted objective; however, evaluation and comparison with BVF-PPO, ATRPO, and ACPO (code of the ACPO implementation will be made available on GitHub) is done using the average reward objective (a standard evaluation scheme as in (Schulman et al., 2015; Wu et al., 2017; Vuong et al., 2019)).

For each environment, we train an agent for $10^{5}$ steps, and every $10^{3}$ steps we instantiate 10 evaluation trajectories with the current (deterministic) policy. For each of these trajectories, we calculate the trajectory average reward over the next $10^{3}$ steps and report the overall average reward as the mean over these 10 trajectories. Learning curves for the algorithms are compiled in Figure 1 (for Point-Circle, Point-Gather, and Ant-Circle see Appendix A.6).
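A sketch of this evaluation protocol, assuming a Gymnasium-style environment API and a deterministic `policy(obs)` callable (both assumptions made for illustration, not the paper's actual evaluation code), is shown below.

```python
import numpy as np

def evaluate_average_reward(env, policy, n_traj=10, horizon=1000):
    """Run n_traj rollouts of the current (deterministic) policy, compute each
    trajectory's average reward over up to `horizon` steps, and report the
    mean over trajectories."""
    traj_means = []
    for _ in range(n_traj):
        obs, _ = env.reset()                          # Gymnasium-style API assumed
        rewards = []
        for _ in range(horizon):
            obs, r, terminated, truncated, _ = env.step(policy(obs))
            rewards.append(r)
            if terminated or truncated:
                break
        traj_means.append(np.mean(rewards))
    return float(np.mean(traj_means))
```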

Since there are two objectives (rewards in the objective and costs in the constraints), we show the plots for configurations that maximize the reward objective while satisfying the cost constraint. See Appendix A.4 and A.5 for more details.

5.2 Performance Analysis

From Figure 1, we can see that ACPO is able to improve the reward objective while approximately satisfying the constraints on all tasks. In particular, ACPO best learns almost-constraint-satisfying, maximum average-reward policies across all tasks: in a simple Gather environment, ACPO is able to track the cost constraint values almost exactly to within the given threshold $l$; for the high-dimensional Grid and Bottleneck environments we see more constraint violations due to the complexity of the policy behavior. Regardless, in these environments ACPO still outperforms all other baselines.

ACPO vs. CPO/PCPO. For the Point-Gather environment (see Figure 4), we see that ACPO and CPO/PCPO initially give relatively similar performance, but eventually ACPO improves over CPO and PCPO by 52.5% and 36.1% in average reward, respectively. This superior performance does not come with more constraint violation. The Ant-Gather environment particularly brings out the effectiveness of ACPO, where it shows 41.1% and 61.5% improvement over CPO and PCPO, respectively, while satisfying the constraint. In the high-dimensional Bottleneck and Grid environments, ACPO is particularly quick at achieving low constraint violations while improving over PCPO and CPO in terms of average reward.

ACPO vs. Lagrangian ATRPO/PPO. One could consider using state-of-the-art unconstrained policy optimization algorithms with a Lagrangian formulation to solve the average-reward CMDP problem at hand, but we see that such an approach, although principled in theory, does not give satisfactory empirical results. This is particularly evident in the Ant-Circle, Ant-Gather, Bottleneck, and Grid environments, where Lagrangian ATRPO and PPO give the lowest rewards while not even satisfying the constraints. If ATRPO and PPO were used with constraints ignored, one would see higher rewards but even worse constraint violations, which is not useful.

ACPO vs. BVF-PPO. BVF-PPO has a rather different formulation from the other baselines, as it translates the cumulative cost constraints into state-based constraints, resulting in an almost-safe policy improvement method that maximizes returns at every step. However, we see that this approach fails to satisfy the constraints even in the moderately difficult Ant Gather environment, let alone the high-dimensional Bottleneck and Grid environments.

5.3 Dependence on the Recovery Regime

[Figure 2 panels: (a) Rewards, (b) Costs.]
Figure 2: Comparison of the performance of ACPO with different values of the hyperparameter $t$ in the Point-Circle environment. The x-axis is iterations in $10^{4}$. See Appendix A.6 for more details.

In Equation (16) we introduced a hyperparameter $t$, which provides an intuitive trade-off: either we purely decrease the constraint violations ($t=1$), or we decrease the average reward ($t=0$), which consequently decreases the constraint violation. The latter is principled in that if we decrease rewards, we are bound to decrease constraint violation due to the nature of the environments. Figure 2 shows the experiments we conducted with varying $t$. With $t=1$, we obtain the same recovery scheme as that of (Achiam et al., 2017). Our results show that this scheme does not always lead to the best performance, and that $t=0.75$ and $t=1$ perform the best across all tasks. See Appendix A.6 for a detailed study.

6 Conclusions

In this paper, we studied the problem of learning policies that maximize average rewards for a given CMDP with average-cost constraints. We showed that current algorithms with constraint violation bounds for the discounted setting do not generalize to the average setting. We then proposed a new algorithm, Average-Constrained Policy Optimization (ACPO), which is inspired by the TRPO class of algorithms but is based on theoretical sensitivity-type bounds for average-CMDPs that we derive and use in the design of the algorithm. Our experimental results on a range of OpenAI Gym environments (including some high-dimensional ones) show the effectiveness of ACPO on ACMDP RL problems, as well as its superior empirical performance vis-a-vis some current alternatives. A direction for future work is an implementation of ACPO that fully exploits its parallelization potential.

Impact: This paper not only advances the field of RL theory and algorithms but also introduces a practical and scalable algorithm (supported by theory) that is of utility in many fields, including LLMs, diffusion models, and robotic control. Currently, algorithms such as PPO are adapted for such use simply because of a lack of alternatives.

References

  • Abounadi et al. (2001) Abounadi, J., Bertsekas, D., and Borkar, V. S. Learning algorithms for markov decision processes with average cost. SIAM Journal on Control and Optimization, 40(3):681–698, 2001.
  • Achiam (2017) Achiam, J. UC Berkeley CS 285 (Fall 2017), Advanced Policy Gradients, 2017. URL: http://rail.eecs.berkeley.edu/deeprlcourse-fa17/f17docs/lecture_13_advanced_pg.pdf. Last visited on 2020/05/24.
  • Achiam et al. (2017) Achiam, J., Held, D., Tamar, A., and Abbeel, P. Constrained policy optimization. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp.  22–31. JMLR. org, 2017.
  • Achiam et al. (2023) Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F. L., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S., et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
  • Agnihotri et al. (2019) Agnihotri, A., Saraf, P., and Bapnad, K. R. A convolutional neural network approach towards self-driving cars. In 2019 IEEE 16th India Council International Conference (INDICON), pp.  1–4, 2019. doi: 10.1109/INDICON47234.2019.9030307.
  • Akkaya et al. (2019) Akkaya, I., Andrychowicz, M., Chociej, M., Litwin, M., McGrew, B., Petron, A., Paino, A., Plappert, M., Powell, G., Ribas, R., et al. Solving rubik’s cube with a robot hand. arXiv preprint arXiv:1910.07113, 2019.
  • Altman (1999) Altman, E. Constrained Markov decision processes, volume 7. CRC Press, 1999.
  • Aractingi et al. (2023) Aractingi, M., Léziart, P.-A., Flayols, T., Perez, J., Silander, T., and Souères, P. Controlling the solo12 quadruped robot with deep reinforcement learning. scientific Reports, 13(1):11945, 2023.
  • Bhatnagar & Lakshmanan (2012) Bhatnagar, S. and Lakshmanan, K. An Online Actor–Critic Algorithm with Function Approximation for Constrained Markov Decision Processes. https://doi.org/10.1007/s10957-012-9989-5, 2012. [Accessed 08-10-2023].
  • Borkar (1988) Borkar, V. S. A convex analytic approach to markov decision processes. Probability Theory and Related Fields, 78(4):583–602, 1988.
  • Calvo-Fullana et al. (2023) Calvo-Fullana, M., Paternain, S., Chamon, L. F., and Ribeiro, A. State augmented constrained reinforcement learning: Overcoming the limitations of learning with rewards. IEEE Transactions on Automatic Control, 2023.
  • Chen et al. (2022) Chen, L., Jain, R., and Luo, H. Learning infinite-horizon average-reward Markov decision process with constraints. In Chaudhuri, K., Jegelka, S., Song, L., Szepesvari, C., Niu, G., and Sabato, S. (eds.), Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pp.  3246–3270. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/v162/chen22i.html.
  • Cho & Meyer (2001) Cho, G. E. and Meyer, C. D. Comparison of perturbation bounds for the stationary distribution of a markov chain. Linear Algebra and its Applications, 335(1-3):137–150, 2001.
  • Doyle (2009) Doyle, P. G. The kemeny constant of a markov chain. arXiv preprint arXiv:0909.2636, 2009.
  • Gao et al. (2018) Gao, Y., Xu, H., Lin, J., Yu, F., Levine, S., and Darrell, T. Reinforcement learning from imperfect demonstrations. arXiv preprint arXiv:1802.05313, 2018.
  • Garcia & Fernandez (2015) Garcia, J. and Fernandez, F. A comprehensive survey on safe reinforcement learning. Journal of Machine Learning Research, 16(1):1437–1480, 2015.
  • Hordijk & Kallenberg (1979) Hordijk, A. and Kallenberg, L. Linear programming and markov decision chains. Management Science, 25(4):352–362, 1979.
  • Hu et al. (2022) Hu, H., Liu, Z., Chitlangia, S., Agnihotri, A., and Zhao, D. Investigating the impact of multi-lidar placement on object detection for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp.  2550–2559, June 2022.
  • Hunter (2005) Hunter, J. J. Stationary distributions and mean first passage times of perturbed markov chains. Linear Algebra and its Applications, 410:217–243, 2005.
  • Hunter (2014) Hunter, J. J. Mathematical techniques of applied probability: Discrete time models: Basic theory, volume 1. Academic Press, 2014.
  • Kakade & Langford (2002) Kakade, S. and Langford, J. Approximately optimal approximate reinforcement learning. In International Conference on Machine Learning, volume 2, pp.  267–274, 2002.
  • Kemeny & Snell (1960) Kemeny, J. and Snell, I. Finite Markov Chains. Van Nostrand, New Jersey, 1960.
  • Levene & Loizou (2002) Levene, M. and Loizou, G. Kemeny’s constant and the random surfer. The American mathematical monthly, 109(8):741–745, 2002.
  • Levine et al. (2016) Levine, S., Finn, C., Darrell, T., and Abbeel, P. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(1):1334–1373, 2016.
  • Liao et al. (2022) Liao, P., Qi, Z., Wan, R., Klasnja, P., and Murphy, S. A. Batch policy learning in average reward markov decision processes. The Annals of Statistics, 50(6):3364–3387, 2022.
  • Ma et al. (2021) Ma, X., Tang, X., Xia, L., Yang, J., and Zhao, Q. Average-reward reinforcement learning with trust region methods. arXiv preprint arXiv:2106.03442, 2021.
  • Manne (1960) Manne, A. S. Linear programming and sequential decisions. Management Science, 6(3):259–267, 1960.
  • Mnih et al. (2015) Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al. Human-level control through deep reinforcement learning. nature, 518(7540):529–533, 2015.
  • Moskovitz et al. (2023) Moskovitz, T., Singh, A. K., Strouse, D., Sandholm, T., Salakhutdinov, R., Dragan, A. D., and McAleer, S. Confronting reward model overoptimization with constrained rlhf. arXiv preprint arXiv:2310.04373, 2023.
  • Rajeswaran et al. (2017) Rajeswaran, A., Kumar, V., Gupta, A., Vezzani, G., Schulman, J., Todorov, E., and Levine, S. Learning complex dexterous manipulation with deep reinforcement learning and demonstrations. arXiv preprint arXiv:1709.10087, 2017.
  • Satija et al. (2020) Satija, H., Amortila, P., and Pineau, J. Constrained Markov decision processes via backward value functions. In III, H. D. and Singh, A. (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp.  8502–8511. PMLR, 13–18 Jul 2020. URL https://proceedings.mlr.press/v119/satija20a.html.
  • Schulman et al. (2015) Schulman, J., Levine, S., Abbeel, P., Jordan, M., and Moritz, P. Trust region policy optimization. In International Conference on Machine Learning, pp.  1889–1897, 2015.
  • Schulman et al. (2016) Schulman, J., Moritz, P., Levine, S., Jordan, M., and Abbeel, P. High-dimensional continuous control using generalized advantage estimation. International Conference on Learning Representations (ICLR), 2016.
  • Schulman et al. (2017) Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
  • Silver et al. (2017) Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., et al. Mastering the game of go without human knowledge. nature, 550(7676):354–359, 2017.
  • Song et al. (2020) Song, H. F., Abdolmaleki, A., Springenberg, J. T., Clark, A., Soyer, H., Rae, J. W., Noury, S., Ahuja, A., Liu, S., Tirumala, D., et al. V-mpo: on-policy maximum a posteriori policy optimization for discrete and continuous control. International Conference on Learning Representations, 2020.
  • Tessler et al. (2019) Tessler, C., Mankowitz, D. J., and Mannor, S. Reward constrained policy optimization. International Conference on Learning Representation (ICLR), 2019.
  • Tibshirani (2017) Tibshirani, R. J. Dykstra’s algorithm, admm, and coordinate descent: Connections, insights, and extensions. Advances in Neural Information Processing Systems, 30, 2017.
  • Todorov et al. (2012) Todorov, E., Erez, T., and Tassa, Y. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.  5026–5033. IEEE, 2012.
  • Vinitsky et al. (2018) Vinitsky, E., Kreidieh, A., Le Flem, L., Kheterpal, N., Jang, K., Wu, F., Liaw, R., Liang, E., and Bayen, A. M. Benchmarks for reinforcement learning in mixed-autonomy traffic. In Proceedings of Conference on Robot Learning, pp.  399–409, 2018.
  • Vinyals et al. (2019) Vinyals, O., Babuschkin, I., Czarnecki, W. M., Mathieu, M., Dudzik, A., Chung, J., Choi, D. H., Powell, R., Ewalds, T., Georgiev, P., et al. Grandmaster level in starcraft ii using multi-agent reinforcement learning. Nature, 575(7782):350–354, 2019.
  • Vuong et al. (2019) Vuong, Q., Zhang, Y., and Ross, K. W. Supervised policy update for deep reinforcement learning. In International Conference on Learning Representation (ICLR), 2019.
  • Wan et al. (2021) Wan, Y., Naik, A., and Sutton, R. S. Learning and planning in average-reward markov decision processes. In International Conference on Machine Learning, pp.  10653–10662. PMLR, 2021.
  • Wei et al. (2021) Wei, C.-Y., Jahromi, M. J., Luo, H., and Jain, R. Learning infinite-horizon average-reward mdps with linear function approximation. In International Conference on Artificial Intelligence and Statistics, pp.  3007–3015. PMLR, 2021.
  • Wu et al. (2017) Wu, Y., Mansimov, E., Grosse, R. B., Liao, S., and Ba, J. Scalable trust-region method for deep reinforcement learning using kronecker-factored approximation. In Advances in neural information processing systems (NIPS), pp.  5285–5294, 2017.
  • Yang et al. (2020) Yang, T.-Y., Rosca, J., Narasimhan, K., and Ramadge, P. J. Projection-based constrained policy optimization. In International Conference on Learning Representation (ICLR), 2020.
  • Zhang et al. (2021) Zhang, S., Wan, Y., Sutton, R. S., and Whiteson, S. Average-reward off-policy policy evaluation with function approximation. arXiv preprint arXiv:2101.02808, 2021.
  • Zhang & Ross (2020) Zhang, Y. and Ross, K. Average reward reinforcement learning with monotonic policy improvement. Preprint, 2020.

Appendix A Appendix

A.1 Proofs

Lemma A.1 (Trivialization of Discounted Criterion Bounds).

Consider the policy performance bound of (Achiam et al., 2017), which says that for any two stationary policies $\pi$ and $\pi'$:

$$J_{\gamma}(\pi')-J_{\gamma}(\pi) \geq \frac{1}{1-\gamma}\left[\mathbb{E}_{\substack{s\sim d_{\pi,\gamma}\\ a\sim\pi'}}[A_{\gamma}^{\pi}(s,a)]-\frac{2\gamma\epsilon^{\gamma}}{1-\gamma}\,\mathbb{E}_{s\sim d_{\pi,\gamma}}\big[D_{\text{TV}}(\pi'\,\|\,\pi)[s]\big]\right] \qquad (17)$$

where $\epsilon^{\gamma}=\max_{s}\left|\mathbb{E}_{a\sim\pi'}[A_{\gamma}^{\pi}(s,a)]\right|$. Then the right-hand side, multiplied by $1-\gamma$, goes to negative infinity as $\gamma\to 1$.

Proof.

Since $d_{\pi,\gamma}$ approaches the stationary distribution $d_{\pi}$ as $\gamma\to 1$, we can multiply the right-hand side of (17) by $(1-\gamma)$ and take the limit, which gives us:

$$\begin{aligned}
&\lim_{\gamma\to 1}\left(\mathbb{E}_{\substack{s\sim d_{\pi,\gamma}\\ a\sim\pi'}}[A_{\gamma}^{\pi}(s,a)]-\frac{2\gamma\epsilon^{\gamma}}{1-\gamma}\,\mathbb{E}_{s\sim d_{\pi,\gamma}}\big[D_{\text{TV}}(\pi'\,\|\,\pi)[s]\big]\right)\\
&= \mathbb{E}_{\substack{s\sim d_{\pi}\\ a\sim\pi'}}[\widebar{A}^{\pi}(s,a)]-2\epsilon\,\mathbb{E}_{s\sim d_{\pi}}\big[D_{\text{TV}}(\pi'\,\|\,\pi)[s]\big]\cdot\lim_{\gamma\to 1}\frac{\gamma}{1-\gamma}\\
&= -\infty
\end{aligned}$$

Here $\epsilon=\max_{s}\left|\mathbb{E}_{a\sim\pi'}[\widebar{A}^{\pi}(s,a)]\right|$. The first equality follows from the standard result $\lim_{\gamma\to 1}A_{\gamma}^{\pi}(s,a)=\widebar{A}^{\pi}(s,a)$. ∎

Lemma 3.1 (restated).

Proof.

We directly expand the right-hand side using the definition of the advantage function and Bellman equation, which gives us:

\begin{align*}
\mathbb{E}_{\substack{s\sim d_{\pi'}\\ a\sim\pi'}}\left[\widebar{A}^{\pi}(s,a)\right]
&=\mathbb{E}_{\substack{s\sim d_{\pi'}\\ a\sim\pi'}}\left[\widebar{Q}^{\pi}(s,a)-\widebar{V}^{\pi}(s)\right]\\
&=\mathbb{E}_{\substack{s\sim d_{\pi'}\\ a\sim\pi'\\ s'\sim P(\cdot|s,a)}}\left[r(s,a,s')-J(\pi)+\widebar{V}^{\pi}(s')-\widebar{V}^{\pi}(s)\right]\\
&=J(\pi')-J(\pi)+\underbrace{\mathbb{E}_{\substack{s\sim d_{\pi'}\\ a\sim\pi'\\ s'\sim P(\cdot|s,a)}}\big[\widebar{V}^{\pi}(s')\big]-\mathbb{E}_{s\sim d_{\pi'}}\big[\widebar{V}^{\pi}(s)\big]}_{A}
\end{align*}

Analyzing $A$, since $d_{\pi'}(s)$ is a stationary distribution:

\begin{align*}
\mathbb{E}_{\substack{s\sim d_{\pi'}\\ a\sim\pi'\\ s'\sim P(\cdot|s,a)}}\big[\widebar{V}^{\pi}(s')\big]
&=\sum_{s}d_{\pi'}(s)\sum_{a}\pi'(a|s)\sum_{s'}P(s'|s,a)\,\widebar{V}^{\pi}(s')\\
&=\sum_{s}d_{\pi'}(s)\sum_{s'}P_{\pi'}(s'|s)\,\widebar{V}^{\pi}(s')=\sum_{s'}d_{\pi'}(s')\,\widebar{V}^{\pi}(s')
\end{align*}

Therefore, $A=\sum_{s'}d_{\pi'}(s')\,\widebar{V}^{\pi}(s')-\mathbb{E}_{s\sim d_{\pi'}}\big[\widebar{V}^{\pi}(s)\big]=0$. Hence, proved. ∎
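To make the identity concrete, the following is a small numerical sanity check of Lemma 3.1 on a randomly generated ergodic MDP (our own illustration, not part of the ACPO implementation): it computes stationary distributions, average rewards, and the bias via the fundamental-matrix form of the Poisson equation, and verifies that $\mathbb{E}_{s\sim d_{\pi'},a\sim\pi'}[\widebar{A}^{\pi}(s,a)]=J(\pi')-J(\pi)$. Rewards are taken as $r(s,a)$ for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA = 5, 3

# Random ergodic MDP: P[s, a, s'] is the transition kernel, r[s, a] the reward.
P = rng.random((nS, nA, nS)) + 0.1
P /= P.sum(axis=2, keepdims=True)
r = rng.random((nS, nA))

def random_policy():
    pi = rng.random((nS, nA)) + 0.1
    return pi / pi.sum(axis=1, keepdims=True)

def stationary_dist(P_pi):
    # Solve d^T P_pi = d^T with d summing to one (least squares for robustness).
    A = np.vstack([P_pi.T - np.eye(nS), np.ones((1, nS))])
    b = np.zeros(nS + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def avg_reward_and_bias(pi):
    P_pi = np.einsum('sap,sa->sp', P, pi)      # state-to-state kernel under pi
    r_pi = (pi * r).sum(axis=1)                # expected reward per state
    d = stationary_dist(P_pi)
    J = d @ r_pi
    # Bias from the fundamental-matrix form of the Poisson equation.
    V = np.linalg.solve(np.eye(nS) - P_pi + np.outer(np.ones(nS), d), r_pi - J)
    return d, J, V

pi, pi_new = random_policy(), random_policy()
d_pi, J_pi, V_pi = avg_reward_and_bias(pi)
d_new, J_new, _ = avg_reward_and_bias(pi_new)

# Average-reward advantage of the old policy: A(s,a) = r - J + E[V(s')] - V(s).
A_bar = r - J_pi + P @ V_pi - V_pi[:, None]

lhs = np.einsum('s,sa,sa->', d_new, pi_new, A_bar)
print(lhs, J_new - J_pi)   # the two numbers should match (Lemma 3.1)
```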

Proof of Lemma 3.2.
\begin{align*}
\left|J(\pi')-J(\pi)-\mathbb{E}_{\substack{s\sim d_{\pi}\\ a\sim\pi'}}\left[\widebar{A}^{\pi}(s,a)\right]\right|
&=\left|\mathbb{E}_{\substack{s\sim d_{\pi'}\\ a\sim\pi'}}\left[\widebar{A}^{\pi}(s,a)\right]-\mathbb{E}_{\substack{s\sim d_{\pi}\\ a\sim\pi'}}\left[\widebar{A}^{\pi}(s,a)\right]\right| && \text{(from Lemma 3.1)}\\
&=\left|\sum_{s}\mathbb{E}_{a\sim\pi'}\left[\widebar{A}^{\pi}(s,a)\right]\big(d_{\pi'}(s)-d_{\pi}(s)\big)\right|\\
&\leq\sum_{s}\left|\mathbb{E}_{a\sim\pi'}\left[\widebar{A}^{\pi}(s,a)\right]\big(d_{\pi'}(s)-d_{\pi}(s)\big)\right|\\
&\leq\max_{s}\left|\mathbb{E}_{a\sim\pi'}\left[\widebar{A}^{\pi}(s,a)\right]\right|\,\left\|d_{\pi'}-d_{\pi}\right\|_{1} && \text{(Hölder's inequality)}\\
&=2\epsilon\,D_{\text{TV}}(d_{\pi'}\parallel d_{\pi})
\end{align*}

∎

Proof of Lemma 3.3.

This proof takes ideas from Markov chain perturbation theory (Cho & Meyer, 2001; Hunter, 2005; Zhang & Ross, 2020). First, we state a standard identity, with $P_{\pi}^{\star}=\mathbf{1}d_{\pi}^{T}$:

\[
(d_{\pi'}-d_{\pi})^{T}(I-P_{\pi'}+P_{\pi'}^{\star})=d_{\pi'}^{T}-d_{\pi}^{T}-d_{\pi'}^{T}+d_{\pi}^{T}P_{\pi'}=d_{\pi}^{T}P_{\pi'}-d_{\pi}^{T}=d_{\pi}^{T}(P_{\pi'}-P_{\pi}).
\]

Denote the fundamental matrix of the Markov chain by $Z^{\pi'}=(I-P_{\pi'}+P_{\pi'}^{\star})^{-1}$ and the mean first passage time matrix by $M^{\pi'}=(I-Z^{\pi'}+EZ^{\pi'}_{\text{dg}})D^{\pi'}$. Right-multiplying the identity above by $Z^{\pi'}$, and using the fact that $(P_{\pi'}-P_{\pi})E=0$ so that the $EZ^{\pi'}_{\text{dg}}$ term drops out, we have

\begin{align}
d_{\pi'}^{T}-d_{\pi}^{T}=d_{\pi}^{T}(P_{\pi'}-P_{\pi})(I-M^{\pi'}(D^{\pi'})^{-1})
\;\Rightarrow\;
d_{\pi'}-d_{\pi}=(I-M^{\pi'}(D^{\pi'})^{-1})^{T}(P_{\pi'}^{T}-P_{\pi}^{T})\,d_{\pi} \tag{18}
\end{align}
\begin{align*}
\text{i.e.}\quad\left\|d_{\pi'}-d_{\pi}\right\|_{1}
&\leq\left\|(I-M^{\pi'}(D^{\pi'})^{-1})^{T}(P_{\pi'}^{T}-P_{\pi}^{T})\,d_{\pi}\right\|_{1}\\
&\leq\underbrace{\left\|I-M^{\pi'}(D^{\pi'})^{-1}\right\|_{\infty}}_{T_{1}}\,\underbrace{\left\|(P_{\pi'}^{T}-P_{\pi}^{T})\,d_{\pi}\right\|_{1}}_{T_{2}} \qquad\text{(submultiplicative property)}
\end{align*}

We know that $\kappa^{\pi}=\text{Tr}(Z^{\pi})$, and from (Hunter, 2014) we can bound $T_{1}$ using the eigenvalues $\{\lambda_{\pi,i}\}_{i=1}^{|S|}$ of the underlying $P_{\pi}$ as

\[
T_{1}\leq\frac{1}{|S|}\sum_{i=2}^{|S|}\frac{1}{(1-\lambda_{\pi,i})^{1/2}}\leq\max_{i}\,(1-\lambda_{\pi,i})^{-1/2}=\sigma^{\pi}\leq\max_{\pi}\sigma^{\pi}=\sigma^{\star}.
\]

For $T_{2}$, we use the result of (Zhang & Ross, 2020) and provide the proof for completeness below.

\begin{align*}
T_{2}&=\sum_{s'}\Big|\sum_{s}\Big(\sum_{a}P(s'|s,a)\pi'(a|s)-P(s'|s,a)\pi(a|s)\Big)d_{\pi}(s)\Big|\\
&\leq\sum_{s',s}\Big|\sum_{a}P(s'|s,a)\big(\pi'(a|s)-\pi(a|s)\big)\Big|\,d_{\pi}(s)\\
&\leq\sum_{s,s',a}P(s'|s,a)\left|\pi'(a|s)-\pi(a|s)\right|d_{\pi}(s)\\
&\leq\sum_{s,a}\left|\pi'(a|s)-\pi(a|s)\right|d_{\pi}(s)=2\,\mathbb{E}_{s\sim d_{\pi}}\big[D_{\text{TV}}(\pi'\parallel\pi)[s]\big]
\end{align*}

Combining the bounds on $T_{1}$ and $T_{2}$ gives the desired result. ∎

A.2 Performance and Constraint Bounds of Trust Region Approach

Consider the trust region formulation in Equation (10). To prove the policy performance bound when the current policy is infeasible (i.e., constraint-violating), we first bound the KL divergence between $\pi_{k}$ and $\pi_{k+1}$ induced by the KL divergence projection, along with other lemmas. We then prove our main theorem on the worst-case performance degradation.

Lemma A.2.

For a closed convex constraint set, if we have a constraint-satisfying policy $\pi_{k}$ and the KL divergence $\mathbb{E}_{s\sim d_{\pi_{k}}}\big[D_{\text{KL}}(\pi_{k+1/2}\parallel\pi_{k})[s]\big]$ of the ‘Improve’ step is upper bounded by the step size $\delta$, then after the KL divergence projection of the ‘Project’ step we have

\[
\mathbb{E}_{s\sim d_{\pi_{k}}}\big[D_{\text{KL}}(\pi_{k+1}\parallel\pi_{k})[s]\big]\leq\delta.
\]
Proof.

We make use of the fact that the Bregman divergence (hence, KL divergence) projection onto a closed convex constraint set in ${\mathbb{R}}^{d}$, $d\in{\mathbb{N}}$, exists and is unique. Since $\pi_{k}$ is safe, $\pi_{k}$ lies in the constraint set, and $\pi_{k+1}$ is the projection of $\pi_{k+\frac{1}{2}}$ onto the constraint set. Using the projection inequality, we have

\begin{align*}
&\mathbb{E}_{s\sim d_{\pi_{k}}}\big[D_{\text{KL}}(\pi_{k}\parallel\pi_{k+1})[s]\big]+\mathbb{E}_{s\sim d_{\pi_{k}}}\big[D_{\text{KL}}(\pi_{k+1}\parallel\pi_{k+\frac{1}{2}})[s]\big]\leq\mathbb{E}_{s\sim d_{\pi_{k}}}\big[D_{\text{KL}}(\pi_{k}\parallel\pi_{k+\frac{1}{2}})[s]\big]\\
&\;\Rightarrow\;\mathbb{E}_{s\sim d_{\pi_{k}}}\big[D_{\text{KL}}(\pi_{k}\parallel\pi_{k+1})[s]\big]\leq\mathbb{E}_{s\sim d_{\pi_{k}}}\big[D_{\text{KL}}(\pi_{k}\parallel\pi_{k+\frac{1}{2}})[s]\big]\leq\delta. \qquad(\text{since }D_{\text{KL}}(\cdot\parallel\cdot)\geq 0)
\end{align*}

Since the KL divergence is asymptotically symmetric when updating the policy within a local neighbourhood ($\delta\ll 1$), we have

\[
\mathbb{E}_{s\sim d_{\pi_{k}}}\big[D_{\text{KL}}(\pi_{k+1}\parallel\pi_{k})[s]\big]\leq\mathbb{E}_{s\sim d_{\pi_{k}}}\big[D_{\text{KL}}(\pi_{k+\frac{1}{2}}\parallel\pi_{k})[s]\big]\leq\delta. \qquad ∎
\]

Lemma A.3.

For a closed convex constraint set, if we have a constraint-violating policy $\pi_{k}$ and the KL divergence $\mathbb{E}_{s\sim d_{\pi_{k}}}\big[D_{\text{KL}}(\pi_{k+1/2}\parallel\pi_{k})[s]\big]$ of the first step is upper bounded by the step size $\delta$, then after the KL divergence projection of the second step we have

\[
\mathbb{E}_{s\sim d_{\pi_{k}}}\big[D_{\text{KL}}(\pi_{k+1}\parallel\pi_{k})[s]\big]\leq\delta+V_{max},
\]

where $V_{max}=\max_{i}\alpha_{i}\beta_{i}^{2}$, $\beta_{i}=[J_{C_{i}}(\pi_{k})-l_{i}]_{+}$, and $\alpha_{i}=\frac{1}{2a_{i}^{T}H^{-1}a_{i}}$, with $a_{i}$ the gradient of the cost advantage function corresponding to constraint $C_{i}$ and $H$ the Hessian of the KL divergence constraint. (For any $x\in{\mathbb{R}}$, $[x]_{+}:=\max(0,x)$.)

Proof.

Let the sublevel set of the cost constraint functions for the current infeasible policy $\pi_{k}$ be given as:

\[
L_{\pi_{k}}=\Big\{\pi~\Big|~J_{C_{i}}(\pi_{k})+\mathbb{E}_{\substack{s\sim d_{\pi_{k}}\\ a\sim\pi}}\big[\widebar{A}^{\pi_{k}}_{C_{i}}(s,a)\big]\leq J_{C_{i}}(\pi_{k})\;\;\forall\,i\Big\}.
\]

This implies that the current policy $\pi_{k}$ lies in $L_{\pi_{k}}$. The constraint set onto which $\pi_{k+\frac{1}{2}}$ is projected is given by $\big\{\pi~\big|~J_{C_{i}}(\pi_{k})+\mathbb{E}_{\substack{s\sim d_{\pi_{k}}\\ a\sim\pi}}\big[\widebar{A}^{\pi_{k}}_{C_{i}}(s,a)\big]\leq l_{i}\;\;\forall\,i\big\}$. Let $\pi_{k+1}^{L}$ be the projection of $\pi_{k+\frac{1}{2}}$ onto $L_{\pi_{k}}$.

Note that the Bregman inequality of Lemma A.2 holds for any convex set in ${\mathbb{R}}^{d}$, $d\in{\mathbb{N}}$. This implies $\mathbb{E}_{s\sim d_{\pi_{k}}}\big[D_{\text{KL}}(\pi_{k+1}^{L}\parallel\pi_{k})[s]\big]\leq\delta$, since $\pi_{k}$ and $\pi_{k+1}^{L}$ are both in $L_{\pi_{k}}$, which is convex because the constraint functions are convex. Using the Three-point Lemma (for any $\phi$, the Bregman divergence identity $D_{\phi}(x,y)+D_{\phi}(y,z)=D_{\phi}(x,z)+\langle\nabla\phi(z)-\nabla\phi(y),x-y\rangle$) for the policies $\pi_{k},\pi_{k+1}$, and $\pi_{k+1}^{L}$, with $\varphi(\mathbf{x}):=\sum_{i}x_{i}\log x_{i}$, we have

\begin{align}
\delta\geq\mathbb{E}_{s\sim d_{\pi_{k}}}\big[D_{\text{KL}}(\pi_{k+1}^{L}\parallel\pi_{k})[s]\big]
&=\mathbb{E}_{s\sim d_{\pi_{k}}}\big[D_{\text{KL}}(\pi_{k+1}\parallel\pi_{k})[s]\big]-\mathbb{E}_{s\sim d_{\pi_{k}}}\big[D_{\text{KL}}(\pi_{k+1}\parallel\pi_{k+1}^{L})[s]\big]\nonumber\\
&\quad+\mathbb{E}_{s\sim d_{\pi_{k}}}\big[(\nabla\varphi(\pi_{k})-\nabla\varphi(\pi_{k+1}^{L}))^{T}(\pi_{k+1}-\pi_{k+1}^{L})[s]\big]\nonumber\\
\Rightarrow\;\mathbb{E}_{s\sim d_{\pi_{k}}}\big[D_{\text{KL}}(\pi_{k+1}\parallel\pi_{k})[s]\big]
&\leq\delta+\underbrace{\mathbb{E}_{s\sim d_{\pi_{k}}}\big[D_{\text{KL}}(\pi_{k+1}\parallel\pi_{k+1}^{L})[s]\big]}_{T_{1}}\nonumber\\
&\quad-\underbrace{\mathbb{E}_{s\sim d_{\pi_{k}}}\big[(\nabla\varphi(\pi_{k})-\nabla\varphi(\pi_{k+1}^{L}))^{T}(\pi_{k+1}-\pi_{k+1}^{L})[s]\big]}_{T_{2}}. \tag{19}
\end{align}

If the constraint violations of the current policy $\pi_{k}$ are small, i.e., $\beta_{i}=[J_{C_{i}}(\pi_{k})-l_{i}]_{+}$ is small for all $i$, then $T_{1}$ can be approximated by a second-order expansion. We analyze $T_{1}$ for a single constraint $C_{i}$ and then bound it over all the constraints. As before, we overload notation and write $\pi_{k}=\pi_{\theta_{k}}=\theta_{k}$. For any constraint $C_{i}$, we can write $T^{i}_{1}$ as the expected KL divergence if the projection were onto the constraint set of $C_{i}$ alone, i.e.,

\begin{align*}
T^{i}_{1}\approx\frac{1}{2}(\pi_{k+1}-\pi_{k+1}^{L})^{T}H(\pi_{k+1}-\pi_{k+1}^{L})
&=\frac{1}{2}\Big(\frac{\beta_{i}}{a_{i}^{T}H^{-1}a_{i}}H^{-1}a_{i}\Big)^{T}H\Big(\frac{\beta_{i}}{a_{i}^{T}H^{-1}a_{i}}H^{-1}a_{i}\Big)\\
&=\frac{\beta_{i}^{2}}{2a_{i}^{T}H^{-1}a_{i}}=\alpha_{i}\beta_{i}^{2},
\end{align*}

where the second equality is a result of the trust region guarantee (see (Schulman et al., 2015) for more details). Finally, we invoke the projection result from (Achiam, 2017), which uses Dykstra’s Alternating Projection algorithm (Tibshirani, 2017), to bound this projection, i.e., $T_{1}\leq\max_{i}T^{i}_{1}\approx\max_{i}\alpha_{i}\beta_{i}^{2}$.

Since $\delta$ is small, we have $\nabla\varphi(\pi_{k})-\nabla\varphi(\pi_{k+1}^{L})\approx 0$ given $s$, and thus $T_{2}\approx 0$. Combining all of the above, we have $\mathbb{E}_{s\sim d_{\pi_{k}}}\big[D_{\text{KL}}(\pi_{k+1}\parallel\pi_{k})[s]\big]\leq\delta+V_{max}$. ∎

Proof of Proposition 3.6.

Since $\bar{D}_{\text{KL}}(\pi_{\theta_{k}}\parallel\pi_{\theta_{k}})=0$, $\pi_{\theta_{k}}$ is feasible, and the objective value is $0$ for $\pi_{\theta}=\pi_{\theta_{k}}$. The bound then follows from Equation (7) and Equation (9), where the average KL divergence $\mathbb{E}_{s\sim d_{\pi_{k}}}\big[D_{\text{KL}}(\pi_{k+1}\parallel\pi_{k})[s]\big]$ is bounded by $\delta+V_{max}$ from Lemma A.3.

Similar to Corollary 3.5, the expressions for the auxiliary cost constraints also follow immediately as the second result. ∎

Remark A.4.

If we follow the proof given by (Zhang & Ross, 2020) in Section 5 of their paper, with $\delta$ now replaced by $\delta+V_{max}$, we obtain the same result. Our looser bound is due to the constrained nature of our setting, which is intuitive: for the sake of satisfying constraints, we accept a larger worst-case performance degradation.

A.3 Approximate ACPO

A.3.1 Policy Recovery Routine

As described in Section 4.2, we need a recovery routine in case the updated policy $\pi_{k+1/2}$ is not approximately constraint-satisfying. In this case, the optimization problem is inspired by the simple trust region approach of (Schulman et al., 2015). Since we only deal with one constraint in the practical implementation of ACPO, the recovery rule is obtained by solving the following problem:

\[
\begin{array}{ll}
\min_{\theta} & c+a^{T}(\theta-\theta_{k})\\[2pt]
\text{s.t.} & \frac{1}{2}(\theta-\theta_{k})^{T}H(\theta-\theta_{k})\leq\delta.
\end{array}
\]

Let $x=\theta-\theta_{k}$. The Lagrangian $L(x,\lambda)$ is given by $L(x,\lambda)=c+a^{T}x+\lambda\left(\frac{1}{2}x^{T}Hx-\delta\right)$. Now,

\[
\frac{\partial L}{\partial x}=a+\lambda Hx=0\;\Longrightarrow\;x=-\frac{1}{\lambda}H^{-1}a.
\]

The $x$ obtained above must satisfy the trust-region constraint:

\begin{align*}
\frac{1}{2}\Big(-\frac{1}{\lambda}H^{-1}a\Big)^{T}H\Big(-\frac{1}{\lambda}H^{-1}a\Big)&\leq\delta\\
\Longrightarrow\quad\frac{1}{2\lambda^{2}}\,a^{T}H^{-1}a&\leq\delta\\
\Longrightarrow\quad\sqrt{\frac{a^{T}H^{-1}a}{2\delta}}&\leq\lambda.
\end{align*}

Therefore, the update rule in case of infeasibility takes the form $\theta=\theta_{k}-\sqrt{\frac{2\delta}{a^{T}H^{-1}a}}\,H^{-1}a$. We augment this rule with the gradient of the reward advantage function as well, so the final recovery step is

\[
\theta_{k+1/2}=\theta_{k}-\sqrt{2\delta}\left[t\cdot\frac{H^{-1}a}{\sqrt{a^{T}H^{-1}a}}+(1-t)\cdot\frac{H^{-1}g}{\sqrt{g^{T}H^{-1}g}}\right],\qquad t\in[0,1].
\]
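In implementations, $H^{-1}a$ and $H^{-1}g$ are typically not formed explicitly; as in TRPO/CPO-style code, they are approximated with conjugate gradient using Fisher-vector products. The following is a minimal sketch of the recovery step above under that assumption; `fvp`, `g`, `a`, and the damping constant are illustrative names, not the released ACPO code.

```python
import numpy as np

def conjugate_gradient(fvp, b, iters=10, tol=1e-10):
    """Approximately solve H x = b, given a Fisher-vector product fvp(v) ~ H v."""
    x = np.zeros_like(b)
    r = b.copy()
    p = b.copy()
    rs_old = r @ r
    for _ in range(iters):
        Hp = fvp(p)
        alpha = rs_old / (p @ Hp + 1e-12)
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

def recovery_step(theta_k, g, a, fvp, delta, t=0.75):
    """Infeasible-case update: move along -H^{-1}a (cost) mixed with -H^{-1}g
    (reward) by the hyperparameter t, scaled to the trust-region radius."""
    Hinv_a = conjugate_gradient(fvp, a)
    Hinv_g = conjugate_gradient(fvp, g)
    dir_a = Hinv_a / np.sqrt(a @ Hinv_a + 1e-12)
    dir_g = Hinv_g / np.sqrt(g @ Hinv_g + 1e-12)
    return theta_k - np.sqrt(2.0 * delta) * (t * dir_a + (1.0 - t) * dir_g)
```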

A.3.2 Line Search

Because of approximation error, the proposed update may not satisfy the constraints in Eq. (10). Constraint satisfaction is enforced via line search, so the final update is

\[
\theta_{k+1}=\theta_{k}+s^{j}\left(\theta_{k+1/2}-\theta_{k}\right),
\]

where $s\in(0,1)$ is the backtracking coefficient and $j\in\{0,\dots,L\}$ is the smallest integer for which $\pi_{k+1}$ satisfies the constraints in Equation (10). Here, $L$ is a finite backtracking budget; if no proposed policy satisfies the constraints after $L$ backtracking steps, no update occurs.
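A minimal sketch of this backtracking rule is given below, assuming a callable `satisfies_constraints(theta)` that checks the surrogate cost and KL conditions of Equation (10); the names are illustrative rather than taken from our implementation.

```python
def line_search(theta_k, theta_half, satisfies_constraints, s=0.75, L=10):
    """Backtrack from the proposed update until the constraints of Eq. (10) hold."""
    direction = theta_half - theta_k
    for j in range(L + 1):
        theta_new = theta_k + (s ** j) * direction
        if satisfies_constraints(theta_new):
            return theta_new
    return theta_k  # budget exhausted: keep the current policy (no update)
```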

A.4 Practical ACPO

As explained in Section 4, we use the problem formulation below, which applies a first-order Taylor approximation to the objective and a second-order approximation to the KL constraint around $\theta_{k}$, given small $\delta$ (the gradient and first-order Taylor term of $\bar{D}_{\text{KL}}(\pi_{\theta}\parallel\pi_{\theta_{k}})$ at $\theta=\theta_{k}$ are zero):

\begin{align}
\max_{\theta}\quad & g^{T}(\theta-\theta_{k}) \tag{20}\\
\text{s.t.}\quad & c_{i}+a_{i}^{T}(\theta-\theta_{k})\leq 0\;\;\forall\,i,\qquad\tfrac{1}{2}(\theta-\theta_{k})^{T}H(\theta-\theta_{k})\leq\delta,\nonumber
\end{align}

where

\begin{align*}
g&:=\mathbb{E}_{\substack{s\sim d_{\pi_{\theta_{k}}}\\ a\sim\pi_{\theta_{k}}}}\big[\nabla_{\theta}\log\pi_{\theta}(a|s)|_{\theta=\theta_{k}}\,\widebar{A}^{\pi_{\theta_{k}}}(s,a)\big], &
c_{i}&:=J_{C_{i}}(\theta_{k})-l_{i}\;\;\forall\,i,\\
a_{i}&:=\mathbb{E}_{\substack{s\sim d_{\pi_{\theta_{k}}}\\ a\sim\pi_{\theta_{k}}}}\big[\nabla_{\theta}\log\pi_{\theta}(a|s)|_{\theta=\theta_{k}}\,\widebar{A}^{\pi_{\theta_{k}}}_{C_{i}}(s,a)\big], &
H&:=\mathbb{E}_{\substack{s\sim d_{\pi_{\theta_{k}}}\\ a\sim\pi_{\theta_{k}}}}\big[\nabla_{\theta}\log\pi_{\theta}(a|s)|_{\theta=\theta_{k}}\,\nabla_{\theta}\log\pi_{\theta}(a|s)|_{\theta=\theta_{k}}^{T}\big].
\end{align*}

Similar to the work of (Achiam et al., 2017), $g$, $a_{i}$, and $H$ can be approximated using samples drawn from the policy $\pi_{\theta_{k}}$. The matrix $H$ is identical to the Hessian used by (Achiam et al., 2017) and (Zhang & Ross, 2020). However, the definitions of $g$ and the $a_{i}$'s are different, since they involve the average-reward advantage functions $\widebar{A}^{\pi_{\theta_{k}}}(s,a)=\widebar{Q}^{\pi_{\theta_{k}}}(s,a)-\widebar{V}^{\pi_{\theta_{k}}}(s)$.
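As a rough illustration of such sample-based estimates, the sketch below assumes the per-sample score vectors $\nabla_{\theta}\log\pi_{\theta}(a_{t}|s_{t})$ are stacked in a matrix `scores`, together with estimated reward and cost advantages; the matrix $H$ is accessed only through matrix-vector products, as is standard in TRPO/CPO-style implementations. Variable names are illustrative.

```python
import numpy as np

def surrogate_gradients(scores, adv_r, adv_c):
    """scores: (N, d) array of score vectors grad_theta log pi(a_t|s_t).
    adv_r / adv_c: (N,) estimated reward / cost advantages."""
    g = scores.T @ adv_r / len(adv_r)   # sample estimate of g
    a = scores.T @ adv_c / len(adv_c)   # sample estimate of a_i (one constraint)
    return g, a

def fisher_vector_product(scores, v, damping=1e-2):
    """H v with H = E[score score^T], computed without forming H explicitly."""
    return scores.T @ (scores @ v) / scores.shape[0] + damping * v
```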

Since the reward and cost advantage functions can be approximated independently, we use the framework of (Zhang & Ross, 2020) to do so. We describe the estimation of the reward advantage function; the same procedure applies to the cost advantage functions as well. Specifically, we first approximate the average-reward bias $\widebar{V}^{\pi_{\theta_{k}}}(s)$ and then use a one-step TD backup to estimate the action-bias function. Concretely, the average-reward Bellman equation gives

\[
\widebar{A}^{\pi_{\theta_{k}}}(s,a)=r(s,a)-J(\pi_{\theta_{k}})+\mathbb{E}_{s'\sim P(\cdot|s,a)}\big[\widebar{V}^{\pi_{\theta_{k}}}(s')\big]-\widebar{V}^{\pi_{\theta_{k}}}(s) \tag{21}
\]

This expression involves the average-reward bias $\widebar{V}^{\pi_{\theta_{k}}}(s)$, which we can approximate using a standard critic network $\widebar{V}_{\phi_{k}}(s)$. In practice, however, we use an average-reward version of the Generalized Advantage Estimator (GAE) of (Schulman et al., 2016), similar to (Zhang & Ross, 2020). We refer the reader to that paper for a detailed explanation, but provide an overview below for completeness.

Let $\widehat{J}_{\pi}=\frac{1}{N}\sum_{t=1}^{N}r_{t}$ denote the estimated average reward. The Monte Carlo target for the average-reward value function is $\widebar{V}^{\text{target}}_{t}=\sum_{t'=t}^{N}(r_{t'}-\widehat{J}_{\pi})$ and the bootstrapped target is $\widebar{V}^{\text{target}}_{t}=r_{t}-\widehat{J}_{\pi}+\widebar{V}^{\pi}_{\phi}(s_{t+1})$.

The Monte Carlo and bootstrap estimators for the average-reward advantage function are:

\[
\widehat{A}_{\text{MC}}^{\pi}(s_{t},a_{t})=\sum_{t'=t}^{N}(r_{t'}-\widehat{J}_{\pi})-\widebar{V}^{\pi}_{\phi}(s_{t}),\qquad
\widehat{A}_{\text{BS}}^{\pi}(s_{t},a_{t})=r_{t}-\widehat{J}_{\pi}+\widebar{V}^{\pi}_{\phi}(s_{t+1})-\widebar{V}^{\pi}_{\phi}(s_{t}).
\]

We can similarly extend the GAE to the average reward setting:

\[
\widehat{A}_{\text{GAE}}(s_{t},a_{t})=\sum_{t'=t}^{N}\lambda^{t'-t}\delta_{t'},\qquad
\delta_{t'}=r_{t'}-\widehat{J}_{\pi}+\widebar{V}^{\pi}_{\phi}(s_{t'+1})-\widebar{V}^{\pi}_{\phi}(s_{t'}), \tag{22}
\]

and set the target for the value function to $\widebar{V}^{\text{target}}_{t}=r_{t}-\widehat{J}_{\pi}+\widebar{V}^{\pi}_{\phi}(s_{t+1})+\sum_{t'=t+1}^{N}\lambda^{t'-t}\delta_{t'}$.
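A minimal sketch of the average-reward GAE computation in Equation (22) is given below, assuming a single trajectory of rewards and critic values (with one extra value entry for $s_{N+1}$); variable names are illustrative.

```python
import numpy as np

def average_reward_gae(rewards, values, lam):
    """Eq. (22): the TD errors use the estimated average reward, not a discount."""
    rewards = np.asarray(rewards, dtype=float)
    values = np.asarray(values, dtype=float)        # length N + 1
    J_hat = rewards.mean()                          # estimated average reward
    deltas = rewards - J_hat + values[1:] - values[:-1]
    adv = np.zeros_like(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = deltas[t] + lam * running
        adv[t] = running
    v_target = adv + values[:-1]   # value-function target: delta_t + V(s_t) + tail
    return adv, v_target
```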

A.5 Experimental Details

For a detailed explanation of the Point-Circle, Point-Gather, Ant-Circle, and Ant-Gather tasks, please refer to (Achiam et al., 2017). For a detailed explanation of the Bottleneck and Grid tasks, please refer to (Vinitsky et al., 2018). For the simulations in the Gather and Circle tasks, we use neural network baselines with the same architecture and activation functions as the policy networks. For the simulations in the Grid and Bottleneck tasks, we use linear baselines. For all experiments we use Gaussian neural policies that output the mean vectors, with the variances kept as separate learned parameters. Seeds used for generating evaluation trajectories are different from those used for training.

For the comparison of different algorithms, we use the CPO, PCPO, ATRPO, and PPO implementations from https://github.com/rll/rllab and https://github.com/openai/safety-starter-agents. The hyperparameters are selected to showcase the best performance of the other algorithms for a fair comparison; the choices given below follow the original papers, since our goal is to understand performance in the average-reward case.

We use settings that are common across the open-source implementations of these algorithms, such as normalizing the states by the running mean and standard deviation before feeding them into the neural network, and similarly normalizing the advantage values (for both rewards and constraints) by their batch means and standard deviations before the policy updates. Table 1 summarizes the hyperparameters.

Table 1: Hyperparameter Setup

Hyperparameter | PPO/ATRPO | CPO/PCPO/ACPO
No. of hidden layers | 2 | 2
Activation | $\tanh$ | $\tanh$
Initial log std | -0.5 | -1
Batch size | 2500 | 2500
GAE parameter (reward) | 0.95 | 0.95
GAE parameter (cost) | N/A | 0.95
Trust region step size $\delta$ | $10^{-4}$ | $10^{-4}$
Learning rate for policy | $2\times 10^{-4}$ | $2\times 10^{-4}$
Learning rate for reward critic net | $2\times 10^{-4}$ | $2\times 10^{-4}$
Learning rate for cost critic net | N/A | $2\times 10^{-4}$
Backtracking coeff. | 0.75 | 0.75
Max backtracking iterations | 10 | 10
Max conjugate gradient iterations | 10 | 10
Recovery regime parameter $t$ | N/A | 0.75

For the Lagrangian formulation of ATRPO and PPO, note that the original papers do not provide any blueprint for formulating the Lagrangian, and even CPO and PCPO use unconstrained TRPO for benchmarking. However, we feel that this is unfair to these algorithms, as they can possibly perform better with a Lagrangian formulation in an average-reward CMDP setting. To this end, we introduce a Lagrangian parameter $\ell\in[0,1]$ that balances the rewards and constraints in the final objective function. More specifically, Equation (13) for a single constraint now becomes

\[
\max_{\theta}\;(1-\ell)\,g^{T}(\theta-\theta_{k})-\ell\Big[\big(c_{1}+a_{1}^{T}(\theta-\theta_{k})\big)+\big(\tfrac{1}{2}(\theta-\theta_{k})^{T}H(\theta-\theta_{k})-\delta\big)\Big]. \tag{23}
\]
Note.

The authors of ATRPO and PPO do not suggest a principled approach for finding an optimal $\ell$. Hence, the choice of the Lagrangian parameter $\ell$ is purely empirical and is made so that these algorithms achieve maximum rewards while satisfying the constraints. As seen in Figure 1, for the Ant-Gather, Bottleneck, and Grid environments the constraints cannot be satisfied for any value of $\ell$; there we report results for a specific value of $\ell$ for illustrative purposes, as detailed in Table 2.
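For concreteness, the following small sketch evaluates the scalarized objective of Equation (23) for a candidate update $\theta$, assuming sample estimates $g$, $a_{1}$, $c_{1}$, a Fisher-vector product routine, and a fixed $\ell$ such as those in Table 2; it is our illustration of the formulation, not the released ATRPO/PPO code.

```python
def lagrangian_objective(theta, theta_k, g, a1, c1, fvp, delta, ell):
    """Scalarized objective of Eq. (23) for a single constraint."""
    step = theta - theta_k
    reward_term = (1.0 - ell) * (g @ step)
    constraint_term = c1 + a1 @ step               # linearized cost constraint
    kl_term = 0.5 * step @ fvp(step) - delta       # quadratic KL surrogate
    return reward_term - ell * (constraint_term + kl_term)
```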

Table 2: Lagrangian parameter $\ell$ for ATRPO and PPO

Algorithm | Point-Gather | Ant-Circle | Ant-Gather | Bottleneck | Grid
ATRPO | 0.50 | 0.60 | 0.45 | 0.50 | 0.45
PPO | 0.55 | 0.50 | 0.50 | 0.50 | 0.60

A.6 Experimental Addendum

A.6.1 Environments

All environments tested on are illustrated in Figure 3, along with a detailed description of each.

Figure 3: The Circle, Gather, Grid, and Bottleneck tasks. (a) Circle: The agent is rewarded for moving in a specified circle but is penalized if the diameter of the circle exceeds some value, as in (Achiam et al., 2017). (b) Gather: The agent is rewarded for collecting the green balls and penalized for gathering red balls, as in (Achiam et al., 2017). (c) Grid: The agent controls traffic lights in a 3x3 road network and is rewarded for high traffic throughput but is constrained to let lights be red for at most 5 consecutive seconds, as in (Vinitsky et al., 2018). (d) Bottleneck: The agent controls vehicles (red) in a merging traffic situation and is rewarded for maximizing the number of vehicles that pass through, but is constrained to ensure that white vehicles (not controlled by the agent) have “low” speed for no more than 10 seconds, as in (Vinitsky et al., 2018).

A.6.2 Learning Curves

Due to space restrictions, we present the learning curves for the remaining environments in Figure 4.

[Figure 4 panels: (a) Point Gather, (b) Ant Circle; top row: average rewards, bottom row: average constraint values.]

Figure 4: The average reward and constraint cost values vs. iterations (in $10^4$): learning curves for some algorithm-task pairs. Solid lines in each figure are the empirical means, while the shaded area represents one standard deviation, all over 5 runs. The dashed line in the constraint plots is the constraint threshold $l$. ATRPO and PPO are tested with constraints, which are included in their Lagrangian formulation.

A.6.3 Recovery Regime Revisited

In Subsection 5.3, we studied the effect of the hyperparameter $t$ for only one task. Figure 5 shows the performance of ACPO with different values of $t$ in various environments.

[Figure 5 panels: (a) Point Gather, (b) Ant Circle, (c) Ant Gather, (d) Bottleneck, (e) Grid; top row: rewards, bottom row: constraint values.]

Figure 5: Comparison of the performance of ACPO with different values of the hyperparameter $t$ in various environments. The x-axis is iterations (in $10^4$).