Spectral-Risk Safe Reinforcement Learning
with Convergence Guarantees
Abstract
The field of risk-constrained reinforcement learning (RCRL) has been developed to effectively reduce the likelihood of worst-case scenarios by explicitly handling risk-measure-based constraints. However, the nonlinearity of risk measures makes it challenging to achieve convergence and optimality. To overcome the difficulties posed by the nonlinearity, we propose a spectral risk measure-constrained RL algorithm, spectral-risk-constrained policy optimization (SRCPO), a bilevel optimization approach that utilizes the duality of spectral risk measures. In the bilevel optimization structure, the outer problem involves optimizing dual variables derived from the risk measures, while the inner problem involves finding an optimal policy given these dual variables. The proposed method, to the best of our knowledge, is the first to guarantee convergence to an optimum in the tabular setting. Furthermore, the proposed method was evaluated on continuous control tasks and demonstrated the best performance among RCRL algorithms that satisfy the constraints.
1 Introduction
Safe reinforcement learning (Safe RL) has been extensively researched [2, 15, 25], particularly for its applications in safety-critical areas such as robotic manipulation [3, 17] and legged robot locomotion [28]. In safe RL, constraints are often defined by the expectation of the discounted sum of costs [2, 25], where cost functions are defined to capture safety signals. These expectation-based constraints can leverage existing RL techniques since the value functions for costs follow the Bellman equation. However, they inadequately reduce the likelihood of significant costs, which can result in worst-case outcomes, because expectations do not capture information on the tail distribution. In real-world applications, avoiding such worst-case scenarios is particularly crucial, as they can lead to irrecoverable damage.
In order to avoid worst-case scenarios, various safe RL methods [10, 26] attempt to define constraints using risk measures, such as conditional value at risk (CVaR) [20] and its generalized form, spectral risk measures [1]. Yang et al. [26] and Zhang and Weng [30] have employed a distributional critic to estimate the risk constraints and calculate policy gradients. Zhang et al. [32] and Chow et al. [10] have introduced auxiliary variables to estimate CVaR. These approaches have effectively prevented worst-case outcomes by implementing risk constraints. However, the nonlinearity of risk measures makes it difficult to develop methods that ensure optimality. To the best of our knowledge, even for well-known risk measures such as CVaR, existing methods only guarantee local convergence [10, 32]. This highlights the need for a method that ensures convergence to a global optimum.
In this paper, we introduce a spectral risk measure-constrained RL (RCRL) algorithm called spectral-risk-constrained policy optimization (SRCPO). Spectral risk measures, including CVaR, have a dual form expressed as the expectation over a random variable [7]. Leveraging this duality, we propose a bilevel optimization framework designed to alleviate the challenges caused by the nonlinearity of risk measures. This framework consists of two levels: the outer problem, which involves optimizing a dual variable that originates from the dual form of spectral risk measures, and the inner problem, which aims to find an optimal policy for a given dual variable. For the inner problem, we define novel risk value functions that exhibit linearity in performance difference and propose a policy gradient method that ensures convergence to an optimal policy in tabular settings. The outer problem is potentially non-convex, so gradient-based optimization techniques are not suitable for global convergence. Instead, we propose a method to model and update a distribution over the dual variable, which also ensures finding an optimal dual variable in tabular settings. As a result, to the best of our knowledge, we present the first algorithm that guarantees convergence to an optimum for the RCRL problem. Moreover, the proposed method has been evaluated on continuous control tasks with both single and multiple constraints, demonstrating the highest reward performance among other RCRL algorithms satisfying constraints in all tasks. Our contributions are summarized as follows:
• We propose a novel bilevel optimization framework for RCRL, effectively addressing the complexities caused by the nonlinearity of risk measures.
• The proposed method demonstrates convergence and optimality in tabular settings.
• The proposed method has been evaluated on continuous control tasks with both single and multiple constraints, consistently outperforming existing RCRL algorithms.
2 Related Work
RCRL algorithms can be classified according to how constraints are handled and how risks are estimated. There are two main approaches to handling constraints: the Lagrangian [29, 26, 32] and the primal [15, 30] approaches. The Lagrangian approach solves the dual problem by jointly updating dual variables and policies, which can be easily integrated into existing RL frameworks. In contrast, the primal approach directly tackles the original problem by constructing a subproblem based on the current policy and solving it. Risk estimation, in turn, can be categorized into three types. The first type [29, 32] uses an auxiliary variable to estimate CVaR precisely through its dual form. The second [26, 30] approximates risks as the expectation of state-wise risks calculated with a distributional critic [8]. The third [15] approximates CVaR by assuming that cost returns follow Gaussian distributions. Despite their diversity, these methods only guarantee convergence to local optima. In the field of risk-sensitive RL, Bäuerle and Glauner [7] proposed an algorithm for spectral risk measures that ensures convergence to an optimum. They introduced a bilevel optimization approach, as we do, but did not provide a method for solving the outer problem. Furthermore, their approach to the inner problem is value-based, making it complicated to extend to RCRL, which typically requires a policy gradient approach.
3 Backgrounds
3.1 Constrained Markov Decision Processes
A constrained Markov decision process (CMDP) is defined as a tuple $(\mathcal{S}, \mathcal{A}, P, \rho, \gamma, R, \{C_i\}_{i=1}^{N})$, where $\mathcal{S}$ is a state space, $\mathcal{A}$ is an action space, $P$ is a transition model, $\rho$ is an initial state distribution, $\gamma \in [0, 1)$ is a discount factor, $R$ is a reward function, $C_i$ is a cost function, and $N$ is the number of cost functions. A policy $\pi$ is defined as a mapping from the state space to the action distribution. Then, the state distribution given $\pi$ is defined as $d^\pi(s) \propto \sum_{t=0}^{\infty} \gamma^t \Pr(s_t = s \mid \pi)$, where $s_0 \sim \rho$, $a_t \sim \pi(\cdot \mid s_t)$, and $s_{t+1} \sim P(\cdot \mid s_t, a_t)$ for $t \geq 0$. The state-action return of $\pi$ is defined as:
$$Z_R^\pi(s, a) \coloneqq \sum_{t=0}^{\infty} \gamma^t R(s_t, a_t) \quad \text{with } s_0 = s,\ a_0 = a,\ a_t \sim \pi(\cdot \mid s_t),\ s_{t+1} \sim P(\cdot \mid s_t, a_t).$$
The state return and the return are defined, respectively, as:
$$Z_R^\pi(s) \coloneqq Z_R^\pi(s, a)\big|_{a \sim \pi(\cdot \mid s)}, \qquad Z_R^\pi \coloneqq Z_R^\pi(s)\big|_{s \sim \rho}.$$
The returns for the costs, $Z_{C_i}^\pi(s, a)$, $Z_{C_i}^\pi(s)$, and $Z_{C_i}^\pi$, are defined by replacing the reward with the $i$th cost. Then, a safe RL problem can be defined as maximizing the reward return while keeping the cost returns below predefined thresholds to ensure safety.
3.2 Spectral Risk Measures
In order to reduce the likelihood of encountering the worst-case scenario, it is required to incorporate risk measures into constraints. Conditional value at risk (CVaR) is a widely used risk measure in finance [20], and it has been further developed into the spectral risk measure in [1], which provides a more comprehensive framework. A spectral risk measure with spectrum $\sigma$ is defined as:
$$\mathcal{R}_\sigma(X) \coloneqq \int_0^1 F_X^{-1}(u)\, \sigma(u)\, du,$$
where $\sigma: [0, 1] \to \mathbb{R}_{\geq 0}$ is an increasing function, $\int_0^1 \sigma(u)\, du = 1$, and $F_X^{-1}(u) \coloneqq \inf\{x \in \mathbb{R} : F_X(x) \geq u\}$ is the quantile function of the random variable $X$. By appropriately defining the function $\sigma$, various risk measures can be established, and examples are as follows:
(1) |
where represents a risk level, and the measures become risk neutral when . Furthermore, it is known that the spectral risk measure has the following dual form [7]:
(2) |
where is an increasing convex function, and is the convex conjugate function of . For example, is expressed as , where , and the integral part of the conjugate function becomes . We define the following sub-risk measure corresponding to the function as follows:
(3) |
Then, . Note that the integral part is independent of and only serves as a constant value of risk. As a result, the sub-risk measure can be expressed using the expectation, so it provides computational advantages over the original risk measure in many operations. We will utilize this advantage to show optimality of the proposed method.
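For concreteness, this dual viewpoint can be checked numerically for CVaR, whose dual takes the Rockafellar–Uryasev form: for a fixed auxiliary variable, the objective is a plain expectation, and the risk measure is recovered by minimizing over that variable. The sketch below is only an illustration; the risk-level convention, the variable names (`alpha`, `eta`), and the sampled cost distribution are assumptions rather than the paper's notation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
costs = rng.gamma(shape=2.0, scale=1.0, size=100_000)  # sampled cost returns (illustrative)
alpha = 0.25  # assumed convention: fraction of worst-case outcomes considered

# Direct estimate: mean of the worst alpha-fraction of outcomes.
var = np.quantile(costs, 1.0 - alpha)
cvar_direct = costs[costs >= var].mean()

# Dual estimate: for a fixed eta the objective is a plain expectation, and CVaR is
# recovered by minimizing over eta (Rockafellar-Uryasev form).
def dual_objective(eta):
    return eta + np.mean(np.maximum(costs - eta, 0.0)) / alpha

cvar_dual = minimize_scalar(dual_objective, bounds=(costs.min(), costs.max()),
                            method="bounded").fun
print(f"direct {cvar_direct:.4f}  dual {cvar_dual:.4f}")
```

The same structure underlies the sub-risk measure in (3): once the dual function is fixed, the risky part of the constraint is an ordinary expectation, so standard policy gradient machinery becomes applicable.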
3.3 Augmented State Spaces
As mentioned by Zhang et al. [32], a risk-constrained RL problem is non-Markovian, so a history-conditioned policy is required to solve it. Also, Bastani et al. [6] showed that an optimal history-conditioned policy can be expressed as a Markov policy defined on a newly augmented state space, which incorporates the discounted sum of costs as part of the state. Following these results, we define an augmented state as follows:
(4) |
Then, the CMDP is modified as follows:
and the policy is defined on the augmented state space.
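As a rough illustration of this kind of augmentation, the wrapper below appends the discounted sum of each cost observed so far to the observation. It is a sketch under assumed conventions (a Gymnasium-style environment with a Box observation space that reports per-step costs in `info["costs"]`), not the exact augmentation given in (4).

```python
import numpy as np
import gymnasium as gym

class CostAugmentedObservation(gym.Wrapper):
    """Append the discounted sum of costs accumulated so far to the observation."""

    def __init__(self, env, num_costs, gamma=0.99):
        super().__init__(env)
        self.num_costs = num_costs
        self.gamma = gamma
        low = np.concatenate([env.observation_space.low, np.zeros(num_costs)])
        high = np.concatenate([env.observation_space.high, np.full(num_costs, np.inf)])
        self.observation_space = gym.spaces.Box(low, high, dtype=np.float64)

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        self.discount = 1.0
        self.cost_sums = np.zeros(self.num_costs)
        return np.concatenate([obs, self.cost_sums]), info

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        # "costs" is an assumed info key; real environments may expose costs differently.
        costs = np.asarray(info.get("costs", np.zeros(self.num_costs)))
        self.cost_sums += self.discount * costs   # discounted cost accumulated so far
        self.discount *= self.gamma
        return np.concatenate([obs, self.cost_sums]), reward, terminated, truncated, info
```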
4 Proposed Method
The risk measure-constrained RL (RCRL) problem is defined as follows:
(5) |
where each constraint in (5) is bounded by its corresponding threshold. Due to the nonlinearity of the risk measure, achieving an optimal policy through policy gradient techniques can be challenging. To address this issue, we propose solving the RCRL problem by decomposing it into two separate problems. Using a feasibility function, which indicates whether the constraints are satisfied, the RCRL problem can be rewritten as follows:
(6)
where (a) is the inner problem, (b) is the outer problem, and is denoted as a dual variable. It is known that if the cost functions are bounded and there exists a policy that satisfies the constraints of (5), there exists an optimal dual variable [7], which transforms the supremum into the maximum in (6). Then, the RCRL problem can be solved by 1) finding optimal policies for various dual variables and 2) selecting the dual variable corresponding to the policy that achieves the maximum expected reward return while satisfying the constraints.
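As a toy illustration of this decomposition, suppose the inner problem has already been solved for a grid of candidate dual variables; the outer problem then reduces to selecting the feasible candidate with the highest reward. The reward and risk functions below are placeholders chosen only to make the example runnable, not quantities from the paper.

```python
import numpy as np

# For each candidate dual variable g, assume the inner problem has been solved and
# returned the achieved reward and constraint value (placeholder functions).
candidates = np.linspace(0.0, 10.0, 21)                  # candidate dual-variable grid
inner_reward = lambda g: -(g - 4.0) ** 2 + 10.0          # placeholder inner-problem reward
inner_risk = lambda g: 8.0 - g                           # placeholder constraint value
threshold = 5.0

feasible = [g for g in candidates if inner_risk(g) <= threshold]
best_g = max(feasible, key=inner_reward)                 # outer problem over feasible candidates
print(best_g, inner_reward(best_g))
```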
The inner problem is then a safe RL problem with sub-risk measure constraints. However, although the sub-risk measure is expressed using the expectation as in (3), the function inside the expectation introduces nonlinearity, which makes it difficult to develop a policy gradient method. To address this issue, we propose novel risk value functions that exhibit linearity in the performance difference between two policies. Through these risk value functions, we propose a policy update rule with convergence guarantees for the inner problem in Section 5.2.
The search space of the outer problem is a function space, rendering a direct search impractical. To address this challenge, we appropriately parameterize the function using a finite-dimensional parameter in Section 6.1. The outer problem could then be solved through a brute-force search over the parameter space, but this is computationally prohibitive, requiring time exponential in the number of constraints and the dimension of the parameter space. Consequently, it is necessary to develop an efficient method for solving the outer problem. Given the difficulty of directly identifying an optimal parameter, we instead model a distribution that estimates the probability that a given parameter is optimal. We then propose a method that ensures convergence to an optimal distribution, which deterministically samples an optimal parameter, in Section 6.2.
By integrating the proposed methods to solve the inner and outer problems, we finally introduce an RCRL algorithm called spectral-risk-constrained policy optimization (SRCPO). The pseudo-code for SRCPO is presented in Algorithm 1. In the following sections, we will describe how to solve the inner problem and the outer problem in detail.
5 Inner Problem: Safe RL with Sub-Risk Measure
In this section, we propose a policy gradient method to solve the inner problem, defined as:
(7) |
where constraints consist of the sub-risk measures, defined in (3). In the remainder of this section, we first introduce risk value functions to calculate the policy gradient of the sub-risk measures, propose a policy update rule, and finally demonstrate its convergence to an optimum in tabular settings.
5.1 Risk Value Functions
In order to derive the policy gradient of the sub-risk measure, we first define risk value functions. To this end, using the fact that and , where and is defined in the augmented state , we expand the sub-risk measure of the th cost as follows:
where is the probability density function (pdf) of a random variable . To capture the value of the sub-risk measure, we can define risk value functions as follows:
Using the convexity of , we present the following lemma on the bounds of the risk value functions:
Lemma 5.1.
Assuming that a function is differentiable, and are bounded by , and , where and .
The proof is provided in Appendix A. Lemma 5.1 means that the value of is scaled by . To eliminate this scale, we define a risk advantage function as follows:
Now, we introduce a theorem on the difference in the sub-risk measure between two policies.
Theorem 5.2.
Given two policies, and , the difference in the sub-risk measure is:
(8) |
The proof is provided in Appendix A. Since Theorem 5.2 also holds for the optimal policy, it is beneficial for showing optimality, as done in policy gradient algorithms of traditional RL [4]. Additionally, to calculate the advantage function, we use a distributional critic instead of directly estimating the risk value functions. This process is described in detail in Section 7.
5.2 Policy Update Rule
In this section, we derive the policy gradient of the sub-risk measure and introduce a policy update rule to solve the inner problem. Before that, we first parameterize the policy as , where , and denote the objective function as and the th constraint function as . Then, using (8) and the fact that , the policy gradient of the th constraint function is derived as follows:
(9) |
Now, we propose a policy update rule as follows:
where is a learning rate, is Fisher information matrix, is Moore-Penrose inverse matrix of , are weights, and for and when the constraints are not satisfied. There are various possible strategies for determining and , which makes the proposed method generalize many primal approach-based safe RL algorithms [2, 25, 15]. Several options for determining and , as well as our strategy, are described in Appendix B.
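A minimal sketch of such an update step is given below, assuming the objective and constraint gradients and an empirical Fisher matrix have already been estimated; the specific weight choices are illustrative stand-ins for the strategies discussed in Appendix B, and all array shapes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 6

# Hypothetical quantities produced elsewhere: objective gradient, one gradient per
# constraint, and an empirical Fisher information matrix of the policy.
g_obj = rng.normal(size=dim)
g_costs = rng.normal(size=(2, dim))
fisher = np.cov(rng.normal(size=(dim, 256)))          # stand-in for the empirical Fisher

def natural_update_direction(g_obj, g_costs, fisher, w_obj, w_costs):
    """Weighted combination of objective/constraint gradients, preconditioned by F^+."""
    combined = w_obj * g_obj - g_costs.T @ w_costs    # ascend objective, descend costs
    return np.linalg.pinv(fisher) @ combined          # Moore-Penrose pseudo-inverse of F

# Constraints satisfied: put all weight on the objective (one simple choice).
direction_cs = natural_update_direction(g_obj, g_costs, fisher, 1.0, np.zeros(2))
# Constraints violated: ignore the objective and reduce the violated constraints.
direction_cv = natural_update_direction(g_obj, g_costs, fisher, 0.0, np.array([0.7, 0.3]))

theta = rng.normal(size=dim)
theta += 1e-2 * direction_cs                          # one gradient ascent step
```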
5.3 Convergence Analysis
We analyze the convergence of the proposed policy update rule in a tabular setting where both the augmented state space and action space are finite. To demonstrate convergence in this setting, we use the softmax policy parameterization [4] as follows:
Then, we can show that the policy converges to an optimal policy if updated by the proposed method.
Theorem 5.3.
Assume that there exists a feasible policy such that for , where . Then, if a policy is updated by the proposed update rule with a learning rate that follows the Robbins-Monro condition [19], it will converge to an optimal policy of the inner problem.
The proof is provided in Appendix A.1, and the assumption of the existence of a feasible policy is commonly used in convergence analyses in safe RL [12, 5, 15]. Through this result, we can also demonstrate convergence of existing primal approach-based safe RL methods, such as CPO [2], PCPO [27], and P3O [31], by identifying the proper weight strategies for these methods (note that these safe RL algorithms are designed to handle expectation-based constraints, so they are not suitable for RCRL).
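For reference, a minimal tabular softmax policy of the kind assumed in this analysis can be implemented as follows; augmented states are treated as plain integer indices here, which is an implementation convenience rather than the paper's exact setup.

```python
import numpy as np

class SoftmaxTabularPolicy:
    """pi_theta(a|s) proportional to exp(theta[s, a]) over finite state/action spaces."""

    def __init__(self, num_states, num_actions, seed=0):
        self.theta = np.zeros((num_states, num_actions))
        self.rng = np.random.default_rng(seed)

    def probs(self, s):
        logits = self.theta[s] - self.theta[s].max()   # stabilize the softmax
        e = np.exp(logits)
        return e / e.sum()

    def sample(self, s):
        return self.rng.choice(len(self.theta[s]), p=self.probs(s))

    def log_prob_grad(self, s, a):
        """Gradient of log pi(a|s) w.r.t. theta[s, :]: one-hot(a) - pi(.|s)."""
        grad = -self.probs(s)
        grad[a] += 1.0
        return grad
```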
6 Outer Problem: Dual Optimization for Spectral Risk Measure
[Figure 1: Illustration of the discretization of the spectrum function.]
In this section, we propose a method for solving the outer problem that can be performed concurrently with the policy update of the inner problem. To this end, since the domain of the outer problem is a function space, we first introduce a procedure to parameterize the functions, allowing the problem to be practically solved. Subsequently, we propose a method to find an optimal solution for the parameterized outer problem.
6.1 Parameterization
Searching the function space directly is challenging. To address this issue, we discretize the spectrum function, as illustrated in Figure 1, since the discretization allows it to be expressed using a finite number of parameters. The discretized spectrum is then formulated as:
(10) |
where , , and is the number of discretizations. The proper values of and can be achieved by minimizing the norm of the function space, expressed as:
(11) |
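As an illustration of this discretization, the sketch below approximates an example spectrum by a step function. The exponential spectrum and the evenly spaced breakpoints are assumptions made for simplicity; (11) additionally optimizes the breakpoints, whereas with fixed breakpoints the L2-optimal step heights are simply the interval means of the spectrum.

```python
import numpy as np

lam = 3.0
# Example spectrum (exponential spectral risk measure); increasing and integrates to 1.
sigma = lambda u: lam * np.exp(lam * u) / (np.exp(lam) - 1.0)

M = 5                                     # number of discretization steps
edges = np.linspace(0.0, 1.0, M + 1)      # fixed, evenly spaced breakpoints for simplicity
u = np.linspace(0.0, 1.0, 100_001)

# With the breakpoints fixed, the L2-optimal piecewise-constant approximation takes
# the mean of sigma on each interval as the step height (still increasing, still sums to 1).
heights = np.array([sigma(u[(u >= lo) & (u < hi)]).mean()
                    for lo, hi in zip(edges[:-1], edges[1:])])

sigma_disc = heights[np.clip(np.searchsorted(edges, u, side="right") - 1, 0, M - 1)]
l2_error = np.sqrt(np.mean((sigma(u) - sigma_disc) ** 2))
print(heights.round(3), f"L2 error {l2_error:.4f}")
```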
Now, we can show that the difference between the original risk measure and its discretized version is bounded by the inverse of the number of discretizations.
Lemma 6.1 (Approximation Error).
The difference between the original risk measure and its discretized version is:
The proof is provided in Appendix A. Lemma 6.1 shows that as the number of discretizations increases, the discretized risk measure converges to the original one. Note that the discretized version is also a spectral risk measure, indicating that the discretization process projects the original risk measure onto a specific class of spectral risk measures. Now, we introduce a parameterization method as detailed in the following theorem:
Theorem 6.2.
Let us parameterize the function using a parameter as:
where for . Then, the following is satisfied:
According to Remark 2.7 in [7], the infimum function in (2) exists and can be expressed as an integral of the spectrum function if the reward and cost functions are bounded. By using this fact, Theorem 6.2 can be proved, and details are provided in Appendix A. Through this parameterization, the RCRL problem is now expressed as follows:
(12) |
where is a parameter of the th constraint. In the remainder of the paper, we denote the constraint function as , where , and the policy for as .
6.2 Optimization
To obtain the supremum of in (12), a brute-force search can be used, but it demands exponential time relative to the dimension of . Alternatively, can be directly optimized via gradient descent; however, due to the difficulty in confirming the convexity of the outer problem, optimal convergence cannot be assured. To resolve this issue, we instead propose to find a distribution on , called a sampler , that outputs the likelihood that a given is optimal.
For detailed descriptions of the sampler , the probability of sampling is expressed as , where is the th element of , and is . The sampling process is denoted by , where each component is sampled according to . Implementation of this sampling process is similar to the stick-breaking process [23] due to the condition for . Initially, is sampled within , and subsequent values are sampled within . Then, our target is to find the following optimal distribution:
(13) |
where is an optimal solution of the outer problem (12). Once we obtain the optimal distribution, can be achieved by sampling from .
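A minimal sketch of the ordered sampling process described above is shown below, with uniform draws standing in for the learned per-component distributions; the upper bound `eta_max` and the increasing ordering are assumptions based on the description, not the paper's exact implementation.

```python
import numpy as np

def sample_ordered_eta(num_components, eta_max, rng):
    """Sample an increasing sequence 0 <= eta_1 <= ... <= eta_M <= eta_max,
    one component at a time, each restricted to lie above the previous one."""
    etas = []
    lower = 0.0
    for _ in range(num_components):
        value = rng.uniform(lower, eta_max)   # uniform stand-in for the learned sampler
        etas.append(value)
        lower = value                          # next component may not go below this one
    return np.array(etas)

rng = np.random.default_rng(0)
print(sample_ordered_eta(num_components=5, eta_max=10.0, rng=rng))
```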
In order to obtain the optimal distribution, we propose a novel loss function and update rule. Before that, let us parameterize the sampler as , where , and define the following function:
It is known that for sufficiently large , the optimal policy of the inner problem, , is also the solution of , which means [31]. Using this fact, we build a target distribution for the sampler as: , where is the policy for at time-step . Then, we define a loss function using the cross-entropy () as follows:
Finally, we present an update rule along with its convergence property in the following theorem.
Theorem 6.3.
Let us assume that the space of is finite and update the sampler according to the following equation:
(14) |
where is a learning rate, and is the pseudo-inverse of Fisher information matrix of . Then, under a softmax parameterization, the sampler converges to an optimal distribution defined in (13).
The proof is provided in Appendix A. Since the loss function depends on the current policy, the sampler can be updated simultaneously with the policy.
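To make the mechanism concrete, the sketch below updates a tabular softmax sampler toward a softmax target built from per-candidate scores. The scores, the temperature, and the plain-gradient update are illustrative assumptions; the actual rule (14) preconditions the gradient with the Fisher pseudo-inverse and uses the target distribution defined above.

```python
import numpy as np

# Hypothetical scores: estimated reward of the policy trained for each candidate,
# with an infeasible candidate pushed down by a large penalty.
scores = np.array([1.0, 1.8, 0.6, -50.0])    # last candidate violates its constraints
temperature = 5.0                            # plays the role of a large inverse temperature

theta = np.zeros_like(scores)                # softmax sampler parameters
lr = 0.5

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

for step in range(200):
    p = softmax(theta)                        # current sampler distribution
    q = softmax(temperature * scores)         # target distribution built from scores
    # Gradient of the cross-entropy H(q, p_theta) w.r.t. theta is (p - q) under the
    # softmax parameterization; (14) uses a natural-gradient variant of this step.
    theta -= lr * (p - q)

print(np.round(softmax(theta), 3))            # mass concentrates on the best feasible candidate
```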
7 Practical Implementation
In order to calculate the policy gradient defined in (9), the risk value functions must be estimated. Instead of modeling them directly, we use quantile distributional critics [11], which approximate the pdf of the return using Dirac delta functions and are parameterized by network parameters. Then, the risk value function can be approximated as:
and the risk advantage function can then be obtained from these estimates. The distributional critics are trained to minimize the quantile regression loss [11], and details are provided in Appendix C. Additionally, to implement the sampler, we use a truncated normal distribution [9] over the sampling range. Then, the sampling process can be implemented as follows:
Additional details of the practical implementation are provided in Appendix C.
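For instance, a single component of the dual-variable parameter could be drawn from a truncated normal distribution with SciPy as sketched below; the mean, standard deviation, and bounds are illustrative values, not the ones used in the experiments.

```python
import numpy as np
from scipy.stats import truncnorm

def sample_truncated_normal(mean, std, low, high, rng):
    """Draw one sample from a normal(mean, std) truncated to [low, high]."""
    a, b = (low - mean) / std, (high - mean) / std   # bounds in standardized units
    return truncnorm.rvs(a, b, loc=mean, scale=std, random_state=rng)

rng = np.random.default_rng(0)
# Illustrative values: sampler mean 2.0, fixed std 1.0, sampling range [1.0, 10.0].
print(sample_truncated_normal(mean=2.0, std=1.0, low=1.0, high=10.0, rng=rng))
```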
8 Experiments and Results
Tasks. The experiments are conducted on the Safety Gymnasium tasks [14] with a single constraint and the legged robot locomotion tasks [15] with multiple constraints. In the Safety Gymnasium, two robots—point and car—are used to perform two tasks: a goal task, which involves controlling the robot to reach a target location, and a button task, which involves controlling the robot to press a designated button. In these tasks, a cost is incurred when the robot collides with an obstacle. In the legged robot locomotion tasks, bipedal and quadrupedal robots are controlled to track a target velocity while satisfying three constraints related to body balance, body height, and foot contact timing. For more details, please refer to Appendix D.
Baselines. Many RCRL algorithms use CVaR to define risk constraints [32, 15, 29]. Accordingly, the proposed method is evaluated and compared with other algorithms under the CVaR constraints with . Experiments on other risk measures are performed in Section 8.1. The baseline algorithms are categorized into three types based on their approach to estimating risk measures. First, CVaR-CPO [32] and CPPO [29] utilize auxiliary variables to estimate CVaR. Second, WCSAC-Dist [26] and SDPO [30] approximate the risk measure with an expected value . Finally, SDAC [15] approximates as a Gaussian distribution and uses the mean and standard deviation of to estimate CVaR. The hyperparameters and network structure of each algorithm are detailed in Appendix D. Note that both CVaR-CPO and the proposed method, SRCPO, use the augmented state space. However, for a fair comparison with other algorithms, the policy and critic networks of these methods are modified to operate on the original state space.
[Figure 2: Training curves on the legged robot locomotion tasks.]
[Figure 3: Training curves on the Safety Gymnasium tasks.]
Results. Figures 2 and 3 show the training curves for the locomotion tasks and the Safety Gymnasium tasks, respectively. The reward sum in the figures refers to the sum of rewards within an episode, while the cost rate is calculated as the sum of costs divided by the episode length. In all tasks, the proposed method achieves the highest rewards among the methods whose cost rates remain below the specified thresholds. These results are likely attributable to the fact that only the proposed method guarantees optimality. Specifically, in the locomotion tasks, an initial policy often struggles to keep the robot balanced, resulting in high costs from the start. Because of this, training in these tasks is more susceptible to falling into local optima than in the other tasks, which enables the proposed method to outperform the baseline methods. Note that WCSAC-Dist shows the highest rewards in the Safety Gymnasium tasks, but its cost rates exceed the specified thresholds. This issue seems to arise from its approach to estimating risk constraints: WCSAC-Dist estimates the constraints using the expected state-wise risk, which is lower than the original risk measure, leading to constraint violations.
8.1 Study on Various Risk Measures
[Figure 4: Evaluation results on the point goal task for various risk measures and risk levels. Left: reward and cost rate trends; Middle: percentile intervals of the cost; Right: cost distributions across risk levels.]
In this section, we analyze the results when constraints are defined using various risk measures. To this end, we train policies in the point goal task under constraints based on the and risk measures defined in (2), as well as the Wang risk measure [24]. Although the Wang risk measure is not a spectral but a distortion risk measure, our parameterization method introduced in Section 6.1 enables it to be approximated as a spectral risk measure, and the visualization of this process is provided in Appendix E. We conduct experiments with three risk levels for each risk measure and set the constraint threshold as . Evaluation results are presented in Figure 4, and training curves are provided in Appendix E. Figure 4 (Right) shows intuitive results that increasing the risk level effectively reduces the likelihood of incurring high costs. Similarly, Figure 4 (Left) presents the trend across all risk measures, indicating that higher risk levels correspond to lower cost rates and decreased reward performance. Finally, Figure 4 (Middle) exhibits the differences in how each risk measure addresses worst-case scenarios. In the spectrum formulation defined in (1), applies a uniform penalty to the tail of the cost distribution above a specified percentile, whereas measures like and impose a heavier penalty at higher percentiles. As a result, because imposes a relatively milder penalty on the worst-case outcomes compared to other risk measures, it shows the largest intervals between the th and th percentiles.
9 Conclusions and Limitations
In this work, we introduced a spectral-risk-constrained RL algorithm that ensures convergence and optimality in a tabular setting. Specifically, for the inner problem, we proposed a generalized policy update rule that can facilitate the development of new safe RL algorithms with convergence guarantees. For the outer problem, we introduced a notion called the sampler, which enhances training efficiency by being trained concurrently with the inner problem. Through experiments on continuous control tasks, we empirically demonstrated the superior performance of the proposed method and its capability to handle various risk measures. However, convergence to an optimum is shown only in the tabular setting, so future research may focus on extending these results to linear MDPs or function approximation settings. Furthermore, since our approach can also be applied to risk-sensitive RL, future work may implement the proposed method in that area.
References
- Acerbi [2002] Carlo Acerbi. Spectral measures of risk: A coherent representation of subjective risk aversion. Journal of Banking & Finance, 26(7):1505–1518, 2002.
- Achiam et al. [2017] Joshua Achiam, David Held, Aviv Tamar, and Pieter Abbeel. Constrained policy optimization. In Proceedings of International Conference on Machine Learning, pages 22–31, 2017.
- Adjei et al. [2024] Patrick Adjei, Norman Tasfi, Santiago Gomez-Rosero, and Miriam AM Capretz. Safe reinforcement learning for arm manipulation with constrained Markov decision process. Robotics, 13(4):63, 2024.
- Agarwal et al. [2021] Alekh Agarwal, Sham M Kakade, Jason D Lee, and Gaurav Mahajan. On the theory of policy gradient methods: Optimality, approximation, and distribution shift. Journal of Machine Learning Research, 22(98):1–76, 2021.
- Bai et al. [2022] Qinbo Bai, Amrit Singh Bedi, Mridul Agarwal, Alec Koppel, and Vaneet Aggarwal. Achieving zero constraint violation for constrained reinforcement learning via primal-dual approach. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 3682–3689, 2022.
- Bastani et al. [2022] Osbert Bastani, Jason Yecheng Ma, Estelle Shen, and Wanqiao Xu. Regret bounds for risk-sensitive reinforcement learning. In Advances in Neural Information Processing Systems, pages 36259–36269, 2022.
- Bäuerle and Glauner [2021] Nicole Bäuerle and Alexander Glauner. Minimizing spectral risk measures applied to Markov decision processes. Mathematical Methods of Operations Research, 94(1):35–69, 2021.
- Bellemare et al. [2017] Marc G. Bellemare, Will Dabney, and Rémi Munos. A distributional perspective on reinforcement learning. In Proceedings of International Conference on Machine Learning, pages 449–458, 2017.
- Burkardt [2014] John Burkardt. The truncated normal distribution. Department of Scientific Computing Website, Florida State University, 1(35):58, 2014.
- Chow et al. [2017] Yinlam Chow, Mohammad Ghavamzadeh, Lucas Janson, and Marco Pavone. Risk-constrained reinforcement learning with percentile risk criteria. Journal of Machine Learning Research, 18(1):6070–6120, 2017.
- Dabney et al. [2018] Will Dabney, Mark Rowland, Marc Bellemare, and Rémi Munos. Distributional reinforcement learning with quantile regression. In Proceedings of the AAAI Conference on Artificial Intelligence, 2018.
- Ding et al. [2020] Dongsheng Ding, Kaiqing Zhang, Tamer Basar, and Mihailo Jovanovic. Natural policy gradient primal-dual method for constrained Markov decision processes. In Advances in Neural Information Processing Systems, pages 8378–8390, 2020.
- Haarnoja et al. [2018] Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In Proceedings of International Conference on Machine Learning, pages 1861–1870, 2018.
- Ji et al. [2023] Jiaming Ji, Borong Zhang, Xuehai Pan, Jiayi Zhou, Juntao Dai, and Yaodong Yang. Safety-gymnasium. GitHub repository, 2023.
- Kim et al. [2023] Dohyeong Kim, Kyungjae Lee, and Songhwai Oh. Trust region-based safe distributional reinforcement learning for multiple constraints. In Advances in Neural Information Processing Systems, 2023.
- Kim et al. [2024] Dohyeong Kim, Mineui Hong, Jeongho Park, and Songhwai Oh. Scale-invariant gradient aggregation for constrained multi-objective reinforcement learning. arXiv preprint arXiv:2403.00282, 2024.
- Liu et al. [2024] Puze Liu, Haitham Bou-Ammar, Jan Peters, and Davide Tateo. Safe reinforcement learning on the constraint manifold: Theory and applications. arXiv preprint arXiv:2404.09080, 2024.
- Liu et al. [2020] Yongshuai Liu, Jiaxin Ding, and Xin Liu. IPO: Interior-point policy optimization under constraints. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 4940–4947, 2020.
- Robbins and Monro [1951] Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407, 1951.
- Rockafellar et al. [2000] R Tyrrell Rockafellar, Stanislav Uryasev, et al. Optimization of conditional value-at-risk. Journal of Risk, 2:21–42, 2000.
- Schulman et al. [2015] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In Proceedings of International Conference on Machine Learning, pages 1889–1897. PMLR, 2015.
- Schulman et al. [2017] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
- Sethuraman [1994] Jayaram Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, pages 639–650, 1994.
- Wang [2000] Shaun S Wang. A class of distortion operators for pricing financial and insurance risks. Journal of Risk and Insurance, pages 15–36, 2000.
- Xu et al. [2021] Tengyu Xu, Yingbin Liang, and Guanghui Lan. CRPO: A new approach for safe reinforcement learning with convergence guarantee. In Proceedings of International Conference on Machine Learning, pages 11480–11491, 2021.
- Yang et al. [2023] Qisong Yang, Thiago D Simão, Simon H Tindemans, and Matthijs TJ Spaan. Safety-constrained reinforcement learning with a distributional safety critic. Machine Learning, 112(3):859–887, 2023.
- Yang et al. [2020] Tsung-Yen Yang, Justinian Rosca, Karthik Narasimhan, and Peter J. Ramadge. Projection-based constrained policy optimization. In Proceedings of International Conference on Learning Representations, 2020.
- Yang et al. [2022] Tsung-Yen Yang, Tingnan Zhang, Linda Luu, Sehoon Ha, Jie Tan, and Wenhao Yu. Safe reinforcement learning for legged locomotion. In Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2454–2461, 2022.
- Ying et al. [2022] Chengyang Ying, Xinning Zhou, Hang Su, Dong Yan, Ning Chen, and Jun Zhu. Towards safe reinforcement learning via constraining conditional value-at-risk. In Proceedings of International Joint Conference on Artificial Intelligence, 2022.
- Zhang and Weng [2022] Jianyi Zhang and Paul Weng. Safe distributional reinforcement learning. In Proceedings of Distributed Artificial Intelligence, pages 107–128, 2022.
- Zhang et al. [2022] Linrui Zhang, Li Shen, Long Yang, Shixiang Chen, Bo Yuan, Xueqian Wang, and Dacheng Tao. Penalized proximal policy optimization for safe reinforcement learning. arXiv preprint arXiv:2205.11814, 2022.
- Zhang et al. [2024] Qiyuan Zhang, Shu Leng, Xiaoteng Ma, Qihan Liu, Xueqian Wang, Bin Liang, Yu Liu, and Jun Yang. CVaR-constrained policy optimization for safe reinforcement learning. IEEE Transactions on Neural Networks and Learning Systems, 2024.
Appendix A Proofs
See 5.1
Proof.
Since ,
when or . Using the fact that is an increasing and convex function,
Likewise, also satisfies the above property. As a result,
Then, the maximum of is . Since and , where is the current time step,
Consequently,
∎
See 5.2
Proof.
See 6.1
Proof.
See 6.2
Proof.
Since is an increasing convex function, . In addition, according to Remark 2.7 in [1], there exists such that , and its derivative is expressed as . Using this fact, we can formulate by integrating its derivative as follows:
where is a constant. Since for any constant , has the same value due to the integral part of in (3), we set as zero. Also, we can express as , where is due to the definition of . As a result, the following is satisfied:
∎
See 6.3
Proof.
Since we assume that the space of is finite, the sampler can be expressed using the softmax parameterization as follows:
If we update the parameter of the sampler as described in (14), the following is satisfied as in the natural policy gradient:
Furthermore, due to Theorem 5.3, if a policy is updated by the proposed method, the policy converges to an optimal policy . Then, if is not optimal,
since . As a result,
Thus, converges to an optimal sampler. ∎
A.1 Proof of Theorem 5.3
Our derivation is based on the policy gradient works by Agarwal et al. [4] and Xu et al. [25]. Due to the softmax policy parameterization, the following is satisfied:
(18) |
where . By the natural policy gradient,
(19) |
Before proving Theorem 5.3, we first present several useful lemmas.
Lemma A.1.
is non-negative.
Proof.
Due to the advantage functions in (19), . Using the fact that a logarithmic function is concave,
As a result, . ∎
Lemma A.2.
.
Proof.
Using Pinsker's inequality,
where (i) follows Lemma A.1, and (ii) follows the fact that . As a result,
∎
See 5.3
Proof.
We divide the analysis into two cases: 1) when the constraints are satisfied and 2) when the constraints are violated. First, let us consider the case where the constraints are satisfied. In this case, the policy is updated as follows:
(20) |
Using Theorem 5.2,
(21)
where (i) follows Hölder’s inequality, and (ii) follows Lemma A.2. Using (18) and (20),
(22)
where (i) is from the fact that . By combining (21) and (22), the following is satisfied for any :
(23) |
Then, the following is satisfied for any policy :
(24)
where (i) follows (23) by substituting into . Now, let us consider the second case where constraints are violated. In this case, the policy is updated as follows:
(25) |
As (21) derived in the first case,
(26)
(27)
By combining (26) and (27), the following is satisfied for any :
(28) |
As in (24), the following is satisfied for any policy :
(29)
Now, the main inequalities for both cases have been derived. Let us denote by the set of time steps in which the constraints are not violated. This means that if , . Then, using (24) and (29), we have:
(30)
If and , and . Thus, . By substituting into in (30),
For brevity, we denote . Since , (30) can be modified as follows:
It can be simplified as follows:
If is set with positive values in all actions, is bounded for . Additionally, due to the Robbins-Monro condition, RHS converges to some real number as goes to infinity. Since in LHS goes to infinity, must go to minus infinity. It means that since . By substituting into in (30),
Since for , the above equation is rewritten as follows:
Since and the Robbins-Monro condition is satisfied, the following must be held:
where is the th element of . Consequently, the policy converges to an optimal policy. ∎
Appendix B Weight Selection Strategy
In this section, we first identify and for existing primal approach-based safe RL algorithms and present our strategy. For the existing methods, we analyze several safe RL algorithms including constrained policy optimization (CPO, [2]), projection-based constrained policy optimization (PCPO, [27]), constraint-rectified policy optimization (CRPO, [25]), penalized proximal policy optimization (P3O, [31]), interior-point policy optimization (IPO, [18]), and safe distributional actor-critic (SDAC, [15]). Since the policy gradient is obtained differently depending on whether the constraints are satisfied (CS) or violated (CV), we analyze the existing methods for each of the two cases.
Case 1: Constraints are satisfied (CS). First, CPO [2] and SDAC [15] find a direction of the policy gradient by solving the following linear programming with linear and quadratic constraints (LQCLP):
where is a trust region size. The LQCLP can be addressed by solving the following dual problem:
where . Then, the policy is updated in the following direction:
(31) |
which results in . P3O [31], PCPO [27], and CRPO [25] update the policy only using the objective function in this case as follows:
IPO [18] updates the policy in the following direction:
where is a penalty coefficient. As a result, is computed as .
Case 2: Constraints are violated (CV). CPO, PCPO, and IPO do not handle constraint violations in settings with multiple constraints, so we exclude them from this case. First, SDAC finds a recovery direction by solving the following quadratic program (QP):
where we will denote by . The QP can be addressed by solving the following dual problem:
where . Then, the policy is updated in the following direction:
If there is only a single constraint, PCPO is compatible with SDAC. Next, CRPO first selects a violated constraint, whose index denoted as , and calculates the policy gradient to minimize the selected constraint as follows:
If there is only a single constraint, CPO is compatible with CRPO. Finally, P3O updates the policy in the following direction:
which results in , and if else .
Implementation of trust region method. To this point, we have obtained the direction of the policy gradient , allowing the policy to be updated to . If we want to update the policy using a trust region method, as introduced in TRPO [21], we can adjust the learning rate as follows:
(32) |
where the trust region size follows the Robbins-Monro condition and the remaining terms are hyperparameters. By adjusting the learning rate, the policy can be updated mostly within the trust region while the learning rate still satisfies the Robbins-Monro condition. Accordingly, we also need to modify the weights for the CS case and for the CV case, as they are expressed with the learning rate. We explain this modification process using SDAC as an example. The direction of the policy gradient of SDAC for the CS case is expressed as follows:
where in (31) is replaced with . The learning rate is then calculated using and (32). Now, we need to express as . To this end, we can use , where . Consequently, is expressed as follows:
and the maximum of and is adjusted to .
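A sketch of this kind of learning-rate adjustment is given below, using the standard TRPO-style step-size cap that keeps the quadratic KL approximation within the trust region; the exact form of (32) and the additional Robbins-Monro decay used in the paper are beyond this sketch, and all inputs are illustrative.

```python
import numpy as np

def trust_region_step_size(direction, fisher, delta):
    """Largest step size such that 0.5 * (lr*d)^T F (lr*d) <= delta (TRPO-style cap)."""
    quad = float(direction @ fisher @ direction)
    return np.sqrt(2.0 * delta / max(quad, 1e-12))

rng = np.random.default_rng(0)
dim = 6
fisher = np.cov(rng.normal(size=(dim, 256)))    # stand-in empirical Fisher matrix
direction = rng.normal(size=dim)                # update direction from the rules above
delta = 0.01                                    # trust region size

lr = trust_region_step_size(direction, fisher, delta)
theta_new = rng.normal(size=dim) + lr * direction
```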
Proposed strategy. Now, we introduce the proposed strategy for determining and . We first consider the CV case. Kim et al. [15] empirically showed that processing multiple constraints simultaneously at each update step effectively projects the current policy onto a feasible set of policies. Therefore, we decide to use the same strategy as SDAC, which deals with constraints simultaneously, and it is written as follows:
where and . Next, let us consider the CS case. We hypothesize that concurrently addressing both constraints and objectives when calculating the policy gradient can lead to more stable constraint satisfaction. Therefore, CPO, SDAC, and IPO can be suitable candidates. However, CPO and SDAC methods require solving the LQCLP whose dual problem is nonlinear, and IPO can introduce computational errors when is close to . To address these challenges, we propose to calculate the policy gradient direction by solving a QP instead of an LQCLP. By applying a concept of transforming objectives into constraints [16] to the safe RL problem, we can formulate the following QP:
where . According to [16], represents the maximum value of when is within the trust region, . Therefore, the transformed constraint, , has the same effect of improving the objective function within the trust region. The QP is then addressed by solving the following dual problem:
where . Then, the policy is updated in the following direction:
and the learning rate is calculated using (32) to ensure that the policy is updated within the trust region. Finally, is expressed as , where .
Appendix C Implementation Details
In this section, we present a practical algorithm that utilizes function approximators, such as neural networks, to tackle more complex tasks. The comprehensive procedure is presented in Algorithm 2.
First, it is necessary to train individual policies for various , so we model a policy network to be conditioned on , expressed as . This network structure enables the implementation of policies with various behaviors by conditioning the policy on different values. In order to train this -conditioned policy across a broad range of the space, a -greedy strategy can be used. At the beginning of each episode, is sampled uniformly with a probability of ; otherwise, it is sampled using the sampler . In our implementation, we did not use the uniform sampling process by setting to reduce fine-tuning efforts, but it can still be used to enhance practical performance.
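The episode-level sampling described above might look like the following sketch, where the sampling range, the epsilon value, and the placeholder sampler are all illustrative assumptions; the paper effectively disables the uniform branch.

```python
import numpy as np

def choose_beta(sampler_fn, beta_low, beta_high, epsilon, rng):
    """At the start of an episode, draw beta uniformly with probability epsilon,
    otherwise draw it from the learned sampler."""
    if rng.random() < epsilon:
        return rng.uniform(beta_low, beta_high)   # uniform exploration over the beta range
    return sampler_fn()                           # exploit the current sampler

rng = np.random.default_rng(0)
sampler_fn = lambda: float(np.clip(2.0 + 0.5 * rng.standard_normal(), 0.0, 10.0))
beta = choose_beta(sampler_fn, beta_low=0.0, beta_high=10.0, epsilon=0.1, rng=rng)
print(beta)
```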
After collecting rollouts with the sampled , the critic, policy, and sampler need to be updated. For the critic, a distributional target is calculated using the TD() method [15], and we update the critic networks to minimize the quantile regression loss [11], defined as follows:
where , and is the target distribution of . The policy gradient for a specific can be calculated according to the update rule described in Section 5.2. Consequently, the policy is updated using the policy gradient calculated with and rollouts, which are extracted from the replay buffer. Finally, the gradient of the sampler for a given can be calculated as in (14), so the sampler is updated with the average of gradients, which are calculated across multiple .
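As a reference point, the quantile regression loss with Huber smoothing from QR-DQN [11] can be written as below; the predicted quantiles and target samples are synthetic, whereas in the paper the targets come from the distributional TD(λ) procedure.

```python
import numpy as np

def quantile_huber_loss(pred_quantiles, target_samples, kappa=1.0):
    """Quantile regression loss with Huber smoothing, as in QR-DQN [11]."""
    n = len(pred_quantiles)
    taus = (np.arange(n) + 0.5) / n                        # quantile midpoints
    diff = target_samples[None, :] - pred_quantiles[:, None]
    huber = np.where(np.abs(diff) <= kappa,
                     0.5 * diff ** 2,
                     kappa * (np.abs(diff) - 0.5 * kappa))
    weight = np.abs(taus[:, None] - (diff < 0).astype(float))
    return float((weight * huber / kappa).mean())

pred = np.linspace(0.0, 2.0, 25)                           # 25 predicted quantiles
targets = np.random.default_rng(0).normal(1.0, 0.5, 64)    # samples from a target distribution
print(round(quantile_huber_loss(pred, targets), 4))
```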
Appendix D Experimental Details
In this section, we provide details on tasks, network structure, and hyperparameter settings.
[Figure 5: Rendered images of the legged robot locomotion and Safety Gymnasium tasks.]
D.1 Task Description
There are two environments, legged robot locomotion [15] and Safety Gymnasium [14], whose rendered images are presented in Figure 5.
Legged Robot Locomotion. The legged robot locomotion tasks aim to control bipedal and quadrupedal robots to maintain a specified target velocity, which consists of linear velocity in the and directions and rotational velocity in the direction, without falling. Each robot takes actions as PD target joint positions at a frequency of 50 Hz, and a PD controller in each joint operates at 500 Hz. The dimension of the action space corresponds to the number of motors: ten for the bipedal robot and 12 for the quadrupedal robot. The state space includes the command, gravity vector, current joint positions and velocities, foot contact phase, and a history of the joint positions and velocities. The reward function is defined as the negative of the squared Euclidean distance between the current velocity and the target velocity. There are three cost functions to prevent robots from falling. The first cost is to balance the robot, which is formulated as , where is the gravity vector. This function is activated when the base frame of the robot is tilted more than degrees. The second cost is to maintain the height of the robot above a predefined threshold, which is expressed as , where is the height of the robot, and is set as in the bipedal robot and in the quadrupedal robot. The third cost is to match the foot contact state with the desired timing. It is defined as the average number of feet that do not touch the floor at the desired time. We set the thresholds for these cost functions as , and , respectively, where converts the threshold value from an average time horizon scale to an infinite time horizon scale.
Safety Gymnasium. We use the goal and button tasks in the Safety Gymnasium [14], and both tasks employ point and car robots. The goal task is to control the robot to navigate to a target goal position without passing through hazard areas. The button task is to control the robot to press a designated button among several buttons without passing through hazard areas or hitting undesignated buttons. The dimension of the action space is two for all tasks. The state space dimensions are 60, 72, 76, and 88 for the point goal, car goal, point button, and car button tasks, respectively. For more details on the actions and states, please refer to [14]. The reward function is defined to reduce the distance between the robot and either the goal or the designated button, with a bonus awarded at the moment of task completion. When the robot traverses a hazard area or collides with an obstacle, it incurs a cost of one, and a threshold is imposed on this cost.
D.2 Network Structure
Across all algorithms, we use the same network architecture for the policy and critics. The policy network is structured to output the mean and standard deviation of a normal distribution, which is then squashed using the tanh function as done in SAC [13]. The critic network is based on the quantile distributional critic [11]. Details of these structures are presented in Table 1. For the sampler, we define a trainable variable and represent the mean of the truncated normal distribution as a function of it. To reduce fine-tuning efforts, we fix the standard deviation of the truncated normal distribution to a constant.
| | Parameter | Value |
| Policy network | Hidden layer | (512, 512) |
| | Activation | LeakyReLU |
| | Last activation | Linear |
| Reward critic network | Hidden layer | (512, 512) |
| | Activation | LeakyReLU |
| | Last activation | Linear |
| | # of quantiles | 25 |
| | # of ensembles | 2 |
| Cost critic network | Hidden layer | (512, 512) |
| | Activation | LeakyReLU |
| | Last activation | SoftPlus |
| | # of quantiles | 25 |
| | # of ensembles | 2 |
D.3 Hyperparameter Settings
The hyperparameter settings for all algorithms applied to all tasks are summarized in Table 2.
SRCPO (Ours) | CPPO [29] | CVaR-CPO [32] | SDAC [15] | SDPO [30] | WCSAC-D [26] | |
Discount factor | ||||||
Policy learning rate | - | - | ||||
Critic learning rate | ||||||
# of target quantiles | - | |||||
Coefficient of TD() | - | |||||
Soft update ratio | - | - | - | - | - | |
Batch size | ||||||
Steps per update | ||||||
# of policy update iterations | - | |||||
# of critic update iterations | ||||||
Trust region size | - | |||||
Line search decay | - | - | - | - | - | |
PPO Clip ratio [22] | - | - | - | |||
Length of replay buffer | ||||||
Learning rate for auxiliary variables | - | - | - | - | ||
Learning rate for Lagrange multipliers | - | - | - | |||
Log penalty coefficient | - | - | - | - | - | |
Entropy threshold | - | - | - | - | - | |
Entropy learning rate | - | - | - | - | - | |
Entropy coefficient | - | - | - | |||
Sampler learning rate | - | - | - | - | - | |
for sampler target | - | - | - | - | - |
Appendix E Additional Experimental Results
E.1 Computational Resources
All experiments were conducted on a PC with an Intel Xeon E5-2680 CPU and an NVIDIA TITAN Xp GPU. The training time for each algorithm on the point goal task is reported in Table 3.
SRCPO (Ours) | CPPO | CVaR-CPO | SDAC | WCSAC-D | SDPO | |
Averaged training time | 9h 51m 54s | 6h 17m 16s | 4h 43m 38s | 8h 46m 8s | 14h 18m 20s | 6h 50m 39s |
E.2 Experiments on Various Risk Measures
We have conducted studies on various risk measures in the main text. In this section, we first describe the definition of the Wang risk measure [24], show the visualization of the discretization process, and finally present the training curves of the experiments conducted in Section 8.1.
The Wang risk measure is defined as follows [24]:
is the cumulative distribution function (CDF) of the standard normal distribution, and is called a distortion function. It can be expressed in the form of the spectral risk measure as follows:
and is the pdf of the standard normal distribution. Since goes to infinity when is approaching one, at is not defined. Thus, it is not a spectral risk measure. Nevertheless, our parameterization method introduced in Section 6.1 can project onto a discretized class of spectral risk measures.
The discretization process can be implemented by minimizing (11), and the results are shown in Figure 6. In this process, the number of discretizations is set to five. Additionally, visualizations of other risk measures are provided in Figure 7, and the optimized values of the parameters of the discretized spectrum, which are introduced in (10), are reported in Table 4. Note that CVaR is already an element of the discretized class of spectral risk measures, as observed in Figure 7. Therefore, CVaR can be precisely expressed within this parameterization.
Finally, we conducted experiments on the point goal task to check how the results are changed according to the risk level and the type of risk measures. The training curves for these experiments are shown in Figure 8.
[Figure 6: Discretization of the Wang risk measure spectrum.]
[Figure 7: Visualization of the discretized spectra of the other risk measures.]
[Figure 8: Training curves on the point goal task for various risk measures and risk levels.]