Risk-Sensitive Reinforcement Learning: a Martingale Approach to Reward Uncertainty
Abstract
We introduce a novel framework to account for sensitivity to reward uncertainty in sequential decision-making problems. While the risk-sensitive formulations for Markov decision processes studied so far focus on the distribution of the cumulative reward as a whole, we aim at learning policies sensitive to the uncertain/stochastic nature of the rewards, which is conceptually more meaningful in some cases. To this end, we present a new decomposition of the randomness contained in the cumulative reward based on the Doob decomposition of a stochastic process, and introduce a new conceptual tool - the chaotic variation - which can rigorously be interpreted as the risk measure of the martingale component associated with the cumulative reward process. We innovate on the reinforcement learning side by incorporating this new risk-sensitive approach into model-free algorithms, both policy gradient and value function based, and illustrate its relevance on grid world and portfolio optimization problems.
1 Introduction
Classical reinforcement learning (RL) aims at deriving policies that maximize the expected value of the sum (or average) of future rewards, while so-called risk-sensitive RL aims at incorporating some measure of variability of the cumulative reward into the learned policy, usually through a risk criterion such as variance or a risk measure (e.g. entropic, CVaR). The common goal of the existing risk-sensitive RL literature (cf. section 1.3) is to take into account the distribution of the cumulative rewards in order to learn a variety of policies, usually parametrized by a risk parameter such as the mean-variance trade-off, the CVaR percentile or an upper bound on variance. For example, in the mean-variance case, learned policies typically lead to the distribution of cumulative rewards having lower mean but also lower variance (hence we gain confidence in the outcome at the expense of its mean).
In the existing literature, the chosen risk criterion is applied to the cumulative reward as a whole, and hence does not distinguish between the different sources of randomness contained in it. The motivating example of section 1.2 and the Doob decomposition of section 2 show that the randomness contained in the cumulative reward actually consists of two components of different nature and with different practical interpretations. If we denote the reward obtained at time associated to policy , the first chaotic component exactly captures the non-deterministic nature of the reward, i.e. that it is uncertain (or stochastic) when action is chosen in state (as it depends on ). This component is zero if the reward is deterministic. The second predictable component replaces the uncertain rewards by their predictable/deterministic projections , and hence does not depend on their uncertainty/stochasticity but acts as if the reward we get at time after choosing action is always what we predicted it to be based on information available at time . This component possesses some variability due to the switching between states, but none due to the uncertain nature of the rewards. Interestingly, in the average reward case, these two components also appear in the limiting process of the central limit theorem for functionals of Markov chains, applied to the average reward . Indeed, the limit of this term, minus its long-range (ergodic) mean and rescaled by , converges in distribution as to the sum of two independent normal random variables with respective variances and , where the latter depends on the uncertainty of the rewards, whereas the former only depends on their deterministic projections as discussed above. We refer to the supplementary for more details on this observation.
In this work, we develop a new approach complementary to the existing literature that learns policies sensitive to the variability contained in the chaotic component reflecting the uncertainty/stochasticity of the rewards, discussed above. We will work in a framework where the reward received at after having taken action is uncertain in the sense that it depends on the next state and/or possibly on extra "hidden" information (cf. the setting of Shen et al. , (2014) for example).
We believe that this approach is of interest for the following reasons. First, as mentioned in Chow et al. (2018), deriving risk criteria that are conceptually meaningful is a topic of current research. In our case, sensitivity to the chaotic reward component can nicely be interpreted as sensitivity to the uncertainty of the future reward when an action is chosen, in other words to the difference between the actual reward received at and our best guess of it given information available at time , which is a natural human behavior adopted in many real-world situations. For example in section 5, we show on a portfolio optimization problem how classical mean-variance based risk-sensitive RL counterintuitively leads to reducing investment in both the risky and the risk-free asset as the risk aversion parameter increases (and hence we stop taking advantage of the risk-free asset), whereas our new chaotic mean-variance algorithm leads to reducing investment in the risky asset only, which is intuitive since the evolution of the latter is unknown when the investment decision is made and hence can potentially yield significant losses.
Second, depending on the application in mind, mixing the above-mentioned predictable and chaotic sources of randomness together and treating them as a single "noise" may lead to undesirable and counterintuitive learned policies, and a more subtle understanding of the randomness is needed in order to gain more interpretability in the learned risk-sensitive policies (cf. section 1.2). It was mentioned in Mannor & Tsitsiklis (2011) that variance as a risk criterion may sometimes lead to counterintuitive policies, in that an agent who has received unexpectedly large rewards may seek to incur losses in order to keep the variance small, since variance penalizes deviations of the cumulative reward from its expected value. Hence, Prashanth & Ghavamzadeh (2016) have considered per-period variance instead. An interesting observation is that when applied to the chaotic component only (which will prove to be a martingale), these two notions of overall and per-period variance are bridged by the concept of quadratic variation of a martingale, due to the equality of proposition 1: penalizing reward uncertainty/stochasticity will not force the cumulative reward towards some baseline level and hence will not lead to the counterintuitive policies mentioned above.
1.1 Main contributions
We provide in section 2 a novel, conceptually meaningful decomposition of the cumulative reward process that distinguishes between the different sources of randomness contained within it (chaotic and predictable, cf. discussion above). The key idea of the paper is to apply the Doob decomposition of a stochastic process to the cumulative reward process. We believe that this decomposition in itself is of interest in gaining a better understanding of the variability contained within the reward process.
We then introduce in section 3 a new definition of risk that exactly captures reward uncertainty risk, i.e. the risk related to its non-deterministic nature: the chaotic variation associated to the reward process (and to a risk measure).
We incorporate for the first time reward uncertainty risk into model-free value-function based and policy gradient algorithms: the related modified versions of some canonical RL algorithms, so-called Chaotic Mean-Variance algorithms (CMV), are presented in section 4. These algorithms are applied to grid world and portfolio optimization problems in section 5. Although the focus in the main text is on the chaotic mean-variance case, our analysis in section 3 allows for general risk measures and we discuss some of them in the supplementary, such as the chaotic CVaR and Sharpe ratio.
1.2 Motivating example
As mentioned above, we are interested in developing a general RL framework to account for reward uncertainty in learned policies. The example presented in this section aims at illustrating what can sometimes go wrong when taking variance as a measure of risk, as is commonly the case in the risk-sensitive RL literature (cf. section 1.3). We consider the episodic toy example of timesteps where there are 2 states, 2 actions, and at each state transition the probability of reaching either state is the same, i.e. . If , the reward obtained at is 2 if and if ; if , the rewards obtained at are 10 if and if , where are i.i.d. zero mean unit variance, and the noise . Think for example of action as investing in a risky stock at time and getting the corresponding price change at time . If is the policy that always selects action , then both policies have the same expected cumulative reward of . We have plotted in figure 1 the rewards obtained for the 2 policies in each state.
It can be proved that if the noise is small enough (cf. supplementary for a precise quantitative statement), policy will be established as less risky than according to the variance criterion. Nevertheless, we know that the cumulative reward earned using over timesteps is bounded from below by with probability 1. On the other hand, since contains the noise term , there is always a positive probability that the reward hits any arbitrarily low value. That is, one policy leaves the agent with a component that could constitute a true risk, whereas the other doesn't, yet the latter is still established as the most risky according to the variance criterion. Hence, the risky noise component is not detected: section 2 will establish that this risky component is in fact the martingale associated to the Doob decomposition of the cumulative reward process. On the other hand, the conceptual tool introduced in this paper, the chaotic variation, when applied to the mean-variance case, will lead to a risk penalty term of for and proportional to for (cf. supplementary), i.e. exactly the desired outcome that the penalty term is proportional to reward uncertainty only.
Another observation is the following: if , rewards are deterministic and the obvious optimal policy is to simply take the best reward in each state, i.e. if and if . This optimal strategy generates a variance over timesteps of , due to the switching between states. If one chooses the risk aversion coefficient in the mean-variance trade-off to be high enough, one will then choose not to execute that obvious optimal policy and rather execute policy as it generates a lower variance of over timesteps. On the other hand, the chaotic variation, when applied to the mean-variance case, will lead to a risk penalty term of for all policies (since rewards are deterministic) and hence one will always choose the optimal policy, whatever the risk aversion coefficient is.
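To make this contrast concrete, the following small simulation, in the spirit of the example above, compares the two criteria on two policies: one with state-dependent but perfectly predictable rewards, and one with a constant expected reward carrying an unbounded Gaussian noise term. The horizon, reward values and noise scale are our own illustrative choices (not the exact values of this section); the point is only that the overall variance criterion favors the noisy policy, whereas the chaotic penalty is zero for the predictable policy and proportional to the noise for the other.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_episodes, eps = 20, 50_000, 0.05   # horizon, Monte Carlo size, noise scale (illustrative)

def episode(policy):
    """One episode: at every step the state is redrawn uniformly from {0, 1}."""
    states = rng.integers(2, size=T)
    if policy == "predictable":
        rewards = np.where(states == 0, 3.0, -2.0)    # state-dependent, perfectly predictable
        cond_mean = rewards.copy()                    # conditional mean equals the reward itself
    else:
        rewards = 0.5 + eps * rng.standard_normal(T)  # same mean 0.5, unbounded Gaussian noise
        cond_mean = np.full(T, 0.5)                   # predictable projection of the reward
    return rewards, cond_mean

for policy in ("predictable", "noisy"):
    cum, chaotic = [], []
    for _ in range(n_episodes):
        r, m = episode(policy)
        cum.append(r.sum())
        chaotic.append(np.sum((r - m) ** 2))          # squared chaotic increments of the episode
    print(f"{policy:12s} Var(cumulative reward) = {np.var(cum):8.3f}   "
          f"chaotic penalty = {np.mean(chaotic):7.4f}")
```

Both policies have the same expected cumulative reward; the variance criterion ranks the predictable policy as far riskier (its cumulative-reward variance is driven by state switching), while the chaotic penalty isolates the noise term only.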
1.3 Related work
There exists an interesting and growing body of literature on the topic of risk-sensitive RL. Our work contrasts with the latter in that we focus on learning policies sensitive to the uncertain nature of the rewards, whereas previous work applies a risk criterion to the cumulative reward as a whole and hence does not distinguish between the different sources of randomness contained in it. Previously studied algorithms can be split into those that (i) are value-function based (cf. value iteration and policy iteration), and (ii) employ policy gradient methods that update the policy parameters in the direction of the gradient of the risk-flavored performance metric. In category (i), we cite the work of Mihatsch & Neuneier (2002), in which TD increments are transformed with a piecewise-linear function, and Borkar (2002), Borkar (2005), Borkar (2010), which focus on the exponential utility function in an ergodic average-reward type setting. Works in category (ii) are more numerous, and can be split into those that use (Actor-Critic) and those that do not use (Monte Carlo) a specific representation of the value function, the former usually requiring TD-type updates of the value function. For large action/state spaces, one typically employs function approximation of the value function in the actor-critic setting, which introduces bias but reduces variance. On the RL front, Prashanth & Fu (2018) provide a thorough literature review. Some references are interested in the full distribution of the cumulative reward, cf. Morimura et al. (2010a), Morimura et al. (2010b), Bellemare et al. (2017), Defourny et al. (2008), while others focus on specific risk measures, e.g. CVaR in Tamar, Glassner & Mannor (2015) and Chow et al. (2018). Some authors define their own notion of risk with specific applications in mind, such as Geibel & Wysotzki (2005), for which risk refers to the likelihood of reaching so-called "error states". Tamar et al. (2015) provide a general methodology to handle all coherent risk measures.
The most-studied framework remains the variance-related one. Variance can be understood as the variance of the cumulative reward, or as the sum of variances of step-by-step rewards, cf. the work of Prashanth & Ghavamzadeh (2016), Tamar et al. (2016), Tamar et al. (2012) (in a stochastic shortest path context), Prashanth & Ghavamzadeh (2013). Sobel (1982) was among the first to study the mean-variance case, and he explains why using variance as a measure of risk may cause problems, more specifically the inability to use policy iteration type algorithms due to the lack of monotonicity of the variance operator, while Mannor & Tsitsiklis (2011) show that these problems can be hard to solve. Tamar et al. (2016) provide a methodology to estimate the variance of the cumulative reward via a TD-type approach.
2 Markov Decision Processes: Doob decomposition and Martingale formulation
We study a Markov Decision Process (MDP) , where we allow the reward function to be general of the form , and the hidden process as in Shen et al. , (2014). The kernels and are defined in assumption 1. We use the notation that the reward is received at time after having taken action at state at time . We denote the cumulative reward to go and the conditional average reward:
To keep notations consistent in the episodic and discounted reward settings we impose as usual , and only possible in the episodic setting (in the latter case the process stays in the same absorbing state forever once it is reached, with zero associated rewards). The analysis of this section is amenable to the average-reward formulation, which is detailed in the supplementary. We make the following assumption throughout the paper; it is needed in particular in the proof of theorem 1.
Assumption 1.
is uniformly bounded and the probability kernels and satisfy , .
We need the following notations:
is the sigma-algebra generated by the MDP up to and including time , i.e. the information available up to time .
Definition 1.
We remind that a discrete-time process is adapted to the filtration if is -measurable for all ; it is predictable if is -measurable for all (i.e. it is known one timestep before); it is a -martingale if for all , is adapted to and .
The observation below is the cornerstone of this paper, as it formalizes precisely the discussion carried out in section 1 about the decomposition of the reward variability into two conceptually meaningful components. The martingale component captures the uncertain/stochastic part of the reward in section 1.2. The key idea here is to apply the Doob decomposition of a stochastic process (cf. supplementary) to the reward process in order to gain a better understanding of its variability.
Theorem 1.
(Doob decomposition of the reward) Let be a policy. can be decomposed in a unique way into the sum of i) a measurable random variable ii) a zero-mean predictable process and iii) a zero-mean martingale , with respect to the filtration . The decomposition is given by:
Denoting and we get, taking the limit as :
Definition 2.
We call (resp. ) the chaotic (resp. predictable) reward process associated to .
The Doob decomposition of theorem 1 consists of decomposing rewards into:
• the predictable component, which accounts for deviations of the predictable projection of the uncertain rewards (i.e. the best guess of the unknown reward given information at time ) from the overall average reward. This process possesses some variance due to the switching between states, but none due to the reward uncertainty.
• the chaotic component, which exactly captures the uncertainty of the reward by computing the difference between its actual realization and what we expected it to be based on the information available at time . In other words, it captures the "surprise" part of the reward. It vanishes if and only if the reward is deterministic, in which case we have with probability 1.
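As a concrete illustration of the decomposition, the short sketch below splits an observed reward trajectory into its predictable and chaotic increments, given the conditional mean rewards (assumed known here, or estimated in practice). The simulated two-state example and its numerical values are our own illustrative choices; only the split itself reflects theorem 1, and we omit the additional centering by the overall expected reward for brevity.

```python
import numpy as np

def doob_split(rewards, cond_means):
    """Split a reward trajectory into predictable and chaotic (martingale) increments.

    rewards[t]    : reward received after the t-th state-action pair
    cond_means[t] : conditional mean of that reward given the t-th state-action pair
    The chaotic increments have conditional mean zero, so their partial sums form a martingale.
    """
    rewards, cond_means = np.asarray(rewards), np.asarray(cond_means)
    chaotic = rewards - cond_means    # "surprise" part of each reward
    predictable = cond_means          # best guess given the information available one step before
    return predictable, chaotic

# tiny simulated example: two states, reward = state-dependent mean plus noise
rng = np.random.default_rng(1)
states = rng.integers(2, size=10_000)
rbar = np.where(states == 0, 1.0, -1.0)            # conditional mean reward per state
r = rbar + 0.3 * rng.standard_normal(states.size)  # observed (uncertain) rewards

pred, chao = doob_split(r, rbar)
print("mean of chaotic increments (close to 0):", round(chao.mean(), 4))
print("variance from state switching:", round(pred.var(), 3),
      "  variance from reward noise:", round(chao.var(), 3))
```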
3 Chaotic Variation of the reward process
We define below a new conceptual tool, the chaotic variation associated to a reward process (and to a conditional risk measure ), which addresses the issues raised in section 1.2 and captures the reward uncertainty risk. The concept of risk that could be intuited from section 1.2 emerges rigorously as a martingale. We refer to Detlefsen & Scandolo (2005) (cf. supplementary) for the definition of a conditional risk measure associated to a sigma-algebra , and we use the notation to denote the conditional risk measure associated to the sigma-algebra .
Definition 3.
(Chaotic variation) Let a policy and a family of conditional risk measures associated to the process . The chaotic variation associated to (, ) is defined as . That is, the chaotic variation quantifies the risk related to the chaotic reward process.
In the case of the entropic risk measure, the predictable quadratic variation of the chaotic reward process emerges naturally as a measure of risk as a consequence of martingale theory, as proposition 1 shows.
Proposition 1.
(chaotic variation in the entropic case). Let a policy. Using definition 3, for , let be the so-called (conditional) entropic risk measure, obtained as the certainty equivalent of the exponential utility function . Then, the chaotic variation satisfies , where is the predictable quadratic variation of the martingale , given by:
As we get:
Definition 4.
(Chaotic variance) The chaotic variance associated to (, ) is defined as . By martingale property of , the chaotic variance is equal to the variance of (scaled by , conditional on ).
4 Reinforcement Learning: Risk-Sensitive Chaotic algorithms
In this section we assume for simplicity that the state and action spaces are finite. Given the conceptual tools introduced in section 3, and given a fixed initial state (generalization to the case of distributions over initial states is straightforward), we are interested in solving so-called chaotic problems of type:
(1)
The extension to solving problems of type " subject to: " is not discussed here but can be done using similar techniques developed in Prashanth & Fu, (2018), Tamar et al. , (2012) or Prashanth & Ghavamzadeh, (2016).
4.1 Bellman equations: Chaotic Mean-Variance case
We present below the Bellman equation in the chaotic mean-variance case , cf. definition 4. It will be used to formulate a chaotic mean-variance version of Q-Learning in the episodic case (section 4.2), as well as to study actor-critic algorithms and the average reward case (cf. supplementary). The proof of theorem 2 follows directly from the definition of in proposition 1.
Theorem 2.
(Bellman equation) Let the Q-function associated to of definition 4. Then we have the following Bellman equation:
Consequently, if and with associated value function , then and its associated value function satisfy the following Bellman equation in the episodic case with :
In particular, in the episodic case with , theorem 2 yields that the class of optimal policies associated to (1) coincides with that associated to the modified rewards . In contrast to traditional TD-based methods such as Q-Learning, is not directly observable as it involves the term , which will need to be estimated over the course of the algorithm.
4.2 Chaotic Mean-Variance Q-Learning
In the episodic mean-variance case (with ), theorem 2 allows us to derive a chaotic version of Q-learning, cf. algorithm 1 which is based on theorem 3. In the discounted reward setting, it is not possible to combine and (where ), hence we cannot get a Q-Learning type algorithm. The convergence proof is discussed in the supplementary and consists of a minor modification of the proof of Dayan & Watkins, (1992) based on the fact that as with probability 1 for every state-action pair , where is defined in theorem 3. The average reward version of the algorithm is presented in the supplementary.
Theorem 3.
(Chaotic Mean-Variance Q-Learning in the episodic case). Using theorem 2, denote and . Denote by , and the successive states, actions and rewards observed by the agent, and let be a sequence of learning rates satisfying the usual conditions for every state-action pair :
where is the index corresponding to the visit to . Further define the following iterates if and :
and , , otherwise. Then as with probability 1 for every state-action pair , where .
Remark 1.
Note that in theorem 3, one could use a two-timescale stochastic approximation algorithm by using a specific timescale for the process (instead of ). Since the update rule of doesn’t depend on (uncoupled case), this is not required for convergence but could be used to improve the rate of convergence, cf. a remark in Konda & Tsitsiklis, (2004), p. 4.
Input: Q-table initialized arbitrarily, learning rate , and initialized to 0.
Output: optimal policy
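To make the update rule concrete, a minimal tabular sketch of the chaotic mean-variance Q-Learning update is given below. The environment interface (env.reset() and env.step(a) returning (next_state, reward, done)), the epsilon-greedy exploration and the 1/N(s,a) learning rate are illustrative assumptions of this sketch; its core is the modified reward r - lam * (r - rbar(s,a))^2, with lam the risk aversion coefficient and rbar the running estimate of the conditional mean reward, as discussed after theorem 2.

```python
import numpy as np

def cmv_q_learning(env, n_states, n_actions, lam=0.5, episodes=5_000, eps=0.1,
                   rng=np.random.default_rng(0)):
    """Tabular chaotic mean-variance Q-learning (episodic case, gamma = 1), illustrative sketch."""
    Q = np.zeros((n_states, n_actions))
    rbar = np.zeros((n_states, n_actions))     # running estimates of E[r | s, a]
    visits = np.zeros((n_states, n_actions))   # visit counts, used for the learning rates

    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy exploration
            a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
            s_next, r, done = env.step(a)

            visits[s, a] += 1
            alpha = 1.0 / visits[s, a]                 # satisfies the usual step-size conditions
            rbar[s, a] += alpha * (r - rbar[s, a])     # sample mean of the reward at (s, a)
            r_mod = r - lam * (r - rbar[s, a]) ** 2    # chaotic mean-variance modified reward
            target = r_mod + (0.0 if done else np.max(Q[s_next]))
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
    return Q, rbar
```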
4.3 Monte Carlo Policy gradient algorithms - Episodic case
In this section we consider episodic Monte Carlo based algorithms that start with a parametric form for the policy and optimize equation (1) in the direction of the gradient with respect to , hence aiming for local optima only. We make the following classical assumption in this section (Bhatnagar et al. , (2009)):
Assumption 2.
For every policy , the Markov chain induced by and is ergodic, i.e. irreducible, aperiodic and positive recurrent. Further, for every state-action pair , is continuously differentiable in .
From equation (1), we need to compute unbiased estimates of and . The former is the classical expected reward gradient. By definition 3, consists of applying a risk measure to a mean zero martingale, which is usually simpler as we will see below in the mean-variance case (cf. definition 4); more importantly, the work done in the literature for specific risk measures can be applied straightforwardly, e.g. Chow et al. (2018) or Tamar, Glassner & Mannor (2015) in the case of CVaR (however, the risk measure in our case is applied to the chaotic reward process only). From now on we focus on the mean-variance case of definition 4, but we present in the supplementary extensions to the chaotic CVaR and Sharpe ratio.
Provided we know , the gradient presents no specific difficulty and can be computed using the classical likelihood ratio technique by generating episodes following the policy and computing the unbiased Monte Carlo estimate:
The subtlety here is that we need to learn in the course of the algorithm. In the tabular case, using assumption 2, we are guaranteed that every state-action pair will be visited infinitely often and we can proceed as in the Q-Learning case (theorem 3) by updating the (conditional) average estimate as iterations on are performed. By the strong law of large numbers, convergence of to occurs with probability 1. Remark 1 still holds here: since the iteration on the sample average is uncoupled from the one on , we do not need a two-timescale algorithm to guarantee convergence. For each and , if and we perform the updates:
(2)
Alternatively, if the state or action spaces are large, one may want to use a parametric approximation of that will be updated in the course of the algorithm: this approach is presented in algorithm 2, where we use an experience replay table and fit using SGD. Proposition 2 quantifies the related gradient bias due to the use of this approximation (proof in supplementary). We present in algorithm 3 the Chaotic Mean-Variance version of REINFORCE, which uses either equation (2) or algorithm 2.
Proposition 2.
Let and the gradient obtained using the approximation . The gradient bias satisfies:
where is the action-state -discounted visiting distribution.
Input: initial distributional parameter , experience replay table , number of distributional gradient steps , number of SGD samples .
Output: Approximation of the optimal distributional parameter
Input: Initial policy parameter , learning rate , number of episodes per batch , initialized to 0, Optional: experience replay table .
Output: Approximation of the optimal policy parameter
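For concreteness, here is a sketch of one batch update of a chaotic mean-variance REINFORCE scheme with a tabular softmax policy. The policy parametrization, environment interface and learning rate are illustrative assumptions; the gradient estimate is the likelihood-ratio estimate of the penalized objective, with the running conditional-mean table treated as fixed when differentiating, in the spirit of the tabular updates of equation (2).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax_policy(theta, s):
    """Action probabilities of a tabular softmax policy with parameters theta[s, a]."""
    logits = theta[s] - theta[s].max()      # subtract the max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

def grad_log_pi(theta, s, a):
    """Gradient of log pi(a | s) with respect to theta (same shape as theta)."""
    g = np.zeros_like(theta)
    g[s] = -softmax_policy(theta, s)
    g[s, a] += 1.0
    return g

def cmv_reinforce_step(env, theta, rbar, visits, lam=0.5, batch=16, lr=0.05):
    """One batch update of a chaotic mean-variance REINFORCE sketch (episodic, gamma = 1)."""
    grads = []
    for _ in range(batch):
        s, done = env.reset(), False
        score, G, C = np.zeros_like(theta), 0.0, 0.0
        while not done:
            probs = softmax_policy(theta, s)
            a = rng.choice(len(probs), p=probs)
            s_next, r, done = env.step(a)
            visits[s, a] += 1
            rbar[s, a] += (r - rbar[s, a]) / visits[s, a]   # running conditional mean reward
            score += grad_log_pi(theta, s, a)
            G += r                                          # cumulative reward of the episode
            C += (r - rbar[s, a]) ** 2                      # chaotic variance sample
            s = s_next
        grads.append(score * (G - lam * C))   # likelihood-ratio estimate, rbar treated as fixed
    theta += lr * np.mean(grads, axis=0)      # gradient ascent on the penalized objective
    return theta
```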
4.4 Policy Gradient Actor-Critic algorithms
In order to derive Actor-Critic algorithms, the Bellman equations in theorem 2 can be used - as in the CMV-Q-Learning algorithm 1 - to adapt the work of e.g. Prashanth & Ghavamzadeh (2016) (variance case) or Chow et al. (2018) (CVaR case) and derive chaotic versions of their actor-critic algorithms. This extension is discussed in the supplementary.
5 Experiments: Chaotic Mean-Variance
5.1 Grid World
We consider the episodic problem of a robot on a grid aiming at a goal. The state space consists of the 16 grid squares, and the action space consists of choosing to go East, West, North or South. Reaching the goal (resp. taking a step) gives a +1 (resp. -1) reward, and negative rewards are positioned on the grid as seen on figure 2. When an action is chosen, there is a probability that the robot goes in a random direction. If the robot hits the extremity of the grid, it stays where it is. We train policies using the CMV-Q-Learning algorithm 1. In figure 2 we plot the path heatmap over rollout steps performed with the learned policy for various risk aversion coefficients (cf. details and additional experiments in supplementary). The higher , the further away from the -20 reward the robot goes, as expected, as it prefers taking the -6 loss rather than walking next to the -20 reward and risking incurring the corresponding loss.
The second element that our algorithm gives is a risk heatmap, highlighting the most uncertain states, i.e. the states which yield the highest reward stochasticity/uncertainty (figure 2). We perform rollouts with the learned and policy and compute for every state . In figure 2 we see that, as expected, the heatmap highlights the states next to the -20 reward, and to a lesser intensity the states around the -6 rewards. To the best of our knowledge, only two other works study risk-sensitive Q-Learning algorithms: Mihatsch & Neuneier (2002) and Shen et al. (2014). Both transform the TD increments by a non-linear function in order to obtain risk-sensitive behaviors. The former show that their algorithm converges to the worst-case optimality criterion as the risk parameter changes, hence in our specific example we could obtain similar paths as in figure 2, but our algorithm, in addition to being the only one to focus on reward stochasticity, also provides the risk heatmap discussed above, which quantifies the extent to which a given state yields uncertain rewards.
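For illustration, the risk heatmap can be obtained from rollouts of the learned policy as sketched below; the per-visit averaging, the penalty coefficient and the environment/policy interfaces are assumptions of this sketch.

```python
import numpy as np

def risk_heatmap(env, policy, rbar, n_states, lam=0.5, rollout_steps=100_000):
    """Average chaotic penalty lam * (r - rbar(s, a))^2 collected per visited state."""
    penalty_sum = np.zeros(n_states)
    visit_count = np.zeros(n_states)
    s, done = env.reset(), False
    for _ in range(rollout_steps):
        if done:
            s, done = env.reset(), False
        a = policy(s)                                # e.g. greedy with respect to the learned Q-table
        s_next, r, done = env.step(a)
        penalty_sum[s] += lam * (r - rbar[s, a]) ** 2
        visit_count[s] += 1
        s = s_next
    return penalty_sum / np.maximum(visit_count, 1)  # states never visited keep a zero entry
```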
5.2 Portfolio optimization
We consider the problem of investing in 2 financial assets. One is risk-free in the sense that investing a quantity in the asset yields a deterministic reward , where is the risk-free rate. The latter can be seen as the overnight rate from day to day , which is known at the end of day when is chosen. The other asset is risky (e.g. a stock) and yields an uncertain reward for a quantity of , where are i.i.d. standard normal and is referred to as volatility. We restrict to be nonnegative integers whose sum is less than a total investment constraint . The action is defined as and the reward . In our experiments we take , yielding possible actions. The state space consists of 3 states, LowVol, MediumVol and HighVol, corresponding to low volatility (and low risk-free rate ), an intermediate regime, and high volatility (and high risk-free rate ). The state transition matrix is designed such that the more we trade in the risky asset (i.e. the higher ), the more likely we are to reach a higher volatility state. We train the policy using the CMV-REINFORCE algorithm 3 and its baseline (classical mean-variance), and use it to compute performance metrics over rollout episodes of timesteps. Details on training/numerical values used are presented in the supplementary.
We display in figure 3 the fraction of time the process has spent in each state, as well as the investment fraction in each asset (for rollout episodes). With and by design of the state transition matrix, both assets give the same expected reward, but the policy is incentivized to trade in the risky asset rather than in the risk-free asset in order to reach the HighVol state, which gives the highest . As increases, in the CMV case, the policy shifts towards trajectories which contain less reward uncertainty, which means investing in the risk-free asset, whose associated rewards are deterministic. The classical mean-variance case (baseline) penalizes not only the variability related to reward uncertainty but also the variability related to the switching of states appearing in both assets via the deterministic term , hence as increases, the policy is incentivized to stop investing (cf. green line in figure 3 representing the uninvested budget ), i.e. it stops taking advantage of the risk-free asset. The latter can also be seen in figure 4, where we plot for rollout episodes the cumulative reward mean and standard deviation as a function of . In the mean-variance case the reward mean eventually vanishes as the policy stops trading in both assets: by trying to make the standard deviation lower, it generates a counterintuitive behavior in that it stops taking advantage of the risk-free asset.
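To make the reward structure of this experiment explicit, below is a small sketch of the two-asset reward model. The per-state risk-free rates and volatilities are placeholders (the actual values are those of table 1 in appendix G.2), and the budget M = 5 is inferred from the 21 admissible actions mentioned there; the key point is that the risk-free leg of the reward is predictable while the risky leg is exactly the chaotic increment.

```python
import numpy as np

rng = np.random.default_rng(0)
RF    = {"LowVol": 0.01, "MediumVol": 0.02, "HighVol": 0.03}  # placeholder risk-free rates
SIGMA = {"LowVol": 0.5,  "MediumVol": 1.0,  "HighVol": 2.0}   # placeholder volatilities
M = 5                                                         # total investment budget

def actions(budget=M):
    """All (q_riskfree, q_risky) pairs of nonnegative integers summing to at most the budget."""
    return [(qf, qr) for qf in range(budget + 1) for qr in range(budget + 1 - qf)]

def step_reward(state, action):
    """One-step reward: deterministic risk-free leg plus a zero-mean Gaussian risky leg."""
    qf, qr = action
    return qf * RF[state] + qr * SIGMA[state] * rng.standard_normal()

def chaotic_increment(state, action, reward):
    """Chaotic part of the reward: rbar(s, a) = qf * rf(s), so the surprise is the risky P&L."""
    qf, _ = action
    return reward - qf * RF[state]

print(len(actions()), "admissible actions")   # 21 when M = 5
```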
6 Conclusion
We presented a novel, conceptually meaningful decomposition of the cumulative reward process based on the Doob decomposition that distinguishes between the different sources of randomness contained within it, introduced a new conceptual tool - the chaotic variation - that exactly captures reward uncertainty risk, and incorporated it into model-free value-function based and policy gradient algorithms. Potential real-world applications include all settings where one is subject to uncertain/stochastic rewards and is interested in deriving interpretable risk-sensitive policies, for example the recently studied RL market-making problems of Guéant & Manziuk (2019) and Ganesh et al. (2019), where reward uncertainty plays a major role: market-makers stream prices but do not know whether clients will decide to trade at those prices, and they are typically averse to uncertain fluctuations in the underlying financial asset price. Future work could include extending the framework to the case of delayed rewards.
Disclaimer
This paper was prepared for information purposes by the Artificial Intelligence Research group of JPMorgan Chase & Co and its affiliates (“JP Morgan”), and is not a product of the Research Department of JP Morgan. JP Morgan makes no representation and warranty whatsoever and disclaims all liability, for the completeness, accuracy or reliability of the information contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction, and shall not constitute a solicitation under any jurisdiction or to any person, if such solicitation under such jurisdiction or to such person would be unlawful. © 2020 JPMorgan Chase & Co. All rights reserved.
References
- Bellemare et al. , (2017) Bellemare, Marc G, Dabney, Will, & Munos, Rémi. 2017. A Distributional Perspective on Reinforcement Learning. In: ICML.
- Bhatnagar et al. , (2009) Bhatnagar, Shalabh, Sutton, Richard S, Ghavamzadeh, Mohammad, & Lee, Mark. 2009. Natural actor–critic algorithms. Automatica, 45(11), 2471–2482.
- Borkar, (2002) Borkar, Vivek S. 2002. Q-learning for Risk-Sensitive control. Mathematics of Operations Research, 27(1), 294–311.
- Borkar, (2005) Borkar, Vivek S. 2005. An actor-critic algorithm for constrained Markov decision processes. Systems & Control Letters, 54(3), 207–213.
- Borkar, (2010) Borkar, Vivek S. 2010. Learning Algorithms for Risk-Sensitive Control. Proceedings of the Nineteenth International Symposium on Mathematical Theory of Networks and Systems, 1327–1332.
- Chow et al. , (2018) Chow, Yinlam, Ghavamzadeh, Mohammad, Janson, Lucas, & Pavone, Marco. 2018. Risk-Constrained Reinforcement Learning with Percentile Risk Criteria. J. Mach. Learn. Res., 18(167), 1–51.
- Dayan & Watkins, (1992) Dayan, Peter, & Watkins, Christopher. 1992. Technical Note Q-Learning. Machine Learning, 8, 279–292.
- Defourny et al. , (2008) Defourny, Boris, Ernst, Damien, & Wehenkel, Louis. 2008. Risk-Aware Decision Making and Dynamic Programming. In: NIPS workshop.
- Detlefsen & Scandolo, (2005) Detlefsen, Kai, & Scandolo, Giacomo. 2005. Conditional and dynamic convex risk measures. Finance Stoch., 9(4), 539–561.
- Ganesh et al. , (2019) Ganesh, S., Vadori, N., Xu, M., Zheng, H., Reddy, P., & Veloso, M. 2019. Reinforcement Learning for Market Making in a Multi-agent Dealer Market. In: NeurIPS Workshop on Robust AI in Financial Services.
- Geibel & Wysotzki, (2005) Geibel, Peter, & Wysotzki, Fritz. 2005. Risk-sensitive reinforcement learning applied to control under constraints. J. Artif. Intell. Res., 24, 81–108.
- Guéant & Manziuk, (2019) Guéant, Olivier, & Manziuk, Iuliia. 2019. Deep reinforcement learning for market making in corporate bonds: beating the curse of dimensionality. arXiv.
- Konda & Tsitsiklis, (2004) Konda, Vijay R., & Tsitsiklis, John. 2004. Convergence rate of linear two-timescale stochastic approximation. The Annals of Applied Probability, 14(2), 796–819.
- Mahadevan, (1996) Mahadevan, Sridhar. 1996. Average Reward Reinforcement Learning: Foundations, Algorithms, and Empirical Results. Mach. Learn., 22(1-3), 159–195.
- Mannor & Tsitsiklis, (2011) Mannor, Shie, & Tsitsiklis, John. 2011. Mean-Variance Optimization in Markov Decision Processes. In: ICML.
- Mihatsch & Neuneier, (2002) Mihatsch, Oliver, & Neuneier, Ralph. 2002. Risk-Sensitive Reinforcement Learning. Mach. Learn., 49(2), 267–290.
- Morimura et al. , (2010a) Morimura, Tetsuro, Sugiyama, Masashi, Kashima, Hisashi, Hachiya, Hirotaka, & Tanaka, Toshiyuki. 2010a. Nonparametric Return Distribution Approximation for Reinforcement Learning. In: ICML.
- Morimura et al. , (2010b) Morimura, Tetsuro, Sugiyama, Masashi, & Kashima, Hisashi. 2010b. Parametric Return Density Estimation for Reinforcement Learning. In: UAI.
- Necchi, (2016) Necchi, Pierpaolo. 2016. Policy Gradient Algorithms for the Asset Allocation Problem. M.Phil. thesis, Politechnico di Milano.
- Prashanth & Fu, (2018) Prashanth, L A, & Fu, Michael. 2018. Risk-Sensitive Reinforcement Learning: A Constrained Optimization Viewpoint. arXiv, Oct.
- Prashanth & Ghavamzadeh, (2016) Prashanth, L A, & Ghavamzadeh, Mohammad. 2016. Variance-Constrained Actor-Critic Algorithms for Discounted and Average Reward MDPs. Mach. Learn., 105(3), 367–417.
- Prashanth & Ghavamzadeh, (2013) Prashanth, L.a., & Ghavamzadeh, Mohammad. 2013. Actor-Critic Algorithms for Risk-Sensitive MDPs. Advances in Neural Information Processing Systems 26, 252–260.
- Shen et al. , (2014) Shen, Yun, Tobia, Michael J, Sommer, Tobias, & Obermayer, Klaus. 2014. Risk-sensitive Reinforcement Learning. Neural Computation, 26(7).
- Sobel, MJ, (1982) Sobel, MJ. 1982. The Variance of discounted Markov Decision Processes. J. Appl. Probab.
- Tamar et al. , (2012) Tamar, Aviv, Di Castro, Dotan, & Mannor, Shie. 2012. Policy Gradients with Variance Related Risk Criteria. In: ICML.
- Tamar et al. , (2015) Tamar, Aviv, Chow, Yinlam, Ghavamzadeh, Mohammad, & Mannor, Shie. 2015. Policy Gradient for Coherent Risk Measures. In: NIPS.
- Tamar et al. , (2016) Tamar, Aviv, Castro, Dotan Di, & Mannor, Shie. 2016. Learning the Variance of the Reward-To-Go. Journal of Machine Learning Research, 17(13), 1–36.
- Tamar, Glassner & Mannor, (2015) Tamar, A., Glassner, Y., & Mannor, S. 2015. Optimizing the CVaR via Sampling. In: AAAI.
- Thomas, (2014) Thomas, Philip S. 2014. Bias in Natural Actor-Critic Algorithms. In: ICML.
- Vadori & Swishchuk, (2015) Vadori, N., & Swishchuk, A. 2015. Strong Law of Large Numbers and Central Limit Theorems for Functionals of Inhomogeneous Semi-Markov Processes. Stochastic Analysis and Applications, 33(2), 213–243.
Appendix A Background
A.1 Central limit theorem for Markov chain functionals (section 1 - Introduction)
Assume for simplicity that the state space is finite, and let the reward obtained at time associated to the deterministic policy , so that is a function of and only, and is a Markov chain satisfying . The central limit theorem for Markov chain functionals (Vadori & Swishchuk, (2015), theorem 3.17) states that the limit in distribution as of:
is normal with mean zero and variance , where is the stationary distribution of the Markov chain of transition kernel . Equivalently, the latter limit is equal in distribution to the sum of two independent normal random variables of respective variances and , which are given by the following expressions:
Denoting the conditional expected reward , we see below that only depends on the rewards via the deterministic term , whereas additionally depends on the term which quantifies reward stochasticity/uncertainty. Indeed, the variance operators act as follows:
and:
where the function is the solution of the Poisson equation:
A.2 Doob decomposition of a stochastic process
Consider a discrete-time process on a probability space such that (i) for all , and (ii) is adapted to the filtration , i.e. is -measurable for all . Then, there exists an integrable martingale and an integrable predictable process such that and for all . This decomposition is almost surely unique. Here, recall that being predictable means that is -measurable for all (i.e. it is known one timestep before), and being a martingale means that is -measurable and . Further, and are given by:
A.3 Conditional risk measures
We recall here the definition of a conditional risk measure associated to a sigma-algebra , according to Detlefsen & Scandolo (2005). Let and denote, respectively, the set of bounded random variables and the set of bounded, -measurable random variables. A conditional risk measure associated to a sigma-algebra is a map such that:
• (Normalization) .
• (Conditional Translation Invariance) For any , we have .
• (Monotonicity) For any , if with probability 1, then with probability 1.
Appendix B Proofs
B.1 Proof of theorem 1
Define the process for and , where we remind that:
The process is adapted to the filtration generated by the sigma-algebras , since by definition . Hence, we can apply the Doob decomposition (cf. section A.2), and we get the following a.s. unique decomposition of the process , for :
where , are respectively a predictable process and a martingale with respect to the filtration , satisfying , and are given by, for (with the usual convention that ):
We have and if :
If we get:
Overall, this gives for :
Now, we claim that . Indeed, by assumption 1 and since by definition , we get:
This yields for :
Similarly for :
Since , are respectively a predictable process and a martingale with respect to the filtration by the Doob decomposition, and by definition , we get that , are respectively a predictable process and a martingale with respect to the filtration , for , and satisfy:
B.2 Proof of proposition 1
We have the Taylor expansion:
We remind that the quadratic variation of a discrete-time, square-integrable martingale adapted to a filtration satisfies and is given by:
By theorem 1, the process is a mean zero martingale, hence by the dominated convergence theorem and the process is a mean zero martingale, where is the predictable quadratic variation of the martingale , given by:
This yields , that is:
All terms put together we get:
We now proceed to proving that:
Since the logarithm function is increasing, it is sufficient to prove that . Since is a (bounded) martingale, we get that is a martingale, namely the exponential martingale associated to . We then have:
By Hölder’s inequality and taking the limit as (using the dominated convergence theorem), we get:
Since is a martingale, , which gives the result.
B.3 Proof of convergence of Chaotic Mean-Variance Q-Learning algorithm of theorem 3.
By theorem 2 we have:
We also have the classical Bellman equation:
Hence (and its associated value function ) satisfies the classical Bellman equation with modified rewards :
where:
By assumption of the theorem, , hence the visit index to as and so every state-action pair is visited infinitely often. As a consequence we get that as with probability 1, and by the strong law of large numbers, as with probability 1. This guarantees, in the proof of Dayan & Watkins (1992), that in step B.3 of Lemma B (Rewards and transition probabilities converge with probability 1), using their notations, the expected rewards of the so-called action-replay process (ARP) tend as to the expected rewards of the real process. The rest of the convergence proof of Dayan & Watkins (1992) goes through the same way. Note that in the latter reference, the proof is given in the discounted reward case with , but their section 4 "Discussions and Conclusions" discusses the proof extension to the episodic case with .
B.4 Proof of proposition 2
We have by definition:
and the associated value function, as in definition 4. We have the following equality, whose proof is very similar to that of Prashanth & Ghavamzadeh (2016) (lemma 1). We postpone it to the end of the present proof for the reader's convenience:
where is the action-state -discounted visiting distribution. Similarly we have, with replaced by :
where:
By definition we have:
But:
Hence:
which shows that:
with:
Finally, we prove as claimed earlier that:
The proof is very similar to that of Prashanth & Ghavamzadeh, (2016) (lemma 1). We first use Bellman equation for (theorem 2):
hence taking the gradient:
On the other hand by definition of the value function:
Plugging in the expression obtained for we get:
After unrolling infinitely many times we get:
Since , and using the definition of , we get that is equal to:
which completes the proof.
Appendix C Toy example of section 1.2: generalization and quantitative results
We slightly generalize the example in section 1.2 by considering the case where there are states, 2 actions, and at each state transition the probability of reaching another state is given by such that , and the rewards are as follows:
• if action 1 is chosen, the reward received at if is the constant .
• if action 2 is chosen, the reward received at if is , where are zero mean and unit variance i.i.d. and .
Hence the reward has the compact formulation:
where we denote the indicator function by . Example 1 below formulates quantitatively the informal discussion in section 1.2 by showing that, using cumulative reward variance as a measure of risk and provided the noise is small enough, a policy that leaves the agent with a truly risky component (i.e. one that can hit any arbitrarily low value with positive probability) may be established as less risky than a policy that doesn't, so that this "true risk" fails to be captured by this standard criterion.
Example 1.
(Risk fails to be captured using cumulative reward variance as a measure of risk). Let us consider the regime-switching example described above, denote the variance operator and introduce the classical variance penalty (variance of the sum is equal to the sum of variances in this specific example as the rewards are independent):
If and is the policy that always selects action , we get:
In particular, can be negative. For example if , , , , then: i.e. the latter is negative provided the average noise is small enough .
Proof.
We have, with :
By definition of and , we have and for all and hence:
Since by assumption , we get the desired result. If , , , then:
The latter is a second-order polynomial in with positive leading coefficient, hence it can take negative values if and only if it has two distinct real roots, i.e. if . ∎
Proposition 3 below shows that the chaotic variance is proportional to the hidden noise, and in particular is zero in the absence of such noise. That is, chaotic variance captures the risky component contained in the rewards.
Proposition 3.
The chaotic variance (cf. definition 4) associated to the regime-switching example 1 is given by:
where and .
Proof.
By definition:
By definition of we get:
and hence:
so that:
and therefore:
By definition the chaotic variance is given by , and hence:
But and for :
Plugging in the latter expression we get all in all:
Since by definition we get eventually:
∎
Appendix D Average reward version of the Chaotic Mean-Variance Q-Learning algorithm (theorem 3)
In the average reward framework, some modifications are required since doesn’t necessarily converge anymore. We follow the spirit of Prashanth & Ghavamzadeh, (2016), and start by defining
The chaotic variance process needs to be modified similarly according to a recentering by :
In that case we get the following Bellman equations, similar to theorem 2, which will be used in theorem 4:
We present below the chaotic mean-variance version of the R-learning algorithm in Mahadevan (1996). The algorithm is similar to the one presented in the episodic case (theorem 3), except that the rewards and the chaotic variance have to be adjusted by their mean values as specified above (ergodic limit). The mean values and have to be estimated during the course of the algorithm.
Theorem 4.
(Chaotic Mean-Variance R-Learning in the average reward case). Let and , and the successive states, actions and rewards observed by the agent. Let , be two sequences of learning rates. The Chaotic Mean-Variance R-Learning algorithm is given by:
if and and , , otherwise.
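A minimal sketch of a chaotic mean-variance R-learning update in the spirit of theorem 4 is shown below. It follows the R-learning template of Mahadevan (1996); the environment interface, the constant learning rates and the fact that we fold the recentering of the reward and of the chaotic penalty into a single running average of the modified reward are simplifying assumptions of this sketch.

```python
import numpy as np

def cmv_r_learning(env, n_states, n_actions, lam=0.5, steps=200_000,
                   alpha=0.1, beta=0.01, eps=0.1, rng=np.random.default_rng(0)):
    """Average-reward chaotic mean-variance R-learning, illustrative sketch."""
    Q = np.zeros((n_states, n_actions))
    rbar = np.zeros((n_states, n_actions))   # running estimates of E[r | s, a]
    visits = np.zeros((n_states, n_actions))
    rho = 0.0                                # running average of the modified reward

    s = env.reset()
    for _ in range(steps):
        greedy_a = int(np.argmax(Q[s]))
        a = rng.integers(n_actions) if rng.random() < eps else greedy_a
        s_next, r, _ = env.step(a)

        visits[s, a] += 1
        rbar[s, a] += (r - rbar[s, a]) / visits[s, a]
        r_mod = r - lam * (r - rbar[s, a]) ** 2        # chaotic mean-variance modified reward
        max_s = np.max(Q[s])
        Q[s, a] += alpha * (r_mod - rho + np.max(Q[s_next]) - Q[s, a])
        if a == greedy_a:                              # update the average on greedy steps only
            rho += beta * (r_mod + np.max(Q[s_next]) - max_s - rho)
        s = s_next
    return Q, rbar, rho
```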
Appendix E Episodic Monte Carlo Policy gradient algorithms for additional risk measures: the chaotic CVaR and Sharpe Ratio cases
Here we discuss the extension of CMV-REINFORCE algorithm 3 to the chaotic Sharpe ratio and CVaR cases.
E.1 The chaotic Sharpe Ratio case
The chaotic Sharpe ratio - which we seek to maximize - is defined as:
where we remind that , , and we assume that -a.s. Taking the gradient, we get:
Provided and are known, we can compute unbiased estimates of the gradients , as done in algorithm 3, in particular updating the same way. In order to estimate and , we use the techniques developed in Tamar et al. , (2012) (theorem 4.3) which uses two timescales: and are estimated on the faster timescale so that they can be considered as converged when the update is performed on the slower timescale . We impose and we perform the fast timescales updates:
The unbiased estimates of the gradients are given as in algorithm 3 by:
Finally, is updated on the slow timescale by:
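As a concrete sketch, the two-timescale scheme can be organized as follows. We read the chaotic Sharpe ratio as the ratio of the expected cumulative reward to the square root of the chaotic variance; this reading, the batch structure and the quotient-rule form of the gradient are assumptions of this sketch, and the fast-timescale estimates are treated as frozen during the slow update of the policy parameter.

```python
import numpy as np

def chaotic_sharpe_update(theta, episodes, scores, estimates, lr_fast=0.1, lr_slow=0.01):
    """One two-timescale update for a chaotic Sharpe ratio read as J / sqrt(V) (sketch).

    episodes  : list of (G, C) pairs, G the cumulative reward and C the sum of squared
                chaotic increments of one episode
    scores    : list of score functions sum_t grad log pi(a_t | s_t), one per episode
    estimates : dict {"J": ..., "V": ...} of fast-timescale running estimates
    """
    for G, C in episodes:                              # fast timescale: track E[G] and E[C]
        estimates["J"] += lr_fast * (G - estimates["J"])
        estimates["V"] += lr_fast * (C - estimates["V"])
    J, V = estimates["J"], max(estimates["V"], 1e-8)   # guard against division by zero

    grad_J = np.mean([s * G for s, (G, _) in zip(scores, episodes)], axis=0)
    grad_V = np.mean([s * C for s, (_, C) in zip(scores, episodes)], axis=0)
    grad_sharpe = grad_J / np.sqrt(V) - 0.5 * J * grad_V / V ** 1.5   # quotient rule
    theta += lr_slow * grad_sharpe                     # slow timescale: policy parameter ascent
    return theta
```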
For completeness we present the assumptions under which the above algorithm is guaranteed to converge, cf. theorem 4.3 in Tamar et al. (2012), whose proof uses two timescales and an ODE-based approach:
• , and for : , .
• Assumptions 1, 2 hold true (which guarantee in particular that and are uniformly bounded).
• For all , the objective function has bounded second derivatives. Furthermore, the set of local optima of is countable. Here the objective function is defined as:
E.2 The chaotic CVaR case
We recall that the of a random variable is defined as , where the value-at-risk , where is the cumulative distribution function of . We adapt the work of Chow et al. (2018) (algorithm 1) to the chaotic case, i.e. the case where is the chaotic reward process. In order to obtain an unbiased estimate of the gradient of
we first estimate as in Chow et al. , (2018) (algorithm 1) on a fast timescale so that it can be considered as converged when performing the update:
We can then compute an unbiased estimate of the gradient of as:
Eventually, is updated on the slow timescale as:
where is as in algorithm 3.
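For concreteness, a minimal sketch of the chaotic CVaR machinery is given below: a Monte Carlo estimate of the CVaR of the chaotic component via the Rockafellar-Uryasev representation, together with the fast-timescale stochastic update of the VaR parameter. The loss sign convention (CVaR applied to the negated chaotic cumulative component) and the learning rate are assumptions of this sketch.

```python
import numpy as np

def chaotic_cvar_estimate(chaotic_losses, alpha=0.95):
    """Monte Carlo CVaR of the chaotic component via the Rockafellar-Uryasev formula.

    chaotic_losses : per-episode losses, i.e. the negated sums of chaotic increments.
    """
    x = np.asarray(chaotic_losses)
    nu = np.quantile(x, alpha)                           # VaR_alpha estimate
    return nu + np.mean(np.maximum(x - nu, 0.0)) / (1.0 - alpha)

def var_fast_update(nu, loss, alpha=0.95, lr=0.01):
    """Fast-timescale stochastic update of the VaR parameter nu, driven by one loss sample."""
    # subgradient of nu + (loss - nu)^+ / (1 - alpha) with respect to nu
    grad = 1.0 - float(loss > nu) / (1.0 - alpha)
    return nu - lr * grad

# usage sketch on synthetic chaotic losses
rng = np.random.default_rng(0)
losses = rng.standard_normal(10_000)
print("CVaR_0.95 estimate:", round(chaotic_cvar_estimate(losses, 0.95), 3))  # about 2.06 for N(0, 1)
```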
Appendix F Policy Gradient Actor-Critic algorithms
In the episodic, discounted or average reward settings, we can use the Bellman equations in theorem 2 to adapt the work of e.g. Prashanth & Ghavamzadeh, (2016) (variance case) or Chow et al. , (2018) (CVaR case) in order to derive chaotic versions of their actor-critic algorithms.
In the chaotic mean-variance discounted reward case, and under assumption 2, the policy gradient theorem gives us for the expected reward and risk-sensitive gradients:
(3)
where is the action-state -discounted visiting distribution. The proof of the above is very similar to Prashanth & Ghavamzadeh (2016) (lemma 1), and we have derived it in the proof of section B.4 for the convenience of the reader. In the average reward case, under assumption 2 and using the functions and chaotic variance defined in section D, we get the same equalities except that , where is the stationary distribution of the underlying Markov chain generated by having actions follow . The proof of this result is very similar to that of equations (3) and is given in e.g. Prashanth & Ghavamzadeh (2016), lemma 3.
Note that in equations (3), the functions can, as usual, be replaced by the corresponding advantage functions by subtracting the value functions.
In order to derive online Actor-Critic algorithms, we follow the approach in Prashanth & Ghavamzadeh (2016) by defining , as parametric approximations of the value function and chaotic variance, where is a feature vector. We then apply theorem 2 to compute Temporal Differences that are used to estimate the advantage functions in equations (3), and to update the critic parameters.
Remark 2.
In the average reward setting, the policy gradient equations (3) can be used to derive an online Actor-Critic algorithm as in Prashanth & Ghavamzadeh (2016), because as tuples (, , , ) are observed following , we are guaranteed to end up in the limit in the stationary distribution of a local optimum . In the episodic setting with , the same remark holds. Nevertheless, it is much less clear to the author in the infinite horizon discounted reward setting (with ) how the policy gradient equations (3) could be used rigorously, as they involve the and -discounted visiting distribution. This subtlety is the focus of Thomas (2014) and was pointed out as well in Prashanth & Ghavamzadeh (2016), where they mention difficulties linked to the two policy gradient equations involving two distinct distributions: the and discounted visiting distributions, leading them to introduce new SF-based and SPSA-based algorithms.
In the average reward case we use the notations of section D to get algorithm 5, which is a modification of algorithm 2 in Prashanth & Ghavamzadeh (2016). The three differences are i) the use of the chaotic TD error based on the Bellman equation in theorem 2, ii) the update of the distributional function as it is done in algorithm 3, and iii) the use of the policy gradient equation (3) for the chaotic variance. In that case the value functions and are approximated by linear functions on lower dimensional spaces and , respectively. The algorithm uses three timescales, so that parameters updated on the faster timescales can be considered as converged to their limiting value when considering updates on the slower timescales.
In the episodic setting with , we obtain algorithm 4, which is the episodic counterpart of algorithm 5.
It is to be noted that we can easily extend algorithms 4 and 5 to the chaotic Sharpe ratio case introduced in section E.1. To do so, we use the theta update taken from section E.1:
with .
We do not discuss here the Actor-Critic extensions of the chaotic risk framework to the infinite horizon discounted reward framework in the CVaR case (Chow et al. (2018)) and the mean-variance case (Prashanth & Ghavamzadeh (2016)), but they can be performed using techniques similar to those used in the present section and in sections E.1, E.2. In particular, the SF-based and SPSA-based techniques used in the above-mentioned literature in the discounted reward framework go through with minor modifications as in algorithm 5: i) computing the chaotic TD error based on the Bellman equation in theorem 2 and ii) computing the update of the distributional function as it is done in algorithm 3.
Input: Initial policy parameter , initial critic parameters (), learning rates (), value function feature vectors (), initialized to 0, Optional: experience replay table
Output: Approximation of the optimal policy parameter
Input: Initial policy parameter , initial critic parameters (), learning rates (), value function feature vectors (), initialized to 0, Optional: experience replay table .
Output: Approximation of the optimal policy parameter
Appendix G Numerical values used in experiments of section 5 and additional experiments
G.1 Grid World - section 5.1
We train the policy using timesteps for each value of using an ε-greedy exploration policy with associated probability of taking a random action of , a Q-table initialized to zero, a learning rate (where counts the number of visits to the state-action pair ), and set the probability of the robot going in a random direction to be . We then run the trained policy for rollout steps in order to get the path heatmap and risk penalty heatmap displayed in figure 2. In the latter, the results are averaged over 25 NumPy RNG seeds (1001, 1003, 1006, 1008, 1009, 1010, 1015, 1018, 1021, 1022, 1025, 1028, 1029, 1031, 1033, 1035, 1037, 1038, 1039, 1040, 1041, 1043, 1047, 1048, 1049): for each such seed, we perform the training and obtain the rollouts, which are then averaged over seeds.
In figure 5 we display path heatmaps similar to those in figure 2 but for various values of , representing the probability that the robot goes in a random direction. Figure 6 is similar to figure 5 but with a more severe negative reward of -50 instead of -20. As expected, in figure 5, the higher , the more reward uncertainty/stochasticity there is and the further away from the -20 reward the robot goes. The same observation holds for the more severe reward of -50 instead of -20 (figure 6).
G.2 Portfolio optimization - section 5.2
As described in section 5.2, the action is defined as the quantities invested in the risk-free and risky assets with a total budget constraint and . This yields a total of 21 possible actions.
In order to train the risk-sensitive policy in section 5.2 according to the CMV-REINFORCE algorithm 3, we use a learning rate , batch size and perform iterations on the policy parameter . We use the Boltzmann exploration policy:
where is of size and is the vector with entries all zero except the entry corresponding to , which is set to 1. For each value of , we train the policy as described above and plot in figures 3 and 4 the corresponding metrics for rollout episodes of timesteps.
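For reference, the Boltzmann exploration policy with one-hot state-action features reduces to the small function below; the random parameter values in the usage example are of course illustrative.

```python
import numpy as np

def boltzmann_policy(theta, state, n_states, n_actions):
    """Tabular Boltzmann (softmax) policy with one-hot state-action features.

    theta has one entry per (state, action) pair, and the feature vector of (s, a) is the
    indicator of that pair, so the inner product is simply theta[s * n_actions + a].
    """
    logits = theta.reshape(n_states, n_actions)[state]
    p = np.exp(logits - logits.max())          # subtract the max for numerical stability
    return p / p.sum()

# example: 3 portfolio states, 21 actions, random parameters
rng = np.random.default_rng(0)
theta = 0.1 * rng.standard_normal(3 * 21)
print(boltzmann_policy(theta, state=0, n_states=3, n_actions=21).round(3))
```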
The values of the risk-free rate and volatility in each state are reported in table 1. The state transition matrices depending on the action (via the quantity invested in the risky asset ) are displayed in table 2. They have been designed such that the more we invest in the risky asset, the more likely we are to reach a higher volatility state.
[Table 1: risk-free rate and volatility in each of the states LowVol, MediumVol, HighVol.]
[Table 2: state transition matrices over the states LowVol, MediumVol, HighVol, one per quantity invested in the risky asset.]
In figure 3 and 4, we compare CMV-REINFORCE (algorithm 3) with the mean-variance baseline. Denoting the variance operator, the gradient of the variance needed in the mean-variance method is computed using the classical likelihood ratio technique. Indeed, , and hence:
The likelihood ratio technique then yields (Tamar et al. , (2012), lemma 4.2):
(4)
with:
Note that we additionally apply an optimal baseline (Necchi (2016), section 4.4.1.1) by replacing by in equation (4) with the vector whose component is given by: