A modified MSA for stochastic control problems
Abstract.
The classical Method of Successive Approximations (MSA) is an iterative method for solving stochastic control problems and is derived from Pontryagin's optimality principle. It is known that the MSA may fail to converge. Using careful estimates for the backward stochastic differential equation (BSDE), this paper suggests a modification of the MSA algorithm. This modified MSA is shown to converge for general stochastic control problems with control in both the drift and diffusion coefficients. Under some additional assumptions, a rate of convergence is established. The results are valid without restrictions on the time horizon of the control problem, in contrast to iterative methods based on the theory of forward-backward stochastic differential equations.
1. Introduction
Stochastic control problems appear naturally in a range of applications in engineering, economics and finance. With the exception of very specific cases, such as the linear-quadratic control problem in engineering or the Merton portfolio optimization problem in finance, stochastic control problems typically have no closed-form solutions and have to be solved numerically. In this work, we consider a modification of the method of successive approximations (MSA), see Algorithm 1. The MSA is essentially a way of applying Pontryagin's optimality principle to obtain numerical solutions of stochastic control problems.
We will consider the continuous-space, continuous-time problem where the controlled system is modelled by an $\mathbb{R}^d$-valued diffusion process. Let $W = (W_s)_{s \in [0,T]}$ be a $d'$-dimensional Wiener martingale on a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_s)_{s \in [0,T]}, \mathbb{P})$. We will provide the exact assumptions we need in Section 2. For now, let us fix a finite time $T > 0$ and consider the controlled stochastic differential equation (SDE), for given measurable functions $b \colon [0,T] \times \mathbb{R}^d \times A \to \mathbb{R}^d$ and $\sigma \colon [0,T] \times \mathbb{R}^d \times A \to \mathbb{R}^{d \times d'}$,
(1) $dX_s = b(s, X_s, \alpha_s)\,ds + \sigma(s, X_s, \alpha_s)\,dW_s, \quad s \in [t,T], \qquad X_t = x.$
Here $\alpha = (\alpha_s)_{s \in [t,T]}$ is a control process belonging to the space of admissible controls $\mathcal{A}$, valued in a separable metric space $A$, and we will write $X^{t,x,\alpha}$ to denote the unique solution of (1) which starts from $x$ at time $t$ whilst being controlled by $\alpha$. Furthermore, let $f \colon [0,T] \times \mathbb{R}^d \times A \to \mathbb{R}$ and $g \colon \mathbb{R}^d \to \mathbb{R}$ be given measurable functions and consider the gain functional
(2) $J^\alpha(t,x) := \mathbb{E}\Big[ \int_t^T f(s, X_s^{t,x,\alpha}, \alpha_s)\,ds + g(X_T^{t,x,\alpha}) \Big]$
for all $\alpha \in \mathcal{A}$ and $(t,x) \in [0,T] \times \mathbb{R}^d$. We want to solve the optimisation problem $\inf_{\alpha \in \mathcal{A}} J^\alpha(t,x)$, i.e. to find the optimal control $\alpha^*$ which achieves the minimum of (2) (or, if the infimum cannot be attained, then for a given $\varepsilon > 0$ an $\varepsilon$-optimal control $\alpha^\varepsilon$ such that $J^{\alpha^\varepsilon}(t,x) \le \varepsilon + \inf_{\alpha \in \mathcal{A}} J^\alpha(t,x)$).
In the present paper, we study an approach based on Pontryagin's optimality principle, see e.g. [4], [7] or [25]. The main idea is to consider optimality conditions for controls of the problem (2). Given $(t,x,a) \in [0,T] \times \mathbb{R}^d \times A$ and $(y,z) \in \mathbb{R}^d \times \mathbb{R}^{d \times d'}$ we define the Hamiltonian $\mathcal{H}$ as
(3) $\mathcal{H}(t,x,y,z,a) := b(t,x,a) \cdot y + \operatorname{tr}\big(\sigma^\top(t,x,a)\,z\big) + f(t,x,a).$
Consider, for each $\alpha \in \mathcal{A}$, the BSDE, called the adjoint equation,
(4) $dY_s^\alpha = -\partial_x \mathcal{H}(s, X_s^\alpha, Y_s^\alpha, Z_s^\alpha, \alpha_s)\,ds + Z_s^\alpha\,dW_s, \qquad Y_T^\alpha = \partial_x g(X_T^\alpha).$
It is well known from Pontryagin's optimality principle that, if an admissible control $\alpha^*$ is optimal, $X^*$ is the corresponding optimally controlled dynamics (1) and $(Y^*, Z^*)$ is the solution to the associated adjoint equation (4), then, for almost all $s$ and almost surely, the following holds:
(5) $\mathcal{H}(s, X_s^*, Y_s^*, Z_s^*, \alpha_s^*) = \min_{a \in A} \mathcal{H}(s, X_s^*, Y_s^*, Z_s^*, a).$
We now define the augmented Hamiltonian $\tilde{\mathcal{H}}$ for some $\rho \ge 0$ by
(6) $\tilde{\mathcal{H}}(t,x,y,z,a,\bar a) := \mathcal{H}(t,x,y,z,a) + \frac{\rho}{2}\big|b(t,x,a) - b(t,x,\bar a)\big|^2 + \frac{\rho}{2}\big|\sigma(t,x,a) - \sigma(t,x,\bar a)\big|^2 + \frac{\rho}{2}\big|\partial_x \mathcal{H}(t,x,y,z,a) - \partial_x \mathcal{H}(t,x,y,z,\bar a)\big|^2.$
Notice that for $\rho = 0$ we recover exactly the definition of the Hamiltonian (3). Given the augmented Hamiltonian, let us introduce the modified MSA in Algorithm 1, which consists of successive integrations of the state and adjoint systems and updates to the control. Notice that the backward SDE in (7) depends on the Hamiltonian $\mathcal{H}$, while the control update step (8) comes from minimizing the augmented Hamiltonian $\tilde{\mathcal{H}}$.
(7) $dX_s^n = b(s, X_s^n, \alpha_s^n)\,ds + \sigma(s, X_s^n, \alpha_s^n)\,dW_s, \quad X_t^n = x, \qquad dY_s^n = -\partial_x \mathcal{H}(s, X_s^n, Y_s^n, Z_s^n, \alpha_s^n)\,ds + Z_s^n\,dW_s, \quad Y_T^n = \partial_x g(X_T^n),$
(8) $\alpha_s^{n+1} \in \operatorname*{arg\,min}_{a \in A} \tilde{\mathcal{H}}(s, X_s^n, Y_s^n, Z_s^n, a, \alpha_s^n), \quad s \in [t,T].$
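To make the iteration concrete, here is a minimal numerical sketch of Algorithm 1 on a toy one-dimensional linear-quadratic problem of our own choosing (it is not taken from the paper): $b(t,x,a) = a$, constant uncontrolled $\sigma$, $f(t,x,a) = \frac12(x^2 + a^2)$ and $g(x) = \frac12 x^2$, so that $\partial_x \mathcal{H} = x$ and $Y_T = X_T$. Since neither $\sigma$ nor $\partial_x \mathcal{H}$ depends on $a$ here, minimizing the augmented Hamiltonian (6) reduces to the closed-form update $a = (\rho\bar a - y)/(1 + \rho)$. The forward SDE in (7) is discretised by Euler-Maruyama and the adjoint BSDE by a backward Euler scheme with least-squares regression for the conditional expectations; all names and numerical choices below are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, M = 1.0, 50, 20_000          # horizon, time steps, Monte Carlo paths
sigma, rho, x0 = 0.5, 5.0, 1.0
dt = T / N
dW = rng.normal(0.0, np.sqrt(dt), (M, N))  # Brownian increments, fixed across iterations
alpha = np.zeros((M, N))                   # initial control guess, per path and time step

def forward(alpha):
    # Euler-Maruyama scheme for the controlled SDE in (7): dX = alpha dt + sigma dW.
    X = np.empty((M, N + 1))
    X[:, 0] = x0
    for k in range(N):
        X[:, k + 1] = X[:, k] + alpha[:, k] * dt + sigma * dW[:, k]
    return X

def adjoint(X):
    # Backward Euler for the adjoint BSDE in (7): Y_T = X_T and
    # Y_k = E[Y_{k+1} | X_k] + (d_x H) dt with d_x H = x; the conditional
    # expectation is approximated by polynomial least-squares regression.
    Y = X[:, -1].copy()
    Ys = np.empty((M, N))
    for k in reversed(range(N)):
        coef = np.polyfit(X[:, k], Y, deg=2)
        Y = np.polyval(coef, X[:, k]) + X[:, k] * dt
        Ys[:, k] = Y
    return Ys

def cost(X, alpha):
    # Monte Carlo estimate of the gain functional (2).
    running = 0.5 * (X[:, :-1] ** 2 + alpha ** 2).sum(axis=1) * dt
    return float(np.mean(running + 0.5 * X[:, -1] ** 2))

for n in range(20):
    X = forward(alpha)
    Y = adjoint(X)
    alpha = (rho * alpha - Y) / (1.0 + rho)  # minimiser of the augmented Hamiltonian, step (8)
    print(f"iteration {n}: J = {cost(forward(alpha), alpha):.5f}")
```

Fixing the Brownian increments across iterations keeps the per-path controls consistent between the forward and backward passes; this is a simplification for the purposes of illustration, not part of the algorithm itself.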
The method of successive approximations (i.e. the case $\rho = 0$) for the numerical solution of deterministic control problems was proposed already in [5]. A recent application of the modified MSA to a deep learning problem has been studied in [32], where the training of deep neural networks is formulated as an optimal control problem and the modified method of successive approximations is introduced as an alternative training algorithm for deep learning. For us, the main motivation to explore the modified MSA for stochastic control problems is to obtain convergence, ideally with a rate, of an iterative algorithm applicable to problems with control in the diffusion part of the controlled dynamics. This is in contrast to [36], where a convergence rate of the Bellman–Howard policy iteration is shown, but only for control problems with no control in the diffusion part of the controlled dynamics.
In Lemma 2.3, which is established using careful BSDE estimates, we obtain an estimate on the change of $J$ when we perform a minimization step of the Hamiltonian as in (8). If the sum of the last three terms of (14) is bigger than the first term, then for the classical MSA algorithm (i.e. the case $\rho = 0$) we cannot guarantee that the control update is in a descent direction for $J$. That means that the method of successive approximations may diverge. To overcome this, we need to modify the algorithm in such a way that convergence is ensured. With this in mind the desirability of the augmented Hamiltonian (6) for updating the control becomes clear, as long as it still characterises optimal controls in the same way that $\mathcal{H}$ does. Theorem 2.4 answers this question affirmatively, which opens the way to the modified MSA. In Theorem 2.5 we show that the modified method of successive approximations converges for an arbitrary time horizon $T$, and in Corollary 2.6 we show a logarithmic convergence rate for certain stochastic control problems.
We observe that the forward and backward dynamics in (7) are decoupled, due to the iteration used. Therefore, they can be efficiently approximated, even in high dimension, using deep learning methods, see [31] and [30]. However, the minimization step (8) might be computationally expensive for some problems. A possible approach to circumvent this is to replace the full minimization in (8) by a gradient descent step, as sketched below. A continuous version of this gradient flow is analysed in [37].
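For instance, when $A$ is a subset of a Euclidean space and the coefficients are differentiable in $a$, one gradient-descent replacement of (8) with step size $\tau > 0$ would be (a sketch of the kind of update meant, under our reading, not the scheme analysed in [37]):
$\alpha_s^{n+1} = \alpha_s^n - \tau\,\partial_a \tilde{\mathcal{H}}\big(s, X_s^n, Y_s^n, Z_s^n, a, \alpha_s^n\big)\big|_{a = \alpha_s^n} = \alpha_s^n - \tau\,\partial_a \mathcal{H}\big(s, X_s^n, Y_s^n, Z_s^n, \alpha_s^n\big),$
where the second equality holds because each penalty term in (6) vanishes to first order at $a = \bar a$.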
The main contributions of this paper are a probabilistic proof of convergence of the modified method of successive approximations and the establishment of a convergence rate for a specific class of optimal control problems.
This paper is organised as follows: in Section 1.1 we compare our results with existing work. In Section 2 we state the assumptions and main results. In Section 3 we collect all proofs. Finally, in Appendix A we recall an auxiliary lemma which is needed in the proof of Corollary 2.6.
1.1. Related work
One can solve the stochastic optimal control problem using the dynamic programming principle. It is well known, see e.g. Krylov [8], that under reasonable assumptions the value function, defined as the infimum of (2) over all admissible controls, satisfies the Bellman partial differential equation (PDE). There are several approaches to solving this nonlinear problem. One may apply a finite difference method to discretise the Bellman PDE and obtain a high dimensional nonlinear system of equations, see e.g. [20] or [22]. Or one may linearise the Bellman PDE and then iterate. The classical approach is the Bellman–Howard policy improvement/iteration algorithm, see e.g. [1], [2] or [3]. The algorithm is initialised with a “guess” of a Markovian control. Given a Markovian control strategy at a given step, one solves a linear PDE with the given control fixed and then uses the solution of the linear PDE to update the Markovian control, see e.g. [27], [28] or [29]. In [36], a global rate of convergence and stability for the policy iteration algorithm has been established using backward stochastic differential equation (BSDE) theory. However, the result only applies to stochastic control problems with no control in the diffusion coefficient of the controlled dynamics.
It is known that the solution of the stochastic optimal control problem can be obtained from a corresponding forward-backward stochastic differential equation (FBSDE) via the stochastic optimality principle, see [26, Chapter 8.1]. Indeed, let us consider (1) and (4), and recall from the stochastic optimality principle, see [25, Theorem 4.12], that for the optimal control $\alpha^*$ we have that (5) holds. Assume that under some conditions on $b$, $\sigma$ and $f$ this first order condition uniquely determines $\alpha_t^*$ by
(9) $\alpha_t^* = \phi(t, X_t, Y_t, Z_t)$
for some function $\phi$. Therefore, after plugging (9) into (1) and (4), we obtain the following coupled FBSDE:
(10) $dX_t = B(t, X_t, Y_t, Z_t)\,dt + \Sigma(t, X_t, Y_t, Z_t)\,dW_t, \qquad dY_t = -\partial_x \mathcal{H}\big(t, X_t, Y_t, Z_t, \phi(t, X_t, Y_t, Z_t)\big)\,dt + Z_t\,dW_t, \qquad Y_T = \partial_x g(X_T),$
where $B(t,x,y,z) := b\big(t,x,\phi(t,x,y,z)\big)$ and $\Sigma(t,x,y,z) := \sigma\big(t,x,\phi(t,x,y,z)\big)$. It is worth mentioning that when $\sigma$ does not depend on the control, $\Sigma$ will depend on the forward process and time only. This means that $\Sigma$ does not have $y$ and $z$ components.
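As a simple illustration (our example, not taken from the references): with $d = d' = 1$, $b(t,x,a) = a$, constant $\sigma > 0$, $f(t,x,a) = \frac12(x^2 + a^2)$ and $g(x) = \frac12 x^2$, minimizing $a \mapsto a\,y + \frac12(x^2 + a^2)$ gives $\phi(t,x,y,z) = -y$, so that $B = -y$, $\Sigma = \sigma$ and (10) becomes
$dX_t = -Y_t\,dt + \sigma\,dW_t, \qquad dY_t = -X_t\,dt + Z_t\,dW_t, \qquad Y_T = X_T.$
Here $\Sigma$ is constant, in line with the remark above.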
The theory of FBSDEs has been studied widely; there are several methods to show existence and uniqueness results, and a number of numerical algorithms have been proposed based on those methods. First is the method of contraction mapping. It was first studied by Antonelli [9] and later by Pardoux and Tang [15]. The main idea there is to show that a certain map is a contraction, and then to apply a fixed point argument. However, it turns out that this method works only for a small enough time horizon $T$. In the case when $\Sigma$ does not depend on $y$ and $z$, having small $T$ is sufficient to get a contraction. Otherwise, one needs to assume additionally that the relevant Lipschitz constants of the coefficients satisfy a certain inequality, see [26, Theorem 8.2.1]. Using the method of contraction mapping one can then implement a Picard-iteration-type numerical algorithm and show exponential convergence for small $T$. The second method is the Four Step Scheme. It was introduced by Ma, Protter and Yong, see [10], and was later studied by Delarue [17]. The idea is to use a decoupling function and then study an associated quasi-linear PDE. We note that in [10, 17] the forward diffusion coefficient does not depend on $z$. This corresponds to stochastic control problems with an uncontrolled diffusion coefficient. The numerical algorithms based on this method exploit the numerical solution of the associated quasi-linear PDE and therefore face some limitations for high dimensional problems, see Douglas, Ma and Protter [12], Milstein and Tretyakov [19], Ma, Shen and Zhao [21] and Delarue and Menozzi [18]. Guo, Zhang and Zhuo [24] proposed a numerical scheme for the high-dimensional quasi-linear PDE associated with the coupled FBSDE when the forward diffusion coefficient does not depend on $z$, which is based on a monotone scheme and on a probabilistic approach. Finally, there is the method of continuation. This method was developed by Hu and Peng [11], Peng and Wu [16] and by Yong [14]. It allows one to show existence and uniqueness for an arbitrary $T$ under monotonicity conditions on the coefficients, which one would not expect to hold for FBSDEs arising from a control problem as described by (9), (10). Recently, deep learning methods have been applied to solving FBSDEs. In [35], three algorithms for solving fully coupled FBSDEs, with good accuracy and performance for high-dimensional problems, are provided. One of the algorithms is based on the Picard iteration and it converges, but only for small enough $T$. Such a method for solving high-dimensional FBSDEs has also been proposed in [34].
2. Main results
We fix a finite horizon $T > 0$. Let $A$ be a separable metric space. This is the space where the control processes take values. We fix a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \in [0,T]}, \mathbb{P})$. Let $W$ be a $d'$-dimensional Wiener martingale on this space. By $\mathbb{E}_t$ we denote the conditional expectation with respect to $\mathcal{F}_t$. Let $|\cdot|$ denote any norm in a finite dimensional Euclidean space. By $\|\cdot\|_{L^2}$ we denote the norm in $L^2(\Omega)$. Let $\|\alpha\|^2 := \mathbb{E}\int_0^T |\alpha_t|^2\,dt$ for any predictable process $\alpha$. Derivatives in $x$ are understood componentwise, so that $\partial_x b$ and $\partial_x \sigma$ denote the corresponding Jacobians and $\partial_x \mathcal{H}$ denotes the gradient of $\mathcal{H}$ in $x$. By $\sigma^\top$ we denote the transpose of $\sigma$. The state of the system is governed by the controlled SDE (1). The corresponding adjoint equation is (4).
Assumption 2.1.
The functions $b$ and $\sigma$ are jointly continuous in $t$ and twice differentiable in $x$. There exists $K \ge 0$ such that, for $\varphi \in \{b, \sigma\}$ and for all $t \in [0,T]$, $x, x' \in \mathbb{R}^d$ and $a \in A$,
(11) $|\varphi(t,x,a) - \varphi(t,x',a)| \le K\,|x - x'|, \qquad |\varphi(t,0,a)| \le K, \qquad |\partial_x^2 \varphi(t,x,a)| \le K.$
Moreover, assume that $\partial_x^2 \sigma = 0$.
Clearly the assumption (11) implies that we have
(12) $|\varphi(t,x,a)| \le K\,(1 + |x|), \qquad \varphi \in \{b, \sigma\}.$
The assumption that $\partial_x^2 \sigma = 0$ is needed so that (21), in the proof of Lemma 2.3, holds. Without this assumption (21) would only hold if we could show that $Z^\alpha$ is bounded. Without additional regularization of the control problem this is impossible. Indeed, with [13, Proposition 5.3] we see that $Z_t^\alpha$ is a version of $D_t Y_t^\alpha$ (the Malliavin derivative of $Y_t^\alpha$) and itself satisfies a linear BSDE. However, the estimates obtained from this representation involve a term which would in turn have to be bounded, and this is not necessarily the case here.
Assumption 2.2.
The function $f$ is jointly continuous in $t$ and $a$, and $f$ and $g$ are twice differentiable in $x$. There is a constant $K \ge 0$ such that
(13) $|\partial_x f(t,x,a)| + |\partial_x^2 f(t,x,a)| \le K, \qquad |\partial_x g(x)| + |\partial_x^2 g(x)| \le K.$
Under these assumptions, we can obtain the following estimate.
Lemma 2.3.
The proof will be given in Section 3. We now state a necessary condition for optimality for the augmented Hamiltonian.
Theorem 2.4 (Extended Pontryagin’s optimality principle).
The proof of Theorem 2.4 is given in Section 3. We are now ready to present the main result of the paper.
Theorem 2.5.
Theorem 2.5 will be proved in Section 3. It can be seen from the proof that $\rho$ needs to be at least two times larger than the constant appearing in Lemma 2.3, which itself increases with $T$ and the constants from Assumptions 2.1 and 2.2.
We cannot guarantee that Algorithm 1 converges to the optimal control which minimizes (2), since the extended Pontryagin optimality principle, see Theorem 2.4, is only a necessary condition for optimality. The sufficient condition for optimality requires convexity of the Hamiltonian in the state and control variables and convexity of the terminal cost function, that is, convexity of $\mathcal{H}$ in $(x,a)$ and of $g$ in $x$.
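For completeness, the standard sufficient condition (see e.g. [23] or [25]) reads as follows in the notation above: if $g$ is convex, if $(x,a) \mapsto \mathcal{H}(s, x, Y_s^*, Z_s^*, a)$ is convex for almost all $s$, almost surely, and if
$\mathcal{H}(s, X_s^*, Y_s^*, Z_s^*, \alpha_s^*) = \min_{a \in A} \mathcal{H}(s, X_s^*, Y_s^*, Z_s^*, a),$
then $\alpha^*$ is an optimal control, i.e. $J^{\alpha^*}(t,x) = \inf_{\alpha \in \mathcal{A}} J^\alpha(t,x)$.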
In the following corollary, we show that under a particular setting of the problem we have logarithmic convergence of the modified method of successive approximations to the true solution of the problem.
Corollary 2.6.
3. Proofs
We start working towards the proof of Theorem 2.5. Recall the adjoint equation for an admissible control $\alpha \in \mathcal{A}$:
(16) $dY_s^\alpha = -\partial_x \mathcal{H}(s, X_s^\alpha, Y_s^\alpha, Z_s^\alpha, \alpha_s)\,ds + Z_s^\alpha\,dW_s, \qquad Y_T^\alpha = \partial_x g(X_T^\alpha).$
From now on, we shall use Einstein notation, so that repeated indices in a single term imply summation over all the values of that index.
Lemma 3.1.
Assume that there exists a constant $K \ge 0$ such that we have
$|\partial_x f(t,x,a)| \le K \quad \text{for all } (t,x,a) \in [0,T] \times \mathbb{R}^d \times A$
and
$|\partial_x g(x)| \le K \quad \text{for all } x \in \mathbb{R}^d.$
Then $(Y_t^\alpha)_{t \in [0,T]}$ is bounded for any admissible control $\alpha \in \mathcal{A}$.
Proof.
From the definition of the Hamiltonian (3) we have
$\partial_x \mathcal{H}(t,x,y,z,a) = \big(\partial_x b(t,x,a)\big)^\top y + \big(\partial_x \sigma(t,x,a)\big)^\top z + \partial_x f(t,x,a).$
Hence, one can observe that (16) is a linear BSDE. Therefore, from [33, Proposition 3.2] we can write the formula for the solution of (16):
$Y_t^\alpha = (\Gamma_t)^{-1}\,\mathbb{E}_t\Big[\Gamma_T\,\partial_x g(X_T^\alpha) + \int_t^T \Gamma_s\,\partial_x f(s, X_s^\alpha, \alpha_s)\,ds\Big],$
where the process $\Gamma$ is the unique strong solution of the linear SDE
$d\Gamma_s = \Gamma_s\,\partial_x b(s, X_s^\alpha, \alpha_s)\,ds + \Gamma_s\,\partial_x \sigma(s, X_s^\alpha, \alpha_s)\,dW_s, \qquad \Gamma_0 = I,$
and $(\Gamma_t)^{-1}$ is the inverse process of $\Gamma$. Thus, due to [33, Corollary 3.7] and the assumptions of the lemma we have the following bound:
$\sup_{t \le T} |Y_t^\alpha| \le C$
for some deterministic constant $C$. Hence, due to the assumptions of the lemma, we conclude that $Y^\alpha$ is bounded. ∎
Proof of Lemma 2.3.
Let $\alpha$ and $\beta$ be some generic admissible controls. We will write $X^\alpha$ for the solution of (1) controlled by $\alpha$ and $X^\beta$ for the solution of (1) controlled by $\beta$. We denote the solutions of the corresponding adjoint equations by $(Y^\alpha, Z^\alpha)$ and $(Y^\beta, Z^\beta)$. Due to Taylor's theorem, we note that for some $R \in [0,1]$, we have that
$g(X_T^\beta) - g(X_T^\alpha) = \partial_x g(X_T^\alpha) \cdot (X_T^\beta - X_T^\alpha) + \frac12 (X_T^\beta - X_T^\alpha)^\top\,\partial_x^2 g\big(X_T^\alpha + R\,(X_T^\beta - X_T^\alpha)\big)\,(X_T^\beta - X_T^\alpha) \le \partial_x g(X_T^\alpha) \cdot (X_T^\beta - X_T^\alpha) + C\,|X_T^\beta - X_T^\alpha|^2.$
The last inequality holds due to Assumption 2.2. Recall that $Y_T^\alpha = \partial_x g(X_T^\alpha)$. Hence, using Itô's product rule, we get
$\mathbb{E}\big[Y_T^\alpha \cdot (X_T^\beta - X_T^\alpha)\big] = \mathbb{E}\int_t^T \big( Y_s^\alpha \cdot d(X_s^\beta - X_s^\alpha) + (X_s^\beta - X_s^\alpha) \cdot dY_s^\alpha + d\langle Y^\alpha, X^\beta - X^\alpha \rangle_s \big).$
From this, the forward SDE (1) and the adjoint equation (4) we thus get
(17) $\mathbb{E}\big[Y_T^\alpha \cdot (X_T^\beta - X_T^\alpha)\big] = \mathbb{E}\int_t^T \Big( \big(b(s, X_s^\beta, \beta_s) - b(s, X_s^\alpha, \alpha_s)\big) \cdot Y_s^\alpha - (X_s^\beta - X_s^\alpha) \cdot \partial_x \mathcal{H}(s, X_s^\alpha, Y_s^\alpha, Z_s^\alpha, \alpha_s) + \operatorname{tr}\big[\big(\sigma(s, X_s^\beta, \beta_s) - \sigma(s, X_s^\alpha, \alpha_s)\big)^\top Z_s^\alpha\big] \Big)\,ds.$
On the other hand, by definition of the Hamiltonian we have
(18) $\mathbb{E}\int_t^T \big( f(s, X_s^\beta, \beta_s) - f(s, X_s^\alpha, \alpha_s) \big)\,ds = \mathbb{E}\int_t^T \Big( \mathcal{H}(s, X_s^\beta, Y_s^\alpha, Z_s^\alpha, \beta_s) - \mathcal{H}(s, X_s^\alpha, Y_s^\alpha, Z_s^\alpha, \alpha_s) - \big(b(s, X_s^\beta, \beta_s) - b(s, X_s^\alpha, \alpha_s)\big) \cdot Y_s^\alpha - \operatorname{tr}\big[\big(\sigma(s, X_s^\beta, \beta_s) - \sigma(s, X_s^\alpha, \alpha_s)\big)^\top Z_s^\alpha\big] \Big)\,ds.$
Summing up (17) and (18) we get
(19) $J^\beta(t,x) - J^\alpha(t,x) \le \mathbb{E}\int_t^T \big( \mathcal{H}(s, X_s^\beta, Y_s^\alpha, Z_s^\alpha, \beta_s) - \mathcal{H}(s, X_s^\alpha, Y_s^\alpha, Z_s^\alpha, \alpha_s) - (X_s^\beta - X_s^\alpha) \cdot \partial_x \mathcal{H}(s, X_s^\alpha, Y_s^\alpha, Z_s^\alpha, \alpha_s) \big)\,ds + C\,\mathbb{E}\,|X_T^\beta - X_T^\alpha|^2.$
Due to Taylor's theorem, there exists $R' \in [0,1]$ such that we have
(20) $\mathcal{H}(s, X_s^\beta, Y_s^\alpha, Z_s^\alpha, \beta_s) = \mathcal{H}(s, X_s^\alpha, Y_s^\alpha, Z_s^\alpha, \beta_s) + (X_s^\beta - X_s^\alpha) \cdot \partial_x \mathcal{H}(s, X_s^\alpha, Y_s^\alpha, Z_s^\alpha, \beta_s) + \frac12 (X_s^\beta - X_s^\alpha)^\top\,\partial_x^2 \mathcal{H}\big(s, X_s^\alpha + R'(X_s^\beta - X_s^\alpha), Y_s^\alpha, Z_s^\alpha, \beta_s\big)\,(X_s^\beta - X_s^\alpha).$
Since $\partial_x^2 \sigma = 0$ by Assumption 2.1, we have that
$\big|\partial_x^2 \mathcal{H}(s, x, Y_s^\alpha, Z_s^\alpha, a)\big| \le K\,\big(1 + |Y_s^\alpha|\big).$
From Lemma 3.1 we know that $Y_s^\alpha$ is bounded a.s. for all $s \in [t,T]$. Hence by Assumptions 2.1 and 2.2 we have
(21) $\mathbb{E}\int_t^T \frac12 (X_s^\beta - X_s^\alpha)^\top\,\partial_x^2 \mathcal{H}\big(s, X_s^\alpha + R'(X_s^\beta - X_s^\alpha), Y_s^\alpha, Z_s^\alpha, \beta_s\big)\,(X_s^\beta - X_s^\alpha)\,ds \le C\,\mathbb{E}\int_t^T |X_s^\beta - X_s^\alpha|^2\,ds.$
Therefore, after substituting (20) into (19), and by (21), we get
$J^\beta(t,x) - J^\alpha(t,x) \le \mathbb{E}\int_t^T \big( \mathcal{H}(s, X_s^\alpha, Y_s^\alpha, Z_s^\alpha, \beta_s) - \mathcal{H}(s, X_s^\alpha, Y_s^\alpha, Z_s^\alpha, \alpha_s) + (X_s^\beta - X_s^\alpha) \cdot \big(\partial_x \mathcal{H}(s, X_s^\alpha, Y_s^\alpha, Z_s^\alpha, \beta_s) - \partial_x \mathcal{H}(s, X_s^\alpha, Y_s^\alpha, Z_s^\alpha, \alpha_s)\big) \big)\,ds + C\,\mathbb{E}\,|X_T^\beta - X_T^\alpha|^2 + C\,\mathbb{E}\int_t^T |X_s^\beta - X_s^\alpha|^2\,ds.$
Let us now get a standard SDE estimate for the difference of $X^\beta$ and $X^\alpha$. From (1), after taking the expectation and applying Hölder's inequality, Assumption 2.1, the Burkholder-Davis-Gundy inequality and Gronwall's inequality, we obtain
(22) $\mathbb{E}\Big[\sup_{s \in [t,T]} |X_s^\beta - X_s^\alpha|^2\Big] \le C\,\mathbb{E}\int_t^T \big( |b(s, X_s^\alpha, \beta_s) - b(s, X_s^\alpha, \alpha_s)|^2 + |\sigma(s, X_s^\alpha, \beta_s) - \sigma(s, X_s^\alpha, \alpha_s)|^2 \big)\,ds.$
Young's inequality allows us to get the estimate
$\mathbb{E}\int_t^T (X_s^\beta - X_s^\alpha) \cdot \big(\partial_x \mathcal{H}(s, X_s^\alpha, Y_s^\alpha, Z_s^\alpha, \beta_s) - \partial_x \mathcal{H}(s, X_s^\alpha, Y_s^\alpha, Z_s^\alpha, \alpha_s)\big)\,ds \le \frac12\,\mathbb{E}\int_t^T |X_s^\beta - X_s^\alpha|^2\,ds + \frac12\,\mathbb{E}\int_t^T \big|\partial_x \mathcal{H}(s, X_s^\alpha, Y_s^\alpha, Z_s^\alpha, \beta_s) - \partial_x \mathcal{H}(s, X_s^\alpha, Y_s^\alpha, Z_s^\alpha, \alpha_s)\big|^2\,ds.$
Hence, from (22) we have that
$J^\beta(t,x) - J^\alpha(t,x) \le \mathbb{E}\int_t^T \big( \mathcal{H}(s, X_s^\alpha, Y_s^\alpha, Z_s^\alpha, \beta_s) - \mathcal{H}(s, X_s^\alpha, Y_s^\alpha, Z_s^\alpha, \alpha_s) \big)\,ds + C\,\mathbb{E}\int_t^T \big( |b(s, X_s^\alpha, \beta_s) - b(s, X_s^\alpha, \alpha_s)|^2 + |\sigma(s, X_s^\alpha, \beta_s) - \sigma(s, X_s^\alpha, \alpha_s)|^2 + |\partial_x \mathcal{H}(s, X_s^\alpha, Y_s^\alpha, Z_s^\alpha, \beta_s) - \partial_x \mathcal{H}(s, X_s^\alpha, Y_s^\alpha, Z_s^\alpha, \alpha_s)|^2 \big)\,ds$
for some constant $C$, which depends on $T$ and the constants from Assumptions 2.1 and 2.2. ∎
Proof of Theorem 2.4.
Proof of Theorem 2.5.
Let us apply Lemma 2.3 for $\alpha = \alpha^n$ and $\beta = \alpha^{n+1}$, writing $J(\alpha) := J^\alpha(t,x)$. Hence, for some $C > 0$, we have
(25) $J(\alpha^{n+1}) - J(\alpha^n) \le \mathbb{E}\int_t^T \big( \mathcal{H}(s, X_s^n, Y_s^n, Z_s^n, \alpha_s^{n+1}) - \mathcal{H}(s, X_s^n, Y_s^n, Z_s^n, \alpha_s^n) \big)\,ds + C\,\mathbb{E}\int_t^T \big( |\Delta b_s|^2 + |\Delta\sigma_s|^2 + |\Delta\partial_x \mathcal{H}_s|^2 \big)\,ds,$
where $\Delta b_s := b(s, X_s^n, \alpha_s^{n+1}) - b(s, X_s^n, \alpha_s^n)$, and $\Delta\sigma_s$ and $\Delta\partial_x \mathcal{H}_s$ are defined analogously.
Let
$\Delta_n := \mathbb{E}\int_t^T \big( \mathcal{H}(s, X_s^n, Y_s^n, Z_s^n, \alpha_s^{n+1}) - \mathcal{H}(s, X_s^n, Y_s^n, Z_s^n, \alpha_s^n) \big)\,ds.$
Due to the definition of (8) and (15) we have for all $s \in [t,T]$
$\mathcal{H}(s, X_s^n, Y_s^n, Z_s^n, \alpha_s^{n+1}) - \mathcal{H}(s, X_s^n, Y_s^n, Z_s^n, \alpha_s^n) \le -\frac{\rho}{2}\big( |\Delta b_s|^2 + |\Delta\sigma_s|^2 + |\Delta\partial_x \mathcal{H}_s|^2 \big).$
Therefore, we can observe that $\Delta_n \le 0$ and that $\mathbb{E}\int_t^T \big( |\Delta b_s|^2 + |\Delta\sigma_s|^2 + |\Delta\partial_x \mathcal{H}_s|^2 \big)\,ds \le -\frac{2}{\rho}\,\Delta_n$. Hence we can rewrite the inequality (25) as
(26) $J(\alpha^{n+1}) - J(\alpha^n) \le \mu\,\Delta_n \le 0,$
where $\mu := 1 - 2C/\rho$. By choosing $\rho > 2C$ we have that $\mu > 0$. Notice that for any integer $N$ we have
$\mu \sum_{n=0}^{N} (-\Delta_n) \le \sum_{n=0}^{N} \big( J(\alpha^n) - J(\alpha^{n+1}) \big) = J(\alpha^0) - J(\alpha^{N+1}) \le J(\alpha^0) - \inf_{\alpha \in \mathcal{A}} J(\alpha).$
Since $-\Delta_n \ge 0$ for all $n$ and $J$ is bounded from below, we have that $\Delta_n \to 0$ as $n \to \infty$. This concludes the proof. ∎
We need to introduce new notation, which will be used in the proof of Corollary 2.6. Denote the set
(27) |
Let us define for all
and
By definition of notice that for all . Let us show an auxiliary lemma.
Lemma 3.2.
For any $\varepsilon > 0$ there exists $N$, which depends on $\varepsilon$ and the constants of the problem, such that
Proof.
We will prove this by contradiction. Assume that there exists $\varepsilon > 0$ such that we have
(28) |
Denote , , where $\lfloor \cdot \rfloor$ denotes the integer part. Since for all by definition of and is a superset of we have
(29) |
Hence, by (28), we obtain a contradiction with (29). ∎
Now we are ready to prove Corollary 2.6.
Proof of Corollary 2.6.
First, observe that
Let us consider the set given by (27). We will specify the choice of its parameters later. Hence, after applying Lemma 2.3 for $\alpha = \alpha^n$ and $\beta = \alpha^{n+1}$, we have for some $C > 0$
Since the following holds for all and :
we have for
Therefore, from Lemma 3.2 and from calculations similar to those in (26), there exists a constant such that
Let us choose . Hence
(30) |
Let $\alpha^*$ be the optimal control. Indeed, by the sufficient condition for optimality, see e.g. [23], and by the assumptions of the corollary, we have the existence of the optimal control. Therefore, by convexity of $g$, and by Itô's product rule, we have
Hence, we have that
Recalling the form of the Hamiltonian and observing that
we have
Since $\mathcal{H}$ is convex in $(x,a)$, we have for all $s$ that
Therefore, we obtain
(31) |
where the second inequality holds due to
Let $\delta_n := J(\alpha^n) - J(\alpha^*)$; then, due to (30) and (31), we have that
Therefore, due to Lemma A.1, we obtain the claimed rate of convergence for some constant $C$. This concludes the proof. ∎
Appendix A Auxiliary Lemma
Lemma A.1.
Let $(a_n)_{n \ge 1}$ be a sequence of nonnegative numbers such that
$a_{n+1} \le a_n - c\,a_n^2,$
where $c$ is a positive constant. Then $a_n \le C/n$ for all $n \ge 1$, where the constant $C$ depends only on $c$ and $a_1$.
One can find the proof in [6, Lemma 1.4, p. 93]. However, the proof there is written in Russian. For the convenience of the reader we provide it here.
Proof.
Let $a_n = \frac{b_n}{c\,n}$ for some nonnegative sequence $(b_n)_{n \ge 1}$. Then it is enough to show that $b_n$ is bounded for all $n$. By assumption we have
$\frac{b_{n+1}}{c(n+1)} \le \frac{b_n}{c\,n} - c\,\frac{b_n^2}{c^2 n^2}.$
Therefore,
$b_{n+1} \le \frac{n+1}{n}\,b_n - \frac{n+1}{n^2}\,b_n^2.$
After some transformation, we can rewrite the inequality above as
$b_{n+1} \le b_n + \frac{b_n}{n}\Big( 1 - \frac{n+1}{n}\,b_n \Big).$
Thus we distinguish two cases. If $b_n \ge \frac{n}{n+1}$ we have
$1 - \frac{n+1}{n}\,b_n \le 0.$
Hence $b_{n+1} \le b_n$. On the other hand, if $b_n < \frac{n}{n+1} < 1$, we have $b_{n+1} \le b_n + \frac{b_n}{n} < 2$. Therefore, we conclude that for all $n$ we have
$b_n \le \max(b_1, 2).$
∎
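A quick numerical sanity check (ours) of the recurrence in Lemma A.1 as reconstructed above: iterating the equality case $a_{n+1} = a_n - c\,a_n^2$ exhibits the predicted decay of order $1/(cn)$.

```python
# Sanity check (ours) for Lemma A.1: iterate the equality case of
# a_{n+1} <= a_n - c * a_n^2 and compare with the predicted 1/(c n) decay.
c, a = 0.1, 1.0
n_max = 10_000
for n in range(1, n_max + 1):
    a = a - c * a * a
print(f"a_n = {a:.6f}, 1/(c n) = {1.0 / (c * n_max):.6f}")  # both are about 1e-3
```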
References
- [1] R. Bellman, Functional equations in the theory of dynamic programming. v. positivity and quasi-linearity, Proc. Natl. Acad. Sci. U.S.A., 41(10):743–746, 1955.
- [2] R. Bellman, Dynamic Programming, Princeton University Press, Princeton, NJ, USA, 1957.
- [3] R. A. Howard, Dynamic Programming and Markov Processes. MIT Press, Cambridge, MA, 1960.
- [4] V. G. Boltyanskii, R. V. Gamkrelidze, and L. S. Pontryagin, Theory of optimal processes. I. The maximum principle, Izv. Akad. Nauk SSSR Ser. Mat., 24(1): 3–42, 1960.
- [5] I. A. Krylov and F. L. Chernousko, On the method of successive approximations for solution of optimal control problems (in Russian), U.S.S.R. Comput. Math. Math. Phys., 2:6, 1371–1382, 1963.
- [6] V. D. Demyanov and A. M. Rubinov, Approximate Methods in Extremal Problems (in Russian), Leningrad, 1968.
- [7] L. S. Pontryagin, Mathematical Theory of Optimal Processes, CRC Press, 1987.
- [8] N. V. Krylov, Controlled diffusion processes, Springer, 1980.
- [9] F. Antonelli, Backward-forward stochastic differential equations, Ann. Appl. Probab., 3:777–793, 1993.
- [10] J. Ma, P. Protter, and J. M. Yong, Solving forward-backward stochastic differential equations explicitly – a four step scheme, Probab. Th. Rel. Fields, 98:339–359, 1994.
- [11] Y. Hu and S. Peng, Solution of forward-backward stochastic differential equations, Probab. Th. Rel. Fields, 103:273–283, 1995.
- [12] J. Douglas, J. Ma, and P. Protter, Numerical methods for forward-backward stochastic differential equations, Ann. Appl. Probab., 6(3):940–968, 1996.
- [13] N. El Karoui and S. Peng and M. C. Quenez, Backward Stochastic Differential Equations in Finance, Math. Finance, 7(1):1–71, 1997.
- [14] J. Yong, Finding adapted solutions of forward-backward stochastic differential equations: Method of continuation, Probab. Th. Rel. Fields, 107:537–572, 1997.
- [15] E. Pardoux and S. Tang, Forward-backward stochastic differential equations and quasilinear parabolic PDEs, Probab. Th. Rel. Fields, 114(2):123–150, 1999.
- [16] S. Peng and Z. Wu, Fully coupled forward-backward stochastic differential equations and applications to optimal control, SIAM J. Control Optim. 37:825–843, 1999.
- [17] F. Delarue, On the existence and uniqueness of solutions to FBSDEs in a non-degenerate case, Stochastic Process. Appl., 99:209–289, 2002.
- [18] F. Delarue and S. Menozzi, A forward-backward stochastic algorithm for quasi-linear PDEs, Ann. Appl. Probab., 16(1):140–184, 2006.
- [19] G. Milstein and M. Tretyakov, Discretization of forward-backward stochastic differential equations and related quasi-linear parabolic equations, IMA J. Numer. Anal., 27(1):24–44, 2007.
- [20] H. Dong and N. V. Krylov, The rate of convergence of finite-difference approximations for parabolic Bellman equations with Lipschitz coefficients in cylindrical domains, Appl. Math. Optim., 56(1):37–66, 2007.
- [21] J. Ma, J. Shen, and Y. Zhao, On numerical approximations of forward-backward stochastic differential equations, SIAM J. Numer. Anal., 46(5):2636–2661, 2008.
- [22] I. Gyöngy and D. Šiska, On Finite-Difference Approximations for Normalized Bellman Equations, Appl. Math. Optim., 60:297–339, 2009.
- [23] H. Pham, Continuous-time stochastic control and optimization with financial applications, Springer, 2009.
- [24] W. Guo, J. Zhang, and J. Zhuo, A monotone scheme for high-dimensional fully nonlinear PDEs, Ann. Appl. Probab., 25(3):1540–1580, 2015.
- [25] R. Carmona, Lectures on BSDEs, Stochastic Control, and Stochastic Differential Games with Financial Applications, SIAM, 2016.
- [26] J. Zhang, Backward Stochastic Differential Equations: From Linear to Fully Nonlinear Theory, Springer, New York, 2017.
- [27] S. D. Jacka and A. Mijatović, On the policy improvement algorithm in continuous time, Stochastics, 89(1):348–359, 2017.
- [28] S. D. Jacka, A. Mijatović, and D. Siraj, Coupling and a generalised Policy Iteration Algorithm in continuous time, arXiv:1707.07834, 2017.
- [29] J. Maeda and S. D. Jacka, Evaluation of the Rate of Convergence in the PIA, arXiv:1709.06466, 2017.
- [30] M. Sabate Vidales, D. Šiška, and Ł. Szpruch, Unbiased deep solvers for parametric PDEs, arXiv:1810.05094v2, 2018.
- [31] J. Han, A. Jentzen, and W. E, Solving high-dimensional partial differential equations using deep learning, Proc. Natl. Acad. Sci. U.S.A., 115(34):8505–8510, 2018.
- [32] Q. Li, L. Chen, C. Tai, and W. E, Maximum principle based algorithms for deep learning, J. Mach. Learn. Res., 18(165), 1–29, 2018.
- [33] J. Harter and A. Richou, A stability approach for solving multidimensional quadratic BSDEs, Electron. J. Probab., 24(4), 1–51, 2019.
- [34] J. Han and J. Long, Convergence of the deep BSDE method for coupled FBSDEs, Probab. Uncertain. Quant. Risk., 5(1), 1–33, 2020.
- [35] S. Ji, S. Peng, Y. Peng, and X. Zhang, Three algorithms for solving high-dimensional fully-coupled FBSDEs through deep learning, IEEE Intell. Syst., 35(3):71–84, 2020.
- [36] B. Kerimkulov, D. Šiška, and Ł. Szpruch. Exponential convergence and stability of Howard’s policy improvement algorithm for controlled diffusions, SIAM J. Control Optim., 58(3), 1314–1340, 2020.
- [37] D. Šiška and Ł. Szpruch, Gradient Flows for Regularized Stochastic Control Problems, arXiv:2006.05956, 2020.