
State Constrained Stochastic Optimal Control for Continuous and Hybrid Dynamical Systems Using DFBSDE

Bolun Dai    Prashanth Krishnamurthy    Andrew Papanicolaou    Farshad Khorrami
New York University, Brooklyn, NY 11201; North Carolina State University, Raleigh, NC 27695
Abstract

We develop a computationally efficient learning-based forward-backward stochastic differential equations (FBSDE) controller for both continuous and hybrid dynamical (HD) systems subject to stochastic noise and state constraints. Solutions to stochastic optimal control (SOC) problems satisfy the Hamilton–Jacobi–Bellman (HJB) equation. Using current FBSDE-based solution techniques, the optimal control can be obtained from the HJB equation using deep neural networks (e.g., long short-term memory (LSTM) networks). To ensure the learned controller respects the constraint boundaries, we enforce the state constraints using a soft penalty function. Going beyond previous works, we adapt the deep FBSDE (DFBSDE) control framework to handle HD systems consisting of continuous dynamics and a deterministic discrete state change. We demonstrate our proposed algorithm in simulation on a continuous nonlinear system (cart-pole) and a hybrid nonlinear system (five-link biped).

keywords:
Stochastic control, Optimal control, Forward and backward stochastic differential equations
thanks: This work is supported in part by the NSF under grant DMS-1907518. An earlier version [6] of a portion of this paper was presented at the 2021 American Control Conference (Virtual), May 2021.


1 Introduction

Optimal control has been applied in various applications, e.g., robotic control [17] [5] and navigation [10]. Optimal control problems are often cast as the minimization of a cost function, whose solution can be found via numerical optimization techniques [11] [8] [9]. Recently, optimal control has been used to solve increasingly complex control problems [18] thanks to enhanced computational capabilities, improved optimization software, and the adoption of neural networks [2]. Applying optimal control requires accurate modeling of the system of interest. However, unmodeled processes are common in real-world systems, e.g., measurement noise and external forces with unknown distributions [16] [12]. We adopt SOC [4] techniques to account for such uncontrolled inputs explicitly.

Recently, neural networks have been widely used to solve SOC problems [20] [7]. A promising direction for deep neural network (DNN) based SOC solutions is formulating the problem as an FBSDE [22]. FBSDEs enable the numerical estimation of the solution to the HJB equation, i.e., the value function associated with a cost function. The optimal control at each state can then be obtained analytically from the estimated value function. The introduction of deep neural networks [22] to FBSDEs improves the value function estimation. Compared to nonlinear model predictive control (MPC) approaches, DFBSDE-based approaches are computationally more efficient during inference by moving the heavy computation offline. On top of the vanilla DFBSDE, state constraints have been introduced using penalty functions [6] and by appending a differentiable quadratic program [19]. In this work, we explore a penalty-function-based approach since it is faster in both training and inference.

Another key aspect of this paper is the control of HD systems. The type of HD systems considered consists of two phases: first, a phase of continuously evolving dynamics, and second, a discrete jump. Such HD systems are very common, e.g., walking robots [14], so it is worthwhile to understand how they are typically controlled. The control of walking robots is traditionally achieved with a hierarchical control architecture: a reduced-order model is used for motion generation, and the generated motion is tracked via a tracking controller [17]. Another approach is to view the HD system as a combination of multiple phases of continuous dynamics [21]. Using this approach, the controller only needs to work well for the continuous dynamics and be robust to the changes caused by discrete jumps. This formulation is end-to-end, which better suits the DFBSDE.

This paper proposes a SOC setting for nonlinear systems that incorporates state constraints and adapts the method to handle HD systems. The main contribution of this paper is threefold: (1) we propose the state-constrained DFBSDE algorithm and adapt it to handle HD systems; (2) we devise a new training loss and controller ensemble setting for high-dimensional HD systems; (3) we provide simulation studies on both continuous and HD systems to show the efficacy of our approach. In relation to our previous work [6], this paper provides a more detailed development of the methodology, an extension to hybrid systems, and an application to bipeds. The remainder of this paper is structured as follows. In Section II, the state-constrained SOC formulation is given. Section III presents a derivation connecting the SOC formulation and FBSDEs, along with the incorporation of control saturation and state constraints. Using the derived formulation, a deep neural network-based algorithm is then presented to solve the FBSDE with state constraints and control saturation. The last part of Section III discusses adapting the DFBSDE algorithm to HD systems. In Section IV, simulation studies are presented for a continuous dynamical system, namely the cart-pole, and a five-link biped [15], which is HD.

2 Problem Formulation

In this section, we outline the SOC problem under state constraints. We consider a filtered probability space $(\Omega,\mathcal{F},\{\mathcal{F}_{t}\}_{t\geq 0},\mathbb{P})$, where $\Omega$ is the sample space, $\mathcal{F}$ is the $\sigma$-algebra over $\Omega$, $\mathbb{P}$ is a probability measure, and $\{\mathcal{F}_{t}\}_{t\geq 0}$ is a filtration with index $t$ denoting time. A controlled affine system with noise represented by stochastic processes can be described using a stochastic differential equation (SDE)

$d\mathbf{x}(t)=(\mathbf{F}(\mathbf{x}(t))+\mathbf{G}(\mathbf{x}(t))\mathbf{u})\,dt+\Sigma(\mathbf{x}(t))\,d\mathbf{w}(t)$ (1)

with initial state $\mathbf{x}_{0}\in\mathbb{R}^{n}$ and $\mathbf{w}(t)\in\mathbb{R}^{\nu}$ an $\mathcal{F}_{t}$-adapted Brownian motion. Throughout this paper, we write $\mathbf{x}(t)$ and other stochastic processes with the argument $t$ omitted whenever appropriate. The state is denoted by $\mathbf{x}\in\mathbb{R}^{n}$ and the control input by $\mathbf{u}\in\mathbb{R}^{m}$. In (1), $\mathbf{F}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}$ represents the drift, $\mathbf{G}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n\times m}$ the control influence, and $\Sigma:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n\times\nu}$ the diffusion (the influence of the Brownian motion on the state). We assume that $\mathrm{range}(\mathbf{G})\subseteq\mathrm{range}(\Sigma)$, i.e., the noise enters wherever the control input appears (in addition to possibly elsewhere). The state-constrained SOC problem is to find a controller $\mathbf{u}(\mathbf{x})$ that minimizes an objective function $J^{\mathbf{u}}(\mathbf{x},t)\in\mathbb{R}_{+}$ under a set of state constraints. The objective function is denoted as

$J^{\mathbf{u}}(\mathbf{x},t)=\mathbb{E}\Big[\mathbf{q}_{N}(\mathbf{x}(T))+\int_{t}^{T}\big(\mathbf{q}(\mathbf{x})+\mathbf{r}(\mathbf{u})\big)\,d\tau\Big]$ (2)

where $\mathbb{E}$ denotes expectation, $T\in\mathbb{R}_{+}$ is the terminal time, $\mathbf{q}_{N}:\mathbb{R}^{n}\rightarrow\mathbb{R}_{+}$ is the terminal state cost, $\mathbf{q}:\mathbb{R}^{n}\rightarrow\mathbb{R}_{+}$ is the instantaneous state cost, and $\mathbf{r}:\mathbb{R}^{m}\rightarrow\mathbb{R}_{+}$ is the instantaneous control cost. The controller $\mathbf{u}(\mathbf{x}(t))$ is a function of the state; this dependency is omitted in the remainder of this paper for brevity, and the controller is written as $\mathbf{u}$. Without loss of generality, we consider state constraints of the form $\mathbf{c}(\mathbf{x})\leq\mathbf{b}$, where $\mathbf{c}(\mathbf{x})\in\mathbb{R}^{r}$ is a vector of functions of the state and $\mathbf{b}\in\mathbb{R}^{r}$ is the element-wise upper bound on $\mathbf{c}(\mathbf{x})$. Control saturation (with $\mathbf{U}_{\max}\in\mathbb{R}_{+}^{m}$) can also be introduced into the SOC formulation

$\mathbf{u}\in\mathcal{U}=\{\mathbf{u}\mid|\mathbf{u}_{i}|\leq\mathbf{U}_{i,\max},\ i=1,\ldots,m\}$ (3)

where $\mathbf{u}_{i}$ and $\mathbf{U}_{i,\max}$ are the $i$-th elements of $\mathbf{u}$ and $\mathbf{U}_{\max}$, respectively. Note that all operations with stochastic processes are understood a.s. (almost surely, i.e., with probability 1) unless specifically indicated. In summary, the SOC problem is

$\min_{\mathbf{u}\in\mathcal{U}}\ J^{\mathbf{u}}(\mathbf{x}_{0},t_{0})\quad\mathrm{subject\ to}\ \text{(1)}\ \mathrm{and}\ \mathbf{c}(\mathbf{x})\leq\mathbf{b}\ \mathrm{a.s.}$ (4)
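For illustration, trajectories of the controlled SDE (1) can be simulated with an Euler–Maruyama scheme. The following is a minimal sketch under that assumption; the drift, control influence, diffusion, and policy below are placeholder callables, not the systems studied later in the paper.

```python
import numpy as np

def simulate_sde(F, G, Sigma, policy, x0, T=2.0, dt=0.01, rng=None):
    """Euler-Maruyama rollout of dx = (F(x) + G(x) u) dt + Sigma(x) dw, cf. (1)."""
    rng = np.random.default_rng() if rng is None else rng
    N = int(T / dt)
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for k in range(N):
        u = policy(x, k * dt)                          # feedback control u(x)
        dw = rng.normal(0.0, np.sqrt(dt), size=Sigma(x).shape[1])
        x = x + (F(x) + G(x) @ u) * dt + Sigma(x) @ dw
        traj.append(x.copy())
    return np.stack(traj)

# Illustrative example: noisy double integrator with noise in the control channel,
# so that range(G) is contained in range(Sigma) as assumed above.
F = lambda x: np.array([x[1], 0.0])
G = lambda x: np.array([[0.0], [1.0]])
Sigma = lambda x: np.array([[0.0], [0.1]])
policy = lambda x, t: np.array([-x[0] - x[1]])
traj = simulate_sde(F, G, Sigma, policy, x0=[1.0, 0.0])
```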

3 Method

In this section, we first show how (4) can be written as an FBSDE. Then, state constraints are considered. Subsequently, adaptations for HD systems are presented. Finally, a DNN-based solution to the FBSDE is proposed.

3.1 SOC to FBSDE

The derivation presented in this section was originally proposed in [22]; it is summarized here for completeness. Let $\mathbf{V}(\cdot,\cdot)$ be the value function $\mathbf{V}(\mathbf{x},t)\in\mathbb{R}_{+}$ defined as

$\mathbf{V}(\mathbf{x},t):=\inf_{\mathbf{u}\in\mathcal{U}}J^{\mathbf{u}}(\mathbf{x},t).$ (5)

Using (2) and Bellman’s principle [1] yields

$\mathbf{V}_{t}(\mathbf{x},t)+\mathcal{L}\mathbf{V}(\mathbf{x},t)+\mathbf{h}(\mathbf{x},\mathbf{V}_{\mathbf{x}}(\mathbf{x},t))=0$ (6a)
$\mathbf{V}(\mathbf{x}(T),T)=\mathbf{q}_{N}(\mathbf{x}(T))$ (6b)

where 𝐡\mathbf{h} denotes the Hamiltonian

$\mathbf{h}(\mathbf{x},\mathbf{V}_{\mathbf{x}})=\inf_{\mathbf{u}\in\mathcal{U}}\Big(\mathbf{q}(\mathbf{x})+(\mathbf{G}(\mathbf{x})\mathbf{u})^{T}\mathbf{V}_{\mathbf{x}}+\mathbf{r}(\mathbf{u})\Big),$ (7)

the differential operator of 𝐕\mathbf{V} is given by

$\mathcal{L}\mathbf{V}(\mathbf{x},t)=\frac{1}{2}\mathrm{trace}\Big(\Sigma\Sigma^{T}\mathbf{V}_{\mathbf{x}\mathbf{x}}(\mathbf{x},t)\Big)+\mathbf{F}^{T}(\mathbf{x})\mathbf{V}_{\mathbf{x}}(\mathbf{x},t),$ (8)

$\mathbf{V}_{t}(\mathbf{x},t)\in\mathbb{R}$ is the partial derivative of $\mathbf{V}$ with respect to $t$, and $\mathbf{V}_{\mathbf{x}}(\mathbf{x},t)\in\mathbb{R}^{n\times 1}$ and $\mathbf{V}_{\mathbf{x}\mathbf{x}}(\mathbf{x},t)\in\mathbb{R}^{n\times n}$ denote the first and second partial derivatives, respectively, of $\mathbf{V}$ with respect to $\mathbf{x}$. To consider control constraints [22], we define $\mathbf{r}(\mathbf{u})$ as

$\mathbf{r}(\mathbf{u})=\sum_{i=1}^{m}\mathbf{S}_{i}(\mathbf{u}_{i})=\sum_{i=1}^{m}c_{i}\int_{0}^{\mathbf{u}_{i}}\mathrm{sig}^{-1}\Big(\frac{v}{\mathbf{u}_{i}^{\max}}\Big)\,dv$ (9)

with $\mathbf{u}_{i}^{\max}$ being the $i$-th element of $\mathbf{U}_{\max}$ and

$\mathrm{sig}(v)=\frac{2}{1+e^{-v}}-1.$ (10)

In (9), $c_{i}$ weights the relative importance of the different control inputs. Using first-order conditions to solve for the optimal control action that achieves the infimum of the Hamiltonian yields

$\mathbf{G}^{T}(\mathbf{x})\mathbf{V}_{\mathbf{x}}+\mathbf{R}\,\mathrm{sig}^{-1}\Big(\frac{\mathbf{u}^{*}}{\mathbf{U}_{\mathrm{max}}}\Big)=0,$ (11)

with $\mathbf{R}=\mathrm{diag}(c_{1},\cdots,c_{m})$. Solving (11) for $\mathbf{u}^{*}$ yields

$\mathbf{u}^{*}(\mathbf{x},t)=\mathbf{U}_{\max}*\mathrm{sig}(-\mathbf{R}^{-1}\mathbf{G}^{T}(\mathbf{x})\mathbf{V}_{\mathbf{x}}(\mathbf{x},t))$ (12)

with “$*$” denoting element-wise multiplication. The control is thus saturated between $[-\mathbf{U}_{\max},\mathbf{U}_{\max}]$. Define $\mathbf{y}(t)=\mathbf{V}(\mathbf{x}(t),t)$. From (7), we can write a stochastic system for the optimal value function

$-d\mathbf{y}(t)=(\mathbf{q}(\mathbf{x})+\mathbf{r}(\mathbf{u}^{*}))\,dt-\mathbf{V}_{\mathbf{x}}^{T}(\mathbf{x},t)\Sigma^{T}(\mathbf{x})\,d\mathbf{w}$ (13a)
$d\mathbf{x}(t)=(\mathbf{F}(\mathbf{x})+\mathbf{G}(\mathbf{x})\mathbf{u}^{*})\,dt+\Sigma(\mathbf{x})\,d\mathbf{w}$ (13b)
$\mathbf{y}(T)=\mathbf{q}_{N}(\mathbf{x}(T))$ (13c)
$\mathbf{x}(0)=\mathbf{x}_{0}$ (13d)

which is in the form of an FBSDE (the forward part consists of (13b) and (13d); the backward part consists of (13a) and (13c)). The forward and backward parts take the form of a forward SDE (FSDE) and a backward SDE (BSDE), respectively. For details regarding the derivation from (7) to (13), the reader is referred to [22], [6].
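As a concrete illustration of (10) and (12), the following is a short sketch of the saturated optimal control computed from a value-function gradient estimate; the numerical values are illustrative only and the gradient would in practice come from the learned estimator introduced below.

```python
import numpy as np

def sig(v):
    """Shifted sigmoid in (10), mapping the reals into (-1, 1)."""
    return 2.0 / (1.0 + np.exp(-v)) - 1.0

def saturated_control(G, V_x, R_diag, U_max):
    """Saturated control u* = U_max * sig(-R^{-1} G^T V_x), cf. (12).

    G: (n, m) control-influence matrix at the current state,
    V_x: (n,) value-function gradient,
    R_diag: (m,) diagonal of R = diag(c_1, ..., c_m),
    U_max: (m,) element-wise control limits.
    """
    return U_max * sig(-(G.T @ V_x) / R_diag)

# Illustrative single-input example saturated at +/-10 (as in Section 4.1).
G = np.array([[0.0], [0.0], [1.0], [0.5]])
V_x = np.array([0.2, -1.0, 0.4, 0.1])
u_star = saturated_control(G, V_x, R_diag=np.array([0.1]), U_max=np.array([10.0]))
assert np.all(np.abs(u_star) <= 10.0)
```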

3.2 Handling State Constraints

Figure 1: The left figure shows $\mathbf{p}(\mathbf{x})$ in (14) under different values of $k$, with $\mu=3$, $L=5$, $\mathbf{c}(\mathbf{x})=\mathbf{x}$, $\mathbf{b}_{\min}=1$, $\mathbf{b}_{\max}=5$, and $\mathbf{x}$ a scalar. The right figure shows $\mathbf{p}(\mathbf{x})$ in (15) under different values of $k$, with the same parameters.

In this section, we consider incorporating state constraints into the FBSDE framework using a penalty-function-based approach. A perfect penalty function would be zero inside the constraint boundary and infinite outside. We propose two differentiable and numerically stable alternatives to a perfect penalty function. The first option is a penalty function based on logistic functions (PFL). When the state constraint has both an upper bound $\mathbf{b}_{\max}$ and a lower bound $\mathbf{b}_{\min}$, the $i$-th element of the PFL is

$\mathbf{p}_{i}(\mathbf{x})=\frac{L}{1+e^{-\mathbf{k}(\mathbf{c}_{i}(\mathbf{x})-\mathbf{b}_{i,\max})}}-\frac{L}{1+e^{-\mathbf{k}(\mathbf{c}_{i}(\mathbf{x})-\mathbf{b}_{i,\min})}}+L-\frac{2L}{1+e^{-\mathbf{k}(\mu_{i}-\mathbf{b}_{i,\max})}}$ (14)

where $L\in\mathbb{R}^{+}$ determines the maximum value of the penalty, $\mathbf{k}\in\mathbb{R}^{+}$ determines the steepness of the boundary (larger $\mathbf{k}$ leads to steeper boundaries), and $\mu=(\mathbf{b}_{\min}+\mathbf{b}_{\max})/2$. The proposed penalty function consists of two parts: the first consists of two logistic functions, which give the valley shape; the second sets the minimum value of the penalty function to zero. The second option is a rectified-linear-unit (ReLU) based penalty, where $\mathbf{ReLU}(x)=\max(x,0)$. When both an upper and a lower bound exist, the penalty function is

$\mathbf{p}_{i}(\mathbf{x})=\mathbf{k}\big(\mathbf{ReLU}(\mathbf{b}_{i,\min}-\mathbf{c}_{i}(\mathbf{x}))+\mathbf{ReLU}(\mathbf{c}_{i}(\mathbf{x})-\mathbf{b}_{i,\max})\big).$ (15)

Similar to (14), $\mathbf{k}$ represents the steepness of the constraint boundaries. However, in (15), there is no bound on the penalty function magnitude. An example of these two types of penalty functions with varying steepness is shown in Fig. 1. When only an upper bound exists, i.e., $\mathbf{c}(\mathbf{x})\leq\mathbf{b}$, the lower bound is set to $-\infty$; when only a lower bound exists, i.e., $\mathbf{c}(\mathbf{x})\geq\mathbf{b}$, the upper bound is set to $\infty$. The main difference between these two penalty functions is their values within the constraint boundary. For the PFL, the value of the penalty function is close to zero only when a large $\mathbf{k}$ value is used; otherwise, states within but close to the constraint boundaries have a positive penalty value. The ReLU-based penalty function is always zero within the bounds. Specific use cases are demonstrated in Section 4. After applying the penalty function $\mathbf{p}(\mathbf{x})$, the instantaneous state cost becomes $\bar{\mathbf{q}}(\mathbf{x})=\mathbf{q}(\mathbf{x})+\mathbf{p}(\mathbf{x})$. Ideally, we would pick large $\mathbf{k}$ and $L$ to ensure a steep boundary and a large penalty. While a controller without access to measurements of future noise cannot guarantee that state constraints are met for all time, due to the stochastic nature of the system dynamics, the penalty functions guide the learning towards a robust controller that does not violate the state constraints (at least under the disturbances seen during training). The expected value of the integral of $\mathbf{p}(\mathbf{x})$ is at most $J^{\mathbf{u}}$. If $J^{\mathbf{u}}$ is finite and $\mathbf{p}(\mathbf{x})$ is a perfect penalty function, the constraint is satisfied except on a set of measure zero.
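For reference, a minimal sketch of the two penalty functions (14) and (15) is given below; the parameter values mirror those used for Fig. 1 and are otherwise illustrative.

```python
import numpy as np

def pfl_penalty(c, b_min, b_max, k=1.5, L=5.0):
    """Logistic penalty (14): roughly zero well inside [b_min, b_max], near L outside."""
    mu = 0.5 * (b_min + b_max)
    return (L / (1.0 + np.exp(-k * (c - b_max)))
            - L / (1.0 + np.exp(-k * (c - b_min)))
            + L - 2.0 * L / (1.0 + np.exp(-k * (mu - b_max))))

def relu_penalty(c, b_min, b_max, k=1.5):
    """ReLU penalty (15): exactly zero inside the bounds, linear outside."""
    return k * (np.maximum(b_min - c, 0.0) + np.maximum(c - b_max, 0.0))

# Values matching Fig. 1: c(x) = x with bounds [1, 5].
x = np.linspace(-2.0, 8.0, 5)
print(pfl_penalty(x, b_min=1.0, b_max=5.0, k=3.0))
print(relu_penalty(x, b_min=1.0, b_max=5.0, k=3.0))
```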

3.3 Deep FBSDE Algorithm

Figure 2: The DFBSDE architecture for one time step. Curve-edged boxes contain DNNs, and sharp-edged boxes represent intermediate calculations. Green denotes the LSTM network; red and blue denote the FSDE and the forward version of the BSDE, respectively.

The solution of the FBSDE in (13) requires backward integration in time, and the fact that $\mathbf{V}_{\mathbf{x}}(\mathbf{x},t)$ is unknown creates additional difficulties in utilizing classical numerical integration approaches. To deal with backward integration in time, following the formulation in [22], we estimate the value function at time zero using a DNN parameterized by $\phi$, i.e., $\mathbf{V}_{\mathbf{0}}(\mathbf{x},t_{0}\mid\phi)$. This allows forward integration of the BSDE, and we can then rewrite the FBSDE as two FSDEs. The unknown $\mathbf{V}_{\mathbf{x}}(\mathbf{x},t)$ values are estimated using a DNN parameterized by $\theta$, i.e., $\mathbf{V}_{\mathbf{x}}(\mathbf{x},t\mid\theta)$. We deploy an LSTM-based architecture [13] for $\mathbf{V}_{\mathbf{x}}(\mathbf{x},t\mid\theta)$, which has been shown to be superior to dense-layer-based architectures [22]. Linear-regression-based approaches have also been considered but are inferior due to compounding errors. Assuming the initial state and time horizon are fixed [22], the initial value function is a learned fixed value $\mathbf{V}_{\mathbf{0}}(\phi)$. Due to the use of LSTMs, two changes need to be made to the DFBSDE algorithm. First, a separate network $\mathbf{H}_{0}(\varphi)$ is used to estimate the initial hidden state values. Second, $\mathbf{y}_{k}$ also depends on the hidden state values $\mathbf{H}_{k}$. The FBSDE can be forward integrated by discretizing time as follows

$\mathbf{y}_{k}=\mathbf{V}_{\mathbf{0}}(\mathbf{x}_{0},t_{0}\mid\phi)-\sum_{i=1}^{k}(\bar{\mathbf{q}}(\mathbf{x}_{i-1})+\mathbf{r}(\mathbf{u}_{i-1}^{*}))\Delta{t}+\sum_{i=2}^{k}\mathbf{V}_{\mathbf{x}}^{T}(\mathbf{x}_{i-1},t_{i-1},\mathbf{H}_{i-1}\mid\theta)\Sigma^{T}(\mathbf{x}_{i-1})\Delta{\mathbf{w}}_{i-1}+\mathbf{V}_{\mathbf{x}}^{T}(\mathbf{x}_{0},t_{0},\mathbf{H}_{0}(\varphi)\mid\theta)\Sigma^{T}(\mathbf{x}_{0})\Delta{\mathbf{w}}_{0}$ (16a)
$\mathbf{x}_{k}=\mathbf{x}_{0}+\sum_{i=1}^{k}\Delta{\mathbf{x}}_{i-1}$ (16b)

where $k\geq 1$, $\Delta{t}=T/N$, $\Delta{\mathbf{x}}_{k}=\dot{\mathbf{x}}(k\Delta{t})\Delta{t}$, and $\Delta{\mathbf{w}}_{k}=\dot{\mathbf{w}}(k\Delta{t})\Delta{t}$. The calculation of $\mathbf{x}_{k}$ and $\mathbf{y}_{k}$ is illustrated in Fig. 2. To learn the weights, we minimize the least-squares loss between the estimated terminal value function and the measured terminal cost,

$\mathbf{L}_{\mathrm{FBSDE}}(\theta,\phi,\varphi)=\sum_{j=1}^{M}\|\mathbf{q}_{N}(\mathbf{x}_{N}^{j})-\mathbf{y}_{N}^{j}\|^{2}$ (17)

where $M$ is the batch size and the superscript $j$ denotes the $j$-th set of data in the batch. The detailed DFBSDE algorithm is shown in Algorithm 1.
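For concreteness, the following is a minimal single-trajectory sketch of the discretization (16) followed by the loss (17), with the LSTM hidden state omitted for brevity; all callables below are placeholders rather than the exact implementation.

```python
import numpy as np

def fbsde_rollout_loss(x0, y0, V_x_net, F, G, Sigma, u_opt, q_bar, r, q_N,
                       dt, N, rng):
    """Forward pass of (16a)-(16b) and a one-sample version of the loss (17).

    V_x_net(x, t) stands in for the LSTM gradient estimator (hidden state omitted);
    u_opt(x, V_x) implements (12); q_bar, r, q_N are the running and terminal costs.
    """
    x, y = np.array(x0, dtype=float), float(y0)
    for k in range(N):
        t = k * dt
        V_x = V_x_net(x, t)
        u = u_opt(x, V_x)
        dw = rng.normal(0.0, np.sqrt(dt), size=Sigma(x).shape[1])
        # Backward equation integrated forward in time, cf. (13a)/(16a).
        y = y - (q_bar(x) + r(u)) * dt + V_x @ Sigma(x) @ dw
        # Forward state equation, cf. (13b)/(16b).
        x = x + (F(x) + G(x) @ u) * dt + Sigma(x) @ dw
    loss = (q_N(x) - y) ** 2
    return loss, x, y
```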

3.4 Penalty Function Update

For the penalty function, if $\mathbf{k}$ is set to a large value in the initial stage of training, numerical instabilities can arise due to the large gradients generated by states within the steep region near the constraint boundary. On the other hand, if $\mathbf{k}$ is kept small, the penalty is not effective, since the cost of constraint violation is small. Thus, we propose to update $\mathbf{k}$ during training so that it gradually increases. The update scheme is as follows. For a given $\mathbf{k}$, the DNN is trained until convergence; then $\mathbf{k}$ is updated. A metric to check for convergence is the variance of the episode-wise cost over a window of episodes, whose size is denoted by $\eta$. If the iterates are converging, the variance is small. Convergence is determined by comparing the square root of the variance with a threshold $\beta\in\mathbb{R}_{+}$; if it is smaller than $\beta$, $\mathbf{k}$ is updated as $\mathbf{k}\leftarrow\mathbf{k}+\delta$. This condition is checked every $\eta$ iterations. Additionally, if the condition is not satisfied after $\eta^{\prime}$ iterations, $\mathbf{k}$ is updated anyway, since training could be stuck at a local minimum. Empirically, we have found $\eta=500$, $\mathbf{k}=1.5$, $\delta=0.5$, and $\Delta_{\delta}=0.25$ to be reasonable starting values. Both $\delta$ and $\Delta_{\delta}$ can be increased if training is stable and faster growth of $\mathbf{k}$ is desired; otherwise, $\delta$ and/or $\Delta_{\delta}$ should be reduced. During training, the variance decreases; thus, $\beta$ should also decrease after each update, $\beta\leftarrow\gamma\beta$, with $\gamma\in(0,1)$. To make the decrease in $\beta$ smoother, we gradually increase $\gamma$ to 1 using $\gamma\leftarrow\gamma+\Delta$ with $\Delta\in\mathbb{R}_{+}$. Similarly, the acceleration of $\mathbf{k}$ is made negative, i.e., $\delta\leftarrow\delta+\Delta_{\delta}$ with $\Delta_{\delta}\in\mathbb{R}_{-}$, leading to finer-grained changes in $\mathbf{k}$ at later stages of training. The initial value of $\beta$ can be determined after training converges for the first iteration of Algorithm 2. From empirical evaluations, we find $\gamma=0.9$ and $\Delta=0.02$ to be reasonable values. The penalty function update scheme is shown in Algorithm 2.
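A compact sketch of one check of this update rule is given below; the default parameter values follow the empirical values reported above, clipping $\delta$ at zero reflects lines 7-8 of Algorithm 2, and the negative sign on the $\delta$ increment is one reading of the text rather than a definitive implementation.

```python
def update_penalty(k, delta, beta, gamma, costs, n_iter,
                   eta=500, eta_max=2000, Delta=0.02, Delta_delta=-0.25):
    """One convergence check of the Algorithm 2 style update (parameters illustrative).

    costs holds the episode costs of the last eta iterations; the steepness k grows
    when training has converged (small cost std) or after eta_max iterations.
    """
    if n_iter % eta == 0:
        mean_c = sum(costs) / len(costs)
        sigma_C = (sum((c - mean_c) ** 2 for c in costs) / len(costs)) ** 0.5
        if sigma_C < beta or n_iter % eta_max == 0:
            k += delta                               # steepen the boundary
            delta = max(delta + Delta_delta, 0.0)    # decelerate k's growth, clip at 0
            beta *= gamma                            # tighten the convergence threshold
            gamma = min(gamma + Delta, 1.0)          # let beta decrease more slowly
    return k, delta, beta, gamma
```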

3.5 Adaptation to Hybrid Dynamics

This section discusses the adaptations required for the DFBSDE algorithm to handle HD systems. The HD systems considered consist of two phases: a continuous dynamics phase and a deterministic discrete jump. Such HD systems are very common, e.g., bipeds. The FBSDE formulation in (13) only covers continuous dynamics. Inspired by the cyclic motion in human walking, we treat the hybrid dynamics as having multiple cycles, where the dynamics are continuous within each cycle. For the case of bipedal locomotion, each cycle is one footstep; at the end of each cycle, the swing foot lands on the ground (more on this in Section 4). The main challenges in adapting the DFBSDE framework to HD systems are robustness to the initial state and assurance of cyclic motion. This requires the initial value function estimator to be robust to a wide range of states, whereas the loss function (17) only trains the estimator for a fixed $\mathbf{x}_{0}$. To ensure that the learned value function estimator is robust to variations in the initial state, we train it to provide an accurate estimate for all of the recorded states. The measured cost-to-go can approximate the value function:

$\tilde{\mathbf{V}}(\mathbf{x}_{k},t_{k})=\mathbf{q}_{N}(\mathbf{x}_{N})+\sum_{i=k}^{N-1}\Big[(\bar{\mathbf{q}}(\mathbf{x}_{i})+\mathbf{r}(\mathbf{u}_{i}^{*}))\Delta{t}-\mathbf{V}_{\mathbf{x}}^{T}(\mathbf{x}_{i},t_{i},\mathbf{H}_{i}\mid\theta)\Sigma^{T}(\mathbf{x}_{i})\Delta{\mathbf{w}}_{i}\Big].$ (18)

Using (18), the initial value function can be learned via

$\mathbf{L}_{\mathbf{V}}=\sum_{j=1}^{M}\sum_{i=0}^{N}\|\mathbf{V}_{\mathbf{0}}(\mathbf{x}_{i}^{j},t_{i}\mid\phi)-\tilde{\mathbf{V}}(\mathbf{x}_{i}^{j},t_{i})\|^{2}.$ (19)

Thus, the loss function for the Hybrid FBSDE (HFBSDE) becomes

$\mathbf{L}_{\mathrm{HFBSDE}}=\mathbf{L}_{\mathrm{FBSDE}}+\lambda\mathbf{L}_{\mathbf{V}}$ (20)

with the terminal cost calculated using the state after the jump, $\mathbf{x}_{N}^{+}$, and $\lambda\in\mathbb{R}_{+}$ adjusting the weighting between the two losses. This terminal cost encourages cyclic motion.
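A minimal per-trajectory sketch of the cost-to-go targets (18) and the value loss (19) for one footstep is given below, assuming the rollout quantities have already been recorded; the function and variable names are illustrative.

```python
import numpy as np

def value_targets_and_loss(xs, us, noise_terms, V0_net, q_bar, r, q_N, f_H, dt):
    """Targets (18) and the value loss (19) for one recorded footstep.

    xs: recorded states x_0..x_N; us: controls u_0..u_{N-1};
    noise_terms[i] = V_x(x_i, t_i)^T Sigma(x_i) dw_i, precomputed during the rollout;
    V0_net(x, t) is the initial value function DNN; f_H is the heel-strike map.
    """
    N = len(us)
    V_tilde = np.zeros(N + 1)
    V_tilde[N] = q_N(f_H(xs[N]))          # terminal cost uses the post-jump state x_N^+
    for i in reversed(range(N)):          # accumulate (18) backwards in time
        V_tilde[i] = V_tilde[i + 1] + (q_bar(xs[i]) + r(us[i])) * dt - noise_terms[i]
    L_V = sum((V0_net(xs[i], i * dt) - V_tilde[i]) ** 2 for i in range(N + 1))
    return L_V                            # combined with L_FBSDE as in (20)
```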

Algorithm 1 State Constrained DFBSDE Controller
1: Given: initial state and state dynamics, cost function parameters, penalty function, $N$: time horizon, $N_{I}$: number of iterations, $M$: batch size, $\Delta{t}$: time step, $\lambda$: weight decay parameter, and state constraint parameters (refer to Algorithm 2);
2: Initialize $\theta$, $\phi$, $\varphi$, $\mathbf{k}$, $\delta$, $\beta$, $\gamma$;
3: for $n_{I}=1$ to $N_{I}$, $m=1$ to $M$, $k=0$ to $N-1$ do
4:     Calculate $t_{k}=k\Delta{t}$ and $\mathbf{V}_{\mathbf{x}}^{m}(\mathbf{x}_{k},t_{k}\mid\theta)$;
5:     Calculate the optimal control as in (12);
6:     Sample Brownian noise: $\Delta{\mathbf{w}}_{k}^{m}\sim\mathcal{N}(0,\Delta{t})$;
7:     Update the value function $\mathbf{y}_{k}^{m}$ and the system state $\mathbf{x}_{k}^{m}$;
8:     Compute the target terminal cost;
9:     Compute $\mathbf{L}_{\mathrm{FBSDE}}$ (17) or $\mathbf{L}_{\mathrm{HFBSDE}}$ (20);
10:     Update $\theta$, $\phi$, and $\varphi$ and run Algorithm 2;
Algorithm 2 Penalty Function Update
1: Given: $\mathbf{k}$: boundary steepness, $\delta$: change of boundary steepness, $\beta$: update threshold, $\gamma$: threshold change ratio, $n_{I}$: iteration number, $\Delta$: threshold change ratio increment, $\eta$: update interval, $\eta^{\prime}$: max interval, $\Delta_{\delta}$: boundary steepness change acceleration;
2: if state trajectory not inside constraint boundary then
3:     if $(n_{I}\ \mathbf{mod}\ \eta)=0$ then
4:         Calculate $\sigma_{\mathbf{C}}^{2}$ = variance of the costs $\{\mathbf{C}_{1},\cdots,\mathbf{C}_{\eta}\}$ for the past $\eta$ episodes;
5:         if $\sigma_{\mathbf{C}}<\beta$ or $(n_{I}\ \mathbf{mod}\ \eta^{\prime})=0$ then
6:             $\mathbf{k}=\mathbf{k}+\delta$, $\delta=\delta+\Delta_{\delta}$, $\beta=\gamma\beta$, $\gamma=\gamma+\Delta$;
7:     if $\delta<0$ then
8:         $\delta=0$;
9:     if $\gamma>1$ then
10:         $\gamma=1$;
11: else
12:     Update penalty function with new parameters.

4 Simulations

In this section, we show the performance of our control algorithm on a continuous nonlinear system, the cart-pole, for a swing-up task, and on a hybrid nonlinear system, a five-link biped, for walking. The trained models were evaluated over 256 trials for all systems. For the cart-pole experiments, two LSTM layers of size 16 are used at each time step, followed by a dense layer whose output size equals the state dimension. For fixed initial states, the initial value function estimate is a trainable weight of size one. The initial hidden and cell states of the LSTM layers are estimated using trainable weights of size 16. For the five-link biped experiments, we use two LSTM layers of size 32, followed by a dense layer of size 10. The initial value function network has four layers of sizes $[8,\ 16,\ 8,\ 1]$. The initial hidden state network consists of four separate networks estimating the initial hidden and cell states of the two LSTM layers; each consists of two dense layers with an output size of 8.
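For concreteness, the following is a minimal PyTorch sketch of a cart-pole gradient estimator $\mathbf{V}_{\mathbf{x}}(\mathbf{x},t\mid\theta)$ with the layer sizes given above; feeding the time index as an extra input and the remaining details are assumptions for illustration, not the exact implementation.

```python
import torch
import torch.nn as nn

class VxEstimator(nn.Module):
    """Two stacked LSTM cells of width 16 followed by a dense layer of the state size."""

    def __init__(self, state_dim=4, hidden=16):
        super().__init__()
        self.cell1 = nn.LSTMCell(state_dim + 1, hidden)   # input: state and time
        self.cell2 = nn.LSTMCell(hidden, hidden)
        self.out = nn.Linear(hidden, state_dim)

    def forward(self, x, t, hidden_states):
        (h1, c1), (h2, c2) = hidden_states
        inp = torch.cat([x, t], dim=-1)
        h1, c1 = self.cell1(inp, (h1, c1))
        h2, c2 = self.cell2(h1, (h2, c2))
        return self.out(h2), ((h1, c1), (h2, c2))

# One step on a batch of 8 states; in training, the initial hidden and cell
# states would come from the network H_0(phi) rather than zeros.
x = torch.zeros(8, 4)
t = torch.zeros(8, 1)
hs = tuple((torch.zeros(8, 16), torch.zeros(8, 16)) for _ in range(2))
V_x, hs = VxEstimator()(x, t, hs)
```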

4.1 Cart-pole Swing-Up Task I

Figure 3: Comparison between the state-constrained (purple) and unconstrained (blue) controllers over 256 trials. The darker blue and purple curves show a sampled trajectory from each setting, and the light blue and purple regions show the state space each controller covers. The black dashed lines are the state constraints, and the orange lines give the state targets.
Figure 4: The color scheme follows Fig. 3. The leftmost figure shows that the trajectory generated by the unconstrained controller violates the constraints. By applying the penalty function and following the adaptive update scheme, the velocity trajectory retreats inside the constraint boundaries and eventually satisfies the constraints.

The task for the cart-pole system is to swing the pole up and stabilize it at the top. The cart-pole dynamics are

$(M+m)\ddot{x}-m\ell\sin\theta\,\dot{\theta}^{2}+m\ell\cos\theta\,\ddot{\theta}=u$ (21)
$m\ell^{2}\ddot{\theta}+m\ell\cos\theta\,\ddot{x}+mg\ell\sin\theta=0.$ (22)

The cart position and pendulum angle are denoted by $x$ and $\theta$, respectively. The pendulum angle is zero when the pendulum points downwards. The state of the cart-pole system is $[x,\ \theta,\ \dot{x},\ \dot{\theta}]^{T}$. In our experiments, we set the cart mass to $M=1.0$ kg, the pole mass to $m=0.01$ kg (a point mass at the tip), the pole length to $\ell=0.5$ m, $\mathbf{x}_{0}=\mathbf{0}_{4\times 1}$, and the target state to $[0,\pi,0,0]^{T}$. The control is saturated at $\pm 10$ N, the cart position is constrained to $\pm 1.5$ m, and the cart velocity is constrained to $\pm 2.5$ m/s. The time horizon is 2.5 s and the time step is $\Delta{t}=1/110$ s (this specific value of $\Delta{t}$ is chosen arbitrarily). We use the state cost function

Figure 5: Illustration of five-link biped model.
$\bar{\mathbf{q}}(\mathbf{x})=\frac{1}{2}(\mathbf{x}-\bar{\mathbf{x}})^{T}\mathbf{Q}(\mathbf{x}-\bar{\mathbf{x}})+\mathbf{p}(\mathbf{x})$ (23a)
$\mathbf{q}_{N}(\mathbf{x})=\frac{1}{2}(\mathbf{x}-\bar{\mathbf{x}})^{T}\mathbf{Q}_{N}(\mathbf{x}-\bar{\mathbf{x}})$ (23b)

where $\bar{\mathbf{x}}$ is the target state, $\mathbf{Q}$ is the state cost weight matrix, and $\mathbf{Q}_{N}$ is the terminal state cost matrix. We use the control cost defined in (9). The cost matrices are chosen as $\mathbf{Q}=\mathbf{Q}_{N}=\mathrm{diag}([0.5,\ 1.0,\ 0.1,\ 0.1])$ and $\mathbf{R}=[0.1]$. Here $\mathbf{p}(\cdot)$ is the penalty function; in this example, $\mathbf{p}(\cdot)$ is the logistic-function-based penalty (14), with $\mathbf{c}(\mathbf{x})=[x,\dot{x}]^{T}$ and $\mathbf{b}_{\max}=-\mathbf{b}_{\min}=[1.5,2.5]^{T}$. Fig. 3 shows the state trajectories generated by the trained controllers. The state trajectory generated by the trained constrained DFBSDE controller satisfies the state constraints, while the constraints are violated significantly under the trained unconstrained controller. The effect of the penalty function can be seen immediately in Fig. 4, even with a small $\mathbf{k}$ value. With $\mathbf{k}=1.5$, the part of the trajectory that lies outside the constraint boundaries is greatly reduced. After gradually increasing $\mathbf{k}$ to 6.0, following Algorithm 2, the entire cart velocity trajectory lies within the constraint boundaries. We also tested directly training with $\mathbf{k}=6.0$ without the adaptive update; the algorithm becomes numerically unstable after one epoch due to large gradients. This demonstrates that our method provides a stable training scheme.
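For completeness of the simulation setup, a sketch of how (21)-(22) can be put into the control-affine form (1) by solving the mass-matrix system for the accelerations is given below; the gravitational constant is an assumed value, and the routine is illustrative rather than the exact implementation used here.

```python
import numpy as np

def cartpole_drift_and_control(x, M=1.0, m=0.01, l=0.5, g=9.81):
    """Affine form x_dot = F(x) + G(x) u for (21)-(22), state [x, theta, x_dot, theta_dot].

    Obtained by solving the 2x2 mass-matrix system for the accelerations; the
    physical parameters match the ones listed in this section, g is assumed.
    """
    _, th, xd, thd = x
    Mmat = np.array([[M + m,              m * l * np.cos(th)],
                     [m * l * np.cos(th), m * l ** 2        ]])
    # Right-hand side split into a drift part and the control influence.
    rhs_drift = np.array([m * l * np.sin(th) * thd ** 2, -m * g * l * np.sin(th)])
    rhs_ctrl = np.array([1.0, 0.0])
    acc_drift = np.linalg.solve(Mmat, rhs_drift)
    acc_ctrl = np.linalg.solve(Mmat, rhs_ctrl)
    F = np.array([xd, thd, acc_drift[0], acc_drift[1]])
    G = np.array([[0.0], [0.0], [acc_ctrl[0]], [acc_ctrl[1]]])
    return F, G
```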

4.2 Five-Link Biped Walking Task

The biped model of interest is shown in Fig. 5 and is derived from [3]. The leg on the ground is the stance leg, the leg in the air is the swing leg, and the link representing the upper body is the torso. The biped state $\mathbf{x}\in\mathbb{R}^{10\times 1}$ consists of the link angles with respect to the horizontal plane, $\mathbf{q}\in\mathbb{R}^{5\times 1}$, and the link angular velocities, $\dot{\mathbf{q}}\in\mathbb{R}^{5\times 1}$, i.e., $\mathbf{x}=[\mathbf{q}^{T},\ \dot{\mathbf{q}}^{T}]^{T}$. Note that $\mathbf{q}$ here is distinct from the state cost function $\mathbf{q}(\mathbf{x})$. The biped model has four actuated joints, at the knees and hips. The control $\mathbf{u}\in\mathbb{R}^{5\times 1}$ consists of the torques applied at the actuated joints, $\mathbf{u}=[0,\ u_{1},\ u_{2},\ u_{3},\ u_{4}]^{T}$, where the 0 corresponds to the unactuated stance ankle. The output of the controller consists only of $\tilde{\mathbf{u}}=[u_{1},\ u_{2},\ u_{3},\ u_{4}]$; a mapping matrix $T=[\mathbf{0},\ \mathbf{I}_{4\times 4}]$ is then used to obtain $\mathbf{u}=T\tilde{\mathbf{u}}$. The parameters of the biped model can be found in [15]. The five-link biped dynamics are

$\mathbf{M}(\mathbf{q})\ddot{\mathbf{q}}+\mathbf{C}(\mathbf{q},\dot{\mathbf{q}})\dot{\mathbf{q}}+G(\mathbf{q})=\mathbf{u},$ (24)

where $\mathbf{M}(\mathbf{q})\in\mathbb{R}^{5\times 5}$ is the inertia matrix, $\mathbf{C}(\mathbf{q},\dot{\mathbf{q}})\in\mathbb{R}^{5\times 5}$ is the Coriolis matrix, and $G(\mathbf{q})\in\mathbb{R}^{5\times 1}$ contains the gravitational terms. The dynamics take the form of (1) after stochastic noise is included. A heel strike (HS) occurs when the swing leg comes into contact with the ground, which signals the termination of the current step and the initiation of the next step. During this transition, the swing and stance legs are swapped. We assume this transition is instantaneous, the biped is symmetric, and the new swing leg leaves the ground once the HS occurs (i.e., there is no double-support phase). The joint indexing depends on the current swing and stance leg definition. Defining the joint angles right before HS as $\mathbf{q}^{-}\in\mathbb{R}^{5\times 1}$ and the joint angles right after HS as $\mathbf{q}^{+}\in\mathbb{R}^{5\times 1}$, we have $\mathbf{q}^{+}=\tilde{\mathbf{I}}_{5\times 5}\mathbf{q}^{-}$, where $\tilde{\mathbf{I}}_{5\times 5}\in\mathbb{R}^{5\times 5}$ is the anti-diagonal matrix. The HS also creates an instantaneous change in the angular velocities. Defining the angular velocities before and after HS as $\dot{\mathbf{q}}^{-}\in\mathbb{R}^{5\times 1}$ and $\dot{\mathbf{q}}^{+}\in\mathbb{R}^{5\times 1}$, respectively, we have $\mathbf{x}^{+}=\mathbf{f}_{H}(\mathbf{x}^{-})$, with $\mathbf{f}_{H}:\mathbb{R}^{10\times 1}\rightarrow\mathbb{R}^{10\times 1}$ being the deterministic HS transition map [15].
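A small sketch of the joint-angle relabeling at heel strike described above follows; the velocity part of $\mathbf{f}_{H}$ is omitted and the numerical values are illustrative.

```python
import numpy as np

def heel_strike_relabel(q_minus):
    """Joint-angle relabeling at heel strike, q^+ = I_tilde q^-, where I_tilde is
    the 5x5 anti-diagonal matrix (the full map f_H also resets the angular
    velocities, which is omitted here)."""
    I_tilde = np.fliplr(np.eye(5))        # anti-diagonal matrix: swaps leg roles
    return I_tilde @ q_minus

q_minus = np.array([0.10, 0.50, -0.10, -0.35, -0.40])   # illustrative pre-impact angles
print(heel_strike_relabel(q_minus))                      # swing/stance roles swapped
```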

Figure 6: Comparison of knee angles for the constrained and unconstrained controllers. To avoid hyperextension, the knee angle should remain positive, i.e., above the black dashed line.
Figure 7: In this figure, the motion generated by our proposed controller ensemble is shown. The torso is depicted in green, the swing leg in purple, and the stance leg in blue.

To generate realistic walking motion, we need to avoid hyperextension of the knees. We transform this requirement into a variant of the ReLU-based penalty function in (15): $\mathbf{p}(\mathbf{x})=\alpha\,\mathbf{ReLU}(-(q_{4}-q_{5}))$, where $\alpha$ is a weighting coefficient for the penalty function, and $q_{4}$ and $q_{5}$ are defined in Fig. 5. A quadratic cost is applied to the control. We choose $\mathbf{R}=\mathrm{diag}([2,\ 0.2,\ 0.2,\ 2])$. This choice of $\mathbf{R}$ implies that the hip motors are more powerful than the knee motors, which is true for many robots. The state cost $\mathbf{q}(\mathbf{x})$ is the weighted distance between the state $\mathbf{x}$ and a nominal terminal state $\bar{\mathbf{x}}$, plus the penalty

$\bar{\mathbf{q}}(\mathbf{x})=\frac{1}{2}(\mathbf{x}-\bar{\mathbf{x}})^{T}\mathbf{Q}(\mathbf{x}-\bar{\mathbf{x}})+\mathbf{p}(\mathbf{x})$ (25)

with $\mathbf{Q}=10\,\mathbf{I}_{10\times 10}$. The terminal state cost has the same form as (25); the sole difference is that the state cost matrix is replaced with $\mathbf{Q}_{N}=10\,\mathbf{Q}$. The target state is chosen as

$\bar{\mathbf{x}}=\big[0.10,\ 0.50,\ -0.10,\ -0.35,\ -0.40,\ -1.50,\ -0.50,\ 0.00,\ -0.55,\ -2.00\big]^{T}.$ (26)

A comparison between the motions generated by the constrained and unconstrained DFBSDE controllers is given in Fig. 6. The unconstrained controller hyperextends the swing knee, while the constrained controller does not. Since the chosen penalty function can constrain the system directly, the $\mathbf{k}$ value does not need to be updated. In the previous example, the penalty function is logistic-function based; this works well when the constraint set is large, since it creates a buffer between the interior of the constraint set and its boundary. However, the constraint set for locomotion tasks is relatively small, making ReLU functions a better candidate. For the five-link biped experiments, we use a time step of 0.01 s.

Variations in the initial states make dissecting a multi-step walking problem into multiple single-step walking problems challenging. We address this issue by training a robust controller. We found it difficult for a single controller to handle all possible initial configurations; thus, we use an ensemble of controllers, where each controller handles a range of initial configurations around a nominal state. For the duration of one footstep, only one controller is used: the controller in the ensemble whose nominal state is closest to the initial state of the step. The nominal state of each controller is known a priori. After training, the motion generated by this approach is shown in Fig. 7, where the ensemble size is three. The corresponding swing knee angle is shown in Fig. 8. No hyperextension occurs, which shows the effectiveness of enforcing the state constraints. On average, the initial range each controller can handle spans 6.03 deg for the joint angles and 19.01 deg/s for the joint velocities. Fig. 7 shows a tucking motion generated by the torso and the swing leg. This is due to our biped model's relatively small control authority over the torso: without the forward angular momentum generated by the tucking motion, the torso gradually tilts back over multiple steps. Since there is no direct control over the torso, it is difficult for the DFBSDE controller to recover. We also compared the computation time of our proposed method for obtaining the control at one time step with that of trajectory optimization (TO) [15]. As expected, our approach generates comparable results while being computationally more efficient: solving for the control action requires 0.96 s for a Hermite-Simpson TO approach, compared with 5.6 ms for our proposed method.
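A minimal sketch of the ensemble selection rule described above follows; the nominal states and controllers below are placeholders for the trained ensemble.

```python
import numpy as np

def select_controller(x0, nominal_states, controllers):
    """Pick the ensemble member whose nominal state is closest to the step's
    initial state; that controller is then used for the whole footstep."""
    dists = [np.linalg.norm(x0 - xn) for xn in nominal_states]
    return controllers[int(np.argmin(dists))]

# Illustrative ensemble of three controllers with hypothetical nominal states.
nominals = [np.zeros(10), 0.1 * np.ones(10), -0.1 * np.ones(10)]
controllers = [lambda x: np.zeros(4) for _ in range(3)]
ctrl = select_controller(0.05 * np.ones(10), nominals, controllers)
```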

Figure 8: This figure shows the swing knee angles for the motion depicted in Fig. 7.

5 Conclusion

In this paper, we solved SOC problems with state constraints using an LSTM-based DFBSDE framework, which alleviates the curse of dimensionality and numerical integration issues. The state constraints are enforced via penalty functions with an adaptive update scheme, which significantly improves training stability. We also showed how to adapt the algorithm from the continuous dynamics setting to HD systems. The efficacy of our approach is demonstrated on a cart-pole system and a five-link biped, which has hybrid dynamics.

References

  • [1] R.E. Bellman. Dynamic programming. Dover, New York, 2003.
  • [2] Y. Chen, Y. Shi, and B. Zhang. Optimal control via neural networks: A convex approach. In Proceedings of International Conference on Learning Representations, New Orleans, LA, May 2019.
  • [3] C. Chevallereau, G. Abba, Y. Aoustin, F. Plestan, E.R. Westervelt, C. Canudas-De-Wit, and J.W. Grizzle. RABBIT: a testbed for advanced control theory. IEEE Control Systems Magazine, 23(5):57–79, 2003.
  • [4] L.G. Crespo and J. Sun. Stochastic optimal control via Bellman's principle. Automatica, 39(12):2109–2114, 2003.
  • [5] B. Dai, P. Krishnamurthy, and F. Khorrami. Learning a better control barrier function. In Proceedings of IEEE Conference on Decision and Control, Cancun, Mexico, pages 945–950, December 2022.
  • [6] B. Dai, P. Krishnamurthy, A. Papanicolaou, and F. Khorrami. State constrained stochastic optimal control using LSTMs. In Proceedings of the American Control Conference, New Orleans, LA, pages 1294–1299, May 2021.
  • [7] B. Dai, V.R. Surabhi, P. Krishnamurthy, and F. Khorrami. Learning locomotion controllers for walking using deep FBSDE. CoRR, abs/2107.07931, 2021.
  • [8] Bolun Dai, Heming Huang, Prashanth Krishnamurthy, and Farshad Khorrami. Data-efficient control barrier function refinement. In Proceedings of the American Control Conference, San Diego, CA, May 2023. (To Appear).
  • [9] Bolun Dai, Rooholla Khorrambakht, Prashanth Krishnamurthy, Vinícius Gonçalves, Anthony Tzes, and Farshad Khorrami. Safe navigation and obstacle avoidance using differentiable optimization based control barrier functions. CoRR, abs/2304.08586, 2023.
  • [10] M. Demetriou and E. Bakolas. Navigating over 3d environments while minimizing cumulative exposure to hazardous fields. Automatica, 115:108859, 2020.
  • [11] M. Diehl, H.J. Ferreau, and N. Haverbeke. Efficient numerical methods for nonlinear MPC and moving horizon estimation. Nonlinear Model Predictive Control, pages 391–417, 2009.
  • [12] L. Hewing and M.N. Zeilinger. Scenario-based probabilistic reachable sets for recursively feasible stochastic model predictive control. IEEE Control Systems Letters, 4(2):450–455, 2019.
  • [13] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
  • [14] J. Grizzle, C. Chevallereau, R. Sinnet, and A. Ames. Models, feedback control, and open problems of 3D bipedal robotic walking. Automatica, 50(8):1955–1988, 2014.
  • [15] M. Kelly. An introduction to trajectory optimization: How to do your own direct collocation. SIAM Review, 59(4):849–904, 2017.
  • [16] M. Lorenzen, F. Dabbene, R. Tempo, and F. Allgöwer. Stochastic MPC with offline uncertainty sampling. Automatica, 81:176–183, 2017.
  • [17] M.N. Mistry, J. Buchli, and S. Schaal. Inverse dynamics control of floating base systems using orthogonal decomposition. In Proceedings of the IEEE International Conference on Robotics and Automation, Anchorage, AK, pages 3406–3412, May 2010.
  • [18] M. Neunert, M. Stäuble, M. Giftthaler, C.D. Bellicoso, J. Carius, C. Gehring, M. Hutter, and J. Buchli. Whole-body nonlinear model predictive control through contacts for quadrupeds. IEEE Robotics and Automation Letters, 3(3):1458–1465, 2018.
  • [19] M. Pereira, Z. Wang, I. Exarchos, and E.A. Theodorou. Safe optimal control using stochastic barrier functions and deep forward-backward SDEs. In Proceedings of the Conference on Robot Learning, Cambridge, MA, volume 155, pages 1783–1801, November 2020.
  • [20] L. Di Persio and M. Garbelli. Deep learning and mean-field games: A stochastic optimal control perspective. Symmetry, 13(1):14, 2021.
  • [21] J. Viereck and L. Righetti. Learning a centroidal motion planner for legged locomotion. In Proceedings of the IEEE International Conference on Robotics and Automation, Xi’an, China, pages 4905–4911, May 2021.
  • [22] Z. Wang, M.A. Pereira, and E.A. Theodorou. Learning deep stochastic optimal control policies using forward-backward SDEs. In Proceedings of Robotics: Science and Systems XV, Freiburg im Breisgau, Germany, June 2019.