Neural Contraction Metrics for Robust Estimation and Control: A Convex Optimization Approach
Abstract
This paper presents a new deep learning-based framework for robust nonlinear estimation and control using the concept of a Neural Contraction Metric (NCM). The NCM uses a deep long short-term memory recurrent neural network for a global approximation of an optimal contraction metric, the existence of which is a necessary and sufficient condition for exponential stability of nonlinear systems. The optimality stems from the fact that the contraction metrics sampled offline are the solutions of a convex optimization problem to minimize an upper bound of the steady-state Euclidean distance between perturbed and unperturbed system trajectories. We demonstrate how to exploit NCMs to design an online optimal estimator and controller for nonlinear systems with bounded disturbances utilizing their duality. The performance of our framework is illustrated through Lorenz oscillator state estimation and spacecraft optimal motion planning problems.
Index Terms:
Machine learning, Observers for nonlinear systems, Optimal control.
I Introduction
Provably stable and optimal state estimation and control algorithms for a class of nonlinear dynamical systems with external disturbances are essential to develop autonomous robotic explorers operating remotely on land, in water, and in deep space. In such next-generation missions, the robots are expected to perform complex tasks intelligently with limited computational resources, which are not necessarily powerful enough to run optimization algorithms in real time.
Our main contribution is to introduce a Neural Contraction Metric (NCM), a global representation of optimal contraction metrics sampled offline by using a deep Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) (see Fig. 1), and thereby propose a new framework for provably stable and optimal online estimation and control of nonlinear systems with bounded disturbances, which only requires one function evaluation at each time step. A deep LSTM-RNN [1, 2] is a recurrent neural network with an improved memory structure proposed to circumvent gradient vanishing [3] and is a universal approximator of continuous curves [4]. Contrary to previous works, the convex optimization-based sampling methodology in our framework allows us to obtain a large enough dataset of the optimal contraction metric without assuming any hypothesis function space. These sampled metrics, the existence of which is a necessary and sufficient condition for exponential convergence [5], can be approximated with arbitrary accuracy due to the high representational power of the deep LSTM-RNN. We remark that this approach can be used with learned dynamics [6] as a nominal model is assumed to be given. Also, this is distinct from Lyapunov neural networks designed to estimate a largest safe region for deterministic systems [7, 8]: the NCM provides provably stable estimation and control policies, which have a duality in their differential dynamics and are optimal in terms of disturbance attenuation. The NCM construction is summarized as follows.

In the offline phase, we sample contraction metrics by solving an optimization problem with exponential stability constraints, the objective of which is to minimize an upper bound of the steady-state Euclidean distance between perturbed and unperturbed system trajectories. In this paper, we present a convex optimization problem equivalent to this problem, thereby exploiting the differential nature of contraction analysis that enables Linear Time-Varying (LTV) systems-type approaches to Lyapunov function construction. For the sake of practical use, the sampling methodology is reduced to a much simpler formulation than those of [9, 10, 11] derived for Itô stochastic nonlinear systems. These optimal contraction metrics are sampled using the computationally efficient numerical methods for convex programming [12, 13, 14] and then modeled by the deep LSTM-RNN as depicted in Fig. 1. In the online phase, contraction metrics at each time instant are computed by the NCM to obtain the optimal feedback estimation and control gain or a bounded error tube for robust motion planning [15, 16].
We illustrate how to design an optimal NCM-based estimator and controller for nonlinear systems with bounded disturbances, utilizing the estimation and control duality in differential dynamics analogous to the one of the Kalman filter and Linear Quadratic Regulator (LQR) in LTV systems. Their performance is demonstrated using Lorenz oscillator state estimation and spacecraft optimal motion planning problems.
Related Work
Contraction analysis, as well as Lyapunov theory, is one of the most powerful tools in analyzing the stability of nonlinear systems [5]. It studies the differential (virtual) dynamics for the sake of incremental stability by means of a contraction metric, the existence of which leads to a necessary and sufficient characterization of exponential stability of nonlinear systems. Finding an optimal contraction metric for general nonlinear systems is, however, almost as difficult as finding an optimal Lyapunov function.
Several numerical methods have been developed for finding contraction metrics in a given hypothesis function space. A natural application of this concept is to represent a candidate metric as a linear combination of some basis functions [17, 18, 19, 20]. In [21, 22], a tractable framework to construct contraction metrics for dynamics with polynomial vector fields is proposed by relaxing the stability conditions to sum-of-squares conditions. Although it is ideal to use a large number of basis functions in search of a better solution, the downside of this approach is that the problem size grows exponentially with the number of variables and basis functions [23].
We could thus alternatively rely on numerical schemes for sampling data points of a Lyapunov function without assuming any hypothesis function space. These include the state-dependent Riccati equation method [24, 25], and it is proposed in [9, 10, 11] that this framework can be improved to obtain an optimal contraction metric, which minimizes an upper bound of the steady-state mean squared tracking error for nonlinear stochastic systems. However, solving a nonlinear system of equations or an optimization problem at each time instant is not suitable for systems with limited online computational capacity. The NCM addresses this issue by approximating the sampled solutions with the deep LSTM-RNN.
II Preliminaries
We use the notation $\|x\|$ and $\|A\|$ for the Euclidean and induced 2-norm, and $A \succ 0$, $A \succeq 0$, $A \prec 0$, and $A \preceq 0$ for positive definite, positive semi-definite, negative definite, and negative semi-definite matrices, respectively. Also, $I \in \mathbb{R}^{n \times n}$ denotes the identity matrix. We first introduce some preliminaries that will be used to construct an NCM.
II-A Contraction Analysis for Incremental Stability
Consider the following perturbed nonlinear system:
$\dot{x}(t) = f(x(t),t) + d(t)$ (1)
where $x : \mathbb{R}_{\geq 0} \to \mathbb{R}^{n}$, $f : \mathbb{R}^{n} \times \mathbb{R}_{\geq 0} \to \mathbb{R}^{n}$, and $d : \mathbb{R}_{\geq 0} \to \mathbb{R}^{n}$ with $\bar{d} = \sup_{t \geq 0} \|d(t)\| < +\infty$.
Theorem 1
Let $x_{1}(t)$ and $x_{2}(t)$ be the solution of (1) with $d(t) = 0$ and $d(t) \neq 0$, respectively. Suppose there exist $M(x,t) = \Theta(x,t)^{\top}\Theta(x,t) \succ 0$, $\alpha > 0$, and $0 < \underline{\omega}, \overline{\omega} < \infty$ s.t.
$\dot{M}(x,t) + \frac{\partial f}{\partial x}^{\top} M(x,t) + M(x,t)\frac{\partial f}{\partial x} \preceq -2\alpha M(x,t), \;\; \forall x,t$ (2)
$\frac{1}{\overline{\omega}} I \preceq M(x,t) \preceq \frac{1}{\underline{\omega}} I, \;\; \forall x,t.$ (3)
Then the smallest path integral is exponentially bounded, thereby yielding a bounded tube of states:
$\int_{x_{1}}^{x_{2}} \|\delta x(t)\| \leq \sqrt{\overline{\omega}}\, V_{\ell}(0) e^{-\alpha t} + \frac{\bar{d}}{\alpha}\sqrt{\frac{\overline{\omega}}{\underline{\omega}}}\left(1 - e^{-\alpha t}\right)$ (4)
where $V_{\ell}(0) = \int_{x_{1}(0)}^{x_{2}(0)} \|\Theta(x(0),0)\,\delta x(0)\|$ is the initial path length measured under the metric $M$.
II-B Deep LSTM-RNN
An LSTM-RNN is a neural network designed for processing sequential data with inputs $\{x_{i}\}_{i=0}^{N}$ and outputs $\{y_{i}\}_{i=0}^{N}$, and defined as $y_{i} = W_{hy} h_{i} + b_{y}$ where $h_{i} = \phi(x_{i}, h_{i-1})$. The activation function $\phi$ is given by the following relations: $h_{i} = o_{i} \tanh(c_{i})$, $o_{i} = \sigma(W_{o} x_{i} + W_{ho} h_{i-1} + b_{o})$, $c_{i} = f_{i} c_{i-1} + \iota_{i} \tanh(W_{c} x_{i} + W_{hc} h_{i-1} + b_{c})$, $f_{i} = \sigma(W_{f} x_{i} + W_{hf} h_{i-1} + b_{f})$, and $\iota_{i} = \sigma(W_{\iota} x_{i} + W_{h\iota} h_{i-1} + b_{\iota})$, where $\sigma$ is the logistic sigmoid function and the $W$ and $b$ terms represent weight matrices and bias vectors to be optimized, respectively. The deep LSTM-RNN can be constructed by stacking multiple of these layers [1, 2].
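For concreteness, the following is a minimal NumPy sketch of one step of this recursion; the weight names and shapes follow standard LSTM conventions and are not taken from the paper's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x_i, h_prev, c_prev, p):
    """One step of the activation phi; p is a dict of W matrices and b vectors."""
    f_i = sigmoid(p["W_f"] @ x_i + p["W_hf"] @ h_prev + p["b_f"])            # forget gate
    iota = sigmoid(p["W_iota"] @ x_i + p["W_hiota"] @ h_prev + p["b_iota"])  # input gate
    o_i = sigmoid(p["W_o"] @ x_i + p["W_ho"] @ h_prev + p["b_o"])            # output gate
    c_i = f_i * c_prev + iota * np.tanh(p["W_c"] @ x_i + p["W_hc"] @ h_prev + p["b_c"])
    h_i = o_i * np.tanh(c_i)                                                  # hidden state
    return h_i, c_i

# The output layer y_i = W_hy @ h_i + b_y then maps h_i to the target entries (Sec. III-B).
```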
III Neural Contraction Metric (NCM)
This section presents an algorithm to obtain an NCM depicted in Fig. 1.
III-A Convex Optimization-based Sampling of Contraction Metrics (CV-STEM)
We derive one approach to sample contraction metrics by using the ConVex optimization-based Steady-state Tracking Error Minimization (CV-STEM) method [10, 11], which was developed for the control design of Itô stochastic nonlinear systems. In this section, we propose a simpler formulation for nonlinear systems with bounded disturbances so that it is of practical use in engineering applications.
By Theorem 1, the problem of minimizing an upper bound of the steady-state Euclidean distance between the trajectory $x_{1}(t)$ of the unperturbed system and $x_{2}(t)$ of the perturbed system (1) (i.e., (4) as $t \to \infty$) can be formulated as follows:
$J_{NL}^{*} = \min_{\underline{\omega} > 0,\, \overline{\omega} > 0,\, W \succ 0} \frac{\bar{d}}{\alpha}\sqrt{\frac{\overline{\omega}}{\underline{\omega}}} \;\; \text{s.t. (2) and (3)}$ (5)
where $W(x,t) = M(x,t)^{-1}$ is used as a decision variable instead of $M(x,t)$. We assume that the contraction rate $\alpha$ and disturbance bound $\bar{d}$ are given in (5) (see Remark 1 on how to select $\alpha$). We need the following lemma to convexify this nonlinear optimization problem.
Lemma 1
The inequalities (2) and (3) are equivalent to
$-\dot{\tilde{W}}(x,t) + \frac{\partial f}{\partial x}\tilde{W}(x,t) + \tilde{W}(x,t)\frac{\partial f}{\partial x}^{\top} \preceq -2\alpha \tilde{W}(x,t), \;\; \forall x,t$ (6)
$I \preceq \tilde{W}(x,t) \preceq \chi I, \;\; \forall x,t$ (7)
where $\chi = \overline{\omega}/\underline{\omega}$, $\tilde{W} = \nu W$, and $\nu = 1/\underline{\omega}$.
Proof:
Since $\nu = 1/\underline{\omega} > 0$ and $W \succ 0$, multiplying (2) by $\nu$ and then by $W$ from both sides preserves matrix definiteness and the resultant inequalities are equivalent to the original ones [27, pp. 114]. These operations yield (6). Next, since $M \succ 0$, there exists a unique $M^{1/2} \succ 0$ s.t. $M = M^{1/2} M^{1/2}$. Multiplying (3) by $M^{-1/2}$ from both sides gives $\underline{\omega} I \preceq W \preceq \overline{\omega} I$, as we have $W = M^{-1/2} M^{-1/2}$ and $(M^{1/2})^{-1} = M^{-1/2}$. We get (7) by multiplying this inequality by $\nu = 1/\underline{\omega}$. ∎
We are now ready to state and prove our main result on the convex optimization-based sampling.
Theorem 2
Consider the convex optimization problem:
$J_{CV}^{*} = \min_{\nu > 0,\, \chi \in \mathbb{R},\, \tilde{W} \succ 0} \frac{\bar{d}}{\alpha}\chi \;\; \text{s.t. (6) and (7)}$ (8)
where $\chi$ and $\tilde{W}$ are defined in Lemma 1, and $\alpha$ and $\bar{d}$ are assumed to be given. Then $J_{NL}^{*} = \frac{\bar{d}}{\alpha}\sqrt{\chi^{*}}$, where $\chi^{*}$ is the optimizer of (8), and the optimal contraction metric of (5) is recovered as $M = \nu \tilde{W}^{-1}$ evaluated at the optimizer of (8).
Proof:
By Lemma 1, the constraints (6) and (7) are equivalent to (2) and (3) with $\chi = \overline{\omega}/\underline{\omega}$. Since minimizing $\chi$ also minimizes $\sqrt{\chi}$ by monotonicity, the optimizer of (8) attains the optimal value of (5). ∎
Remark 1
Since (6) and (7) are independent of $\nu$, the choice of $\nu$ does not affect the optimal value of the minimization problem in Theorem 2. In practice, as we have $\|M(x,t)\| \leq 1/\underline{\omega} = \nu$ by (3), it can be used as a penalty to optimally adjust the induced 2-norm of estimation and control gains when the problem explicitly depends on $\nu$ (see Sec. IV for details). Also, although $\alpha$ is fixed in Theorem 2, it can be found by a line search as will be demonstrated in Sec. V.
Remark 2
The problem (8) can be solved as a finite-dimensional problem by using the backward difference approximation $\dot{\tilde{W}}(t_{k}) \simeq (\tilde{W}(t_{k}) - \tilde{W}(t_{k-1}))/\Delta t$, where $\Delta t = t_{k} - t_{k-1}$ with $\Delta t > 0$ sufficiently small, and by discretizing it along a pre-computed system trajectory $\{x(t_{k})\}_{k=0}^{K}$.
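As an illustration, here is a minimal CVXPY sketch of this discretization for a single pre-computed trajectory, written against the formulation of (6)–(8) given above; the function name, the trajectory and Jacobian inputs, and the constants are assumptions rather than the authors' implementation.

```python
import numpy as np
import cvxpy as cp

def sample_metric(xs, df_dx, alpha, d_bar, dt):
    """xs: (K, n) trajectory samples x(t_k); df_dx(x): Jacobian of f at x."""
    K, n = xs.shape
    Wt = [cp.Variable((n, n), PSD=True) for _ in range(K)]   # W_tilde(t_k)
    chi = cp.Variable(nonneg=True)
    cons = []
    for k in range(K):
        A = df_dx(xs[k])
        cons += [Wt[k] >> np.eye(n), Wt[k] << chi * np.eye(n)]          # (7)
        if k > 0:
            dWt = (Wt[k] - Wt[k - 1]) / dt                              # backward difference
            lmi = -dWt + A @ Wt[k] + Wt[k] @ A.T + 2 * alpha * Wt[k]    # (6)
            cons += [0.5 * (lmi + lmi.T) << 0]                          # symmetrized for the solver
    prob = cp.Problem(cp.Minimize(d_bar * chi / alpha), cons)
    prob.solve(solver=cp.MOSEK)   # any SDP-capable solver (e.g., SCS) also works
    return [W.value for W in Wt], chi.value
```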
III-B Deep LSTM-RNN Training
Instead of directly using sequential data of the optimal contraction metrics $\{M(x_{i},t_{i})\}_{i=0}^{N}$ for neural network training, the positive definiteness of $M(x,t)$ is utilized to reduce the dimension of the target output $y_{i}$ defined in Sec. II.
Lemma 2
A matrix $A \succ 0$ has a unique Cholesky decomposition, i.e., there exists a unique upper triangular matrix $U \in \mathbb{R}^{n \times n}$ with strictly positive diagonal entries s.t. $A = U^{\top} U$.
Proof:
See [28, pp. 441]. ∎
As a result of Lemma 2, we can select $\Theta(x,t)$ defined in Theorem 1 as the unique Cholesky decomposition of $M(x,t)$ and train the deep LSTM-RNN using only the non-zero entries of the unique upper triangular matrices $\{\Theta(x_{i},t_{i})\}_{i=0}^{N}$. We denote these non-zero entries by $\theta(x,t) \in \mathbb{R}^{n(n+1)/2}$. Consequently, the dimension of the target data is reduced by $n(n-1)/2$ without losing any information on $M(x,t)$.
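A small NumPy sketch of this reduction (and its inverse, used later in the online phase) might look as follows; the function names are placeholders.

```python
import numpy as np

def metric_to_target(M):
    """Non-zero entries of the upper-triangular Cholesky factor Theta with M = Theta^T Theta."""
    L = np.linalg.cholesky(M)                    # M = L @ L.T with L lower triangular
    Theta = L.T                                  # upper triangular, strictly positive diagonal
    return Theta[np.triu_indices(M.shape[0])]    # n(n+1)/2 entries instead of n^2

def target_to_metric(theta, n):
    """Reassemble M from the reduced target vector."""
    Theta = np.zeros((n, n))
    Theta[np.triu_indices(n)] = theta
    return Theta.T @ Theta
```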
The pseudocode to obtain an NCM depicted in Fig. 1 is presented in Algorithm 1. The deep LSTM-RNN in Sec. II is trained with the sequential state data $\{x(t_{k})\}_{k}$ and the target data $\{\theta(x(t_{k}),t_{k})\}_{k}$ using Stochastic Gradient Descent (SGD). We note that these pairs are sampled over multiple trajectories to increase the sample size and to avoid overfitting.
IV NCM-Based Optimal Estimation and Control
This section delineates how to construct an NCM offline and utilize it online for state estimation and feedback control.
IV-A Problem Statement
We apply an NCM to the state estimation problem for the following nonlinear system with bounded disturbances:
$\dot{x} = f(x,t) + B(x,t) d_{1}(t), \quad y(t) = h(x,t) + G(x,t) d_{2}(t)$ (9)
where $d_{1} : \mathbb{R}_{\geq 0} \to \mathbb{R}^{k_{1}}$, $B : \mathbb{R}^{n} \times \mathbb{R}_{\geq 0} \to \mathbb{R}^{n \times k_{1}}$, $y : \mathbb{R}_{\geq 0} \to \mathbb{R}^{m}$, $h : \mathbb{R}^{n} \times \mathbb{R}_{\geq 0} \to \mathbb{R}^{m}$, $d_{2} : \mathbb{R}_{\geq 0} \to \mathbb{R}^{k_{2}}$, and $G : \mathbb{R}^{n} \times \mathbb{R}_{\geq 0} \to \mathbb{R}^{m \times k_{2}}$ with $\bar{d}_{1} = \sup_{t} \|d_{1}(t)\| < +\infty$ and $\bar{d}_{2} = \sup_{t} \|d_{2}(t)\| < +\infty$. Let $W(\hat{x},t) = M(\hat{x},t)^{-1}$, $A(\hat{x},t) = \frac{\partial f}{\partial x}(\hat{x},t)$, and $C(\hat{x},t) = \frac{\partial h}{\partial x}(\hat{x},t)$. We design an estimator as
$\dot{\hat{x}} = f(\hat{x},t) + L(\hat{x},t)\bigl(y(t) - h(\hat{x},t)\bigr)$ (10)
$L(\hat{x},t) = M(\hat{x},t)\, C(\hat{x},t)^{\top}$ (11)
where $M(\hat{x},t) \succ 0$ is the contraction metric to be optimized.
where . The virtual system of (9) and (10) is given as
$\dot{q} = f(q,t) + L(\hat{x},t)\bigl(h(x,t) - h(q,t)\bigr) + d_{e}(q,t)$ (12)
where $d_{e}(q,t)$ is defined s.t. $d_{e}(x,t) = B(x,t) d_{1}(t)$ and $d_{e}(\hat{x},t) = L(\hat{x},t) G(x,t) d_{2}(t)$. Note that (12) has $q = x$ and $q = \hat{x}$ as its particular solutions. The differential dynamics of (12) with $d_{e} = 0$ is given as
$\delta\dot{q} = \Bigl(\frac{\partial f}{\partial q}(q,t) - L(\hat{x},t)\frac{\partial h}{\partial q}(q,t)\Bigr)\delta q.$ (13)
IV-B Nonlinear Stability Analysis
We have the following lemma for deriving a condition to guarantee the local contraction of (12) in Theorem 3.
Lemma 3
The following theorem along with this lemma guarantees the exponential stability of the estimator (10).
Theorem 3
Proof:
Using (13), we have when . This along with (14) gives in the region where Lemma 3 holds. Thus, using the bound , we have by the same proof as for Theorem 1. Rewriting this with , , and yields (15). This also implies that . Hence, the sufficient condition for in Lemma 3 reduces to the one required in this theorem. ∎
IV-C Convex Optimization-based Sampling (CV-STEM)
We have the following proposition to sample optimal contraction metrics for the NCM-based state estimation.
Proposition 1
The contraction metric $M(\hat{x},t)$ that minimizes an upper bound of the steady-state estimation error $\lim_{t\to\infty}\|x(t) - \hat{x}(t)\|$ is found by the convex optimization problem:
(16)
where , , , and . The arguments of , , and are omitted for notational simplicity.
Proof:
We have an analogous result for state feedback control.
Corollary 1
Consider the following system and a state feedback controller with a bounded disturbance:
(17)
(18) |
where , , , , , and is a matrix defined as , assuming that at [24, 25]. Suppose there exist positive constants , , and s.t. and . Then that minimizes an upper bound of can be found by the following convex optimization problem:
(19)
where , , , and is a user-defined constant. The arguments of , , and are omitted for notational simplicity.
Proof:
The system with as its particular solutions is given by , where and . Since we have and the differential dynamics is
(20)
when , we get by the same proof as for Theorem 3 with (13) replaced by (20), (14) by (18), and by , where . (19) then follows as in the proof of Proposition 1, where is for penalizing excessively large control inputs through (see Remark 1). ∎
IV-D NCM Construction and Interpretation
Algorithm 1 along with Proposition 1 and Corollary 1 returns NCMs that compute the optimal estimation gain of (10) and the optimal feedback gain of (17) for state estimation and control in real time. They also provide the bounded error tube (see Theorem 1, [15, 16]) for robust motion planning problems, as will be seen in Sec. V.
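To make the online phase concrete, the following is a hedged sketch of evaluating a trained NCM at the current state estimate and forming the estimation gain $M(\hat{x},t)C(\hat{x},t)^{\top}$ used in (10); the model interface (a sequence model mapping a state–time input to the reduced entries $\theta$) and the inclusion of time as an input are assumptions tied to the earlier sketches.

```python
import numpy as np
import torch

def ncm_estimation_gain(model, x_hat, t, C, n):
    """Evaluate the trained NCM and return the gain M(x_hat, t) @ C.T used in (10)."""
    z = torch.tensor(np.append(x_hat, t), dtype=torch.float32).view(1, 1, -1)
    with torch.no_grad():
        theta = model(z).numpy().ravel()         # predicted Cholesky entries of M
    Theta = np.zeros((n, n))
    Theta[np.triu_indices(n)] = theta
    M = Theta.T @ Theta                          # symmetric positive semi-definite by construction
    return M @ C.T                               # estimation gain
```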
The similarity of Corollary 1 to Proposition 1 stems from the estimation and control duality due to the differential nature of contraction analysis, as is evident from (13) and (20). Analogously to the discussion of the Kalman filter and LQR duality in LTV systems, this leads to two different interpretations of the weight of $\nu$ (its coefficient in (16) and in (19)). As discussed in Remark 1, one way is to see it as a penalty on the induced 2-norm of the feedback gains. Since a zero weight of $\nu$ in (16) corresponds to the case where no noise acts on the measurement $y$, the weight can also be viewed as an indicator of how much we trust the measurement $y$: the larger the weight of $\nu$, the less confident we are in $y$. This agrees with our intuition, as smaller feedback gains are suitable for systems with larger measurement uncertainty.
V Simulation
The NCM framework is demonstrated using Lorenz oscillator state estimation and spacecraft motion planning and control problems. CVXPY [13] with the MOSEK solver [14] is used to solve the convex optimization problems. A Python implementation is available at https://github.com/astrohiro/ncm.
V-A State Estimation of Lorenz Oscillator
We first consider state estimation of the Lorenz oscillator with bounded disturbances, described as $\dot{x} = f(x) + d_{1}(t)$ and $y = Cx + d_{2}(t)$, where $x = [x_{1}, x_{2}, x_{3}]^{\top}$, $f(x) = [\sigma(x_{2}-x_{1}),\; x_{1}(\rho - x_{3}) - x_{2},\; x_{1}x_{2} - \beta x_{3}]^{\top}$, $\sigma = 10$, $\beta = 8/3$, $\rho = 28$, and $C = [1\ 0\ 0]$. The dynamics is integrated with a fixed time step, with measurements taken at a coarser fixed sampling period.
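For reference, a small sketch of this dynamics and its Jacobian (used by the metric sampling step) is given below; the parameter values are the standard chaotic ones assumed above, and the code is illustrative rather than the repository implementation.

```python
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0
C = np.array([[1.0, 0.0, 0.0]])            # measure the first state only

def f(x):
    return np.array([SIGMA * (x[1] - x[0]),
                     x[0] * (RHO - x[2]) - x[1],
                     x[0] * x[1] - BETA * x[2]])

def df_dx(x):
    return np.array([[-SIGMA, SIGMA, 0.0],
                     [RHO - x[2], -1.0, -x[0]],
                     [x[1], x[0], -BETA]])
```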
V-A1 Sampling of Optimal Contraction Metrics
Using Proposition 1, we sample the optimal contraction metric along trajectories with uniformly distributed initial conditions. Figure 2 plots the objective value of (16) for different values of $\alpha$ over the sampled trajectories, and the optimal $\alpha$ is selected by this line search (see Remark 1). The optimal estimator parameters averaged over the sampled trajectories for the selected $\alpha$ are summarized in Table I.
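The line search over $\alpha$ mentioned in Remark 1 can be sketched as follows, where cvstem_objective is a placeholder that solves the convex sampling problem for a given $\alpha$ (e.g., via the sample_metric sketch in Sec. III-A) and returns its optimal value; the grid of candidate rates is illustrative.

```python
import numpy as np

def line_search_alpha(cvstem_objective, alphas=np.linspace(0.1, 5.0, 20)):
    """Pick the contraction rate minimizing the optimal value of the sampling problem."""
    values = [cvstem_objective(a) for a in alphas]   # one convex solve per candidate
    k = int(np.argmin(values))
    return alphas[k], values[k]
```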

TABLE I: Optimal estimator parameters and the MSE of the trained LSTM-RNN.
V-A2 Deep LSTM-RNN Training
A deep LSTM-RNN is trained using Algorithm 1 and Proposition 1 with the sequential data sampled over the different trajectories. Note that the input and target data are standardized and normalized to make the SGD-based learning process stable. Figure 3 shows the test loss of the LSTM-RNN models with different numbers of layers and hidden units. We can see that models with too many layers overfit and those with too few hidden units underfit to the training samples, and the numbers of layers and hidden units are selected accordingly. The resultant MSE of the trained LSTM-RNN is shown in Table I.
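A hedged PyTorch sketch of the model and training step of Algorithm 1 is given below; the layer count, hidden size, learning rate, and epoch count are placeholders rather than the values selected from Fig. 3.

```python
import torch
import torch.nn as nn

class NCM(nn.Module):
    def __init__(self, n_in, n_out, n_hidden=64, n_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_in, n_hidden, n_layers, batch_first=True)
        self.head = nn.Linear(n_hidden, n_out)       # y_i = W_hy h_i + b_y

    def forward(self, x):                            # x: (batch, seq, n_in)
        h, _ = self.lstm(x)
        return self.head(h)                          # (batch, seq, n_out)

def train(model, loader, epochs=100, lr=1e-3):
    opt = torch.optim.SGD(model.parameters(), lr=lr)  # SGD as in the text
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, theta in loader:                      # standardized inputs and targets
            opt.zero_grad()
            loss = loss_fn(model(x), theta)
            loss.backward()
            opt.step()
    return model
```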

V-A3 State Estimation with an NCM
The estimation problem is solved using the NCM, the sampling-based CV-STEM [10, 11], and an Extended Kalman Filter (EKF). Different values are used for the actual and estimated initial conditions, and constant weight matrices are selected for the EKF.
Figure 4 shows the estimation error smoothed using a moving average filter. The errors of the NCM and CV-STEM estimators remain below the optimal upper bound, while the EKF has a larger error than the other two. As expected from the small MSE in Table I, the estimation error of the NCM estimator shows a trend similar to that of the sampling-based CV-STEM estimator without losing estimation performance.

V-B Spacecraft Optimal Motion Planning
We consider an optimal motion planning problem for the planar spacecraft dynamical system, given as $\dot{x} = Ax + B(x)u + d(t)$, where $u$ is the thruster force input and $x = [p_{x}, p_{y}, \phi, \dot{p}_{x}, \dot{p}_{y}, \dot{\phi}]^{\top}$, with $p_{x}$, $p_{y}$, and $\phi$ being the horizontal coordinate, vertical coordinate, and yaw angle of the spacecraft, respectively. The constant matrix $A$ and the state-dependent actuation matrix $B(x)$ are defined in [30]. All the parameters of the spacecraft are normalized to $1$.
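Since the actual $A$ and $B(x)$ are defined in [30], the following is only an illustrative stand-in: a normalized planar rigid body whose body-frame force and torque inputs are rotated into the inertial frame by the yaw angle.

```python
import numpy as np

A = np.block([[np.zeros((3, 3)), np.eye(3)],
              [np.zeros((3, 3)), np.zeros((3, 3))]])   # double-integrator structure

def B(x):
    phi = x[2]                                   # yaw angle
    R = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])  # body-to-inertial rotation
    Bx = np.zeros((6, 3))
    Bx[3:5, 0:2] = R                             # body-frame forces -> translational accelerations
    Bx[5, 2] = 1.0                               # torque -> angular acceleration
    return Bx

def dynamics(x, u, d=np.zeros(6)):
    return A @ x + B(x) @ u + d
```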
V-B1 Problem Formulation
In the planar field, we have six circular obstacles of a given radius. The goal of the motion planning problem is to find an optimal trajectory that avoids these obstacles and minimizes the total control effort subject to the input constraints and the dynamics constraints. The initial and terminal conditions are fixed. Following the same procedure described for the state estimation problem, the optimal control parameters and the MSE of the LSTM-RNN trained using Algorithm 1 with Corollary 1 are determined as shown in Table II.
TABLE II: Optimal control parameters and the MSE of the trained LSTM-RNN.
V-B2 Motion Planning with an NCM
Given an NCM, we can solve a robust motion planning problem, where the state constraint is now tightened by the bounded error tube (see Theorem 1) whose radius is given by the steady-state bound of (4). Figure 5 shows the spacecraft motion on the planar field, computed using the NCM, the sampling-based CV-STEM [10, 11], and a baseline LQR controller with constant weight matrices, which does not account for the disturbance.
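The tube-based state constraint can be checked (or, equivalently, imposed as a tightened obstacle constraint during planning) as in the following sketch, where the obstacle data and the tube radius are placeholders.

```python
import numpy as np

def tube_feasible(traj_xy, centers, r_obs, r_tube):
    """traj_xy: (T, 2) planned positions; centers: (n_obs, 2) obstacle centers."""
    for c in centers:
        dist = np.linalg.norm(traj_xy - c, axis=1)
        if np.any(dist < r_obs + r_tube):      # the tube could intersect this obstacle
            return False
    return True
```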

As summarized in Table III, the input constraints are satisfied and the three controllers use a similar amount of control effort. All the controllers except the LQR keep their trajectories within the tube, avoiding collision with the circular obstacles even in the presence of disturbances, as depicted in Fig. 5. Also, the NCM controller reproduces the computationally expensive CV-STEM control inputs with the small MSE given in the last column of Table II.
TABLE III: Control performance and control effort of the NCM, CV-STEM, and LQR controllers.
VI Conclusion
In this paper, we present a Neural Contraction Metric (NCM), a deep learning-based global approximation of an optimal contraction metric for online nonlinear estimation and control. The novelty of the NCM approach is twofold: 1) data points of the optimal contraction metric are sampled offline by solving a convex optimization problem, which minimizes an upper bound of the steady-state Euclidean distance between perturbed and unperturbed trajectories without assuming any hypothesis function space, and 2) a deep LSTM-RNN is constructed to model the sampled metrics with arbitrary accuracy. The superior performance of the NCM is validated through numerical simulations on state estimation and optimal motion planning problems.
References
- [1] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
- [2] A. Graves, A. Mohamed, and G. Hinton, “Speech recognition with deep recurrent neural networks,” in IEEE Int. Conf. Acoust. Speech Signal Process., May 2013, pp. 6645–6649.
- [3] Y. Bengio, P. Simard, and P. Frasconi, “Learning long-term dependencies with gradient descent is difficult,” IEEE Trans. Neural Networks, vol. 5, no. 2, pp. 157–166, Mar. 1994.
- [4] K. Funahashi and Y. Nakamura, “Approximation of dynamical systems by continuous time recurrent neural networks,” Neural Networks, vol. 6, no. 6, pp. 801 – 806, 1993.
- [5] W. Lohmiller and J.-J. E. Slotine, “On contraction analysis for nonlinear systems,” Automatica, vol. 34, no. 6, pp. 683 – 696, 1998.
- [6] G. Shi et al., “Neural lander: Stable drone landing control using learned dynamics,” in IEEE Int. Conf. Robot. Automat., May 2019.
- [7] S. M. Richards, F. Berkenkamp, and A. Krause, “The Lyapunov neural network: Adaptive stability certification for safe learning of dynamical systems,” in Conf. Robot Learn., vol. 87, Oct. 2018, pp. 466–476.
- [8] Y.-C. Chang, N. Roohi, and S. Gao, “Neural Lyapunov control,” in Adv. Neural Inf. Process. Syst., 2019, pp. 3245–3254.
- [9] A. P. Dani, S.-J. Chung, and S. Hutchinson, “Observer design for stochastic nonlinear systems via contraction-based incremental stability,” IEEE Trans. Autom. Control, vol. 60, no. 3, pp. 700–714, Mar. 2015.
- [10] H. Tsukamoto and S.-J. Chung, “Convex optimization-based controller design for stochastic nonlinear systems using contraction analysis,” in 58th IEEE Conf. Decis. Control, Dec. 2019, pp. 8196–8203.
- [11] ——, “Robust controller design for stochastic nonlinear systems via convex optimization,” Submitted to IEEE Trans. Autom. Control, 2019. [Online]. Available: https://arxiv.org/abs/2006.04359
- [12] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, Mar. 2004.
- [13] S. Diamond and S. Boyd, “CVXPY: A Python-embedded modeling language for convex optimization,” J. Mach. Learn. Res., 2016.
- [14] MOSEK ApS, MOSEK Optimizer API for Python 9.0.105, 2020.
- [15] S.-J. Chung, S. Bandyopadhyay, I. Chang, and F. Y. Hadaegh, “Phase synchronization control of complex networks of Lagrangian systems on adaptive digraphs,” Automatica, vol. 49, no. 5, pp. 1148–1161, 2013.
- [16] S. Singh, A. Majumdar, J. Slotine, and M. Pavone, “Robust online motion planning via contraction theory and convex optimization,” in IEEE Int. Conf. Robot. Automat., May 2017, pp. 5883–5890.
- [17] M. Johansson and A. Rantzer, “Computation of piecewise quadratic Lyapunov functions for hybrid systems,” IEEE Trans. Autom. Control, vol. 43, no. 4, pp. 555–559, Apr. 1998.
- [18] T. A. Johansen, “Computation of Lyapunov functions for smooth nonlinear systems using convex optimization,” Automatica, vol. 36, no. 11, pp. 1617 – 1626, 2000.
- [19] P. Giesl, Construction of Global Lyapunov Functions Using Radial Basis Functions, ser. Lecture Notes in Mathematics. Springer-Verlag Berlin Heidelberg, 2007, vol. 1904.
- [20] A. Papachristodoulou and S. Prajna, “On the construction of Lyapunov functions using the sum of squares decomposition,” in IEEE Conf. Decis. Control, vol. 3, Dec. 2002, pp. 3482–3487.
- [21] E. M. Aylward, P. A. Parrilo, and J.-J. E. Slotine, “Stability and robustness analysis of nonlinear systems via contraction metrics and SOS programming,” Automatica, vol. 44, no. 8, pp. 2163 – 2170, 2008.
- [22] I. R. Manchester and J. E. Slotine, “Control contraction metrics: Convex and intrinsic criteria for nonlinear feedback design,” IEEE Trans. Autom. Control, vol. 62, no. 6, pp. 3046–3053, Jun. 2017.
- [23] W. Tan, “Nonlinear control analysis and synthesis using sum-of-squares programming,” Ph.D. dissertation, Univ. California, Berkeley, 2006.
- [24] J. R. Cloutier, “State-dependent Riccati equation techniques: An overview,” in Proc. Amer. Control Conf., vol. 2, 1997, pp. 932–936.
- [25] T. Çimen, “State-dependent Riccati equation (SDRE) control: A survey,” IFAC Proc. Volumes, vol. 41, no. 2, pp. 3761 – 3775, 2008.
- [26] H. K. Khalil, Nonlinear Systems, 3rd ed. Prentice-Hall, 2002.
- [27] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, ser. Studies in Applied Mathematics. Philadelphia, PA: SIAM, Jun. 1994, vol. 15.
- [28] R. A. Horn and C. R. Johnson, Matrix Analysis, 2nd ed. Cambridge University Press, 2012.
- [29] S. Bonnabel and J. Slotine, “A contraction theory-based analysis of the stability of the deterministic extended Kalman filter,” IEEE Trans. Autom. Control, vol. 60, no. 2, pp. 565–569, Feb. 2015.
- [30] Y. Nakka et al., “A six degree-of-freedom spacecraft dynamics simulator for formation control research,” in Astrodyn. Special. Conf., 2018.