Accelerated Continuous-Time Approximate Dynamic Programming via Data-Assisted Hybrid Control
Research supported in part by NSF grant number CNS-1947613.
Abstract
We introduce a new closed-loop architecture for the online solution of approximate optimal control problems in the context of continuous-time systems. Specifically, we introduce the first algorithm that incorporates dynamic momentum in actor-critic structures to control continuous-time dynamic plants with an affine structure in the input. By incorporating dynamic momentum in our algorithm, we are able to accelerate the convergence properties of the closed-loop system, achieving superior transient performance compared to traditional gradient-descent based techniques. In addition, by leveraging the existence of past recorded data with sufficiently rich information properties, we dispense with the persistence of excitation condition traditionally imposed on the regressors of the critic and the actor. Given that our continuous-time momentum-based dynamics also incorporate periodic discrete-time resets that emulate restarting techniques used in the machine learning literature, we leverage tools from hybrid dynamical systems theory to establish asymptotic stability properties for the closed-loop system. We illustrate our results with a numerical example.
keywords:
Approximate dynamic programming, concurrent learning, hybrid systems, Lyapunov theory.
Department of Electrical, Energy and Computer Engineering, University of Colorado Boulder, Boulder, Colorado 80305, USA
1 Introduction
Recent technological advances in computation and sensing have incentivized the development and implementation of data-assisted feedback control techniques previously deemed intractable due to their computational complexity. Among these techniques, reinforcement learning (RL) has emerged as a practically viable tool with remarkable degrees of success in robotics [1], autonomous driving [2], and water-distribution systems [3], among other cyber-physical applications; see [4]. These types of algorithms are part of a large landscape of adaptive systems that aim to control a plant while simultaneously optimizing a performance index in a model-free way, with closed-loop stability guarantees.
In this paper, we focus on a particular class of infinite horizon RL problems from the perspective of approximate optimal control and approximate adaptive dynamic programming (AADP). Specifically, we study the optimal control problem for nonlinear continuous-time and control-affine deterministic plants, interconnected with approximate adaptive optimal controllers [5] in an actor-critic configuration. These types of adaptive controllers aim to find, in real time, the solution to the Hamilton-Jacobi-Bellman (HJB) equation by measuring the output of the nonlinear dynamical system while making use of two approximation structures:
1. a critic, used to estimate the optimal value function of the optimal control problem, and
2. an actor, used to estimate the optimal feedback controller.
Our goal is to design online adaptive dynamics for the real-time tuning of the aforementioned structures, while simultaneously achieving closed-loop stability and high transient performance. To achieve this, and motivated by the widespread usage of momentum-based gradient dynamics in practical RL settings [6], we study continuous-time actor-critic dynamics inspired by a class of ordinary differential equations (ODEs) that can be seen as continuous-time counterparts of Nesterov's accelerated optimization algorithm [7]. Such algorithms have gained popularity in optimization and related fields because they can minimize smooth convex functions at a rate of order $\mathcal{O}(1/t^2)$ [8]. The main source of the acceleration property in these ODEs is the addition of momentum to gradient-based dynamics, in conjunction with a vanishing dynamic damping coefficient. However, as recently shown in [9] and [10], the non-uniform convergence properties that emerge in these types of dynamics complicate their use in feedback systems with plant dynamics in the loop. In this paper, we overcome these challenges by incorporating resets into the proposed momentum-based algorithms, similar to restarting heuristics studied in the machine learning literature; see [11] and [7]. Our resulting actor-critic controller is naturally modeled by a hybrid dynamical system that combines continuous-time and discrete-time dynamics, which we analyze using tools from [12].
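For concreteness, the prototypical accelerated ODE studied in [7], written for a smooth convex function $\phi$, is
$$\ddot{X}(t) + \frac{3}{t}\dot{X}(t) + \nabla\phi(X(t)) = 0,$$
which guarantees $\phi(X(t)) - \min_z \phi(z) \le \mathcal{O}(1/t^2)$. The vanishing damping coefficient $3/t$ is the term responsible for the acceleration, and it is also what induces the non-uniform convergence behavior that the periodic resets studied in this paper are designed to mitigate.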
A traditional assumption in the literature of continuous-time actor-critic RL is that the regressors used in the parameterizations satisfy a persistence of excitation condition along the trajectories of the plant. However, in practice, this condition can be difficult to verify a priori. To circumvent this issue, in this paper we consider a data-assisted approach, where a finite amount of past “sufficiently rich” recorded data is used to guarantee asymptotic learning in the closed-loop system. As a consequence, the resulting data-assisted hybrid control algorithm concurrently uses real-time and recorded data, similar in spirit to concurrent-learning (CL) techniques [13]. By using Lyapunov-based tools for hybrid dynamical systems, we analyze the interconnection of an actor-critic neural-network (NN) controller and the nonlinear plant, establishing that the trajectories of the closed-loop system remain ultimately bounded around the origin of the plant and the optimal actor and critic NN parameters. Since the resulting closed-loop system has suitable regularity properties in terms of continuity of the dynamics, our stability results are in fact robust with respect to arbitrarily small additive disturbances that can be adversarial in nature, or that can arise due to numerical implementations. To the best knowledge of the authors, these are the first theoretical stability guarantees of continuous-time accelerated actor-critic algorithms for neural network-based adaptive dynamic programming controllers in nonlinear deterministic settings.
The rest of this paper is organized as follows: Section 2 presents the notation and some concepts on hybrid dynamical systems, Section 3 presents the problem statement and some preliminaries on optimal control. Section 4 introduces the hybrid momentum-based dynamics for the update of the critic NN, Section 5 presents the update dynamics for the actor NN, and Section 6 studies the properties of the closed-loop system. In Section 7 we study a numerical example illustrating our theoretical results.
2 Preliminaries
Notation: We denote the real numbers by $\mathbb{R}$, and we use $\mathbb{R}_{\geq 0}$ to denote the non-negative real line. We use $\mathbb{R}^n$ to represent the $n$-dimensional Euclidean space and $|\cdot|$ to denote its usual vector norm. Given a matrix $A$, we use $|A|$ to denote the induced 2-norm for matrices, and we infer its distinction with the vector norm depending on the context. We use $\text{tr}(\cdot)$ to denote the trace operator on matrices. Given a compact set $\mathcal{A}\subset\mathbb{R}^n$ and a vector $x\in\mathbb{R}^n$, we use $|x|_{\mathcal{A}}:=\min_{y\in\mathcal{A}}|x-y|$ to represent the minimum distance of $x$ to $\mathcal{A}$. We also use $r\mathbb{B}$ to denote a closed ball in the Euclidean space, of radius $r>0$, and centered at the origin. We use $I_n$ to denote the $n\times n$ identity matrix, and $(x,y)$ for the concatenation of the vectors $x$ and $y$, i.e., $(x,y):=[x^\top, y^\top]^\top$. A function $\alpha:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}$ is said to be of class-$\mathcal{K}$ ($\alpha\in\mathcal{K}$) if it is continuous, zero at zero, and nondecreasing. A function $\beta:\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}$ is said to be of class-$\mathcal{KL}$ ($\beta\in\mathcal{KL}$) if $\beta(\cdot,s)\in\mathcal{K}$ for each $s\geq 0$, it is non-increasing in its second argument, and $\beta(r,s)\to 0$ as $s\to\infty$ for each $r\geq 0$. The gradient of a real-valued function $V:\mathbb{R}^n\to\mathbb{R}$ is defined as a column vector and denoted by $\nabla V$. For a vector-valued function $\sigma:\mathbb{R}^n\to\mathbb{R}^N$, we use $\nabla\sigma$ to denote its Jacobian matrix.
Hybrid Dynamical Systems: To study our algorithms, we will use tools from hybrid dynamical systems (HDS) theory [12]. A HDS with state $x\in\mathbb{R}^n$ has dynamics
$$\dot{x} = F(x),\;\; x\in C, \qquad\qquad x^+ = G(x),\;\; x\in D, \tag{1}$$
where $F$ is called the flow map, $G$ is called the jump map, and $C$ and $D$ are closed sets, called the flow set and the jump set, respectively. We use $\mathcal{H}=(C,F,D,G)$ to denote the elements of the HDS. Solutions to system (1) are indexed by a continuous-time parameter $t$, which increases continuously during flows, and a discrete-time index $j$, which increases by one during jumps. Thus, the notation $\dot{x}$ in (1) represents the derivative $dx(t,j)/dt$, and $x^+$ in (1) represents the value of $x$ after an instantaneous jump, i.e., $x(t,j+1)$. Therefore, solutions to system (1) are defined on hybrid time domains. For a precise definition of hybrid time domains and solutions to HDS of the form (1), we refer the reader to [12, Ch. 2]. The following definitions will be instrumental to study the stability and convergence properties of systems of the form (1).
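Although the theoretical developments below rely on the formal solution concept of [12], the flow/jump semantics of (1) can be made concrete with a simple forward-Euler simulation. The following Python sketch is only illustrative: the maps F, G and the sets C, D are user-supplied placeholders, not the specific data of the controllers designed later in the paper.

```python
import numpy as np

def simulate_hybrid(F, G, in_C, in_D, x0, t_end=10.0, dt=1e-3, max_jumps=100):
    """Forward-Euler simulation of a hybrid system of the form (1).

    F, G       : flow map x -> xdot and jump map x -> x_plus
    in_C, in_D : membership tests for the flow set C and the jump set D
    Returns the trajectory as a list of (t, j, x) triples on a hybrid time domain.
    """
    t, j, x = 0.0, 0, np.asarray(x0, dtype=float)
    traj = [(t, j, x.copy())]
    while t < t_end and j < max_jumps:
        if in_D(x):              # jump: x^+ = G(x), the jump counter j increases
            x = np.asarray(G(x), dtype=float)
            j += 1
        elif in_C(x):            # flow: integrate xdot = F(x) for one step of length dt
            x = x + dt * np.asarray(F(x), dtype=float)
            t += dt
        else:                    # solution cannot be extended
            break
        traj.append((t, j, x.copy()))
    return traj
```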
Definition 1
The compact set $\mathcal{A}\subset\mathbb{R}^n$ is said to be uniformly asymptotically stable (UAS) for system (1) if there exists $\beta\in\mathcal{KL}$ such that every solution $x$ with $x(0,0)\in C\cup D$ satisfies:
$$|x(t,j)|_{\mathcal{A}} \leq \beta\big(|x(0,0)|_{\mathcal{A}},\, t+j\big), \qquad \forall\,(t,j)\in\text{dom}\,x. \tag{2}$$
When $\beta(r,s)=c_1 r e^{-c_2 s}$ for some $c_1, c_2>0$, the set $\mathcal{A}$ is said to be uniformly exponentially stable (UES).
3 Problem Statement
Consider a control-affine nonlinear dynamical plant
$$\dot{x} = f(x) + g(x)u, \tag{3}$$
where $x\in\mathbb{R}^n$ is the state of the system, $u\in\mathbb{R}^m$ is the input, and $f:\mathbb{R}^n\to\mathbb{R}^n$ and $g:\mathbb{R}^n\to\mathbb{R}^{n\times m}$ are locally Lipschitz functions. Our goal is to design a stable algorithm able to find, in real time, a control law that minimizes the cost functional given by:
$$J(x_0, u) := \int_{0}^{\infty} r\big(x(t), u(t)\big)\, dt, \tag{4}$$
where $x(\cdot)$ represents a solution to (3) from the initial condition $x_0$, that results from implementing a feedback law $u$, belonging to a class of admissible control laws characterized as follows:
Definition 2
[14, Definition 1] Given the dynamical system in (3), a feedback control law $u$ is admissible with respect to the cost functional in (4) if
1. it is continuous,
2. it renders system (3) UAS, and
3. it yields a finite cost (4) for all initial conditions of interest.
We denote the set of admissible feedback laws as $\mathcal{U}$.
In (4), we consider cost functions of the form $r(x,u) = Q(x) + u^\top R u$, where the state-cost is given by $Q(x)$, with $Q$ positive definite, and the control-cost is given by $u^\top R u$, with $R = R^\top \succ 0$. To find the optimal control law that minimizes (4), we study the Hamiltonian function related to (3) and (4), given by
$$H(x, u, \nabla V) := Q(x) + u^\top R u + \nabla V(x)^\top\big(f(x) + g(x)u\big). \tag{5}$$
Using (5), a necessary optimality condition for the minimizing control is given by Pontryagin's maximum principle [15]:
$$u^*(x) = -\frac{1}{2}R^{-1}g(x)^\top \nabla V^*(x), \tag{6}$$
where $V^*$ represents the optimal value function: $V^*(x_0) := \min_{u\in\mathcal{U}} J(x_0, u)$.
On the other hand, under the assumption that $V^*$ is continuously differentiable, the optimal value function can be shown to satisfy the Hamilton-Jacobi-Bellman equation [5, Ch. 1.4]:
$$0 = \frac{\partial V^*}{\partial t} + \min_{u\in\mathcal{U}} H\big(x, u, \nabla V^*\big).$$
Since the functional in (4) does not have an explicit dependence on $t$, it follows that $\frac{\partial V^*}{\partial t}=0$, and hence $V^*$ is time-invariant, meaning that for all $x$ the following holds:
$$0 = Q(x) + u^*(x)^\top R\, u^*(x) + \nabla V^*(x)^\top\big(f(x) + g(x)u^*(x)\big). \tag{7}$$
The time-invariant Hamilton-Jacobi-Bellman equation in (7) allows for a state-dependent characterization of optimality. Therefore, by using the optimal control law in (6), and assuming that the system dynamics (3) are known, the form (7) could be leveraged to find $V^*$. Unfortunately, finding an explicit closed-form expression for $V^*$, and thus for the optimal control law, is, in general, an intractable problem. However, the utility of (7) is not completely lost. As we shall show in the following sections, online and historical "measurements" of (7) can be leveraged in real time to estimate the optimal control law while concurrently rendering a neighborhood of the origin of system (3) asymptotically stable.
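To illustrate what a "measurement" of (7) looks like, the sketch below evaluates the left-hand side of (7) at sampled states for a candidate value-function gradient, using the optimal-control expression (6). The scalar dynamics, costs, and candidate gradient are hypothetical placeholders chosen only for illustration; they are not the system used in Section 7.

```python
import numpy as np

# Hypothetical scalar plant xdot = f(x) + g(x) u and quadratic costs (placeholders).
f = lambda x: -x + 0.25 * x**3
g = lambda x: 1.0
Q = lambda x: x**2
R = 1.0

def hjb_residual(V_grad, x):
    """Left-hand side of the time-invariant HJB equation (7) at state x, using the
    control u = -(1/(2R)) g(x) V_grad(x) suggested by (6) for the candidate gradient."""
    u = -0.5 * (1.0 / R) * g(x) * V_grad(x)
    return Q(x) + R * u**2 + V_grad(x) * (f(x) + g(x) * u)

# A perfect candidate gradient makes the residual zero for all x; a guess does not.
candidate_grad = lambda x: 2.0 * x   # gradient of the (hypothetical) guess V(x) = x^2
print([float(hjb_residual(candidate_grad, x)) for x in np.linspace(-1.0, 1.0, 5)])
```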
4 Data-Assisted Critic Dynamics
To leverage the form of (7), we consider the following parameterization of the optimal value function $V^*$:
$$V^*(x) = W^{*\top}\sigma(x) + \epsilon(x), \qquad \forall\, x\in\Omega, \tag{8}$$
where $\Omega\subset\mathbb{R}^n$ is a compact set, $W^*\in\mathbb{R}^N$, $\sigma:\mathbb{R}^n\to\mathbb{R}^N$ is a vector of continuously differentiable basis functions, and $\epsilon:\mathbb{R}^n\to\mathbb{R}$ is the approximation error. The parameterization (8) is always possible on compact sets due to the continuity properties of $V^*$ and the universal approximation theorem [16]. This parametrization results in an optimal Hamiltonian of the form given by:
(9) |
where we defined the regressor $\omega$ as:
(10) |
We note that the explicit dependence of $\omega$ on the control action $u$, defined in (10), is a fundamental departure from the previous approaches studied in the context of concurrent learning (CL) NN actor-critic controllers, such as those considered in [17] and [18]. In particular, we note that in the context of CL the data used to estimate the optimal value function is generated from measurements of the optimal Hamiltonian which, by definition, incorporates the optimal control law $u^*$. Hence, the need to include the control action as part of the regressor vectors becomes crucial; this dependence characterizes how far our recorded measurements of a Hamiltonian are from the optimal Hamiltonian. Indeed, this distance will explicitly emerge in our convergence and stability analysis. Naturally, the dependence of (10) on the control action will impose stronger conditions on the recorded data needed to estimate $W^*$.
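For intuition on the role of the regressor in (10), the following sketch assumes it takes the standard continuous-time ADP form $\omega(x,u)=\nabla\sigma(x)\big(f(x)+g(x)u\big)$ and evaluates it for a hypothetical quadratic basis and placeholder dynamics; none of these choices are taken from the paper.

```python
import numpy as np

def sigma(x):
    """Hypothetical quadratic basis for a 2-D state (placeholder choice)."""
    x1, x2 = x
    return np.array([x1**2, x1 * x2, x2**2])

def grad_sigma(x):
    """Jacobian of sigma(x), with shape (N, n)."""
    x1, x2 = x
    return np.array([[2 * x1, 0.0],
                     [x2,     x1],
                     [0.0, 2 * x2]])

def omega(x, u, f, g):
    """Regressor evaluated at (x, u), assuming the form grad_sigma(x) (f(x) + g(x) u);
    this is an assumption about (10), which is not reproduced above."""
    return grad_sigma(x) @ (f(x) + g(x) @ np.atleast_1d(u))

# Placeholder control-affine dynamics used only to exercise the functions above.
f = lambda x: np.array([-x[0] + x[1], -x[1]])
g = lambda x: np.array([[0.0], [1.0]])
print(omega(np.array([0.5, -0.2]), 0.1, f, g))
```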
Assuming we have access to the vector of basis functions $\sigma$, we can define a critic neural network as:
(11) |
which will serve as an approximation of the optimal value function in (8). This critic NN results in an estimated Hamiltonian:
(12) |
which we will use to design the update dynamics of the critic parameters $\hat{W}_c$. In particular, our goal is to use previously recorded data from trajectories of the plant to ensure asymptotic stability of the set of optimal critic parameters, while simultaneously enabling the incorporation of instantaneous measurements from the plant. Towards this end, we will assume enough "richness" properties in the recorded data, a notion that is captured by a relaxed (and finite-time) version of persistence of excitation (PE); see [13] and [19].
Assumption 1
Let $\{(x_k, u_k)\}_{k=1}^{M}$ be a sequence of recorded data, and define:
(13) |
There exists $\lambda>0$ such that the matrix defined in (13) has minimum eigenvalue no smaller than $\lambda$, i.e., the data is $\lambda$-sufficiently-rich ($\lambda$-SR).
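Since (13) is not reproduced above, the following sketch assumes that the recorded-data matrix is the sum of outer products of the recorded regressors, which is the standard concurrent-learning construction; under that assumption, $\lambda$-sufficient richness reduces to a minimum-eigenvalue test that can be checked offline before running the controller.

```python
import numpy as np

def richness_level(recorded_regressors):
    """Smallest eigenvalue of Lambda = sum_k omega_k omega_k^T, assuming the matrix
    in (13) is this sum of outer products (an assumption, not the paper's exact (13))."""
    Lam = sum(np.outer(w, w) for w in recorded_regressors)
    return float(np.min(np.linalg.eigvalsh(Lam)))

# Hypothetical recorded regressors; the data is lambda-SR whenever the value below is > 0.
recorded = [np.array([1.0, 0.2, 0.0]),
            np.array([0.1, 1.0, 0.3]),
            np.array([0.0, 0.4, 1.0])]
lam = richness_level(recorded)
print(f"recorded data is {lam:.3f}-sufficiently-rich" if lam > 0 else "data is not sufficiently rich")
```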
Remark 1
In this paper, we study reinforcement learning dynamics that do not make explicit usage of exploration signals with standard PE properties, which can be difficult to guarantee in practice. Instead, we assume access to samples obtained by observing the action of optimal inputs acting on the plant. Note, however, that this does not imply knowledge of the optimal control policy as a whole, but only of a finite number of demonstrations from an "expert" policy. Similar requirements commonly arise in the literature on imitation learning, or inverse reinforcement learning, and have recently been shown in practice to reduce the exploratory requirements of online reinforcement learning algorithms, under mild assumptions on the sampling of the demonstrations. For recent discussions on these topics in the discrete-time stochastic reinforcement learning setting, we refer the reader to [20] and [21].
Now, we consider the instantaneous and data-dependent errors of the estimated Hamiltonian with respect to the optimal one:
where we used the fact that, by (7), the optimal Hamiltonian is identically zero. Moreover, we define the joint instantaneous and data-dependent error as:
(14) |
where the two weighting coefficients are tunable gains. Since we are interested in designing real-time training dynamics for the estimation of the optimal parameters $W^*$, we compute the gradient of (14) with respect to $\hat{W}_c$ as follows:
(15) |
where the quantities associated with the recorded data are defined in Assumption 1.
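The structure of (14)-(15) can be sketched as follows, under the assumption that the joint error is a weighted sum of the squared instantaneous error and the squared errors evaluated on the recorded data, and that each Hamiltonian error is affine in the critic parameters with regressor $\omega$; the exact weighting and normalization used in the paper may differ.

```python
import numpy as np

def hamiltonian_error(W_hat, omega, cost_term):
    """Affine-in-parameters Hamiltonian error omega^T W_hat + cost_term, where
    cost_term collects the state and control costs (an assumed error structure)."""
    return omega @ W_hat + cost_term

def critic_error_gradient(W_hat, data_now, data_recorded, k1=1.0, k2=1.0):
    """Gradient with respect to W_hat of the assumed joint error
       E = (k1/2) e_now^2 + (k2/2) sum_k e_k^2,
    mirroring the structure of (14)-(15) under the stated assumptions."""
    omega_now, c_now = data_now
    grad = k1 * hamiltonian_error(W_hat, omega_now, c_now) * omega_now
    for omega_k, c_k in data_recorded:
        grad += k2 * hamiltonian_error(W_hat, omega_k, c_k) * omega_k
    return grad
```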
The "propagated" error to the HJB equation that results from the approximate parametrization of $V^*$ in (8) is given by:
(16) |
The following assumption is standard, and it is satisfied when the involved functions are continuous and $\Omega$ is compact.
Assumption 2
4.1 Critic Dynamics via Data-Driven Hybrid Momentum-Based Control
To design fast asymptotically stable dynamics for the estimate $\hat{W}_c$, we propose a new class of momentum-based critic dynamics inspired by accelerated gradient flows with restarting mechanisms, such as those studied in [7] and [11]. Specifically, we consider the following hybrid dynamics of the form (1), whose state collects the critic estimate $\hat{W}_c$, a momentum variable, and a timer variable, with elements:
(17a) | |||
(17b) |
where the flow map includes a tunable gain, and the momentum and timer variables are auxiliary states that are periodically reset via the jump map (17b). The dynamical system in (17) flows in continuous time according to (17a) whenever the timer variable remains below its resetting threshold. As soon as the timer hits this threshold, the algorithm (17) resets the timer variable to its initial value, as well as the momentum variable, while leaving $\hat{W}_c$ unaffected. Accordingly, after the first reset, the system exhibits periodic resets at regular intervals of time. The following assumption provides data-dependent tuning guidelines for the resetting frequency of the timer variable, which will be leveraged in our stability results.
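Because (17a) is not reproduced above, the following sketch implements a generic Nesterov-type momentum flow with periodic resets of the timer and the momentum variable, which is the mechanism described in the text; the specific flow map, gains, and reset values of (17) may differ from the assumed form used here.

```python
import numpy as np

def critic_update_with_resets(W0, grad, T=1.0, tau0=0.1, dt=1e-3, n_resets=20, k_c=1.0):
    """Momentum-based parameter update with periodic resets, in the spirit of (17).

    Flows an assumed Nesterov-type ODE (not necessarily the exact flow map (17a)):
        W_dot   = (2 / tau) * (p - W)
        p_dot   = -2 * k_c * tau * grad(W)
        tau_dot = 1 / 2
    and, every time the timer tau reaches T, resets tau to tau0 and the momentum
    variable p to the current estimate W, while leaving W itself unchanged.
    """
    W = np.asarray(W0, dtype=float)
    p = W.copy()
    for _ in range(n_resets):
        tau = tau0
        while tau < T:                        # flow according to the assumed ODE
            W = W + dt * (2.0 / tau) * (p - W)
            p = p - dt * 2.0 * k_c * tau * grad(W)
            tau += 0.5 * dt
        p = W.copy()                          # jump: restart the momentum, keep W
    return W
```

Here, grad stands in for the data-assisted gradient in (15), evaluated with both instantaneous and recorded information.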
Assumption 3
The tunable parameters of the hybrid dynamics (17) satisfy
(18) |
where $\lambda$ is the level of richness of the recorded data defined in Assumption 1.
For system (17), we study stability properties with respect to the compact set:
(19a) | ||||
(19b) |
The following theorem is the first main result of this paper. All the proofs are presented in the Appendices.
Theorem 1
Given a number of basis functions parametrizing the critic NN and a compact set of initial conditions, suppose that Assumptions 1, 2, and 3 are satisfied. Then, there exist a positive constant and class-$\mathcal{K}$ functions such that, for every solution to (17) with initial condition in that set, and using the control policy $u$ on the plant, the critic parameters satisfy
(20) |
where , for all
The presence of a residual optimal-control mismatch term in (20) represents a crucial difference with respect to previous CL adaptive dynamic programming approaches, such as those studied in [17] and [5, Ch. 4]. This term is a direct byproduct of our definition of $\omega$ in (10), its dependence on the control action $u$, and its appearance in the error gradient (15). In principle, the emergence of this term in Theorem 1 is agnostic to the particular gradient-based update dynamics for the critic NN, regardless of the inclusion or not of momentum. In particular, the larger the difference between the nominal input $u$ and the optimal feedback law $u^*$, the greater the residual error in the convergence of $\hat{W}_c$. The bound (20) describes a semi-global practical input-to-state stability property that, to the best knowledge of the authors, is novel in the context of CL-based RL. In the next section we will show that the residual error can be removed by incorporating an additional actor NN in the system.
Remark 2
In contrast to standard data-driven gradient-descent dynamics for the estimation of the optimal value function $V^*$, which can achieve exponential rates of convergence proportional to $\lambda$ (cf. [18, 13]), under the assumptions of Theorem 1 the critic update dynamics (17) can achieve exponential convergence with rates proportional to $\sqrt{\lambda}$. As shown in [9], momentum-based dynamics of this form can achieve these rates using the restarting parameter
(21) |
This property is particularly useful in settings where the level of richness of the data-set is limited, i.e., when $\lambda\ll 1$, which is common in practical applications.
Theorem 1 guarantees exponential convergence to a neighborhood of the optimal parameters $W^*$ that define the optimal value function $V^*$. Consequently, by continuity, and on compact sets, the critic estimate would converge to an approximation of $V^*$, which can be leveraged by the control law (6) to stabilize system (3). However, as noted in [22], implementing only critic structures for the control of nonlinear dynamical systems of the form (3) can lead to poor closed-loop transient performance. To tackle this issue, we consider an auxiliary dynamical system, called the actor, which will serve as an estimator of the optimal controller that acts on the plant.
5 Actor Dynamics
Using the optimal value parametrization described in Section 4, the optimal control law can be written as:
$$u^*(x) = -\frac{1}{2}R^{-1}g(x)^\top\big(\nabla\sigma(x)^\top W^* + \nabla\epsilon(x)\big). \tag{22}$$
Therefore, using the structure of (22) and an estimate $\hat{W}_a$ of $W^*$, we can implement an actor neural-network given by:
(23) |
where is defined as:
(24) |
To guarantee convergence of $\hat{W}_a$ to $W^*$, we design update dynamics for $\hat{W}_a$ based on the minimization of the error:
(25) |
which satisfies:
where
(26) |
Based on these definitions, we consider the following gradient-descent dynamics for the actor neural-network:
(27) |
where the constant in (27) is a tunable gain. A scheme representing these update dynamics is shown in Figure 2.
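A minimal sketch of one step of the actor update (27) is given below, assuming the actor error (25) is the mismatch between the control generated by the actor parameters and the control implied by the current critic parameters through (22); the exact error and gain structure of (25)-(27) may differ.

```python
import numpy as np

def actor_step(W_a, W_c, x, grad_sigma, g, R_inv, k_a=1.0, dt=1e-3):
    """One Euler step of an assumed gradient-descent actor law.

    The actor control is taken as u_a = -(1/2) R^{-1} g(x)^T grad_sigma(x)^T W_a,
    and the error is the difference with the analogous control built from W_c.
    This is a sketch of (25)-(27), not the paper's exact expressions.
    """
    D = -0.5 * R_inv @ g(x).T @ grad_sigma(x).T   # linear map from parameters to controls
    err = D @ (W_a - W_c)                         # actor/critic control mismatch
    grad = D.T @ err                              # gradient of (1/2)|err|^2 w.r.t. W_a
    return W_a - dt * k_a * grad
```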
6 Momentum-Based Actor-Critic Feedback System
Consider the closed-loop system resulting from the interconnection between the plant (3), the critic update dynamics (17), the actor update dynamics (27), and the feedback law in (23), shown in Figure 3(a) and given by:
(28a) | ||||
(28b) | ||||
(28c) |
and with flow set and jump set constructed from the sets defined in (17), respectively. Let the overall state of the closed-loop system collect the plant state, the critic states, and the actor parameters, and define:
The following is the main result of this paper.
Theorem 2
Given the vector of basis functions parametrizing the critic NN and a compact set $\Omega$ as in (8), suppose that Assumptions 1-3 are satisfied. Then, there exist positive constants and tunable parameters such that, for every solution to the closed-loop system (28) with initial condition in a given compact set, there exists a finite hybrid time after which, for all subsequent hybrid times:
for all , and
for some constant.
Theorem 2 establishes asymptotic convergence to a neighborhood of the compact set of interest from any compact set of initial conditions, modulo a residual approximation error, under a suitable choice of tunable parameters. To the best knowledge of the authors, this is the first result providing stability certificates for continuous-time actor-critic reinforcement learning using recorded data and accelerated value-function estimation dynamics with momentum. In addition, since the resulting closed-loop system (28) is a well-posed hybrid system, the stability results are robust with respect to arbitrarily small additive disturbances on the states and dynamics [12, Ch. 7].
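To see how the pieces of (28) interact, the sketch below interleaves the plant flow, an assumed momentum-based critic flow with periodic resets, and a gradient-descent actor flow in a single Euler loop. All maps, gradients, and gains are user-supplied placeholders standing in for (15), (17), (23), (26), and (27); the sketch conveys the architecture, not the exact closed-loop equations.

```python
import numpy as np

def closed_loop(f, g, grad_sigma, R_inv, x0, Wc0, Wa0, critic_grad, actor_grad,
                T=1.0, tau0=0.1, dt=1e-3, t_end=10.0, k_c=1.0, k_a=1.0):
    """Euler simulation of the interconnection (28): plant + hybrid critic + actor.

    critic_grad(Wc, x, u) and actor_grad(Wa, Wc, x) are user-supplied stand-ins for
    the gradients in (15) and (26). The critic flow uses the assumed momentum form
    discussed earlier, with resets every time the timer reaches T."""
    x, Wc, Wa = (np.asarray(v, dtype=float) for v in (x0, Wc0, Wa0))
    p, tau, t = Wc.copy(), tau0, 0.0
    while t < t_end:
        u = -0.5 * R_inv @ g(x).T @ grad_sigma(x).T @ Wa      # actor feedback (assumed form of (23))
        x = x + dt * (f(x) + g(x) @ u)                        # plant flow, cf. (28a)
        Wc = Wc + dt * (2.0 / tau) * (p - Wc)                 # critic flow (assumed, cf. (17a))
        p = p - dt * 2.0 * k_c * tau * critic_grad(Wc, x, u)
        Wa = Wa - dt * k_a * actor_grad(Wa, Wc, x)            # actor flow, cf. (27)
        tau += 0.5 * dt
        if tau >= T:                                          # periodic reset, cf. (17b)
            tau, p = tau0, Wc.copy()
        t += dt
    return x, Wc, Wa
```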
7 Numerical Example
In this section, we present a numerical experiment that illustrates our theoretical results. In particular, we study the following nonlinear control-affine plant:
(29a) | |||
(29b) | |||
(29c) |
with local state and control costs taken from [18]. The optimal value function and the optimal control law are known in closed form for this setting. Using this information, we choose the basis functions accordingly, and we implement the prescribed hybrid momentum-based dynamics in (17) for the update of the critic neural network, and the update dynamics for the actor described in (27). We obtain the results shown in Figure 3(b). We compare the results with the case in which the critic neural-network is updated with the gradient-descent dynamics of [17], and where the sufficiently rich data is a set of data points obtained by sampling the dynamics (29) on a grid around the origin. The parameters of the momentum-based dynamics in (17) are obtained by using the level of richness $\lambda$ of the data-set and the inequalities in (18), in order to ensure compliance with Assumption 3. Both reinforcement learning dynamics use the same gains. As shown in the figure, both update dynamics are able to converge to the optimal parameters, with the resulting critic describing the optimal value function $V^*$. However, the hybrid momentum-based dynamics are able to significantly improve the transient performance of the learning mechanism. The code used to implement this simulation can be found in the following repository: https://github.com/deot95/Accelerated-Continuous-Time-Approximate-Dynamic-Programming-through-Data-Assisted-Hybrid-Control
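In the spirit of the comparison reported in Figure 3(b), the following sketch runs a plain gradient-descent critic update and the momentum-with-resets update on the same recorded data and returns both parameter trajectories; the data construction, gains, and reset period are placeholders, and the exact settings of the experiment are available in the linked repository.

```python
import numpy as np

def run_comparison(critic_grad, W0, T=0.2, tau0=0.05, dt=1e-3, t_end=5.0, k_c=10.0):
    """Compare plain gradient descent with the momentum-plus-resets update on the same
    recorded data (accessed through the user-supplied critic_grad(W) callable)."""
    W_gd = np.asarray(W0, dtype=float)              # gradient-descent estimate
    W_hb = np.asarray(W0, dtype=float)              # momentum-based estimate
    p, tau, t = W_hb.copy(), tau0, 0.0
    hist_gd, hist_hb = [W_gd.copy()], [W_hb.copy()]
    while t < t_end:
        W_gd = W_gd - dt * k_c * critic_grad(W_gd)
        W_hb = W_hb + dt * (2.0 / tau) * (p - W_hb)
        p = p - dt * 2.0 * k_c * tau * critic_grad(W_hb)
        tau += 0.5 * dt
        if tau >= T:                                # periodic restart of the momentum
            tau, p = tau0, W_hb.copy()
        hist_gd.append(W_gd.copy()); hist_hb.append(W_hb.copy())
        t += dt
    return np.array(hist_gd), np.array(hist_hb)
```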
8 Conclusions
In this paper, we introduced the first stability guarantees for deterministic continuous-time actor-critic reinforcement learning with accelerated training of neural network structures. To do so, we studied a novel hybrid momentum-based estimation dynamical system for the critic NN, which estimates, in real time, the optimal value function. Our stability analysis leveraged the existence of rich recorded data taken from a finite number of samples along optimal trajectories and inputs of the system. We showed that this finite sequence of samples can be used to train the controller to achieve online optimal performance with fast transient performance. Closed-loop stability was established using tools from hybrid dynamical systems theory. Potential extensions include the study of similar accelerated training dynamics for the actor subsystem, as well as considering reinforcement learning problems in hybrid plants.
References
- [1] J. Ibarz, J. Tan, C. Finn, M. Kalakrishnan, P. Pastor, and S. Levine, “How to train your robot with deep reinforcement learning: lessons we have learned,” The International Journal of Robotics Research, vol. 40, no. 4-5, pp. 698–721, 2021.
- [2] B. R. Kiran, I. Sobh, V. Talpaert, P. Mannion, A. A. Al Sallab, S. Yogamani, and P. Pérez, “Deep reinforcement learning for autonomous driving: A survey,” IEEE Transactions on Intelligent Transportation Systems, 2021.
- [3] J. Martinez-Piazuelo, D. E. Ochoa, N. Quijano, and L. F. Giraldo, “A multi-critic reinforcement learning method: An application to multi-tank water systems,” IEEE Access, vol. 8, pp. 173227–173238, 2020.
- [4] K. G. Vamvoudakis, Y. Wan, F. L. Lewis, and D. Cansever, “Handbook of reinforcement learning and control,” 2021.
- [5] R. Kamalapurkar, P. Walters, J. Rosenfeld, and W. E. Dixon, Reinforcement learning for optimal feedback control: A Lyapunov-based approach. Springer, 2018.
- [6] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al., “Human-level control through deep reinforcement learning,” nature, vol. 518, no. 7540, pp. 529–533, 2015.
- [7] W. Su, S. Boyd, and E. Candes, “A differential equation for modeling Nesterov’s accelerated gradient method: Theory and insights,” J. of Machine Learning Research, vol. 17, no. 153, pp. 1–43, 2016.
- [8] A. Wibisono, A. C. Wilson, and M. I. Jordan, “A variational perspective on accelerated methods in optimization,” Proceedings of the National Academy of Sciences, vol. 113, no. 47, pp. E7351–E7358, 2016.
- [9] J. I. Poveda and N. Li, “Robust hybrid zero-order optimization algorithms with acceleration via averaging in continuous time,” Automatica, vol. 123, 2021.
- [10] J. I. Poveda and A. R. Teel, “The heavy-ball ode with time-varying damping: Persistence of excitation and uniform asymptotic stability,” in 2020 American Control Conference (ACC), pp. 773–778, IEEE, 2020.
- [11] B. O'Donoghue and E. J. Candès, "Adaptive restart for accelerated gradient schemes," Foundations of Computational Mathematics, vol. 15, no. 3, pp. 715–732, 2013.
- [12] R. Goebel, R. G. Sanfelice, and A. R. Teel, Hybrid Dynamical Systems: Modeling, Stability, and Robustness. Princeton University Press, 2012.
- [13] G. Chowdhary and E. Johnson, “Concurrent learning for convergence in adaptive control without persistency of excitation,” in 49th IEEE Conference on Decision and Control (CDC), pp. 3674–3679, IEEE, 2010.
- [14] R. W. Beard, G. N. Saridis, and J. T. Wen, "Galerkin approximations of the generalized Hamilton-Jacobi-Bellman equation," Automatica, vol. 33, no. 12, pp. 2159–2177, 1997.
- [15] D. Liberzon, Calculus of variations and optimal control theory. Princeton university press, 2011.
- [16] K. Hornik, M. Stinchcombe, and H. White, “Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks,” Neural networks, vol. 3, no. 5, pp. 551–560, 1990.
- [17] K. G. Vamvoudakis, M. F. Miranda, and J. P. Hespanha, “Asymptotically stable adaptive–optimal control algorithm with saturating actuators and relaxed persistence of excitation,” IEEE transactions on neural networks and learning systems, vol. 27, no. 11, pp. 2386–2398, 2015.
- [18] R. Kamalapurkar, P. Walters, and W. E. Dixon, “Model-based reinforcement learning for approximate optimal regulation,” Automatica, vol. 64, pp. 94–104, 2016.
- [19] K. J. Astrom and B. Wittenmark, Adaptive Control. Addison-Wesley Publishing Company, 1989.
- [20] K. Ciosek, “Imitation learning by reinforcement learning,” in International Conference on Learning Representations, 2022.
- [21] P. Rashidinejad, B. Zhu, C. Ma, J. Jiao, and S. Russell, “Bridging offline reinforcement learning and imitation learning: A tale of pessimism,” Advances in Neural Information Processing Systems, vol. 34, 2021.
- [22] K. Doya, “Reinforcement learning in continuous time and space,” Neural computation, vol. 12, no. 1, pp. 219–245, 2000.
- [23] C. Cai and A. R. Teel, “Characterizations of input-to-state stability for hybrid systems,” Systems & Control Letters, vol. 58, no. 1, pp. 47–53, 2009.
- [24] H. K. Khalil, Nonlinear Systems. Upper Saddle River, NJ: Prentice Hall, 2002.
- [25] D. E. Ochoa, J. I. Poveda, A. Subbaraman, G. S. Schmidt, and F. R. Pour-Safaei, “Accelerated concurrent learning algorithms via data-driven hybrid dynamics and nonsmooth odes,” in Learning for Dynamics and Control, pp. 866–878, PMLR, 2021.
Appendix A Proof of Theorem 1
A.1 Gradient of Critic Error-Function in Deviation Variables
First, using (16) together with for all , we obtain:
(30) |
Thus, using (15) and (30), we can rewrite the gradient of as follows:
(31) |
where
(32) |
and
(33) | ||||
(34) |
which, by using the fact that , satisfy:
(35a) | ||||
(35b) |
The following Lemma will be instrumental for our results.
Lemma 1
If the data is -sufficiently-rich, then there exist such that
-
Proof.
Let be arbitrary. Since, by assumption, the data is -SR it follows that:
(36) where . On the other hand, using the fact that , we obtain that:
we obtain:
where .
A.2 Lyapunov-Based Analysis
Recall the definitions from Section 4, suppose that the assumptions of Theorem 1 hold, and consider the Lyapunov candidate function given by:
(37) |
where was defined in Assumption 1 and which satisfies:
(38) | ||||
where . Now, let , and consider the time derivative of along the continuous-time evolution of the critic subsystem, i.e., . Then, by using (31) and Lemma 1, and some algebraic manipulation, can be shown to satisfy
(39) |
where
(40) |
and was defined in Section 4. Since and by means of Assumption 2, and by construction of the critic update dynamics (17), it follows that with . Hence, from (39) and using (35), we obtain that:
(41) |
where are given by:
Thus, letting , and using (38), (41):
(42a) |
On the other hand, the change of during the jumps in the update dynamics for the critic (17), satisfies:
(43) |
with which satisfies by means of Assumption 2. Together, (42) and (43), in conjunction with the quadratic bounds of (38), imply the results of Theorem 1 via [23, Prop 2.7] and the fact that for all .
Appendix B Proof of Theorem 2
B.1 Gradient of Actor Error-Function in Deviation Variables
First, note that we can write (25) as:
and consider the following Lemma, instrumental for our results.
Lemma 2
There exists such that
-
Proof.
Let be arbitrary. Then, by the definition of in (26), it follows that:
where . On the other hand, we obtain:
where , represents the Frobenius norm and where we used and .
Now, consider the Lyapunov function
(44a) | |||
(44b) |
where was defined in (37) and where we recall that . By [24, Lemma 4.3], and since is a continuous and positive definite function in , there exist such that . Hence, using (38), and the fact that the sum of class-$\mathcal{K}$ functions is in turn of class $\mathcal{K}$, there exist such that:
(45) |
Now, the time derivative of along the trajectories of (28) satisfies:
(46) |
On the other hand, making use of Lemma 2, for the time derivative of we obtain:
(47) |
Hence, using (39), (46), and (47), together with the upper bounds in (35), we obtain that the time derivative of along the trajectories of the closed-loop system satisfies:
(48) |
where
Then, for all , by using and letting , from (48), can be further upper bounded as:
(49) |
Now, pick a set of tunable parameters such that so that from (49), we obtain:
(50a) |
with
Notice that for every compact set of initial conditions for we can pick suitable to satisfy such that (50) holds for every trajectory with . Now, during jumps and do not change, and hence satisfies:
(51) |
The result of the theorem follows by using the strong decrease of during flows outside a neighborhood of described in (50), the non-increase of during jumps given in (51), by noting that, by design, the closed-loop dynamics are a well-posed HDS which experiences periodic jumps followed by intervals of flow of fixed length (cf. [25]), and by following the same arguments of [12, Prop 3.27] and [23, Prop. 2.7].