

On Lie-Bracket Averaging for a Class of Hybrid Dynamical Systems with Applications to Model-Free Control and Optimization

Mahmoud Abdelgalil and Jorge I. Poveda This work was supported in part by the grants NSF ECCS CAREER 2305756 and AFOSR YIP: FA9550-22-1-0211.M. Abdelgalil and J. I. Poveda are with the ECE Department, UCSD, CA, USA. Corresponding Author: J. I. Poveda ([email protected]).
Abstract

The stability of dynamical systems with oscillatory behaviors and well-defined average vector fields has traditionally been studied using averaging theory. These tools have also been applied to hybrid dynamical systems, which combine continuous and discrete dynamics. However, most averaging results for hybrid systems are limited to first-order methods, hindering their use in systems and algorithms that require high-order averaging techniques, such as hybrid Lie-bracket-based extremum seeking algorithms and hybrid vibrational controllers. To address this limitation, we introduce a novel high-order averaging theorem for analyzing the stability of hybrid dynamical systems with high-frequency periodic flow maps. These systems incorporate set-valued flow maps and jump maps, effectively modeling well-posed differential and difference inclusions. By imposing appropriate regularity conditions, we establish results on $(T,\varepsilon)$-closeness of solutions and semi-global practical asymptotic stability for sets. These theoretical results are then applied to three distinct applications in the context of hybrid model-free control and optimization via Lie-bracket averaging.

Keywords: Hybrid systems, averaging theory, multi-time scale dynamical systems, extremum seeking

1 Introduction

The theory of averaging has been crucial over the past century for analyzing and synthesizing systems with fast oscillatory or time-varying dynamics [41], including nonlinear controllers [27], estimation methods, and model-free optimization algorithms [6, 48, 40, 44]. Stability analyses based on averaging theory have been extensively explored for systems modeled as Ordinary Differential Equations (ODEs) [41, 33] and extended to certain Hybrid Dynamical Systems (HDS) [53, 49, 39], as well as systems wherein dithers are injected into a non-smooth/switched system, and averaging is used to obtain a continuous approximation of the dynamics [26, 24].

Stability results based on averaging theory typically rely on a well-defined “average system” whose solutions remain “close” to those of the original system over compact time domains and state space subsets. This closeness is established by transforming the original dynamics into a perturbed version of the average dynamics. By leveraging uniform stability properties via $\mathcal{KL}$ bounds, it can be shown that the trajectories of the original dynamics converge similarly to those of the average dynamics as the time scale separation increases. This method has been applied to dynamical systems modeled as ODEs [41, 27, 49] and some classes of HDS and switching systems [53, 32, 39, 15, 25, 36]. These findings have led to the development of new hybrid algorithms for extremum-seeking (ES) and adaptive systems [40, 29, 28, 3, 9].

Despite significant progress over the last decade, gaps remain between the averaging tools available for ODEs and those suitable for HDS. Most averaging results for HDS have been limited to “first-order” averaging, where the nominal stabilizing vector field is derived by neglecting higher-order perturbations. As indicated in [41, Section 2.9], [5], and [16], first-order approximations may not accurately characterize the stability of certain highly oscillatory systems, such as those in vibrational control [44] and Lie-bracket ES [16, 22]. In these cases, the stabilizing average dynamics require consideration of higher-order terms. This high-order averaging, well-known in ODE literature, is the focus of this paper, which aims to develop similar tools for hybrid dynamical systems with highly oscillatory flow maps and demonstrate their applications in control and optimization problems.

Based on this, our main contributions are the following:

(a) First, a novel high-order averaging theorem is introduced to study the “closeness of solutions” property between certain HDS with periodic flow maps and their corresponding average dynamics. In contrast to existing literature, we focus on hybrid systems with differential and difference inclusions, where stabilizing effects are determined by an average hybrid system obtained through the recursive application of averaging at high orders of the inverse frequency. These higher-order averages become essential when traditional first-order average systems fail to capture the stability properties of the system. For compact time domains and compact sets of initial conditions, it is established that each solution of the original HDS is $(T,\varepsilon)$-close to some solution of its average hybrid dynamics. This notion of closeness between solutions is needed because, in general, HDS generate solutions with discontinuities, or “jumps”, for which the standard (uniform) distance between solutions is not a suitable metric; see [49, 53] and [20, Ex. 3, pp. 43]. Moreover, the models studied in this paper admit multiple solutions from a given initial condition, a property that is particularly useful for studying families of functions of interest, as in, e.g., switching systems. To establish the closeness-of-solutions property, we introduce a new change of variables that is recursive in nature, and which leverages the structure in which the “high-order” oscillations appear in the flow map.

(b) Next, by leveraging the property of closeness of solutions between the original and the average hybrid dynamics, as well as a uniform asymptotic stability assumption on the (second-order) average hybrid dynamics, we establish a semi-global practical asymptotic stability result for the original hybrid system. For the purpose of generality, we allow the average hybrid dynamics to have a compact set that is locally asymptotically stable with respect to some basin of attraction, which recovers the entire space whenever the stability properties are actually global. By using proper indicators in the stability analysis, the original hybrid system is shown to render the same compact set semi-global practically asymptotically stable with respect to the same basin of attraction, thus effectively extending [49, Thm. 2] to high-order averaging.

(c) Finally, by exploiting the structure of the HDS, as well as the high-order averaging results, we study three novel stabilization and optimization problems that involve hybrid set-valued dynamics and highly oscillatory control: (i) distance-based synchronization in networks with switching communication topologies and control directions, extending the results of [18, 17] to oscillators on dynamic graphs operating under intermittent control; (ii) source-seeking problems in non-holonomic vehicles operating under faulty and spoofed sensors, which extends the results of [16] to vehicles operating in adversarial environments, and those of [38] to vehicles with non-holonomic models with angular actuation; and (iii) the solution of model-free global extremum seeking problems on smooth compact manifolds, which extends the recent results of [37] to Lie-bracket-based algorithms. In this manner, an important gap in the hybrid extremum-seeking control literature is also filled by integrating hybrid dynamics into Lie-bracket-based extremum-seeking algorithms.

The rest of this paper is organized as follows. Section 2 introduces the preliminaries. Section 3 presents a motivational example. Section 4 presents the main results. Applications are studied in Section 5. The proofs are presented in Section 6, and Section 7 presents the conclusions.

2 Preliminaries

2.1 Notation

The bilinear form $\langle\cdot,\cdot\rangle:\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}$ is the canonical inner product on $\mathbb{R}^{n}$, i.e., $\langle x,y\rangle=x^{\top}y$ for any two vectors $x,y\in\mathbb{R}^{n}$. Given a compact set $\mathcal{A}\subset\mathbb{R}^{n}$ and a vector $x\in\mathbb{R}^{n}$, we use $|x|_{\mathcal{A}}:=\min_{\tilde{x}\in\mathcal{A}}\|x-\tilde{x}\|_{2}$. We also use $\overline{\text{con}}(\mathcal{A})$ to denote the closure of the convex hull of $\mathcal{A}$. A set-valued mapping $M:\mathbb{R}^{p}\rightrightarrows\mathbb{R}^{n}$ is outer semicontinuous (OSC) at $z$ if for each sequence $\{z_{i},s_{i}\}\to(z,s)\in\mathbb{R}^{p}\times\mathbb{R}^{n}$ satisfying $s_{i}\in M(z_{i})$ for all $i\in\mathbb{Z}_{\geq 0}$, we have $s\in M(z)$. A mapping $M$ is locally bounded (LB) at $z$ if there exists an open neighborhood $N_{z}\subset\mathbb{R}^{p}$ of $z$ such that $M(N_{z})$ is bounded. The mapping $M$ is OSC and LB relative to a set $K\subset\mathbb{R}^{p}$ if $M$ is OSC for all $z\in K$ and $M(K):=\cup_{z\in K}M(z)$ is bounded. A function $\beta:\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}$ is of class $\mathcal{KL}$ if it is nondecreasing in its first argument, nonincreasing in its second argument, $\lim_{r\to 0^{+}}\beta(r,s)=0$ for each $s\in\mathbb{R}_{\geq 0}$, and $\lim_{s\to\infty}\beta(r,s)=0$ for each $r\in\mathbb{R}_{\geq 0}$. Throughout the paper, for two (or more) vectors $u,v\in\mathbb{R}^{n}$, we write $(u,v)=[u^{\top},v^{\top}]^{\top}$. Let $x=(x_{1},x_{2},\ldots,x_{n})\in\mathbb{R}^{n}$. Given a Lipschitz continuous function $f:\mathbb{R}^{n}\to\mathbb{R}^{m}$, we use $\partial_{x_{i}}f$ to denote the generalized Jacobian [13] of $f$ with respect to the variable $x_{i}$. A function $f$ is said to be of class $\mathcal{C}^{k}$ if its $k$th derivative is locally Lipschitz continuous.
For two vectors $x_{1},x_{2}\in\mathbb{R}^{n}$, the notation $x_{1}\otimes x_{2}$ denotes the outer product defined as $x_{1}\otimes x_{2}=x_{1}x_{2}^{\top}$. We let $\mathbb{S}^{1}:=\{(x_{1},x_{2})\in\mathbb{R}^{2}:\,x_{1}^{2}+x_{2}^{2}=1\}$ and $\mathbb{B}:=\{x\in\mathbb{R}^{n}:\|x\|\leq 1\}$, where the dimension $n$ will be clear from the context. Finally, all simulation time units may be assumed to be in seconds.

2.2 Hybrid Dynamical Systems

2.2.1 Model

We consider HDS aligned with the framework of [21], given by the following inclusions:

$x\in C,\qquad \dot{x}\in F(x),$  (1a)
$x\in D,\qquad x^{+}\in G(x),$  (1b)

where $F:\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{n}$ is called the flow map, $G:\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{n}$ is called the jump map, $C\subset\mathbb{R}^{n}$ is called the flow set, and $D\subset\mathbb{R}^{n}$ is called the jump set. We use $\mathcal{H}=(C,F,D,G)$ to denote the data of the HDS $\mathcal{H}$. We note that systems of the form (1) generalize purely continuous-time systems (obtained when $D=\emptyset$) and purely discrete-time systems (obtained when $C=\emptyset$). Of particular interest to us are time-varying systems, which can also be represented as (1) by using an auxiliary state $\tau\in\mathbb{R}_{\geq 0}$ with dynamics $\dot{\tau}=\rho$ and $\tau^{+}=\tau$, where $\rho\in\mathbb{R}_{\geq 0}$ indicates the rate of change of $\tau$. In this paper, we will always work with well-posed HDS that satisfy the following standing assumption [21, Assumption 6.5].

Assumption 1

The sets $C,D$ are closed. The set-valued mapping $F$ is OSC, LB, and for each $x\in C$ the set $F(x)$ is convex and nonempty. The set-valued mapping $G$ is OSC, LB, and for each $x\in D$ the set $G(x)$ is nonempty.
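The data $(C,F,D,G)$ in (1) lends itself to a simple simulation recipe: integrate the flow map while the state is in $C$, and apply the jump map when the state reaches $D$. The following minimal single-valued sketch is our own illustration (not code from the paper): a periodic timer with flow $\dot{\tau}=1$ on $C=[0,1]$ and reset $\tau^{+}=0$ on $D=\{1\}$, mirroring the timer states used later in (6)-(7).

```python
def simulate_timer(tau0=0.0, T_reset=1.0, dt=1e-3, t_final=3.5):
    """Forward-Euler simulation of a hybrid timer:
    flow map F(tau) = 1 on C = [0, T_reset]; jump map G(tau) = 0 on D = {T_reset}."""
    tau, t, j = tau0, 0.0, 0
    while t < t_final:
        if tau >= T_reset - 1e-9:   # state reached the jump set D
            tau = 0.0               # jump: tau+ = G(tau) = 0, j increments
            j += 1
        else:                       # state in the flow set C
            tau += dt               # flow: tau' = 1, t increases
            t += dt
    return tau, j

tau, jumps = simulate_timer()
assert jumps == 3             # resets near t = 1, 2, 3
assert abs(tau - 0.5) < 0.01  # half a period into the fourth flow interval
```

Note how continuous time $t$ advances only during flows, while the jump counter $j$ advances only at jumps, which is exactly the bookkeeping formalized by hybrid time domains in the next subsection.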

2.2.2 Properties of Solutions

Solutions to (1) are parameterized by a continuous-time index $t\in\mathbb{R}_{\geq 0}$, which increases continuously during flows, and a discrete-time index $j\in\mathbb{Z}_{\geq 0}$, which increases by one during jumps. Therefore, solutions to (1) are defined on hybrid time domains (HTDs). A set $E\subset\mathbb{R}_{\geq 0}\times\mathbb{Z}_{\geq 0}$ is called a compact HTD if $E=\cup_{j=0}^{J-1}([t_{j},t_{j+1}],j)$ for some finite sequence of times $0=t_{0}\leq t_{1}\leq\ldots\leq t_{J}$. The set $E$ is a HTD if, for all $(T,J)\in E$, $E\cap([0,T]\times\{0,\ldots,J\})$ is a compact HTD.

The following definition formalizes the notion of solution to HDS of the form (1).

Definition 1

A hybrid arc $x$ is a function defined on a HTD. In particular, $x:\text{dom}(x)\to\mathbb{R}^{n}$ is such that $x(\cdot,j)$ is locally absolutely continuous for each $j$ such that the interval $I_{j}:=\{t:(t,j)\in\text{dom}(x)\}$ has a nonempty interior. A hybrid arc $x:\text{dom}(x)\to\mathbb{R}^{n}$ is a solution to the HDS (1) if $x(0,0)\in C\cup D$, and:

  1. For all $j\in\mathbb{Z}_{\geq 0}$ such that $I_{j}$ has nonempty interior: $x(t,j)\in C$ for all $t\in I_{j}$, and $\dot{x}(t,j)\in F(x(t,j))$ for almost all $t\in I_{j}$.

  2. For all $(t,j)\in\text{dom}(x)$ such that $(t,j+1)\in\text{dom}(x)$: $x(t,j)\in D$ and $x(t,j+1)\in G(x(t,j))$.

A solution $x$ is maximal if it cannot be further extended. A solution $x$ is said to be complete if $\text{length}(\text{dom}(x))=\infty$. $\square$

In this paper, we also work with an “inflated” version of (1), which is instrumental for robustness analysis [21, Def. 6.27].

Definition 2

For $\delta>0$, the $\delta$-inflation $\mathcal{H}_{\delta}$ of the HDS $\mathcal{H}$ with data (1) is given by $\mathcal{H}_{\delta}:=(C_{\delta},D_{\delta},F_{\delta},G_{\delta})$, where the sets $C_{\delta},D_{\delta}$ are defined as:

$C_{\delta}:=\left\{x\in\mathbb{R}^{n}:\left(\{x\}+\delta\mathbb{B}\right)\cap C\neq\emptyset\right\},$  (2a)
$D_{\delta}:=\left\{x\in\mathbb{R}^{n}:\left(\{x\}+\delta\mathbb{B}\right)\cap D\neq\emptyset\right\},$  (2b)

and the set-valued mappings $F_{\delta},G_{\delta}$ are defined as:

$F_{\delta}(x):=\overline{\text{con}}\,F((x+\delta\mathbb{B})\cap C)+\delta\mathbb{B},$  (2c)
$G_{\delta}(x):=\{v\in\mathbb{R}^{n}:v\in g+\delta\mathbb{B},\,g\in G((x+\delta\mathbb{B})\cap D)\},$  (2d)

for all $x\in\mathbb{R}^{n}$. $\square$

The following definition, corresponding to [21, Def. 5.23], will be used in this paper.

Definition 3

Given $T,\rho>0$, two hybrid arcs $x_{1}:\text{dom}(x_{1})\to\mathbb{R}^{n}$ and $x_{2}:\text{dom}(x_{2})\to\mathbb{R}^{n}$ are said to be $(T,\rho)$-close if:

  • for each $(t,j)\in\text{dom}(x_{1})$ with $t+j\leq T$ there exists $s$ such that $(s,j)\in\text{dom}(x_{2})$, with $|t-s|\leq\rho$ and $|x_{1}(t,j)-x_{2}(s,j)|\leq\rho$;

  • for each $(t,j)\in\text{dom}(x_{2})$ with $t+j\leq T$ there exists $s$ such that $(s,j)\in\text{dom}(x_{1})$, with $|t-s|\leq\rho$ and $|x_{2}(t,j)-x_{1}(s,j)|\leq\rho$. $\square$
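Because two hybrid arcs may jump at slightly different continuous times, Definition 3 compares states within the same jump index $j$ up to a $\rho$-window in $t$. The sketch below is our own discretized approximation of this check for finitely sampled scalar arcs stored as lists of $(t,j,x)$ triples (a hypothetical encoding, not from the paper); it is conservative, since the definition quantifies over all of $\text{dom}(x_{2})$, not just sample points.

```python
def close_one_way(arc1, arc2, T, rho):
    """One direction of Definition 3 for sampled scalar hybrid arcs."""
    for (t, j, x) in arc1:
        if t + j > T:
            continue  # only points with t + j <= T are constrained
        if not any(j2 == j and abs(t - s) <= rho and abs(x - y) <= rho
                   for (s, j2, y) in arc2):
            return False
    return True

def t_rho_close(arc1, arc2, T, rho):
    """Symmetric (T, rho)-closeness check."""
    return close_one_way(arc1, arc2, T, rho) and close_one_way(arc2, arc1, T, rho)

# Two arcs that jump at slightly different times (t = 1.0 vs t = 0.95):
arc1 = [(0.0, 0, 1.0), (1.0, 0, 0.5), (1.0, 1, 0.0)]
arc2 = [(0.0, 0, 1.05), (0.95, 0, 0.5), (0.95, 1, 0.02)]

assert t_rho_close(arc1, arc2, T=2.0, rho=0.1)       # close for rho = 0.1
assert not t_rho_close(arc1, arc2, T=2.0, rho=0.01)  # not close for rho = 0.01
```

The example illustrates why the uniform distance fails for hybrid arcs: at $t=1.0$ the two arcs are far apart pointwise in $t$ (one has jumped, the other has not), yet they are $(T,\rho)$-close for a modest $\rho$.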

2.2.3 Stability Notions

To study the stability properties of the HDS (1), we make use of the following notion, which is instrumental for the study of “local” stability properties.

Definition 4

Let $\mathcal{A}$ be a compact set contained in an open set $\mathcal{B}_{\mathcal{A}}$. A function $\omega:\mathcal{B}_{\mathcal{A}}\to\mathbb{R}_{\geq 0}$ is said to be a proper indicator function for $\mathcal{A}$ on $\mathcal{B}_{\mathcal{A}}$ if $\omega$ is continuous, $\omega(x)=0$ if and only if $x\in\mathcal{A}$, and if the sequence $\{x_{i}\}_{i=1}^{\infty}$, $x_{i}\in\mathcal{B}_{\mathcal{A}}$, approaches the boundary of $\mathcal{B}_{\mathcal{A}}$ or is unbounded, then the sequence $\{\omega(x_{i})\}_{i=1}^{\infty}$ is also unbounded. $\square$

The use of proper indicators is common when studying (uniform) local stability properties [45]. The following definition is borrowed from [21, Def. 7.10 and Thm. 7.12].

Definition 5

A compact set $\mathcal{A}\subset\mathbb{R}^{n}$ contained in an open set $\mathcal{B}_{\mathcal{A}}$ is said to be uniformly asymptotically stable (UAS) with basin of attraction $\mathcal{B}_{\mathcal{A}}$ for the HDS (1) if for every proper indicator $\omega$ on $\mathcal{B}_{\mathcal{A}}$ there exists $\beta\in\mathcal{KL}$ such that each solution $x$ to (1) starting in $\mathcal{B}_{\mathcal{A}}$ satisfies

$\omega(x(t,j))\leq\beta(\omega(x(0,0)),t+j),$  (3)

for all $(t,j)\in\text{dom}(x)$. If $\mathcal{B}_{\mathcal{A}}$ is the whole space, then one can take $\omega(x)=|x|_{\mathcal{A}}$, and in this case the set $\mathcal{A}$ is said to be uniformly globally asymptotically stable (UGAS) for the HDS (1). $\square$

3 Motivational Example

The problem of target seeking in mobile robotic systems is ubiquitous across various engineering applications, including rescue missions [8], gas leak detection, and general autonomous vehicles performing exploration missions in hazardous environments that could be too dangerous for humans [55, 38]. Different algorithms have been considered for resolving source-seeking problems in mobile robots [55]. Naturally, these algorithms rely on real-time exploration and exploitation mechanisms that are robust with respect to different types of disturbances, including small bounded measurement noise and implementation errors [38], as well as certain structured and bounded additive perturbations [47]. Nevertheless, in many realistic applications, the seeking robots operate in environments where their sensors rarely have continuous and perfect access to measurements of the environment. Common reasons for faulty measurements include intermittent communication networks, interference due to obstacles or external environmental conditions, to name just a few [30, 19]. In addition to encountering faulty and intermittent measurements, autonomous robots operating in adversarial environments might also be subject to malicious spoofing attacks that deliberately modify some of the signals used by the controllers. In such situations, it is natural to ask whether the mobile robots can still complete their missions, and under what conditions (if any) such success can be guaranteed.

To study the above question, we consider a typical model of a mobile robot studied in the context of source seeking problems: a planar kinematic model of a non-holonomic vehicle, with equations

$\dot{x}_{1}=u\,x_{3},\qquad \dot{x}_{2}=u\,x_{4},\qquad \dot{x}_{3}=\Omega x_{4},\qquad \dot{x}_{4}=-\Omega x_{3},$

where the vector $x_{p}:=(x_{1},x_{2})\in\mathbb{R}^{2}$ models the position of the vehicle in the plane, the vector $x_{3,4}:=(x_{3},x_{4})\in\mathbb{S}^{1}$ captures the orientation of the vehicle, $u$ is the forward velocity, and $\Omega$ is the angular velocity. We use this kinematic model to simplify the discussion, since our focus is on modeling the effect of intermittent measurements and spoofing attacks. Nevertheless, this model is ubiquitous in the source-seeking literature; see [7], [14], [38], [55], and [50].

The main objective of the vehicle is to stabilize its position at a particular unknown target point $x_{p}^{\star}\in\mathbb{R}^{2}$ using only real-time measurements of a distance-like signal $J(x_{p})$, which for simplicity is assumed to be $\mathcal{C}^{1}$ and strongly convex, with minimizer at the point $x_{p}^{\star}$ and a globally Lipschitz gradient (these assumptions will be relaxed later in Section 5.1.2). In other words, the vehicle seeks to solve the problem $\min_{x_{p}\in\mathbb{R}^{2}}J(x_{p})$ without knowledge of the mathematical form of $J$. Since nonholonomic vehicles cannot be simultaneously stabilized in position and orientation using smooth feedback, we let the vehicle oscillate continuously using a predefined constant angular velocity $\Omega>0$. This model can also capture vehicles that have no directional control over $\Omega$, such as quad-copters that have lost one propeller, leading to a constant yaw rate during hover that can be affected, though not eliminated or reversed, by modulating the remaining propellers [51].

Figure 1: Automaton-like representation of switching seeking dynamics in nonholonomic vehicles with multiple operating modes: under spoofing ($z_{1}=1$), under no measurement ($z_{1}=2$), and under nominal operation ($z_{1}=3$).

Under “nominal” operating conditions, and to stabilize (a neighborhood of) the target $x_{p}^{\star}$, we can consider the following feedback controller, which only uses real-time measurements of the potential field $J$ evaluated at the current position:

$u(x_{p},\tau_{2})=\varepsilon^{-1}\cos\left(\tau_{2}+J(x_{p})\right),\qquad \dot{\tau}_{2}=\frac{1}{\varepsilon^{2}},\qquad \Omega=\frac{1}{\varepsilon},$

where ε>0\varepsilon>0 is a tunable parameter. To capture the effect of having intermittent measurements and malicious spoofing attacks in the algorithm, we re-write the nominal closed-loop system as the following mode-dependent dynamical system:

$\dot{x}_{1}=\varepsilon^{-1}x_{3}u_{z_{1}}(x_{p},\tau_{2}),\qquad \dot{x}_{3}=\varepsilon^{-1}x_{4},$  (4a)
$\dot{x}_{2}=\varepsilon^{-1}x_{4}u_{z_{1}}(x_{p},\tau_{2}),\qquad \dot{x}_{4}=-\varepsilon^{-1}x_{3},$  (4b)
$\dot{\tau}_{2}=\varepsilon^{-2},\qquad \dot{z}_{1}=0,$  (4c)

where $z_{1}\in\mathcal{Q}:=\{1,2,3\}$ is a logic mode that is kept constant during the evolution of (4). In this case, the map $u_{z_{1}}$ is

$u_{z_{1}}(x_{p},\tau_{2})=\cos\left(\tau_{2}+(z_{1}-2)J(x_{p})\right),$  (4d)

where $z_{1}=3$ describes the nominal operation mode, $z_{1}=2$ describes the mode in which no measurement of $J$ is available to the vehicle, and $z_{1}=1$ corresponds to the mode in which the control algorithm is under a spoofing attack that reverses the sign of the measurements of $J$. In this way, the dynamics of the vehicle switch in real time between the three operating modes as the robot seeks the target $x_{p}^{\star}$. Note that, at every switching instant, the states $x=(x_{1},x_{2},x_{3},x_{4})$ remain constant, and $z_{1}$ jumps according to $z_{1}^{+}\in\mathcal{Q}\backslash\{z_{1}\}$, i.e., the system is allowed to switch from its current mode to any of the other two modes. Figure 1 illustrates this switching behavior. To study the stability properties of the system, we consider the coordinate transformation:

$\begin{pmatrix}x_{3}\\ x_{4}\end{pmatrix}=\begin{pmatrix}\cos\left(\varepsilon^{-1}t\right)&\sin\left(\varepsilon^{-1}t\right)\\ -\sin\left(\varepsilon^{-1}t\right)&\cos\left(\varepsilon^{-1}t\right)\end{pmatrix}\begin{pmatrix}\tilde{x}_{3}\\ \tilde{x}_{4}\end{pmatrix},$

which is defined via a rotation matrix and is therefore invertible for all $t\geq 0$. Using $\tau_{1}=\varepsilon^{-1}t$, the transformation leads to the new system

$\dot{x}_{p}=\varepsilon^{-1}\underbrace{\begin{pmatrix}\cos\left(\tau_{1}\right)&\sin\left(\tau_{1}\right)\\ -\sin\left(\tau_{1}\right)&\cos\left(\tau_{1}\right)\end{pmatrix}}_{A(\tau_{1})}\begin{pmatrix}\tilde{x}_{3}\\ \tilde{x}_{4}\end{pmatrix}u_{z_{1}}(x_{p},\tau_{2}),$  (5a)
$\dot{\tilde{x}}_{3,4}=0,\qquad \dot{\tau}_{1}=\varepsilon^{-1},\qquad \dot{\tau}_{2}=\varepsilon^{-2},\qquad \dot{z}_{1}=0,$  (5b)

with jumps $x^{+}=x$ and $z_{1}^{+}\in\mathcal{Q}\backslash\{z_{1}\}$. In (5), the vector $\tilde{x}_{3,4}$ is now constant, but still restricted to the compact set $\mathbb{S}^{1}$. For this system, we can investigate the stability properties of the state $\tilde{x}=(x_{p},\tilde{x}_{3,4})$ with respect to the set $\{x_{p}^{\star}\}\times\mathbb{S}^{1}$. To do this, we model the closed-loop system as a HDS of the form (1), with the switching signal $z_{1}$ generated by an extended auxiliary hybrid automaton with states $z:=(z_{1},z_{2},z_{3})\in\mathbb{R}_{\geq 0}^{3}$ and set-valued dynamics

$z\in C_{z},\qquad \dot{z}\in\mathcal{T}_{F}(z):=\begin{pmatrix}\{0\}\\ \left[0,\eta_{1}\right]\\ \left[0,\eta_{2}\right]-\mathbb{I}_{\mathcal{Q}_{u}}(z_{1})\end{pmatrix},$  (6a)
$z\in D_{z},\qquad z^{+}\in\mathcal{T}_{G}(z):=\begin{pmatrix}\mathcal{Q}\backslash\{z_{1}\}\\ z_{2}-1\\ z_{3}\end{pmatrix},$  (6b)

where $\eta_{1}>0$, $\eta_{2}\in[0,1)$, $\mathbb{I}_{\mathcal{Q}_{u}}(\cdot)$ is the classic indicator function, the set of logic modes has been partitioned as $\mathcal{Q}=\mathcal{Q}_{s}\cup\mathcal{Q}_{u}$, with $\mathcal{Q}_{s}=\{3\}$ and $\mathcal{Q}_{u}=\{1,2\}$, and the sets $C_{z},D_{z}$ are given by

$C_{z}=\mathcal{Q}\times[0,N_{\circ}]\times[0,T_{\circ}],\qquad D_{z}=\mathcal{Q}\times[1,N_{\circ}]\times[0,T_{\circ}],$  (7)

for some constants $N_{\circ}\geq 1$ and $T_{\circ}>0$. In this system, the states $z_{2}$ and $z_{3}$ can be seen as timers used to coordinate when the system jumps.

Figure 2: Trajectories of the vehicle starting from two initial conditions $(x_{1}(0),x_{2}(0))=(-4,4)$ (blue) and $(x_{1}(0),x_{2}(0))=(-4,-4)$ (red), for $\varepsilon=1/\sqrt{10\pi}$, $J(x_{p})=0.5x_{1}^{2}+0.5x_{2}^{2}$, and for two different switching signals $z_{1}(t)$, shown in the left plot. The black dots indicate the initial conditions.

In particular, every hybrid arc generated by the hybrid automaton (6) satisfies the following two properties for any two times $t_{2}>t_{1}$ in its domain [21, Ex. 2.15], [40, Lemma 7]:

$N_{\sharp}(t_{1},t_{2})\leq\eta_{1}(t_{2}-t_{1})+N_{\circ},$  (8a)
$T_{\sharp}(t_{1},t_{2})\leq\eta_{2}(t_{2}-t_{1})+T_{\circ},$  (8b)

where $N_{\sharp}(t_{1},t_{2})$ is the total number of jumps during the time interval $(t_{1},t_{2})$, and $T_{\sharp}(t_{1},t_{2})=\int_{t_{1}}^{t_{2}}\mathbb{I}_{\mathcal{Q}_{u}}(z_{1}(t))\,dt$ is the total activation time, during flows of the system over $[t_{1},t_{2}]$, of the modes in the set $\mathcal{Q}_{u}$. In fact, condition (8a) imposes an average dwell-time (ADT) constraint [23] on the switches of $z_{1}$, while condition (8b) imposes an average activation time (AAT) constraint [54] on the time spent in modes 1 and 2. These conditions are parameterized by the tunable constants $\eta_{1}$ and $\eta_{2}$, respectively. Note that (8a) immediately rules out Zeno behavior. By modeling the switching signals as solutions of the hybrid automaton (6), we can study the closed-loop system without pre-specifying the switching times of $z_{1}$, which are generally unknown. Instead, we consider any possible solution $(\tilde{x},z)$ generated by the interconnection (5)-(7).
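For a concrete switching signal, the ADT bound (8a) is straightforward to audit: count the jumps on each window and compare against $\eta_{1}(t_{2}-t_{1})+N_{\circ}$. The sketch below uses hypothetical jump times of our own choosing (not data from the paper) to show a signal that satisfies the bound and a Zeno-like burst that violates it.

```python
def adt_satisfied(jump_times, t1, t2, eta1, N0):
    """Check the average dwell-time bound N#(t1,t2) <= eta1*(t2 - t1) + N0."""
    n_jumps = sum(1 for tj in jump_times if t1 < tj < t2)
    return n_jumps <= eta1 * (t2 - t1) + N0

# A signal jumping once per second satisfies the bound with eta1 = 1, N0 = 1 ...
regular = [1.0, 2.0, 3.0, 4.0, 5.0]
assert adt_satisfied(regular, 0.0, 5.5, eta1=1.0, N0=1)

# ... but a burst of 5 jumps in a 0.6-second window violates the same bound,
# since 5 > 1.0 * 0.6 + 1 = 1.6.
burst = [1.0, 1.1, 1.2, 1.3, 1.4]
assert not adt_satisfied(burst, 0.9, 1.5, eta1=1.0, N0=1)
```

The AAT condition (8b) admits the same kind of audit, with the jump count replaced by the integral of the indicator of $\mathcal{Q}_{u}$ along the flow.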

While the $x_{p}$-dynamics in (5) are periodic with high-frequency oscillations, existing averaging tools in the literature [49] do not capture the stabilizing effect of the control law (4d). This can be observed by introducing the new time scale $s=t/\varepsilon$, which leads to the following dynamics (in the $s$-time scale):

$\dot{x}_{p}=A(\tau_{1})\tilde{x}_{3,4}u_{z_{1}}(x_{p},\tau_{2}),\qquad \dot{\tau}_{1}=1,\qquad \dot{\tau}_{2}=\frac{1}{\varepsilon}.$  (9)

Using (4d) and the identity $\cos(\alpha+\beta)=\cos(\alpha)\cos(\beta)-\sin(\alpha)\sin(\beta)$, it is easy to see that for any constant vector $\tilde{x}_{3,4}\in\mathbb{S}^{1}$ the average of the vector field (9) along the fast-varying state $\tau_{2}$ is equal to zero. Since the first-order average vector field is zero, and hence only marginally stable, no stability conclusions can be obtained from a direct application of first-order averaging theory. Indeed, in this case, the stabilizing effects are dictated by higher-order terms that are neglected in the first-order average. Similar obstacles emerge when using first-order averaging theory in the analysis of Lie-bracket-based extremum seeking controllers [16] and in certain vibrational controllers [44]. On the other hand, as we will show in the next section, by using second-order averaging theory for HDS, we can obtain the following second-order average hybrid dynamics of (5)-(7), with states $(\bar{x},\bar{z})$:

$(\bar{x},\bar{z})\in\left(\mathbb{R}^{2}\times\mathbb{S}^{1}\right)\times C_{z},\qquad \begin{cases}\dot{\bar{x}}_{p}=\dfrac{(2-\bar{z}_{1})}{2}\nabla J(\bar{x}_{p}),\\ \dot{\bar{x}}_{3,4}=0,\\ \dot{\bar{z}}\in\mathcal{T}_{F}(\bar{z}),\end{cases}$
$(\bar{x},\bar{z})\in\left(\mathbb{R}^{2}\times\mathbb{S}^{1}\right)\times D_{z},\qquad \begin{cases}\bar{x}^{+}=\bar{x},\\ \bar{z}^{+}\in\mathcal{T}_{G}(\bar{z}).\end{cases}$
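The claim that the first-order average vanishes can be verified numerically: with $x_{p}$, $\tau_{1}$, and $\tilde{x}_{3,4}$ frozen, the flow map in (9) depends on the fast state $\tau_{2}$ only through $\cos(\tau_{2}+c)$, where $c=(z_{1}-2)J(x_{p})$ plays the role of a fixed phase offset, and its mean over one period is zero. A minimal check (our own, not from the paper):

```python
import math

def average_over_period(c, n=10000):
    """Riemann-sum average of tau2 -> cos(tau2 + c) over one period [0, 2*pi)."""
    return sum(math.cos(2 * math.pi * k / n + c) for k in range(n)) / n

# For any fixed phase offset c (playing the role of (z1 - 2) * J(xp)),
# the first-order average along the fast state tau2 vanishes.
for c in (0.0, 0.7, 2.5):
    assert abs(average_over_period(c)) < 1e-9
```

This is exactly why first-order averaging is inconclusive here: the stabilizing term only appears at second order, as the gradient term in the average hybrid dynamics above.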

In turn, under the smoothness and strong convexity assumptions on $J$, this HDS renders the compact set $\mathcal{A}:=\{x_{p}^{\star}\}\times\mathbb{S}^{1}\times\mathcal{Q}\times[0,N_{\circ}]\times[0,T_{\circ}]$ UGAS for $\eta_{1}>0$ and $\eta_{2}$ sufficiently small (see Section 5-A-2). By using Theorem 2 in Section 4, we will be able to conclude that, for any compact set $K_{0}\subset\mathbb{R}^{2}$ and any $\nu>0$, there exists $\varepsilon^{*}>0$ such that for all $\varepsilon\in(0,\varepsilon^{*})$, every trajectory of (5) that starts in $K_{0}$ satisfies

$|x_{p}(t,j)-x_{p}^{\star}|\leq\beta\left(|x_{p}(0,0)-x_{p}^{\star}|,t+j\right)+\nu,$

for all $(t,j)$ in the domain of the solution. A more general result will be stated later in Section 5.1 for a class of hybrid Lie-bracket averaging-based systems. In the meantime, we illustrate in Figure 2 (in blue) the convergence properties of the vehicle under the controller (4d). The inset also shows the evolution in time of the switching signal $z_{1}$, indicating which mode is active at each time. As observed, the vehicle operating under spoofing and intermittent sensing ($T_{\sharp}(0,15)\approx 5.98$) is able to successfully complete the source-seeking mission. We also show in red an unstable trajectory of the vehicle under the same controller and a more frequent spoofing attack ($T_{\sharp}(0,15)\approx 11.34$) that does not satisfy (8b) (i.e., with a larger value of $\eta_{2}$), also shown in the inset. For both scenarios depicted in Figure 2, the switching signals $z_{1}$ are generated as solutions of the hybrid automaton (6).
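The mode partition $\mathcal{Q}_{s}=\{3\}$, $\mathcal{Q}_{u}=\{1,2\}$ can be read directly off the average $\bar{x}_{p}$-dynamics: the factor $(2-\bar{z}_{1})/2$ equals $-1/2$ in the nominal mode (gradient descent), $0$ under lost measurements, and $+1/2$ under spoofing (gradient ascent). A minimal sketch of each frozen-mode average flow, using the quadratic cost $J(x_{p})=0.5\|x_{p}\|^{2}$ from Figure 2 (our own Euler integration, not code from the paper):

```python
def simulate_average_mode(z1, x0, dt=1e-3, t_final=5.0):
    """Euler-integrate the averaged position dynamics
    dx/dt = ((2 - z1)/2) * grad J(x), with J(x) = 0.5*||x||^2 so grad J(x) = x."""
    x = list(x0)
    for _ in range(int(t_final / dt)):
        x = [xi + dt * ((2 - z1) / 2) * xi for xi in x]
    return x

norm = lambda v: sum(vi * vi for vi in v) ** 0.5

x0 = (-4.0, 4.0)                           # initial condition from Figure 2
x_nominal = simulate_average_mode(3, x0)   # dx/dt = -x/2: contracts to the target
x_blind   = simulate_average_mode(2, x0)   # dx/dt = 0: marginally stable
x_spoofed = simulate_average_mode(1, x0)   # dx/dt = +x/2: diverges

assert norm(x_nominal) < norm(x0)
assert abs(norm(x_blind) - norm(x0)) < 1e-9
assert norm(x_spoofed) > norm(x0)
```

This is the intuition behind the AAT condition (8b): stability of the switched average system hinges on the contracting nominal mode being active “sufficiently often” relative to the neutral and expanding modes.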

Remark 1

The results and tools discussed in this section (extended in Section 5.1) can help practitioners evaluate the resilience of non-holonomic vehicle seeking dynamics against intermittent measurements and spoofing attacks. By characterizing the parameters (η1,η2)(\eta_{1},\eta_{2}) and ε\varepsilon that ensure stable (practical) source seeking, practitioners can design effective detection and rejection mechanisms to ensure the control system operates in its nominal mode “sufficiently often”. \square

The motivational problem in this section was modeled using a high-amplitude, high-frequency oscillatory system, whose averaged system corresponds to a switching system. However, the results presented in this paper are applicable to a broader class of hybrid systems, with switching systems representing just one specific case.

4 Second-Order Averaging for a Class of Hybrid Dynamical Systems

We consider a subclass of HDS (1), given by

((x,z),τ)\displaystyle\left(({x},z),\tau\right) C×02,\displaystyle\in C\times\mathbb{R}_{\geq 0}^{2}, {(x˙z˙)Fε(x,z,τ1,τ2),τ˙1=ε1,τ˙2=ε2,\displaystyle\begin{cases}\left(\begin{array}[]{c}\dot{{x}}\\ \dot{z}\end{array}\right)&\in F_{\varepsilon}({x},{z},\tau_{1},\tau_{2}),\\ \hphantom{\Big{[}}\begin{array}[]{c}\dot{\tau}_{1}\end{array}\hphantom{\Big{]}}&=\varepsilon^{-1},\\ \hphantom{\Big{[}}\begin{array}[]{c}\dot{\tau}_{2}\end{array}\hphantom{\Big{]}}&=\varepsilon^{-2},\end{cases} (10a)
((x,z),τ)\displaystyle\left(({x},{z}),\tau\right) D×02,\displaystyle\in D\times\mathbb{R}_{\geq 0}^{2}, {(x+z+)G(x,z),τ1+=τ1,τ2+=τ2,\displaystyle\begin{cases}\left(\begin{array}[]{c}{x}^{+}\\ {z}^{+}\end{array}\right)&\in G({x},{z}),\\ \hphantom{\Big{[}}\begin{array}[]{c}\tau_{1}^{+}\end{array}\hphantom{\Big{]}}&=\tau_{1},\\ \hphantom{\Big{[}}\begin{array}[]{c}\tau_{2}^{+}\end{array}\hphantom{\Big{]}}&=\tau_{2},\end{cases} (10b)

where ε>0\varepsilon>0 is a small parameter, xn1x\in\mathbb{R}^{n_{1}}, zn2z\in\mathbb{R}^{n_{2}}, n1+n2=nn_{1}+n_{2}=n, C,DnC,D\subset\mathbb{R}^{n}, τi0\tau_{i}\in\mathbb{R}_{\geq 0}, for i{1,2}i\in\{1,2\}, and τ=(τ1,τ2)02\tau=(\tau_{1},\tau_{2})\in\mathbb{R}_{\geq 0}^{2}. The set-valued mapping FεF_{\varepsilon} is given by

Fε(x,z,τ1,τ2):={fε(x,z,τ1,τ2)}×Φ(z),\displaystyle F_{\varepsilon}({x},{z},\tau_{1},\tau_{2}):=\big{\{}f_{\varepsilon}(x,z,\tau_{1},\tau_{2})\big{\}}\times\Phi({z}), (11)

where fεf_{\varepsilon} is a continuous function of the form

fε(x,z,τ1,τ2)=k=12εk2ϕk(x,z,τ1,τ2).\displaystyle f_{\varepsilon}(x,z,\tau_{1},\tau_{2})=\sum_{k=1}^{2}\varepsilon^{k-2}{\phi}_{k}({x},{z},\tau_{1},\tau_{2}). (12)

The functions {ϕk}k=12\{\phi_{k}\}_{k=1}^{2}, the sets C,DnC,D\subset\mathbb{R}^{n}, and the set-valued mappings Φ\Phi, GG are application-dependent. Their regularity properties are characterized by the following assumption:

Assumption 2

CC and DD are closed. Φ\Phi is OSC, LB, and convex-valued relative to CC; GG is OSC and LB relative to DD; for all zCz\in C the set Φ(z)\Phi(z) is nonempty; and for all (x,z)D(x,z)\in D the set G(x,z)G(x,z) is nonempty. \square

For the function fεf_{\varepsilon} defined in (12) we will ask for stronger regularity conditions in terms of smoothness and periodicity of the mappings ϕk\phi_{k} in a small inflation of CC (cf. Eq. (2a)).

Assumption 3
\thlabel

asmp:A1There exist δ>0\delta^{*}>0 and Tk>0T_{k}>0, for k{1,2}k\in\{1,2\}, such that for all δ[0,δ]\delta\in[0,\delta^{*}] the following holds:

  1. (a)

    Smoothness: For k{1,2}k\in\{1,2\}, the functions ϕk:Cδ×+2n1\phi_{k}:C_{\delta}\times\mathbb{R}^{2}_{+}\to\mathbb{R}^{n_{1}} satisfy: ϕk𝒞2k\phi_{k}\in\mathcal{C}^{2-k} with respect to (x,τ1)(x,\tau_{1}), ϕk𝒞0\phi_{k}\in\mathcal{C}^{0} with respect to zz, and ϕk\phi_{k} is continuous in τ2\tau_{2}.

  2. (b)

    Periodicity: For k{1,2}k\in\{1,2\}, ϕk\phi_{k} satisfies

    ϕk(x,z,τ1+T1,τ2)\displaystyle{\phi}_{k}({x},{z},\tau_{1}+T_{1},\tau_{2}) =ϕk(x,z,τ1,τ2),\displaystyle={\phi}_{k}({x},{z},\tau_{1},\tau_{2}),
    ϕk(x,z,τ1,τ2+T2)\displaystyle{\phi}_{k}({x},{z},\tau_{1},\tau_{2}+T_{2}) =ϕk(x,z,τ1,τ2),\displaystyle={\phi}_{k}({x},{z},\tau_{1},\tau_{2}),

    for all ((x,z),τ)Cδ×+2((x,z),\tau)\in C_{\delta}\times\mathbb{R}^{2}_{+}.

  3. (c)

    Zero-Average of ϕ1\phi_{1} in τ2\tau_{2}:

    0T2ϕ1(x,z,τ1,s2)𝑑s2=0,{\textstyle\int_{0}^{T_{2}}}{\phi}_{1}({x},{z},\tau_{1},s_{2})\,ds_{2}=0,

    for all ((x,z),τ)Cδ×+2((x,z),\tau)\in C_{\delta}\times\mathbb{R}^{2}_{+}. \square

Remark 2

The periodicity and smoothness assumptions on ϕk\phi_{k} are standard [41, Ch. 2]. Periodicity is particularly important in higher-order averaging as it simplifies the computations of the near-identity transformations required to establish the closeness-of-solutions property [41, Section 2.9]. Without the periodicity assumption, rigorous proofs become significantly more complex. For example, in the general time-varying case, higher-order averaging necessitates the use of general-order functions of the small parameter rather than polynomials. This increases the complexity of the asymptotic approximations that can be achieved [41, Theorem 4.5.4] without providing additional practical benefits, since in the applications of interest, the periodicity assumption holds by design. \square

To study the stability properties of the HDS (10), we first characterize its 2nd{}^{\text{nd}}-order average HDS. To define this system, for each ((x,z),τ)Cδ×+2((x,z),\tau)\in C_{\delta}\times\mathbb{R}^{2}_{+} we define the following auxiliary function:

u1(x,z,τ):=0τ2ϕ1(x,z,τ1,s2)𝑑s2,\displaystyle u_{1}(x,z,\tau):=\int_{0}^{\tau_{2}}\phi_{1}(x,z,\tau_{1},s_{2})\,ds_{2}, (13a)
as well as the Lie-bracket between (13a) and ϕ1\phi_{1}:
[u1,ϕ1]x()=(xϕ1u1xu1ϕ1)(),\left[u_{1},\phi_{1}\right]_{x}(\cdot)=\left(\partial_{x}\phi_{1}\cdot u_{1}-\partial_{x}u_{1}\cdot\phi_{1}\right)(\cdot), (13b)

where we omitted the function arguments to simplify notation. Using (13), we introduce the 2nd{}^{\text{nd}}-order average mapping of fεf_{\varepsilon}, denoted f¯:Cδn1\bar{f}:C_{\delta}\to\mathbb{R}^{n_{1}}, given by:

f¯(θ):=\displaystyle\bar{f}(\theta):= 1T1T20T10T2(ϕ2(θ,τ)+12[u1,ϕ1]x(θ,τ))𝑑τ2𝑑τ1,\displaystyle\frac{1}{T_{1}T_{2}}\int_{0}^{T_{1}}\int_{0}^{T_{2}}\left(\phi_{2}(\theta,\tau)+\frac{1}{2}\left[u_{1},\phi_{1}\right]_{x}(\theta,\tau)\right)\,d\tau_{2}\,d\tau_{1}, (14)

where θ:=(x,z)\theta:=(x,z). The following definition leverages f¯()\bar{f}(\cdot).
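To make (13)–(14) concrete, the following Python sketch (our own illustration; the scalar maps b1, b2 are hypothetical) evaluates f̄ numerically for ϕ2 ≡ 0, no τ1-dependence, and ϕ1(x,τ2) = cos(τ2)·b1(x) + sin(τ2)·b2(x), and compares it against the closed form f̄(x) = −(1/2)(∂x b1 · b2 − ∂x b2 · b1) obtained by direct integration over one period.

```python
import math

# Numerical sketch of the 2nd-order average mapping (14) for a scalar state
# (n1 = 1), no tau1-dependence, phi2 = 0, T2 = 2*pi, and the hypothetical
# zero-average term phi1(x, tau2) = cos(tau2)*b1(x) + sin(tau2)*b2(x).
T2 = 2.0 * math.pi
b1 = lambda x: 1.0      # hypothetical choices of b1, b2
b2 = lambda x: x * x

def phi1(x, t2):
    return math.cos(t2) * b1(x) + math.sin(t2) * b2(x)

def u1(x, t2, n=1000):
    # u1(x, tau2) = int_0^{tau2} phi1(x, s2) ds2, cf. (13a); trapezoid rule
    h = t2 / n
    s = 0.5 * (phi1(x, 0.0) + phi1(x, t2))
    s += sum(phi1(x, i * h) for i in range(1, n))
    return s * h

def bracket(x, t2, eps=1e-5):
    # [u1, phi1]_x = d_x phi1 * u1 - d_x u1 * phi1, cf. (13b); central diffs
    dphi1 = (phi1(x + eps, t2) - phi1(x - eps, t2)) / (2.0 * eps)
    du1 = (u1(x + eps, t2) - u1(x - eps, t2)) / (2.0 * eps)
    return dphi1 * u1(x, t2) - du1 * phi1(x, t2)

def f_bar(x, n=200):
    # f_bar(x) = (1/T2) int_0^{T2} (phi2 + 0.5*[u1, phi1]_x) dtau2, phi2 = 0
    h = T2 / n
    return sum(0.5 * bracket(x, k * h) for k in range(n)) * h / T2

x0 = 0.7
numeric = f_bar(x0)
# closed form for this phi1: f_bar(x) = -(1/2)*(b1'(x)*b2(x) - b2'(x)*b1(x)),
# with b1' = 0 and b2' = 2x
analytic = -0.5 * (0.0 * b2(x0) - 2.0 * x0 * b1(x0))
```

The same quadrature-plus-finite-differences recipe applies verbatim to any ϕ1, ϕ2 satisfying Assumption 3, which is convenient when the Lie bracket is tedious to compute by hand.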

Definition 6

The 2nd{}^{\text{nd}}-order average HDS of (10) has states θ¯:=(x¯,z¯)n1×n2\bar{\theta}:=(\bar{x},\bar{z})\in\mathbb{R}^{n_{1}}\times\mathbb{R}^{n_{2}}, and is given by

2ave:{θ¯C,θ¯˙F(θ¯)θ¯D,θ¯+G(θ¯),\mathcal{H}_{2}^{\text{ave}}:~{}~{}~{}\left\{\begin{array}[]{ll}\bar{\theta}\in C,&~{}~{}\dot{\bar{\theta}}\in F\left(\bar{\theta}\right)\vspace{0.2cm}\\ \bar{\theta}\in D,&\bar{\theta}^{+}\in G\left(\bar{\theta}\right)\end{array}\right., (15)

where

F(θ¯):={f¯(θ¯)}×Φ(z¯),θ¯C,F(\bar{\theta}):=\left\{\bar{{f}}\left(\bar{\theta}\right)\right\}\times\Phi(\bar{z}),~{}~{}~{}~{}\forall~{}\bar{\theta}\in C,

and Φ,C,G,D\Phi,C,G,D are the same as in (10). \square

Remark 3

It is worth mentioning that if ϕ1\phi_{1} satisfies

ϕ1(θ,τ)\displaystyle\phi_{1}(\theta,\tau) ==1rb(θ,τ1)v(τ),\displaystyle=\sum_{\ell=1}^{r}{b}_{\ell}({\theta},\tau_{1})\,v_{\ell}(\tau),
0τ2ϕ1(θ,τ1,s2)𝑑s2\displaystyle\textstyle\int_{0}^{\tau_{2}}\phi_{1}({\theta},\tau_{1},s_{2})ds_{2} ==1rb(θ,τ1)0τ2v(τ1,s2)𝑑s2,\displaystyle=\sum_{\ell=1}^{r}{b}_{\ell}(\theta,\tau_{1})\,\int_{0}^{\tau_{2}}v_{\ell}(\tau_{1},s_{2})\,ds_{2},

for some functions b,vb_{\ell},v_{\ell} and some r1r\in\mathbb{Z}_{\geq 1}, then the Lie bracket in (13b) reduces to:

\displaystyle\left[u_{1},\phi_{1}\right]_{x}(\cdot)=\sum_{\ell_{1},\ell_{2}=1}^{r}\left[{b}_{\ell_{1}},{b}_{\ell_{2}}\right]_{x}(\theta,\tau_{1})\,v_{\ell_{2}}(\tau)\int_{0}^{\tau_{2}}v_{\ell_{1}}(\tau_{1},s_{2})\,ds_{2}
\displaystyle=\sum_{\ell_{1}>\ell_{2}}^{r}\left[{b}_{\ell_{1}},{b}_{\ell_{2}}\right]_{x}(\theta,\tau_{1})\,v_{\ell_{1}\ell_{2}}(\tau),

where v_{\ell_{1}\ell_{2}} is the time-varying function:

v_{\ell_{1}\ell_{2}}(\tau)=v_{\ell_{2}}(\tau)\int_{0}^{\tau_{2}}v_{\ell_{1}}(\tau_{1},s_{2})\,ds_{2}-v_{\ell_{1}}(\tau)\int_{0}^{\tau_{2}}v_{\ell_{2}}(\tau_{1},s_{2})\,ds_{2},

and, consequently, the average of \left[u_{1},\phi_{1}\right]_{x}(\cdot) reduces to:

\frac{1}{T_{2}}\int_{0}^{T_{2}}\left[u_{1},\phi_{1}\right]_{x}(\theta,\tau_{1},s_{2})\,ds_{2}=\sum_{\ell_{1}>\ell_{2}}^{r}\left[{b}_{\ell_{1}},{b}_{\ell_{2}}\right]_{x}(\theta,\tau_{1})\,\Lambda_{\ell_{1}\ell_{2}}(\tau_{1}),

where \Lambda_{\ell_{1}\ell_{2}}(\tau_{1})=\frac{1}{T_{2}}\int_{0}^{T_{2}}v_{\ell_{1}\ell_{2}}(\tau_{1},s_{2})\,ds_{2}. However, in contrast to the standard Lie-bracket averaging framework [16], here we do not necessarily assume that \phi_{1} admits such a decomposition, which enables the study of more general vector fields f_{\varepsilon}. \square

The next lemma follows directly from Assumption 2 and the continuity properties of ϕk\phi_{k}, k{1,2}k\in\{1,2\}, cf. Assumption LABEL:asmp:A1.

Lemma 1

Suppose that Assumptions 2-LABEL:asmp:A1 hold. Then, the HDS (10) and the HDS 2ave\mathcal{H}^{\text{ave}}_{2} in (15) satisfy Assumption 1. \square

The following theorem is the first main result of this paper. It uses the notion of (T,ρ)(T,\rho)-closeness introduced in Definition 3. All the proofs are deferred to Section 6.

Theorem 1
\thlabel

thm:closeness_of_trajsSuppose that Assumptions 2-LABEL:asmp:A1 hold. Let K0n{K}_{0}\subset\mathbb{R}^{n} be a compact set such that every solution of the HDS (15) with initial conditions in K0{K}_{0} has no finite escape times. Then, for each ρ>0\rho>0 and each T>0T>0, there exists ε>0\varepsilon^{*}>0 such that for each ε(0,ε)\varepsilon\in(0,\varepsilon^{*}) every solution of (10) starting in K0+ρ𝔹{K}_{0}+\rho\mathbb{B} is (T,ρ)(T,\rho)-close to some solution of (15) starting in K0{K}_{0}. \square

Remark 4

The result of \threfthm:closeness_of_trajs establishes a novel closeness-of-solutions property (in the sense of Definition 3) between the high-frequency high-amplitude periodic-in-the-flows HDS (10) and the second-order average HDS 2ave\mathcal{H}_{2}^{\text{ave}} defined in (15). Unlike Lie-bracket averaging results for Lipschitz ODEs [16, Thm. 1], the result does not assert closeness between all solutions of (10) and 2ave\mathcal{H}_{2}^{\text{ave}}, but rather that for every solution of the original HDS (10) there exists a solution of the 2nd-order average dynamics 2ave\mathcal{H}_{2}^{\text{ave}} that is (T,ρ)(T,\rho)-close. Thus, Theorem LABEL:thm:closeness_of_trajs effectively extends to the 2nd-order case existing 1st-order averaging results for HDS, such as [49, Thm. 1]. \square

As a result of Theorem LABEL:thm:closeness_of_trajs, if all trajectories θ¯\bar{\theta} of 2ave\mathcal{H}_{2}^{\text{ave}} satisfy suitable convergence bounds, then the trajectories θ\theta of the original HDS (10) will approximately inherit the same bounds on compact sets and time domains. To extend these bounds to potentially unbounded hybrid time domains, we use the following stability definition [49, Def. 6].

Definition 7
\thlabel

defn:SPUAS For the HDS (10), a compact set 𝒜n\mathcal{A}\subset\mathbb{R}^{n} is said to be Semi-Globally Practically Asymptotically Stable (SGPAS) with respect to the basin of attraction 𝒜\mathcal{B}_{\mathcal{A}} as ε0+\varepsilon\rightarrow 0^{+} if, for each proper indicator function ω\omega on 𝒜\mathcal{B}_{\mathcal{A}} there exists β𝒦\beta\in\mathcal{KL} such that, for each compact set K𝒜K\subset\mathcal{B}_{\mathcal{A}} and for each ν>0\nu>0, there exists ε>0\varepsilon^{*}>0 such that for all ε(0,ε]\varepsilon\in(0,\varepsilon^{*}] and for all solutions of (10) with θ(0,0)K\theta(0,0)\in K we have

ω(θ(t,j))β(ω(θ(0,0)),t+j)+ν,\displaystyle\omega(\theta(t,j))\leq\beta(\omega(\theta(0,0)),t+j)+\nu, (16)

for all (t,j)dom(θ,τ)(t,j)\in\text{dom}(\theta,\tau). \square

As discussed in Definition 5, if 𝒜\mathcal{B}_{\mathcal{A}} covers the whole space, then we can also take ω(x)=|x|𝒜\omega(x)=|x|_{\mathcal{A}} in (16). If, additionally, CDC\cup D is compact, then (16) describes a Global Practical Asymptotic Stability (GPAS) property.
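As a toy illustration of the bound (16) (our own scalar example, not one of the systems studied here), consider the flows-only dynamics x' = −x + ε: solutions satisfy |x(t)| ≤ |x(0)|e^{−t} + ε, so β(r,s) = r e^{−s} and ν = ε, with the residual ν vanishing as ε → 0⁺. A forward-Euler sketch:

```python
import math

# Toy illustration of the SGPAS bound (16): for the scalar dynamics
# x' = -x + eps (flows only, no jumps), every solution satisfies
# |x(t)| <= |x(0)| * exp(-t) + eps, i.e., beta(r, s) = r * exp(-s) and
# nu = eps. Step size, horizon, and initial condition are assumptions
# of this sketch.
eps, dt, steps = 0.05, 1e-3, 10000
x0, x = 2.0, 2.0
bound_holds = True
for k in range(steps + 1):
    t = k * dt
    bound_holds = bound_holds and (abs(x) <= abs(x0) * math.exp(-t) + eps + 1e-9)
    x += dt * (-x + eps)  # forward-Euler step
final_gap = abs(x - eps)  # trajectory settles into an eps-neighborhood of 0
```

Shrinking eps tightens the neighborhood the trajectory converges to, which mirrors how the offset ν in (16) is reduced by taking ε smaller.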

With \threfdefn:SPUAS at hand, we can now state the second main result of this paper, which establishes a novel second-order averaging result for HDS:

Theorem 2
\thlabel

thm:avg_UAS_implies_org_PUAS Suppose that Assumptions 2 and LABEL:asmp:A1 hold, and let 𝒜n\mathcal{A}\subset\mathbb{R}^{n} be a compact set. If 𝒜\mathcal{A} is UAS with basin of attraction 𝒜\mathcal{B}_{\mathcal{A}} for 2ave\mathcal{H}_{2}^{\text{ave}}, then 𝒜\mathcal{A} is also SGPAS as ε0+\varepsilon\to 0^{+} with respect to the basin of attraction 𝒜\mathcal{B}_{\mathcal{A}} for the HDS (10). \square

The discussion preceding Theorem LABEL:thm:avg_UAS_implies_org_PUAS directly implies the following Corollary, which we will leverage in Section 5 for the study of globally practically stable Lie-bracket hybrid ES systems on smooth boundaryless compact manifolds.

Corollary 1
\thlabel

corollary1 Suppose that Assumptions 2 and LABEL:asmp:A1 hold, and let 𝒜n\mathcal{A}\subset\mathbb{R}^{n} be a compact set. If CDC\cup D is compact, and 𝒜\mathcal{A} is UGAS for 2ave\mathcal{H}_{2}^{\text{ave}}, then 𝒜\mathcal{A} is GPAS as ε0+\varepsilon\to 0^{+} for the HDS (10) and the trajectories satisfy

|θ(t,j)|𝒜β(|θ(0,0)|𝒜,t+j)+ν,\displaystyle|\theta(t,j)|_{\mathcal{A}}\leq\beta(|\theta(0,0)|_{\mathcal{A}},t+j)+\nu, (17)

for all (t,j)dom(θ,τ)(t,j)\in\text{dom}(\theta,\tau). \square

Remark 5

In Theorem LABEL:thm:avg_UAS_implies_org_PUAS, it is assumed that 2ave\mathcal{H}_{2}^{\text{ave}} renders UAS the set 𝒜\mathcal{A}. However, in certain cases 2ave\mathcal{H}_{2}^{\text{ave}} might depend on additional tunable parameters η>0\eta>0, and 𝒜\mathcal{A} might only be SGPAS as η0+\eta\to 0^{+}, see [40, 39, 28]. In this case, the SGPAS result of Theorem LABEL:thm:avg_UAS_implies_org_PUAS still applies as (ε,η)0+(\varepsilon,\eta)\to 0^{+}, where the tunable parameters are now sequentially tuned, starting with η\eta. This observation follows directly by modifying the proof of Theorem LABEL:thm:avg_UAS_implies_org_PUAS as in [39, Thm. 7]. An example will be presented in Section 5.1 in the context of slow switching systems. \square

5 Applications to Hybrid Model-Free Control and Optimization

We present three different applications to illustrate the results of Section 4 in the context of model-free optimization and control. In all our examples, we will consider a HDS of the form (10), where fεf_{\varepsilon} is allowed to switch between a finite number of vector fields, some of which might not necessarily have a stabilizing second-order average HDS. Specifically, we consider systems where the main state xx evolves in an application-dependent closed set Mn1M\subset\mathbb{R}^{n_{1}}, under the following dynamics:

x˙\displaystyle\dot{x} =fε(x,z,τ1,τ2)=k=12εk2ϕz1,k(x,τ1,τ2),\displaystyle=f_{\varepsilon}(x,z,\tau_{1},\tau_{2})=\sum_{k=1}^{2}\varepsilon^{k-2}\phi_{z_{1},k}({x},\tau_{1},\tau_{2}), (18)

where z1z_{1} is a logic mode allowed to switch between values in the set 𝒬={1,,N}\mathcal{Q}=\{1,\dots,N\}, where N2N\in\mathbb{Z}_{\geq 2}. The switching behavior can be either time-triggered (modeled via the hybrid automaton (6)) or state-triggered (modeled via a hysteresis-based mechanism incorporated into the sets CC and DD). In both cases, Zeno behavior will be precluded by design. Since for each z1𝒬z_{1}\in\mathcal{Q} the functions ϕz1,k\phi_{z_{1},k} will be designed to satisfy \threfasmp:A1, the 2nd{}^{\text{nd}}-order average mapping can be directly computed for each mode z1𝒬z_{1}\in\mathcal{Q} using (14):

f¯z1\displaystyle\bar{f}_{z_{1}} =1T1T20T10T2ϕz1,2𝑑τ2𝑑τ1\displaystyle=\frac{1}{T_{1}T_{2}}\int_{0}^{T_{1}}\int_{0}^{T_{2}}\phi_{z_{1},2}\,d\tau_{2}\,d\tau_{1} (19)
+1T1T20T10T212[0τ2ϕz1,1𝑑s2,ϕz1,1]𝑑τ2𝑑τ1.\displaystyle+\frac{1}{T_{1}T_{2}}\int_{0}^{T_{1}}\int_{0}^{T_{2}}\frac{1}{2}\left[\int_{0}^{\tau_{2}}\phi_{z_{1},1}\,ds_{2},\phi_{z_{1},1}\right]\,d\tau_{2}\,d\tau_{1}.

To guarantee that f¯z1\bar{f}_{z_{1}} is well-defined, we let ϕz1,k\phi_{z_{1},k} depend on a vector of frequencies w=(w1,w2,,wr)w=(w_{1},w_{2},\ldots,w_{r}), for some r+r\in\mathbb{Z}_{+}, which satisfies the following:

Assumption 4

The frequencies satisfy wi>0w_{i}\in\mathbb{Q}_{>0} and wiwjw_{i}\neq w_{j} for all i,j{1,,r}i,j\in\{1,\dots,r\} with iji\neq j. \square

The conditions of Assumption 4 are standard in vibrational control [44] and extremum seeking [6, 48, 40, 47, 22].

5.1 Switching Model-Free Optimization and Stabilization

In our first two applications, the set of logic modes satisfies 𝒬=𝒬s𝒬u\mathcal{Q}=\mathcal{Q}_{s}\cup\mathcal{Q}_{u}, where 𝒬s𝒬u=\mathcal{Q}_{s}\cap\mathcal{Q}_{u}=\emptyset, 𝒬s\mathcal{Q}_{s} indicates “stable” modes, and 𝒬u\mathcal{Q}_{u} indicates “unstable” modes. The logic mode z1:dom(z1)𝒬z_{1}:\text{dom}(z_{1})\to\mathcal{Q} models a switching signal generated by the hybrid automaton (6) studied in Section 3, with state z=(z1,z2,z3)z=(z_{1},z_{2},z_{3}), which induces the ADT constraint (8a) and the AAT constraint (8b) on the switching, effectively ruling out Zeno behavior. Using the data (Cz,𝒯F,Dz,𝒯G)(C_{z},\mathcal{T}_{F},D_{z},\mathcal{T}_{G}) of system (6), the closed-loop switched system has the form of (10), with sets

C=M×Cz,D=M×Dz,C=M\times C_{z},~{}~{}~{}D=M\times D_{z}, (20)

flow map Fε={fε}×ΦF_{\varepsilon}=\{f_{\varepsilon}\}\times\Phi, fεf_{\varepsilon} given by (18), Φ(z)=𝒯F(z)\Phi(z)=\mathcal{T}_{F}(z), and jump map:

G(x,z)={x+}×𝒯G(z).G(x,z)=\{x^{+}\}\times\mathcal{T}_{G}(z). (21)

By construction, Assumption 2 is satisfied and the corresponding HDS 2ave\mathcal{H}_{2}^{\text{ave}} can be directly obtained using (15):

(x¯,z¯)\displaystyle(\bar{x},\bar{z}) C,{(x¯˙z¯˙)F(x¯,z¯),\displaystyle\in C,\,\,\begin{cases}\left(\begin{array}[]{c}\dot{\bar{{x}}}\\ \dot{\bar{z}}\end{array}\right)\in F(\bar{x},\bar{z}),\end{cases} (22a)
(x¯,z¯)\displaystyle(\bar{x},\bar{z}) D,{(x¯+z¯+)G(x¯,z¯),\displaystyle\in D,\,\,\begin{cases}\left(\begin{array}[]{c}\bar{{x}}^{+}\\ {\bar{z}}^{+}\end{array}\right)\in G(\bar{x},\bar{z}),\end{cases} (22b)

where F(x¯,z¯)={f¯z1(x¯)}×𝒯F(z¯)F(\bar{x},\bar{z})=\left\{\bar{f}_{z_{1}}(\bar{x})\right\}\times\mathcal{T}_{F}(\bar{z}). We consider two different applications where this model is applicable.

5.1.1 Synchronization of Oscillators with Switching Control Directions and Graphs

Consider a collection of r2r\in\mathbb{Z}_{\geq 2} oscillators evolving on the rr-torus M=𝕊1××𝕊1=𝕋r2rM=\mathbb{S}^{1}\times\dots\times\mathbb{S}^{1}=\mathbb{T}^{r}\subset\mathbb{R}^{2r}, which is an embedded (and compact) submanifold in the Euclidean space. Let xi=(x1i,x2i)𝕊1x^{i}=(x^{i}_{1},x^{i}_{2})\in\mathbb{S}^{1} be the state of the ithi^{\text{th}} oscillator, for i{1,,r}i\in\{1,\dots,r\}, and denote its input by uiu_{i}. Moreover, let α:=(α1,,αr)𝒥\alpha:=(\alpha_{1},\dots,\alpha_{r})\in\mathcal{J}, where 𝒥{+1,1}r\mathcal{J}\subseteq\{+1,-1\}^{r}, |𝒥|=N1|\mathcal{J}|=N_{1}\in\mathbb{N}, such that αi{+1,1}\alpha_{i}\in\{+1,-1\} for all ii. The dynamics of the overall network of oscillators are given by

(x˙1x˙r)=((1+α1u1)Sx1(1+αrur)Sxr),S:=(0110).\displaystyle\begin{pmatrix}\dot{x}^{1}\\ \vdots\\ \dot{x}^{r}\end{pmatrix}=\begin{pmatrix}(1+\alpha_{1}u_{1})\,Sx^{1}\\ \vdots\\ (1+\alpha_{r}u_{r})\,Sx^{r}\end{pmatrix},~{}~{}S:=\begin{pmatrix}0&1\\ -1&0\end{pmatrix}. (23)

where α\alpha is the unknown vector of control directions. Let x=(x1,,xr)Mx=(x^{1},\dots,x^{r})\in M, and note that MM is invariant under the dynamics in (23). The connectivity between oscillators is dictated by the network topology which, at any given instant, is defined by a unique element from a collection of N21N_{2}\in\mathbb{Z}_{\geq 1} graphs {𝒢1,,𝒢N2}\{\mathcal{G}_{1},\dots,\mathcal{G}_{N_{2}}\}, 𝒢k=(𝒱,k)\mathcal{G}_{k}=(\mathcal{V},\mathcal{E}_{k}), where 𝒱={1,,r}\mathcal{V}=\{1,\dots,r\} is the set of nodes representing the oscillators, and k\mathcal{E}_{k} is the edge set representing the interconnections. To simplify our presentation, we make the following assumption.

Assumption 5
\thlabel

asmp:ex_synchronization_A1 For each k{1,2,,N2}k\in\{1,2,\ldots,N_{2}\} the graph 𝒢k\mathcal{G}_{k} is connected and undirected. \square

Our goal is to synchronize the oscillators under switching communication topologies 𝒢k\mathcal{G}_{k} and unknown control directions α\alpha. To solve this problem, we consider the feedback law

ui,k(x,τ2)\displaystyle u_{i,k}(x,\tau_{2}) =ε12wicos(wiτ2+κJi,k(x))κ12,\displaystyle=\varepsilon^{-1}\sqrt{2w_{i}}\cos\left(w_{i}\tau_{2}+\kappa J_{i,k}(x)\right)\kappa^{-\frac{1}{2}}, (24)

where Ji,k(x)=12j𝒩ki|xixj|2J_{i,k}(x)=\frac{1}{2}\sum_{j\in\mathcal{N}^{i}_{k}}\lvert x^{i}-x^{j}\rvert^{2}, and κ>0\kappa\in\mathbb{R}_{>0} is a tuning parameter. In (24), the set 𝒩ki={j𝒱:(i,j)k}𝒱\mathcal{N}_{k}^{i}=\{j\in\mathcal{V}:(i,j)\in\mathcal{E}_{k}\}\subset\mathcal{V} indicates the neighbors of the ithi^{\text{th}} agent according to the topology of the graph 𝒢k\mathcal{G}_{k}. We consider a model where both the network topology 𝒢k\mathcal{G}_{k} and the control directions α\alpha are allowed to switch in (23) and (24). To capture this behavior, let 𝒬={1,,N1N2}\mathcal{Q}=\{1,\dots,N_{1}N_{2}\}, and fix a bijection ϱ:𝒬z1(k(z1),α(z1)){1,,N2}×𝒥\varrho:\mathcal{Q}\ni z_{1}\mapsto(k(z_{1}),\alpha(z_{1}))\in\{1,\dots,N_{2}\}\times\mathcal{J}. In this way, for each z1𝒬z_{1}\in\mathcal{Q} there is a particular control direction α(z1)\alpha(z_{1}) and graph 𝒢k(z1)\mathcal{G}_{k(z_{1})} acting in (23) and (24). To study the overall dynamics of the system, we define the coordinates

x1i\displaystyle x^{i}_{1} =cos(ξi),\displaystyle=-\cos(\xi_{i}), x2i\displaystyle x^{i}_{2} =sin(ξi),\displaystyle=\sin(\xi_{i}), i\displaystyle\forall i 𝒱,\displaystyle\in\mathcal{V},

where ξi[0,2π)\xi_{i}\in[0,2\pi), and ξ:=(ξ1,,ξr)\xi:=(\xi_{1},\dots,\xi_{r}), which lead to the following polar representation of (23):

(ξ˙1ξ˙r)=(1+α1u1,k(ξ,τ2)1+αrur,k(ξ,τ2)),\displaystyle\begin{pmatrix}\dot{\xi}_{1}\\ \vdots\\ \dot{\xi}_{r}\end{pmatrix}=\begin{pmatrix}1+\alpha_{1}u_{1,k}(\xi,\tau_{2})\\ \vdots\\ 1+\alpha_{r}u_{r,k}(\xi,\tau_{2})\end{pmatrix}, (25)
Figure 3: Network topologies in the second case of Example 1.

where the feedback law ui,ku_{i,k} becomes:

ui,k(ξ,τ2)\displaystyle u_{i,k}(\xi,\tau_{2}) =ε12wicos(wiτ2+κJi,k(ξ))κ12,\displaystyle=\varepsilon^{-1}\sqrt{2w_{i}}\cos(w_{i}\tau_{2}+\kappa J_{i,k}(\xi))\kappa^{-\frac{1}{2}}, (26)

with distance function Ji,k(ξ)=j𝒩ki(1cos(ξiξj))J_{i,k}(\xi)=\sum_{j\in\mathcal{N}^{i}_{k}}(1-\cos(\xi_{i}-\xi_{j})). Using (19), the 2nd2^{\text{nd}}-order average mapping is given by

ξ¯˙=𝟙Vz1(ξ¯),\displaystyle\dot{\bar{\xi}}=\mathbbm{1}-\nabla V_{z_{1}}(\bar{\xi}), (27)

where 𝟙=(1,1,,1)\mathbbm{1}=(1,1,\ldots,1), Vz1(ξ¯)=i=1rJi,k(z1)(ξ¯)V_{z_{1}}(\bar{\xi})=\sum_{i=1}^{r}J_{i,k(z_{1})}(\bar{\xi}), and

Vz1(ξ¯)\displaystyle\nabla V_{z_{1}}(\bar{\xi}) =(j𝒩k(z1)1sin(ξ¯1ξ¯j)j𝒩k(z1)rsin(ξ¯rξ¯j)),\displaystyle=\begin{pmatrix}\sum_{j\in\mathcal{N}^{1}_{k(z_{1})}}\sin(\bar{\xi}_{1}-\bar{\xi}_{j})\\ \vdots\\ \sum_{j\in\mathcal{N}^{r}_{k(z_{1})}}\sin(\bar{\xi}_{r}-\bar{\xi}_{j})\end{pmatrix},

which is the well-known Kuramoto model over the graph 𝒢k(z1)\mathcal{G}_{k(z_{1})} [18, 17]. The subset 𝒮\mathcal{S} that characterizes synchronization has the polar coordinate representation 𝒮={ξ[0,2π)r:ξ1==ξr}.\mathcal{S}=\{\xi\in[0,2\pi)^{r}:\,\xi_{1}=\cdots=\xi_{r}\}. Now observe that the vector field in (23) under the feedback law (24), or equivalently the vector field in (25) under the feedback law (26), has the same structure as the vector fields fεf_{\varepsilon} in (18) with trivial dependence on the intermediate timescale τ1=ε1t\tau_{1}=\varepsilon^{-1}t. Moreover, using (6) with 𝒬u=\mathcal{Q}_{u}=\emptyset to generate the switching signal z1z_{1}, the closed-loop system has the form (10) with CC and DD given by (20), and GG given by (21). For this system, we study stability of the set 𝒜=𝒮×Cz\mathcal{A}=\mathcal{S}\times C_{z} using two tunable parameters (c.f. Remark 5). The following proposition establishes a novel “model-free” synchronization result for oscillators under switching graphs and control directions via averaging.

Proposition 1

Under Assumptions 4-LABEL:asmp:ex_synchronization_A1, the HDS (10) renders the set 𝒜\mathcal{A} SGPAS as (ε,η1)0+(\varepsilon,\eta_{1})\rightarrow 0^{+} with some basin of attraction 𝒜\mathcal{B}_{\mathcal{A}}. \square
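A minimal sanity check of the averaging mechanism behind this proposition (our own sketch, with hypothetical step size, horizon, and initial phases) is to integrate the averaged dynamics (27) directly for r = 2 oscillators on a single-edge connected graph and verify that the phase difference converges to the synchronization set 𝒮:

```python
import math

# Forward-Euler integration of the averaged dynamics (27) for r = 2
# oscillators on one static, connected, undirected graph (a single edge):
# xi_i' = 1 - sum_{j in N_i} sin(xi_i - xi_j), i.e., the Kuramoto model
# with a common unit drift.
r, dt, steps = 2, 1e-2, 5000
xi = [0.0, 2.0]  # initial phases (hypothetical)
for _ in range(steps):
    grad = [sum(math.sin(xi[i] - xi[j]) for j in range(r) if j != i)
            for i in range(r)]
    xi = [xi[i] + dt * (1.0 - grad[i]) for i in range(r)]
# distance of the phase difference to the synchronization set S (mod 2*pi)
d = (xi[0] - xi[1]) % (2.0 * math.pi)
sync_error = min(d, 2.0 * math.pi - d)
```

The phase difference obeys e' = −2 sin(e), so any initial gap with |e(0)| < π contracts to 0; the common drift only rotates both phases together and does not affect synchronization.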

Example 1

To illustrate the above result, we consider two scenarios. In the first scenario, 𝒢k\mathcal{G}_{k} is static and the control directions are switching. In this case, we consider two oscillators (i.e., r=2r=2), N2=1N_{2}=1 and J1,2(ξ)=J2,1(ξ)=J(ξ)=1cos(ξ1ξ2)J_{1,2}(\xi)=J_{2,1}(\xi)=J(\xi)=1-\cos(\xi_{1}-\xi_{2}). We let N1=4N_{1}=4, 𝒥={(+1,+1),(1,+1),(+1,1),(1,1)}\mathcal{J}=\{(+1,+1),(-1,+1),(+1,-1),(-1,-1)\}, and we consider the bijection ϱ\varrho that satisfies ϱ(1)=(1,(+1,+1))\varrho(1)=(1,(+1,+1)), ϱ(2)=(1,(1,+1))\varrho(2)=(1,(-1,+1)), ϱ(3)=(1,(+1,1))\varrho(3)=(1,(+1,-1)), ϱ(4)=(1,(1,1))\varrho(4)=(1,(-1,-1)). For the numerical simulation results, shown in Fig. 4, we used ε=1/10π\varepsilon=1/\sqrt{10\pi}, κ=10\kappa=10, w1=1w_{1}=1, w2=2w_{2}=2, and a switching mode z1z_{1} generated by (6) with η1=2.5\eta_{1}=2.5, N=1N_{\circ}=1, and arbitrary η2\eta_{2} and TT_{\circ}. As observed in the figure, (local) practical synchronization is achieved.

In the second scenario, we incorporate switching graphs 𝒢k\mathcal{G}_{k}. We let r=4r=4, N1=4N_{1}=4, N2=3N_{2}=3, and we consider the set 𝒥={(+1,+1,1,+1),(1,+1,+1,+1),(1,+1,1,1),(1,1,+1,+1)}\mathcal{J}=\{(+1,+1,-1,+1),(-1,+1,+1,+1),(-1,+1,-1,-1),\\ (-1,-1,+1,+1)\}, and the collection of graphs {𝒢1,𝒢2,𝒢3}\{\mathcal{G}_{1},\mathcal{G}_{2},\mathcal{G}_{3}\} shown in Fig. 3. For the numerical simulation results, shown in Fig. 5, we used ε=1/10π\varepsilon=1/\sqrt{10\pi}, κ=10\kappa=10, w1=1w_{1}=1, w2=4/3w_{2}=4/3, w3=5/3w_{3}=5/3, w4=2w_{4}=2, and a switching mode z1z_{1} generated by (6) with η1=1.5\eta_{1}=1.5, N=1N_{\circ}=1, and arbitrary η2\eta_{2} and TT_{\circ}. As shown in Fig. 5, (local) practical synchronization is also achieved despite simultaneous switches happening in the network topology and the control directions. \square

Figure 4: Simulation results of the first scenario in Example 1. The figure on the right is a flat embedding of the torus 𝕋24\mathbb{T}^{2}\subset\mathbb{R}^{4} into the plane 2\mathbb{R}^{2}. Synchronization is achieved on the submanifold 𝒮\mathcal{S} corresponding to the diagonal line.

5.1.2 Lie-Bracket Extremum Seeking under Intermittence and Spoofing

Consider the following control-affine system

x˙=i=1rbi(x,τ1)ui,z1(x,τ2),r1,\displaystyle\dot{x}=\sum_{i=1}^{r}b_{i}(x,\tau_{1})u_{i,z_{1}}(x,\tau_{2}),~{}~{}~{}r\in\mathbb{Z}_{\geq 1}, (28)

where the goal is to steer the state xnx\in\mathbb{R}^{n} towards the set of solutions of the optimization problem

minxMJ(x),{\color[rgb]{0,0,0}\min_{x\in M}}~{}~{}J(x), (29)

where MnM\subset\mathbb{R}^{n} is the parameter search space, and JJ is a continuously differentiable cost function whose mathematical form is unknown, but which is available via measurements or evaluations. Problem (29) describes a standard extremum seeking (ES) problem [16, 6, 40, 43], where exact knowledge of the functions bib_{i} is in general not required. However, unlike traditional ES, we aim to achieve convergence to the solutions of (29) despite sporadic failures in accessing measurements of JJ and intermittent cost measurements affected by malicious external spoofing designed to destabilize the system, as in the motivational example of Section 3, see also [30, 19].

To study this scenario, we consider the following mode-dependent Lie-bracket-based control law

ui,z1(x,τ2)\displaystyle u_{i,{z_{1}}}(x,\tau_{2}) =ε12wicos(wiτ2+κ(z12)J(x))κ12,\displaystyle=\varepsilon^{-1}\sqrt{2w_{i}}\cos\left(w_{i}\tau_{2}+\kappa(z_{1}-2)J(x)\right)\kappa^{-\frac{1}{2}}, (30)

where κ>0\kappa\in\mathbb{R}_{>0} is a tuning parameter and z1𝒬={1,2,3}z_{1}\in\mathcal{Q}=\{1,2,3\} is the logic mode generated by the hybrid automaton (6). Note that (4) is a particular case of (28). Using equation (19) to study system (28) under the feedback law (30), we obtain:

f¯z1(x¯)\displaystyle\bar{f}_{z_{1}}(\bar{x}) =(2z1)P(x¯)J(x¯),\displaystyle=(2-z_{1})P(\bar{x})\,\nabla J(\bar{x}),

where the matrix-valued mapping PP is given by

P(x¯)=1T1i=1r0T1bi(x¯,τ1)bi(x¯,τ1)𝑑τ1.\displaystyle P(\bar{x})=\frac{1}{T_{1}}\sum_{i=1}^{r}\int_{0}^{T_{1}}b_{i}(\bar{x},\tau_{1})\otimes b_{i}(\bar{x},\tau_{1})\,d\tau_{1}.

We make the following regularity assumptions on JJ and PP:

Assumption 6
\thlabel

asmp:ex_2_A1 The following holds:

  1. 1.

    There exist λP>0\lambda_{P}>0 and MP>0M_{P}>0 such that |P(x)x|MP|x|,xP(x)xλP|x|2,xn.|P(x)x|\leq M_{P}|x|,\,x^{\intercal}P(x)x\geq\lambda_{P}|x|^{2},\,\forall x\in\mathbb{R}^{n}.

  2. 2.

    The cost function JJ is 𝒞1\mathcal{C}^{1}, strongly convex with strong-convexity parameter μ>0\mu>0, and minimizer xx^{\star}, and there exists LJ>0L_{J}>0 such that |J(x)J(x)|LJ|xx||\nabla J(x)-\nabla J(x^{\prime})|\leq L_{J}|x-x^{\prime}|, for all x,xnx,x^{\prime}\in\mathbb{R}^{n}. \square
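For intuition on item 1, consider the hypothetical single vector field b1(x,τ1) = (cos τ1, sin τ1) with T1 = 2π (x-independent, reminiscent of source-seeking models): then P = I/2, so the persistence-of-excitation bounds hold with λ_P = 1/2. A numerical sketch:

```python
import math

# Numerical evaluation of P for the hypothetical vector field
# b1(x, tau1) = (cos(tau1), sin(tau1)), T1 = 2*pi (r = 1, x-independent):
# P = (1/T1) int_0^{T1} b1 (x) b1^T dtau1 = I/2, so item 1 of
# Assumption 6 holds with lambda_P = 1/2.
T1 = 2.0 * math.pi
n = 20000
h = T1 / n
P = [[0.0, 0.0], [0.0, 0.0]]
for k in range(n):
    t1 = k * h
    b = (math.cos(t1), math.sin(t1))
    for i in range(2):
        for j in range(2):
            P[i][j] += b[i] * b[j] * h / T1  # rectangle rule in tau1
# quadratic form x^T P x for a unit test direction
x = (0.6, -0.8)
qform = sum(x[i] * P[i][j] * x[j] for i in range(2) for j in range(2))
```

Although cos and sin each vanish at isolated times, their outer products average to a positive-definite matrix over one period, which is exactly the "cooperative" excitation the assumption asks for.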

Figure 5: Simulation results of the second scenario considered in Example 1.
Remark 6

Item 1) in Assumption LABEL:asmp:ex_2_A1 can be seen as a cooperative persistence of excitation condition on the vector fields bib_{i}, see [5] for similar conditions. Item 2) is common in ES [6], and covers quadratic functions. \square

The following result formalizes the discussions of Section 3, and establishes a novel resilience result for Lie-bracket ES algorithms under spoofing and intermittent feedback:

Proposition 2
\thlabel

prop:ex2_es_intermittent Under Assumptions 4 and LABEL:asmp:ex_2_A1, for each η1>0\eta_{1}>0 there exists η2>0\eta_{2}^{*}>0 such that for all η2(0,η2)\eta_{2}\in(0,\eta_{2}^{*}) the HDS (10) renders the set 𝒜={x}×Cz\mathcal{A}=\{x^{\star}\}\times C_{z} SGPAS as ε0+\varepsilon\to 0^{+}. \hfill\square

5.2 Switched Global Extremum Seeking on Smooth Compact Manifolds

We now leverage the tools presented in this paper to solve global ES problems of the form (29) on sets MM that describe a class of smooth, boundaryless, compact manifolds. This application illustrates the use of Corollary LABEL:corollary1. As thoroughly discussed in the literature [35, 11, 37, 10], inherent topological obstructions preclude the existence of a smooth average system that achieves global stability bounds of the form (17) for such problems. Here, we show how to remove this obstruction by using adaptive switching between multiple ES controllers in problems that satisfy the following assumption:

Assumption 7
\thlabel

asmp:mfd_ex_A1 The following holds:

  1. (a)

    The set MM is a smooth, embedded, connected, and compact submanifold without boundary, endowed with a Riemannian structure by the metric ,M:TxM×TxM\langle\cdot,\cdot\rangle_{M}:T_{x}M\times T_{x}M\rightarrow\mathbb{R} induced by the metric ,:n×n\langle\cdot,\cdot\rangle:\mathbb{R}^{n}\times\mathbb{R}^{n}\rightarrow\mathbb{R} of the ambient Euclidean space n\mathbb{R}^{n}.

  2. (b)

    The function JJ is smooth on an open set UnU\subset\mathbb{R}^{n} containing MM, has a unique minimizer xMx^{\star}\in M satisfying J(x)<J(x)J(x^{\star})<J(x), xxM\forall x\neq x^{\star}\in M, and there exists a known number J¯(0,)\bar{J}\in(0,\infty) such that for all xJ¯={xM:J(x)<J(x)+J¯}x\in\mathcal{L}_{\bar{J}}=\{x\in M:\,J(x)<J(x^{\star})+\bar{J}\} we have MJ(x)=0x=x\nabla_{M}J(x)=0\iff x=x^{\star}, where MJ\nabla_{M}J is the orthonormal projection (defined by the metric ,\langle\cdot,\cdot\rangle) of the gradient J\nabla J onto the tangent space TxMT_{x}M, xM\forall x\in M. \square

While \threfasmp:mfd_ex_A1 suffices to guarantee the local, or at best, almost-global uniform practical asymptotic stability of the minimizer xx^{\star} for a smooth Lie-bracket-based extremum seeking controller on MM [18, 17], its (smooth) average system will necessarily have more than one critical point in MM due to the topology of the manifold [35, 34]. To remove this obstruction and achieve uniform global ES, we consider a hybrid algorithm that implements state-triggered switching between certain functions introduced in the following definition:

Definition 8

Let N1N\in\mathbb{N}_{\geq 1}, and 𝒬={1,,N}\mathcal{Q}=\{1,\dots,N\}. A family of functions {J~q}q𝒬\{\tilde{J}_{q}\}_{q\in\mathcal{Q}} is said to be a δ\delta-gap synergistic family of functions subordinate to JJ on MM if:

  1. 1.

    q𝒬\forall q\in\mathcal{Q}, J~q:U\tilde{J}_{q}:U\rightarrow\mathbb{R} is smooth, has xx^{\star} as its unique minimizer on MM, and has a finite number of critical points in MM, i.e., |Crit J~q|1|\text{Crit }\tilde{J}_{q}|\in\mathbb{N}_{\geq 1}, where Crit J~q:={xM:MJ~q(x)=0}\text{Crit }\tilde{J}_{q}:=\{x\in M:\,\nabla_{M}\tilde{J}_{q}(x)=0\}.

  2. 2.

    q𝒬\forall q\in\mathcal{Q}, J(x)<J¯+J(x)J~q(x)=J(x)J(x)<\bar{J}+J(x^{\star})\implies\tilde{J}_{q}(x)=J(x), i.e. the family {J~q}\{\tilde{J}_{q}\} agrees with JJ on the neighborhood J¯\mathcal{L}_{\bar{J}} of the minimizer xx^{\star},

  3. 3.

    δ(0,)\exists~{}\delta\in(0,\infty) such that

    δ<Δ:=minq1𝒬xCrit J~q1\{x}maxq2𝒬(J~q1(x)J~q2(x)).\displaystyle\delta<\Delta^{*}:=\min_{\begin{subarray}{c}q_{1}\in\mathcal{Q}\\ x\in\text{Crit }\tilde{J}_{q_{1}}\backslash\{x^{\star}\}\end{subarray}}\max_{q_{2}\in\mathcal{Q}}\,(\tilde{J}_{q_{1}}(x)-\tilde{J}_{q_{2}}(x)).~{}~{}~{}\vspace{0.1cm}\hfill\square
Figure 6: Simulation results of the first scenario in \threfexmp:optimization_on_manifolds_example: (top) the trajectory on 𝕊2\mathbb{S}^{2} for α=1\alpha=1 along with two projections, and (bottom) the evolution of the cost along the trajectories.

In words, the constructions in Definition 8 involve families of diffeomorphisms that “warp the manifold MM in sufficiently many ways so as to distinguish the minimizer xx^{\star} from other critical points of the function JJ”. The reader is referred to [46] and [37] for the full details (and examples) of these constructions, which are leveraged in [37] for the design of algorithms using classic averaging. We stress that constructing these diffeomorphisms does not necessarily require exact knowledge of the mathematical form of JJ (although knowledge of MM is required), but only a qualitative description of JJ that allows one to identify the threshold J¯\bar{J} below which the only critical point corresponds to xx^{\star}. Such knowledge is reasonable for most practical applications where the optimal value J(x)J(x^{\star}) is known to lie within a certain bounded region, or to equal a certain value. A similar assumption was considered in [52]. Under such qualitative knowledge, our goal is to attain global ES from all initial conditions, ensuring that (29) is solved regardless of where the algorithm’s trajectories are initialized or land after a sudden disturbance.
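Once the critical points of each J~q\tilde{J}_{q} are available, the δ\delta-gap condition in item 3 of Definition 8 can be checked numerically. The following is a minimal sketch that only exercises the min/max formula for Δ\Delta^{*}; the one-dimensional surrogate family and the precomputed critical points are hypothetical, not the paper's construction.

```python
def synergy_gap(J_family, crit_points, x_star, tol=1e-9):
    """Evaluate Delta* from Definition 8, item 3: the minimum over modes q1
    and critical points x != x* of max_{q2} (J_q1(x) - J_q2(x))."""
    gap = float("inf")
    for q1, Jq1 in enumerate(J_family):
        for x in crit_points[q1]:
            if abs(x - x_star) <= tol:
                continue  # exclude the shared minimizer x*
            best = max(Jq1(x) - Jq2(x) for Jq2 in J_family)
            gap = min(gap, best)
    return gap

# Hypothetical 1-D toy family: both functions share the minimizer x* = 0,
# but J1 has a spurious critical point at x = 3 where J2 is strictly smaller.
J1 = lambda x: x**2
J2 = lambda x: x**2 - 5.0 * (abs(x) > 2.0)
crit = {0: [0.0, 3.0], 1: [0.0]}   # critical points assumed known

delta_star = synergy_gap([J1, J2], crit, x_star=0.0)
```

Any hysteresis margin δ\delta strictly below the returned value satisfies item 3 of the definition.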

To design hybrid Lie-bracket-based algorithms with global stability properties, we consider dynamics of the form (28), where, for simplicity we allow the functions bib_{i} to be independent of τ1\tau_{1}. We formalize this in the following assumption:

Assumption 8
\thlabel

asmp:mfd_ex_A2 The following holds for problem (29):

  1. (a)

    There exists a family of smooth vector fields {b1,,br}\{b_{1},\dots,b_{r}\} on MM, for which the operator P:M×n(x,v)P(x)[v]TxMP:M\times\mathbb{R}^{n}\ni(x,v)\mapsto P(x)[v]\in T_{x}M defined by

    P(x)[v]=i=1rbi(x),vbi(x),P(x)[v]=\sum_{i=1}^{r}\langle b_{i}(x),v\rangle\,b_{i}(x),

    is such that P(x)[v]=0v(TxM).P(x)[v]=0\iff v\in\left(T_{x}M\right)^{\perp}.

  2. (b)

    There exists a δ\delta-gap synergistic family of functions {J~q}q𝒬\{\tilde{J}_{q}\}_{q\in\mathcal{Q}} subordinate to JJ on MM, which are available for measurement. \square

Assumption LABEL:asmp:mfd_ex_A2 is natural in the setting we consider herein. Indeed, item (a) in Assumption LABEL:asmp:mfd_ex_A2 is a standard assumption in the context of Lie-bracket extremum seeking on manifolds, see e.g. [17, Assumption 1]. On the other hand, item (b) is a standard assumption in the context of robust global stabilization problems on manifolds via hybrid control, see e.g. [37, 35, 12].

To solve (29), we consider the HDS (10), with fεf_{\varepsilon} given by (28), and the following hybrid feedback law

ui,z(x,τ2):=ε12wicos(κJ~z(x)+wiτ2)κ12,\displaystyle u_{i,z}(x,\tau_{2}):=\varepsilon^{-1}\sqrt{2w_{i}}\cos(\kappa\,\tilde{J}_{z}(x)+w_{i}\tau_{2})\kappa^{-\frac{1}{2}},

where κ>0\kappa\in\mathbb{R}_{>0} is a tuning gain, z𝒬z\in\mathcal{Q} is a logic state, and the sets CC and DD are given by

C\displaystyle C :={(x,z)M×𝒬:J~z(x)minz~𝒬J~z~(x)δ},\displaystyle:=\big{\{}(x,z)\in M\times\mathcal{Q}:\,\tilde{J}_{z}(x)-\min_{\tilde{z}\in\mathcal{Q}}\tilde{J}_{\tilde{z}}(x)\leq\delta\big{\}},
D\displaystyle D :={(x,z)M×𝒬:J~z(x)minz~𝒬J~z~(x)δ},\displaystyle:=\big{\{}(x,z)\in M\times\mathcal{Q}:\,\tilde{J}_{z}(x)-\min_{\tilde{z}\in\mathcal{Q}}\tilde{J}_{\tilde{z}}(x)\geq\delta\big{\}},

and the flow map in (11) uses Φ(z)={0}\Phi(z)=\{0\}. Finally, the jump map in (10b) is defined as

G(x,z)={x}×{z𝒬:J~z(x)=minz~𝒬J~z~(x)}.\displaystyle G(x,z)=\{x\}\times\big{\{}z\in\mathcal{Q}:~{}\tilde{J}_{z}(x)=\min_{\tilde{z}\in\mathcal{Q}}\tilde{J}_{\tilde{z}}(x)\big{\}}.
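The hysteresis logic encoded by CC, DD, and GG reduces to comparing the active mode's cost against the pointwise minimum over the family. The sketch below illustrates this with hypothetical scalar surrogates for the functions J~q\tilde{J}_{q}; the helper names are ours, not the paper's.

```python
def in_flow_set(x, z, J_family, delta):
    # Flow set C: the active mode's cost is within delta of the pointwise minimum
    vals = [J(x) for J in J_family]
    return vals[z] - min(vals) <= delta

def in_jump_set(x, z, J_family, delta):
    # Jump set D: the active mode lags the pointwise minimum by at least delta
    vals = [J(x) for J in J_family]
    return vals[z] - min(vals) >= delta

def jump_map(x, z, J_family):
    # G: keep x, reset the logic state to a mode attaining the pointwise minimum
    # (the paper's G is set-valued; this picks one admissible element)
    vals = [J(x) for J in J_family]
    return x, min(range(len(J_family)), key=vals.__getitem__)

# Hypothetical scalar surrogates for J~_1 and J~_2
J1 = lambda x: x**2
J2 = lambda x: (x - 2.0)**2
```

Note that when the cost gap equals δ\delta exactly, the state lies in both CC and DD, consistent with both sets being closed.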

Using equation (19) with z=z1z=z_{1}, we obtain:

f¯z(x¯)=P(x¯)[J~z(x¯)].\bar{f}_{z}(\bar{x})=-P(\bar{x})[\nabla\tilde{J}_{z}(\bar{x})]. (31)
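For intuition, the average dynamics (31) can be integrated numerically. The sketch below does so for the sphere setting of Example 2 (cost J(x)=1x,e3J(x)=1-\langle x,e_{3}\rangle, for which P(x)P(x) reduces to the tangent projection IxxI-xx^{\top}), using a plain Euler step with renormalization; it ignores the switching logic, so it only illustrates the average flow away from the antipodal critical point.

```python
import numpy as np

e3 = np.array([0.0, 0.0, 1.0])
grad_J = lambda x: -e3                   # Euclidean gradient of J(x) = 1 - <x, e3>

x = np.array([1.0, 0.0, 0.0])            # initialize on the equator
dt = 0.05
for _ in range(1000):
    # average vector field (31): -(I - x x^T) grad J(x)
    v = -(grad_J(x) - np.dot(x, grad_J(x)) * x)
    x = x + dt * v
    x /= np.linalg.norm(x)               # retract back onto the sphere
```

The trajectory converges to the minimizer x=e3x^{\star}=e_{3}, as expected from the averaged gradient flow on the sphere.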

We can now state the following global result for Lie-bracket ES systems on smooth compact manifolds:

Proposition 3

Under Assumptions 4, LABEL:asmp:mfd_ex_A1, and LABEL:asmp:mfd_ex_A2, the set 𝒜={x}×𝒬\mathcal{A}=\{x^{*}\}\times\mathcal{Q} is GPAS as ε0+\varepsilon\to 0^{+} for the HDS (10). \square

Figure 7: Simulation results of the second scenario in \threfexmp:optimization_on_manifolds_example.
Example 2
\thlabel

exmp:optimization_on_manifolds_example To illustrate the result of Proposition 3, let M=𝕊2={x3:|x|2=1}M=\mathbb{S}^{2}=\{x\in\mathbb{R}^{3}:|x|^{2}=1\} be the unit sphere, and consider the cost J(x)=1x,e3J(x)=1-\langle x,e_{3}\rangle. We define the vector fields bib_{i} for i{1,2,3}i\in\{1,2,3\} by bi(x)=ei×xb_{i}(x)=e_{i}\times x, where eie_{i} is the ithi^{\text{th}} standard basis vector of 3\mathbb{R}^{3}. The function JJ has two critical points where 𝕊2J\nabla_{\mathbb{S}^{2}}J vanishes: a critical point x=(0,0,1)x^{\star}=(0,0,1) that corresponds to the minimum value J(x)=0J(x^{\star})=0, and a critical point x=(0,0,1)x^{\sharp}=(0,0,-1) that corresponds to the maximum value J(x)=2J(x^{\sharp})=2. Let 𝒬={1,2}\mathcal{Q}=\{1,2\}, and define the family of functions {J~1,J~2}\{\tilde{J}_{1},\tilde{J}_{2}\} by J~q(x)=J(Φq(x))\tilde{J}_{q}(x)=J\left(\Phi_{q}(x)\right), where the maps Φq\Phi_{q} are defined as follows:

Φq(x)\displaystyle\Phi_{q}(x) :={x,J(x)αexp(kq2(J(x)1)2(e^1+e^2))x,J(x)>α,\displaystyle:=\begin{cases}x,&J(x)\leq\alpha\\ \exp\left(\frac{k_{q}}{\sqrt{2}}(J(x)-1)^{2}(\hat{e}_{1}+\hat{e}_{2})\right)x,&J(x)>\alpha\end{cases},

and where k1=+12,k2=12k_{1}=+\frac{1}{2},\,k_{2}=-\frac{1}{2}, α{1,2}\alpha\in\{1,2\}, exp:3×3GL(3,)\exp:\mathbb{R}^{3\times 3}\rightarrow GL(3,\mathbb{R}) is the matrix exponential where GL(3,)GL(3,\mathbb{R}) is the group of real invertible 3×33\times 3-matrices, and e^i\hat{e}_{i} is the skew-symmetric matrix associated with the unit vector eie_{i}. It can be verified, see [37], that, when α=1\alpha=1, the family {J~1,J~2}\{\tilde{J}_{1},\tilde{J}_{2}\} is a δ\delta-gap synergistic family of functions subordinate to JJ on 𝕊2\mathbb{S}^{2} for any δ(0,14)\delta\in(0,\frac{1}{4}). We use α=2\alpha=2 to emulate the situation in which no switching takes place since, when α=2\alpha=2, J(x)=J~1(x)=J~2(x)J(x)=\tilde{J}_{1}(x)=\tilde{J}_{2}(x), for all x𝕊2x\in\mathbb{S}^{2}. To generate the numerical results, we used δ=15\delta=\frac{1}{5}, w1=2,w2=3,w3=1w_{1}=2,\,w_{2}=3,\,w_{3}=1, κ=4\kappa=4, and ε=1/8π\varepsilon=1/\sqrt{8\pi}, which completely defines the HDS (10). We simulate two scenarios. In the first scenario, a small adversarial input, tailored to (locally) stabilize the critical point xx^{\sharp} for an ES algorithm without switching, is added to the vector field fεf_{\varepsilon}, i.e., we use f~ε=fε+fe\tilde{f}_{\varepsilon}=f_{\varepsilon}+f_{e}, where |fe|<0.1|f_{e}|<0.1. For details on the construction of fef_{e}, we refer the reader to the appendix in the extended manuscript [4]. The simulation results of the first scenario for α=1\alpha=1 and α=2\alpha=2, starting from the initial condition x(0,0)(0.11,0.11,0.98)x(0,0)\approx(-0.11,0.11,-0.98), which is nearby xx^{\sharp}, are shown in Fig. 6. As observed, without switching (α=2\alpha=2), the small disturbance fef_{e} effectively traps the non-hybrid ES algorithm in the vicinity of the problematic critical point xx^{\sharp}, despite the fact that x(0,0)xx(0,0)\neq x^{\sharp} and that the system is persistently perturbed by the dithers. 
On the other hand, as predicted by Proposition 3, the hybrid ES algorithm (α=1\alpha=1) renders the set 𝒜\mathcal{A} GPAS as ε0+\varepsilon\rightarrow 0^{+}.

In the second scenario, a similarly constructed adversarial input, tailored to (locally) stabilize the problematic critical points of the functions J~1\tilde{J}_{1} and J~2\tilde{J}_{2}, in the absence of switching, is added to fεf_{\varepsilon}. The simulation results for α=1\alpha=1, with q(0,0)=2q(0,0)=2 and starting from x(0,0)(0.286,0.286,0.914)x(0,0)\approx(-0.286,0.286,-0.914), which coincides with the problematic critical point of J~2\tilde{J}_{2}, are shown in Fig. 7. It can be observed that a state jump from q=2q=2 to q+=1q^{+}=1 is triggered to allow the ES algorithm to escape the problematic critical point of J~2\tilde{J}_{2}. The detailed construction of the adversarial signals can be found in the Appendix of the Supplemental Material. All computer codes used to generate the simulations of this paper are available at [2]. \square
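A quick numerical check of item (a) in Assumption 8 for this example: with bi(x)=ei×xb_{i}(x)=e_{i}\times x, the operator P(x)[v]=ibi(x),vbi(x)P(x)[v]=\sum_{i}\langle b_{i}(x),v\rangle b_{i}(x) coincides with the tangent projection (Ixx)v(I-xx^{\top})v for every unit xx, so P(x)[v]=0P(x)[v]=0 exactly when vv is normal to the sphere at xx. A sketch:

```python
import numpy as np

# A point on S^2 and an arbitrary ambient vector
rng = np.random.default_rng(1)
x = rng.normal(size=3)
x /= np.linalg.norm(x)
v = rng.normal(size=3)

# b_i(x) = e_i x x (cross products with the standard basis vectors)
b = [np.cross(np.eye(3)[i], x) for i in range(3)]

# P(x)[v] = sum_i <b_i(x), v> b_i(x)
Pv = sum(np.dot(bi, v) * bi for bi in b)

# For these vector fields, P is the tangent projection (I - x x^T) v
Pv_expected = v - np.dot(x, v) * x
```

This follows from i(ei×x)(ei×x)=x^x^=I2xx=Ixx\sum_{i}(e_{i}\times x)(e_{i}\times x)^{\top}=\hat{x}\hat{x}^{\top}=|I|^{2}xx^{\top}\dots=I-xx^{\top} for |x|=1|x|=1, where x^\hat{x} is the skew-symmetric matrix of xx; the code verifies the identity numerically rather than symbolically.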

6 Analysis and Proofs

In this section, we present the proofs of our main results.

6.1 Proof of Theorem LABEL:thm:closeness_of_trajs

We follow ideas similar to those in standard averaging theorems [27, Thm. 10.4], [49, Thm. 1], but use a recursive variable transformation in the analysis of the HDS.

Fix K0nK_{0}\subset\mathbb{R}^{n}, T>0T>0 and ρ>0\rho>0, and let Assumption LABEL:asmp:A1 generate δ>0\delta^{\star}>0. For any δ(0,δ]\delta\in(0,\delta^{\star}], consider the δ\delta-inflation of 2ave\mathcal{H}_{2}^{\text{ave}}, given by

2,δave:{θ¯Cδ,θ¯˙Fδ(θ¯)θ¯Dδ,θ¯+Gδ(θ¯),\mathcal{H}_{2,\delta}^{\text{ave}}:~{}~{}~{}\left\{\begin{array}[]{ll}\bar{\theta}\in C_{\delta},&~{}~{}\dot{\bar{\theta}}\in F_{\delta}\left(\bar{\theta}\right)\vspace{0.2cm}\\ \bar{\theta}\in D_{\delta},&\bar{\theta}^{+}\in G_{\delta}\left(\bar{\theta}\right)\end{array}\right., (32)

where the data (Cδ,Fδ,Dδ,Gδ)(C_{\delta},F_{\delta},D_{\delta},G_{\delta}) is constructed as in Definition 2 from the data of 2ave\mathcal{H}_{2}^{\text{ave}} in (6). Fix δ(0,δ]\delta\in(0,\delta^{\star}] such that for any solution θ~=(x~,z~)\tilde{\theta}=(\tilde{x},\tilde{z}) to the system (32) starting in K0+δ𝔹{K}_{0}+\delta\mathbb{B} there exists a solution θ¯=(x¯,z¯)\bar{\theta}=(\bar{x},\bar{z}) to system (15) starting in K0K_{0} such that the two solutions are (T,ρ2)(T,\frac{\rho}{2})-close. Such δ\delta exists due to [21, Proposition 6.34]. Without loss of generality, we may take δ<1\delta<1 and ρ<1\rho<1.

Let 𝒮2ave(K0)\mathcal{S}_{\mathcal{H}_{2}^{\text{ave}}}(K_{0}) be the set of maximal solutions to (15) starting in K0K_{0}, and define the following sets:

T(K0)\displaystyle\mathcal{R}_{T}(K_{0}) :={ξ=θ¯(t,j):θ¯𝒮2ave(K0),\displaystyle:=\Big{\{}\xi=\bar{\theta}(t,j):\bar{\theta}\in\mathcal{S}_{\mathcal{H}_{2}^{\text{ave}}}({K}_{0}),
(t,j)dom(θ¯),t+jT},\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}(t,j)\in\text{dom}(\bar{\theta}),~{}t+j\leq T\Big{\}},
K1\displaystyle K_{1} :=T(K0)+𝔹,K:=K1G(K1D),\displaystyle:=\mathcal{R}_{T}(K_{0})+\mathbb{B},~{}~{}~{}K:=K_{1}\cup G\left(K_{1}\cap D\right), (33)

where K{K} is compact by [49, Proposition 2]. Consider the functions ψm,ψp:Cδ×02n1\psi_{m},\psi_{p}:C_{\delta}\times\mathbb{R}^{2}_{\geq 0}\to\mathbb{R}^{n_{1}} defined as follows:

ψm(θ,τ)\displaystyle\psi_{m}(\theta,\tau) :=12(xϕ1u1xu1ϕ1),\displaystyle:=\frac{1}{2}\left(\partial_{x}\phi_{1}\cdot u_{1}-\partial_{x}u_{1}\cdot\phi_{1}\right), (34a)
ψp(θ,τ)\displaystyle\psi_{p}(\theta,\tau) :=12(xϕ1u1+xu1ϕ1),\displaystyle:=\frac{1}{2}\left(\partial_{x}\phi_{1}\cdot u_{1}+\partial_{x}u_{1}\cdot\phi_{1}\right), (34b)

where for simplicity we omitted the arguments in the right-hand side of (34). Using integration by parts and item (c) in Assumption LABEL:asmp:A1, we have 0T2ψp(θ,τ)𝑑τ2=0\int_{0}^{T_{2}}\psi_{p}(\theta,\tau)\,d\tau_{2}=0, for all (θ,τ1)Cδ×0(\theta,\tau_{1})\in C_{\delta}\times\mathbb{R}_{\geq 0}. Let h¯:Cδ×0n1\bar{h}:C_{\delta}\times\mathbb{R}_{\geq 0}\to\mathbb{R}^{n_{1}} be defined as:

h¯(θ,τ1):=1T20T2(ϕ2(θ,τ)+ψm(θ,τ))𝑑τ2,\bar{{h}}(\theta,\tau_{1}):=\frac{1}{T_{2}}\int_{0}^{T_{2}}\Big{(}\phi_{2}(\theta,\tau)+\psi_{m}(\theta,\tau)\Big{)}\,d{\tau}_{2}, (35)

which satisfies f¯(θ)=1T10T1h¯(θ,τ1)𝑑τ1\bar{{f}}(\theta)=\frac{1}{T_{1}}\int_{0}^{T_{1}}\bar{{h}}(\theta,\tau_{1})\,d\tau_{1}.
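The claim that 0T2ψp𝑑τ2=0\int_{0}^{T_{2}}\psi_{p}\,d\tau_{2}=0 can be sanity-checked numerically. The scalar example below assumes, for illustration only, that ϕ1=τ2u1\phi_{1}=\partial_{\tau_{2}}u_{1} with a sinusoidal dither (a hypothetical choice consistent with item (c) of Assumption LABEL:asmp:A1), so that ψp\psi_{p} in (34b) is the τ2\tau_{2}-derivative of 12xu1u1\frac{1}{2}\partial_{x}u_{1}\cdot u_{1} and integrates to zero over one period.

```python
import numpy as np

# Scalar illustration with hypothetical u1(x, tau_2) = sin(tau_2) x^2
x = 0.7
u1    = lambda t: np.sin(t) * x**2       # u_1(x, tau_2)
phi1  = lambda t: np.cos(t) * x**2       # phi_1 = partial_{tau_2} u_1 (assumed)
du1   = lambda t: np.sin(t) * 2.0 * x    # partial_x u_1
dphi1 = lambda t: np.cos(t) * 2.0 * x    # partial_x phi_1

t = np.linspace(0.0, 2.0 * np.pi, 20001)                 # one period T_2 = 2 pi
psi_p = 0.5 * (dphi1(t) * u1(t) + du1(t) * phi1(t))      # definition (34b)

# Trapezoidal rule: psi_p is a perfect tau_2-derivative, so its
# integral over a full period vanishes
integral = float(np.sum(0.5 * (psi_p[1:] + psi_p[:-1]) * np.diff(t)))
```

Here the integrand reduces to x3sin(2τ2)x^{3}\sin(2\tau_{2}), whose integral over [0,2π][0,2\pi] is zero, matching the integration-by-parts argument in the text.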

Below, Lemmas 2-4 follow directly from the continuity and periodicity properties of the respective functions. For completeness, the proofs can be found in the Appendix of the supplemental material, or in the Extended Manuscript [4].

Lemma 2

Let KnK\subset\mathbb{R}^{n} be a compact set. Then, there exist Lh¯,Lf¯,Mh¯>0L_{\bar{h}},L_{\bar{f}},M_{\bar{h}}>0 such that |h¯(θ,τ1)|Mh¯|\bar{h}(\theta,\tau_{1})|\leq M_{\bar{h}} and |h¯(θ,τ1)h¯(θ,τ1)|Lh¯|θθ||\bar{h}(\theta,\tau_{1})-\bar{h}(\theta^{\prime},\tau_{1})|\leq L_{\bar{h}}|\theta-\theta^{\prime}|, |f¯(θ)f¯(θ)|Lf¯|θθ||\bar{f}(\theta)-\bar{f}(\theta^{\prime})|\leq L_{\bar{f}}|\theta-\theta^{\prime}|, for all θ,θKCδ\theta,\theta^{\prime}\in K\cap C_{\delta} and all τ10\tau_{1}\in\mathbb{R}_{\geq 0}. \square

Next, for each (θ,τ)Cδ×+2(\theta,\tau)\in C_{\delta}\times\mathbb{R}^{2}_{+}, we define

u2(θ,τ):=0τ2(ϕ2(θ,τ1,s2)+ψm(θ,τ1,s2)+ψp(θ,τ1,s2)h¯(θ,τ1))ds2,{u}_{2}(\theta,\tau):=\int_{0}^{\tau_{2}}\Big{(}\phi_{2}(\theta,\tau_{1},s_{2})+\psi_{m}(\theta,\tau_{1},s_{2})+\psi_{p}(\theta,\tau_{1},s_{2})-\bar{{h}}(\theta,\tau_{1})\Big{)}\,ds_{2},

as well as the auxiliary functions

σ1(θ,τ)\displaystyle{\sigma}_{1}(\theta,\tau) :=u1(θ,τ),\displaystyle:={u}_{1}(\theta,\tau), (36a)
σ2(θ,τ)\displaystyle{\sigma}_{2}(\theta,\tau) :=u2(θ,τ)0τ2τ1u1(θ,τ1,s2)ds2\displaystyle:={u}_{2}(\theta,\tau)-\int_{0}^{\tau_{2}}\partial_{\tau_{1}}{u}_{1}(\theta,\tau_{1},s_{2})\,ds_{2}
xu1(θ,τ)u1(θ,τ),\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}-\partial_{x}{u}_{1}(\theta,\tau)\cdot{u}_{1}(\theta,\tau), (36b)
σ¯(θ,τ1)\displaystyle\bar{{\sigma}}(\theta,\tau_{1}) :=0τ1(h¯(θ,s1)f¯(θ))𝑑s1.\displaystyle:=\int_{0}^{\tau_{1}}\Big{(}\bar{{h}}(\theta,s_{1})-\bar{{f}}(\theta)\,\Big{)}\,ds_{1}. (36c)

The functions (σk,σ¯)(\sigma_{k},\bar{\sigma}), k{1,2}k\in\{1,2\}, will play an important role in our analysis. We state some of their regularity properties.

Lemma 3

Let KnK\subset\mathbb{R}^{n} be a compact set. Then, there exist Lk,L¯>0L_{k},\bar{L}>0 such that for k{1,2}k\in\{1,2\}:

|σk(θ,τ)|Lk,|σ¯(θ,τ1)|L¯,\displaystyle|\sigma_{k}(\theta,\tau)|\leq L_{k},~{}~{}~{}~{}|\bar{\sigma}(\theta,\tau_{1})|\leq\bar{L}, (37a)
|σk(θ,τ)σk(θ^,τ^)|Lk(|θθ^|+|ττ^|),\displaystyle|\sigma_{k}(\theta,\tau)-\sigma_{k}(\hat{\theta},\hat{\tau})|\leq L_{k}(|\theta-\hat{\theta}|+|\tau-\hat{\tau}|), (37b)
|σ¯(θ,τ1)σ¯(θ^,τ^1)|L¯(|θθ^|+|τ1τ^1|),\displaystyle|\bar{\sigma}(\theta,\tau_{1})-\bar{\sigma}(\hat{\theta},\hat{\tau}_{1})|\leq\bar{L}(|\theta-\hat{\theta}|+|\tau_{1}-\hat{\tau}_{1}|), (37c)

θ,θ^KCδ\forall~{}\theta,\hat{\theta}\in K\cap C_{\delta}, τ,τ^02\tau,\hat{\tau}\in\mathbb{R}_{\geq 0}^{2}, and τ1,τ^10\tau_{1},\hat{\tau}_{1}\in\mathbb{R}_{\geq 0}. \square

The following technical lemma will be key for our results. The proof of the first items leverages the Lipschitz extension lemma of [49, Lemma 2].

Lemma 4

Let KnK\subset\mathbb{R}^{n} be a compact set, and for k{1,2}k\in\{1,2\} let σk,σ¯\sigma_{k},\bar{\sigma} be given by (36), and Lk,L¯>0L_{k},\bar{L}>0 be given by Lemma 3. Consider the closed sets

C|K,δ=(KCδ)×+2,D|K=(KD¯)×+2.\displaystyle C|_{K,\delta}=\left(K\cap C_{\delta}\right)\times\mathbb{R}^{2}_{+},~{}~{}~{}~{}D|_{K}=\left(K\cap\bar{D}\right)\times\mathbb{R}^{2}_{+}. (38)

Then, there exist functions σ~k:n×+2n1\tilde{\sigma}_{k}:\mathbb{R}^{n}\times\mathbb{R}^{2}_{+}\to\mathbb{R}^{n_{1}}, σ~:n×02n1\tilde{\sigma}:\mathbb{R}^{n}\times\mathbb{R}_{\geq 0}^{2}\to\mathbb{R}^{n_{1}} such that the following holds:

  1. (a)

    For all (θ,τ)C|K,δ(\theta,\tau)\in{C}|_{K,\delta}, we have σ~k(θ,τ)=σk(θ,τ)\widetilde{{\sigma}}_{k}(\theta,\tau)={\sigma}_{k}(\theta,\tau) and σ~(θ,τ1)=σ¯(θ,τ1)\widetilde{{\sigma}}(\theta,\tau_{1})=\bar{{\sigma}}(\theta,\tau_{1}).

  2. (b)

    For all (θ,τ)n+2(\theta,\tau)\in\mathbb{R}^{n+2}, we have |σ~k(θ,τ)|Lk\lvert\widetilde{{\sigma}}_{k}(\theta,\tau)\lvert\leq L_{k}, and |σ~(θ,τ1)|L¯\lvert\widetilde{{\sigma}}(\theta,\tau_{1})\lvert\leq\bar{L}.

  3. (c)

    For all (θ,τ)n+2(\theta,\tau)\in\mathbb{R}^{n+2} and (θ,τ)n+2(\theta^{\prime},\tau^{\prime})\in\mathbb{R}^{n+2}:

    |σ~k(θ,τ)σ~k(θ,τ)|Lk(|θθ|+|ττ|),\displaystyle\lvert\widetilde{{\sigma}}_{k}(\theta,\tau)-\widetilde{{\sigma}}_{k}(\theta^{\prime},\tau^{\prime})\lvert\leq L_{k}\,\big{(}\lvert\theta-\theta^{\prime}\lvert+\lvert\tau-\tau^{\prime}\lvert\big{)},
    |σ~(θ,τ1)σ~(θ,τ1)|nL¯(|θθ|+|τ1τ1|).\displaystyle\lvert\widetilde{{\sigma}}(\theta,\tau_{1})-\widetilde{{\sigma}}(\theta^{\prime},\tau^{\prime}_{1})\lvert\leq\sqrt{n}\bar{L}\,\big{(}\lvert\theta-\theta^{\prime}\lvert+\lvert\tau_{1}-\tau^{\prime}_{1}\lvert\big{)}.
  4. (d)

    For all (θ,τ)C|K,δ(\theta,\tau)\in{C}|_{K,\delta}:

    α1(θ,τ):\displaystyle\alpha_{1}(\theta,\tau): =ϕ1(θ,τ)τ2σ~1(θ,τ)=0,\displaystyle=\phi_{1}(\theta,\tau)-\partial_{\tau_{2}}\widetilde{{\sigma}}_{1}(\theta,\tau)=0, (39)
    α2(θ,τ):\displaystyle\alpha_{2}(\theta,\tau): =ϕ2(θ,τ)τ2σ~2(θ,τ)\displaystyle=\phi_{2}(\theta,\tau)-\partial_{\tau_{2}}\widetilde{{\sigma}}_{2}(\theta,\tau)
    xσ~1(θ,τ)ϕ1(θ,τ)τ1σ~1(θ,τ)\displaystyle~{}~{}~{}~{}~{}~{}~{}-\partial_{x}\widetilde{{\sigma}}_{1}(\theta,\tau)\,\phi_{1}(\theta,\tau)-\partial_{\tau_{1}}\widetilde{{\sigma}}_{1}(\theta,\tau)
    =h¯(θ,τ1).\displaystyle=\bar{h}(\theta,\tau_{1}). (40)
  5. (e)

    There exists M3>0M_{3}>0 such that for all (θ,τ)C|K,δ(\theta,\tau)\in{C}|_{K,\delta}, α3(θ,τ)M3𝔹\alpha_{3}(\theta,\tau)\subset M_{3}\mathbb{B}, where

    α3(θ,τ):={\displaystyle\alpha_{3}(\theta,\tau):=\big{\{} Xϕ1(θ,τ)+Zζ+xσ~1(θ,τ)ϕ2(θ,τ)\displaystyle X\,\phi_{1}(\theta,\tau)+Z\,\zeta+\partial_{x}\widetilde{{\sigma}}_{1}(\theta,\tau)\,\phi_{2}(\theta,\tau)
    +τ1σ~2(θ,τ):ζΦ(z),Zzσ~1(θ,τ),\displaystyle+\partial_{\tau_{1}}\widetilde{{\sigma}}_{2}(\theta,\tau):\,\zeta\in\Phi(z),Z\in\partial_{z}\widetilde{\sigma}_{1}(\theta,\tau),
    Xxσ~2(θ,τ)},\displaystyle X\in\partial_{x}\widetilde{\sigma}_{2}(\theta,\tau)\big{\}}, (41)
  6. (f)

    There exists M4>0M_{4}>0 such that for all (θ,τ)C|K,δ(\theta,\tau)\in{C}|_{K,\delta}, α4(θ,τ)M4𝔹\alpha_{4}(\theta,\tau)\subset M_{4}\mathbb{B}, where

    α4(θ,τ):={\displaystyle\alpha_{4}(\theta,\tau):=\big{\{} Xϕ2(θ,τ)+Zζ:ζΦ(z),\displaystyle X\,\phi_{2}(\theta,\tau)+Z\,\zeta:\,\zeta\in\Phi(z),
    Zzσ~1(θ,τ),Xxσ~2(θ,τ)}.\displaystyle Z\in\partial_{z}\widetilde{\sigma}_{1}(\theta,\tau),X\in\partial_{x}\widetilde{\sigma}_{2}(\theta,\tau)\big{\}}. (42)

Continuing with the proof of Theorem LABEL:thm:closeness_of_trajs, let ε1:=δ(nL¯Ms+M3+M4+0.5L¯h¯)1\varepsilon_{1}^{*}:=\delta(\sqrt{n}\bar{L}M_{s}+M_{3}+M_{4}+0.5\bar{L}_{\bar{h}})^{-1}, ε2:=δ(2L¯(0.5n+1))1\varepsilon_{2}^{*}:=\delta(2\bar{L}(0.5\sqrt{n}+1))^{-1}, ε3:=δ(2(L1+L2)+1)1\varepsilon_{3}^{*}:=\delta(2(L_{1}+L_{2})+1)^{-1}, ε4:=ρ(2(L1+L2+(0.5n+1)L¯))1\varepsilon_{4}^{*}:=\rho(2(L_{1}+L_{2}+(0.5\sqrt{n}+1)\bar{L}))^{-1}, and ε:=min{ε0,ε1,ε2,ε3,ε4}\varepsilon^{*}:=\min\left\{\varepsilon_{0},\varepsilon_{1},\varepsilon_{2},\varepsilon_{3},\varepsilon_{4}\right\}, which satisfies ε(0,1)\varepsilon^{*}\in(0,1) due to the definition of ε3\varepsilon_{3}^{*}. Let ε(0,ε)\varepsilon\in(0,\varepsilon^{*}) and consider the restricted HDS:

(θ,τ)\displaystyle(\theta,\tau) C|K,{θ˙Fε(θ,τ1,τ2),τ˙1=ε1,τ˙2=ε2,\displaystyle\in C|_{K},~{}~{}~{}\left\{\begin{array}[]{l}~{}\dot{\theta}~{}\in~{}F_{\varepsilon}(\theta,\tau_{1},\tau_{2}),\\ \dot{\tau}_{1}~{}=~{}\varepsilon^{-1},\\ \dot{\tau}_{2}~{}=~{}\varepsilon^{-2},\end{array}\right. (43d)
(θ,τ)\displaystyle(\theta,\tau) D|K,{θ+G(θ)K,τ1+=τ1,τ2+=τ2,\displaystyle\in D|_{K},~{}~{}~{}\left\{\begin{array}[]{l}\theta^{+}~{}\in~{}G(\theta)\cap K,\\ \tau^{+}_{1}~{}=~{}\tau_{1},\\ \tau^{+}_{2}~{}=~{}\tau_{2},\end{array}\right. (43h)

with θ=(x,z)\theta=(x,z), τ=(τ1,τ2)\tau=(\tau_{1},\tau_{2}), and flow set C|KC|_{K} and jump set D|KD|_{K} defined in (38), with KK given by (33). We divide the rest of the proof into four main steps:

Step 1: Construction and Properties of Auxiliary Functions: Using the compact set K~:=K+δ2𝔹\tilde{K}:=K+\frac{\delta}{2}\mathbb{B} and the functions σk,σ¯\sigma_{k},\bar{\sigma} given by (36), let Lemmas 2-3 generate Lh¯,Lf¯,Mh¯,Lk,L¯>0L_{\bar{h}},L_{\bar{f}},M_{\bar{h}},L_{k},\bar{L}>0. Using these constants, the set K~\tilde{K}, and the functions (σk,σ¯\sigma_{k},\bar{\sigma}), let Lemma 4 generate the functions (σ~k,σ~)(\tilde{\sigma}_{k},\tilde{\sigma}). Then, for each (θ,τ)n×+2(\theta,\tau)\in\mathbb{R}^{n}\times\mathbb{R}^{2}_{+} and each ε(0,ε)\varepsilon\in(0,\varepsilon^{*}), we define

θ^:=θγε(θ,τ),θ~:=θ^λε(θ^,τ1),\hat{\theta}:=\theta-\gamma_{\varepsilon}(\theta,\tau),~{}~{}~{}\tilde{\theta}:=\hat{\theta}-\lambda_{\varepsilon}(\hat{\theta},\tau_{1}), (44)

where

γε(θ,τ):\displaystyle\gamma_{\varepsilon}(\theta,\tau): =(k=12εkσ~k(θ,τ)0),λε(θ^,τ1):=(εσ~(θ^,τ1)0).\displaystyle=\left(\!\!\begin{array}[]{c}\sum_{k=1}^{2}\varepsilon^{k}\tilde{\sigma}_{k}(\theta,\tau)\\ 0\end{array}\!\!\!\right),~{}\lambda_{\varepsilon}(\hat{\theta},\tau_{1}):=\left(\!\!\!\begin{array}[]{c}\varepsilon\tilde{\sigma}(\hat{\theta},\tau_{1})\\ 0\end{array}\!\!\!\right). (49)

Using the inequalities in item (b) of Lemma 4, we have

|γε(θ,τ)|\displaystyle\left|\gamma_{\varepsilon}(\theta,\tau)\right| =|k=12εkσ~k(θ,τ)|ε(L1+εL2),\displaystyle=\left|\sum_{k=1}^{2}\varepsilon^{k}\widetilde{{\sigma}}_{k}(\theta,\tau)\right|\leq\varepsilon(L_{1}+\varepsilon L_{2}), (50)

for all (θ,τ)n×+2(\theta,\tau)\in\mathbb{R}^{n}\times\mathbb{R}^{2}_{+}. Similarly, due to items (b)-(c) of Lemma 4, the definition of θ^\hat{\theta} in (44), and inequality (50),

|λε(θ^,τ1)|\displaystyle\left|\lambda_{\varepsilon}(\hat{\theta},\tau_{1})\right| ε|σ~(θ^,τ1)σ~(θ,τ1)|+ε|σ~(θ,τ1)|\displaystyle\leq\varepsilon\left|\tilde{\sigma}(\hat{\theta},\tau_{1})-\tilde{\sigma}\left(\theta,\tau_{1}\right)\right|+\varepsilon\Big{|}\tilde{\sigma}\left(\theta,\tau_{1}\right)\Big{|}
εnL¯|θ^θ|+ε|σ~(θ,τ1)|\displaystyle\leq\varepsilon\sqrt{n}\bar{L}\big{|}\hat{\theta}-\theta\big{|}+\varepsilon|\tilde{\sigma}(\theta,\tau_{1})|
εnL¯|γε(θ,τ)|+ε|σ~(θ,τ1)|,\displaystyle\leq\varepsilon\sqrt{n}\bar{L}\big{|}\gamma_{\varepsilon}(\theta,\tau)\big{|}+\varepsilon|\tilde{\sigma}(\theta,\tau_{1})|, (51)

for all (θ,τ)n×02(\theta,\tau)\in\mathbb{R}^{n}\times\mathbb{R}_{\geq 0}^{2}. Also, note that due to item (a) of Lemma 4, the functions γε\gamma_{\varepsilon} and λε\lambda_{\varepsilon} satisfy γε(θ,τ)=(k=12εkσk(θ,τ),0)\gamma_{\varepsilon}(\theta,\tau)=\left(\sum_{k=1}^{2}\varepsilon^{k}\sigma_{k}(\theta,\tau),0\right) and λε(θ^,τ1)=(εσ¯(θ^,τ1),0)\lambda_{\varepsilon}(\hat{\theta},\tau_{1})=\left(\varepsilon\bar{\sigma}(\hat{\theta},\tau_{1}),0\right) for all (θ,τ),(θ^,τ)(K~Cδ)×+2(\theta,\tau),(\hat{\theta},\tau)\in\left(\tilde{K}\cap C_{\delta}\right)\times\mathbb{R}^{2}_{+}, with (σk,σ¯)(\sigma_{k},\bar{\sigma}) given by (36).

Step 2: Construction of First Auxiliary Solution: Let (θ,τ)(\theta,\tau) be a solution to (43) with θ(0,0)K0\theta(0,0)\in K_{0}. By construction, we have

θ(t,j)KK~,(t,j)dom(θ,τ).\theta(t,j)\in K\subset\tilde{K},~{}~{}\forall~{}(t,j)\in\text{dom}(\theta,\tau). (52)

Using (37a), item (b) of Lemma 4, (50), and the choice of ε\varepsilon^{*}, we obtain:

|γε(θ(t,j),τ(t,j))|δ2,(t,j)dom(θ,τ).|\gamma_{\varepsilon}(\theta(t,j),\tau(t,j))|\leq\frac{\delta}{2},~{}~{}~{}\forall~{}(t,j)\in\text{dom}(\theta,\tau). (53)

For each (t,j)dom(θ,τ)(t,j)\in\text{dom}(\theta,\tau), let θ^\hat{\theta} be defined via (44). It follows that θ^\hat{\theta} is a hybrid arc, and due to (52) and (53) it satisfies

θ^(t,j)K+δ2𝔹=K~,(t,j)dom(θ^,τ).\hat{\theta}(t,j)\in K+\frac{\delta}{2}\mathbb{B}=\tilde{K},~{}~{}~{}~{}\forall~{}(t,j)\in\text{dom}(\hat{\theta},\tau). (54)

Thus, using (37a), item (b) of Lemma 4, and (6.1), we get

|λε(θ^(t,j),τ1(t,j))|εL¯(12n+1)δ2,|\lambda_{\varepsilon}(\hat{\theta}(t,j),\tau_{1}(t,j))|\leq\varepsilon\bar{L}\left({\frac{1}{2}}\sqrt{n}+1\right)\leq{\frac{\delta}{2}}, (55)

for all (t,j)dom(θ^,τ)(t,j)\in\text{dom}(\hat{\theta},\tau).

Next, for each (t,j)dom(θ^,τ)(t,j)\in\text{dom}(\hat{\theta},\tau) such that (t,j+1)dom(θ^,τ)(t,j+1)\in\text{dom}(\hat{\theta},\tau), we have that:

θ(t,j)=θ^(t,j)+γε(θ(t,j),τ(t,j))DKDK~.\displaystyle\theta(t,j)=\hat{\theta}(t,j)+\gamma_{\varepsilon}(\theta(t,j),\tau(t,j))\in D\cap K\subset D\cap\tilde{K}.

Using (53) and the construction in (2b), we conclude that θ^(t,j)Dδ/2\hat{\theta}(t,j)\in D_{\delta/2}. Thus, using (43h) and (53), we obtain:

θ^(t,j+1)\displaystyle\hat{\theta}(t,j+1) =θ(t,j+1)γε(θ(t,j+1),τ(t,j+1))\displaystyle=\theta(t,j+1)-\gamma_{\varepsilon}(\theta(t,j+1),\tau(t,j+1))
G(θ(t,j)D)K+δ2𝔹\displaystyle\in G\left(\theta(t,j)\cap D\right)\cap K+{\frac{\delta}{2}}\mathbb{B}
G(θ(t,j)D)+δ2𝔹\displaystyle\subset G\left(\theta(t,j)\cap D\right)+{\frac{\delta}{2}}\mathbb{B}
=G((θ^(t,j)+γε(θ(t,j),τ(t,j)))D)+δ2𝔹\displaystyle=G\left(\left(\hat{\theta}(t,j)+\gamma_{\varepsilon}(\theta(t,j),\tau(t,j))\right)\cap D\right)+{\frac{\delta}{2}}\mathbb{B}
G((θ^(t,j)+δ2𝔹)D)+δ2𝔹.\displaystyle\subset G\left(\left(\hat{\theta}(t,j)+{\frac{\delta}{2}}\mathbb{B}\right)\cap D\right)+{\frac{\delta}{2}}\mathbb{B}. (56)

Similarly, from (44), for all j0j\in\mathbb{Z}_{\geq 0} such that Ij={t:(t,j)dom(θ^)}I_{j}=\{t:\,(t,j)\in\text{dom}\,(\hat{\theta})\} has a nonempty interior, we have:

θ(t,j)=θ^(t,j)+γε(θ(t,j),τ(t,j))CKCδK~,\theta(t,j)=\hat{\theta}(t,j)+\gamma_{\varepsilon}(\theta(t,j),\tau(t,j))\in C\cap K\subset C_{\delta}\cap\tilde{K}, (57)

for all tIjt\in I_{j}. Therefore, using the construction of the inflated flow set (2a), and the bound (53), we have that θ^(t,j)Cδ/2\hat{\theta}(t,j)\in C_{{\delta/2}}. Since γε\gamma_{\varepsilon} is Lipschitz continuous due to Lemma 4, θ^(,j)\hat{\theta}(\cdot,j) is locally absolutely continuous, and it satisfies

θ^˙(t,j)\displaystyle\dot{\hat{\theta}}(t,j) =θ˙(t,j)γε(θ(t,j),τ(t,j))˙\displaystyle=\dot{\theta}(t,j)-\dot{\overbrace{\gamma_{\varepsilon}(\theta(t,j),\tau(t,j))}}
=Fε(θ,τ)γε(θ(t,j),τ(t,j))˙,\displaystyle=F_{\varepsilon}(\theta,\tau)-\dot{\overbrace{\gamma_{\varepsilon}(\theta(t,j),\tau(t,j))}}, (58)

for almost all tIjt\in I_{j}. To compute (58), note that

εσ~˙1\displaystyle\varepsilon\dot{\tilde{\sigma}}_{1} εθσ~1θ˙+ετσ~1τ˙\displaystyle\in\varepsilon\partial_{\theta}\tilde{\sigma}_{1}\dot{\theta}+\varepsilon\partial_{\tau}\tilde{\sigma}_{1}\dot{\tau}
=εxσ~1x˙+εzσ~1z˙+ετ1σ~1τ˙1+ετ2σ~1τ˙2\displaystyle=\varepsilon\partial_{x}\tilde{\sigma}_{1}\dot{x}+\varepsilon\partial_{z}\tilde{\sigma}_{1}\dot{z}+\varepsilon\partial_{\tau_{1}}\tilde{\sigma}_{1}\dot{\tau}_{1}+\varepsilon\partial_{\tau_{2}}\tilde{\sigma}_{1}\dot{\tau}_{2}
xσ~1k=12εk1ϕk+τ1σ~1+1ετ2σ~1+εzσ~1Φ,\displaystyle\subset\partial_{x}\tilde{\sigma}_{1}\sum_{k=1}^{2}\varepsilon^{k-1}{\phi}_{k}+\partial_{\tau_{1}}\tilde{\sigma}_{1}+\frac{1}{\varepsilon}\partial_{\tau_{2}}\tilde{\sigma}_{1}+\varepsilon\partial_{z}\tilde{\sigma}_{1}\Phi,
ε2σ~˙2\displaystyle\varepsilon^{2}\dot{\tilde{\sigma}}_{2} ε2θσ~2θ˙+ε2τσ~2τ˙\displaystyle\in\varepsilon^{2}\partial_{\theta}\tilde{\sigma}_{2}\dot{\theta}+\varepsilon^{2}\partial_{\tau}\tilde{\sigma}_{2}\dot{\tau}
=ε2xσ~2x˙+ε2zσ~2z˙+ε2τ1σ~2τ˙1+ε2τ2σ~2τ˙2\displaystyle=\varepsilon^{2}\partial_{x}\tilde{\sigma}_{2}\dot{x}+\varepsilon^{2}\partial_{z}\tilde{\sigma}_{2}\dot{z}+\varepsilon^{2}\partial_{\tau_{1}}\tilde{\sigma}_{2}\dot{\tau}_{1}+\varepsilon^{2}\partial_{\tau_{2}}\tilde{\sigma}_{2}\dot{\tau}_{2}
xσ~2k=12εkϕk+ετ1σ~2+τ2σ~2+ε2zσ~2Φ,\displaystyle\subset\partial_{x}\tilde{\sigma}_{2}\sum_{k=1}^{2}\varepsilon^{k}{\phi}_{k}+\varepsilon\partial_{\tau_{1}}\tilde{\sigma}_{2}+\partial_{\tau_{2}}\tilde{\sigma}_{2}+\varepsilon^{2}\partial_{z}\tilde{\sigma}_{2}\Phi,

where we used (11), the structure of (12), and (43). Therefore, using the definition of γε\gamma_{\varepsilon}, we have that (58) can be written as

θ^˙(t,j)(k=14εk2αk(θ(t,j),τ(t,j))Φ(z(t,j))),\displaystyle\dot{\hat{\theta}}(t,j)\in\left(\begin{array}[]{c}\sum_{k=1}^{4}\varepsilon^{k-2}{\alpha}_{k}(\theta(t,j),\tau(t,j))\\ \Phi(z(t,j))\end{array}\right),

for almost all tIjt\in I_{j}. Using the last containment in (57), and items (a), (d), (e), and (f) of Lemma 4, we obtain

α1(θ(t,j),τ(t,j))=0,\displaystyle\alpha_{1}(\theta(t,j),\tau(t,j))=0,
α2(θ(t,j),τ(t,j))=h¯(θ(t,j),τ1(t,j)),\displaystyle\alpha_{2}(\theta(t,j),\tau(t,j))=\bar{{h}}(\theta(t,j),\tau_{1}(t,j)),
αk(θ(t,j),τ(t,j))Mk𝔹,k{3,4}.\displaystyle\alpha_{k}(\theta(t,j),\tau(t,j))\subset M_{k}\mathbb{B},~{}~{}~{}k\in\{3,4\}.

for all tIjt\in I_{j}. Thus, for all ε(0,ε)\varepsilon\in(0,\varepsilon^{*}),

θ^˙(t,j)(h¯(θ(t,j),τ1(t,j))Φ(z(t,j)))+χε(θ(t,j),τ(t,j)),\displaystyle\dot{\hat{\theta}}(t,j)\in\left(\begin{array}[]{c}\bar{{h}}(\theta(t,j),\tau_{1}(t,j))\\ \Phi(z(t,j))\end{array}\right)+\chi_{\varepsilon}(\theta(t,j),\tau(t,j)), (61)

for almost all tIjt\in I_{j}, where

χε(θ,τ):=(k=12εkαk+2(θ(t,j),τ(t,j))0).\chi_{\varepsilon}(\theta,\tau):=\left(\begin{array}[]{c}\sum_{k=1}^{2}\varepsilon^{k}\alpha_{k+2}(\theta(t,j),\tau(t,j))\\ 0\end{array}\right).

Finally, using the equality in (57), we can write (61) as:

θ^˙(t,j)\displaystyle\dot{\hat{\theta}}(t,j) (h¯(θ^(t,j)+γε(θ(t,j),τ(t,j)),τ1(t,j))Φ(z(t,j)))\displaystyle\in\left(\begin{array}[]{c}\bar{{h}}\left(\hat{\theta}(t,j)+\gamma_{\varepsilon}(\theta(t,j),\tau(t,j)),\tau_{1}(t,j)\right)\\ \Phi(z(t,j))\end{array}\right) (64)
+χε(θ(t,j),τ(t,j)),\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}+\chi_{\varepsilon}(\theta(t,j),\tau(t,j)), (65)

which holds for almost all tIjt\in I_{j}.

Step 3: Construction of Second Auxiliary Solution: Using the solution (θ,τ)(\theta,\tau) to the restricted HDS (43) considered in Step 2, as well as the hybrid arc θ^\hat{\theta}, we now define a new auxiliary solution. In particular, for each (t,j)dom(θ,τ)(t,j)\in\text{dom}(\theta,\tau), using (44) we define:

θ~(t,j):=θ^(t,j)λε(θ^(t,j),τ1(t,j)).\tilde{\theta}(t,j):=\hat{\theta}(t,j)-\lambda_{\varepsilon}\left(\hat{\theta}(t,j),\tau_{1}(t,j)\right). (66)

By construction, θ~\tilde{\theta} is a hybrid arc. Moreover, by combining equation (44) with inequalities (53) and (55), we have that

|γε(θ(t,j),τ(t,j))+λε(θ^(t,j),τ1(t,j))|δ,|\gamma_{\varepsilon}(\theta(t,j),\tau(t,j))+\lambda_{\varepsilon}(\hat{\theta}(t,j),\tau_{1}(t,j))|\leq{\delta}, (67)

for all (t,j)dom(θ,τ)(t,j)\in\text{dom}(\theta,\tau) and all (t,j)dom(θ^,τ)(t,j)\in\text{dom}(\hat{\theta},\tau); it follows that the hybrid arc (66) satisfies θ~(0,0)K0+δ𝔹\tilde{\theta}(0,0)\in K_{0}+\delta\mathbb{B}.

Now, for each (t,j)\in\text{dom}(\tilde{\theta}) such that (t,j+1)\in\text{dom}(\tilde{\theta}), we have \theta(t,j)=\tilde{\theta}(t,j)+\gamma_{\varepsilon}(\theta(t,j),\tau(t,j))+\lambda_{\varepsilon}(\hat{\theta}(t,j),\tau_{1}(t,j))\in D\cap K\subset D\cap\tilde{K}, where we used (44) and (66). Therefore, by the construction of the inflated jump set (2b), we conclude that \tilde{\theta}(t,j)\in D_{{\delta}}, and using (55)-(56) we have:

θ~(t,j+1)\displaystyle\tilde{\theta}(t,j+1) =θ^(t,j+1)λε(θ^(t,j+1),τ1(t,j+1))\displaystyle=\hat{\theta}(t,j+1)-\lambda_{\varepsilon}(\hat{\theta}(t,j+1),\tau_{1}(t,j+1))
G((θ^(t,j)+δ2𝔹)D)+δ𝔹\displaystyle\in G\left(\left(\hat{\theta}(t,j)+{\frac{\delta}{2}}\mathbb{B}\right)\cap D\right)+{\delta}\mathbb{B}
=G((θ~(t,j)+λε(θ^(t,j),τ1(t,j))+δ2𝔹)D)\displaystyle=G\Big{(}\Big{(}\tilde{\theta}(t,j)+\lambda_{\varepsilon}(\hat{\theta}(t,j),\tau_{1}(t,j))+{\frac{\delta}{2}}\mathbb{B}\Big{)}\cap D\Big{)}
+δ𝔹Gδ(θ~(t,j)),\displaystyle~{}~{}~{}+{\delta}\mathbb{B}\subset G_{{\delta}}(\tilde{\theta}(t,j)),

where we also used (66) and the definition of the inflated jump map GδG_{{\delta}} in (2d). On the other hand, for each j0j\in\mathbb{Z}_{\geq 0} such that Ij={t:(t,j)dom(θ~)}I_{j}=\{t:\,(t,j)\in\text{dom}\,(\tilde{\theta})\} has a nonempty interior, we must have:

\displaystyle\theta(t,j) =\tilde{\theta}(t,j)+\gamma_{\varepsilon}(\theta(t,j),\tau(t,j))+\lambda_{\varepsilon}(\hat{\theta}(t,j),\tau_{1}(t,j))
\displaystyle=\hat{\theta}(t,j)+\gamma_{\varepsilon}(\theta(t,j),\tau(t,j))\in C\cap K, (68)

and therefore, due to (55), the inclusion in (68), and the definition of the inflated flow set (2a):

θ^(t,j)Cδ/2KCδ/2K~CδK~,tIj.\hat{\theta}(t,j)\in C_{{\delta/2}}\cap K\subset C_{{\delta/2}}\cap\tilde{K}\subset C_{\delta}\cap\tilde{K},~{}~{}~{}\forall~{}t\in I_{j}. (69)

Since σ~\tilde{\sigma} is Lipschitz continuous in all arguments due to Lemma 4, it follows that θ~(,j)\tilde{\theta}(\cdot,j) is locally absolutely continuous and satisfies:

θ~˙(t,j)=θ^˙(t,j)dλε(θ^(t,j),τ1(t,j))dt,for a.a.tIj.\dot{\tilde{\theta}}(t,j)=\dot{\hat{\theta}}(t,j)-\frac{d\lambda_{\varepsilon}(\hat{\theta}(t,j),\tau_{1}(t,j))}{dt},~{}~{}\text{for~{}a.a.}~{}~{}t\in I_{j}.

Using the definition of λε\lambda_{\varepsilon} in (49), as well as (69), item (a) of Lemma 4, and (36c), we obtain:

λ˙ε\displaystyle\dot{\lambda}_{\varepsilon} (εθ^σ~(θ^,τ1)θ^˙+ετ1σ~(θ^,τ1)τ˙1,0)\displaystyle\in\left(\varepsilon\partial_{\hat{\theta}}\tilde{\sigma}(\hat{\theta},\tau_{1})\dot{\hat{\theta}}+\varepsilon\partial_{\tau_{1}}\tilde{\sigma}(\hat{\theta},\tau_{1})\dot{\tau}_{1},0\right)
=(εθ^σ~(θ^,τ1)θ^˙+τ1σ~(θ^,τ1),0)\displaystyle=\left(\varepsilon\partial_{\hat{\theta}}\tilde{\sigma}(\hat{\theta},\tau_{1})\dot{\hat{\theta}}+\partial_{\tau_{1}}\tilde{\sigma}(\hat{\theta},\tau_{1}),0\right)
=(εθ^σ~(θ^,τ1)θ^˙+h¯(θ^,τ1)f¯(θ^),0).\displaystyle=\left(\varepsilon\partial_{\hat{\theta}}\tilde{\sigma}(\hat{\theta},\tau_{1})\dot{\hat{\theta}}+\bar{h}(\hat{\theta},\tau_{1})-\bar{f}(\hat{\theta}),0\right).

Therefore, using (65) we have that for almost all tIjt\in I_{j}:

θ~˙(t,j)\displaystyle\dot{\tilde{\theta}}(t,j) (f¯(θ^(t,j))Φ(z(t,j)))+(p(θ(t,j),θ^(t,j),τ(t,j))0),\displaystyle\in\left(\!\!\begin{array}[]{c}\bar{f}\big{(}\hat{\theta}(t,j)\big{)}\\ \Phi\left(z(t,j)\right)\end{array}\!\!\right)+\left(\!\!\!\begin{array}[]{c}p\left(\theta(t,j),\hat{\theta}(t,j),\tau(t,j)\right)\\ 0\end{array}\!\!\!\right), (74)

where the last term can be written in compact form as

p(θ,θ^,τ)\displaystyle p(\theta,\hat{\theta},\tau) =εθ^σ~(θ^,τ1)θ^˙+k=12εkαk+2(θ,τ)\displaystyle=\varepsilon\partial_{\hat{\theta}}\tilde{\sigma}(\hat{\theta},\tau_{1})\dot{\hat{\theta}}+\sum_{k=1}^{2}\varepsilon^{k}\alpha_{k+2}(\theta,\tau)
+h¯(θ^+γε(θ,τ),τ1)h¯(θ^,τ1).\displaystyle~{}~{}~{}~{}~{}~{}~{}+\bar{{h}}\left(\hat{\theta}+\gamma_{\varepsilon}(\theta,\tau),\tau_{1}\right)-\bar{h}(\hat{\theta},\tau_{1}).

Using (66), (68), (69) and Lemma 2, we have:

\displaystyle\left\lvert\bar{h}\left(\hat{\theta}+\gamma_{\varepsilon}(\theta,\tau),\tau_{1}\right)-\bar{h}\left(\hat{\theta},\tau_{1}\right)\right\rvert\leq L_{\bar{h}}\lvert\gamma_{\varepsilon}(\theta,\tau)\rvert, (75a)
\displaystyle\left\lvert\bar{{h}}(\theta,\tau_{1})\right\rvert\leq M_{\bar{h}}, (75b)

for all t\in I_{j} and all \varepsilon\in(0,\varepsilon^{*}), where we omitted the time arguments to simplify notation. Using \hat{\theta}=(\hat{x},z), we have that

θ^σ~(θ^,τ1)θ^˙\displaystyle\partial_{\hat{\theta}}\tilde{\sigma}(\hat{\theta},\tau_{1})\dot{\hat{\theta}} =x^σ~(θ^,τ1)x^˙+zσ~(θ^,τ1)z˙\displaystyle=\partial_{\hat{x}}\tilde{\sigma}(\hat{\theta},\tau_{1})\dot{\hat{x}}+\partial_{z}\tilde{\sigma}(\hat{\theta},\tau_{1})\dot{z}
x^σ~(θ^,τ1)(h¯(θ,τ1)+k=12εkαk+2(θ,τ))\displaystyle\subset\partial_{\hat{x}}\tilde{\sigma}(\hat{\theta},\tau_{1})\left(\bar{h}(\theta,\tau_{1})+\sum_{k=1}^{2}\varepsilon^{k}\alpha_{k+2}(\theta,\tau)\right)
+zσ~(θ^,τ1)Φ(z),\displaystyle~{}~{}~{}~{}~{}~{}+\partial_{z}\tilde{\sigma}(\hat{\theta},\tau_{1})\Phi(z),

where the inclusion follows from (65), and the \alpha_{i} are defined in (39), (40), (41), and (42). Since \tilde{\sigma} is globally Lipschitz continuous by Lemma 4, it follows that \partial_{\hat{x}}\tilde{\sigma}\subset\bar{L}\mathbb{B} and \partial_{z}\tilde{\sigma}\subset\bar{L}\mathbb{B}. Using (68), (69), items (c), (e) and (f) of Lemma 4, as well as (75b), we obtain for all t\in I_{j} and all \varepsilon\in(0,\varepsilon^{*}):

θ^σ~(θ^(t,j),τ1(t,j))θ^˙(t,j)\displaystyle\partial_{\hat{\theta}}\tilde{\sigma}(\hat{\theta}(t,j),\tau_{1}(t,j))\dot{\hat{\theta}}(t,j) nL¯Ms𝔹,\displaystyle\subset\sqrt{n}\bar{L}M_{s}\mathbb{B}, (76)

where M_{s}:=M_{\bar{h}}+M_{3}+M_{4}+M_{z}, the constant M_{z} comes from the proof of items (e)-(f) in Lemma 4, and we used the fact that \varepsilon<1. Therefore, using the above bounds, as well as (53) to bound (75a), we conclude that for all t\in I_{j} and all \varepsilon\in(0,\varepsilon^{*}): p(\theta(t,j),\hat{\theta}(t,j),\tau(t,j))\subset\varepsilon\widetilde{M}\mathbb{B}\subset{\delta}\mathbb{B}, where \widetilde{M}=\sqrt{n}\bar{L}M_{s}+M_{3}+M_{4}+\frac{\delta L_{\bar{h}}}{2}, and we used the fact that \delta<1 and the choice of \varepsilon. Using (55), (66), and (74), we conclude that for almost all t\in I_{j}:

θ~˙(t,j)\displaystyle\dot{\tilde{\theta}}(t,j) (f¯(θ~(t,j)+δ2𝔹)Φ(z(t,j)))+δ𝔹Fδ(θ~(t,j)),\displaystyle\in\left(\begin{array}[]{c}\bar{f}\left(\tilde{\theta}(t,j)+{\frac{\delta}{2}}\mathbb{B}\right)\\ \Phi(z(t,j))\end{array}\right)+{\delta}\mathbb{B}\subset F_{\delta}(\tilde{\theta}(t,j)),

where FδF_{\delta} is the inflated average flow map in (32). It follows that θ~\tilde{\theta} is (T,ρ/2)(T,\rho/2)-close to some solution θ¯\bar{\theta} of the average system (15), with θ¯(0,0)K0\bar{\theta}(0,0)\in K_{0}. Using (44), (66), and the fact that (67) also holds with δ=ρ2\delta=\frac{\rho}{2} due to the fact that ε<ε4\varepsilon<\varepsilon_{4}, we conclude that θ\theta is (T,ρ)(T,{\rho})-close to θ¯\bar{\theta}.

Step 4: Removing KK from the HDS (43). We now study the properties of the solutions to the unrestricted system (10) from K0K_{0}, based on the properties of the solutions of the restricted system (43) initialized also in K0K_{0}. Let (θ,τ)(\theta,\tau) be a solution of (10) starting in K0K_{0}. We consider two scenarios:

  (a) For all (t,j)\in\text{dom}(\theta,\tau) such that t+j\leq T we have \theta(t,j)\in K. Then, it follows that \theta is (T,\rho)-close to \bar{\theta}.

  (b) There exists (t,j)\in\text{dom}(\theta,\tau) such that \theta(s,k)\in K for all (s,k)\in\text{dom}(\theta,\tau) with s+k\leq t+j, and either:

    (1) there exists a sequence \{\ell_{i}\}_{i\in\mathbb{Z}_{\geq 0}} satisfying \lim_{i\to\infty}\ell_{i}=t such that (\ell_{i},j)\in\text{dom}(\theta,\tau) and \theta(\ell_{i},j)\notin K for each i, or

    (2) (t,j+1)\in\text{dom}(\theta,\tau) and \theta(t,j+1)\notin K.

    Then, the solution (\theta,\tau) agrees with a solution to (43) up to hybrid time (t,j), which implies that \theta(t,j)\in\mathcal{R}_{T}(K_{0})+\rho\mathbb{B}. However, since \rho\in(0,1) and due to the construction of K in (33), the set \mathcal{R}_{T}(K_{0})+\rho\mathbb{B} is contained in the interior of K, so neither of the two cases above can occur.

Since item (b) cannot occur, we obtain the desired result. \blacksquare
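The (T,\rho)-closeness established above can be observed numerically in the simplest non-hybrid special case: a scalar Lie-bracket extremum-seeking system of the type studied in [16], simulated against its Lie-bracket average. The following sketch is illustrative only; the cost J(x)=(x-1)^2, the gains k=\alpha=1, and the frequency \omega=2000 are hypothetical choices for demonstration, not quantities appearing in the proof.

```python
import math

# Illustrative sketch (cf. [16]): scalar Lie-bracket extremum seeking
#   dx/dt = sqrt(alpha*omega)*cos(omega*t) - k*sqrt(alpha*omega)*J(x)*sin(omega*t),
# whose Lie-bracket average is dz/dt = -(k*alpha/2)*J'(z).
# J, k, alpha, and omega below are hypothetical choices for demonstration.

J = lambda x: (x - 1.0) ** 2           # cost with minimizer x* = 1
dJ = lambda z: 2.0 * (z - 1.0)         # gradient of J

k, alpha, omega = 1.0, 1.0, 2000.0
amp = math.sqrt(alpha * omega)

def f(t, x):                           # high-frequency oscillatory vector field
    return amp * math.cos(omega * t) - k * amp * J(x) * math.sin(omega * t)

def rk4_step(t, x, h, rhs):            # classical Runge-Kutta 4 step
    k1 = rhs(t, x)
    k2 = rhs(t + h / 2, x + h / 2 * k1)
    k3 = rhs(t + h / 2, x + h / 2 * k2)
    k4 = rhs(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

T = 5.0
h = 2 * math.pi / omega / 40           # ~40 steps per oscillation period
x, z, t = 3.0, 3.0, 0.0                # same initial condition for both systems
avg_rhs = lambda s, y: -(k * alpha / 2) * dJ(y)

max_dev = 0.0
while t < T:
    x = rk4_step(t, x, h, f)           # oscillatory trajectory
    z = rk4_step(t, z, h, avg_rhs)     # averaged trajectory
    t += h
    max_dev = max(max_dev, abs(x - z))

print(max_dev)                         # stays small: (T, rho)-closeness for small 1/omega
print(abs(x - 1.0))                    # x ends near the minimizer of J
```

Increasing \omega (i.e., decreasing \varepsilon) shrinks the maximal deviation between the two trajectories, mirroring the role of \varepsilon^{*} in the statement.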

6.2 Proof of Theorem 2

The proof proceeds in a similar manner to the proofs of Theorem 1 and of [49, Theorem 2].

Let ω:𝒜0\omega:\mathcal{B}_{\mathcal{A}}\rightarrow\mathbb{R}_{\geq 0} be a proper indicator for 𝒜\mathcal{A} on 𝒜\mathcal{B}_{\mathcal{A}}, and let β𝒦\beta\in\mathcal{KL} such that each solution θ¯\bar{\theta} of the averaged HDS (15) starting in 𝒜\mathcal{B}_{\mathcal{A}} satisfies

ω(θ¯(t,j))\displaystyle\omega\left(\bar{\theta}(t,j)\right) β(ω(θ¯(0,0)),t+j),\displaystyle\leq\beta\left(\omega(\bar{\theta}(0,0)),t+j\right), \displaystyle\forall (t,j)domθ¯.\displaystyle(t,j)\in\text{dom}\,\bar{\theta}.

Let K0𝒜K_{0}\subset\mathcal{B}_{\mathcal{A}} be compact, and let

K1\displaystyle K_{1} ={θ𝒜:ω(θ)β(maxθ~K0ω(θ~),0)+1},\displaystyle=\left\{\theta\in\mathcal{B}_{\mathcal{A}}:\,\omega(\theta)\leq\beta\left(\max_{\tilde{\theta}\in K_{0}}\omega(\tilde{\theta}),0\right)+1\right\}, (77)
K\displaystyle K =K1G(K1D).\displaystyle=K_{1}\cup G(K_{1}\cap D). (78)

By construction, and due to the continuity of \omega and \beta and the OSC property of G, the set K is compact and satisfies K\subset\mathcal{B}_{\mathcal{A}}. Let \nu\in(0,1) and observe that, due to the robustness properties of well-posed HDS [21, Lemma 7.20], there exists \delta\in(0,\delta^{*}) such that all solutions \tilde{\theta} to the inflated averaged HDS (32) that start in K_{0}+\delta\mathbb{B} satisfy, for all (t,j)\in\text{dom}\,\tilde{\theta}:

ω(θ~(t,j))\displaystyle\omega(\tilde{\theta}(t,j)) β(ω(θ~(0,0)),t+j)+ν3.\displaystyle\leq\beta\left(\omega(\tilde{\theta}(0,0)),t+j\right)+\frac{\nu}{3}. (79)

Without loss of generality we may assume that \delta<1, and we define \tilde{K}=K+\frac{\delta}{2}\mathbb{B}. Using \tilde{K}, we invoke Lemmas 2 and 3 to generate the constants L_{\bar{h}},L_{\bar{f}},M_{\bar{h}},\bar{L},L_{k},L>0 such that the bounds of Lemma 2 and (37) hold for all \theta,\theta^{\prime}\in\tilde{K}\cap C_{\delta} and all \tau_{1},\hat{\tau}_{1}\in\mathbb{R}_{\geq 0}. Using these constants, we define \varepsilon_{i} as in the proof of Theorem 1, for all i\in\{1,2,3,4\}. Since \omega and \beta are continuous, and \beta(r,\cdot) converges to zero as its second argument grows unbounded, there exists \varepsilon_{5}^{*}\in\left(0,\frac{\delta}{2(L_{1}+L_{2}+\bar{L})}\right) such that, for all \theta\in\tilde{K} and all \tilde{\theta}\in\tilde{K}+\varepsilon_{5}^{*}(L_{1}+L_{2}+\bar{L})\mathbb{B} satisfying |\theta-\tilde{\theta}|\leq\varepsilon_{5}^{*}(L_{1}+L_{2}+\bar{L}), and for all s\geq 0:

ω(θ)\displaystyle\omega(\theta) ω(θ~)+ν3,\displaystyle\leq\omega(\tilde{\theta})+\frac{\nu}{3}, β(ω(θ~),s)\displaystyle\beta(\omega(\tilde{\theta}),s) β(ω(θ),s)+ν3.\displaystyle\leq\beta(\omega({\theta}),s)+\frac{\nu}{3}. (80)

Then, we let \varepsilon^{*}=\min_{k=1,\ldots,5}\varepsilon_{k} and fix \varepsilon\in(0,\varepsilon^{*}). As in the proof of Theorem 1, we define the restricted HDS:

(θ,τ)\displaystyle(\theta,\tau) C|K\displaystyle\in C|_{K} {θ˙Fε(θ,τ1,τ2),τ˙1=ε1,τ˙2=ε2,\displaystyle\begin{cases}~{}~{}~{}~{}\dot{\theta}&\in{F}_{\varepsilon}(\theta,\tau_{1},\tau_{2}),\\ \hphantom{\Big{[}}\begin{array}[]{c}\dot{\tau}_{1}\end{array}\hphantom{\Big{]}}&=\varepsilon^{-1},\\ \hphantom{\Big{[}}\begin{array}[]{c}\dot{\tau}_{2}\end{array}\hphantom{\Big{]}}&=\varepsilon^{-2},\end{cases} (81a)
(θ,τ)\displaystyle(\theta,\tau) D|K\displaystyle\in D|_{K} {θ+G(θ)K,τ1+=τ1,τ2+=τ2,\displaystyle\begin{cases}~{}~{}~{}~{}\theta^{+}&\in G(\theta)\cap K,\\ \hphantom{\Big{[}}\begin{array}[]{c}\tau_{1}^{+}\end{array}\hphantom{\Big{]}}&=\tau_{1},\\ \hphantom{\Big{[}}\begin{array}[]{c}\tau_{2}^{+}\end{array}\hphantom{\Big{]}}&=\tau_{2},\end{cases} (81b)

and we let (\theta,\tau) denote a solution to (81) starting in K_{0}. As in the proof of Theorem 1, using (44), we define, for each (t,j)\in\text{dom}(\theta,\tau), the arc \tilde{\theta}:=\theta-\gamma_{\varepsilon}(\theta,\tau)-\lambda_{\varepsilon}(\theta-\gamma_{\varepsilon}(\theta,\tau),\tau_{1}), which, as shown in the proof of Theorem 1, is a hybrid arc and a solution to the inflated HDS (32), and therefore satisfies (79) for all (t,j)\in\text{dom}\,\tilde{\theta}. We can now use (80) to obtain:

ω(θ(t,j))\displaystyle\omega(\theta(t,j)) ω(θ~(t,j))+ν3β(ω(θ~(0,0)),t+j)+2ν3\displaystyle\leq\omega(\tilde{\theta}(t,j))+\frac{\nu}{3}\leq\beta(\omega(\tilde{\theta}(0,0)),t+j)+\frac{2\nu}{3}
β(ω(θ(0,0)),t+j)+ν,\displaystyle\leq\beta(\omega({\theta}(0,0)),t+j)+\nu, (82)

for all (t,j)\in\text{dom}(\theta). Since \beta\in\mathcal{K}\mathcal{L}, \theta remains in the set K_{\nu}=\big{\{}\theta\in\mathcal{B}_{\mathcal{A}}:\,\omega(\theta)\leq\beta\big{(}\max_{\tilde{\theta}\in K_{0}}\omega(\tilde{\theta}),0\big{)}+\nu\big{\}}, which is compact and satisfies K_{\nu}\subset K_{1} due to the fact that \nu<1. We can now use the same argument as in Step 4 of the proof of Theorem 1 to establish that (82) also holds for every solution of the unrestricted HDS (10) starting in K_{0}. \blacksquare

6.3 Proofs of Propositions 1-3

Proof of Proposition 1: Since \mathcal{Q}_{u}=\emptyset, \eta_{2}>0 and T_{\circ}\geq 0 can be arbitrary. \threfasmp:ex_synchronization_A1 guarantees that the subset \mathcal{S} is UAS for the averaged vector field for each fixed z_{1}\in\mathcal{Q}. Hence, since system (6d) satisfies Assumption 1, and \mathcal{A} is compact, by [21, Thm. 7.12 and Lemma 7.20], the set \mathcal{A} is SGPAS as \eta_{1}\rightarrow 0^{+} with respect to some basin of attraction \mathcal{B}_{\mathcal{A}}. Therefore, by Theorem 2, we conclude that \mathcal{A} is SGPAS as (\varepsilon,\eta_{1})\rightarrow 0^{+} for (10) with respect to \mathcal{B}_{\mathcal{A}}. \blacksquare

Proof of Proposition 2: We show that the average HDS (22) renders the set \mathcal{A} UGAS, so that Theorem 2 can be directly applied. Indeed, consider the Lyapunov function V(\bar{x})=J(\bar{x})-J(x^{*}), and note that for each z_{1}\in\mathcal{Q} we have \nabla V(\bar{x})^{\intercal}\bar{f}_{z_{1}}(\bar{x})=(2-z_{1})\nabla J(\bar{x})^{\intercal}P(\bar{x})\nabla J(\bar{x}). Using the strong convexity of J and the global Lipschitz continuity of \nabla J, we obtain: for z_{1}=1, \nabla V(\bar{x})^{\intercal}\bar{f}_{1}(\bar{x})\leq\frac{2M_{P}L_{J}}{\mu}V(\bar{x}); for z_{1}=2, \nabla V(\bar{x})^{\intercal}\bar{f}_{2}(\bar{x})\leq\frac{2M_{P}L_{J}}{\mu}V(\bar{x}); for z_{1}=3, \nabla V(\bar{x})^{\intercal}\bar{f}_{3}(\bar{x})\leq-2\lambda_{P}\mu V(\bar{x}). By [40, Proposition 3], for every \eta_{1}>0 and every sufficiently small \eta^{*}_{2}>0, the set \mathcal{A}=\{x^{\star}\}\times C_{z} is UGAS for the average HDS (22). Hence, by Theorem 2, \mathcal{A} is SGPAS as \varepsilon\to 0^{+} for the original system (10). \blacksquare
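The mode-wise Lyapunov bounds above are the standard ingredients of an average dwell-time argument: V may grow at rate 2M_{P}L_{J}/\mu in the exploration modes z_{1}\in\{1,2\} but decays at rate 2\lambda_{P}\mu in mode z_{1}=3, so V contracts over a horizon whenever the decaying mode is active a sufficiently large fraction of the time. A minimal numerical sketch of this bookkeeping, with hypothetical rates and a hypothetical periodic mode schedule (not the constants or switching logic of the proposition above):

```python
import math

# Average dwell-time bookkeeping (illustrative): V grows at rate c in the
# "exploration" modes and decays at rate lam in the "descent" mode.
# The rates and the schedule below are hypothetical.
c, lam = 0.5, 2.0

# Periodic schedule of (mode_is_stable, duration): 20% of each cycle in
# growing modes, 80% in the decaying mode.
cycle = [(False, 0.2), (True, 0.8)]

V = 1.0
for _ in range(10):                      # 10 switching cycles
    for stable, dur in cycle:
        rate = -lam if stable else c
        V *= math.exp(rate * dur)        # exact solution of V' = rate * V

# Contraction condition per cycle: lam*T_stable > c*T_unstable
# (here 2.0*0.8 = 1.6 > 0.5*0.2 = 0.1), so V decreases overall.
print(V < 1.0)   # True
```

Per cycle, V is multiplied by exp(c\,T_{\text{unstable}}-\lambda\,T_{\text{stable}}), so the decay condition is exactly the average dwell-time inequality used in results such as [40, Proposition 3].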

Proof of Proposition 3: We show that \mathcal{H}_{2}^{\text{ave}} renders the set \mathcal{A} UGAS. To this end, we consider the Lyapunov function V(\bar{x},\bar{z})=\tilde{J}_{\bar{z}}(\bar{x})-J(x^{\star}), which during flows satisfies \dot{V}=-\sum_{i=1}^{r}\langle b_{i}(\bar{x}),\nabla\tilde{J}_{\bar{z}}(\bar{x})\rangle^{2}\leq 0 for all (\bar{x},\bar{z})\in C. If \dot{V}=0 for some (\bar{x},\bar{z})\in C, then there are three possible cases:

  1. \nabla\tilde{J}_{\bar{z}}(\bar{x})\neq 0, but P(\bar{x})[\nabla\tilde{J}_{\bar{z}}(\bar{x})]=0, which implies that \nabla\tilde{J}_{\bar{z}}(\bar{x})\in(T_{\bar{x}}M)^{\perp} due to item (a) in \threfasmp:mfd_ex_A2. Consequently, we must have \nabla_{M}\tilde{J}_{\bar{z}}(\bar{x})=0; or

  2. \nabla\tilde{J}_{\bar{z}}(\bar{x})=0, which again implies that \nabla_{M}\tilde{J}_{\bar{z}}(\bar{x})=0; or

  3. \nabla\tilde{J}_{\bar{z}}(\bar{x})\neq 0 and P(\bar{x})[\nabla\tilde{J}_{\bar{z}}(\bar{x})]\neq 0, but \langle\nabla\tilde{J}_{\bar{z}}(\bar{x}),P(\bar{x})[\nabla\tilde{J}_{\bar{z}}(\bar{x})]\rangle=0. In that case, we can decompose \nabla\tilde{J}_{\bar{z}}(\bar{x})=\nabla_{M}\tilde{J}_{\bar{z}}(\bar{x})+\nabla_{M}^{\perp}\tilde{J}_{\bar{z}}(\bar{x}), where \nabla_{M}^{\perp}\tilde{J}_{\bar{z}}(\bar{x})\in(T_{\bar{x}}M)^{\perp}, and therefore P(\bar{x})[\nabla\tilde{J}_{\bar{z}}(\bar{x})]=P(\bar{x})[\nabla_{M}\tilde{J}_{\bar{z}}(\bar{x})] and \langle\nabla_{M}\tilde{J}_{\bar{z}}(\bar{x}),P(\bar{x})[\nabla_{M}\tilde{J}_{\bar{z}}(\bar{x})]\rangle=0. However, by definition of P, we have \sum_{i=1}^{r}\langle b_{i}(\bar{x}),\nabla_{M}\tilde{J}_{\bar{z}}(\bar{x})\rangle^{2}=0\implies P(\bar{x})[\nabla_{M}\tilde{J}_{\bar{z}}(\bar{x})]=0, which contradicts the assumption that P(\bar{x})[\nabla\tilde{J}_{\bar{z}}(\bar{x})]=P(\bar{x})[\nabla_{M}\tilde{J}_{\bar{z}}(\bar{x})]\neq 0. Hence, this case cannot happen.

Therefore, V˙=0\dot{V}=0 for some (x¯,z¯)C(\bar{x},\bar{z})\in C implies that MJ~z¯(x¯)=0\nabla_{M}\tilde{J}_{\bar{z}}(\bar{x})=0. Due to item (b) in \threfasmp:mfd_ex_A2 and item (b) in \threfasmp:mfd_ex_A1, and since flows are allowed only in the set CC, which contains no critical points other than in 𝒜\mathcal{A}, it follows that V˙=0\dot{V}=0 if and only if (x¯,z¯)𝒜(\bar{x},\bar{z})\in\mathcal{A}. Next, observe that immediately after jumps we have for all (x¯,z¯)D(\bar{x},\bar{z})\in D: ΔV=V(x¯+,z¯+)V(x¯,z¯)=J~z¯+(x¯)J~z¯(x¯)δ<0\Delta V=V(\bar{x}^{+},\bar{z}^{+})-V(\bar{x},\bar{z})=\tilde{J}_{\bar{z}^{+}}(\bar{x})-\tilde{J}_{\bar{z}}(\bar{x})\leq-\delta<0, since, by definition of the jump map we have z¯+{z𝒬:J~z(x¯)=minz~𝒬J~z~(x¯)}\bar{z}^{+}\in\{z\in\mathcal{Q}:\tilde{J}_{z}(\bar{x})=\min_{\tilde{z}\in\mathcal{Q}}\tilde{J}_{\tilde{z}}(\bar{x})\} and, using the structure of DD,

(x¯,z¯)DJ~z¯(x¯)minz~𝒬J~z~(x¯)δ>0.\displaystyle(\bar{x},\bar{z})\in D\implies\tilde{J}_{\bar{z}}(\bar{x})-\min_{\tilde{z}\in\mathcal{Q}}\tilde{J}_{\tilde{z}}(\bar{x})\geq\delta>0.

By [42, Theorem 3.19], \mathcal{A} is UGAS for the HDS (15). By Corollary 1, \mathcal{A} is GPAS as \varepsilon\to 0^{+} for the HDS (10). \blacksquare
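The jump mechanism used in this proof, i.e., flow by gradient descent on the active cost \tilde{J}_{\bar{z}} and switch to a pointwise-minimizing index only when the gap exceeds the hysteresis margin \delta, so that every jump decreases V by at least \delta, can be sketched on a toy scalar family of two costs. Everything below (the quadratics, \delta, the step size) is a hypothetical illustration, not the manifold construction of Proposition 3.

```python
# Hysteresis-based switching among a family of cost functions (toy sketch):
# flow by gradient descent on the active J_z; jump to a minimizing index only
# when the gap exceeds delta. Each jump then decreases V by at least delta.
# All functions and constants below are hypothetical.

costs = [lambda x: (x - 1.0) ** 2,            # minimizer at +1
         lambda x: (x + 1.0) ** 2]            # minimizer at -1
grads = [lambda x: 2.0 * (x - 1.0),
         lambda x: 2.0 * (x + 1.0)]

delta, step = 0.5, 0.01
x, z = 2.5, 1                                  # start on the "wrong" cost
jumps = []                                     # V-decrease recorded at each jump

for _ in range(2000):
    gap = costs[z](x) - min(c(x) for c in costs)
    if gap >= delta:                           # jump set: gap >= delta
        V_before = costs[z](x)
        z = min(range(len(costs)), key=lambda i: costs[i](x))
        jumps.append(V_before - costs[z](x))   # decrease across the jump
        continue
    x -= step * grads[z](x)                    # flow: gradient descent on J_z

print(all(d >= delta for d in jumps))          # True: every jump drops V by >= delta
print(abs(x - 1.0) < 0.1)                      # True: converges near the minimizer
```

The margin \delta rules out chattering: between jumps the gap must grow by at least \delta before another switch can occur, mirroring the strict decrease \Delta V\leq-\delta in the proof.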

7 Conclusions

A second-order averaging result is introduced for a class of hybrid dynamical systems that combine differential and difference inclusions. Under regularity conditions, this result establishes the closeness of solutions between the original and average dynamics, enabling semi-global practical asymptotic stability when the average hybrid system has an asymptotically stable compact set. The findings are demonstrated through the analysis and design of various hybrid and highly oscillatory algorithms for model-free control and optimization problems. The theoretical tools developed in this paper also enable the study of systems for which global control Lyapunov functions, as defined in [44], do not exist. For additional applications, we refer the reader to the recent manuscript [1]. Future research directions include incorporating resets into the controllers [31, 39], accounting for stochastic phenomena, developing hybrid source-seeking controllers with multi-obstacle avoidance capabilities [9], and experimentally validating the proposed algorithms.

References

  • [1] M. Abdelgalil and Jorge I. Poveda. Hybrid Minimum-Seeking in Synergistic Lyapunov Functions: Robust Global Stabilization under Unknown Control Directions. arXiv:2408.04882, 2024.
  • [2] Mahmoud Abdelgalil. Higher order averaging for hybrid dynamical systems. https://github.com/maabdelg/HOA-HDS.
  • [3] Mahmoud Abdelgalil, Daniel E Ochoa, and Jorge I Poveda. Multi-time scale control and optimization via averaging and singular perturbation theory: From odes to hybrid dynamical systems. Annual Reviews in Control, 56:100926, 2023.
  • [4] Mahmoud Abdelgalil and Jorge I Poveda. On Lie-bracket averaging for a class of hybrid dynamical systems with applications to model-free control and optimization: Extended manuscript. arXiv preprint arXiv:2308.15732, 2023.
  • [5] Mahmoud Abdelgalil and Haithem Taha. Recursive averaging with application to bio-inspired 3-d source seeking. IEEE Control Systems Letters, 6:2816–2821, 2022.
  • [6] K. B. Ariyur and M. Krstić. Real-Time Optimization by Extremum-Seeking Control. Wiley, 2003.
  • [7] S. Azuma, M. S. Sakar, and G. J. Pappas. Stochastic Source Seeking by Mobile Robots. IEEE Transactions on Automatic Control, 57(9):2308–2321, 2012.
  • [8] Ilario A Azzollini, Nicola Mimmo, and Lorenzo Marconi. An extremum seeking approach to search and rescue operations in avalanches using arva. IFAC-PapersOnLine, 53(2):1627–1632, 2020.
  • [9] M. Benosman and J. I. Poveda. Robust source seeking and formation learning-based controller. US Patent 10,915,108 B2, 2019.
  • [10] Emmanuel Bernuau, Wilfrid Perruquetti, and Emmanuel Moulay. Retraction obstruction to time-varying stabilization. Automatica, 49(6):1941–1943, 2013.
  • [11] P. Casau, R. Cunha, R. G., and C. Silvestre. Hybrid Control for Robust and Global Tracking on Smooth Manifolds. IEEE Transactions on Automatic Control, 65:1870–1885, 2020.
  • [12] Pedro Casau, Ricardo G Sanfelice, and Carlos Silvestre. Robust synergistic hybrid feedback. IEEE Transactions on Automatic Control, 2024.
  • [13] Francis H Clarke, Yuri S Ledyaev, Ronald J Stern, and Peter R Wolenski. Nonsmooth analysis and control theory, volume 178. Springer, 2008.
  • [14] J. Cochran and M. Krstić. Nonholonomic source seeking with tuning of angular velocity. IEEE Trans. on Autom. Control, 54:717–731, 2009.
  • [15] Avik De, Samuel A Burden, and Daniel E Koditschek. A hybrid dynamical extension of averaging and its application to the analysis of legged gait stability. The International Journal of Robotics Research, 37(2-3):266–286, 2018.
  • [16] H. Dürr, M. Stanković, C. Ebenbauer, and K. H. Johansson. Lie bracket approximation of extremum seeking systems. Automatica, 49:1538–1552, 2013.
  • [17] H. Durr, M. Stanković, K. H. Johansson, and C. Ebenbauer. Extremum seeking on submanifolds in the Euclidian space. Automatica, 50:2591–2596, 2014.
  • [18] Hans-Bernd Dürr, Miloš S Stanković, Karl Henrik Johansson, and Christian Ebenbauer. Examples of distance-based synchronization: An extremum seeking approach. In 51st Allerton Conference, pages 366–373, 2013.
  • [19] Felipe Galarza-Jimenez, Jorge I Poveda, Gianluca Bianchin, and Emiliano Dall’Anese. Extremum seeking under persistent gradient deception: A switching systems approach. IEEE Control Systems Letters, 6:133–138, 2021.
  • [20] R. Goebel, R. G. Sanfelice, and A. R. Teel. Hybrid Dynamical Systems. IEEE Control Systems Magazine, 29(2):28–93, 2009.
  • [21] R. Goebel, R. G. Sanfelice, and A. R. Teel. Hybrid Dynamical Systems: Modeling, Stability, and Robustness. Princeton University Press, 2012.
  • [22] Victoria Grushkovskaya, Alexander Zuyev, and Christian Ebenbauer. On a class of generating vector fields for the extremum seeking problem: Lie bracket approximation and stability properties. Automatica, 94:151–160, 2018.
  • [23] J. P. Hespanha and A. S. Morse. Stabilization of switched systems with average dwell-time. 38th IEEE Conference on Decision and Control, pages 2655–2660, 1999.
  • [24] Luigi Iannelli, Karl Henrik Johansson, Ulf T Jönsson, and Francesco Vasca. Averaging of nonsmooth systems using dither. Automatica, 42(4):669–676, 2006.
  • [25] Luigi Iannelli, Karl Henrik Johansson, Ulf T Jönsson, and Francesco Vasca. Subtleties in the averaging of a class of hybrid systems with applications to power converters. Control Eng. Practice, 16(8):961–975, 2008.
  • [26] Luigi Iannelli, Karl Henrik Johansson, UT Jonsson, and Francesco Vasca. Dither for smoothing relay feedback systems. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, 50(8):1025–1035, 2003.
  • [27] H. K. Khalil. Nonlinear Systems. Prentice Hall, 2002.
  • [28] Suad Krilašević and Sergio Grammatico. Learning generalized nash equilibria in monotone games: A hybrid adaptive extremum seeking control approach. Automatica, 151:110931, 2023.
  • [29] R. J. Kutadinata, W. Moase, and C. Manzie. Extremum-Seeking in Singularly Perturbed Hybrid Systems. IEEE Transactions on Automatic Control, 62(6):3014–3020, 2017.
  • [30] Christophe Labar, Christian Ebenbauer, and Lorenzo Marconi. Extremum seeking with intermittent measurements: A lie-brackets approach. IEEE Transactions on Automatic Control, 67(12):6968–6974, 2022.
  • [31] Yaoyu Li and John E Seem. Extremum seeking control with reset control, June 12 2012. US Patent 8,200,344.
  • [32] Daniel Liberzon and Hyungbo Shim. Stability of linear systems with slow and fast time variation and switching. In 2022 IEEE 61st Conference on Decision and Control (CDC), pages 674–678. IEEE, 2022.
  • [33] Marco Maggia, Sameh A Eisa, and Haithem E Taha. On higher-order averaging of time-periodic systems: reconciliation of two averaging techniques. Nonlinear Dynamics, 99:813–836, 2020.
  • [34] C. G. Mayhew. Hybrid Control for Topologically Constrained Systems, Ph.D Dissertation. University of California, Santa Barbara, 2010.
  • [35] Christopher G Mayhew and Andrew R Teel. On the topological structure of attraction basins for differential inclusions. Systems & Control Letters, 60(12):1045–1050, 2011.
  • [36] Daniel E Ochoa and Jorge I Poveda. Momentum-based nash set-seeking over networks via multi-time scale hybrid dynamic inclusions. IEEE Transactions on Automatic Control, 2023.
  • [37] Daniel E Ochoa and Jorge I Poveda. Robust global optimization on smooth compact manifolds via hybrid gradient-free dynamics. Automatica, 171:111916, 2025.
  • [38] J. I. Poveda, M. Benosman, A. R. Teel, and R. G. Sanfelice. Coordinated hybrid source seeking with robust obstacle avoidance in multi-vehicle autonomous systems. IEEE Transactions on Automatic Control, 67(2):706–721, 2022.
  • [39] J. I. Poveda and N. Li. Robust hybrid zero-order optimization algorithms with acceleration via averaging in continuous time. Automatica, 123, 2021.
  • [40] J. I. Poveda and A. R. Teel. A framework for a class of hybrid extremum seeking controllers with dynamic inclusions. Automatica, 76:113–126, 2017.
  • [41] Jan A Sanders, Ferdinand Verhulst, and James Murdock. Averaging methods in nonlinear dynamical systems, volume 59. Springer, 2007.
  • [42] R. Sanfelice. Hybrid Feedback Control. Princeton Uni. Press, 2021.
  • [43] Alexander Scheinker and Miroslav Krstić. Minimum-seeking for clfs: Universal semiglobally stabilizing feedback under unknown control directions. IEEE Transactions on Automatic Control, 58(5):1107–1122, 2012.
  • [44] Alexander Scheinker and Miroslav Krstić. Model-free stabilization by extremum seeking. Springer, 2017.
  • [45] Eduardo D Sontag. Remarks on input to state stability of perturbed gradient flows, motivated by model-free feedback control learning. Systems & Control Letters, 161:105138, 2022.
  • [46] T. Strizic, J. I. Poveda, and A. R. Teel. Hybrid gradient descent for robust global optimization on the circle. 56th IEEE Conference on Decision and Control, pages 2985–2990, 2017.
  • [47] Raik Suttner and Sergey Dashkovskiy. Robustness and averaging properties of a large-amplitude, high-frequency extremum seeking control scheme. Automatica, 136:110020, 2022.
  • [48] Y. Tan, D. Nešić, and I. Mareels. On non-local stability properties of extremum seeking controllers. Automatica, 42(6):889–903, 2006.
  • [49] A. R. Teel and D. Nešić. Averaging Theory for a Class of Hybrid Systems. Dynamics of Continuous, Discrete and Impulsive Systems, 17:829–851, 2010.
  • [50] H. Dürr, M. Krstić, A. Scheinker, and C. Ebenbauer. Extremum seeking for dynamic maps using Lie brackets and singular perturbations. Automatica, 83:91–99, 2017.
  • [51] M. Mueller and R. D’Andrea. Stability and control of a quadrocopter despite the complete loss of one, two, or three propellers. In IEEE International Conference on Robotics and Automation, pages 45–52, 2014.
  • [52] R. Suttner and S. Dashkovskiy. Exponential stability for extremum seeking control systems. IFAC-PapersOnLine, 50(1):15464–15470, 2017.
  • [53] W. Wang, A. Teel, and D. Nešić. Analysis for a class of singularly perturbed hybrid systems via averaging. Automatica, 48(6), 2012.
  • [54] G. Yang and D. Liberzon. A Lyapunov-based small-gain theorem for interconnected switched systems. Systems and Control Letters, 78:47–54, 2015.
  • [55] Chunlei Zhang, Daniel Arnold, Nima Ghods, Antranik Siranosian, and Miroslav Krstic. Source seeking with non-holonomic unicycle without position measurement and with tuning of forward velocity. Systems & control letters, 56(3):245–252, 2007.

Appendix A Proofs of Auxiliary Lemmas 2-4

Proof of Lemma 2: By \threfasmp:A1, ϕ1\phi_{1} is 𝒞1\mathcal{C}^{1} in xx and τ1\tau_{1} and 𝒞0\mathcal{C}^{0} in zz, and therefore so is u1u_{1} since it is the integral of ϕ1\phi_{1} with respect to τ2\tau_{2}. Moreover, ϕ2\phi_{2} is 𝒞0\mathcal{C}^{0} in xx, τ1\tau_{1}, and zz. It follows that ψm\psi_{m} is 𝒞0\mathcal{C}^{0} in xx, τ1\tau_{1}, and zz, since it is the integral with respect to τ2\tau_{2} of terms that involve xϕ1,xu1,ϕ2,ϕ1,\partial_{x}\phi_{1},\,\partial_{x}u_{1},\,\phi_{2},\,\phi_{1}, and u1u_{1}, which are all 𝒞0\mathcal{C}^{0} in xx, τ1\tau_{1}, and zz. Consequently, h¯\bar{h} is 𝒞0\mathcal{C}^{0} in all its arguments. In addition, since f¯\bar{f} is the integral with respect to τ1\tau_{1} of h¯\bar{h}, it follows that f¯\bar{f} is also 𝒞0\mathcal{C}^{0} in all its arguments. The conclusion of the lemma follows from the fact that the set KK is compact and that all functions are periodic in τ1\tau_{1}, and therefore h¯\bar{h} is uniformly Lipschitz continuous and bounded in (KCδ)×0(K\cap C_{\delta})\times\mathbb{R}_{\geq 0}, and the vector field f¯\bar{f} is uniformly Lipschitz continuous on KCδK\cap C_{\delta}. \blacksquare

Proof of Lemma 3: By \threfasmp:A1, \phi_{1} is continuous with respect to \tau_{2}, and is \mathcal{C}^{1} with respect to x and \tau_{1}, i.e., continuously differentiable with a locally Lipschitz continuous differential (the Jacobian matrix [\partial_{x}\phi_{1}\,\,\partial_{\tau_{1}}\phi_{1}]). Since u_{1} is the integral of \phi_{1} with respect to \tau_{2}, it follows that u_{1}, \partial_{x}u_{1}, and \partial_{\tau_{1}}u_{1} are \mathcal{C}^{0} in all arguments. In addition, we note that \phi_{2} is \mathcal{C}^{0} in x and \tau_{1}, and continuous with respect to \tau_{2}. Therefore, u_{2} is the integral with respect to \tau_{2} of terms that are locally Lipschitz continuous in x and \tau_{1}, and continuous in \tau_{2}. Consequently, u_{2} is also \mathcal{C}^{0} in all arguments. Moreover, since all terms are periodic in \tau_{1} and \tau_{2}, we obtain that all terms are globally Lipschitz in \tau_{1} and \tau_{2}. This establishes the inequalities for the maps \sigma_{k} for k\in\{1,2\}. An identical argument establishes the inequality for the map \bar{\sigma}. \blacksquare

Proof of Lemma 4: By invoking [49, Lemma 2], there exist functions $\tilde{\sigma}_{k},\tilde{\sigma}$, constructed using the saturation function as in [49, Lemma 2], such that, due to Lemma 3, the properties in items (a), (b), and (c) hold. To establish item (d), note that since $\tilde{\sigma}_{1}=\sigma_{1}$ on the set $C|_{K,\delta}$, the construction of $\sigma_{1}$ in (36a) and the definition of $u_{1}$ in (13a) directly yield $\partial_{\tau_{2}}u_{1}=\phi_{1}$, which establishes (39). Similarly, since

$\partial_{\tau_{2}}\widetilde{\sigma}_{2}=\phi_{2}+\psi_{m}+\psi_{p}-\bar{h}-\partial_{\tau_{1}}u_{1}-\partial_{\tau_{2}}(\partial_{x}u_{1}\cdot u_{1}),$ (83a)
$\partial_{x}\widetilde{\sigma}_{1}\,\phi_{1}=\partial_{x}u_{1}\cdot\phi_{1},\qquad\partial_{\tau_{1}}\widetilde{\sigma}_{1}=\partial_{\tau_{1}}u_{1},$ (83b)

and since $\partial_{\tau_{2}}(\partial_{x}u_{1}\cdot u_{1})=\partial_{\tau_{2}}(\partial_{x}u_{1})\cdot u_{1}+\partial_{x}u_{1}\cdot\phi_{1}$, and

$\partial_{\tau_{2}}\left(\partial_{x}\int_{0}^{\tau_{2}}\phi_{1}(\bar{\theta},\tau_{1},s_{2})\,ds_{2}\right)=\partial_{\tau_{2}}\left(\int_{0}^{\tau_{2}}\partial_{x}\phi_{1}(\bar{\theta},\tau_{1},s_{2})\,ds_{2}\right)=\partial_{x}\phi_{1},$ (84)

we can combine (83)-(84) to obtain (40). To establish items (e) and (f), let $M_{k}>0$ satisfy $|\phi_{k}(\theta,\tau)|\leq M_{k}$, for $k\in\{1,2\}$ and all $(\theta,\tau)\in C|_{K,\delta}\times\mathbb{R}^{2}_{\geq 0}$; such constants exist due to the continuity of $\phi_{k}$, the compactness of $C|_{K,\delta}$, and the periodicity of $\phi_{k}$ with respect to $\tau$. In addition, since $\Phi$ is LB, there exists $M_{z}>0$ such that $\Phi(C|_{K,\delta})\subset M_{z}\mathbb{B}$. Moreover, since $\widetilde{\sigma}_{1}$ and $\widetilde{\sigma}_{2}$ are globally Lipschitz by construction, the generalized Jacobians $\partial_{z}\widetilde{\sigma}_{1}$ and $\partial_{x}\widetilde{\sigma}_{2}$ are OSC, compact, convex, and bounded [13]. Therefore, for all $(\theta,\tau)\in C|_{K,\delta}\times\mathbb{R}_{\geq 0}^{2}$, all $a_{3}\in\alpha_{3}(\theta,\tau)$, and all $a_{4}\in\alpha_{4}(\theta,\tau)$, we have

$\lvert a_{3}\rvert\leq L_{2}M_{1}+L_{1}M_{z}+L_{1}M_{2}+L_{1}=:M_{3},$
$\lvert a_{4}\rvert\leq L_{2}M_{2}+L_{2}M_{z}=:M_{4},$

which establishes the result. \blacksquare

Appendix B Construction of an adversarial input on $\mathbb{S}^{n}$

Let $N\in\mathbb{N}_{\geq 1}$, and let $\{x_{1},\dots,x_{N}\}\subset\mathbb{S}^{n}$ be a set of points for which there exists $\Delta\in\mathbb{R}_{>0}$ such that the sets $\mathcal{B}_{i}:=\mathbb{S}^{n}\cap(\{x_{i}\}+\Delta\mathbb{B})$ are pairwise disjoint, i.e., $\mathcal{B}_{i}\cap\mathcal{B}_{j}=\emptyset$ for all $i\neq j$. Consider the auxiliary functions $\chi_{j}:\mathbb{R}\rightarrow\mathbb{R}_{\geq 0}$, given by

$\chi_{1}(r):=\begin{cases}\exp\left(-r^{-1}\right)&r>0\\ 0&r\leq 0\end{cases},$
$\chi_{2}(r):=\frac{\chi_{1}(r)}{\chi_{1}(r)+\chi_{1}(1-r)}.$

Using the function $\chi_{2}$, the constant $\Delta$, and an arbitrary choice of $\epsilon\in(0,1)$, we define the smooth bump functions $\varphi_{i}:\mathbb{S}^{n}\rightarrow[0,1]$, for $i\in\{1,\dots,N\}$, by the expression:

$\varphi_{i}(x)=1-\chi_{2}\left(\frac{1-\langle x,x_{i}\rangle-(1-\epsilon)\Delta}{\epsilon\Delta}\right).$ (85)
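As an illustrative numerical sketch (not part of the construction), the functions $\chi_{1}$, $\chi_{2}$, and the bump function (85) can be implemented directly; the center $x_{i}$ and the constants $\Delta$ and $\epsilon$ below are arbitrary choices on $\mathbb{S}^{2}$:

```python
import numpy as np

def chi1(r):
    # chi1(r) = exp(-1/r) for r > 0 and 0 otherwise; smooth on all of R
    return np.exp(-1.0 / r) if r > 0 else 0.0

def chi2(r):
    # Smooth step built from chi1: chi2(r) = 0 for r <= 0 and chi2(r) = 1 for r >= 1
    return chi1(r) / (chi1(r) + chi1(1.0 - r))

def varphi(x, x_i, Delta, eps):
    # Bump function (85): equals 1 when 1 - <x, x_i> <= (1 - eps) * Delta,
    # and vanishes when 1 - <x, x_i> >= Delta
    return 1.0 - chi2((1.0 - np.dot(x, x_i) - (1.0 - eps) * Delta) / (eps * Delta))

x_i = np.array([1.0, 0.0, 0.0])   # arbitrary center on S^2
Delta, eps = 0.5, 0.5             # arbitrary illustrative constants
```

By construction, `varphi(x_i, x_i, Delta, eps)` equals 1, while any `x` with $1-\langle x,x_{i}\rangle\geq\Delta$ (for instance the antipode $-x_{i}$) yields 0.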

It is straightforward to verify that $\varphi_{i}(x)=0$ for all $x\not\in\mathcal{B}_{i}$; in other words, $\text{supp}(\varphi_{i})\subset\mathcal{B}_{i}$, where $\text{supp}(\varphi_{i})=\{x\in\mathbb{S}^{n}:\,\varphi_{i}(x)\neq 0\}$. Moreover, it can be shown that there exists $\mathcal{U}_{i}\subset\text{supp}(\varphi_{i})$ such that $\varphi_{i}(x)=1$ for all $x\in\mathcal{U}_{i}$. Using the above construction, let $f_{e}:\mathbb{S}^{n}\rightarrow T\mathbb{S}^{n}$, with $f_{e}(x)\in T_{x}\mathbb{S}^{n}$, be given by

$f_{e}(x):=\delta_{1}\sum_{i=1}^{N}\varphi_{i}(x)\left(x_{i}-\langle x_{i},x\rangle x\right),$

where $\delta_{1}>0$ is a tuning parameter. It is straightforward to see that the map $f_{e}$ is smooth and that

$\sup_{x\in\mathbb{S}^{n}}|f_{e}(x)|^{2}\leq\delta_{1}^{2}\max_{i\in\{1,\dots,N\}}\sup_{\tilde{x}\in\mathcal{B}_{i}}\left(1-\langle x_{i},\tilde{x}\rangle^{2}\right).$
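The tangency of $f_{e}$ can be checked numerically: each summand $x_{i}-\langle x_{i},x\rangle x$ is the tangential projection of $x_{i}$ at $x$, so $\langle f_{e}(x),x\rangle=0$ on the sphere. The sketch below uses hypothetical data ($N=2$ antipodal centers on $\mathbb{S}^{2}$, arbitrary values of $\delta_{1}$, $\Delta$, $\epsilon$) purely for illustration:

```python
import numpy as np

def chi1(r):
    return np.exp(-1.0 / r) if r > 0 else 0.0

def chi2(r):
    return chi1(r) / (chi1(r) + chi1(1.0 - r))

def varphi(x, x_i, Delta=0.5, eps=0.5):
    # Bump function (85) with arbitrary illustrative constants Delta, eps
    return 1.0 - chi2((1.0 - np.dot(x, x_i) - (1.0 - eps) * Delta) / (eps * Delta))

# Hypothetical data: two antipodal centers on S^2 and an arbitrary gain delta1
centers = [np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])]
delta1 = 2.0

def f_e(x):
    # f_e(x) = delta1 * sum_i varphi_i(x) (x_i - <x_i, x> x); each summand is
    # orthogonal to x, so f_e(x) lies in the tangent space T_x S^n
    return delta1 * sum(varphi(x, x_i) * (x_i - np.dot(x_i, x) * x)
                        for x_i in centers)

rng = np.random.default_rng(0)
xs = rng.standard_normal((200, 3))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)   # random points on S^2
max_tangency_error = max(abs(np.dot(f_e(x), x)) for x in xs)
```

Over the 200 random sphere points, `max_tangency_error` is zero up to floating-point rounding, consistent with $f_{e}(x)\in T_{x}\mathbb{S}^{n}$.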

Define the candidate Lyapunov functions $V_{i}:\mathbb{S}^{n}\rightarrow\mathbb{R}_{\geq 0}$ by $V_{i}(x):=1-\langle x_{i},x\rangle$, and observe that, due to the properties of the functions $\varphi_{i}$, the derivatives of the functions $V_{i}$ along the vector field $f_{e}$ satisfy

$\left\langle\nabla_{\mathbb{S}^{n}}V_{i}(x),f_{e}(x)\right\rangle\leq-\delta_{1}\left(1-\langle x_{i},x\rangle^{2}\right)\leq 0,$

for all $x\in\mathcal{U}_{i}$, and that, for $x\in\mathcal{U}_{i}$, $\left\langle\nabla_{\mathbb{S}^{n}}V_{i}(x),f_{e}(x)\right\rangle=0$ if and only if $x=x_{i}$. In other words, the system $\dot{x}=f_{e}(x)$ locally stabilizes every point in the set $\{x_{1},\dots,x_{N}\}\subset\mathbb{S}^{n}$. Moreover, if $f_{i}:\mathbb{S}^{n}\rightarrow T\mathbb{S}^{n}$, with $f_{i}(x)\in T_{x}\mathbb{S}^{n}$, is another smooth vector field such that

$\left\langle\nabla_{\mathbb{S}^{n}}V_{i}(x),f_{i}(x)\right\rangle<\delta_{3}\left(1-\langle x_{i},x\rangle^{2}\right),$

for some $\delta_{3}>0$ and for all $x\in\mathcal{U}_{i}$, then we have:

$\left\langle\nabla_{\mathbb{S}^{n}}V_{i}(x),f_{e}(x)+f_{i}(x)\right\rangle<-(\delta_{1}-\delta_{3})\left(1-\langle x_{i},x\rangle^{2}\right).$

That is, if $\delta_{3}<\delta_{1}$, then the perturbation $f_{i}(x)$ does not destroy the (local) stability of the point $x_{i}$.
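The Lyapunov decrease on $\mathcal{U}_{i}$ can also be checked numerically. In the sketch below (same hypothetical setup as before: two antipodal centers on $\mathbb{S}^{2}$ and arbitrary $\delta_{1}$, $\Delta$, $\epsilon$), the Riemannian gradient $\nabla_{\mathbb{S}^{n}}V_{i}(x)=-(x_{i}-\langle x_{i},x\rangle x)$ is paired with $f_{e}$ at a point where $\varphi_{1}=1$ and $\varphi_{2}=0$, so the derivative of $V_{1}$ along $f_{e}$ equals $-\delta_{1}(1-\langle x_{1},x\rangle^{2})$:

```python
import numpy as np

def chi1(r):
    return np.exp(-1.0 / r) if r > 0 else 0.0

def chi2(r):
    return chi1(r) / (chi1(r) + chi1(1.0 - r))

def varphi(x, x_i, Delta=0.5, eps=0.5):
    # Bump function (85) with arbitrary illustrative constants Delta, eps
    return 1.0 - chi2((1.0 - np.dot(x, x_i) - (1.0 - eps) * Delta) / (eps * Delta))

centers = [np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])]  # hypothetical
delta1 = 2.0

def f_e(x):
    return delta1 * sum(varphi(x, x_i) * (x_i - np.dot(x_i, x) * x)
                        for x_i in centers)

def grad_V(x, x_i):
    # Riemannian gradient of V_i(x) = 1 - <x_i, x> on the sphere: the tangential
    # projection of the ambient gradient -x_i, i.e. -(x_i - <x_i, x> x)
    return -(x_i - np.dot(x_i, x) * x)

# Point in U_1: here 1 - <x, x_1> = 0.1 <= (1 - eps) * Delta, so varphi_1(x) = 1,
# while varphi_2(x) = 0 since x is far from the antipodal center
x1 = centers[0]
x = np.array([0.9, np.sqrt(1.0 - 0.9**2), 0.0])
lie_derivative = np.dot(grad_V(x, x1), f_e(x))
bound = -delta1 * (1.0 - np.dot(x1, x)**2)
```

At this point `lie_derivative` is strictly negative and matches `bound` up to rounding, in agreement with the Lyapunov inequality used in the construction.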