
Interval centred form for proving stability of non-linear discrete-time systems

Auguste Bourgois
Lab-STICC, ENSTA Bretagne, Brest, France
Forssea Robotics, Paris, France
[email protected]

Luc Jaulin
Lab-STICC, ENSTA Bretagne, Brest, France
[email protected]
Abstract

In this paper, we propose a new approach to prove stability of non-linear discrete-time systems. After introducing the new concept of stability contractor, we show that the interval centred form plays a fundamental role in this context and makes it possible to easily prove asymptotic stability of a discrete system. We then illustrate the principle of our approach through theoretical examples. Finally, we provide two practical applications of our method: proving the stability of a localisation system and that of the trajectory of a robot.

1 Introduction

Proving properties of Cyber Physical Systems (CPS) is an important topic that should be considered when designing reliable systems [3, 8, 21, 29]. Among those properties, stability is often wanted for a dynamical system: achieving stability around a given setpoint in the state space of the latter is one of the aims of control theory. Proving stability of a dynamical system can be done rigorously [12], which is of major importance when applied to real-life systems. Indeed, a stable system is considered safe, since its behaviour is predictable.

Let us recall the definition of stability of a dynamical system. Consider the non-linear discrete-time system

\mathbf{x}_{k+1}=\mathbf{f}(\mathbf{x}_{k}) \qquad (1)

According to Lyapunov’s definition of stability [7, 28], the system (1) is stable if

\forall\varepsilon>0,\ \exists\delta>0,\ \|\mathbf{x}_{0}\|<\delta\implies\forall k\geq 0,\ \|\mathbf{x}_{k}\|<\varepsilon \qquad (2)

The system is asymptotically stable (or Lyapunov stable) if there exists a neighbourhood of $\mathbf{0}$ such that any initial state $\mathbf{x}_{0}$ taken in this neighbourhood yields a trajectory which converges to $\mathbf{0}$:

\exists\delta>0,\ \|\mathbf{x}_{0}\|<\delta\implies\lim_{k\rightarrow\infty}\|\mathbf{x}_{k}\|=0 \qquad (3)

Finally, the system is exponentially stable if, for a given norm $\|\cdot\|$,

\exists\delta>0,\ \exists\alpha\geq 0,\ \exists\beta\geq 0,\ \|\mathbf{x}_{0}\|<\delta\implies\forall k\geq 0,\ \|\mathbf{x}_{k}\|\leq\alpha\|\mathbf{x}_{0}\|e^{-\beta k} \qquad (4)

A classical method to prove stability of a system is to linearise the latter around $\mathbf{0}$ and check that the eigenvalues are inside the unit disk. However, due to the inherent uncertainties of a real-life system's model, no guarantee can be obtained without using interval analysis [26]. The Jury criterion [10] can also be used on the linearised system in this context, but again, interval computation has to be performed to get a proof of stability [24]. Moreover, to our knowledge, none of the existing methods is able to provide values for $\delta$ and $\varepsilon$ as used in Lyapunov's definition. Now, finding values for $\delta$ and $\varepsilon$ is needed in practice, for instance to initialize algorithms which approximate basins of attraction [14, 22, 27] or reachable sets [16].

This paper proposes an original approach to prove Lyapunov stability of a non-linear discrete system, but also to find values for $\delta$ and $\varepsilon$. It uses the centred form, a classical concept in interval analysis [17]. Moreover, it does not need the introduction of any Lyapunov function, Jury criterion, linearisation, or any other classical tool used in control theory.

Section 2 briefly presents interval analysis and the notations used in the rest of this paper. Section 3 introduces the new notion of stability contractor and gives a theorem which explains why this concept is useful for stability analysis. Section 4 recalls the definition of the centred form. It provides a recursive version of the centred form in the case where the function to enclose is the solution of a recurrence equation. It also shows that the centred form can be used to build stability contractors. Section 5 shows that the approach is able to reach a conclusion for a large class of stable systems and provides a procedure for proving stability. Some examples are given to illustrate graphically the properties of the approach. Section 6 applies the method to practical examples: the logistic map, a three-dimensional map and a range-only localisation system. Section 7 concludes the paper and proposes some perspectives.

2 Introduction to interval analysis

The method presented in this paper is based on interval analysis. In short, interval analysis is a field of mathematics where intervals, i.e. connected subsets of $\mathbb{R}$, are used instead of real numbers to perform computations. Doing so makes it possible to enclose all types of uncertainties (from floating-point round-off to modelling errors) of a system and therefore to yield a guaranteed enclosure for the solution of a problem related to that system. In this section, we briefly introduce the notations and important concepts used later in this paper. More details about interval analysis and its applications can be found in [11, 19].

An interval is a set delimited by a lower bound $x^{-}$ and an upper bound $x^{+}$ such that $x^{-}\leq x^{+}$:

[x]=[x^{-},x^{+}]

Intervals can be stacked into vectors, and are thus denoted by

[\mathbf{x}]=\left([x_{1}],[x_{2}],\dots,[x_{n}]\right)

We write $[u,v]^{\times n}$ for the interval vector of size $n$, all the components of which are equal to $[u,v]$.

Vectors of intervals are often called boxes or interval vectors. The set of axis-aligned boxes of $\mathbb{R}^{n}$ is denoted by $\mathbb{IR}^{n}$. Similarly, interval vectors can be concatenated into interval matrices.

Intervals and interval vectors (or matrices) can be multiplied by a real $\lambda$ as follows:

\lambda[x^{-},x^{+}]=\begin{cases}[\lambda x^{-},\lambda x^{+}]&\text{if }\lambda\geq 0\\ [\lambda x^{+},\lambda x^{-}]&\text{if }\lambda<0\end{cases}

We denote by $w([x])$ the width of $[x]$:

w([x])=x^{+}-x^{-}

The width of an interval vector $[\mathbf{x}]$ is given by

w([\mathbf{x}])=\max_{i}\ w([x_{i}])

The absolute value of an interval $[x]$ is

|[x]|=\max\left(|x^{-}|,|x^{+}|\right)

and the norm of an interval vector $[\mathbf{x}]$ is defined as

\|[\mathbf{x}]\|=\max_{i}\ |[x_{i}]|

Later in this paper, we will use the following implication:

\forall[\mathbf{a}],[\mathbf{b}]\in\mathbb{IR}^{n},\ [\mathbf{a}]\subset[\mathbf{b}]\implies\|[\mathbf{a}]\|<\|[\mathbf{b}]\| \qquad (5)

The usual arithmetic operators ($+,-,\times,/$) can be defined over intervals (as well as over boxes and interval matrices). Operations involving intervals, interval vectors and interval matrices can therefore be naturally deduced from their real counterparts.
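These operations can be sketched in a few lines of Python. The following is an illustrative implementation only, with intervals represented as `(lo, hi)` tuples; a rigorous implementation would additionally round lower bounds downwards and upper bounds upwards (outward rounding), which we omit here.

```python
# Minimal interval arithmetic sketch; intervals are (lo, hi) tuples.
# Illustrative only: outward rounding of floating-point results is omitted.

def iadd(x, y):
    """[x] + [y]"""
    return (x[0] + y[0], x[1] + y[1])

def imul(x, y):
    """[x] * [y]: min/max over the four endpoint products."""
    p = [x[0]*y[0], x[0]*y[1], x[1]*y[0], x[1]*y[1]]
    return (min(p), max(p))

def iscale(lam, x):
    """lambda * [x], as defined in the text."""
    return (lam*x[0], lam*x[1]) if lam >= 0 else (lam*x[1], lam*x[0])

def width(x):
    """w([x]) = x+ - x-"""
    return x[1] - x[0]

def iabs(x):
    """|[x]| = max(|x-|, |x+|)"""
    return max(abs(x[0]), abs(x[1]))

def vnorm(xs):
    """Norm of an interval vector: max over component absolute values."""
    return max(iabs(c) for c in xs)
```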

Extending a real function to intervals (and equivalently to interval vectors or matrices) can also be achieved as follows:

f([x])=\{f(x),\ x\in[x]\}

Except in trivial cases, $f([x])$ usually cannot be written as an interval, whence the use of inclusion functions. An inclusion function $[f]$ associated with $f$ yields an interval (or interval vector/matrix) enclosing the set $f([x])$:

f([x])\subset[f]([x])

$[f]$ is said to be minimal if $[f]([x])$ is the smallest interval (or interval vector/matrix) enclosing the set $f([x])$ (see Figure 1). The minimal inclusion function associated with $f$ will be denoted by $[f([x])]$.

Figure 1: Interval function, inclusion function & minimal inclusion function

An inclusion function is said to be natural when it is expressed by replacing its variables and its elementary functions and operators by their interval counterparts.

Example 1.

Consider the function $f:\mathbb{R}^{2}\rightarrow\mathbb{R}$ such that, for $\mathbf{x}=(x_{1},x_{2})\in\mathbb{R}^{2}$,

f(\mathbf{x})=\sin(x_{1})+\exp(x_{2})

The natural inclusion function $[f]$ of $f$ is:

[f]([\mathbf{x}])=[\sin]([x_{1}])+[\exp]([x_{2}])

where $[\sin]$ and $[\exp]$ are the inclusion functions of $\sin$ and $\exp$.
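A minimal sketch of this natural inclusion function, assuming the tuple representation of intervals and ignoring floating-point outward rounding: $\exp$ is increasing, so its inclusion function maps endpoints to endpoints, while the enclosure of $\sin$ must also take into account the critical points lying inside the interval.

```python
import math

# Sketch of the natural inclusion function of f(x) = sin(x1) + exp(x2)
# from Example 1. Outward rounding is omitted, so the enclosures are
# only "almost" rigorous.

def isin(x):
    """Enclosure of sin over [x]: endpoints plus interior critical points."""
    lo, hi = x
    if hi - lo >= 2 * math.pi:
        return (-1.0, 1.0)
    vals = [math.sin(lo), math.sin(hi)]
    # critical points pi/2 + k*pi inside [lo, hi]
    k = math.ceil((lo - math.pi / 2) / math.pi)
    c = math.pi / 2 + k * math.pi
    while c <= hi:
        vals.append(math.sin(c))
        c += math.pi
    return (min(vals), max(vals))

def iexp(x):
    """exp is increasing, so the image of [x] is [exp(lo), exp(hi)]."""
    return (math.exp(x[0]), math.exp(x[1]))

def f_natural(x1, x2):
    """[f]([x]) = [sin]([x1]) + [exp]([x2])"""
    a, b = isin(x1), iexp(x2)
    return (a[0] + b[0], a[1] + b[1])
```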

3 Stability contractor

In this section, we present the concept of stability contractor, a tool that can be used to rigorously prove the stability of a dynamical system. The rigour of the method comes from the use of interval analysis.

The following new definition adapts the definition of a contractor, as given in [4], to stability analysis.

Definition 1.

Consider a box $[\mathbf{x}_{0}]$ of $\mathbb{R}^{n}$. A stability contractor $\boldsymbol{\Psi}:\mathbb{IR}^{n}\rightarrow\mathbb{IR}^{n}$ of rate $\alpha<1$ is an operator which satisfies

\begin{array}{clc}
(i) & [\mathbf{a}]\subset[\mathbf{b}]\implies\boldsymbol{\Psi}([\mathbf{a}])\subset\boldsymbol{\Psi}([\mathbf{b}]) & \text{(monotonicity)}\\
(ii) & \boldsymbol{\Psi}([\mathbf{a}])\subset[\mathbf{a}] & \text{(contractance)}\\
(iii) & \boldsymbol{\Psi}(\mathbf{0})=\mathbf{0} & \text{(equilibrium)}\\
(iv) & \boldsymbol{\Psi}([\mathbf{a}])\subset\alpha\cdot[\mathbf{a}]\implies\forall k\geq 1,\ \boldsymbol{\Psi}^{k}([\mathbf{a}])\subset\alpha^{k}\cdot[\mathbf{a}] & \text{(convergence)}
\end{array} \qquad (6)

for all boxes $[\mathbf{a}],[\mathbf{b}]$ inside $[\mathbf{x}_{0}]$. For $k\geq 1$, $\boldsymbol{\Psi}^{k}$ denotes the iterated function $\boldsymbol{\Psi}\circ\dots\circ\boldsymbol{\Psi}$ ($k$ times). For $k=0$, $\boldsymbol{\Psi}^{0}$ denotes the identity function.

Example 2.

If $[x]$ is an interval, the operator $[x]\mapsto[x]\cap 0.9\cdot[x]$ is a stability contractor whereas the operator $[x]\mapsto 0.9\cdot[x]$ is not.
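This example can be checked numerically. The sketch below, on the box $[1,2]$ which does not contain $0$ (our own choice of box), shows that plain scaling violates the contractance property (6, ii), while the intersected operator satisfies it.

```python
# Numerical illustration of Example 2 on the box [x] = [1, 2]
# (a box that does not contain 0; our own choice).

def iscale(lam, x):
    return (lam*x[0], lam*x[1]) if lam >= 0 else (lam*x[1], lam*x[0])

def iinter(x, y):
    """Intersection of two intervals, or None if it is empty."""
    lo, hi = max(x[0], y[0]), min(x[1], y[1])
    return (lo, hi) if lo <= hi else None

def subset(x, y):
    return y[0] <= x[0] and x[1] <= y[1]

x = (1.0, 2.0)
scaled = iscale(0.9, x)           # 0.9*[x] = [0.9, 1.8]: leaves [1, 2]
contracted = iinter(x, scaled)    # [x] n 0.9*[x] = [1.0, 1.8]: inside [1, 2]
```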

Proposition 1.

If $\boldsymbol{\Psi}$ is a stability contractor of rate $\alpha<1$, then

\boldsymbol{\Psi}([\mathbf{x}])\subset\alpha\cdot[\mathbf{x}]\implies\lim_{k\rightarrow\infty}\boldsymbol{\Psi}^{k}([\mathbf{x}])=\mathbf{0} \qquad (7)
Proof.

Let $[\mathbf{x}]\in\mathbb{IR}^{n}$ and let $\boldsymbol{\Psi}$ denote a stability contractor such that $\boldsymbol{\Psi}([\mathbf{x}])\subset\alpha\cdot[\mathbf{x}]$. Then, according to Equation (6, iv),

\begin{array}{rl}
\boldsymbol{\Psi}([\mathbf{x}])\subset\alpha\cdot[\mathbf{x}] & \implies \forall k\geq 1,\ \boldsymbol{\Psi}^{k}([\mathbf{x}])\subset\alpha^{k}\cdot[\mathbf{x}]\\
 & \implies \lim_{k\rightarrow\infty}\boldsymbol{\Psi}^{k}([\mathbf{x}])=\mathbf{0},\ \text{since }\alpha<1
\end{array}

A consequence of this proposition is that getting a stability contractor allows us to prove stability of a system without performing an infinitely long set-membership simulation. It suffices to find one box $[\mathbf{x}]\ni\mathbf{0}$ such that $\boldsymbol{\Psi}([\mathbf{x}])\subset\alpha\cdot[\mathbf{x}]$ with $\alpha<1$ to conclude that the system is asymptotically stable for all initial states in $[\mathbf{x}]$. In other words, the system is proven to be Lyapunov stable in the neighbourhood $[\mathbf{x}]$.

Now, if one can build a stability contractor $\boldsymbol{\Psi}$ for a given dynamical system, then proving Lyapunov stability of the latter for a given initial condition $[\mathbf{x}]$ comes down to applying Proposition 1. Building such a contractor is addressed in the next section.

4 Centred form of an interval function

The aim of this section is to present a general method for building a stability contractor for discrete dynamical systems. It is based on the concept of centred form, proposed by Moore in [18]. Additionally, it provides an algorithm to compute iteratively the centred form of an iterated function.

Consider a function $\mathbf{f}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}$ with Jacobian matrix $\mathbf{J}(\mathbf{x})=\frac{d\mathbf{f}}{d\mathbf{x}}(\mathbf{x})$. Consider a box $[\mathbf{x}]$ and one point $\bar{\mathbf{x}}$ in $[\mathbf{x}]$. For simplicity, and without loss of generality, we assume that $\bar{\mathbf{x}}=\mathbf{0}\in[\mathbf{x}]$ and that $\mathbf{f}(\mathbf{0})=\mathbf{0}$.

Remark 1.

If this condition is not satisfied, i.e. $\bar{\mathbf{x}}\neq\mathbf{0}$ or $\mathbf{f}(\mathbf{0})\neq\mathbf{0}$, the problem $\mathbf{y}=\mathbf{f}(\mathbf{x}),\ \mathbf{x}\in[\mathbf{x}]$ can be transformed into an equivalent problem satisfying it:

\begin{array}{ccc}
\mathbf{y}=\mathbf{f}(\mathbf{x}) & \Leftrightarrow & \underbrace{\mathbf{y}-\mathbf{f}(\bar{\mathbf{x}})}_{=\mathbf{z}}=-\mathbf{f}(\bar{\mathbf{x}})+\mathbf{f}(\underbrace{\mathbf{x}-\bar{\mathbf{x}}}_{=\mathbf{p}}+\bar{\mathbf{x}})\\
 & \Leftrightarrow & \mathbf{z}=\underbrace{-\mathbf{f}(\bar{\mathbf{x}})+\mathbf{f}(\mathbf{p}+\bar{\mathbf{x}})}_{=\mathbf{g}(\mathbf{p})}
\end{array} \qquad (8)

i.e.

\left\{\begin{array}{ccl}
\mathbf{y} & = & \mathbf{z}+\mathbf{f}(\bar{\mathbf{x}})\\
\mathbf{z} & = & \mathbf{g}(\mathbf{p})\\
\mathbf{p} & = & \mathbf{x}-\bar{\mathbf{x}}
\end{array}\right. \qquad (9)

Now, consider the new problem $\mathbf{z}=\mathbf{g}(\mathbf{p}),\ \mathbf{p}\in[\mathbf{p}]$, where $[\mathbf{p}]=[\mathbf{x}]-\bar{\mathbf{x}}$. Since $\mathbf{g}(\mathbf{0})=\mathbf{0}$ and $\mathbf{0}\in[\mathbf{p}]$, the new problem satisfies the condition stated above.
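This centring can be illustrated on the logistic map used later in Section 6.1. In the sketch below, $\rho=2.4$ and the centring point $\bar{x}$ is taken at the fixed point $1-1/\rho$; both are our own choices for illustration.

```python
# Centring of Remark 1 applied to the logistic map f(x) = rho*x*(1 - x)
# (the map of Section 6.1; rho = 2.4 and the centring point x_bar at the
# fixed point 1 - 1/rho are illustrative choices).

rho = 2.4
x_bar = 1.0 - 1.0 / rho            # fixed point: f(x_bar) = x_bar

def f(x):
    return rho * x * (1.0 - x)

def g(p):
    """Centred map g(p) = f(p + x_bar) - f(x_bar), so that g(0) = 0."""
    return f(p + x_bar) - f(x_bar)
```

Since $\bar{x}$ is a fixed point, $\mathbf{f}(\bar{x})=\bar{x}$, so the centred problem indeed satisfies $g(0)=0$ with $0\in[p]$ whenever $\bar{x}\in[x]$.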

Let us recall the definition of the centred form, as given by [18]:

Definition 2.

The centred form $\mathbf{f}_{c}$ associated with the function $\mathbf{f}$ is given by

[\mathbf{f}_{c}]([\mathbf{x}])=\left([\mathbf{J}]([\mathbf{x}])\right)\cdot[\mathbf{x}] \qquad (10)

where $[\mathbf{J}]$ is the natural extension of $\mathbf{J}$.

Proposition 2.

According to [18], for all $[\mathbf{x}]\in\mathbb{IR}^{n}$,

\mathbf{f}([\mathbf{x}])\subset[\mathbf{f}_{c}]([\mathbf{x}]) \qquad (11)

Additionally, the centred form tends towards the minimal inclusion function when $[\mathbf{x}]$ is sufficiently small:

w\left([\mathbf{f}_{c}]([\mathbf{x}])\right)-w\left([\mathbf{f}([\mathbf{x}])]\right)=o\left(w([\mathbf{x}])\right)\ \text{as}\ w([\mathbf{x}])\rightarrow 0 \qquad (12)

The statements of Proposition 2 are illustrated in Figure 2.

Figure 2: Illustration of the centred form extension of an interval function
Theorem 1.

Consider a function $\mathbf{f}$ with $\mathbf{f}(\mathbf{0})=\mathbf{0}$ and with Jacobian matrix $\mathbf{J}(\mathbf{x})=\frac{d\mathbf{f}}{d\mathbf{x}}(\mathbf{x})$. Denote by $[\mathbf{f}]$ and $[\mathbf{J}]$ the natural inclusion functions of $\mathbf{f}$ and $\mathbf{J}$. The centred form $[\mathbf{f}_{c}^{k}]$ associated with $\mathbf{f}^{k}=\mathbf{f}\circ\mathbf{f}\circ\dots\circ\mathbf{f}$ is given by the following sequence:

\begin{array}{ccl}
[\mathbf{z}](0) & = & [\mathbf{x}]\\
[\mathbf{A}](0) & = & \text{Id}\\
[\mathbf{z}](k) & = & [\mathbf{f}]([\mathbf{z}](k-1))\\
[\mathbf{A}](k) & = & [\mathbf{J}]([\mathbf{z}](k-1))\cdot[\mathbf{A}](k-1)\\
[\mathbf{f}_{c}^{k}]([\mathbf{x}]) & = & [\mathbf{A}](k)\cdot[\mathbf{x}]
\end{array} \qquad (13)
Proof.

Consider a function $\mathbf{f}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}$ such that $\mathbf{f}(\mathbf{0})=\mathbf{0}$, and denote by $\mathbf{J}=\frac{d\mathbf{f}}{d\mathbf{x}}$ its Jacobian matrix. Write $\mathbf{f}^{k}$ for the $k$-th iterate of $\mathbf{f}$. Then for all $\mathbf{x}\in\mathbb{R}^{n}$, the Jacobian matrix of $\mathbf{f}^{k}$ is given by

\dfrac{d\mathbf{f}^{k}}{d\mathbf{x}}(\mathbf{x})=\dfrac{d(\mathbf{f}\circ\mathbf{f}^{k-1})}{d\mathbf{x}}(\mathbf{x})=\dfrac{d\mathbf{f}}{d\mathbf{x}}(\mathbf{f}^{k-1}(\mathbf{x}))\cdot\dfrac{d\mathbf{f}^{k-1}}{d\mathbf{x}}(\mathbf{x}) \qquad (14)

Define $\mathbf{z}(k)=\mathbf{f}^{k}(\mathbf{x})$ and $\mathbf{A}(k)=\frac{d\mathbf{f}^{k}}{d\mathbf{x}}(\mathbf{x})$. Since for all $\mathbf{x}$, $\mathbf{J}(\mathbf{x})=\frac{d\mathbf{f}}{d\mathbf{x}}(\mathbf{x})$, we get

\mathbf{A}(k)=\mathbf{J}(\mathbf{z}(k-1))\cdot\mathbf{A}(k-1) \qquad (15)

Now, since $\mathbf{f}(\mathbf{0})=\mathbf{0}$, it follows that $\mathbf{f}^{k}(\mathbf{0})=\mathbf{0}$. Finally, substituting Equation (14) into Equation (10) yields

[\mathbf{f}_{c}^{k}]([\mathbf{x}])=\left[\dfrac{d\mathbf{f}^{k}}{d\mathbf{x}}\right]([\mathbf{x}])\cdot[\mathbf{x}]=[\mathbf{A}](k)\cdot[\mathbf{x}]

Remark 2.

From Proposition 2, for a given $k$, we have

\lim_{w([\mathbf{x}])\rightarrow 0}\frac{w([\mathbf{f}_{c}^{k}]([\mathbf{x}]))-w([\mathbf{f}^{k}([\mathbf{x}])])}{w([\mathbf{x}])}=0 \qquad (16)

which means that $[\mathbf{f}_{c}^{k}]$ tends towards the minimal inclusion function as $w([\mathbf{x}])\rightarrow 0$. Now, for a given $[\mathbf{x}]$, even a very small one, we generally observe the following:

\lim_{k\rightarrow\infty}\frac{w([\mathbf{f}_{c}^{k}]([\mathbf{x}]))-w([\mathbf{f}^{k}([\mathbf{x}])])}{w([\mathbf{x}])}=\infty \qquad (17)

In other words, the pessimism introduced by the centred form increases with $k$.

Theorem 2.

Consider a function $\mathbf{f}$ with $\mathbf{f}(\mathbf{0})=\mathbf{0}$. If there exist a box $[\mathbf{x}_{0}]\ni\mathbf{0}$ and a real $\alpha<1$ such that $[\mathbf{f}_{c}]([\mathbf{x}_{0}])\subset\alpha\cdot[\mathbf{x}_{0}]$, then the interval operator $\boldsymbol{\Psi}_{\mathbf{f}}([\mathbf{x}])=[\mathbf{f}_{c}]([\mathbf{x}])$ is a stability contractor inside $[\mathbf{x}_{0}]$.

Proof.

Properties (6, i), (6, ii) and (6, iii) of Definition 1 are easily checked. Let us now prove property (6, iv) by induction.

Let $\alpha<1$ and $[\mathbf{x}]\ni\mathbf{0}$. Property (6, iv) states that

[\mathbf{f}_{c}]([\mathbf{x}])\subset\alpha\cdot[\mathbf{x}]\implies\forall k\geq 0,\ [\mathbf{f}_{c}]^{k}([\mathbf{x}])\subset\alpha^{k}\cdot[\mathbf{x}]

where $[\mathbf{f}_{c}]^{k}=[\mathbf{f}_{c}]\circ\dots\circ[\mathbf{f}_{c}]$ ($k$ times). Since $[\mathbf{x}]\subset\alpha^{0}\cdot[\mathbf{x}]$, the property holds for $k=0$.

Let us define the sequence

[\mathbf{w}_{0}]=[\mathbf{x}],\qquad[\mathbf{w}_{k+1}]=[\mathbf{f}_{c}]([\mathbf{w}_{k}])

Now, assume that the property also holds for some $k\geq 0$, i.e.

[\mathbf{w}_{k}]\subset\alpha^{k}\cdot[\mathbf{x}]

Let us show that

[\mathbf{w}_{k+1}]\subset\alpha^{k+1}\cdot[\mathbf{x}]

Indeed,

\begin{array}{ccll}
[\mathbf{w}_{k+1}] & = & [\mathbf{f}_{c}]([\mathbf{w}_{k}])\\
 & = & [\mathbf{J}]([\mathbf{w}_{k}])\cdot[\mathbf{w}_{k}]\\
 & \subset & [\mathbf{J}]([\mathbf{x}])\cdot[\mathbf{w}_{k}] & \text{since $[\mathbf{J}]$ is inclusion monotonic and $[\mathbf{w}_{k}]\subset\alpha^{k}\cdot[\mathbf{x}]\subset[\mathbf{x}]$}\\
 & \subset & [\mathbf{J}]([\mathbf{x}])\cdot(\alpha^{k}\cdot[\mathbf{x}]) & \text{since $[\mathbf{w}_{k}]\subset\alpha^{k}\cdot[\mathbf{x}]$}\\
 & = & \alpha^{k}\cdot[\mathbf{J}]([\mathbf{x}])\cdot[\mathbf{x}]\\
 & = & \alpha^{k}\cdot[\mathbf{f}_{c}]([\mathbf{x}])\\
 & \subset & \alpha^{k}\cdot(\alpha\cdot[\mathbf{x}]) & \text{since $[\mathbf{f}_{c}]([\mathbf{x}])\subset\alpha\cdot[\mathbf{x}]$}\\
 & = & \alpha^{k+1}\cdot[\mathbf{x}]
\end{array}

Theorem 2 shows that the centred form can be used in a very general way to build stability contractors. Other integration methods such as the one proposed by Lohner [15] or the ones based on affine forms [2] do not have this property, even if they usually yield a tighter enclosure of the image set of a function.

5 Proving stability using the centred form

In this section, we show how the centred form can be used to prove stability of non-linear dynamical systems.

5.1 Method

Consider the system described by Equation (1), where $\mathbf{f}(\mathbf{0})=\mathbf{0}$.

In short, the method consists in finding an initial box $[\mathbf{x}]$ in the state space of the system such that the centred form of $\mathbf{f}$ is a stability contractor. This implies that iterating this centred form on that initial box yields a sequence of boxes which converges towards $\mathbf{0}$. Since the centred form is an inclusion function for $\mathbf{f}$, the system itself also converges towards $\mathbf{0}$.

More specifically, if for a given box $[\mathbf{x}]\ni\mathbf{0}$ of width $\delta$ there exist $q\geq 1$ and $\alpha<1$ such that $[\mathbf{f}_{c}^{q}]([\mathbf{x}])\subset\alpha\cdot[\mathbf{x}]$, then $[\mathbf{f}_{c}^{q}]$ is a stability contractor of rate $\alpha$ (according to Theorem 2). Now, since Proposition 1 states that

\lim_{k\rightarrow\infty}[\mathbf{f}_{c}^{q}]^{k}([\mathbf{x}])=\mathbf{0}

and since Proposition 2 asserts that

\forall\mathbf{x}\in[\mathbf{x}],\ \mathbf{f}^{q}(\mathbf{x})\in\mathbf{f}^{q}([\mathbf{x}])\subset[\mathbf{f}_{c}^{q}]([\mathbf{x}])

it follows that

\forall\mathbf{x}\in[\mathbf{x}],\ \lim_{k\rightarrow\infty}(\mathbf{f}^{q})^{k}(\mathbf{x})=\mathbf{0}

which implies asymptotic stability of the system (see Equation (3)).

Furthermore, let us define the sequence

[\mathbf{x}_{k}]=[\mathbf{f}_{c}^{q}]^{k}([\mathbf{x}]) \qquad (18)

Substituting Equation (18) into property (6, iv) and applying Equation (5) yields

\|[\mathbf{x}_{k}]\|\leq\alpha^{k}\|[\mathbf{x}]\|=\|[\mathbf{x}]\|e^{\ln(\alpha)k}=\|[\mathbf{x}]\|e^{-\beta k}

where $\beta=-\ln(\alpha)>0$. This implies exponential stability of the system (see Equation (4)).

Our method is summarized in Algorithm 1. It takes as inputs the function $\mathbf{f}$ describing the system, the initial box $[\mathbf{x}]$, and a maximum number of iterations $N$. The latter is necessary, as stated in Remark 2, to avoid looping indefinitely. Usually, $N$ does not need to be larger than $10$, since the iterated centred form tends to diverge with the number of iterations.

Algorithm 1: Centred form based stability contractor
Require: $\mathbf{f}$, $[\mathbf{x}]$, $N$
Ensure: true if the system is stable, false if undetermined
  $[\mathbf{A}]\leftarrow\mathbf{I}$
  $[\mathbf{z}]\leftarrow[\mathbf{x}]$
  for $i=1$ to $N$ do
    $[\mathbf{A}]\leftarrow[\mathbf{J}]([\mathbf{z}])\cdot[\mathbf{A}]$
    $[\mathbf{z}]\leftarrow[\mathbf{f}]([\mathbf{z}])$
    $[\mathbf{x}_{i}]\leftarrow[\mathbf{A}]\cdot[\mathbf{x}]$
    if $[\mathbf{x}_{i}]\subset[\mathbf{x}]$ then
      return true
    end if
  end for
  return false
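A one-dimensional Python sketch of Algorithm 1 is given below, with intervals as `(lo, hi)` tuples and no outward rounding (illustrative only). The toy system $x_{k+1}=0.5x_k+0.1x_k^2$ and its inclusion functions are our own example, not one from the paper.

```python
# One-dimensional sketch of Algorithm 1 (intervals as (lo, hi) tuples;
# floating-point rounding is ignored, so this is illustrative only).

def imul(x, y):
    """Interval product: min/max over the four endpoint products."""
    p = [x[0]*y[0], x[0]*y[1], x[1]*y[0], x[1]*y[1]]
    return (min(p), max(p))

def subset(x, y):
    return y[0] <= x[0] and x[1] <= y[1]

def stability_contractor(f_incl, J_incl, x, N=10):
    """True if the centred form proves stability on the box x,
    False if undetermined after N iterations."""
    A = (1.0, 1.0)                 # [A] <- identity
    z = x                          # [z] <- [x]
    for _ in range(N):
        A = imul(J_incl(z), A)     # [A] <- [J]([z]) . [A]
        z = f_incl(z)              # [z] <- [f]([z])
        xi = imul(A, x)            # [x_i] <- [A] . [x]
        if subset(xi, x):
            return True
    return False

# Toy system x_{k+1} = 0.5*x_k + 0.1*x_k^2 (our own example), with
# natural inclusion functions for f and its Jacobian J(x) = 0.5 + 0.2*x.
def f_incl(z):
    sq = imul(z, z)
    return (0.5*z[0] + 0.1*sq[0], 0.5*z[1] + 0.1*sq[1])

def J_incl(z):
    return (0.5 + 0.2*z[0], 0.5 + 0.2*z[1])   # J is increasing in x
```

Note that, as in Algorithm 1, the test is plain inclusion $[\mathbf{x}_{i}]\subset[\mathbf{x}]$; to exhibit a rate $\alpha<1$ one would check inclusion in $\alpha\cdot[\mathbf{x}]$, i.e. in the interior of the box.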

It is important to remember that if the previous algorithm yields a false value, that does not necessarily mean that the system is unstable. Indeed, bisecting the initial box [𝐱][\mathbf{x}] into smaller boxes and running the algorithm on them could prove stability of the system in the neighbourhood defined by those boxes.

5.2 Completeness of the method

Now, let us show that whenever a system actually is exponentially stable, our method is able to prove it. Given $\eta>0$, we denote by $\mathcal{B}_{\eta}$ the set of all hypercubes $[\mathbf{x}]$ centred at $\mathbf{0}$ such that $w([\mathbf{x}])\leq\eta$.

Proposition 3.

If the system is exponentially stable around $\mathbf{0}$, then

\exists\eta>0,\ \forall[\mathbf{x}]\in\mathcal{B}_{\eta},\ \exists k>0,\ \exists\alpha<1,\ [\mathbf{f}_{c}^{k}]([\mathbf{x}])\subset\alpha\cdot[\mathbf{x}]
Proof.

Let us assume exponential stability of the system described by $\mathbf{f}$ around $\mathbf{0}$, i.e. there exists a neighbourhood $\mathcal{N}$ of $\mathbf{0}$ such that

\exists\beta,\ 0<\beta<1,\ \exists k>0,\ \forall\mathbf{x}\in\mathcal{N},\ \|\mathbf{f}^{k}(\mathbf{x})\|_{\infty}<\beta\|\mathbf{x}\|_{\infty}

This property translates into

[\mathbf{f}^{k}([\mathbf{x}])]\subset\beta\cdot[\mathbf{x}]

for all cubes $[\mathbf{x}]$ in $\mathcal{N}$ centred at $\mathbf{0}$, where $[\mathbf{f}^{k}([\mathbf{x}])]$ is the smallest box which contains the set $\mathbf{f}^{k}([\mathbf{x}])$. Take one of these cubes $[\mathbf{x}]$ and denote by $\eta$ its width $w([\mathbf{x}])$. If $\eta$ is sufficiently small, the pessimism of the centred form becomes arbitrarily small and we have

\exists\alpha,\ \beta\leq\alpha<1,\ [\mathbf{f}_{c}^{k}]([\mathbf{x}])\subset\alpha\cdot[\mathbf{x}]

6 Application

6.1 Example 1: Proving stability of the logistic map

The aim of this example is to illustrate Section 5.2. The logistic map is a simple recurrence relation whose behaviour can be highly complex or chaotic:

x_{k+1}=\rho\cdot x_{k}\cdot(1-x_{k}) \qquad (19)

The parameter $\rho$ influences the dynamics of the system. For $\rho=2.4$, the system is stable, but shows an oscillating behaviour around its equilibrium point (see Figure 3).

Figure 3: Behaviour of the logistic map for $\rho=2.4$

Note that the equilibrium point is not $\mathbf{0}$; we thus need to centre the problem to apply the stability method described in this paper (see Remark 1).

Figure 4 displays the behaviour of our algorithm for an initial box $[x_{0}]=[0.577,\ 0.585]$. Even though $[x_{1}]$ is not contained in $[x_{0}]$, we have $[x_{2}]\subset[x_{0}]$. We have thus shown that for any initial state chosen in $[x_{0}]$, the trajectory converges towards the stable point.
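This experiment can be reproduced approximately in plain floating point (no outward rounding, so the computation is indicative rather than rigorous). The problem is centred at the fixed point $x^{*}=1-1/\rho$ as in Remark 1, and the initial box $[0.577,\ 0.585]$ from the text is translated accordingly.

```python
# Reproduction sketch of the logistic-map experiment: rho = 2.4, the
# problem centred at the fixed point x* = 1 - 1/rho, and the initial
# box [0.577, 0.585] translated to [p0] = [x0] - x*.

rho = 2.4
xs = 1.0 - 1.0 / rho                       # fixed point: f(x*) = x*

def imul(x, y):
    p = [x[0]*y[0], x[0]*y[1], x[1]*y[0], x[1]*y[1]]
    return (min(p), max(p))

def subset(x, y):
    return y[0] <= x[0] and x[1] <= y[1]

def g_incl(p):
    """Natural inclusion of the centred map g(p) = f(p + x*) - x*."""
    u = (p[0] + xs, p[1] + xs)             # p + x*
    v = (1.0 - u[1], 1.0 - u[0])           # 1 - (p + x*)
    w = imul(u, v)
    return (rho * w[0] - xs, rho * w[1] - xs)

def J_incl(p):
    """J(p) = rho*(1 - 2*(p + x*)), a decreasing function of p."""
    return (rho * (1.0 - 2.0 * (p[1] + xs)),
            rho * (1.0 - 2.0 * (p[0] + xs)))

p0 = (0.577 - xs, 0.585 - xs)              # centred initial box
A, z, proved_at = (1.0, 1.0), p0, None
for i in range(1, 11):
    A = imul(J_incl(z), A)                 # [A] <- [J]([z]) . [A]
    z = g_incl(z)                          # [z] <- [g]([z])
    pi = imul(A, p0)                       # [p_i] <- [A] . [p0]
    if subset(pi, p0):
        proved_at = i                      # inclusion in the initial box
        break
```

Consistently with the text, the inclusion in the initial box is not obtained at the first iterate but at the second one.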

Figure 4: Stability contractor proving stability of the logistic map. The initial box is small, whence the two enlargements symbolised by the connected black squares.

6.2 Example 2: Proving stability of a three-dimensional map

The aim of this example is to illustrate the steps of Algorithm 1 in a higher-dimensional problem.

Let us consider the following discrete system:

\begin{array}{ccl}
\mathbf{x}_{k+1} & = & 0.8\cdot\mathbf{R}\left(\frac{\pi}{6}+x_{1},\ \frac{\pi}{4}+x_{2},\ \frac{\pi}{3}+x_{3}\right)\cdot\mathbf{x}_{k}\\
 & = & \mathbf{f}(\mathbf{x}_{k})
\end{array} \qquad (20)

where $\mathbf{R}(\varphi,\theta,\psi)$ is the rotation matrix parametrized by the roll ($\varphi$, around the $X$ axis), pitch ($\theta$, around $Y$) and yaw ($\psi$, around $Z$) angles.

With our approach, we show that this discrete system is stable, as depicted in Figure 5, which is a projection of the 3-dimensional system across the steps of our algorithm. With $[\mathbf{x}_{0}]=[-\varepsilon,\varepsilon]^{\times 3}$, where $\varepsilon=0.004$, the algorithm needs 3 iterations to get captured inside the initial box. We have thus proved the stability of the system. The blue sets correspond to the images of $[\mathbf{x}_{0}]$ by $\mathbf{f},\ \mathbf{f}^{2},\ \mathbf{f}^{3}$ and have been obtained using a Monte-Carlo method for visualization purposes. The point in the centre corresponds to the origin $\mathbf{0}$.

Figure 5: View in the $(x_{1},x_{2})$-space of the effect of the stability contractor acting on the $(x_{1},x_{2},x_{3})$-space

Considering the shape of the set $\mathbf{f}^{k}([\mathbf{x}_{0}])$, we understand that zonotope-based methods [6, 30] or Lohner integration methods [15, 31] could achieve a stronger convergence. However, these efficient operators cannot be used to prove stability unless we are able to show that they correspond to a stability contractor, which is not the case yet.

6.3 Example 3: Validation of a localisation system

The aim of this example is to illustrate how our method could be applied to prove stability of an existing localisation system, before integrating it in a robot, for example.

Remark 3.

Proving the stability of such a localisation system is important in robotics. Indeed, the commands are usually computed using the robot's state, as estimated by the localisation system. If the latter is not stable, i.e. if it does not converge towards the actual position of the robot, then control is useless. Note that in addition to stability, accuracy and precision are also desirable features for a localisation system; we will not address them here.

We also briefly explain how our method can be coupled with a paving algorithm to characterize the stability region of the localisation system. The latter corresponds to the acceptable set of parameters of the system, in the sense that the system remains stable regardless of the parameters picked in that set.

Consider a range-only localisation system using two landmarks $\mathbf{a},\ \mathbf{b}$, as represented in Figure 6. A static robot is located at position $\mathbf{m}$ and is able to measure the exact distance between itself and the landmarks.

Figure 6: Localisation problem formalisation

More precisely, we assume that we have the following measurements

\left(\begin{array}{c}y_{1}\\ y_{2}\end{array}\right)=\mathbf{h}(\mathbf{m})=\left(\begin{array}{c}(m_{1}-a_{1})^{2}+(m_{2}-a_{2})^{2}\\ (m_{1}-b_{1})^{2}+(m_{2}-b_{2})^{2}\end{array}\right)

Assume also that, at iteration $k$, the robot believes it is at position $\mathbf{p}_{k}$ whereas it actually is at position $\mathbf{m}$. We define the Newton sequence as

\mathbf{p}_{k+1}=\mathbf{p}_{k}+\mathbf{J}_{\mathbf{h}}^{-1}(\mathbf{p}_{k})\cdot(\mathbf{h}(\mathbf{m})-\mathbf{h}(\mathbf{p}_{k}))

where

\mathbf{J}_{\mathbf{h}}(\mathbf{p})=2\left(\begin{array}{cc}p_{1}-a_{1} & p_{2}-a_{2}\\ p_{1}-b_{1} & p_{2}-b_{2}\end{array}\right)

We have

\mathbf{x}_{k+1}=\underbrace{\mathbf{x}_{k}+\mathbf{J}_{\mathbf{h}}^{-1}(\mathbf{x}_{k}+\mathbf{m})\cdot(\mathbf{h}(\mathbf{m})-\mathbf{h}(\mathbf{x}_{k}+\mathbf{m}))}_{\mathbf{f}(\mathbf{x}_{k},\,\mathbf{m})}

where $\mathbf{x}_{k}=\mathbf{p}_{k}-\mathbf{m}$ is the localisation error. We want to show that for $\mathbf{m}$ in a given region of the plane, the sequence defined by $\mathbf{x}_{k+1}=\mathbf{f}(\mathbf{x}_{k},\mathbf{m})$ is stable, i.e. converges to $\mathbf{0}$.
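Before turning to the interval analysis, the Newton sequence itself can be simulated in plain floating point. The landmark positions below match those used later in the text; the true position $\mathbf{m}$ and the initial estimate are our own hypothetical choices, and, unlike the interval-based proof, nothing here is guaranteed.

```python
# Plain floating-point simulation of the Newton localisation sequence.
# Landmarks a, b are those of the text; the true position m and the
# initial estimate are hypothetical choices for illustration.

def h(p, a, b):
    """Squared distances to the two landmarks."""
    return ((p[0] - a[0])**2 + (p[1] - a[1])**2,
            (p[0] - b[0])**2 + (p[1] - b[1])**2)

def newton_step(p, m, a, b):
    """One step p + J_h(p)^{-1} * (h(m) - h(p)), explicit 2x2 inverse."""
    j11, j12 = 2.0*(p[0] - a[0]), 2.0*(p[1] - a[1])
    j21, j22 = 2.0*(p[0] - b[0]), 2.0*(p[1] - b[1])
    det = j11*j22 - j12*j21
    hm, hp = h(m, a, b), h(p, a, b)
    r1, r2 = hm[0] - hp[0], hm[1] - hp[1]
    return (p[0] + ( j22*r1 - j12*r2) / det,
            p[1] + (-j21*r1 + j11*r2) / det)

a, b = (0.0, 0.1), (0.0, -0.1)
m = (1.0, 0.5)                     # true (hypothetical) robot position
p = (1.05, 0.45)                   # initial estimate with a small error
for _ in range(20):
    p = newton_step(p, m, a, b)
err = max(abs(p[0] - m[0]), abs(p[1] - m[1]))
```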

Remark 4.

Of course, this localisation system could be greatly improved. But let us imagine that it has been implemented on a robot and working for years: its performance is known and trusted, and changing it is not possible or not desired. The goal here is to validate the existing system, not to build a new one. The guarantee of stability is a property we can check with our approach.

Now, let us introduce the concept of stability region.

Definition 3.

The sequence 𝐱k+1=𝐟(𝐱k,𝐦)\mathbf{x}_{k+1}=\mathbf{f}(\mathbf{x}_{k},\,\mathbf{m}) parametrized by 𝐦[𝐦]\mathbf{m}\in[\mathbf{m}] is robustly stable if

𝐦[𝐦],limk𝐱k=𝟎\forall\,\mathbf{m}\in[\mathbf{m}],\>\lim_{k\rightarrow\infty}\mathbf{x}_{k}=\mathbf{0} (21)
Definition 4.

We define the stability region as

𝕄={𝐦|ε>0,𝐱0[ε,ε]×n,limk𝐱k=0}\mathbb{M}=\left\{\mathbf{m}\,|\,\exists\varepsilon>0,\,\forall\,\mathbf{x}_{0}\in[-\varepsilon,\varepsilon]^{\times n},\lim_{k\rightarrow\infty}\|\mathbf{x}_{k}\|=0\right\} (22)

To compute an inner approximation of 𝕄\mathbb{M}, we decompose the 𝐦\mathbf{m}-space into small boxes. Take one of these small boxes [𝐦][\mathbf{m}] and a box [𝐱0][\mathbf{x}_{0}] around 𝟎\mathbf{0}. The box [𝐱0][\mathbf{x}_{0}] should be small, but large compared to [𝐦][\mathbf{m}]. If there exists k>0k>0 such that [𝐟ck]([𝐱0],[𝐦])⊂[𝐱0][\mathbf{f}_{c}^{k}]([\mathbf{x}_{0}],\,[\mathbf{m}])\subset[\mathbf{x}_{0}], then the system is stable for every 𝐦∈[𝐦]\mathbf{m}\in[\mathbf{m}], i.e. [𝐦]⊂𝕄[\mathbf{m}]\subset\mathbb{M}.
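The structure of the inclusion test can be sketched in a few lines. The sketch below uses a toy scalar map f(x) = 0.6x + 0.1x² and the natural interval extension instead of the centred form (which is tighter); both the map and the box [x0] = [−0.1, 0.1] are illustrative assumptions:

```python
# Minimal interval arithmetic for the inclusion test [f^k]([x0]) ⊂ [x0].
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def scale(self, c):
        # Multiplication by a real scalar c
        a, b = c * self.lo, c * self.hi
        return Interval(min(a, b), max(a, b))

    def sqr(self):
        # Tight interval square: contains 0 if the interval does
        a, b = self.lo ** 2, self.hi ** 2
        lo = 0.0 if self.lo <= 0.0 <= self.hi else min(a, b)
        return Interval(lo, max(a, b))

    def strictly_inside(self, other):
        # Strict enclosure [self] ⊂ [other]
        return other.lo < self.lo and self.hi < other.hi

def f(x):
    # Natural extension of the assumed toy map f(x) = 0.6 x + 0.1 x^2
    return x.scale(0.6) + x.sqr().scale(0.1)

x0 = Interval(-0.1, 0.1)
x = f(x0)                       # here one iteration (k = 1) suffices
stable = x.strictly_inside(x0)  # True
```

With the natural extension, the strict inclusion proves that [x0] is positively invariant; the centred-form version of the test, as shown in the paper, additionally yields asymptotic stability.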

Consider a situation where the landmarks are at positions 𝐚=(0, 0.1)T,𝐛=(0,−0.1)T\mathbf{a}=\left(0,\>0.1\right)^{\mathrm{T}},\,\mathbf{b}=\left(0,\>-0.1\right)^{\mathrm{T}}. To characterize the set 𝕄\mathbb{M}, we build a paving of the plane with small boxes [𝐦][\mathbf{m}] of width 0.0010.001. Taking ε=w([𝐦])\varepsilon=\sqrt{w([\mathbf{m}])} and [𝐱0]=[−ε,ε]×2[\mathbf{x}_{0}]=[-\varepsilon,\varepsilon]^{\times 2}, we get the results depicted in Figure 7. The green area is proved to be inside 𝕄\mathbb{M}. As a consequence, if the robot is located in the green area and if its initial localisation error is smaller than ε\varepsilon, then the classical Newton method converges towards the actual position of the robot. On the other hand, we are not able to conclude anything about the stability of the localisation algorithm in the red area: reaching a conclusion there could require smaller boxes [𝐦][\mathbf{m}] in the 𝐦\mathbf{m}-space or a smaller initial box [𝐱0][\mathbf{x}_{0}], but it might also be that the localisation system is simply not stable for these values of the parameters.

Refer to caption
Figure 7: Stability region 𝕄\mathbb{M} of the localisation system

6.4 Example 4 : Stability of a cyclic trajectory

This last example illustrates how our method can be applied to a real-life problem: a robot controlled by a state machine is deployed in a lake, and we prove that its trajectory is stable.

6.4.1 Presentation of the problem

Let us consider a robot (e.g. an Autonomous Underwater Vehicle (AUV)) cruising in a closed area (e.g. a lake) at a constant speed. For the sake of simplicity, we consider that the robot moves in a plane. The robot is controlled by an automaton and a heading controller. It also embeds a sensor (e.g. a sonar) to detect whether the shore is close by; in that case, an event is triggered and the automaton changes its state: a new heading command is sent to the controller. Consider for instance the following sequence for the automaton:

  1. Head East for 25 sec.

  2. Head North until reaching the shore.

  3. Head South for 7.5 sec.

  4. Head West until reaching the shore.

  5. Go to 1.

6.4.2 Proving stability of the robot’s trajectory

The goal is to prove that the robot’s trajectory will converge towards a stable cycle.

Assume that the border is modelled by the function

x2=h(x1)=20(1exp(0.25x1))x_{2}=h(x_{1})=20\left(1-\exp\left(-0.25x_{1}\right)\right) (23)

Given the starting position 𝐱=(x1,x2)T\mathbf{x}=(x_{1},x_{2})^{\mathrm{T}} of the robot, the positions reached after each of the four transitions are respectively given by the following functions:

𝐟1(𝐱)\displaystyle\mathbf{f}_{1}\left(\mathbf{x}\right) =(x1+25v,x2)T\displaystyle=\left(x_{1}+25v,\>x_{2}\right)^{\mathrm{T}}
𝐟2(𝐱)\displaystyle\mathbf{f}_{2}\left(\mathbf{x}\right) =(x1,h(x1))T\displaystyle=\left(x_{1},\>h\left(x_{1}\right)\right)^{\mathrm{T}}
𝐟3(𝐱)\displaystyle\mathbf{f}_{3}\left(\mathbf{x}\right) =(x1,x27.5v)T\displaystyle=\left(x_{1},\>x_{2}-7.5v\right)^{\mathrm{T}}
𝐟4(𝐱)\displaystyle\mathbf{f}_{4}\left(\mathbf{x}\right) =(h1(x2),x2)T\displaystyle=\left(h^{-1}(x_{2}),\>x_{2}\right)^{\mathrm{T}}

where vv denotes the cruising speed of the robot. Each of these functions can be modified to take model and sensor uncertainties into account, since the robot is cruising in dead reckoning. These uncertainties typically grow with time as long as the robot performs no exteroceptive measurements.

Let us define the cycle function 𝐟\mathbf{f}, given by Equation (24):

𝐟(𝐱)\displaystyle\mathbf{f}\left(\mathbf{x}\right) =\displaystyle= 𝐟4𝐟3𝐟2𝐟1(𝐱)\displaystyle\mathbf{f}_{4}\circ\mathbf{f}_{3}\circ\mathbf{f}_{2}\circ\mathbf{f}_{1}\left(\mathbf{x}\right) (24)

Proving stability of the cycle comes down to proving that the discrete-time system 𝐱k+1=𝐟(𝐱k)\mathbf{x}_{k+1}=\mathbf{f}\left(\mathbf{x}_{k}\right) is stable.
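As a plain numerical sketch (without the interval uncertainties introduced below), the cycle map 𝐟 can be implemented directly from the border model (23) and the functions 𝐟1,…,𝐟4, with the cruising speed v = 1 m/s of the example; iterating it from a point of the initial box converges quickly to the fixed point of 𝐟, i.e. to the cyclic trajectory:

```python
import math

v = 1.0  # cruising speed (m/s), as in the example

def h(x1):
    """Border model x2 = h(x1) = 20 (1 - exp(-0.25 x1))."""
    return 20.0 * (1.0 - math.exp(-0.25 * x1))

def h_inv(x2):
    """Inverse border model x1 = -4 ln(1 - x2 / 20)."""
    return -4.0 * math.log(1.0 - x2 / 20.0)

def f(x):
    """Cycle map f = f4 ∘ f3 ∘ f2 ∘ f1."""
    x1, x2 = x
    x1 = x1 + 25.0 * v        # f1: head East for 25 sec
    x2 = h(x1)                # f2: head North until reaching the shore
    x2 = x2 - 7.5 * v         # f3: head South for 7.5 sec
    x1 = h_inv(x2)            # f4: head West until reaching the shore
    return (x1, x2)

# Iterate the cycle map from a point of the initial box
x = (4.0, 12.0)
for _ in range(50):
    x = f(x)
```

The interval counterpart of this iteration, with the inflation terms, is what yields the stability proof of this section.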

Figure 8 represents the evolution of [𝐱][\mathbf{x}] over the integration steps, as well as the intermediate steps. The initial box is [𝐱0]=([1.5,6.5],[9.5,15.5])T\left[\mathbf{x}_{0}\right]=\left(\left[1.5,6.5\right],\>\left[9.5,15.5\right]\right)^{\mathrm{T}}, and we model the robot getting lost by adding an uncertainty [𝐮i]=𝐟i(𝐱)−𝐱v⋅[−ε,ε]×2[\mathbf{u}_{i}]=\frac{\left\|\mathbf{f}_{i}\left(\mathbf{x}\right)-\mathbf{x}\right\|}{v}\cdot\left[-\varepsilon,\varepsilon\right]^{\times 2} to each intermediate function 𝐟i\mathbf{f}_{i}, where v=1m/sv=1\>m/s is the cruising speed of the robot and ε=0.05m/s\varepsilon=0.05\>m/s is the speed at which the robot gets lost, i.e. the inflation rate of the box enclosing the robot's estimated position.

Refer to caption
Figure 8: The robot rebounds on the border and follows a cyclic trajectory

Since we have [𝐱1]⊂[𝐱0]\left[\mathbf{x}_{1}\right]\subset\left[\mathbf{x}_{0}\right], we can conclude that the robot, equipped with the sensors described earlier and controlled by the given automaton, will perform a stable cycle in the lake.

7 Conclusion

In this paper, we have proposed a new approach to prove Lyapunov stability of a non-linear discrete-time system in a given neighbourhood [𝐱0][\mathbf{x}_{0}], regardless of the system's uncertainties and of floating-point round-off errors. The principle is to perform a simulation based on the interval centred form, starting from an initial box [𝐱0][\mathbf{x}_{0}], until the current box [𝐱k][\mathbf{x}_{k}] is strictly enclosed in [𝐱0][\mathbf{x}_{0}]. From the properties of the centred form we have established, we are then able to conclude that the system is stable. The method applies to a large class of systems and can be used in arbitrarily large dimensions, which is not the case for methods based on linearisation, which are very sensitive to the dimension of the state vector. Additionally, contrary to existing methods, ours is able to exhibit a neighbourhood of radius δ\delta inside which the system is stable.

The next step is to extend this approach to deal with continuous-time systems described by differential equations. This extension will require the introduction of interval integration methods [5, 23, 25, 31].

The interval community can be decomposed into two worlds:

  • On the one hand, small intervals [20] are used in combination with linearisation methods and are able to propagate efficiently small uncertainties such as those related to round-off errors. They can solve high dimensional problems and do not require any bisections.

  • On the other hand, large intervals are mainly used for global optimization [9], solving equations [13] and characterizing sets [11]. They are used together with contractor techniques, constraint propagation and bisection algorithms.

This work belongs to the first world and makes use of small intervals; this is why the centred form is very efficient and why we do not perform any bisection. In practice, however, the approaches developed by both communities can be used jointly: to solve real-life problems such as approximating basins of attraction, we will have to combine the two methodologies appropriately.

References

  • [2] M. V. A. Andrade, J. L. D. Comba & J. Stolfi (1994): Affine Arithmetic. In: Interval’94, St Petersburg.
  • [3] E. Asarin, T. Dang & A. Girard (2007): Hybridization methods for the analysis of non-linear systems. Acta Informatica 7(43), pp. 451–476, 10.1007/s00236-006-0035-7.
  • [4] G. Chabert & L. Jaulin (2009): A Priori Error Analysis with Intervals. SIAM Journal on Scientific Computing 31(3), pp. 2214–2230, 10.1137/070696982.
  • [5] A. Chapoutot, J. Alexandre Dit Sandretto & O. Mullier (2015): Validated Explicit and Implicit Runge-Kutta Methods. In: Summer Workshop on Interval Methods.
  • [6] C. Combastel (2005): A State Bounding Observer for Uncertain Non-linear Continuous-time Systems based on Zonotopes. In: CDC-ECC ’05, 10.1109/CDC.2005.1583327.
  • [7] I. Fantoni & R. Lozano (2001): Non-linear control for underactuated mechanical systems. Springer-Verlag, 10.1007/978-1-4471-0177-2.
  • [8] G. Frehse (2008): PHAVer: Algorithmic Verification of Hybrid Systems. International Journal on Software Tools for Technology Transfer 10(3), pp. 23–48, 10.1007/978-3-540-31954-2_17.
  • [9] E. R. Hansen (1992): Global Optimization using Interval Analysis. Marcel Dekker, New York, NY.
  • [10] L. Jaulin & J. Burger (1999): Proving stability of uncertain parametric models. Automatica, pp. 627–632, 10.1016/S0005-1098(98)00201-5.
  • [11] L. Jaulin, M. Kieffer, O. Didrit & E. Walter (2001): Applied Interval Analysis, with Examples in Parameter and State Estimation, Robust Control and Robotics. Springer-Verlag, London, 10.1007/978-1-4471-0249-6.
  • [12] Luc Jaulin & Fabrice Bars (2020): Characterizing Sliding Surfaces of Cyber-Physical Systems. Acta Cybernetica 24, pp. 431–448, 10.14232/actacyb.24.3.2020.9.
  • [13] R. B. Kearfott & V. Kreinovich, editors (1996): Applications of Interval Computations. Kluwer, Dordrecht, the Netherlands, 10.1007/978-1-4613-3440-8.
  • [14] M. Lhommeau, L Jaulin & L. Hardouin (2007): Inner and outer approximation of capture basins using interval analysis. ICINCO 2007.
  • [15] R. Lohner (1987): Enclosing the solutions of ordinary initial and boundary value problems. In E. Kaucher, U. Kulisch & Ch. Ullrich, editors: Computer Arithmetic: Scientific Computation and Programming Languages, BG Teubner, Stuttgart, Germany, pp. 255–286.
  • [16] T. Le Mézo, L. Jaulin & B. Zerr (2019): Bracketing backward reach sets of a dynamical system. In: International Journal of Control, 10.1080/00207179.2019.1643910.
  • [17] R. E. Moore (1966): Interval Analysis. Prentice-Hall, Englewood Cliffs, NJ.
  • [18] R. E. Moore (1979): Methods and Applications of Interval Analysis. SIAM, Philadelphia, PA, 10.1137/1.9781611970906.
  • [19] Ramon E Moore, R Baker Kearfott & Michael J Cloud (2009): Introduction to interval analysis. SIAM, 10.1137/1.9780898717716.
  • [20] A. Neumaier (1991): Interval Methods for Systems of Equations. Cambridge University Press, Cambridge, UK, 10.1017/CBO9780511526473.
  • [21] N. Ramdani & N. Nedialkov (2011): Computing Reachable Sets for Uncertain Nonlinear Hybrid Systems using Interval Constraint Propagation Techniques. Nonlinear Analysis: Hybrid Systems 5(2), pp. 149–162, 10.1016/j.nahs.2010.05.010.
  • [22] S. Ratschan & Z. She (2010): Providing a Basin of Attraction to a Target Region of Polynomial Systems by Computation of Lyapunov-Like Functions. SIAM Journal on Control and Optimization 48(7), pp. 4377–4394, 10.1137/090749955.
  • [23] N. Revol, K. Makino & M. Berz (2005): Taylor models and floating-point arithmetic: proof that arithmetic operations are validated in COSY. Journal of Logic and Algebraic Programming 64, pp. 135–154, 10.1016/j.jlap.2004.07.008.
  • [24] J. Rohn (1996): An algorithm for checking stability of symmetric interval matrices. IEEE Trans. Autom. Control 41(1), pp. 133–136, 10.1109/9.481618.
  • [25] S. Rohou, L. Jaulin, L. Mihaylova, F. Le Bars & S. Veres (2019): Reliable robot localization. ISTE Group, 10.1002/9781119680970.
  • [26] S. M. Rump (2001): INTLAB - INTerval LABoratory. In J. Grabmeier, E. Kaltofen & V. Weispfennig, editors: Handbook of Computer Algebra: Foundations, Applications, Systems, Springer-Verlag, Heidelberg, Germany.
  • [27] P. Saint-Pierre (2002): Hybrid kernels and capture basins for impulse constrained systems. In C.J. Tomlin & M.R. Greenstreet, editors: in Hybrid Systems: Computation and Control, 2289, Springer-Verlag, pp. 378–392, 10.1007/3-540-45873-5_30.
  • [28] J.J. Slotine & W. Li (1991): Applied nonlinear control. Prentice Hall, Englewood Cliffs (N.J.). Available at http://opac.inria.fr/record=b1132812.
  • [29] W. Taha & A. Duracz: Acumen: An Open-source Testbed for Cyber-Physical Systems Research. In: CYCLONE’15, 10.1007/978-3-319-47063-4_11.
  • [30] Jian Wan (2007): Computationally reliable approaches of contractive model predictive control for discrete-time systems. PhD dissertation, Universitat de Girona, Girona, Spain.
  • [31] D. Wilczak & P. Zgliczynski (2011): Cr-Lohner algorithm. Schedae Informaticae 20, pp. 9–46, 10.4467/20838476SI.11.001.0287.