
Prescribed-time Control for Linear Systems in Canonical Form Via Nonlinear Feedback

Hefu Ye and Yongduan Song. This work was supported by the National Natural Science Foundation of China under Grants 61991400, 61991403, 61860206008, and 61933012. (Corresponding author: Yongduan Song.) H. F. Ye is with the Chongqing Key Laboratory of Autonomous Systems, Institute of Artificial Intelligence, School of Automation, Chongqing University, Chongqing 400044, China, and also with the Star Institute of Intelligent Systems (SIIS), Chongqing 400044, China (e-mail: [email protected]). Y. D. Song is with the Chongqing Key Laboratory of Autonomous Systems, Institute of Artificial Intelligence, School of Automation, Chongqing University, Chongqing 400044, China (e-mail: [email protected]).
Abstract

For systems in canonical form with nonvanishing uncertainties/disturbances, this work presents an approach to full state regulation within a prescribed time irrespective of initial conditions. By introducing a smooth hyperbolic-tangent-like function, a nonlinear and time-varying state feedback control scheme is constructed, which is further extended to output feedback based prescribed-time regulation by invoking a prescribed-time observer; both schemes are applicable over the entire operational time horizon. In addition to achieving full state regulation within a user-assignable time interval, the proposed method analytically bridges the divide between linear and nonlinear feedback based prescribed-time control, and is able to achieve asymptotic stability, exponential stability, and prescribed-time stability with a unified control structure.

Index Terms:
Nonlinear feedback, prescribed-time stability, output feedback, full state regulation

I Introduction

Finite-time convergence is highly desirable in many real-world automation applications, where the ultimate control goals are to be realized within finite time rather than infinite time, e.g., auto parts assembly, spacecraft rendezvous and docking [1, 2], and proportional navigation guidance [3]. Various approaches to finite-time convergence have been reported in the literature, including finite-time control, fixed-time control, time-synchronized control, predefined-time control, and prescribed-time control. The prototype of the finite-time Lyapunov theory originates from $\dot{V}(x)+kV^{a}(x)\leq 0$, where $V(x)$ is a positive definite function, $a\in(0,1)$, and $k\in\mathbf{R}^{+}$.

As an effort to achieve finite-time stabilization for high-order systems, the homogeneous method, the terminal sliding mode method, and the adding-a-power-integrator method have been successively proposed (see, for instance, [5, 8, 9, 11, 7, 10, 4, 6]), which have greatly promoted the development of finite-time control theory. Since the convergence (settling) time therein depends on initial conditions and design parameters, the notion of fixed-time control was then introduced in [12] and [13], where fractional-order plus odd-order feedback is used, leading to different closed-loop system dynamics, so that the upper bound of the convergence time can be estimated without using initial conditions. However, neither finite-time control nor fixed-time control can achieve state regulation within one unified time. The time-synchronized control scheme proposed in [14] and [15], based on the norm-normalized sign function, is shown to be able to achieve output regulation simultaneously for different initial conditions with a unified control law.

To further alleviate the dependence of the settling time on design parameters, a predefined-time approach is exploited to estimate the upper bound of the convergence time in [16] and [17] by multiplying exponential signals on the basis of fractional power feedback signals. Recently, the notion of prescribed-time control was proposed in [18], which allows the user to assign the convergence time at will, irrespective of initial conditions or any other design parameter, and thus offers a clear advantage over methods that do not. With this concept, three different approaches have been developed, namely, the state transformation approach, the temporal scale transformation approach, and the parametric Lyapunov equation based approach (e.g., [18, 20, 19, 21, 22, 24, 23]). Based on the state transformation approach, distributed consensus control algorithms are studied for multi-agent systems in [25, 27, 26], and a prescribed-time observer based output feedback algorithm is elegantly established for linear systems in [28]. Subsequent works further consider more complex systems, such as stochastic nonlinear systems [29] and LTI systems with input delay [30]. In addition, by using temporal scale transformation, a triangularly stable controller is proposed for the perturbed system in [31], a dynamic high-gain feedback algorithm is established for strict-feedback-like systems in [19], and some distributed algorithms are developed for multi-agent systems in [32, 33, 34]. Based upon the parametric Lyapunov equation, a finite-time controller and a prescribed-time controller are studied for linear systems in [1] and [21], respectively, and then generalized to nonlinear systems in [35].

From a theoretical standpoint, prescribed-time control systems, under some generic design conditions, are capable of tolerating large parametric, structural, and parameterizable disturbance uncertainties on the finite time interval, ensuring the desired control performance in addition to system stability. This property comes from a time-varying function that goes to $\infty$ as $t$ tends to the prescribed time. Different from [18, 25, 22], where the time-varying function is used to scale the coordinate transformations, this paper only introduces the time-varying function into the virtual/actual controller. The advantage of this approach is that a simpler controller results and the control effort is reduced. In addition, to obtain far superior transient performance, we adopt a new feedback scheme in which the regular feedback signal is reshaped into suitable forms by a nonlinear mechanism with a high degree of design freedom.111Some early literature exploits similar ideas to improve control performance. For example, in classical proportional-integral-derivative (PID) control, the feedback signal $x$ is processed by proportional, integral, or derivative actions to construct the classical PI, PD, or PID controllers.

Motivated by the above discussions, this paper revisits the prescribed-time control of high-order systems via a novel nonlinear feedback approach. In Section II, a useful hyperbolic-tangent-like function and a novel lemma are presented. In Section III, we study prescribed-time control for certain LTI systems by using nonlinear and time-varying feedback, where both full state feedback and partial state feedback are considered. Section IV extends the prescribed-time control algorithm to uncertain LTI systems. Section V concludes this paper. The main contributions of this paper are as follows:

  • Both full state and partial state feedback controllers are designed to achieve state regulation within a prescribed time irrespective of initial conditions and any other design parameter. A non-stop running implementation method with the ISS property is proposed for the first time.

  • Unlike most existing solutions that usually use the regular (direct) state feedback, this paper proposes to use the “reshaped” feedback states through the hyperbolic-tangent-like function, so as to establish a nonlinear and time-varying feedback control strategy capable of addressing asymptotic, exponential and prescribed-time control uniformly under certain conditions.

  • For high-order systems with non-parametric uncertainties/disturbances, we propose a prescribed-time sliding mode control scheme, melding attractive stability and robustness features at both the transient and steady-state stages.

Notations: $\mathbf{R}$ is the set of real numbers, and $\mathbf{R}^{+}=\{x\in\mathbf{R}:x>0\}$. For nonzero integers $m$ and $n$, let $0_{m\times n}$ be the $(m,n)$-matrix with zero entries, $J_{n}=((0_{(n-1)\times 1},I_{n-1})^{\top},0_{n\times 1})^{\top}$, and $\mathcal{L}_{n}(\mathbf{a})=\left(0_{n\times(n-1)},\mathbf{a}\right)^{\top}$ where $\mathbf{a}=(a_{1},\cdots,a_{n})^{\top}$. $\ell_{\infty}[0,t_{p})$ denotes $\ell_{\infty}$ on $[0,t_{p})$. Denote by $\mathcal{K}$ the set of class $\mathcal{K}$ functions and by $\mathcal{KL}$ the set of class $\mathcal{KL}$ functions (see Section 4.4 in [37]). For any vector $\mathbf{z}$, we use $\mathbf{z}^{\top}$ and $\|\mathbf{z}\|$ to denote its transpose and Euclidean norm, respectively. $\lim_{t\rightarrow T}f(\cdot)$ denotes the limit of $f(\cdot)$ as $t\rightarrow T$. We denote by $\bullet^{(q)}~(q=0,\cdots,n)$ the $q$-th derivative of $\bullet$, and by $\bullet^{q}$ the $q$-th power of $\bullet$.
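For readers who wish to reproduce the constructions numerically, the following minimal NumPy sketch (the coefficient values are illustrative and hypothetical) assembles $J_{n}$ and $\mathcal{L}_{n}(\mathbf{a})$ for $n=3$ and verifies the identity $J_{n}J_{n}^{\top}+BB^{\top}=I_{n}$ that is used later in the proof of Theorem 1.

```python
import numpy as np

n = 3
a_vec = np.array([2.0, -1.0, 0.5])            # illustrative coefficients a_1, ..., a_n
Jn = np.diag(np.ones(n - 1), 1)               # J_n: ones on the first superdiagonal
Ln = np.zeros((n, n)); Ln[-1, :] = a_vec      # L_n(a): a^T as the last row, zeros elsewhere
A = Jn + Ln                                   # system matrix of the canonical form (1)
B = np.zeros((n, 1)); B[-1, 0] = 1.0

print(A)                                            # companion-type matrix with last row a^T
print(np.allclose(Jn @ Jn.T + B @ B.T, np.eye(n)))  # True: J_n J_n^T + B B^T = I_n
```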

II Preliminaries

II-A Problem Statement

We restrict our analysis to the following system in canonical form with uncertainties/disturbances

{𝐱˙(t)=A𝐱(t)+Bu(t)+D(𝐱,t)y(t)=C𝐱(t)\left\{\begin{array}[]{ll}\begin{aligned} \dot{\mathbf{x}}(t)&=A{\mathbf{x}}(t)+Bu(t)+D(\mathbf{x},t)\\ y(t)&=C{\mathbf{x}}(t)\end{aligned}\end{array}\right. (1)

where $A=J_{n}+\mathcal{L}_{n}(\mathbf{a})$ is the system matrix, $B=[0,\cdots,0,1]^{\top}$ and $C=[b_{0},b_{1},\cdots,b_{n-1}]$ are coefficient vectors, $D=[0,\cdots,0,d(\mathbf{x},t)]^{\top}$ with $d(\mathbf{x},t):\mathbf{R}^{n}\times[0,\infty)\rightarrow\mathbf{R}$ modeling the unknown nonvanishing uncertainties/disturbances of the system, $\mathbf{x}(t)=[x_{1},\cdots,x_{n}]^{\top}\in\mathbf{R}^{n}$ is the vector of system states, and $u(t)\in\mathbf{R}$ is the control input. $(A,B)$ is controllable and $(A,C)$ is observable. The control objective is to design a feedback control $u(t)$ to stabilize (1) within a prescribed time $t_{p}$, i.e., $\mathbf{x}(t)\in\ell_{\infty}[0,t_{p})$ and $\lim_{t\rightarrow t_{p}}\{x_{i}(t)\}_{i=1}^{n}=0$. We are particularly interested in making use of the feedback information $x$ in a nonlinear way to construct the control scheme.

Definition 1

[30] The origin of the system $\dot{\mathbf{x}}=f(\mathbf{x},t)$ is said to be prescribed-time globally uniformly asymptotically stable (PT-GUAS) if there exist a class $\mathcal{KL}$ function $\beta$ and a function $\mu:\left[0,t_{p}\right)\rightarrow\mathbf{R}^{+}$ such that $\mu$ tends to infinity as $t$ goes to $t_{p}$ and, $\forall t\in\left[0,t_{p}\right)$,

𝐱(t)β(x(0),μ(t)),\|\mathbf{x}(t)\|\leq\beta\left(\left\|x\left(0\right)\right\|,\mu\left(t\right)\right),

where tpt_{p} is a time that can be prescribed in the design.

II-B Hyperbolic-tangent-like Function

Instead of using $x$ directly, we process the feedback information $x$ through the following hyperbolic-tangent-like function $h(x):(-\infty,+\infty)\rightarrow\left(-\frac{1}{b},\frac{1}{a}\right)$:

h(x)eaxebxaeax+bebx,0<bah(x)\coloneqq\frac{e^{ax}-e^{-bx}}{ae^{ax}+be^{-bx}},~{}0<b\leq a (2)

where $a$ and $b$ are design parameters; $h(x)$ reduces to the standard hyperbolic tangent function for $a=b=1$. Such nonlinear feedback exhibits two salient properties.

Property 1: The function h(x)h(x) is 𝒞\mathcal{C^{\infty}} on 𝐑\mathbf{R} and h(x)=0h(x)=0 if and only if x=0x=0. Under 0<ba0<b\leq a, the inequality 0|x|h(|x|)xh(x)0\leq|x|h(|x|)\leq xh(x) holds.

Proof: Define a continuous function F(x)=xh(x)|x|h(|x|)F(x)=xh(x)-|x|h(|x|). For x0\forall x\geq 0, we have

F(x)=x(eaxebx)aeax+bebx|x|(ea|x|eb|x|)aea|x|+beb|x|0.F(x)=\frac{x(e^{ax}-e^{-bx})}{ae^{ax}+be^{-bx}}-\frac{|x|(e^{a|x|}-e^{-b|x|})}{ae^{a|x|}+be^{-b|x|}}\equiv 0.

For x<0\forall x<0, it follows that

F(x)=\displaystyle F(x)= x(eaxebx)aeax+bebxx(ebxeax)bebx+aeax\displaystyle\frac{x(e^{ax}-e^{-bx})}{ae^{ax}+be^{-bx}}-\frac{x(e^{bx}-e^{-ax})}{be^{bx}+ae^{-ax}}
=\displaystyle= x(ba)(e(a+b)x+e(a+b)x2)(aeax+bebx)(bebx+aeax)0.\displaystyle\frac{x(b-a)\left(e^{(a+b)x}+e^{-(a+b)x}-2\right)}{\left(ae^{ax}+be^{-bx}\right)\left(be^{bx}+ae^{-ax}\right)}\geq 0.

Thus, under 0<ba0<b\leq a, the inequality F(x)0F(x)\geq 0 holds for x(,+)\forall x\in(-\infty,+\infty), implying that 0|x|h(|x|)xh(x)0\leq|x|h(|x|)\leq xh(x) for x(,+)\forall x\in(-\infty,+\infty). \hfill\blacksquare

Property 2: The function $h(x)$ is strictly monotonically increasing with respect to (w.r.t.) $x$, and its upper and lower bounds are $1/a$ and $-1/b$, respectively. By selecting different design parameters $a$ and $b$, various functions can be obtained from $h(x)$. In particular, if $a$ and $b$ are chosen small enough, then $h(x)\approx x$.

Proof: The upper and lower bounds of h(x)h(x) are

$\lim_{x\rightarrow+\infty}h(x)=\lim_{x\rightarrow+\infty}\frac{e^{ax}-e^{-bx}}{ae^{ax}+be^{-bx}}=\frac{1}{a},~a>0$
$\lim_{x\rightarrow-\infty}h(x)=\lim_{x\rightarrow-\infty}\frac{e^{ax}-e^{-bx}}{ae^{ax}+be^{-bx}}=-\frac{1}{b},~b>0.$

When $a$ and $b$ are sufficiently small, for $x\in(-\infty,+\infty)$, by using L'Hôpital's rule, we have

lima0b0eaxebxaeax+bebx\displaystyle\mathop{\lim}_{a\rightarrow 0\atop b\rightarrow 0}\frac{e^{ax}-e^{-bx}}{ae^{ax}+be^{-bx}} =lima0b0xeax+xebxeax+axeax+ebxbxebx\displaystyle=\mathop{\lim}_{a\rightarrow 0\atop b\rightarrow 0}\frac{xe^{ax}+xe^{-bx}}{e^{ax}+axe^{ax}+e^{-bx}-bxe^{-bx}}
=x\displaystyle=x

implying that the expanded/compressed signal h(x)h(x) reverses back to the regular signal xx. \hfill\blacksquare
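As a quick numerical illustration of Properties 1 and 2 (a minimal sketch assuming NumPy; the parameter values are illustrative), one can evaluate $h(x)$ and check its bounds, its near-identity behavior for small $a,b$, and the inequality $0\leq|x|h(|x|)\leq xh(x)$:

```python
import numpy as np

def h(x, a=1.0, b=1.0):
    """Hyperbolic-tangent-like function (2); reduces to tanh(x) when a = b = 1."""
    return (np.exp(a * x) - np.exp(-b * x)) / (a * np.exp(a * x) + b * np.exp(-b * x))

x = np.linspace(-8.0, 8.0, 9)
print(h(x, a=2.0, b=0.5))            # values confined to (-1/b, 1/a) = (-2, 0.5)
print(h(x, a=0.01, b=0.01))          # nearly the identity map: h(x) ~ x for small a, b
print(np.all(np.abs(x) * h(np.abs(x), 2.0, 0.5) <= x * h(x, 2.0, 0.5) + 1e-12))  # Property 1
```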

Remark 1

The classical finite-time control adopts a fractional power of $x$ (e.g., $x^{1/3}$) to expand the feedback signal $x$ on $x\in[-1,1]$ and compress it on $x\in(-\infty,-1)\cup(1,+\infty)$; the fixed-time control uses an additional nonlinear damping term (e.g., $x^{3}$) on top of the original feedback signal to expand the feedback signal on $(-\infty,+\infty)$. Consequently, different feedback signals lead to different convergence properties, which is mainly reflected in the relationship between the settling time and initial conditions. Since the hyperbolic-tangent-like function $h(x)$ can expand or compress the feedback signal with different $a$ and $b$, it provides the control design with extra flexibility and degrees of freedom. In addition, the right-hand side of (2) remains bounded within $(-1/b,1/a)$ even if $|x|$ grows large; this special property makes $h(x)$ perfectly suitable to function as the core part of the controller, and our motivation for this work partly stems from such appealing features of $h(x)$. Fig. 1 illustrates $h(x)$ with different $a$ and $b$, and the fractional power feedback signals in terms of $x$.

Figure 1: Schematic figure of h(x)=eaxebxaeax+bebxh(x)=\frac{e^{ax}-e^{-bx}}{ae^{ax}+be^{-bx}} and the trajectories of x1/3+x3x^{1/3}+x^{3} and x1/3x^{1/3}.

The interesting feature behind this nonlinear feedback is that it includes the regular (direct) feedback of $x$ as a special case, and it allows linear regular feedback control and nonlinear feedback control to be unified through such a function, providing a variety of ways to make use of $x$ for control development. By using properties of the hyperbolic-tangent-like function, we establish the following lemma, which is crucial to our later technical development.

Lemma 1

Let $\mu(t)=1/(t_{p}-t)$ and let $h(\cdot)$ be given as in (2). For $t\in[0,t_{p})$, if a positive continuously differentiable function $V(t)$ satisfies

V˙(t)kμ(t)h(V),k>1\dot{V}(t)\leq-k\mu(t)h(V),~{}k>1 (3)

where the equality $\dot{V}=-k\mu(t)h(V)$ holds if and only if $V=0$, then we have $V(t)\leq\beta(V_{0},\mu(t))$ with $\beta$ being of class $\mathcal{KL}$. In particular, it holds that

limttpV(t)=0,limttpV˙(t)=0.\lim_{t\rightarrow t_{p}}V(t)=0,~{}\lim_{t\rightarrow t_{p}}\dot{V}(t)=0.

Proof: Consider the analytical expression of (3):

V˙ktpteaVebVaeaV+bebV,\dot{V}\leq\frac{-k}{t_{p}-t}\frac{e^{aV}-e^{-bV}}{ae^{aV}+be^{-bV}}, (4)

Let V=eaVebVV_{*}=e^{aV}-e^{-bV}, where V0=eaV(0)ebV(0){V_{*}}_{0}=e^{aV(0)}-e^{-bV(0)}. Then we have V0V_{*}\geq 0, and

V˙\displaystyle\dot{V}_{*}\leq (aeaV+bebV)ktpteaVebVaeaV+bebV\displaystyle\left(ae^{aV}+be^{-bV}\right)\frac{-k}{t_{p}-t}\frac{e^{aV}-e^{-bV}}{ae^{aV}+be^{-bV}} (5)
\displaystyle\leq kμV\displaystyle-k\mu V_{*}

holds for t[0,tp)t\in[0,t_{p}). Hence we derive Vβ(V0,μ(t))V_{*}\leq\beta({V_{*}}_{0},\mu(t)) according to Lemma 1 in [18], namely V[0,tp)V_{*}\in\ell_{\infty}[0,t_{p}) and limttpV=0\lim_{t\rightarrow t_{p}}V_{*}=0. The same result can be established for (eaVebV)(e^{aV}-e^{-bV}). It follows from the fact VV tends to ++\infty “slower” than (eaVebV)(e^{aV}-e^{-bV}) as V+V\rightarrow+\infty that V(t)[0,tp)V(t)\in\ell_{\infty}[0,t_{p}) and limttpV(t)=0\lim_{t\rightarrow t_{p}}V(t)=0. In addition, the inequality (4) can be transformed into the following form:

V1aln(C1(tpt)k+ebV),t[0,tp)V\leq\frac{1}{a}\ln\left(C_{1}(t_{p}-t)^{k}+e^{-bV}\right),~{}t\in[0,t_{p}) (6)

where C1=(eaV0ebV0)/tpkC_{1}=\left(e^{aV_{0}}-e^{-bV_{0}}\right)/{t_{p}^{k}} is the integral constant. In fact, one can easily verify the following calculations

$\text{(6)}\Rightarrow~e^{aV}-e^{-bV}\leq C_{1}(t_{p}-t)^{k};$ (6.1)
$~~~\Rightarrow~\frac{e^{aV}-e^{-bV}}{t_{p}-t}\leq C_{1}(t_{p}-t)^{k-1};$ (6.2)
$\text{(6.1)}\Rightarrow~\left(ae^{aV}+be^{-bV}\right)\dot{V}\leq-kC_{1}(t_{p}-t)^{k-1};$ (6.3)
$\text{(6.3)}\Rightarrow~\dot{V}\leq\frac{-kC_{1}(t_{p}-t)^{k-1}}{ae^{aV}+be^{-bV}}\leq\frac{-k}{t_{p}-t}\frac{e^{aV}-e^{-bV}}{ae^{aV}+be^{-bV}}.$ (6.4)

Furthermore, from (6.3) we have (a+b)V˙0(a+b)\dot{V}\leq 0 and

V˙kC1(tpt)k1aeaV+bebV,V˙(0)=kC1tpk1aeaV0+bebV0.\dot{V}\leq\frac{-kC_{1}(t_{p}-t)^{k-1}}{ae^{aV}+be^{-bV}},~{}\dot{V}(0)=\frac{-kC_{1}t_{p}^{k-1}}{ae^{aV_{0}}+be^{-bV_{0}}}. (7)

Indeed, notice that V˙\dot{V} is a continuous function and V˙=kμh(V)\dot{V}=-k\mu h(V) for V=0V=0, implying that V˙[0,tp)\dot{V}\in\ell_{\infty}[0,t_{p}) and V˙(t)0\dot{V}(t)\rightarrow 0 as ttpt\rightarrow t_{p}. This completes the proof. \hfill\blacksquare

As discussed earlier, when $a$ and $b$ are sufficiently small, we have $\lim_{a,b\rightarrow 0}h(V)=V$. Lemma 1 is therefore equivalent to Corollary 1 in [18].
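To see Lemma 1 at work numerically, the following forward-Euler sketch (assuming NumPy; the gain, prescribed time, initial values and step size are illustrative) integrates the equality case of (3), $\dot{V}=-k\mu(t)h(V)$, and shows that $V(t)$ is driven to zero as $t\rightarrow t_{p}$ regardless of $V(0)$:

```python
import numpy as np

def h(v, a=1.0, b=1.0):
    return (np.exp(a * v) - np.exp(-b * v)) / (a * np.exp(a * v) + b * np.exp(-b * v))

t_p, k, dt = 2.0, 2.0, 1e-5            # prescribed time, gain k > 1, integration step
for V0 in (0.5, 5.0, 10.0):            # several initial values V(0)
    V, t = V0, 0.0
    while t < t_p - dt:
        mu = 1.0 / (t_p - t)
        V = max(V - dt * k * mu * h(V), 0.0)   # Euler step of V' = -k*mu*h(V), clipped at 0
        t += dt
    print(V0, V)                       # V is (nearly) zero just before t = t_p for every V(0)
```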

III Prescribed-time control for linear systems in canonical form without uncertainties

Motivated by the appealing features of the nonlinear scaling function h(x)h(x), we now discuss how to introduce it into the prescribed-time control design of nn-th order systems (1). We first design prescribed-time control schemes using nonlinear and time-varying full state feedback and partial state feedback to achieve full state regulation for system (1) without uncertainties/disturbances (i.e., d(𝐱,t)0d(\mathbf{x},t)\equiv{0}), then we extend the control scheme to cope with nonvanishing uncertainties/disturbances in the system.

III-A Prescribed-time State Feedback Controller

By using the time-varying scaling function and the hyperbolic-tangent-like function as introduced in Section 2, we construct the vectors as

H(𝐳)\displaystyle H(\mathbf{z}) =[h(z1),,h(zn)]\displaystyle=[h(z_{1}),\cdots,h(z_{n})]^{\top} (8)
Γ(𝐳)\displaystyle\Gamma(\mathbf{z}) =kμrH(𝐳)\displaystyle=k\mu^{r}H(\mathbf{z})

where μ=1/(tpt)\mu=1/(t_{p}-t) and h()h(\cdot) is defined in (2). In addition, we introduce the following two auxiliary vectors

𝐳\displaystyle\mathbf{z} =𝐱+JnΦ\displaystyle={\mathbf{x}}+J_{n}^{\top}\Phi (9)
Φ\displaystyle\Phi =Jn(Φ˙+𝐳)+Γ(𝐳)\displaystyle=J_{n}^{\top}(\dot{\Phi}+{\mathbf{z}})+\Gamma(\mathbf{z})

where $\mathbf{z}=[z_{1},\cdots,z_{n}]^{\top}\in\mathbf{R}^{n}$ and $\Phi=[\phi_{1},\cdots,\phi_{n}]^{\top}\in\mathbf{R}^{n}$. Note that $J_{n}^{\top}$ is lower triangular, thus both $\mathbf{z}$ and $\Phi$ can be easily calculated recursively (see (28) for a specific example of computing $z_{i}$ and $\phi_{i}$). It is interesting to see that the convergence property of the closed-loop system depends only on the parameters $a$, $b$, $k$ and $r$ in the vector $\Gamma(\mathbf{z})$.

Theorem 1

Consider system (1) with d(𝐱,t)0d(\mathbf{x},t)\equiv{0} and the state feedback control law,

u=B(n(𝐚)𝐱+Φ),\displaystyle u=-{B^{\top}}(\mathcal{L}_{n}(\mathbf{a})\mathbf{x}+{\Phi}), (10)

then all closed-loop signals are bounded and the origin of the closed-loop system (1) is:

  • 1)1)

    Globally uniformly asymptotically stable (GUAS), that is, x(t)β(𝐱(0),t)\|x(t)\|\leq\beta(\|\mathbf{x}(0)\|,t), if the controller parameters are selected as k>0k>0 and r=0r=0. In addition, exponential output regulation is achieved if aa and bb are chosen sufficiently small.

  • 2)2)

    Prescribed-time globally uniformly asymptotically stable (PT-GUAS), that is, x(t)β(𝐱(0),μ(t))\|x(t)\|\leq\beta(\|\mathbf{x}(0)\|,\mu(t)), if the controller parameters are selected as k>nk>n and r=1r=1.

Proof: By the definition of JnJ_{n}, BB and n(𝐚)\mathcal{L}_{n}(\mathbf{a}), it is readily verified that JnJn+BB=InJ_{n}J_{n}^{\top}+BB^{\top}=I_{n} and BBn(𝐚)=n(𝐚)BB^{\top}\mathcal{L}_{n}(\mathbf{a})=\mathcal{L}_{n}(\mathbf{a}), therefore, it holds that

𝐳˙\displaystyle\dot{\mathbf{z}} =A𝐱+Bu+JnΦ˙\displaystyle=A{\mathbf{x}}+Bu+J_{n}^{\top}\dot{\Phi} (11)
=(Jn+n(𝐚))𝐳JnJnΦn(𝐚)JnΦ\displaystyle=\left(J_{n}+\mathcal{L}_{n}(\mathbf{a})\right){\mathbf{z}}-J_{n}J_{n}^{\top}{\Phi}-\mathcal{L}_{n}(\mathbf{a})J_{n}^{\top}{\Phi}
BBn(𝐚)𝐱BBΦ+ΦΓ(𝐳)Jn𝐳\displaystyle~{}~{}~{}-BB^{\top}\mathcal{L}_{n}(\mathbf{a}){\mathbf{x}}-BB^{\top}{\Phi}+{\Phi}-\Gamma(\mathbf{z})-J_{n}^{\top}{\mathbf{z}}
=(Jn+n(𝐚))𝐳Φn(𝐚)(JnΦ+𝐱)\displaystyle=\left(J_{n}+\mathcal{L}_{n}(\mathbf{a})\right){\mathbf{z}}-{\Phi}-\mathcal{L}_{n}(\mathbf{a})\left(J_{n}^{\top}{\Phi}+{\mathbf{x}}\right)
+ΦΓ(𝐳)Jn𝐳\displaystyle~{}~{}~{}+{\Phi}-\Gamma(\mathbf{z})-J_{n}^{\top}{\mathbf{z}}
=(JnJn)𝐳Γ(𝐳).\displaystyle=\left(J_{n}-J_{n}^{\top}\right){\mathbf{z}}-\Gamma(\mathbf{z}).

1) proof of GUAS result: we prove that the closed-loop system under the control law (10) with r=0r=0 (in this case, Γ(𝐳)=kH(𝐳)\Gamma(\mathbf{z})=kH(\mathbf{z})) is GUAS. To this end, we define the error vector between H(𝐳)H(\mathbf{z}) and 𝐳(t)\mathbf{z}(t) as

𝐄H(𝐳)𝐳(t)\mathbf{E}\coloneqq H(\mathbf{z})-\mathbf{z}(t) (12)

where $\mathbf{E}=[e_{1},\cdots,e_{n}]^{\top}\in\mathbf{R}^{n}$ is a smooth function satisfying $\lim_{a,b\rightarrow 0}\mathbf{E}=\mathbf{0}$, and $\mathbf{E}$ is bounded as long as $\mathbf{z}(t)$ is bounded. Consider a positive definite function $V_{1}=\mathbf{z}^{\top}\mathbf{z}/2$; the time derivative of $V_{1}$ along (11) is

V˙1=𝐳(JnJn)𝐳𝐳Γ(𝐳)=k𝐳H(𝐳)0\dot{V}_{1}=\mathbf{z}^{\top}\left(J_{n}-J_{n}^{\top}\right){\mathbf{z}}-\mathbf{z}^{\top}\Gamma(\mathbf{z})=-k\mathbf{z}^{\top}H(\mathbf{z})\leq 0 (13)

where the fact that 𝐳(JnJn)𝐳=0,𝐳𝐑n{\mathbf{z}}^{\top}\left(J_{n}-J_{n}^{\top}\right){\mathbf{z}}=0,~{}\forall{\mathbf{z}}\in\mathbf{R}^{n} is used since JnJnJ_{n}-J_{n}^{\top} is a skew symmetric matrix. It follows from (13) that V˙1=0\dot{V}_{1}=0 if and only if 𝐳=𝟎\mathbf{z}=\mathbf{0}, thus the transformed system (11) is asymptotically stable on [0,+)[0,+\infty), establishing the same to system (1) according to the converging-input converging-output property of the corresponding auxiliary vectors.

From (13), it can be further shown that

V˙1\displaystyle\dot{V}_{1} =k𝐳(𝐄+𝐳)k𝐳2+k𝐳𝐄\displaystyle=-k\mathbf{z}^{\top}(\mathbf{E}+\mathbf{z})\leq-k\|\mathbf{z}\|^{2}+k\|\mathbf{z}\|\|\mathbf{E}\| (14)
k2𝐳2+k2Δ2\displaystyle\leq-\frac{k}{2}\|\mathbf{z}\|^{2}+\frac{k}{2}\Delta^{2}

where Δsup{𝐄}\Delta\coloneqq\sup\{\mathbf{\|E\|}\}. By integrating both sides of the inequality (14), we obtain V1(t)V1(0)ekt/2+Δ2(1ekt/2)V_{1}(t)\leq V_{1}(0)e^{-kt/2}+\Delta^{2}(1-e^{-kt/2}). If we further choose the design parameters aa and bb in H(𝐳)H(\mathbf{z}) small enough, one can obtain lima,b0Δ=0\lim_{a,b\rightarrow 0}\Delta=0, yielding V1(t)V1(0)ekt/2V_{1}(t)\leq V_{1}(0)e^{-kt/2}, thus we have 𝐳(t)𝐳(0)ekt/2\|\mathbf{z}(t)\|\leq\|\mathbf{z}(0)\|{e^{-kt/2}} and the transformed system (11) is exponentially stable. In addition, it follows from (9) that x1(t)=z1(t)x_{1}(t)=z_{1}(t), therefore exponential output regulation to zero of (1) can be achieved. The word “exponential” actually means that “near exponential”, because the parameters aa and bb can only be selected as sufficiently small, not zero.

2) Proof of PT-GUAS result: we now show that the closed-loop system under the control law (10) with r=1r=1 (in this case, Γ(𝐳)=kμH(𝐳)\Gamma(\mathbf{z})=k\mu H(\mathbf{z})) is PT-GUAS. For t[0,tp),t\in[0,t_{p}), there exists a continuous positive function W(𝐳,t)=𝐳𝐳/nW(\mathbf{z},t)=\sqrt{\mathbf{z}^{\top}{\mathbf{z}}/n} such that W(𝐳,t)=0W(\mathbf{z},t)=0 as 𝐳=𝟎\mathbf{z}=\mathbf{0} and

W2=𝐳𝐳/nWmax{|z1|,,|zn|}zW^{2}=\mathbf{z}^{\top}{\mathbf{z}}/n\Rightarrow W\leq\max\{|z_{1}|,\cdots,|z_{n}|\}\triangleq{z}^{*} (15)

In addition, its time derivative can be shown as

W˙=\displaystyle\dot{W}= 𝐳𝐳˙nW=𝐳(JnJn)𝐳𝐳Γ(𝐳)nW\displaystyle\frac{\mathbf{z}^{\top}\dot{\mathbf{z}}}{nW}=\frac{{\mathbf{z}}^{\top}\left(J_{n}-J_{n}^{\top}\right){\mathbf{z}}-\mathbf{z}^{\top}\Gamma(\mathbf{z})}{nW} (16)
=\displaystyle= 𝐳Γ(𝐳)nW=kμnWi=1nzih(zi).\displaystyle-\frac{\mathbf{z}^{\top}\Gamma(\mathbf{z})}{nW}=-\frac{k\mu}{nW}\sum_{i=1}^{n}z_{i}h(z_{i}).

Using Property 1 and (15), we have

W˙\displaystyle\dot{W} kμnWi=1n|zi|h(|zi|)kμnWzh(z)\displaystyle\leq-\frac{k\mu}{nW}\sum_{i=1}^{n}|z_{i}|h(|z_{i}|)\leq-\frac{k\mu}{nW}{z}^{*}h({z}^{*}) (17)
kμnWWh(W)=kμnh(W)0.\displaystyle\leq-\frac{k\mu}{nW}{W}h(W)=-\frac{k\mu}{n}h(W)\leq 0.

Since we select k>nk>n, then k/n>1k/n>1. By virtue of Lemma 1, one can prove that W(t)[0,tp)W(t)\in\ell_{\infty}[0,t_{p}), W˙(t)[0,tp)\dot{W}(t)\in\ell_{\infty}[0,t_{p}) and

limttpW(t)=0,limttpW˙(t)=0.\lim_{t\rightarrow t_{p}}W(t)=0,~{}\lim_{t\rightarrow t_{p}}\dot{W}(t)=0. (18)

Hence the closed-loop signals $\{z_{i}\}_{i=1}^{n}\in\ell_{\infty}[0,t_{p})$ and $\{\dot{z}_{i}\}_{i=1}^{n}\in\ell_{\infty}[0,t_{p})$, and meanwhile they converge to zero as $t$ tends to $t_{p}$. Using the property of the hyperbolic-tangent-like function, we can proceed to prove that $\Gamma(\mathbf{z})\in\ell_{\infty}[0,t_{p})$ and that it converges to zero at the prescribed time.

By means of the auxiliary vectors as introduced in (9), one can find that x1=z1x_{1}=z_{1} and the following closed-loop z1z_{1}-dynamics holds

z˙1=kμh(z1)+z2\dot{z}_{1}=-k\mu h(z_{1})+z_{2} (19)

where z2z_{2} is a bounded function, which also can be treated as a vanishing disturbance. When ttpt\rightarrow t_{p}, the equivalent form of (19) is

x˙1=ktpteax1ebx1aeax1+bebx1\dot{x}_{1}=\frac{-k}{t_{p}-t}\frac{e^{ax_{1}}-e^{-bx_{1}}}{ae^{ax_{1}}+be^{-bx_{1}}} (20)

It follows from (4)-(6) and (20) that

eax1ebx1=C2(tpt)ke^{ax_{1}}-e^{-bx_{1}}=C_{2}{(t_{p}-t)^{k}} (21)

where C2=(eax1(0)ebx1(0))/tpkC_{2}=\left(e^{ax_{1}(0)}-e^{-bx_{1}(0)}\right)/t_{p}^{k}. Then taking time derivative on both sides of (21), we have

$\left(ae^{ax_{1}}+be^{-bx_{1}}\right)x_{2}=-kC_{2}(t_{p}-t)^{k-1},~k>n.$ (22)

Observe that (22) implies that $x_{2}\rightarrow 0$ as $t\rightarrow t_{p}$. Continuing with analysis similar to that used in (21)-(22), by taking the $i$-th $(i=2,\cdots,n)$ derivative of both sides of (21), we can conclude that $\{x_{i}\}_{i=2}^{n}$ and $\{\dot{x}_{i}\}_{i=1}^{n}$ converge to zero as $t\rightarrow t_{p}$ (this is the reason for requiring $k>n$). Therefore, $\lim_{t\rightarrow t_{p}}\|\mathbf{x}(t)\|=0$ and $\lim_{t\rightarrow t_{p}}\|\dot{\mathbf{x}}(t)\|=0$ hold.

In addition, it follows from (9) that JnΦ˙=𝐳˙𝐱˙J_{n}^{\top}\dot{\Phi}=\dot{\mathbf{z}}-\dot{\mathbf{x}}, therefore Φ=𝐳˙𝐱˙+Jn𝐳+Γ(𝐳)\Phi=\dot{\mathbf{z}}-\dot{\mathbf{x}}+J_{n}^{\top}\mathbf{z}+\Gamma(\mathbf{z}), then

Φ𝐳˙+𝐱˙+𝐳+Γ(𝐳)[0,tp).\displaystyle\|\Phi\|\leq\|\dot{\mathbf{z}}\|+\|\dot{\mathbf{x}}\|+\|\mathbf{z}\|+\|\Gamma(\mathbf{z})\|\in\ell_{\infty}[0,t_{p}). (23)

Consequently, from (10) we have

|u(t)|\displaystyle|u(t)| =|B(n(𝐚)𝐱+Φ)|\displaystyle=|-B^{\top}(\mathcal{L}_{n}(\mathbf{a})\mathbf{x}+\Phi)| (24)
n(𝐚)𝐱+Φ\displaystyle\leq\|\mathcal{L}_{n}(\mathbf{a})\|\|\mathbf{x}\|+\|\Phi\|
n(𝐚)𝐱+𝐳˙+𝐱˙+𝐳+Γ(𝐳).\displaystyle\leq\|\mathcal{L}_{n}(\mathbf{a})\|\|\mathbf{x}\|+\|\dot{\mathbf{z}}\|+\|\dot{\mathbf{x}}\|+\|\mathbf{z}\|+\|\Gamma(\mathbf{z})\|.

Note that each term in the third line of (24) is bounded on [0,tp)[0,t_{p}) and converges to zero as ttpt\rightarrow t_{p}. Therefore, u(t)[0,tp)u(t)\in\ell_{\infty}[0,t_{p}) and limttp|u(t)|=0.\lim_{t\rightarrow t_{p}}|u(t)|=0. This completes the proof. \hfill\blacksquare
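To illustrate the mechanism behind Theorem 1, the following sketch (assuming NumPy; the gains, prescribed time and initial state are illustrative) integrates the transformed closed-loop dynamics (11) for $n=3$ with $r=1$, i.e., $\dot{\mathbf{z}}=(J_{n}-J_{n}^{\top})\mathbf{z}-k\mu H(\mathbf{z})$; the integration is stopped slightly before $t_{p}$ to avoid the unbounded gain at $t=t_{p}$ (see the implementation discussion in Remark 6):

```python
import numpy as np

def h(x, a=1.0, b=1.0):
    return (np.exp(a * x) - np.exp(-b * x)) / (a * np.exp(a * x) + b * np.exp(-b * x))

n, k, t_p, dt = 3, 6.0, 6.0, 1e-4             # k > n and prescribed time t_p (illustrative)
Jn = np.diag(np.ones(n - 1), 1)               # shift matrix J_n
z, t = np.array([-1.0, 0.5, 1.0]), 0.0        # arbitrary initial transformed state z(0)
while t < t_p - 1e-3:                         # stop just before t_p (unbounded gain at t_p)
    mu = 1.0 / (t_p - t)
    z = z + dt * ((Jn - Jn.T) @ z - k * mu * h(z))   # closed-loop z-dynamics (11)
    t += dt
print(np.linalg.norm(z))                      # ||z|| is driven to (nearly) zero as t -> t_p
```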

Remark 2

When we consider a scalar system $\dot{x}=u$, from Theorem 1 one can immediately obtain a prescribed-time controller as $u_{p}=-\frac{k}{t_{p}-t}x$. Note that, according to Theorem 1, this controller is only a special case in which the design parameters $a$ and $b$ are chosen small enough and the design parameter $k$ satisfies $k>1$. Consider now the classical finite-time controller $u_{f}=-k\text{sign}(x)|x|^{\alpha}$ $(k>0,0<\alpha<1)$ (see [4, 21]); the unique solution of the closed-loop dynamics is

x(t)={sign(x0)(|x0|1αk(1α)t)11α,t[0,tp)0,ttpx(t)=\left\{\begin{array}[]{l}\text{sign}(x_{0})\left(|x_{0}|^{1-\alpha}-k(1-\alpha)t\right)^{\frac{1}{1-\alpha}},~{}t\in[0,t_{p})\\ 0,~{}t\geq t_{p}\end{array}\right.

where $t_{p}=\frac{|x_{0}|^{1-\alpha}}{k(1-\alpha)}$. Accordingly, the finite-time controller is equivalent to

uf=\displaystyle u_{f}= ksign(x)|x|α1|x|=k|x|α1x\displaystyle-k\text{sign}(x)|x|^{\alpha-1}|x|=-k|x|^{\alpha-1}x (25)
=\displaystyle= k(|x0|1αk(1α)t)1x.\displaystyle-k\left(|x_{0}|^{1-\alpha}-k(1-\alpha)t\right)^{-1}x.

Inserting |x0|1α=k(1α)tp|x_{0}|^{1-\alpha}=k(1-\alpha)t_{p} into (25), we have

uf\displaystyle u_{f} =k(k(1α)tpk(1α)t)1x\displaystyle=-k\left(k(1-\alpha)t_{p}-k(1-\alpha)t\right)^{-1}x (26)
=x(1α)(tpt),t[0,tp).\displaystyle=-\frac{x}{(1-\alpha)(t_{p}-t)},~{}\forall t\in[0,t_{p}).

Letting $k=\frac{1}{1-\alpha}>1$, we have $u_{p}=u_{f}$. The above analysis shows that the prescribed-time controller is equivalent to the finite-time controller under certain special choices of the design parameters.
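The equivalence claimed in this remark can be checked numerically; the following sketch (assuming NumPy; $x_{0}$, $\alpha$ and the step size are illustrative) integrates the finite-time law $u_{f}$ and the matched prescribed-time law $u_{p}$ with $k=1/(1-\alpha)$ side by side:

```python
import numpy as np

x0, k_f, alpha, dt = 2.0, 1.0, 0.5, 1e-5
t_p = abs(x0) ** (1 - alpha) / (k_f * (1 - alpha))   # settling time of the finite-time law
k_p = 1.0 / (1 - alpha)                              # matched prescribed-time gain, k_p > 1

xf, xp, t = x0, x0, 0.0
while t < t_p - 1e-3:                                # run up to (just before) t_p
    xf -= dt * k_f * np.sign(xf) * abs(xf) ** alpha  # u_f = -k sign(x)|x|^alpha
    xp -= dt * k_p / (t_p - t) * xp                  # u_p = -k/(t_p - t) x, k = 1/(1-alpha)
    t += dt
print(t_p, xf, xp)                                   # both trajectories are (almost) at zero
```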

Remark 3

If we combine the PT-GUAS controller (as defined in (10) with $k>n,~r=1$) and the GUAS controller ($k>0,~r=0$) in the following way,

u(t)={B(n(𝐚)𝐱+Jn(Φ˙+𝐳)+(n+1)μH(𝐳)),0t<tpB(n(𝐚)𝐱+Jn(Φ˙+𝐳)+H(𝐳)),ttpu(t)=\left\{\begin{array}[]{l}-{B^{\top}}(\mathcal{L}_{n}(\mathbf{a})\mathbf{x}+J_{n}^{\top}(\dot{\Phi}+{\mathbf{z}})+(n+1)\mu H(\mathbf{z})),\\ ~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}0\leq t<t_{p}\\ -{B^{\top}}(\mathcal{L}_{n}(\mathbf{a})\mathbf{x}+J_{n}^{\top}(\dot{\Phi}+{\mathbf{z}})+H(\mathbf{z})),~{}~{}t\geq t_{p}\end{array}\right.

then the system states converge to zero as ttpt\rightarrow t_{p} and then remain zero for ttpt\geq t_{p}. In fact, this switching method means that the prescribed-time controller guarantees that the closed-loop system is PT-GUAS on [0,tp)[0,t_{p}) and the GUAS controller guarantees that the closed-loop system is ISS in the presence of some external disturbance on [tp,+)[t_{p},+\infty).

Remark 4

Various methods for finite-time control have been reported in the literature during the past few years, among which the most typical ones include adding a power integrator (AAPI), linear matrix inequalities (LMI), and implicit Lyapunov function (ILF), where the key element utilized is the fractional power state feedback (e.g., [10, 12, 11]). In pursuit of an alternative solution, we exploit a unified control law such that the closed-loop system (11) can be regulated asymptotically, exponentially, or within prescribed time by choosing the design parameters $k$, $r$, $a$ and $b$ in (12) properly. One salient feature of this method is that it analytically bridges the divide between prescribed-time control and traditional asymptotic control. Furthermore, different design parameters ($a$ and $b$ in $h(x)$) allow different reshaped feedback signals to be utilized in the control scheme. Such treatment provides extra design flexibility and degrees of freedom in tuning regulation performance.

Remark 5

Compared with the existing prescribed-time control results (see, for instance, [18, 38, 28, 22]), the proposed NTV feedback scheme, making use of the reshaped (compressed/expanded) feedback signal, is applicable over the entire operational process. In addition, this scheme has a numerical advantage over the aforementioned methods because only $1/(t_{p}-t)$ rather than $1/(t_{p}-t)^{n}$ is involved for state scaling. Furthermore, with the proposed hyperbolic-tangent-like function, the magnitude of the initial control input can be adjusted through the parameters $a$ and $b$.

Example 1

To verify the effectiveness and benefits of the control scheme presented in Theorem 1, we conduct a comparative simulation study on a third-order system

{x˙1(t)=x2(t),x˙2(t)=x3(t),x˙3(t)=u(t).\left\{\begin{array}[]{ll}\dot{x}_{1}(t)=x_{2}(t),\\ \dot{x}_{2}(t)=x_{3}(t),\\ \dot{x}_{3}(t)=u(t).\end{array}\right.

According to Theorem 1-GUAS, the asymptotic stabilization controller under $a,b\rightarrow 0$ is $u_{\text{GUAS}}=-K\mathbf{x}(t)$ with $K=[k^{3}+2k,3k^{2}+2,3k]^{\top}$ and $\mathbf{x}(t)=[x_{1},x_{2},x_{3}]^{\top}$; the control parameters are selected as $a=b=0.01$, $k=0.5$, and the initial conditions are selected as $[x_{1}(0);x_{2}(0);x_{3}(0)]=[-1;0;1]$. According to Theorem 1-PT-GUAS, the following NTV feedback prescribed-time stabilization controller can be obtained,

uPT-GUAS=z2ϕ˙2kμh(z3),t[0,tp)u_{\text{PT-GUAS}}=-z_{2}-\dot{\phi}_{2}-k\mu h(z_{3}),~{}t\in[0,t_{p}) (27)

with

μ(t,tp)=1/(tpt),\displaystyle\mu(t,t_{p})=1/(t_{p}-t), (28)
ϕ1(x1,t)=kμh(x1),\displaystyle\phi_{1}(x_{1},t)=k\mu h(x_{1}),
ϕ2(x1,x2,t)=kμh(z2)+x1+ϕ˙1,\displaystyle\phi_{2}(x_{1},x_{2},t)=k\mu h(z_{2})+x_{1}+\dot{\phi}_{1},
$h(z_{i})=\frac{e^{az_{i}}-e^{-bz_{i}}}{ae^{az_{i}}+be^{-bz_{i}}},~i=1,2,3,$
ϕ˙i=k=1iϕixkxk+1+k=0i1ϕiμ(k)μ(k+1),\displaystyle\dot{\phi}_{i}=\sum_{k=1}^{i}\frac{\partial\phi_{i}}{\partial x_{k}}x_{k+1}+\sum_{k=0}^{i-1}\frac{\partial\phi_{i}}{\partial\mu^{(k)}}\mu^{(k+1)},
z1=x1,z2=x2+ϕ1,z3=x3+ϕ2,\displaystyle z_{1}=x_{1},~{}z_{2}=x_{2}+\phi_{1},~{}z_{3}=x_{3}+\phi_{2},

where the corresponding design parameters are selected as $a=b=1$, $k=6$, $t_{p}=6$s. In addition, to compare the control performance, we adopt our previous result (a linear feedback scheme in [18]) for simulation; the corresponding control law is given by $u_{S}=-\sum_{k=1}^{3}C_{k}^{3}\frac{\mu_{0}^{(k)}}{\mu}x_{4-k}-v^{4}(k_{1}\dot{w}_{1}+k_{2}\ddot{w}_{1})-k(\ddot{w}_{1}+k_{1}w_{1}+k_{2}\dot{w}_{1})$, where $\mu_{0}=(t_{p}/(t_{p}-t))^{4}$, $v=t_{p}/(t_{p}-t)$, $w_{1}=\mu_{0}x_{1}$. The design parameters in $u_{S}$ are selected as $t_{p}=6.2$s, $k_{1}=k_{2}=0.6$ and $k=1$.

Figure 2: The system state and control input trajectories with [x1(0);x2(0);x3(0)]=[1;0;1][x_{1}(0);x_{2}(0);x_{3}(0)]=[-1;0;1] and tp=6st_{p}=6s.

For comparison, simulation results obtained with the three different control schemes are shown in Fig. 2, from which asymptotic stabilization and prescribed-time stabilization are observed. Furthermore, it is seen that i)i) the settling time with the proposed prescribed-time control is indeed irrespective of initial condition and any other design parameter; ii)ii) the proposed scheme works within and after the prescribed-time interval; and iii)iii) compared with linear feedback scheme (black dotted line), it can be seen that the NTV feedback schemes (red dotted line and blue solid line) have a superior transient performance with a smaller initial control effort, verifying the effectiveness and benefits of the proposed algorithms.222Our design is another option in the control designer’s toolbox and we do not claim its universality with respect to the existing designs but highlight its better control performance of a class of LTI systems.

III-B Prescribed-time Observer

When only partial state is measurable, we employ the prescribed-time observer proposed in [38] to construct the prescribed-time control using output feedback. As in [38] and [36], our solution is based on the separation principle, namely the controller is derived by designing a prescribed-time observer and an NTV output feedback control separately.

Consider the case where $d(\mathbf{x},t)\equiv 0$ and only the output is available for feedback. System (1) can be transformed into the following observer canonical form by a linear nonsingular transformation $\xi(t)=M\mathbf{x}(t)$:

{ξ˙(t)=𝒜ξ(t)+u(t)+𝒟y(t)y(t)=ξ1(t)\left\{\begin{array}[]{ll}\begin{aligned} \dot{\mathbf{\xi}}(t)&=\mathscr{A}{\mathbf{\xi}}(t)+\mathscr{B}u(t)+\mathscr{D}y(t)\\ y(t)&={\xi}_{1}(t)\end{aligned}\end{array}\right. (29)

where 𝒜=Jn\mathscr{A}=J_{n}, =[bn1,,b0]\mathscr{B}=[b_{n-1},\cdots,b_{0}]^{\top}, 𝒟=[an,,a1]\mathscr{D}=[a_{n},\cdots,a_{1}]^{\top}, and the aia_{i}s, bib_{i}s are the same as those in (1).

We invoke the observer proposed in [38], given as follows:

ξ^˙(t)=𝒜ξ^(t)+u(t)+𝒟y(t)\displaystyle\dot{\hat{\mathbf{\xi}}}(t)=\mathscr{A}{\hat{\mathbf{\xi}}}(t)+\mathscr{B}u(t)+\mathscr{D}y(t) (30)
+[g1(t,T),,gn(t,T)](yξ^1)\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}+\left[g_{1}(t,T),\cdots,g_{n}(t,T)\right]^{\top}(y-\hat{\xi}_{1})

where the time-varying observer gains {gi(t,T)}i=1n\{g_{i}(t,T)\}_{i=1}^{n} satisfy

gi(t,T)=(n+m0+i1Tp¯0i,1p¯0i+1,1)μ1i\displaystyle g_{i}(t,T)=\left(\frac{n+m_{0}+i-1}{T}\bar{p}_{0_{i,1}}-\bar{p}_{0_{i+1,1}}\right)\mu_{1}^{i}
j=1i1gjp¯0i,jμ1nj+ri,\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}-\sum_{j=1}^{i-1}g_{j}\bar{p}_{0_{i,j}}\mu_{1}^{n-j}+r_{i},
gn(t,T)=rn+2n+m01Tp¯0n,1μ1nj=1n1gjp¯0n,jμ1nj,\displaystyle g_{n}(t,T)=r_{n}+\frac{2n+m_{0}-1}{T}\bar{p}_{0_{n,1}}\mu_{1}^{n}-\sum_{j=1}^{n-1}g_{j}\bar{p}_{0_{n,j}}\mu_{1}^{n-j},

where μ1(t,T)=T/(Tt)\mu_{1}(t,T)=T/(T-t) and

p¯0i,i=1,p¯0i,j=0,ji\displaystyle\bar{p}_{0_{i,i}}=1,~{}~{}~{}\bar{p}_{0_{i,j}}=0,~{}j\geq i (31)
p¯0i,j1=n+m0+ijTp¯0i,j+p¯0i+1,j,\displaystyle\bar{p}_{0_{i,j-1}}=-\frac{n+m_{0}+i-j}{T}\bar{p}_{0_{i,j}}+\bar{p}_{0_{i+1,j}},
n1ij2,\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}n-1\geq i\geq j\geq 2,
p¯0n,j1=2n+m0jTp¯0n,j,j=2,,n,\displaystyle\bar{p}_{0_{n,j-1}}=-\frac{2n+m_{0}-j}{T}\bar{p}_{0_{n,j}},~{}j=2,\cdots,n,

and m01m_{0}\geq 1 is an integer and 𝐫=[r1,,rn]\mathbf{r}=[r_{1},\cdots,r_{n}]^{\top} is selected to make the nn-dimensional matrix Λ=[𝐫,[In1,01×(n1)]]\Lambda=\left[\mathbf{r},\left[I_{n-1},0_{1\times(n-1)}\right]^{\top}\right] Hurwitz. With (29)-(30) and observer error state ξ~=ξξ^\tilde{\xi}=\xi-\hat{\xi}, we get the observer error dynamics

ξ~˙(t)=\displaystyle\dot{\tilde{\mathbf{\xi}}}(t)= Jnξ~(t)[g1(t,T),,gn(t,T)]ξ~1.\displaystyle J_{n}{\tilde{\mathbf{\xi}}}(t)-\left[g_{1}(t,T),\cdots,g_{n}(t,T)\right]^{\top}\tilde{\xi}_{1}. (32)
Lemma 2

[38] For the dynamic system (1), consider the observer (30) having error dynamic (32) and observer gains {gi(t,T)}i=1n\{g_{i}(t,T)\}_{i=1}^{n}, and the {ri}i=1n\{r_{i}\}_{i=1}^{n} are constants to be selected such that the companion matrix Λ\Lambda is Hurwitz, then the closed-loop observer error system (32) is prescribed-time stable, and there exist two positive constants c1c_{1} and c2c_{2} such that

|ξ~(t)|μ1(t,T)m01c1exp(c2t)|ξ~(0)|,|\tilde{\mathbf{\xi}}(t)|\leq\mu_{1}(t,T)^{-m_{0}-1}c_{1}\exp(-c_{2}t)|\tilde{\mathbf{\xi}}(0)|, (33)

for all t[0,T)t\in[0,T). In addition, the output estimation error injection terms {gi(t,T)ξ~i}i=1n\{g_{i}(t,T)\tilde{\xi}_{i}\}_{i=1}^{n} remain uniformly bounded over [0,T)[0,T), and converge to zero as tT.t\rightarrow T. Also, 𝐱~(t){\tilde{\mathbf{x}}}(t) has the same dynamic properties as ξ~(t){\tilde{\mathbf{\xi}}}(t) since 𝐱~(t)=M1ξ~(t){\tilde{\mathbf{x}}}(t)=M^{-1}{\tilde{\mathbf{\xi}}}(t) with MM being a nonsingular constant matrix.

III-C Prescribed-time Output Feedback Controller

The output feedback prescribed-time control law for system (1) is constructed by replacing x1,x2,,xnx_{1},x_{2},\cdots,x_{n} with x^1,x^2,,x^n\hat{x}_{1},\hat{x}_{2},\cdots,\hat{x}_{n} in (10) as follows:

u=B(n(𝐚)𝐱^+Φ(𝐱^))u=-{B^{\top}}\left(\mathcal{L}_{n}(\mathbf{a})\hat{\mathbf{x}}+{\Phi}(\hat{\mathbf{x}})\right) (34)

where only x1x_{1} is measurable, Φ(𝐱^)=Jn(Φ˙(𝐱^)+𝐳^)+Γ(𝐳^){\Phi}(\hat{\mathbf{x}})=J_{n}^{\top}(\dot{\Phi}(\hat{\mathbf{x}})+\hat{\mathbf{z}})+\Gamma(\hat{\mathbf{z}}) with Γ(𝐳^)=kμr[h1(z^1),,hn(z^n)]\Gamma(\hat{\mathbf{z}})=k\mu^{r}[h_{1}(\hat{z}_{1}),\cdots,h_{n}(\hat{z}_{n})]^{\top} and 𝐳^=𝐱^+JnΦ(𝐱^).\hat{\mathbf{z}}=\hat{\mathbf{x}}+J_{n}^{\top}\Phi(\hat{\mathbf{x}}).

The control law (34) involves the design parameters $k$ and $r$ as defined in Theorem 1. It can be verified that different $k$ and $r$ lead to different convergence rates. For instance, we can set $k>r$ and $r=0$ and invoke the classical high-gain observer to achieve asymptotic or exponential output regulation. In this subsection, our more ambitious goal is to achieve state regulation with output feedback with the aid of the prescribed-time observer developed in [38].

Theorem 2

For the dynamic system (1) with $d(\mathbf{x},t)\equiv\mathbf{0}$, consider the output feedback control law (34) with the prescribed-time observer (30). The closed-loop system is prescribed-time stable if the controller and the observer parameters are selected according to Theorem 1-PT-GUAS and Lemma 2, and $T\leq t_{p}$.

Proof: The proof consists of two steps, the first step is to prove that the closed-loop system with the observer and the output feedback control scheme does not escape during [0,T)[0,T), and the second step is to show that all closed-loop trajectories converge to zero as tt tends to tpt_{p} and remain zero thereafter.

Step 1: We consider the Lyapunov function V=𝐳𝐳/2V=\mathbf{z}^{\top}\mathbf{z}/2. Using JnJn+BB=InJ_{n}J_{n}^{\top}+BB^{\top}=I_{n} and BBn(𝐚)=n(𝐚)BB^{\top}\mathcal{L}_{n}(\mathbf{a})=\mathcal{L}_{n}(\mathbf{a}), the derivative of VV over [0,T)[0,T) along (1) under the output feedback control law (34) becomes

V˙\displaystyle\dot{V} =𝐳[(Jn+n(𝐚))𝐳JnJnΦ(𝐱)n(𝐚)JnΦ(𝐱)]\displaystyle=\mathbf{z}^{\top}\left[\left(J_{n}+\mathcal{L}_{n}(\mathbf{a})\right){\mathbf{z}}-J_{n}J_{n}^{\top}{\Phi({\mathbf{x}})}-\mathcal{L}_{n}(\mathbf{a})J_{n}^{\top}{\Phi({\mathbf{x}})}\right]
𝐳[BBn(𝐚)(𝐱𝐱~)+BB(Φ(𝐱)Φ(𝐱~))]\displaystyle~{}~{}~{}-\mathbf{z}^{\top}\left[BB^{\top}\mathcal{L}_{n}(\mathbf{a})(\mathbf{x}-\tilde{\mathbf{x}})+BB^{\top}{(\Phi(\mathbf{x})-\Phi(\tilde{\mathbf{x}}))}\right]
+𝐳[Φ(𝐱)Γ(𝐳)Jn𝐳]\displaystyle~{}~{}~{}+\mathbf{z}^{\top}\left[{\Phi({\mathbf{x}})}-\Gamma(\mathbf{z})-J_{n}^{\top}{\mathbf{z}}\right]
=𝐳(Jn+n(𝐚)Jn)𝐳𝐳Φ(𝐱)+𝐳BBΦ(𝐱~)\displaystyle=\mathbf{z}^{\top}\left(J_{n}+\mathcal{L}_{n}(\mathbf{a})-J_{n}^{\top}\right){\mathbf{z}}-\mathbf{z}^{\top}{\Phi({\mathbf{x}})}+\mathbf{z}^{\top}BB^{\top}{\Phi(\tilde{\mathbf{x}})}
𝐳n(𝐚)(𝐳𝐱~)+𝐳(Φ(𝐱)Γ(𝐳))\displaystyle~{}~{}~{}-\mathbf{z}^{\top}\mathcal{L}_{n}(\mathbf{a})\left(\mathbf{z}-\tilde{\mathbf{x}}\right)+\mathbf{z}^{\top}\left({\Phi({\mathbf{x}})}-\Gamma(\mathbf{z})\right)
=𝐳Γ(𝐳)+𝐳(n(𝐚)𝐱~+BBΦ(𝐱~))\displaystyle=-\mathbf{z}^{\top}\Gamma(\mathbf{z})+\mathbf{z}^{\top}\left(\mathcal{L}_{n}(\mathbf{a})\tilde{\mathbf{x}}+BB^{\top}{\Phi(\tilde{\mathbf{x}})}\right)

where 𝐱~=𝐱𝐱^\tilde{\mathbf{x}}={\mathbf{x}}-\hat{\mathbf{x}}, Φ(𝐱~)=Φ(𝐱)Φ(𝐱^){\Phi(\tilde{\mathbf{x}})}={\Phi}({\mathbf{x}})-{\Phi(\hat{\mathbf{x}})}. It is seen from Lemma 2 that 𝐱~\tilde{\mathbf{x}} remains uniformly bounded over [0,T)[0,T) and converges to zero as tTt\rightarrow T. The boundedness of Φ(𝐱~){\Phi(\tilde{\mathbf{x}})} is also guaranteed by the bounded 𝐱~\tilde{\mathbf{x}}, and Φ(𝐱~){\Phi(\tilde{\mathbf{x}})} also converges to zero as tt tends to TT. Therefore, there exist a positive constant γ<\gamma<\infty such that V˙γ\dot{V}\leq\gamma holds for t[0,T)\forall t\in[0,T). It follows that VV and 𝐳\mathbf{z} cannot escape during the interval [0,T).[0,T).

Step 2: From Lemma 2, we know that there exists a prescribed time $T$ such that $\{\hat{x}_{i}(t)=x_{i}(t)\}_{i=2}^{n}$ for $t\geq T$. In consequence, the output feedback control law $u=-{B^{\top}}\left(\mathcal{L}_{n}(\mathbf{a})\hat{\mathbf{x}}+{\Phi}(\hat{\mathbf{x}})\right)$ coincides with the state feedback control law $u=-B^{\top}\left(\mathcal{L}_{n}(\mathbf{a}){\mathbf{x}}+{\Phi}({\mathbf{x}})\right)$ for $t\geq T$. In other words, this output feedback law can be used to establish prescribed-time stability and performance recovery (see [37, 36] and [28]). Since the closed-loop trajectory under (34) does not escape during $t\in[0,T)$, it follows from Theorem 1-PT-GUAS that under the proposed output feedback control law there exists another pre-set time $t_{p}\geq T$ to steer the system from an arbitrary bounded state to zero as $t\rightarrow t_{p}$. The boundedness of $u(t)$ can also be easily established according to Theorem 1. This completes the proof. \hfill\blacksquare

Remark 6

To close this section, it is worth making the following comments.

  • i)i)

    The output feedback controller inherits the properties and advantages of the state feedback controller as stated in Theorem 1, that is, elegant parameter tuning, one-step design process, simple controller structure.

  • ii)ii)

    Prescribed-time stabilization is the result of employing the scaling function μ(t,tp)\mu(t,t_{p}) and the nonlinear feedback function h(x)h(x) inside the control scheme (10). All observer errors {x~i}i=1n\{\tilde{x}_{i}\}_{i=1}^{n} are regulated to zero within the pre-set time TT, and all system states {xi}i=1n\{{x}_{i}\}_{i=1}^{n} are regulated to zero within pre-set time tpt_{p}, where tpTt_{p}\geq T.

  • iii)iii)

    Only the output state (x1)(x_{1}) is required in constructing the output feedback prescribed-time control (34). Such control scheme has an obvious numerical advantage because its maximum implemented gain is 1/(tpt)n1/(t_{p}-t)^{n}, while the standard linear nn-th order output feedback prescribed-time controller proposed in [28] involves 1/(tpt)2n1/(t_{p}-t)^{2n}.

  • iv)iv)

    It is noted that the time-varying gains μ(t,tp)\mu(t,t_{p}), μ1(t,T)\mu_{1}(t,T) go to infinity as time tends to tpt_{p} or TT, such phenomenon (unbounded control gain at the fixed convergence time) appears in all results on strictly finite-/fixed-/prescribed-time control. The implementation solution for finite-/fixed-time control is to use fractional power state feedback or sign functions switching the controller to zero when the system is regulated to the equilibrium point. The other implementation solutions for prescribed-time control are given in [28, 20, 18]. One typical way to address this issue is to let the system operate in a finite-time interval, i.e., adjusting the operational time slightly shorter than prescribed convergence time. Another typical way is to let the system operate in an infinite time interval by making suitable saturation on the control gains. Here in this work, we use the method as described in Remark 3 to implement the prescribed-time controller for t[0,tp)t\in[0,t_{p}) and initiate the asymptotically stable controller for t[tp,+)t\in[t_{p},+\infty).

Example 2

We illustrate the performance of the observer and the output feedback controller through the following model:

{x˙1(t)=x2(t)x˙2(t)=u(t)y(t)=x1(t),\left\{\begin{array}[]{ll}\dot{x}_{1}(t)=x_{2}(t){}\\ \dot{x}_{2}(t)=u(t){}\\ y(t)=x_{1}(t),\end{array}\right.

the observer is

{x^˙1(t)=x^2(t)+g1(t,T)(y(t)x^1(t))x^˙2(t)=u(t)+g2(t,T)(y(t)x^1(t))\left\{\begin{array}[]{ll}\dot{\hat{x}}_{1}(t)=\hat{x}_{2}(t)+g_{1}\left(t,T\right)\left(y(t)-\hat{x}_{1}(t)\right)\\ \dot{\hat{x}}_{2}(t)=u(t)+g_{2}\left(t,T\right)\left(y(t)-\hat{x}_{1}(t)\right)\end{array}\right. (35)

with g1(t,T)=r1+2(m0+2)Tμ1g_{1}(t,T)=r_{1}+\frac{2(m_{0}+2)}{T}\mu_{1}, g2(t,T)=r2+r1m0+2Tμ1+(m0+1)(m0+2)T2μ12g_{2}(t,T)=r_{2}+r_{1}\frac{m_{0}+2}{T}\mu_{1}+\frac{(m_{0}+1)(m_{0}+2)}{T^{2}}\mu_{1}^{2}. For observer parameters, we select r1=r2=1r_{1}=r_{2}=1, m0=3m_{0}=3, T=4T=4s and [x^1(0);x^2(0)]=[0;0][\hat{x}_{1}(0);\hat{x}_{2}(0)]=[0;0]. For output feedback controller parameters, we select k=5k=5, a=b=1a=b=1 and tp=6t_{p}=6s. The initial condition is [x1(0);x2(0)]=[4,3][x_{1}(0);x_{2}(0)]=[4,-3]. The control law uu is implemented similar to (28), just replacing x2x_{2} in (28) with x^2\hat{x}_{2}. Furthermore, the control performance in a noisy environment is studied by considering the output signal y(t)y(t) corrupted with an uncertain measurement noise η(t)\eta(t), namely y(t)=x1(t)+η(t)y(t)=x_{1}(t)+\eta(t) with η(t)=0.001sin(3t)\eta(t)=0.001\sin(3t). The closed-loop state {xi(t)}i=12\{x_{i}(t)\}_{i=1}^{2} trajectories, state estimate {x^i(t)}i=12\{\hat{x}_{i}(t)\}_{i=1}^{2} trajectories, the norm of observer estimation error 𝐱~(t)\|\tilde{\mathbf{x}}(t)\|, the norm of system state 𝐱(t)\|{\mathbf{x}}(t)\| and control input signal u(t)u(t) are shown in Figs. 3-4.
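Before examining the figures, the error dynamics (32) of this second-order observer can be checked with a short numerical sketch (assuming NumPy; the Euler step is illustrative, and the integration is stopped slightly before $T$ to avoid the unbounded observer gains):

```python
import numpy as np

r1, r2, m0, T, dt = 1.0, 1.0, 3, 4.0, 1e-4       # observer parameters used in Example 2
e, t = np.array([4.0, -3.0]), 0.0                # initial estimation error x(0) - xhat(0)
while t < T - 0.05:                              # stop just before the pre-set time T
    mu1 = T / (T - t)
    g1 = r1 + 2 * (m0 + 2) / T * mu1
    g2 = r2 + r1 * (m0 + 2) / T * mu1 + (m0 + 1) * (m0 + 2) / T ** 2 * mu1 ** 2
    e = e + dt * np.array([e[1] - g1 * e[0], -g2 * e[0]])   # error dynamics (32), n = 2
    t += dt
print(np.linalg.norm(e))                         # estimation error is driven toward zero as t -> T
```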

It is observed from Fig. 3 that the controller remains operational after $t_{p}$, and all closed-loop signals are bounded on the whole time domain; in particular, the observer estimation errors converge to zero as $t\rightarrow T$ (4s), and the system states converge to zero as $t\rightarrow t_{p}$ (6s), confirming our theoretical prediction and analysis. Fig. 4 shows that the proposed controller retains its performance even in the presence of measurement noise. Although a slight chattering phenomenon, caused by noise and controller switching, occurs near $t_{p}$, the control input remains bounded on the whole time domain. In addition, the numerical advantage leading to friendly implementation has been verified in simulation.

Figure 3: Simulation results with prescribed-time observer (T=4sT=4s) and the proposed output feedback control (tp=6st_{p}=6s).
Figure 4: Simulations results with measurement noise.

IV Prescribed-time control for linear systems in canonical form with uncertainties

In the presence of non-vanishing uncertain term d(𝐱,t)d(\mathbf{x},t), system (1) can be rewritten as

{x˙i=xi+1,i=1,,n1x˙n=u+F(x1,,xn,t)\left\{\begin{array}[]{ll}\dot{x}_{i}=x_{i+1},~{}~{}~{}~{}i=1,\cdots,n-1\\ \dot{x}_{n}=u+F(x_{1},\cdots,x_{n},t)\end{array}\right. (36)

where F()=i=1naixi+d(𝐱,t)F(\cdot)=\sum_{i=1}^{n}a_{i}x_{i}+d(\mathbf{x},t) is an unknown smooth function and satisfies |F()|d¯(𝐱)|F(\cdot)|\leq\bar{d}(\mathbf{x}) with d¯()\bar{d}(\cdot) being a known scalar real-valued function.

Define a sliding surface s(t)s(t) on [0,tp)[0,t_{p}) as follows:

s(t)=xn+ϕn1(x1,,xn1,t),s(t)=x_{n}+\phi_{n-1}(x_{1},\cdots,x_{n-1},t),~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{} (37)

where $\phi_{n-1}$ is defined in (9). Other choices of sliding surface can be found in [39] and [40]. The derivative of the auxiliary variable along the trajectories of (36) is

s˙=u+F()+ϕ˙n1.\dot{s}=u+F(\cdot)+\dot{\phi}_{n-1}. (38)

where $\dot{\phi}_{n-1}=\sum_{k=1}^{n-1}\frac{\partial\phi_{n-1}}{\partial x_{k}}x_{k+1}+\sum_{k=0}^{n-2}\frac{\partial\phi_{n-1}}{\partial\mu^{(k)}}\mu^{(k+1)}$ is a computable function.

Theorem 3

Consider system (1) and the transformed system (36). The closed-loop signals $\{x_{i}\}_{i=1}^{n}$ and $s(t)$ are prescribed-time globally uniformly asymptotically stable (PT-GUAS) if the control law is designed as:

u=d¯sign(s)ϕ˙n1kμh(s),u=-\bar{d}\text{sign}(s)-\dot{\phi}_{n-1}-k\mu h(s), (39)

where $k>n$, $\bar{d}\geq|F(\cdot)|$, $\mu=1/(t_{p}-t)$, $\dot{\phi}_{n-1}$ is the computable function given after (38), and $h(\cdot)$ is the hyperbolic-tangent-like function defined in (2).

Proof: For t[0,tp)t\in[0,t_{p}), let V=|s|V=|s|. With the control scheme (39), for s0s\neq 0, the upper right-hand derivative of VV along the trajectory of the closed-loop system (36) becomes

DV=ss˙|s|=(d¯F)kμh(|s|)kμh(V).D^{*}V=\frac{s\dot{s}}{|s|}=-(\bar{d}-F)-k\mu h(|s|)\leq-k\mu h(V). (40)

By using Lemma 1, it is easy to get that V[0,tp)V\in\ell_{\infty}[0,t_{p}) and limttpV=0\lim_{t\rightarrow t_{p}}V=0, establishing the same for s(t)s(t) and s˙(t)\dot{s}(t). At the same time, the closed-loop {xi}i=1n1\{x_{i}\}_{i=1}^{n-1}-dynamics become

x˙i=\displaystyle\dot{x}_{i}= xi+1;i=1,,n2\displaystyle x_{i+1};~{}~{}~{}~{}i=1,\cdots,n-2 (41)
x˙n1=\displaystyle\dot{x}_{n-1}= ϕn1\displaystyle-\phi_{n-1}

where ϕn1\phi_{n-1} is the virtual control input. It is seen that the control law (39) reduces the perturbed nn-th order system to an unperturbed (n1)(n-1)-th order system. Therefore, by using Theorem 1-PT-GUAS, we can prove that the closed-loop signals {xi}i=1n1\{x_{i}\}_{i=1}^{n-1}, {x˙i}i=1n1\{\dot{x}_{i}\}_{i=1}^{n-1}, ϕn1\phi_{n-1} and ϕ˙n1\dot{\phi}_{n-1} are bounded and converge to zero as ttpt\rightarrow t_{p}. From (40) and the analysis process in Section 3, it is not difficult to verify that μh(s)\mu h(s) is also bounded. Therefore, it follows from (39) that the control input u(t)u(t) is bounded for t[0,tp)t\in[0,t_{p}). This completes the proof. \hfill\blacksquare

Remark 7

For t[tp,+)t\in[t_{p},+\infty), we specifically design s(t)s(t) as s(t)=xn+i=1n1lixis(t)=x_{n}+\sum_{i=1}^{n-1}l_{i}x_{i}, where {li}i=1n1\{l_{i}\}_{i=1}^{n-1} are assigned such that the polynomial l1+l2s++ln1sn2+sn1l_{1}+l_{2}s+\cdots+l_{n-1}s^{n-2}+s^{n-1} is Hurwitz, and design the corresponding control law as u=d¯sign(s)i=1n1lixi+1u=-\bar{d}\text{sign}(s)-\sum_{i=1}^{n-1}l_{i}x_{i+1}. As a result, D|s(t)|0D^{*}|s(t)|\leq 0, we therefore obtain that s(t)=0,t[tp,+)s(t)=0,\forall t\in[t_{p},+\infty) by recalling that s(tp)=0s(t_{p})=0. Furthermore, it is not difficult to get u[tp,+)u\in\ell_{\infty}[t_{p},+\infty). As the disturbances do not disappear, the control action for ttpt\geq t_{p} is no longer zero but bounded, a necessary effort to fight against the ever-lasting (nonvanishing) uncertainties/disturbances, which is comprehensible in order to maintain each state at the equilibrium (zero) after the prescribed settling time.

Example 3

To verify the effectiveness of the prescribed-time sliding mode controller, we consider the following system:

{x˙1(t)=x2(t),x˙2(t)=u(t)+F(x1,x2,t)\left\{\begin{array}[]{ll}\dot{x}_{1}(t)=x_{2}(t),{}\\ \dot{x}_{2}(t)=u(t)+F(x_{1},x_{2},t)\end{array}\right.

where

F(x1,x2,t)=0.03x1+0.01sin(x2)+0.02sin(2t)F(x_{1},x_{2},t)=0.03x_{1}+0.01\sin(x_{2})+0.02\sin(2t) (42)

Here $\bar{d}$ can be selected as $\bar{d}=0.03(|x_{1}|+1)$. According to Theorem 3, the controller is given by

u=d¯sign(s)ϕ˙1kμh(s),t[0,tp)u=-\bar{d}\text{sign}(s)-\dot{\phi}_{1}-k\mu h(s),~{}t\in[0,t_{p}) (43)

with

ϕ1=kμh(x1),μ=1/(tpt),\displaystyle\phi_{1}=k\mu h(x_{1}),~{}~{}\mu=1/(t_{p}-t),
ϕ˙1=ϕ1x1x2+ϕ1μμ˙,\displaystyle\dot{\phi}_{1}=\frac{\partial\phi_{1}}{\partial x_{1}}x_{2}+\frac{\partial\phi_{1}}{\partial\mu}\dot{\mu},
h(x)=(eaxebx)/(aeax+bebx),\displaystyle h(x)=(e^{ax}-e^{-bx})/(ae^{ax}+be^{-bx}),
s=x2+ϕ1,t[0,tp)\displaystyle s=x_{2}+\phi_{1},~{}~{}~{}~{}t\in[0,t_{p})
u=d¯sign(s)ϕ˙1kμh(s),t[0,tp)\displaystyle u=-\bar{d}\text{sign}(s)-\dot{\phi}_{1}-k\mu h(s),~{}t\in[0,t_{p})

In addition, according to Remark 7, we design $u=-\bar{d}\text{sign}(s)-l_{1}x_{2},~\forall t\geq t_{p}$ with $s=x_{2}+l_{1}x_{1}$. For simulation, the design parameters are chosen as $a=b=l_{1}=1$ and $k=3$. To verify the property of prescribed-time convergence w.r.t. the initial conditions, three different initial values $[x_{1}(0);x_{2}(0)]=[1;-1]$, $[x_{1}(0);x_{2}(0)]=[2;-1]$ and $[x_{1}(0);x_{2}(0)]=[3;-1]$ are considered in Fig. 5. To confirm the property of prescribed-time convergence w.r.t. $t_{p}$, we choose $t_{p}=2$s, $3$s, $4$s, respectively, in Fig. 6.
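For completeness, a minimal simulation sketch of this example is given below (assuming NumPy; the Euler step, the switch margin $\epsilon$ just before $t_{p}$, and the stopping time are illustrative implementation choices in the spirit of Remark 6-iv and Remark 7):

```python
import numpy as np

k, l1, t_p, dt, eps = 3.0, 1.0, 2.0, 1e-4, 0.05
x1, x2, t = 1.0, -1.0, 0.0                 # initial condition [x1(0); x2(0)] = [1; -1]
while t < 2.0 * t_p:
    F = 0.03 * x1 + 0.01 * np.sin(x2) + 0.02 * np.sin(2 * t)   # uncertainty (42)
    d_bar = 0.03 * (abs(x1) + 1.0)                             # bound on |F|
    if t < t_p - eps:                                          # prescribed-time phase, law (43)
        mu = 1.0 / (t_p - t)
        phi1 = k * mu * np.tanh(x1)                            # phi_1 = k*mu*h(x1), a = b = 1
        dphi1 = k * mu * (1 - np.tanh(x1) ** 2) * x2 + k * np.tanh(x1) * mu ** 2
        s = x2 + phi1
        u = -d_bar * np.sign(s) - dphi1 - k * mu * np.tanh(s)
    else:                                                      # post-t_p phase (Remark 7)
        s = x2 + l1 * x1
        u = -d_bar * np.sign(s) - l1 * x2
    x1, x2 = x1 + dt * x2, x2 + dt * (u + F)
    t += dt
print(x1, x2)   # both states are steered to (approximately) zero by t_p and kept near zero
```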

Simulation results show that: i) all states converge to zero synchronously within the pre-set time $t_{p}$; and ii) the convergence time is independent of the initial conditions and of the other design parameters. Certain control chattering is observed as $t\rightarrow t_{p}$, which is caused by the auxiliary variable $s$ and the controller switching. In particular, the magnitude of the control input increases slightly when $|x_{1}(0)|$ increases or $t_{p}$ decreases.

Figure 5: Simulation results with different initial conditions under $t_{p}=2\,\mathrm{s}$.
Figure 6: Simulation results with different prescribed times under $[x_{1}(0);x_{2}(0)]=[-2;0]$.

V Conclusions

A unified nonlinear and time-varying feedback control scheme has been developed to achieve prescribed-time regulation of high-order uncertain systems. The proposed controller achieves asymptotic, exponential, or prescribed-time regulation by selecting the design parameters properly, and the parameter-selection rule has been derived via Lyapunov theory. Furthermore, prescribed-time output feedback control and prescribed-time sliding mode control for high-order systems have been developed, retaining the simplicity of the basic design. Extending the proposed method to more general nonlinear systems with mismatched uncertainties is an interesting topic for future research.
