
Minimal-time Deadbeat Consensus and Individual Disagreement Degree Prediction for High-order Linear Multi-agent Systems

Fu-Long Hu [email protected]    Hai-Tao Zhang [email protected]    Bowen Xu [email protected]    Zhe Hu [email protected]    Wei Ren [email protected] School of Artificial Intelligence and Automation, Key Laboratory of Image Processing and Intelligent Control, and the State Key Laboratory of Digital Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan, 430074, China. School of Artificial Intelligence, Optics and Electronics, Northwestern Polytechnical University, Xi’an 710072, China Department of Electrical and Computer Engineering, University of California, Riverside 92521, USA
Abstract

In this paper, a Hankel matrix-based fully distributed algorithm is proposed to address a minimal-time deadbeat consensus prediction problem for discrete-time high-order multi-agent systems (MASs). Therein, each agent can predict the consensus value with the minimum number of observable historical outputs of its own. Accordingly, compared to most existing algorithms only yielding asymptotic convergence, the present method can attain deadbeat consensus instead. Moreover, based on the consensus value prediction, instant individual disagreement degree value of MASs can be calculated in advance as well. Sufficient conditions are derived to guarantee both the minimal-time deadbeat consensus and the instant individual disagreement degree prediction. Finally, both the effectiveness and superiority of the proposed deadbeat consensus algorithm are substantiated by numerical simulations.

keywords:
Deadbeat consensus prediction; multi-agent systems; networked control systems; collaborative systems;
thanks: The material in this paper was not presented at any conference. Corresponding author: Hai-Tao Zhang.


1 Introduction

Due to their high efficiency, wide coverage and low cost, multi-agent systems (MASs) have attracted a large volume of research effort on distributed control. Therein, consensus is a fundamental requirement, referring to a certain state (e.g., acceleration, position, voltage) of all agents reaching an agreement merely through local interactions (Ren & Beard, 2008; Olfati-Saber, 2006). To this end, quite a few consensus algorithms have been proposed over the past few decades (Ren & Beard, 2005; Fax & Murray, 2004; Dimarogonas & Kyriakopoulos, 2007; Huang et al., 2014), which have found wide applications in smart grids (Yang et al., 2013), social networks (Proskurnikov et al., 2015), multi-sensor networks (Sun et al., 2017), unmanned systems (Liu et al., 2019), etc. So far, most existing approaches focus on asymptotically attaining consensus by utilizing local interactions (Olfati-Saber & Murray, 2004; Jadbabaie et al., 2003; Moreau, 2005; Xiao & Boyd, 2004), updating each agent's state by some weighted linear combination of its own state and those of its neighbors. Consequently, exact convergence to the consensus state cannot be achieved within a finite number of steps, which fails to fulfill the increasingly strict cooperation efficiency requirements of modern industrial MASs.

In the past decades, as a remedy, a promising kind of scheme has been proposed, which calculates the exact final consensus value from sequential individual states (Charalambous & Hadjicostis, 2018; Zhang et al., 2008, 2019; Manitara & Hadjicostis, 2017; Sundaram & Hadjicostis, 2010). As a pioneering work, a consensus prediction scheme for first-order linear MASs with time-invariant topology was proposed in (Sundaram & Hadjicostis, 2007), where each individual computes the consensus value using a finite historical sampling series stored in its memory. Enlightened by such representative works (Charalambous & Hadjicostis, 2018; Zhang et al., 2008, 2019; Manitara & Hadjicostis, 2017; Sundaram & Hadjicostis, 2010, 2007), some efforts (Yuan, 2012; Yuan et al., 2013) have further reduced the individual consensus prediction cost by substantially decreasing the length of the historical state series required to be stored in individual memory. Following this research line, (Charalambous et al., 2015) proposed a distributed algorithm that calculates exact average consensus values within a finite period for first-order MASs with time-delays.

To date, most of the aforementioned consensus value prediction algorithms focus on first-order MASs and adopt the final value theorem of the $\mathcal{Z}$-transform to calculate the eventual consensus values, which cannot be directly extended to high-order MASs since the limits of their eventual values do not always exist. Due to the challenge of calculating high-order evolutions merely by local information exchange, minimal-time deadbeat consensus prediction for high-order linear MASs has so far remained unaddressed. However, both natural biological swarms/flocks/schools and engineering collective motion systems are often required to react promptly to environmental emergencies, urgent tasks or even suddenly-appearing group targets. In such universal situations, the development of a deadbeat consensus prediction protocol becomes indispensable for high-order MASs, which need to reach exact agreement not only upon positions and velocities, but also upon accelerations and jerks (Attanasi et al., 2014; Cavagna et al., 2015), with sufficiently short individual memory lengths. Therefore, consensus protocol design for high-order MASs has attracted more and more attention from relevant researchers (Rezaee & Abdollahi, 2015; Seo et al., 2009; Su & Lin, 2016; Zuo et al., 2018; Hu et al., 2019). This motivates us to consider an urgent yet challenging problem, i.e., designing a decentralized protocol to predict the exact eventual consensus values of high-order linear MASs with directed topological backbones merely according to an individual memory as short as possible.

In brief, the main contribution of this paper is as follows. A minimal-time deadbeat consensus prediction (MDCP) protocol is developed for discrete-time high-order linear MASs. Sufficient conditions are theoretically derived to guarantee the convergence of MDCP. Moreover, rather than merely predicting the steady-state consensus values of the MAS, the present MDCP can also predict the instant states of each individual, which in turn describes the future evolution of the individual disagreement degree among the MAS more delicately, a property desirable for more complicated real applications such as rendezvous, coverage, flocking and formation control.

The remainder of this paper is organized as follows. In Section II, the main problem addressed by the present study is presented together with the preliminaries. Then, MDCP is designed in Section III and the sufficient conditions are derived to guarantee the convergence of MDCP. Afterwards, numerical simulations are conducted in Section IV to verify the effectiveness of the present protocol. Finally, conclusion is drawn in Section V.

Throughout this paper, the following symbols will be used. $\mathbb{R}$, $\mathbb{R}^{n}$, $\mathbb{R}^{n\times m}$ and $\mathbb{R}^{+}$ refer to the sets of real numbers, $n$-dimensional real vectors, $n\times m$ real matrices and positive real numbers, respectively. $\mathbb{N}$, $\mathbb{Z}$ and $\mathbb{Z}^{+}$ indicate the sets of natural numbers, integers and positive integers, respectively. The transpose of a matrix $\ast$ is denoted by $\ast^{\mbox{\tiny\sf T}}$. $\mathrm{Size}(\ast)$ indicates the number of rows of a square matrix $\ast$, and $\ast^{\bot}$ denotes the null space of $\ast$. $(e_{n}^{i})^{\mbox{\tiny\sf T}}=\left[0,\ldots,0,1_{i\text{-th}},0,\ldots,0\right]\in\mathbb{R}^{1\times n}$ is an $n$-dimensional row vector with the $i$-th element being 1 and the others 0. The symbol $\det(\ast)$ denotes the determinant of a matrix $\ast$. $\otimes$ refers to the Kronecker product. $\mathbf{1}_{n}\in\mathbb{R}^{n}$ and $\mathbf{0}_{n}\in\mathbb{R}^{n}$ are the column vectors with all elements being 1 and 0, respectively. $\textbf{0}$ is a matrix with compatible dimensions and all elements being 0. $I_{n}\in\mathbb{R}^{n\times n}$ denotes the $n\times n$ identity matrix. The $\mathcal{Z}$-transform of a function $f$ is denoted by $\mathcal{Z}(f)$. The symbol $\lfloor\ast\rfloor:=\max\{n\in\mathbb{Z}\mid n<\ast\}$ indicates the lower integral function. The Euclidean norm of a vector $\ast$ is denoted by $\left\|\ast\right\|$. The symbol ${}_{i}^{j}\ast:=\left[\ast(i),\ast(i+1),\dots,\ast(j)\right]\in\mathbb{R}^{j-i+1}$, $j>i$, $i,j\in\mathbb{N}$, denotes a row vector, where $\ast(i)$ refers to the $i$-th element of $\ast$. $\sup(\ast)$ denotes the supremum of a set $\ast$.

2 Preliminaries and problem formulation

Consider a fixed directed graph of $n$ agents given by $\mathcal{G}=\left(\mathcal{V},\mathcal{E},\mathcal{A}\right)$, where $\mathcal{V}=\left\{1,2,\dots,n\right\}$ is a finite nonempty node set denoting the agents and $\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}$ is the edge set. $\mathcal{A}=\left[a_{ij}\right]\in\mathbb{R}^{n\times n}$ represents the weighted adjacency matrix. Here, $a_{ij}>0$ if $(j,i)\in\mathcal{E}$, and $a_{ij}=0$ otherwise. Furthermore, only simple graphs are considered, i.e., there is no self-loop. Let $\mathcal{L}:=[\mathcal{L}_{ij}]\in\mathbb{R}^{n\times n}$ denote the Laplacian matrix, where $\mathcal{L}_{ii}=\sum_{j\neq i}a_{ij}$ and $\mathcal{L}_{ij}=-a_{ij}$ for $i\neq j$. If the directed graph $\mathcal{G}$ has a directed spanning tree, then 0 is a simple eigenvalue of $\mathcal{L}$ with corresponding eigenvector $\mathbf{1}_{n}$, i.e., $\mathcal{L}\mathbf{1}_{n}=\mathbf{0}_{n}$.

Consider a discrete-time $s$-order linear MAS consisting of $n$ agents with dynamics described by

xi(1)(k)=xi(1)(k1)+ϵxi(2)(k1),xi(2)(k)=xi(2)(k1)+ϵxi(3)(k1),xi(s1)(k)=xi(s1)(k1)+ϵxi(s)(k1),xi(s)(k)=xi(s)(k1)+ui(k1),\begin{split}x_{i}^{(1)}(k)&=x_{i}^{(1)}(k-1)+\epsilon x_{i}^{(2)}(k-1),\\ x_{i}^{(2)}(k)&=x_{i}^{(2)}(k-1)+\epsilon x_{i}^{(3)}(k-1),\\ &\vdots\\ x_{i}^{(s-1)}(k)&=x_{i}^{(s-1)}(k-1)+\epsilon x_{i}^{(s)}(k-1),\\ x_{i}^{(s)}(k)&=x_{i}^{(s)}(k-1)+u_{i}(k-1),\end{split} (1)

where $x_{i}^{(s)}(k)\in\mathbb{R}^{m}$ denotes the $s$-order state of the $i$-th agent at the $k$-th time instant, $i=1,2,\cdots,n$, $s,k\in\mathbb{Z}^{+}$, and $\epsilon\in\mathbb{R}^{+}$ represents the sampling time. Without loss of generality, $m=1$ is considered throughout this paper. The scenarios for $m>1$ can be easily handled with the assistance of the Kronecker product `$\otimes$'.

The conventional control protocol for high-order linear MASs is given as follows (Ren et al., 2007)

ui(k1)=ωj=1naijps(xi(1)(k1)xj(1)(k1)),u_{i}(k-1)=\omega\sum_{j=1}^{n}a_{ij}p_{s}\left(x_{i}^{(1)}(k-1)-x_{j}^{(1)}(k-1)\right), (2)

where $p_{s}(x):=c_{0}x+c_{1}x^{(1)}+\cdots+c_{s-1}x^{(s-1)}$ is an operator acting on the variable $x$, the external coupling weight satisfies $\omega<0$, and the internal coupling weight coefficients satisfy $c_{l}>0$ with $l=0,1,2,\cdots,s-1$. Consensus is said to be reached for the high-order MAS if $x_{i}^{(\ell)}\rightarrow x_{j}^{(\ell)}$, $\ell=1,2,\dots,s$, $\forall i\neq j$, $i,j=1,2,\cdots,n$. The $\ell$-order states of all agents are denoted by the vector $X^{(\ell)}:=\left[x_{1}^{(\ell)},x_{2}^{(\ell)},\dots,x_{n}^{(\ell)}\right]^{\mbox{\tiny\sf T}}$, $\ell=1,2,\cdots,s$. The full state of the $i$-th agent is denoted by the vector

Xi:=[xi(1),xi(2),,xi(s)]T,i=1,2,,n.X_{i}:=\left[x_{i}^{(1)},x_{i}^{(2)},\dots,x_{i}^{(s)}\right]^{\mbox{\tiny\sf T}},i=1,2,\cdots,n. (3)

Substituting (2) into (1), the dynamics of the whole MAS 𝒢\mathcal{G} can be rewritten in a compact form as

X(k)=WX(k1),X(k)=WX(k-1), (4)

with X:=[(X(1))T,(X(2))T,,(X(s))T]TX:=\left[(X^{(1)})^{\mbox{\tiny\sf T}},(X^{(2)})^{\mbox{\tiny\sf T}},\dots,(X^{(s)})^{\mbox{\tiny\sf T}}\right]^{\mbox{\tiny\sf T}} and

W=[InϵIn𝟎n𝟎n𝟎n𝟎nInϵIn𝟎n𝟎n𝟎n𝟎n𝟎nInϵInωc0ωc1ωc2ωcs2In+ωcs1]W=\left[\begin{array}[]{cccccc}I_{n}&\epsilon I_{n}&\mathbf{0}_{n}&\ldots&\mathbf{0}_{n}&\mathbf{0}_{n}\\ \mathbf{0}_{n}&I_{n}&\epsilon I_{n}&\ldots&\mathbf{0}_{n}&\mathbf{0}_{n}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ \mathbf{0}_{n}&\mathbf{0}_{n}&\mathbf{0}_{n}&\ldots&I_{n}&\epsilon I_{n}\\ \omega c_{0}\mathcal{L}&\omega c_{1}\mathcal{L}&\omega c_{2}\mathcal{L}&\ldots&\omega c_{s-2}\mathcal{L}&I_{n}+\omega c_{s-1}\mathcal{L}\end{array}\right]

being the Perron matrix (Olfati-Saber & Murray, 2004).
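For concreteness, $W$ can be assembled numerically from $\mathcal{L}$, $\epsilon$, $\omega$ and $c_{0},\dots,c_{s-1}$. The following Python sketch (the helper name perron_matrix is ours, not from the paper) uses the equivalent compact form $W=\epsilon C\otimes I_{n}+\omega R\otimes\mathcal{L}+I_{sn}$, which reappears as (12) in the proof of Lemma 2.

import numpy as np

def perron_matrix(L, eps, omega, c):
    # Assemble the Perron matrix of (4): W = eps*C (x) I_n + omega*R (x) L + I_{sn}.
    n = L.shape[0]
    s = len(c)
    C = np.diag(np.ones(s - 1), k=1)   # s x s shift matrix (ones on the super-diagonal)
    R = np.zeros((s, s))
    R[-1, :] = c                       # last row carries the internal coupling weights
    return eps * np.kron(C, np.eye(n)) + omega * np.kron(R, L) + np.eye(s * n)

With the data of Section 4 ($s=4$, $\epsilon=0.1$, $\omega=-0.2$, $c_{0},\dots,c_{3}=6,6,17,2$), this reproduces the $W$ displayed there.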

To facilitate further analysis, the following assumptions are necessary.

Assumption 1.

MAS 𝒢=(𝒱,,𝒜)\mathcal{G}=\left(\mathcal{V},\mathcal{E},\mathcal{A}\right) has a directed spanning tree.

Assumption 2.

All the eigenvalues of the Perron matrix $W$ of (4) lie within the closed unit disc, i.e., $\left\|\lambda_{(W,i)}\right\|\leq 1$, where $\lambda_{(W,i)}$ denotes the $i$-th eigenvalue of $W$.

Assumptions 1 and 2 ensure the consensus accessibility of the discrete-time high-order linear MAS (4) (Wieland et al., 2008). Moreover, we give some definitions to facilitate further derivation.

Definition 1.

(Consensus vector) For a discrete-time high-order linear MAS (4) fulfilling Assumptions 1 and 2, the consensus vector (Ren et al., 2007) is denoted by c(k)\wp_{c}(k) that satisfies

c(k):=Υ(k,)X(0)s,\wp_{c}(k):=\Upsilon(k,\mathcal{L})X(0)\in\mathbb{R}^{s}, (5)

where

Υ(k,)=[𝟏npTkϵ𝟏npT1(s1)!(kϵ)(s1)𝟏npT𝟎n𝟏npT1(s2)!(kϵ)(s2)𝟏npT𝟎n𝟎nkϵ𝟏npT𝟎n𝟎n𝟏npT],\Upsilon(k,\mathcal{L})=\left[\begin{array}[]{cccc}\mathbf{1}_{n}p^{\mbox{\tiny\sf T}}&k\epsilon\mathbf{1}_{n}p^{\mbox{\tiny\sf T}}&\dots&\frac{1}{(s-1)!}(k\epsilon)^{(s-1)}\mathbf{1}_{n}p^{\mbox{\tiny\sf T}}\\ \mathbf{0}_{n}&\mathbf{1}_{n}p^{\mbox{\tiny\sf T}}&\dots&\frac{1}{(s-2)!}(k\epsilon)^{(s-2)}\mathbf{1}_{n}p^{\mbox{\tiny\sf T}}\\ \mathbf{0}_{n}&\mathbf{0}_{n}&\ddots&k\epsilon\mathbf{1}_{n}p^{\mbox{\tiny\sf T}}\\ \mathbf{0}_{n}&\mathbf{0}_{n}&\dots&\mathbf{1}_{n}p^{\mbox{\tiny\sf T}}\\ \end{array}\right], (6)

where $p\in\mathbb{R}^{n}$ is a nonnegative vector such that $p^{\mbox{\tiny\sf T}}\mathcal{L}=\mathbf{0}_{n}^{\mbox{\tiny\sf T}}$ and $p^{\mbox{\tiny\sf T}}\mathbf{1}_{n}=1$, $\wp_{c}^{(\ell)}(k)=(e_{s}^{\ell})^{\mbox{\tiny\sf T}}\wp_{c}(k)$ denotes the $\ell$-order consensus value, and $k\in\mathbb{N}$ denotes the time instant.
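Since $p$ in (6) is the nonnegative left eigenvector of $\mathcal{L}$ associated with the zero eigenvalue, normalized so that $p^{\mbox{\tiny\sf T}}\mathbf{1}_{n}=1$, the $s$ distinct entries of the consensus vector can be evaluated directly from $\mathcal{L}$ and $X(0)$. The sketch below (the helper name consensus_vector is ours) is a minimal numerical check of (5).

import numpy as np
from math import factorial

def consensus_vector(L, X0, k, eps, s):
    # Evaluate the s distinct entries of (5): wp_c^{(l)}(k) = sum_j (k*eps)^j / j! * p^T X^{(l+j)}(0).
    n = L.shape[0]
    vals, vecs = np.linalg.eig(L.T)
    p = np.real(vecs[:, np.argmin(np.abs(vals))])
    p = p / p.sum()                                   # p^T L = 0, p^T 1_n = 1
    X0 = np.asarray(X0, dtype=float).reshape(s, n)    # rows: X^{(1)}(0), ..., X^{(s)}(0)
    wp = np.zeros(s)
    for l in range(s):
        for j in range(s - l):
            wp[l] += (k * eps) ** j / factorial(j) * (p @ X0[l + j])
    return wp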

Definition 2.

(Individual disagreement vector) For a discrete-time high-order linear MAS (4) fulfilling Assumptions 1 and 2, the individual disagreement vector is denoted by i(k)\wp_{i}(k) that satisfies

(k):=X(k)c(k)𝟏n,\wp(k):=X(k)-\wp_{c}(k)\otimes\mathbf{1}_{n}, (7)

where $\wp(k):=\left[\wp_{1}(k),\wp_{2}(k),\dots,\wp_{n}(k)\right]^{\mbox{\tiny\sf T}}\in\mathbb{R}^{sn}$ denotes the disagreement vector,

$\wp_{i}(k):=\left[\wp_{i}^{(1)}(k),\wp_{i}^{(2)}(k),\dots,\wp_{i}^{(s)}(k)\right]^{\mbox{\tiny\sf T}}\in\mathbb{R}^{s},$

and $\wp_{i}^{(\ell)}(k)$ refers to the $\ell$-order disagreement value of the $i$-th agent.

Definition 3.

(Consensus-window-launch-time) For a discrete-time high-order linear MAS (4) fulfilling Assumptions 1 and 2, given an error threshold $\sigma\in\mathbb{R}^{+}$, the consensus-window-launch-time is the time $T_{c}$ that satisfies

Tc=sup{T|j=1si=1ni(j)(Tϵ)σ}T_{c}=\sup\left\{T\left|\sum_{j=1}^{s}\sum_{i=1}^{n}\left\|\wp_{i}^{(j)}\left(\frac{T}{\epsilon}\right)\right\|\leq\sigma\right.\right\} (8)

with i(j)(k)\wp_{i}^{(j)}(k) given in Definition 2, Tc+T_{c}\in\mathbb{R}^{+}.

Definition 4.

(Principal Minor) (Horn & Johnson, 2012) For a square matrix $\ast\in\mathbb{R}^{n\times n}$, an $m\times m$ $(n>m)$ principal submatrix of $\ast$ is a matrix obtained by deleting arbitrary $n-m$ rows and the corresponding columns. The determinant of an $m\times m$ principal submatrix is called a principal minor of $\ast$.

Definition 5.

(Sum of Principal Minors) (Horn & Johnson, 2012) For a square matrix $\ast\in\mathbb{R}^{n\times n}$, the sum of its principal minors of size $m$ (there are $C_{n}^{m}$ of them) is denoted by $E_{m}(\ast)$; e.g., for $m=1$, $E_{1}(\ast)=a_{11}+\cdots+a_{nn}$ with $C_{n}^{1}=n$, and for $m=n$, $E_{n}(\ast)=\operatorname{det}(\ast)$ with $C_{n}^{n}=1$.

Definition 6.

(Minimal Polynomial Pair) (Yuan et al., 2013) The minimal polynomial pair of the matrix $W$ associated with index $i$ is the monic polynomial $q_{i}(t):=t^{D_{i}+1}+\sum_{j=0}^{D_{i}}\alpha_{i,j}t^{j}$ of smallest degree $D_{i}+1$ that satisfies $(e_{sn}^{i})^{\mbox{\tiny\sf T}}q_{i}(W)=0$, $i=1,2,\dots,sn$.

Remark 1.

For each agent $i$ ($i=1,2,\cdots,n$) in an MAS, its own individual state $x_{i}$ is generally available. Therefore, without loss of generality, consider the case that the measurable output is $x_{i}^{(1)}(k)$. The virtue of the present predictor lies in indirectly retrieving the global initial state information and the Laplacian matrix of the whole MAS to solve both Problems 1 and 2, merely by using an individual state series $x_{i}^{(1)}(k)$ with a minimal length $2\overline{D}_{i}+1$, i.e., an individual memory vector

xi(1)02D¯i:=[xi(1)(0),xi(1)(1),,xi(1)(2D¯i)]2D¯i+1{}_{0}^{2\overline{D}_{i}}x_{i}^{(1)}:=\left[x_{i}^{(1)}(0),x_{i}^{(1)}(1),\ldots,x_{i}^{(1)}(2\overline{D}_{i})\right]\in\mathbb{R}^{2\overline{D}_{i}+1} (9)

with D¯i+\overline{D}_{i}\in\mathbb{Z}^{+} to be determined later.

Now, we are ready to introduce the two main problems addressed in this study as below.

Problem 1.

(Minimal-time deadbeat consensus prediction) Given a discrete-time high-order linear MAS (4) fulfilling Assumptions 1 and 2, for some i=1,2,,ni=1,2,\cdots,n, find a consensus vector ~c(k)s\widetilde{\wp}_{c}(k)\in\mathbb{R}^{s} that satisfies

~c(k)c(k)=0s{\widetilde{\wp}_{c}\left(k\right)-\wp_{c}(k)=\textbf{0}_{s}} (10)

by the memory vector xi(1)02D¯i{}_{0}^{2\overline{D}_{i}}x_{i}^{(1)} given in (9) and c(k)\wp_{c}(k) defined in Definition 1.

Problem 2.

(Instant individual disagreement degree prediction) Given the discrete-time high-order linear MAS (4), for some $i=1,2,\cdots,n$, find an individual disagreement vector $\widetilde{\wp}_{i}(k)\in\mathbb{R}^{s}$ such that

~i(k)i(k)=0s{\widetilde{\wp}_{i}\left(k\right)-\wp_{i}(k)=\textbf{0}_{s}} (11)

by the memory vector ${}_{0}^{2\overline{D}_{i}}x_{i}^{(1)}$ given in (9), with $\wp_{i}(k)$ defined in Definition 2.

3 Main technical result

In this section, Problems 1 and 2 will be addressed for the discrete-time high-order linear MAS (4). To this end, we provide the following Lemmas in advance.

Lemma 1.

(Sundaram & Hadjicostis, 2007) The minimal polynomial pair $q_{i}(t)$ divides the minimal polynomial of $W$ for all $i=1,2,\cdots,sn$.

Lemma 2.

WW in (4) has exactly ss eigenvalues at 1 with geometric multiplicity of 1.

Proof: Rewriting (1) in a compact form, one has that

X(k)=(ϵCIn+ωR+Isn)X(k1),X(k)=\left(\epsilon C\otimes I_{n}+\omega R\otimes\mathcal{L}+I_{sn}\right)X(k-1), (12)

with

C\displaystyle C =(010001000)s×s,R\displaystyle=\left(\begin{array}[]{cccc}0&1&\cdots&0\\ \vdots&\ddots&\ddots&\vdots\\ 0&0&\ddots&1\\ 0&0&\cdots&0\end{array}\right)_{s\times s},R =(0000c0c1cs1)s×s.\displaystyle=\left(\begin{array}[]{cccc}0&0&\cdots&0\\ \vdots&\ddots&\cdots&0\\ c_{0}&c_{1}&\cdots&c_{s-1}\end{array}\right)_{s\times s}.

Let $\Lambda:=\operatorname{diag}\left(\lambda_{1},\ldots,\lambda_{n}\right)$ be the diagonal matrix whose diagonal elements are the eigenvalues of the Laplacian $\mathcal{L}$, i.e., there exists a nonsingular matrix $P$ such that $P^{-1}\mathcal{L}P=\Lambda$. Then, multiplying both sides of (12) by $I_{s}\otimes P^{-1}$, one has

(IsP1)X(k)\displaystyle\left(I_{s}\otimes P^{-1}\right)X(k) =(B+Isn)(IsP1)X(k1)\displaystyle=(B+I_{sn})\left(I_{s}\otimes P^{-1}\right)X(k-1)

with B=(ϵCIn+ωRΛ)B=\left(\epsilon C\otimes I_{n}+\omega R\otimes\Lambda\right). Let η(k)=(IsP1)X(k)\eta(k)=\left(I_{s}\otimes P^{-1}\right)X(k), one has

ηi(k)=(Bi+Is)ηi(k1){\eta}_{i}(k)=\left(B_{i}+I_{s}\right)\eta_{i}(k-1)

with ηi(k)=[esni,esnn+i,,esn(s1)n+i]Tη(k)\eta_{i}(k)=\left[e_{sn}^{i},e_{sn}^{n+i},\cdots,e_{sn}^{(s-1)n+i}\right]^{\mbox{\tiny\sf T}}\eta(k), Bi=(ϵC+ωλiR)B_{i}=\left(\epsilon C+\omega\lambda_{i}R\right) and i=1,2,,ni=1,2,\dots,n.

According to (Horn & Johnson, 2012), the characteristic polynomial of BiB_{i} is derived as follows

pBi(λ)=\displaystyle p_{B_{i}}(\lambda)= λsωcs1λiλs1ωcs2ϵλiλs2ωcs3ϵ2λiλs3\displaystyle\lambda^{s}-\omega c_{s-1}\lambda_{i}\lambda^{s-1}-\omega c_{s-2}\epsilon\lambda_{i}\lambda^{s-2}-\omega c_{s-3}\epsilon^{2}\lambda_{i}\lambda^{s-3}
ωc1ϵs2λiλωc0ϵs1λi.\displaystyle-\cdots-\omega c_{1}\epsilon^{s-2}\lambda_{i}\lambda-\omega c_{0}\epsilon^{s-1}\lambda_{i}.

Since $\mathcal{L}$ has a simple zero eigenvalue, there exists one and only one $\lambda_{i}=0$. For this index, $p_{B_{i}}(\lambda)=\lambda^{s}$, i.e., $B_{i}$ has $s$ zero eigenvalues, which implies that $B+I_{sn}$ has exactly $s$ eigenvalues at 1. Moreover,

W=(IsP1)1(B+Isn)(IsP1),W=\left(I_{s}\otimes P^{-1}\right)^{-1}\left(B+I_{sn}\right)\left(I_{s}\otimes P^{-1}\right),

which implies that WW has only ss eigenvalues at 1 as well.

Let 𝒬:=[𝒬1T,𝒬2T,𝒬sT]Tsn\mathcal{Q}:=[\mathcal{Q}_{1}^{\mbox{\tiny\sf T}},\mathcal{Q}_{2}^{\mbox{\tiny\sf T}},\dots\mathcal{Q}_{s}^{\mbox{\tiny\sf T}}]^{\mbox{\tiny\sf T}}\in\mathbb{R}^{sn} denotes the right eigenvector of WW associated to the eigenvalue 1, then one has

W𝒬=𝒬,W\mathcal{Q}=\mathcal{Q},

which implies $\mathcal{Q}_{i}=\mathbf{0}_{n}$, $i=2,3,\dots,s$, whereas $\mathcal{L}\mathcal{Q}_{1}=\mathbf{0}_{n}$. Thereby, $\mathcal{Q}_{1}$ is an eigenvector of $\mathcal{L}$ associated with its eigenvalue 0. Since $\mathcal{L}$ has only one linearly independent eigenvector $\mathcal{Q}_{1}$ associated with the eigenvalue 0, $W$ has only one linearly independent eigenvector $\mathcal{Q}=\left[\mathcal{Q}_{1}^{\mbox{\tiny\sf T}},\mathbf{0}_{n}^{\mbox{\tiny\sf T}},\dots,\mathbf{0}_{n}^{\mbox{\tiny\sf T}}\right]^{\mbox{\tiny\sf T}}$ associated with the eigenvalue 1. Therefore, the eigenvalue of $W$ at 1 has algebraic multiplicity $s$ but geometric multiplicity 1. The proof is thus completed.  

Lemma 3.

The minimal polynomial pair of the matrix $W$ can be decomposed as $q_{i}(t)=(t-1)^{\ell(i)}p_{i}(t)$, where $\ell(i)=s-\lfloor\frac{i}{n}\rfloor$, $i=1,2,\cdots,sn$, and $p_{i}(t)$ denotes a polynomial without the factor $(t-1)$.

Proof: Let $\lambda$ be an eigenvalue of $W$ with algebraic multiplicity $s$ and geometric multiplicity 1. Define a Jordan block of order $s$ as follows

𝑱(λ):=(λ1λ11λ)s×s.\boldsymbol{J}(\lambda):=\left(\begin{array}[]{ccccc}\lambda&1&&&\\ &\lambda&1&&\\ &&\ddots&\ddots&\\ &&&\ddots&1\\ &&&&\lambda\end{array}\right)_{s\times s}.

One has that

𝑱k(λ)={(λkCk1λk110λkCk1λk1 01Ck1λk1λk),k<s(λkCk1λk1Cks1λks+1λkCk1λk1Ck1λk1λk),ks.\begin{aligned} \hfil\displaystyle\boldsymbol{J}^{k}(\lambda)=\left\{\begin{split}&\left(\begin{array}[]{crclc}\lambda^{k}&C_{k}^{1}\lambda^{k-1}&\cdots&1&0\\ &\lambda^{k}&C_{k}^{1}\lambda^{k-1}&\text{ }\ddots&0\\ &\ddots&\qquad\ddots&&1\\ &&\ddots&&\\ &&&&C_{k}^{1}\lambda^{k-1}\\ &&&&\lambda^{k}\end{array}\right),k<s\\ &\left(\begin{array}[]{cclc}\lambda^{k}&C_{k}^{1}\lambda^{k-1}&\quad\cdots&C_{k}^{s-1}\lambda^{k-s+1}\\ &\lambda^{k}&C_{k}^{1}\lambda^{k-1}&\vdots\\ &&\ddots&\\ &&&C_{k}^{1}\lambda^{k-1}\\ &&&\lambda^{k}\end{array}\right),k\geq s\end{split}\right.\end{aligned}.

Case 1: if (i)=1\ell(i)=1, one has that

qi,1(λ)=qi(λ)=λDi+1+αi,DiλDi++αi,1λ+αi,0=0.q_{i,1}(\lambda)={q_{i}(\lambda)}=\lambda^{D_{i}+1}+\alpha_{i,D_{i}}\lambda^{D_{i}}+\cdots+\alpha_{i,1}\lambda+\alpha_{i,0}=0. (13)

Case 2: if (i)=m\ell(i)=m with m+,2msm\in\mathbb{Z}^{+},2\leq m\leq s. By mathematical induction, one has that

qi,m(t)=qi(t)(tλ)m1=j=0Dim+2vj,mtjq_{i,m}(t)=\frac{q_{i}(t)}{(t-\lambda)^{m-1}}=\sum_{j=0}^{{D_{i}}-m+2}v_{j,m}t^{j} (14)

with vj,m=h=jDim+2Cm2+hjm2αi,h+m1λhjv_{j,m}=\sum_{h=j}^{{D_{i}}-m+2}C_{m-2+h-j}^{m-2}\alpha_{i,h+m-1}\lambda^{h-j}.

Bearing in mind Pascal’s Triangle j=0xCm1+jm1=Cm+xm,x+\sum_{j=0}^{x}C_{m-1+j}^{m-1}=C_{m+x}^{m},x\in\mathbb{Z}^{+}, substituting t=λt=\lambda into (14) yields

qi,m(λ)=j=m1Di+1αi,jCjm1λjm+1=0,q_{i,m}(\lambda)=\sum_{j=m-1}^{D_{i}+1}\alpha_{i,j}C_{j}^{m-1}\lambda^{j-m+1}=0, (15)

and qi,x(λ)=0,x+,x<mq_{i,x}(\lambda)=0,x\in\mathbb{Z}^{+},x<m.

Combining Cases 1 and 2, one has that qi,x(λ)=0,x+,x(i)q_{i,x}(\lambda)=0,x\in\mathbb{Z}^{+},x\leq\ell(i). According to Lemma 2, let the Jordan decomposition of WW be indicated by

W=𝒯[𝑱(1)00Z]𝒯1,W=\mathcal{T}\left[\begin{array}[]{cc}\boldsymbol{J}(1)&\textbf{0}\\ \textbf{0}&Z\end{array}\right]\mathcal{T}^{-1},

where $Z$ denotes the block-diagonal matrix collecting the Jordan blocks whose eigenvalues are not equal to 1, and $\mathcal{T}$ is a nonsingular matrix whose columns are the (generalized) eigenvectors of $W$.

From Definition 6, one has

(esni)T𝒯[qi(𝑱(1))00qi(Z)]𝒯1=0.(e_{sn}^{i})^{\mbox{\tiny\sf T}}\mathcal{T}\left[\begin{array}[]{cc}q_{i}(\boldsymbol{J}(1))&\textbf{0}\\ \textbf{0}&q_{i}(Z)\end{array}\right]\mathcal{T}^{-1}=0. (16)

Let $\mathbf{d}\in\mathbb{R}^{sn\times s}$ denote the 1st to $s$-th columns of $\mathcal{T}$. Direct calculation gives

[1sdis+1sn𝒯i][qi(𝑱(1))00qi(Z)]=0,[_{1}^{s}d_{i}\quad{}_{s+1}^{sn}\mathcal{T}_{i}]\left[\begin{array}[]{cc}q_{i}(\boldsymbol{J}(1))&\textbf{0}\\ \textbf{0}&q_{i}(Z)\end{array}\right]=0, (17)

where ${}_{1}^{s}d_{i}$ is the $i$-th row of $\mathbf{d}$ and $[{}_{1}^{s}d_{i}\quad{}_{s+1}^{sn}\mathcal{T}_{i}]$ is the $i$-th row of $\mathcal{T}$. According to Lemma 2, $W$ has only one linearly independent eigenvector $\mathcal{Q}=\left[\mathcal{Q}_{1}^{\mbox{\tiny\sf T}},\mathbf{0}_{n}^{\mbox{\tiny\sf T}},\dots,\mathbf{0}_{n}^{\mbox{\tiny\sf T}}\right]^{\mbox{\tiny\sf T}}$ associated with the eigenvalue 1. Without loss of generality, let $\mathcal{Q}=\left[\mathbf{1}_{n}^{\mbox{\tiny\sf T}},\mathbf{0}_{n}^{\mbox{\tiny\sf T}},\dots,\mathbf{0}_{n}^{\mbox{\tiny\sf T}}\right]^{\mbox{\tiny\sf T}}$ and $\mathcal{Q}_{1,r}=\left[\mathbf{0}_{n}^{\mbox{\tiny\sf T}},\dots,\frac{1}{\epsilon^{r-1}}\mathbf{1}_{n}^{\mbox{\tiny\sf T}}\;(r\text{-th block}),\dots,\mathbf{0}_{n}^{\mbox{\tiny\sf T}}\right]^{\mbox{\tiny\sf T}}$, $r=2,3,\dots,s$, be the right eigenvector and the generalized right eigenvectors of $W$ associated with the eigenvalue 1, respectively. Then,

𝐝=[𝟏n0n0n0n0n1ϵ𝟏n0n0n0n0n1ϵ2𝟏n0n0n0n1ϵs1𝟏n].\mathbf{d}=\left[\begin{array}[]{ccccc}\mathbf{1}_{n}&\textbf{0}_{n}&\textbf{0}_{n}&\cdots&\textbf{0}_{n}\\ \textbf{0}_{n}&\frac{1}{\epsilon}\mathbf{1}_{n}&\textbf{0}_{n}&\cdots&\textbf{0}_{n}\\ \textbf{0}_{n}&\textbf{0}_{n}&\frac{1}{\epsilon^{2}}\mathbf{1}_{n}&\cdots&\vdots\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ \textbf{0}_{n}&\textbf{0}_{n}&\textbf{0}_{n}&\cdots&\frac{1}{\epsilon^{s-1}}\mathbf{1}_{n}\end{array}\right].

Bearing 𝑱k(λ)\boldsymbol{J}^{k}(\lambda) and (13), (15) in mind, one has

qi(𝑱(1))=[qi,1(1)qi,2(1)qi,s(1)0qi,1(1)qi,s1(1)00qi,1(1)].q_{i}(\boldsymbol{J}(1))=\left[\begin{array}[]{cccc}q_{i,1}(1)&q_{i,2}(1)&\ldots&q_{i,s}(1)\\ 0&q_{i,1}(1)&\cdots&q_{i,s-1}(1)\\ \vdots&\vdots&\cdots&\vdots\\ 0&0&\cdots&q_{i,1}(1)\end{array}\right].

From (17), one has ${}_{1}^{s}d_{i}\,q_{i}(\boldsymbol{J}(1))=0$, so all elements of the $\left(\lfloor\frac{i}{n}\rfloor+1\right)$-th row of $q_{i}(\boldsymbol{J}(1))$ are zero for $i=1,2,\dots,sn$, i.e., $q_{i,m}(1)=0$, $m=1,2,\cdots,\ell(i)$, and hence the minimal polynomial pair $q_{i}(t)$ of $W$ has at least $\ell(i)$ roots at 1. Since the degree of the polynomial pair is minimal, $q_{i}(t)$ has exactly $\ell(i)$ roots at 1. The proof is thus completed.  

Remark 2.

The multiplicity of the eigenvalue 1 in the minimal polynomial pair of the matrix $W$ can be determined by a Hankel matrix construction, which will be designed later.

Remark 3.

Note that $\mathcal{Z}(C_{k}^{r})=\frac{z}{(z-1)^{r+1}}$, $r\in\mathbb{N}$, where $k$ denotes the $k$-th time instant. Indeed, $\mathcal{Z}(C_{k}^{0})=\frac{z}{z-1}$ for $r=0$. Supposing $\mathcal{Z}(C_{k}^{m})=\frac{z}{(z-1)^{m+1}}$ holds for $r=m$, one has $\mathcal{Z}(C_{k}^{m+1})=\mathcal{Z}\left(\frac{k-m}{m+1}C_{k}^{m}\right)=\frac{z}{(z-1)^{m+2}}$ for $r=m+1$.

Theorem 1.

Consider a closed-loop high-order MAS governed by (1), (2). Under Assumptions 1 and 2, Problem 1 is solvable.

Proof: The proof can be divided into three steps.

Step 1: Derive the parameters of the minimal polynomial pair.

Considering the memory vector (9) and Lemma 3, one has

qi(t)=(t1)spi(t)=j=0Di+1αi,jtjq_{i}(t)=(t-1)^{s}p_{i}(t)=\sum_{j=0}^{D_{i}+1}\alpha_{i,j}t^{j} (18)

with αi=[αi,0,αi,1,,αi,Di+1]Di+2\alpha_{i}=\left[\alpha_{i,0},\alpha_{i,1},\dots,\alpha_{i,D_{i}+1}\right]\in\mathbb{R}^{D_{i}+2} being the coefficients of the minimal polynomial pairs of WW to be calculated. Note that

pi(t)=qi(t)(t1)s=j=0Di+1sζi,jtj,p_{i}(t)=\frac{q_{i}(t)}{(t-1)^{s}}=\sum_{j=0}^{D_{i}+1-s}\zeta_{i,j}t^{j}, (19)

where $\zeta_{i,j}\in\mathbb{R}$ are the null-space coefficients to be determined. From Leibniz's derivation rule (Olver, 1993) and Definition 6, one has

ζiΘi(k,D¯i)=0\zeta_{i}\Theta_{i}(k,\overline{D}_{i})=0 (20)

with ζi=[ζi,0,ζi,1,,ζi,D¯i],D¯i=Di+1s\zeta_{i}=\left[\zeta_{i,0},\zeta_{i,1},\dots,\zeta_{i,\overline{D}_{i}}\right],\overline{D}_{i}=D_{i}+1-s and

Θi(k,D¯i)=[h=0s(1)shCshxi(1)(k+h)h=0s(1)shCshxi(1)(k+h+1)h=0s(1)shCshxi(1)(k+h+D¯i)].\Theta_{i}(k,\overline{D}_{i})=\left[\begin{split}&\sum_{h=0}^{s}(-1)^{s-h}C_{s}^{h}x_{i}^{(1)}(k+h)&\\ &\sum_{h=0}^{s}(-1)^{s-h}C_{s}^{h}x_{i}^{(1)}(k+h+1)&\\ &\qquad\qquad\qquad\vdots&\\ &\sum_{h=0}^{s}(-1)^{s-h}C_{s}^{h}x_{i}^{(1)}(k+h+\overline{D}_{i})&\end{split}\right]. (21)

According to (20), build up a Hankel matrix (Partington, 1988) as the following form

Γ=[Θi(k,D¯^i)Θi(k+1,D¯^i)Θi(k+D¯^i,D¯^i)].\begin{split}&\Gamma=\left[\begin{array}[]{cccc}\Theta_{i}(k,\widehat{\overline{D}}_{i})&\Theta_{i}(k+1,\widehat{\overline{D}}_{i})&\cdots&\Theta_{i}(k+\widehat{\overline{D}}_{i},\widehat{\overline{D}}_{i})\\ \end{array}\right].\end{split} (22)

Gradually increase $\widehat{\overline{D}}_{i}$ until the minimal $\widehat{\overline{D}}_{i}$ is found such that the Hankel matrix $\Gamma$ becomes rank deficient. Then, $\widehat{\overline{D}}_{i}$ and $\Gamma^{\bot}$ become estimates of $\overline{D}_{i}$ and $\zeta_{i}$, respectively. Together with (18) and (19), $\alpha_{i,j}$ can be obtained.
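A minimal numerical sketch of Step 1 is given below (the helper names theta and minimal_hankel are ours): it builds $\Theta_{i}$ of (21) from the scalar output sequence, stacks the Hankel matrix $\Gamma$ of (22), enlarges $\widehat{\overline{D}}_{i}$ until $\Gamma$ drops rank, and returns a null-space vector (normalized to be monic) as the estimate of $\zeta_{i}$.

import numpy as np
from math import comb
from scipy.linalg import null_space

def theta(x, k, Dbar, s):
    # Theta_i(k, Dbar) of (21): s-th order forward differences of the output sequence x
    return np.array([sum((-1) ** (s - h) * comb(s, h) * x[k + h + r] for h in range(s + 1))
                     for r in range(Dbar + 1)])

def minimal_hankel(x, s, tol=1e-8):
    # Step 1 of Theorem 1: grow the window until the Hankel matrix (22) loses rank.
    # x is a sufficiently long sequence x_i^{(1)}(0), x_i^{(1)}(1), ...
    Dhat = 0
    while True:
        Gamma = np.column_stack([theta(x, j, Dhat, s) for j in range(Dhat + 1)])
        if np.linalg.matrix_rank(Gamma, tol) < Gamma.shape[0]:
            zeta = null_space(Gamma)[:, 0]      # Gamma is symmetric Hankel, so this estimates zeta_i
            return Dhat, zeta / zeta[-1]        # make the estimate monic, as in Section 4
        Dhat += 1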

Step 2: Calculate and decompose-fit the 𝒵\mathcal{Z}-transform of xi(1)(t)x_{i}^{(1)}(t).

According to (18) and Definition 6, one has

(esni)Tj=0Di+1αi,jWjX(k)=j=0Di+1αi,jxi(1)(k+j)=0.\begin{split}\left(e_{sn}^{i}\right)^{\mbox{\tiny\sf T}}\sum_{j=0}^{D_{i}+1}\alpha_{i,j}W^{j}X(k)&=\sum_{j=0}^{D_{i}+1}\alpha_{i,j}x_{i}^{(1)}(k+j)=0.\end{split} (23)

Denote the $\mathcal{Z}$-transform $x_{i}^{(1)}(z)=\mathcal{Z}\left(x_{i}^{(1)}(t)\right)$. From the time-shift property of the $\mathcal{Z}$-transform and (23), one has

xi(1)(z)=j=1Di+1αi,jzj(k=0j1xi(1)(k)zk)(z1)spi(z)=j=1Di+1zjh=0Di+1jαi,h+jxi(1)(h)(z1)spi(z).\begin{split}x_{i}^{(1)}(z)&=\frac{\sum_{j=1}^{D_{i}+1}\alpha_{i,j}z^{j}\left(\sum_{k=0}^{j-1}x_{i}^{(1)}(k)z^{-k}\right)}{(z-1)^{s}p_{i}(z)}\\ &=\frac{\sum_{j=1}^{D_{i}+1}z^{j}\sum_{h=0}^{D_{i}+1-j}\alpha_{i,h+j}x_{i}^{(1)}(h)}{(z-1)^{s}p_{i}(z)}.\\ \end{split} (24)

Let

\phi_{i}(z)=\sum_{j=0}^{D_{i}}z^{j}\sum_{h=0}^{D_{i}-j}\alpha_{i,h+j+1}x_{i}^{(1)}(h), (25)

which could be divided into two parts as

ϕi(z)=pi(z)j=0s1βj(z1)j+j=sDiβj(z1)j.\phi_{i}(z)=p_{i}(z)\sum_{j=0}^{s-1}\beta_{j}(z-1)^{j}+\sum_{j=s}^{D_{i}}\beta_{j}(z-1)^{j}. (26)

Here, $\beta_{j}\in\mathbb{R}$, $j=0,1,\cdots,D_{i}$, are parameters to be determined.

From Leibniz’s Derivation Rule (Olver, 1993), the nn-th derivative of (26) is given as follows

ϕin(1)=k=0nAnkpink(1)βk,\phi_{i}^{n}(1)=\sum_{k=0}^{n}A_{n}^{k}p_{i}^{n-k}(1)\beta_{k}, (27)

for n=0,1,,s1n=0,1,\cdots,s-1. Direct calculation gives

[β0β1βs1]=1[ϕi(0)(1)ϕi(1)(1)ϕi(s1)(1)],\left[\begin{array}[]{c}\beta_{0}\\ \beta_{1}\\ \vdots\\ \beta_{s-1}\end{array}\right]=\mathcal{M}^{-1}\left[\begin{array}[]{c}\phi_{i}^{(0)}(1)\\ \phi_{i}^{(1)}(1)\\ \vdots\\ \phi_{i}^{(s-1)}(1)\end{array}\right], (28)

with

=[pi0(1)000pi1(1)pi0(1)00pi2(1)A21pi1(1)A22pi0(1)0pis1(1)As11pis2(1)As12pis3(1)As1s1pi0(1)].\displaystyle\mathcal{M}=\left[\begin{array}[]{ccccc}p_{i}^{0}(1)&0&0&\cdots&0\\ p_{i}^{1}(1)&p_{i}^{0}(1)&0&\cdots&0\\ p_{i}^{2}(1)&A_{2}^{1}p_{i}^{1}(1)&A_{2}^{2}p_{i}^{0}(1)&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ p_{i}^{s-1}(1)&A_{s-1}^{1}p_{i}^{s-2}(1)&A_{s-1}^{2}p_{i}^{s-3}(1)&\cdots&A_{s-1}^{s-1}p_{i}^{0}(1)\end{array}\right].

Since $p_{i}^{0}(1)\neq 0$, $\mathcal{M}$ has full rank, and hence $\beta_{j}$, $j=0,1,\cdots,s-1$, always exist such that (26) holds.
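Step 2 can be reproduced symbolically. The sketch below (the function name step2_beta is ours) takes the null-space coefficients $\zeta_{i}$ from Step 1 together with the output samples, reconstructs $p_{i}(z)$, $q_{i}(z)$ and $\phi_{i}(z)$, and solves the lower-triangular system (28) for $\beta_{0},\dots,\beta_{s-1}$; the remaining $\beta_{s},\dots,\beta_{D_{i}}$ then follow from (35) in the proof of Theorem 2.

import sympy as sp

def step2_beta(zeta, x, s):
    # zeta: coefficients of p_i(t) from Step 1, eq. (19); x: samples x_i^{(1)}(0), ..., x_i^{(1)}(D_i)
    z = sp.symbols('z')
    p = sum(c * z**j for j, c in enumerate(zeta))                # p_i(z)
    q = sp.expand((z - 1)**s * p)                                # q_i(z), eq. (18)
    alpha = [q.coeff(z, j) for j in range(int(sp.degree(q, z)) + 1)]
    Di = len(alpha) - 2                                          # deg q_i = D_i + 1
    phi = sum(z**j * sum(alpha[h + j + 1] * x[h] for h in range(Di - j + 1))
              for j in range(Di + 1))                            # eq. (25)
    M = sp.zeros(s, s); rhs = sp.zeros(s, 1)
    for nrow in range(s):
        rhs[nrow] = sp.diff(phi, z, nrow).subs(z, 1)             # phi_i^{(n)}(1)
        for kk in range(nrow + 1):
            Ank = sp.factorial(nrow) / sp.factorial(nrow - kk)   # A_n^k = n!/(n-k)!
            M[nrow, kk] = Ank * sp.diff(p, z, nrow - kk).subs(z, 1)
    return list(M.LUsolve(rhs)), p, phi                          # beta_0..beta_{s-1}, eq. (28)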

Step 3: Predict the eventual consensus value of the MAS.

Under Assumptions 1 and 2, xi(s)x_{i}^{(s)} converges to a constant value. Denote the consensus item as the following form

~c(1)(k)=κs1ks1+κs2ks2++κ0k0,\widetilde{\wp}_{c}^{(1)}(k)=\kappa_{s-1}k^{s-1}+\kappa_{s-2}k^{s-2}+\dots+\kappa_{0}k^{0}, (29)

where $\kappa_{i}$, $i=0,1,\dots,s-1$, are the coefficients to be determined later. According to the property $\mathcal{Z}(C_{k}^{r})=\frac{z}{(z-1)^{r+1}}$, $r\in\mathbb{N}$, given in Remark 3, (29) can be rewritten in the compact form

Ψ(k)=r=0s1brCkr,\Psi(k)=\sum_{r=0}^{s-1}b_{r}C_{k}^{r}, (30)

where brb_{r} is the coefficient to be determined.

Substituting (29) into (30) yields

[κs1κs2κ0]=𝒥1[Ψ(s1)(1)Ψ(s2)(1)Ψ(0)(1)]\left[\begin{array}[]{c}\kappa_{s-1}\\ \kappa_{s-2}\\ \vdots\\ \kappa_{0}\end{array}\right]=\mathcal{J}^{-1}\left[\begin{array}[]{c}\Psi^{(s-1)}(1)\\ \Psi^{(s-2)}(1)\\ \vdots\\ \Psi^{(0)}(1)\end{array}\right] (31)

with

𝒥=[As1s1000As1s2As2s200As1s3As2s3As3s30As10As20As30A00].\displaystyle\mathcal{J}=\left[\begin{array}[]{ccccc}A_{s-1}^{s-1}&0&0&\cdots&0\\ A_{s-1}^{s-2}&A_{s-2}^{s-2}&0&\cdots&0\\ A_{s-1}^{s-3}&A_{s-2}^{s-3}&A_{s-3}^{s-3}&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ A_{s-1}^{0}&A_{s-2}^{0}&A_{s-3}^{0}&\cdots&A_{0}^{0}\end{array}\right].

Let $\widetilde{\wp}_{c}^{(1)}(z)=\mathcal{Z}\left(\widetilde{\wp}_{c}^{(1)}(k)\right)$; one has

~c(1)(z)=r=0s1brz(z1)r+1.\widetilde{\wp}_{c}^{(1)}(z)=\sum_{r=0}^{s-1}b_{r}\frac{z}{(z-1)^{r+1}}. (32)

Let $\widetilde{\wp}_{i}^{(1)}(z)$ denote the disagreement item of $x_{i}^{(1)}(z)$. From (24) and (32), one has

H(z)=xi(1)(z)~c(1)(z)~i(1)(z)=zϕi(z)(z1)spi(z)j=0s1bjz(z1)j+1~i(1)(z)=zj=0s1(βjbs1j)(z1)j(z1)s+zj=sDiβj(z1)j(z1)spi(z)~i(1)(z).\begin{split}H(z)=&x_{i}^{(1)}(z)-\widetilde{\wp}_{c}^{(1)}(z)-\widetilde{\wp}_{i}^{(1)}(z)\\ =&\frac{z\phi_{i}(z)}{(z-1)^{s}p_{i}(z)}-\sum_{j=0}^{s-1}b_{j}\frac{z}{(z-1)^{j+1}}-\widetilde{\wp}_{i}^{(1)}(z)\\ =&z\frac{\sum_{j=0}^{s-1}\left(\beta_{j}-b_{s-1-j}\right)(z-1)^{j}}{(z-1)^{s}}\\ &+z\frac{\sum_{j=s}^{D_{i}}\beta_{j}(z-1)^{j}}{(z-1)^{s}p_{i}(z)}-\widetilde{\wp}_{i}^{(1)}(z).\end{split} (33)

Moreover, it follows from $H(z)=0$ that $b_{s-1-j}=\beta_{j}$ for $j=0,1,\cdots,s-1$, where $\beta_{j}$, $j=0,1,\cdots,s-1$, have been obtained in (28). Then, $\kappa_{j}$ can be calculated by (31), and hence the consensus item (29) is obtained.

Direct calculation gives

~c(j)(k)=~c(j1)(k+1)~c(j1)(k)ϵ\widetilde{\wp}_{c}^{(j)}(k)=\frac{\widetilde{\wp}_{c}^{(j-1)}(k+1)-\widetilde{\wp}_{c}^{(j-1)}(k)}{\epsilon} (34)

with j=2,3,,sj=2,3,\cdots,s. Thereby, all orders of the consensus item for MAS (4) could be obtained as well. The proof is thus completed.  
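The passage from (30) back to (29) is merely a change of basis from the binomial coefficients $C_{k}^{r}$ to the monomials $k^{j}$, which is what (31) encodes. A short sympy check (the function name binomial_to_monomial is ours) makes this explicit.

import sympy as sp

def binomial_to_monomial(b):
    # Expand Psi(k) = sum_r b_r * C_k^r of (30) into the monomial coefficients kappa_j of (29).
    k = sp.symbols('k')
    Psi = sp.expand(sum(b_r * sp.Mul(*[k - i for i in range(r)]) / sp.factorial(r)
                        for r, b_r in enumerate(b)))
    return [Psi.coeff(k, j) for j in range(len(b))]   # [kappa_0, ..., kappa_{s-1}]

For instance, feeding in $b=[b_{0},b_{1},b_{2},b_{3}]=[3.7570,\,0.6762,\,0.0538,\,0.0048]$ from the example of Section 4 returns approximately $[3.757,\,0.6509,\,0.0245,\,0.0008]$, i.e., the coefficients of the cubic $\widetilde{\wp}_{c}^{(1)}(k)$ reported there.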

Theorem 2.

Consider a closed-loop high-order MAS governed by (1), (2). Under Assumptions 1 and 2, Problem 2 is solvable.

Proof: Let Ωi,1(z)=pi(z)j=0s1βj(z1)j\Omega_{i,1}(z)=p_{i}(z)\sum_{j=0}^{s-1}\beta_{j}(z-1)^{j}, Ωi,2(z)=j=sDiβj(z1)j\Omega_{i,2}(z)=\sum_{j=s}^{D_{i}}\beta_{j}(z-1)^{j}. From (26), Ωi,2(z)=ϕi(z)Ωi,1(z)\Omega_{i,2}(z)=\phi_{i}(z)-\Omega_{i,1}(z), then

βj=Ωi,2(j)(1)j!,\beta_{j}=\frac{\Omega_{i,2}^{(j)}(1)}{j!}, (35)

for j=s,s+1,,Dij=s,s+1,\cdots,D_{i}.

According to (33), the instant individual disagreement value of ii, i.e., ~i(1)(z)\widetilde{\wp}_{i}^{(1)}(z) is determined as follows

~i(1)(z)=zj=sDiβj(z1)jspi(z).\begin{split}\widetilde{\wp}_{i}^{(1)}(z)&=z\frac{\sum_{j=s}^{D_{i}}\beta_{j}(z-1)^{j-s}}{p_{i}(z)}.\end{split} (36)

According to Lemma 1 and (19), $p_{i}(z)$ can be written as

pi(z)=(zλ1)(zλ2)(zλD¯i),p_{i}(z)=(z-\lambda_{1})(z-\lambda_{2})\dots(z-\lambda_{\overline{D}_{i}}), (37)

where $\lambda_{i}$, $i=1,2,\cdots,\overline{D}_{i}$, are eigenvalues of the matrix $W$. Therefore, consider the partial fraction expansion of $\widetilde{\wp}_{i}^{(1)}(z)/z$ as below

~i(1)(z)z=K0z+h12,λh0Khzλh+h3hi=1n¯hKh,hi(zλh)hi,\displaystyle\frac{\widetilde{\wp}_{i}^{(1)}(z)}{z}=\frac{K_{0}}{z}+\sum_{h\in\mathcal{F}_{1}\cup\mathcal{F}_{2},\lambda_{h}\neq 0}\frac{K_{h}}{z-\lambda_{h}}+\sum_{h\in\mathcal{F}_{3}}\sum_{h_{i}=1}^{\overline{n}_{h}}\frac{K_{h,h_{i}}}{\left(z-\lambda_{h}\right)^{h_{i}}},

where $\mathcal{F}_{1}$, $\mathcal{F}_{2}$ and $\mathcal{F}_{3}$ indicate the sets of simple real poles, complex-conjugate simple poles and repeated poles, respectively, and $\overline{n}_{h}$ is the multiplicity of $\lambda_{h}$ for $h\in\mathcal{F}_{3}$.

The partial fraction coefficient for h12{h}\in\mathcal{F}_{1}\cup\mathcal{F}_{2} is calculated by

Kh=(zλh)~i(1)(z)z|z=λh,K_{h}=\left.\left(z-\lambda_{h}\right)\frac{\widetilde{\wp}_{i}^{(1)}(z)}{z}\right|_{z=\lambda_{h}}, (38)

and for $h\in\mathcal{F}_{3}$ by

Kh,hi=1(n¯hhi)!dn¯hhidzn¯hhi[(zλh)n¯h~i(1)(z)z]|z=λh.K_{h,h_{i}}=\left.\frac{1}{(\overline{n}_{h}-h_{i})!}\frac{\mathrm{d}^{\overline{n}_{h}-h_{i}}}{\mathrm{~{}d}z^{\overline{n}_{h}-h_{i}}}\left[\left(z-\lambda_{h}\right)^{\overline{n}_{h}}\frac{\widetilde{\wp}_{i}^{(1)}(z)}{z}\right]\right|_{z=\lambda_{h}}.

Let $\mathcal{F}_{2}^{+}$ be the subset of $\mathcal{F}_{2}$ whose poles have positive imaginary parts. For $h\in\mathcal{F}_{2}^{+}$, write $\lambda_{h}=\mathrm{m}e^{j\mathfrak{n}}$ and $K_{h}=\left|K_{h}\right|e^{j\theta}$, where $j$ denotes the imaginary unit.

Taking the inverse 𝒵\mathcal{Z}-transform as

~i(1)(k)=h1,λh0Kh(λh)kν(k)+K0ε(k)+h22|Kh|mkcos(𝔫k+θ)ν(k)+h3hi=1n¯hKh,hi(hi1)!k!(khi+1)!λhkhi+1ν(k),\begin{split}\widetilde{\wp}_{i}^{(1)}(k)=&\sum_{{h}\in\mathcal{F}_{1},\lambda_{h}\neq 0}K_{h}\left(\lambda_{h}\right)^{k}\nu(k)+K_{0}\varepsilon(k)\\ &+\sum_{{h}\in\mathcal{F}_{2}}2\left|K_{h}\right|\mathrm{m}^{k}\cos(\mathfrak{n}k+\theta)\nu(k)\\ &+\sum_{{h}\in\mathcal{F}_{3}}\sum_{h_{i}=1}^{\overline{n}_{h}}\frac{K_{h,h_{i}}}{(h_{i}-1)!}\frac{k!}{(k-h_{i}+1)!}\lambda_{h}^{k-h_{i}+1}\nu(k),\\ \end{split} (39)

where ν(k),ε(k)\nu(k),\varepsilon(k) are step and impulse functions, respectively.

Analogous to (34), all orders of instant individual disagreement ~i\widetilde{\wp}_{i}, i.e., ~i(1),,~i(s),\widetilde{\wp}_{i}^{(1)},\dots,\widetilde{\wp}_{i}^{(s)}, for MAS (4) can be obtained. The proof is thus completed.  
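Instead of carrying out the partial-fraction expansion (38)–(39) by hand, the samples of $\widetilde{\wp}_{i}^{(1)}(k)$ can also be read off numerically as the impulse response of the rational function (36). The sketch below (the helper name disagreement_samples is ours) assumes that $p_{i}(z)$ and $\beta_{s},\dots,\beta_{D_{i}}$ from the preceding steps are available.

import numpy as np
from numpy.polynomial import polynomial as P
from scipy.signal import lfilter

def disagreement_samples(beta_tail, p_coeffs, n_samples):
    # Invert (36): wp_i^{(1)}(z) = z * sum_{j>=s} beta_j (z-1)^{j-s} / p_i(z).
    # beta_tail = [beta_s, ..., beta_{D_i}]; p_coeffs = coefficients of p_i(z) in ascending powers.
    if len(beta_tail) == 0:
        return np.zeros(n_samples)
    num, zm1_pow = np.zeros(1), np.array([1.0])
    for b in beta_tail:                              # numerator sum_j beta_{s+j} (z-1)^j
        num = P.polyadd(num, b * zm1_pow)
        zm1_pow = P.polymul(zm1_pow, [-1.0, 1.0])
    num = P.polymul(num, [0.0, 1.0])                 # multiply by z
    den = np.asarray(p_coeffs, dtype=float)
    b_z = np.r_[np.zeros(len(den) - len(num)), num[::-1]]   # transfer function in powers of z^{-1}
    a_z = den[::-1]                                  # p_i is monic, so a_z[0] = 1
    impulse = np.r_[1.0, np.zeros(n_samples - 1)]
    return lfilter(b_z, a_z, impulse)                # wp_i^{(1)}(0), ..., wp_i^{(1)}(n_samples-1)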

Algorithm 1 is presented below to calculate the eventual consensus value of the MASs, and afterwards the instant individual disagreement value.

Algorithm 1 MDCP-based instant individual disagreement degree prediction algorithm for high-order linear MASs
1:Memory vector 02D¯ixi(1),i=1,2,,n{}_{0}^{2\overline{D}_{i}}x_{i}^{(1)},i=1,2,\cdots,n, sampling time ϵ\epsilon.
2:The consensus item ~c\widetilde{\wp}_{c}, the instant individual disagreement item ~i\widetilde{\wp}_{i}.
3:Set the order of MAS ss, the maximal number of iterations ImaxI_{\max}. Initialize D¯^i=0\widehat{\overline{D}}_{i}=0.
4:for j=1:Imaxj=1:I_{\max} do
5:     Compute the Hankel matrix Γ\Gamma in (22).
6:     if rank($\Gamma$) < size($\Gamma$) then
7:         Break.
8:     end if
9:     D¯^i=D¯^i+1\widehat{\overline{D}}_{i}=\widehat{\overline{D}}_{i}+1
10:end for
11:Calculate the coefficient ζi=Γ\zeta_{i}=\Gamma^{\bot} in (19), D¯i=D¯^i\overline{D}_{i}=\widehat{\overline{D}}_{i}. Calculate the coefficients αi,j\alpha_{i,j} according to (18) and (19).
12:Calculate xi(1)(z)x_{i}^{(1)}(z) and ϕi(z)\phi_{i}(z) according to (24), (26).
13:Calculate βj\beta_{j} in (26) using (28), (35).
14:bs1j=βj,j=0,1,,s1b_{s-1-j}=\beta_{j},j=0,1,\cdots,s-1.
15:Calculate κi\kappa_{i} using (31).
16:Calculate the consensus item ~c(k)\widetilde{\wp}_{c}(k) using (29), (34).
17:Obtain the instant individual disagreement value $\widetilde{\wp}_{i}$ using (34) and (39).
Corollary 3.1.

Denote the state vector Xi(k)X_{i}(k) in (3) by Xi(k)=~c(k)+~i(k)X_{i}(k)=\widetilde{\wp}_{c}(k)+\widetilde{\wp}_{i}(k). Let

W=SJS1,W=SJS^{-1},

where $J\in\mathbb{R}^{sn\times sn}$ is the Jordan canonical form of $W$, and $S\in\mathbb{R}^{sn\times sn}$ is a nonsingular matrix whose columns are (generalized) eigenvectors of $W$. Let $\mathcal{K}:=\left[\mathcal{K}_{1},\mathcal{K}_{2},\ldots,\mathcal{K}_{s}\right]^{\mbox{\tiny\sf T}}$ with $\mathcal{K}_{r}=S^{-1}_{r}X(0)$, $r=1,2,\cdots,s$, where $S^{-1}_{r}$ denotes the $r$-th row of $S^{-1}$. Then, one has

limkXi(k)r=1sesrh=0srCkh𝒦r+h=0.\lim_{k\rightarrow\infty}X_{i}(k)-\sum_{r=1}^{s}e_{s}^{r}\sum_{h=0}^{s-r}C_{k}^{h}\mathcal{K}_{r+h}=0. (40)

Proof: Since $\lim_{k\rightarrow\infty}\widetilde{\wp}_{i}(k)=\textbf{0}_{s}$, one has $\lim_{k\rightarrow\infty}\left(X_{i}(k)-\widetilde{\wp}_{c}(k)\right)=\textbf{0}_{s}$. According to (30) and (Hu et al., 2019), it can be derived that $\widetilde{\wp}_{c}(k)=\sum_{r=1}^{s}e_{s}^{r}\sum_{h=0}^{s-r}C_{k}^{h}\mathcal{K}_{r+h}$.  

Note that Corollary 3.1 is a special case of Theorem 2 corresponding to $k\rightarrow+\infty$.

Remark 3.2.

The purpose of most consensus prediction methods for first-order MASs is to recover the consensus value (e.g., the average consensus value $\frac{1}{n}\sum_{i}^{n}x_{i}^{(1)}(0)\in\mathbb{R}$). By contrast, in the present study, we need to recover the eventual consensus vector (i.e., recover $\Upsilon(k,\mathcal{L})X(0)$ given in (6)), which is more general yet challenging due to the lack of suitable analytical tools. More precisely, the challenge of the proposed MDCP lies in establishing the quantitative relationship between the eigenspectrum of the Perron matrix and the minimal polynomial pair, and in the decomposition-fit of the $\mathcal{Z}$-transform of the state. The conventional first-order discrete-time consensus prediction algorithms are special cases of the present MDCP with $s=1$.

Remark 3.3.

In this study, the conventional control protocol for high-order linear MASs (Ren et al., 2007) is employed to reach consensus in a relative damping manner. Since the Perron matrix has a simple eigenvalue at 1 in the absolute damping manner, the present method can be extended to such a damping scenario with the assistance of Lemma 1 in (Sundaram & Hadjicostis, 2007).

4 Numerical simulation

Figure 1: Communication topology graph $\mathcal{G}$.
(a) Temporal evolution of $x_{i}^{(4)}$ for the routine protocol (a1) and MDCP (a2).
(b) Temporal evolution of $x_{i}^{(3)}$ for the routine protocol (b1) and MDCP (b2).
(c) Temporal evolution of $x_{i}^{(2)}$ for the routine protocol (c1) and MDCP (c2).
(d) Temporal evolution of $x_{i}^{(1)}$ for the routine protocol (d1) and MDCP (d2).
Figure 2: Temporal evolution of the states of the fourth-order MAS with the topology shown in Fig. 1; the method in (Ren et al., 2007) is adopted as the routine protocol.

Consider a discrete-time fourth-order linear MAS composed of five agents governed by (1) and (2), whose communication topology is given in Fig. 1. Pick the external coupling weight $\omega=-0.2$, the internal coupling weights $c_{0}=6$, $c_{1}=6$, $c_{2}=17$, $c_{3}=2$, and the sampling time $\epsilon=0.1$. The Laplacian matrix is given as follows

=[2011011000102011002101113].\mathcal{L}=\left[\begin{array}[]{ccccc}2&0&-1&-1&0\\ -1&1&0&0&0\\ -1&0&2&0&-1\\ -1&0&0&2&-1\\ 0&-1&-1&-1&3\end{array}\right].

The initial values

X(0)=[\displaystyle X(0)=[ 1.96602.51086.16044.73293.5166\displaystyle 1.9660\quad 2.5108\quad 6.1604\quad 4.7329\quad 3.5166
8.30835.85265.49729.17192.8584\displaystyle 8.3083\quad 5.8526\quad 5.4972\quad 9.1719\quad 2.8584
7.57207.53733.80455.67820.7585\displaystyle 7.5720\quad 7.5373\quad 3.8045\quad 5.6782\quad 0.7585
0.53955.30807.79179.34011.2991]T\displaystyle 0.5395\quad 5.3080\quad 7.7917\quad 9.3401\quad 1.2991]^{\mbox{\tiny\sf T}}

are generated randomly.

According to (4), the corresponding Perron matrix is set as follows

W=\left[\begin{array}[]{cccc}I_{5}&0.1I_{5}&\mathbf{0}_{5}&\mathbf{0}_{5}\\ \mathbf{0}_{5}&I_{5}&0.1I_{5}&\mathbf{0}_{5}\\ \mathbf{0}_{5}&\mathbf{0}_{5}&I_{5}&0.1I_{5}\\ -1.2\mathcal{L}&-1.2\mathcal{L}&-3.4\mathcal{L}&I_{5}-0.4\mathcal{L}\end{array}\right].
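The closed-loop trajectory that feeds the memory vector (9) can be reproduced numerically. The sketch below (variable names are ours) reuses the perron_matrix helper sketched in Section 2 with the parameters of this example and a stand-in random initial state.

import numpy as np

eps, omega, c = 0.1, -0.2, [6.0, 6.0, 17.0, 2.0]    # parameters of this example
s, n = 4, 5
L5 = np.array([[ 2,  0, -1, -1,  0],
               [-1,  1,  0,  0,  0],
               [-1,  0,  2,  0, -1],
               [-1,  0,  0,  2, -1],
               [ 0, -1, -1, -1,  3]], dtype=float)  # Laplacian printed above
W = perron_matrix(L5, eps, omega, c)                # helper sketched after (4)
X = np.random.uniform(0, 10, size=s * n)            # stands in for the random X(0) listed above
trajectory = [X.copy()]
for _ in range(40):                                 # iterate X(k) = W X(k-1), eq. (4)
    X = W @ X
    trajectory.append(X.copy())
x1_memory = np.array(trajectory)[:, 0]              # observed output of agent 1, feeding (9) and (22)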

Consider the first agent, i.e., $i=1$. Establish a Hankel matrix by (22) and increase $\widehat{\overline{D}}_{1}$ until $\Gamma$ loses rank. Then, $\widehat{\overline{D}}_{1}=10$, which implies that $2\widehat{\overline{D}}_{1}+1=21$ is the minimal individual memory length. Besides, $\Gamma^{\bot}$ is given as follows

ζ1=[\displaystyle\zeta_{1}=[ 0.0000,0.7209,3.2848,7.5597,10.8951,9.8706,\displaystyle 0.0000,0.7209,-3.2848,7.5597,-10.8951,9.8706,
4.0706,2.6171,5.2873,3.5668,1.0000]T,\displaystyle-4.0706,-2.6171,5.2873,-3.5668,1.0000]^{\mbox{\tiny\sf T}},

which is the coefficient of (19). Together with (18), the minimal polynomial pair of WW is calculated below

α1=[\displaystyle\alpha_{1}=[ 0.0000,0.7209,6.1685,25.0245,63.7266,\displaystyle 0.0000,0.7209,-6.1685,25.0245,-63.7266,
112.6697,142.4475,124.0291,59.0453,14.2656,\displaystyle 112.6697,-142.4475,124.0291,-59.0453,-14.2656,
53.3886,49.1670,25.5545,7.5668,1.0000]T.\displaystyle 53.3886,-49.1670,25.5545,-7.5668,1.0000]^{\mbox{\tiny\sf T}}.

Then, (25) can be obtained. According to (28), one has that

[β0,β1β3]T=[0.00480.05380.67623.7570]T.\displaystyle[\beta_{0},\beta_{1}\dots\beta_{3}]^{\mbox{\tiny\sf T}}=[0.0048\quad 0.0538\quad 0.6762\quad 3.7570]^{\mbox{\tiny\sf T}}.

Besides, $b_{3-j}=\beta_{j}$ for $j=0,1,\cdots,3$. According to (31), the first-order consensus item of the first agent is computed as $\widetilde{\wp}_{c}^{(1)}(k)=0.000794850061k^{3}+0.02451822315k^{2}+0.6508490022k+3.757019522$. Here, the result is converted from a fraction to a decimal with 10 significant figures for convenience. In addition, once the first agent has predicted the eventual consensus value, it can send the value to the other agents simultaneously.

To compare the performance of the present MDCP with the routine consensus control protocol for high-order MASs (Ren et al., 2007) (abbreviated as Ren's protocol), let $M^{\prime}$ and $M$ denote the consensus-window-launch-times (CWLTs) of Ren's protocol and MDCP, respectively, with $\sigma=0.1$. Note that the smaller $\sigma$ is, the larger the CWLT is. As shown in Figs. 2 (a2-d2), deadbeat convergence to the consensus state is achieved by the present MDCP. By contrast, from Figs. 2 (a1-d1), it is observed that Ren's protocol merely yields asymptotic convergence to the same eventual consensus value. Thus, the main technical results of Theorems 1 and 2 are verified.

To examine the feasibility of the proposed MDCP on more general complex networks, we consider the benchmark Erdös–Rényi ($\mathrm{ER}$) (Erdos et al., 1960), Barabási–Albert ($\mathrm{BA}$) (Barabási & Albert, 1999) and Watts–Strogatz ($\mathrm{WS}$) (Watts & Strogatz, 1998) networks. In the $\mathrm{ER}$ network, node pairs are connected with a probability $\rho$. The $\mathrm{BA}$ network starts from a small clique of $m$ nodes, and at each time step a new node is introduced and connected to $m$ existing nodes. The $\mathrm{WS}$ network is a lattice where each node connects to $z$ neighbors with a rewiring probability of $0.3$. We randomly generate $\mathcal{S}=100$ networks of size $N=20$ for each kind of network model, with random initial individual states in the range $\left[0,30\right]$. In each network, each of the $N$ nodes is taken as the observed node $i$ in independent runs. Define the average CWLTs over the $\mathcal{S}$ networks with $N$ agents as $\overline{M}=\frac{1}{N\times\mathcal{S}}\sum_{j=1}^{\mathcal{S}}\sum_{i=1}^{N}M_{i,j}$ and $\overline{M^{\prime}}=\frac{1}{\mathcal{S}}\sum_{j=1}^{\mathcal{S}}M^{\prime}_{j}$, respectively. Here, $\sigma=0.1$, $M_{i,j}$ denotes the CWLT for the $i$-th agent of the $j$-th network, and $M^{\prime}_{j}$ refers to the CWLT of the $j$-th network. It is observed from Fig. 3 that the average CWLT of MDCP is much shorter than that of Ren's protocol ($\mathrm{ER}(\rho=0.2)$: 5.20s vs 54.05s, $\mathrm{ER}(\rho=0.4)$: 4.32s vs 32.26s, $\mathrm{WS}(z=6)$: 6.63s vs 42.88s, $\mathrm{WS}(z=8)$: 6.58s vs 35.94s, $\mathrm{BA}(m=6)$: 4.78s vs 35.87s, $\mathrm{BA}(m=9)$: 4.44s vs 27.24s), which implies that consensus has been substantially accelerated by locally sequential observations. Both the feasibility of Theorems 1 and 2 and the superiority of the present MDCP have thus been verified.
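For illustration, the benchmark topologies used in this comparison can be generated with networkx; the sketch below (the helper name benchmark_laplacian and the specific parameter values are ours) returns the graph Laplacian that would feed the closed-loop iteration (4). Note that these standard generators produce undirected graphs, which trivially satisfy Assumption 1.

import networkx as nx

def benchmark_laplacian(model, N=20, seed=None):
    # Generate one benchmark topology of Section 4 and return its Laplacian matrix.
    if model == "ER":
        G = nx.erdos_renyi_graph(N, p=0.2, seed=seed)
    elif model == "BA":
        G = nx.barabasi_albert_graph(N, m=6, seed=seed)
    else:  # "WS"
        G = nx.watts_strogatz_graph(N, k=6, p=0.3, seed=seed)
    return nx.laplacian_matrix(G).toarray().astype(float)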

Figure 3: The average CWLTs $\overline{M^{\prime}}$ and $\overline{M}$ computed over $\mathcal{S}=100$ independently generated $\mathrm{ER}$, $\mathrm{BA}$ and $\mathrm{WS}$ networks of size $N=20$.

5 Conclusion

In this paper, both the minimal-time deadbeat consensus and the instant individual disagreement degree prediction problems are addressed by the proposed MDCP for a class of discrete-time high-order linear MASs with directed communication topological backbones. Deadbeat consensus is achieved to fulfill the prompt coordination requirements of modern industrial MASs. Sufficient conditions concerning the topology and the associated eigenspectrum of the Perron matrix are derived to guarantee the minimal-time deadbeat consensus and the instant individual disagreement degree prediction. Extensive simulations are conducted on a variety of benchmark complex networks to show the effectiveness and superiority of the proposed MDCP protocol. The present MDCP can be expected to pave the way from minimal-time deadbeat consensus theory to prompt responses to emergencies in natural/social/engineering swarming systems.

References

  • Attanasi et al. (2014) Attanasi, A., Cavagna, A., Del Castello, L., Giardina, I., Grigera, T. S., Jelić, A., Melillo, S., Parisi, L., Pohl, O., Shen, E. et al. (2014). Information transfer and behavioural inertia in starling flocks. Nature physics, 10, 691–696.
  • Barabási & Albert (1999) Barabási, A.-L., & Albert, R. (1999). Emergence of scaling in random networks. science, 286, 509–512.
  • Cavagna et al. (2015) Cavagna, A., Del Castello, L., Giardina, I., Grigera, T., Jelic, A., Melillo, S., Mora, T., Parisi, L., Silvestri, E., Viale, M. et al. (2015). Flocking and turning: a new model for self-organized collective motion. Journal of Statistical Physics, 158, 601–627.
  • Charalambous & Hadjicostis (2018) Charalambous, T., & Hadjicostis, C. N. (2018). When to stop iterating in digraphs of unknown size? an application to finite-time average consensus. In 2018 European Control Conference (ECC) (pp. 1–7). IEEE.
  • Charalambous et al. (2015) Charalambous, T., Yuan, Y., Yang, T., Pan, W., Hadjicostis, C. N., & Johansson, M. (2015). Distributed finite-time average consensus in digraphs in the presence of time delays. IEEE Transactions on Control of Network Systems, 2, 370–381.
  • Dimarogonas & Kyriakopoulos (2007) Dimarogonas, D. V., & Kyriakopoulos, K. J. (2007). On the rendezvous problem for multiple nonholonomic agents. IEEE Transactions on Automatic Control, 52, 916–922.
  • Erdos et al. (1960) Erdos, P., Rényi, A. et al. (1960). On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci, 5, 17–60.
  • Fax & Murray (2004) Fax, J. A., & Murray, R. M. (2004). Information flow and cooperative control of vehicle formations. IEEE Transactions on Automatic Control, 49, 1465–1476.
  • Horn & Johnson (2012) Horn, R. A., & Johnson, C. R. (2012). Matrix analysis. Cambridge University Press.
  • Hu et al. (2019) Hu, Z., Fu, D., & Zhang, H.-T. (2019). Decentralized finite time consensus for discrete-time high-order linear multi-agent system. In 2019 Chinese Control Conference (CCC) (pp. 6154–6159). IEEE.
  • Huang et al. (2014) Huang, J., Fang, H., Dou, L., & Chen, J. (2014). An overview of distributed high-order multi-agent coordination. IEEE/CAA Journal of Automatica Sinica, 1, 1–9.
  • Jadbabaie et al. (2003) Jadbabaie, A., Lin, J., & Morse, A. S. (2003). Coordination of groups of mobile autonomous agents using nearest neighbor rules. IEEE Transactions on Automatic Control, 48, 988–1001.
  • Liu et al. (2019) Liu, B., Chen, Z., Zhang, H.-T., Wang, X., Geng, T., Su, H., & Zhao, J. (2019). Collective dynamics and control for multiple unmanned surface vessels. IEEE Transactions on Control Systems Technology, 28, 2540–2547.
  • Manitara & Hadjicostis (2017) Manitara, N. E., & Hadjicostis, C. N. (2017). Distributed stopping for average consensus in digraphs. IEEE Transactions on Control of Network Systems, 5, 957–967.
  • Moreau (2005) Moreau, L. (2005). Stability of multiagent systems with time-dependent communication links. IEEE Transactions on Automatic Control, 50, 169–182.
  • Olfati-Saber (2006) Olfati-Saber, R. (2006). Flocking for multi-agent dynamic systems: Algorithms and theory. IEEE Transactions on Automatic Control, 51, 401–420.
  • Olfati-Saber & Murray (2004) Olfati-Saber, R., & Murray, R. M. (2004). Consensus problems in networks of agents with switching topology and time-delays. IEEE Transactions on Automatic Control, 49, 1520–1533.
  • Olver (1993) Olver, P. J. (1993). Applications of Lie groups to differential equations volume 107. Springer Science & Business Media.
  • Partington (1988) Partington, J. R. (1988). An introduction to Hankel operators. 13. Cambridge University Press.
  • Proskurnikov et al. (2015) Proskurnikov, A. V., Matveev, A. S., & Cao, M. (2015). Opinion dynamics in social networks with hostile camps: Consensus vs. polarization. IEEE Transactions on Automatic Control, 61, 1524–1536.
  • Ren & Beard (2005) Ren, W., & Beard, R. W. (2005). Consensus seeking in multiagent systems under dynamically changing interaction topologies. IEEE Transactions on Automatic Control, 50, 655–661.
  • Ren & Beard (2008) Ren, W., & Beard, R. W. (2008). Distributed Consensus in Multi-Vehicle Cooperative Control. Springer.
  • Ren et al. (2007) Ren, W., Moore, K. L., & Chen, Y. (2007). High-order and model reference consensus algorithms in cooperative control of multivehicle systems. Journal of Dynamic Systems, Measurement and Control, 129, 678–688.
  • Rezaee & Abdollahi (2015) Rezaee, H., & Abdollahi, F. (2015). Average consensus over high-order multiagent systems. IEEE Transactions on Automatic Control, 60, 3047–3052.
  • Seo et al. (2009) Seo, J. H., Shim, H., & Back, J. (2009). Consensus of high-order linear systems using dynamic output feedback compensator: Low gain approach. Automatica, 45, 2659–2664.
  • Su & Lin (2016) Su, S., & Lin, Z. (2016). Distributed consensus control of multi-agent systems with higher order agent dynamics and dynamically changing directed interaction topologies. IEEE Transactions on Automatic Control, 61, 515–519. doi:10.1109/TAC.2015.2444211.
  • Sun et al. (2017) Sun, S., Lin, H., Ma, J., & Li, X. (2017). Multi-sensor distributed fusion estimation with applications in networked systems: A review paper. Information Fusion, 38, 122–134.
  • Sundaram & Hadjicostis (2007) Sundaram, S., & Hadjicostis, C. N. (2007). Finite-time distributed consensus in graphs with time-invariant topologies. In 2007 American Control Conference (pp. 711–716). IEEE.
  • Sundaram & Hadjicostis (2010) Sundaram, S., & Hadjicostis, C. N. (2010). Distributed function calculation via linear iterative strategies in the presence of malicious agents. IEEE Transactions on Automatic Control, 56, 1495–1508.
  • Watts & Strogatz (1998) Watts, D. J., & Strogatz, S. H. (1998). Collective dynamics of ‘small-world’networks. nature, 393, 440–442.
  • Wieland et al. (2008) Wieland, P., Kim, J.-S., Scheu, H., & Allgöwer, F. (2008). On consensus in multi-agent systems with linear high-order agents. IFAC Proceedings Volumes, 41, 1541–1546.
  • Xiao & Boyd (2004) Xiao, L., & Boyd, S. (2004). Fast linear iterations for distributed averaging. Systems & Control Letters, 53, 65–78.
  • Yang et al. (2013) Yang, S., Tan, S., & Xu, J.-X. (2013). Consensus based approach for economic dispatch problem in a smart grid. IEEE Transactions on Power Systems, 28, 4416–4426.
  • Yuan (2012) Yuan, Y. (2012). Decentralised network prediction and reconstruction algorithms. Ph.D. thesis University of Cambridge.
  • Yuan et al. (2013) Yuan, Y., Stan, G.-B., Shi, L., Barahona, M., & Goncalves, J. (2013). Decentralised minimum-time consensus. Automatica, 49, 1227–1235.
  • Zhang et al. (2008) Zhang, H.-T., Chen, M. Z., Stan, G.-B., Zhou, T., & Maciejowski, J. M. (2008). Collective behavior coordination with predictive mechanisms. IEEE Circuits and Systems Magazine, 8, 67–85.
  • Zhang et al. (2019) Zhang, H.-T., Fan, M.-C., Wu, Y., Gao, J., Stanley, H. E., Zhou, T., & Yuan, Y. (2019). Ultrafast synchronization via local observation. New Journal of Physics, 21, 013040.
  • Zuo et al. (2018) Zuo, Z., Tian, B., Defoort, M., & Ding, Z. (2018). Fixed-time consensus tracking for multiagent systems with high-order integrator dynamics. IEEE Transactions on Automatic Control, 63, 563–570. doi:10.1109/TAC.2017.2729502.
Acknowledgements

This work is supported by the National Natural Science Foundation of China under Grants 62225306 and U2141235.