
Adaptive Distributed Observer-based Model
Predictive Control for Multi-agent Formation
with Resilience to Communication Link Faults

Binyan Xu    \IEEEmembershipMember, IEEE    Yufan Dai    \IEEEmembershipGraduate Student Member, IEEE   
Afzal Suleman
and Yang Shi \IEEEmembershipFellow, IEEE This paper was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC). B. Xu is with the School of Engineering, University of Guelph, Guelph ON N1G 2W1, Canada (e-mail: [email protected]); Y. Dai, A. Suleman and Y. Shi are with the Department of Mechanical Engineering, University of Victoria, Victoria BC V8P 5C2, Canada (e-mail: [email protected]; [email protected]; [email protected]).
Abstract

To address the nonlinear multi-agent formation tracking control problem with input constraints and unknown communication faults, a novel adaptive distributed observer-based distributed model predictive control method is developed in this paper. The design employs adaptive distributed observers in the local control systems to estimate the leader’s state, dynamics, and each agent’s desired relative position with respect to the leader. By using the estimated data as local references, the original formation tracking control problem is decomposed into several fully localized tracking control problems, each of which can be efficiently solved by a local predictive controller. Through the incorporation of adaptive distributed observers, the proposed design not only enhances the resilience of distributed formation tracking against communication faults but also simplifies the distributed model predictive control formulation.

{IEEEkeywords}

Model predictive control; Unmanned aerial vehicles; Adaptive control; Fault-tolerant; Multi-agent system

1 Introduction

Multi-agent systems (MASs), distinguished by decentralized task allocation, distributed mission execution, and self-organization, have drawn increasing attention across diverse fields due to their broad spectrum of applications. Moreover, the study of formation control, which involves controlling the positions and orientations of agents to attain a particular geometric configuration with respect to a leader, has emerged as a prominent research topic [1]. While numerous results have been reported in the field of formation control [2], many investigations do not account for input constraints or control optimality. However, the incorporation of input constraints is imperative for an accurate formulation of real-world problems, and optimality is essential to fully exploit the available control resources while satisfying those constraints.

An appealing framework for formation control is distributed model predictive control (DMPC). DMPC inherits the advantages of centralized model predictive control (MPC), including systematic handling of hard constraints, optimized control performance, inherent robustness, and the ability to cope with nonlinear multi-variable systems [3]. In addition, the distributed implementation of DMPC effectively spreads the computation workload, further enhancing its appeal and practicality [4]. Numerous DMPC methods for MASs have been proposed, as summarized in review papers such as [5, 6]. However, existing DMPC results may encounter limitations when tackling the distinctive challenges posed by multi-UAV formation problems. First of all, the computation resources of onboard microcontrollers are limited, while the proposed DMPC methods with terminal constraints demand a sufficiently long prediction horizon and, thereby, a heavy computational load to ensure feasibility. Secondly, the majority of DMPC methods are tailored to address the cooperative regulation problem that drives all agents toward a prior-known set point [7, 8]. These methods rest on an implicit assumption about the communication graph: that each agent in the system is directly linked to the leader. Such an assumption does not hold in the context of formation control, where the leader’s information is often only available to a portion of the followers. A notable exception, proposed in [9], does not require globally known leader information but is only applicable to multi-vehicle platoon scenarios for tracking a constant-speed leader. There also exists a contradiction between the high communication workload of DMPC and the limited bandwidth of wireless communication networks employed in multi-UAV systems.
To attain global stability and feasibility of local optimization problems, it is essential for each distributed optimizer to have access to its neighbors’ most up-to-date optimized control sequences over the prediction horizon [7]. As a result, DMPC usually entails a substantial amount of information exchange, iteratively [10, 11, 12] or sequentially [13, 14]. Moreover, the distributed predictive controller, which involves predictions for both itself and its neighbors, not only needs to receive information from its neighbors but also needs to identify the source of that information.

A significant challenge that remains inadequately addressed in current DMPC studies is maintaining control performance in the presence of communication faults. The communication network plays a crucial role in enabling interactions and facilitating cooperative behaviors among agents. However, the inclusion of communication networks introduces additional vulnerabilities to the control system, particularly when facing unexpected events such as cyber-attacks and channel fading. Communication faults between agents can pose major threats to multi-agent control systems, potentially degrading control performance or even compromising overall system stability. Attacks and fading within communication networks can be modeled as uncertainties in the communication links. Recent studies in [15, 16, 17] explore the consensus of MASs with stochastic uncertain communication networks. In [18, 19], deterministic network uncertainties are examined within the context of MASs with single integrator agents. In [20], a distributed state observer-based adaptive control protocol is designed to address the leader-follower consensus for linear MASs with communication link faults. This study demonstrates that the distributed leader state observer network is resilient to communication link faults. However, it requires that all following agents know the leader dynamics. As an extension of this result, [21] proposes adaptive distributed leader state/dynamics observers and control protocols, offering a completely distributed solution for synchronizing linear MASs with time-varying edge weights without the need for global knowledge of the leader dynamics. Most existing research on resilient cooperative control in the presence of communication uncertainties is directed towards unconstrained MASs with linear dynamics. Moreover, to the best of our knowledge, fully distributed control for formation tracking under communication link faults has not yet received significant research attention.

Motivated by the aforementioned investigations, this paper develops a novel adaptive distributed observer-based DMPC method for nonlinear MASs in the presence of input constraints and communication link faults. To achieve the formation tracking objective without relying on global access to the leader’s information, adaptive distributed observers are developed for all local control systems, estimating online the leader’s state, dynamics, and the desired relative position with respect to the leader. With information estimated by these observers, distributed MPC controllers are independently developed to manipulate each agent toward a predetermined formation relative to the estimated leader while adhering to input constraints. The asymptotic convergence of the observation process is demonstrated, which in turn proves the closed-loop control performance of the overall system. To validate the efficacy of the proposed design, simulations are conducted using both a numerical example and a practical 5-UAV system. The key contributions of this research work include:

  • In contrast to prior works such as [18, 19, 20, 21] that focus on the consensus problem in unconstrained, linear MASs, this study explores the formation tracking control problem in MASs with both input constraints and nonlinear dynamics. Adaptive distributed observers are utilized to estimate not only the leader’s state and dynamics but also the desired formation displacement of each agent relative to the leader. With the estimated real-time information as the reference, MPC is employed for the local controller design to achieve optimized control performance subject to the input constraints.

  • By locally estimating tracking references through corresponding adaptive observers, the distributed formation tracking control task can be decoupled into several fully distributed tracking control problems at the local level. This facilitates the development of local controllers. Therefore, the integration of adaptive observers can significantly reduce the complexity of the distributed MPC formulation compared to other designs proposed in [7, 22, 23, 24, 25].

The remainder of this paper is structured as follows: Section 2 provides the mathematical formulation of the control problem and objective; Section 3 elaborates on the distributed control design, presenting the adaptive observer and the MPC-based controller; Section 4 conducts the closed-loop analysis, evaluating the convergence of the estimation and the stability of the control system; Section 5 offers two simulation examples to validate the effectiveness of the proposed design; Finally, Section 6 summarizes this paper.

Notations used in this paper are listed as follows. $\mathbb{R}$ and $\mathbb{R}^{+}$ denote the set of real numbers and the set of positive real numbers, respectively. $\mathbb{R}^{n}$ is the set of $n$-dimensional real column vectors, while $\mathbb{R}^{n\times m}$ is the set of $n\times m$ real matrices. $\bm{x}^{\top}$ represents the transpose of $\bm{x}$. $\|\bm{x}\|$ represents the standard Euclidean norm of $\bm{x}$, and $\|\bm{x}\|^{2}_{\bm{Q}}=\bm{x}^{\top}\bm{Q}\bm{x}$ is the weighted squared norm of $\bm{x}$. $\overline{\sigma}(\bm{A})$ and $\underline{\sigma}(\bm{A})$ represent the maximal and minimal eigenvalues of matrix $\bm{A}$, respectively. A diagonal matrix with $x_{1},x_{2},\cdots,x_{n}$ being the diagonal elements is denoted by $\text{diag}(x_{1},x_{2},\cdots,x_{n})$, while a diagonal matrix with the elements of vector $\bm{x}$ on the diagonal is denoted by $\text{diag}(\bm{x})$. A block-diagonal matrix whose diagonal contains the matrices $\bm{A}_{1},\bm{A}_{2},\cdots,\bm{A}_{n}$ is denoted by $\text{blkdiag}(\bm{A}_{1},\bm{A}_{2},\cdots,\bm{A}_{n})$. The notation $\otimes$ denotes the Kronecker product.

2 Problem Formulation

This section outlines the mathematical formulation of the control problem to tackle: multi-agent formation tracking control in the presence of communication faults. Firstly, we detail the dynamics models of individual agents and the virtual leader and describe their intercommunication through a directed weighted graph. Subsequently, the modeling of communication faults is presented. Finally, we introduce leader-follower tracking errors to formulate the control objective for formation tracking.

2.1 Multi-agent System

Consider a multi-agent system comprising $M$ followers and one virtual leader. The dynamics of both the followers and the leader are detailed below, while their interactions are modeled using a weighted directed graph.

2.1.1 Follower Dynamics

The dynamics of the $i$th follower can be described by the following higher-order MIMO nonlinear model:

\left\{\begin{array}{rl}\dot{x}_{i,1}&=x_{i,2}\\ &\ \vdots\\ \dot{x}_{i,r-1}&=x_{i,r}\\ \dot{x}_{i,r}&=f_{i}(x_{i})+G_{i}(x_{i})u_{i}\\ y_{i}&=x_{i,1}\end{array}\right.  (6)

where $x_{i}=\left[x_{i,1}^{\top}\ x_{i,2}^{\top}\ \cdots\ x_{i,r}^{\top}\right]^{\top}\in\mathbb{R}^{rn}$ is the system state vector with each segment $x_{i,l}\in\mathbb{R}^{n}$ for $l=1,2,\cdots,r$; $u_{i}=\left[u_{i,1}\ u_{i,2}\ \cdots\ u_{i,n}\right]^{\top}\in\mathbb{R}^{n}$ and $y_{i}=\left[y_{i,1}\ y_{i,2}\ \cdots\ y_{i,n}\right]^{\top}\in\mathbb{R}^{n}$ are the control input and system output, respectively; $f_{i}(x_{i})=\left[f_{i,1}(x_{i})\ f_{i,2}(x_{i})\ \cdots\ f_{i,n}(x_{i})\right]^{\top}:\mathbb{R}^{rn}\rightarrow\mathbb{R}^{n}$ is a vector function, and $G_{i}(x_{i})=\left[g_{i,1}(x_{i})\ g_{i,2}(x_{i})\ \cdots\ g_{i,n}(x_{i})\right]:\mathbb{R}^{rn}\rightarrow\mathbb{R}^{n\times n}$ is a square matrix function. To ensure that the system’s behavior is predictable and well-behaved around the origin, the following assumption is necessary and commonly employed.

Assumption 1

All entries of $f_{i}(x_{i})$ and $G_{i}(x_{i})$ are sufficiently smooth, locally Lipschitz functions of $x_{i}$ and satisfy $f_{i}(0)=0$ and $G_{i}(0)\neq 0$.

2.1.2 Communication Graph

The communication among these $M$ followers can be described using a directed weighted graph, represented by $\mathcal{G}=\{\mathcal{V},\mathcal{E}\}$. In this representation, $\mathcal{V}=\{1,2,\cdots,M\}$ is the set of nodes, with each node corresponding to a follower agent. $\mathcal{E}=\{(j,i)\,|\,i,j\in\mathcal{V},i\neq j\}$ is the set of edges, and $(j,i)\in\mathcal{E}$ means there is a communication link from agent $j$ to agent $i$. Associated with this graph are two critical matrices. The adjacency matrix $\mathcal{A}=[a_{ij}]$ is defined such that $a_{ij}>0$ if $(j,i)\in\mathcal{E}$ and $a_{ij}=0$ otherwise. The Laplacian matrix $\mathcal{L}=[l_{ij}]$ is defined with $l_{ii}=\sum_{j=1}^{M}a_{ij}$ capturing the in-degree of node $i$, and $l_{ij}=-a_{ij}$ for $i\neq j$.
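As a concrete illustration of these definitions, the adjacency and Laplacian matrices can be constructed numerically; a minimal sketch is given below (the three-follower chain is a hypothetical example, not a topology from this paper):

```python
import numpy as np

def laplacian(A):
    """Graph Laplacian L = [l_ij] with l_ii = sum_j a_ij and l_ij = -a_ij.

    A[i, j] = a_ij > 0 iff there is a communication link from agent j
    to agent i, so row i of A collects the in-edges of node i.
    """
    A = np.asarray(A, dtype=float)
    return np.diag(A.sum(axis=1)) - A

# Hypothetical example: three followers in a directed chain 1 -> 2 -> 3.
A = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
L = laplacian(A)  # every row of L sums to zero by construction
```

By construction, each row of $\mathcal{L}$ sums to zero, which is the standard sanity check for a weighted directed Laplacian.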

2.1.3 Leader Dynamics and Connectivity

In addition to the follower agents, the system includes a virtual leader whose role is to guide the overall behavior of the MAS. The dynamics of this virtual leader are governed by:

\dot{\xi}_{0}=S_{0}\xi_{0}  (7)

where $\xi_{0}=\left[\xi_{0,1}^{\top}\ \xi_{0,2}^{\top}\ \cdots\ \xi_{0,r}^{\top}\right]^{\top}\in\mathbb{R}^{rn}$ represents the state vector of the leader, and $S_{0}\in\mathbb{R}^{rn\times rn}$ denotes the system dynamics matrix.

Remark 1

It is imperative that the leader’s state vector $\xi_{0}$ has the same dimension as the followers’ state vectors, ensuring that it can serve as a reference for the followers’ outputs; the segments of $\xi_{0}$ serve as references for the output $y_{i}$ of follower $i$ and its derivatives.

Note that the state vector $\xi_{0}$ and the dynamics matrix $S_{0}$ of the leader are only accessible to certain followers. The leader can be labeled as node 0, and the connections from this leader to the followers, labeled $1,2,\cdots,M$, can be defined by a set of pinning edges $\mathcal{E}^{0}=\{(0,i)\,|\,i\in\mathcal{V}\}$. An edge $(0,i)\in\mathcal{E}^{0}$ indicates that follower $i$ has direct access to the leader’s state and dynamics. Additionally, we introduce a pinning matrix $\mathcal{B}={\rm diag}(b_{1},b_{2},\cdots,b_{M})$, where $b_{i}>0$ if $(0,i)\in\mathcal{E}^{0}$ and $b_{i}=0$ otherwise. This matrix effectively quantifies the influence of the leader on each follower.

2.2 Input Constraints and Communication Faults

In this work, we address both the input constraints of individual follower agents and unknown faults that may occur within the communication network. These considerations are crucial for ensuring the robustness and reliability of the system under various operational conditions. The mathematical models that incorporate these elements are provided below.

2.2.1 Input Constraints

Considering the limitations on admissible control actions, the control input of the $i$th follower is restricted to a nonempty compact convex set, defined by

u_{i}\in\Omega_{u_{i}}\triangleq\left\{u_{i}\in\mathbb{R}^{n}\,|\,-\underline{u}_{i}\leqslant u_{i}\leqslant\overline{u}_{i}\right\}  (8)

where $\underline{u}_{i}=[\underline{u}_{i,1}\ \underline{u}_{i,2}\ \cdots\ \underline{u}_{i,n}]^{\top}$ and $\overline{u}_{i}=[\overline{u}_{i,1}\ \overline{u}_{i,2}\ \cdots\ \overline{u}_{i,n}]^{\top}$ are vectors in $\mathbb{R}^{n}$ with positive entries.
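The box constraint (8) corresponds to a componentwise projection onto the admissible set; a minimal sketch (the numerical bounds below are illustrative, not values from this paper):

```python
import numpy as np

def project_input(u, u_lower, u_upper):
    """Project u onto the box {-u_lower <= u <= u_upper}, componentwise.

    u_lower and u_upper are vectors of positive entries, matching the
    asymmetric bounds -u_lower_i <= u_i <= u_upper_i in (8).
    """
    return np.clip(u, -np.asarray(u_lower), np.asarray(u_upper))

# Illustrative 3-channel input with bounds -1 <= u_j <= 2:
u_proj = project_input([3.0, -3.0, 0.5], [1.0, 1.0, 1.0], [2.0, 2.0, 2.0])
```

Each channel is clipped independently, which is exactly the projection onto a box set.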

2.2.2 Communication Faults

Communication faults can be modeled as time-varying uncertainties affecting the graph edges  [20]:

a^{f}_{ij}(t) = a_{ij}+\vartheta^{a}_{ij}(t)  (9a)
b^{f}_{i}(t) = b_{i}+\vartheta^{b}_{i}(t)  (9b)

where $a_{ij}$ and $b_{i}$ are the ideal weights of the general and pinning edges, and $\vartheta^{a}_{ij}$ and $\vartheta^{b}_{i}$ denote the weight corruptions caused by communication faults. This fault model covers the following types of communication faults:

  • Channel manipulation attack: The unknown corrupted weights $\vartheta^{a}_{ij}$ and $\vartheta^{b}_{i}$ are capable of modeling a range of cyber attacks that involve infiltrating communication channels and manipulating the information shared between vehicles.

  • Fading channel: The corrupted communication weights can also represent the effect of a fading channel, resulting in a decrease in the values of the communication weights.
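A hedged sketch of the fault model (9): a nominal edge weight perturbed by a bounded, smooth corruption. The sinusoidal form and the amplitude/frequency values below are illustrative assumptions, not choices from this paper:

```python
import numpy as np

def faulty_weight(a_ij, t, amp=0.3, freq=0.5):
    """Faulty edge weight a_ij^f(t) = a_ij + theta_ij^a(t), as in (9a).

    Here theta(t) = amp * a_ij * sin(2*pi*freq*t) is bounded with a
    bounded derivative; for amp < 1 it also preserves the sign of the
    nominal weight, mimicking e.g. a fading channel.
    """
    theta = amp * a_ij * np.sin(2.0 * np.pi * freq * t)
    return a_ij + theta
```

Scaling the corruption by the nominal weight keeps absent links (weight zero) unaffected, so the network structure itself is not altered by this particular fault signal.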

Remark 2

The existence of $\vartheta^{a}_{ij}$ and $\vartheta^{b}_{i}$ introduces time-variation and uncertainty into the weights of the communication links. Consequently, in the event of communication link faults as specified in (9), both the Laplacian matrix and the pinning matrix of the directed graph $\mathcal{G}$ undergo modifications. Specifically, the Laplacian matrix is redefined as $\mathcal{L}^{f}(t)=[l^{f}_{ij}(t)]$, where $l^{f}_{ii}(t)=\sum_{j=1}^{M}a^{f}_{ij}(t)$ for the diagonal elements and $l^{f}_{ij}(t)=-a^{f}_{ij}(t)$ for the off-diagonal elements with $i\neq j$. Similarly, the pinning matrix is revised to $\mathcal{B}^{f}(t)=\text{diag}(b_{1}^{f}(t),b_{2}^{f}(t),\dots,b_{M}^{f}(t))$.

Assumption 2

The communication link faults $\vartheta^{a}_{ij}(t)$ and $\vartheta^{b}_{i}(t)$ in the directed graph, as well as their derivatives, are bounded. In addition, the signs of $a^{f}_{ij}(t)$ and $b^{f}_{i}(t)$ are the same as those of $a_{ij}$ and $b_{i}$.

Remark 3

Assumption 2, as also utilized in [15, 20], ensures the boundedness of communication faults and maintains the invariance of the network structure despite these faults. The modeling of communication link faults in (9) under Assumption 2 can cover various types of communication faults and cyber attacks with bounded derivatives, such as bias attacks and fading channels.

2.3 Formation Tracking Control Objective

Having modeled the MAS, taking into account input constraints and communication faults, we now proceed to formulate the formation tracking control objective. Formation refers to a specific spatial shape maintained by the followers, which is typically defined prior to executing any formation control. To delineate a formation task, we assign each follower in the system a specific formation displacement relative to the virtual leader, denoted as $\varDelta_{i}$ for $i=1,2,\cdots,M$. Furthermore, let $x=\left[x_{1}^{\top}\ x_{2}^{\top}\ \cdots\ x_{M}^{\top}\right]^{\top}\in\mathbb{R}^{rnM}$ represent the collective state vector of all the followers. The state of the leader is extended correspondingly as $\xi=1_{M}\otimes\xi_{0}\in\mathbb{R}^{rnM}$. Then, a global formation tracking error can be defined as

\tilde{x}=\begin{bmatrix}\tilde{x}_{1}^{\top}\ \tilde{x}_{2}^{\top}\ \cdots\ \tilde{x}_{M}^{\top}\end{bmatrix}^{\top}=x-\xi-\varDelta  (10)

where $\varDelta=\left[\varDelta_{1}^{\top}\ \varDelta_{2}^{\top}\ \cdots\ \varDelta_{M}^{\top}\right]^{\top}\in\mathbb{R}^{rnM}$ is the collective formation displacement vector.
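The global error (10) is straightforward to evaluate numerically; a minimal sketch follows (the stacked-array layout and the sample values are implementation choices for illustration only):

```python
import numpy as np

def formation_error(x, xi0, Delta):
    """Global formation tracking error x_tilde = x - (1_M kron xi_0) - Delta.

    x     : (M, rn) array, row i is follower i's state x_i
    xi0   : (rn,)   leader state xi_0
    Delta : (M, rn) array, row i is the desired displacement Delta_i
    """
    return x - xi0[None, :] - Delta

# Two followers already in the desired formation about the leader:
x = np.array([[1.0, 0.0], [2.0, 0.0]])
xi0 = np.array([1.0, 0.0])
Delta = np.array([[0.0, 0.0], [1.0, 0.0]])
err = formation_error(x, xi0, Delta)
```

Broadcasting the single leader state across the $M$ rows plays the role of the Kronecker extension $1_{M}\otimes\xi_{0}$.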

Assumption 3

To define a practical formation task, the displacement vector $\varDelta_{i}$, which encodes the desired offset between $x_{i}=\left[y_{i}^{\top}\ \dot{y}_{i}^{\top}\ \cdots\ y_{i}^{(r-1)\top}\right]^{\top}$ and $\xi_{0}$, is structured as $\left[\varDelta_{y_{i}}^{\top}\ \dot{\varDelta}_{y_{i}}^{\top}\ \cdots\ \varDelta_{y_{i}}^{(r-1)\top}\right]^{\top}$. It is assumed that $\varDelta_{y_{i}}\in\mathbb{R}^{n}$ is at least $(r-1)$-times differentiable and that the $r$th derivative, $\varDelta_{y_{i}}^{(r)}$, is zero.

Remark 4

Given the previously defined directed communication topology, the knowledge of the leader’s state and dynamics, as well as the desired formation displacement information, is not required to be globally known across the MAS. Only agents directly connected to the virtual leader have access to the real-time values of $\xi_{0}$ and the respective $\varDelta_{i}$. Agents that do not have a direct communication link with the virtual leader are only required to store and transmit the relative displacement vector $\varDelta_{ij}=\varDelta_{i}-\varDelta_{j}$, along with their state measurements, to their designated out-neighbor nodes $j\in\mathcal{N}_{i}^{+}$ via the communication links.

The control objective of this study is to develop a distributed control strategy that utilizes solely locally available neighborhood information for effective formation tracking of a MAS composed of $M$ followers (6) and a virtual leader (7). The primary goal is to ensure that the global formation error $\tilde{x}$ not only converges to, but also remains within, a small region near the origin. To ensure that this formation tracking control objective is achievable, the following assumption on the graph topology holds throughout this paper.

Assumption 4

In the directed graph 𝒢\mathcal{G}, each node is either part of a spanning tree with the root node connected to the virtual leader or a standalone node directly connected to the virtual leader.

Remark 5

The above assumption ensures that there is either direct or indirect connectivity between each follower and the leader, providing a directed path from the leader to all the followers. This network structure is critical for achieving synchronized behaviors among the agents, enabling the distributed control strategies to eliminate the formation error across the system effectively.

Lemma 1

[1] Let $\mathcal{G}$ be the directed graph for the $M$ followers, labeled as agents $1$ to $M$. Let $\mathcal{L}$ be the nonsymmetric Laplacian matrix associated with the directed graph $\mathcal{G}$. Suppose that, in addition to the $M$ followers, there exists a leader, labeled as agent 0, whose connection to the $M$ followers can be described by a pinning matrix $\mathcal{B}=\text{diag}(b_{1},b_{2},\cdots,b_{M})$, where $b_{i}>0$ if the $i$th follower can receive information from the leader and $b_{i}=0$ otherwise. Let $\mathcal{L}_{\mathcal{B}}=\mathcal{L}+\mathcal{B}$. Then, all eigenvalues of $\mathcal{L}_{\mathcal{B}}$ have positive real parts if and only if, in the directed graph $\mathcal{G}$, the leader has directed paths to all followers.
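Lemma 1 can be checked numerically for a given topology. Below is a sketch on a hypothetical three-follower chain pinned to the leader at node 1 (an illustrative topology, not one from this paper):

```python
import numpy as np

# Directed chain: leader 0 -> follower 1 -> follower 2 -> follower 3.
A = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
L = np.diag(A.sum(axis=1)) - A        # nonsymmetric Laplacian
B = np.diag([1.0, 0.0, 0.0])          # only follower 1 is pinned

# With the leader reaching every follower through the chain, Lemma 1
# predicts that all eigenvalues of L + B have positive real parts.
eigs_pinned = np.linalg.eigvals(L + B)

# Without pinning there is no path from the leader, and L alone keeps
# a zero eigenvalue (its rows sum to zero).
eigs_unpinned = np.linalg.eigvals(L)
```

For this chain, $\mathcal{L}+\mathcal{B}$ is lower triangular with diagonal $(1,1,1)$, so the eigenvalue condition of Lemma 1 is satisfied trivially.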

3 Distributed Control Design

In this section, we present the design of an adaptive distributed MPC framework for addressing the formation tracking control problem with input constraints and communication faults. This framework integrates state observers with MPC controllers via a distributed structure. As illustrated in Figure 1, each follower’s local control system operates independently and relies exclusively on locally available information. It consists of an adaptive observer, which estimates the leader information with resilience to communication link faults, and an MPC-based controller, which determines optimal formation tracking control actions online based on the local estimates of the leader’s state, dynamics matrix, and desired displacement vector. Each component is elaborated in the subsequent subsections.

Figure 1: Detailed view of agent ii’s local control system in the distributed network

3.1 Adaptive Leader Observer

Given the limitations on the availability of direct, real-time access to the state and dynamics of the virtual leader among all followers in the network, it becomes essential to develop an adaptive distributed observer within each local control system. This observer is responsible for estimating the leader’s information and the formation displacement, which are critical components for effective formation tracking controller design.

The locally estimated leader state for follower $i$ is denoted as $\hat{\xi}_{i}$. We can then define a leader state estimation error as

\epsilon_{\xi_{i}} = \sum_{j=1}^{M}a_{ij}(t)\left(\hat{\xi}_{i}-\hat{\xi}_{j}\right)+b_{i}(t)\left(\hat{\xi}_{i}-\xi_{0}\right)  (11)

which is available to the local control system of follower $i$. The distributed adaptive leader state observer is then designed as

\dot{\hat{\xi}}_{i} = \hat{S}_{i}\hat{\xi}_{i}-c_{\xi_{i}}\epsilon_{\xi_{i}}  (12)

where $c_{\xi_{i}}$ is a user-designed positive observation gain.

In (12), $\hat{S}_{i}$ is the estimate of the leader’s dynamics matrix $S_{0}$, updated by the following adaptation law

\dot{\hat{S}}_{i} = -\left(c_{S_{i}}+\dot{c}_{S_{i}}\right)\epsilon_{S_{i}}  (13)

where $\epsilon_{S_{i}}$ is the local estimation error for $S_{0}$, defined as

\epsilon_{S_{i}} = \sum_{j=1}^{M}a_{ij}(t)\left(\hat{S}_{i}-\hat{S}_{j}\right)+b_{i}(t)\left(\hat{S}_{i}-S_{0}\right)  (14)

In (13), $c_{S_{i}}$, satisfying $c_{S_{i}}(0)\geqslant 1$, is updated by

\dot{c}_{S_{i}} = \vec{\epsilon}_{S_{i}}^{\top}\vec{\epsilon}_{S_{i}}  (15)

with $\vec{\epsilon}_{S_{i}}={\rm vec}(\epsilon_{S_{i}})$ being the vector form of the matrix $\epsilon_{S_{i}}$. The operation ${\rm vec}(\cdot)$ stacks the columns of a matrix into a single column vector.

Similarly, let $\hat{\varDelta}_{i}$ denote the estimate of the desired formation displacement $\varDelta_{i}$. Its estimation law is

\dot{\hat{\varDelta}}_{i} = -\left(c_{\varDelta_{i}}+\dot{c}_{\varDelta_{i}}\right)\epsilon_{\varDelta_{i}}  (16)

where $\epsilon_{\varDelta_{i}}$ is the local estimation error for $\varDelta_{i}$, defined as

\epsilon_{\varDelta_{i}} = \sum_{j=1}^{M}a_{ij}(t)\left(\hat{\varDelta}_{i}-\hat{\varDelta}_{j}-\varDelta_{ij}\right)+b_{i}(t)\left(\hat{\varDelta}_{i}-\varDelta_{i}\right)  (17)

with $\varDelta_{ij}=\varDelta_{i}-\varDelta_{j}$ being the desired relative displacement between followers $i$ and $j$. In (16), $c_{\varDelta_{i}}$ satisfies $c_{\varDelta_{i}}\geqslant 1$ and is updated by the adaptive law:

\dot{c}_{\varDelta_{i}} = \epsilon_{\varDelta_{i}}^{\top}\epsilon_{\varDelta_{i}}  (18)
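To make the update laws (11)-(15) concrete, the following is a minimal forward-Euler sketch of one time step of the leader-state and leader-dynamics observers across all agents; the displacement estimator (16)-(18) follows the same pattern. The array shapes, the Euler discretization, and the step size are illustrative implementation assumptions, not part of the design itself:

```python
import numpy as np

def observer_step(xi_hat, S_hat, cS, A_f, b_f, xi0, S0, c_xi, dt):
    """One forward-Euler step of the adaptive leader observer (11)-(15).

    xi_hat : (M, n)    local leader-state estimates
    S_hat  : (M, n, n) local leader-dynamics estimates
    cS     : (M,)      adaptive gains with cS[i] >= 1
    A_f    : (M, M)    (possibly faulty) adjacency weights a_ij(t)
    b_f    : (M,)      (possibly faulty) pinning weights b_i(t)
    xi0,S0 : true leader state/dynamics (used only via pinned terms)
    """
    M = xi_hat.shape[0]
    xi_new, S_new, cS_new = xi_hat.copy(), S_hat.copy(), cS.copy()
    for i in range(M):
        # local estimation errors (11) and (14)
        eps_xi = sum(A_f[i, j] * (xi_hat[i] - xi_hat[j]) for j in range(M)) \
                 + b_f[i] * (xi_hat[i] - xi0)
        eps_S = sum(A_f[i, j] * (S_hat[i] - S_hat[j]) for j in range(M)) \
                + b_f[i] * (S_hat[i] - S0)
        cS_dot = np.sum(eps_S * eps_S)  # (15): ||vec(eps_S)||^2
        xi_new[i] = xi_hat[i] + dt * (S_hat[i] @ xi_hat[i] - c_xi * eps_xi)  # (12)
        S_new[i] = S_hat[i] - dt * (cS[i] + cS_dot) * eps_S                  # (13)
        cS_new[i] = cS[i] + dt * cS_dot
    return xi_new, S_new, cS_new
```

A quick consistency check: if every agent is initialized at the true values, all estimation errors vanish and the state estimates simply propagate along the leader dynamics, as expected from (11)-(13).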

3.2 MPC-based Formation Tracking Controller

With the local estimates of the leader’s state $\hat{\xi}_{i}$, the leader’s dynamics matrix $\hat{S}_{i}$, and the desired formation displacement vector $\hat{\varDelta}_{i}$, we can move on to the development of the formation tracking controller.

By applying model predictive control, a finite-horizon constrained optimization problem over a forward-looking prediction horizon $T\in\mathbb{R}^{+}$ is solved at each control update instant. The optimization solution is then applied to the plant in a receding-horizon fashion, under a sample-and-hold implementation. The sequence of control update instants is $\{t_{k}=\delta k\,|\,k\in\{0,1,2,\cdots\}\}$, where $\delta<T$ represents the sampling period.

At the time instant tkt_{k}, the MPC optimization problem is formulated as

\min_{u_{i}^{p}(t|t_{k})}\ \int_{t_{k}}^{t_{k}+T}\left(\left\|s^{p}_{i}(t|t_{k})\right\|^{2}_{Q_{i}}+\left\|u_{i}^{p}(t|t_{k})\right\|^{2}_{R_{i}}\right)\text{d}t  (19a)
with
s^{p}_{i}(t|t_{k}) = \sum_{l=0}^{r-2}\lambda_{i,l}\left(x^{p}_{i,l+1}(t|t_{k})-\xi_{0,l+1}^{p}(t|t_{k})-\hat{\varDelta}_{i,l+1}(t_{k})\right)+\left(x^{p}_{i,r}(t|t_{k})-\xi_{0,r}^{p}(t|t_{k})-\hat{\varDelta}_{i,r}(t_{k})\right)  (19b)
subject to
\left\{\begin{array}{rl}\dot{x}_{i,1}^{p}(t|t_{k})&=x_{i,2}^{p}(t|t_{k})\\ &\ \vdots\\ \dot{x}_{i,r-1}^{p}(t|t_{k})&=x_{i,r}^{p}(t|t_{k})\\ \dot{x}_{i,r}^{p}(t|t_{k})&=f_{i}(x_{i}^{p}(t|t_{k}))+G_{i}(x_{i}^{p}(t|t_{k}))u_{i}^{p}(t|t_{k})\end{array}\right.  (19g)
\dot{\xi}_{0}^{p}(t|t_{k})=\hat{S}_{i}(t_{k})\xi_{0}^{p}(t|t_{k})  (19h)
x_{i}^{p}(t_{k}|t_{k})=x_{i}(t_{k})  (19i)
\xi_{0}^{p}(t_{k}|t_{k})=\hat{\xi}_{i}(t_{k})  (19j)
u^{p}_{i}(t|t_{k})\in\Omega_{u_{i}}  (19k)
{s^{p}_{i}}^{\top}(t_{k}|t_{k})\Bigl(\sum_{l=0}^{r-2}\lambda_{i,l}\left(x_{i,l+2}^{p}(t_{k}|t_{k})-\xi_{0,l+2}^{p}(t_{k}|t_{k})-\hat{\varDelta}_{i,l+1}(t_{k})\right)+f_{i}(x_{i}^{p}(t_{k}|t_{k}))+G_{i}(x_{i}^{p}(t_{k}|t_{k}))u_{i}^{p}(t_{k}|t_{k})-\dot{\xi}^{p}_{0,r}(t_{k}|t_{k})\Bigr)\leqslant-c_{i}\|s_{i}^{p}(t_{k}|t_{k})\|^{2}  (19l)

where $t\in[t_{k},t_{k}+T]$, and the internal variables are denoted by a superscript $p$ to distinguish them from the actual system signals. In the optimization problem (19), constraint (19g) serves as the prediction model for the future evolution of the follower itself, while (19h) predicts the leader’s behavior by making use of $\hat{S}_{i}$ estimated by the observer. Constraints (19i) and (19j) specify the initial conditions of the prediction models (19g) and (19h), respectively. Compliance with the input constraint is ensured by (19k). The Lyapunov-based stability constraint (19l) is designed to enforce the decay of the Lyapunov function at the current instant $t_{k}$.
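To illustrate the receding-horizon mechanics of (19) (prediction model, input bounds, sample-and-hold execution), here is a heavily simplified sketch for a single 1-D double-integrator follower tracking a constant estimated leader position. The discrete horizon, the cost weights, the solver choice, and the omission of the stability constraint (19l) are all simplifying assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize

dt, N = 0.1, 10       # sampling period delta and discrete horizon (T = N*dt)
u_max = 2.0           # scalar input bound, standing in for (19k)

def rollout(x0, u_seq):
    """Forward-Euler prediction of the double integrator x'' = u (cf. (19g))."""
    x, traj = np.array(x0, dtype=float), []
    for u in u_seq:
        x = x + dt * np.array([x[1], u])
        traj.append(x.copy())
    return np.array(traj)

def mpc_step(x0, ref, Q=10.0, Qv=1.0, R=0.1):
    """Solve one finite-horizon problem (cf. (19a)) and return the first input."""
    def cost(u_seq):
        traj = rollout(x0, u_seq)
        return np.sum(Q * (traj[:, 0] - ref) ** 2
                      + Qv * traj[:, 1] ** 2 + R * u_seq ** 2)
    res = minimize(cost, np.zeros(N),
                   bounds=[(-u_max, u_max)] * N, method="SLSQP")
    return res.x[0]

# Receding-horizon loop: re-solve at every t_k, apply only the first input.
x = np.array([0.0, 0.0])
for _ in range(40):
    u = mpc_step(x, ref=1.0)
    x = x + dt * np.array([x[1], u])   # sample-and-hold on the plant
```

Only the first element of each optimized input sequence is applied before the problem is re-solved at the next sampling instant, which is exactly the receding-horizon, sample-and-hold mechanism described above.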

Lemma 2

There always exists a feasible solution to the optimization problem (19), constructed as

u_{i}^{0}(t|t_{k})= {\rm sat}\left(v_{i}(t|t_{k}),\underline{u}_{i},\overline{u}_{i}\right)  (20a)
v_{i}(t|t_{k})= G_{i}^{-1}(x_{i}^{p}(t|t_{k}))\left(v_{i}^{a}(t|t_{k})+v_{i}^{d}(t|t_{k})\right)  (20b)
v_{i}^{a}(t|t_{k})= -c_{i}s_{i}^{p}(t|t_{k})-\sum_{l=0}^{r-2}\lambda_{i,l}\left(x_{i,l+2}^{p}(t|t_{k})-\xi_{0,l+2}^{p}(t|t_{k})-\hat{\varDelta}_{i,l+1}(t_{k})\right)-f_{i}(x_{i}^{p}(t|t_{k}))+\dot{\xi}^{p}_{0,r}(t|t_{k})  (20c)
v_{i}^{d}(t|t_{k})= -k_{s_{i}}\underline{\chi}_{i}^{-1}\,{\rm sgn}\left(s_{i}^{p}(t|t_{k})\right)  (20d)

for t\in[t_{k},t_{k}+T]. In (20a), {\rm sat}\left(v_{i}(t),\underline{u}_{i},\overline{u}_{i}\right) is a saturation function defined as

sat(vi(t),u¯i,u¯i)\displaystyle{\rm sat}\left(v_{i}(t),\underline{u}_{i},\overline{u}_{i}\right) =χ(vi(t),u¯i,u¯i)vi(t)\displaystyle=\chi\left(v_{i}(t),\underline{u}_{i},\overline{u}_{i}\right)v_{i}(t) (21)

with

χ(vi(t),u¯i,u¯i)\displaystyle\chi\left(v_{i}(t),\underline{u}_{i},\overline{u}_{i}\right) =diag(χj(vij(t),u¯ij,u¯ij))\displaystyle\!=\!\text{diag}\left(\chi_{j}\left(v_{i_{j}}\!(t),\underline{u}_{i_{j}},\overline{u}_{i_{j}}\right)\right)
χj(vij(t),u¯ij,u¯ij)\displaystyle\chi_{j}\left(v_{i_{j}}\!(t),\underline{u}_{i_{j}},\overline{u}_{i_{j}}\right) ={u¯ijvij(t),if vij(t)u¯ij1,if u¯ij<vij(t)<u¯iju¯ijvij(t),if vij(t)u¯ij\displaystyle\!=\!\left\{\begin{array}[]{cl}\frac{\overline{u}_{i_{j}}}{v_{i_{j}}\!(t)},&\text{if }v_{i_{j}}\!(t)\geqslant\overline{u}_{i_{j}}\\ 1,&\text{if }-\underline{u}_{i_{j}}<v_{i_{j}}\!(t)<\overline{u}_{i_{j}}\\ -\frac{\underline{u}_{i_{j}}}{v_{i_{j}}\!(t)},&\text{if }v_{i_{j}}\!(t)\leqslant-\underline{u}_{i_{j}}\end{array}\right.

in which j=1,2,\cdots,n; v_{i_{j}}, \underline{u}_{i_{j}} and \overline{u}_{i_{j}} denote the jth elements of v_{i}, \underline{u}_{i} and \overline{u}_{i}, respectively. \chi_{i}\left(v_{i},\underline{u}_{i},\overline{u}_{i}\right) is the control input saturation degree indicator, and all of its diagonal elements lie within (0,1]. v_{i}^{d} in (20d) is the discontinuous portion of the feasible input, where \underline{\chi}_{i}\in\mathbb{R}^{n\times n} is the lower bound of \chi_{i}\left(v_{i},\underline{u}_{i},\overline{u}_{i}\right) satisfying

On<χ¯iχi(vi,u¯i,u¯i)In\displaystyle O_{n}<\underline{\chi}_{i}\leqslant\chi_{i}\left(v_{i},\underline{u}_{i},\overline{u}_{i}\right)\leqslant I_{n} (22)

The user-defined positive parameter ksik_{s_{i}} is chosen appropriately such that

(χi(vi,u¯i,u¯i)In)via1ksi\displaystyle\left\|\left(\chi_{i}\left(v_{i},\underline{u}_{i},\overline{u}_{i}\right)-I_{n}\right)v_{i}^{a}\right\|_{1}\leqslant k_{s_{i}} (23)
Proof 3.1.

Due to the inclusion of the saturation function, the control profile constructed in (20) naturally satisfies the input constraint (19k). Substituting u_{i}^{0}(t_{k}|t_{k}) into the left-hand side of the inequality in (19l) gives

\displaystyle{s_{i}^{p}}^{\top}(t_{k}|t_{k})\biggl{(}\sum_{l=0}^{r-2}\lambda_{i,l}\left(x^{p}_{i,l+2}(t_{k}|t_{k})-\xi^{p}_{0,l+2}(t_{k}|t_{k})-\hat{\varDelta}_{i,l+1}(t_{k})\right)
\displaystyle+f_{i}(x_{i}^{p}(t_{k}|t_{k}))+G_{i}(x_{i}^{p}(t_{k}|t_{k}))u_{i}^{0}(t_{k}|t_{k})-\dot{\xi}^{p}_{0,r}(t_{k}|t_{k})\biggr{)}
\displaystyle=-c_{i}\|s^{p}_{i}(t_{k}|t_{k})\|^{2}-k_{s_{i}}{s_{i}^{p}}^{\top}(t_{k}|t_{k})\left(\chi_{i}\left(v_{i}(t_{k}),\underline{u}_{i},\overline{u}_{i}\right)\underline{\chi}_{i}^{-1}-I_{n}\right)
\displaystyle\text{sgn}(s_{i}^{p}(t_{k}|t_{k}))-k_{s_{i}}{s_{i}^{p}}^{\top}(t_{k}|t_{k})\text{sgn}(s_{i}^{p}(t_{k}|t_{k}))+{s_{i}^{p}}^{\top}(t_{k}|t_{k})
\displaystyle\left(\chi_{i}\left(v_{i}(t_{k}),\underline{u}_{i},\overline{u}_{i}\right)-I_{n}\right)v_{i}^{a}(t_{k}) (24)

Recalling inequality (22), it can be obtained that I_{n}\leqslant\chi_{i}\left(v_{i}(t_{k}),\underline{u}_{i},\overline{u}_{i}\right)\underline{\chi}_{i}^{-1}. Combining this with (23), we have

\displaystyle{s_{i}^{p}}^{\top}(t_{k}|t_{k})\biggl{(}\sum_{l=0}^{r-2}\lambda_{i,l}\left(x^{p}_{i,l+2}(t_{k}|t_{k})-\xi^{p}_{0,l+2}(t_{k}|t_{k})-\hat{\varDelta}_{i,l+1}(t_{k})\right)
\displaystyle+f_{i}(x_{i}^{p}(t_{k}|t_{k}))+G_{i}(x_{i}^{p}(t_{k}|t_{k}))u_{i}^{0}(t_{k}|t_{k})-\dot{\xi}^{p}_{0,r}(t_{k}|t_{k})\biggr{)}
\displaystyle\leqslant-c_{i}\|s_{i}^{p}(t_{k}|t_{k})\|^{2}-k_{s_{i}}\|{s_{i}^{p}}(t_{k}|t_{k})\|_{1}+\|\left(\chi_{i}\left(v_{i}(t_{k}),\underline{u}_{i},\overline{u}_{i}\right)\right.
\displaystyle\left.-I_{n}\right)v_{i}^{a}(t_{k})\|_{1}\|{s_{i}^{p}}(t_{k}|t_{k})\|_{1}
\displaystyle\leqslant-c_{i}\|s_{i}^{p}(t_{k}|t_{k})\|^{2} (25)

which satisfies the Lyapunov-based stability constraint (19l). Therefore, it can be concluded that ui0u_{i}^{0} is a feasible solution to the optimization problem (19).
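The componentwise saturation (21) and its degree indicator can be sketched numerically as follows; the vectorized helpers `chi` and `sat` are illustrative names, assuming the bounds \underline{u}_{i} and \overline{u}_{i} are positive vectors:

```python
import numpy as np

def chi(v, u_lo, u_hi):
    # Saturation-degree indicator from (21): each entry chi_j satisfies
    # sat(v)_j = chi_j * v_j and lies in (0, 1].  `u_lo`, `u_hi` are the
    # positive bounds, so the input is confined to [-u_lo_j, u_hi_j].
    c = np.ones_like(v)
    hi = v >= u_hi
    lo = v <= -u_lo
    c[hi] = u_hi[hi] / v[hi]
    c[lo] = -u_lo[lo] / v[lo]
    return c

def sat(v, u_lo, u_hi):
    # Elementwise saturation written in the multiplicative form of (21).
    return chi(v, u_lo, u_hi) * v
```

Writing the saturation multiplicatively, as in (21), is what allows the proof above to factor the saturated input as \chi_{i}v_{i} and bound the mismatch term through (22) and (23).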

Given the feasibility of the optimization problem (19), an optimal control profile u_{i}^{*}(t|t_{k}) for t\in[t_{k},t_{k}+T] can always be found by solving it at t_{k}. This optimal solution is then implemented in a receding horizon manner: u_{i}^{*}(t|t_{k}) is applied to the ith follower until the next measurement is available, so the actual control command u_{i}(t) for t\in[t_{k},t_{k+1}) is

ui(t)=ui(tk|tk)\displaystyle u_{i}(t)=u_{i}^{*}(t_{k}|t_{k}) (26)

When the new measurement is updated at tk+1t_{k+1}, the optimization problem (19) will be solved again with tkt_{k} replaced by tk+1t_{k+1}, and a new optimal control profile ui(t|tk+1)u_{i}^{*}(t|t_{k+1}) for t[tk+1,tk+1+T]t\in[t_{k+1},t_{k+1}+T] will be found. In turn, the newly found optimal control profile updates the actual control command ui(t)u_{i}(t) for t[tk+1,tk+2)t\in[t_{k+1},t_{k+2}).
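The receding-horizon implementation described above can be illustrated with a toy single-integrator follower tracking an oscillatory leader, where the feasible fallback controller of (20) stands in for the full optimizer; every numerical value below (dynamics, gain, bounds, sampling period) is an assumed example rather than a quantity from the paper:

```python
import numpy as np

S0 = np.array([[0.0, 1.0], [-1.0, 0.0]])   # assumed leader dynamics (oscillator)
c_i = 2.0                                   # assumed control gain
u_lo, u_hi = np.full(2, 3.0), np.full(2, 3.0)
dt, t_samp = 0.01, 0.05                     # integration step and sampling period

def sat(v, lo, hi):
    return np.clip(v, -lo, hi)

xi = np.array([1.0, 0.0])   # leader state
x = np.zeros(2)             # follower state, single integrator: x_dot = u
u = np.zeros(2)
steps_per_sample = round(t_samp / dt)

for k in range(int(5.0 / dt)):
    if k % steps_per_sample == 0:
        # At each sampling instant, "solve" the OCP by falling back to the
        # feasible controller (20): v^a = -c_i s + leader feedforward,
        # then hold the first input sample until the next measurement.
        s = x - xi
        u = sat(-c_i * s + S0 @ xi, u_lo, u_hi)
    x = x + dt * u             # follower Euler step under the held input
    xi = xi + dt * (S0 @ xi)   # leader Euler step

err = np.linalg.norm(x - xi)   # residual tracking error after 5 s
```

The held input is refreshed only at the sampling instants, so the residual error is limited by the sample-and-hold mismatch rather than by the controller itself.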

Finally, we can have that with the initial conditions (19i) and (19j) specified and the stability condition (19l) satisfied, the following inequality holds for t\in[t_{k},t_{k+1})

\displaystyle{s}_{i}^{\top}(t_{k})\biggl{(}\sum_{l=0}^{r-2}\lambda_{i,l}\left(x_{i,l+2}(t_{k})-\hat{\xi}_{i,l+2}(t_{k})\right)+f_{i}({x}_{i}(t_{k}))+G_{i}({x}_{i}(t_{k}))
\displaystyle u_{i}(t)-\dot{\hat{\xi}}_{i,r}(t_{k})\biggr{)}\leqslant-c_{i}\left\|{s}_{i}(t_{k})\right\|^{2} (27)

4 Closed-loop Stability Analysis

Given that the local control system includes two decoupled components, an adaptive observer and an MPC-based controller, we can perform the closed-loop stability analysis in a two-step manner. This approach allows us to assess the stability contributions of the observer and the controller separately. In this section, we first examine the closed-loop performance of the adaptive observer network to prove that the global estimation errors converge asymptotically. Subsequently, we evaluate the MPC-based controller, verifying its ability to maintain system stability and formation tracking control performance based on the observer's estimates.

4.1 Convergence of Estimation

First of all, let us focus on the estimation performance of the observers. We start by denoting the collective vectors of \hat{\xi}_{i}, \hat{S}_{i} and \hat{\varDelta}_{i} for i=1,2,\cdots,M as \hat{\xi}=\left[\hat{\xi}^{\top}_{1}\ \hat{\xi}^{\top}_{2}\ \cdots\ \hat{\xi}^{\top}_{M}\right]^{\top}, \hat{S}={\rm diag}(\hat{S}_{1},\hat{S}_{2},\cdots,\hat{S}_{M}) and \hat{\varDelta}=\left[\hat{\varDelta}^{\top}_{1}\ \hat{\varDelta}^{\top}_{2}\ \cdots\ \hat{\varDelta}^{\top}_{M}\right]^{\top}. Then, we can define the following collective estimation errors:

ξ~=\displaystyle\tilde{\xi}= [ξ~1ξ~2ξ~M]=ξ^ξ\displaystyle\begin{bmatrix}\tilde{\xi}_{1}^{\top}\ \tilde{\xi}_{2}^{\top}\ \cdots\ \tilde{\xi}_{M}^{\top}\end{bmatrix}^{\top}=\hat{\xi}-\xi (28)
S~=\displaystyle\tilde{S}= diag(S~1,S~2,,S~M)=S^(IMS0)\displaystyle{\rm diag}(\tilde{S}_{1},\tilde{S}_{2},\cdots,\tilde{S}_{M})=\hat{S}-\left(I_{M}\otimes S_{0}\right) (29)
Δ~=\displaystyle\tilde{\varDelta}= [Δ~1Δ~2Δ~M]=Δ^Δ\displaystyle\begin{bmatrix}\tilde{\varDelta}_{1}^{\top}\ \tilde{\varDelta}_{2}^{\top}\ \cdots\ \tilde{\varDelta}_{M}^{\top}\end{bmatrix}^{\top}=\hat{\varDelta}-\varDelta (30)

We can also define a vector form of the leader matrix estimation error as

S~=[S~1S~2S~M]=\displaystyle\tilde{\vec{S}}\!=\!\left[\tilde{\vec{S}}_{1}^{\top}\ \tilde{\vec{S}}_{2}^{\top}\ \cdots\ \tilde{\vec{S}}_{M}^{\top}\right]^{\!\top}\!\!\!\!= [vec(S^1)vec(S^2)vec(S^M)]\displaystyle\!\left[{\rm vec}(\hat{S}_{1})^{\!\top}\ {\rm vec}(\hat{S}_{2})^{\!\top}\cdots{\rm vec}(\hat{S}_{M})^{\!\top}\!\right]^{\!\top}\!\!\!
(1Mvec(S0))\displaystyle-\left(1_{M}\otimes{\rm vec}(S_{0})\right) (31)

By recalling the adaptive distributed observer design in (12), (13), and (16), we can have the dynamics of these global estimation errors as follows

\displaystyle\dot{\tilde{\xi}}=\left(I_{M}\otimes S_{0}+\tilde{S}\right)\tilde{\xi}-\left(C_{\xi}\otimes I_{rn}\right)\epsilon_{\xi}+\tilde{S}\xi (32)
S~˙\displaystyle\dot{\tilde{S}} =((CS+C˙S)Irn)ϵS\displaystyle=-\left(\left(C_{S}+\dot{C}_{S}\right)\otimes I_{rn}\right)\epsilon_{S} (33)
S~˙\displaystyle\dot{\tilde{\vec{S}}} =((CS+C˙S)I(rn)2)ϵS\displaystyle=-\left(\left(C_{S}+\dot{C}_{S}\right)\otimes I_{(rn)^{2}}\right)\vec{\epsilon}_{S} (34)
Δ~˙\displaystyle\dot{\tilde{\varDelta}} =((CΔ+C˙Δ)Irn)ϵΔ\displaystyle=-\left(\left(C_{\!\varDelta}+\dot{C}_{\!\varDelta}\right)\otimes I_{rn}\right)\epsilon_{\!\varDelta} (35)

where C_{\xi}={\rm diag}(c_{\xi_{1}},c_{\xi_{2}},\cdots,c_{\xi_{M}}), C_{S}={\rm diag}(c_{S_{1}},c_{S_{2}},\cdots,c_{S_{M}}), C_{\!\varDelta}={\rm diag}(c_{\varDelta_{1}},c_{\varDelta_{2}},\cdots,c_{\varDelta_{M}}), \epsilon_{\xi}=\left[\epsilon^{\top}_{\xi_{1}}\ \epsilon^{\top}_{\xi_{2}}\ \cdots\ \epsilon^{\top}_{\xi_{M}}\right]^{\top}, {\epsilon}_{S}={\rm diag}({\epsilon}_{S_{1}},{\epsilon}_{S_{2}},\cdots,{\epsilon}_{S_{M}}), \vec{\epsilon}_{S}=\left[\vec{\epsilon}^{\top}_{S_{1}}\ \vec{\epsilon}^{\top}_{S_{2}}\ \cdots\ \vec{\epsilon}^{\top}_{S_{M}}\right]^{\top}, \epsilon_{\!\varDelta}=\left[\epsilon^{\top}_{\varDelta_{1}}\ \epsilon^{\top}_{\varDelta_{2}}\ \cdots\ \epsilon^{\top}_{\varDelta_{M}}\right]^{\top}.

We introduce a new notation, f(t)=f(t)+f(t)\mathcal{L^{\it f}_{B}}(t)=\mathcal{L}^{f}(t)+\mathcal{B}^{f}(t), to represent the superposition of the Laplacian matrix and the pinning matrix. This notation allows us to elucidate the relationships between global estimation errors and collective local estimation errors in the presence of communication link faults, as demonstrated below:

ϵξ=\displaystyle\epsilon_{\xi}= (f(t)Irn)ξ~\displaystyle\left(\mathcal{L^{\it f}_{B}}(t)\otimes I_{rn}\right)\tilde{\xi} (36)
ϵS=\displaystyle{\epsilon}_{S}= (f(t)Irn)S~\displaystyle\left(\mathcal{L^{\it f}_{B}}(t)\otimes I_{rn}\right){\tilde{S}} (37)
ϵS=\displaystyle\vec{\epsilon}_{S}= (f(t)I(rn)2)S~\displaystyle\left(\mathcal{L^{\it f}_{B}}(t)\otimes I_{(rn)^{2}}\right){\tilde{\vec{S}}} (38)
ϵΔ=\displaystyle\epsilon_{\!\varDelta}= (f(t)Irn)Δ~\displaystyle\left(\mathcal{L^{\it f}_{B}}(t)\otimes I_{rn}\right)\tilde{\varDelta} (39)
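Relations (36)-(39) are simple Kronecker-product identities. The sketch below builds \mathcal{L^{\it f}_{B}} for an assumed three-follower chain graph (only agent 1 pinned to the leader) and verifies that the stacked local errors equal \left(\mathcal{L^{\it f}_{B}}\otimes I_{n}\right) times the global error; the graph, dimensions, and random errors are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2                                   # per-agent state dimension (stand-in for rn)
A = np.array([[0., 0., 0.],             # assumed adjacency: 2 <- 1, 3 <- 2
              [1., 0., 0.],
              [0., 1., 0.]])
B = np.diag([1., 0., 0.])               # pinning matrix: only agent 1 sees the leader
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
L_B = L + B                             # the matrix L_B^f(t), frozen at one instant

xi_tilde = rng.standard_normal(3 * n)   # stacked global errors xi_hat_i - xi
eps_global = np.kron(L_B, np.eye(n)) @ xi_tilde   # right-hand side of (36)

# Recompute each agent's local error directly from its neighbours:
# eps_i = sum_j a_ij (xi_tilde_i - xi_tilde_j) + b_i xi_tilde_i
xi_t = xi_tilde.reshape(3, n)
eps_local = np.concatenate([
    sum(A[i, j] * (xi_t[i] - xi_t[j]) for j in range(3)) + B[i, i] * xi_t[i]
    for i in range(3)
])
```

The same identity underlies (37)-(39), with I_{n} replaced by the identity of the appropriate dimension.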

Subsequently, we can derive the dynamics of the local estimation errors, which are outlined below

\displaystyle\dot{\epsilon}_{\xi}=\left(\mathcal{L^{\it f}_{B}}(t)\otimes I_{rn}\right)\dot{\tilde{\xi}}+\left(\dot{\mathcal{L}}^{f}_{\mathcal{B}}(t)\otimes I_{rn}\right)\tilde{\xi}
\displaystyle=\left(I_{M}\otimes S_{0}+\tilde{S}-\left(\mathcal{L^{\it f}_{B}}(t)C_{\xi}\right)\otimes I_{rn}\right)\epsilon_{\xi}+\epsilon_{S}\xi
\displaystyle\hskip 14.22636pt+\left(\dot{\mathcal{L}}^{f}_{\mathcal{B}}(t)\otimes I_{rn}\right)\tilde{\xi} (40)
\displaystyle\dot{\epsilon}_{S}=\left(\mathcal{L^{\it f}_{B}}(t)\otimes I_{rn}\right)\dot{\tilde{S}}+\left(\dot{\mathcal{L}}^{f}_{\mathcal{B}}(t)\otimes I_{rn}\right)\tilde{S}
\displaystyle=-\left(\mathcal{L^{\it f}_{B}}(t)\left(C_{S}+\dot{C}_{S}\right)\otimes I_{rn}\right){\epsilon}_{S}+\left(\dot{\mathcal{L}}^{f}_{\mathcal{B}}(t)\otimes I_{rn}\right)\tilde{S} (41)
\displaystyle\dot{\vec{\epsilon}}_{S}=-\left(\mathcal{L^{\it f}_{B}}(t)\left(C_{S}+\dot{C}_{S}\right)\otimes I_{(rn)^{2}}\right)\vec{\epsilon}_{S}+\left(\dot{\mathcal{L}}^{f}_{\mathcal{B}}(t)\otimes I_{(rn)^{2}}\right)\tilde{\vec{S}} (42)
\displaystyle\dot{\epsilon}_{\!\varDelta}=\left(\mathcal{L^{\it f}_{B}}(t)\otimes I_{rn}\right)\dot{\tilde{\varDelta}}+\left(\dot{\mathcal{L}}^{f}_{\mathcal{B}}(t)\otimes I_{rn}\right)\tilde{\varDelta}
\displaystyle=-\left(\mathcal{L^{\it f}_{B}}(t)\left(C_{\!\varDelta}+\dot{C}_{\!\varDelta}\right)\otimes I_{rn}\right)\epsilon_{\!\varDelta}+\left(\dot{\mathcal{L}}^{f}_{\mathcal{B}}(t)\otimes I_{rn}\right)\tilde{\varDelta} (43)

Having derived the dynamics of the error, we can then move on to formulate the first theorem regarding the convergence of distributed adaptive estimation. Before proceeding, however, it is essential to establish several foundational lemmas. These lemmas are building blocks for the proof of the main theorem.

Lemma 3

Under Assumptions 2 and 4, the following relationships hold

ξ~\displaystyle\left\|\tilde{\xi}\right\| ϵξλmin(f(t))\displaystyle\leqslant\frac{\|\epsilon_{\xi}\|}{\lambda_{\min}(\mathcal{L^{\it f}_{B}}(t))} (44)
S~\displaystyle\left\|\tilde{\vec{S}}\right\| ϵSλmin(f(t))\displaystyle\leqslant\frac{\|\vec{\epsilon}_{S}\|}{\lambda_{\min}(\mathcal{L^{\it f}_{B}}(t))} (45)
Δ~\displaystyle\left\|\tilde{\varDelta}\right\| ϵΔλmin(f(t))\displaystyle\leqslant\frac{\|\epsilon_{\!\varDelta}\|}{\lambda_{\min}(\mathcal{L^{\it f}_{B}}(t))} (46)
Proof 4.1.

Recalling Lemma 1, the matrix \mathcal{L^{\it f}_{B}}(t) is nonsingular and positive-definite. This guarantees that \left(\mathcal{L^{\it f}_{B}}(t)\right)^{-1} exists and is nonnegative. Then, it follows from (36) - (39) that

ξ~\displaystyle\tilde{\xi} =(f1(t)Irn)ϵξ\displaystyle=\left(\mathcal{L}^{f^{-1}}_{\mathcal{B}}(t)\otimes I_{rn}\right)\epsilon_{\xi} (47)
S~\displaystyle\tilde{\vec{S}} =(f1(t)I(rn)2)ϵS\displaystyle=\left(\mathcal{L}^{f^{-1}}_{\mathcal{B}}(t)\otimes I_{(rn)^{2}}\right)\vec{\epsilon}_{S} (48)
Δ~\displaystyle\tilde{\varDelta} =(f1(t)Irn)ϵΔ\displaystyle=\left(\mathcal{L}^{f^{-1}}_{\mathcal{B}}(t)\otimes I_{rn}\right)\epsilon_{\!\varDelta} (49)

It then follows that

ξ~\displaystyle\left\|\tilde{\xi}\right\| (f1(t)Irn)ϵξϵξλmin(f(t))\displaystyle\leqslant\left\|\left(\mathcal{L}^{f^{-1}}_{\mathcal{B}}(t)\otimes I_{rn}\right)\epsilon_{\xi}\right\|\leqslant\frac{\|\epsilon_{\xi}\|}{\lambda_{\min}(\mathcal{L^{\it f}_{B}}(t))} (50)
S~\displaystyle\left\|\tilde{\vec{S}}\right\| (f1(t)I(rn)2)ϵSϵSλmin(f(t))\displaystyle\leqslant\left\|\left(\mathcal{L}^{f^{-1}}_{\mathcal{B}}(t)\otimes I_{(rn)^{2}}\right)\vec{\epsilon}_{S}\right\|\leqslant\frac{\|\vec{\epsilon}_{S}\|}{\lambda_{\min}(\mathcal{L^{\it f}_{B}}(t))} (51)
Δ~\displaystyle\left\|\tilde{\varDelta}\right\| (f1(t)Irn)ϵΔϵΔλmin(f(t))\displaystyle\leqslant\left\|\left(\mathcal{L}^{f^{-1}}_{\mathcal{B}}(t)\otimes I_{rn}\right)\epsilon_{\!\varDelta}\right\|\leqslant\frac{\|\epsilon_{\!\varDelta}\|}{\lambda_{\min}(\mathcal{L^{\it f}_{B}}(t))} (52)

which proves Lemma 3.

Lemma 4

Define a diagonal matrix P(t)=\text{diag}\left(p_{1}(t),p_{2}(t),\cdots,p_{M}(t)\right) with \left[p_{1}(t)\ p_{2}(t)\ \cdots\ p_{M}(t)\right]^{\top}=\left(\mathcal{L^{\it f}_{B}}(t)\right)^{-1}{1}_{M}. If Assumptions 2 and 4 hold, then P(t) is positive-definite. Furthermore, the symmetric matrix Q(t) defined by

Q(t)=P(t)f(t)+f(t)P(t)\displaystyle Q(t)=P(t)\mathcal{L^{\it f}_{B}}(t)+\mathcal{L}_{\mathcal{B}}^{f\top}(t)P(t) (53)

is positive-definite as well. Additionally, both Q(t)Q(t) and its time derivative are bounded.

Proof 4.2.

The proof of Lemma 4 can be found under Lemmas 1 and 2 in [20], and is therefore omitted here for brevity.
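Lemma 4 can be checked numerically for a concrete graph. The sketch below uses an assumed three-follower chain with agent 1 pinned to the leader, computes p=\left(\mathcal{L^{\it f}_{B}}\right)^{-1}1_{M}, and verifies that both P and Q=P\mathcal{L^{\it f}_{B}}+\mathcal{L}_{\mathcal{B}}^{f\top}P are positive-definite; the graph is illustrative only:

```python
import numpy as np

# Assumed three-follower digraph: agent 1 pinned to the leader, 2 <- 1, 3 <- 2.
A = np.array([[0., 0., 0.],
              [1., 0., 0.],
              [0., 1., 0.]])
L_B = np.diag(A.sum(axis=1)) - A + np.diag([1., 0., 0.])  # L + B

p = np.linalg.solve(L_B, np.ones(3))   # p = (L_B^f)^{-1} 1_M
P = np.diag(p)
Q = P @ L_B + L_B.T @ P                # the symmetric matrix of (53)

min_eig_Q = np.linalg.eigvalsh(Q).min()
```

For this chain graph p=[1,2,3], so P is positive-definite, and the smallest eigenvalue of Q is strictly positive, as Lemma 4 asserts.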

Now, we present our first main result of the closed-loop analysis by the following theorem.

Theorem 1

Suppose that Assumptions 2 and 4 hold. Consider the M-agent system with the virtual leader (7), interconnected via the weighted directed graph \mathcal{G}. Implement the leader dynamics (7), the distributed leader state observer (12), the leader dynamics observer (13) and the formation displacement observer (16) for i=1,2,\cdots,M. If the leader state observer gains c_{\xi_{i}} for i=1,2,\cdots,M are selected such that the following condition is satisfied

λmin(Cξ)>1+κκ0\displaystyle\lambda_{\min}(C_{\xi})>1+\frac{\kappa^{*}}{\kappa_{0}} (54)

where \kappa^{*}=\frac{5\kappa_{2}}{4\kappa_{0}}+\frac{5\kappa_{3}}{\kappa_{0}}+\frac{5\kappa_{4}}{\kappa_{0}}+\frac{5\kappa_{5}}{\kappa_{0}} with \kappa_{0}=\min_{\forall t\geqslant 0}\lambda_{\min}(Q(t)), \kappa_{1}=\max_{\forall t\geqslant 0}\lambda_{\max}\left(P^{2}(t)\right), \kappa_{2}=\max_{\forall t\geqslant 0}\lambda_{\max}\left(\dot{P}^{2}(t)\right), \kappa_{3}=\max_{\forall t\geqslant 0}\lambda_{\max}\left(P^{2}(t)\mathcal{L}^{f^{-\top}}_{\mathcal{B}}\!\!(t)\dot{\mathcal{L}}_{\mathcal{B}}^{f^{\top}}\!\!(t)\dot{\mathcal{L}}^{f}_{\mathcal{B}}(t)\mathcal{L^{\it f}_{B}}(t)\right), \kappa_{4}=\max_{\forall t\geqslant 0}\lambda_{\max}\left(P^{2}(t)\otimes S_{0}^{\top}S_{0}\right), and \kappa_{5}=\max_{\forall t\geqslant 0}\lambda_{\max}\left(\tilde{S}^{\top}\left(P^{2}(t)\otimes I_{rn}\right)\tilde{S}\right), then all signals within the observer network are globally bounded. Moreover, all the estimation errors, \tilde{\xi}, \tilde{S}, \tilde{\vec{S}} and \tilde{\varDelta}, asymptotically converge to the origin.

Proof 4.3.

To prove the convergence of the observer network, we divide the proof into three parts. In Parts 1 and 2, the convergence of the displacement estimation error and of the dynamics matrix estimation error are proven, respectively. In Part 3, we then prove that the leader state estimation error asymptotically converges to the origin.

Part 1: To demonstrate the convergence of the formation displacement estimation error, we begin by examining its corresponding local estimation error ϵΔ\epsilon_{\varDelta}. A Lyapunov function candidate can be selected as follows

\displaystyle V_{\!\varDelta}=\sum_{i=1}^{M}\left(2c_{\varDelta_{i}}+\dot{c}_{\varDelta_{i}}\right)p_{i}(t)\epsilon_{\varDelta_{i}}^{\top}\epsilon_{\varDelta_{i}}+\sum_{i=1}^{M}\left(c_{\varDelta_{i}}-\alpha_{\!\varDelta}\right)^{2}
\displaystyle=\epsilon_{\!\varDelta}^{\top}\left(\left(2C_{\!\varDelta}+\dot{C}_{\!\varDelta}\right)P(t)\otimes I_{rn}\right)\epsilon_{\!\varDelta}+\text{tr}\left(\left(C_{\varDelta}-\alpha_{\!\varDelta}I_{M}\right)^{2}\right) (55)

where αΔ\alpha_{\!\varDelta} is a positive constant to be determined later; P(t)P(t) is defined in Lemma 4.

Taking the time derivative of VΔV_{\!\varDelta} gives

\displaystyle\dot{V}_{\!\varDelta}=4\sum_{i=1}^{M}\left(c_{\varDelta_{i}}+\dot{c}_{\varDelta_{i}}\right)p_{i}(t)\epsilon_{\varDelta_{i}}^{\top}\dot{\epsilon}_{\varDelta_{i}}+2\sum_{i=1}^{M}\dot{c}_{\varDelta_{i}}p_{i}(t)\epsilon_{\varDelta_{i}}^{\top}\epsilon_{\varDelta_{i}}
\displaystyle+\sum_{i=1}^{M}\left(2c_{\varDelta_{i}}+\dot{c}_{\varDelta_{i}}\right)\dot{p}_{i}(t)\epsilon_{\varDelta_{i}}^{\top}\epsilon_{\varDelta_{i}}+2\sum_{i=1}^{M}\left(c_{\varDelta_{i}}-\alpha_{\varDelta}\right)\dot{c}_{\varDelta_{i}}
=\displaystyle= 4ϵΔ((CΔ+C˙Δ)P(t)Irn)ϵ˙Δ+2ϵΔ(C˙ΔP(t)Irn)ϵΔ\displaystyle 4\epsilon_{\!\varDelta}^{\top}\left(\left(C_{\!\varDelta}\!+\!\dot{C}_{\!\varDelta}\right)\!P(t)\!\otimes\!I_{rn}\right)\dot{\epsilon}_{\!\varDelta}\!+\!2\epsilon_{\!\varDelta}^{\top}\left(\dot{C}_{\!\varDelta}P(t)\!\otimes\!I_{rn}\right)\!{\epsilon}_{\!\varDelta}\!
+ϵΔ((2CΔ+C˙Δ)P˙(t)Irn)ϵΔ+2ϵΔ(CΔIrn)ϵΔ\displaystyle+\!\epsilon_{\!\varDelta}^{\top}\left(\left(2C_{\!\varDelta}\!+\!\dot{C}_{\!\varDelta}\right)\dot{P}(t)\!\otimes\!I_{rn}\right){\epsilon}_{\!\varDelta}+2\epsilon_{\!\varDelta}^{\top}\left(C_{\varDelta}\otimes I_{rn}\right)\epsilon_{\!\varDelta}
2αΔϵΔϵΔ\displaystyle-2\alpha_{\!\varDelta}\epsilon_{\!\varDelta}^{\top}\epsilon_{\!\varDelta} (56)

We have \dot{c}_{\varDelta_{i}}\geqslant 0 and c_{\varDelta_{i}}(t)\geqslant 1. Substituting (43) into (56) gives

V˙Δ\displaystyle\dot{V}_{\!\varDelta}\!\!\leqslant\! 2κ0ϵΔ((CΔ+C˙Δ)2Irn)ϵΔ+2ϵΔ(C˙ΔP(t)Irn)ϵΔ\displaystyle-2\kappa_{0}\epsilon_{\!\varDelta}^{\top}\!\left(\left(C_{\!\varDelta}\!+\!\dot{C}_{\!\varDelta}\right)^{\!2}\!\otimes\!I_{rn}\right)\!{\epsilon}_{\!\varDelta}\!+\!2\epsilon_{\!\varDelta}^{\top}\!\!\left(\!\dot{C}_{\!\varDelta}P(t)\!\otimes\!I_{rn}\!\right)\!{\epsilon}_{\!\varDelta}
+ϵΔ((2CΔ+C˙Δ)P˙(t)Irn)ϵΔ+2ϵΔ(CΔIrn)ϵΔ\displaystyle+\epsilon_{\!\varDelta}^{\top}\!\left(\left(2C_{\!\varDelta}+\dot{C}_{\!\varDelta}\right)\!\dot{P}(t)\otimes I_{rn}\right)\!{\epsilon}_{\!\varDelta}\!+\!2\epsilon_{\!\varDelta}^{\top}\!\left(C_{\varDelta}\!\otimes\!I_{rn}\right)\epsilon_{\!\varDelta}\!
+4ϵΔ((CΔ+C˙Δ)P(t)˙f(t)Irn)Δ~2αΔϵΔϵΔ\displaystyle+\!4\epsilon_{\!\varDelta}^{\top}\!\left(\left(C_{\!\varDelta}\!+\!\dot{C}_{\!\varDelta}\right)\!\!P(t)\dot{\mathcal{L}}^{f}_{\mathcal{B}}(t)\!\otimes\!I_{rn}\right)\!\tilde{\varDelta}\!-\!2\alpha_{\!\varDelta}\epsilon_{\!\varDelta}^{\top}\epsilon_{\!\varDelta}\!\!\! (57)

where \kappa_{0}=\min_{\forall t\geqslant 0}\lambda_{\min}(Q(t)).

Applying Young’s inequality, one has

2ϵΔ(C˙ΔP(t)Irn)ϵΔκ02ϵΔ(C˙Δ2Irn)ϵΔ\displaystyle\!\!\!\!2\epsilon_{\!\varDelta}^{\top}\!\left(\dot{C}_{\!\varDelta}{P}(t)\!\otimes\!I_{rn}\right)\!{\epsilon}_{\!\varDelta}\!\leqslant\frac{\kappa_{0}}{2}\epsilon_{\!\varDelta}^{\top}\!\left(\dot{C}_{\!\varDelta}^{2}\!\otimes\!I_{rn}\right)\!{\epsilon}_{\!\varDelta}\!
+2κ0ϵΔ(P2(t)Irn)ϵΔ\displaystyle\hskip 99.58464pt+\!\frac{2}{\kappa_{0}}\epsilon_{\!\varDelta}^{\top}\!\left({P}^{2}(t)\!\otimes\!I_{rn}\right)\!{\epsilon}_{\!\varDelta} (58a)
ϵΔ((2CΔ+C˙Δ)P˙(t)Irn)ϵΔκ04ϵΔ((CΔ2+C˙Δ2)Irn)ϵΔ\displaystyle\!\!\!\!\epsilon_{\!\varDelta}^{\top}\!\!\left(\left(2C_{\!\varDelta}\!+\!\dot{C}_{\!\varDelta}\right)\!\dot{P}(t)\!\otimes\!I_{rn}\right)\!\!{\epsilon}_{\!\varDelta}\!\leqslant\frac{\!\kappa_{0}\!}{4}\epsilon_{\!\varDelta}^{\top}\!\!\left(\left({C}_{\!\varDelta}^{2}\!+\!\dot{C}_{\!\varDelta}^{2}\right)\!\!\otimes\!I_{rn}\right)\!{\epsilon}_{\!\varDelta}\!
+5κ0ϵΔ(P˙2(t)Irn)ϵΔ\displaystyle\hskip 128.0374pt+\!\frac{5}{\!\kappa_{0}\!}\epsilon_{\!\varDelta}^{\top}\!\!\left(\!\dot{P}^{2}(t)\!\otimes\!I_{rn}\right)\!{\epsilon}_{\!\varDelta}\!\!\!\!\! (58b)
2ϵΔ(CΔIrn)ϵΔκ02ϵΔ(CΔ2Irn)ϵΔ+2κ0ϵΔϵΔ\displaystyle\!\!\!\!2\epsilon_{\!\varDelta}^{\top}\left(C_{\varDelta}\otimes I_{rn}\right)\epsilon_{\!\varDelta}\leqslant\frac{\kappa_{0}}{2}\epsilon_{\!\varDelta}^{\top}\left({C}_{\!\varDelta}^{2}\otimes I_{rn}\right){\epsilon}_{\!\varDelta}+\frac{2}{\kappa_{0}}\epsilon_{\!\varDelta}^{\top}{\epsilon}_{\!\varDelta} (58c)
4ϵΔ((CΔ+C˙Δ)P(t)˙f(t)Irn)Δ~κ04ϵΔ((CΔ+C˙Δ)2Irn)ϵΔ\displaystyle\!\!\!\!4\epsilon_{\!\varDelta}^{\top}\!\!\left(\!\left(\!C_{\!\varDelta}\!+\!\dot{C}_{\!\varDelta}\!\right)\!\!P\!(t)\dot{\mathcal{L}}^{f}_{\mathcal{B}}(t)\!\otimes\!I_{rn}\right)\!\tilde{\varDelta}\!\leqslant\!\!\frac{\kappa_{0}}{4}\epsilon_{\!\varDelta}^{\top}\!\!\Bigl{(}\!\left(\!{C}_{\!\varDelta}\!+\!\dot{C}_{\!\varDelta}\!\right)^{2}\!\!\!\otimes\!I_{rn}\!\Bigr{)}\!{\epsilon}_{\!\varDelta}
+16κ0ϵΔ(P2(t)f(t)˙f(t)˙f(t)f(t)Irn)ϵΔ\displaystyle\!\!\!\!\!+\!\frac{16}{\kappa_{0}}\epsilon_{\!\varDelta}^{\top}\!\!\left(\!P^{2}(t)\mathcal{L}^{f^{-\top}}_{\mathcal{B}}\!\!(t)\dot{\mathcal{L}}_{\mathcal{B}}^{f^{\top}}\!\!(t)\dot{\mathcal{L}}^{f}_{\mathcal{B}}(t)\mathcal{L^{\it f}_{B}}(t)\!\otimes\!I_{rn}\!\right)\!\epsilon_{\!\varDelta}\!\!\! (58d)
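Each step in (58) is an instance of Young's inequality 2ab\leqslant\frac{\kappa_{0}}{2}a^{2}+\frac{2}{\kappa_{0}}b^{2} applied blockwise to commuting diagonal factors. The sketch below verifies the shape of (58a) numerically for random positive diagonal matrices; all dimensions and values are assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
M, rn = 4, 3
kappa0 = 0.7                               # any positive constant works here
C_dot = np.diag(rng.uniform(0.0, 2.0, M))  # stand-in for \dot{C}_Delta >= 0
P = np.diag(rng.uniform(0.1, 2.0, M))      # stand-in for the diagonal P(t) > 0
eps = rng.standard_normal(M * rn)          # stand-in for the stacked local error

I = np.eye(rn)
# Left-hand side of (58a): 2 eps^T (C_dot P (x) I) eps
lhs = 2 * eps @ np.kron(C_dot @ P, I) @ eps
# Right-hand side: (k0/2) eps^T (C_dot^2 (x) I) eps + (2/k0) eps^T (P^2 (x) I) eps
rhs = (kappa0 / 2) * eps @ np.kron(C_dot @ C_dot, I) @ eps \
    + (2 / kappa0) * eps @ np.kron(P @ P, I) @ eps
```

Because the factors are diagonal, the inequality reduces to the scalar Young bound applied to each entry, so it holds for every draw and every positive \kappa_{0}.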

We define κ1=maxt0λmax(P2(t))\kappa_{1}=\max_{\forall t\geqslant 0}\lambda_{\max}\left(P^{2}(t)\right), κ2=maxt0λmax(P˙2(t))\kappa_{2}=\max_{\forall t\geqslant 0}\lambda_{\max}\left(\dot{P}^{2}(t)\right), and κ3=maxt0λmax(P2(t)f(t)˙f(t)˙f(t)f(t))\kappa_{3}=\max_{\forall t\geqslant 0}\lambda_{\max}\left(P^{2}(t)\mathcal{L}^{f^{-\top}}_{\mathcal{B}}\!\!(t)\dot{\mathcal{L}}_{\mathcal{B}}^{f^{\top}}\!\!(t)\dot{\mathcal{L}}^{f}_{\mathcal{B}}(t)\mathcal{L^{\it f}_{B}}(t)\right). Then, substituting the above inequalities into V˙Δ\dot{V}_{\!\varDelta} yields

V˙Δλ0ϵΔ((CΔ+C˙Δ)2Irn)ϵΔ+κϵΔϵΔ2αΔϵΔϵΔ\displaystyle\dot{V}_{\!\varDelta}\!\leqslant\!-\!{\lambda}_{0}\epsilon_{\!\varDelta}^{\top}\!\Bigl{(}\!\left(\!C_{\!\varDelta}\!+\!\dot{C}_{\!\varDelta}\!\right)^{\!2}\!\!\otimes\!I_{rn}\Bigr{)}{\epsilon}_{\!\varDelta}+{\kappa}{\epsilon}_{\!\varDelta}^{\top}{\epsilon}_{\!\varDelta}-2\alpha_{\!\varDelta}{\epsilon}_{\!\varDelta}^{\top}{\epsilon}_{\!\varDelta} (59)

where \lambda_{0}=\frac{\kappa_{0}}{2} and \kappa=\frac{2\kappa_{1}}{\kappa_{0}}+\frac{5\kappa_{2}}{\kappa_{0}}+\frac{2}{\kappa_{0}}+\frac{16\kappa_{3}}{\kappa_{0}}. There exists a bounded constant \alpha_{\!\varDelta} satisfying \alpha_{\!\varDelta}\geqslant\frac{\kappa}{2} such that

\displaystyle\dot{V}_{\!\varDelta}\leqslant-\lambda_{0}\epsilon_{\!\varDelta}^{\top}\!\left(\left(C_{\!\varDelta}+\dot{C}_{\!\varDelta}\right)^{\!2}\!\otimes\!I_{rn}\right)\!{\epsilon}_{\!\varDelta} (60)

which implies that all signals in the developed observer including ϵΔ\epsilon_{\!\varDelta}, CΔC_{\!\varDelta} and C˙Δ\dot{C}_{\!\varDelta} are globally bounded under communication link faults.

To establish the convergence of ϵΔ\epsilon_{\!\varDelta}, we integrate (60) over [0,t][0,t] as

VΔ(t)VΔ(0)\displaystyle V_{\!\varDelta}(t)\!-\!V_{\!\varDelta}(0) λ00tϵΔ((CΔ+C˙Δ)2Irn)ϵΔdτ\displaystyle\!\leqslant\!-{\lambda}_{0}\!\int_{0}^{t}\!\epsilon_{\!\varDelta}^{\top}\!\left(\left(C_{\!\varDelta}\!+\!\dot{C}_{\!\varDelta}\right)^{\!2}\!\otimes\!I_{rn}\!\right)\!{\epsilon}_{\!\varDelta}\text{d}\tau (61)

With the fact that c˙Δi0\dot{c}_{\varDelta_{i}}\geqslant 0 and cΔi1{c}_{\varDelta_{i}}\geqslant 1, one has

\displaystyle 0\leqslant V_{\!\varDelta}(t)\leqslant-{\lambda}_{0}\int_{0}^{t}\epsilon_{\!\varDelta}^{\top}\left(C_{\!\varDelta}\otimes I_{rn}\right){\epsilon}_{\!\varDelta}\text{d}\tau+V_{\!\varDelta}(0)
\displaystyle\leqslant-\lambda_{0}\int_{0}^{t}\epsilon_{\varDelta}^{\top}\epsilon_{\varDelta}\text{d}\tau+V_{\!\varDelta}(0) (62)

or equivalently

\displaystyle 0\leqslant\int_{0}^{t}\epsilon_{\varDelta}^{\top}\epsilon_{\varDelta}\text{d}\tau\leqslant\frac{V_{\!\varDelta}(0)}{\lambda_{0}} (63)

This demonstrates that 0tϵΔϵΔdτ\int_{0}^{t}\epsilon_{\varDelta}^{\top}\epsilon_{\varDelta}\text{d}\tau is bounded. By applying Barbalat’s Lemma, we establish that ϵΔ\epsilon_{\varDelta} converges to zero. Furthermore, according to Lemma 3, it can be deduced that the estimation error Δ~\tilde{\varDelta} also globally converges to zero, even in scenarios involving communication link faults.
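The mechanism of Part 1, an adaptive gain that grows with the squared local error until the error is driven to zero while the gain itself stays bounded, can be illustrated by a scalar simulation; the gain law \dot{c}=\epsilon^{2} mirrors the cancellation used in (56), and all numerical values are assumed:

```python
import numpy as np

# Scalar analogue of the Part-1 error dynamics on a frozen graph:
#   eps_dot = -(c + c_dot) * eps,   c_dot = eps**2,  with c(0) >= 1.
dt, T = 1e-3, 20.0
eps, c = 1.5, 1.0          # local estimation error and adaptive gain
for _ in range(int(T / dt)):
    c_dot = eps**2
    eps += dt * (-(c + c_dot) * eps)   # error decays at least as fast as e^{-t}
    c += dt * c_dot                    # gain is nondecreasing
```

Since c\geqslant 1 throughout, the error decays at least exponentially, and the gain increment \int\epsilon^{2}\text{d}\tau is finite, which is precisely the Barbalat-style argument above.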

Part 2: Similarly, the convergence of the leader dynamics estimation error can be proven. We first select a candidate Lyapunov function as

\displaystyle V_{S}=\sum_{i=1}^{M}\left(2c_{S_{i}}+\dot{c}_{S_{i}}\right)p_{i}(t)\vec{\epsilon}_{S_{i}}^{\top}\vec{\epsilon}_{S_{i}}+\sum_{i=1}^{M}\left(c_{S_{i}}-\alpha_{S}\right)^{2}
\displaystyle=\vec{\epsilon}_{S}^{\top}\!\!\left(\left(2C_{S}\!+\!\dot{C}_{S}\right)\!P(t)\!\otimes\!I_{(rn)^{2}}\!\right)\!\vec{\epsilon}_{S}\!+\!\text{tr}\!\left(\!\left(C_{S}\!-\!\alpha_{S}I_{M}\right)^{2}\right) (64)

Following the same procedure, the derivative of VSV_{S} can be bounded by

V˙Sλ0ϵS((CS+C˙S)2I(rn)2)ϵS\displaystyle\dot{V}_{S}\leqslant-{\lambda}_{0}\vec{\epsilon}_{S}^{\top}\!\left(\left(C_{S}+\dot{C}_{S}\right)^{\!2}\!\otimes\!I_{(rn)^{2}}\right)\!\vec{\epsilon}_{S} (65)

with αS\alpha_{S} appropriately chosen such that αSκ2\alpha_{S}\geqslant\frac{\kappa}{2}.

We can further have that

\displaystyle 0\leqslant\int_{0}^{t}\vec{\epsilon}_{S}^{\top}\vec{\epsilon}_{S}\text{d}\tau\leqslant\frac{V_{S}(0)}{\lambda_{0}} (66)

This implies that 0tϵSϵSdτ\int_{0}^{t}\vec{\epsilon}_{S}^{\top}\vec{\epsilon}_{S}\text{d}\tau is bounded. Applying Barbalat’s Lemma, we can conclude that ϵS\vec{\epsilon}_{S} converges to zero. Furthermore, based on Lemma 3, it can be concluded that the estimation errors ϵS{\epsilon}_{S}, S~\tilde{\vec{S}}, and S~\tilde{S} also converge to zero.

Part 3: Finally, to demonstrate the stability and convergence of the leader state estimation, we select the following candidate for a Lyapunov function:

\displaystyle V_{\xi}=\sum_{i=1}^{M}p_{i}(t)\epsilon_{\xi_{i}}^{\top}\epsilon_{\xi_{i}}=\epsilon_{\xi}^{\top}\left(P(t)\otimes I_{rn}\right)\epsilon_{\xi} (67)

By taking the time derivative of VξV_{\xi}, we can have

V˙ξ=\displaystyle\dot{V}_{\xi}= 2ϵξ(P(t)Irn)ϵ˙ξ+ϵξ(P˙(t)Irn)ϵξ\displaystyle 2\epsilon_{\xi}^{\top}\left(P(t)\otimes I_{rn}\right)\dot{\epsilon}_{\xi}+\epsilon_{\xi}^{\top}\left(\dot{P}(t)\otimes I_{rn}\right){\epsilon}_{\xi} (68)

Substituting (40) into it gives

\displaystyle\dot{V}_{\xi}\leqslant-\kappa_{0}\lambda_{\min}(C_{\xi})\epsilon_{\xi}^{\top}\!{\epsilon}_{\xi}+\epsilon_{\xi}^{\top}\!\left(\dot{P}(t)\otimes I_{rn}\right)\!{\epsilon}_{\xi}+2\epsilon_{\xi}^{\top}\!\left(P(t)\dot{\mathcal{L}}^{f}_{\mathcal{B}}(t)\right.
\displaystyle\left.\otimes I_{rn}\right)\!\tilde{\xi}+2\epsilon_{\xi}^{\top}\!\left(P(t)\otimes S_{0}\right)\epsilon_{\xi}+2\epsilon_{\xi}^{\top}\left(P(t)\otimes I_{rn}\right)\tilde{S}{\epsilon}_{\xi}
\displaystyle+2\epsilon_{\xi}^{\top}\left(P(t)\otimes I_{rn}\right)\epsilon_{S}\xi (69)

where κ0=mint0λmin(Q(t))\kappa_{0}=\min_{\forall t\geqslant 0}\lambda_{\min}(Q(t)).

We define ς=ϵSξ\varsigma=\epsilon_{S}\xi as a new variable that converges to zero given the convergence of ϵS\epsilon_{S}. Applying Young’s inequality, one has

\epsilon_{\xi}^{\top}\left(\dot{P}(t)\otimes I_{rn}\right)\epsilon_{\xi}\leqslant\frac{\kappa_{0}}{5}\epsilon_{\xi}^{\top}\epsilon_{\xi}+\frac{5}{4\kappa_{0}}\epsilon_{\xi}^{\top}\left(\dot{P}^{2}(t)\otimes I_{rn}\right)\epsilon_{\xi} \qquad (70a)

2\epsilon_{\xi}^{\top}\left(P(t)\dot{\mathcal{L}}^{f}_{\mathcal{B}}(t)\otimes I_{rn}\right)\tilde{\xi}\leqslant\frac{\kappa_{0}}{5}\epsilon_{\xi}^{\top}\epsilon_{\xi}+\frac{5}{\kappa_{0}}\epsilon_{\xi}^{\top}\left(P^{2}(t)\mathcal{L}^{f^{-\top}}_{\mathcal{B}}(t)\dot{\mathcal{L}}^{f^{\top}}_{\mathcal{B}}(t)\dot{\mathcal{L}}^{f}_{\mathcal{B}}(t)\mathcal{L}^{f}_{\mathcal{B}}(t)\otimes I_{rn}\right)\epsilon_{\xi} \qquad (70b)

2\epsilon_{\xi}^{\top}\left(P(t)\otimes S_{0}\right)\epsilon_{\xi}\leqslant\frac{\kappa_{0}}{5}\epsilon_{\xi}^{\top}\epsilon_{\xi}+\frac{5}{\kappa_{0}}\epsilon_{\xi}^{\top}\left(P^{2}(t)\otimes S_{0}^{\top}S_{0}\right)\epsilon_{\xi} \qquad (70c)

2\epsilon_{\xi}^{\top}\left(P(t)\otimes I_{rn}\right)\tilde{S}\epsilon_{\xi}\leqslant\frac{\kappa_{0}}{5}\epsilon_{\xi}^{\top}\epsilon_{\xi}+\frac{5}{\kappa_{0}}\epsilon_{\xi}^{\top}\tilde{S}^{\top}\left(P^{2}(t)\otimes I_{rn}\right)\tilde{S}\epsilon_{\xi} \qquad (70d)

2\epsilon_{\xi}^{\top}\left(P(t)\otimes I_{rn}\right)\varsigma\leqslant\frac{\kappa_{0}}{5}\epsilon_{\xi}^{\top}\epsilon_{\xi}+\frac{5}{\kappa_{0}}\varsigma^{\top}\left(P^{2}(t)\otimes I_{rn}\right)\varsigma \qquad (70e)
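As a numerical sanity check, the Young's-inequality pattern underlying (70a)-(70e) can be tested on random data. The matrices, vectors, and the value of $\kappa_{0}$ below are arbitrary placeholders, not the paper's actual $P(t)$, $S_{0}$, or Laplacian terms:

```python
import numpy as np

# Young's inequality as used in (70a)-(70e): for any vectors a, b,
# matrix M and kappa_0 > 0,
#   2 a^T M b <= (kappa_0/5) a^T a + (5/kappa_0) b^T M^T M b,
# since 2xy <= eps*x^2 + y^2/eps with eps = kappa_0/5.
rng = np.random.default_rng(0)

def young_bound_holds(a, b, M, kappa0):
    lhs = 2.0 * a @ M @ b
    rhs = (kappa0 / 5.0) * (a @ a) + (5.0 / kappa0) * (b @ M.T @ M @ b)
    return lhs <= rhs + 1e-12  # small tolerance for floating point

checks = []
for _ in range(1000):
    n = 6
    a, b = rng.standard_normal(n), rng.standard_normal(n)
    M = rng.standard_normal((n, n))
    checks.append(young_bound_holds(a, b, M, kappa0=2.0))

print(all(checks))  # True
```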

Given the boundedness of $\tilde{S}$, we can define the new constants $\kappa_{4}=\max_{t\geqslant 0}\lambda_{\max}\left(P^{2}(t)\otimes S_{0}^{\top}S_{0}\right)$ and $\kappa_{5}=\max_{t\geqslant 0}\lambda_{\max}\left(\tilde{S}^{\top}\left(P^{2}(t)\otimes I_{rn}\right)\tilde{S}\right)$, and recall the definitions $\kappa_{1}=\max_{t\geqslant 0}\lambda_{\max}\left(P^{2}(t)\right)$, $\kappa_{2}=\max_{t\geqslant 0}\lambda_{\max}\left(\dot{P}^{2}(t)\right)$, and $\kappa_{3}=\max_{t\geqslant 0}\lambda_{\max}\left(P^{2}(t)\mathcal{L}^{f^{-\top}}_{\mathcal{B}}(t)\dot{\mathcal{L}}^{f^{\top}}_{\mathcal{B}}(t)\dot{\mathcal{L}}^{f}_{\mathcal{B}}(t)\mathcal{L}^{f}_{\mathcal{B}}(t)\right)$. Substituting the above inequalities into $\dot{V}_{\xi}$ yields

\dot{V}_{\xi}\leqslant-\left(\kappa_{0}\lambda_{\min}(C_{\xi})-\kappa_{0}-\kappa^{*}\right)\epsilon_{\xi}^{\top}\epsilon_{\xi}+\frac{5\kappa_{1}}{\kappa_{0}}\varsigma^{\top}\varsigma \qquad (71)

where $\kappa^{*}=\frac{5\kappa_{2}}{4\kappa_{0}}+\frac{5\kappa_{3}}{\kappa_{0}}+\frac{5\kappa_{4}}{\kappa_{0}}+\frac{5\kappa_{5}}{\kappa_{0}}$.

Therefore, when the user-designated gains $c_{\xi_{i}}$, $i=1,2,\cdots,M$, are chosen such that $\kappa_{0}\lambda_{\min}(C_{\xi})-\kappa_{0}-\kappa^{*}>0$, there exists a positive constant $\alpha_{\xi}$ satisfying $0<\alpha_{\xi}\leqslant\kappa_{0}\lambda_{\min}(C_{\xi})-\kappa_{0}-\kappa^{*}$. As a result, one has

\dot{V}_{\xi}\leqslant-\alpha_{\xi}\epsilon_{\xi}^{\top}\epsilon_{\xi}+\frac{5\kappa_{1}}{\kappa_{0}}\varsigma^{\top}\varsigma \qquad (72)

which means that the $\epsilon_{\xi}$-dynamics is input-to-state stable with $\varsigma$ as a disturbance input. Integrating the above inequality over $[0,t]$ gives

V_{\xi}(t)-V_{\xi}(0)\leqslant-\alpha_{\xi}\int_{0}^{t}\epsilon_{\xi}^{\top}\epsilon_{\xi}\,\mathrm{d}\tau+\frac{5\kappa_{1}}{\kappa_{0}}\int_{0}^{t}\varsigma^{\top}\varsigma\,\mathrm{d}\tau \qquad (73)

which, by the nonnegativity of $V_{\xi}(t)$, further yields

\alpha_{\xi}\int_{0}^{t}\epsilon_{\xi}^{\top}\epsilon_{\xi}\,\mathrm{d}\tau\leqslant V_{\xi}(0)+\frac{5\kappa_{1}}{\kappa_{0}}\int_{0}^{t}\varsigma^{\top}\varsigma\,\mathrm{d}\tau \qquad (74)

In Part 2, we have proven the convergence of $\epsilon_{S}$, which implies that $\varsigma$ converges to zero and hence that $\int_{0}^{t}\varsigma^{\top}\varsigma\,\mathrm{d}\tau$ is bounded. The above inequality then establishes the boundedness of $\int_{0}^{t}\epsilon_{\xi}^{\top}\epsilon_{\xi}\,\mathrm{d}\tau$. By Barbalat's Lemma, $\epsilon_{\xi}$ can be proven to converge to zero as well. From Lemma 3, it follows that $\tilde{\xi}$ also asymptotically converges to the origin.

4.2 Stability of Control

Following the establishment of the asymptotic convergence of the estimation errors as demonstrated in Theorem 1, we can now present our second main result of this paper, which summarizes the convergence of the system’s actual state to the locally estimated leader state under the proposed Lyapunov-based MPC framework.

We start by defining a sliding mode tracking control error for follower $i$ as

s_{i}=\sum_{l=0}^{r-2}\lambda_{i,l}\left(x_{i,l+1}-\hat{\xi}_{i,l+1}-\hat{\Delta}_{i,l+1}\right)+\left(x_{i,r}-\hat{\xi}_{i,r}-\hat{\Delta}_{i,r}\right) \qquad (75)
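For concreteness, the sliding variable in (75) can be evaluated numerically. The gains and state values below are hypothetical placeholders for a third-order follower ($r=3$), not values from the paper's simulations:

```python
import numpy as np

# Illustrative evaluation of the sliding variable s_i in (75) for r = 3,
# with placeholder gains lambda_{i,0}, lambda_{i,1} and placeholder
# state / leader-estimate / displacement-estimate values.
lam = [1.0, 2.0]                    # lambda_{i,0}, lambda_{i,1}
x   = np.array([1.3, 0.0, 0.0])     # follower state x_{i,1..r}
xi  = np.array([1.0, 0.0, 0.0])     # leader-state estimate xi_hat_{i,1..r}
dlt = np.array([0.0, 0.0, 0.0])     # displacement estimate Delta_hat_{i,1..r}

e = x - xi - dlt                    # componentwise tracking errors
# s_i = sum_{l=0}^{r-2} lambda_{i,l} * e_{l+1}  +  e_r
s_i = sum(lam[l] * e[l] for l in range(len(lam))) + e[-1]
print(s_i)  # approximately 0.3
```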

Then, the following Lyapunov function candidate for follower $i$ can be considered

V_{i}=\frac{1}{2}\|s_{i}\|^{2} \qquad (76)

Differentiating $V_{i}$ along the closed-loop dynamics yields

\dot{V}_{i}=s_{i}^{\top}\Bigl(\sum_{l=0}^{r-2}\lambda_{i,l}\left(x_{i,l+2}-\hat{\xi}_{i,l+2}-\hat{\Delta}_{i,l+2}\right)+f_{i}(x_{i})+G_{i}(x_{i})u_{i}-\dot{\hat{\xi}}_{i,r}\Bigr) \qquad (77)

Recalling the inequality (3.2) and adding and subtracting its right-hand side to and from (77), we can bound $\dot{V}_{i}(t)$ over the interval $t\in[t_{k},t_{k+1})$ as follows:

\dot{V}_{i}(t)\leqslant-c_{i}\|s_{i}(t)\|^{2}+c_{i}\left(\|s_{i}(t)\|^{2}-\|s_{i}(t_{k})\|^{2}\right)+s_{i}^{\top}(t)\Bigl(\sum_{l=0}^{r-2}\lambda_{i,l}\left(x_{i,l+2}(t)-\hat{\xi}_{i,l+2}(t)-\hat{\Delta}_{i,l+2}(t)\right)+f_{i}(x_{i}(t))+G_{i}(x_{i}(t))u_{i}(t)-\dot{\hat{\xi}}_{i,r}(t)\Bigr)-s_{i}^{\top}(t_{k})\Bigl(\sum_{l=0}^{r-2}\lambda_{i,l}\left(x_{i,l+2}(t_{k})-\hat{\xi}_{i,l+2}(t_{k})-\hat{\Delta}_{i,l+2}(t_{k})\right)+f_{i}(x_{i}(t_{k}))+G_{i}(x_{i}(t_{k}))u_{i}(t)-\dot{\hat{\xi}}_{i,r}(t_{k})\Bigr) \qquad (78)

By invoking Lipschitz continuity under Assumption 1, there exist positive Lipschitz constants $L_{s_{i}^{2}}$, $L_{x_{i}}$, and $L_{\xi_{i}}$ such that

\left|\|s_{i}(t)\|^{2}-\|s_{i}(t_{k})\|^{2}\right|\leqslant L_{s_{i}^{2}}\left\|x_{i}(t)-x_{i}(t_{k})\right\| \qquad (79a)

\Bigl|s_{i}^{\top}(t)\Bigl(\sum_{l=0}^{r-2}\lambda_{i,l}\left(x_{i,l+2}(t)-\hat{\xi}_{i,l+2}(t)-\hat{\Delta}_{i,l+2}(t)\right)+f_{i}(x_{i}(t))+G_{i}(x_{i}(t))u_{i}(t)-\dot{\hat{\xi}}_{i,r}(t)\Bigr)-s_{i}^{\top}(t_{k})\Bigl(\sum_{l=0}^{r-2}\lambda_{i,l}\left(x_{i,l+2}(t_{k})-\hat{\xi}_{i,l+2}(t_{k})-\hat{\Delta}_{i,l+2}(t_{k})\right)+f_{i}(x_{i}(t_{k}))+G_{i}(x_{i}(t_{k}))u_{i}(t)-\dot{\hat{\xi}}_{i,r}(t_{k})\Bigr)\Bigr|\leqslant L_{x_{i}}\left\|x_{i}(t)-x_{i}(t_{k})\right\|+L_{\xi_{i}}\left\|\hat{\xi}_{i}(t)-\hat{\xi}_{i}(t_{k})\right\| \qquad (79b)

We also have positive constants $M_{x_{i}}$ and $M_{\xi_{i}}$ satisfying

\left\|x_{i}(t)-x_{i}(t_{k})\right\|\leqslant M_{x_{i}}\delta \qquad (79c)

\left\|\hat{\xi}_{i}(t)-\hat{\xi}_{i}(t_{k})\right\|\leqslant M_{\xi_{i}}\delta \qquad (79d)

By substituting (79) into (78), we have that

\dot{V}_{i}(t)\leqslant-c_{i}\|s_{i}(t)\|^{2}+(\kappa_{i_{1}}+c_{i}\kappa_{i_{2}})\delta \qquad (80)

where $\kappa_{i_{1}}=L_{x_{i}}M_{x_{i}}+L_{\xi_{i}}M_{\xi_{i}}$ and $\kappa_{i_{2}}=L_{s_{i}^{2}}M_{x_{i}}$.
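To see what the bound (80) implies, note that with $V_{i}=\frac{1}{2}\|s_{i}\|^{2}$ it yields the scalar comparison system $\dot{V}\leqslant-2c_{i}V+\kappa\delta$, whose solutions settle into the residual set $V\leqslant\kappa\delta/(2c_{i})$. A minimal sketch with illustrative gain and disturbance values (not the paper's):

```python
# Worst-case scalar comparison system for the bound (80):
# dV/dt <= -2*c_i*V + kappa*delta, integrated with forward Euler.
# All numbers below are illustrative placeholders.
c_i, kappa, delta = 2.0, 1.0, 0.2
dt, T = 1e-3, 10.0

V = 5.0  # start well outside the residual set
for _ in range(int(T / dt)):
    V += dt * (-2.0 * c_i * V + kappa * delta)

ultimate_bound = kappa * delta / (2.0 * c_i)
print(V, ultimate_bound)  # V settles near 0.05
```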

We define a collective sliding mode tracking control error as $s=\left[s_{1}^{\top}\ s_{2}^{\top}\ \cdots\ s_{M}^{\top}\right]^{\top}$. Theorem 2 below encapsulates the second main result of this paper.

Theorem 2

Suppose Assumptions 1-4 hold. Consider the MAS with $M$ followers (6) and a virtual leader (7) in closed loop under the developed adaptive distributed observer-based Lyapunov-based MPC framework with the leader observer (12)-(16) and the MPC problem (19). If $s(0)\in\Omega_{\rho_{s}^{0}}\triangleq\left\{s\,|\,V\leqslant\rho_{s}^{0}\right\}$ and the following stability condition is satisfied by choosing appropriate control gains $c_{i}$, $i=1,2,\cdots,M$,

-2\rho_{s}\lambda_{\min}(C)+\left(\kappa_{1}+\lambda_{\max}(C)\kappa_{2}\right)\delta\leqslant 0 \qquad (81)

where $\rho_{s}\leqslant\rho_{s}^{0}$, $C=\mathrm{diag}(c_{1},c_{2},\cdots,c_{M})$, $\kappa_{1}=\max(\kappa_{1_{1}},\kappa_{2_{1}},\cdots,\kappa_{M_{1}})$, and $\kappa_{2}=\max(\kappa_{1_{2}},\kappa_{2_{2}},\cdots,\kappa_{M_{2}})$, then the sliding mode error $s$ of the closed-loop system remains bounded and ultimately converges to $\Omega_{\rho_{s}}\triangleq\left\{s\,|\,V\leqslant\rho_{s}\right\}$.

Proof 4.4.

A global Lyapunov function for the entire MAS can be selected as

V=\frac{1}{2}\|s\|^{2}=\sum_{i=1}^{M}\frac{1}{2}\|s_{i}\|^{2} \qquad (82)

Recalling (80), we can obtain that

\dot{V}\leqslant-\lambda_{\min}(C)\|s(t)\|^{2}+(\kappa_{1}+\lambda_{\max}(C)\kappa_{2})\delta \qquad (83)

where $C=\mathrm{diag}(c_{1},c_{2},\cdots,c_{M})$, $\kappa_{1}=\max(\kappa_{1_{1}},\kappa_{2_{1}},\cdots,\kappa_{M_{1}})$, and $\kappa_{2}=\max(\kappa_{1_{2}},\kappa_{2_{2}},\cdots,\kappa_{M_{2}})$.

From the definition of $V$, we further have

\dot{V}\leqslant-2\lambda_{\min}(C)V+(\kappa_{1}+\lambda_{\max}(C)\kappa_{2})\delta \qquad (84)

If the condition (81) is satisfied, then $\dot{V}<0$ for all $s\in\left\{s\,|\,\rho_{s}<V\leqslant\rho_{s}^{0}\right\}$ and $\dot{V}\leqslant 0$ for $V=\rho_{s}$. Therefore, $s$ converges to $\Omega_{\rho_{s}}$ without leaving the stability region $\Omega_{\rho_{s}^{0}}$ as $t$ approaches $\infty$.

Based on Theorem 2, we can conclude that the state variable $x$ converges towards $\hat{\xi}+\hat{\Delta}$. Given the asymptotic convergence of $\hat{\xi}$ to $\xi$ and of $\hat{\Delta}$ to $\Delta$ validated in Theorem 1, we can conclusively demonstrate the ultimate boundedness and convergence of the global formation tracking error $\tilde{x}$, as defined in (10). This conclusion is drawn by combining the results from both Theorem 1 and Theorem 2.

Remark 6

It is crucial to recognize that the analytical convergence error arises from the sample-and-hold implementation of the employed MPC. Since the distributed observer network is asymptotically stable, it provides sufficiently accurate estimates to the controller and therefore does not compromise the ultimate accuracy of formation tracking.

5 Simulation Study

Simulation studies on two different examples are carried out to evaluate the performance of the proposed method in achieving formation tracking control under input constraints and communication link faults. This section presents the simulation setup and results for each example.

5.1 Example 1: A Numerical Multi-agent System

We first consider a numerical example—a nonlinear MAS with 3 followers and a leader node 0. The 3 followers are described by the following third-order nonlinear systems:

\dot{x}_{1,1}=x_{1,2},\quad \dot{x}_{1,2}=x_{1,3},\quad \dot{x}_{1,3}=x_{1,1}x_{1,2}+x_{1,3}-x_{1,1}^{3}+u_{1},\quad y_{1}=x_{1,1} \qquad (85a)

\dot{x}_{2,1}=x_{2,2},\quad \dot{x}_{2,2}=x_{2,3},\quad \dot{x}_{2,3}=x_{2,1}\sin(x_{2,2})+\cos^{2}(x_{2,3})+u_{2},\quad y_{2}=x_{2,1} \qquad (85b)

\dot{x}_{3,1}=x_{3,2},\quad \dot{x}_{3,2}=x_{3,3},\quad \dot{x}_{3,3}=-\frac{1}{2}(x_{3,1}+x_{3,2}-1)^{2}(x_{3,3}-1)+u_{3},\quad y_{3}=x_{3,1} \qquad (85c)

with the input constraint defined as

\Omega_{u_{i}}=\left\{u_{i}\,|\,-3\leqslant u_{i}\leqslant 3\right\} \qquad (86)

for $i=1,2,3$. The initial conditions of the 3 followers are $x_{1}(0)=[1.3\ 0\ 0]^{\top}$, $x_{2}(0)=[0.5\ 0\ 0]^{\top}$, and $x_{3}(0)=[0\ 0\ 0]^{\top}$.

Figure 2: Communication link faults
Figure 3: Formation tracking performance
Figure 4: Norms of formation tracking errors
Figure 5: Norms of estimation errors
Figure 6: Control commands

Let the dynamics of the leader node be

\dot{\xi}_{0,1}=\xi_{0,2},\quad \dot{\xi}_{0,2}=\xi_{0,3},\quad \dot{\xi}_{0,3}=-\xi_{0,1}-1.16\xi_{0,2}-2\xi_{0,3} \qquad (90)

with $\xi_{0}(0)=[1\ 0\ 0]^{\top}$.
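As a quick check that (90) defines a well-behaved reference, a forward-Euler simulation (a sketch; the step size and horizon are arbitrary choices) confirms that the leader trajectory from $\xi_{0}(0)=[1\ 0\ 0]^{\top}$ stays bounded:

```python
import numpy as np

# Forward-Euler simulation of the leader dynamics (90).
# Its characteristic polynomial s^3 + 2s^2 + 1.16s + 1 is Hurwitz
# (Routh condition: 2*1.16 - 1 = 1.32 > 0), so trajectories are bounded.
S0 = np.array([[0.0,  1.0,   0.0],
               [0.0,  0.0,   1.0],
               [-1.0, -1.16, -2.0]])
xi = np.array([1.0, 0.0, 0.0])  # xi_0(0)

dt = 1e-3
peak = 0.0
for _ in range(int(60.0 / dt)):
    xi = xi + dt * (S0 @ xi)
    peak = max(peak, float(np.linalg.norm(xi)))

print(peak)  # the trajectory norm stays bounded
```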

In simulations, the desired formation displacement vectors of the 3 followers with respect to the leader 0 are set as $\varDelta_{10}=[0\ 0\ 0]^{\top}$, $\varDelta_{20}=[0.2\ 0\ 0]^{\top}$, and $\varDelta_{30}=[-0.2\ 0\ 0]^{\top}$. Time-varying edge weights, including the adjacency matrix and pinning gains, are designed to mimic faults in the communication network. In particular, the adjacency matrix and the pinning matrix are

\mathcal{A}=\begin{bmatrix}0&0&0\\ 1+0.5\sin(t)\cdot\mathrm{rand}([0,1])&0&0\\ 1&0&0\end{bmatrix} \qquad (94)

\mathcal{B}=\begin{bmatrix}1+0.3\sin(t)\cdot\mathrm{rand}([0,1])&0&0\\ 0&0&0\\ 0&0&0\end{bmatrix} \qquad (98)

where rand([0,1]){\rm rand}([0,1]) is a random signal chosen from the interval [0,1][0,1].
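The fault model in (94) and (98) can be sketched as follows. The random seed is arbitrary, and the check confirms that the perturbed weight always stays within $[0.5,\,1.5]$ and never changes sign:

```python
import numpy as np

# Time-varying faulty edge weight of the form 1 + 0.5*sin(t)*rand([0,1]),
# as used in the adjacency matrix (94). Since sin(t) in [-1, 1] and the
# random factor lies in [0, 1], the weight is confined to [0.5, 1.5].
rng = np.random.default_rng(1)

def faulty_weight(t, nominal=1.0, amplitude=0.5):
    return nominal + amplitude * np.sin(t) * rng.uniform(0.0, 1.0)

ts = np.linspace(0.0, 20.0, 2001)
ws = np.array([faulty_weight(t) for t in ts])
print(ws.min() >= 0.5, ws.max() <= 1.5)  # True True
```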

The control parameters are selected following the obtained stability conditions. The sampling period for updating the control actions is set as 0.2 s. The leader state observation gains are chosen as $c_{\xi_{1}}=c_{\xi_{2}}=c_{\xi_{3}}=2$. In the definition of the sliding mode tracking error, $\lambda_{1,0}=\lambda_{2,0}=\lambda_{3,0}=1$ and $\lambda_{1,1}=\lambda_{2,1}=\lambda_{3,1}=2$. The control gains are $c_{1}=c_{2}=c_{3}=2$. In the MPC problems, the prediction horizon is 0.8 s, $Q_{1}=Q_{2}=Q_{3}=10$, and $R_{1}=R_{2}=R_{3}=0.1$.

The time-varying communication fault parameters introduced to the network are depicted in Figure 2. The simulation results, as shown in Figures 3-6, illustrate the responses of the three followers with solid lines in blue, orange, and yellow, while the virtual leader’s responses are represented with gray dashed lines. Specifically, Figure 3 illustrates the output trajectories of the followers, demonstrating that the formation tracking objective has been successfully achieved. The norms of the tracking errors, relative to both the estimated and actual leader states, are displayed in Figure 4. Additionally, Figure 5 presents the estimation errors of the relative position displacement, the leader’s dynamics, and the leader state. Figure 6 illustrates the control commands applied to the followers, showing that the input constraint (86) is satisfied.

5.2 Example 2: A Multi-UAV System

Next, we consider applying the proposed adaptive distributed control strategy to the outer-loop translation control of a group of UAVs. The networked UAV system comprises 5 UAVs, with their translational motions described by

\dot{\zeta}_{i}=v_{i},\quad \dot{v}_{i}=-g+\frac{1}{m_{i}}r(u_{\eta_{i}})u_{F_{i}} \qquad (99)

where $i=1,2,\cdots,5$; $g=[0\ 0\ 9.81]^{\top}$ and $m_{i}=2.618$ kg; $\zeta_{i}\in\mathbb{R}^{3}$ and $v_{i}\in\mathbb{R}^{3}$ are the position and velocity vectors of the UAVs; $u_{\eta_{i}}=[u_{\phi_{i}}\ u_{\theta_{i}}\ u_{\psi_{i}}]^{\top}$ and $u_{F_{i}}\in\mathbb{R}$ are the control inputs of the translational subsystem, representing the desired rotation angles and the total thrust force, respectively. The control inputs are subject to the following input constraints:

\Omega_{u_{\eta_{i}}}=\left\{u_{\eta_{i}}\,|\,[-0.5\ -0.5\ -0.1]^{\top}\leqslant u_{\eta_{i}}\leqslant[0.5\ 0.5\ 0.1]^{\top}\right\} \qquad (100)

\Omega_{u_{F_{i}}}=\left\{u_{F_{i}}\,|\,-4\leqslant u_{F_{i}}\leqslant 4\right\} \qquad (101)

The initial positions of the 5 UAVs are $\zeta_{1}(0)=[10\ 0\ 0]^{\top}$, $\zeta_{2}(0)=[7\ 0\ 0]^{\top}$, $\zeta_{3}(0)=[13\ 0\ 0]^{\top}$, $\zeta_{4}(0)=[8.5\ 0\ 0]^{\top}$, and $\zeta_{5}(0)=[11.5\ 0\ 0]^{\top}$. Their initial linear velocities are all zero.

Let the dynamics of the leader node be

\dot{\xi}_{0}=\begin{bmatrix}0&0&0&1&0&0\\ 0&0&0&0&1&0\\ 0&0&0&0&0&1\\ -0.0676&0&0&-0.1040&0&0\\ 0&-0.0676&0&0&-0.1040&0\\ 0&0&-0.0025&0&0&-0.02\end{bmatrix}\xi_{0} \qquad (102)

with $\xi_{0}(0)=[10\ 0\ 0\ 0\ 3\ 1.2]^{\top}$.
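As a sanity check on the leader model (102), its system matrix can be verified to be Hurwitz, so the reference trajectory from $\xi_{0}(0)$ is bounded:

```python
import numpy as np

# Spectrum check of the leader system matrix in (102): the x- and y-channels
# are lightly damped oscillators (s^2 + 0.104s + 0.0676) and the z-channel
# is s^2 + 0.02s + 0.0025, so every eigenvalue has negative real part.
S0 = np.zeros((6, 6))
S0[0, 3] = S0[1, 4] = S0[2, 5] = 1.0
S0[3, 0], S0[3, 3] = -0.0676, -0.1040
S0[4, 1], S0[4, 4] = -0.0676, -0.1040
S0[5, 2], S0[5, 5] = -0.0025, -0.02

eigs = np.linalg.eigvals(S0)
print(np.max(eigs.real))  # strictly negative (about -0.01)
```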

Figure 7: Formation shape and communication graph of the 5-UAV system
Figure 8: Communication link faults in the 5-UAV system
Figure 9: Formation tracking performance of the 5 UAVs
Figure 10: Norms of estimation and tracking errors of the 5 UAVs
Figure 11: Control commands of the 5 UAVs

The prescribed formation geometric shape and directed communication graph of the 5-UAV system are illustrated in Figure 7. The desired displacement vectors of the followers with respect to the leader 0 are set as $\varDelta_{10}=[0\ 1.1\ 0\ 0\ 0\ 0]^{\top}$, $\varDelta_{20}=[-1.5\ 0\ 0\ 0\ 0\ 0]^{\top}$, $\varDelta_{30}=[1.5\ 0\ 0\ 0\ 0\ 0]^{\top}$, $\varDelta_{40}=[-0.95\ -1.8\ 0\ 0\ 0\ 0]^{\top}$, and $\varDelta_{50}=[0.95\ -1.8\ 0\ 0\ 0\ 0]^{\top}$. Time-varying edge weights, including the adjacency matrix and pinning gains, are designed to mimic faults in the communication network. In particular, the adjacency matrix and the pinning matrix are

\mathcal{A}=\begin{bmatrix}0&0&0&0&0\\ 1+0.5\sin(t)\cdot\mathrm{rand}([0,1])&0&0&0&0\\ 1&0&0&0&0\\ 0&1&1+0.5\sin(t)\cdot\mathrm{rand}([0,1])&0&0\\ 0&0&1&1&0\end{bmatrix} \qquad (103)

\mathcal{B}=\mathrm{diag}\bigl(1+0.3\sin(t)\cdot\mathrm{rand}([0,1]),\,0,\,0,\,0,\,0\bigr) \qquad (104)

where $\mathrm{rand}([0,1])$ is a random signal chosen from the interval $[0,1]$. The time-varying communication fault parameters added to the network are illustrated in Figure 8.

The control parameters are selected following the previously established stability conditions. The sampling period for updating the control actions is set as 0.2 s. The leader state observation gains are chosen as $c_{\xi_{1}}=c_{\xi_{2}}=c_{\xi_{3}}=c_{\xi_{4}}=c_{\xi_{5}}=1.2$. In the definition of the sliding mode tracking error, $\lambda_{1,0}=\lambda_{2,0}=\lambda_{3,0}=\lambda_{4,0}=\lambda_{5,0}=1$. The control gains are $c_{1}=c_{2}=c_{3}=c_{4}=c_{5}=2$. In the MPC problems, the prediction horizon is 0.8 s, $Q_{1}=Q_{2}=Q_{3}=Q_{4}=Q_{5}=\mathrm{diag}(2,2,5)$, and $R_{1}=R_{2}=R_{3}=R_{4}=R_{5}=\mathrm{diag}(0.1,10,10,10)$.

The simulation results are illustrated in Figures 9-11, where the responses of the 5 UAVs are depicted with solid lines in blue, orange, yellow, purple, and green, and the virtual leader's responses are depicted with gray dashed lines. Figure 9 illustrates the formation tracking performance of the 5-UAV system in 3D and three-plane views. The norms of the estimation and tracking errors of the 5 UAVs are displayed in Figure 10. Figure 11 shows the control commands of the 5 UAVs; it can be seen that their input constraints are all satisfied.

6 Conclusions

A novel adaptive distributed observer-based DMPC method has been introduced in this paper, which is developed for nonlinear multi-agent formation tracking with input constraints and unknown communication faults. The method utilizes adaptive distributed observers in each local control system to estimate the state, dynamics, and relative position of the leader, enabling each agent to independently achieve formation tracking without direct access to the leader’s information. The designed distributed MPC controllers use the estimated information to manipulate agents into a predefined formation while respecting input constraints. This research employs adaptive observers to reduce the complexity in the DMPC design, allowing for effective local controller formulation and resilient distributed formation tracking.

References

  • [1] W. Ren and Y. Cao, Distributed Coordination of Multi-Agent Networks: Emergent Problems, Models, and Issues.   Springer, 2011, vol. 1.
  • [2] J. Qin, Q. Ma, Y. Shi, and L. Wang, “Recent advances in consensus of multi-agent systems: A brief survey,” IEEE Transactions on Industrial Electronics, vol. 64, no. 6, pp. 4972–4983, 2016.
  • [3] Y. Shi and K. Zhang, “Advanced model predictive control framework for autonomous intelligent mechatronic systems: A tutorial overview and perspectives,” Annual Reviews in Control, vol. 52, pp. 170–196, 2021.
  • [4] T. Keviczky, F. Borrelli, and G. J. Balas, “A study on decentralized receding horizon control for decoupled systems,” in Proceedings of the 2004 American Control Conference, vol. 6, Boston, MA, USA, May 2004, pp. 4921–4926.
  • [5] P. D. Christofides, R. Scattolini, D. M. de la Pena, and J. Liu, “Distributed model predictive control: A tutorial review and future research directions,” Computers & Chemical Engineering, vol. 51, pp. 21–41, 2013.
  • [6] R. R. Negenborn and J. M. Maestre, “Distributed model predictive control: An overview and roadmap of future research opportunities,” IEEE Control Systems Magazine, vol. 34, no. 4, pp. 87–97, 2014.
  • [7] W. B. Dunbar and R. M. Murray, “Distributed receding horizon control for multi-vehicle formation stabilization,” Automatica, vol. 42, no. 4, pp. 549–558, 2006.
  • [8] H. Li and Y. Shi, “Robust distributed model predictive control of constrained continuous-time nonlinear systems: A robustness constraint approach,” IEEE Transactions on Automatic Control, vol. 59, no. 6, pp. 1673–1678, 2013.
  • [9] Y. Zheng, S. E. Li, K. Li, F. Borrelli, and J. K. Hedrick, “Distributed model predictive control for heterogeneous vehicle platoons under unidirectional topologies,” IEEE Transactions on Control Systems Technology, vol. 25, no. 3, pp. 899–910, 2016.
  • [10] M. Mercangöz and F. J. Doyle III, “Distributed model predictive control of an experimental four-tank system,” Journal of Process Control, vol. 17, no. 3, pp. 297–308, 2007.
  • [11] B. T. Stewart, A. N. Venkat, J. B. Rawlings, S. J. Wright, and G. Pannocchia, “Cooperative distributed model predictive control,” Systems & Control Letters, vol. 59, no. 8, pp. 460–469, 2010.
  • [12] A. N. Venkat, J. B. Rawlings, and S. J. Wright, “Stability and optimality of distributed model predictive control,” in Proceedings of the 44th IEEE Conference on Decision and Control, Seville, Spain, Dec. 2005, pp. 6680–6685.
  • [13] A. Richards and J. How, “A decentralized algorithm for robust constrained model predictive control,” in Proceedings of the 2004 American Control Conference, vol. 5, Boston, MA, USA, Jun. 2004, pp. 4261–4266.
  • [14] A. Richards and J. P. How, “Robust distributed model predictive control,” International Journal of Control, vol. 80, no. 9, pp. 1517–1531, 2007.
  • [15] Z. Li and J. Chen, “Robust consensus for multi-agent systems communicating over stochastic uncertain networks,” SIAM Journal on Control and Optimization, vol. 57, no. 5, pp. 3553–3570, 2019.
  • [16] T. Li, F. Wu, and J.-F. Zhang, “Multi-agent consensus with relative-state-dependent measurement noises,” IEEE Transactions on Automatic Control, vol. 59, no. 9, pp. 2463–2468, 2014.
  • [17] X. Ma and N. Elia, “Mean square performance and robust yet fragile nature of torus networked average consensus,” IEEE Transactions on Control of Network Systems, vol. 2, no. 3, pp. 216–225, 2015.
  • [18] J. Wang and N. Elia, “Consensus over networks with dynamic channels,” International Journal of Systems, Control and Communications, vol. 2, no. 1-3, pp. 275–297, 2010.
  • [19] D. Zelazo and M. Bürger, “On the robustness of uncertain consensus networks,” IEEE Transactions on Control of Network Systems, vol. 4, no. 2, pp. 170–178, 2015.
  • [20] C. Chen, K. Xie, F. L. Lewis, S. Xie, and R. Fierro, “Adaptive synchronization of multi-agent systems with resilience to communication link faults,” Automatica, vol. 111, p. 108636, 2020.
  • [21] Q. Yang, Y. Lyu, X. Li, C. Chen, and F. L. Lewis, “Adaptive distributed synchronization of heterogeneous multi-agent systems over directed graphs with time-varying edge weights,” Journal of the Franklin Institute, vol. 358, no. 4, pp. 2434–2452, 2021.
  • [22] W. B. Dunbar, “Distributed receding horizon control of dynamically coupled nonlinear systems,” IEEE Transactions on Automatic Control, vol. 52, no. 7, pp. 1249–1263, 2007.
  • [23] H. Wei, C. Liu, and Y. Shi, “A robust distributed MPC framework for multi-agent consensus with communication delays,” IEEE Transactions on Automatic Control, 2024.
  • [24] H. Wei, Q. Sun, J. Chen, and Y. Shi, “Robust distributed model predictive platooning control for heterogeneous autonomous surface vehicles,” Control Engineering Practice, vol. 107, p. 104655, 2021.
  • [25] H. Wei, C. Shen, and Y. Shi, “Distributed Lyapunov-based model predictive formation tracking control for autonomous underwater vehicles subject to disturbances,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 51, no. 8, pp. 5198–5208, 2019.