
Controlling identical linear multi-agent systems over directed graphs

Nicola Zaupa, Luca Zaccarian, Sophie Tarbouriech, Isabelle Queinnec, Giulia Giordano. This work was supported in part by ANR through Grant HANDY under Grant ANR-18-CE40-0010, and in part by MUR PRIN Grant DOCEAT under Grant 2020RTWES4. N. Zaupa, I. Queinnec and S. Tarbouriech are with LAAS-CNRS, University of Toulouse, CNRS, 31400 Toulouse, France. Email: {nzaupa,queinnec,tarbour}@laas.fr. G. Giordano is with the Department of Industrial Engineering, University of Trento, 38123 Trento, Italy. Email: [email protected]. L. Zaccarian is with LAAS-CNRS, University of Toulouse, CNRS, 31400 Toulouse, France and with the Department of Industrial Engineering, University of Trento, 38123 Trento, Italy. Email: [email protected].
Abstract

We consider the problem of synchronizing a multi-agent system (MAS) composed of several identical linear systems connected through a directed graph. To design a suitable controller, we construct conditions based on Bilinear Matrix Inequalities (BMIs) that ensure state synchronization. Since these conditions are non-convex, we propose an iterative algorithm based on a suitable relaxation that allows us to formulate Linear Matrix Inequality (LMI) conditions. As a result, the algorithm yields a common static state-feedback matrix for the controller that satisfies general linear performance constraints. Our results are achieved under the mild assumption that the graph is time-invariant and connected.

I Introduction

In the last decades, the study of networks, and in particular the distributed control of networked MAS, has attracted considerable interest in systems and control, due to the broad range of applications in many different areas [1], including power systems, biological systems, sensor networks, cooperative control of unmanned aerial vehicles, quality-fair delivery of media content, formation control of mobile robots, and synchronization of oscillators. In networked MAS, the general objective is to reach an agreement on a variable of interest.

We focus our attention on the synchronization problem, where the goal is to reach a common trajectory for all agents. In the literature, we can find several studies on scalar agents, but recent works also address networks of agents with finite-dimensional linear input-output dynamics [2]. In the case of identical agents, a common static control law inducing state synchronization can be designed by exploiting the information exchange among the agents, which modifies the system dynamics. This exchange is modeled by a (directed or undirected) graph. The spectrum of the Laplacian matrix of this graph plays an important role in the evolution of the associated networked system [3].

Necessary and sufficient conditions ensuring synchronization have been given under several different forms depending on the context [4, 2, 5, 6, 7]. A set of necessary and sufficient conditions for identical SISO agents over arbitrary time-invariant graphs is summarized in [8].

Different approaches for control design can be found in the literature depending on the desired objective. Most of the results are based on the solution of an algebraic Riccati equation, under the assumption that the static control law has a given infinite gain margin structure [9, 10, 11, 12]: the state-feedback matrix $K$ has the form $K=B^{\top}P$, where $B$ is the input matrix and $P$ is the solution of an algebraic Riccati equation. However, imposing an infinite gain margin potentially limits the achievable performance. As shown in [9], by choosing a small enough constant, a feedback law can be designed without knowing the network topology; in practice, this constant depends on the non-zero eigenvalue of the Laplacian matrix having the smallest real part. While [10] studies the output feedback case, we consider the dual case, which is also discussed in [11]. The design procedure in [13] achieves synchronization under bounded $H_{\infty}$ disturbances, thanks to an observer-based dynamic controller, expressed in terms of suitable algebraic Riccati equations, which guarantees disturbance rejection properties. A different approach based on LMIs is presented in [14], where synchronization conditions are imposed by relying on strong assumptions on the structure of the Lyapunov matrices, while the problem size is independent of the number of agents.

In this work, we study the design of a static state-feedback control law ensuring MAS synchronization. The agents are modeled as identical LTI subsystems and their interconnections are described by time-invariant, directed, and connected graphs. We introduce a design strategy based on LMIs, similar to the one in [14], but without imposing any assumption on the controller structure or constraints on the Lyapunov matrices, thus ensuring higher degrees of freedom in the design, and potentially improved optimized stabilizers. Through a relaxation of the conditions in [8], we formulate an iterative LMI-based procedure to design a static state-feedback control law. Our LMI formulation allows us to easily embed additional linear constraints in order to reach a desired performance [15].

Notation. $\mathbb{R}$ and $\mathbb{C}$ denote the sets of real and complex numbers, respectively. We denote by $\jmath$ the imaginary unit. Given $\lambda=a+\jmath b\in\mathbb{C}$, $\operatorname{Re}(\lambda)=a$ and $\operatorname{Im}(\lambda)=b$ are its real and imaginary parts, respectively; $\lambda^{*}=a-\jmath b$ is its complex conjugate. $I_{N}$ is the identity matrix of size $N$, while $\mathbf{1}_{N}\in\mathbb{R}^{N}$ denotes the $N$-dimensional (column) vector with all entries equal to $1$. For any matrix $A$, $A^{\top}$ denotes its transpose. Given two matrices $A$ and $B$, $A\otimes B$ indicates their Kronecker product. Given a complex matrix $A\in\mathbb{C}^{n\times m}$, $A^{*}$ denotes its conjugate transpose and $\operatorname{He}(A)=A+A^{*}$. Matrix $A\in\mathbb{C}^{n\times n}$ is Hermitian if $A=A^{*}$, namely $\operatorname{Re}(A)$ is symmetric ($\operatorname{Re}(A)=\operatorname{Re}(A)^{\top}$) and $\operatorname{Im}(A)$ is skew-symmetric ($\operatorname{Im}(A)=-\operatorname{Im}(A)^{\top}$). We denote the Euclidean distance of a point $x$ from a set $\mathcal{A}$ as $|x|_{\mathcal{A}}$.

II Problem Statement

Consider $N$ identical dynamical systems

$\dot{x}_{i}=Ax_{i}+Bu_{i},\quad i=1,\dots,N,$ (1a)
with state vector $x_{i}\in\mathbb{R}^{n}$, input vector $u_{i}\in\mathbb{R}^{m}$, state matrix $A\in\mathbb{R}^{n\times n}$ and input matrix $B\in\mathbb{R}^{n\times m}$. Assume that the pair $(A,B)$ is controllable. The directed graph $\mathcal{G}$ with weight matrix $\mathcal{W}\in\mathbb{R}^{N\times N}$ captures the communication topology among the agents; its Laplacian matrix is $L:=\operatorname{diag}(\mathcal{W}\mathbf{1}_{N})-\mathcal{W}$. Denote by $0=\lambda_{0},\lambda_{1},\dots,\lambda_{\nu}$ the eigenvalues of $L$, ordered with non-decreasing real part (complex conjugate pairs and repeated eigenvalues are only counted once). The control input $u_{i}$ affecting agent $i$ is expressed as
$u_{i}=K\sum_{j=1}^{N}\mathcal{W}_{ij}(x_{j}-x_{i})=-K\sum_{j=1}^{N}L_{ij}x_{j},$ (1b)

where $\mathcal{W}_{ij}$ and $L_{ij}$ are the entries of the weight and Laplacian matrices, respectively, and $K\in\mathbb{R}^{m\times n}$ is the state-feedback matrix. Each agent uses only relative information with respect to the others, as typically desired in applications.
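As an illustration of the constructions above, the following sketch builds the Laplacian $L=\operatorname{diag}(\mathcal{W}\mathbf{1}_{N})-\mathcal{W}$ and orders its eigenvalues by non-decreasing real part; the 3-agent directed weight matrix is a hypothetical example, not taken from the paper.

```python
import numpy as np

# Hypothetical directed weight matrix for N = 3 agents (illustration only).
W = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0],
              [1.0, 0.0, 0.0]])

# Laplacian L := diag(W 1_N) - W, as defined above.
L = np.diag(W @ np.ones(W.shape[0])) - W

# Rows of L sum to zero, so lambda_0 = 0 is always an eigenvalue.
eigs = np.linalg.eigvals(L)
eigs = eigs[np.argsort(eigs.real)]   # non-decreasing real part
```

Since the row sums of a Laplacian vanish, the first eigenvalue in this ordering is always $\lambda_0=0$, consistent with the ordering used above.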

By defining the aggregate state vector $x:=[x_{1}^{\top}\dots x_{N}^{\top}]^{\top}\in\mathbb{R}^{nN}$ and input vector $u:=[u_{1}^{\top}\dots u_{N}^{\top}]^{\top}\in\mathbb{R}^{mN}$, we can write the interconnection (1) as

$\dot{x}=(I_{N}\otimes A)x+(I_{N}\otimes B)u,$ (2a)
$u=-(L\otimes K)x.$ (2b)

The overall closed-loop expression is

$\dot{x}=\bigl((I_{N}\otimes A)-(L\otimes BK)\bigr)x.$ (3)
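The Kronecker-product closed loop (3) can be formed directly in code; the matrices below are hypothetical placeholders (a toy oscillator and a 2-agent undirected path graph), used only to check that the coupling term vanishes when all agents agree.

```python
import numpy as np

# Hypothetical agent data and a 2-agent undirected path (illustration only).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 1.0]])
L = np.array([[1.0, -1.0], [-1.0, 1.0]])

N, n = L.shape[0], A.shape[0]
# Closed loop (3): x_dot = ((I_N ⊗ A) - (L ⊗ B K)) x.
Acl = np.kron(np.eye(N), A) - np.kron(L, B @ K)

# On the synchronization set x_1 = x_2 = v the coupling term vanishes,
# since (L ⊗ BK)(1_N ⊗ v) = (L 1_N) ⊗ (BK v) = 0.
v = np.array([1.0, 2.0])
x_sync = np.kron(np.ones(N), v)
```

On the synchronized trajectory, the dynamics reduce to $N$ decoupled copies of $\dot{v}=Av$.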

Our goal is to synthesize a common static control law that enforces synchronization among systems (1a). To this aim, we introduce the synchronization set:

$\mathcal{A}:=\{x:\ x_{i}-x_{j}=0,\ \forall i,j\in\{1,\dots,N\}\}.$ (4)

We recall the definition of “$\mu$–synchronization” from [8].

Definition 1 ($\mu$–Synchronization)

The attractor $\mathcal{A}$ in (4) is $\mu$–UGES (uniformly globally exponentially stable with rate $\mu>0$) for system (1) if there exists $M>0$ such that $|x(t)|_{\mathcal{A}}\leq M\operatorname{e}^{-\mu t}|x(0)|_{\mathcal{A}}$ for all $t\geq 0$.

Some of the necessary and sufficient conditions for $\mu$–synchronization in [8, Theorem 1] are here adapted to deal with a synthesis problem: matrix $C$ in [8] is replaced by the state-feedback matrix $K$ in the closed-loop dynamics (3). Moreover, we can exploit parameter $\mu$ in iterative approaches for optimization-based selections of $K$.

Proposition 1

Consider the system in (1) and the attractor $\mathcal{A}$ in (4). The synchronization set $\mathcal{A}$ is $\mu$–UGES if and only if any of the following conditions holds:

  1. [Complex condition] The complex-valued matrices

    $A_{k}:=A-\lambda_{k}BK,\quad k=1,\dots,\nu,$ (5a)

    have spectral abscissa smaller than $-\mu$.

  2. [Real condition] The real-valued matrices

    $A_{e,k}:=\begin{bmatrix}A-\operatorname{Re}(\lambda_{k})BK&\operatorname{Im}(\lambda_{k})BK\\ -\operatorname{Im}(\lambda_{k})BK&A-\operatorname{Re}(\lambda_{k})BK\end{bmatrix},$ (5b)

    $k=1,\dots,\nu$, have spectral abscissa smaller than $-\mu$.

  3. [Lyapunov inequality] For each $k=1,\dots,\nu$, there exist real-valued matrices $P_{k}=P_{k}^{\top}\succ 0$ and $\Pi_{k}^{\top}=-\Pi_{k}$ such that matrix $P_{e,k}:=\left[\begin{smallmatrix}P_{k}&-\Pi_{k}\\ \Pi_{k}&P_{k}\end{smallmatrix}\right]\succ 0$ satisfies

    $\operatorname{He}(P_{e,k}A_{e,k})\prec-2\mu P_{e,k}.$ (5c)
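Condition 1 of the proposition is straightforward to check numerically once $K$ and the nonzero Laplacian eigenvalues are available. The scalar data below are hypothetical, chosen only to exercise the test.

```python
import numpy as np

def spectral_abscissa(M):
    """Largest real part among the eigenvalues of M."""
    return float(np.max(np.linalg.eigvals(M).real))

def is_mu_synchronizing(A, B, K, lambdas, mu):
    """Condition 1 of Proposition 1: every A_k = A - lambda_k B K
    must have spectral abscissa smaller than -mu."""
    return all(spectral_abscissa(A - lam * (B @ K)) < -mu for lam in lambdas)

# Hypothetical data: unstable scalar agents x_i' = x_i + u_i with K = 2,
# and nonzero Laplacian eigenvalues resembling those of a directed 3-cycle.
A = np.array([[1.0]]); B = np.array([[1.0]]); K = np.array([[2.0]])
lambdas = [1.5 + 0.866j, 1.5 - 0.866j]
```

Here each $A_k=1-2\lambda_k$ has real part $-2$, so $\mu$–synchronization holds for any $\mu<2$ but fails for $\mu>2$.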

III Feedback Design

We aim at designing a common state-feedback matrix $K$ so as to ensure synchronization to $\mathcal{A}$, i.e., so as to satisfy the conditions in Proposition 1. We choose an LMI-based approach to design $K$, which allows us to easily embed additional linear constraints in the design process. Relevant linear constraints may be related, e.g., to the $H_{\infty}$ gain, saturation handling, gain norm, and convergence rate [15].

III-A Revisited synchronization conditions

We can distinguish two main cases: either the Laplacian eigenvalues are all real or at least one of them is complex. In the former case, conditions (5a) can be framed within an LMI formulation, through a procedure similar to the one we describe next. In the latter case, we refer to expression (5c), where the problem is lifted to a higher-dimensional space, considering $A_{e,k}$ instead of $A_{k}$, so as to work with real-valued matrices. We focus on the latter case, which is more general and includes the former. Let us define the inverse of $P_{e,k}$ in (5c)

$Q_{e,k}:=P_{e,k}^{-1}=Q_{e,k}^{\top}=\left[\begin{smallmatrix}Q_{k}&\Sigma_{k}\\ -\Sigma_{k}&Q_{k}\end{smallmatrix}\right],\quad k=1,\dots,\nu,$

with $Q_{k}$ symmetric positive definite and $\Sigma_{k}$ skew-symmetric. In fact, $Q_{k}^{-1}=P_{k}-\Pi_{k}^{\top}P_{k}^{-1}\Pi_{k}$ (which is invertible, since applying the Schur complement to $P_{e,k}$ yields $Q_{k}^{-1}\succ 0$) and $\Sigma_{k}=P_{k}^{-1}\Pi_{k}Q_{k}=Q_{k}\Pi_{k}P_{k}^{-1}$ (where the equality holds because $\Pi_{k}P_{k}^{-1}Q_{k}^{-1}=\Pi_{k}-\Pi_{k}P_{k}^{-1}\Pi_{k}^{\top}P_{k}^{-1}\Pi_{k}=(P_{k}-\Pi_{k}P_{k}^{-1}\Pi_{k}^{\top})P_{k}^{-1}\Pi_{k}=Q_{k}^{-1}P_{k}^{-1}\Pi_{k}$). Then, we can left- and right-multiply inequality (5c) by $Q_{e,k}$, obtaining

$\operatorname{He}(A_{e,k}Q_{e,k})\prec-2\mu Q_{e,k}.$ (6)
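The block structure of $Q_{e,k}=P_{e,k}^{-1}$ claimed above can be verified numerically; the random $P_k$ and $\Pi_k$ below are arbitrary (the skew part is scaled down, an ad hoc choice to keep $P_{e,k}$ positive definite).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
M = rng.standard_normal((n, n))
Pk = M @ M.T + n * np.eye(n)          # symmetric positive definite
S = rng.standard_normal((n, n))
Pik = 0.1 * (S - S.T)                  # skew-symmetric, scaled so Pe > 0

# Pe,k = [[Pk, -Pik], [Pik, Pk]] and its inverse Qe,k.
Pe = np.block([[Pk, -Pik], [Pik, Pk]])
Qe = np.linalg.inv(Pe)
Qk, Sk = Qe[:n, :n], Qe[:n, n:]        # candidate Q_k and Sigma_k blocks
```

The assertions below mirror the identities in the text: the inverse inherits the $\left[\begin{smallmatrix}Q_{k}&\Sigma_{k}\\ -\Sigma_{k}&Q_{k}\end{smallmatrix}\right]$ pattern, $Q_{k}^{-1}=P_{k}-\Pi_{k}^{\top}P_{k}^{-1}\Pi_{k}$, and $\Sigma_{k}=P_{k}^{-1}\Pi_{k}Q_{k}$.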

To look for a common state-feedback matrix $K$, even when the matrices $Q_{e,k}$ are different, we take advantage of the results in [16]. We can rewrite (6) as

$\begin{bmatrix}I_{2n}&A_{e,k}^{\top}\end{bmatrix}(\Phi_{\mu}\otimes Q_{e,k})\begin{bmatrix}I_{2n}\\ A_{e,k}\end{bmatrix}\prec 0,$ (7)

where $\Phi_{\mu}=\left[\begin{smallmatrix}2\mu&1\\ 1&0\end{smallmatrix}\right]$ describes the stability region, which in our case is the complex half-plane with real part smaller than $-\mu$. Then, according to [16, Section 3.1], (7) is equivalent to the existence of matrices $X_{1,k},X_{2,k}\in\mathbb{R}^{2n\times 2n}$ satisfying

$(\Phi_{\mu}\otimes Q_{e,k})+\operatorname{He}\left(\begin{bmatrix}A_{e,k}\\ -I_{2n}\end{bmatrix}\begin{bmatrix}X_{1,k}&X_{2,k}\end{bmatrix}\right)\prec 0,$ (8)

where $X_{1,k}$ and $X_{2,k}$ are multipliers that add degrees of freedom by decoupling matrices $Q_{e,k}$ and $A_{e,k}$. Conditions (8) are still necessary and sufficient for $\mu$–synchronization, because they are equivalent to (7).

According to the derivation in [16, Section 3.3], imposing $X_{2,k}=\alpha X_{1,k}$, with $\alpha>0$, does not add conservatism as long as there are $\nu$ independent matrices $X_{1,k}$. We now relax this condition by assuming that matrices $X_{1,k}$ and $X_{2,k}$ have the specific structure

$X_{1,k}:=X_{e}Z_{k},\qquad X_{2,k}:=X_{e}W_{k},$

where $X_{e}:=\left[\begin{smallmatrix}X&0\\ 0&X\end{smallmatrix}\right]=I_{2}\otimes X$, with $X\in\mathbb{R}^{n\times n}$, is a block-diagonal matrix common to all $\nu$ inequalities, while $Z_{k},W_{k}\in\mathbb{R}^{2n\times 2n}$ are different multipliers for every inequality. This assumption introduces conservatism; therefore, the conditions are now only sufficient. However, we can now expand (8) and obtain the bilinear formulation

$\begin{bmatrix}2\mu Q_{e,k}&Q_{e,k}\\ Q_{e,k}&0\end{bmatrix}+\operatorname{He}\left(\begin{bmatrix}\Theta_{k}Z_{k}&\Theta_{k}W_{k}\\ -X_{e}Z_{k}&-X_{e}W_{k}\end{bmatrix}\right)\prec 0,$ (9)

with $\Theta_{k}=(I_{2}\otimes AX)-(\Lambda_{k}\otimes BY)$, where $\Lambda_{k}=\left[\begin{smallmatrix}\alpha_{k}&-\beta_{k}\\ \beta_{k}&\alpha_{k}\end{smallmatrix}\right]$ is related to the $k$-th eigenvalue $\lambda_{k}=\alpha_{k}+\jmath\beta_{k}$ and $Y:=KX$ is a suitable change of variables. An expanded version of (9) with the variables highlighted is provided in equation (12).
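The matrix $\Theta_k$ can be assembled mechanically from $\lambda_k$; the small matrices below are hypothetical placeholders. For a real eigenvalue ($\beta_k=0$), $\Theta_k$ reduces to the block-diagonal $I_2\otimes(AX-\lambda_k BY)$.

```python
import numpy as np

def theta_k(A, B, X, Y, lam):
    """Theta_k = (I_2 ⊗ A X) - (Lambda_k ⊗ B Y), lambda_k = alpha_k + j beta_k."""
    Lam = np.array([[lam.real, -lam.imag],
                    [lam.imag,  lam.real]])
    return np.kron(np.eye(2), A @ X) - np.kron(Lam, B @ Y)

# Hypothetical data (illustration only); Y plays the role of K X.
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
X = np.eye(2)
Y = np.array([[1.0, 0.5]])
```

The off-diagonal blocks of $\Theta_k$ carry the imaginary part $\beta_k$ of the eigenvalue: the top-right block equals $\beta_k BY$.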

Constraints (9) alone might lead to badly conditioned optimized selections of $K$, because the joint spectral abscissa of $A_{e,k}$ for all $k=1,\ldots,\nu$ may potentially grow unbounded for arbitrarily large values of $K$. Thus, as a possible sample formulation of a multi-objective optimization, we fix a maximum desired norm $\bar{\kappa}$ for $K$ and enforce the constraint $\|K\|\leq\bar{\kappa}$ through the following LMI formulation:

$\begin{bmatrix}X+X^{\top}-I&Y^{\top}\\ Y&\bar{\kappa}^{2}I\end{bmatrix}\succ 0,$ (10)

stemming from the expression $K^{\top}K\preceq\bar{\kappa}^{2}I$ after applying a Schur complement and exploiting $(X-I)(X^{\top}-I)\succeq 0$.
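A numerical sanity check of the gain-bound LMI (10): with the particular choice $X=I$, so that $Y=KX=K$, positive definiteness of the LMI matrix is equivalent to $\|K\|<\bar{\kappa}$ by the Schur complement. The gain below is a hypothetical example with norm 5.

```python
import numpy as np

def gain_bound_lmi(X, Y, kappa):
    """Left-hand side of the norm-bound LMI (10)."""
    n, m = X.shape[0], Y.shape[0]
    return np.block([[X + X.T - np.eye(n), Y.T],
                     [Y, kappa ** 2 * np.eye(m)]])

def is_positive_definite(M):
    # M is symmetric by construction, so eigvalsh applies.
    return bool(np.min(np.linalg.eigvalsh(M)) > 0)

K = np.array([[3.0, 4.0]])   # hypothetical gain, ||K|| = 5
X = np.eye(2)                # then Y = K X = K
```

With $\bar{\kappa}=6$ the LMI holds ($36>\|K\|^2=25$); with $\bar{\kappa}=4$ it fails.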

We then suggest to synthesize a state-feedback matrix $K$ satisfying $\|K\|\leq\bar{\kappa}$ and maximizing $\mu$ by solving the bilinear optimization problem

$\max_{X,\,Y,\,Z_{1},\dots,Z_{\nu},\,W_{1},\dots,W_{\nu},\,Q_{e,1},\dots,Q_{e,\nu}}\ \mu,$ subject to: (11)
(10), $Q_{e,k}\succ 0$, and BMI (9), for all $k=1,\dots,\nu$,

where alternative performance-oriented linear constraints can be straightforwardly included, and then selecting $K=YX^{-1}$. An alternating approach can be used to make each step quasi-convex and solve the problem iteratively with standard LMI techniques.

$\begin{bmatrix}2\mu\left[\begin{smallmatrix}Q_{k}&-\Sigma_{k}\\ \Sigma_{k}&Q_{k}\end{smallmatrix}\right]&\left[\begin{smallmatrix}Q_{k}&-\Sigma_{k}\\ \Sigma_{k}&Q_{k}\end{smallmatrix}\right]\\ \left[\begin{smallmatrix}Q_{k}&-\Sigma_{k}\\ \Sigma_{k}&Q_{k}\end{smallmatrix}\right]&0\end{bmatrix}+\operatorname{He}\left(\begin{bmatrix}\bigl((I_{2}\otimes AX)-(\Lambda_{k}\otimes BY)\bigr)Z_{k}&\bigl((I_{2}\otimes AX)-(\Lambda_{k}\otimes BY)\bigr)W_{k}\\ -(I_{2}\otimes X)Z_{k}&-(I_{2}\otimes X)W_{k}\end{bmatrix}\right)\prec 0.$ (12)
Data: $A_{e,k}$, $k=1,\ldots,\nu$, tolerance
Result: $K$
$Z_{k}\leftarrow I_{2n}$;
$W_{k}\leftarrow\alpha I_{2n}$;
while $|\mu_{\mathrm{prev}}-\mu_{S}|>$ tolerance do
    $\mu_{\mathrm{prev}}=\mu_{S}$;
    -- synthesis step in Sec. III-B --
    $(\mu_{S},X,Y,Q_{e,k})=$ solve (11) given $(Z_{k},W_{k})$;
    -- analysis step in Sec. III-B --
    $(\mu_{A},Z_{k},W_{k},Q_{e,k})=$ solve (11) given $(X,Y)$;
end while
$K=YX^{-1}$;
Algorithm 1 Iterative design of control matrix $K$
Remark 1

The most natural way to include the coefficient $\mu$ in the conditions is the one inspired by the techniques in [16], leading to the formulation in (9), where $\mu$ defines the stability region in the complex plane and the problem results in a generalized eigenvalue problem (GEVP). As an alternative, $\mu$ could be introduced as a destabilizing effect acting on the matrices $A_{e,k}$ (shifting their eigenvalues to the right in the complex plane), which are still required to be Hurwitz:

$\begin{bmatrix}0&Q_{e,k}\\ Q_{e,k}&0\end{bmatrix}+\operatorname{He}\left(\begin{bmatrix}A_{e,k}+2\mu I_{2n}\\ -I_{2n}\end{bmatrix}\begin{bmatrix}X_{1,k}&X_{2,k}\end{bmatrix}\right)\prec 0.$

However, with this formulation, the problem is no longer a GEVP. This complicates the implementation, since the feasibility domain with respect to $\mu$ could be bounded (while in a GEVP it is right- or left-unbounded) and bisection is not appropriate. We tested this alternative approach in simulation and obtained results similar to those achieved with Algorithm 1, with the advantage of typically reaching convergence after a significantly lower number of iterations.

III-B Iterative algorithm

In order to solve the BMI optimization problem (11) with convex techniques, we focus our attention on BMI (9), since the other constraints are linear, and we propose an iterative approach for the problem solution, described in Algorithm 1. The algorithm is composed of two steps:
1) Synthesis step: for given fixed multipliers $Z_{k}$ and $W_{k}$, $k=1,\ldots,\nu$, optimization (11) is solved in the decision variables $(\mu,X,Y,Q_{e,k})$, which corresponds to a generalized eigenvalue problem (easily solvable by a bisection algorithm) due to the fact that matrices $Q_{e,k}$ are all positive definite;
2) Analysis step: for given fixed matrices $X$ and $Y$, optimization (11) is solved in the decision variables $(\mu,Z_{k},W_{k},Q_{e,k})$, which again corresponds to a generalized eigenvalue problem due to positive definiteness of $Q_{e,k}$.

Algorithm 1 essentially comprises iterations of the two steps above, until parameter $\mu$ increases less than a specified tolerance over two steps. To establish a useful means of comparison in the simulations reported in Section IV, we naively initialize the algorithm by fixing the initial multipliers as scaled identity matrices (with $\alpha>0$ properly tuned). More generally, we emphasize that, using the Riccati construction of [10], stabilizability of $(A,B)$ is sufficient for ensuring the existence of a Riccati-based solution of (11), and it is immediate to design an infinite gain margin solution where all the matrices $A_{e,k}$ share a common quadratic Lyapunov function. We do not pursue this initialization here, so as to perform a fair comparison between our algorithm (initialized in a somewhat naive way) and the construction resulting from [10].

The following proposition establishes useful properties of the algorithm.

Proposition 2

For any selection of the tolerance, if the initial condition is feasible, then all the iterations of Algorithm 1 are feasible. Moreover, $\mu$ never decreases across two successive iterations. Finally, the algorithm terminates in a finite number of steps.

Proof.

About recursive feasibility, note that once the first step is feasible, for any pair of successive steps, the optimal solution to the previous step is structurally a feasible solution to the subsequent step. Indeed, the variables that are frozen at each iteration are selected by using their optimal values from the previous step. Since the cost maximized at each step is always $\mu$, recursive feasibility is guaranteed and $\mu$ can never decrease across subsequent steps.

About the algorithm terminating in a finite number of steps, note that the optimal value of $\mu$ is necessarily upper bounded by a maximum achievable $\overline{\mu}$ depending on the norms of matrices $A$ and $B$, on the eigenvalue of $L$ having minimum norm, and on the bound $\bar{\kappa}$ imposed on the norm of the state-feedback matrix $K$. Since $\mu$ monotonically increases across iterations and is upper bounded, it must converge to a value $\mu^{\star}$ and eventually satisfy any tolerance limit across pairs of consecutive iterations. ∎

Remark 2

Computationally speaking, each iteration of Algorithm 1 amounts to solving a GEVP because $\mu$ multiplies a sign-definite matrix $Q_{e,k}\succ 0$, and hence the conditions are monotonic with respect to $\mu$. Therefore, we can find the optimal $\mu$ via a bisection algorithm: if the problem is feasible for $\mu=\mu^{\star}$, then it is feasible for all $\mu\leq\mu^{\star}$; on the other hand, if the problem is infeasible for $\mu=\mu^{\star}$, then it is infeasible for all $\mu\geq\mu^{\star}$. Our objective is to find the maximum $\mu$ for which the problem is feasible (so that no larger $\mu$ leads to feasibility).
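The monotone feasibility structure described in this remark is exactly what bisection exploits. A generic sketch, with a toy feasibility predicate standing in for the LMI solver call:

```python
def bisect_max_feasible(feasible, lo, hi, tol=1e-3):
    """Largest mu in [lo, hi] with feasible(mu) True, assuming that
    feasibility at mu implies feasibility at every mu' <= mu."""
    if not feasible(lo):
        return None          # infeasible even at the lower end
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid         # mid feasible: the optimum lies above
        else:
            hi = mid         # mid infeasible: the optimum lies below
    return lo

# Toy predicate standing in for "LMIs (9)-(10) are feasible at this mu".
mu_star = bisect_max_feasible(lambda mu: mu <= 2.0, 0.0, 10.0)
```

At termination the returned value is within the tolerance of the true feasibility boundary.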

IV Comparison and Simulations

To test the effectiveness of Algorithm 1, we compare it with other design procedures that solve the simultaneous stabilization problem. The benchmark problem is the maximization of the rate $\mu$ with the norm of $K$ upper bounded by $\bar{\kappa}$, as induced by constraint (10).

Figure 1: Left: topologies of the considered graphs. Right: eigenvalues of the Laplacian matrix; the black cross denotes $\lambda_{0}=0$, green full dots denote all the other eigenvalues. The values considered in method “b” are visualized as squares and the relative set is delimited by a dashed line.

TABLE I: Simulation results for dynamics $A_{\text{X-29}}$ and $A_{\text{osc}}$ with different network topologies. Methods: “a” Riccati, “b” Listmann, “c” $A_{e,k}$, “d” Direct, “e” Algorithm 1; “#iter” is the number of iterations of Algorithm 1.

                               A_X-29                                    A_osc
graph      metric    “a”     “b”     “c”     “d”     “e”  #iter    “a”     “b”     “c”     “d”     “e”  #iter
L_cpx,5    mu_hat   0.657   0.654   0.684   0.686   1.197   14    1.922   3.853   4.005   4.016   4.684    5
           kappa    20.00   16.23   17.95   18.05   20.00         20.00   16.72   15.93   15.93   19.99
           t (s)    0.125   1.562   1.109   2.703   78.81         0.047   0.516   0.531   0.719   9.156
L_cpx,10   mu_hat   0.107   0.171   0.178   0.178   0.374   20    1.466   1.968   1.987   1.990   2.209    8
           kappa    20.00   16.37   16.47   16.58   20.00         20.00   17.43   17.28   17.28   19.99
           t (s)    0.047   0.984   1.109   3.594   168.9         0.031   0.375   0.562   2.359   22.17
L_o,4      mu_hat   0.577   0.654   0.654   0.654   1.096   12    1.972   3.853   3.853   3.853   4.254    6
           kappa    20.00   16.23   16.23   16.24   20.00         20.00   16.72   16.72   16.73   20.00
           t (s)    0.031   0.953   0.812   3.000   56.25         0.000   0.219   0.359   0.703   8.062
L_o,10     mu_hat   0.093   0.075   0.075   0.075   0.368   94    1.177   1.403   1.403   1.402   1.517   18
           kappa    19.97   16.84   16.85   16.85   19.95         20.00   17.94   17.94   17.95   19.95
           t (s)    0.062   1.266   1.828   3.859   614.8         0.000   0.391   0.438   0.781   34.39
L_star,10  mu_hat   0.657   0.715   0.715   0.715   1.201   12    2.401   4.147   4.147   4.148   5.818   66
           kappa    20.00   19.93   19.93   19.93   19.99         20.00   14.00   14.00   14.00   20.00
           t (s)    0.062   0.922   0.828   1.344   36.09         0.000   0.438   0.188   0.531   61.02

IV-A Dynamical system and network

In our simulations we consider two types of agent dynamics: one is oscillatory,

$(A_{\text{osc}},B_{\text{osc}})=\left(\begin{bmatrix}0&-1\\ 1&0\end{bmatrix},\begin{bmatrix}0\\ 1\end{bmatrix}\right);$

while the second one is the unstable lateral dynamics of a forward-swept wing, the Grumman X-29A, as in [14],

$(A_{\text{X-29}},B_{\text{X-29}})=\left(\left[\begin{smallmatrix}-2.059&0.997&-16.55&0\\ -0.1023&-0.0679&6.779&0\\ -0.0603&-0.9928&-0.1645&0.04413\\ 1&0.07168&0&0\end{smallmatrix}\right],\left[\begin{smallmatrix}1.347&0.2365\\ 0.09194&-0.07056\\ -0.0006141&0.0006866\\ 0&0\end{smallmatrix}\right]\right).$
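The standing controllability assumption on $(A,B)$ can be checked with the Kalman rank test; here we verify it for the oscillator pair $(A_{\text{osc}},B_{\text{osc}})$ above (the helper function is our own, not from the paper).

```python
import numpy as np

def is_controllable(A, B):
    """Kalman rank test: rank [B, AB, ..., A^{n-1}B] = n."""
    n = A.shape[0]
    C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    return np.linalg.matrix_rank(C) == n

A_osc = np.array([[0.0, -1.0], [1.0, 0.0]])
B_osc = np.array([[0.0], [1.0]])
```

The oscillator is controllable but only marginally stable, with eigenvalues $\pm\jmath$ on the imaginary axis, so feedback is genuinely needed to enforce an exponential synchronization rate.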

We consider five communication networks: two circular graphs with $N=4$ and $N=10$ agents, two generic directed graphs with $N=5$ and $N=10$ agents characterized by complex eigenvalues, and a star graph with $N=10$ agents. The graph topologies and the eigenvalues of the associated Laplacian matrices are visualized in Fig. 1.

IV-B Compared approaches

The approaches that we compare are the following ones:

  a. [Riccati], the dual case of [10, Section 5.5]: the gain is structured as $K=B^{\top}P$, where $P$ is the unique solution to the algebraic Riccati equation

    $A^{\top}P+PA-2bPBB^{\top}P+aI=0,$ (13)

    with $b\leq\operatorname{Re}(\lambda_{k})$ and $a>0$. We solve (13) by fixing $b=\min(\operatorname{Re}(\lambda_{k}))$ and adjusting the value of $a$ so as to respect the bound $\|K\|\leq\bar{\kappa}$, which is easily done due to the monotonicity of $\|K\|$ with respect to $a$.

  b. [Listmann] from [14]: LMI conditions (6) with $Q_{e,k}=\left[\begin{smallmatrix}Q&0\\ 0&Q\end{smallmatrix}\right]$ and $Y=KQ$ are imposed for the $\lambda_{k}$ corresponding to the corners of a rectangular box in the complex plane that includes all non-zero eigenvalues of $L$ (considering the eigenvalues in the first quadrant is enough, since conjugate eigenvalues lead to the same condition), while incorporating in the LMI-based design the maximum norm condition (10). A fixed number of LMIs needs to be solved regardless of the network size.

  c. [$A_{e,k}$]: the method resembles that in “b”, but now conditions (6) are imposed for each $\lambda_{k}$, $k=1,\dots,\nu$.

  d. [Direct]: one iteration of Algorithm 1 is executed, which amounts to solving (11) with $Z_{k}=I_{2n}$, $W_{k}=\alpha I_{2n}$ and $\alpha>0$ properly tuned. Notably, matrices $Q_{e,k}$ do not have a pre-defined structure.

  e. [Algorithm 1]: the procedure in Algorithm 1 is executed up to its termination, as guaranteed by Proposition 2.
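Method “a” can be sketched with SciPy's CARE solver: equation (13) is a standard continuous-time algebraic Riccati equation with $Q=aI$ and $R=I/(2b)$. The numerical data below are a hypothetical double integrator, not one of the benchmark systems.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def riccati_gain(A, B, a, b):
    """K = B^T P with P solving A^T P + P A - 2 b P B B^T P + a I = 0,
    i.e., the CARE with Q = a I and R = I / (2 b)."""
    n, m = A.shape[0], B.shape[1]
    P = solve_continuous_are(A, B, a * np.eye(n), np.eye(m) / (2.0 * b))
    return B.T @ P, P

# Hypothetical double integrator (illustration only).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K, P = riccati_gain(A, B, a=1.0, b=0.5)
```

One can then verify that $P$ satisfies (13) and that, e.g., $A-\lambda BK$ is Hurwitz for a real Laplacian eigenvalue $\lambda\geq b$.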

In the simulations, the convergence rate of the solutions is estimated from the spectral abscissa of matrices $A_{k}$ in (5a), namely, from the largest-real-part eigenvalue:

$\hat{\mu}=-\max\bigl(\operatorname{Re}(\operatorname{eig}(A_{k}))\bigr).$

IV-C Results

We implement the different procedures in MATLAB, using the toolbox YALMIP [17] with MOSEK as an LMI solver. For the algorithm, we consider a tolerance of $10^{-3}$ and $\bar{\kappa}=20$ as the bound on the norm of $K$.

For the different combinations of dynamics and graph topologies, Table I reports a summary of all our results, along with the estimated convergence rate $\hat{\mu}$, the norm of $K$ and the execution time. The time evolutions of the distances from the synchronization set $\mathcal{A}$ are shown in Figs. 2 and 3 for the approaches “a”, “b” and “e”.

Generally, method “a” performs worse than the considered LMI-based methods. The gain bound is reached, but the convergence rate is the slowest, most likely because the approach forces an infinite gain margin for $K$. Locating the eigenvalues of the $A_{k}$ matrices in the complex plane shows that method “a” tends to move a few eigenvalues (faster modes) to the left while penalizing others, so that the convergence speed is limited.

Method “b” performs similarly to “c”, as expected, since the two methods simply consider different (eigen)values. In general, method “b” is more conservative than “c”, but is faster in larger networks. With $L_{\mathrm{cpx},5}$ and $L_{\mathrm{cpx},10}$, method “b” is slightly more conservative, as is reasonable, since it considers values that are not in the spectrum of the Laplacian.

Method “d” generally yields better results than “b” and “c”, provided that proper values of the parameter $\alpha$ are chosen. This improvement is due to the decoupling between matrices $A_{e,k}$ and $Q_{e,k}$.

The best results are obtained using our proposed Algorithm 1, which gives the highest convergence rate and gets close to the specified gain bound. However, the computational load is higher, since several iterations are needed. With dynamics $A_{\text{X-29}}$, the states always converge faster; with dynamics $A_{\text{osc}}$, the performance is similar to that obtained with the other non-iterative LMI-based techniques. Algorithm 1 outperforms the other procedures in the case with dynamics $A_{\text{X-29}}$ and graph $L_{\circ,10}$: even though it requires quite a few iterations to converge, it provides a controller that leads to almost one-order-of-magnitude faster convergence than the others, as shown in Fig. 2.

Figure 2: Time evolution of the distance of the states from the synchronization set $\mathcal{A}$ for agents with dynamics $A_{\text{X-29}}$. The methods “a”, “b” and “e” are compared. The inset figures zoom into the second half of the time window.
Figure 3: Time evolution of the distance of the states from the synchronization set $\mathcal{A}$ for agents with dynamics $A_{\text{osc}}$. The methods “a”, “b” and “e” are compared. The inset figures zoom into the second half of the time window.

V Conclusions

We focused on the synchronization of identical linear systems in the case of full-state feedback. First, we provided necessary and sufficient conditions for synchronization. Then, we relaxed them to obtain a new formulation that can be solved iteratively through LMIs. Although this new procedure for solving the simultaneous stabilization problem requires relatively long computational times, it gives the best results in our benchmark problem, where the convergence rate is maximized under given performance constraints.

Our results pave the way for further developments, such as the use of alternative methods (e.g., convex-concave decomposition) to deal with BMIs and the extension to the case of static output feedback control laws.

Acknowledgment: The authors would like to thank Domenico Fiore for his work in the initial stages of this research activity.

References

  • [1] R. Olfati-Saber, J. A. Fax, and R. M. Murray, “Consensus and Cooperation in Networked Multi-Agent Systems,” Proceedings of the IEEE, vol. 95, no. 1, pp. 215–233, 2007.
  • [2] L. Scardovi and R. Sepulchre, “Synchronization in networks of identical linear systems,” Automatica, vol. 45, no. 11, pp. 2557–2562, 2009.
  • [3] M. Mesbahi and M. Egerstedt, Graph Theoretic Methods in Multiagent Networks.   Princeton University Press, 2010.
  • [4] F. Xiao and L. Wang, “Consensus problems for high-dimensional multi-agent systems,” IET Control Theory & Applications, vol. 1, no. 3, pp. 830–837, 2007.
  • [5] C.-Q. Ma and J.-F. Zhang, “Necessary and Sufficient Conditions for Consensusability of Linear Multi-Agent Systems,” IEEE Transactions on Automatic Control, vol. 55, no. 5, pp. 1263–1268, 2010.
  • [6] Z. Li, Z. Duan, G. Chen, and L. Huang, “Consensus of Multiagent Systems and Synchronization of Complex Networks: A Unified Viewpoint,” IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 57, no. 1, pp. 213–224, 2010.
  • [7] L. Dal Col, S. Tarbouriech, L. Zaccarian, and M. Kieffer, “A consensus approach to PI gains tuning for quality-fair video delivery,” International Journal of Robust and Nonlinear Control, vol. 27, no. 9, pp. 1547–1565, 2017.
  • [8] G. Giordano, I. Queinnec, S. Tarbouriech, L. Zaccarian, and N. Zaupa, “Equivalent conditions for the synchronization of identical linear systems over arbitrary interconnections,” 2023. [Online]. Available: https://arxiv.org/abs/2303.17321
  • [9] J. Qin, H. Gao, and W. X. Zheng, “Exponential Synchronization of Complex Networks of Linear Systems and Nonlinear Oscillators: A Unified Analysis,” IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 3, pp. 510–521, 2015.
  • [10] A. Isidori, Lectures in Feedback Design for Multivariable Systems.   Springer International Publishing, 2017, ch. 5.
  • [11] F. Bullo, Lectures on Network Systems.   Kindle Publishing, 2022, ch. 8.
  • [12] A. Saberi, A. A. Stoorvogel, M. Zhang, and P. Sannuti, Synchronization of Multi-Agent Systems in the Presence of Disturbances and Delays.   Springer International Publishing, 2022.
  • [13] H. L. Trentelman, K. Takaba, and N. Monshizadeh, “Robust Synchronization of Uncertain Linear Multi-Agent Systems,” IEEE Transactions on Automatic Control, vol. 58, no. 6, pp. 1511–1523, 2013.
  • [14] K. D. Listmann, “Novel conditions for the synchronization of linear systems,” in 54th IEEE Conference on Decision and Control, 2015.
  • [15] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory.   SIAM, 1994.
  • [16] G. Pipeleers, B. Demeulenaere, J. Swevers, and L. Vandenberghe, “Extended LMI characterizations for stability and performance of linear systems,” Systems & Control Letters, vol. 58, no. 7, pp. 510–518, 2009.
  • [17] J. Löfberg, “YALMIP: A toolbox for modeling and optimization in MATLAB,” in Proceedings of the CACSD Conference, 2004.