
High Performance Distributed Control Law for Large-Scale Linear Systems: A Partitioned Distributed Observer Approach

Haotian Xu, Shuai Liu, and Ling Shi

This work is supported by the National Natural Science Foundation of China (Nos. 61533013, 61633019, 61903290) and the China Postdoctoral Science Foundation (No. 2022M721932). The corresponding author is Shuai Liu. H. Xu is with the Department of Automation, Shanghai Jiao Tong University, Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240, China (e-mail: [email protected]; [email protected]). L. Shi is with the Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China (e-mail: [email protected]; [email protected]).
Abstract

In recent years, the distributed-observer-based distributed control law has shown a powerful ability to arbitrarily approximate centralized control performance. However, the traditional distributed observer requires each local observer to reconstruct the state of the whole system, which is unrealistic for large-scale scenarios. To fill this gap, this paper develops a large-scale system partition algorithm based on a greedy idea, which can significantly reduce the dimension of local observers. Then, the partitioned distributed observer for large-scale systems is proposed to overcome the problem that the system dynamics are difficult to estimate due to the coupling between partitions. Furthermore, a two-layer Lyapunov analysis method is adopted and a dynamic transformation lemma of compact errors is proved, which solves the problem of analyzing the stability of the error dynamics of the partitioned distributed observer. Finally, it is proved that the distributed control law based on the partitioned distributed observer can also arbitrarily approximate the performance of the centralized control law, while the dimension of the local observers is greatly reduced compared with the traditional method. The simulation results show that when the similarity between the physical network and the communication network is about 80\%, the local observer dimension is reduced by 90\% and the relative error between the performance of the distributed control law and that of the centralized control law is less than 1\%.

Index Terms:
Distributed observer, Continuous time system estimation, Consensus, Sensor networks, Switching topologies

I Introduction

A distributed observer is a cooperative observer network composed of N agents and a communication network. Each agent is a node of the network and contains a local observer as well as a local controller. Each local observer only has access to partial system outputs, and all local observers cooperate to complete the state estimation of the whole system through information interaction over the communication network. The distributed observer has extensive application value and profound theoretical value. In terms of application value, scholars have enabled distributed observers to play important roles in flexible structures [1], smart vehicles [2], microgrids [3, 4], deep-sea detectors [5, 6], spacecraft in low Earth orbit [7], and other fields. In terms of theoretical value, the distributed observer can help the distributed control law improve its performance with a qualitative change. Readers may refer to [2, 8, 9], [10], and [11] for this theoretical value; these works prove that the distributed-observer-based distributed control law achieves the same performance as the centralized control law. Owing to its broad application prospects and outstanding theoretical significance, the distributed observer has become a center of research in recent years [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 24, 25, 26].

It should be noted that the current distributed observer has an obvious shortcoming: it requires every agent to reconstruct the states of the whole system. In other words, if the system dimension is n, then the dimension of the local observer on every agent is also n, which is the origin of a special term in the field of distributed observers: state omniscience. Accordingly, state omniscience is barely achievable for a large-scale system because of the high dimensions of the local observers. After all, we cannot require each agent to compute a local observer with the same dimension as the large-scale system; otherwise, we would lose the original intention of the distributed design. Although a few articles have studied the minimum-order distributed observer [28] or reduced-order distributed observer [29], they replace the N full-dimension observers contained in the local observer with N reduced-dimension observers. This does not essentially solve the problem of the high dimension of local observers caused by large-scale systems.

This paper will explore whether there is a method that can effectively reduce the dimension of local observers on each agent. One of the biggest challenges in studying this problem is to reduce the dimension of local observers without affecting their ability to help distributed control approach centralized control performance. According to [30], we know that the reason why distributed control law cannot achieve the same performance as that of the centralized control law is that distributed control law cannot obtain as much state information as centralized control. However, in [2], an agent can estimate the whole system states through the state omniscience of distributed observer. It indicates that state omniscience is an important factor in why the distributed-observer-based distributed control law has centralized control performance. Therefore, approaching the centralized control performance without relying on state omniscience presents the distributed-observer-based distributed control law with huge challenges.

The idea to overcome these challenges in this paper comes from reflecting on the structure of large-scale interconnected systems. We find that, for large-scale interconnected systems (it is convenient to use a graph to represent the interconnection relationship among subsystems and to call this graph the physical network), the local control law on each agent does not actually need the states of the whole large-scale system. Generally speaking, the states of an agent's physical network neighbors are enough for it to achieve centralized control performance. For example, the states employed in the centralized control law of interconnected microgrid systems are only the states of each agent's physical network neighbors. Inspired by this, can we build a novel distributed observer so that the local observer on each agent only needs to estimate the states of its physical neighbor agents? If feasible, the local observer dimension on each agent can be greatly reduced because the interconnection of a large-scale system is often sparse, and the average number of physical neighbors of each agent is generally far less than the number of all agents. In addition, since the states of all physical neighbors of an agent are estimated, the local control law can use the same state information as centralized control. This indicates that the aforementioned method does not affect the performance of the distributed observer or its role in the distributed control law while reducing the dimension of all local observers.

Building the above-mentioned new distributed observer is a formidable task because a local observer containing only the physical network neighbor states can be established if and only if the physical network and communication network are identical. Unfortunately, physical networks and communication networks are generally different. For this reason, this paper tries to develop a method to partition all nodes. Each partition contains multiple nodes, and each node belongs to multiple partitions. Then, distributed observer can be designed in each partition, and all agents can complete the state estimation of all their physical neighbors through one or more partitioned distributed observers. Hence, the main focus of the partition is on enabling the union of all partitions on an agent to cover all its neighbor agents (this is a necessary condition to ensure that the distributed-observer-based distributed control law has centralized control performance), and also requiring the number of agents in the union of partitions to be as small as possible (reduce the dimension of local observers on each agent as much as possible).

Therefore, the first scientific challenge faced by this paper is how to establish an intelligent partition method that meets the above conditions. There is much research on large-scale network partitioning in the existing literature, but these methods are based on one or more of the following conditions: 1) no intersection between any two partitions [31, 32, 33]; 2) the number of partitions is known in advance [34]; 3) there is a clear and single objective function [35, 33, 36]; 4) the node coverage area is partitioned [37, 38]. However, the partition problem in this paper meets none of the above conditions. Furthermore, the existing partition methods are all based on the topology information of only one network, while the topology information of two networks (the physical network and the communication network) must be considered simultaneously in this paper. Therefore, the existing partition methods are not applicable to this work.

The second scientific challenge is how to deal with the error dynamic coupling between partitions. Although the idea of this paper is to partition the large-scale system and build distributed observers for the small-scale systems within the partitions, the distributed observers in the partitions are not independent; they are dynamically coupled with the distributed observers in other partitions. If we do not eliminate this coupling, the observers are not implementable because they include unknown states from other partitions. Conversely, if this coupling is eliminated, we face a model mismatch problem. What is more difficult, even if the coupling relationship is eliminated in the observer design, it still occurs in the error dynamics and leads to difficulty in stability analysis. Therefore, the classical distributed observer theory, which is close to maturity, can be used neither in the partitioned distributed observer design nor in the error dynamic stability analysis.

Focusing on the above challenges, this paper contributes to the following three aspects:

1) We propose a reasonable and feasible partition method for large-scale systems. Furthermore, the partition method proposed in this paper achieves a Pareto optimal solution (each agent has its own most desired partitioning result, so the partitioning problem in this paper is a multi-objective optimization problem).

2) A design method is proposed for the partitioned distributed observer, and a two-layer Lyapunov method is developed to analyze the stability of its error dynamics. The proposed method guarantees the implementability of the partitioned distributed observer. It is also proved that each agent can estimate the states of all its physical network neighbors, and that the estimation error converges to an arbitrarily small invariant set.

3) Although the dimension of each local observer is greatly reduced, the distributed control law based on the partitioned distributed observer can still arbitrarily approximate the control performance of centralized control. It provides an important theoretical basis for the application of distributed observer in large-scale systems.

The remainder of this paper is organized as follows. Section II formulates the problem. Section III gives the partition method and partitioned distributed observer design method. The performance of the error dynamics of partitioned distributed observer and closed-loop system are analyzed in Section IV. The simulation results are given in Section V, and Section VI concludes this paper.

Notations: Let \mathbb{R}^{n} be the Euclidean space of n-dimensional column vectors and \mathbb{R}^{m\times n} be the space of m\times n-dimensional matrices. Denote by A^{T} the transpose of matrix A. \bar{\sigma}(A) and \underline{\sigma}(A) stand for the maximum and minimum eigenvalues of A, respectively, if A=A^{T}. We cast col\{A_{1},\ldots,A_{n}\} as [A_{1}^{T},\ldots,A_{n}^{T}]^{T}, and diag\{A_{1},\ldots,A_{n}\} as a block diagonal matrix with A_{i} on its diagonal, where A_{1},\ldots,A_{n} are matrices with arbitrary dimensions. dim\{\cdot\} represents the dimension of a vector. I_{n}\in\mathbb{R}^{n\times n} is an identity matrix and sym\{A\}=A+A^{T} if A is a square matrix. We denote by |\cdot| the cardinality of a set and by \|\cdot\| the 2-norm of vectors or matrices. Furthermore, since this paper uses many symbols, easily confused ones are listed in Table I, and each symbol is also explained when it first appears.

TABLE I: Symbol description
Symbol : Definition
\mathcal{O}_{p} : the pth partition
\mathscr{P}_{i} : set of partitions containing node i
\mathcal{O}(\mathscr{P}_{i}) : \bigcup_{\mathcal{O}_{j}\in\mathscr{P}_{i}}\mathcal{O}_{j}
D(\mathscr{P}_{i}) : \sum_{\mathcal{O}_{j}\in\mathscr{P}_{i}}|\mathcal{O}_{j}|
\hat{x}_{li}^{(p)} : estimate of x_{i} generated by the observer on agent l in partition \mathcal{O}_{p}
\bar{x}_{li} : \bar{x}_{li}=(1/N_{i,\mathscr{P}_{l}})\sum_{\mathcal{O}_{p}\in\mathscr{P}_{l}}\hat{x}_{li}^{(p)}
\mathbbm{x}_{li} : saturated value of \bar{x}_{li}
e_{\star i}^{(p)} : col\{e_{ji}^{(p)},~j\in\mathcal{O}_{p}\}
e_{i\star} : col\{e_{ij}^{(p)},~j\in\mathcal{O}_{p},~\mathcal{O}_{p}\in\mathscr{P}_{i}\}

II Problem formulation

II-A Overall objectives

Consider a large-scale interconnected system composed of N subsystems (corresponding to N agents), where the ith subsystem takes the form of:

\dot{x}_{i}=\sum_{j=1}^{N}A_{ij}x_{j}+B_{i}u_{i}, (1)
y_{i}=C_{i}x_{i}, (2)

where x_{i}\in\mathbb{R}^{n}, y_{i}\in\mathbb{R}^{p}, and u_{i}\in\mathbb{R}^{m} are the system states, output measurements, and control inputs of the ith subsystem, respectively; x_{j}\in\mathbb{R}^{n} is the state of the jth subsystem; the matrices A_{ii}, B_{i}, and C_{i} represent the system matrix, control matrix, and output matrix of the ith subsystem with compatible dimensions, respectively; and A_{ij}\in\mathbb{R}^{n\times n} stands for the coupling matrix between the ith and jth subsystems. Let A=[A_{ij}]_{i,j=1}^{N}, B=diag\{B_{1},\ldots,B_{N}\}, and C=diag\{C_{1},\ldots,C_{N}\} (block diagonal, since y_{i}=C_{i}x_{i}); then the compact form of the large-scale system is given by

\dot{x}=Ax+Bu, (3)
y=Cx, (4)

where x=col\{x_{1},\ldots,x_{N}\}\in\mathbb{R}^{nN}, y=col\{y_{1},\ldots,y_{N}\}\in\mathbb{R}^{pN}, and u=col\{u_{1},\ldots,u_{N}\}\in\mathbb{R}^{mN}.
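As a concrete sketch of how the compact form (3)–(4) is assembled from the subsystem blocks (the sizes and random data below are illustrative assumptions, not from the paper):

```python
import numpy as np

def block_diagonal(mats):
    """Stack matrices along the diagonal: diag{M_1, ..., M_N}."""
    rows = sum(M.shape[0] for M in mats)
    cols = sum(M.shape[1] for M in mats)
    out = np.zeros((rows, cols))
    r = c = 0
    for M in mats:
        out[r:r + M.shape[0], c:c + M.shape[1]] = M
        r += M.shape[0]
        c += M.shape[1]
    return out

# Hypothetical sizes: N = 3 subsystems, each with n = 2 states,
# m = 1 input, and p = 1 output.
N, n, m, p = 3, 2, 1, 1
rng = np.random.default_rng(0)
A_blocks = [[rng.standard_normal((n, n)) for _ in range(N)] for _ in range(N)]
B_blocks = [rng.standard_normal((n, m)) for _ in range(N)]
C_blocks = [rng.standard_normal((p, n)) for _ in range(N)]

A = np.block(A_blocks)        # A = [A_ij], shape (nN, nN)
B = block_diagonal(B_blocks)  # B = diag{B_1, ..., B_N}, shape (nN, mN)
C = block_diagonal(C_blocks)  # y_i = C_i x_i, so C is block diagonal, shape (pN, nN)
```

The shapes match the compact vectors x \in \mathbb{R}^{nN}, u \in \mathbb{R}^{mN}, and y \in \mathbb{R}^{pN}.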

We construct a physical network for this large-scale system according to its interconnection. Let \mathcal{V} be the node (agent) set, and we say (j,i) is an edge connecting j and i if \|A_{ij}\|\neq 0. Then, the edge set is defined by \mathcal{E}_{p}=\{(i,j):~\|A_{ji}\|\neq 0\}. Accordingly, the adjacency matrix is defined as \mathcal{A}_{p}=[\beta_{ij}]_{i,j=1}^{N}, where \beta_{ij}=1 if (j,i)\in\mathcal{E}_{p}. Let \mathcal{N}_{i}=\{j:~\beta_{ij}=1\} be the set of physical network neighbors of agent i. The physical network can be directed or undirected, and the schematic diagrams in this paper all use undirected graphs; note that when the physical network is undirected, the system matrix is assumed to satisfy the constraint \|A_{ij}\|=0\Leftrightarrow\|A_{ji}\|=0, while for a directed physical network the system matrix has no constraints. Moreover, an undirected graph \mathcal{G}_{c}=\{\mathcal{V},\mathcal{E}_{c},\mathcal{A}_{c}\} represents the communication network among all nodes, where \mathcal{E}_{c} and \mathcal{A}_{c}=[\alpha_{ij}]_{i,j=1}^{N} are the associated edge set and adjacency matrix, respectively. Then, the set of communication network neighbors of agent i is given by \mathcal{C}_{i}=\{j:~\alpha_{ij}=1\}. The Laplacian matrix associated with \mathcal{G}_{c} is \mathcal{L}_{c}=\mathcal{D}_{c}-\mathcal{A}_{c}, where \mathcal{D}_{c}=diag\{d_{i},~i=1,\ldots,N\} with d_{i}=-\alpha_{ii}+\sum_{j=1}^{N}\alpha_{ij}.
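The construction of \mathcal{N}_{i} from the coupling blocks can be sketched as follows (a minimal illustration with an assumed three-subsystem chain, not the paper's system):

```python
import numpy as np

def physical_neighbors(A_blocks):
    """Derive N_i = {j : ||A_ij|| != 0, j != i} from the coupling blocks A_ij."""
    N = len(A_blocks)
    beta = [[0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            if i != j and np.linalg.norm(A_blocks[i][j]) != 0.0:
                beta[i][j] = 1  # (j, i) is an edge of the physical network
    return {i: {j for j in range(N) if beta[i][j]} for i in range(N)}

# Hypothetical chain of 3 subsystems, 0-1-2: only adjacent blocks are nonzero.
Z = np.zeros((2, 2))
I = np.eye(2)
A_blocks = [[I, I, Z],
            [I, I, I],
            [Z, I, I]]
print(physical_neighbors(A_blocks))
```

Note that \|A_{ij}\|=\|A_{ji}\| being zero or nonzero together, as required for an undirected physical network, holds for this symmetric example.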

Figure 1: Architecture diagram of distributed observer.

In the traditional distributed observer, each local observer—the orange block in Figure 1—is required to reconstruct all the states of the entire system based on y_{i} and the information exchanged via the communication network. Then, each local controller—the blue block in Figure 1—can use all the information of the entire system. Therefore, all local observers can jointly establish a distributed control law with centralized performance. Many existing studies, such as [2, 9], have proved this result, i.e., the distributed-observer-based distributed control law can achieve centralized control performance. However, in these studies, each local observer needs to reconstruct the states of the whole system, i.e., the dimension of the observers on each agent is nN. This is not realistic for large-scale systems. Therefore, this paper investigates whether it is possible to achieve a distributed control law that arbitrarily approximates centralized control performance without requiring each local observer to estimate global information.

To formulate the problem, we assume that there is a centralized control law u(x)=col\{u_{1}(x),\ldots,u_{N}(x)\} such that system (3) is stabilized, and denote by x_{c}(t) the solution of (3) under the centralized control law. We further assume a distributed control law \bar{u}_{i}(\hat{x}_{i\star}) such that x_{r}(t), the solution of (3) with u=\bar{u}(\hat{x}_{i\star}), is stabilized, where \hat{x}_{i\star} is the state estimate generated by the ith local observer located at the ith agent, and \bar{u}(\hat{x}_{i\star})=col\{\bar{u}_{i}(\hat{x}_{i\star}),~i=1,\ldots,N\}. Subsequently, the overall goal of this paper consists of the following two parts.

1) Design the partitioned distributed observer with \hat{x}_{i\star} being the states of the ith local observer, where \hat{x}_{i\star} satisfies dim\{\hat{x}_{i\star}\}\ll dim\{x\};

2) Design a distributed-observer-based distributed control law \bar{u}(\hat{x}_{i\star}) such that x_{r}(t) can approach x_{c}(t) arbitrarily closely, i.e., \|x_{c}(t)-x_{r}(t)\|<\varepsilon, \forall\varepsilon>0 and \forall t\geq 0.

II-B An illustrative example for the target problem

Figure 2: Partition examples of simple network systems.

In this subsection, we illustrate an example to help readers understand the problem proposed in the above subsection. As shown in Figure 2, the coupling relationship of a large-scale system is given by the physical network, in which each node represents a subsystem. Assume that each subsystem has a corresponding agent, and that the communication relationships among all agents are given by the communication network. The ith agent has access to the information of y_{i}, and is supposed to establish several local observers and a local control law.

We take node 13 as an example to demonstrate the specific issues to be studied in this paper. As seen in Figure 2, node 13 has 5 physical neighbors (11, 12, 14, 15, 16) and 3 communication neighbors (11, 14, 16). To design a local control law whose performance is as close as possible to that of the centralized control law, agent 13 needs the states of all its physical neighbors. In the traditional method, a distributed observer would be designed such that agent 13 reconstructs all states of the entire system, even though only the states of 11, 12, 14, 15, 16 are required. Therefore, this paper intends to design a group of partitions and design a distributed observer in each partition. In this way, the useless information estimated by each agent can be greatly reduced. For example, in Figure 2, node 13 belongs to 3 partitions: \{11,12,13\}, \{13,15,16\}, and \{13,14\}. By designing distributed observers separately in these three partitions, agent 13 only needs to estimate the states of 5 subsystems (in the traditional method, agent 13 needs to estimate the states of all 47 subsystems). With partitioned distributed observers, local control laws can be established based on the results of state estimation.
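The bookkeeping in this example can be checked in a few lines (the numbers are those given in the text; the snippet itself is purely illustrative):

```python
# Data for agent 13 as given in the text: its physical neighbors and the
# three partitions containing it.
neighbors_13 = {11, 12, 14, 15, 16}
partitions_13 = [{11, 12, 13}, {13, 15, 16}, {13, 14}]

# The union of agent 13's partitions must cover all of its physical neighbors.
union = set().union(*partitions_13)
assert neighbors_13 <= union

# Agent 13 estimates its 5 neighbor subsystems instead of all 47.
assert len(union - {13}) == 5
```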

Up to now, we have briefly described the main idea of this paper through an example. To achieve this idea, there are mainly three steps involved: 1) Design a reasonable and effective partitioning algorithm; 2) Design partitioned distributed observer in each partition; 3) Design distributed control law.

Before achieving these steps, we first specify two basic assumptions and an important lemma.

Assumption 1

The pair (A,B) is assumed to be controllable, and (C_{i},A_{ii}) for i=1,\ldots,N are assumed to be observable.

Assumption 2

The graph corresponding to the communication network is assumed to be undirected, connected, and simple (a graph or network is simple if it contains no self-loops and no multiple edges).

Lemma 1 ([39])

Consider an undirected connected graph \mathcal{G}=\{\mathcal{V},\mathcal{E},\mathcal{A}\}. Let \mathcal{S}=diag\{1,0,\ldots,0\}\in\mathbb{R}^{N\times N}; then all eigenvalues of the matrix \mathcal{H}=\mathcal{L}+\mathcal{S} lie in the open right half-plane, where \mathcal{L} is the Laplacian matrix of \mathcal{G}.
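Lemma 1 is easy to check numerically on a small example (a hypothetical 4-node path graph; this is an illustration, not part of the proof):

```python
import numpy as np

# Hypothetical 4-node path graph 1-2-3-4 (illustrative only).
Adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
L = np.diag(Adj.sum(axis=1)) - Adj   # Laplacian matrix
S = np.diag([1.0, 0.0, 0.0, 0.0])    # S = diag{1, 0, ..., 0}
H = L + S
eigs = np.linalg.eigvalsh(H)         # H is symmetric, so eigenvalues are real
assert np.all(eigs > 0)              # all eigenvalues in the open right half-plane
```

Intuitively, L is positive semidefinite with null space spanned by the all-ones vector, and adding S removes that null direction whenever the graph is connected.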

Remark 1

The reader should note that Assumption 1 is not completely the same as the traditional assumption for distributed observers. In the existing literature [15, 9], it is generally assumed that (C,A) is observable, while (C_{i},A) may not be observable. Assumption 1 is adopted here owing to the form of system (1)–(2), which makes it easier to express our core ideas on partitioning and state information acquisition. Actually, the partitioned distributed observer can also be achieved for the general system \dot{x}=Ax,~y=Cx under the traditional assumption as in [15, 9]. However, due to space limitations and the need to express the main idea, the theory of partitioned distributed observers for general systems is left to our future research.

III Design of partitioned distributed observer

A large-scale network partition algorithm will be given in this section. Then, based on the partition results, we propose the design method and observer structure of the partitioned distributed observer.

III-A Network partition

As mentioned in Section I and Subsection II-B, it is unnecessary to require each local observer to reconstruct the states of the whole large-scale system. Theoretically, each agent only needs to estimate the part of the states required by its local control law. Therefore, in order to establish a distributed observer that estimates only the state information required by the local control law, we need to partition the network and ensure that the partitions of each agent contain all its physical network neighbors. To illustrate this in more detail, we introduce several simple examples.

Figure 3: Partition examples of simple network systems.

In Figure 3a), there is a simple network system with 6 nodes. One may easily recognize its partition by observation: it can be partitioned into three parts, \{1,2,3\}, \{4,5,6\}, and \{3,4\}. With this partition, the physical network neighbors of every node are included in its partitions. For example, \mathcal{N}_{3}=\{1,2,4\}, and the partitions to which node 3 belongs are \{1,2,3\} and \{3,4\}; obviously, \mathcal{N}_{3}\subset\{1,2,3\}\cup\{3,4\}. Similarly, Figure 3b) is a star network system with 9 nodes. One can partition it with \{1,2\}, \{1,3\}, \{1,4\}, \{1,5\}, \{1,6\}, \{1,7\}, \{1,8\}, \{1,9\}, and all agents' physical neighbors are covered by their partitions. As for Figure 3c), the partition is not so intuitive, but after some observation and analysis we can still partition it with \{1,2,3,4\}, \{5,6,7,8\}, and \{3,5\}. Through the above examples, we see the basic idea and function of partitioning; however, for large-scale systems, reasonable partition results cannot be obtained by observation. Therefore, a reasonable partition algorithm is necessary for distributed observers of large-scale systems.
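The coverage requirement illustrated above can be checked mechanically. In the sketch below, only \mathcal{N}_{3}=\{1,2,4\} is taken from the text; the remaining neighbor sets for Figure 3a) are plausible assumptions:

```python
# Partitions of Figure 3a) as given in the text.
partitions = [{1, 2, 3}, {4, 5, 6}, {3, 4}]

# Neighbor sets: N_3 comes from the text; the others are assumed for illustration.
neighbors = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5}}

def covered(neighbors, partitions):
    """Check N_i subset-of O(P_i) for every node i (the coverage condition)."""
    for i, nbrs in neighbors.items():
        union = set().union(*(O for O in partitions if i in O))
        if not nbrs <= union:
            return False
    return True

assert covered(neighbors, partitions)
```

Dropping the bridging partition \{3,4\} would break coverage for nodes 3 and 4, which is exactly why it appears in the example.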

To this end, we propose a partition method based on the greedy algorithm. Before showing the algorithm details, some notations are introduced. \mathcal{O}_{k}\subset\mathcal{V} is defined as the kth partition of the large-scale system, and \mathscr{P}_{i} denotes the set of partitions containing node i. For example, in Figure 3c), \mathcal{O}_{1}=\{1,2,3,4\}, \mathcal{O}_{2}=\{5,6,7,8\}, and \mathcal{O}_{3}=\{3,5\}; in addition, \mathscr{P}_{1}=\{\mathcal{O}_{1}\} and \mathscr{P}_{3}=\{\mathcal{O}_{1},\mathcal{O}_{3}\}. Set \mathcal{O}(\mathscr{P}_{i})=\bigcup_{\mathcal{O}_{j}\in\mathscr{P}_{i}}\mathcal{O}_{j}, and denote by D(\mathscr{P}_{i})=\sum_{\mathcal{O}_{j}\in\mathscr{P}_{i}}|\mathcal{O}_{j}| the total number of agents that need to be estimated by the local observer on the ith agent. Let Pa(i,j) be the set of nodes belonging to the shortest path between i and j in the communication network, \forall i,j\in\mathcal{V}. For example, in Figure 3c), Pa(1,4)=\{1,2,3,4\}. Besides, we define r_{1},r_{2},\ldots,r_{N} for nodes belonging to \mathcal{V}, and define the symbol \preceq by r_{i}\preceq r_{j} if |\mathcal{C}_{r_{i}}|<|\mathcal{C}_{r_{j}}|, or |\mathcal{C}_{r_{i}}|=|\mathcal{C}_{r_{j}}| and |\mathcal{N}_{r_{i}}|\geq|\mathcal{N}_{r_{j}}|. In addition, we define s_{1},s_{2},\ldots,s_{N} for nodes belonging to \mathcal{V}, and further define \succeq by s_{i}\succeq s_{j} if |\mathcal{C}_{s_{i}}|>|\mathcal{C}_{s_{j}}|, or |\mathcal{C}_{s_{i}}|=|\mathcal{C}_{s_{j}}| and |\mathcal{N}_{s_{i}}|\geq|\mathcal{N}_{s_{j}}|.
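These bookkeeping quantities can be sketched directly on the Figure 3c) partitions (the partitions are from the text; the helper names are ours):

```python
partitions = [{1, 2, 3, 4}, {5, 6, 7, 8}, {3, 5}]  # O_1, O_2, O_3 of Figure 3c)

def P(i):
    """P_i: list of partitions containing node i."""
    return [O for O in partitions if i in O]

def O_of_P(i):
    """O(P_i): union of the partitions containing node i."""
    return set().union(*P(i)) if P(i) else set()

def D(i):
    """D(P_i): total number of agents estimated by the observers on agent i."""
    return sum(len(O) for O in P(i))

assert O_of_P(3) == {1, 2, 3, 4, 5}
assert D(3) == 6  # |O_1| + |O_3| = 4 + 2
assert D(1) == 4  # node 1 belongs only to O_1
```

Note that D(\mathscr{P}_{i}) counts multiply-covered nodes more than once (node 3 above), which is exactly the redundancy that the merging step of the algorithm tries to reduce.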

Now, one can see Algorithm 1 for the developed partition method. For the convenience of readers to understand the algorithm, we will briefly describe the algorithm process here.

Step 1: Sort r_{i},~i=1,\ldots,N based on the symbol \preceq.

Step 2: Starting from the minimum r_{i}, each node chooses its partitions according to the greedy algorithm in sequence (the larger the degree of a node in the communication network, the more partition schemes it can select from; therefore, priority is given to the node with the smallest communication network degree).

Step 3: Sort s_{i},~i=1,\ldots,N based on the symbol \succeq.

Step 4: Starting from the maximum s_{i}, each node considers whether to merge several partitions according to two criteria (Lines 27–29 in Algorithm 1, with details shown in (D2)) in sequence (in communication networks, nodes with larger degrees are more likely to be forced to join many redundant partitions; therefore, priority is given to nodes with larger degrees to determine whether some redundant partitions can be removed through merging).
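The orderings in Steps 1 and 3 can be realized as simple sort keys. A minimal sketch (the degree data below are hypothetical, and the partition-selection logic of Algorithm 1 itself is not reproduced):

```python
# Hypothetical data for 5 nodes: C[i] = communication neighbors of node i,
# Nb[i] = physical neighbors of node i.
C  = {1: {2}, 2: {1, 3}, 3: {2, 4, 5}, 4: {3}, 5: {3}}
Nb = {1: {2}, 2: {1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}

# Step 1 ordering (r_i, symbol "<="): ascending communication degree |C_i|,
# ties broken by descending physical degree |N_i|.
r_order = sorted(C, key=lambda i: (len(C[i]), -len(Nb[i])))

# Step 3 ordering (s_i, symbol ">="): descending communication degree |C_i|,
# ties broken by descending physical degree |N_i|.
s_order = sorted(C, key=lambda i: (-len(C[i]), -len(Nb[i])))

print(r_order)  # nodes with the smallest communication degree come first
print(s_order)  # nodes with the largest communication degree come first
```

Because Python's `sorted` is stable, nodes that are equivalent under the ordering keep their original relative order.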

Note that the partition algorithms described in Algorithm 1 and Algorithm 2 actually solve a multi-objective optimization problem because each node has its own desired partition result, but the desired optimal partitions among different nodes are conflicting. In the complete version of this article, we prove that the partition results obtained through Algorithm 1 and Algorithm 2 are Pareto solutions to this multi-objective optimization problem.

Proposition 1

Partition results obtained from Algorithm 1 and Algorithm 2 are already located on the Pareto frontier.

Proof:

Given a group of partitions (\mathcal{O}_{1},\ldots,\mathcal{O}_{m}) obtained by Algorithm 1 and Algorithm 2, we first prove two claims.

1) Adding any node to an arbitrary \mathcal{O}_{i}=\{i_{1},\ldots,i_{r}\},~i\in\{1,2,\ldots,m\}, would increase the dimensions of the observers on at least one node. Since \mathcal{O}_{i} is obtained by at least one node (denoted i_{1}) within the partition using the greedy algorithm, node i_{1} makes the optimal choice when selecting \mathcal{O}_{i} based on the current situation. We discuss two scenarios. First, i_{1} has not yet been included in any partition before selecting \mathcal{O}_{i}. In this case, \mathcal{O}_{i} is the first partition for i_{1}, so it is the optimal partition choice for i_{1} at that time, and adding any new node to this partition will increase the dimension of the observer on i_{1}. Second, before selecting \mathcal{O}_{i}, i_{1} has already been included in another partition (denoted \mathcal{O}_{j}) by another node (denoted i_{2}). In this case, if a new node i_{0} is added to \mathcal{O}_{i}, then the dimension of the observers on node i_{1} will increase owing to the joining of i_{0}, unless \mathcal{O}_{j} becomes a subset of \mathcal{O}_{i}\cup\{i_{0}\} and node i chooses to merge \mathcal{O}_{i} and \mathcal{O}_{j}. However, \mathcal{O}_{j} is the optimal choice made by i_{2} based on the greedy algorithm, so any change made by i_{1} to \mathcal{O}_{j} will inevitably increase the observer dimension on i_{2}. This claim is proven.

2) For arbitrary \mathcal{O}_{i},~i\in\{1,2,\ldots,m\}, every node within the partition is indispensable. Taking i_{2}\in\mathcal{O}_{i} as an example, there are two possible reasons why i_{1} includes i_{2} in the partition based on the greedy algorithm. First, i_{1} needs to estimate the information of i_{2}. In this case, if i_{2} were excluded from the partition, i_{1} would still need to establish a new partition \mathcal{O}_{i}^{\prime} containing both i_{1} and i_{2} in order to estimate the state of i_{2}; thus it must hold that |\mathcal{O}_{i}|<|\mathcal{O}_{i}^{\prime}|+|\mathcal{O}_{i}\backslash\{i_{2}\}|=2+|\mathcal{O}_{i}|-1. Second, if i_{1} does not need to estimate the state of i_{2}, then the connectivity of the communication network within \mathcal{O}_{i} depends on the presence of i_{2}; in this case, removing i_{2} makes the partition invalid. In conclusion, any node in any partition is indispensable.

Based on the above claims, we now prove that the partitioning results obtained by Algorithm 1 and Algorithm 2 are Pareto optimal solutions. To this end, it is sufficient to show that reducing the value of D(\mathscr{P}_{j}) for any node j will result in an increase in D(\mathscr{P}_{k}) for at least one other node k. We discuss two cases.

First, suppose 𝒫j\mathscr{P}_{j} contains multiple partitions (𝒪j1\mathcal{O}_{j1}, 𝒪j2\mathcal{O}_{j2}, \ldots). To reduce D(𝒫j)D(\mathscr{P}_{j}), node jj can either remove a node from a certain partition (e.g., delete node kk from partition 𝒪j1\mathcal{O}_{j1}) or merge multiple partitions (e.g., merge 𝒪j1\mathcal{O}_{j1} and 𝒪j2\mathcal{O}_{j2}). By the above propositions, there exists at least one node i1i_{1} for which 𝒪j1\mathcal{O}_{j1} is optimal. Therefore, any operation that shrinks or merges 𝒪j1\mathcal{O}_{j1} will inevitably increase D(𝒫i1)D(\mathscr{P}_{i_{1}}). Hence, in this case, optimizing the observer dimension of node jj will cause at least one other node to suffer a loss in benefit or will render the partitioning result invalid.

Second, suppose 𝒫j\mathscr{P}_{j} contains only one partition. In this case, the only way for node jj to reduce D(𝒫j)D(\mathscr{P}_{j}) is to remove some nodes from that partition. However, according to Proposition 2), doing so will result in at least one other node suffering a loss in benefit.

In conclusion, the partition results obtained from Algorithm 1 and Algorithm 2 are already located on the Pareto frontier. ∎

We have the following discussions regarding Algorithms 1 and 2.

Algorithm 1 Partition algorithm of distributed observer for large-scale systems
Require:  𝒩i\mathcal{N}_{i} and 𝒞i\mathcal{C}_{i} i𝒱\forall i\in\mathcal{V}
Ensure:  A partition of the distributed observer
1:  Establishing partitions based on greedy algorithm;
2:  Initialize p=1p=1 and 𝒪p=\mathcal{O}_{p}=\emptyset; initialize 𝒫i=\mathscr{P}_{i}=\emptyset i𝒱\forall i\in\mathcal{V};
3:  for i=1i=1 to NN do
4:     Choose ri𝒱r_{i}\in\mathcal{V};
5:     if 𝒩ri\𝒪(𝒫ri)\mathcal{N}_{r_{i}}\backslash\mathcal{O}(\mathscr{P}_{r_{i}})\neq\emptyset then
6:        for j𝒩ri\𝒪(𝒫ri)j\in\mathcal{N}_{r_{i}}\backslash\mathcal{O}(\mathscr{P}_{r_{i}}) do
7:           Find Pa(ri,j)Pa(r_{i},j);
8:           Set 𝒪p=𝒪pPa(ri,j)\mathcal{O}_{p}=\mathcal{O}_{p}\cup Pa(r_{i},j);
9:        end for
10:     end if
11:     Set 𝒫k=𝒫k{𝒪p}\mathscr{P}_{k}=\mathscr{P}_{k}\cup\{\mathcal{O}_{p}\} k𝒪p\forall k\in\mathcal{O}_{p};
12:     Set p=p+1p=p+1 and initialize 𝒪p=\mathcal{O}_{p}=\emptyset;
13:  end for
14:  Merging partitions based on greedy algorithm;
15:  for i=1i=1 to NN do
16:     Choose si𝒱s_{i}\in\mathcal{V};
17:     Calculate a set MsiM_{s_{i}} and define si=|Msi|\ell_{s_{i}}=|M_{s_{i}}|;
18:     //Refer to Algorithm 2 to obtain MsiM_{s_{i}}
19:     while jsij\leq\ell_{s_{i}} do
20:        Set the jjth element of MsiM_{s_{i}} be Jsij={pm1,pm2,,pm}J_{s_{i}j}=\{p_{m1},p_{m2},\ldots,p_{m\ell}\}, where =|Jsij|\ell=|J_{s_{i}j}|;
21:        if k=1𝒪pmk=\bigcup_{k=1}^{\ell}\mathcal{O}_{p_{mk}}=\emptyset or k=1min{|𝒪pmk|,1}=1\sum_{k=1}^{\ell}\min\{|\mathcal{O}_{p_{mk}}|,1\}=1 then
22:           Let j=j+1j=j+1 and go to line 2121;
23:        end if
24:        Set c1=0c_{1}=0 and c2=0c_{2}=0;
25:        Calculate D(𝒫l)D(\mathscr{P}_{l}) lk=1𝒪pmk\forall l\in\bigcup_{k=1}^{\ell}\mathcal{O}_{p_{mk}};
26:        Calculate D(𝒫si)=|k=1𝒪pmk|D^{*}(\mathscr{P}_{s_{i}})=\left|\bigcup_{k=1}^{\ell}\mathcal{O}_{p_{mk}}\right|;
27:        Set c1=1c_{1}=1 if D(𝒫si)D(𝒫l)D(𝒫si)D(𝒫si)D^{*}(\mathscr{P}_{s_{i}})-D(\mathscr{P}_{l})\leq D(\mathscr{P}_{s_{i}})-D^{*}(\mathscr{P}_{s_{i}}) lk=1𝒪pmk\forall l\in\bigcup_{k=1}^{\ell}\mathcal{O}_{p_{mk}} and lsil\neq s_{i};
28:        Set c2=1c_{2}=1 if (D(𝒫si))2lk=1𝒪pmkD(𝒫l)\left(D^{*}(\mathscr{P}_{s_{i}})\right)^{2}\leq\sum_{l\in\bigcup_{k=1}^{\ell}\mathcal{O}_{p_{mk}}}D(\mathscr{P}_{l});
29:        if c1c2=1c_{1}c_{2}=1 then
30:           Let 𝒪pm1=k=1𝒪pmk\mathcal{O}_{p_{m1}}=\bigcup_{k=1}^{\ell}\mathcal{O}_{p_{mk}}, and 𝒪pmk=\mathcal{O}_{p_{mk}}=\emptyset for k=2,,k=2,\ldots,\ell;
31:        end if
32:        Let j=j+1j=j+1;
33:     end while
34:  end for

(D1) The node constraint conditions can be satisfied. The constraint (the union of all partitions containing an agent must include all of that agent's physical network neighbor agents) is guaranteed mainly by the first part of Algorithm 1, “establishing partitions”, because its partition initialization allows each node to select partitions that cover its physical network neighboring nodes.
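For illustration, the “establishing partitions” phase (lines 2–13 of Algorithm 1) can be sketched in Python as follows. This is a minimal sketch under simplifying assumptions: Pa(r_i, j) is taken as the node set of a shortest communication path found by BFS, adjacency is given as dicts of sets, and all function and variable names are illustrative rather than part of the algorithm's formal statement.

```python
from collections import deque

def shortest_path_nodes(comm_adj, src, dst):
    """Pa(src, dst): node set of a shortest communication path (BFS)."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = set()
            while u is not None:      # walk predecessors back to src
                path.add(u)
                u = prev[u]
            return path
        for v in comm_adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    raise ValueError("communication graph is not connected")

def establish_partitions(phys_adj, comm_adj, order):
    """First phase of Algorithm 1: node r covers its physical neighbours
    not yet covered by O(P_r) with one new partition O_p."""
    partitions = []                   # list of partitions O_p (sets of nodes)
    P = {i: set() for i in comm_adj}  # P_i: indices into `partitions`
    for r in order:                   # nodes sorted by communication degree
        covered = set().union(*(partitions[p] for p in P[r])) if P[r] else set()
        todo = phys_adj[r] - covered  # N_r \ O(P_r)
        if not todo:
            continue
        O_p = set()
        for j in todo:
            O_p |= shortest_path_nodes(comm_adj, r, j)  # O_p := O_p ∪ Pa(r, j)
        idx = len(partitions)
        partitions.append(O_p)
        for k in O_p:                 # P_k := P_k ∪ {O_p} for all k in O_p
            P[k].add(idx)
    return partitions, P
```

On a small example, one can check that after every node has been processed, each node's physical neighbors are covered by the union of its partitions, which is exactly the constraint discussed in (D1).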

Algorithm 2 Calculate MsiM_{s_{i}} for i=1,,Ni=1,\ldots,N
Require:  𝒫si\mathscr{P}_{s_{i}}
Ensure:  MsiM_{s_{i}}
1:  Initialize Msi=M_{s_{i}}=\emptyset;
2:  Construct a set of 𝒫si\mathscr{P}_{s_{i}}’s subsets, denoted by 𝒫sic={𝒥𝒫si;|𝒥|2}\mathscr{P}_{s_{i}}^{c}=\{\mathcal{J}\subset\mathscr{P}_{s_{i}};~{}|\mathcal{J}|\geq 2\}; without loss of generality, define 𝒫sic={𝒥si1,𝒥si2,,𝒥siζsi}\mathscr{P}_{s_{i}}^{c}=\{\mathcal{J}_{s_{i}1},\mathcal{J}_{s_{i}2},\ldots,\mathcal{J}_{s_{i}\zeta_{s_{i}}}\}, where ζsi=2|𝒫si||𝒫si|1\zeta_{s_{i}}=2^{|\mathscr{P}_{s_{i}}|}-|\mathscr{P}_{s_{i}}|-1;
3:  for k=1k=1 to ζsi\zeta_{s_{i}} do
4:     Calculate (𝒥sik)=𝒪p𝒥sik𝒪p\mathcal{I}(\mathcal{J}_{s_{i}k})=\bigcap_{\mathcal{O}_{p}\in\mathcal{J}_{s_{i}k}}\mathcal{O}_{p};
5:     if |(𝒥sik)|2|\mathcal{I}(\mathcal{J}_{s_{i}k})|\geq 2 then
6:        Set Msi=Msi{𝒥sik}M_{s_{i}}=M_{s_{i}}\cup\{\mathcal{J}_{s_{i}k}\};
7:     end if
8:  end for
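Algorithm 2 amounts to enumerating every subset of a node's partitions with at least two members and keeping those whose common intersection contains at least two nodes. A minimal Python sketch (data layout and names are illustrative):

```python
from itertools import combinations

def candidate_merges(P_si):
    """Algorithm 2: all subsets J of P_si with |J| >= 2 whose partitions
    share at least two common nodes, i.e. |I(J)| >= 2."""
    M_si = []
    for r in range(2, len(P_si) + 1):
        for J in combinations(P_si, r):
            common = set.intersection(*J)   # I(J) = intersection of O_p in J
            if len(common) >= 2:
                M_si.append(J)
    return M_si
```

For example, with P_si = [{1,2,5}, {2,5,7}, {5,8}], only the pair ({1,2,5}, {2,5,7}) qualifies, since its intersection {2,5} has two elements while every other subset intersects in {5} alone.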

(D2) The central part of Algorithm 1 adopts the idea of a greedy algorithm. According to (D1), a feasible partition scheme satisfying the constraint conditions is already reached by the first part of Algorithm 1. However, since the first part starts from the nodes with the smallest degree in the communication network, a node with a large degree (hub node) may carry an excessive computational burden. The second part of the algorithm is therefore employed to reduce the computational burden of hub nodes. The developed method lets a hub node's neighbors in the communication network share its computing burden by merging the partitions of the hub node. Whether a merge is performed depends on:

1) The increment of the local observer dimension of any neighbor node cannot be greater than the reduction of the local observer dimension of the hub node;

2) The total dimension of the local observers of the hub node and its neighbors should be reduced compared to that before the merging partition.

Therefore, the merging part will further optimize the partition results obtained by partition initialization.

(D3) Detailed description of the “merging partitions” part. Suppose a hub node wants to merge partitions 𝒪pm1,,𝒪pm\mathcal{O}_{p_{m1}},\ldots,\mathcal{O}_{p_{m\ell}}. After the partitions are merged, the local observers of all nodes in the merged partition 𝒪pm1\mathcal{O}_{p_{m1}} must estimate all the states involved in 𝒪pm1\mathcal{O}_{p_{m1}}. Hence, the dimensions of the local observers of all nodes are equal after merging, denoted by D(𝒫si)=|k=1𝒪pmk|D^{*}(\mathscr{P}_{s_{i}})=|\bigcup_{k=1}^{\ell}\mathcal{O}_{p_{mk}}|, where sis_{i} is the index of the hub node. Therefore, condition 1) in (D2) reads D(𝒫si)D(𝒫l)D(𝒫si)D(𝒫si)D^{*}(\mathscr{P}_{s_{i}})-D(\mathscr{P}_{l})\leq D(\mathscr{P}_{s_{i}})-D^{*}(\mathscr{P}_{s_{i}}) for all lk=1𝒪pmkl\in\bigcup_{k=1}^{\ell}\mathcal{O}_{p_{mk}} with lsil\neq s_{i}. Furthermore, after merging, there are D(𝒫si)D^{*}(\mathscr{P}_{s_{i}}) nodes in the partition, and each node needs to estimate all states of D(𝒫si)D^{*}(\mathscr{P}_{s_{i}}) nodes. Therefore, condition 2) can be formulated as (D(𝒫si))2lk=1𝒪pmkD(𝒫l)\left(D^{*}(\mathscr{P}_{s_{i}})\right)^{2}\leq\sum_{l\in\bigcup_{k=1}^{\ell}\mathcal{O}_{p_{mk}}}D(\mathscr{P}_{l}). The above explains the source of step 27 and step 28 in Algorithm 1.
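The two merge conditions of lines 27 and 28 of Algorithm 1 can be checked with a few lines of Python. This is a sketch: the dictionary `D` holding each node's current observer dimension and the function name are illustrative.

```python
def merge_is_beneficial(D, merged_nodes, s_i):
    """Check conditions c1 and c2 (lines 27-28 of Algorithm 1).

    D            -- dict: current observer dimension D(P_l) of every node l
    merged_nodes -- union of the partitions the hub node s_i wants to merge
    s_i          -- index of the hub node
    """
    D_star = len(merged_nodes)            # D*(P_si): dimension after merging
    # c1: no neighbour's increment exceeds the hub node's reduction
    c1 = all(D_star - D[l] <= D[s_i] - D_star
             for l in merged_nodes if l != s_i)
    # c2: the total observer dimension does not increase
    c2 = D_star ** 2 <= sum(D[l] for l in merged_nodes)
    return c1 and c2
```

For instance, a hub with dimension 10 merging into a 4-node partition whose other members have dimension 4 passes both tests (each neighbor gains nothing, the total drops from 22 to 16), whereas a merge that would raise the total dimension fails condition 2).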

Figure 4: Usage example of Algorithm 1: initial partition. The part marked in red is the node with i0\ell_{i}\neq 0, which needs to consider whether to merge partitions.
Figure 5: Usage example of Algorithm 1: Final partition.

(D4) Algorithm 1 has polynomial time complexity.

First, focus on the first part of the algorithm. The complexity of this part is dominated by lines 7 to 9, the core of “establishing partitions”, which are executed at most N1Ni=1N|𝒩i|N\cdot\frac{1}{N}\sum_{i=1}^{N}|\mathcal{N}_{i}| times. Note that the average degree 1Ni=1N|𝒩i|\frac{1}{N}\sum_{i=1}^{N}|\mathcal{N}_{i}| is small relative to the total number NN of nodes in a large-scale network.

Second, we calculate the time complexity of Algorithm 2. Since Algorithm 2 is invoked at line 18 of Algorithm 1, its complexity is also a part of the complexity of Algorithm 1. Its core steps, lines 4 and 6, are executed at most N1Ni=1NζiN\cdot\frac{1}{N}\sum_{i=1}^{N}\zeta_{i} times. According to the partition selection method, the number of partitions of each node will not exceed its communication network degree. Hence,

1Ni=1Nζi=\displaystyle\frac{1}{N}\sum_{i=1}^{N}\zeta_{i}= 1Ni=1N(2|𝒫i||𝒫i|1)\displaystyle\frac{1}{N}\sum_{i=1}^{N}\left(2^{|\mathscr{P}_{i}|}-|\mathscr{P}_{i}|-1\right)
\displaystyle\approx 1Ni=1N(2|𝒞i||𝒞i|1)\displaystyle\frac{1}{N}\sum_{i=1}^{N}\left(2^{|\mathcal{C}_{i}|}-|\mathcal{C}_{i}|-1\right)
=\displaystyle= 2κNκN1,\displaystyle 2^{\kappa N}-\kappa N-1, (5)

where κ\kappa stands for the proportion of the average degree of the communication network to NN. Note that 2κNκN1=O(N)2^{\kappa N}-\kappa N-1=O(N) when κ1Nlog2[N+1]\kappa\leq\frac{1}{N}\log_{2}[N+1], i.e., when the average degree κN\kappa N does not exceed log2[N+1]N\log_{2}[N+1]\ll N.

Third, the core steps (lines 25, 26, and 30) of the last loop (lines 15 to 34) of Algorithm 1 are executed N1Ni=1NsiN\cdot\frac{1}{N}\sum_{i=1}^{N}\ell_{s_{i}} times. Besides,

1Ni=1Ni<1Ni=1Nζi=O(N).\displaystyle\frac{1}{N}\sum_{i=1}^{N}\ell_{i}<\frac{1}{N}\sum_{i=1}^{N}\zeta_{i}=O(N). (6)

In summary, the complexity of Algorithm 1 is

O(N)1Ni=1N|𝒩i|+O(N)1Ni=1Nζi+O(N)1Ni=1Ni\displaystyle O(N)\frac{1}{N}\sum_{i=1}^{N}|\mathcal{N}_{i}|+O(N)\frac{1}{N}\sum_{i=1}^{N}\zeta_{i}+O(N)\frac{1}{N}\sum_{i=1}^{N}\ell_{i}
\displaystyle\leq O(N)+O(N)O(N)+O(N)O(N)O(N2).\displaystyle O(N)+O(N)\cdot O(N)+O(N)\cdot O(N)\leq O(N^{2}). (7)

Therefore, Algorithm 1 has polynomial time complexity.

The following text explains Algorithms 1 and 2 in more detail with an example. Figure 4 shows the physical and communication networks considered in this example, and the detailed steps of partitioning this network are listed in Figure 4 and Figure 5, where the former shows the initial partitioning steps and the latter shows the merging steps. To facilitate understanding, some key steps are interpreted below.

(I1) We focus on the area containing nodes 1,,101,\ldots,10. This area is taken as an example to show why partition design should be carried out in the order r1r2rNr_{1}\preceq r_{2}\preceq\cdots\preceq r_{N}. In light of the sorting rule, node 1 with |𝒩1|=2|\mathcal{N}_{1}|=2 and |𝒞1|=1|\mathcal{C}_{1}|=1 is the first node to be considered. Based on lines 7 to 10, we know 𝒪1={1,2,5}\mathcal{O}_{1}=\{1,2,5\}. Hence, 𝒫1={𝒪1}\mathscr{P}_{1}=\{\mathcal{O}_{1}\}, 𝒫2={𝒪1}\mathscr{P}_{2}=\{\mathcal{O}_{1}\}, 𝒫5={𝒪1}\mathscr{P}_{5}=\{\mathcal{O}_{1}\}. Subsequently, partitions 𝒪2={3,4}\mathcal{O}_{2}=\{3,4\}, 𝒪3={6,7}\mathcal{O}_{3}=\{6,7\}, 𝒪4={5,8,9}\mathcal{O}_{4}=\{5,8,9\}, 𝒪5={6,10}\mathcal{O}_{5}=\{6,10\}, 𝒪6={10,11}\mathcal{O}_{6}=\{10,11\} can be selected for nodes 4, 7, 8, and 10 in order. Then, note that 1,5𝒪(𝒫2)1,5\in\mathcal{O}(\mathscr{P}_{2}); thus 𝒪7={2,3,6}\mathcal{O}_{7}=\{2,3,6\} is necessarily included in 𝒫2\mathscr{P}_{2}. Besides, since 𝒩3\mathcal{N}_{3} is already contained in 𝒪(𝒫3)\mathcal{O}(\mathscr{P}_{3}), node 3 need not select partitions anymore. This reflects the advantage of the proposed node sorting rule: a node with a larger degree in the communication network selects its partitions later, which neither affects its own partition selection nor interferes with the partition selection of earlier nodes, but greatly reduces the computational load (most of the required partitions have already been selected). The same situation can be seen at node 6. However, an unreasonable node order will lead to a large number of highly repetitive partitions (see (I2)).

(I2) We focus on nodes 42, 43, and 44. Their partition selection order is 42434442\preceq 43\preceq 44. Node 42 selects its partition as {40,41,42,43}\{40,41,42,43\}. Hence, only node 44 remains in 𝒩43\𝒪(𝒫43)\mathcal{N}_{43}\backslash\mathcal{O}(\mathscr{P}_{43}), which means {43,44}\{43,44\} should be added to 𝒫43\mathscr{P}_{43}. However, since 41𝒩4441\in\mathcal{N}_{44}, 𝒫44\mathscr{P}_{44} must include a partition {41,43,44}\{41,43,44\}. This adds a new partition to node 43, which had already completed its partition selection. Worse, two highly overlapping partitions {41,43,44},{43,44}\{41,43,44\},\{43,44\} exist at the same time in 𝒫43\mathscr{P}_{43} and 𝒫44\mathscr{P}_{44}. The reason for this phenomenon is that |𝒩44\𝒪(𝒫44)|>|𝒩43\𝒪(𝒫43)||\mathcal{N}_{44}\backslash\mathcal{O}(\mathscr{P}_{44})|>|\mathcal{N}_{43}\backslash\mathcal{O}(\mathscr{P}_{43})| after node 42 selects its partitions. The above analysis shows that it is advantageous to let a node with a smaller communication network degree and a larger physical network degree select partitions preferentially; otherwise, highly similar partitions like {41,43,44}\{41,43,44\} and {43,44}\{43,44\} will appear in large numbers. In addition, the analysis suggests a more reasonable sorting rule: node ii preferentially selects its partition if |𝒩i\𝒪(𝒫i)|>|𝒩j\𝒪(𝒫j)||\mathcal{N}_{i}\backslash\mathcal{O}(\mathscr{P}_{i})|>|\mathcal{N}_{j}\backslash\mathcal{O}(\mathscr{P}_{j})| when |𝒞j|=|𝒞i||\mathcal{C}_{j}|=|\mathcal{C}_{i}|. However, this sorting rule would require re-sorting the nodes every time a node selects a partition, which hinders the implementation of the algorithm. Therefore, we do not improve the algorithm from the perspective of the remaining physical network neighbors but instead use the merging strategy to make up for this problem.

(I3) Figure 5 displays the steps of merging partitions. Note that the actual calculation is not heavy, although the complexity of the partition merging part is O(N2)O(N^{2}). This is because, after the partitions of some nodes are merged, many other nodes that initially needed to merge partitions no longer need to perform the merging steps. For example, in Figure 4, nodes 41, 43, and 44 all need to merge partitions. However, MsiM_{s_{i}} (defined by Algorithm 2) with respect to nodes 43 and 44 are both empty after node 41 merges its partitions, and thus they need not perform the merging steps anymore.

Remark 2

From this example, we can see that if we design partitioned distributed observers based on the partitioning results shown in Figure 4 and 5, we can significantly reduce the dimension of observers on each agent. However, we must emphasize an important fact: the degree of reduction in observer dimension on agents is closely related to the similarity between the communication network and the physical network. Due to the arbitrariness of communication and physical networks, some strange partitions may occur. For example, two nodes that are neighbors in a physical network may be far apart in a communication network, which can lead to the occurrence of huge partitions. In such cases, the reduction in the dimension of observers on each agent will not be as pronounced as in Figure 4 and 5. Therefore, we introduce the concept of “network similarity”—defined by Spc=2|cp|/(|c|+|p|)S_{pc}=2|\mathcal{E}_{c}\cap\mathcal{E}_{p}|/(|\mathcal{E}_{c}|+|\mathcal{E}_{p}|)—to improve the rigor when describing simulation results. In other words, when describing how much our partition algorithm reduces the dimension of observers on each agent, we must indicate the degree of network similarity because the lower the network similarity, the higher the possibility of large abnormal partitions, and the lower the possibility of significant reduction in observer dimension.
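The network similarity measure used above can be computed directly from the two edge sets. A short sketch, assuming undirected edges given as order-insensitive pairs (the function name is illustrative):

```python
def network_similarity(phys_edges, comm_edges):
    """S_pc = 2|E_c ∩ E_p| / (|E_c| + |E_p|), for undirected edge lists."""
    Ep = {frozenset(e) for e in phys_edges}   # normalize (u, v) == (v, u)
    Ec = {frozenset(e) for e in comm_edges}
    return 2 * len(Ec & Ep) / (len(Ec) + len(Ep))
```

For example, with physical edges {(1,2),(2,3),(3,4)} and communication edges {(2,1),(2,3),(1,4)}, two of the three edges coincide, giving S_pc = 2·2/6 ≈ 0.67.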

III-B Partitioned distributed observer

This subsection designs a distributed observer for each partition based on the partition method in the previous subsection. Suppose that 𝒪p\mathcal{O}_{p} is a partition belonging to 𝒫i\mathscr{P}_{i}; then the observer with which the ith agent estimates its own state takes the form:

x^˙ii(p)=\displaystyle\dot{\hat{x}}_{ii}^{(p)}= Aiix^ii(p)+j𝒩iAijx¯ij+θΓϵHi(yiCix^ii(p))\displaystyle A_{ii}\hat{x}_{ii}^{(p)}+\sum_{j\in\mathcal{N}_{i}}A_{ij}\bar{x}_{ij}+\theta\Gamma_{\epsilon}H_{i}\left(y_{i}-C_{i}\hat{x}_{ii}^{(p)}\right)
+Biu^ii+γθnP~i1j𝒪pαij(x^ji(p)x^ii(p)),\displaystyle+B_{i}\hat{u}_{ii}+\gamma\theta^{n}\tilde{P}_{i}^{-1}\sum_{j\in\mathcal{O}_{p}}\alpha_{ij}\left(\hat{x}_{ji}^{(p)}-\hat{x}_{ii}^{(p)}\right), (8)

where x^ii(p)\hat{x}_{ii}^{(p)} is the estimation of xix_{i} generated by observers on agent ii in partition 𝒪p\mathcal{O}_{p}; x¯ij=1/Nj,𝒫i𝒪p𝒫ix^ij(p)\bar{x}_{ij}=1/N_{j,\mathscr{P}_{i}}\cdot\sum_{\mathcal{O}_{p}\in\mathscr{P}_{i}}\hat{x}_{ij}^{(p)} with Nj,𝒫iN_{j,\mathscr{P}_{i}} being the number of occurrences of jj in 𝒪(𝒫i)\mathcal{O}(\mathscr{P}_{i}); x^ij(p)\hat{x}_{ij}^{(p)} stands for the state estimation of the jth subsystem generated by the ith agent. Γϵ=diag{ϵn1,,ϵ,1}\Gamma_{\epsilon}=diag\{\epsilon^{n-1},\ldots,\epsilon,1\} is the high-gain matrix with ϵ=1/θ\epsilon=1/\theta and θ\theta being the high-gain parameter; Hin×pH_{i}\in\mathbb{R}^{n\times p} and γ\gamma are, respectively, the observer gain and the coupling gain; P~i=ΓϵPiΓϵ1\tilde{P}_{i}=\Gamma_{\epsilon}P_{i}\Gamma_{\epsilon}^{-1} is the so-called weighted matrix and PiP_{i} is a symmetric positive definite matrix that will be designed later; u^ii\hat{u}_{ii} is the control input relying on x¯ij\bar{x}_{ij}.

The dynamics of x^li(p)\hat{x}_{li}^{(p)} l𝒪p\forall l\in\mathcal{O}_{p} can be expressed as

x^˙li(p)=\displaystyle\dot{\hat{x}}_{li}^{(p)}= Aiix^li(p)+j𝒪(𝒫l)𝒩iAijx¯lj\displaystyle A_{ii}\hat{x}_{li}^{(p)}+\sum_{j\in\mathcal{O}(\mathscr{P}_{l})\cap\mathcal{N}_{i}}A_{ij}\bar{x}_{lj}
+Biu^li+γθnj𝒪pαij(x^ji(p)x^li(p)),\displaystyle+B_{i}\hat{u}_{li}+\gamma\theta^{n}\sum_{j\in\mathcal{O}_{p}}\alpha_{ij}\left(\hat{x}_{ji}^{(p)}-\hat{x}_{li}^{(p)}\right), (9)

where x^li(p)\hat{x}_{li}^{(p)} is the estimation of xix_{i} generated by observers on agent ll in partition 𝒪p\mathcal{O}_{p}, and u^li\hat{u}_{li} is the control input relying on x¯lj\bar{x}_{lj}.
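Because agent i may estimate subsystem j in several of its partitions, the fusion estimate x̄_ij is simply the average of those per-partition estimates, with N_{j,P_i} copies. A minimal sketch (the dict-based data layout is illustrative):

```python
import numpy as np

def fused_estimate(x_hat_i, j):
    """x̄_ij = (1/N_{j,P_i}) * sum over partitions O_p in P_i of x̂_ij^(p).

    x_hat_i -- dict: partition index p -> {subsystem j: estimate x̂_ij^(p)}
    """
    copies = [est[j] for est in x_hat_i.values() if j in est]
    return sum(copies) / len(copies)   # N_{j,P_i} = number of copies
```

For instance, if agent i holds two partition-level estimates [1, 2] and [3, 4] of subsystem j, the fused estimate is their average [2, 3].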

Denote eii(p)=xix^ii(p)e_{ii}^{(p)}=x_{i}-\hat{x}_{ii}^{(p)} and eli(p)=x^li(p)xie_{li}^{(p)}=\hat{x}_{li}^{(p)}-x_{i}. Then we have

e˙ii(p)=\displaystyle\dot{e}_{ii}^{(p)}= (AiiθΓϵHiCi)eii(p)+j𝒩iAije¯ij\displaystyle\left(A_{ii}-\theta\Gamma_{\epsilon}H_{i}C_{i}\right)e_{ii}^{(p)}+\sum_{j\in\mathcal{N}_{i}}A_{ij}\bar{e}_{ij}
+Bi(u^iiu¯i)γθnP~i1j𝒪pαij(eji(p)eii(p)),\displaystyle+B_{i}(\hat{u}_{ii}-\bar{u}_{i})-\gamma\theta^{n}\tilde{P}_{i}^{-1}\sum_{j\in\mathcal{O}_{p}}\alpha_{ij}\left(e_{ji}^{(p)}-e_{ii}^{(p)}\right), (10)
e˙li(p)=\displaystyle\dot{e}_{li}^{(p)}= Aiieli(p)+j𝒪(𝒫l)𝒩iAije¯lj\displaystyle A_{ii}e_{li}^{(p)}+\sum_{j\in\mathcal{O}(\mathscr{P}_{l})\cap\mathcal{N}_{i}}A_{ij}\bar{e}_{lj}
+Bi(u^liu¯i)γθnj𝒪pαij(eji(p)eli(p)),\displaystyle+B_{i}(\hat{u}_{li}-\bar{u}_{i})-\gamma\theta^{n}\sum_{j\in\mathcal{O}_{p}}\alpha_{ij}\left(e_{ji}^{(p)}-e_{li}^{(p)}\right), (11)

where e¯lj=1/Nj,𝒫l𝒪p𝒫lelj(p)\bar{e}_{lj}=1/N_{j,\mathscr{P}_{l}}\cdot\sum_{\mathcal{O}_{p}\in\mathscr{P}_{l}}e_{lj}^{(p)}; and u¯i\bar{u}_{i} is the actual control law employed in the closed-loop system.

Remark 3

This section has achieved the first goal of this paper through designing the partition algorithm and partition distributed observer: the dimension of observers on agent ii (x^icol{x^ij(p),j𝒪p,𝒪p𝒫i}\hat{x}_{i\star}\triangleq col\{\hat{x}_{ij}^{(p)},~{}j\in\mathcal{O}_{p},~{}\mathcal{O}_{p}\in\mathscr{P}_{i}\}) is much smaller than that of the interconnected system.

Remark 4

There are two differences between the design of the partitioned distributed observer and that of the traditional distributed observer [13, 15]. First, since the states of the jth subsystem may be repeatedly estimated by agent ii in multiple partitions, we introduce the fusion estimation (x¯ij\bar{x}_{ij}) in the coupling part of the observers on each agent. Second, agent ii needs the physical network neighbor information of agent ll when estimating the states of subsystem ll (see (9)). In traditional distributed observers, this information can be found directly in the observers on the ith agent. However, since the dimension of the observers on the ith agent in this paper is much smaller than that of the interconnected system, its states fail to cover the neighbor states of subsystem ll. Therefore, we need to eliminate the coupling relationships (j𝒪(𝒫l)𝒩iAijx¯lj\sum_{j\in\mathcal{O}(\mathscr{P}_{l})\cap\mathcal{N}_{i}}A_{ij}\bar{x}_{lj} in (9)) that cannot be obtained by agent ii when designing x^il\hat{x}_{il}. In other words, to reduce the dimension of the observers on each agent, we have to artificially introduce model mismatch to compensate for the lack of information caused by observer dimension reduction. Moreover, the control input u^il\hat{u}_{il} in the observers on the ith agent also needs the state estimation x^l\hat{x}_{l\star}. So, multiple estimates and model mismatch also occur in the control inputs (u^ii\hat{u}_{ii} and u^li\hat{u}_{li}) of the partitioned distributed observer (8) and (9). This is the novelty of the partitioned distributed observer design in this section, and it is also one of the main difficulties to be addressed later.

IV Distributed control law for large-scale linear systems

Section III reduced the dimension of the observers on each agent by partitioning the network. This section analyzes the performance of the partitioned distributed observer and of the closed-loop system after the dimension reduction of the local observers. Subsection IV-A presents the design of the distributed control law. The stability of the error dynamics (10) and (11) and of the closed-loop system is proved in subsections IV-B and IV-C, respectively. Subsection IV-D shows the achievement of the second goal of this paper.

IV-A Distributed control law

A state feedback control law can be employed for the large-scale system (3) and (4). In view of the dynamics of the ith subsystem (1) and (2), a globally asymptotically stabilizing control law takes the form

ui=j𝒩i{i}Kijxj,\displaystyle u_{i}=\sum_{j\in\mathcal{N}_{i}\cup\{i\}}K_{ij}x_{j}, (12)

where KijK_{ij} is designed so that A+BKA+BK is a Hurwitz matrix with K=[Kij]i,j=1NK=[K_{ij}]_{i,j=1}^{N}. However, (12) is a state feedback control law requiring the precise system states. Hence, these states should be replaced with the estimations generated by the observers on the ith agent. In the traditional distributed-observer-based distributed control law [2, 9, 10, 11], the exact states in (12) can be directly replaced by the state estimations of the ith local observer. However, since this paper studies large-scale systems and does not want every agent to reconstruct all the states of the whole system, the concept of partition is introduced, which leads to a fusion estimation problem (see Remark 4; for example, agent 31 in Figures 4 and 5 estimates its own state three times in three partitions). Hence, x¯ii=1/|𝒫i|𝒪p𝒫ix^ii(p)\bar{x}_{ii}=1/|\mathscr{P}_{i}|\cdot\sum_{\mathcal{O}_{p}\in\mathscr{P}_{i}}\hat{x}_{ii}^{(p)}, and x¯li=1/Ni,𝒫l𝒪p𝒫lx^li(p)\bar{x}_{li}=1/N_{i,\mathscr{P}_{l}}\cdot\sum_{\mathcal{O}_{p}\in\mathscr{P}_{l}}\hat{x}_{li}^{(p)} (note that 1/Ni,𝒫i=1/|𝒫i|1/N_{i,\mathscr{P}_{i}}=1/|\mathscr{P}_{i}|) are employed in the control law of the ith subsystem, i.e.,

u^ii=j𝒩i{i}Kijx¯ij.\displaystyle\hat{u}_{ii}=\sum_{j\in\mathcal{N}_{i}\cup\{i\}}K_{ij}\bar{x}_{ij}. (13)

Control law (13) can be used in the observers on the ith agent but not in the closed-loop system because it cannot protect the closed-loop performance from the peaking phenomenon of the observers. Hence, we need to add a saturation mechanism, i.e.,

u¯i=j𝒩i{i}Kij𝕩ij,\displaystyle\bar{u}_{i}=\sum_{j\in\mathcal{N}_{i}\cup\{i\}}K_{ij}\mathbbm{x}_{ij}, (14)

where 𝕩ij\mathbbm{x}_{ij} is the so-called saturation value of x¯ij\bar{x}_{ij} and it is expressed as 𝕩ij=sat{x¯ij/}\mathbbm{x}_{ij}=\mathcal{M}sat\{\bar{x}_{ij}/\mathcal{M}\} with \mathcal{M} being a given positive constant and sat{}sat\{\cdot\} being the saturation function. Furthermore, u^li\hat{u}_{li} in (9)—the control law with the state estimations of the llth agent to the iith subsystem—takes the form of:

u^li=j𝒪(𝒫l)𝒩iKijx¯lj.\displaystyle\hat{u}_{li}=\sum_{j\in\mathcal{O}(\mathscr{P}_{l})\cap\mathcal{N}_{i}}K_{ij}\bar{x}_{lj}. (15)

Control law (12) uses precise global states and is thus regarded as the centralized control law. Control laws (13) and (15) are employed in the distributed observers (8) and (9). Control law (14) is the distributed-observer-based distributed control law. The closed-loop system formed by (3) and (14) is the main research object of this paper.
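The saturated control law (14) can be sketched in a few lines, with the componentwise saturation implemented via clipping. This is a sketch only; the dict-based gain and estimate layout is illustrative.

```python
import numpy as np

def saturated_control(K_i, x_bar_i, M):
    """Control law (14): u̅_i = sum over j of K_ij * M*sat(x̄_ij / M),
    where sat clips each component to [-1, 1].

    K_i     -- dict {j: gain matrix K_ij}
    x_bar_i -- dict {j: fused estimate x̄_ij}
    M       -- positive saturation level
    """
    u = None
    for j, K_ij in K_i.items():
        xx = M * np.clip(x_bar_i[j] / M, -1.0, 1.0)   # saturation value of x̄_ij
        u = K_ij @ xx if u is None else u + K_ij @ xx
    return u
```

During the observer's high-gain transient, components of x̄_ij that peak beyond M are clamped, so the plant never sees the peaking estimates, while after convergence the clipping is inactive and (14) coincides with (13).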

Figure 6: Schematic diagram for delineating the observation and control strategies.

For ease of understanding, a diagram of the distributed control law described in this subsection is shown in Figure 6. As illustrated, the observers on the ith agent use yiy_{i} and the information obtained via the communication network to obtain x¯ij,j𝒩i\bar{x}_{ij},~{}j\in\mathcal{N}_{i} required by the ith agent. The observers then send these x¯ij\bar{x}_{ij} to the ith local controller to obtain u^ii\hat{u}_{ii} and u^il,l𝒪(𝒫i)\hat{u}_{il},~{}l\in\mathcal{O}(\mathscr{P}_{i}). These control signals are used in the observers on the ith agent, and u¯i\bar{u}_{i}, the control signal after applying the saturation mechanism, is used to control the ith subsystem.

In what follows, some properties concerning these four control laws (12)–(15) are proved.

Lemma 2

There exist constants T1>0T_{1}>0, ρii>0\rho_{ii}>0, and κii>0\kappa_{ii}>0 such that

u^iiu¯iρiiei+κii,\displaystyle\|\hat{u}_{ii}-\bar{u}_{i}\|\leq\rho_{ii}\|e_{i\star}\|+\kappa_{ii}, (16)

for t<T1t<T_{1} and all i=1,,Ni=1,\ldots,N, where ei=col{eij(p),j𝒪p,𝒪p𝒫i}e_{i\star}=col\{e_{ij}^{(p)},~{}j\in\mathcal{O}_{p},~{}\mathcal{O}_{p}\in\mathscr{P}_{i}\}.

Proof:

Since there is no finite-time escape problem for linear systems, a constant T1>0T_{1}>0 exists such that the states of the closed-loop system (3) and (14) are bounded when t<T1t<T_{1}. Then, there are constants Mui>0M_{u_{i}}>0 and Mu¯i>0M_{\bar{u}_{i}}>0 such that uiMui\|u_{i}\|\leq M_{u_{i}} and u¯iMu¯i\|\bar{u}_{i}\|\leq M_{\bar{u}_{i}} when t<T1t<T_{1}. It follows that uiu¯iMu¯i+Mui\|u_{i}-\bar{u}_{i}\|\leq M_{\bar{u}_{i}}+M_{u_{i}}. To continue, we first calculate

col{e¯ij,j𝒩i{i}}2\displaystyle\left\|col\left\{\bar{e}_{ij},~{}j\in\mathcal{N}_{i}\cup\{i\}\right\}\right\|^{2}
\displaystyle\leq j𝒩i{i}𝒪p𝒫ieij(p)22j𝒩i{i}𝒪p𝒫ieij(p)2\displaystyle\sum_{j\in\mathcal{N}_{i}\cup\{i\}}\left\|\sum_{\mathcal{O}_{p}\in\mathscr{P}_{i}}e_{ij}^{(p)}\right\|^{2}\leq 2\sum_{j\in\mathcal{N}_{i}\cup\{i\}}\sum_{\mathcal{O}_{p}\in\mathscr{P}_{i}}\left\|e_{ij}^{(p)}\right\|^{2}
\displaystyle\leq 2𝒪p𝒫ij𝒪peij(p)2=2ei2.2\sum_{\mathcal{O}_{p}\in\mathscr{P}_{i}}\sum_{j\in\mathcal{O}_{p}}\left\|e_{ij}^{(p)}\right\|^{2}=2\|e_{i\star}\|^{2}.

Consequently,

u^iiui\displaystyle\|\hat{u}_{ii}-u_{i}\|
=\displaystyle= j𝒩i{i}Kijx¯ijj𝒩i{i}Kijxj\displaystyle\left\|\sum_{j\in\mathcal{N}_{i}\cup\{i\}}K_{ij}\bar{x}_{ij}-\sum_{j\in\mathcal{N}_{i}\cup\{i\}}K_{ij}x_{j}\right\|
=\displaystyle= j𝒩i{i}Kij1Nj,𝒫i𝒪p𝒫ix^ij(p)j𝒩i{i}Kijxj\displaystyle\left\|\sum_{j\in\mathcal{N}_{i}\cup\{i\}}K_{ij}\frac{1}{N_{j,\mathscr{P}_{i}}}\sum_{\mathcal{O}_{p}\in\mathscr{P}_{i}}\hat{x}_{ij}^{(p)}-\sum_{j\in\mathcal{N}_{i}\cup\{i\}}K_{ij}x_{j}\right\|
=\displaystyle= j𝒩i{i}Kij1Nj,𝒫i𝒪p𝒫ieij(p)\displaystyle\left\|\sum_{j\in\mathcal{N}_{i}\cup\{i\}}K_{ij}\frac{1}{N_{j,\mathscr{P}_{i}}}\sum_{\mathcal{O}_{p}\in\mathscr{P}_{i}}e_{ij}^{(p)}\right\|
\displaystyle\leq [Kij,j𝒩i{i}]×col{e¯ij,j𝒩i{i}}\displaystyle\left\|[K_{ij},~{}j\in\mathcal{N}_{i}\cup\{i\}]\right\|\times\left\|col\left\{\bar{e}_{ij},~{}j\in\mathcal{N}_{i}\cup\{i\}\right\}\right\|
\displaystyle\leq 2Kiei,\displaystyle\sqrt{2}\|K_{i}\|\|e_{i\star}\|, (17)

where Ki=[Ki1,,KiN]K_{i}=[K_{i1},\ldots,K_{iN}]. Denote ρii=2Ki\rho_{ii}=\sqrt{2}\|K_{i}\|, and κii=Mu¯i+Mui\kappa_{ii}=M_{\bar{u}_{i}}+M_{u_{i}}. Then, we obtain

u^iiu¯iu^iiui+uiu¯iρiiei+κii.\displaystyle\|\hat{u}_{ii}-\bar{u}_{i}\|\leq\|\hat{u}_{ii}-u_{i}\|+\|u_{i}-\bar{u}_{i}\|\leq\rho_{ii}\|e_{i\star}\|+\kappa_{ii}.

This completes the proof. ∎

Lemma 3

There exist constants T1>0T_{1}>0, ρli>0\rho_{li}>0, and κli>0\kappa_{li}>0 such that

u^liu¯iρliel+κli,\displaystyle\|\hat{u}_{li}-\bar{u}_{i}\|\leq\rho_{li}\|e_{l\star}\|+\kappa_{li}, (18)

for t<T1t<T_{1} and all l𝒪pl\in\mathcal{O}_{p} and 𝒪p𝒫i\mathcal{O}_{p}\in\mathscr{P}_{i}, where el=col{elj(p),j𝒪p,𝒪p𝒫l}e_{l\star}=col\{e_{lj}^{(p)},~{}j\in\mathcal{O}_{p},~{}\mathcal{O}_{p}\in\mathscr{P}_{l}\}.

Proof:

Similar to the proof of Lemma 2, there are T1>0T_{1}>0 and WT1>0W_{T_{1}}>0 such that xix_{i} i=1,,N\forall i=1,\ldots,N are bounded by WT1W_{T_{1}} when t<T1t<T_{1}. Accordingly, we directly calculate

u^liui\displaystyle\|\hat{u}_{li}-u_{i}\|
\displaystyle\leq j𝒩i𝒪(𝒫l)Kij1Nj,𝒫l𝒪p𝒫lx^lj(p)\displaystyle\left\|\sum_{j\in\mathcal{N}_{i}\cap\mathcal{O}(\mathscr{P}_{l})}K_{ij}\frac{1}{N_{j,\mathscr{P}_{l}}}\sum_{\mathcal{O}_{p}\in\mathscr{P}_{l}}\hat{x}_{lj}^{(p)}\right.
j𝒩i𝒪(𝒫l)Kijxjj𝒪(𝒫l)\(𝒩i{i})Kijxj\displaystyle\left.-\sum_{j\in\mathcal{N}_{i}\cap\mathcal{O}(\mathscr{P}_{l})}K_{ij}x_{j}-\sum_{j\in\mathcal{O}(\mathscr{P}_{l})\backslash(\mathcal{N}_{i}\cup\{i\})}K_{ij}x_{j}\right\|
\displaystyle\leq 2Kiel+|𝒪(𝒫l)\(𝒩i{i})|KiWT1.\displaystyle\sqrt{2}\|K_{i}\|\|e_{l\star}\|+\sqrt{|\mathcal{O}(\mathscr{P}_{l})\backslash(\mathcal{N}_{i}\cup\{i\})|}\|K_{i}\|W_{T_{1}}. (19)

Then, defining ρli=2Ki\rho_{li}=\sqrt{2}\|K_{i}\| and κli=Mu¯i+Mui+|𝒪(𝒫l)\(𝒩i{i})|×KiWT1\kappa_{li}=M_{\bar{u}_{i}}+M_{u_{i}}+\sqrt{|\mathcal{O}(\mathscr{P}_{l})\backslash(\mathcal{N}_{i}\cup\{i\})|}\times\|K_{i}\|W_{T_{1}} yields the conclusion. ∎

This section has designed a distributed-observer-based distributed control law and analyzed the influence of the model mismatch and fusion estimation in the distributed observer on the control input. Since the local observer no longer reconstructs all the states of the interconnected system, there is a mismatch between u¯i\bar{u}_{i} and u^ii\hat{u}_{ii}, as well as between u¯i\bar{u}_{i} and u^li\hat{u}_{li}. Therefore, the errors u¯iu^ii\bar{u}_{i}-\hat{u}_{ii} and u¯iu^li\bar{u}_{i}-\hat{u}_{li} are difficult to express explicitly in this paper, which differs from the situation of the traditional distributed observer. Lemma 2 and Lemma 3 in this subsection effectively solve this problem by introducing finite-time constraints. It is worth emphasizing that the finite-time condition plays only an auxiliary role in this paper, and it will be removed in subsection IV-C.

In addition, a discussion about KK is in order. Designing the controller gain KK in this section requires global information. This is to ensure a meaningful comparison between the performance of the distributed control law proposed in this paper and that of the centralized control law (at least, the form of the control law should be the same). In many practical systems, it is sufficient to achieve centralized control performance by designing the control gain in a distributed manner. For example, in problems such as unmanned vehicle formations and coordinated control of microgrids (including most practical systems that can be represented by high-order models), each agent can design its own control gain. Then, based on the information provided by the partitioned distributed observer, distributed control laws with centralized control performance can be achieved.

IV-B Performance of partitioned distributed observer

This subsection mainly focuses on the stability of the error dynamics of the partitioned distributed observer. Before that, we need an important lemma. Denote ei(p)=col{eji(p),j𝒪p}e_{\star i}^{(p)}=col\{e_{ji}^{(p)},~{}j\in\mathcal{O}_{p}\}; then we can state the following.

Lemma 4

Consider eie_{i\star} defined in Lemma 2 and ei(p)e_{\star i}^{(p)}. We have i=1N𝒪p𝒫iei(p)2=i=1Nei2=e2\sum_{i=1}^{N}\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\|e_{\star i}^{(p)}\|^{2}=\sum_{i=1}^{N}\|e_{i\star}\|^{2}=\|e\|^{2}, and i=1N𝒪p𝒫iei(p)2e\sum_{i=1}^{N}\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\|e_{\star i}^{(p)}\|\leq\sqrt{2}\|e\|, where e=col{ei,i=1,,N}e=col\{e_{i\star},~{}i=1,\dots,N\}.

Proof:

We know ei2=𝒪p𝒫ij𝒪peij(p)2\|e_{i\star}\|^{2}=\sum_{\mathcal{O}_{p}\in\mathscr{P}_{i}}\sum_{j\in\mathcal{O}_{p}}\|e_{ij}^{(p)}\|^{2} and ei(p)2=j𝒪peji(p)2\|e_{\star i}^{(p)}\|^{2}=\sum_{j\in\mathcal{O}_{p}}\|e_{ji}^{(p)}\|^{2} based on the definition of eie_{i\star} and ei(p)e_{\star i}^{(p)}. Hence,

i=1N𝒪p𝒫iei(p)2=i=1N𝒪p𝒫ij𝒪peji(p)2\displaystyle\sum_{i=1}^{N}\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\left\|e_{\star i}^{(p)}\right\|^{2}=\sum_{i=1}^{N}\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\sum_{j\in\mathcal{O}_{p}}\left\|e_{ji}^{(p)}\right\|^{2}
=\displaystyle= i=1N𝒪p𝒫ij𝒪peij(p)2=i=1Nei2=e2.\displaystyle\sum_{i=1}^{N}\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\sum_{j\in\mathcal{O}_{p}}\left\|e_{ij}^{(p)}\right\|^{2}=\sum_{i=1}^{N}\|e_{i\star}\|^{2}=\|e\|^{2}.

Furthermore, by the inequality of arithmetic and geometric means, we have

(i=1N𝒪p𝒫iei(p))2\displaystyle\left(\sum_{i=1}^{N}\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\left\|e_{\star i}^{(p)}\right\|\right)^{2}
\displaystyle\leq 2i=1N𝒪p𝒫iei(p)2=2i=1Nei2=2e2.\displaystyle 2\sum_{i=1}^{N}\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\left\|e_{\star i}^{(p)}\right\|^{2}=2\sum_{i=1}^{N}\|e_{i\star}\|^{2}=2\|e\|^{2}.

Therefore, i=1N𝒪p𝒫iei(p)2e\sum_{i=1}^{N}\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\|e_{\star i}^{(p)}\|\leq\sqrt{2}\|e\|. ∎
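As a quick numerical sanity check of the norm identity in Lemma 4, the sketch below considers the simplified case of a single partition covering all agents, so that e_{\star i} stacks the i-th block column and e_{i\star} the i-th block row of the full error array; summing either family of squared norms recovers \|e\|^{2}. All dimensions and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 5, 3    # number of agents and local state dimension (illustrative)

# Block error array: E[j, i] plays the role of e_{ji}. A single partition
# O_1 = {1, ..., N} is assumed, so e_{*i} stacks block column i of E and
# e_{i*} stacks block row i of E.
E = rng.standard_normal((N, N, n))

col_sq = sum(np.linalg.norm(E[:, i, :]) ** 2 for i in range(N))  # sum_i ||e_{*i}||^2
row_sq = sum(np.linalg.norm(E[i, :, :]) ** 2 for i in range(N))  # sum_i ||e_{i*}||^2
total = np.linalg.norm(E) ** 2                                   # ||e||^2

assert np.isclose(col_sq, row_sq) and np.isclose(col_sq, total)
```

Both rearrangements sum exactly the same set of block norms \|e_{ji}\|^{2}, which is why the identity holds regardless of the data.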

Now, the main theorem of this subsection can be given. Note that this is a preliminary conclusion on the stability of error dynamics of partitioned distributed observers. The complete conclusion will be presented in Theorem 3 in the next subsection.

Theorem 1

Consider system (1), distributed observer (8)–(9), and networks subject to Assumptions 1 and 2, and also consider the constant T1>0T_{1}>0 used in Lemmas 2 and 3. Assume that the states of the closed-loop system (3) and (14) are bounded for t<T1t<T_{1}. Then, for arbitrary T1<T1T_{1}^{\prime}<T_{1}, there is a θ\theta such that the error dynamics of (8)–(9) converge to an arbitrarily small invariant set around the origin within t<T1<T1t<T_{1}^{\prime}<T_{1} if

1) HiH_{i} is chosen as θn1H¯i\theta^{n-1}\bar{H}_{i} such that A¯iiH¯iC¯i\bar{A}_{ii}-\bar{H}_{i}\bar{C}_{i} is a Hurwitz matrix (see more detail of H¯i\bar{H}_{i} in Remark 6) for all i=1,,Ni=1,\ldots,N where A¯ii=1θnΓϵ1AiiΓϵ\bar{A}_{ii}=\frac{1}{\theta^{n}}\Gamma_{\epsilon}^{-1}A_{ii}\Gamma_{\epsilon} and C¯i=CiΓϵ\bar{C}_{i}=C_{i}\Gamma_{\epsilon};

2) PiP_{i} is a symmetric positive definite matrix solved by

sym{Pi(A¯iH¯iC¯i)}=γIn,\displaystyle sym\{P_{i}(\bar{A}_{i}-\bar{H}_{i}\bar{C}_{i})\}=-\gamma I_{n}, (20)

3) For all θ1\theta\geq 1, the coupling gain γ\gamma satisfies

γ>1θλ¯𝒪p(λA/θn1+2λPλ¯(A+ρB)),\displaystyle\gamma>\frac{1}{\theta\underline{\lambda}_{\mathcal{O}_{p}}}\left(\lambda_{A}/\theta^{n-1}+2\lambda_{P}\bar{\lambda}\left(\|A\|+\rho\|B\|\right)\right), (21)

where λ¯𝒪p=mini=1,,N{λ𝒪p,𝒪p𝒫i}\underline{\lambda}_{\mathcal{O}_{p}}=\min_{i=1,\ldots,N}\{\lambda_{\mathcal{O}_{p}},~{}\mathcal{O}_{p}\in\mathscr{P}_{i}\} with λ𝒪p=σ¯(𝒪p+𝒮𝒪p)\lambda_{\mathcal{O}_{p}}=\underline{\sigma}(\mathcal{L}_{\mathcal{O}_{p}}+\mathcal{S}_{\mathcal{O}_{p}}), 𝒪p\mathcal{L}_{\mathcal{O}_{p}} being the Laplacian matrix of the subgraph among 𝒪p\mathcal{O}_{p} as well as 𝒮𝒪p=diag{1,0,,0}|𝒪|p×|𝒪|p\mathcal{S}_{\mathcal{O}_{p}}=diag\{1,0,\ldots,0\}\in\mathbb{R}^{|\mathcal{O}|_{p}\times|\mathcal{O}|_{p}}; λA=maxi{σ¯(Aii+AiiT)}\lambda_{A}=\max_{i}\{\bar{\sigma}(A_{ii}+A_{ii}^{T})\}, λP=maxi{σ¯(Pi)}\lambda_{P}=\max_{i}\{\bar{\sigma}(P_{i})\}, λ¯=maxi{λi}\bar{\lambda}=\max_{i}\{\lambda_{i}\} with λi=|𝒫i|max{|𝒪p|,p𝒫i}\lambda_{i}=|\mathscr{P}_{i}|\max\{|\mathcal{O}_{p}|,~{}p\in\mathscr{P}_{i}\}, and ρ=maxi{ρi}\rho=\max_{i}\{\rho_{i}\} with ρi=max{ρli,l𝒪(𝒫i)}\rho_{i}=\max\{\rho_{li},~{}l\in\mathcal{O}(\mathscr{P}_{i})\}.

Proof:

Set ηii(p)=Γϵ1eii(p)\eta_{ii}^{(p)}=\Gamma_{\epsilon}^{-1}e_{ii}^{(p)} and ηji(p)=eji(p)\eta_{ji}^{(p)}=e_{ji}^{(p)}, then, based on equation (10), the dynamics of ηii(p)\eta_{ii}^{(p)} gives rise to

η˙ii(p)=\displaystyle\dot{\eta}_{ii}^{(p)}= θn(A¯iiH¯iC¯i)ηii(p)+Γϵ1j𝒩iAijη¯ij\displaystyle\theta^{n}\left(\bar{A}_{ii}-\bar{H}_{i}\bar{C}_{i}\right)\eta_{ii}^{(p)}+\Gamma_{\epsilon}^{-1}\sum_{j\in\mathcal{N}_{i}}A_{ij}\bar{\eta}_{ij}
+Γϵ1Bi(u^iiu¯i)γθnPi1j𝒪pαij(ηji(p)ηii(p)),\displaystyle+\Gamma_{\epsilon}^{-1}B_{i}(\hat{u}_{ii}-\bar{u}_{i})-\gamma\theta^{n}P_{i}^{-1}\sum_{j\in\mathcal{O}_{p}}\alpha_{ij}\left(\eta_{ji}^{(p)}-\eta_{ii}^{(p)}\right),

where η¯ij=Γϵ1e¯ij\bar{\eta}_{ij}=\Gamma_{\epsilon}^{-1}\bar{e}_{ij}. Denote ηi(p)=col{ηji(p),j𝒪p}\eta_{\star i}^{(p)}=col\{\eta_{ji}^{(p)},~{}j\in\mathcal{O}_{p}\} and ηi=col{ηij,j𝒪p,𝒪p𝒫i}\eta_{i\star}=col\{\eta_{ij},~{}j\in\mathcal{O}_{p},~{}\mathcal{O}_{p}\in\mathscr{P}_{i}\}, then

η˙i(p)=\displaystyle\dot{\eta}_{\star i}^{(p)}= Ai(p)ηi(p)+Γi(p)Δi(p)+Γi(p)Ui(p)\displaystyle A_{\star i}^{(p)}\eta_{\star i}^{(p)}+\Gamma_{\star i}^{(p)}\Delta_{\star i}^{(p)}+\Gamma_{\star i}^{(p)}U_{\star i}^{(p)}
γθn(Pi(p))1(𝒪pIn)ηi(p),\displaystyle-\gamma\theta^{n}\left(P_{\star i}^{(p)}\right)^{-1}\left(\mathcal{L}_{\mathcal{O}_{p}}\otimes I_{n}\right)\eta_{\star i}^{(p)},

where

Ai(p)=diag{θn(A¯iiH¯iC¯i),Aii,,Aii|𝒪p|1},\displaystyle A_{\star i}^{(p)}=diag\left\{\theta^{n}(\bar{A}_{ii}-\bar{H}_{i}\bar{C}_{i}),~{}\underbrace{A_{ii},\ldots,A_{ii}}_{|\mathcal{O}_{p}|-1}\right\},
Γi(p)=diag{Γϵ1,In,,In|𝒪p|1},\displaystyle\Gamma_{\star i}^{(p)}=diag\left\{\Gamma_{\epsilon}^{-1},~{}\underbrace{I_{n},\ldots,I_{n}}_{|\mathcal{O}_{p}|-1}\right\},
Δi(p)=col{Δii(p),Δli(p),l𝒪p},\displaystyle\Delta_{\star i}^{(p)}=col\left\{\Delta_{ii}^{(p)},~{}\Delta_{li}^{(p)},~{}l\in\mathcal{O}_{p}\right\},
Δii(p)=j𝒩iAijη¯ij,\displaystyle\Delta_{ii}^{(p)}=\sum_{j\in\mathcal{N}_{i}}A_{ij}\bar{\eta}_{ij},
Δli(p)=j𝒩i𝒪(𝒫l)Aijη¯ljj𝒩i\𝒪(𝒫l)Aijxj,\displaystyle\Delta_{li}^{(p)}=\sum_{j\in\mathcal{N}_{i}\cap\mathcal{O}(\mathcal{P}_{l})}A_{ij}\bar{\eta}_{lj}-\sum_{j\in\mathcal{N}_{i}\backslash\mathcal{O}(\mathcal{P}_{l})}A_{ij}x_{j},
Ui(p)=col{Uii,Uli,l𝒪p},\displaystyle U_{\star i}^{(p)}=col\left\{U_{ii},~{}U_{li},~{}l\in\mathcal{O}_{p}\right\},
Uii=Bi(u^iiu¯i),Uli=Bi(u^liu¯i),\displaystyle U_{ii}=B_{i}(\hat{u}_{ii}-\bar{u}_{i}),~{}U_{li}=B_{i}(\hat{u}_{li}-\bar{u}_{i}),
Pi(p)=diag{Pi,In,,In|𝒪p|1}.\displaystyle P_{\star i}^{(p)}=diag\left\{P_{i},~{}\underbrace{I_{n},\ldots,I_{n}}_{|\mathcal{O}_{p}|-1}\right\}.

Next, we have

Δi(p)\displaystyle\left\|\Delta_{\star i}^{(p)}\right\|\leq l𝒪pΔli(p)\displaystyle\sum_{l\in\mathcal{O}_{p}}\left\|\Delta_{li}^{(p)}\right\|
\displaystyle\leq l𝒪pAiηl+l𝒪p,liAix\displaystyle\sum_{l\in\mathcal{O}_{p}}\|A_{i}\|\left\|\eta_{l\star}\right\|+\sum_{l\in\mathcal{O}_{p},~{}l\neq i}\|A_{i}\|\left\|x\right\|
\displaystyle\leq l𝒪pAi(ηl+WT1),\displaystyle\sum_{l\in\mathcal{O}_{p}}\|A_{i}\|\left(\left\|\eta_{l\star}\right\|+W_{T_{1}}\right), (22)

where Ai=[Ai1,,AiN]A_{i}=[A_{i1},\ldots,A_{iN}], and WT1W_{T_{1}} is defined in Lemma 3. Besides,

Ui(p)\displaystyle\left\|U_{\star i}^{(p)}\right\|\leq l𝒪pUli\displaystyle\sum_{l\in\mathcal{O}_{p}}\|U_{li}\|
\displaystyle\leq Bil𝒪p(ρliel+κli)\displaystyle\|B_{i}\|\sum_{l\in\mathcal{O}_{p}}\left(\rho_{li}\|e_{l\star}\|+\kappa_{li}\right)
\displaystyle\leq ρiBil𝒪pel+Bi|𝒪p|κi,\displaystyle\rho_{i}\|B_{i}\|\sum_{l\in\mathcal{O}_{p}}\|e_{l\star}\|+\|B_{i}\||\mathcal{O}_{p}|\kappa_{i}, (23)

where κi=maxl𝒪p{κli}\kappa_{i}=\max_{l\in\mathcal{O}_{p}}\{\kappa_{li}\}. Subsequently, the Lyapunov candidate can be chosen as V=i=1NViV=\sum_{i=1}^{N}V_{i}, where

Vi=𝒪p𝒫i(ηi(p))TPi(p)ηi(p).\displaystyle V_{i}=\sum_{\mathcal{O}_{p}\in\mathscr{P}_{i}}\left(\eta_{\star i}^{(p)}\right)^{T}P_{\star i}^{(p)}\eta_{\star i}^{(p)}.

Then, based on (IV-B) and (IV-B), the derivative of ViV_{i} along with ηi(p)\eta_{\star i}^{(p)} gives rise to

V˙i=\displaystyle\dot{V}_{i}= 𝒪p𝒫i(ηi(p))Tsym{Pi(p)Ai(p)}ηi(p)\displaystyle\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\left(\eta_{\star i}^{(p)}\right)^{T}sym\left\{P_{\star i}^{(p)}A_{\star i}^{(p)}\right\}\eta_{\star i}^{(p)}
+θn1𝒪p𝒫i(2Pi(p)Δi(p)ηi(p)+2Pi(p)Ui(p)ηi(p))\displaystyle+\theta^{n-1}\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\left(2P_{\star i}^{(p)}\Delta_{\star i}^{(p)}\eta_{\star i}^{(p)}+2P_{\star i}^{(p)}U_{\star i}^{(p)}\eta_{\star i}^{(p)}\right)
𝒪p𝒫iγθn(ηi(p))T(𝒪pIn)ηi(p)\displaystyle-\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\gamma\theta^{n}\left(\eta_{\star i}^{(p)}\right)^{T}\left(\mathcal{L}_{\mathcal{O}_{p}}\otimes I_{n}\right)\eta_{\star i}^{(p)}
\displaystyle\leq 𝒪p𝒫iθn(ηii(p))Tsym{Pi(A¯iiH¯iC¯i)}ηii(p)\displaystyle\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\theta^{n}\left(\eta_{ii}^{(p)}\right)^{T}sym\left\{P_{i}\left(\bar{A}_{ii}-\bar{H}_{i}\bar{C}_{i}\right)\right\}\eta_{ii}^{(p)}
𝒪p𝒫iγθn(ηi(p))T(𝒪pIn)ηi(p)\displaystyle-\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\gamma\theta^{n}\left(\eta_{\star i}^{(p)}\right)^{T}\left(\mathcal{L}_{\mathcal{O}_{p}}\otimes I_{n}\right)\eta_{\star i}^{(p)}
+𝒪p𝒫ij𝒪p,ji(ηji(p))Tsym{Aii}ηji(p)\displaystyle+\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\sum_{j\in\mathcal{O}_{p},~{}j\neq i}\left(\eta_{ji}^{(p)}\right)^{T}sym\{A_{ii}\}\eta_{ji}^{(p)}
+2θn1σ¯(Pi)𝒪p𝒫i(l𝒪pAi(ηl+WT1)\displaystyle+2\theta^{n-1}\bar{\sigma}(P_{i})\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\left(\sum_{l\in\mathcal{O}_{p}}\|A_{i}\|\left(\left\|\eta_{l\star}\right\|+W_{T_{1}}\right)\right.
+ρiBil𝒪pηl+Bi|𝒪p|κi)ηi(p).\displaystyle+\left.\rho_{i}\|B_{i}\|\sum_{l\in\mathcal{O}_{p}}\|\eta_{l\star}\|+\|B_{i}\||\mathcal{O}_{p}|\kappa_{i}\right)\left\|\eta_{\star i}^{(p)}\right\|. (24)

According to conditions 1), 2), and Lemma 1, we have

(ηii(p))Tsym{PiΛi}ηii(p)γ(ηi(p))T(𝒪pIn)ηi(p)\displaystyle\left(\eta_{ii}^{(p)}\right)^{T}sym\left\{P_{i}\Lambda_{i}\right\}\eta_{ii}^{(p)}-\gamma\left(\eta_{\star i}^{(p)}\right)^{T}\left(\mathcal{L}_{\mathcal{O}_{p}}\otimes I_{n}\right)\eta_{\star i}^{(p)}
=\displaystyle= γ(ηii(p))Tηii(p)γ(ηi(p))T(𝒪pIn)ηi(p)\displaystyle-\gamma\left(\eta_{ii}^{(p)}\right)^{T}\eta_{ii}^{(p)}-\gamma\left(\eta_{\star i}^{(p)}\right)^{T}\left(\mathcal{L}_{\mathcal{O}_{p}}\otimes I_{n}\right)\eta_{\star i}^{(p)}
=\displaystyle= γ(ηi(p))T((𝒪p+𝒮𝒪p)In)ηi(p)\displaystyle-\gamma\left(\eta_{\star i}^{(p)}\right)^{T}\left(\left(\mathcal{L}_{\mathcal{O}_{p}}+\mathcal{S}_{\mathcal{O}_{p}}\right)\otimes I_{n}\right)\eta_{\star i}^{(p)}
=\displaystyle= γλ𝒪pηi(p)2,\displaystyle-\gamma\lambda_{\mathcal{O}_{p}}\left\|\eta_{\star i}^{(p)}\right\|^{2}, (25)

where Λi=A¯iiH¯iC¯i\Lambda_{i}=\bar{A}_{ii}-\bar{H}_{i}\bar{C}_{i}. Then, substituting (IV-B) into (IV-B) yields

V˙i\displaystyle\dot{V}_{i}\leq γθn𝒪p𝒫iλ𝒪pηi(p)2+𝒪p𝒫iσ¯(Aii+AiiT)ηi(p)2\displaystyle-\gamma\theta^{n}\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\lambda_{\mathcal{O}_{p}}\left\|\eta_{\star i}^{(p)}\right\|^{2}+\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\bar{\sigma}(A_{ii}+A_{ii}^{T})\left\|\eta_{\star i}^{(p)}\right\|^{2}
+2θn1σ¯(Pi)𝒪p𝒫il𝒪p(Ai+ρiBi)ηlηi(p)\displaystyle+2\theta^{n-1}\bar{\sigma}(P_{i})\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\sum_{l\in\mathcal{O}_{p}}\left(\|A_{i}\|+\rho_{i}\|B_{i}\|\right)\left\|\eta_{l\star}\right\|\left\|\eta_{\star i}^{(p)}\right\|
+2θn1σ¯(Pi)𝒪p𝒫i(AiWT1+Biκi)|𝒪p|ηi(p)\displaystyle+2\theta^{n-1}\bar{\sigma}(P_{i})\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\left(\|A_{i}\|W_{T_{1}}+\|B_{i}\|\kappa_{i}\right)|\mathcal{O}_{p}|\left\|\eta_{\star i}^{(p)}\right\|
\displaystyle\leq γθn𝒪p𝒫iλ𝒪pηi(p)2+𝒪p𝒫iσ¯(Aii+AiiT)ηi(p)2\displaystyle-\gamma\theta^{n}\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\lambda_{\mathcal{O}_{p}}\left\|\eta_{\star i}^{(p)}\right\|^{2}+\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\bar{\sigma}(A_{ii}+A_{ii}^{T})\left\|\eta_{\star i}^{(p)}\right\|^{2}
+θn1σ¯(Pi)(Ai+ρiBi)𝒪p𝒫il𝒪pηl2\displaystyle+\theta^{n-1}\bar{\sigma}(P_{i})\left(\|A_{i}\|+\rho_{i}\|B_{i}\|\right)\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\sum_{l\in\mathcal{O}_{p}}\left\|\eta_{l\star}\right\|^{2}
+θn1σ¯(Pi)(Ai+ρiBi)𝒪p𝒫il𝒪pηi(p)2\displaystyle+\theta^{n-1}\bar{\sigma}(P_{i})\left(\|A_{i}\|+\rho_{i}\|B_{i}\|\right)\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\sum_{l\in\mathcal{O}_{p}}\left\|\eta_{\star i}^{(p)}\right\|^{2}
+2θn1σ¯(Pi)(AiWT1+Biκi)𝒪p𝒫i|𝒪p|ηi(p).\displaystyle+2\theta^{n-1}\bar{\sigma}(P_{i})\left(\|A_{i}\|W_{T_{1}}+\|B_{i}\|\kappa_{i}\right)\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}|\mathcal{O}_{p}|\left\|\eta_{\star i}^{(p)}\right\|.

Note that i=1N𝒪p𝒫il𝒪pηl2\sum_{i=1}^{N}\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\sum_{l\in\mathcal{O}_{p}}\|\eta_{l\star}\|^{2} amounts to summing i=1Nηi2\sum_{i=1}^{N}\|\eta_{i\star}\|^{2} over all i=1,,Ni=1,\ldots,N at most λ¯\bar{\lambda} times, and the same holds for i=1N𝒪p𝒫il𝒪pηi(p)2\sum_{i=1}^{N}\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\sum_{l\in\mathcal{O}_{p}}\|\eta_{\star i}^{(p)}\|^{2}, where λ¯=maxi{λi}\bar{\lambda}=\max_{i}\{\lambda_{i}\} with λi=|𝒫i|max{|𝒪p|,p𝒫i}\lambda_{i}=|\mathscr{P}_{i}|\max\{|\mathcal{O}_{p}|,~{}p\in\mathscr{P}_{i}\}. This leads to

V˙\displaystyle\dot{V}\leq γθnλ¯𝒪pi=1N𝒪p𝒫iηi(p)2+λAi=1N𝒪p𝒫iηi(p)2\displaystyle-\gamma\theta^{n}\underline{\lambda}_{\mathcal{O}_{p}}\sum_{i=1}^{N}\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\left\|\eta_{\star i}^{(p)}\right\|^{2}+\lambda_{A}\sum_{i=1}^{N}\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\left\|\eta_{\star i}^{(p)}\right\|^{2}
+λPθn1(A+ρB)i=1N𝒪p𝒫il𝒪pηl2\displaystyle+\lambda_{P}\theta^{n-1}\left(\|A\|+\rho\|B\|\right)\sum_{i=1}^{N}\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\sum_{l\in\mathcal{O}_{p}}\left\|\eta_{l\star}\right\|^{2}
+λPθn1(A+ρB)i=1N𝒪p𝒫il𝒪pηi(p)2\displaystyle+\lambda_{P}\theta^{n-1}\left(\|A\|+\rho\|B\|\right)\sum_{i=1}^{N}\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\sum_{l\in\mathcal{O}_{p}}\left\|\eta_{\star i}^{(p)}\right\|^{2}
+2λPθn1(AWT1+Bκ)i=1N𝒪p𝒫i|𝒪p|ηi(p)\displaystyle+2\lambda_{P}\theta^{n-1}\left(\|A\|W_{T_{1}}+\|B\|\kappa\right)\sum_{i=1}^{N}\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}|\mathcal{O}_{p}|\left\|\eta_{\star i}^{(p)}\right\|
\displaystyle\leq γθnλ¯𝒪pi=1N𝒪p𝒫iηi(p)2\displaystyle-\gamma\theta^{n}\underline{\lambda}_{\mathcal{O}_{p}}\sum_{i=1}^{N}\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\left\|\eta_{\star i}^{(p)}\right\|^{2}
+θn1λPλ¯(A+ρB)i=1Nηi2\displaystyle+\theta^{n-1}\lambda_{P}\bar{\lambda}\left(\|A\|+\rho\|B\|\right)\sum_{i=1}^{N}\|\eta_{i\star}\|^{2}
+θn1(λA+λPλ¯(A+ρB))i=1N𝒪p𝒫iηi(p)2\displaystyle+\theta^{n-1}\left(\lambda_{A}+\lambda_{P}\bar{\lambda}\left(\|A\|+\rho\|B\|\right)\right)\sum_{i=1}^{N}\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\left\|\eta_{\star i}^{(p)}\right\|^{2}
+2θn1λPλ¯(AWT1+Bκ)i=1N𝒪p𝒫iηi(p),\displaystyle+2\theta^{n-1}\lambda_{P}\bar{\lambda}\left(\|A\|W_{T_{1}}+\|B\|\kappa\right)\sum_{i=1}^{N}\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\left\|\eta_{\star i}^{(p)}\right\|,

where κ=maxi=1,,N{κi}\kappa=\max_{i=1,\ldots,N}\{\kappa_{i}\}. Notice that Lemma 4 leads to

i=1N𝒪p𝒫iηi(p)2=i=1Nηi2,\sum_{i=1}^{N}\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\|\eta_{\star i}^{(p)}\|^{2}=\sum_{i=1}^{N}\|\eta_{i\star}\|^{2},

and

i=1N𝒪p𝒫iηi(p)2η,\sum_{i=1}^{N}\sum_{\mathcal{O}_{p}\in\mathcal{P}_{i}}\|\eta_{\star i}^{(p)}\|\leq\sqrt{2}\|\eta\|,

where η=col{ηi,i=1,,N}\eta=col\{\eta_{i\star},~{}i=1,\dots,N\}. Then, by denoting c(θ)=γθλ¯𝒪pλA/θn12λPλ¯(A+ρB)c(\theta)=\gamma\theta\underline{\lambda}_{\mathcal{O}_{p}}-\lambda_{A}/\theta^{n-1}-2\lambda_{P}\bar{\lambda}\left(\|A\|+\rho\|B\|\right), and cK=2λPλ¯(AWT1+Bκ)c_{K}=2\lambda_{P}\bar{\lambda}\left(\|A\|W_{T_{1}}+\|B\|\kappa\right), we can further obtain

V˙\displaystyle\dot{V}\leq c(θ)θn1η2+cKθn1η.\displaystyle-c(\theta)\theta^{n-1}\|\eta\|^{2}+c_{K}\theta^{n-1}\|\eta\|. (26)

Condition 3) indicates that c(θ)>0c(\theta)>0, and c(θ)c(\theta) is obviously monotonically increasing and radially unbounded with respect to θ\theta.

Equation (26) implies that V˙<0\dot{V}<0 whenever η\|\eta\| lies outside Ωη={η:ηcK/c(θ)}\Omega_{\eta}=\{\|\eta\|:~{}\|\eta\|\leq c_{K}/c(\theta)\}, so Ωη\Omega_{\eta} is an invariant set. Then, by the conclusion of [40], we obtain

η(t)max{beac(θ)tη(0),cKc(θ)}\displaystyle\|\eta(t)\|\leq\max\left\{be^{-ac(\theta)t}\|\eta(0)\|,~{}\frac{c_{K}}{c(\theta)}\right\} (27)

for all t>0t>0 and some positive constants a,ba,b. It means

e(t)max{bθn1eac(θ)te(0),cKc(θ)},\displaystyle\|e(t)\|\leq\max\left\{b\theta^{n-1}e^{-ac(\theta)t}\|e(0)\|,~{}\frac{c_{K}}{c(\theta)}\right\}, (28)

which shows that e\|e\| can be made to converge to an arbitrarily small invariant set Ωe={e:ecK/c(θ)}\Omega_{e}=\{\|e\|:~{}\|e\|\leq c_{K}/c(\theta)\} in an arbitrarily short time by designing θ\theta properly. In other words, one can choose θ\theta so that e\|e\| converges to Ωe\Omega_{e} by any T1<T1T_{1}^{\prime}<T_{1}. ∎

This section has proved that the error dynamics of the partitioned distributed observer converge to an arbitrarily small invariant set before the given time T1T_{1}. Because the performance of the distributed observer and the closed-loop system cannot be analyzed separately, Theorem 1 relies on an assumption involving T1T_{1}. In Theorem 3 of the next subsection, we will provide a joint analysis of the observer and controller, where the assumption on T1T_{1} is removed. Next, we elaborate on some details of Theorem 1 in several remarks.

Remark 5

The design methods for the parameters of the partitioned distributed observer are detailed in Theorem 1. According to Theorem 1, all parameters except γ\gamma rely only on the information of each individual agent. Designing γ\gamma involves the system matrix, input matrix, and network topology. However, γ\gamma is merely a sufficiently large constant; therefore, in practice, the use of global information is often avoided by designing γ\gamma as an adaptive parameter.
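To make conditions 1) and 2) of Theorem 1 concrete, the following minimal sketch picks an observer gain by hand, checks the Hurwitz property, and solves the Lyapunov equation sym{P_i(Ā_ii − H̄_i C̄_i)} = −γI via vectorization. The matrices A_bar, C_bar, and H_bar are illustrative placeholders (hypothetical values), not quantities from the paper.

```python
import numpy as np

n, gamma = 2, 3.0
A_bar = np.array([[0.0, 1.0], [-2.0, -1.0]])   # illustrative \bar{A}_ii
C_bar = np.array([[1.0, 0.0]])                 # illustrative \bar{C}_i

# Condition 1): choose \bar{H}_i so that Lam = \bar{A}_ii - \bar{H}_i \bar{C}_i
# is Hurwitz (here the gain is hand-picked; pole placement works as well).
H_bar = np.array([[3.0], [4.0]])               # hypothetical gain
Lam = A_bar - H_bar @ C_bar
assert all(ev.real < 0 for ev in np.linalg.eigvals(Lam))  # Hurwitz check

# Condition 2): solve Lam^T P + P Lam = -gamma I via vectorization,
# (I (x) Lam^T + Lam^T (x) I) vec(P) = -gamma vec(I).
I = np.eye(n)
M = np.kron(I, Lam.T) + np.kron(Lam.T, I)
P = np.linalg.solve(M, (-gamma * I).ravel()).reshape(n, n)

assert np.allclose(P, P.T, atol=1e-9)               # P is symmetric
assert min(np.linalg.eigvalsh(P)) > 0               # and positive definite
```

Uniqueness of the Lyapunov solution for a Hurwitz Lam guarantees that P is symmetric positive definite, matching what condition 2) requires.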

Remark 6

Assumption 1 requires the observability of (Ci,Aii)(C_{i},A_{ii}). Note that C¯i\bar{C}_{i} and A¯ii\bar{A}_{ii} are obtained from CiC_{i} and AiiA_{ii} by a similarity transformation. Thus, [9, Lemma 3.1] shows that (C¯i,A¯ii)(\bar{C}_{i},\bar{A}_{ii}) is also observable. Hence, the H¯i\bar{H}_{i} in condition 1) of Theorem 1 can be obtained through pole placement.

Remark 7

Fusion estimation and model mismatch make the stability analysis of the error dynamics (10)–(11) of the partitioned distributed observer a real challenge: they not only produce more complex error dynamics but also give rise to two distinct compact error forms (ei(p)e_{\star i}^{(p)} and eie_{i\star}). In this subsection, Lemma 4 establishes the strict mathematical relationship between these two compact error forms. Then, Theorem 1 develops the two-layer Lyapunov analysis method, which transforms this relationship into results usable in traditional Lyapunov stability analysis. The proof of Lemma 4 and the design of the two-layer Lyapunov function (V=i=1NViV=\sum_{i=1}^{N}V_{i}) are the keys to proving the stability of the error dynamics of the partitioned distributed observer.

IV-C Closed-loop performance under the distributed control law

This subsection first establishes the performance of the closed-loop system under the assumption of stable observer error (Theorem 2). Then, we present the joint analysis of the observer and controller in Theorem 3, which completes the stability analysis of the observer error dynamics and the closed-loop dynamics without relying on additional assumptions.

First, under the centralized control law (12), the closed-loop system is

x˙=Ax+BKx=(A+BK)x.\displaystyle\dot{x}=Ax+BKx=(A+BK)x. (29)

Furthermore, the closed-loop system under distributed control law (13) takes the form of

x˙i=\displaystyle\dot{x}_{i}= Aiixi+j𝒩iAijxj+Bij𝒩i{i}Kijx¯ij\displaystyle A_{ii}x_{i}+\sum_{j\in\mathcal{N}_{i}}A_{ij}x_{j}+B_{i}\sum_{j\in\mathcal{N}_{i}\cup\{i\}}K_{ij}\bar{x}_{ij}
=\displaystyle= Aiixi+j𝒩iAijxj+Bij𝒩i{i}Kijxj\displaystyle A_{ii}x_{i}+\sum_{j\in\mathcal{N}_{i}}A_{ij}x_{j}+B_{i}\sum_{j\in\mathcal{N}_{i}\cup\{i\}}K_{ij}x_{j}
+Bij𝒩i{i}Kijx¯ijBij𝒩i{i}Kijxj.\displaystyle+B_{i}\sum_{j\in\mathcal{N}_{i}\cup\{i\}}K_{ij}\bar{x}_{ij}-B_{i}\sum_{j\in\mathcal{N}_{i}\cup\{i\}}K_{ij}x_{j}. (30)

Let U=col{u^iiui,i=1,,N}U=col\{\hat{u}_{ii}-u_{i},~{}i=1,\ldots,N\}, then the compact form of (IV-C) is expressed as

x˙=(A+BK)x+BU.\displaystyle\dot{x}=(A+BK)x+BU. (31)

Now, we state the following to show the performance of (31).

Theorem 2

Consider the closed-loop system (31) together with its communication network and physical network subject to Assumptions 1 and 2. If the error e\|e\| of the partitioned distributed observer (8)–(9) stays in Ωe\Omega_{e} for all t0t\geq 0, then x\|x\| converges to the invariant set

Ωx={x:x4cKQBK/c(θ)c2}.\displaystyle\Omega_{x}=\{\|x\|:~{}\|x\|\leq 4c_{K}\|QB\|\|K\|/c(\theta)c_{2}\}.

where c(θ)=γθλ¯𝒪pλA/θn12λPλ¯(A+ρB)c(\theta)=\gamma\theta\underline{\lambda}_{\mathcal{O}_{p}}-\lambda_{A}/\theta^{n-1}-2\lambda_{P}\bar{\lambda}\left(\|A\|+\rho\|B\|\right), given in the proof of Theorem 1, is monotonically increasing and radially unbounded with respect to θ\theta. Furthermore, Ωx\Omega_{x} can be made arbitrarily small as the observer parameter θ\theta tends to infinity.

Proof:

Since A+BKA+BK is Hurwitz, there is a symmetric positive definite matrix QQ so that sym{Q(A+BK)}=c2InNsym\{Q(A+BK)\}=-c_{2}I_{nN}, where c2>0c_{2}>0 is a given constant. Then, the Lyapunov candidate can be chosen as W(t)=xTQxW(t)=x^{T}Qx. Its derivative along with (31) gives rise to

W˙=xTsym{Q(A+BK)}x+2xTQBU.\displaystyle\dot{W}=x^{T}sym\{Q(A+BK)\}x+2x^{T}QBU. (32)

Based on (IV-A), we know u^iiui2Kiei\|\hat{u}_{ii}-u_{i}\|\leq\sqrt{2}\|K_{i}\|\|e_{i\star}\|. Hence,

W˙\displaystyle\dot{W}\leq c2x2+2xQBi=1N2Kiei\displaystyle-c_{2}\|x\|^{2}+2\|x\|\|QB\|\sum_{i=1}^{N}\sqrt{2}\|K_{i}\|\|e_{i\star}\|
\displaystyle\leq c2x2+22xQBKi=1Nei\displaystyle-c_{2}\|x\|^{2}+2\sqrt{2}\|x\|\|QB\|\|K\|\sum_{i=1}^{N}\|e_{i\star}\|
\displaystyle\leq c2x2+4QBKxe\displaystyle-c_{2}\|x\|^{2}+4\|QB\|\|K\|\|x\|\|e\|
\displaystyle\leq c2x2+4cKc(θ)QBKx.\displaystyle-c_{2}\|x\|^{2}+\frac{4c_{K}}{c(\theta)}\|QB\|\|K\|\|x\|. (33)

It means that Ωx={x:x4cKQBK/c(θ)c2}\Omega_{x}=\{\|x\|:~{}\|x\|\leq 4c_{K}\|QB\|\|K\|/c(\theta)c_{2}\} is an invariant set. Furthermore, since c(θ)c(\theta) is a monotonically increasing function with respect to θ\theta and radially unbounded, we have

limθ4cKQBKc(θ)c2=0.\displaystyle\lim_{\theta\to\infty}\frac{4c_{K}\|QB\|\|K\|}{c(\theta)c_{2}}=0. (34)

Therefore, Ωx\Omega_{x} is an arbitrarily small invariant set. ∎

In what follows, we focus on the closed-loop system under the distributed control law (15) containing the saturation mechanism:

x˙i=Aiixi+j𝒩iAijxj+Bij𝒩i{i}Kij𝕩ij.\displaystyle\dot{x}_{i}=A_{ii}x_{i}+\sum_{j\in\mathcal{N}_{i}}A_{ij}x_{j}+B_{i}\sum_{j\in\mathcal{N}_{i}\cup\{i\}}K_{ij}\mathbbm{x}_{ij}. (35)

This is the actual control law adopted by the closed-loop system in this paper, and also the source of the output information yiy_{i} used by the partitioned distributed observer (8)–(9). Let U¯=col{u¯iui,i=1,,N}\bar{U}=col\{\bar{u}_{i}-u_{i},~{}i=1,\ldots,N\} to obtain the compact form

x˙=(A+BK)x+BU¯.\displaystyle\dot{x}=(A+BK)x+B\bar{U}. (36)

The following theorem will show the stability of (36).

Theorem 3

The distributed-observer-based distributed control law for large-scale systems is given by (8)–(9) and (13)–(15). If the gains KijK_{ij} in (13) and the observer parameters are chosen according to Theorem 1, then the states of the closed-loop system (36) converge to the invariant set

Ωx={x:x4cKQBK/c(θ)c2}.\displaystyle\Omega_{x}=\{\|x\|:~{}\|x\|\leq 4c_{K}\|QB\|\|K\|/c(\theta)c_{2}\}. (37)

This set can be made arbitrarily small as the observer parameter θ\theta tends to infinity.

Proof:

Since (36) is a linear system, it exhibits no finite-time escape. Hence, there are constants T1>0T_{1}>0 and WT1>0W_{T_{1}}>0 such that xWT1\|x\|\leq W_{T_{1}} for all 0tT10\leq t\leq T_{1}. Then, in light of Theorem 1, there is a proper θ>0\theta>0 such that e\|e\| converges to Ωe\Omega_{e} during (0,T1](0,T_{1}^{\prime}], where T1T_{1}^{\prime} is a constant satisfying 0<T1<T10<T_{1}^{\prime}<T_{1}.

Subsequently, at time T1T_{1}^{\prime}, we have xWT1\|x\|\leq W_{T_{1}}, which yields x¯ijxj+e¯ijWT1+cK/c(θ)\|\bar{x}_{ij}\|\leq\|x_{j}\|+\|\bar{e}_{ij}\|\leq W_{T_{1}}+c_{K}/c(\theta). By choosing >WT1+cK/c(θ)\mathcal{M}>W_{T_{1}}+c_{K}/c(\theta) for all θ1\theta\geq 1, we know u¯i=u^ii\bar{u}_{i}=\hat{u}_{ii} at t=T1t=T_{1}^{\prime}. In other words, the closed-loop system (36) degenerates into the closed-loop system (31).

Let ΩW={x:xWT1}\Omega_{W}=\{\|x\|:~{}\|x\|\leq W_{T_{1}}\}; then (e,x)Ωe×ΩW(\|e\|,\|x\|)\in\Omega_{e}\times\Omega_{W} at t=T1t=T_{1}^{\prime}. It follows from Theorem 2 that W˙<0\dot{W}<0 if xΩx\|x\|\notin\Omega_{x}. Therefore, x\|x\| never leaves ΩW\Omega_{W} and thus remains bounded by WT1W_{T_{1}}, which implies that e\|e\| stays in Ωe\Omega_{e} for t>T1t>T_{1}^{\prime} owing to Theorem 1. As a result, x\|x\| enters Ωx\Omega_{x} by the conclusion of Theorem 2. Hence, (e,x)(\|e\|,\|x\|) enters Ωe×Ωx\Omega_{e}\times\Omega_{x} as tt\to\infty. ∎

Refer to caption
Figure 7: Schematic diagram of the main proof process.

Theorem 3 is of primary importance for this paper. It combines the conclusions of Theorem 1 and Theorem 2 to obtain the final result: the error dynamics of the partitioned distributed observer and the closed-loop system converge to arbitrarily small invariant sets. Furthermore, the finite-time conditions involved in Theorem 1 are eliminated by the analysis in Theorem 3. In fact, the final conclusion depends only on the controllability and observability of the system and the connectivity of the communication network, and on no other assumptions. These conclusions also indicate that, compared with the traditional distributed observer, the partitioned distributed observer suffers almost no performance loss, while the dimension of the observer on each agent is greatly reduced. These results make the distributed observer more practical for large-scale system problems.

Remark 8

The proof of the stability of the observer error dynamics and the closed-loop system mainly involves three steps. First, we assume that the states of the closed-loop system are bounded, under which the error dynamics e(t)e(t) can be proved to converge to Ωe\Omega_{e} by t=T1t=T_{1}^{\prime} (see Figure 7a). Second, we show that the states of the closed-loop system converge to Ωx\Omega_{x} when eie_{i} always stays in Ωe\Omega_{e}. Finally, as seen in Figure 7b, we take ΩW\Omega_{W} as the bound of the closed-loop system before T1T_{1}. By designing θ\theta, the trajectory of the observer converges to a neighborhood of the true states before xx escapes ΩW\Omega_{W}. After that, the distributed control law steers the closed-loop system into an arbitrarily small invariant set Ωx\Omega_{x}.

IV-D The ability of performance recovery

This subsection shows that the state trajectories of the closed-loop system (36) controlled by the distributed-observer-based distributed control law can approximate the trajectories of (29) arbitrarily closely. To this end, we denote by xc(t)x_{c}(t) the solution of the closed-loop system (29) with initial state xc(0)=x0x_{c}(0)=x_{0}. Moreover, the solution of (36) is defined as xr(t)x_{r}(t) with initial value xr(0)=x0x_{r}(0)=x_{0}. Note that the trajectory of xr(t)x_{r}(t) depends on θ\theta.

Theorem 4

The performance of the distributed-observer-based distributed control law can approach that of the centralized control law arbitrarily closely, i.e., for any ε>0\varepsilon>0, there exists a constant M>0M>0 such that, for any θ>M\theta>M, xr(t)xc(t)<ε\|x_{r}(t)-x_{c}(t)\|<\varepsilon for all t>0t>0.

Proof:

Since KK is chosen so that A+BKA+BK is a Hurwitz matrix, the system (29) is stable. Hence, there exists T2>0T_{2}>0 such that xc(t)ε/2\|x_{c}(t)\|\leq\varepsilon/2 for all t>T2t>T_{2}. Besides, according to Theorem 3, there is M1>0M_{1}>0 so that 4cKQBK/c(θ)c2<ε/24c_{K}\|QB\|\|K\|/c(\theta)c_{2}<\varepsilon/2 for all θ>M1\theta>M_{1}. Then, there is a constant T3>0T_{3}>0 such that xr(t)<ε/2\|x_{r}(t)\|<\varepsilon/2 for all θ>M1\theta>M_{1} and t>T3t>T_{3}. Subsequently, we have

xc(t)xr(t)xc(t)+xr(t)<ε\displaystyle\|x_{c}(t)-x_{r}(t)\|\leq\|x_{c}(t)\|+\|x_{r}(t)\|<\varepsilon (38)

for arbitrary t>Tmax{T2,T3}t>T\triangleq\max\{T_{2},T_{3}\}. Up to now, we have proved that xc(t)xr(t)<ε\|x_{c}(t)-x_{r}(t)\|<\varepsilon holds on t(T,)t\in(T,\infty). In what follows, we will show it also holds on t(0,T]t\in(0,T].

For this purpose, we construct a function T(θ)T(\theta), which satisfies limθT(θ)=0\lim_{\theta\to\infty}T(\theta)=0 and e<δ(θ)\|e\|<\delta^{\prime}(\theta) for t>T(θ)t>T(\theta), where δ(θ)>0\delta^{\prime}(\theta)>0 is a function with limθδ(θ)=0\lim_{\theta\to\infty}\delta^{\prime}(\theta)=0.

We first consider the interval (0,T(θ)](0,T(\theta)]. Note that both xc(t)x_{c}(t) and xr(t)x_{r}(t) are bounded and that x˙c(0)\dot{x}_{c}(0) and x˙r(0)\dot{x}_{r}(0) are finite; thus there exist two linear functions fc(t)=kctf_{c}(t)=k_{c}t and fr(t)=krtf_{r}(t)=k_{r}t satisfying xc(t)xc(0)kct\|x_{c}(t)-x_{c}(0)\|\leq k_{c}t and xr(t)xr(0)krt\|x_{r}(t)-x_{r}(0)\|\leq k_{r}t for t(0,T(θ)]t\in(0,T(\theta)], which leads to

xc(t)xr(t)\displaystyle\|x_{c}(t)-x_{r}(t)\| xc(t)xc(0)+xr(t)xr(0)\displaystyle\leq\|x_{c}(t)-x_{c}(0)\|+\|x_{r}(t)-x_{r}(0)\|
kct+krtkT(θ)\displaystyle\leq k_{c}t+k_{r}t\leq kT(\theta)

for all t(0,T(θ)]t\in(0,T(\theta)], where k=kc+krk=k_{c}+k_{r}. As a result, there exists a constant M2>0M_{2}>0 such that T(θ)<ε/kT(\theta)<\varepsilon/k for arbitrary θ>M2\theta>M_{2}. Therefore, xc(t)xr(t)<ε\|x_{c}(t)-x_{r}(t)\|<\varepsilon for all θ>M2\theta>M_{2} and t(0,T(θ)]t\in(0,T(\theta)].

Finally, we consider the scenario where t(T(θ),T]t\in(T(\theta),T]. We define Fc(t,x)=(A+BK)xF_{c}(t,x)=(A+BK)x and Fr(t,x,e)=(A+BK)x+BU¯F_{r}(t,x,e)=(A+BK)x+B\bar{U}, where U¯\bar{U} is an implicit function of ee in light of Theorems 2 and 3. Since (T(θ),T](T(\theta),T] is a finite time interval, the conditions of the theorem on continuous dependence of solutions of differential equations on initial values and parameters (see any standard monograph on ordinary differential equations) are fulfilled. It indicates that there is a constant δ>0\delta>0 such that xc(t)xr(t)<ε\|x_{c}(t)-x_{r}(t)\|<\varepsilon if the parameter ee in Fr(t,x,e)F_{r}(t,x,e) satisfies e<δ\|e\|<\delta. Bearing in mind the conclusions of Theorems 1 and 3, there exists M3>0M_{3}>0 such that e<δ(θ)<δ\|e\|<\delta^{\prime}(\theta)<\delta for all θ>M3\theta>M_{3}.

Therefore, for any ε>0\varepsilon>0, choosing θ>M=max{M1,M2,M3}\theta>M=\max\{M_{1},M_{2},M_{3}\} ensures xc(t)xr(t)<ε\|x_{c}(t)-x_{r}(t)\|<\varepsilon for all t(0,)t\in(0,\infty). ∎

Up to now, the goals of this paper have been fully realized. After partitioning, the dimension of each local observer is far less than that of the interconnected system, yet the distributed control law designed based on the partitioned distributed observer still ensures that the controlled system approximates the centralized control performance arbitrarily closely.
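The performance-recovery property can be illustrated on a toy scalar example: the centralized loop and the observer-based loop are integrated side by side, with an input mismatch decaying at a rate governed by θ standing in for the observer error. The dynamics and numbers below are purely illustrative, not the paper's microgrid model.

```python
import numpy as np

def worst_gap(theta, a_cl=-1.0, x0=1.0, dt=1e-3, T=10.0):
    """Euler-integrate the centralized loop x_c' = a_cl*x_c and the
    observer-based loop x_r' = a_cl*x_r + u_mis, where the mismatch
    u_mis = exp(-theta*t) mimics the decaying observer error."""
    xc = xr = x0
    gap = 0.0
    for k in range(int(T / dt)):
        t = k * dt
        u_mis = np.exp(-theta * t)      # mismatch shrinks faster for larger theta
        xc += dt * a_cl * xc
        xr += dt * (a_cl * xr + u_mis)
        gap = max(gap, abs(xr - xc))
    return gap

gaps = [worst_gap(theta) for theta in (1.0, 10.0, 100.0)]
# Larger theta -> smaller worst-case deviation from the centralized trajectory.
assert gaps[0] > gaps[1] > gaps[2]
```

This mirrors the statement of Theorem 4: increasing θ drives sup_t ‖x_r(t) − x_c(t)‖ toward zero.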

Remark 9

In implementing the partitioned distributed observer and the distributed-observer-based distributed control law, the high-gain parameter θ\theta plays an important role. However, the high gain makes this method unable to deal directly with measurement noise. When measurement noise exists, there is a trade-off between the steady-state error caused by measurement noise, the steady-state error caused by model mismatch, and the gap between the distributed and centralized control performance: the smaller the latter two, the larger the former. Therefore, when both measurement noise and model mismatch exist, finding an optimal θ\theta that balances these three indicators is a topic worth further research.

V Simulation

To illustrate the effectiveness of the developed methods, this section is divided into three parts. First, we describe the simulation system. Then, we demonstrate the effectiveness of the network partitioning method. Finally, we show the validity of the partitioned distributed observer and the distributed-observer-based distributed control law.

V-A System formulation

We consider the frequency subsystem of the droop control loop contained in a microgrid system. Assume that the large-scale system contains $N$ microgrids. Let $\delta_i$ be the electrical angle of the $i$th generator and $\omega_i$ its angular velocity. Then, the dynamics of $(\delta_i,\omega_i)^T$ are governed by

\begin{align}
\dot{\delta}_i ={}& \omega_i, \tag{39}\\
\dot{\omega}_i ={}& -\frac{1}{\tau_{P_i}}\omega_i-\frac{k_{P_i}}{\tau_{P_i}}\left(P_{1i}V_i^2+P_{2i}V_i+P_{3i}-P_d\right)\nonumber\\
&-\frac{k_{P_i}}{\tau_{P_i}}\left(\sum_{j=1}^{N}|\beta_{ij}|V_iV_j\sin(\delta_i-\delta_j)\right)-\frac{1}{\tau_{P_i}}u_i, \tag{40}\\
y_i ={}& \delta_i, \tag{41}
\end{align}

where $\tau_{P_i}$ and $k_{P_i}$ represent the filter coefficient for measuring active power and the frequency droop gain, respectively; $P_d$ stands for the expected active power; $V_i$ is the voltage of the $i$th microgrid; and $\beta_{ij}$ represents the coupling between microgrids $i$ and $j$. Then, by linearizing the system (39)–(41) around the equilibrium point and denoting $x_i=(x_{i1},x_{i2})^T$ with $x_{i1}=\delta_i$ and $x_{i2}=\omega_i$, we have

\begin{align}
\dot{x}_i ={}& A_{ii}x_i+\sum_{j=1}^{N}A_{ij}x_j+B_iu_i, \tag{42}\\
y_i ={}& C_ix_i=x_{i1}, \tag{43}
\end{align}

where

\begin{align*}
&A_{ii}=\begin{bmatrix}0&1\\ -\frac{k_{P_i}}{\tau_{P_i}}\sum_{j=1}^{N}|\beta_{ij}|V_iV_j&-\frac{1}{\tau_{P_i}}\end{bmatrix},\\
&A_{ij}=\begin{bmatrix}0&0\\ \frac{k_{P_i}}{\tau_{P_i}}|\beta_{ij}|V_iV_j&0\end{bmatrix},\quad B_i=\begin{bmatrix}0\\ 1\end{bmatrix},\quad C_i=\begin{bmatrix}1&0\end{bmatrix}.
\end{align*}

The physical network and the communication network corresponding to the system are both shown in Figure 2, where $\beta_{ij}$ and $\alpha_{ij}$ are the elements of their respective adjacency matrices. Therefore, the large-scale system considered in this section contains $47$ subsystems, i.e., $N=47$.

In this paper, we randomly select $\tau_{P_i}$ within $[0.012,0.018]$ and $k_{P_i}$ within $[1\times 10^{-15},10\times 10^{-15}]$. Besides, we choose $V_i=110\,{\rm V}$ for all $i=1,\ldots,N$.
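As a numerical sketch, the linearized model (42)–(43) can be assembled as follows. The parameter ranges match the text, but the adjacency matrix `beta` is a hypothetical random stand-in, since the actual physical network is the one shown in Figure 2.

```python
import numpy as np

# Sketch of the linearized microgrid model (42)-(43).  The matrix `beta`
# below is a hypothetical sparse symmetric network, not the paper's Figure 2.
rng = np.random.default_rng(0)
N = 47
tau = rng.uniform(0.012, 0.018, N)      # filter coefficients tau_Pi
kP = rng.uniform(1e-15, 10e-15, N)      # frequency droop gains k_Pi
V = np.full(N, 110.0)                   # bus voltages V_i = 110 V
beta = rng.uniform(0, 1, (N, N)) * (rng.random((N, N)) < 0.1)
beta = np.triu(beta, 1)
beta = beta + beta.T                    # symmetric coupling, zero diagonal

def A_ii(i):
    coupling = sum(abs(beta[i, j]) * V[i] * V[j] for j in range(N))
    return np.array([[0.0, 1.0],
                     [-kP[i] / tau[i] * coupling, -1.0 / tau[i]]])

def A_ij(i, j):
    return np.array([[0.0, 0.0],
                     [kP[i] / tau[i] * abs(beta[i, j]) * V[i] * V[j], 0.0]])

B_i = np.array([[0.0], [1.0]])
C_i = np.array([[1.0, 0.0]])

# Stack the 2x2 blocks into the full 2N x 2N interconnected system matrix A.
A = np.zeros((2 * N, 2 * N))
for i in range(N):
    for j in range(N):
        A[2*i:2*i+2, 2*j:2*j+2] = A_ii(i) if i == j else A_ij(i, j)
```

The full state dimension $2N=94$ of `A` is exactly the dimension each local observer would need under the classical (unpartitioned) distributed observer.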

V-B Effectiveness of partition method

It is seen from Figure 2 that the physical network and the communication network among the $47$ agents are different. By calculating the similarity between these two networks, we obtain $S_{pc}=84.75\%$.

Each agent needs to estimate the whole system's state if the classical distributed observer method [12, 13, 15, 16] is used; the dimension of the observer on each agent is then $94$. In contrast, this dimension can be greatly reduced by the partitioned distributed observer developed in this paper. As shown in Figure 8, the purple bar indicates that, after partitioning, the maximum observer dimension over all agents is only $20$ (reduced by $78.72\%$), which is much lower than that of the traditional method (yellow bar). In addition, after partitioning, the minimum observer dimension is only $4$ (reduced by $95.7\%$), and the average is only $10.8083$ (reduced by $88.51\%$); both are much lower than the yellow bar. Moreover, the merging step of Algorithm 1 also effectively reduces the observer dimension on each agent: comparing the blue and purple bars shows that the maximum and average local observer dimensions after partition merging (purple bar) are smaller than those before merging (blue bar). Finally, in MATLAB simulation, the partition process of this example takes $0.017$ seconds.
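The quoted reduction percentages follow directly from the observer dimensions reported above (full dimension $94$; partitioned max $20$, min $4$, mean $10.8083$):

```python
# Reduction percentages implied by the observer dimensions quoted above.
# The dimensions themselves are the values reported for Figure 8.
full_dim = 94                              # classical observer: 2N = 94
dims = {"max": 20, "min": 4, "mean": 10.8083}
reduction = {k: (1.0 - v / full_dim) * 100.0 for k, v in dims.items()}
# reduction["max"]  ~ 78.72 %
# reduction["min"]  ~ 95.74 %
# reduction["mean"] ~ 88.50 %
```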

Figure 8: Local observer dimensions on each agent before and after partitioning ($47$-node system).
Figure 9: Physical and communication network with 100 nodes.
Figure 10: Local observer dimensions on each agent before and after partitioning ($100$-node system).

To further illustrate the effectiveness of the partition algorithm, we also consider a system with $100$ nodes, whose physical network and communication network are shown in Figure 9. In this example, the similarity between the two networks is $78.49\%$. As seen in Figure 10, compared with the traditional distributed observer, the maximum observer dimension on each agent under the proposed partitioned distributed observer is reduced by $84.00\%$, and the average dimension is reduced by $92.46\%$. In this example, the partition algorithm takes $0.07$ seconds in MATLAB.

In conclusion, the partition algorithm proposed in this paper reduces the average local observer dimension by about $90\%$ when the similarity between the physical network and the communication network is about $80\%$, and its time consumption is very small.

V-C Performance of the distributed-observer-based distributed control law

We design the centralized control law of system (42) as

\begin{align}
u_i=\sum_{j=1}^{N}K_{ij}x_j+K_{ii}x_i, \tag{44}
\end{align}

where $K_{ij}=\begin{bmatrix}\frac{k_{P_i}}{\tau_{P_i}}|\beta_{ij}|V_iV_j&0\end{bmatrix}$ and $K_{ii}$ is chosen so that $A_{ii}+B_iK_{ii}$ is a Hurwitz matrix. Subsequently, we obtain the distributed control law $\bar{u}_i$ by designing a partitioned distributed observer.
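Since each $A_{ii}$ has the companion structure of (42) with $B_i=(0,1)^T$, a gain $K_{ii}$ making $A_{ii}+B_iK_{ii}$ Hurwitz can be obtained by matching a desired characteristic polynomial. The following sketch uses illustrative values for the entries of $A_{ii}$ and the desired poles; it is not the paper's actual gain design.

```python
import numpy as np

# For A_ii = [[0, 1], [-a, -b]] and B_i = [0, 1]^T, feedback K_ii = [k1, k2]
# gives the closed-loop polynomial s^2 + (b - k2)*s + (a - k1).  Matching a
# desired polynomial s^2 + 2*zeta*wn*s + wn^2 fixes the gains below.
# The values a, b, zeta, wn are illustrative, not taken from the paper.
a, b = 500.0, 60.0                 # example magnitudes for A_ii entries
zeta, wn = 0.9, 40.0               # desired damping ratio and bandwidth

k1 = a - wn ** 2
k2 = b - 2.0 * zeta * wn
A_ii = np.array([[0.0, 1.0], [-a, -b]])
B_i = np.array([[0.0], [1.0]])
K_ii = np.array([[k1, k2]])

eigs = np.linalg.eigvals(A_ii + B_i @ K_ii)
assert np.all(eigs.real < 0)       # A_ii + B_i K_ii is Hurwitz
```

The same matching works for any $(a,b)$ because the pair is controllable, which is why a stabilizing $K_{ii}$ always exists for each subsystem.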

The observer gains $H_i$ and the weighted matrices $P_i$ can be calculated by Theorem 1 (since there are $47$ different $H_i$ and $P_i$, we do not show their specific forms). We set $\gamma=100\theta^2$ and randomly select the initial values of the closed-loop system within $[0,1]$. In addition, the initial values of the partitioned distributed observer are all chosen as $\hat{x}_{ij}^{(p)}=2\times 1_{2|\mathcal{O}_p|}$ for all $\mathcal{O}_p\in\mathscr{P}_i$ and all $i=1,\ldots,47$.

Then, the trajectories of the closed-loop system with different $\theta$ are shown in Figure 11. Subfigure a) shows the trajectories of $x_c(t)$ yielded by the centralized control law. Subfigures b), c), and d) show the trajectories of $x_r(t)$ obtained with the distributed-observer-based distributed control law. From Figures 11d) and 11c), we see that $x_r(t)$ fails to converge to zero for the high-gain parameter $\theta=2$, while it tends to zero for $\theta=7$. However, Figure 11c) shows that the dynamic performance of $x_r(t)$ with $\theta=7$ is still not as good as that of $x_c(t)$. When $\theta=15$, the trajectories in Figure 11b) are almost the same as those in Figure 11a), which indicates that the distributed-observer-based distributed control law can achieve the same performance as the centralized control law.

Figure 11: Performance of the distributed-observer-based distributed control law. a) shows the trajectories of $x_c(t)$; b), c), and d) show the trajectories of $x_r(t)$ with different $\theta$.
Figure 12: Variation of dynamic process performance with θ\theta.

In addition, Figure 11 also validates Theorem 3. In this figure, the closed-loop system states converge to a very small invariant set (the steady-state error shown in Figure 11b) is about $5\times 10^{-6}$). Moreover, the closed-loop states can converge to an arbitrarily small invariant set only if the state estimation error of the partitioned distributed observer converges to an arbitrarily small invariant set at an arbitrarily fast speed. Therefore, the excellent performance of the closed-loop system in Figure 11b) implies the excellent performance of the partitioned distributed observer.

To further illustrate the performance-recovery ability, we select $\theta\in\{2,3,5,7,9,12,15\}$ to show how the performance of the distributed-observer-based distributed control law approaches that of the centralized control law. Define the performance index of the dynamic process as

\begin{align}
I_x=\sum_{i=1}^{47}\sum_{j=1}^{2}\int_{0}^{10}\|x_{ij}(t)\|^{2}\,dt. \tag{45}
\end{align}

Let $I_{x_c}$ and $I_{x_r}$ be the performance indices of the centralized and the distributed control law, respectively. Then, Figure 12 shows how $I_{x_r}-I_{x_c}$ changes with $\theta$. Since the initial values are randomly selected, we simulated $6$ times for each $\theta$ to eliminate the influence of this randomness. As seen in Figure 12, each blue dot represents the average value of $I_{x_r}-I_{x_c}$ for the corresponding $\theta$, and the blue shadow represents the range of $I_{x_r}-I_{x_c}$ over the $6$ simulations. The results show that the control performance of the distributed-observer-based distributed control law can be made arbitrarily close to that of the centralized control law.
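In practice, the index (45) is evaluated from sampled trajectories. A minimal sketch using the trapezoidal rule follows; the decaying exponential stands in for the actual simulated closed-loop states, so the numbers are illustrative only.

```python
import numpy as np

# Numerical approximation of the index (45) from sampled trajectories.
# x[t_index, i, j] holds x_ij(t); the decaying exponential below is a
# hypothetical stand-in for the simulated closed-loop states.
t = np.linspace(0.0, 10.0, 2001)
x = np.exp(-t)[:, None, None] * np.ones((t.size, 47, 2))

def performance_index(t, x):
    # I_x = sum_i sum_j integral_0^10 |x_ij(t)|^2 dt, trapezoidal rule
    f = np.sum(x ** 2, axis=(1, 2))
    return float(np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2.0)

I_x = performance_index(t, x)
# For x_ij(t) = e^{-t}, the exact value is 94*(1 - e^{-20})/2, i.e. ~47.0.
```

Evaluating this for both $x_c(t)$ and $x_r(t)$ yields the difference $I_{x_r}-I_{x_c}$ plotted in Figure 12.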

It is worth mentioning that [2] also studied a distributed-observer-based distributed control law that can arbitrarily approximate the centralized control performance. However, it considered small-scale systems, while this paper studies large-scale systems. Furthermore, in the method developed in [2], the dimension of the local observer on each agent equals the dimension of the whole interconnected system. By contrast, in this paper, the average dimension of the observers on each agent is only $11.49\%$ of the interconnected system dimension (see Figure 8).

VI Conclusions

This paper has developed a distributed control law based on the partitioned distributed observer for large-scale interconnected linear systems. Firstly, we have designed a partitioning algorithm consisting of two steps, initialization and merging, which significantly reduces the dimension of local observers. Secondly, the partitioned distributed observer for large-scale systems has been designed; to analyze the stability of its error dynamics, this paper has proposed the two-layer Lyapunov analysis method and proved the dynamic transformation lemma of compact errors. Finally, we have designed the distributed control law based on the developed partitioned distributed observer and proved that it can arbitrarily approximate the control performance of centralized control. The simulation results have shown that the relative error between the performance of the distributed control law and that of the centralized control law is less than $1\%$. Besides, the local observer dimension can be reduced by $90\%$ when the similarity between the physical network and the communication network is about $80\%$.

References

  • [1] X. Zhang, K. Movric, M. Sebek, W. Desmet, and C. Faria, “Distributed observer and controller design for spatially interconnected systems,” IEEE Transactions on Control Systems Technology, vol. 27, no. 1, pp. 1–13, 2019.
  • [2] H. Xu, S. Liu, B. Wang, and J. Wang, “Distributed-observer-based distributed control law for affine nonlinear systems and its application on interconnected cruise control of intelligent vehicles,” IEEE Transactions on Intelligent Vehicles, vol. 8, no. 2, pp. 1874–1888, 2023.
  • [3] H. Fawzi, P. Tabuada, and S. Diggavi, “Secure estimation and control for cyber-physical systems under adversarial attacks,” IEEE Transactions on Automatic Control, vol. 59, no. 6, pp. 1454–1467, 2014.
  • [4] Y. Jiang, Y. Yang, S.-C. Tan, and S. Y. Hui, “Distributed sliding mode observer-based secondary control for DC microgrids under cyber-attacks,” IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 11, no. 1, pp. 144–154, 2021.
  • [5] Y. Li, L. Liu, W. Yu, Y. Wang, and X. Guan, “Noncooperative mobile target tracking using multiple AUVs in anchor-free environments,” IEEE Internet of Things Journal, vol. 7, no. 10, pp. 9819–9833, 2020.
  • [6] Y. Li, B. Li, W. Yu, S. Zhu, and X. Guan, “Cooperative localization based multi-AUV trajectory planning for target approaching in anchor-free environments,” IEEE Transactions on Vehicular Technology, vol. 71, no. 3, pp. 3092–3107, 2022.
  • [7] K. Liu, J. Lv, and Z. Lin, “Design of distributed observers in the presence of arbitrarily large communication delays,” IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 9, pp. 4447–4461, 2018.
  • [8] H. Xu, S. Liu, S. Zhao, and J. Wang, “Distributed control for a class of nonlinear systems based on distributed high-gain observer,” ISA Transactions, vol. 138, pp. 329–340, 2023.
  • [9] H. Xu and J. Wang, “Distributed observer-based control law with better dynamic performance based on distributed high-gain observer,” International Journal of Systems Science, vol. 51, no. 4, pp. 631–642, 2020.
  • [10] B. Huang, Y. Zou, and Z. Meng, “Distributed-observer-based Nash equilibrium seeking algorithm for quadratic games with nonlinear dynamics,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 51, no. 11, pp. 7260–7268, 2021.
  • [11] T. Meng, Z. Lin, and Y. A. Shamash, “Distributed cooperative control of battery energy storage systems in DC microgrids,” IEEE/CAA Journal of Automatica Sinica, vol. 8, no. 3, pp. 606–616, 2021.
  • [12] S. Battilotti and M. Mekhail, “Distributed estimation for nonlinear systems,” Automatica, vol. 107, pp. 562–573, 2019.
  • [13] T. Kim, C. Lee, and H. Shim, “Completely decentralized design of distributed observer for linear systems,” IEEE Transactions on Automatic Control, vol. 65, no. 11, pp. 4664–4678, 2020.
  • [14] M. Deghat, V. Ugrinovskii, I. Shames, and C. Langbort, “Detection and mitigation of biasing attacks on distributed estimation networks,” Automatica, vol. 99, pp. 369–381, 2019.
  • [15] W. Han, H. L. Trentelman, Z. Wang, and Y. Shen, “A simple approach to distributed observer design for linear systems,” IEEE Transactions on Automatic Control, vol. 64, no. 1, pp. 329–336, 2019.
  • [16] H. Xu, J. Wang, H. Wang, and B. Wang, “Distributed observers design for a class of nonlinear systems to achieve omniscience asymptotically via differential geometry,” International Journal of Robust and Nonlinear Control, vol. 31, no. 13, pp. 6288–6313, 2021.
  • [17] C. Deng, C. Wen, J. Huang, X. Zhang, and Y. Zou, “Distributed observer-based cooperative control approach for uncertain nonlinear mass under event-triggered communication,” IEEE Transactions on Automatic Control, vol. 67, no. 5, pp. 2669–2676, 2022.
  • [18] L. Wang and A. S. Morse, “A distributed observer for a time-invariant linear system,” IEEE Transactions on Automatic Control, vol. 63, no. 7, pp. 2123–2130, 2018.
  • [19] K. Liu, Y. Chen, Z. Duan, and J. Lü, “Cooperative output regulation of LTI plant via distributed observers with local measurement,” IEEE Transactions on Cybernetics, vol. 48, no. 7, pp. 2181–2190, 2018.
  • [20] G. Yang, H. Rezaee, A. Alessandri, and T. Parisini, “State estimation using a network of distributed observers with switching communication topology,” Automatica, vol. 147, p. 110690, 2023.
  • [21] G. Yang, H. Rezaee, A. Serrani, and T. Parisini, “Sensor fault-tolerant state estimation by networks of distributed observers,” IEEE Transactions on Automatic Control, vol. 67, no. 10, pp. 5348–5360, 2022.
  • [22] H. Xu, J. Wang, B. Wang, and I. Brahmia, “Distributed observer design for linear systems to achieve omniscience asymptotically under jointly connected switching networks,” IEEE Transactions on Cybernetics, vol. 52, no. 12, pp. 13 383–13 394, 2022.
  • [23] H. Xu, S. Liu, B. Wang, and J. Wang, “Distributed observer design over directed switching topologies,” IEEE Transactions on Automatic Control, vol. pp, no. 99, pp. 1–14, 2024.
  • [24] L. Wang, J. Liu, B. D. Anderson, and A. S. Morse, “Split-spectrum based distributed state estimation for linear systems,” Automatica, vol. 161, p. 111421, 2024.
  • [25] E. Baum, Z. Liu, and O. Stursberg, “Distributed state estimation of linear systems with randomly switching communication graphs,” IEEE Control Systems Letters, vol. pp, no. 99, pp. 1–1, 2024.
  • [26] P. Duan, Y. Lv, G. Wen, and M. Ogorzałek, “A framework on fully distributed state estimation and cooperative stabilization of LTI plants,” IEEE Transactions on Automatic Control, vol. pp, no. 99, pp. 1–1, 2024.
  • [27] H. Su, Y. Wu, and W. Zheng, “Distributed observer for LTI systems under stochastic impulsive sequential attacks,” Automatica, vol. 159, p. 111370, 2024.
  • [28] W. Han, H. L. Trentelman, Z. Wang, and Y. Shen, “Towards a minimal order distributed observer for linear systems,” Systems & Control Letters, vol. 114, pp. 59–65, 2018.
  • [29] X. Wang, Z. Fan, Y. Zhou, and Y. Wan, “Distributed observer design of discrete-time complex dynamical systems with long-range interactions,” Journal of the Franklin Institute, vol. pp, no. 99, pp. 1–15, 2022.
  • [30] A. Zecevic and D. Siljak, “A new approach to control design with overlapping information structure constraints,” Automatica, vol. 41, no. 2, pp. 265–272, 2005.
  • [31] Y. Ding, C. Wang, and L. Xiao, “An adaptive partitioning scheme for sleep scheduling and topology control in wireless sensor networks,” IEEE Transactions on Parallel and Distributed Systems, vol. 20, no. 9, pp. 1352–1365, 2009.
  • [32] H. Meyerhenke, P. Sanders, and C. Schulz, “Parallel graph partitioning for complex networks,” IEEE Transactions on Parallel and Distributed Systems, vol. 28, no. 9, pp. 2625–2638, 2017.
  • [33] Y. Yang, Y. Sun, Q. Wang, F. Liu, and L. Zhu, “Fast power grid partition for voltage control with balanced-depth-based community detection algorithm,” IEEE Transactions on Power Systems, vol. 37, no. 2, pp. 1612–1622, 2022.
  • [34] A. Condon and R. M. Karp, “Algorithms for graph partitioning on the planted partition model,” Random Structures and Algorithms, vol. 18, no. 2, pp. 221–232, 2001.
  • [35] L. Wang, B. Yang, Y. Chen, X. Zhang, and J. Orchard, “Improving neural-network classifiers using nearest neighbor partitioning,” IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 10, pp. 2255–2267, 2017.
  • [36] H. Ruan, H. Gao, Y. Liu, L. Wang, and J. Liu, “Distributed voltage control in active distribution network considering renewable energy: A novel network partitioning method,” IEEE Transactions on Power Systems, vol. 35, no. 6, pp. 4220–4231, 2020.
  • [37] E. Bakolas, “Distributed partitioning algorithms for locational optimization of multiagent networks in SE(2),” IEEE Transactions on Automatic Control, vol. 63, no. 1, pp. 101–116, 2018.
  • [38] C. Shah and R. Wies, “Adaptive day-ahead prediction of resilient power distribution network partitions,” in 2021 IEEE Green Technologies Conference (GreenTech), 2021, pp. 477–483.
  • [39] Y. Hong, G. Chen, and L. Bushnell, “Distributed observers design for leader-following control of multi-agent networks,” Automatica, vol. 44, no. 3, pp. 846–850, 2008.
  • [40] H. K. Khalil, “High-gain observers in feedback control: Application to permanent magnet synchronous motors,” IEEE Control Systems Magazine, vol. 37, no. 3, pp. 25–41, 2017.