
Privacy-Preserved Average Consensus Algorithms with Edge-based Additive Perturbations

Yi Xiong and Zhongkui Li This work was supported by the National Natural Science Foundation of China under grants 61973006 and U1713223.Y. Xiong and Z. Li are with College of Engineering, Peking University, Beijing 100871, China. E-mail: [email protected], [email protected]
Abstract

In this paper, we consider the privacy preservation problem in both discrete- and continuous-time average consensus algorithms with strongly connected and balanced graphs, against either internal honest-but-curious agents or external eavesdroppers. A novel algorithm is proposed, which adds edge-based perturbation signals to the process of consensus computation. Our algorithm can be divided into two phases: a coordinated scrambling phase, which is for privacy preservation, and a convergence phase. In the scrambling phase, each agent is required to generate some perturbation signals and add them to the edges leading out of it. In the convergence phase, the agents update their states following a normal updating rule. It is shown that an internal honest-but-curious agent can obtain the privacy of a target agent if and only if no other agents can communicate with the target agent. As for external eavesdroppers, it is proved that this kind of attackers can never obtain any agent’s privacy.

Index Terms:
Multi-agent systems, average consensus, privacy preservation, edge-based perturbation signals.

I Introduction

In the last decade, preserving privacy in average consensus has received increasing attention in the systems and control community. Traditional consensus algorithms, e.g., those in [1, 2, 3], require agents to directly exchange their state information for computation. This brings security issues, in the sense that the agents' initial states can easily be revealed to their out-neighbors and to possible external adversaries. The disclosure of agents' initial states may be undesirable, since they usually contain important and sensitive information. For example, the participants of an organization may want to reach a common opinion via a consensus algorithm without letting their own opinions be known by others [4]. Therefore, as far as cyber security is concerned, consensus algorithms should be secure and able to prevent internal or external adversaries from obtaining agents' privacy, i.e., their initial states.

The privacy-preserving average consensus problem has been extensively investigated by many researchers. A natural idea is to encrypt the transmitted messages so that they cannot be accessed by the public or third parties [5, 6]. By making use of the Paillier cryptosystem and designing an ingenious communication mechanism, privacy preservation was successfully embedded in pairwise communication in [7]. Ratio consensus and homomorphic cryptosystems were combined in [8] to preserve privacy. Nevertheless, cryptology-based methods share a common weakness: they usually place a heavy communication and computation burden on the network.

Many works, e.g., [9, 4, 10, 11, 12, 13, 14, 15], protect privacy by adding random noises or deterministic perturbation signals to the transmitted messages. Some of them utilized a tool called differential privacy [16, 17]. For instance, [9] and [18] added random noises drawn from a Laplace distribution to messages in order to protect the real initial states. It should be pointed out that in the existing works based on differential privacy, accurate average consensus cannot be reached. To overcome this drawback, some researchers ingeniously designed correlated noises. In [4] and [12], a sequence of zero-sum correlated noises was added to the messages to preserve privacy. Nevertheless, these correlated-noise-based works require undirected graphs and do not consider external adversaries. Instead of adding random noises, the authors in [11] inserted well-designed deterministic perturbation signals into the classic consensus algorithm. In [15], the authors proposed an interesting privacy-preserving consensus algorithm, which adds edge-based perturbations. There are also other methods to preserve privacy, such as state decomposition [19], output masking [20], hot-pluggable methods [21] and observability-based methods [22, 23, 24].

In this paper, we propose a novel privacy-preserving consensus algorithm, which achieves accurate average consensus for a broader class of network graphs and under a much relaxed privacy-preserving condition, compared to the existing literature. It is observed that in the aforementioned works such as [4, 12, 11], each agent adds noises or perturbation signals to its original state and then broadcasts the obfuscated value to all its neighboring agents. The privacy-preserving performance, nevertheless, might be compromised, since the heterogeneity of the network, in the sense that the agents usually have different numbers of neighbors, is not fully exploited. Motivated by this observation, in this paper we propose to insert well-designed perturbation signals on the communication edges of the network. This method of adding edge-based perturbation signals, different from the node-based methods in the aforementioned works, brings more degrees of design freedom.

Specifically, in this paper we consider the privacy preservation problem in both discrete- and continuous-time average consensus algorithms with strongly connected and balanced graphs, against either internal or external attackers. In the proposed edge-based method, each agent designs a different perturbation signal for every edge leading out of it and then sends the corrupted values to its neighbors. A distinct feature is that the perturbation signals do not need to be added all the time. In the discrete-time case, each agent adds perturbation signals only at the first iteration, while in the continuous-time case the perturbation signals are injected only over a finite time interval. It is shown that an internal honest-but-curious agent can obtain the privacy of a target agent if and only if no other agent except the internal attacker can communicate with the target agent. It is further proved that external eavesdroppers can never obtain any agent's privacy.

Compared to the existing privacy-preserving average consensus algorithms, the algorithm proposed in this paper has several advantages. In contrast to the differential-privacy-based methods in [9, 18], our method achieves accurate average consensus. Besides, our algorithm is valid when the graph is a balanced directed graph, while most existing works, e.g., [7, 4, 12, 19], require undirected graphs. Compared to [11], our algorithm performs better against internal adversaries. In [11], an internal attacker can obtain an agent's privacy if and only if the agent itself and all its in-neighbors are the attacker's in-neighbors. By contrast, in the current paper privacy disclosure happens only when the internal attacker is the only agent that can communicate with the target agent. Note that [15] also proposed an edge-based privacy-preserving consensus algorithm, which requires undirected graphs and considers only internal attackers. The algorithm in this paper, nevertheless, is simpler and is applicable to the case when the graph is directed and balanced and when external attackers exist. Moreover, the algorithm design and privacy analysis in this paper are based on a system-theoretic framework, significantly different from those in [15], which are from a perspective of probability analysis.

The rest of this paper is organized as follows. The privacy preservation problem is formulated in Section II. A discrete-time privacy preserving average consensus algorithm is proposed and its properties are analyzed in Section III. Extensions to the continuous-time case are considered in Section IV. Numerical simulations are conducted in Section V to demonstrate the effectiveness of our discrete-time algorithm. Finally, this paper is concluded in Section VI.

II Problem Formulation

In this paper, we consider a network of $N$ ($N\geq 3$) agents. The communication among agents is modeled by a graph $\mathcal{G}$, consisting of a node set $\mathcal{V}$ and an edge set $\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}$. If $(i,j)\in\mathcal{E}$, it means that agent $i$ can send information to agent $j$, in which case agent $i$ is said to be an in-neighbor of agent $j$ and agent $j$ an out-neighbor of agent $i$. A graph with the property that $d_{i}^{in}=d_{i}^{out}$, $\forall i\in\mathcal{V}$, is said to be balanced, where $d_{i}^{in}$ (respectively, $d_{i}^{out}$) denotes the cardinality of agent $i$'s in-neighbor set $\mathcal{N}_{i}^{in}$ (respectively, out-neighbor set $\mathcal{N}_{i}^{out}$). In such a graph, define each node's degree as $d_{i}=d_{i}^{in}=d_{i}^{out}$.

Assumption 1

The graph $\mathcal{G}$ is strongly connected and balanced.

Two types of attackers are considered in this paper. The first type is internal honest-but-curious agents: agents in the network that implement our algorithm correctly but try to infer other agents' initial states from the collected information. The second type is external eavesdroppers: attackers that exist outside the network and do not take part in the consensus computation, but are able to hack into communication links and access all information transmitted among neighboring agents.

Lemma 1 ([1, 25])

For a strongly connected and balanced graph, the following updating rule guarantees accurate average consensus:

$$x_{i}[k+1]=x_{i}[k]+\epsilon\sum_{j=1}^{N}a_{ij}(x_{j}[k]-x_{i}[k]),\quad\forall k\geq 0,\qquad(1)$$

where $\epsilon\in(0,\frac{1}{\max\{d_{1},\cdots,d_{N}\}})$ is a constant and $a_{ij}$ denotes the $(i,j)$-th entry of the adjacency matrix $\mathbf{A}$, defined as $a_{ii}=0$, $a_{ij}=1$ if $(j,i)\in\mathcal{E}$, and $a_{ij}=0$ otherwise.
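As a quick sanity check, the rule (1) can be simulated numerically. The sketch below is ours, not the paper's: it assumes an illustrative 4-node directed cycle (strongly connected and balanced) and arbitrary initial states.

```python
import numpy as np

# Minimal sketch of update rule (1) on an assumed directed 4-node cycle.
N = 4
A = np.zeros((N, N))
for i in range(N):
    A[i, (i - 1) % N] = 1.0     # a_ij = 1 iff (j, i) is an edge

d = A.sum(axis=1)               # node degrees (in-degree = out-degree here)
eps = 0.5 / d.max()             # step size in (0, 1/max{d_1,...,d_N})

x = np.array([3.0, -1.0, 7.0, 5.0])   # illustrative initial states
avg = x.mean()
for _ in range(200):
    x = x + eps * (A @ x - d * x)     # x_i += eps * sum_j a_ij (x_j - x_i)

assert np.allclose(x, avg)      # all states reach the exact average 3.5
```

Since the cycle's weight matrix $\mathbf{I}-\epsilon\mathbf{L}$ is doubly stochastic, the iteration preserves the state sum, which is why the limit is exactly the initial average.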

Lemma 2 ([1, 25])

For a strongly connected and balanced graph, the average consensus is achieved under the following continuous-time updating rule:

$$\dot{x}_{i}(t)=-c\sum_{j=1}^{N}a_{ij}(x_{i}(t)-x_{j}(t)),\quad\forall t\geq 0,\qquad(2)$$

where $c>0$ is a constant.
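The continuous-time rule (2) can be checked the same way by forward-Euler integration; the graph, the gain $c$, the step size, and the horizon below are our illustrative assumptions.

```python
import numpy as np

# Forward-Euler integration of the continuous-time rule (2) on an assumed
# balanced 4-node directed cycle.
N = 4
A = np.zeros((N, N))
for i in range(N):
    A[i, (i - 1) % N] = 1.0
d = A.sum(axis=1)
c, dt = 1.0, 0.01                        # coupling gain c > 0, Euler step

x = np.array([3.0, -1.0, 7.0, 5.0])
avg = x.mean()
for _ in range(2000):                    # integrate up to t = 20
    x = x + dt * (-c) * (d * x - A @ x)  # dx_i/dt = -c sum_j a_ij (x_i - x_j)

assert np.allclose(x, avg, atol=1e-6)
```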

The consensus algorithms (1) and (2) cannot preserve privacy. For example, it is obvious in (1) that all agents' initial state information is transmitted at the first iteration, implying that an agent's privacy will be revealed to its out-neighbors and all agents' privacy will be revealed to external eavesdroppers.

The goal of this paper is to design a novel consensus algorithm that can fulfill two tasks: i) to achieve accurate average consensus, and ii) to prevent the aforementioned two types of attackers from obtaining agents’ privacy.

III Discrete-time Privacy Preserving Updating Rule and Privacy Analysis

In this section, we consider the discrete-time consensus case.

The proposed algorithm is stated as follows. At the first iteration, each agent generates some perturbation signals and adds them to the edges leading out of it. Specifically, for each agent $i$, if $(i,j)\in\mathcal{E}$, then agent $i$ chooses a real number $p_{i}^{(j)}\in\mathbb{R}$ and transmits $y_{i}^{(j)}[0]=x_{i}[0]+p_{i}^{(j)}$ to its out-neighbor, agent $j$. All the agents then update their states as

$$x_{i}[1]=x_{i}[0]+\epsilon_{1}\sum_{j=1}^{N}a_{ij}(y_{j}^{(i)}[0]-x_{i}[0])+\epsilon_{1}p_{i}^{(i)},\qquad(3)$$

where $\epsilon_{1}\in(-\infty,0)\cup(0,\infty)$ is a constant and $p_{i}^{(i)}=-\sum_{j\in\mathcal{N}_{i}^{out}}p_{i}^{(j)}$. For $k\geq 1$, all agents transmit their true state values and implement the algorithm in Lemma 1, i.e.,

$$x_{i}[k+1]=x_{i}[k]+\epsilon_{2}\sum_{j=1}^{N}a_{ij}(x_{j}[k]-x_{i}[k]),\quad\forall k\geq 1,\qquad(4)$$

where $\epsilon_{2}\in(0,\frac{1}{\max\{d_{1},\cdots,d_{N}\}})$ is a constant.

The algorithm is summarized in Algorithm 1.

Algorithm 1 Privacy Preserving Average Consensus Algorithm.

When $k=0$:

1:  Each agent $i$ chooses $d_{i}$ numbers from the set of real numbers $\mathbb{R}$: $p_{i}^{(j_{1})},\cdots,p_{i}^{(j_{d_{i}})}$, where $j_{1},\cdots,j_{d_{i}}\in\mathcal{N}_{i}^{out}$;
2:  Each agent $i$ computes $p_{i}^{(i)}=-\sum_{j\in\mathcal{N}_{i}^{out}}p_{i}^{(j)}$;
3:  Agent $i$ transmits $y_{i}^{(j)}[0]$ to agent $j$, $\forall j\in\mathcal{N}_{i}^{out}$;
4:  Each agent $i$ updates its state according to (3);

When $k\geq 1$:

1:  All agents update according to the normal rule (4).
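The two phases of Algorithm 1 can be sketched end to end. The graph (an assumed directed 4-node cycle), the gains, and the Gaussian perturbation draws below are our own illustrative choices; the sketch checks both the sum preservation of update (3) and exact convergence under (4).

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of Algorithm 1 on an assumed strongly connected, balanced digraph.
N = 4
A = np.zeros((N, N))
for i in range(N):
    A[i, (i - 1) % N] = 1.0
E = A.T > 0                          # E[i, j] is True iff edge (i, j) exists
d = A.sum(axis=1)

x0 = np.array([3.0, -1.0, 7.0, 5.0])
avg = x0.mean()

# k = 0 (scrambling phase): edge-based perturbations with locally zero sum
eps1 = 0.7                           # any nonzero constant works
P = rng.normal(size=(N, N)) * E      # p_i^{(j)} on each out-edge (i, j)
np.fill_diagonal(P, -P.sum(axis=1))  # p_i^{(i)} = -sum_{j in N_i^out} p_i^{(j)}

y0 = (x0[:, None] + P) * E           # transmitted y_i^{(j)}[0] = x_i[0] + p_i^{(j)}
recv = (A * y0.T).sum(axis=1)        # sum_j a_ij y_j^{(i)}[0]
x = x0 + eps1 * (recv - d * x0) + eps1 * np.diag(P)   # update (3)

assert np.isclose(x.sum(), x0.sum())  # Theorem 1: state sum is preserved

# k >= 1 (convergence phase): normal rule (4)
eps2 = 0.5 / d.max()
for _ in range(300):
    x = x + eps2 * (A @ x - d * x)

assert np.allclose(x, avg)            # exact average consensus is reached
```

Note that the scrambled messages $y_{i}^{(j)}[0]$ differ per edge, while the locally zero-sum diagonal correction $\epsilon_{1}p_{i}^{(i)}$ keeps the network-wide sum intact.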
Theorem 1

By Algorithm 1, accurate average consensus is achieved.

Proof 1

Firstly, we prove that the first iteration (3) does not change the sum of agents' states. Define the perturbation matrix as $\mathbf{P}=[p_{i}^{(j)}]\in\mathbb{R}^{N\times N}$, where $p_{i}^{(j)}$ is defined to be $0$ if $(i,j)\notin\mathcal{E}$ and $i\neq j$. Since $p_{i}^{(i)}=-\sum_{j\in\mathcal{N}_{i}^{out}}p_{i}^{(j)}$, we have $\mathbf{P}\mathbf{1}_{N}=\mathbf{0}$, where $\mathbf{1}_{N}$ is an $N\times 1$ vector with unitary elements. By definition, we rewrite the first iteration (3) as

$$\mathbf{x}[1]=(\mathbf{I}-\epsilon_{1}\mathbf{L})\mathbf{x}[0]+\epsilon_{1}\mathbf{P}^{T}\mathbf{1}_{N},\qquad(5)$$

where $\mathbf{x}=[x_{1},\cdots,x_{N}]^{T}$, $\mathbf{I}$ denotes the identity matrix, and $\mathbf{L}=[l_{ij}]\in\mathbb{R}^{N\times N}$ represents the Laplacian matrix, defined as $l_{ii}=\sum_{j\neq i}a_{ij}$ and $l_{ij}=-a_{ij}$ if $i\neq j$. Since the graph is balanced, $\mathbf{L}^{T}\mathbf{1}_{N}=\mathbf{0}$, and thus

$$\mathbf{1}_{N}^{T}\mathbf{x}[1]=\mathbf{1}_{N}^{T}(\mathbf{I}-\epsilon_{1}\mathbf{L})\mathbf{x}[0]+\epsilon_{1}\mathbf{1}_{N}^{T}\mathbf{P}^{T}\mathbf{1}_{N}=\mathbf{1}_{N}^{T}\mathbf{x}[0],\qquad(6)$$

implying that the sum of agents’ states is unchanged after the first iteration.

From $k=1$, the agents implement the normal algorithm in Lemma 1. It then follows from Lemma 1 that accurate average consensus is achieved. $\blacksquare$

Remark 1

The injected perturbations are locally zero-sum, i.e., $\sum_{j\in\{i\}\cup\mathcal{N}_{i}^{out}}p_{i}^{(j)}=0$. This is vital to guaranteeing the accuracy of the final convergence value.

III-A Privacy Preservation Against Internal Honest-but-curious Agents

The privacy preservation property against internal honest-but-curious agents is analyzed first. We assume that there exists only one internal honest-but-curious agent, represented by $m$. Nevertheless, it is worth noting that our results can be easily extended to collaborating attackers by regarding them as a super node.

We should first define what information the attacker can obtain. It is assumed that every internal agent (curious or not) can store all the information transmitted to it and knows the graph topology and the parameters $\epsilon_{1}$ and $\epsilon_{2}$. Thus, for any agent $i$, we define its information set $\Phi_{i}$, the set of all information that it can access, as

$$\Phi_{i}=\{\mathcal{G};\ \epsilon_{1},\epsilon_{2};\ y_{j}^{(i)}[0],\,j\in\mathcal{N}_{i}^{in};\ x_{j}[k],\,k\geq 1,\,j\in\mathcal{N}_{i}^{in};\ x_{i}[k],\,k\geq 0;\ p_{i}^{(j)},\,j\in\mathcal{N}_{i}^{out};\ \text{the form of Algorithm 1}\}.\qquad(7)$$

Inspired by [11], we present a lemma before moving forward. Without loss of generality, we arbitrarily choose an agent in the network and label it as agent 1. As the graph is strongly connected and balanced, its in-neighbor set $\mathcal{N}_{1}^{in}$ is not empty. We arbitrarily choose an agent from $\mathcal{N}_{1}^{in}$ and label it as agent 2, and let $\mathcal{V}^{3}=\mathcal{V}\backslash\{1,2\}$ be the set of the other agents. Denote by $\mathbf{x}_{3}[k]$ the aggregated states of the agents in $\mathcal{V}^{3}$. In a compatible manner, $\mathbf{L}$, $\mathbf{A}$ and $\mathbf{P}$ are partitioned into the following subblock matrices:

$$\mathbf{L}=\begin{bmatrix}d_{1}&-1&-\mathbf{A}_{13}\\ -a_{21}&d_{2}&-\mathbf{A}_{23}\\ -\mathbf{A}_{31}&-\mathbf{A}_{32}&\mathbf{L}_{33}\end{bmatrix},\quad\mathbf{P}=\begin{bmatrix}p_{1}^{(1)}&p_{1}^{(2)}&\mathbf{P}_{1}^{(3)}\\ p_{2}^{(1)}&p_{2}^{(2)}&\mathbf{P}_{2}^{(3)}\\ \mathbf{P}_{3}^{(1)}&\mathbf{P}_{3}^{(2)}&\mathbf{P}_{3}^{(3)}\end{bmatrix}.\qquad(8)$$
Lemma 3

Consider two implementations of our algorithm. In the first implementation, the agents' initial states are $x_{1}[0],x_{2}[0],\cdots,x_{N}[0]$ and the generated perturbation signals are the $p_{i}^{(j)}$'s. In the second implementation, the agents' initial states are

$$\tilde{x}_{1}[0]\in\mathbb{R},\quad\tilde{x}_{2}[0]=x_{1}[0]+x_{2}[0]-\tilde{x}_{1}[0],\quad\mathbf{\tilde{x}}_{3}[0]=\mathbf{x}_{3}[0],\qquad(9)$$

and the perturbations are given as

$$\begin{aligned}
\tilde{p}_{1}^{(1)}&=p_{1}^{(1)}+d_{1}(\tilde{x}_{1}[0]-x_{1}[0]),\\
\tilde{p}_{1}^{(2)}&=p_{1}^{(2)}-a_{21}(\tilde{x}_{1}[0]-x_{1}[0]),\\
\mathbf{\tilde{P}}_{1}^{(3)}&=\mathbf{P}_{1}^{(3)}-\mathbf{A}_{31}^{T}(\tilde{x}_{1}[0]-x_{1}[0]),\\
\tilde{p}_{2}^{(1)}&=p_{2}^{(1)}+(\tfrac{1}{\epsilon_{1}}-1)(\tilde{x}_{2}[0]-x_{2}[0]),\\
\tilde{p}_{2}^{(2)}&=p_{2}^{(2)}+(d_{2}-\tfrac{1}{\epsilon_{1}})(\tilde{x}_{2}[0]-x_{2}[0]),\\
\mathbf{\tilde{P}}_{2}^{(3)}&=\mathbf{P}_{2}^{(3)}-\mathbf{A}_{32}^{T}(\tilde{x}_{2}[0]-x_{2}[0]),\\
\mathbf{\tilde{P}}_{3}^{(j)}&=\mathbf{P}_{3}^{(j)},\quad j\in\{1,2,3\}.
\end{aligned}\qquad(10)$$

Then, in both implementations, accurate average consensus can be achieved and the convergence values are the same, i.e., $\lim_{k\rightarrow\infty}x_{i}[k]=\lim_{k\rightarrow\infty}\tilde{x}_{i}[k]$. Moreover, if agent $i\in\mathcal{V}^{3}$, then agent $i$'s information sets in the two implementations are the same, i.e., $\Phi_{i}=\tilde{\Phi}_{i}$.

Figure 1: The graph described in Lemma 3. The dashed arrows mean that these edges may or may not exist, while the solid arrow means that this edge does exist.
Proof 2

By (10), we see that $\mathbf{\tilde{P}}\mathbf{1}_{N}=\mathbf{0}$ holds, implying that the $\tilde{p}_{i}^{(j)}$'s are a group of admissible perturbation signals. Then, by Theorem 1, accurate average consensus is guaranteed in both implementations. Moreover, from (9) we know that $\mathbf{1}_{N}^{T}\mathbf{\tilde{x}}[0]=\mathbf{1}_{N}^{T}\mathbf{x}[0]$. Therefore, $\lim_{k\rightarrow\infty}x_{i}[k]=\lim_{k\rightarrow\infty}\tilde{x}_{i}[k]$ holds.

For any $i\in\mathcal{V}^{3}$, the key point in comparing $\Phi_{i}$ and $\tilde{\Phi}_{i}$ is to focus on the transmitted messages when $k=0$. We denote the messages transmitted from agent $i$ to agent $j$ at $k=0$ in the two implementations by $y_{i}^{(j)}[0]$ and $\tilde{y}_{i}^{(j)}[0]$, respectively. Define $\delta y_{i}^{(j)}[0]=\tilde{y}_{i}^{(j)}[0]-y_{i}^{(j)}[0]$. By (9) and (10), we have

$$\delta{y}_{i}^{(j)}[0]=\begin{cases}0,&\text{if }(i,j)\in\mathcal{E}\backslash\{(2,1)\},\\ \frac{1}{\epsilon_{1}}(\tilde{x}_{2}[0]-x_{2}[0]),&\text{if }(i,j)=(2,1),\end{cases}\qquad(11)$$

implying that for any $i\in\mathcal{V}^{3}$, all the messages it transmits and receives at $k=0$ are the same in the two implementations.

Now we focus on how the two systems evolve. Define $\delta\mathbf{x}=\mathbf{\tilde{x}}-\mathbf{x}$ and $\Delta\mathbf{P}=\mathbf{\tilde{P}}-\mathbf{P}$. By (5), (9) and (10), we have

$$\begin{aligned}
\delta\mathbf{x}[1]&=\mathbf{\tilde{x}}[1]-\mathbf{x}[1]\\
&=(\mathbf{I}-\epsilon_{1}\mathbf{L})\delta\mathbf{x}[0]+\epsilon_{1}\Delta\mathbf{P}^{T}\mathbf{1}_{N}\\
&=\begin{bmatrix}1-\epsilon_{1}d_{1}&\epsilon_{1}&\epsilon_{1}\mathbf{A}_{13}\\ \epsilon_{1}a_{21}&1-\epsilon_{1}d_{2}&\epsilon_{1}\mathbf{A}_{23}\\ \epsilon_{1}\mathbf{A}_{31}&\epsilon_{1}\mathbf{A}_{32}&\mathbf{I}-\epsilon_{1}\mathbf{L}_{33}\end{bmatrix}\begin{bmatrix}\delta x_{1}[0]\\ \delta x_{2}[0]\\ \mathbf{0}\end{bmatrix}\\
&\quad+\epsilon_{1}\begin{bmatrix}d_{1}\delta x_{1}[0]&-a_{21}\delta x_{1}[0]&-\mathbf{A}_{31}^{T}\delta x_{1}[0]\\ (\frac{1}{\epsilon_{1}}-1)\delta x_{2}[0]&(d_{2}-\frac{1}{\epsilon_{1}})\delta x_{2}[0]&-\mathbf{A}_{32}^{T}\delta x_{2}[0]\\ \mathbf{0}&\mathbf{0}&\mathbf{0}\end{bmatrix}^{T}\mathbf{1}_{N}\\
&=\mathbf{0},
\end{aligned}\qquad(12)$$

implying that $\mathbf{\tilde{x}}[1]=\mathbf{x}[1]$. Hereafter, the two systems are exactly the same at every iteration, and so are the transmitted messages in the two implementations for $k\geq 1$.

The proof is then completed. \blacksquare

Remark 2

As shown in Lemma 3, the main difference between the two implementations is that $x_{1}[0]\neq\tilde{x}_{1}[0]$ and $x_{2}[0]\neq\tilde{x}_{2}[0]$. However, this difference does not influence the agents in $\mathcal{V}^{3}$ at all, as their state trajectories and their collected information are unchanged. This fact implies that the agents in $\mathcal{V}^{3}$ cannot distinguish any variation of $x_{1}[0]$ and $x_{2}[0]$: for any agent $i\in\mathcal{V}^{3}$ and a given information set $\Phi_{i}$, there exist infinitely many possible initial conditions and corresponding perturbation signals for agents 1 and 2 that lead to the same information set $\Phi_{i}$.
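Lemma 3's construction can be verified numerically. The sketch below uses an assumed 4-node directed cycle with 0-based indices, where index 0 plays the lemma's agent 1 and index 1 its in-neighbor, agent 2; the shift `delta` and all random draws are illustrative. It checks that only the message on edge (2,1) changes and that the states coincide from $k=1$ on.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed digraph with edges (1,0), (2,1), (3,2), (0,3): agent 1 sends to 0.
N = 4
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = 1.0          # a_ij = 1 iff (j, i) is an edge
E = A.T > 0
d = A.sum(axis=1)
L = np.diag(d) - A

eps1 = 0.7
x = rng.normal(size=N)
P = rng.normal(size=(N, N)) * E
np.fill_diagonal(P, -P.sum(axis=1))

# Second implementation: shift x_1[0] (index 0) and compensate per (9)-(10)
delta = 1.9                           # tilde{x}_1[0] - x_1[0], arbitrary
xt = x.copy()
xt[0] += delta                        # agent 1 shifted
xt[1] -= delta                        # agent 2 compensates, sum unchanged
Pt = P.copy()
Pt[0, :] += np.where(np.arange(N) == 0, d[0], -A[:, 0]) * delta
Pt[1, 0] += (1 / eps1 - 1) * (-delta)
Pt[1, 1] += (d[1] - 1 / eps1) * (-delta)
Pt[1, 2:] += -A[2:, 1] * (-delta)

def step0(x0, P0):                    # k = 0 messages and the update (5)
    y = (x0[:, None] + P0) * E
    return y, (np.eye(N) - eps1 * L) @ x0 + eps1 * (P0.T @ np.ones(N))

y, x1 = step0(x, P)
yt, x1t = step0(xt, Pt)

diff = yt - y                         # only the edge-(2,1) message differs
assert np.isclose(diff[1, 0], -delta / eps1)
diff[1, 0] = 0.0
assert np.allclose(diff, 0.0)
assert np.allclose(x1, x1t)           # identical states from k = 1 onward
```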

Theorem 2

For any agent $\upsilon\in\mathcal{V}$, the internal honest-but-curious agent $m$ can evaluate its initial value if and only if

$$\begin{cases}d_{\upsilon}=1,\\ (m,\upsilon),\ (\upsilon,m)\in\mathcal{E}.\end{cases}\qquad(13)$$
Proof 3

(Sufficiency) When (13) holds, agent $\upsilon$ can only communicate with the internal honest-but-curious agent $m$ (see Fig. 2). From Algorithm 1, we have

$$\begin{aligned}
y_{\upsilon}^{(m)}[0]&=x_{\upsilon}[0]+p_{\upsilon}^{(m)},\\
x_{\upsilon}[1]&=x_{\upsilon}[0]+\epsilon_{1}(y_{m}^{(\upsilon)}[0]-x_{\upsilon}[0])+\epsilon_{1}p_{\upsilon}^{(\upsilon)}.
\end{aligned}\qquad(14)$$

Combining (14) with $p_{\upsilon}^{(\upsilon)}=-p_{\upsilon}^{(m)}$, agent $m$ can compute $x_{\upsilon}[0]$ from the following equation:

$$x_{\upsilon}[0]=x_{\upsilon}[1]+\epsilon_{1}(y_{\upsilon}^{(m)}[0]-y_{m}^{(\upsilon)}[0]).\qquad(15)$$

(Necessity) We now show that agent $\upsilon$'s privacy is preserved if (13) does not hold. Note that agent $m$'s information set $\Phi_{m}$ plays an important role in the analysis: by definition, $\Phi_{m}$ is the set of agent $m$'s collected information and is the only thing agent $m$ can use to evaluate other agents' privacy. Clearly, if $\Phi_{m}$ can be exactly the same while agent $\upsilon$'s initial state takes arbitrary values, then agent $m$ cannot distinguish between these possible initial values, implying that agent $m$ cannot uniquely determine the value of $x_{\upsilon}[0]$.

Since $N\geq 3$ and the graph is strongly connected and balanced, if (13) does not hold, then at least one of the following two cases holds:

Case 1: There exists an agent $\omega\neq m$ such that $(\omega,\upsilon)\in\mathcal{E}$. We can regard agents $\omega$ and $\upsilon$ as agent 2 and agent 1 in Lemma 3, respectively. It follows from Lemma 3 that all other agents' information sets, including agent $m$'s information set $\Phi_{m}$, can stay exactly unchanged even when agent $\upsilon$'s initial state changes from $x_{\upsilon}[0]$ to $\tilde{x}_{\upsilon}[0]$, where $\tilde{x}_{\upsilon}[0]$ can be arbitrarily chosen from $\mathbb{R}$. Thus, there exist infinitely many possible initial values of agent $\upsilon$ that lead to the same information set $\Phi_{m}$. We then conclude that agent $m$ cannot reconstruct agent $\upsilon$'s initial state. In this sense, we say that agent $\upsilon$'s privacy is preserved under Algorithm 1.

Case 2: There exists an agent $\omega\neq m$ such that $(\upsilon,\omega)\in\mathcal{E}$. Following similar lines, we can prove that agent $\upsilon$'s privacy is preserved.

The proof is then completed. \blacksquare
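The reconstruction (15) is easy to check numerically. The sketch below plays the role of attacker $m$ facing a degree-one agent $\upsilon$, as in the disclosure case (13); all numerical values are illustrative.

```python
import numpy as np

# Disclosure case (13): agent v has degree 1 and exchanges messages only
# with the attacker m. Values are illustrative.
rng = np.random.default_rng(2)
eps1 = 0.7
x_v0 = 4.2                        # agent v's private initial state
p_vm = rng.normal()               # v's perturbation on its only out-edge (v, m)
y_vm = x_v0 + p_vm                # message v -> m at k = 0 (seen by m)
y_mv = rng.normal()               # message m -> v at k = 0 (known to m)

# v's first update (3): with d_v = 1, p_v^{(v)} = -p_v^{(m)}
x_v1 = x_v0 + eps1 * (y_mv - x_v0) - eps1 * p_vm

# At k = 1, v transmits its true state x_v[1] to m, which applies (15):
x_v0_hat = x_v1 + eps1 * (y_vm - y_mv)
assert np.isclose(x_v0_hat, x_v0)     # m recovers x_v[0] exactly
```

The unknown perturbation $p_{\upsilon}^{(m)}$ cancels because it enters both $y_{\upsilon}^{(m)}[0]$ and, with opposite sign, $x_{\upsilon}[1]$.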

Figure 2: The case where agent $\upsilon$'s privacy will be disclosed.
Remark 3

Theorem 2 states that an agent $i$'s privacy can still be preserved even when agent $i$ and all its neighbors are connected to the honest-but-curious agent, in which case the algorithms in [4] and [10] fail. Compared with [11], our algorithm performs better when honest-but-curious agents exist. In [11], an agent $i$'s privacy will be disclosed if and only if $(i,m)\in\mathcal{E}$ and $\mathcal{N}_{i}^{in}\subset\mathcal{N}_{m}^{in}\cup\{m\}$. By Theorem 2, our algorithm remains valid even when the above condition in [11] is violated. Therefore, our algorithm improves over those in the aforementioned related works.

Remark 4

In this paper, the added perturbation signals are edge-based, meaning that each agent adds different perturbation signals to the information transmitted to different out-neighbors. Thus, our algorithm has more degrees of freedom in perturbation signal design than those in [4, 12, 11], where node-based methods are used. It should be mentioned that [15] also proposed an edge-based algorithm, whose basic idea is to make sure that the distributions of the honest agents' initial states conditioned on their sum and the attackers' view are identical. The privacy analysis in this section is significantly different from the probability viewpoint in [15]. Moreover, the privacy-preserving algorithm in the current paper is simpler than that in [15] and is applicable to a broader class of network topologies, which are not necessarily undirected.

III-B Privacy Preservation Against External Eavesdroppers

In this subsection, we consider the other case, where external eavesdroppers exist. We should first define the information sets of these eavesdroppers. It is assumed that the graph and all information transmitted between agents are accessible to them; we assume, however, that the value of the parameter $\epsilon_{1}$ is unknown to them. Denote by $\Theta$ an external eavesdropper's information set, which is defined as

$$\Theta=\{\mathcal{G};\ \epsilon_{2};\ y_{i}^{(j)}[0],\,\forall(i,j)\in\mathcal{E};\ x_{i}[k],\,\forall i\in\mathcal{V},\,k\geq 1;\ \text{the form of Algorithm 1}\}.\qquad(16)$$

Similar to the last subsection, the following lemma is presented first.

Lemma 4

Assume that there are two implementations of our algorithm. In the first one, the agents' initial states are $\mathbf{x}[0]=(x_{1}[0],\cdots,x_{N}[0])^{T}$, the perturbation signals are the $p_{i}^{(j)}$'s, and the parameters are $\epsilon_{1}$ and $\epsilon_{2}$. The initial conditions, perturbations and parameters of the second implementation are given as

$$\begin{aligned}
\mathbf{\hat{x}}[0]&=\mathbf{x}[0]+\delta\epsilon(\mathbf{L}\mathbf{x}[0]-\mathbf{P}^{T}\mathbf{1}_{N}),\\
\hat{p}_{i}^{(j)}&=p_{i}^{(j)}-(\hat{x}_{i}[0]-x_{i}[0]),\quad\forall(i,j)\in\mathcal{E},\\
\hat{p}_{i}^{(i)}&=-\sum_{j\in\mathcal{N}_{i}^{out}}\hat{p}_{i}^{(j)},\\
\hat{\epsilon}_{1}&=\epsilon_{1}+\delta\epsilon,\quad\hat{\epsilon}_{2}=\epsilon_{2},
\end{aligned}\qquad(17)$$

where $\delta\epsilon\in\mathbb{R}$ is a constant.

Then, in both implementations, accurate average consensus can be achieved and $\lim_{k\rightarrow\infty}x_{i}[k]=\lim_{k\rightarrow\infty}\hat{x}_{i}[k]$. Moreover, the external eavesdropper's information sets in the two implementations are exactly the same, i.e., $\Theta=\hat{\Theta}$.

Proof 4

We first focus on convergence. It is easy to see that $\mathbf{\hat{P}}\mathbf{1}_{N}=\mathbf{0}$ holds. Thus, accurate average consensus can be achieved in both implementations according to Theorem 1. Moreover, noting that $\mathbf{1}_{N}^{T}\mathbf{x}[0]=\mathbf{1}_{N}^{T}\hat{\mathbf{x}}[0]$, we have $\lim_{k\rightarrow\infty}x_{i}[k]=\lim_{k\rightarrow\infty}\hat{x}_{i}[k]$.

Consider the transmitted messages at the first iteration ($k=0$). It is not difficult to verify from (17) that $\hat{y}_{i}^{(j)}[0]=y_{i}^{(j)}[0]$, $\forall(i,j)\in\mathcal{E}$, i.e., all transmitted messages at the first iteration are exactly the same in the two implementations. Next, consider the case $k\geq 1$. Define $\delta\mathbf{x}=\mathbf{\hat{x}}-\mathbf{x}$ and $\Delta\mathbf{P}=\mathbf{\hat{P}}-\mathbf{P}$. By definition and in light of (5), we have

$$\begin{aligned}
\delta\mathbf{x}[1]&=\mathbf{\hat{x}}[1]-\mathbf{x}[1]\\
&=(\mathbf{I}-\hat{\epsilon}_{1}\mathbf{L})\mathbf{\hat{x}}[0]+\hat{\epsilon}_{1}\mathbf{\hat{P}}^{T}\mathbf{1}_{N}-(\mathbf{I}-\epsilon_{1}\mathbf{L})\mathbf{x}[0]-\epsilon_{1}\mathbf{P}^{T}\mathbf{1}_{N}\\
&=(\mathbf{I}-\epsilon_{1}\mathbf{L}-\delta\epsilon\mathbf{L})(\mathbf{x}[0]+\delta\mathbf{x}[0])+(\epsilon_{1}+\delta\epsilon)(\mathbf{P}+\Delta\mathbf{P})^{T}\mathbf{1}_{N}-(\mathbf{I}-\epsilon_{1}\mathbf{L})\mathbf{x}[0]-\epsilon_{1}\mathbf{P}^{T}\mathbf{1}_{N}\\
&=\delta\mathbf{x}[0]-\delta\epsilon(\mathbf{L}\mathbf{x}[0]-\mathbf{P}^{T}\mathbf{1}_{N})-\hat{\epsilon}_{1}(\mathbf{L}\delta\mathbf{x}[0]-\Delta\mathbf{P}^{T}\mathbf{1}_{N}).
\end{aligned}\qquad(18)$$

It follows from (17) that

$$\mathbf{L}\delta\mathbf{x}[0]-\Delta\mathbf{P}^{T}\mathbf{1}_{N}=\mathbf{L}\delta\mathbf{x}[0]-\begin{bmatrix}d_{1}\delta x_{1}[0]&-a_{12}\delta x_{2}[0]&\cdots&-a_{1N}\delta x_{N}[0]\\ -a_{21}\delta x_{1}[0]&d_{2}\delta x_{2}[0]&\cdots&-a_{2N}\delta x_{N}[0]\\ \vdots&\vdots&&\vdots\\ -a_{N1}\delta x_{1}[0]&-a_{N2}\delta x_{2}[0]&\cdots&d_{N}\delta x_{N}[0]\end{bmatrix}\mathbf{1}_{N}=\mathbf{0}.\qquad(19)$$

Substituting (17) and (19) into (18), we have

$$\delta\mathbf{x}[1]=\mathbf{0},\qquad(20)$$

implying that $\hat{x}_{i}[1]=x_{i}[1]$, $\forall i\in\mathcal{V}$. Hence, from $k=1$ on, the two systems are exactly the same at every iteration, and for $k\geq 1$ the transmitted signals in the two implementations are also identical.

It then readily follows that $\Theta=\hat{\Theta}$ holds. $\blacksquare$
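As an aside, the cancellation (19) can be checked numerically. The sketch below is illustrative only: it assumes a 4-agent balanced digraph with unit weights and builds the perturbation offsets edge-wise as in (17), i.e., every agent shifts each out-edge perturbation by the negative of its state offset and its self-term by its degree times the offset.

```python
import numpy as np

# Assumed balanced digraph on 4 agents with unit weights; (i, j) means i -> j.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (2, 0)]
N = 4
A = np.zeros((N, N))
for i, j in edges:
    A[j, i] = 1.0               # a_{ji} > 0 iff i is an in-neighbor of j
d = A.sum(axis=1)               # in-degrees (= out-degrees on a balanced graph)
L = np.diag(d) - A              # graph Laplacian

rng = np.random.default_rng(0)
dx = rng.normal(size=N)         # an arbitrary initial-state offset delta x[0]

# Perturbation offset Delta P: agent i shifts every out-edge perturbation by
# -dx[i] and its self-term p_i^{(i)} by d_i * dx[i], mirroring (17).
dP = np.zeros((N, N))
for i, j in edges:
    dP[i, j] = -dx[i]
for i in range(N):
    dP[i, i] = d[i] * dx[i]

# The cancellation (19): L dx - Delta P^T 1 = 0, regardless of dx.
residual = L @ dx - dP.T @ np.ones(N)
print(np.allclose(residual, 0.0))   # True
```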

Theorem 3

If all perturbation signals $p_{i}^{(j)}$ are randomly chosen from $\mathbb{R}$, then under Algorithm 1 every agent's privacy is preserved, despite the existence of external eavesdroppers.

Proof 5

The proof is similar to that of Theorem 2. Lemma 4 above shows that even when $\mathbf{x}[0]$ is changed to $\mathbf{x}[0]+\delta\epsilon(\mathbf{L}\mathbf{x}[0]-\mathbf{P}^{T}\mathbf{1}_{N})$, the information set of the external attacker can stay unchanged. Let $\eta=[\eta_{1},\cdots,\eta_{N}]^{T}=\mathbf{L}\mathbf{x}[0]-\mathbf{P}^{T}\mathbf{1}_{N}$. Then for any agent $i$, even if its initial state changes from $x_{i}[0]$ to $x_{i}[0]+\delta\epsilon\cdot\eta_{i}$, it is possible that $\Theta$ is not influenced at all. Since $\delta\epsilon\in\mathbb{R}$ is arbitrary, if $\eta_{i}\neq 0$, then $x_{i}[0]+\delta\epsilon\cdot\eta_{i}$ can take any value in $\mathbb{R}$. In this case, there are infinitely many possible initial values of agent $i$, so the external eavesdropper cannot reconstruct the value of $x_{i}[0]$ and agent $i$'s privacy is preserved. Note that when all $p_{i}^{(j)}$s are randomly chosen from $\mathbb{R}$, the probability that $\eta_{i}=0$ is zero. The proof is then completed. $\blacksquare$
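This indistinguishability argument can be reproduced numerically. The sketch below makes several illustrative assumptions: a 4-agent balanced digraph with unit weights, and, as in (5), a single scrambled update with $\epsilon_{1}$ at $k=0$ followed by normal updates with $\epsilon_{2}$.

```python
import numpy as np

# Assumed 4-agent balanced digraph with unit weights; (i, j) means i -> j.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (2, 0)]
N = 4
A = np.zeros((N, N))
for i, j in edges:
    A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
I, ones = np.eye(N), np.ones(N)

rng = np.random.default_rng(1)
x0 = rng.normal(size=N) * 5
eps1, eps2, K = 1.0, 0.1, 2000

# Random admissible edge perturbations: every row of P sums to zero.
P = np.zeros((N, N))
for i, j in edges:
    P[i, j] = rng.normal()
P[np.arange(N), np.arange(N)] = -P.sum(axis=1)

def run(x0, eps1, P):
    """Scrambled update at k = 0 (as in (5)), normal updates afterwards."""
    msgs0 = {(i, j): x0[i] + P[i, j] for i, j in edges}   # y_i^{(j)}[0]
    x = (I - eps1 * L) @ x0 + eps1 * P.T @ ones
    traj = [x.copy()]
    for _ in range(K):
        x = (I - eps2 * L) @ x
        traj.append(x.copy())
    return msgs0, np.array(traj)

# Second implementation: the construction used in the proof of Theorem 3.
d_eps = 0.8
x0h = x0 + d_eps * (L @ x0 - P.T @ ones)
Ph = P.copy()
for i, j in edges:
    Ph[i, j] -= x0h[i] - x0[i]                  # shift out-edge perturbations
Ph[np.arange(N), np.arange(N)] = 0.0
Ph[np.arange(N), np.arange(N)] = -Ph.sum(axis=1)

msgs, traj = run(x0, eps1, P)
msgs_h, traj_h = run(x0h, eps1 + d_eps, Ph)

print(max(abs(msgs[e] - msgs_h[e]) for e in edges))  # ~0: same k = 0 messages
print(np.max(np.abs(traj - traj_h)))                 # ~0: same states, k >= 1
print(np.max(np.abs(traj[-1] - x0.mean())))          # ~0: exact average
```

Although the initial states of the two runs differ, everything the eavesdropper can observe coincides.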

Remark 5

It should be noted that many works based on injecting correlated noise, e.g., [4] and [12], are vulnerable to external eavesdroppers. By contrast, our algorithm remains valid in their presence. It is also worth noting that the works relying on cryptography, e.g., those in [6] and [7], can also handle this type of attacker well, since it is hard for the attackers to decrypt the encrypted messages. However, those methods incur substantial computation and communication costs, whereas the algorithm in this paper is lightweight.

IV Continuous-time Privacy Preserving Updating Rule and Privacy Analysis

This section considers continuous-time privacy-preserving average consensus, the continuous-time counterpart of the discrete-time algorithm in Section III.

Our continuous-time privacy-preserving average consensus algorithm is divided into two phases, one for obfuscation and the other for reaching consensus. Assume that all agents know a specific instant $t_{0}\in(0,\infty)$. When $t\in[0,t_{0}]$, each agent generates some perturbation signals and adds them to the edges leading out of it. Specifically, for each agent $i$, if $(i,j)\in\mathcal{E}$, then agent $i$ generates a bounded perturbation signal $p_{i}^{(j)}(t):[0,t_{0}]\rightarrow\mathbb{R}$ and transmits $y_{i}^{(j)}(t)=x_{i}(t)+p_{i}^{(j)}(t)$ to its out-neighbor, agent $j$. The perturbation signals are subject to some constraints, under which we refer to them as admissible perturbation signals.

Definition 1

We say a group of perturbation signals $p_{i}^{(j)}(t):[0,t_{0}]\rightarrow\mathbb{R}$ generated by agent $i$ is admissible if: i) $p_{i}^{(j)}(t)\equiv 0$ whenever $j\notin\mathcal{N}_{i}^{out}\cup\{i\}$; ii) all $p_{i}^{(j)}(t)$s are bounded and continuous on the interval $[0,t_{0}]$; iii) $p_{i}^{(i)}(t)=-\sum_{j\in\mathcal{N}_{i}^{out}}p_{i}^{(j)}(t)$, $\forall t\in[0,t_{0}]$, $\forall i\in\mathcal{V}$.
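One convenient way to generate such a group of signals is sketched below; the finite Fourier sine sums are an assumption made here for illustration (any bounded continuous family would do equally well).

```python
import numpy as np

def admissible_perturbations(out_neighbors, t0, rng, n_modes=3):
    """Generate one agent's admissible perturbation signals (Definition 1).

    Each out-edge signal is a random finite Fourier sine sum, hence bounded
    and continuous on [0, t0] (condition ii); the self-signal is the negated
    sum of the out-edge signals (condition iii). The Fourier form is just a
    convenient choice, not one mandated by the paper.
    """
    coeffs = {j: rng.normal(size=n_modes) for j in out_neighbors}

    def basis(c, t):
        return sum(c[m] * np.sin((m + 1) * np.pi * t / t0)
                   for m in range(len(c)))

    signals = {j: (lambda t, c=c: basis(c, t)) for j, c in coeffs.items()}
    signals["self"] = lambda t: -sum(basis(c, t) for c in coeffs.values())
    return signals

rng = np.random.default_rng(2)
p = admissible_perturbations(out_neighbors=[2, 3], t0=1.0, rng=rng)
ts = np.linspace(0.0, 1.0, 11)
residual = max(abs(p["self"](t) + p[2](t) + p[3](t)) for t in ts)
print(residual)   # ~0: condition iii) holds at every sampled instant
```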

When t[0,t0]t\in[0,t_{0}], all the agents update their states as

\dot{x}_{i}(t)=-c_{1}\sum_{j=1}^{N}a_{ij}(x_{i}(t)-y_{j}^{(i)}(t))+c_{1}p_{i}^{(i)}(t),
\tag{21}

where $c_{1}\in(-\infty,0)\cup(0,\infty)$ is a nonzero constant known to all agents. Then, when $t>t_{0}$, each agent updates following the normal rule in Lemma 2, i.e.,

\dot{x}_{i}(t)=-c_{2}\sum_{j=1}^{N}a_{ij}(x_{i}(t)-x_{j}(t)),\ \forall t>t_{0},
\tag{22}

where $c_{2}>0$ is a constant.

We summarize the proposed algorithm in Algorithm 2.

Algorithm 2 Privacy-Preserving Average Consensus Algorithm.

1: Each agent $i$ generates a group of admissible perturbation signals $p_{i}^{(j)}(t)$s, $\forall j\in\mathcal{N}_{i}^{out}\cup\{i\}$;

When $t\leq t_{0}$:

2: Agent $i$ transmits $y_{i}^{(j)}(t)=x_{i}(t)+p_{i}^{(j)}(t)$ to agent $j$, $\forall j\in\mathcal{N}_{i}^{out}$;
3: Each agent $i$ updates its state according to (21);

When $t>t_{0}$:

4: All agents update via the normal rule (22).
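The two phases above can be sketched with a forward-Euler discretization; the graph, weights, step size, and sinusoidal perturbations below are illustrative assumptions.

```python
import numpy as np

# Euler-discretized sketch of Algorithm 2 on an assumed 4-agent balanced
# digraph with unit weights; (i, j) means i -> j.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (2, 0)]
N = 4
A = np.zeros((N, N))
for i, j in edges:
    A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
ones = np.ones(N)

rng = np.random.default_rng(3)
t0, c1, c2, dt = 1.0, 0.8, 1.0, 1e-3
x = rng.normal(size=N) * 5
avg = x.mean()

# Admissible perturbations: one random sinusoid per out-edge, self-signal
# chosen so every row of P(t) sums to zero (Definition 1 iii).
amp = {e: rng.normal() for e in edges}
def P_of(t):
    P = np.zeros((N, N))
    for (i, j), a in amp.items():
        P[i, j] = a * np.sin(2 * np.pi * t / t0)
    P[np.arange(N), np.arange(N)] = -P.sum(axis=1)
    return P

# Scrambling phase on [0, t0]: rule (21) in the matrix form (24).
for k in range(int(t0 / dt)):
    x = x + dt * c1 * (-L @ x + P_of(k * dt).T @ ones)
print(abs(x.sum() / N - avg))          # ~0: the state sum is invariant

# Convergence phase: the normal rule (22).
for _ in range(40000):
    x = x + dt * c2 * (-L @ x)
print(np.max(np.abs(x - avg)))         # ~0: accurate average consensus
```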
Theorem 4

Accurate average consensus can be achieved by Algorithm 2.

Proof 6

Firstly, we prove that the coordinated scrambling phase does not change the sum of the agents' states. Define the perturbation matrix as $\mathbf{P}(t)=[p_{i}^{(j)}(t)]$. From the fact that $p_{i}^{(i)}(t)=-\sum_{j\in\mathcal{N}_{i}^{out}}p_{i}^{(j)}(t)$, we have

\mathbf{P}(t)\mathbf{1}_{N}=\mathbf{0},\ \forall t\in[0,t_{0}].
\tag{23}

The updating rule (21) can be written in the matrix form as

\mathbf{\dot{x}}(t)=-c_{1}\mathbf{L}\mathbf{x}(t)+c_{1}\mathbf{P}^{T}(t)\mathbf{1}_{N},\ \forall t\in[0,t_{0}],
\tag{24}

where $\mathbf{x}(t)=[x_{1}(t),\cdots,x_{N}(t)]^{T}$. It then follows from the properties of balanced graphs together with (23) and (24) that

\begin{aligned}
\mathbf{1}_{N}^{T}\dot{\mathbf{x}}(t)&=-c_{1}\mathbf{1}_{N}^{T}\mathbf{L}\mathbf{x}(t)+c_{1}\mathbf{1}_{N}^{T}\mathbf{P}^{T}(t)\mathbf{1}_{N}\\
&=0,\ \forall t\in[0,t_{0}],
\end{aligned}
\tag{25}

implying that the sum of all agents' states stays invariant. Hence $\mathbf{1}_{N}^{T}\mathbf{x}(t_{0})=\mathbf{1}_{N}^{T}\mathbf{x}(0)$.

From $t_{0}$ onward, the normal average consensus updating rule is applied, so it follows from Lemma 2 that accurate average consensus is achieved. $\blacksquare$
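The two facts used in this proof, the zero row sums (23) of an admissible $\mathbf{P}(t)$ and the zero column sums of a balanced graph's Laplacian, can be checked directly; the digraph below is an illustrative assumption.

```python
import numpy as np

# Assumed 4-agent balanced digraph with unit weights; (i, j) means i -> j.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (2, 0)]
N = 4
A = np.zeros((N, N))
for i, j in edges:
    A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

rng = np.random.default_rng(4)
P = np.zeros((N, N))
for i, j in edges:
    P[i, j] = rng.normal()
P[np.arange(N), np.arange(N)] = -P.sum(axis=1)   # condition iii)

x, c1 = rng.normal(size=N), 1.3
xdot = c1 * (-L @ x + P.T @ np.ones(N))          # right-hand side of (24)
print(np.abs(P @ np.ones(N)).max())              # ~0: (23)
print(np.abs(np.ones(N) @ L).max())              # ~0: balanced graph
print(abs(xdot.sum()))                           # ~0: (25) for any state x
```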

IV-A Privacy Preservation Against Honest-but-curious Agents

In this subsection, we evaluate how Algorithm 2 performs when there exists an internal honest-but-curious agent in the network. Similar to the discrete-time case, we consider the situation where there is only one honest-but-curious agent $m$. We assume that every internal agent (curious or not) can store all the information transmitted to it, knows the graph topology, and has access to the parameters $c_{1}$ and $c_{2}$. Thus, agent $i$'s information set $\mathcal{S}_{i}$ is

\begin{aligned}
\mathcal{S}_{i}=\{&\mathcal{G};c_{1},c_{2},t_{0};y_{j}^{(i)}(t),j\in\mathcal{N}_{i}^{in},t\leq t_{0};x_{j}(t),j\in\mathcal{N}_{i}^{in},t>t_{0};\\
&x_{i}(t),t\geq 0;p_{i}^{(j)}(t),j\in\mathcal{N}_{i}^{out},t\leq t_{0};\text{ the form of Algorithm 2}\}.
\end{aligned}
\tag{26}

Before moving forward, we partition the perturbation matrix $\mathbf{P}(t)$ into block submatrices as

\mathbf{P}(t)=\begin{bmatrix}p_{1}^{(1)}(t)&p_{1}^{(2)}(t)&\mathbf{P}_{1}^{(3)}(t)\\ p_{2}^{(1)}(t)&p_{2}^{(2)}(t)&\mathbf{P}_{2}^{(3)}(t)\\ \mathbf{P}_{3}^{(1)}(t)&\mathbf{P}_{3}^{(2)}(t)&\mathbf{P}_{3}^{(3)}(t)\end{bmatrix},
\tag{27}

and we then present the following lemma.

Lemma 5

Consider two implementations of Algorithm 2. In the first implementation, the agents' initial states are $x_{1}(0),x_{2}(0),\mathbf{x}_{3}(0)$ and the generated perturbation signals are $p_{i}^{(j)}(t)$s. In the second implementation, the agents' initial states and system parameters are

\begin{aligned}
&\tilde{x}_{1}(0)\in\mathbb{R},\ \tilde{x}_{2}(0)=x_{1}(0)+x_{2}(0)-\tilde{x}_{1}(0),\\
&\mathbf{\tilde{x}}_{3}(0)=\mathbf{x}_{3}(0),\\
&\tilde{c}_{1}=c_{1},\ \tilde{c}_{2}=c_{2},
\end{aligned}
\tag{28}

and perturbation signals are given as

\begin{aligned}
\tilde{p}_{1}^{(2)}(t)&=p_{1}^{(2)}(t)-a_{21}s(t),\\
\mathbf{\tilde{P}}_{1}^{(3)}(t)&=\mathbf{P}_{1}^{(3)}(t)-\mathbf{A}_{31}^{T}s(t),\\
\tilde{p}_{1}^{(1)}(t)&=p_{1}^{(1)}(t)+d_{1}s(t),\\
\tilde{p}_{2}^{(1)}(t)&=p_{2}^{(1)}(t)-\frac{e^{-c_{1}t_{0}}(\tilde{x}_{1}(0)-x_{1}(0))}{1-e^{-c_{1}t_{0}}},\\
\mathbf{\tilde{P}}_{2}^{(3)}(t)&=\mathbf{P}_{2}^{(3)}(t)+\mathbf{A}_{32}^{T}s(t),\\
\tilde{p}_{2}^{(2)}(t)&=p_{2}^{(2)}(t)-(d_{2}-1)s(t)+\frac{e^{-c_{1}t_{0}}(\tilde{x}_{1}(0)-x_{1}(0))}{1-e^{-c_{1}t_{0}}},\\
\mathbf{\tilde{P}}_{3}^{(j)}(t)&=\mathbf{P}_{3}^{(j)}(t),\ j=1,2,3,
\end{aligned}
\tag{29}

where s(t)s(t) is given as

s(t)=\frac{e^{-c_{1}t}-e^{-c_{1}t_{0}}}{1-e^{-c_{1}t_{0}}}\,(\tilde{x}_{1}(0)-x_{1}(0)).
\tag{30}

Then, accurate average consensus can be achieved in both implementations and $\lim_{t\rightarrow\infty}x_{i}(t)=\lim_{t\rightarrow\infty}\tilde{x}_{i}(t)$. Moreover, for any agent $i\in\mathcal{V}^{3}$, agent $i$'s information sets in the two implementations are the same, i.e., $\mathcal{S}_{i}=\tilde{\mathcal{S}}_{i}$.

Proof 7

Note that $s(t)$ is bounded and that for any $i\in\mathcal{V}$, $\tilde{p}_{i}^{(i)}(t)=-\sum_{j\in\mathcal{N}_{i}^{out}}\tilde{p}_{i}^{(j)}(t)$ holds, so the perturbation signals of the second implementation are admissible. By Theorem 4, accurate average consensus can be achieved in both implementations. Moreover, (28) readily gives $\mathbf{1}_{N}^{T}\tilde{\mathbf{x}}(0)=\mathbf{1}_{N}^{T}\mathbf{x}(0)$, meaning that $\lim_{t\rightarrow\infty}x_{i}(t)=\lim_{t\rightarrow\infty}\tilde{x}_{i}(t)$.

Define $\delta x_{i}(t)=\tilde{x}_{i}(t)-x_{i}(t)$ and $\delta p_{i}^{(j)}(t)=\tilde{p}_{i}^{(j)}(t)-p_{i}^{(j)}(t)$, and let $\Delta\mathbf{x}(t)=[\delta x_{1}(t),\cdots,\delta x_{N}(t)]^{T}$, $\Delta\mathbf{P}(t)=[\delta p_{i}^{(j)}(t)]$. By (24), we have

\Delta\mathbf{\dot{x}}(t)=-c_{1}\mathbf{L}\Delta\mathbf{x}(t)+c_{1}\Delta\mathbf{P}^{T}(t)\mathbf{1}_{N},\ \forall t\in[0,t_{0}].
\tag{31}

Substituting (28) and (29) into (31), we obtain that Δ𝐱(t)\Delta\mathbf{x}(t) satisfies the following equation:

\begin{aligned}
\begin{bmatrix}\delta\dot{x}_{1}(t)\\ \delta\dot{x}_{2}(t)\\ \Delta\mathbf{\dot{x}}_{3}(t)\end{bmatrix}=\ &c_{1}\begin{bmatrix}-d_{1}&1&\mathbf{A}_{13}\\ a_{21}&-d_{2}&\mathbf{A}_{23}\\ \mathbf{A}_{31}&\mathbf{A}_{32}&-\mathbf{L}_{33}\end{bmatrix}\begin{bmatrix}\delta x_{1}(t)\\ \delta x_{2}(t)\\ \Delta\mathbf{x}_{3}(t)\end{bmatrix}\\
&+c_{1}\begin{bmatrix}d_{1}s(t)-\frac{e^{-c_{1}t_{0}}\delta x_{1}(0)}{1-e^{-c_{1}t_{0}}}\\ -(a_{21}+d_{2}-1)s(t)+\frac{e^{-c_{1}t_{0}}\delta x_{1}(0)}{1-e^{-c_{1}t_{0}}}\\ -\mathbf{A}_{31}s(t)+\mathbf{A}_{32}s(t)\end{bmatrix}
\end{aligned}
\tag{32}

with the following initial condition:

\delta x_{1}(0)\in\mathbb{R},\ \delta x_{2}(0)=-\delta x_{1}(0),\ \Delta\mathbf{x}_{3}(0)=\mathbf{0}.
\tag{33}

It is not difficult to verify that

\begin{aligned}
\delta x_{1}(t)&=s(t),\ \delta x_{2}(t)=-s(t),\\
\Delta\mathbf{x}_{3}(t)&=\mathbf{0},\quad\forall t\in[0,t_{0}],
\end{aligned}
\tag{34}

is the solution to (32) with the initial condition (33).

Now we are ready to compare $\mathcal{S}_{i}$ with $\tilde{\mathcal{S}}_{i}$ ($i\in\mathcal{V}^{3}$). The key point is to analyze the difference between $\tilde{y}_{i}^{(j)}(t)$ and $y_{i}^{(j)}(t)$. Note that when $t\in[0,t_{0}]$, the message transmitted from agent $i$ to agent $j$ is $y_{i}^{(j)}(t)=x_{i}(t)+p_{i}^{(j)}(t)$, $\forall(i,j)\in\mathcal{E}$. It follows from (29) and (34) that $\forall t\in[0,t_{0}]$,

\tilde{y}_{i}^{(j)}(t)=\begin{cases}y_{i}^{(j)}(t),&\mbox{if }(i,j)\in\mathcal{E}\backslash(2,1),\\ y_{2}^{(1)}(t)-\frac{e^{-c_{1}t}\delta x_{1}(0)}{1-e^{-c_{1}t_{0}}},&\mbox{if }(i,j)=(2,1).\end{cases}
\tag{35}

Equation (35) shows that when $t\in[0,t_{0}]$, only the messages transmitted from agent 2 to agent 1 differ between the two implementations. From $t=t_{0}$ onward, no perturbation signals are injected, so all agents send their real states to their out-neighbors. Note that from (34) we have

\delta x_{1}(t_{0})=s(t_{0})=0,\ \delta x_{2}(t_{0})=-s(t_{0})=0,
\tag{36}

implying that $\mathbf{\tilde{x}}(t_{0})=\mathbf{x}(t_{0})$. It is then obvious that $\mathbf{\tilde{x}}(t)=\mathbf{x}(t)$ holds for all $t\geq t_{0}$, so all the messages transmitted in the two scenarios are exactly the same from $t=t_{0}$ onward.

To sum up, when $t\in[0,t_{0}]$, all messages in the network except the one transmitted from agent 2 to agent 1 are identical in the two implementations, and when $t>t_{0}$ there is no difference between the transmitted messages at all. Thus, $\mathcal{S}_{i}=\tilde{\mathcal{S}}_{i}$, $\forall i\in\mathcal{V}^{3}$. $\blacksquare$

Now we are ready to analyze Algorithm 2’s privacy property against internal attackers.

Theorem 5

Let $\upsilon$ be an agent in the network. The internal honest-but-curious agent $m$ can reconstruct $\upsilon$'s initial value if and only if the following condition holds:

d_{\upsilon}=1,\quad (m,\upsilon),(\upsilon,m)\in\mathcal{E}.
\tag{37}
Proof 8

(Sufficiency) If (37) holds, then the honest-but-curious agent $m$ is the only agent that communicates with agent $\upsilon$ (see Fig. 2). For $t\leq t_{0}$, we have

\dot{x}_{\upsilon}(t)=c_{1}(y_{m}^{(\upsilon)}(t)-x_{\upsilon}(t))+c_{1}p_{\upsilon}^{(\upsilon)}(t),
\tag{38}

and

y_{\upsilon}^{(m)}(t)=x_{\upsilon}(t)+p_{\upsilon}^{(m)}(t)=x_{\upsilon}(t)-p_{\upsilon}^{(\upsilon)}(t).
\tag{39}

When $t>t_{0}$, no perturbations are added, so agent $m$ knows $x_{\upsilon}(t_{0})$. Combining (38) and (39), the honest-but-curious agent $m$ can compute $x_{\upsilon}(0)$ by

x_{\upsilon}(0)=x_{\upsilon}(t_{0})-c_{1}\int_{0}^{t_{0}}[y_{m}^{(\upsilon)}(\tau)-y_{\upsilon}^{(m)}(\tau)]d\tau.
\tag{40}

(Necessity) We prove that if (37) does not hold, then the attacker $m$ cannot reconstruct agent $\upsilon$'s initial value. The proof is similar to that of Theorem 2. Since $\mathcal{S}_{m}$ contains all the information that agent $m$ can access and is the only resource it can use to estimate other agents' initial states, it plays a central role in the privacy analysis. If $\mathcal{S}_{m}$ can be exactly the same while agent $\upsilon$'s initial state is changed, it is impossible for agent $m$ to uniquely determine the value of $x_{\upsilon}(0)$.

Since $N\geq 3$ and in light of Assumption 1, when (37) does not hold, at least one of the following two cases is true.

Case 1: There exists another agent $\omega$ such that $\omega\in\mathcal{N}_{\upsilon}^{in}$, i.e., $(\omega,\upsilon)\in\mathcal{E}$. In this case, let agent $\omega$ play the role of agent 2 and agent $\upsilon$ that of agent 1 in Lemma 5. From Lemma 5, we know that all other agents' information sets, including agent $m$'s information set $\mathcal{S}_{m}$, can stay exactly unchanged even when agent $\upsilon$'s initial state changes from $x_{\upsilon}(0)$ to $\tilde{x}_{\upsilon}(0)$, where $\tilde{x}_{\upsilon}(0)$ can be arbitrarily chosen in $\mathbb{R}$. Thus, agent $m$ cannot reconstruct agent $\upsilon$'s initial state, implying that agent $\upsilon$'s privacy is preserved.

Case 2: There exists another agent $\omega$ such that $\omega\in\mathcal{N}_{\upsilon}^{out}$, i.e., $(\upsilon,\omega)\in\mathcal{E}$. It can be similarly shown that Algorithm 2 preserves agent $\upsilon$'s privacy.

The proof is then completed. \blacksquare
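The reconstruction formula (40) can be illustrated numerically. The sketch below assumes a 3-agent balanced digraph in which the victim satisfies (37), simulates the scrambling phase by forward Euler, and applies a discretized version of (40); the telescoping structure of the update makes the discretized reconstruction exact.

```python
import numpy as np

# Assumed 3-agent balanced digraph: the victim v = 0 communicates only with
# the curious agent m = 1, who also communicates with agent 2, so condition
# (37) holds for v.
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
N, v, m = 3, 0, 1
A = np.zeros((N, N))
for i, j in edges:
    A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

rng = np.random.default_rng(5)
t0, c1, dt = 1.0, 0.7, 1e-3
x = rng.normal(size=N) * 4
x0_v = x[v]

amp = {e: rng.normal() for e in edges}
def P_of(t):
    P = np.zeros((N, N))
    for (i, j), a in amp.items():
        P[i, j] = a * np.sin(2 * np.pi * t / t0)
    P[np.arange(N), np.arange(N)] = -P.sum(axis=1)
    return P

y_mv, y_vm = [], []              # messages on the two v <-> m edges
for k in range(int(t0 / dt)):
    P = P_of(k * dt)
    y_mv.append(x[m] + P[m, v])  # y_m^{(v)}: sent by m, stored by m
    y_vm.append(x[v] + P[v, m])  # y_v^{(m)}: received and stored by m
    x = x + dt * c1 * (-L @ x + P.T @ np.ones(N))   # Euler step of (24)

# After t0 no perturbation is added, so m learns x_v(t0) = x[v]; the
# discretized formula (40) then recovers the victim's initial state.
x0_hat = x[v] - c1 * dt * sum(a - b for a, b in zip(y_mv, y_vm))
print(abs(x0_hat - x0_v))        # ~0: the attack succeeds under (37)
```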

Remark 6

The work [11] also considered the privacy-preserving average consensus problem in the continuous-time case. As we have already pointed out, our algorithm performs better in terms of privacy preservation against honest-but-curious agents. It is also worth mentioning that another feature of Algorithm 2 is that the perturbations are only added in a finite time interval $[0,t_{0}]$, while they are added at every moment in [11].

IV-B Privacy Preservation Against External Eavesdroppers

Algorithm 2's privacy-preserving performance against external eavesdroppers is another issue worthy of concern. The information sets of these eavesdroppers need to be defined first. Assume that the graph and all information transmitted among neighboring agents are accessible to them, but the parameter $c_{1}$ is not. For an external eavesdropper, its information set is given by

\begin{aligned}
\mathcal{M}=\{&\mathcal{G};t_{0},c_{2};y_{i}^{(j)}(t),\forall(i,j)\in\mathcal{E},\forall t\in[0,t_{0}];\\
&x_{i}(t),\forall i\in\mathcal{V},\forall t>t_{0};\text{ the form of Algorithm 2}\}.
\end{aligned}
\tag{41}

Before giving the main result, we first present a lemma.

Lemma 6

Consider two implementations of Algorithm 2. In the first implementation, the initial condition is $\mathbf{x}(0)=(x_{1}(0),\cdots,x_{N}(0))^{T}$, the perturbation signals are $p_{i}^{(j)}(t)$s, and the parameters are $c_{1}$ and $c_{2}$. In the second implementation, the initial condition is

\mathbf{\hat{x}}(0)=\mathbf{x}(0)+\delta c\cdot\mathbf{\xi},
\tag{42}

and perturbation signals and parameters are

\begin{aligned}
\hat{p}_{i}^{(j)}(t)&=p_{i}^{(j)}(t)-h_{i}(t),\ \forall(i,j)\in\mathcal{E},\\
\hat{p}_{i}^{(i)}(t)&=-\sum_{j\in\mathcal{N}_{i}^{out}}\hat{p}_{i}^{(j)}(t)=p_{i}^{(i)}(t)+d_{i}h_{i}(t),\\
\hat{c}_{1}&=c_{1}+\delta c,\ \hat{c}_{2}=c_{2},
\end{aligned}
\tag{43}

where $\delta c\in\mathbb{R}$ is a constant, $\mathbf{\xi}\in\mathbb{R}^{N}$ is a vector, and $h_{i}(t)$ denotes the $i$-th entry of $\mathbf{h}(t)$, which are given as

\mathbf{\xi}=-\frac{1}{c_{1}}\Big[e^{-c_{1}\mathbf{L}t_{0}}\mathbf{x}(0)+c_{1}\int_{0}^{t_{0}}e^{-c_{1}\mathbf{L}(t_{0}-\tau)}\mathbf{P}^{T}(\tau)\mathbf{1}_{N}d\tau-\mathbf{x}(0)\Big],
\tag{44}

\mathbf{h}(t)=\delta c\cdot\mathbf{\xi}+\frac{\delta c}{c_{1}}\Big[e^{-c_{1}\mathbf{L}t}\mathbf{x}(0)+c_{1}\int_{0}^{t}e^{-c_{1}\mathbf{L}(t-\tau)}\mathbf{P}^{T}(\tau)\mathbf{1}_{N}d\tau-\mathbf{x}(0)\Big].
\tag{45}

Then, in both implementations, accurate average consensus can be achieved and $\lim_{t\rightarrow\infty}x_{i}(t)=\lim_{t\rightarrow\infty}\hat{x}_{i}(t)$. Moreover, the external eavesdropper's information sets are exactly the same, i.e., $\mathcal{M}=\hat{\mathcal{M}}$.

Proof 9

From (43), it is obvious that for each $i\in\mathcal{V}$, the $\hat{p}_{i}^{(j)}(t)$s are also admissible perturbation signals. By Theorem 4, average consensus can be achieved in both implementations. Moreover, it can be verified from (44) that $\mathbf{\xi}=-\frac{1}{c_{1}}(\mathbf{x}(t_{0})-\mathbf{x}(0))$. Thus, we have

\begin{aligned}
\mathbf{1}_{N}^{T}\mathbf{\hat{x}}(0)&=\mathbf{1}_{N}^{T}\mathbf{x}(0)+\delta c\cdot\mathbf{1}_{N}^{T}\mathbf{\xi}\\
&=\mathbf{1}_{N}^{T}\mathbf{x}(0)-\frac{\delta c}{c_{1}}\mathbf{1}_{N}^{T}[\mathbf{x}(t_{0})-\mathbf{x}(0)]\\
&=\mathbf{1}_{N}^{T}\mathbf{x}(0),
\end{aligned}
\tag{46}

where we have used the fact that $\mathbf{1}_{N}^{T}\mathbf{x}(t_{0})=\mathbf{1}_{N}^{T}\mathbf{x}(0)$. Equation (46) implies that $\lim_{t\rightarrow\infty}x_{i}(t)=\lim_{t\rightarrow\infty}\hat{x}_{i}(t)$.

Now we prove that $\mathcal{M}=\hat{\mathcal{M}}$. The proof consists of two steps.

Step 1: We show that when $t\in[0,t_{0}]$, all the transmitted information is identical in the two implementations. Let $\Delta\mathbf{x}(t)=\mathbf{\hat{x}}(t)-\mathbf{x}(t)$ and $\Delta\mathbf{P}(t)=\mathbf{\hat{P}}(t)-\mathbf{P}(t)$. Then the dynamic equation of $\Delta\mathbf{x}(t)$ is

\begin{aligned}
\Delta\mathbf{\dot{x}}(t)=&-\hat{c}_{1}\mathbf{L}\mathbf{\hat{x}}(t)+\hat{c}_{1}\mathbf{\hat{P}}^{T}(t)\mathbf{1}_{N}+c_{1}\mathbf{L}\mathbf{x}(t)-c_{1}\mathbf{P}^{T}(t)\mathbf{1}_{N}\\
=&\ \hat{c}_{1}(-\mathbf{L}\Delta\mathbf{x}(t)+\Delta\mathbf{P}^{T}(t)\mathbf{1}_{N})+\delta c(-\mathbf{L}\mathbf{x}(t)+\mathbf{P}^{T}(t)\mathbf{1}_{N}),
\end{aligned}
\tag{47}

with the initial condition

\Delta\mathbf{x}(0)=\delta c\cdot\mathbf{\xi}.
\tag{48}

Note that from (45) we have $\mathbf{h}(t)=\frac{\delta c}{c_{1}}[\mathbf{x}(t)-\mathbf{x}(t_{0})]$. By solving the linear ordinary differential equation (47) with the initial condition (48), we obtain

\Delta\mathbf{x}(t)=\mathbf{h}(t).
\tag{49}

Combining (43) and (49), we obtain that for every $(i,j)\in\mathcal{E}$,

\begin{aligned}
\hat{y}_{i}^{(j)}(t)&=\hat{x}_{i}(t)+\hat{p}_{i}^{(j)}(t)\\
&=x_{i}(t)+\delta x_{i}(t)+p_{i}^{(j)}(t)-h_{i}(t)\\
&=y_{i}^{(j)}(t),\ \forall t\in[0,t_{0}],
\end{aligned}
\tag{50}

which means that when $t\in[0,t_{0}]$, all the transmitted information is identical in the two implementations.

Step 2: We analyze the case where $t>t_{0}$. From (45) and (49) we have $\Delta\mathbf{x}(t_{0})=\mathbf{0}$, i.e., $\mathbf{\hat{x}}(t_{0})=\mathbf{x}(t_{0})$. Then, since $\hat{c}_{2}=c_{2}$, the two systems are exactly the same at every moment from $t=t_{0}$ onward, so there is no difference between the transmitted information in the two implementations. $\blacksquare$
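Lemma 6 can be illustrated numerically. The sketch below assumes a 4-agent balanced digraph with unit weights and sinusoidal perturbation signals; if $\mathbf{\xi}$ and $\mathbf{h}(t)$ are computed from the forward-Euler discrete trajectory itself, the identity $\Delta\mathbf{x}(t)=\mathbf{h}(t)$ also holds exactly in the discretized model, so the eavesdropped messages coincide to machine precision.

```python
import numpy as np

# Assumed 4-agent balanced digraph with unit weights; (i, j) means i -> j.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (2, 0)]
N = 4
A = np.zeros((N, N))
for i, j in edges:
    A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
ones = np.ones(N)

rng = np.random.default_rng(6)
t0, c1, dt = 1.0, 0.9, 1e-3
K = int(t0 / dt)

amp = {e: rng.normal() for e in edges}
def P_of(t):
    P = np.zeros((N, N))
    for (i, j), a in amp.items():
        P[i, j] = a * np.sin(2 * np.pi * t / t0)
    P[np.arange(N), np.arange(N)] = -P.sum(axis=1)
    return P

# First implementation: scrambling phase (24) with parameter c1.
xs = [rng.normal(size=N) * 3]
for k in range(K):
    xs.append(xs[-1] + dt * c1 * (-L @ xs[-1] + P_of(k * dt).T @ ones))
xs = np.array(xs)

# Second implementation: parameter c1 + dc, shifted start (42), and signals
# shifted by h(t) = (dc/c1)(x(t) - x(t0)) as in (43).
dc = 0.4
xi = -(xs[K] - xs[0]) / c1
h = (dc / c1) * (xs - xs[K])
x2 = xs[0] + dc * xi
gap = 0.0
for k in range(K):
    P, Ph = P_of(k * dt), P_of(k * dt)
    for i, j in edges:
        Ph[i, j] -= h[k, i]
    Ph[np.arange(N), np.arange(N)] = 0.0
    Ph[np.arange(N), np.arange(N)] = -Ph.sum(axis=1)
    gap = max(gap, max(abs((x2[i] + Ph[i, j]) - (xs[k, i] + P[i, j]))
                       for i, j in edges))
    x2 = x2 + dt * (c1 + dc) * (-L @ x2 + Ph.T @ ones)

print(gap)                          # ~0: identical eavesdropped messages
print(np.max(np.abs(x2 - xs[K])))   # ~0: identical states from t0 on
```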

Theorem 6

Let $\mathcal{F}$ be the set of all bounded and continuous functions defined on $[0,t_{0}]$, i.e., $\mathcal{F}=\{p(t):[0,t_{0}]\rightarrow\mathbb{R}\mid p(t)\text{ is bounded and continuous}\}$. If all agents randomly choose the perturbation signals $p_{i}^{(j)}(t)$ ($j\in\mathcal{N}_{i}^{out}$) from $\mathcal{F}$, Algorithm 2 preserves all agents' privacy against external eavesdroppers, in the sense that the external eavesdroppers cannot reconstruct any agent's initial value.

Proof 10

This theorem can be proved along similar lines to the proof of Theorem 3. According to Lemma 6, the external attacker's information set $\mathcal{M}$ remains unchanged even when $\mathbf{x}(0)$ is changed to $\mathbf{x}(0)+\delta c\cdot\mathbf{\xi}$. Let $\mathbf{\xi}=[\xi_{1},\cdots,\xi_{N}]^{T}$. Then, for any agent $i$, the external eavesdroppers cannot distinguish $x_{i}(0)$ from $x_{i}(0)+\delta c\cdot\xi_{i}$. If $\xi_{i}\neq 0$, since $\delta c$ can be arbitrary, $x_{i}(0)+\delta c\cdot\xi_{i}$ can also take any value, implying that there are infinitely many possible initial values of agent $i$ and the external eavesdropper cannot reconstruct the value of $x_{i}(0)$. As for the case $\xi_{i}=0$, when all perturbation signals are randomly chosen from $\mathcal{F}$, the probability that $\xi_{i}=0$ is zero. This completes the proof. $\blacksquare$
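The two ingredients of this proof, namely that $\mathbf{\xi}$ has zero sum (so the shifted initial condition (42) keeps the consensus value) and that its entries are generically nonzero, can be checked numerically; the graph and signals below are illustrative assumptions.

```python
import numpy as np

# Assumed 4-agent balanced digraph with unit weights; (i, j) means i -> j.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (2, 0)]
N = 4
A = np.zeros((N, N))
for i, j in edges:
    A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

rng = np.random.default_rng(7)
t0, c1, dt = 1.0, 0.6, 1e-3
amp = {e: rng.normal() for e in edges}
def P_of(t):
    P = np.zeros((N, N))
    for (i, j), a in amp.items():
        P[i, j] = a * np.sin(2 * np.pi * t / t0)
    P[np.arange(N), np.arange(N)] = -P.sum(axis=1)
    return P

# Euler simulation of the scrambling phase (24) up to t0.
x0 = rng.normal(size=N)
x = x0.copy()
for k in range(int(t0 / dt)):
    x = x + dt * c1 * (-L @ x + P_of(k * dt).T @ np.ones(N))

xi = -(x - x0) / c1          # xi = -(x(t0) - x(0))/c1, cf. the proof of Lemma 6
print(abs(xi.sum()))         # ~0: 1^T xi = 0, matching (46)
print(xi)                    # generically every entry is nonzero
```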

Remark 7

Although external eavesdroppers are also considered in [14] and [19], the ideas in those two papers and the current paper are different. In [14], preventing an eavesdropper from obtaining the agents' privacy is achieved by letting the generated perturbation signals satisfy a certain global condition that is invisible to the eavesdropper. In [19], it is achieved by assuming that the coupling weights between agents are not accessible to the eavesdropper. By contrast, based on the assumption that the system parameter $c_{1}$ is not known to the eavesdropper, we accomplish this goal by exploiting the extra degree of freedom in choosing $c_{1}$.

V Numerical Simulation

In this section, we focus on the discrete-time case and use a numerical simulation to demonstrate Algorithm 1's effectiveness. Consider the network shown in Fig. 3.

Refer to caption
Figure 3: The communication topology.

First, we show that Algorithm 1 can deal with internal honest-but-curious agents. Assume that agent 5 is honest-but-curious. If the algorithm in [11] were applied, the privacy of agents 3 and 4 would be revealed to agent 5. By contrast, our algorithm preserves their privacy according to Theorem 2. To show this, we consider two implementations of our algorithm. In the first one, all $p_{i}^{(j)}$s are randomly generated, and the initial conditions and parameters are given as

\begin{aligned}
&x_{1}[0]=7,\ x_{2}[0]=3,\ x_{3}[0]=1,\ x_{4}[0]=-2,\ x_{5}[0]=-15,\\
&\epsilon_{1}=1,\ \epsilon_{2}=\tfrac{1}{3}-0.01.
\end{aligned}
\tag{51}

Let agent 4 (respectively, agent 3) play the role of agent 2 (respectively, agent 1) in Lemma 3. Thus, in the second implementation, the initial conditions, parameters, and perturbation signals are given as

\begin{aligned}
&\tilde{x}_{1}[0]=x_{1}[0],\ \tilde{x}_{2}[0]=x_{2}[0],\ \tilde{x}_{5}[0]=x_{5}[0],\\
&\tilde{x}_{3}[0]=x_{3}[0]+9=10,\ \tilde{x}_{4}[0]=x_{4}[0]-9=-11,\\
&\tilde{\epsilon}_{1}=\epsilon_{1},\ \tilde{\epsilon}_{2}=\epsilon_{2},\\
&\tilde{p}_{3}^{(3)}=p_{3}^{(3)}+18,\ \tilde{p}_{3}^{(4)}=p_{3}^{(4)}-9,\ \tilde{p}_{3}^{(5)}=p_{3}^{(5)}-9,\\
&\tilde{p}_{4}^{(3)}=p_{4}^{(3)},\ \tilde{p}_{4}^{(4)}=p_{4}^{(4)}-9,\ \tilde{p}_{4}^{(5)}=p_{4}^{(5)}+9,\\
&\tilde{p}_{i}^{(j)}=p_{i}^{(j)},\ \text{otherwise}.
\end{aligned}
\tag{52}

Fig. 4(a) shows the state trajectories in the two implementations, from which we observe that accurate average consensus is achieved in both cases. Moreover, $\tilde{x}_{i}[k]=x_{i}[k]$, $\forall k\geq 1$, holds. Besides, as depicted in Fig. 4(b), $\tilde{y}_{i}^{(j)}[0]=y_{i}^{(j)}[0]$ holds whenever $(i,j)\neq(4,3)$, implying that even though agents 3 and 4's initial states are changed, all transmitted messages at the first iteration are identical in the two implementations, except the one transmitted from agent 4 to agent 3. Combining the two figures, we see that $\tilde{\Phi}_{5}=\Phi_{5}$, which implies that agent 5 cannot obtain agent 3's or agent 4's initial state.

Refer to caption
(a) Trajectories of the agents’ states in the implementation (51) and the implementation (52).
Refer to caption
(b) Transmitted information at k=0k=0 in the implementations (51) and (52): yi(j)[0]y_{i}^{(j)}[0]s vs y~i(j)[0]\tilde{y}_{i}^{(j)}[0]s.

Now we show how our algorithm deals with external eavesdroppers. According to Theorem 3, this type of attacker cannot obtain any agent's privacy. To show this, another implementation of our algorithm is considered, which is given as follows (here let $\delta\epsilon=0.8$):

\begin{aligned}
\mathbf{\hat{x}}[0]&=\mathbf{x}[0]+\delta\epsilon(\mathbf{L}\mathbf{x}[0]-\mathbf{P}^{T}\mathbf{1}_{N})\\
&=\begin{bmatrix}25.39&-2.48&18.64&2.67&-50.22\end{bmatrix}^{T},\\
\hat{\epsilon}_{1}&=\epsilon_{1}+\delta\epsilon=1.8,\ \hat{\epsilon}_{2}=\epsilon_{2},\\
\hat{p}_{i}^{(j)}&=p_{i}^{(j)}-(\hat{x}_{i}[0]-x_{i}[0]),\ \forall(i,j)\in\mathcal{E},\\
\hat{p}_{i}^{(i)}&=-\sum_{j\in\mathcal{N}_{i}^{out}}\hat{p}_{i}^{(j)}.
\end{aligned}
\tag{53}

As depicted in Fig. 4(c), in the implementation (53), accurate average consensus is also achieved and $\hat{x}_{i}[k]=x_{i}[k]$, $\forall k\geq 1$, holds. Moreover, from Fig. 4(d) we can see that $\hat{y}_{i}^{(j)}[0]=y_{i}^{(j)}[0]$ always holds, implying that all transmitted messages at $k=0$ are identical in the implementations (51) and (53). Combining the two figures, it is easy to see that the external attacker's information sets $\Theta$ and $\hat{\Theta}$ are exactly the same, even though all agents' initial state values are changed. Thus, all agents' privacy is preserved.

Refer to caption
(c) Trajectories of the agents’ states in the implementation (51) and the implementation (53).
Refer to caption
(d) Transmitted information at k=0k=0 in the implementations (51) and (53): yi(j)[0]y_{i}^{(j)}[0]s vs y^i(j)[0]\hat{y}_{i}^{(j)}[0]s.

VI Conclusions

This paper has proposed a novel algorithm to achieve average consensus with privacy preservation over strongly connected and balanced graphs in both the discrete- and continuous-time cases. By adding edge-based perturbation signals to the transmitted information, the agents' privacy is guaranteed not to be disclosed to either internal honest-but-curious agents or external eavesdroppers. Our algorithm guarantees accurate average consensus and offers better privacy preservation than the existing related works. The proposed privacy-preserving idea is expected to be applicable to distributed optimization problems, which is left for future study.

References

  • [1] R. Olfati-Saber and R. M. Murray, “Consensus problems in networks of agents with switching topology and time-delays,” IEEE Transactions on Automatic Control, vol. 49, no. 9, pp. 1520–1533, 2004.
  • [2] L. Xiao and S. Boyd, “Fast linear iterations for distributed averaging,” Systems & Control Letters, vol. 53, no. 1, pp. 65–78, 2004.
  • [3] Z. Li, Z. Duan, and G. Chen, “Consensus of discrete-time linear multi-agent systems with observer-type protocols,” Discrete and Continuous Dynamical Systems-Series B, vol. 16, no. 2, pp. 489–505, 2011.
  • [4] Y. Mo and R. M. Murray, “Privacy preserving average consensus,” IEEE Transactions on Automatic Control, vol. 62, no. 2, pp. 753–765, 2016.
  • [5] H. Gao, C. Zhang, M. Ahmad, and Y. Wang, “Privacy-preserving average consensus on directed graphs using push-sum,” in 2018 IEEE Conference on Communications and Network Security (CNS), pp. 1–9, IEEE, 2018.
  • [6] T. Yin, Y. Lv, and W. Yu, “Accurate privacy preserving average consensus,” IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 67, no. 4, pp. 690–694, 2019.
  • [7] M. Ruan, H. Gao, and Y. Wang, “Secure and privacy-preserving consensus,” IEEE Transactions on Automatic Control, vol. 64, no. 10, pp. 4035–4049, 2019.
  • [8] C. N. Hadjicostis and A. D. Dominguez-Garcia, “Privacy-preserving distributed averaging via homomorphically encrypted ratio consensus,” IEEE Transactions on Automatic Control, 2020.
  • [9] Z. Huang, S. Mitra, and G. Dullerud, “Differentially private iterative synchronous consensus,” in Proceedings of the 2012 ACM Workshop on Privacy in the Electronic Society, pp. 81–90, 2012.
  • [10] N. E. Manitara and C. N. Hadjicostis, “Privacy-preserving asymptotic average consensus,” in 2013 European Control Conference (ECC), pp. 760–765, IEEE, 2013.
  • [11] N. Rezazadeh and S. S. Kia, “Privacy preservation in a continuous-time static average consensus algorithm over directed graphs,” in 2018 Annual American Control Conference (ACC), pp. 5890–5895, IEEE, 2018.
  • [12] J. He, L. Cai, C. Zhao, P. Cheng, and X. Guan, “Privacy-preserving average consensus: privacy analysis and algorithm design,” IEEE Transactions on Signal and Information Processing over Networks, vol. 5, no. 1, pp. 127–138, 2018.
  • [13] J. He, L. Cai, and X. Guan, “Preserving data-privacy with added noises: Optimal estimation and privacy analysis,” IEEE Transactions on Information Theory, vol. 64, no. 8, pp. 5677–5690, 2018.
  • [14] N. Rezazadeh and S. S. Kia, “Privacy preservation in continuous-time average consensus algorithm via deterministic additive perturbation signals,” arXiv preprint arXiv:1904.05286, 2019.
  • [15] N. Gupta, J. Katz, and N. Chopra, “Information-theoretic privacy in distributed average consensus,” arXiv preprint arXiv:1809.01794, 2018.
  • [16] J. Cortés, G. E. Dullerud, S. Han, J. Le Ny, S. Mitra, and G. J. Pappas, “Differential privacy in control and network systems,” in 2016 IEEE 55th Conference on Decision and Control (CDC), pp. 4252–4272, IEEE, 2016.
  • [17] C. Dwork, A. Roth, et al., “The algorithmic foundations of differential privacy,” Foundations and Trends® in Theoretical Computer Science, vol. 9, no. 3–4, pp. 211–407, 2014.
  • [18] E. Nozari, P. Tallapragada, and J. Cortés, “Differentially private average consensus: Obstructions, trade-offs, and optimal algorithm design,” Automatica, vol. 81, pp. 221–231, 2017.
  • [19] Y. Wang, “Privacy-preserving average consensus via state decomposition,” IEEE Transactions on Automatic Control, vol. 64, no. 11, pp. 4711–4716, 2019.
  • [20] C. Altafini, “A system-theoretic framework for privacy preservation in continuous-time multiagent dynamics,” Automatica, vol. 122, p. 109253, 2020.
  • [21] I. L. D. Ridgley, R. A. Freeman, and K. M. Lynch, “Private and hot-pluggable distributed averaging,” IEEE Control Systems Letters, vol. 4, no. 4, pp. 988–993, 2020.
  • [22] S. S. Kia, J. Cortés, and S. Martínez, “Dynamic average consensus under limited control authority and privacy requirements,” International Journal of Robust and Nonlinear Control, vol. 25, no. 13, pp. 1941–1966, 2015.
  • [23] S. Pequito, S. Kar, S. Sundaram, and A. P. Aguiar, “Design of communication networks for distributed computation with privacy guarantees,” in 53rd IEEE Conference on Decision and Control, pp. 1370–1376, IEEE, 2014.
  • [24] A. Alaeddini, K. Morgansen, and M. Mesbahi, “Adaptive communication networks with privacy guarantees,” in 2017 American Control Conference (ACC), pp. 4460–4465, IEEE, 2017.
  • [25] Z. Li and Z. Duan, Cooperative Control of Multi-agent Systems: A Consensus Region Approach. CRC Press, Boca Raton, FL, 2014.