
Deterministic Privacy Preservation in
Static Average Consensus Problem

Amir-Salar Esteki
Dept. of Mechanical and Aerospace Eng.
University of California, Irvine
Irvine, CA 92617
[email protected]

Solmaz S. Kia
Dept. of Mechanical and Aerospace Eng.
University of California, Irvine
Irvine, CA 92617
[email protected]
Abstract

In this paper we consider the problem of privacy preservation in the static average consensus problem. This problem is normally solved by augmenting the popular first-order Laplacian-based algorithm with privacy preservation mechanisms. These mechanisms, however, come with computational overhead, may need coordination among the agents to choose their parameters, and alter the transient response of the algorithm. In this paper we show that an alternative iterative algorithm, proposed in the literature in the context of the dynamic average consensus problem, is intrinsically privacy preserving and can be used as a privacy preserving algorithm that yields the same performance behavior as the well-known Laplacian consensus algorithm, but without the overheads that come with the existing privacy preservation methods.

1 Introduction

This paper considers the problem of privacy preservation in the in-network average consensus (PrivCon) problem, where the objective is to enable a group of $\mathcal{V}=\{1,\cdots,N\}$ communicating agents interacting over a strongly connected and weight-balanced digraph $\mathcal{G}(\mathcal{V},\mathcal{E},\mathsf{A})$ (for the graph-theoretic definitions used hereafter see Section 2), see Fig. 1, to calculate $\mathsf{r}^{\text{avg}}=\frac{1}{N}\sum_{j=1}^{N}\mathsf{r}^{j}$ using local interactions but without disclosing their local reference value $\mathsf{r}^{i}$, $i\in\mathcal{V}$, to each other. Privacy preservation for average consensus is normally formalized as concealing the reference value of each agent from a malicious agent, which is defined as follows.

Definition 1 (Malicious agent).

A malicious agent in the network is an agent that wants to obtain the reference values of the other agents without perturbing/interrupting the execution of the average consensus algorithm. The knowledge set of this malicious agent consists of (a) the network topology $\mathcal{G}(\mathcal{V},\mathcal{E},\mathsf{A})$, (b) its own local states and reference input, (c) the signals received from its out-neighbors, and (d) the knowledge that the agreement state $x^{i}(k)$ of each agent $i\in\mathcal{V}$ converges asymptotically to $\mathsf{r}^{\text{avg}}$. $\Box$

The solutions to the PrivCon problem in the literature mainly center around designing privacy preservation augmentation mechanisms for the popular average consensus algorithm

$x^{i}(k+1)=x^{i}(k)-\Delta\sum\nolimits_{j=1}^{N}\mathsf{a}_{ij}\big(x^{i}(k)-x^{j}(k)\big),$ (1)
$x^{i}(0)=\mathsf{r}^{i},\quad i\in\mathcal{V},$

which, with a proper choice of stepsize $\Delta$, guarantees $x^{i}\to\mathsf{r}^{\text{avg}}$, $i\in\mathcal{V}$, as $k\to\infty$ [1]. In a weighted digraph, $\mathsf{a}_{ij}>0$ if there is an edge from agent $i$ to agent $j$, i.e., agent $i$ can obtain information from agent $j$; otherwise $\mathsf{a}_{ij}=0$. Thus, algorithm (1) requires each agent $i\in\mathcal{V}$ to share its reference value $\mathsf{r}^{i}$ with its in-neighbors (agents that receive messages from $i$) in the first step of the algorithm, resulting in a trivial breach of privacy. Standard observability analysis [2] also shows that when the adjacency matrix $\mathsf{A}=[\mathsf{a}_{ij}]$ of the network topology is known to an agent $i\in\mathcal{V}$, agent $i$ can obtain the initial condition of algorithm (1) and, consequently, the reference values of all or some of the other agents of the network; see [3] for details.
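As a concrete illustration, the following minimal Python sketch (assuming a hypothetical five-agent ring digraph and illustrative reference values; none of these numbers come from the analysis above) implements algorithm (1) and shows that each agent's very first transmitted message is its reference value itself:

```python
import numpy as np

# Hypothetical 5-agent directed ring (strongly connected, weight-balanced);
# a_ij > 0 means agent i obtains information from agent j.
N = 5
A = np.roll(np.eye(N), 1, axis=1)        # agent i listens to agent i+1 (mod N)
L = np.diag(A.sum(axis=1)) - A           # out-degree Laplacian

r = np.array([188.3, 592.5, 2567.2, 1793.3, 2567.2])   # private references
Delta = 0.2                              # stepsize inside (0, Delta_bar)

x = r.copy()                             # x(0) = r: privacy is lost at k = 0
for _ in range(400):
    x = x - Delta * (L @ x)              # x(k+1) = x(k) - Delta * L x(k)

print(x.round(4))                        # every entry approaches r.mean()
```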

A popular approach explored to induce privacy preservation in algorithm (1) is encryption, where either a trusted third party generates the public key [4] or the agents share their keys, in which case the network is restricted to a point-to-point or tagged undirected communication framework [5, 6]. Also, [5] requires extra communication channels for key generation purposes. Moreover, in [5, 6] the network should be connected at the first step of communication, and therefore these schemes are not robust to possible switching in the topology. Differential privacy [7, 8, 9] and additive obfuscation noise methods [10, 11, 12] are other popular techniques to conceal the exact reference value of the agents, but the malicious agent can obtain an estimate of the reference value using a stochastic estimator. In addition, the final convergence point is perturbed in [7, 8, 9, 10] and the convergence rate is altered in [11]. Moreover, these classes of privacy preserving measures cause a noisy transient response for the algorithm, which results in excessive energy expenditure if agents use local controls to track the consensus trajectory as a reference input. Lastly, the guarantees established in all the methods discussed above hold only for connected undirected graphs. Interested readers can also find privacy preservation methods for the continuous-time implementation of algorithm (1) in [13] and [14].

Figure 1: The network in plot (a) is an example of a strongly connected and weight-balanced digraph, while the network in plot (b) is an example of a connected undirected graph. An edge $i\xrightarrow{\mathsf{a}_{ij}}j$ from an agent $i$ to an agent $j$ means that agent $i$ can obtain information from agent $j$; $\mathsf{a}_{ij}>0$ is the corresponding adjacency matrix element.

In this paper, we focus on a solution to the PrivCon problem that does not perturb the final convergence value. We address the PrivCon problem by proposing the use of an alternative algorithm (see algorithm (2)) that yields transient and convergence performance similar to that of algorithm (1). We analyze the privacy preservation of this algorithm carefully and show that it intrinsically yields exactly the same privacy preservation guarantees as [10, 11, 12] in terms of which agents' privacy can be preserved. More precisely, we show that, similar to the results in [10, 11, 12], the privacy of any agent that has at least one out-neighbor that is not an out-neighbor of the malicious agent is preserved. In comparison to privacy preservation by means of the additive obfuscation noise methods [10, 11, 12], however, algorithm (2) offers a deterministic and stronger sense of privacy preservation, i.e., it meets the privacy preservation definition given below.

Definition 2 (Privacy preservation).

The privacy of an agent is preserved from a malicious agent if the malicious agent cannot obtain any estimate of the reference value of the agent. $\Box$

To show that the privacy of an agent $i$ is preserved in the sense of Definition 2, we show that there exist other values of $\mathsf{r}^{i}$, arbitrarily different from the true one, that give the same trajectory for the information the malicious agent receives from its out-neighbors. Therefore, the malicious agent cannot distinguish between the true reference value and the arbitrary alternatives. This, in essence, is a stronger notion of privacy preservation than that of $\epsilon$-differential privacy [15] or that of [11], where even though the malicious agent cannot obtain the exact reference value of the agents, it can still obtain an estimate of it.

In our demonstration study in Section 4, we compare in particular our suggested PrivCon problem solution to the additive obfuscation noise method of [11]. The method of [11] is a prominent solution in the class of additive obfuscation noise methods: it guarantees exact convergence while not allowing a malicious agent to obtain the exact reference value of the agents under the connectivity condition stated earlier. However, in the framework of [11] information about the reference value is disclosed at two levels. First, as shown in [11], the malicious agent can employ a stochastic observer to obtain an estimate of the reference value of other agents. The normalized variance of this estimator can be very small compared to the size of the reference value of the agents. Moreover, in [11] agents need to coordinate to choose a common parameter $\phi$ in their noise generator and also use a common variance. The knowledge of the noise parameters is another level of disclosure of an estimate of the reference value of the agents, as the transmitted message of each agent at the initial step of the algorithm is the reference value plus a value generated by its local noise, whose variance is the same for all the agents. Moreover, in choosing the variance value for the noise, agents need to consider the size of their local reference values to yield meaningful obfuscation. We note that using a noise with large variance results in a more perturbed transient response and slower convergence. The advantage of our proposed solution is that it meets the privacy preservation objective of Definition 2 while rendering a transient and convergence behavior similar to that of algorithm (1), with no need for coordination to choose the parameters of the algorithm and no extra computations. Our notation is standard, though certain pieces of notation are defined as the need arises.

2 Problem Setting

Our graph-theoretic definitions follow [16]. A weighted digraph is a triplet $\mathcal{G}(\mathcal{V},\mathcal{E},\mathsf{A})$, where $\mathcal{V}$ is the node set, $\mathcal{E}\subset\mathcal{V}\times\mathcal{V}$ is the edge set, and $\mathsf{A}=[\mathsf{a}_{ij}]$ is the weighted adjacency matrix defined such that $\mathsf{a}_{ij}>0$ if $(i,j)\in\mathcal{E}$ and $\mathsf{a}_{ij}=0$ otherwise. Given an edge $(i,j)$, $i$ is called an in-neighbor of $j$, and $j$ is called an out-neighbor of $i$. The (weighted) out-degree and in-degree of each node $i\in\mathcal{V}$ are, respectively, $\mathsf{d}_{\text{out}}^{i}=\sum_{j=1}^{N}\mathsf{a}_{ij}$ and $\mathsf{d}_{\text{in}}^{i}=\sum_{j=1}^{N}\mathsf{a}_{ji}$. A weighted digraph is undirected if $\mathsf{a}_{ij}=\mathsf{a}_{ji}$ for all $i,j\in\mathcal{V}$. A digraph is weight-balanced if at every node $i\in\mathcal{V}$, $\mathsf{d}_{\text{out}}^{i}=\mathsf{d}_{\text{in}}^{i}$. A digraph is strongly connected if there is a directed path from every node to every other node. The Laplacian matrix of a digraph $\mathcal{G}$ is $\mathsf{L}=\operatorname{Diag}(\mathsf{d}_{\text{out}}^{1},\cdots,\mathsf{d}_{\text{out}}^{N})-\mathsf{A}$.

Let the interaction topology of the agents be a strongly connected and weight-balanced digraph $\mathcal{G}=(\mathcal{V},\mathcal{E},\mathsf{A})$. We consider the average consensus algorithm

$v^{i}(k+1)=v^{i}(k)+\Delta\sum\nolimits_{j=1}^{N}\mathsf{a}_{ij}\big(x^{i}(k)-x^{j}(k)\big),$ (2a)
$x^{i}(k+1)=x^{i}(k)+\Delta\big(-(x^{i}(k)-\mathsf{r}^{i})-\sum\nolimits_{j=1}^{N}\mathsf{a}_{ij}\big(x^{i}(k)-x^{j}(k)\big)-v^{i}(k)\big),$ (2b)
$x^{i}(0),\,v^{i}(0)\in\mathbb{R},\quad i\in\mathcal{V},\quad\text{s.t.}~\sum\nolimits_{i=1}^{N}v^{i}(0)=0,$ (2c)

proposed in [17], as a solution for privacy preserving average consensus over $\mathcal{G}$. As discussed in [18], algorithm (2) yields a transient response similar to that of algorithm (1). In our analysis, privacy preservation in the sense of Definition 2 is against a malicious agent defined as in Definition 1. Our study is different from the privacy preservation study in [17], where the continuous-time representation of algorithm (2) is considered and the reference values are assumed to be dynamic time-varying signals; the guarantees provided there crucially depend on the time-varying nature of the reference values.

Let $\mathbf{x}=(x^{1},\cdots,x^{N})$, $\mathbf{v}=(v^{1},\cdots,v^{N})$, $\boldsymbol{\Pi}=\mathbf{I}_{N}-\frac{1}{N}\mathbf{1}_{N}\mathbf{1}_{N}^{\top}$, and $\bar{\mathsf{r}}=\mathsf{r}^{\text{avg}}\mathbf{1}_{N}$. Moreover, let $\mathfrak{r}=\frac{1}{\sqrt{N}}\mathbf{1}_{N}$, and let $\mathfrak{R}$ be such that

$\begin{bmatrix}\mathfrak{r}&\mathfrak{R}\end{bmatrix}\begin{bmatrix}\mathfrak{r}^{\top}\\ \mathfrak{R}^{\top}\end{bmatrix}=\begin{bmatrix}\mathfrak{r}^{\top}\\ \mathfrak{R}^{\top}\end{bmatrix}\begin{bmatrix}\mathfrak{r}&\mathfrak{R}\end{bmatrix}=\mathbf{I}_{N}.$

Then, using the change of variable

$\begin{bmatrix}\mathsf{q}_{1}\\ \mathsf{q}_{2:N}\\ \mathsf{p}_{1}\\ \mathsf{p}_{2:N}\end{bmatrix}=\begin{bmatrix}\begin{bmatrix}\mathfrak{r}^{\top}\\ \mathfrak{R}^{\top}\end{bmatrix}&\begin{bmatrix}\mathbf{0}\\ \mathfrak{R}^{\top}\end{bmatrix}\\ \mathbf{0}&\begin{bmatrix}\mathfrak{r}^{\top}\\ \mathfrak{R}^{\top}\end{bmatrix}\end{bmatrix}\begin{bmatrix}\mathbf{v}-\boldsymbol{\Pi}\mathsf{r}\\ \mathbf{x}-\bar{\mathsf{r}}\end{bmatrix},$

algorithm (1) reads as

$\begin{bmatrix}\mathsf{p}_{1}(k+1)\\ \mathsf{p}_{2:N}(k+1)\end{bmatrix}=\begin{bmatrix}1&\mathbf{0}\\ \mathbf{0}&\mathbf{I}-\Delta\mathsf{L}^{+}\end{bmatrix}\begin{bmatrix}\mathsf{p}_{1}(k)\\ \mathsf{p}_{2:N}(k)\end{bmatrix},$

and algorithm (2) as

$\begin{bmatrix}\mathsf{q}_{1}(k+1)\\ \mathsf{q}_{2:N}(k+1)\\ \mathsf{p}_{1}(k+1)\\ \mathsf{p}_{2:N}(k+1)\end{bmatrix}=\begin{bmatrix}1&\mathbf{0}&0&\mathbf{0}\\ \mathbf{0}&(1-\Delta)\mathbf{I}&\mathbf{0}&\mathbf{0}\\ -\Delta&\mathbf{0}&(1-\Delta)&\mathbf{0}\\ \mathbf{0}&-\Delta\mathbf{I}&\mathbf{0}&\mathbf{I}-\Delta\mathsf{L}^{+}\end{bmatrix}\begin{bmatrix}\mathsf{q}_{1}(k)\\ \mathsf{q}_{2:N}(k)\\ \mathsf{p}_{1}(k)\\ \mathsf{p}_{2:N}(k)\end{bmatrix},$

where $\mathsf{L}^{+}=\mathfrak{R}^{\top}\mathsf{L}\mathfrak{R}$. These equivalent representations show the connection between the internal dynamics of algorithms (1) and (2). For a strongly connected and weight-balanced digraph, $\mathsf{L}$ has a simple zero eigenvalue and the remaining eigenvalues $\{\lambda_{i}\}_{i=2}^{N}$ have positive real parts. The eigenvalues of $\mathsf{L}^{+}$ are then $\{\lambda_{i}\}_{i=2}^{N}$. By invoking [18, Lemma S1], we note that the exponential stability of (1) and (2) is guaranteed, respectively, for any $\Delta\in(0,\bar{\Delta})$ and $\Delta\in(0,\min\{2,\bar{\Delta}\})$, where $\bar{\Delta}=\min\big\{2\frac{\operatorname{Re}(\lambda_{i})}{|\lambda_{i}|^{2}}\big\}_{i=2}^{N}$. Given the similar performance, an immediate appeal of algorithm (2) over algorithm (1) is that the reference values of the agents are not trivially transmitted to the in-neighbors. The question that we address in the subsequent section is whether the malicious agent can use its knowledge set to compute the reference values of other agents when agents implement algorithm (2).
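As a sanity check of this similarity claim, the following minimal Python sketch (reusing the hypothetical ring digraph from the earlier snippet) computes the bound $\bar{\Delta}$ from the Laplacian spectrum and runs algorithms (1) and (2) side by side; both converge to $\mathsf{r}^{\text{avg}}$, while algorithm (2) never initializes its transmitted state at $\mathsf{r}^{i}$:

```python
import numpy as np

N = 5
A = np.roll(np.eye(N), 1, axis=1)            # hypothetical ring digraph
L = np.diag(A.sum(axis=1)) - A
r = np.array([188.3, 592.5, 2567.2, 1793.3, 2567.2])

lam = np.linalg.eigvals(L)                   # stability bound from the
lam = lam[np.abs(lam) > 1e-9]                # nonzero Laplacian eigenvalues
Delta_bar = np.min(2 * lam.real / np.abs(lam) ** 2)
Delta = 0.5 * min(2.0, Delta_bar)            # valid for both (1) and (2)

x1 = r.copy()                                # algorithm (1): x(0) = r
x2, v2 = np.zeros(N), np.zeros(N)            # algorithm (2): free x(0), sum of v(0) is 0
for _ in range(600):
    x1 = x1 - Delta * (L @ x1)
    v2, x2 = v2 + Delta * (L @ x2), x2 + Delta * (-(x2 - r) - L @ x2 - v2)

print(np.allclose(x1, r.mean()), np.allclose(x2, r.mean()))   # True True
```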

3 Privacy Preservation Analysis

In this section we show that algorithm (2) intrinsically yields the same privacy preservation guarantees as [10, 11, 12] in terms of which agents' privacy can be preserved. However, the guarantees of algorithm (2) are valid also for strongly connected and weight-balanced digraphs. Moreover, unlike the privacy preservation of [10, 11, 12], which is obtained by using additive noises and comes with the disclosure of a stochastic estimate of the private value of the agents, algorithm (2) offers a deterministic and stronger sense of privacy preservation, i.e., it meets the privacy preservation objective defined in Definition 2. In what follows, without loss of generality, we assume that agent 1 is the malicious agent. The initialization condition $\sum_{i=1}^{N}v^{i}(0)=0$ is trivially satisfied when every agent $i\in\mathcal{V}$ uses $v^{i}(0)=0$. Other choices need coordination among the agents, with no strong guarantee that the choices remain private. Thus, we assume that $v^{i}(0)=0$, $i\in\mathcal{V}$, and that this is known to the malicious agent. Moreover, $\mathcal{N}_{\text{in}}^{i}$ and $\mathcal{N}_{\text{out}}^{i}$ are, respectively, the sets of in-neighbors and out-neighbors of agent $i$. We also define $\mathcal{N}_{\text{in}}^{i+}=\mathcal{N}_{\text{in}}^{i}\cup\{i\}$.

Lemma 3.1 (A sufficient condition for privacy preservation when algorithm (2) is implemented).

Let the interaction topology $\mathcal{G}$ of the agents implementing algorithm (2) with $\Delta\in(0,\min\{2,\bar{\Delta}\})$, initialized at $x^{i}(0)\in\mathbb{R}$ and $v^{i}(0)=0$, $i\in\mathcal{V}$, be a strongly connected and weight-balanced digraph. Let the knowledge set of malicious agent 1 be as in Definition 1. The privacy of any agent $i\in\mathcal{V}\backslash\{1\}$ is preserved from agent 1 if agent $i$ has at least one out-neighbor that is not an out-neighbor of agent 1.

Proof.

To prove the statement, we consider the worst case where agent $i$, whose privacy is being evaluated, is an out-neighbor of agent 1, i.e., agent 1 receives the transmitted message of agent $i$. Without loss of generality, let this agent be labeled agent 2. Let agent 3 be the out-neighbor of agent 2 that is not an out-neighbor of agent 1, i.e., $2\in\mathcal{N}_{\text{in}}^{3}$ but $1\not\in\mathcal{N}_{\text{in}}^{3}$. To show privacy preservation for agent 2, we show that there are infinitely many executions of algorithm (2) with different values of $\mathsf{r}^{2}\in\mathbb{R}$ and $\mathsf{r}^{3}\in\mathbb{R}$ that yield exactly the same signal received by agent 1. To this end, consider two executions of algorithm (2). Let $e_{x}^{i}=x_{1}^{i}-x_{2}^{i}$ and $e_{v}^{i}=v_{1}^{i}-v_{2}^{i}$, $i\in\mathcal{V}$, where the subscript indicates the execution. Moreover, let the reference values of the agents in the first execution be $\{\mathsf{r}_{1}^{i}\}_{i=1}^{N}$ and in the second execution be $\{\mathsf{r}_{2}^{i}\}_{i=1}^{N}$, such that $e_{\mathsf{r}}^{i}=\mathsf{r}_{1}^{i}-\mathsf{r}_{2}^{i}\neq 0$ if $i\in\mathcal{N}_{\text{in}}^{3+}$ and $e_{\mathsf{r}}^{i}=0$ otherwise. Moreover, $\sum_{i=1}^{N}e_{\mathsf{r}}^{i}=0$, i.e., $\mathsf{r}_{1}^{\text{avg}}=\mathsf{r}_{2}^{\text{avg}}$. Lastly, $e_{x}^{i}(0)=0$ for $i\in\mathcal{V}\backslash\{3\}$. Next, we show that for any $e_{x}^{3}(0)\in\mathbb{R}$ there exist $e_{\mathsf{r}}^{i}\neq 0$, $i\in\mathcal{N}_{\text{in}}^{3+}$, such that $e_{x}^{j}(k)\equiv 0$, $j\in\mathcal{V}\backslash\{3\}$, for $k\in\mathbb{Z}_{\geq 0}$. Therefore, the signals received by agent 1 in these two distinct executions are exactly the same and agent 1 cannot distinguish between them.

The error dynamics at each agent i𝒱i\in\mathcal{V} reads as

$e_{v}^{i}(k+1)=e_{v}^{i}(k)+\Delta\sum\nolimits_{j=1}^{N}\mathsf{a}_{ij}\big(e_{x}^{i}(k)-e_{x}^{j}(k)\big),$ (3a)
$e_{x}^{i}(k+1)=e_{x}^{i}(k)+\Delta\big(-(e_{x}^{i}(k)-e_{\mathsf{r}}^{i})-\sum\nolimits_{j=1}^{N}\mathsf{a}_{ij}\big(e_{x}^{i}(k)-e_{x}^{j}(k)\big)-e_{v}^{i}(k)\big),$ (3b)
$e_{v}^{i}(0)=0,\qquad e_{x}^{i}(0)=0~\text{if}~i\neq 3,\qquad e_{x}^{i}(0)\neq 0~\text{if}~i=3.$ (3c)

With $e_{x}^{i}(k)\equiv 0$ imposed for every $i\in\mathcal{V}\backslash\{3\}$, the trajectories $e_{x}^{i}(k)\equiv 0$ and $e_{v}^{i}(k)\equiv 0$, $k\in\mathbb{Z}_{\geq 0}$, satisfy the error dynamics (3) for every $i\in\mathcal{V}\backslash\mathcal{N}_{\text{in}}^{3+}$, since $e_{\mathsf{r}}^{i}=0$ for such agents. On the other hand, for $i\in\mathcal{N}_{\text{in}}^{3}$,

$e_{v}^{i}(k+1)=e_{v}^{i}(k)-\Delta\,\mathsf{a}_{i3}\,e_{x}^{3}(k),$ (4a)
$0=e_{\mathsf{r}}^{i}+\mathsf{a}_{i3}\,e_{x}^{3}(k)-e_{v}^{i}(k),$ (4b)

and since $\sum_{i=1}^{N}\mathsf{a}_{3i}=\mathsf{d}_{\text{out}}^{3}$, (3) for agent 3 reads as

$e_{v}^{3}(k+1)=e_{v}^{3}(k)+\Delta\,\mathsf{d}_{\text{out}}^{3}\,e_{x}^{3}(k),$ (5a)
$e_{x}^{3}(k+1)=e_{x}^{3}(k)+\Delta\big(e_{\mathsf{r}}^{3}-(1+\mathsf{d}_{\text{out}}^{3})\,e_{x}^{3}(k)-e_{v}^{3}(k)\big).$ (5b)

Note that by adding (4a) over $i\in\mathcal{N}_{\text{in}}^{3}$ and (5a), and using $\sum_{i=1}^{N}\mathsf{a}_{i3}=\mathsf{d}_{\text{in}}^{3}=\mathsf{d}_{\text{out}}^{3}$ together with the zero initial conditions, we have $\sum_{i\in\mathcal{N}_{\text{in}}^{3+}}e_{v}^{i}(k)=0$. Considering also $\sum_{i\in\mathcal{N}_{\text{in}}^{3+}}e_{\mathsf{r}}^{i}=0$, it follows from adding up (5b) and (4b) for $i\in\mathcal{N}_{\text{in}}^{3}$ that

$e_{x}^{3}(k+1)=(1-\Delta)\,e_{x}^{3}(k).$ (6)

Then, since $e_{v}^{3}(0)=0$, it follows from (5b), (6), and (4b) that

$e_{\mathsf{r}}^{3}=\mathsf{d}_{\text{out}}^{3}\,e_{x}^{3}(0),\qquad e_{\mathsf{r}}^{i}=-\mathsf{a}_{i3}\,e_{x}^{3}(0),\quad i\in\mathcal{N}_{\text{in}}^{3}.$ (7)

Since $\sum_{i=1}^{N}\mathsf{a}_{i3}=\mathsf{d}_{\text{in}}^{3}=\mathsf{d}_{\text{out}}^{3}$, it follows from (7) that indeed $\sum_{i\in\mathcal{N}_{\text{in}}^{3+}}e_{\mathsf{r}}^{i}=0$, and therefore $\mathsf{r}_{1}^{\text{avg}}=\mathsf{r}_{2}^{\text{avg}}$. Hence, for an arbitrarily large $e_{x}^{3}(0)$ in the initial state, $e_{\mathsf{r}}^{2}$ varies in an unbounded fashion as well, yet the malicious agent receives exactly the same signals from its out-neighbors, i.e., $e_{x}^{i}(k)\equiv 0$, $k\in\mathbb{Z}_{\geq 0}$, for $i\in\mathcal{N}_{\text{out}}^{1}$. Note here that since $|1-\Delta|<1$, it follows from (6) that $e_{x}^{3}(k)$ also converges to zero as $k\to\infty$, showing that all the agents converge to the same final average point in both executions 1 and 2. ∎
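The construction in (7) can be checked numerically. In the following Python sketch (the same hypothetical ring digraph, with 0-based indices, so malicious agent 1, agent 2, and agent 3 correspond to indices 0, 1, and 2), two executions of algorithm (2) whose reference values and initial states differ according to (7) produce, up to floating-point roundoff, identical signals at the malicious agent while converging to the same average:

```python
import numpy as np

N = 5
A = np.roll(np.eye(N), 1, axis=1)   # ring: agent i receives from agent i+1
L = np.diag(A.sum(axis=1)) - A
Delta = 0.5

def run(r, x0, steps=300):
    """One execution of algorithm (2) with v(0) = 0; returns the x trajectory."""
    x, v, traj = x0.copy(), np.zeros(N), [x0.copy()]
    for _ in range(steps):
        v, x = v + Delta * (L @ x), x + Delta * (-(x - r) - L @ x - v)
        traj.append(x.copy())
    return np.array(traj)

r1 = np.array([188.3, 592.5, 2567.2, 1793.3, 2567.2])
x0 = np.zeros(N)

e3 = 1e4                            # arbitrary offset e_x^3(0), can be huge
r2, x0b = r1.copy(), x0.copy()
r2[2] -= A[2].sum() * e3            # e_r^3 = d_out^3 * e_x^3(0), cf. (7)
r2[1] += A[1, 2] * e3               # e_r^2 = -a_23 * e_x^3(0), cf. (7)
x0b[2] -= e3                        # second execution starts agent 3 elsewhere

T1, T2 = run(r1, x0), run(r2, x0b)
# Agent 1 (index 0) only receives x^2(k) (column 1): identical in both runs.
print(np.max(np.abs(T1[:, 1] - T2[:, 1])))    # ~ 0, floating-point roundoff
print(T1[-1].round(4), T2[-1].round(4))       # both settle at r1.mean()
```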

Our next result shows that when the malicious agent has access to the messages transmitted to and from an agent $i$, similar to the methods proposed in [10, 11, 12], the privacy of agent $i$ is not preserved. To establish this result we show that the malicious agent can implement an observer to obtain $\mathsf{r}^{i}$.

Lemma 3.2 (A sufficient condition for loss of privacy preservation when algorithm (2) is implemented).

Let the interaction topology $\mathcal{G}$ of the agents implementing algorithm (2), initialized at $x^{i}(0)\in\mathbb{R}$ and $v^{i}(0)=0$, $i\in\mathcal{V}$, be a strongly connected and weight-balanced digraph. Let the knowledge set of malicious agent 1 be as in Definition 1. If agent 1 is an in-neighbor of agent $o\in\mathcal{V}\backslash\{1\}$ and of all its out-neighbors, then at any $k\in\mathbb{Z}_{\geq 1}$ agent 1 can obtain $\mathsf{r}^{o}$ using $x^{o}(k)$, $x^{o}(k-1)$ and $\{x^{j}(k-1)\}_{j\in\mathcal{N}_{\text{out}}^{o}}$.

Proof.

The proof follows from a simple algebraic manipulation of (2) to express $\mathsf{r}^{o}$ in terms of $x^{o}(k)$, $x^{o}(k-1)$ and $\{x^{j}(k-1)\}_{j\in\mathcal{N}_{\text{out}}^{o}}$. ∎
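For completeness, a sketch of the omitted algebra (our reconstruction, not spelled out in the source): solving (2b) for $\mathsf{r}^{o}$ gives

$\mathsf{r}^{o}=\frac{x^{o}(k)-x^{o}(k-1)}{\Delta}+x^{o}(k-1)+\sum\nolimits_{j=1}^{N}\mathsf{a}_{oj}\big(x^{o}(k-1)-x^{j}(k-1)\big)+v^{o}(k-1),$

where, iterating (2a) from $v^{o}(0)=0$, we have $v^{o}(k-1)=\Delta\sum_{m=0}^{k-2}\sum_{j=1}^{N}\mathsf{a}_{oj}\big(x^{o}(m)-x^{j}(m)\big)$. Every term on the right-hand side is available to agent 1 under the stated neighboring condition, since it stores the history of the signals it receives.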

Lemmas 3.1 and 3.2 give us the following main statement on the privacy preservation guarantees of algorithm (2).

Theorem 3.3 (Privacy Preservation guarantee of algorithm (2)).

Let the interaction topology $\mathcal{G}$ of the agents implementing algorithm (2), initialized at $x^{i}(0)\in\mathbb{R}$ and $v^{i}(0)=0$, $i\in\mathcal{V}$, be a strongly connected and weight-balanced digraph. Let the knowledge set of malicious agent 1 be as in Definition 1. The privacy of agent $i\in\mathcal{V}\backslash\{1\}$ is preserved from agent 1 if and only if agent $i$ has an out-neighbor that is not an out-neighbor of agent 1.

Remark 3.1 (Privacy preservation for an initialization-free average consensus algorithm for connected undirected graphs).

We can show through similar arguments that the privacy preservation guarantees of algorithm (2) hold for the average consensus algorithm (8) as well,

$v^{i}(k+1)=v^{i}(k)-\Delta\sum\nolimits_{j=1}^{N}\mathsf{a}_{ij}\big(x^{i}(k)-x^{j}(k)\big),$ (8a)
$x^{i}(k+1)=x^{i}(k)+\Delta\big(-(x^{i}(k)-\mathsf{r}^{i})-\sum\nolimits_{j=1}^{N}\mathsf{a}_{ij}\big(x^{i}(k)-x^{j}(k)\big)+\sum\nolimits_{j=1}^{N}\mathsf{a}_{ij}\big(v^{i}(k)-v^{j}(k)\big)\big),$ (8b)
$x^{i}(0),\,v^{i}(0)\in\mathbb{R},\quad i\in\mathcal{V}.$ (8c)

Algorithm (8) does not require the special initialization $v^{i}(0)=0$, $i\in\mathcal{V}$, and is therefore robust to agent drop-off. However, this algorithm works over connected undirected graphs and also requires an extra message exchange between neighboring agents. $\Box$

So far, we have shown the intrinsic privacy property of algorithm (2). A natural question now is whether we can loosen the topology restriction of Theorem 3.3 by using additive perturbation signals, and preserve the privacy of agents whose communication signals (incoming and outgoing) are all available to the malicious agent. Herein, we show that this mechanism makes no contribution and the malicious agent can still obtain the local information asymptotically. A comprehensive protection via perturbation signals suggests adding a signal $g^{i}(k):\mathbb{Z}_{\geq 0}\to\mathbb{R}$ to the transmitted message of every agent $i\in\mathcal{V}$, and additive signals $f_{1}^{i}(k):\mathbb{Z}_{\geq 0}\to\mathbb{R}$ and $f_{2}^{i}(k):\mathbb{Z}_{\geq 0}\to\mathbb{R}$ to the right-hand sides of (2a) and (2b), respectively. However, without loss of generality, the effect of all these signals can be captured by adding only a dynamic perturbation signal $f^{i}(k):\mathbb{Z}_{\geq 0}\to\mathbb{R}$, $i\in\mathcal{V}$, to the right-hand side of (2b) as follows,

$v^{i}(k+1)=v^{i}(k)+\Delta\sum\nolimits_{j=1}^{N}\mathsf{a}_{ij}\big(x^{i}(k)-x^{j}(k)\big),$ (9a)
$x^{i}(k+1)=x^{i}(k)+\Delta\big(-(x^{i}(k)-\mathsf{r}^{i})-\sum\nolimits_{j=1}^{N}\mathsf{a}_{ij}\big(x^{i}(k)-x^{j}(k)\big)-v^{i}(k)+f^{i}(k)\big).$ (9b)

Instead of using a particular perturbation signal as in [10, 11, 12], we follow the approach in [19] and first investigate for which set of perturbation signals, which we call admissible perturbation signals, the final convergence point of the algorithm is not perturbed. It is natural to assume that any necessary condition defining the admissible perturbation signals is known to all the agents. Then, we show that by knowing a necessary condition on the perturbation, the malicious agent can employ an observer to obtain the reference input regardless of which exact additive admissible perturbation signal the agent uses.

Theorem 3.4 (A necessary condition on admissible perturbation signals).

Let the interaction topology $\mathcal{G}$ of the agents implementing algorithm (9) with $\Delta\in(0,\min\{2,\bar{\Delta}\})$, initialized at $x^{i}(0)\in\mathbb{R}$ and $v^{i}(0)=0$, $i\in\mathcal{V}$, be a strongly connected and weight-balanced digraph. Let the knowledge set of malicious agent 1 be as in Definition 1. A necessary condition for preserving the average consensus, i.e., $\lim_{k\to\infty}x^{i}(k)=\mathsf{r}^{\text{avg}}$ for $i\in\mathcal{V}$, is

$\lim_{k\to\infty}\sum\nolimits_{i=1}^{N}\sum\nolimits_{m=0}^{k}(1-\Delta)^{k-m}f^{i}(m)=0.$ (10)
Proof.

Using the change of variable introduced in Section 2, (9) can be written as

$\mathsf{q}_{1}(k+1)=\mathsf{q}_{1}(k)=0,$
$\mathsf{q}_{2:N}(k+1)=(1-\Delta)\,\mathsf{q}_{2:N}(k)+\Delta\,\mathfrak{R}^{\top}\mathsf{f}(k),$
$\mathsf{p}_{1}(k+1)=(1-\Delta)\,\mathsf{p}_{1}(k)+\Delta\,\mathfrak{r}^{\top}\mathsf{f}(k),$
$\mathsf{p}_{2:N}(k+1)=(\mathbf{I}-\Delta\mathsf{L}^{+})\,\mathsf{p}_{2:N}(k)-\Delta\,\mathsf{q}_{2:N}(k)+\Delta\,\mathfrak{R}^{\top}\mathsf{f}(k),$

where we used $\mathsf{q}_{1}(0)=\frac{1}{\sqrt{N}}\sum_{i=1}^{N}v^{i}(0)=0$. Then,

$\mathsf{p}_{1}(k)=(1-\Delta)^{k}\mathsf{p}_{1}(0)+\Delta\sum\nolimits_{m=0}^{k}(1-\Delta)^{k-m}\,\mathfrak{r}^{\top}\mathsf{f}(m).$ (12)

Since $\left[\begin{smallmatrix}\mathsf{p}_{1}\\ \mathsf{p}_{2:N}\end{smallmatrix}\right]=\left[\begin{smallmatrix}\mathfrak{r}^{\top}\\ \mathfrak{R}^{\top}\end{smallmatrix}\right](\mathbf{x}-\mathsf{r}^{\text{avg}}\mathbf{1}_{N})$, a necessary condition for reaching average consensus is $\lim_{k\to\infty}\mathsf{p}(k)=\mathbf{0}$. Given that $\lim_{k\to\infty}(1-\Delta)^{k}=0$, it then follows from (12) that (10) is a necessary condition for $\lim_{k\to\infty}\mathsf{p}_{1}(k)=0$. ∎

Remark 3.2.

Condition (10), which defines the admissible perturbation signals, couples the choices of all the agents. In a decentralized setting, since there is no third party to assign the local perturbation signals $f^{i}$, for the agents to satisfy (10) without any coordination among themselves, each agent chooses its admissible perturbation according to

$\lim_{k\to\infty}\sum\nolimits_{m=0}^{k}(1-\Delta)^{k-m}f^{i}(m)=0,\quad i\in\mathcal{V}.$ (13)

\Box
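As a quick illustration (our example, not from the source), any geometrically decaying perturbation $f^{i}(k)=c_{i}\rho^{k}$ with $|\rho|<1$ and $\rho\neq 1-\Delta$ satisfies (13), since

$\sum\nolimits_{m=0}^{k}(1-\Delta)^{k-m}\rho^{m}=\frac{(1-\Delta)^{k+1}-\rho^{k+1}}{(1-\Delta)-\rho}\to 0\quad\text{as}~k\to\infty,$

whereas a constant nonzero perturbation $f^{i}(k)\equiv c_{i}$ is not admissible, because the sum then converges to $c_{i}/\Delta\neq 0$.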

In our next statement, we investigate whether the malicious agent can obtain the local reference value of an agent if it knows condition (13) and all the messages transmitted to and from that agent.

Theorem 3.5 (Use of locally chosen perturbation signals in (9) does not increase privacy protection level).

Let the interaction topology $\mathcal{G}$ of the agents implementing algorithm (9) with $\Delta\in(0,\min\{2,\bar{\Delta}\})$, initialized at $x^{i}(0)\in\mathbb{R}$ and $v^{i}(0)=0$, $i\in\mathcal{V}$, be a strongly connected and weight-balanced digraph. Suppose the agents implement locally chosen admissible perturbation signals that satisfy (13), and let the knowledge set of malicious agent 1 be as in Definition 1. If agent 1 is an in-neighbor of agent $i\in\mathcal{V}\backslash\{1\}$ and of all its out-neighbors, then agent 1 can employ the observer

$\hat{v}^{i}(k+1)=\hat{v}^{i}(k)+\Delta\sum\nolimits_{j=1}^{N}\mathsf{a}_{ij}\big(x^{i}(k)-x^{j}(k)\big),$ (14a)
$\hat{x}^{i}(k+1)=\hat{x}^{i}(k)+\Delta\big(\sum\nolimits_{j=1}^{N}\mathsf{a}_{ij}\big(x^{i}(k)-x^{j}(k)\big)+\hat{v}^{i}(k)-\hat{x}^{i}(k)\big),$ (14b)
$\hat{z}^{i}(k)=\hat{x}^{i}(k)+x^{i}(k),$ (14c)

initialized at $\hat{v}^{i}(0)=\hat{x}^{i}(0)=0$, to obtain $\hat{z}^{i}(k)\to\mathsf{r}^{i}$ as $k\to\infty$.

Proof.

By substitution, from (14c) we obtain

$\hat{z}^{i}(k+1)=\hat{z}^{i}(k)+\Delta\big(-\hat{z}^{i}(k)-(v^{i}(k)-\hat{v}^{i}(k))+\mathsf{r}^{i}+f^{i}(k)\big).$

Since $\hat{v}^{i}$ and $v^{i}$ satisfy the same recursion from zero initial conditions, $\hat{v}^{i}(k)\equiv v^{i}(k)$, and thus

$\hat{z}^{i}(k)=(1-\Delta)^{k}\hat{z}^{i}(0)+\Delta\sum\nolimits_{m=0}^{k}(1-\Delta)^{k-m}\big(\mathsf{r}^{i}+f^{i}(m)\big).$

We note that using the sum of geometric series we can write

$\sum\nolimits_{m=0}^{k}(1-\Delta)^{k-m}=\frac{1-(1-\Delta)^{k+1}}{\Delta}.$

Then, since $|1-\Delta|<1$ and given the admissibility condition (13), the observer converges to the local reference value of agent $i$, i.e., $\hat{z}^{i}(k)\to\mathsf{r}^{i}$ as $k\to\infty$. ∎
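To see the observer in action, the following Python sketch (hypothetical ring digraph and randomly drawn perturbation amplitudes, 0-based indices; the observer is simply handed the signals that the neighboring condition of Theorem 3.5 would grant it) runs algorithm (9) with admissible perturbations $f^{i}(k)=c_{i}(0.9)^{k}$ and lets the malicious agent run observer (14) on an agent $o$:

```python
import numpy as np

N = 5
A = np.roll(np.eye(N), 1, axis=1)       # hypothetical ring digraph
L = np.diag(A.sum(axis=1)) - A
r = np.array([188.3, 592.5, 2567.2, 1793.3, 2567.2])
Delta, o = 0.5, 2                       # o: 0-based index of the observed agent

c = np.random.default_rng(0).uniform(-500, 500, N)   # f^i(k) = c_i * 0.9^k

x, v = np.zeros(N), np.zeros(N)         # state of algorithm (9), sum of v(0) is 0
vh, xh = 0.0, 0.0                       # observer (14): v-hat(0) = x-hat(0) = 0
for k in range(400):
    flow = A[o] @ (x[o] - x)            # sum_j a_oj (x^o(k) - x^j(k)), measured
    vh, xh = vh + Delta * flow, xh + Delta * (flow + vh - xh)
    f = c * 0.9 ** k
    v, x = v + Delta * (L @ x), x + Delta * (-(x - r) - L @ x - v + f)

print(xh + x[o], r[o])                  # z-hat = x-hat + x^o  ->  r^o = 2567.2
```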

We have thus shown that an additive perturbation signal does not increase the level of privacy provided by algorithm (2).

4 Demonstration Examples

To provide a context in which to appreciate the implications of our privacy preserving solution for the average consensus problem versus the method of [11], we consider an optimal power dispatch (OPD) problem over the undirected connected graph in Fig. 1(b). [11] is a representative of the common method of privacy preservation via additive noises for algorithm (1). We conduct our study over an undirected connected graph since the results in [11] are established only for such graphs. In our OPD problem, a group of generators interacting over the graph of Fig. 1(b) are expected to meet the demand of $\mathcal{P}_{D}=1500$ MW ($p^{1}+\cdots+p^{5}=1500$) while minimizing their collective cost $\sum_{i=1}^{5}f^{i}(p^{i})$. The parameters of the local costs $f^{i}(p^{i})=\frac{1}{2\beta^{i}}(p^{i}+\alpha^{i})^{2}+\gamma^{i}$, $i\in\{1,\cdots,5\}$, are chosen from the IEEE 118-bus generator list according to the corresponding components of $\{\alpha^{i}\}_{i=1}^{5}=\{188.3,592.5,2567.2,1793.3,2567.2\}$ and $\{\beta^{i}\}_{i=1}^{5}=\{7.17,45.9,208.2,166.6,208.2\}$. Here $\alpha^{i}$ and $\beta^{i}$ are supposed to be the private values of each agent $i\in\mathcal{V}$. The optimal solution of this problem for each agent $i\in\{1,\cdots,5\}$ is given by $p^{i\star}=\beta^{i}\frac{\mathcal{P}_{D}+\sum_{j=1}^{N}\alpha^{j}}{\sum_{j=1}^{N}\beta^{j}}-\alpha^{i}$ [20]. To generate this solution in a distributed manner, let us assume that these $N=5$ agents employ two static average consensus algorithms to obtain $\bar{\alpha}=\frac{1}{N}\sum_{i=1}^{N}\alpha^{i}$ and $\bar{\beta}=\frac{1}{N}\sum_{i=1}^{N}\beta^{i}$.
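A minimal sketch of this local recovery step (using the paper's numbers, with plain means standing in for the two consensus outputs $\bar{\alpha}$ and $\bar{\beta}$):

```python
import numpy as np

alpha = np.array([188.3, 592.5, 2567.2, 1793.3, 2567.2])
beta = np.array([7.17, 45.9, 208.2, 166.6, 208.2])
P_D, N = 1500.0, 5

# Stand-ins for the outputs of the two average consensus runs.
alpha_bar, beta_bar = alpha.mean(), beta.mean()

# Each agent recovers its optimal dispatch locally from the averages:
# p_i* = beta_i * (P_D + N*alpha_bar) / (N*beta_bar) - alpha_i.
p_star = beta * (P_D + N * alpha_bar) / (N * beta_bar) - alpha

print(p_star.round(2), p_star.sum())    # dispatches sum to P_D = 1500
```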


Figure 2: Trajectories of the average consensus algorithms to obtain $\bar{\alpha}$ (left plot) and $\bar{\beta}$ (right plot). The thick gray lines and the thin gray lines show trajectories of two executions of method M1 [11, Corollary 1], each employing a different noise realization. The blue lines show the trajectories of the agreement state of algorithm (2).

Then, by knowing $N$ and $\mathcal{P}_{D}$, the agents have all the information needed to obtain $p^{i\star}$. Fig. 1(b) is the graph used in the numerical example of [11], where agent 5 is the malicious agent, as we also assume here. The privacy of agents $\{1,2,3\}$ is preserved if the agents use algorithm (1) augmented by the additive noise method of [11, Corollary 1] (hereafter referred to as method M1) or algorithm (2) (see Theorem 3.3). Let us assume that, when using method M1, the agents use the same $\phi=0.9$ as [11, Section VI] and, given the value of their parameters, they agree to use an additive Gaussian noise with mean $0$ and standard deviation $\sigma=100$. Note that these values should be common for all the agents, and thus the agents need to coordinate to choose them. Fig. 2 shows the trajectories generated by method M1 for two different executions (each uses a different noise realization) overlaid on the trajectories generated by algorithm (2). As seen, the trajectories of method M1 are quite noisy and convergence is slower. Using method M1, malicious agent 5 cannot obtain the exact values of $\{\alpha^{i}\}_{i=1}^{3}$ and $\{\beta^{i}\}_{i=1}^{3}$ because, as shown in [11, Fig. 4], the covariance of the maximum likelihood estimator that agent 5 uses to obtain the private reference values of $\{1,2,3\}$ has a non-vanishing steady-state value, see Fig. 3. However, agent 5 knows that 99.7% of the time (the $3\sigma$ rule) the error rate in obtaining each of $\{\alpha^{i}\}_{i=1}^{3}$ and $\{\beta^{i}\}_{i=1}^{3}$ is, respectively, $(0.972\%,0.618\%,0.071\%)$ and $(25.512\%,7.972\%,0.879\%)$, according to the computed normalized covariances $P_{ii}$ in Fig. 3. Given the numerical values of these parameters, this level of protection gives the malicious agent a good approximation of the optimal operating points of the supposedly private agents $\{1,2,3\}$. On the other hand, the privacy preservation guarantees that algorithm (2) provides for agents $\{1,2,3\}$ are stronger, as the malicious agent cannot obtain any estimate of the private values of these agents. See Fig. 4 for an example scenario, where two alternative sets of reference input signals for $\alpha^{i}$, denoted by $\{\alpha_{2}^{i}\}_{i=1}^{5}=\{-1311.6,3592.5,1067.2,1793.3,2567.2\}$ and $\{\alpha_{3}^{i}\}_{i=1}^{5}=\{1688.3,-2407.4,4067.2,1793.3,2567.2\}$, with $\frac{1}{N}\sum_{i=1}^{5}\alpha_{j}^{i}=\frac{1}{N}\sum_{i=1}^{5}\alpha_{1}^{i}$, $j\in\{2,3\}$, result in exactly the same transmitted messages to the malicious agent 5. Therefore, agent 5 cannot distinguish between these different references for agents $\{1,2,3\}$. Here, $\{\alpha_{1}^{i}\}_{i=1}^{5}$ are the actual inputs given in the OPD problem definition earlier.

Figure 3: Normalized covariances of the maximum likelihood estimator that the malicious agent uses to obtain an estimate of the reference inputs of the agents implementing method M1. The dashed line is the steady-state value and the solid lines show the time history. This plot is the same as [11, Fig. 4], where $\sigma=1$ is used.
(a) $\{x_{j}^{i}(k)\}_{i=1}^{5}$, $j\in\{1,2,3\}$ vs. $k$, shown, respectively, by the dashed blue, the thin gray, and the thick gray lines.
(b) $e_{x}^{i}(k)=x_{1}^{i}(k)-x_{2}^{i}(k)$ vs. $k$.
(c) $e_{x}^{i}(k)=x_{1}^{i}(k)-x_{3}^{i}(k)$ vs. $k$.
Figure 4: Three executions of (2) yield exactly the same signals received by the malicious agent 5, while converging to the same average value of $\frac{1}{N}\sum_{i=1}^{5}\alpha^{i}$.

5 Conclusion

We considered the problem of privacy preservation in the static average consensus problem. This problem is normally solved by adding privacy preservation mechanisms to the popular first-order Laplacian-based algorithm. These mechanisms come with computational overhead or require pre-coordination among the agents to choose the parameters of the algorithm; they also alter the transient response of the algorithm. In this paper we showed that an alternative algorithm, proposed in the literature in the context of the dynamic average consensus problem, can be a simple solution for privacy preservation in the average consensus problem. The advantage of our proposed solution over existing results in the literature is that it provides a stronger notion of privacy while rendering a transient and convergence behavior similar to that of the well-known Laplacian average consensus algorithm, with no need for coordination to choose the parameters of the algorithm and no extra computations.

References

  • [1] R. Olfati-Saber and R. M. Murray, “Consensus problems in networks of agents with switching topology and time-delays,” IEEE Transactions on Automatic Control, vol. 49, no. 9, pp. 1520–1533, 2004.
  • [2] R. E. Kalman, “On the general theory of control systems,” in Proceedings First International Conference on Automatic Control, Moscow, USSR, 1960.
  • [3] S. Pequito, S. Kar, S. Sundaram, and A. P. Aguiar, “Design of communication networks for distributed computation with privacy guarantees,” in 53rd IEEE Conference on Decision and Control, pp. 1370–1376, IEEE, 2014.
  • [4] C. N. Hadjicostis, “Privacy preserving distributed average consensus via homomorphic encryption,” in 2018 IEEE Conference on Decision and Control (CDC), pp. 1258–1263, IEEE, 2018.
  • [5] M. Ruan, H. Gao, and Y. Wang, “Secure and privacy-preserving consensus,” IEEE Transactions on Automatic Control, vol. 64, no. 10, pp. 4035–4049, 2019.
  • [6] T. Yin, Y. Lv, and W. Yu, “Accurate privacy preserving average consensus,” IEEE Transactions on Circuits and Systems II: Express Briefs, 2019.
  • [7] E. Nozari, P. Tallapragada, and J. Cortés, “Differentially private average consensus: Obstructions, trade-offs, and optimal algorithm design,” Automatica, vol. 81, pp. 221–231, 2017.
  • [8] Z. Huang, S. Mitra, and G. Dullerud, “Differentially private iterative synchronous consensus,” in Proceedings of the 2012 ACM workshop on Privacy in the electronic society, pp. 81–90, 2012.
  • [9] E. Nozari, P. Tallapragada, and J. Cortés, “Differentially private average consensus with optimal noise selection,” IFAC-PapersOnLine, vol. 48, no. 22, pp. 203–208, 2015.
  • [10] N. E. Manitara and C. N. Hadjicostis, “Privacy-preserving asymptotic average consensus,” in 2013 European Control Conference (ECC), pp. 760–765, IEEE, 2013.
  • [11] Y. Mo and R. M. Murray, “Privacy preserving average consensus,” IEEE Transactions on Automatic Control, vol. 62, no. 2, pp. 753–765, 2016.
  • [12] J. He, L. Cai, C. Zhao, P. Cheng, and X. Guan, “Privacy-preserving average consensus: privacy analysis and algorithm design,” IEEE Transactions on Signal and Information Processing over Networks, vol. 5, no. 1, pp. 127–138, 2018.
  • [13] C. Altafini, “A system-theoretic framework for privacy preservation in continuous-time multiagent dynamics,” Automatica, vol. 122, p. 109253, 2020.
  • [14] N. Rezazadeh and S. S. Kia, “Privacy preservation in a continuous-time static average consensus algorithm over directed graphs,” in American Control Conference, pp. 5890–5895, IEEE, 2018.
  • [15] C. Dwork, A. Roth, et al., “The algorithmic foundations of differential privacy,” Foundations and Trends® in Theoretical Computer Science, vol. 9, no. 3–4, pp. 211–407, 2014.
  • [16] F. Bullo, J. Cortes, and S. Martinez, Distributed control of robotic networks: a mathematical approach to motion coordination algorithms, vol. 27. Princeton University Press, 2009.
  • [17] S. S. Kia, J. Cortés, and S. Martínez, “Dynamic average consensus under limited control authority and privacy requirements,” International Journal on Robust and Nonlinear Control, vol. 25, no. 13, pp. 1941–1966, 2015.
  • [18] S. S. Kia, B. Van Scoy, J. Cortes, R. A. Freeman, K. M. Lynch, and S. Martinez, “Tutorial on dynamic average consensus: The problem, its applications, and the algorithms,” IEEE Control Systems Magazine, vol. 39, no. 3, pp. 40–72, 2019.
  • [19] N. Rezazadeh and S. S. Kia, “Privacy preservation in continuous-time average consensus algorithm via deterministic additive perturbation signals,” 2020. Available at https://arxiv.org/pdf/1904.05286.pdf.
  • [20] A. J. Wood, B. F. Wollenberg, and G. B. Sheblé, Power Generation, Operation, and Control. John Wiley & Sons, 2013.