Deterministic Privacy Preservation in
Static Average Consensus Problem
Abstract
In this paper we consider the problem of privacy preservation in the static average consensus problem. This problem is normally solved by proposing privacy preservation augmentations to the popular first-order Laplacian-based algorithm. These mechanisms, however, come with computational overhead, may need coordination among the agents to choose their parameters, and also alter the transient response of the algorithm. In this paper we show that an alternative iterative algorithm, proposed in the literature in the context of the dynamic average consensus problem, is intrinsically privacy preserving and can be used as a privacy preserving algorithm that yields the same performance behavior as the well-known Laplacian consensus algorithm but without the overheads of the existing privacy preservation methods.
1 Introduction
This paper considers the problem of privacy preservation in the in-network average consensus (PrivCon) problem, where the objective is to enable a group of $N$ communicating agents interacting over a strongly connected and weight-balanced digraph $\mathcal{G}$ (for the graph theoretic definitions used hereafter see Section 2), see Fig. 1, to calculate $\frac{1}{N}\sum_{j=1}^{N}\mathsf{r}^j$ using local interactions but without disclosing their local reference values $\mathsf{r}^i$, $i\in\{1,\dots,N\}$, to each other. Privacy preservation for average consensus is normally formalized as concealing the reference value of each agent from a malicious agent that is defined as follows.
Definition 1 (Malicious agent).
A malicious agent in the network is an agent that wants to obtain the reference value of the other agents without perturbing/interrupting the execution of the average consensus algorithm. The knowledge set of this malicious agent consists of (a) the network topology $\mathcal{G}$, (b) its own local states and reference input, (c) the received signals from its out-neighbors, and (d) the knowledge that the agreement state of each agent converges asymptotically to $\frac{1}{N}\sum_{j=1}^{N}\mathsf{r}^j$.
The solutions to the PrivCon problem in the literature mainly center around designing privacy preservation augmentation mechanisms for the popular average consensus algorithm
$$x^i(k+1)=x^i(k)-\delta\sum_{j=1}^{N}a_{ij}\big(x^i(k)-x^j(k)\big),\qquad x^i(0)=\mathsf{r}^i,\qquad i\in\{1,\dots,N\},\tag{1}$$
which with a proper choice of the stepsize $\delta>0$ guarantees $x^i(k)\to\frac{1}{N}\sum_{j=1}^{N}\mathsf{r}^j$, $i\in\{1,\dots,N\}$, as $k\to\infty$ [1]. In a weighted digraph, $a_{ij}>0$ if agent $i$ can obtain information from agent $j$, and $a_{ij}=0$ otherwise. Thus, algorithm (1) requires each agent to share its reference value with its in-neighbors (the agents that receive messages from it) in the first step of the algorithm, resulting in a trivial breach of privacy. Standard observability analysis [2] also shows that when the adjacency matrix of the network topology is known to an agent, that agent can obtain the initial condition of algorithm (1), and consequently, the reference value of all or some of the other agents of the network; see [3] for details.
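For illustration, the sketch below (an arbitrary four-agent directed cycle with unit weights and a stepsize chosen only for this example) runs iteration (1) and highlights that the very first message each agent transmits is exactly its private reference value.

```python
import numpy as np

# Illustrative 4-agent directed cycle with unit weights (strongly connected, weight-balanced).
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A              # Laplacian: Diag(out-degrees) - A
r = np.array([2.0, -1.0, 4.0, 7.0])         # private reference values (placeholder numbers)
delta = 0.2                                  # stepsize (illustrative)

x = r.copy()                                 # algorithm (1) initializes at x^i(0) = r^i
print("messages transmitted at k = 0:", x)   # identical to r: the trivial privacy breach
for _ in range(300):
    x = x - delta * (L @ x)                  # x(k+1) = x(k) - delta * L x(k)
print("limit of x:", x, "  true average:", r.mean())
```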
A popular approach explored to induce privacy preservation in algorithm (1) is encryption, where either a trusted third party generates the public key [4] or the agents share their keys, in which case the network is restricted to a point-to-point or tagged undirected communication framework [5, 6]. Also, [5] requires extra communication channels for key generation purposes. Moreover, in [5, 6], the network must be connected at the first step of communication and, therefore, these schemes are not robust to possible switching in the topology. Differential privacy [7, 8, 9] and additive obfuscation noise methods [10, 11, 12] are other popular techniques to conceal the exact reference value of the agents, but the malicious agent can obtain an estimate of the reference value using a stochastic estimator. In addition, the final convergence point is perturbed in [7, 8, 9, 10] and the convergence rate is altered in [11]. Moreover, these classes of privacy preserving measures cause a noisy transient response for the algorithm, which results in excessive energy expenditure if agents use local controllers to track the consensus trajectory as a reference input. Lastly, the guarantees established in all the methods discussed above hold only for connected undirected graphs. Interested readers can also find privacy preservation methods for the continuous-time implementation of algorithm (1) in [13] and [14].

In this paper, we focus on a solution for the PrivCon problem that does not perturb the final convergence value. We address the PrivCon problem by proposing to use an alternative algorithm (see algorithm (2)) that yields transient and convergence performance similar to that of algorithm (1). We carefully analyze the privacy preservation of this algorithm and show that it intrinsically yields exactly the same privacy preservation guarantees as [10, 11, 12] in terms of which agents' privacy can be preserved. More precisely, we show that, similar to the results in [10, 11, 12], the privacy of any agent that has at least one out-neighbor that is not an out-neighbor of the malicious agent is preserved. In comparison to privacy preservation by use of additive obfuscation noise methods [10, 11, 12], however, algorithm (2) offers a deterministic and stronger sense of privacy preservation, i.e., it meets the privacy preservation definition given below.
Definition 2 (Privacy preservation).
Privacy of an agent is preserved from a malicious agent if the malicious agent cannot obtain any estimate on the reference value of the agent.
To show that the privacy of an agent is preserved in the sense of Definition 2, we show that there exist other values of $\mathsf{r}^i$, arbitrarily different from the true one, that give the same trajectory for the information the malicious agent receives from its out-neighbors. Therefore, the malicious agent cannot distinguish between the true reference value and the arbitrary alternatives. This, in essence, is a stronger notion of privacy preservation than that of $\epsilon$-differential privacy [15] or that of [11], where, even though the malicious agent cannot obtain the exact reference value of the agents, it can still obtain an estimate of it.
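For concreteness, this indistinguishability can be phrased as follows (a sketch in our own notation, where $\mathcal{I}^{1}(k)$ collects all the signals the malicious agent has received up to step $k$ and $\bar{\mathcal{I}}^{1}(k)$ denotes the corresponding quantity in an alternative execution): privacy of agent $i$ is preserved if
$$\forall\,\bar{\mathsf{r}}^{\,i}\in\mathbb{R}\quad\exists\ \text{an admissible execution with reference }\bar{\mathsf{r}}^{\,i}\ \text{ such that }\ \bar{\mathcal{I}}^{1}(k)=\mathcal{I}^{1}(k)\ \ \forall k\in\mathbb{Z}_{\geq 0},$$
so no estimator driven by $\mathcal{I}^{1}$ can discriminate between the true $\mathsf{r}^{i}$ and any alternative $\bar{\mathsf{r}}^{\,i}$.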
In our demonstration study in Section 4, we compare our suggested solution of the PrivCon problem in particular to the additive obfuscation noise method of [11]. [11] is a prominent solution in the class of additive obfuscation noise methods that guarantees exact convergence while not allowing a malicious agent to obtain the exact reference value of the agents under the connectivity condition stated earlier. However, in the framework of [11] information about the reference value is disclosed at two levels. First, as shown in [11], the malicious agent can employ a stochastic observer to obtain an estimate of the reference value of other agents. The normalized variance of this estimator can be very small compared to the size of the reference value of the agents. Moreover, in [11] agents need to coordinate to choose a common parameter in their noise generator and also to use a common variance. The knowledge of the noise parameters is another level of disclosure of an estimate of the reference value of the agents, as the transmitted message of each agent at the initial step of the algorithm is the reference value plus a value generated by its local noise, whose variance is the same for all the agents. Moreover, in choosing the variance value for the noise, agents need to take the size of their local reference values into account to yield meaningful obfuscation. We note that using a noise with large variance results in a more perturbed transient response and a slower convergence. The advantage of our proposed solution is to meet the privacy preservation notion of Definition 2 while rendering a transient and convergence behavior similar to that of algorithm (1), with no need for coordination to choose the parameters of the algorithm and no extra computations. Our notation is standard, though certain pieces of notation are defined as the need arises.
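To make the latter points concrete, the snippet below generates an obfuscation sequence with the decaying, telescoping structure we read [11] as using (the symbols $\phi$ and $\sigma$ and the numerical values are illustrative assumptions, not the values used in [11]): agent $i$ adds $w_i(k)=\phi^{k}\nu_i(k)-\phi^{k-1}\nu_i(k-1)$ to its transmitted state, so the cumulative injected noise telescopes to $\phi^{K}\nu_i(K)$ and vanishes, preserving the exact average asymptotically at the price of a noisy transient and of a common pair $(\phi,\sigma)$ that all agents must agree on.

```python
import numpy as np

rng = np.random.default_rng(0)
phi, sigma, K = 0.9, 1.0, 60          # common decay factor and std all agents must share (illustrative)
nu = rng.normal(0.0, sigma, K + 1)    # i.i.d. Gaussian samples nu_i(0), ..., nu_i(K)

w = np.empty(K + 1)                   # additive obfuscation sequence of agent i
w[0] = nu[0]
for k in range(1, K + 1):
    w[k] = phi**k * nu[k] - phi**(k - 1) * nu[k - 1]

print("cumulative injected noise:", w.sum())         # telescopes to phi^K * nu[K]
print("phi^K * nu[K]            :", phi**K * nu[K])  # vanishes as K grows, so the average is preserved
```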
2 Problem Setting
Our graph theoretic definitions follow [16]. A weighted digraph is a triplet $\mathcal{G}=(\mathcal{V},\mathcal{E},\mathsf{A})$, where $\mathcal{V}=\{1,\dots,N\}$ is the node set, $\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}$ is the edge set, and $\mathsf{A}=[a_{ij}]\in\mathbb{R}^{N\times N}$ is the weighted adjacency matrix defined such that $a_{ij}>0$ if $(i,j)\in\mathcal{E}$, otherwise $a_{ij}=0$. Given an edge $(i,j)\in\mathcal{E}$, $i$ is called an in-neighbor of $j$, and $j$ is called an out-neighbor of $i$. The weighted in-degree and out-degree of each node $i$ are, respectively, $\mathsf{d}_{\mathrm{in}}^{\,i}=\sum_{j=1}^{N}a_{ji}$ and $\mathsf{d}_{\mathrm{out}}^{\,i}=\sum_{j=1}^{N}a_{ij}$. A weighted digraph is undirected if $a_{ij}=a_{ji}$ for all $i,j\in\mathcal{V}$. A digraph is weight-balanced if at every node $i\in\mathcal{V}$, $\mathsf{d}_{\mathrm{out}}^{\,i}=\mathsf{d}_{\mathrm{in}}^{\,i}$. A digraph is strongly connected if there is a directed path from every node to every other node. The Laplacian matrix of a digraph is $\mathsf{L}=\mathsf{Diag}\big(\mathsf{d}_{\mathrm{out}}^{\,1},\dots,\mathsf{d}_{\mathrm{out}}^{\,N}\big)-\mathsf{A}$.
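The helper sketch below (arbitrary example weights, only for illustration) encodes these definitions: weighted in-/out-degrees from the adjacency matrix, a weight-balance check, and the Laplacian $\mathsf{L}$.

```python
import numpy as np

def out_degree(A):            # d_out^i = sum_j a_ij
    return A.sum(axis=1)

def in_degree(A):             # d_in^i = sum_j a_ji
    return A.sum(axis=0)

def is_weight_balanced(A):    # d_out^i == d_in^i at every node
    return np.allclose(out_degree(A), in_degree(A))

def laplacian(A):             # L = Diag(d_out^1, ..., d_out^N) - A
    return np.diag(out_degree(A)) - A

# Example: a 3-node directed cycle with unit weights is strongly connected and weight-balanced.
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])
print(is_weight_balanced(A))
print(laplacian(A))
```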
Let the interaction topology of the agents be a strongly connected and weight-balanced digraph $\mathcal{G}$. We consider the average consensus algorithm
$$x^i(k+1)=x^i(k)+\delta\Big(\!-\alpha\big(x^i(k)-\mathsf{r}^i\big)-\beta\sum_{j=1}^{N}a_{ij}\big(x^i(k)-x^j(k)\big)-v^i(k)\Big),\tag{2a}$$
$$v^i(k+1)=v^i(k)+\delta\,\alpha\beta\sum_{j=1}^{N}a_{ij}\big(x^i(k)-x^j(k)\big),\tag{2b}$$
$$x^i(0)\in\mathbb{R},\qquad \textstyle\sum_{j=1}^{N}v^j(0)=0,\qquad i\in\{1,\dots,N\},\qquad \alpha,\beta\in\mathbb{R}_{>0},\tag{2c}$$
proposed in [17], as a solution for privacy preserving average consensus over $\mathcal{G}$. As discussed in [18], algorithm (2) yields a transient response similar to that of algorithm (1). In our analysis, privacy preservation in the sense of Definition 2 is studied against a malicious agent defined as in Definition 1. Our study is different from the privacy preservation study in [17], where the continuous-time representation of algorithm (2) is considered and the reference values are assumed to be dynamic time-varying signals; the guarantees provided there crucially depend on the time-varying nature of the reference values.
Let $x=(x^1,\dots,x^N)^\top$, $v=(v^1,\dots,v^N)^\top$, $\mathsf{r}=(\mathsf{r}^1,\dots,\mathsf{r}^N)^\top$, and $\mathbf{1}_N=(1,\dots,1)^\top\in\mathbb{R}^N$. Moreover, let $\mathsf{R}\in\mathbb{R}^{N\times(N-1)}$, and $\mathsf{T}=\big[\tfrac{1}{\sqrt{N}}\mathbf{1}_N\;\;\mathsf{R}\big]$ such that $\mathsf{T}^\top\mathsf{T}=\mathsf{T}\,\mathsf{T}^\top=\mathsf{I}_N$.
Then, using the change of variable
$$\bar{x}=\tfrac{1}{\sqrt{N}}\mathbf{1}_N^\top x,\quad x_{2:N}=\mathsf{R}^\top x,\quad \bar{v}=\tfrac{1}{\sqrt{N}}\mathbf{1}_N^\top v,\quad v_{2:N}=\mathsf{R}^\top v,$$
algorithm (1) reads as
$$\bar{x}(k+1)=\bar{x}(k),\qquad x_{2:N}(k+1)=\big(\mathsf{I}_{N-1}-\delta\,\mathsf{L}^{+}\big)\,x_{2:N}(k),$$
and algorithm (2) as
$$\begin{aligned}
\bar{x}(k+1)&=\bar{x}(k)+\delta\big(\!-\alpha(\bar{x}(k)-\bar{\mathsf{r}})-\bar{v}(k)\big), &\quad \bar{v}(k+1)&=\bar{v}(k),\\
x_{2:N}(k+1)&=x_{2:N}(k)+\delta\big(\!-\alpha(x_{2:N}(k)-\mathsf{r}_{2:N})-\beta\,\mathsf{L}^{+}x_{2:N}(k)-v_{2:N}(k)\big), &\quad v_{2:N}(k+1)&=v_{2:N}(k)+\delta\,\alpha\beta\,\mathsf{L}^{+}x_{2:N}(k),
\end{aligned}$$
where $\mathsf{L}^{+}=\mathsf{R}^\top\mathsf{L}\,\mathsf{R}$, $\bar{\mathsf{r}}=\tfrac{1}{\sqrt{N}}\mathbf{1}_N^\top\mathsf{r}$, and $\mathsf{r}_{2:N}=\mathsf{R}^\top\mathsf{r}$. These equivalent representations show the connection between the internal dynamics of algorithms (1) and (2). For a strongly connected and weight-balanced digraph, $\mathsf{L}$ has a simple zero eigenvalue and the rest of its eigenvalues have positive real parts. Then, the eigenvalues of $\mathsf{L}^{+}$ are the nonzero eigenvalues of $\mathsf{L}$. By invoking [18, Lemma S1], we note that the exponential stability of (1) and (2) is guaranteed, respectively, for any $\delta\in(0,\bar{\delta}_1)$ and any $\delta\in(0,\bar{\delta}_2)$, where the bounds $\bar{\delta}_1,\bar{\delta}_2>0$ are determined by the nonzero eigenvalues of $\mathsf{L}$ and, for (2), by the gains $\alpha$ and $\beta$. Given a similar performance, an immediate appeal of algorithm (2) over algorithm (1) is that the reference value of the agents is not trivially transmitted to their in-neighbors. The question that we address in the subsequent section is whether the malicious agent can use its knowledge set to compute the reference values of other agents when the agents implement algorithm (2).
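As a numerical sanity check of this discussion, the sketch below (a minimal example, assuming the form of (2) above with an illustrative directed cycle, gains $\alpha=\beta=1$, and stepsize $\delta=0.1$) runs both iterations: algorithm (2) starts from an arbitrary $x(0)$ with $\sum_{j}v^j(0)=0$, never transmits $\mathsf{r}^i$ at $k=0$, and settles at the same average as algorithm (1).

```python
import numpy as np

# Illustrative 4-agent directed cycle (strongly connected, weight-balanced).
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
r = np.array([2.0, -1.0, 4.0, 7.0])          # private reference values (placeholders)
delta, alpha, beta = 0.1, 1.0, 1.0           # stepsize and gains (illustrative)

x1 = r.copy()                                # algorithm (1): x(0) = r
x2, v2 = np.zeros(4), np.zeros(4)            # algorithm (2): arbitrary x(0), v(0) summing to zero

for _ in range(500):
    x1 = x1 - delta * (L @ x1)
    x2_new = x2 + delta * (-alpha * (x2 - r) - beta * (L @ x2) - v2)
    v2 = v2 + delta * alpha * beta * (L @ x2)
    x2 = x2_new

print("algorithm (1) limit:", x1)
print("algorithm (2) limit:", x2)
print("true average       :", r.mean())
```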
3 Privacy Preservation Analysis
In this section we show that algorithm (2) intrinsically yields the same privacy preservation guarantees as in [10, 11, 12] in terms of which agents' privacy can be preserved. The guarantees of algorithm (2), however, are valid also for strongly connected and weight-balanced digraphs. Moreover, unlike the privacy preservation of [10, 11, 12], which is obtained by using additive noises and comes with disclosure of a stochastic estimate of the private value of the agents, algorithm (2) offers a deterministic and stronger sense of privacy preservation, i.e., it meets the privacy preservation objective defined in Definition 2. In what follows, without loss of generality, we assume that agent $1$ is the malicious agent. The initialization condition $\sum_{j=1}^{N}v^j(0)=0$ is trivially satisfied when every agent uses $v^i(0)=0$. Other choices need coordination among agents with no strong guarantee that the choices remain private. Thus, we assume that $v^i(0)=0$, $i\in\{1,\dots,N\}$, and that this choice is known to the malicious agent. Moreover, $\mathcal{N}_{\mathrm{in}}^{\,i}$ and $\mathcal{N}_{\mathrm{out}}^{\,i}$ are, respectively, the set of in-neighbors and out-neighbors of agent $i$.
Lemma 3.1 (A sufficient condition for privacy preservation when algorithm (2) is implemented).
Let the interaction topology of the agents implementing algorithm (2) with $\alpha,\beta\in\mathbb{R}_{>0}$, initialized at $v^i(0)=0$ and $x^i(0)\in\mathbb{R}$, $i\in\{1,\dots,N\}$, be a strongly connected and weight-balanced digraph. Let the knowledge set of the malicious agent $1$ be as in Definition 1. The privacy of any agent $i\in\{2,\dots,N\}$ is preserved from agent $1$ if agent $i$ has at least one out-neighbor that is not an out-neighbor of agent $1$.
Proof.
To prove the statement, we consider the worst case where the agent whose privacy is being evaluated is an out-neighbor of agent $1$, i.e., agent $1$ receives the transmitted message of that agent. Without loss of generality, let this agent be labeled agent $2$. Let agent $3$ be the out-neighbor of agent $2$ that is not an out-neighbor of agent $1$, i.e., $3\in\mathcal{N}_{\mathrm{out}}^{\,2}$ but $3\notin\mathcal{N}_{\mathrm{out}}^{\,1}$. To show privacy preservation for agent $2$, we show that there are infinitely many executions of algorithm (2) with different values of and that yield exactly the same signal received by agent $1$. To this end, consider two executions of algorithm (2). Let and , . Moreover, let the reference value of the agents in the first execution be and in the second execution be such that if , otherwise . Moreover, , i.e., . Lastly, , for . Next, we show that for any there exists , , such that , for . Therefore, the signals received by agent $1$ in these two distinct executions are exactly the same and agent $1$ cannot distinguish between them.
The error dynamics at each agent reads as
(3a)
(3b)
(3c) |
If for any , then , and for satisfy the error dynamics (3) for any . On the other hand, for
(4a)
(4b)
and since , (3) for agent reads as
(5a)
(5b)
Note that by adding (4a) and (5a) we have . Considering and also, it follows from adding up (5b) and (4b) for that
(6)
Then, since , it follows from (5b), (6), and (4b) that
(7)
Since , it follows from (7) that indeed . Therefore, we can conclude that . It is then proven that with an unbounded error in the initial state of , varies unboundedly as well, yet the malicious agent receives the same signals from its out-neighbors, i.e., , , for . Note here that since , it follows from (6) that also converges to zero as $k\to\infty$, showing that all the agents converge to the same final average point in both executions. ∎
Our next result shows that when the malicious agent has access to the messages transmitted to and from an agent $i$, similar to the methods proposed in [10, 11, 12], the privacy of agent $i$ is not preserved. To establish this result, we show that the malicious agent can implement an observer to obtain $\mathsf{r}^i$.
Lemma 3.2 (A sufficient condition for loss of privacy preservation when algorithm (2) is implemented).
Let the interaction topology of the agents implementing algorithm (2), initialized at $v^i(0)=0$ and $x^i(0)\in\mathbb{R}$, $i\in\{1,\dots,N\}$, be a strongly connected and weight-balanced digraph. Let the knowledge set of the malicious agent $1$ be as in Definition 1. If agent $1$ is an in-neighbor of agent $i$ and of all its out-neighbors, then at any $k\in\mathbb{Z}_{\geq0}$ agent $1$ can obtain $\mathsf{r}^i$ using $x^i(k)$, $x^i(k+1)$, and the messages agent $i$ has received from its out-neighbors up to step $k$.
Proof.
Lemmas 3.1 and 3.2 give us the following main statement on the privacy preservation guarantees of algorithm (2).
Theorem 3.3 (Privacy Preservation guarantee of algorithm (2)).
Let the interaction topology of the agents implementing algorithm (2), initialized at $v^i(0)=0$ and $x^i(0)\in\mathbb{R}$, $i\in\{1,\dots,N\}$, be a strongly connected and weight-balanced digraph. Let the knowledge set of the malicious agent $1$ be as in Definition 1. Privacy of agent $i\in\{2,\dots,N\}$ is preserved from agent $1$ if and only if agent $i$ has an out-neighbor that is not an out-neighbor of agent $1$.
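Since the condition in Theorem 3.3 is purely topological, it can be checked directly from the adjacency matrix. The sketch below does this (the example digraph is arbitrary; following the convention of Section 2, the out-neighbors of agent $i$ are the indices $j$ with $a_{ij}>0$, and, in our reading, the malicious agent itself is excluded since its own signal is trivially known to it).

```python
import numpy as np

def out_neighbors(A, i):
    """Out-neighbors of agent i, i.e., agents j with a_ij > 0 (the agents i receives from)."""
    return {j for j in range(A.shape[0]) if A[i, j] > 0 and j != i}

def privacy_preserved(A, i, malicious=0):
    """Theorem 3.3 check: agent i keeps its privacy from the malicious agent iff it has an
    out-neighbor that is neither an out-neighbor of the malicious agent nor that agent itself."""
    return bool(out_neighbors(A, i) - out_neighbors(A, malicious) - {malicious})

# Illustrative 4-agent digraph; agent 0 plays the role of the malicious agent.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1],
              [1, 1, 0, 0]], dtype=float)
for i in range(1, A.shape[0]):
    print(f"privacy of agent {i} preserved:", privacy_preserved(A, i))
```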
Remark 3.1 (Privacy preservation for an initialization-free average consensus algorithm for connected undirected graphs).
We can show through similar arguments that the privacy preservation guarantees of algorithm (2) hold for the average consensus algorithm (8) as well,
(8a)
(8b)
(8c)
Algorithm (8) does not require special initialization of $v^i(0)$, $i\in\{1,\dots,N\}$; therefore, it is robust to agent drop-off. However, this algorithm works only over connected undirected graphs, and it also requires an extra message exchange between the neighboring agents.
So far, we have shown the intrinsic privacy property of algorithm (2). The natural question now is whether we can loosen the topology restriction of Theorem 3.3 by using additive perturbation signals to preserve privacy for agents all of whose communication signals (incoming and outgoing) are available to the malicious agent. Herein, we show that this mechanism makes no contribution and the malicious agent can still derive the local information asymptotically. A comprehensive protection via perturbation signals suggests adding a signal to the transmitted message of every agent $i$, and additive signals to the right-hand sides of (2a) and (2b), respectively. However, without loss of generality, the effect of all these signals can be captured by adding only a dynamic perturbation signal $w^i(k)$, $i\in\{1,\dots,N\}$, to the right-hand side of (2b) as follows,
$$x^i(k+1)=x^i(k)+\delta\Big(\!-\alpha\big(x^i(k)-\mathsf{r}^i\big)-\beta\sum_{j=1}^{N}a_{ij}\big(x^i(k)-x^j(k)\big)-v^i(k)\Big),\tag{9a}$$
$$v^i(k+1)=v^i(k)+\delta\,\alpha\beta\sum_{j=1}^{N}a_{ij}\big(x^i(k)-x^j(k)\big)+w^i(k).\tag{9b}$$
Instead of using a particular perturbation signal as in [10, 11, 12], we follow the approach in [19] and first investigate for what set of perturbation signals, which we call admissible perturbation signals, the final convergence point of the algorithm is not perturbed. It is natural that any necessary condition defining the admissible perturbation signals is known to all the agents. Then, we show that by knowing a necessary condition on the perturbation, the malicious agent can employ an observer to obtain the reference input regardless of the exact additive admissible perturbation signal the agent uses.
Theorem 3.4 (A necessary condition on admissible perturbation signals).
Proof.
Remark 3.2.
Condition (10), which defines the admissible perturbation signals, couples the choices of all the agents. In a decentralized setting, since there is no third party to assign local perturbation signals $w^i(k)$, for the agents to satisfy (10) without any coordination among themselves, they choose their admissible perturbations according to
$$\sum_{k=0}^{\infty}w^i(k)=0,\qquad i\in\{1,\dots,N\}.\tag{13}$$
In our next statement, we investigate whether a malicious agent can derive the local reference value of an agent if it knows condition (13) and all the transmitted messages to and from the agent.
Theorem 3.5 (Use of locally chosen perturbation signals in (9) does not increase privacy protection level).
Let the interaction topology of the agents implementing algorithm (9) with $\alpha,\beta\in\mathbb{R}_{>0}$, initialized at $v^i(0)=0$ and $x^i(0)\in\mathbb{R}$, $i\in\{1,\dots,N\}$, be a strongly connected and weight-balanced digraph. Suppose the agents implement locally chosen admissible perturbation signals that satisfy (13), and let the knowledge set of the malicious agent $1$ be as in Definition 1. If agent $1$ is an in-neighbor of agent $i$ and of all its out-neighbors, then agent $1$ can employ the observer
(14a)
(14b)
(14c)
initialized at , to obtain an estimate that converges to $\mathsf{r}^i$ as $k\to\infty$.
Proof.
We have thus shown that an additive perturbation signal makes no contribution to the level of privacy provided by algorithm (2).
4 Demonstration Examples
To provide a context to appreciate the implications of our privacy preserving solution for average consensus problem versus the method of [11], we consider an optimal power dispatch (OPD) problem over the undirected connected graph in Fig. 1(b). [11] is a representative of the common method of privacy preservation via additive noises for algorithm (1). We conduct our study over an undirected connected graph since the results in [11] are established only for such graphs. In our OPD problem a group of generators interacting over the graph of Fig. 1(b) are expected to meet the demand of MW () while minimizing their collective cost . The parameters of the local cost , , are chosen from the IEEE bus 118 generator list according to corresponding components of , . Here and are supposed to be the private value of each agent . The optimal solution of this problem for each agent is given by [20]. To generate this solution in a distributed manner, let us assume that these agents employ two static average consensus algorithms to obtain and .
Then, by knowing and , agents have all the information to obtain . Fig. 1(b) is used in the numerical example of [11], where agent is the malicious agent, as we also assume here. The privacy of agents is preserved if the agents use algorithm (1) augmented by the additive noise method of [11, Corollary 1] (hereafter referred to as method M1) or algorithm (2) (see Theorem 3.3). Let us assume that when using method M1, the agents use the same as in [11, Section VI] and, given the value of their parameters, they agree to use an additive Gaussian noise with mean and standard deviation . Note that these values should be common to all agents, and thus the agents need to coordinate to choose them. Fig. 2 shows the trajectories generated by method M1 for two different executions (each using a different noise realization), overlaid on the trajectories generated by algorithm (2). As seen, the trajectories of method M1 are quite noisy and convergence is slower. Using method M1, the malicious agent cannot obtain the exact value of and because, as shown in [11, Fig. 4], the covariance of the maximum likelihood estimator that agent uses to obtain the private reference value of has a non-vanishing steady-state value, see Fig. 3. However, agent knows that in % of the times ( rule) the error rate in obtaining each and is, respectively, and , according to the computed normalized covariances as in Fig. 3. Given the numerical values of these parameters, this level of protection gives the malicious agent a good approximation of the optimal operating point of the supposedly private agents. On the other hand, the privacy preservation guarantees that algorithm (2) provides for agents are stronger, as the malicious agent cannot obtain any estimate on the private values of these agents. See Fig. 4 for an example scenario, where two alternative cases of reference input signals for , denoted by , and , where , result in exactly the same transmitted message to the malicious agent . Therefore, agent cannot distinguish between these different references for agents . Here, are the actual inputs given in the OPD problem definition earlier.
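For reference, the post-consensus computation each generator would carry out is sketched below, assuming quadratic local costs $C_i(P_i)=a_iP_i^2+b_iP_i+c_i$ and a total demand $P_d$ known to every agent (the coefficient values and the demand here are placeholders, not the IEEE 118-bus data used in the figures): by equal incremental cost, $P_i^\star=(\lambda^\star-b_i)/(2a_i)$ with $\lambda^\star=\big(P_d+\sum_j b_j/(2a_j)\big)/\sum_j 1/(2a_j)$, and both sums are $N$ times network averages that can be obtained by static average consensus.

```python
import numpy as np

# Placeholder quadratic cost coefficients and demand (not the IEEE 118-bus values).
a = np.array([0.010, 0.012, 0.008, 0.011, 0.009])   # C_i(P) = a_i P^2 + b_i P + c_i
b = np.array([10.0, 9.5, 11.0, 10.2, 9.8])
Pd = 800.0                                          # total demand in MW, known to every agent
N = len(a)

# The two quantities the agents would obtain by running two static average consensus algorithms.
avg_b_over_2a = np.mean(b / (2.0 * a))
avg_inv_2a = np.mean(1.0 / (2.0 * a))

lam_star = (Pd / N + avg_b_over_2a) / avg_inv_2a    # optimal incremental cost lambda*
P_star = (lam_star - b) / (2.0 * a)                 # each generator's optimal dispatch

print("lambda* =", lam_star)
print("sum of P* =", P_star.sum(), "(equals the demand Pd)")
```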




5 Conclusion
We considered the problem of privacy preservation in the static average consensus problem. This problem is normally solved by proposing privacy preservation mechanisms that are added to the popular first-order Laplacian-based algorithm. These mechanisms come with computational overheads or require pre-coordination among the agents to choose the parameters of the algorithm. They also alter the transient response of the algorithm. In this paper we showed that an alternative algorithm, proposed in the literature in the context of the dynamic average consensus problem, can be a simple solution for privacy preservation in the average consensus problem. The advantage of our proposed solution over existing results in the literature is that it provides a stronger notion of privacy while rendering a transient and convergence behavior similar to that of the well-known Laplacian average consensus algorithm, with no need for coordination to choose the parameters of the algorithm and no extra computations.
References
- [1] R. Olfati-Saber and R. M. Murray, “Consensus problems in networks of agents with switching topology and time-delays,” IEEE Transactions on automatic control, vol. 49, no. 9, pp. 1520–1533, 2004.
- [2] R. E. Kalman, “On the general theory of control systems,” in Proceedings First International Conference on Automatic Control, Moscow, USSR, 1960.
- [3] S. Pequito, S. Kar, S. Sundaram, and A. P. Aguiar, “Design of communication networks for distributed computation with privacy guarantees,” in 53rd IEEE Conference on Decision and Control, pp. 1370–1376, IEEE, 2014.
- [4] C. N. Hadjicostis, “Privacy preserving distributed average consensus via homomorphic encryption,” in 2018 IEEE Conference on Decision and Control (CDC), pp. 1258–1263, IEEE, 2018.
- [5] M. Ruan, H. Gao, and Y. Wang, “Secure and privacy-preserving consensus,” IEEE Transactions on Automatic Control, vol. 64, no. 10, pp. 4035–4049, 2019.
- [6] T. Yin, Y. Lv, and W. Yu, “Accurate privacy preserving average consensus,” IEEE Transactions on Circuits and Systems II: Express Briefs, 2019.
- [7] E. Nozari, P. Tallapragada, and J. Cortés, “Differentially private average consensus: Obstructions, trade-offs, and optimal algorithm design,” Automatica, vol. 81, pp. 221–231, 2017.
- [8] Z. Huang, S. Mitra, and G. Dullerud, “Differentially private iterative synchronous consensus,” in Proceedings of the 2012 ACM workshop on Privacy in the electronic society, pp. 81–90, 2012.
- [9] E. Nozari, P. Tallapragada, and J. Cortés, “Differentially private average consensus with optimal noise selection,” IFAC-PapersOnLine, vol. 48, no. 22, pp. 203–208, 2015.
- [10] N. E. Manitara and C. N. Hadjicostis, “Privacy-preserving asymptotic average consensus,” in 2013 European Control Conference (ECC), pp. 760–765, IEEE, 2013.
- [11] Y. Mo and R. M. Murray, “Privacy preserving average consensus,” IEEE Transactions on Automatic Control, vol. 62, no. 2, pp. 753–765, 2016.
- [12] J. He, L. Cai, C. Zhao, P. Cheng, and X. Guan, “Privacy-preserving average consensus: privacy analysis and algorithm design,” IEEE Transactions on Signal and Information Processing over Networks, vol. 5, no. 1, pp. 127–138, 2018.
- [13] C. Altafini, “A system-theoretic framework for privacy preservation in continuous-time multiagent dynamics,” Automatica, vol. 122, p. 109253, 2020.
- [14] N. Rezazadeh and S. S. Kia, “Privacy preservation in a continuous-time static average consensus algorithm over directed graphs,” in American Control Conference, pp. 5890–5895, IEEE, 2018.
- [15] C. Dwork, A. Roth, et al., “The algorithmic foundations of differential privacy,” Foundations and Trends® in Theoretical Computer Science, vol. 9, no. 3–4, pp. 211–407, 2014.
- [16] F. Bullo, J. Cortes, and S. Martinez, Distributed control of robotic networks: a mathematical approach to motion coordination algorithms, vol. 27. Princeton University Press, 2009.
- [17] S. S. Kia, J. Cortés, and S. Martínez, “Dynamic average consensus under limited control authority and privacy requirements,” International Journal on Robust and Nonlinear Control, vol. 25, no. 13, pp. 1941–1966, 2015.
- [18] S. S. Kia, B. Van Scoy, J. Cortes, R. A. Freeman, K. M. Lynch, and S. Martinez, “Tutorial on dynamic average consensus: The problem, its applications, and the algorithms,” IEEE Control Systems Magazine, vol. 39, no. 3, pp. 40–72, 2019.
- [19] N. Rezazadeh and S. S. Kia, “Privacy preservation in continuous-time average consensus algorithm via deterministic additive perturbation signals,” 2020. Available at https://arxiv.org/pdf/1904.05286.pdf.
- [20] A. J. Wood, B. F. Wollenberg, and G. B. Sheblé, Power Generation, Operation, and Control. John Wiley & Sons, 2013.