Privacy-Preserved Average Consensus Algorithms with Edge-based Additive Perturbations
Abstract
In this paper, we consider the privacy preservation problem in both discrete- and continuous-time average consensus algorithms with strongly connected and balanced graphs, against either internal honest-but-curious agents or external eavesdroppers. A novel algorithm is proposed, which adds edge-based perturbation signals to the process of consensus computation. Our algorithm can be divided into two phases: a coordinated scrambling phase, which is for privacy preservation, and a convergence phase. In the scrambling phase, each agent is required to generate some perturbation signals and add them to the edges leading out of it. In the convergence phase, the agents update their states following a normal updating rule. It is shown that an internal honest-but-curious agent can obtain the privacy of a target agent if and only if no agents other than the attacker can communicate with the target agent. As for external eavesdroppers, it is proved that such attackers can never obtain any agent's privacy.
Index Terms:
Multi-agent systems, average consensus, privacy preservation, edge-based perturbation signals.
I Introduction
In the last decade, preserving privacy in average consensus has received increasing attention in the systems and control community. Traditional consensus algorithms, e.g., those in [1, 2, 3], require agents to directly exchange their state information for computation. This brings security issues in the sense that the agents' initial states would be easily revealed to their out-neighbors and possible external adversaries. The disclosure of agents' initial states may be undesirable, since they usually contain important and sensitive information. An example is that some participants of an organization want to reach a common opinion with a consensus algorithm but do not want their own opinions known by others [4]. Therefore, as far as cyber security is concerned, consensus algorithms should be secure and able to prevent internal or external adversaries from obtaining agents' privacy, i.e., their initial states.
The privacy preserving average consensus problem has been extensively investigated by many researchers. A natural idea is to encrypt the transmitted messages so that they cannot be accessed by the public or third parties [5, 6]. By making use of the Paillier cryptosystem and designing an ingenious communication mechanism, privacy preservation was successfully embedded in pairwise communication in [7]. Ratio consensus and homomorphic cryptosystems are combined in [8] to preserve privacy. Nevertheless, cryptology-based methods share a common weakness: they usually place a heavy communication and computation burden on the network.
Many works, e.g., [9, 4, 10, 11, 12, 13, 14, 15], protect privacy by adding random noises or deterministic perturbation signals to the transmitted messages. Some of them utilize a tool called differential privacy [16, 17]. For instance, [9] and [18] added random noises drawn from a Laplace distribution to messages in order to protect the real initial states. It should be pointed out that in the existing works based on differential privacy, accurate average consensus cannot be reached. To overcome this drawback, some researchers ingeniously designed correlated noises. In [4] and [12], a sequence of zero-sum correlated noises was added to the messages to preserve privacy. Nevertheless, these correlated-noise-based works require undirected graphs and do not consider external adversaries. Instead of adding random noises, the authors in [11] inserted well-designed deterministic perturbation signals into the classic consensus algorithm. In [15], the authors proposed an interesting privacy-preserving consensus algorithm, which adds edge-based perturbations. There are also other methods to preserve privacy, such as state decomposition [19], output masking [20], hot-pluggable methods [21] and observability-based methods [22, 23, 24].
In this paper, we intend to propose a novel privacy preserving consensus algorithm, which can achieve accurate average consensus for a broader class of network graphs and under a much more relaxed privacy-preserving condition, compared to the existing literature. It is observed that in the aforementioned works such as [4, 12, 11], each agent adds noises or perturbation signals to its original state and then broadcasts the obfuscated value to all its neighboring agents. The privacy preserving performance, nevertheless, might be compromised, since the heterogeneity of the network, in the sense that the agents usually have different numbers of neighbors, is not fully exploited. Motivated by this observation, in this paper we propose to insert well-designed perturbation signals into the communication edges of the network. This method of adding edge-based perturbation signals, different from the node-based methods in the aforementioned works, brings more degrees of design freedom.
Specifically, in this paper we consider the privacy preservation problem in both discrete- and continuous-time average consensus algorithms with strongly connected and balanced graphs, against either internal or external attackers. By the proposed edge-based method, each agent designs a different perturbation signal for every edge leading out of it and then sends the corrupted values to its neighbors. A distinct feature is that the perturbation signals do not need to be added all the time. In the discrete-time case, each agent only adds perturbation signals at the first iteration, while in the continuous-time case the perturbation signals are injected only over a finite time interval. It is shown that an internal honest-but-curious agent can obtain the privacy of a target agent if and only if no agents other than the internal attacker can communicate with the target agent. It is further proved that external eavesdroppers cannot obtain any agent's privacy.
Compared to the existing privacy preserving average consensus algorithms, the algorithm proposed in this paper has several advantages. In contrast to the differential-privacy-based methods in [9, 18], our method can achieve accurate average consensus. Besides, our algorithm is valid when the graph is a balanced directed graph, while most existing works, e.g., [7, 4, 12, 19], require undirected graphs. Compared to those in [11], our algorithm might perform better against internal adversaries. In [11], an internal attacker can obtain an agent's privacy if and only if the agent itself and all its in-neighbors are the attacker's in-neighbors. By contrast, in the current paper the privacy disclosure happens only when the internal attacker is the only agent that can communicate with the target agent. Note that [15] also proposed an edge-based privacy-preserving consensus algorithm, which requires undirected graphs and considers only internal attackers. The algorithm in this paper, nevertheless, is simpler and is applicable to the case when the graph is directed and balanced and when external attackers exist. Moreover, the algorithm design and privacy analysis in this paper are based on a system-theoretic framework, significantly different from those in [15], which are from a perspective of probability analysis.
The rest of this paper is organized as follows. The privacy preservation problem is formulated in Section II. A discrete-time privacy preserving average consensus algorithm is proposed and its properties are analyzed in Section III. Extensions to the continuous-time case are considered in Section IV. Numerical simulations are conducted in Section V to demonstrate the effectiveness of our discrete-time algorithm. Finally, this paper is concluded in Section VI.
II Problem Formulation
In this paper, we consider a network of N agents. The communication among the agents is modeled by a directed graph G = (V, E), consisting of a node set V = {1, …, N} and an edge set E ⊆ V × V. If (i, j) ∈ E, it means that agent i can send information to agent j, in which case agent i is said to be an in-neighbor of agent j and agent j is said to be an out-neighbor of agent i. A graph with the property that |N_i^in| = |N_i^out|, ∀i ∈ V, is said to be balanced, where |N_i^in| (respectively, |N_i^out|) denotes the cardinality of agent i's in-neighbor set N_i^in (respectively, out-neighbor set N_i^out). In such a graph, define each node's degree as d_i = |N_i^in| = |N_i^out|.
Assumption 1
The graph is strongly connected and balanced.
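As a quick sanity check of these two graph conditions, the sketch below (Python with NumPy; the adjacency convention A[i, j] = 1 iff agent j can send to agent i, and all names, are illustrative choices rather than the paper's notation) tests whether a directed graph is balanced and strongly connected:

```python
import numpy as np

def is_balanced(A):
    # In-degree of node i = i-th row sum; out-degree of node i = i-th column sum
    # (convention: A[i, j] = 1 iff agent j can send to agent i).
    return np.array_equal(A.sum(axis=1), A.sum(axis=0))

def is_strongly_connected(A):
    # Node j reaches node i within n-1 hops iff (A + I)^(n-1)[i, j] > 0.
    n = A.shape[0]
    R = np.linalg.matrix_power(A + np.eye(n), n - 1)
    return bool(np.all(R > 0))

# Directed ring 0 -> 1 -> 2 -> 3 -> 0: balanced and strongly connected.
ring = np.zeros((4, 4))
for j in range(4):
    ring[(j + 1) % 4, j] = 1.0

# Directed chain 0 -> 1 -> 2 -> 3: neither balanced nor strongly connected.
chain = np.zeros((4, 4))
for j in range(3):
    chain[j + 1, j] = 1.0

print(is_balanced(ring), is_strongly_connected(ring))    # True True
print(is_balanced(chain), is_strongly_connected(chain))  # False False
```

The reachability test uses the standard fact that the (i, j) entry of (A + I)^(n−1) counts walks of length at most n−1 from j to i.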
Two types of attackers are considered in this paper. The first type is internal honest-but-curious agents. These attackers are agents in the network that implement our algorithm correctly but try to infer other agents' initial states from the information they collect. The second type is external eavesdroppers. These attackers exist outside the network and do not take part in the process of consensus computation, but they are able to hack into communication links and gain access to all information transmitted among neighboring agents.
Lemma 1 ([1, 25])
For a strongly connected and balanced graph, the following updating rule guarantees accurate average consensus:

x_i(k+1) = x_i(k) + h ∑_{j=1}^{N} a_{ij} (x_j(k) − x_i(k)),   (1)

where h ∈ (0, 1/max_i d_i) is a constant and a_{ij} denotes the (i, j)-th entry of the adjacency matrix A, defined as a_{ij} = 1 if (j, i) ∈ E and a_{ij} = 0 otherwise.
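As an illustration of Lemma 1, the following sketch (Python with NumPy; the directed ring, the step size, and all names are illustrative choices, not values from the paper) runs the updating rule on a balanced, strongly connected digraph and converges to the exact initial average:

```python
import numpy as np

N = 4
A = np.zeros((N, N))
for j in range(N):
    A[(j + 1) % N, j] = 1.0        # directed ring 0->1->2->3->0; A[i, j] = 1 iff j sends to i
d = A.sum(axis=1)                   # in-degrees (equal to out-degrees: balanced)

h = 0.4                             # step size, h < 1/max_i d_i = 1
x = np.array([1.0, 5.0, 3.0, -1.0])
avg = x.mean()                      # exact consensus value: 2.0

for _ in range(200):
    x = x + h * (A @ x - d * x)     # x_i += h * sum_j a_ij (x_j - x_i)

print(np.round(x, 6))               # every entry ≈ 2.0
```

Because the graph is balanced, the column sums of A equal d, so the sum of states is preserved at every iteration and the consensus value is exactly the initial average.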
Lemma 2 ([1, 25])
For a strongly connected and balanced graph, average consensus is achieved under the following continuous-time updating rule:

ẋ_i(t) = β ∑_{j=1}^{N} a_{ij} (x_j(t) − x_i(t)),   (2)

where β > 0 is a constant.
The consensus algorithms (1) and (2) cannot preserve privacy. For example, it is obvious in (1) that all agents' initial state information will be transmitted at the first iteration, implying that an agent's privacy will be revealed to its out-neighbors and all agents' privacy will be revealed to external eavesdroppers.
The goal of this paper is to design a novel consensus algorithm that can fulfill two tasks: i) to achieve accurate average consensus, and ii) to prevent the aforementioned two types of attackers from obtaining agents’ privacy.
III Discrete-time Privacy Preserving Updating Rule and Privacy Analysis
In this section, we consider the discrete-time consensus case.
The proposed algorithm is stated as follows. At the first iteration, each agent generates some perturbation signals and adds them to the edges leading out of it. Specifically, for each agent i, if (i, j) ∈ E, then agent i chooses a real number θ_{ij} and transmits x_i(0) + θ_{ij} to its out-neighbor, agent j. All the agents then update their states as

x_i(1) = x_i(0) + ε ∑_{j=1}^{N} a_{ij} (x_j(0) + θ_{ji} − x_i(0)),   (3)

where ε > 0 is a constant and the perturbation signals satisfy the zero-sum condition of Remark 1. For k ≥ 1, all agents transmit their true state values and implement the algorithm in Lemma 1, i.e.,

x_i(k+1) = x_i(k) + h ∑_{j=1}^{N} a_{ij} (x_j(k) − x_i(k)),  k ≥ 1,   (4)

where h ∈ (0, 1/max_i d_i) is a constant.
The algorithm is summarized in Algorithm 1.
Theorem 1
By Algorithm 1, accurate average consensus is achieved.
Proof 1
Firstly, we prove that the first iteration (3) does not change the sum of the agents' states. Define the perturbation matrix Θ = [θ_{ij}] ∈ ℝ^{N×N}, where θ_{ij} is defined to be 0 if (i, j) ∉ E. Note that Θ 1_N = 0; we thus have 1_N^T Θ^T 1_N = 0, where 1_N is a vector with unitary elements. By definition, we rewrite the first iteration (3) as

x(1) = (I_N − εL) x(0) + ε Θ^T 1_N,   (5)

where x = [x_1, …, x_N]^T, I_N denotes the identity matrix, and L = [l_{ij}] represents the Laplacian matrix, defined as l_{ii} = d_i and l_{ij} = −a_{ij} if i ≠ j. Note that as the graph is balanced, we have 1_N^T L = 0, and hence

1_N^T x(1) = 1_N^T x(0) − ε 1_N^T L x(0) + ε 1_N^T Θ^T 1_N = 1_N^T x(0),   (6)

implying that the sum of the agents' states is unchanged after the first iteration.
Remark 1
The injected perturbations are locally zero-sum, i.e., ∑_{j∈N_i^out} θ_{ij} = 0 for each agent i ∈ V. This is vital to guaranteeing the accuracy of the final convergence value.
III-A Privacy Preservation Against Internal Honest-but-curious Agents
The privacy preservation property against internal honest-but-curious agents is analyzed first. We assume that there exists only one internal honest-but-curious agent in the network. Nevertheless, it is worth noting that our results can be easily extended to collaborative curious agents by regarding them as a super node.
We should first define what information the attacker can obtain. It is assumed that every internal agent (curious or not) can store all the information transmitted to it and knows the graph topology and the algorithm parameters. Thus, for any agent, we define the set of all information that it can access as its information set, which is given as
(7)
Inspired by [11], we present a lemma before moving forward. Without loss of generality, we arbitrarily choose an agent in the network and label it as agent 1. As the graph is strongly connected and balanced, agent 1's in-neighbor set is not empty. We arbitrarily choose an agent from it and label it as agent 2, and collect all remaining agents in a third set. In a compatible manner, the state vector and system matrices are partitioned into the following subblock matrices:
(8)
Lemma 3
Consider two implementations of our algorithm. In the first implementation, the agents use their original initial states and generated perturbation signals. In the second implementation, agents' initial states are
(9)
and the perturbations are given as
(10)
Then, in both implementations, accurate average consensus can be achieved and the convergence values are the same. Moreover, for any agent other than agents 1 and 2, its information sets in the two implementations are the same.
Proof 2
By (10), we see that the zero-sum condition still holds, implying that the modified signals are a group of possible perturbation signals. Then by Theorem 1, accurate average consensus can be guaranteed in both implementations. Moreover, from (9) we know that the sum of the initial states is unchanged. Therefore, the convergence values are the same.
For any agent in the third set, the key point in comparing its information sets in the two implementations is to focus on the messages transmitted at the first iteration. By (9) and (10), we have
(11)
implying that, for any agent in the third set, all the messages it transmits and receives at the first iteration are the same in the two implementations.
Now we focus on how the two systems evolve. By (5), (9) and (10), we have
(12)
implying that the states of the two systems coincide after the first iteration. Hereafter, the two systems are exactly the same at every iteration, and so are the transmitted messages in the two implementations from the second iteration onwards.
The proof is then completed.
Remark 2
As shown in Lemma 3, the main difference between the two implementations lies in the initial states and perturbation signals of agents 1 and 2. However, this difference does not influence the agents in the third set at all, as their state trajectories and collected information are unchanged. This fact actually implies that those agents cannot detect any such variation. For any agent in the third set and a given information set, there exist infinitely many possible initial conditions and corresponding perturbation signals of agents 1 and 2 that lead to the same information set.
Theorem 2
For any agent in the network, the internal honest-but-curious agent can evaluate its initial value if and only if the following condition holds:
(13)
Proof 3
(Necessity) When (13) holds, the target agent can only communicate with the internal honest-but-curious agent (see Fig. 2). From Algorithm 1, we have
(14)
Combining (14) with the information it has collected, the attacker can compute the target agent's initial value with the following equation:
(15)
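To make the necessity argument concrete, the following sketch (Python; the 4-node graph, the inversion formula, and all names are illustrative, built on a reconstructed first iteration x_i(1) = x_i(0) + ε∑_j a_{ij}(x_j(0) + θ_{ji} − x_i(0)) with perturbations zero-sum over each sender's out-edges) shows the attacker recovering a target's initial state when it is the target's only neighbor:

```python
import numpy as np

# Agents: 0 = target, 1 = attacker (honest-but-curious), 2 and 3 = others.
# Edges: 0<->1, 1->2, 2->3, 3->1.  Every node has equal in- and out-degree.
N = 4
A = np.zeros((N, N))               # A[i, j] = 1 iff agent j sends to agent i
for (snd, rcv) in [(0, 1), (1, 0), (1, 2), (2, 3), (3, 1)]:
    A[rcv, snd] = 1.0
d = A.sum(axis=1)

x0 = np.array([7.0, 2.0, -4.0, 1.0])
eps = 0.3

# Zero-sum perturbations over each sender's out-edges.  Agents 0, 2, 3 have a
# single out-neighbor, so their perturbation is forced to 0; agent 1 splits +/-c.
Theta = np.zeros((N, N))
c = 1.7
Theta[1, 0], Theta[1, 2] = c, -c

x1 = x0 + eps * (A @ x0 + Theta.sum(axis=0) - d * x0)   # scrambled first iteration

# At k = 1 agent 0 transmits its true state x1[0] to agent 1.  The attacker
# knows eps, its own state x0[1], and its own perturbation Theta[1, 0]; since
# agent 0 hears only from the attacker, the update can be inverted:
#   x1[0] = (1 - eps) * x0[0] + eps * (x0[1] + Theta[1, 0])
x0_target_est = (x1[0] - eps * (x0[1] + Theta[1, 0])) / (1 - eps)
print(x0_target_est)   # ≈ 7.0, agent 0's private initial state
```

The example mirrors the iff condition: the attack works precisely because every term on the right-hand side of the inversion is either observed by or generated by the attacker.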
(Sufficiency) Now we are going to show that the target agent's privacy is preserved if (13) does not hold. Note that the attacker's information set plays an important role in the analysis: it contains all of the attacker's collected information and is the only thing the attacker can make use of to evaluate other agents' privacy. If the information set can be exactly the same while the target agent's initial state takes arbitrary values, then the attacker cannot distinguish between these possible initial values, implying that it cannot uniquely determine the target agent's initial state.
Since the graph is strongly connected and balanced, if (13) does not hold, then at least one of the following two cases holds:
Case 1: There exists another agent that is an in-neighbor of the target agent. We can regard this agent and the target agent as agent 2 and agent 1 in Lemma 3, respectively. It follows from Lemma 3 that all other agents' information sets, including the attacker's, can stay exactly unchanged even when the target agent's initial state is shifted by an arbitrary amount. Thus, there exist infinitely many possible initial values of the target agent that lead to the same information set. We then conclude that the attacker cannot reconstruct the target agent's initial state. In this sense, we say that the target agent's privacy is preserved under Algorithm 1.
Case 2: There exists another agent that is an out-neighbor of the target agent. Following similar lines, we can prove that the target agent's privacy is preserved.
The proof is then completed.
Remark 3
Theorem 2 states that an agent's privacy can still be preserved even when the agent and all its neighbors are connected to the honest-but-curious agent, in which case the algorithms in [4] and [10] will fail. Compared with [11], our algorithm performs better when honest-but-curious agents exist. In [11], an agent's privacy will be disclosed if and only if the agent itself and all its in-neighbors are the attacker's in-neighbors. By Theorem 2, we can see that our algorithm remains valid even when the above condition in [11] is violated. Therefore, our algorithm improves over those in the aforementioned related works.
Remark 4
In this paper, the added perturbation signals are edge-based, meaning that each agent adds different perturbation signals to the information transmitted to different out-neighbors. Thus, our algorithm has more degrees of freedom in perturbation signal design than those in [4, 12, 11], where node-based methods are used. It should be mentioned that [15] also proposed an edge-based algorithm, whose basic idea is to make sure that the distributions of the honest agents' initial states conditioned on their sum and the attackers' view are identical. The privacy analysis in this section is significantly different from the probability viewpoint in [15]. Moreover, the privacy-preserving algorithm in the current paper is simpler than that in [15] and is applicable to a broader class of network topologies, which are not necessarily undirected.
III-B Privacy Preservation Against External Eavesdroppers
In this subsection, we consider the case where external eavesdroppers exist. We should first define the information sets of these eavesdroppers. It is assumed that the graph and all information transmitted between agents are accessible to them, but that the value of one algorithm parameter is not known to them. An external eavesdropper's information set is defined as
(16)
Similar to the last subsection, the following lemma is presented first.
Lemma 4
Assume that there are two implementations of our algorithm. In the first one, the agents use their original initial states, perturbation signals and parameters. The initial conditions, perturbations and parameters of the second implementation are given as
(17)
where the newly introduced parameter is a constant.
Then, in both implementations, accurate average consensus can be achieved and the convergence values are the same. Moreover, the external eavesdropper's information sets are exactly the same in the two implementations.
Proof 4
We first focus on convergence. It is easy to see that the modified perturbation signals remain admissible. Thus, accurate average consensus can be achieved in both implementations according to Theorem 1. Moreover, since the sum of the initial states is unchanged, the convergence values are the same.
Consider the transmitted messages at the first iteration. It is not difficult to verify from (17) that they are exactly the same in the two implementations. Then we consider the subsequent iterations. By definition and in light of (5), we have
(18)
It follows from (17) that
(19)
Substituting (17) and (19) into (18), we have
(20)
implying that the states of the two implementations coincide from the second iteration onwards. It means that the two systems are exactly the same at every subsequent iteration, and thus the transmitted signals are also identical in the two implementations.
It is then easy to see that the two information sets are the same.
Theorem 3
If all perturbation signals are randomly chosen from the set of real numbers, then under Algorithm 1 all agents' privacy is preserved, in spite of the existence of external eavesdroppers.
Proof 5
The proof of this theorem is similar to the proof of Theorem 2. Lemma 4 above shows that the information set of the external attacker may stay unchanged even when the initial conditions, perturbations and parameters are modified as in (17). Hence, for any agent, even if its initial state is changed accordingly, it is possible that the attacker's information set is not influenced at all. Since the constant introduced in (17) is arbitrary, generically there exist infinitely many possible initial values of each agent that are consistent with the attacker's observations, and the external eavesdropper cannot reconstruct the true value, so each agent's privacy is preserved. Note that when all perturbation signals are randomly chosen from the set of real numbers, the degenerate case in which this argument fails occurs with probability zero. The proof is then completed.
Remark 5
It should be noted that many works based on injecting correlated noise, like [4] and [12], are vulnerable to external eavesdroppers. By contrast, our algorithm remains valid when they exist. It is also worth noting that works relying on cryptology, e.g., those in [7] and [6], can also deal with this type of attacker well, as it is hard for the attackers to decrypt the encrypted messages. However, these methods suffer from high computation and communication costs, while the algorithm in this paper is lightweight.
IV Continuous-time Privacy Preserving Updating Rule and Privacy Analysis
This section considers continuous-time privacy preserving average consensus, which is the continuous-time counterpart of the discrete-time algorithm in Section III.
Our continuous-time privacy-preserving average consensus algorithm is divided into two phases, one for obfuscation and the other for reaching consensus. Assume that all agents know a specific time instant t̄ > 0, which marks the end of the obfuscation phase. When t ∈ [0, t̄], each agent generates some perturbation signals and adds them to the edges leading out of it. Specifically, for each agent i, if (i, j) ∈ E, then agent i generates a bounded perturbation signal θ_{ij}(t) and transmits x_i(t) + θ_{ij}(t) to its out-neighbor, agent j. There are some constraints on those perturbation signals, and we refer to signals satisfying them as admissible perturbation signals.
Definition 1
We say a group of perturbation signals θ_{ij}(t), generated by agent i, is admissible if: i) θ_{ij}(t) = 0 whenever (i, j) ∉ E; ii) all the θ_{ij}(t)'s are bounded and continuous on the interval [0, t̄]; iii) ∑_{j∈N_i^out} θ_{ij}(t) = 0 for all t ∈ [0, t̄].
When t ∈ [0, t̄], all the agents update their states as

ẋ_i(t) = α ∑_{j=1}^{N} a_{ij} (x_j(t) + θ_{ji}(t) − x_i(t)),   (21)

where α > 0 is a constant that is known to all agents. Then, when t > t̄, each agent updates following the normal rule in Lemma 2, i.e.,

ẋ_i(t) = β ∑_{j=1}^{N} a_{ij} (x_j(t) − x_i(t)),   (22)

where β > 0 is a constant.
We summarize the proposed algorithm in Algorithm 2.
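A forward-Euler sketch of the two phases (Python with NumPy; the sinusoidal perturbation profile, gains, horizon, and discretization step are all illustrative assumptions layered on the reconstruction above, not the paper's specification):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
A = np.zeros((N, N))                    # A[i, j] = 1 iff agent j sends to agent i
for j in range(N):
    A[(j + 1) % N, j] = 1.0             # ring edges j -> j+1
    A[(j + 2) % N, j] = 1.0             # extra edges j -> j+2 (two out-edges each)
d = A.sum(axis=1)

# Constant profile M with zero sum over each sender's out-edges; the signal
# theta_ij(t) = M[i, j] * sin(2*pi*t/tbar) is then bounded, continuous, and
# zero-sum at every instant, matching the admissibility conditions assumed here.
M = np.zeros((N, N))
for i in range(N):
    out = np.where(A[:, i] > 0)[0]
    v = rng.normal(size=len(out))
    M[i, out] = v - v.mean()

alpha, beta, tbar, dt = 1.0, 1.0, 1.0, 1e-3
x = np.array([1.0, 5.0, 3.0, -1.0])
avg = x.mean()                          # 2.0

for k in range(int(tbar / dt)):         # scrambling phase, t in [0, tbar]
    c = (M * np.sin(2 * np.pi * k * dt / tbar)).sum(axis=0)
    x = x + dt * alpha * (A @ x + c - d * x)

for _ in range(int(10.0 / dt)):         # convergence phase, t > tbar
    x = x + dt * beta * (A @ x - d * x)

print(np.round(x, 4))                   # all entries ≈ 2.0
```

Since the graph is balanced and the per-instant perturbation sums vanish, each Euler step preserves the state sum exactly, so the trajectory settles at the true initial average despite the scrambling phase.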
Theorem 4
Accurate average consensus can be achieved by Algorithm 2.
Proof 6
Firstly, we prove that the coordinated scrambling phase does not change the sum of the agents' states. Define the perturbation matrix Θ(t) = [θ_{ij}(t)]. From the fact that Θ(t) 1_N = 0, we have

1_N^T Θ(t)^T 1_N = 0.   (23)

The updating rule (21) can be written in matrix form as

ẋ(t) = −α L x(t) + α Θ(t)^T 1_N,   (24)

where x = [x_1, …, x_N]^T. It then follows from the properties of balanced graphs and (23), (24) that

(d/dt) 1_N^T x(t) = −α 1_N^T L x(t) + α 1_N^T Θ(t)^T 1_N = 0,   (25)

implying that the sum of all agents' states stays invariant. So the average of the states at time t̄ equals the initial average.
From time t̄ onwards, a normal average consensus updating rule is applied, so it follows from Lemma 2 that accurate average consensus can be achieved.
IV-A Privacy Preservation Against Honest-but-curious Agents
In this subsection, we evaluate how Algorithm 2 performs when there exists an internal honest-but-curious agent in the network. Similar to the discrete-time case, we consider the case where there exists only one honest-but-curious agent. We assume that every internal agent (curious or not) can store all the information transmitted to it, knows the graph topology, and has access to the algorithm parameters. Thus, an agent's information set is
(26)
Before moving forward, we partition the perturbation matrix into subblock matrices as
(27)
and we then present the following lemma.
Lemma 5
Consider two implementations of Algorithm 2. In the first implementation, the agents use their original initial states and generated perturbation signals. In the second implementation, agents' initial states and system parameters are
(28)
and the perturbation signals are given as
(29)
where the introduced signal is given as
(30)
Then, accurate average consensus can be achieved in both implementations and the convergence values are the same. Moreover, for any agent other than agents 1 and 2, its information sets in the two implementations are the same.
Proof 7
Note that the modified perturbation signals are bounded and remain admissible. Using Theorem 4, accurate average consensus can be achieved in both implementations. Moreover, we can easily get from (28) that the sum of the initial states is unchanged, meaning that the convergence values are the same.
Consider the difference between the state trajectories of the two implementations. By (24), we have
(31)
Substituting (28) and (29) into (31), we obtain that this difference satisfies the following equation:
(32)
with the following initial condition:
(33)
It is not difficult to verify that
(34)
Now we are ready to compare the transmitted messages in the two implementations. Note that when t ∈ [0, t̄], the message transmitted from an agent to an out-neighbor is the sum of its current state and the corresponding perturbation signal. It follows from (29) and (34) that
(35)
Equation (35) shows that when t ∈ [0, t̄], only the messages transmitted from agent 2 to agent 1 are different in the two implementations. From time t̄ onwards, no perturbation signals are injected, so all agents send their real states to their out-neighbors. Note that from (34) we have
(36)
implying that the two state trajectories coincide at time t̄. Then it is obvious that they remain identical afterwards. This means that all the messages transmitted in the two scenarios are exactly the same from time t̄ onwards.
To sum up, when t ∈ [0, t̄], all messages in the network except the one transmitted from agent 2 to agent 1 are identical in the two implementations. When t > t̄, there is no difference between the transmitted messages in the two scenarios. Thus, the information sets of all agents other than agents 1 and 2 are the same in the two implementations.
Now we are ready to analyze Algorithm 2’s privacy property against internal attackers.
Theorem 5
Let the target be any agent in the network; then the internal honest-but-curious agent can reconstruct the target's initial value if and only if the following condition holds:
(37)
Proof 8
(Necessity) If (37) holds, then the honest-but-curious agent is the only agent that communicates with the target agent (see Fig. 2). For t ∈ [0, t̄], we have
(38)
and
(39)
When t > t̄, no perturbations are added, so the attacker knows the target agent's state trajectory. Combining (38) and (39), the honest-but-curious agent can compute the target's initial value by
(40)
(Sufficiency) We are going to prove that if (37) does not hold, then the attacker cannot reconstruct the target agent's initial value. The proof is similar to the proof of Theorem 2. As the attacker's information set contains all the information the attacker can access and is the only thing it can make use of to evaluate other agents' initial states, it plays an important role in the privacy analysis. If the information set can be exactly the same when the target agent's initial state is changed, it is impossible for the attacker to uniquely determine that initial state.
In light of Assumption 1, when (37) does not hold, at least one of the following two cases is true.
Case 1: There exists another agent that is an in-neighbor of the target agent. In this case, let that agent be agent 2 and the target agent be agent 1 in Lemma 5. From Lemma 5, we know that all other agents' information sets, including the attacker's, can stay exactly unchanged even when the target agent's initial state is shifted by an arbitrary amount. Thus, the attacker cannot reconstruct the target agent's initial state, implying that the target agent's privacy is preserved.
Case 2: There exists another agent that is an out-neighbor of the target agent. It can be similarly shown that Algorithm 2 preserves the target agent's privacy.
The proof is then completed.
Remark 6
[11] also considered the privacy preserving average consensus problem in the continuous-time case. As we have already pointed out, our algorithm performs better in preserving privacy against honest-but-curious agents. It is also worth mentioning that another feature of Algorithm 2 is that the perturbations are added only over the finite time interval [0, t̄], while they are added at every moment in [11].
IV-B Privacy Preservation Against External Eavesdroppers
Algorithm 2's privacy preserving performance against external eavesdroppers is another issue worthy of concern. The information sets of these eavesdroppers need to be defined first. Assume that the graph and all information transmitted among neighboring agents are accessible to them but one algorithm parameter is not. For an external eavesdropper, its information set is given by
(41)
Before giving the main result, we first present a lemma.
Lemma 6
Consider two implementations of Algorithm 2. In the first implementation, the agents use their original initial conditions, perturbation signals and parameters. In the second implementation, the initial condition is
(42)
and the perturbation signals and parameters are
(43)
where the introduced scalar is a constant and the entries of the introduced vector are given as
(44)
(45)
Then, in both implementations, accurate average consensus can be achieved and the convergence values are the same. Moreover, the external eavesdropper's information sets are exactly the same in the two implementations.
Proof 9
From (43), it is obvious that the modified signals are also admissible perturbation signals. By Theorem 4, average consensus can be achieved in both implementations. Moreover, it can be verified from (44) that the sum of the initial states is unchanged. Thus, we have
(46)
which implies that the convergence values are the same.
Now we are going to prove that the two information sets are the same. The proof consists of two steps.
Step 1: we show that when t ∈ [0, t̄], all the transmitted information is identical in the two implementations. Consider the difference between the state trajectories of the two implementations, whose dynamic equation is
(47)
with the initial condition
(48)
By solving the linear ordinary differential equation (47) with the initial condition (48), which follows from (45), we can obtain that
(49)
Theorem 6
Consider the set of all possible bounded and continuous functions defined on the scrambling interval. If all agents randomly choose their perturbation signals from this set, Algorithm 2 can preserve all agents' privacy against external eavesdroppers, in the sense that the external eavesdroppers cannot reconstruct any agent's initial value.
Proof 10
This theorem can be proved by following similar lines to the proof of Theorem 5. According to Lemma 6, the external attacker's information set remains unchanged even when the initial conditions, perturbation signals and parameters are modified as in (42)-(45). Hence, for any agent, the external eavesdroppers cannot distinguish its true initial value from the modified one. Since the constant introduced in Lemma 6 is arbitrary, generically there exist infinitely many possible initial values of each agent that are consistent with the attacker's observations, so the external eavesdropper cannot reconstruct the true value and the agent's privacy is preserved. As for the degenerate case in which this argument fails, when all perturbation signals are randomly chosen, it occurs with probability zero. This completes the proof.
Remark 7
Although external eavesdroppers are also considered in [14] and [19], the ideas in those two papers and the current paper are different. In [14], preventing an eavesdropper from obtaining agents' privacy is achieved by letting the generated perturbation signals satisfy a certain global condition that is invisible to the eavesdropper. In [19], it is achieved by assuming that the coupling weights between agents are not accessible to the eavesdropper. By contrast, based on the assumption that one system parameter is not known to the eavesdropper, we accomplish this goal by making use of the extra degree of freedom in choosing that parameter.
V Numerical Simulation
In this section, we focus on the discrete-time case and use a numerical simulation to demonstrate Algorithm 1's effectiveness. Consider the network shown in Fig. 3.
First, we show that Algorithm 1 can deal with internal honest-but-curious agents. Assume that agent 5 is honest-but-curious. If the algorithm in [11] were applied, agent 3's and agent 4's privacy would be revealed to agent 5. By contrast, our algorithm can preserve their privacy according to Theorem 2. To show this, we consider two implementations of our algorithm. In the first one, all perturbation signals are randomly generated, and the initial conditions and parameters are given as
(51)
Let agent 4 (respectively, agent 3) be the agent 2 (respectively, agent 1) in Lemma 3. Thus, in the second implementation, the initial conditions, parameters and perturbation signals are given as
(52)
Fig. 4(a) shows the state trajectories in these two implementations, from which we can observe that accurate average consensus is achieved in both cases with the same convergence value. Besides, as depicted in Fig. 4(b), even though agent 3's and agent 4's initial states are changed, all transmitted messages at the first iteration are identical in the two implementations, except the one transmitted from agent 4 to agent 3. Combining these two figures, we see that agent 5's information sets in the two implementations are the same, which implies that agent 5 cannot obtain agent 3's and agent 4's initial states.
Now we show how our algorithm deals with external eavesdroppers. According to Theorem 3, the attacker cannot obtain any agent’s privacy. To show this, another implementation of our algorithm is considered, which is given as follows (here let ):
(53)
As depicted in Fig. 4(c), in the above implementation (53), accurate average consensus is also achieved and , holds. Moreover, from Fig. 4(d) we can see that always holds, implying that all transmitted messages when are identical in the implementations (51) and (53). Combining these two figures, it is easy to see that the external attacker’s information sets and are exactly the same, even though all agents’ initial state values are changed. Thus, all agents’ privacy is preserved.
VI Conclusions
This paper has proposed a novel algorithm to achieve average consensus with privacy preservation over strongly connected and balanced graphs in both the discrete- and continuous-time cases. By adding edge-based perturbation signals to the transmitted information, the privacy is guaranteed not to be disclosed to either internal honest-but-curious agents or external eavesdroppers. Our algorithm guarantees accurate average consensus and offers stronger privacy preservation than existing related works. The proposed privacy-preserving idea is expected to be applicable to distributed optimization problems, which will be studied in the future.
References
- [1] R. Olfati-Saber and R. M. Murray, “Consensus problems in networks of agents with switching topology and time-delays,” IEEE Transactions on Automatic Control, vol. 49, no. 9, pp. 1520–1533, 2004.
- [2] L. Xiao and S. Boyd, “Fast linear iterations for distributed averaging,” Systems & Control Letters, vol. 53, no. 1, pp. 65–78, 2004.
- [3] Z. Li, Z. Duan, and G. Chen, “Consensus of discrete-time linear multi-agent systems with observer-type protocols,” Discrete and Continuous Dynamical Systems-Series B, vol. 16, no. 2, pp. 489–505, 2011.
- [4] Y. Mo and R. M. Murray, “Privacy preserving average consensus,” IEEE Transactions on Automatic Control, vol. 62, no. 2, pp. 753–765, 2017.
- [5] H. Gao, C. Zhang, M. Ahmad, and Y. Wang, “Privacy-preserving average consensus on directed graphs using push-sum,” in 2018 IEEE Conference on Communications and Network Security (CNS), pp. 1–9, IEEE, 2018.
- [6] T. Yin, Y. Lv, and W. Yu, “Accurate privacy preserving average consensus,” IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 67, no. 4, pp. 690–694, 2020.
- [7] M. Ruan, H. Gao, and Y. Wang, “Secure and privacy-preserving consensus,” IEEE Transactions on Automatic Control, vol. 64, no. 10, pp. 4035–4049, 2019.
- [8] C. N. Hadjicostis and A. D. Dominguez-Garcia, “Privacy-preserving distributed averaging via homomorphically encrypted ratio consensus,” IEEE Transactions on Automatic Control, 2020.
- [9] Z. Huang, S. Mitra, and G. Dullerud, “Differentially private iterative synchronous consensus,” in Proceedings of the 2012 ACM workshop on Privacy in the electronic society, pp. 81–90, 2012.
- [10] N. E. Manitara and C. N. Hadjicostis, “Privacy-preserving asymptotic average consensus,” in 2013 European Control Conference (ECC), pp. 760–765, IEEE, 2013.
- [11] N. Rezazadeh and S. S. Kia, “Privacy preservation in a continuous-time static average consensus algorithm over directed graphs,” in 2018 Annual American Control Conference (ACC), pp. 5890–5895, IEEE, 2018.
- [12] J. He, L. Cai, C. Zhao, P. Cheng, and X. Guan, “Privacy-preserving average consensus: privacy analysis and algorithm design,” IEEE Transactions on Signal and Information Processing over Networks, vol. 5, no. 1, pp. 127–138, 2018.
- [13] J. He, L. Cai, and X. Guan, “Preserving data-privacy with added noises: Optimal estimation and privacy analysis,” IEEE Transactions on Information Theory, vol. 64, no. 8, pp. 5677–5690, 2018.
- [14] N. Rezazadeh and S. S. Kia, “Privacy preservation in continuous-time average consensus algorithm via deterministic additive perturbation signals,” arXiv preprint arXiv:1904.05286, 2019.
- [15] N. Gupta, J. Katz, and N. Chopra, “Information-theoretic privacy in distributed average consensus,” arXiv preprint arXiv:1809.01794, 2018.
- [16] J. Cortés, G. E. Dullerud, S. Han, J. Le Ny, S. Mitra, and G. J. Pappas, “Differential privacy in control and network systems,” in 2016 IEEE 55th Conference on Decision and Control (CDC), pp. 4252–4272, IEEE, 2016.
- [17] C. Dwork, A. Roth, et al., “The algorithmic foundations of differential privacy,” Foundations and Trends® in Theoretical Computer Science, vol. 9, no. 3–4, pp. 211–407, 2014.
- [18] E. Nozari, P. Tallapragada, and J. Cortés, “Differentially private average consensus: Obstructions, trade-offs, and optimal algorithm design,” Automatica, vol. 81, pp. 221–231, 2017.
- [19] Y. Wang, “Privacy-preserving average consensus via state decomposition,” IEEE Transactions on Automatic Control, vol. 64, no. 11, pp. 4711–4716, 2019.
- [20] C. Altafini, “A system-theoretic framework for privacy preservation in continuous-time multiagent dynamics,” Automatica, vol. 122, p. 109253, 2020.
- [21] I. L. D. Ridgley, R. A. Freeman, and K. M. Lynch, “Private and hot-pluggable distributed averaging,” IEEE Control Systems Letters, vol. 4, no. 4, pp. 988–993, 2020.
- [22] S. S. Kia, J. Cortés, and S. Martinez, “Dynamic average consensus under limited control authority and privacy requirements,” International Journal of Robust and Nonlinear Control, vol. 25, no. 13, pp. 1941–1966, 2015.
- [23] S. Pequito, S. Kar, S. Sundaram, and A. P. Aguiar, “Design of communication networks for distributed computation with privacy guarantees,” in 53rd IEEE Conference on Decision and Control, pp. 1370–1376, IEEE, 2014.
- [24] A. Alaeddini, K. Morgansen, and M. Mesbahi, “Adaptive communication networks with privacy guarantees,” in 2017 American Control Conference (ACC), pp. 4460–4465, IEEE, 2017.
- [25] Z. Li and Z. Duan, Cooperative Control of Multi-agent Systems: A Consensus Region Approach. CRC Press, Boca Raton, FL, 2014.