Communication efficient privacy-preserving distributed optimization using adaptive differential quantization
Abstract
Privacy issues and communication cost are both major concerns in distributed optimization. There is often a trade-off between them because the encryption methods required for privacy-preservation often incur a high communication bandwidth. To address this issue, in this paper we propose a quantization-based approach that achieves solutions which are both communication efficient and privacy-preserving in the context of distributed optimization. By deploying an adaptive differential quantization scheme, we allow each node in the network to achieve its optimum solution with a low communication cost while keeping its private data unrevealed. Additionally, the proposed approach is general and can be applied to various distributed optimization methods, such as the primal-dual method of multipliers (PDMM) and the alternating direction method of multipliers (ADMM). Moreover, we consider two widely used adversary models: passive and eavesdropping. Finally, we investigate the properties of the proposed approach in different applications and demonstrate its superior performance in terms of several parameters including accuracy, privacy, and communication cost.
Index Terms:
distributed optimization, quantization, communication cost, privacy, information-theoretic, ADMM, PDMM

I Introduction
With the emergence of interconnected or networked systems, distributed optimization is widely used to process the massive amounts of data such systems produce. As the primary computation units in these distributed networks are often personal devices, such as mobile phones and tablets [1, 2], the underlying networked data are private in nature. Furthermore, the available computational resources are limited by the hardware and by energy consumption. As a consequence, novel distributed optimization tools are required that address the privacy concern in a way that is efficient in terms of communication and computational resources.
Existing approaches mostly address the above challenges only partially. To achieve computationally lightweight solutions, noise insertion approaches, which add noise to obfuscate the private data, are widely used in the literature. These methods can be broadly classified into three classes. The first is the class of differentially private distributed optimization approaches [3, 4, 5, 6, 7, 8, 9]. The main idea is to guarantee that the posterior guess of the private data is only slightly better than the prior guess. The downside of these algorithms is that they compromise the algorithm accuracy, i.e., they have an inherent trade-off between privacy and accuracy. Additionally, differential privacy is hard to achieve in practice. The second class is that of secret-sharing based distributed optimization approaches [10, 11], which deploy secret sharing, a technique used in secure multiparty computation [12, 13], to prevent privacy leakage. Secret sharing works by splitting the private data into a number of so-called shares and distributing them over the nodes such that the private data cannot be reconstructed without a sufficient number of cooperating nodes. This, however, comes with additional communication costs, as the distribution of shares requires extra communication. The third class is the class of subspace perturbation based distributed optimization approaches [14, 15, 16] which, by inserting noise in a subspace determined by the graph topology, alleviate the privacy-accuracy trade-off without severely increasing the communication costs.
Note that when considering the communication cost, aside from the number of times the communication channel is used, there is another critical parameter, namely the communication bandwidth or the corresponding bit-rate. The communication bandwidth is often ignored in privacy-related approaches by assuming infinite precision. However, there is often a fundamental trade-off between privacy and communication cost in noise insertion methods. The reason is that a higher privacy level usually requires a larger amount of inserted noise, which, in turn, increases the noise entropy and thereby the bit-rate.
I-A Paper contributions
The main contribution of this paper is a new privacy-preserving distributed optimization approach that mitigates the aforementioned trade-off between privacy and communication bandwidth. To do so, we first give an information-theoretical analysis of this trade-off in noise insertion approaches. After that, we propose to address this trade-off by exploiting an adaptive differential quantization scheme. To the best of our knowledge, this is the first approach which provides information-theoretical privacy guarantees for distributed optimization with quantization. The proposed approach has a number of attractive properties:
- It addresses the privacy and communication bandwidth trade-off by significantly reducing the overall communication cost compared to existing approaches, including secret sharing, subspace perturbation and differential privacy based approaches.
- The accuracy of the optimization result is not compromised by considering both privacy and quantization.
- It is generally applicable to various distributed optimizers such as ADMM, PDMM and related algorithms.
- It provides privacy guarantees under two widely-considered adversary models: the eavesdropping and the passive adversary model.
I-B Outline and notation
The rest of the paper is organized as follows. Section II reviews fundamentals of distributed optimization. Section III introduces important concepts about privacy and then formulates the problem to be solved. Section IV explains why there is a trade-off between privacy and communication bandwidth, and Section V introduces the proposed approach to address it. Numerical results and comparisons with existing approaches are presented in Section VI. Finally, conclusions are given in Section VII.
We use the following notation throughout this paper. Lowercase letters ($x$) denote scalars, lowercase boldface letters ($\mathbf{x}$) denote vectors, and uppercase boldface letters ($\mathbf{X}$) denote matrices. $x_i$ and $X_{ij}$ denote the $i$-th entry of the vector $\mathbf{x}$ and the $(i,j)$-th entry of the matrix $\mathbf{X}$, respectively. Calligraphic letters ($\mathcal{X}$) denote sets, and uppercase letters ($X$) denote random variables having realizations $x$. $H(X)$ and $h(X)$ denote the Shannon entropy and the differential entropy of a random variable $X$, respectively.
II Fundamentals
This section reviews necessary fundamentals for distributed optimization.
II-A Distributed optimization over networks
We model a distributed network as a graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$, where $\mathcal{V}=\{1,\ldots,n\}$ is the set of nodes and $\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}$ is the set of edges. Moreover, let $\mathcal{N}_i$ denote the set of neighboring nodes and $d_i=|\mathcal{N}_i|$ denote the degree (number of neighboring nodes) of node $i$. A standard distributed convex optimization problem with constraints over the network can be formulated as
$$\min_{\{x_i\}}\ \sum_{i\in\mathcal{V}} f_i(x_i)\quad\text{subject to}\quad A_{ij}x_i + A_{ji}x_j = b_{ij},\ \forall (i,j)\in\mathcal{E},\qquad(1)$$
where $x_i$ denotes the optimization variable of node $i$, $f_i$ denotes the local objective function, which is assumed to be convex, $A_{ij}$ and $A_{ji}$ are edge-related matrices (weights), and $b_{ij}$ denotes the constraint imposed at edge $(i,j)$. For simplicity, we assume $x_i\in\mathbb{R}$ and $b_{ij}=0$, but the results can easily be generalized to arbitrary dimensions and to cases where $b_{ij}\neq 0$. With this, $A_{ij}$ and $A_{ji}$ are related to entries of the incidence matrix of the graph: $A_{ij}=1$ if $i<j$ and $A_{ij}=-1$ if $i>j$.
II-B Distributed optimizers
To solve the problem in (1) in a decentralized manner, i.e., where each node is only allowed to exchange information with its neighboring nodes, a number of distributed, iterative optimizers have been proposed, including ADMM [17] and PDMM [18, 19, 20]. It has been shown using monotone operator theory and operator splitting techniques that ADMM and PDMM are closely related [20] (see [21] for details on monotone operator theory). The updating equations at iteration $t$ can be generally represented as

$$x_i^{(t+1)} = \arg\min_{x_i}\Big(f_i(x_i) + \sum_{j\in\mathcal{N}_i} z_{i|j}^{(t)}A_{ij}x_i + \frac{c}{2}\sum_{j\in\mathcal{N}_i}\big(A_{ij}x_i\big)^2\Big),$$
$$z_{j|i}^{(t+1)} = (1-\theta)\,z_{j|i}^{(t)} + \theta\big(z_{i|j}^{(t)} + 2c\,A_{ij}\,x_i^{(t+1)}\big),$$

where $c>0$ is a constant for controlling the convergence rate. Each edge $(i,j)\in\mathcal{E}$ is associated with two auxiliary variables, $z_{i|j}$ and $z_{j|i}$. Stacking all auxiliary variables together we have $\mathbf{z}\in\mathbb{R}^{2m}$, where $m=|\mathcal{E}|$. $\theta\in(0,1]$ is a constant for controlling the averaging of the nonexpansive operators. For example, $\theta=1$ results in Peaceman-Rachford splitting (PDMM) and $\theta=\frac{1}{2}$ results in Douglas-Rachford splitting (ADMM). For simplicity, we will use $\theta=1$, i.e., PDMM, as an example to explain the main idea, but the conclusions hold for all $\theta\in(0,1]$.
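To make these updates concrete, the following minimal sketch (our own illustration, not the authors' implementation; all function and variable names are hypothetical) runs synchronous PDMM with $\theta=1$ on the quadratic local objectives $f_i(x_i)=\frac{1}{2}(x_i-s_i)^2$ used in the averaging example of Section III-A. For this choice the $x$-update has the closed form $x_i^{(t+1)}=\big(s_i-\sum_{j\in\mathcal{N}_i}A_{ij}z_{i|j}^{(t)}\big)/(1+c\,d_i)$, since $A_{ij}^2=1$.

```python
import numpy as np

def pdmm_average_consensus(s, edges, c=0.5, iters=200):
    """Synchronous PDMM (theta = 1) for f_i(x) = (x - s_i)^2 / 2 (consensus)."""
    n = len(s)
    nbrs = [[] for _ in range(n)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    A = lambda i, j: 1.0 if i < j else -1.0          # incidence-matrix signs
    z = {(i, j): 0.0 for e in edges for (i, j) in (e, e[::-1])}  # z_{i|j}
    x = np.zeros(n)
    for _ in range(iters):
        # x-update: closed form of the quadratic minimization
        for i in range(n):
            x[i] = (s[i] - sum(A(i, j) * z[(i, j)] for j in nbrs[i])) \
                   / (1 + c * len(nbrs[i]))
        # z-update with theta = 1: node i computes z_{j|i} and sends it to node j
        z = {(j, i): z[(i, j)] + 2 * c * A(i, j) * x[i]
             for i in range(n) for j in nbrs[i]}
    return x

edges = [(0, 1), (1, 2), (2, 3), (0, 3)]             # 4-node ring
print(pdmm_average_consensus(np.array([1.0, 2.0, 3.0, 4.0]), edges))
# -> all entries close to the average 2.5
```

Each entry z[(j, i)] is the message node i would transmit to node j over edge (i, j); the sketch keeps all of them in one dictionary purely for compactness.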
III Problem definition
In this section we first introduce important concepts regarding privacy-preservation and then define the problem to be solved and its evaluation metrics.
III-A Privacy concern
In distributed optimization, sensitive personal information is often embedded in each node's local objective function $f_i$. The main reason is that the local objective function takes node-specific data as input, and such data are often private in nature. As an example, consider a smart grid application. Assume each household/node $i$ in the network has its own power consumption data $s_i$ and the goal is to compute the global average of these power consumption data, i.e., $\bar{s}=\frac{1}{n}\sum_{i\in\mathcal{V}} s_i$. In this context the local objective function is given by $f_i(x_i)=\frac{1}{2}(x_i-s_i)^2$ and the overall problem setup can be formulated as follows:
$$\min_{\{x_i\}}\ \sum_{i\in\mathcal{V}}\frac{1}{2}(x_i-s_i)^2\quad\text{subject to}\quad x_i = x_j,\ \forall (i,j)\in\mathcal{E}.\qquad(2)$$
Note that the power consumption data $s_i$ of each household, contained in the local objective function $f_i$, should be protected from being revealed to others, because it can reveal information about the householders, such as their activities and even their health conditions, e.g., whether they are disabled or not [22]. Hence, information regarding the local objective function is considered sensitive and should be protected from being revealed in the process of solving the optimization problem.
III-B Adversary model
To analyze the privacy, we must specify an adversary model. The purpose of such a model is to quantify the robustness of the system against different security attacks. In this paper, we consider two types of adversary models: the passive and the eavesdropping model. The passive (also called semi-honest or honest-but-curious) adversary model is a classical model considered in distributed networks [23]. It assumes that a number of nodes, referred to as corrupted nodes, collude. The corrupted nodes follow the algorithm instructions (called the protocol) but share their information with each other. We call an edge in the graph corrupted as long as there is a corrupted node at one of its ends; all messages transmitted along such an edge are known to the corrupted nodes, and thus also to the passive adversary. See Fig. 1 for a toy example. Hence, the passive adversary collects all information from the corrupted nodes to infer the private data of the other, non-colluding nodes (referred to as honest nodes).
The eavesdropping adversary is assumed to attack the system by listening to the messages transmitted along the communication channels, i.e., the edges. This model has received little attention, as it can be addressed by assuming that all communication channels are securely encrypted such that the transmitted messages cannot be eavesdropped, as in, e.g., secret sharing based approaches [24, 11, 10]. However, this assumption is particularly expensive to realize in distributed optimization applications, as a large number of iterations is often required.
We assume that the two considered adversaries can cooperate, i.e., they share information with each other with the aim of inferring the private data of the honest nodes.
[Fig. 1: Toy example of a network with corrupted nodes; every edge incident to a corrupted node is corrupted.]
III-C Main requirements and related metrics
Putting things together, we now state the main requirements that communication efficient privacy-preserving distributed optimization should satisfy and introduce the related metrics.
1. Output correctness: Each node should obtain the optimal solution to (1), denoted by $x_i^\star$, at the end of the algorithm. A typical way to quantify the output correctness is to adopt a distance metric to calculate the difference between the estimated output $x_i^{(t)}$ and the optimum output $x_i^\star$. In this paper we use the overall mean square error (MSE) to quantify it, i.e., $\frac{1}{n}\sum_{i\in\mathcal{V}}\big(x_i^{(t)}-x_i^\star\big)^2$.
2. Communication cost: After the algorithm execution, the cost of all communications should be as low as possible. The communication cost is given by $T\cdot M\cdot b$, where $T$ is the total number of iterations, $M$ is the total number of messages transmitted at each iteration ($d_i$ per node $i$), and $b$ is the number of bits needed to represent each message.
3. Individual privacy: Each node's private information, embedded in $f_i$, should be protected under both eavesdropping and passive adversaries throughout the algorithm. As we are focusing on noise insertion approaches, we will use information-theoretical metrics to describe the privacy. In the context of distributed processing, two commonly used metrics are the so-called $\epsilon$-differential privacy and mutual information [25]. In this paper we choose mutual information as the individual privacy metric. The main reasons are as follows. (1) Mutual information has been proven effective in the context of privacy-preserving distributed processing [26], and has been applied in various applications [27, 28, 29, 30, 31, 32]. (2) It is closely related to $\epsilon$-differential privacy (see [33] for more details), and is easier to realize in practice [34, 35, 36]. (3) $\epsilon$-differential privacy does not work if the private data are correlated [37].
IV Trade-off in noise insertion approaches
In this section, we aim to show that there is typically a trade-off between privacy and communication bandwidth in noise insertion approaches. To do so we will first explain a simple noise insertion scheme and then introduce how to compute the communication bandwidth, i.e., bit-rate, after applying quantization. Finally, we give an example to demonstrate this trade-off.
IV-A Additive noise insertion
Consider a scenario in which a number of people, each having their own private data, would like to participate in a project which takes the private data of all participants as input. Let $s$ denote the private data held by you, which you are reluctant to share with others due to privacy concerns. The idea of noise insertion is to insert noise, denoted by $r$, to obfuscate the private data and then share the obfuscated data, denoted by $y$, with the others. One of the simplest yet most widely-used ways of noise insertion is to directly add the noise to the private data to protect it from being revealed, referred to as additive noise insertion and defined as
$$y = s + r.\qquad(3)$$
Intuitively, a higher privacy level will be achieved if the obfuscated data $y$ is less correlated with the private data $s$. We have the following result.
Proposition 1.
(Privacy guarantee for additive noise insertion) Let $R$ and $S$ be continuous random variables with variances $\sigma_R^2$ and $\sigma_S^2$, denoting the inserted noise and the private data, respectively, and assume that $R$ and $S$ are statistically independent. Let $Y=S+R$. Given an arbitrarily small $\delta>0$, there exists a $\sigma^2>0$ such that for $\sigma_R^2\ge\sigma^2$
$$I(S;Y)\le\delta.\qquad(4)$$
In the case that the noise is Gaussian distributed, we have
$$I(S;Y)\le\frac{1}{2}\log_2\Big(1+\frac{\sigma_S^2}{\sigma_R^2}\Big).\qquad(5)$$
Proof.
See the corresponding proposition in [26]. ∎
Hence, the more noise is inserted, the higher the privacy level that can be obtained. However, we remark that more noise will inevitably increase the noise entropy and thus require a higher bit-rate (i.e., communication bandwidth). In what follows we will explain this in more detail.
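As a quick numerical reading of (5), the following sketch (parameter values are assumed purely for illustration) tabulates the Gaussian leakage bound as the noise variance grows:

```python
import numpy as np

sigma_s2 = 1.0                          # variance of the private data S (assumed)
for sigma_r2 in [1, 10, 100, 1000]:     # increasing variance of the inserted noise R
    leakage = 0.5 * np.log2(1 + sigma_s2 / sigma_r2)   # bound (5), in bits
    print(f"sigma_R^2 = {sigma_r2:4d}: I(S;Y) <= {leakage:.4f} bits")
```

The bound decays roughly like $\sigma_S^2/(2\ln 2\cdot\sigma_R^2)$ for large $\sigma_R^2$, which is the quantitative version of "more noise, more privacy".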
IV-B Quantization and bit-rate
The main idea of quantization is to establish a mapping of the input data to a countable set of reproduction values, which is referred to as a codebook. More specifically, a quantizer divides the input domain into so-called quantization cells (Voronoi regions), where all elements within a cell are represented by a unique reproduction or code value. Let $b$ denote the number of bits used to represent the reproduction values, of which there are thus $2^b$ in total. Although there exist many different quantization schemes, we will introduce a simple yet effective one, namely uniform quantization. In this quantizer, all quantization cells have the same size, except for the cells at the boundary of the domain in the case of fixed bit-rate quantization. For example, a one-dimensional 2-bit uniform mid-rise quantizer with cell-width $\Delta$ divides the input range into four regions, each represented by a unique code value. That is, the quantization cells are given by $(-\infty,-\Delta]$, $(-\Delta,0]$, $(0,\Delta]$ and $(\Delta,\infty)$ and are represented by $-\frac{3\Delta}{2}$, $-\frac{\Delta}{2}$, $\frac{\Delta}{2}$ and $\frac{3\Delta}{2}$, respectively.
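As a concrete sketch (our own helper, not part of the paper or of [38]), a $b$-bit version of this mid-rise quantizer can be written as:

```python
import numpy as np

def midrise_quantize(y, delta, bits):
    """Uniform mid-rise quantizer: cells of width delta, 2**bits code values."""
    levels = 2 ** bits
    k = np.floor(np.asarray(y) / delta)              # index of the cell containing y
    k = np.clip(k, -levels // 2, levels // 2 - 1)    # saturate the boundary cells
    return (k + 0.5) * delta                         # reproduction value (k + 1/2) * delta

print(midrise_quantize([-1.7, -0.2, 0.4, 2.3], delta=1.0, bits=2))
# -> [-1.5 -0.5  0.5  1.5], matching the four cells described in the text
```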
Using a finite number of bits to quantize a continuous random variable will always produce an error, or distortion. Intuitively, the fewer bits are used, the more distortion will occur. To get an idea of how many bits are required for transmitting a message, we can determine its entropy, since this gives a lower bound on the number of bits needed to represent the data. With a uniform quantizer, the entropy of a quantized continuous random variable at sufficiently high rate can be approximated as [38]:
$$H(\hat{Y})\approx h(Y)-\log_2\Delta = h(Y)-\frac{1}{2}\log_2\big(12\,\sigma_q^2\big),\qquad(6)$$
where $\hat{Y}$ is the discrete random variable obtained after quantizing $Y$, $\Delta$ denotes the quantization cell-width and $\sigma_q^2=\Delta^2/12$ is the quantization noise variance. Here, $H(\hat{Y})$ is the Shannon entropy of $\hat{Y}$, and $h(Y)$ is the differential entropy of $Y$, assuming it exists. Since the differential entropy of a random variable with fixed variance $\sigma_Y^2$ is upper bounded by
$$h(Y)\le\frac{1}{2}\log_2\big(2\pi e\,\sigma_Y^2\big),\qquad(7)$$
we have
$$H(\hat{Y})\lesssim\frac{1}{2}\log_2\Big(\frac{2\pi e\,\sigma_Y^2}{12\,\sigma_q^2}\Big)=\frac{1}{2}\log_2\Big(\frac{\pi e}{6}\cdot\frac{\sigma_Y^2}{\sigma_q^2}\Big).\qquad(8)$$
IV-C Trade-off between privacy and bits
Assume the inserted noise $R$ is Gaussian distributed and the desired privacy level $I(S;Y)=\delta$ is given. The entropy of the quantized obfuscated data is then bounded by
$$H(\hat{Y})\approx h(Y)-\log_2\Delta\overset{(a)}{\le}\frac{1}{2}\log_2\big(2\pi e(\sigma_S^2+\sigma_R^2)\big)-\log_2\Delta\overset{(b)}{=}\frac{1}{2}\log_2\Big(2\pi e\,\sigma_S^2\,\frac{2^{2\delta}}{2^{2\delta}-1}\Big)-\log_2\Delta,\qquad(9)$$
where (a) follows from (7) and the fact that $S$ and $R$ are independent, so that $\sigma_Y^2=\sigma_S^2+\sigma_R^2$, and (b) follows by setting $\sigma_R^2$ equal to the value given by (5), i.e., $\sigma_R^2=\sigma_S^2/(2^{2\delta}-1)$. By inspection of (9), we can see that the smaller the information leakage $\delta$ is, i.e., the higher the privacy level is, the higher the number of bits needed to represent the quantized obfuscated data will be. Clearly there is a trade-off between them. In Fig. 2 we give an example based on (9) to demonstrate this trade-off for fixed $\sigma_S^2$ and $\Delta$.
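The following sketch evaluates (9) for a few target leakage values; the settings $\sigma_S^2=1$ and $\Delta=0.1$ are our own assumptions for illustration (the exact settings behind Fig. 2 are the authors'):

```python
import numpy as np

sigma_s2, cell = 1.0, 0.1                # assumed sigma_S^2 and cell-width Delta
for leak in [0.5, 0.1, 0.01, 0.001]:     # target leakage delta (bits), cf. (5)
    sigma_r2 = sigma_s2 / (2 ** (2 * leak) - 1)                    # invert (5)
    h_y = 0.5 * np.log2(2 * np.pi * np.e * (sigma_s2 + sigma_r2))  # bound (7)
    bits = h_y - np.log2(cell)                                     # eq. (9)
    print(f"leakage {leak:6.3f} bits -> about {bits:5.2f} bits per sample")
```

As the output shows, demanding less leakage (higher privacy) drives up the required bit-rate, which is exactly the trade-off pictured in Fig. 2.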
[Fig. 2: Example of the trade-off between the information leakage $\delta$ and the entropy of the quantized obfuscated data, based on (9).]
V Proposed approach
Having explained why there is a trade-off between privacy and communication cost in noise insertion approaches, we now proceed to introduce the proposed approach, which addresses this trade-off by using the adaptive differential quantization scheme of [39, 40]. The reason for choosing this quantization scheme is that it helps to reduce the communication cost without compromising privacy. In addition, we exploit the idea of additive noise insertion to guarantee privacy after applying quantization.
In this section we will first briefly introduce the adaptive differential quantization technique. After that, we introduce the proposed approach and explain how to guarantee privacy by making use of the auxiliary variable $\mathbf{z}$ (see Section II-B) as noise. Finally, we analyze the performance of the proposed approach based on the requirements stated in Section III-C, i.e., individual privacy, output correctness and communication cost.
The main idea of applying adaptive differential quantization is based on the observation that for fixed point iterations the difference between successive iterations converges to zero (i.e., the iterates form a Cauchy sequence), which implies that the entropy of the difference between successive iterations decreases to zero as the number of iterations increases. Motivated by this, the adaptive differential quantization scheme proposed in [39, 40] quantizes the difference of the auxiliary variable between every two successive iterations, with a cell-width that decreases with increasing iterations; details are given below. By doing so, low data-rate transmission between nodes can be achieved without compromising the accuracy of the algorithm.
With the adaptive differential quantization scheme, the proposed approach proceeds as follows. At initialization $t=0$, each node $i$ initializes its auxiliary variables $z_{j|i}^{(0)}$ and sends the corresponding $z_{j|i}^{(0)}$ to each and every neighboring node $j\in\mathcal{N}_i$. Then $x_i$ and $z_{j|i}$ are updated as
$$x_i^{(t+1)} = \arg\min_{x_i}\Big(f_i(x_i)+\sum_{j\in\mathcal{N}_i} z_{i|j}^{(t)}A_{ij}x_i+\frac{c\,d_i}{2}x_i^2\Big),\qquad(10)$$
$$z_{j|i}^{(t+1)} = z_{i|j}^{(t)} + 2c\,A_{ij}\,x_i^{(t+1)}.\qquad(11)$$
Let $\hat{z}_{j|i}^{(t)}$ denote the quantized version of $z_{j|i}^{(t)}$. At iteration $t\ge1$, each node $i$ does not transmit the unquantized $z_{j|i}^{(t)}$ to node $j$ directly; instead, it first defines the difference variable
$$\Delta z_{j|i}^{(t)}=\begin{cases}z_{j|i}^{(1)}-z_{j|i}^{(0)}, & t=1,\\ z_{j|i}^{(t)}-\hat{z}_{j|i}^{(t-1)}, & t\ge2.\end{cases}\qquad(14)$$
Let $Q(\cdot)$ denote the quantization operation. Applying quantization to the difference variable we have
$$\widehat{\Delta z}_{j|i}^{(t)}=Q\big(\Delta z_{j|i}^{(t)}\big)=\Delta z_{j|i}^{(t)}+n_{j|i}^{(t)},\qquad(15)$$
where $n_{j|i}^{(t)}$ denotes the noise introduced by quantizing $\Delta z_{j|i}^{(t)}$. The adaptive differential quantizer quantizes the difference variable with a geometrically decreasing cell-width $\Delta^{(t)}=\Delta^{(0)}\gamma^{t}$, where $\Delta^{(0)}$ denotes the initial cell-width and $\gamma\in(0,1)$ denotes the decay rate. After obtaining $\widehat{\Delta z}_{j|i}^{(t)}$, the quantized $\hat{z}_{j|i}^{(t)}$ can be obtained by
$$\hat{z}_{j|i}^{(t)}=\begin{cases}z_{j|i}^{(0)}+\widehat{\Delta z}_{j|i}^{(1)}, & t=1,\\ \hat{z}_{j|i}^{(t-1)}+\widehat{\Delta z}_{j|i}^{(t)}, & t\ge2.\end{cases}\qquad(18)$$
Note that all $\hat{z}_{j|i}^{(t)}$ can be reconstructed when knowing $z_{j|i}^{(0)}$ and the quantized differences $\widehat{\Delta z}_{j|i}^{(k)}$, $k=1,\ldots,t$, as
$$\hat{z}_{j|i}^{(t)}=z_{j|i}^{(0)}+\sum_{k=1}^{t}\widehat{\Delta z}_{j|i}^{(k)}.\qquad(19)$$
After constructing $\hat{z}_{i|j}^{(t)}$, each node $i$ can update the local variables $x_i$ and $z_{j|i}$ using the quantized $\hat{z}_{i|j}^{(t)}$ from the previous iteration, i.e.,
$$x_i^{(t+1)}=\arg\min_{x_i}\Big(f_i(x_i)+\sum_{j\in\mathcal{N}_i}\hat{z}_{i|j}^{(t)}A_{ij}x_i+\frac{c\,d_i}{2}x_i^2\Big),\qquad(20)$$
$$z_{j|i}^{(t+1)}=\hat{z}_{i|j}^{(t)}+2c\,A_{ij}\,x_i^{(t+1)}.\qquad(21)$$
Overall, we conclude that the only messages that need to be transmitted are the initialized $z_{j|i}^{(0)}$ and the quantized differences $\widehat{\Delta z}_{j|i}^{(t)}$: all $\hat{z}_{j|i}^{(t)}$ can be computed from these using (19), after which all $x_i^{(t+1)}$ and $z_{j|i}^{(t+1)}$ can be computed using (20) and (21).
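To illustrate the transmission logic of (14)-(19) on a single edge, here is a sketch with hypothetical names (DiffQuantEdge, encode, decode); it assumes a scalar auxiliary variable, a one-bit quantizer, and a geometrically converging stand-in for the true $z$-sequence. In practice only the sign bit needs to travel over the channel, since the cell-width schedule $\Delta^{(0)}\gamma^{t}$ is known to both sides:

```python
import numpy as np

class DiffQuantEdge:
    """One side of an edge: tracks the shared reconstruction z_hat."""
    def __init__(self, z0, delta0=1.0, gamma=0.9):
        self.z_hat = z0                      # starts from the exchanged z^(0)
        self.delta0, self.gamma = delta0, gamma

    def encode(self, z, t):
        """Sender: 1-bit quantization of the difference z - z_hat."""
        w = self.delta0 * self.gamma ** t    # shrinking cell-width Delta^(t)
        d_hat = w / 2 if z >= self.z_hat else -w / 2   # transmit only the sign
        self.z_hat += d_hat                  # keep the local reconstruction in sync
        return d_hat                         # the only message on the channel

    def decode(self, d_hat):
        """Receiver: accumulate the quantized differences, cf. (19)."""
        self.z_hat += d_hat
        return self.z_hat

z0 = 3.7                                     # high-precision init (secure channel)
tx, rx = DiffQuantEdge(z0), DiffQuantEdge(z0)
z = z0
for t in range(1, 60):
    z = 0.5 * z                              # stand-in for a converging z-sequence
    rx.decode(tx.encode(z, t))
print(abs(rx.z_hat - z))                     # tracking error decays like gamma^t
```

Because sender and receiver apply identical updates to their copies of $\hat{z}$, the transmitted one-bit differences suffice to keep the two reconstructions synchronized, and the tracking error is eventually confined to half a (shrinking) cell.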
Having introduced how to apply adaptive differential quantization to reduce the communication bandwidth, we now proceed to explain how to guarantee privacy. Motivated by the idea of using additive noise insertion to achieve privacy-preservation, instead of inserting extra noise we propose to use the initialized $z_{j|i}^{(0)}$ as noise. To guarantee privacy, we assume that it is transmitted at high precision (see the individual privacy analysis below for details).
V-A Individual privacy
By inspection of (20), for $t\ge1$ the updates satisfy
$$\partial f_i\big(x_i^{(t+1)}\big)+\sum_{j\in\mathcal{N}_i}A_{ij}\,\hat{z}_{i|j}^{(t)}+c\,d_i\,x_i^{(t+1)}\ni 0.\qquad(22)$$
We can see that the private data is only contained in the subgradient $\partial f_i\big(x_i^{(t+1)}\big)$. As a consequence, the goal of the privacy analysis is to determine what information regarding $\partial f_i\big(x_i^{(t+1)}\big)$ is revealed during the iterations. Note that for $t=0$ we have $\hat{z}_{i|j}^{(0)}=z_{i|j}^{(0)}$.
For simplicity, assume $x_i\in\mathbb{R}$ for all $i\in\mathcal{V}$, and thus $\mathbf{x}\in\mathbb{R}^{n}$. Denote by $\mathcal{V}_c$ and $\mathcal{V}_h$ the set of corrupted nodes and honest nodes, respectively. Let $\mathcal{N}_{i,c}=\mathcal{N}_i\cap\mathcal{V}_c$ and $\mathcal{N}_{i,h}=\mathcal{N}_i\cap\mathcal{V}_h$ denote the sets of corrupted and honest neighbors of node $i$, respectively. In addition, we assume the worst case in which each honest node has at least one corrupted neighboring node, i.e., $\mathcal{N}_{i,c}\neq\emptyset$ for all $i\in\mathcal{V}_h$. Combining (14) and (15) we conclude that $\widehat{\Delta z}_{j|i}^{(t)}=z_{j|i}^{(t)}-\hat{z}_{j|i}^{(t-1)}+n_{j|i}^{(t)}$, so that (18) can be expressed as
$$\hat{z}_{j|i}^{(t)}=z_{j|i}^{(t)}+n_{j|i}^{(t)}.\qquad(23)$$
For an honest node $i\in\mathcal{V}_h$, using (21) and (23), we can express the left-hand side of (22) as
$$\partial f_i\big(x_i^{(t+1)}\big)+c\,d_i\,x_i^{(t+1)}+\sum_{j\in\mathcal{N}_i}A_{ij}\Big(\hat{z}_{j|i}^{(t-1)}+2c\,A_{ji}\,x_j^{(t)}+n_{i|j}^{(t)}\Big).\qquad(24)$$
To quantify the amount of information about the private data learned by the adversaries, we must first inspect what information is available to them. We first consider the passive adversary. As it can collect all the information available to the corrupted nodes, it has the following knowledge:

$$\big\{x_j^{(t)},\,z_{j|k}^{(t)}\big\}_{j\in\mathcal{V}_c,\,k\in\mathcal{N}_j}\cup\big\{z_{j|i}^{(0)},\,\widehat{\Delta z}_{j|i}^{(t)}\big\}_{(i,j)\in\mathcal{E}_c},$$

where $\mathcal{E}_c$ denotes the set of corrupted edges. With the above knowledge, the passive adversary is able to compute $\hat{z}_{j|i}^{(t)}$ along corrupted edges using (19) and the corresponding known terms in (24). After subtracting these known terms, the unknown terms in (24) are given by
$$\sum_{j\in\mathcal{N}_{i,h}}A_{ij}\,\hat{z}_{i|j}^{(t)}=\sum_{j\in\mathcal{N}_{i,h}}A_{ij}\big(z_{i|j}^{(t)}+n_{i|j}^{(t)}\big).\qquad(25)$$
Next, we consider the eavesdropping adversary. To minimize the expense of requiring securely encrypted communication channels, we propose to use secure channel encryption only once. More specifically, no channel encryption is involved except for transmitting $z_{j|i}^{(0)}$ during the initialization step. As a consequence, the eavesdropping adversary can listen to all transmitted messages after initialization, i.e.,

$$\big\{\widehat{\Delta z}_{j|i}^{(t)}\big\}_{(i,j)\in\mathcal{E},\,t\ge1};$$
note that it does not have knowledge about the initialized $z_{j|i}^{(0)}$, which are transmitted over securely encrypted channels. Based on (19), the adversaries can, therefore, deduce $\hat{z}_{i|j}^{(t)}-z_{i|j}^{(0)}=\sum_{k=1}^{t}\widehat{\Delta z}_{i|j}^{(k)}$ in (25), as it is known to the eavesdropping adversary. Consequently, we conclude that all that the passive and eavesdropping adversaries observe about the honest node $i$ is given by
$$\partial f_i\big(x_i^{(t+1)}\big)+\sum_{j\in\mathcal{N}_{i,h}}A_{ij}\,z_{i|j}^{(0)}+\sum_{j\in\mathcal{N}_{i,h}}A_{ij}\,n_{i|j}^{(t)},\qquad(26)$$
where the last term will converge to the all-zero vector as the iterations proceed, since the quantization noise is bounded by the decreasing cell-width. If node $i$ has at least one honest neighbor, i.e., $\mathcal{N}_{i,h}\neq\emptyset$, the term $\sum_{j\in\mathcal{N}_{i,h}}A_{ij}z_{i|j}^{(0)}$ can be considered as noise. Hence, we can, by Proposition 1, protect the private data from being revealed by making the variance of $z_{i|j}^{(0)}$ arbitrarily large at the initialization step. Therefore, arbitrarily small information leakage regarding $\partial f_i$ can be achieved at every iteration.
We remark that the proposed quantized approach can be seen as a privacy-enhanced version of the existing subspace perturbation based approach [16], since it introduces an additional noise component in (26) which helps to further protect the private data. Hence, we conclude that the conditions to guarantee the privacy of the data of an honest node $i$ under both passive and eavesdropping adversaries are:
- Node $i$ has at least one honest neighbor, i.e., $\mathcal{N}_{i,h}\neq\emptyset$.
- The communication channels are securely encrypted in the initialization phase when transmitting $z_{j|i}^{(0)}$.
V-B Output correctness and communication cost
In [40] it has been shown that if the sequence of quantization cell-widths $\{\Delta^{(t)}\}$ is finitely summable, then Douglas-Rachford splitting will converge to a fixed point which is the solution to (1). Hence, the output correctness requirement is satisfied. As for the communication cost, we can see that with the help of quantization, the communication cost of the proposed approach is substantially reduced. In fact, as we will demonstrate in the next section, we can quantize the data with a one-bit quantizer ($b=1$) and still obtain perfect output correctness and individual privacy.
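For the geometric cell-width schedule $\Delta^{(t)}=\Delta^{(0)}\gamma^{t}$ used by the proposed quantizer, this finite-summability condition holds for any decay rate $\gamma\in(0,1)$, since

$$\sum_{t=1}^{\infty}\Delta^{(t)}=\Delta^{(0)}\sum_{t=1}^{\infty}\gamma^{t}=\Delta^{(0)}\,\frac{\gamma}{1-\gamma}<\infty.$$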
The details of the proposed algorithm are summarized in Algorithm 1.
VI Numerical results
In this section, we demonstrate simulation results for the proposed approach and compare it with existing approaches. We simulated a random geometric network with $n$ nodes, where two nodes are allowed to transmit messages if their distance is within a certain radius, as this condition ensures that the corresponding graph is connected with high probability [42].
VI-A Performance of the proposed approach
We use two applications to test the performance of the proposed approach: distributed average consensus and distributed least squares. The main reason for choosing these two applications is that they are intensively investigated in the literature [43, 44, 45, 24, 11, 46, 47, 48, 49, 14, 15]. The detailed problem formulation of distributed average consensus was already introduced in (2). As for distributed least squares, assume each node $i$ has partial knowledge of a linear system (assumed to be overdetermined), including an input observation, denoted as $\mathbf{Q}_i$, and a decision vector, denoted as $\mathbf{y}_i$. Stacking the partial knowledge together, we denote $\mathbf{Q}=[\mathbf{Q}_1^{\top},\ldots,\mathbf{Q}_n^{\top}]^{\top}$ and $\mathbf{y}=[\mathbf{y}_1^{\top},\ldots,\mathbf{y}_n^{\top}]^{\top}$. The goal of privacy-preserving distributed least squares is to allow each node to obtain the global optimum solution $\mathbf{x}^{\star}=(\mathbf{Q}^{\top}\mathbf{Q})^{-1}\mathbf{Q}^{\top}\mathbf{y}$ without revealing its private data, i.e., $(\mathbf{Q}_i,\mathbf{y}_i)$. With distributed optimization this problem can be formulated as
$$\min_{\{\mathbf{x}_i\}}\ \sum_{i\in\mathcal{V}}\frac{1}{2}\|\mathbf{Q}_i\mathbf{x}_i-\mathbf{y}_i\|^2\quad\text{subject to}\quad \mathbf{x}_i=\mathbf{x}_j,\ \forall (i,j)\in\mathcal{E}.\qquad(27)$$
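As a minimal sanity check of (27) (a sketch on random stand-in data; all names and dimensions are our own assumptions), the centralized solution that every $\mathbf{x}_i$ should converge to can be computed directly from the stacked system:

```python
import numpy as np

rng = np.random.default_rng(0)
n, u = 4, 3                                  # 4 nodes, unknown vector in R^3
Q = [rng.standard_normal((5, u)) for _ in range(n)]    # local observations Q_i
x_true = rng.standard_normal(u)
y = [Qi @ x_true + 0.1 * rng.standard_normal(5) for Qi in Q]

# Centralized target: the distributed algorithm should drive every x_i here.
Q_all, y_all = np.vstack(Q), np.concatenate(y)
x_star, *_ = np.linalg.lstsq(Q_all, y_all, rcond=None)
print(np.round(x_star, 3))
```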
In all experiments, we randomly draw the private data, i.e., $s_i$ in the case of distributed average consensus and $(\mathbf{Q}_i,\mathbf{y}_i)$ in the case of distributed least squares, from a zero-mean Gaussian distribution with unit variance. In addition, the auxiliary variable $\mathbf{z}^{(0)}$ is initialized with zero-mean Gaussian distributed noise whose variance is chosen relative to the initial quantization cell-width $\Delta^{(0)}$. Moreover, for the proposed quantized approach, a one-bit (mid-rise) quantizer is used with cell-width $\Delta^{(t)}=\Delta^{(0)}\gamma^{t}$, which means that we only transmit the signs of the $\Delta z_{j|i}^{(t)}$'s, which are reconstructed at the receiver as $\pm\Delta^{(t)}/2$.
The performance of the proposed approach is demonstrated in terms of the three requirements mentioned in Section III-C.
1. Output correctness: From Fig. 3 we can see that when applying the proposed adaptive differential quantization scheme to both PDMM (QPDMM) and ADMM (QADMM), the quantization noise converges to zero and the optimization variable converges to the optimal $\mathbf{x}^{\star}$. These results validate the claim stated in Section V-B: if the sequence of cell-widths is finitely summable, output correctness is guaranteed. Hence, we conclude that the proposed approach satisfies the output correctness requirement, i.e., accuracy is not compromised by considering both quantization and privacy. Additionally, it is generally applicable to both ADMM and PDMM.
2. Communication cost: Fig. 4 demonstrates the total communication cost (the number of bits) of the proposed QADMM and QPDMM under three different privacy levels. Note that for the proposed approach we have $b=1$, since a one-bit quantizer is used. We can see that the convergence rate of the proposed approach is invariant to the privacy level.
3. Individual privacy: Fig. 5 shows the individual privacy of an arbitrary honest node over the iterations when the proposed approach is applied to the distributed average consensus problem, under the condition that there is only one honest neighboring node, i.e., the normalized mutual information measured based on (26) when $|\mathcal{N}_{i,h}|=1$. We can see that the larger the variance of the initialized auxiliary variable is, the less individual privacy is revealed, i.e., the higher the privacy level is. Hence, the proposed approach is able to guarantee individual privacy by controlling the variance of the initialized $\mathbf{z}^{(0)}$.
VI-B Comparison with existing approaches
We now compare the performance of the proposed QADMM with existing approaches, including the subspace perturbation (SP) based approach [16], the secret sharing (SS) based approach [44] and the differential privacy (DP) based approach [46]. To ensure a fair comparison, we insert the same amount of noise in all these algorithms; more specifically, all inserted noise is Gaussian distributed with zero mean and the same variance. Additionally, all algorithms are based on the ADMM optimizer. Note that since none of the existing algorithms uses a quantization scheme, the number of bits to represent each message is set to $b=64$ (MATLAB double-precision floating-point format). From Fig. 6, we can see that
- among these approaches, the differential privacy based approach does not output an accurate result, i.e., it suffers from a privacy-accuracy trade-off;
- as expected, compared to all existing approaches the proposed algorithm significantly reduces the communication cost.
One remark is in order here. Among the existing approaches, i.e., the subspace perturbation, secret sharing and differential privacy based approaches, the proposed approach is obtained by applying adaptive differential quantization to the subspace perturbation based approach, such that the communication cost is reduced without sacrificing individual privacy or output correctness. For the other two types of approaches it would also be interesting to investigate how quantization affects their performance in terms of privacy and accuracy.
VII Conclusion
In this paper, we proposed a novel yet general communication efficient privacy-preserving distributed optimization approach using adaptive differential quantization. By adopting an adaptive quantizer that dynamically decreases its cell-width at each iteration to reduce the communication cost, and by making use of additive noise insertion to achieve privacy-preservation, we are able to alleviate the trade-off between privacy and communication cost without compromising the algorithm accuracy. In addition, the proposed algorithm is able to protect the privacy of any honest node against the passive adversary by requiring only one honest neighboring node. Moreover, the proposed method is computationally very lightweight in the way it deals with an eavesdropping adversary, as no secure encryption is needed except in the initialization step. Finally, numerical tests were conducted which confirm the desirable properties of the proposed approach in terms of accuracy, privacy and communication cost, and show that the proposed approach has superior performance compared to existing approaches.
References
- [1] M. Anderson, Technology device ownership, 2015, Pew Research Center, 2015.
- [2] J. Poushter and others, “Smartphone ownership and internet usage continues to climb in emerging economies,” Pew Research Center, vol. 22, pp. 1–44, 2016.
- [3] Z. Huang, S. Mitra, and N. Vaidya, “Differentially private distributed optimization,” in Proc. Int. Conf. Distrib. Comput. Netw., pp. 1–10, 2015.
- [4] S. Han, U. Topcu, and G. J. Pappas, “Differentially private distributed constrained optimization,” IEEE Trans. Autom. Control., vol. 62, no. 1, pp 50-64, 2016.
- [5] E. Nozari, P. Tallapragada, and J. Cortés, “Differentially private distributed convex optimization via functional perturbation,” IEEE Trans. Control Netw. Syst., vol. 5, no. 1, pp 395-408, 2018.
- [6] T. Zhang and Q. Zhu, “Dynamic differential privacy for ADMM-based distributed classification learning,” IEEE Trans. Inf. Forensics Security, vol. 12, no. 1, pp. 172–187, 2016.
- [7] X. Zhang, M. M. Khalili, and M. Liu, “Recycled ADMM: Improve privacy and accuracy with less computation in distributed algorithms,” in Proc. 56th Annu. Allerton Conf. Commun., Control, Comput., pp. 959–965, 2018.
- [8] X. Zhang, M. M. Khalili, and M. Liu, “Improving the privacy and accuracy of ADMM-based distributed algorithms,” in Proc. Int. Conf. Mach. Learn., pp. 5796–5805, 2018.
- [9] Y. Xiong, J. Xu, K. You, J. Liu and L. Wu, “Privacy preserving distributed online optimization over unbalanced digraphs via subgradient rescaling,” IEEE Trans. Control Netw. Syst., 2020.
- [10] K. Tjell and R. Wisniewski, “Privacy preservation in distributed optimization via dual decomposition and ADMM,” in Proc. IEEE 58th Conf. Decis. Control., pp. 7203–7208, 2020.
- [11] K. Tjell, I. Cascudo and R. Wisniewski, “Privacy preserving recursive least squares solutions,” in Proc. Eur. Control Conf., pp.3490–3495, 2019.
- [12] R. Cramer, I. B. Damgård, and J. B. Nielsen, Secure Multiparty Computation and Secret Sharing, Cambridge University Press, 2015.
- [13] I. Damgård, V. Pastro, N. Smart, and S. Zakarias, “Multiparty computation from somewhat homomorphic encryption,” in Advances in Cryptology–CRYPTO, pp. 643–662. Springer, 2012.
- [14] Q. Li, R. Heusdens and M. G. Christensen, “Convex optimisation-based privacy-preserving distributed average consensus in wireless sensor networks,” in Proc. Int. Conf. Acoust., Speech, Signal Process., pp. 5895-5899, 2020.
- [15] Q. Li, R. Heusdens and M. G. Christensen, “Convex optimization-based privacy-preserving distributed least squares via subspace perturbation,” in Proc. Eur. Signal Process. Conf., pp. 2110-2114, 2021.
- [16] Q. Li, R. Heusdens and M. G. Christensen, “Privacy-preserving distributed optimization via subspace perturbation: A general framework,” in IEEE Trans. Signal Process., vol. 68, pp. 5983 - 5996, 2020.
- [17] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, et al., “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine learning, vol. 3, no. 1, pp. 1–122, 2011.
- [18] G. Zhang and R. Heusdens, “Bi-alternating direction method of multipliers over graphs,” in Proc. Int. Conf. Acoustics, Speech, Signal Proc. pp. 3571–3575, 2015.
- [19] G. Zhang and R. Heusdens, “Distributed optimization using the primal-dual method of multipliers,” IEEE Trans. Signal Inf. Process. Netw., vol. 4, no. 1, pp. 173–187, 2018.
- [20] T. Sherson, R. Heusdens, W. B. Kleijn, “Derivation and analysis of the primal-dual method of multipliers based on monotone operator theory,” IEEE Trans. Signal Inf. Process. Netw., vol. 5, no. 2, pp 334-347, 2018.
- [21] E. Ryu and S. P. Boyd, “Primer on monotone operator methods,” Appl. Comput. Math., vol. 15, no. 1, pp. 3–43, 2016.
- [22] G. Giaconi, D. Gündüz, H. V. Poor, “Privacy-aware smart metering: Progress and challenges,” IEEE Signal Process. Mag., vol. 35, no. 6, pp. 59-78, 2018.
- [23] D. Bogdanov, S. Laur, and J. Willemson, “Sharemind: A framework for fast privacy-preserving computations,” in Proc. 13th Eur. Symp. Res. Comput. Security: Comput. Security, pp. 192–206, 2008.
- [24] Q. Li and M. G. Christensen, “A privacy-preserving asynchronous averaging algorithm based on shamir’s secret sharing,” in Proc. Eur. Signal Process. Conf., pp. 1-5, 2019.
- [25] T. M. Cover and J. A. Thomas, Elements of Information Theory, John Wiley & Sons, 2012.
- [26] Q. Li, J. S. Gundersen, R. Heusdens and M. G. Christensen, “Privacy-preserving distributed processing: Metrics, bounds, and algorithms,” in IEEE Trans. Inf. Forensics Security. vol. 16, pp. 2090–2103, 2021.
- [27] M. Lopuhaä-Zwakenberg, B. Škorić and N. Li, “Information-theoretic metrics for local differential privacy protocols,” arXiv preprint arXiv:1910.07826, 2019.
- [28] Q. Li, M. Coutino, G. Leus and M. G. Christensen, “Privacy-preserving distributed graph filtering,” in Proc. Eur. Signal Process. Conf., pp. 2155-2159, 2021.
- [29] S. Yagli, A. Dytso and H.V. Poor, “Information-theoretic bounds on the generalization error and privacy leakage in federated learning,” in IEEE 21st International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), 2020, pp. 1–5.
- [30] Y. Bu, S. Zou, and V. V. Veeravalli, “Tightening mutual information-based bounds on generalization error,” IEEE J. Sel. Areas Info. Theory. vol. 1, no. 1, pp. 121–130, 2020.
- [31] A. Pensia, V. Jog, and P. L. Loh, “Generalization error bounds for noisy, iterative algorithms,” in Proc. of IEEE Int. Symp. Inform. Theory (ISIT), pp. 546–550, 2018.
- [32] Jeffrey Negrea, Mahdi Haghifam, Gintare Karolina Dziugaite, Ashish Khisti, and Daniel M Roy, “Information-theoretic generalization bounds for sgld via data-dependent estimates.,” in Proc. of Neural Information Processing Systems (NeurIPS), pp. 11013–11023, 2019.
- [33] P. Cuff and L. Yu, “Differential privacy as a mutual information constraint,” in Proc. 23rd ACM SIGSAC Conf. Comput. Commun. Secur., pp 43–54, 2016.
- [34] M. Götz, A. Machanavajjhala, G. Wang, X. Xiao, and J. Gehrke, “Publishing search logs—a comparative study of privacy guarantees,” IEEE Trans. Knowl. Data Eng., vol. 24, pp. 520–532, 2011.
- [35] A. Haeberlen, B. C. Pierce, and A. Narayan, “Differential privacy under fire,” in Proc. 20th USENIX Conf. Security, vol. 33, 2011.
- [36] A. Korolova, K. Kenthapadi, N. Mishra, A. Ntoulas, “Releasing search queries and clicks privately,” in Proc. Int’l Conf. World Wide Web, pp. 171–180, 2009.
- [37] D. Kifer and A. Machanavajjhala, “No free lunch in data privacy,” in SIGMOD, pp. 193–204, 2011.
- [38] J. Østergaard, Multiple-Description Lattice Vector Quantization, Ph.D. dissertation, Delft University of Technology, 2007.
- [39] D. H. M. Schellekens, T. Sherson, and R. Heusdens, “Quantisation effects in PDMM: A first study for synchronous distributed averaging,” in Proc. Int. Conf. Acoust., Speech, Signal Process., pp. 4237–4241, 2017.
- [40] J. A. G. Jonkman, T. Sherson, and R. Heusdens, “Quantisation effects in distributed optimisation,” in Proc. Int. Conf. Acoust., Speech, Signal Process., pp. 3649–3653, 2018.
- [41] D. Dolev, C. Dwork, O. Waarts, and M. Yung, “Perfectly secure message transmission,” J. Assoc. Comput. Mach., vol. 40, no. 1, pp. 17–47, 1993.
- [42] J. Dall and M. Christensen, “Random geometric graphs,” Physical review E, vol. 66, no. 1, pp. 016121, 2002.
- [43] N. Gupta, J. Katz, N. Chopra, “Privacy in distributed average consensus,” IFAC-PapersOnLine, vol. 50, no. 1, pp. 9515-9520, 2017.
- [44] N. Gupta, J. Katz, and N. Chopra, “Statistical privacy in distributed average consensus on bounded real inputs,” in Proc. Amer. Control Conf., pp. 1836–1841, 2019.
- [45] Q. Li, I. Cascudo, and M. G. Christensen, “Privacy-preserving distributed average consensus based on additive secret sharing,” in Proc. Eur. Signal Process. Conf., pp. 1-5, 2019.
- [46] E. Nozari, P. Tallapragada, and J. Cortés, “Differentially private average consensus: Obstructions, trade-offs, and optimal algorithm design,” Automatica, vol. 81, pp. 221–231, 2017.
- [47] J. He, L. Cai, C. Zhao, P. Cheng, X. Guan, “Privacy-preserving average consensus: privacy analysis and algorithm design,” IEEE Trans. Signal Inf. Process. Netw., vol. 5, no. 1, pp. 127–138, 2019.
- [48] M. H. Ruan, M. Ahmad, Y. Q. Wang, “Secure and privacy-preserving average consensus,” in Proc. Workshop Cyber-Phys. Syst. Secur. Privacy, pp. 123-129, 2017.
- [49] Y. Mo and R. M. Murray, “Privacy preserving average consensus,” IEEE Trans. Automat Contr., vol. 62, no. 2, pp. 753–765, 2017.