Private and Accurate Decentralized Optimization via
Encrypted and Structured Functional Perturbation
Abstract
We propose a decentralized optimization algorithm, termed EFPSN, that preserves the privacy of agents’ cost functions without sacrificing accuracy. The algorithm adopts the Paillier cryptosystem to construct zero-sum functional perturbations. Then, based on the perturbed cost functions, any existing decentralized optimization algorithm can be utilized to obtain the accurate solution. We theoretically prove that EFPSN is $(\epsilon, \delta)$-differentially private and can achieve nearly perfect privacy under deliberate parameter settings. Numerical experiments further confirm the effectiveness of the algorithm.
I Introduction
The problem of optimizing a global objective function through the cooperation of multiple agents has gained increased attention in recent years [1, 2]. This is driven by the wide applicability of the problem to many engineering and scientific domains, ranging from cooperative control, distributed sensing, multi-agent systems, and sensor networks to large-scale machine learning; see, e.g., [3, 4, 5, 6].
In this paper, we consider a peer-to-peer network of agents that solves the following optimization problem cooperatively:
(1) $\min_{x \in \mathcal{X}} f(x) := \sum_{i \in [m]} f_i(x),$

where $x$ is the common parameter of all the agents, $\mathcal{X}$ is the domain of $x$, and $[m] = \{1, \dots, m\}$ denotes the collection of all the agents.
Despite the enormous success of gradient-based distributed optimization algorithms, they all require agents to explicitly share optimization variables and/or gradient estimates in every iteration. This would become a problem in applications involving sensitive data. Zhu et al. [7] showed that obtaining private training data from publicly shared gradients is possible, and the recovery is pixelwise accurate for images and token-wise matching for texts. Consequently, we wish to solve (1) in a private way.
In the machine learning community, one of the common forms of the individual function is $f_i(x) = \mathbb{E}_{\xi_i \sim \mathcal{D}_i}[\ell(x, \xi_i)]$, where $\mathcal{D}_i$ is the local data distribution of agent $i$, and $\xi_i$ is a data sample or a batch of data samples. In such a case, each function $f_i$ contains information on the data distribution $\mathcal{D}_i$, which is usually sensitive. It is thus crucial to keep the objective functions private. More specifically, by privacy, we refer to keeping all $f_i$ from being inferred by adversaries. In this work, we consider two types of adversaries:
•
The eavesdropper: an external adversary having access to all information transmitted through the communication channels within the network.
•
The honest-but-curious adversary: an external adversary that corrupts a subset of the agents. The adversary knows all the information of every corrupted agent $i$, including all the information within agent $i$, e.g., the individual function $f_i$, and all the information passed from its neighbors to agent $i$. However, each corrupted agent obeys the optimization protocol precisely.
Plenty of efforts have been reported to counteract potential privacy breaches in distributed optimization. Privacy-preserving algorithms either process the messages being transmitted between agents or change the functions to be optimized. We refer to them as message-based and function-based methods, respectively. Differential Privacy (DP) [8, 9], a de facto standard for privacy preservation, has been introduced to the context of distributed optimization [10, 11, 12, 13]. The DP-based methods can be categorized into message-perturbing [10, 11, 12] and function-perturbing ones [13]. The message-perturbing methods inject noise into the messages each agent sends, while the latter adds functional noise to each agent’s cost function $f_i$.
However, the direct combination of DP and distributed optimization suffers from the accuracy-privacy trade-off. Huang et al. [12] observed that, with the other parameters fixed, the accuracy of the obtained solution degrades as the privacy budget $\epsilon$ decreases, where $\epsilon$ is inversely proportional to privacy. The papers [14, 15] adopted encryption techniques to preserve privacy in distributed optimization. Nevertheless, the heavy communication and computation overhead prevents these methods from many real-world applications.
Adding structured noise [16] is a workaround for the accuracy-privacy trade-off and the heavy overhead mentioned above. The paper [16] constructed a set of zero-sum Gaussian noises to perturb the affine terms of the objective functions that are assumed to be quadratic. Nonetheless, the method fails under an eavesdropping attack due to the naive communication method. Besides, the privacy analysis in [16] is carried out under a self-defined privacy framework. We provide a categorization of the current privacy-preserving distributed optimization algorithms in Table I.
Paper Index | [10, 11, 12] | [13] | [14, 15] | [16] | Ours |
---|---|---|---|---|---|
Message-based | ✓ | ✗ | ✓ | ✗ | ✗ |
Function-based | ✗ | ✓ | ✗ | ✓ | ✓ |
Structured-noise | ✗ | ✗ | ✗ | ✓ | ✓ |
Encryption | ✗ | ✗ | ✓ | ✗ | ✓ |
DP-based | ✓ | ✓ | ✗ | ✗ | ✓ |
In this paper, we integrate the encryption-based scheme and the structured noise method deliberately under the functional DP framework. The proposed new method, termed the Encrypted Functional Perturbation with Structured Noise algorithm (EFPSN), enjoys the benefits of several previous methods. In particular, EFPSN adopts the Paillier encryption scheme to construct zero-sum noises among the agents secretly. The noises are subsequently used to generate functional perturbations for each agent’s cost function. Such a procedure differs from those in [14, 15] and bypasses the heavy communication and computation overhead caused by encryption at every iteration. Then, based on the perturbed cost functions, any existing decentralized optimization algorithm can be utilized to obtain the accurate solution to problem (1) thanks to the structured noises. We further theoretically prove that EFPSN is $(\epsilon, \delta)$-differentially private and can achieve nearly perfect privacy under deliberate parameter settings. In other words, EFPSN can achieve nearly perfect privacy without sacrificing accuracy.
The rest of this paper is organized as follows: Section 2 specifies the notation and provides related background knowledge. Then, we develop the Encrypted Functional Perturbation with Structured Noise algorithm in Section 3. Privacy analysis under the DP framework is carried out in Section 4. Finally, simulation examples and the conclusion are presented in Sections 5 and 6, respectively.
II Preliminaries
This section introduces the notation, graph-related concepts, and some background knowledge on Hilbert spaces and the Paillier cryptosystem, since generating functional perturbations relies on orthonormal systems in a Hilbert space, and the Paillier cryptosystem [17] is adopted to construct structured noises privately.
II-A Notation
We use $\mathbb{R}$ to denote the set of real numbers and $\mathbb{R}^n$ the Euclidean space of dimension $n$. The space of scalar-valued infinite sequences is denoted by $\mathbb{R}^{\mathbb{N}}$. Let $\mathbb{Z}$ and $\mathbb{Z}_{+}$ be the sets of integers and positive integers, respectively. Given $N \in \mathbb{Z}_{+}$, $\mathbb{Z}_N^{*}$ denotes the set of positive integers which are smaller than $N$ and do not have common factors other than 1 with $N$. Let $\ell_2$ be the space of infinite square-summable sequences. For $\mathcal{X} \subseteq \mathbb{R}^n$, $L_2(\mathcal{X})$ denotes the set of square-integrable measurable functions over $\mathcal{X}$. $\mathbf{1}$ denotes a column vector with all entries equal to 1. A vector is viewed as a column vector unless otherwise stated. $A^{\top}$ denotes the transpose of the matrix $A$, and $u^{\top} v$ denotes the scalar product of two vectors $u$ and $v$. We use $\langle \cdot, \cdot \rangle$ to denote the inner product and $\|\cdot\|$ to denote the Euclidean norm of a vector (induced Euclidean norm for a matrix). A square matrix $A$ is column-stochastic when its elements in every column add up to one. A matrix $A$ is said to be doubly stochastic when both $A$ and $A^{\top}$ are column-stochastic.
We use $\mathbb{P}(\mathcal{A})$ to denote the probability of an event $\mathcal{A}$, $p_X$ the probability density function of a random variable $X$, and $\mathbb{E}[X \mid \mathcal{F}]$ the expectation of a random variable $X$ with $\mathcal{F}$ denoting the sigma algebra, which will be omitted when clear from the context. For an encryption scheme, $E(\cdot)$ and $D(\cdot)$ represent the encoder and decoder, respectively. Let $\gcd$, $\mathrm{lcm}$, and $\bmod$ be the greatest common divisor, the least common multiple, and the modulo operator, respectively. $\mathcal{N}(\mu, \sigma^2)$ is the (multivariate) Gaussian distribution with (vector) mean $\mu$ and variance (covariance matrix) $\sigma^2$; when the covariance matrix is singular, it represents the degenerate Gaussian distribution.
II-B Graph Related Concepts
We assume that the agents interact on an undirected graph, described by a weight matrix $W = [w_{ij}]$. More specifically, if agents $i$ and $j$ can communicate and interact with each other, then $w_{ij}$, the $(i,j)$-th entry of $W$, is positive. Otherwise, $w_{ij}$ equals zero. The neighbor set $\mathcal{N}_i$ of agent $i$ is defined as the set of agents $\{j \mid w_{ij} > 0\}$; note that $i \in \mathcal{N}_i$ always holds. Denote $L$ as the graph Laplacian associated with $W$. Let $0 = \lambda_1 \le \lambda_2 \le \cdots \le \lambda_m$ be the eigenvalues of $L$ and $U$ be the unitary matrix that satisfies $L = U \Lambda U^{\top}$. Denote $\lambda_2$ as the second smallest eigenvalue of $L$ and $\lambda_m$ as the largest eigenvalue of $L$.
II-C Hilbert Spaces
A Hilbert space $\mathcal{H}$ is a complete inner-product space. A set $\{e_k\}_{k \in \mathbb{N}} \subseteq \mathcal{H}$ is an orthonormal system if $\langle e_k, e_l \rangle = 0$ for $k \ne l$ and $\|e_k\| = 1$ for all $k$. If, in addition, the set of linear combinations of $\{e_k\}_{k \in \mathbb{N}}$ is dense in $\mathcal{H}$, then $\{e_k\}_{k \in \mathbb{N}}$ is an orthonormal basis. If $\mathcal{H}$ is separable, then any orthonormal basis is countable, and we have

(2) $h = \sum_{k=1}^{\infty} \langle h, e_k \rangle \, e_k$

for any $h \in \mathcal{H}$. Define the coefficient sequence $\theta$ by $\theta_k = \langle h, e_k \rangle$ for $k \in \mathbb{N}$. Then $\theta \in \ell_2$ and, by Parseval's identity, $\|\theta\|_{\ell_2} = \|h\|$. Let $\Phi$ be the linear bijection that maps the coefficient sequence $\theta$ to $h$. For an arbitrary $\mathcal{X} \subseteq \mathbb{R}^n$, $L_2(\mathcal{X})$ is a Hilbert space, and the inner product is the integral of the product of functions. Moreover, $L_2(\mathcal{X})$ is separable. In this paper, we denote $\{e_k\}_{k \in \mathbb{N}}$ as an orthonormal basis for $L_2(\mathcal{X})$ and $\Phi$ the corresponding linear bijection between coefficient sequences and functions.
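As a numerical sanity check of the expansion (2) and Parseval's identity, the sketch below expands a simple function in the sine system $e_k(x) = \sqrt{2}\sin(k\pi x)$ on $L_2([0,1])$ (an illustrative choice of orthonormal system, not the one used later in the paper) and compares $\|\theta\|_{\ell_2}^2$ with $\|h\|^2$:

```python
import numpy as np

# Expand h(x) = x(1-x) in the orthonormal system e_k(x) = sqrt(2) sin(k*pi*x)
# on L^2([0,1]) and check Parseval's identity numerically.
xs = np.linspace(0.0, 1.0, 20001)
dx = xs[1] - xs[0]

def integrate(vals):
    # trapezoidal rule on the uniform grid
    return (vals.sum() - 0.5 * (vals[0] + vals[-1])) * dx

h = xs * (1.0 - xs)                      # the function to expand
theta = np.array([integrate(h * np.sqrt(2.0) * np.sin(k * np.pi * xs))
                  for k in range(1, 50)])  # theta_k = <h, e_k>

# sum(theta_k^2) approaches ||h||^2 = 1/30 as more terms are kept
print(integrate(h * h), np.sum(theta ** 2))
```

The truncation error decays quickly here because the coefficients fall off like $k^{-3}$; 49 terms already agree with $\|h\|^2$ to within $10^{-6}$.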
II-D Paillier Cryptosystem
The Paillier Cryptosystem is an algorithm for public key cryptography. The algorithm applies to the scenario of sending a private message over open and insecure communication links, and it consists of key generation, encryption, and decryption steps as follows:
•
Key Generation
–
The message receiver chooses two large prime numbers $p$ and $q$ randomly and independently of each other such that $\gcd(pq, (p-1)(q-1)) = 1$. This property is assured if both primes are of equal length [18].
–
Compute $n = pq$ and $\lambda = \mathrm{lcm}(p-1, q-1)$.
–
Select a random integer $g \in \mathbb{Z}_{n^2}^{*}$ such that the modular multiplicative inverse $\mu = \left(L(g^{\lambda} \bmod n^2)\right)^{-1} \bmod n$ exists, where $L(u) = (u-1)/n$.
–
Let the public key be $(n, g)$ and the private key be $(\lambda, \mu)$.
•
Encryption: To encrypt a plaintext $m \in \mathbb{Z}_n$, the sender selects a random number $r \in \mathbb{Z}_n^{*}$ and computes the ciphertext $c = E(m) = g^{m} \cdot r^{n} \bmod n^2$.
•
Decryption: To decrypt the ciphertext $c$, the receiver computes the decrypted text $m = D(c) = L(c^{\lambda} \bmod n^2) \cdot \mu \bmod n$.
One notable homomorphic property the Paillier encryption scheme holds is: given any $m_1, m_2 \in \mathbb{Z}_n$, $D\big(E(m_1) \cdot E(m_2) \bmod n^2\big) = m_1 + m_2 \bmod n$.
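The steps above can be sketched as a toy implementation. The tiny primes 1789 and 2003 are arbitrary hypothetical choices for illustration only; a real deployment needs large primes and a vetted library:

```python
import math
import random

# Toy Paillier cryptosystem following the key generation, encryption,
# and decryption steps described above.
def keygen(p=1789, q=2003):
    assert math.gcd(p * q, (p - 1) * (q - 1)) == 1
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    g = n + 1                                            # a standard valid choice of g
    L = lambda u: (u - 1) // n
    mu = pow(L(pow(g, lam, n * n)), -1, n)               # inverse exists by the gcd condition
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:                           # r must be coprime with n
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

pk, sk = keygen()
c1, c2 = encrypt(pk, 123), encrypt(pk, 456)
# Homomorphic addition: D(E(m1) * E(m2) mod n^2) = m1 + m2 mod n
print(decrypt(pk, sk, (c1 * c2) % (pk[0] ** 2)))  # -> 579
```

Note that the ciphertexts $c_1, c_2$ are re-randomized by $r$ on every encryption, yet their product still decrypts to the sum of the plaintexts, which is exactly the property EFPSN exploits in Phase I.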
III Algorithm Design
In this section, we propose the Encrypted Functional Perturbation with Structured Noise algorithm (EFPSN in short) that privately solves the problem (1). Unlike the majority of privacy-preserving algorithms, EFPSN does not sacrifice accuracy for privacy.
To achieve privacy, EFPSN adds structured functional perturbations to the individual cost functions $f_i$. Specifically, the algorithm aims at making the functional perturbations zero-sum. EFPSN consists of two phases, as shown in Algorithm 1.
In Phase I, the agents in the network cooperate to generate functional perturbations in a way that is immune to eavesdropping attacks and, to some extent, honest-but-curious attacks. First, they generate the keys and random numbers required by the Paillier encryption scheme. Then, they encrypt and send random noises to their neighbors, and the noises are further decrypted to construct the zero-sum perturbation. Due to encryption, the signals are transferred privately and securely under eavesdropping. However, since Paillier encryption only works on integers, we set a precision order: each noise value is scaled by a power of ten and floored to an integer before encryption, and rescaled after decryption.
Subsequently, in Line 10, each agent calculates its aggregate noise by subtracting the sum of the noises it receives from the sum of the noises it sends. Due to the homomorphic property of Paillier encryption, each agent can sum the received ciphertexts and run the decryption only once per noise coefficient. This saves computation, especially when an agent has a large number of neighbors.
The Paillier encryption scheme guarantees privacy under eavesdropping attacks. In terms of honest-but-curious attacks, the noise coefficient sequence of each agent $i$ remains unknown to the attacker as long as the attacker does not corrupt all of agent $i$'s neighbors. Under such circumstances, the privacy of agent $i$ is maintained.
It is worth noting that every noise value sent by one agent is received by exactly one other agent, so the sent and received noises cancel in aggregate. Such a construction forces the agents' aggregate noises to sum to zero across the network for every coefficient index, up to the flooring error. Therefore, we have generated a set of (approximately) zero-sum signals given a large precision order.
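The zero-sum construction of Phase I can be sketched as follows, with the encryption omitted for clarity; the 5-agent graph and Gaussian noise magnitudes are hypothetical:

```python
import random

# Each agent i sends a random noise to each neighbor j; agent i's own
# perturbation is (sum of noises sent) - (sum of noises received).
# The perturbations then cancel across the whole network.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]   # undirected communication graph
m = 5

sent = {}                                           # one value per directed pair
for i, j in edges:
    sent[(i, j)] = random.gauss(0.0, 1.0)
    sent[(j, i)] = random.gauss(0.0, 1.0)

perturb = []
for i in range(m):
    out_sum = sum(v for (a, _), v in sent.items() if a == i)
    in_sum = sum(v for (_, b), v in sent.items() if b == i)
    perturb.append(out_sum - in_sum)

print(abs(sum(perturb)))   # zero up to floating-point error
```

Each individual `perturb[i]` is a non-degenerate Gaussian unknown to anyone who misses even one of agent $i$'s channels, yet the network-wide sum vanishes by construction.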
Finally, in Line 12, the agents perturb the cost functions by adding $\Phi(s_i)$, where $s_i$ is agent $i$'s aggregate noise sequence and $\Phi$ is the function that maps a sequence in $\ell_2$ to a function in $L_2(\mathcal{X})$. Such a construction depends on the orthonormal system in $L_2(\mathcal{X})$ one chooses. For instance, given the orthonormal system $\{e_k\}_{k \in \mathbb{N}}$ and a sequence $s = \{s_k\}_{k \in \mathbb{N}}$, $\Phi(s) = \sum_{k} s_k e_k$. The orthonormal system we use will be specified later.
Since the aggregate noise sequences $s_i$ are zero-sum in the infinite-precision limit, by the linearity of $\Phi$ we have

(3) $\sum_{i \in [m]} \Phi(s_i) = \Phi\Big(\sum_{i \in [m]} s_i\Big) = 0.$

Therefore, the set of perturbing functions $\{\Phi(s_i)\}$ is zero-sum in that limit. Additionally, the decrypted text in Line 10 is accurate up to the chosen precision order. Consequently, the error brought by flooring will be dominated by the floating-point error once the precision order is set to a moderately large value. Though the perturbations are zero-sum in aggregate, each agent gains privacy from its own non-zero functional perturbation $\Phi(s_i)$.
In Phase II, the agents may conduct any distributed optimization algorithm on the perturbed functions $\tilde{f}_i = f_i + \Phi(s_i)$. Since $\sum_i \tilde{f}_i = \sum_i f_i$ when the perturbations cancel exactly, the obtained solution solves the original problem (1). Namely, EFPSN solves problem (1) without any accuracy degeneration given a proper precision order.
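Phase II can be illustrated with a minimal decentralized gradient descent (DGD) run on scalar quadratics, standing in for "any distributed optimization algorithm". The agents, quadratics, and zero-sum affine perturbations below are all hypothetical numbers, not the paper's experimental setup:

```python
import numpy as np

# Perturbed local objectives: f_i(x) = (x - a_i)^2 / 2 + b_i * x,
# where the b_i sum to zero (the structured perturbation). The optimizer
# of the unperturbed average problem is mean(a) = 3.
a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
b = np.array([0.7, -0.3, 1.1, -0.9, -0.6])    # zero-sum perturbation
W = np.array([                                 # doubly stochastic mixing matrix
    [0.6, 0.2, 0.2, 0.0, 0.0],
    [0.2, 0.6, 0.2, 0.0, 0.0],
    [0.2, 0.2, 0.2, 0.2, 0.2],
    [0.0, 0.0, 0.2, 0.6, 0.2],
    [0.0, 0.0, 0.2, 0.2, 0.6],
])

x = np.zeros(5)                                # one local iterate per agent
for t in range(20000):
    step = 0.05 / (1 + t) ** 0.6               # diminishing step for exact consensus
    grad = (x - a) + b                         # gradients of the perturbed f_i
    x = W @ x - step * grad

print(x)  # all entries close to 3.0 despite the non-zero local perturbations
```

Each agent optimizes a visibly distorted local objective, yet because the distortions cancel in the network average, the consensus limit coincides with the unperturbed optimum.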
Remark III.1.
EFPSN combines encryption, functional perturbation, and structured noise, and is superior to using any one of the techniques alone. In previous message-based methods, encryption at every iteration results in heavy communication and computation overhead, whereas the function-level encryption in EFPSN avoids this burden: only an insignificant communication and computation overhead occurs in Phase I. Regarding the existing function-based methods, the solution obtained after functional perturbation suffers from a privacy-related deviation from the solution to problem (1), which leads to a privacy-accuracy trade-off. For EFPSN, however, the optimization accuracy is independent of the privacy budget, as will be elaborated in the following sections. Moreover, using structured noise alone fails in the presence of eavesdropping, limiting its applicability.
IV Privacy Analysis
In this section, we analyze the privacy-related properties of EFPSN under the framework of differential privacy. In particular, we prove that the mechanism generating the masked functions is differentially private. Since DP is preserved under any post-processing, EFPSN remains differentially private.
First, we introduce the definition of -adjacency, which was originally proposed in [13].
Definition 1 ($\mathcal{V}$-adjacency).
Given any normed vector space $(\mathcal{V}, \|\cdot\|_{\mathcal{V}})$ with $\mathcal{V} \subseteq L_2(\mathcal{X})$, two sets of functions $F = \{f_i\}_{i=1}^m$ and $F' = \{f'_i\}_{i=1}^m$ are $\mathcal{V}$-adjacent if there exists $i_0 \in [m]$ such that

(4) $f_i = f'_i \ \text{for all } i \ne i_0, \quad \text{and} \quad f_{i_0} - f'_{i_0} \in \mathcal{V}.$
We adopt the standard $(\epsilon, \delta)$-DP definition under our functional setting.
Definition 2 ($(\epsilon, \delta)$-Differential Privacy).
Consider a random map $M$ from the function space $L_2(\mathcal{X})^m$ to an arbitrary set $\mathcal{Y}$. Given $\epsilon, \delta \ge 0$, the map $M$ is $(\epsilon, \delta)$-differentially private if, for any two $\mathcal{V}$-adjacent sets of functions $F$ and $F'$ and any set $\mathcal{O} \subseteq \mathcal{Y}$, one has

(5) $\mathbb{P}\big(M(F) \in \mathcal{O}\big) \le e^{\epsilon}\, \mathbb{P}\big(M(F') \in \mathcal{O}\big) + \delta.$
We choose our adjacency space as follows. Given a positive weight sequence $\{b_k\}_{k \in \mathbb{N}}$, define the adjacency vector space $\mathcal{V}$ to be the image of the resulting weighted sequence space under $\Phi$, i.e.,

(6) $\mathcal{V} = \Phi\Big(\big\{\theta \in \mathbb{R}^{\mathbb{N}} : \textstyle\sum_{k=1}^{\infty} (\theta_k / b_k)^2 < \infty\big\}\Big),$

where $\theta_k$ is the $k$th element of $\theta$. The rationale for considering such a space will be made clear from the analysis. Moreover,

(7) $\|h\|_{\mathcal{V}} = \Big(\textstyle\sum_{k=1}^{\infty} \big(\langle h, e_k \rangle / b_k\big)^2\Big)^{1/2}$

is a norm on $\mathcal{V}$.
Now we introduce our main theorem about the privacy-preserving property of EFPSN.
Theorem IV.1.
Given the adjacency space $\mathcal{V}$ chosen above, the mechanism in Alg. 1 is $(\epsilon, \delta)$-differentially private when the precision order and the perturbation order are sufficiently large, where $\epsilon$ and $\delta$ depend on the noise variance, the weight sequence, the graph spectrum, and an arbitrary positive real number $t$.
Proof.
When the precision order is sufficiently large, the flooring is effectively lossless and the encryption process is exact. For convenience, we therefore ignore the encryption process and the flooring in Alg. 1. Denote
(8) | ||||
In such a case, the mechanism in Alg. 1 essentially adds functional perturbations constructed from a set of degenerate Gaussian noises. Specifically, we have
(9) |
and
(10) |
are the zero-sum coefficients.
Let be a set of -adjacent functions of that differ only in the -th element. Let be the map such that . Define , as the map that returns the first coefficients and the -th coefficient of , respectively. Similarly, define as the map that returns the first columns and the -th column of , respectively. For any , is the th element of . Let and .
We have
(12) | ||||
and
(13) | ||||
By the linearity of , we have for all and , where . Denoting , we have
(14) |
(15) | ||||
To prove that the mechanism is $(\epsilon, \delta)$-DP, it suffices to show that the ratio of the densities of the two outputs is bounded by $e^{\epsilon}$ with probability at least $1 - \delta$.
We know that
(16) | ||||
Denote
We show that Rat is bounded with a certain probability.
Since and only differ in one element, has at most one non-zero element. Noting that is random and drawn from , is a univariate Gaussian random variable. From (10), each element of is at most of variance . Thus, is a univariate Gaussian random variable with variance less than or equal to .
We further bound the summation term in its variance following an approach similar to the proof of Theorem V.2 in [13]:
(17) | ||||
Let be an arbitrary positive real number. When holds, we have
Rat | (18) | |||
By adopting the Chernoff bound for Gaussian random variables, we have
(19) |
Namely, Rat is bounded by with probability , where , , and .
Therefore, under proper parameter settings, the mechanism is -DP with defined above. ∎
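For reference, the Gaussian tail (Chernoff) bound invoked in the last step of the proof is the standard one: for a centered univariate Gaussian with variance $\sigma_X^2$,

```latex
% Standard Chernoff (tail) bound for a centered Gaussian:
\Pr[X \ge t] \;\le\; \exp\!\Big(-\frac{t^2}{2\sigma_X^2}\Big),
\qquad
\Pr[|X| \ge t] \;\le\; 2\exp\!\Big(-\frac{t^2}{2\sigma_X^2}\Big),
\quad t > 0 .
```

Applying it with the variance bound derived in (17) yields the failure probability $\delta$ in the theorem.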
Remark IV.2.
Several factors contribute to the privacy parameter $\epsilon$. First, the communication graph topology affects the Laplacian eigenvalues $\lambda_2$ and $\lambda_m$. Since $\lambda_m$ is bounded by twice the maximum degree, and $\lambda_2$ represents the algebraic connectivity of the graph, a strongly and evenly connected graph helps EFPSN achieve the best privacy performance. In addition, the noise parameters need to be deliberately chosen to decrease $\epsilon$: since the noise magnitude enters $\epsilon$ inversely, it is natural that a larger noise magnitude contributes to a smaller $\epsilon$ and, consequently, better privacy.
Remark IV.3.
This term measures the privacy-preserving capacity of EFPSN from a different perspective. Given some privacy budget $\epsilon$, the larger the term is, the more functions can be protected by our mechanism. Additionally, this term shrinks as the number of protected functions grows, suggesting that protecting a larger dataset requires more noise.
Remark IV.4.
Choosing an arbitrarily large noise magnitude while keeping the precision order sufficiently large results in arbitrarily small $\epsilon$ and $\delta$ simultaneously. Namely, we are able to attain a nearly perfectly private mechanism while the algorithmic accuracy is not sacrificed.
V Numerical Experiments
In this section, we evaluate the performance of the proposed EFPSN algorithm using numerical experiments. We consider both convex and non-convex objective functions, and EFPSN is compared with the non-zero-sum functional perturbation mechanism proposed in [13]. Before displaying the results, we first demonstrate how the orthonormal system, based on which we generate the coefficient-to-function mapping $\Phi$, is constructed.
V-A Generating the Orthonormal System
Since the number of elements of an orthonormal basis grows combinatorially as the number of variables increases, and even the simplest real-world applications, such as conducting logistic regression on pictures, require at least hundreds of parameters, the number of elements in a full basis is prohibitively large. Consequently, for a function with many variables, we pick a small subset of the variables to perturb. Also, instead of generating an orthonormal basis, we only generate a finite orthonormal system in $L_2(\mathcal{X})$.
Since the generated orthonormal system can be extended to some orthonormal basis, perturbing the orthonormal system is equivalent to perturbing along the full basis with the remaining noise coefficients set to zero. Therefore, all of our previous discussion on accuracy and privacy holds under such a perturbing mechanism.
The orthonormal system is constructed from the Gram-Schmidt orthonormalization of monomial (Taylor) terms. Given the number of perturbed variables and the maximum order, we generate the monomials of the chosen variables whose total order is smaller than or equal to the maximum order. Then we orthonormalize all terms using the Gram-Schmidt method.
We visualize an example of one perturbing function obtained this way in Fig. 1.
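The Gram-Schmidt step can be sketched as follows. For brevity the sketch orthonormalizes the one-variable monomials $1, x, x^2, x^3$ over $L_2([0,1])$ on a grid; the paper's multivariate construction proceeds the same way with monomials of several variables:

```python
import numpy as np

# Gram-Schmidt orthonormalization of monomials over L^2([0,1]),
# with inner products computed by the trapezoidal rule on a fine grid.
xs = np.linspace(0.0, 1.0, 100001)
dx = xs[1] - xs[0]

def inner(u, v):
    # trapezoidal approximation of the L^2 inner product
    w = u * v
    return (w.sum() - 0.5 * (w[0] + w[-1])) * dx

basis = []
for k in range(4):
    f = xs ** k
    for e in basis:                          # subtract components along earlier elements
        f = f - inner(f, e) * e
    basis.append(f / np.sqrt(inner(f, f)))   # normalize

gram = np.array([[inner(u, v) for v in basis] for u in basis])
print(gram)  # close to the 4x4 identity matrix
```

On $[0,1]$ this procedure reproduces the shifted Legendre polynomials up to normalization, so the resulting Gram matrix is the identity up to discretization error.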

V-B Accuracy Test
In this part, we validate the accuracy of the proposed EFPSN method.
V-B1 Convex Case
We consider a classification task on MNIST dataset using logistic regression.
We implement the logistic regression model using PyTorch. Specifically, since each picture in MNIST is of size $28 \times 28$ with a single channel, there is only one linear layer in the model, of which the input and output dimensions are 784 and 10, respectively. Together with the bias, the model has 7850 parameters. We set the perturbing parameters as described above, and the experiments are conducted under different noise levels.
Consider 5 agents in the network, connected as shown in Fig. 2. Each agent holds the same number of randomly assigned training data points. We adopt decentralized stochastic gradient descent in Phase II of Alg. 1. The batch size, the initial learning rate, and the number of gradient updates per agent are fixed across runs; the learning rate remains constant for most of the run and is decreased near the last steps.

The result is shown in Fig. 3. In Fig. 3(a), the horizontal axis displays different noise magnitudes, and the vertical axis represents the deviation of the obtained solution from the optimal solution $x^{\star}$, where $x^{\star}$ is the solution generated by centralized gradient descent.
When the noise magnitude is small, the results from EFPSN and the non-zero-sum algorithm are nearly identical, both close to the noise-free case. However, as the noise magnitude increases, the solution obtained from the non-zero-sum method starts to deviate quickly from the optimum, and the deviation grows at a roughly constant rate. The blue line (EFPSN solution) does not rise until the noise magnitude is large, at which point the orange line is 4 orders of magnitude higher.
The rise in the blue line stems from the slight disagreements between the local decision variables $x_i$. Note that our perturbed functions only guarantee the zero-sum property when each agent holds the same decision variable. Our previous error analysis assumes such exact agreement between agents. Yet, in practice, slight differences between the $x_i$ exist due to the finite step size. With EFPSN, one can always generate a more accurate solution using a finer learning rate.
Fig. 3(b) demonstrates the classification accuracy of the logistic model under different algorithms and noise levels. The pattern matches that of Fig. 3(a). Basically, for the non-zero-sum algorithm, the test accuracy starts to drop dramatically once the noise magnitude reaches a moderate level. In contrast, the model trained by EFPSN remains as accurate as the noise-free case until the noise magnitude is several orders of magnitude larger. Namely, EFPSN provides much larger (by at least 4 orders of magnitude) privacy budgets while not degenerating accuracy.


V-B2 Non-convex Case
To further justify our method under the non-convex setting, we consider an image classification task on MNIST using a convolutional neural network. We adopt the classic LeNet [19], which consists of 13426 parameters. We choose to perturb the bias of its last linear layer, with the same perturbing parameters as before. Except for the model, all the settings are identical to the convex case.


In Fig. 4(a), the y-axis depicts the squared norm of the average gradient over all the agents. Trends similar to the convex case reappear in the non-convex case. Unlike the non-zero-sum method, EFPSN generates solutions closer to a stationary point under all noise levels. Also, in Fig. 4(b), EFPSN remains as accurate as the noise-free case even at large noise magnitudes. The non-zero-sum method, nonetheless, can barely hold its accuracy as the noise grows.
The experiments demonstrate the efficacy of our method on both convex and non-convex problems.
V-C Privacy Test
To further validate EFPSN's efficacy in privacy preservation, we conduct the DLG [7] attack on agents with zero-sum and non-zero-sum noise, respectively. The general idea of DLG is to construct some dummy data and match its gradient with the ground truth. The algorithm is shown in Alg. 2. For better results, we use iDLG [20], a more efficient and stable version of DLG.
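The leakage that DLG-style attacks exploit can be seen in closed form for the simplest case: for a linear layer with a bias under squared loss, the per-sample gradient satisfies $\partial L / \partial W = (\partial L / \partial b)\, x^{\top}$, so the private input $x$ can be read off the shared gradient directly. The single-sample setup below is a hypothetical illustration of this principle, not the iDLG optimization itself:

```python
import numpy as np

# Exact input recovery from the gradient of a biased linear layer.
rng = np.random.default_rng(0)
x = rng.normal(size=4)                   # private data point
y = 1.5                                  # private label
W = rng.normal(size=(1, 4))
bias = rng.normal(size=1)

residual = W @ x + bias - y              # dL/d(output) for squared loss, up to a factor
grad_W = residual[:, None] * x[None, :]  # dL/dW = residual * x^T
grad_b = residual                        # dL/db = residual

x_recovered = grad_W[0] / grad_b[0]      # attacker divides the two shared gradients
print(np.allclose(x_recovered, x))
```

Perturbing the objective, as EFPSN does, corrupts exactly this gradient information, which is why the recovered images below degrade as the noise level grows.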
With either EFPSN or the non-zero-sum noise method, each agent receives some non-zero functional perturbation of roughly the same magnitude. Since iDLG is carried out at the agent level, it makes no difference between the two algorithms. Therefore, we conduct iDLG on agent 1 in Fig. 2, with the noise generated from EFPSN at different noise levels.
Specifically, we assume the mixing matrix is known to the attacker, and the attacker has access to at least one of the communication channels connected to agent 1 (either eavesdropping or corrupting one of agent 1's neighbors will do). Therefore, the attacker knows agent 1's perturbed gradient and agent 1's decision parameters. The attacker does not know the functional perturbation, so the true gradient remains unrevealed. Namely, the attacker is trying to recover the raw data using inexact gradient information, and the larger the noise level, the more inexact the gradient is.
Fig. 5 and Fig. 6 present the iDLG attacker's typical inference results on the logistic model and LeNet. The top left subfigure is the raw data, and the remaining subfigures are the adversary's estimates of the raw data at different iterations (from 0 to 240). As the noise level increases, the retrieved picture becomes blurred. Interestingly, though we perturb the original problem functionally, the effect is equivalent to directly perturbing the dataset. Generally, beyond a moderate noise level, the recovered picture is unrecognizable to humans. At that noise level, however, the accuracy of the model trained by the non-zero-sum method has dropped below 10% for both the convex and non-convex problems (as shown in Fig. 3 and Fig. 4). This suggests that the non-zero-sum solution would be too inaccurate by the time it provides enough privacy. In contrast, the EFPSN solution retains accuracy comparable to the noise-free case. Thus, EFPSN is capable of preserving privacy without degenerating accuracy.








VI Conclusion
In this paper, we proposed the Encrypted Functional Perturbation with Structured Noise (EFPSN) algorithm that solves the decentralized optimization problem (1) privately and accurately. Given exact consensus between agents, EFPSN eliminates the privacy-accuracy trade-off by constructing a zero-sum functional perturbation. Since this construction requires secure communication between agents, we adopt the Paillier encryption scheme to defend against eavesdropping attackers. We rigorously proved the privacy property of EFPSN under the differential privacy framework. Simulations confirmed the efficacy of EFPSN in protecting privacy while maintaining accuracy.
References
- [1] A. Nedic and A. Ozdaglar, “Distributed subgradient methods for multi-agent optimization,” IEEE Transactions on Automatic Control, vol. 54, no. 1, pp. 48–61, 2009.
- [2] S. Pu and A. Nedić, “Distributed stochastic gradient tracking methods,” Mathematical Programming, vol. 187, no. 1, pp. 409–457, 2021.
- [3] S. Pu, A. Garcia, and Z. Lin, “Noise reduction by swarming in social foraging,” IEEE Transactions on Automatic Control, vol. 61, no. 12, pp. 4007–4013, 2016.
- [4] T. Yan, Y. Gu, T. He, and J. A. Stankovic, “Design and optimization of distributed sensing coverage in wireless sensor networks,” ACM Transactions on Embedded Computing Systems (TECS), vol. 7, no. 3, pp. 1–40, 2008.
- [5] R. Murphey and P. M. Pardalos, Cooperative control and optimization, vol. 66. Springer Science & Business Media, 2002.
- [6] A. G. Roy, S. Siddiqui, S. Pölsterl, N. Navab, and C. Wachinger, “Braintorrent: A peer-to-peer environment for decentralized federated learning,” arXiv preprint arXiv:1905.06731, 2019.
- [7] L. Zhu, Z. Liu, and S. Han, “Deep leakage from gradients,” in Advances in Neural Information Processing Systems, vol. 32, 2019.
- [8] J. Cortes, G. E. Dullerud, S. Han, J. Le Ny, S. Mitra, and G. J. Pappas, “Differential privacy in control and network systems,” in 2016 IEEE 55th Conference on Decision and Control (CDC), (Las Vegas, NV), pp. 4252–4272, IEEE, Dec. 2016.
- [9] C. Dwork and A. Roth, “The Algorithmic Foundations of Differential Privacy,” Foundations and Trends® in Theoretical Computer Science, vol. 9, no. 3-4, pp. 211–407, 2013.
- [10] X. Chen, L. Huang, L. He, S. Dey, and L. Shi, “A Differential Private Method for Distributed Optimization in Directed Networks via State Decomposition,” p. 8, 2021.
- [11] Y. Lou, L. Yu, and S. Wang, “Privacy Preservation in Distributed Subgradient Optimization Algorithms,” arXiv:1512.08822 [cs, math], Dec. 2015. arXiv: 1512.08822.
- [12] Z. Huang, S. Mitra, and N. Vaidya, “Differentially Private Distributed Optimization,” in Proceedings of the 2015 International Conference on Distributed Computing and Networking, (Goa India), pp. 1–10, ACM, Jan. 2015.
- [13] E. Nozari, P. Tallapragada, and J. Cortes, “Differentially private distributed convex optimization via functional perturbation,” IEEE Transactions on Control of Network Systems, vol. 5, no. 1, pp. 395–408, Mar. 2018.
- [14] Y. Lu and M. Zhu, “Privacy preserving distributed optimization using homomorphic encryption,” Automatica, vol. 96, pp. 314–325, Oct. 2018.
- [15] C. Zhang and Y. Wang, “Enabling Privacy-Preservation in Decentralized Optimization,” IEEE Transactions on Control of Network Systems, vol. 6, pp. 679–689, June 2019.
- [16] N. Gupta, S. Gade, N. Chopra, and N. H. Vaidya, “Preserving Statistical Privacy in Distributed Optimization,” IEEE Control Systems Letters, vol. 5, pp. 779–784, July 2021.
- [17] P. Paillier, “Public-key cryptosystems based on composite degree residuosity classes,” in International conference on the theory and applications of cryptographic techniques, pp. 223–238, Springer, 1999.
- [18] B. Fulton, “Review of introduction to modern cryptography by jonathan katz and yehuda lindell publisher: Chapman; hall-crc 2008 1-58488-551-3,” ACM SIGACT News, vol. 41, no. 4, p. 44–47, 2010.
- [19] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
- [20] B. Zhao, K. R. Mopuri, and H. Bilen, “idlg: Improved deep leakage from gradients,” CoRR, vol. abs/2001.02610, 2020.