Convergence and Privacy of Decentralized Nonconvex Optimization with Gradient Clipping and Communication Compression
Abstract
Achieving communication efficiency in decentralized machine learning has been attracting significant attention, with communication compression recognized as an effective technique in algorithm design. This paper takes a first step to understand the role of gradient clipping, a popular strategy in practice, in decentralized nonconvex optimization with communication compression. We propose PORTER, which considers two variants of gradient clipping added before or after taking a mini-batch of stochastic gradients, where the former variant PORTER-DP allows local differential privacy analysis with additional Gaussian perturbation, and the latter variant PORTER-GC helps to stabilize training. We develop a novel analysis framework that establishes their convergence guarantees without assuming the stringent bounded gradient assumption. To the best of our knowledge, our work provides the first convergence analysis for decentralized nonconvex optimization with gradient clipping and communication compression, highlighting the trade-offs between convergence rate, compression ratio, network connectivity, and privacy.
Keywords: communication compression, gradient clipping, convergence rate, local differential privacy
1 Introduction
Decentralized machine learning has been attracting significant attention in recent years; the underlying task can often be modeled as a nonconvex finite-sum optimization problem:

$$\min_{\boldsymbol{x} \in \mathbb{R}^d} \; f(\boldsymbol{x}) := \frac{1}{n} \sum_{i=1}^{n} f_i(\boldsymbol{x}), \qquad f_i(\boldsymbol{x}) := \frac{1}{m} \sum_{z \in \mathcal{Z}_i} \ell(\boldsymbol{x}; z), \tag{1}$$

where $\boldsymbol{x} \in \mathbb{R}^d$ and $z$ denote the optimization parameter and one data sample, $\ell(\boldsymbol{x}; z)$ denotes the sample loss function that is nonconvex in $\boldsymbol{x}$, and $f_i$ and $f$ denote the local objective function at agent $i$ and the global objective function, respectively. In addition, $\mathcal{Z}_i$ denotes the dataset at agent $i$, $m = |\mathcal{Z}_i|$ denotes the local sample size, and $n$ denotes the number of agents. An undirected communication graph $\mathcal{G}$ is used to model the connectivity between any two agents, where there is an edge between agents $i$ and $j$ only if they can communicate. The goal is to efficiently optimize the global objective function in a decentralized manner, subject to the network connectivity constraints specified by $\mathcal{G}$.
Communication efficiency is critical to decentralized optimization algorithms, as communication can quickly become the bottleneck of the system as the number of agents and the size of the model increase. This has led to the development of communication compression (or quantization) techniques, which can significantly reduce the communication burden per round by transferring compressed information, especially when the communication bandwidth is limited. Therefore, a number of recent works have focused on designing decentralized nonconvex optimization algorithms with communication compression, including but not limited to [KSJ19, SDGD21, ZLL+22, TLQ+19, HP23, YCC+23].
Built upon this line of work, this paper aims to understand the role of gradient clipping in decentralized nonconvex optimization algorithms with communication compression. On the one hand, gradient clipping has been used widely in privacy-preserving algorithms [ACG+16] to enable (local) differential privacy guarantees [Dwo08]. On the other hand, gradient clipping is also observed to be beneficial in stabilizing neural network training [ZHSJ20a]. However, since gradient clipping necessarily introduces bias, characterizing the convergence becomes much more challenging compared to the unclipped counterparts. As a result, most of the existing theoretical analyses for stochastic gradient algorithms with clipping, in the context of centralized and server/client settings, make strong assumptions such as the bounded gradient assumption [ACG+16, WJEG19, LZLC22] and the uniformly bounded gradient assumption [YZCL22, ZJFW20, ZHSJ20a]. To the best of our knowledge, the convergence of stochastic gradient algorithms with clipping in the decentralized setting has not been investigated before.
1.1 Our contributions
This paper proposes PORTER (cf. Algorithm 1), a communication-efficient decentralized algorithm for nonconvex finite-sum optimization with gradient clipping and communication compression. (The name is coined for two reasons: 1) PORTER has a strong connection to the prior algorithm BEER, porter being a kind of dark beer; and 2) the authors developed this algorithm in Porter Hall at Carnegie Mellon University.) PORTER is built on BEER [ZLL+22], a fast decentralized algorithm with communication compression proposed recently, by introducing gradient clipping to the local stochastic gradient computation at each agent, while inheriting desirable designs such as error feedback and stochastic gradient tracking that are crucial in enabling the fast convergence of BEER. PORTER considers two variants of gradient clipping, corresponding to adding it before or after taking a mini-batch of stochastic gradients. In particular, the former variant PORTER-DP allows local differential privacy (LDP) analysis with additional Gaussian perturbation, and the latter variant PORTER-GC helps to stabilize training. Assuming a smooth clipping operator (Definition 2) and general compression operators (Definition 3), the highlights of our contributions are as follows.
1. We establish that PORTER-DP (cf. Algorithm 1) achieves $(\epsilon, \delta)$-LDP under appropriate Gaussian perturbation. Under the bounded gradient assumption (when gradient clipping can be ignored), PORTER-DP converges in average squared gradient norm, evaluated at the average parameter $\bar{\boldsymbol{x}}$, at a rate on the order of the baseline utility $\zeta$, i.e., the utility achieved by a centralized stochastic algorithm guaranteeing $(\epsilon, \delta)$-DP with $nm$ data samples [ACG+16], up to polynomial factors in the compression ratio $\alpha$ and the mixing rate $\rho$ of the topology.

2. However, the bounded gradient assumption might be too stringent to hold in practice. We further establish that, under the local variance and bounded dissimilarity assumptions, PORTER-DP converges in minimum gradient norm at a rate on the order of $\zeta^{1/2}$, again up to polynomial factors in $\alpha$ and $\rho$.

3. We establish that under the local variance and bounded dissimilarity assumptions, by properly choosing the mini-batch size, PORTER-GC converges in minimum gradient norm at an $O(1/T^{1/4})$ rate, which matches the convergence rate of classical centralized stochastic algorithms.
Our work develops a novel analysis framework that establishes these convergence guarantees without assuming the stringent bounded gradient assumption, highlighting comprehensive trade-offs between convergence rate, compression ratio, network connectivity, and privacy. To the best of our knowledge, our work provides the first private decentralized optimization algorithm with communication compression, and a systematic investigation of gradient clipping in the fully decentralized setting. Table 1 provides a detailed comparison of PORTER-DP with prior art on private server/client algorithms, where the bounded gradient assumption is in effect for all works except ours.
| Algorithm | Privacy | Compression operator | Bounded gradient | Utility | Communication rounds |
|---|---|---|---|---|---|
| DP-SGD [ACG+16] | DP | - | ✓ | $\zeta$ | |
| DDP-SRM [WJEG19] | DP | - | ✓ | | |
| Soteria-SGD (1) [LZLC22] | LDP | Unbiased | ✓ | | |
| PORTER-DP (Algorithm 1) | LDP | General | ✓ | | |
| PORTER-DP (Algorithm 1) | LDP | General | ✗ | | |

(1) $\alpha = \frac{1}{1+\omega}$, where $\omega$ is the parameter for unbiased compression.
1.2 Related works
Decentralized optimization algorithms have been extensively studied to solve large-scale optimization problems. We review the most closely related works in this section, and refer readers to [NRB20, XPNK20] for more comprehensive reviews.
Decentralized stochastic nonconvex optimization.
Decentralized stochastic algorithms have been an actively researched area in recent years. Various algorithms have been proposed by directly adapting existing centralized algorithms, e.g., [KDG03, XB04, Sha07, BJ13, LZZ+17, ALBR19, WJ21]. However, simple adaptations usually fail to achieve better convergence rates. Gradient tracking [ZM10], originally proposed by the control theory community, can be used to track the global gradient at each agent, which leads to a simple and systematic framework for extending existing centralized algorithms to the decentralized setting. Gradient tracking can be used in both deterministic optimization algorithms, e.g., [DLS16, NOS17, QL18, LCCC20], and stochastic algorithms, e.g., [SLH20, XKK22b, XKK22a, LLC22, HSZ+22, LY22].
Communication compression.
In [DSZOR15, AGL+17], gradient compression was adopted to create server/client distributed SGD algorithms; however, the large variance of the compressed gradients leads to a sub-optimal convergence rate. [SFD+14] first proposed the use of error feedback to compensate for the variance induced by compression. [SCJ18, AHJ+18, MGTR19, LKQR20, GBLR21, LR21] adopted similar mechanisms to improve convergence of server/client distributed optimization algorithms, and [RSF21, FSG+21] formalized the error feedback mechanism and achieved an $O(1/T)$ convergence rate for smooth nonconvex objective functions. [TGZ+18, KSJ19, SDGD21, TMHP20, ZLL+22, YCC+23, LLP23, ZBLR21] further extended communication compression schemes to the decentralized setting.
Private optimization algorithms.
The concern of leaking sensitive data has been increasing with the rapid development of large-scale machine learning systems. To address this concern, the notion of differential privacy is widely adopted [DMNS06, Dwo08], where a popular approach to protecting privacy is adding noise to the model or the gradients. This approach was first adopted in the single-server setting to design differentially private optimization algorithms [ACG+16, WYX17, INS+19, FKT20, CWH20, WXDX20], while [HDJ+20, ACCÖ21, NBD22, DLC+22, LZLC22, MS23, ZCH+22] considered differential privacy for the server/client distributed setting.
Gradient clipping.
Understanding gradient clipping has gained significant attention in recent years. Earlier works, e.g., [PMB13, BBP13, KLL16, KFI17, YGG17], used gradient clipping as a pure heuristic to mitigate vanishing/exploding gradients without theoretical understanding. Later, [ZHSJ20b, ZKV+20, ZJFW20, RLDJ23] introduced theoretical analyses to understand its impact on the convergence rate and training performance. This question is also investigated in [CWH20, ZCH+22, DKX+22, FLFL23], which apply gradient clipping to limit the magnitude of each sample gradient, so that the variance of the privacy perturbation can be decided without the bounded gradient assumption. While finishing up this paper, we became aware of [KHS23], which also develops convergence guarantees on the minimum gradient norm of clipped stochastic gradient algorithms in the centralized setting with a piece-wise linear clipping operator. In contrast, our focus is on the decentralized setting with a smooth clipping operator, where extra care is taken to deal with the discrepancy between the local and global objective functions.
1.3 Paper organization and notation
Section 2 introduces preliminary concepts, Section 3 describes the algorithm development, Section 4 shows the theoretical performance guarantees for PORTER, Section 5 provides numerical evidence to support the analysis, and Section 6 concludes the paper. Proofs are postponed to appendices.
Throughout this paper, we use uppercase and lowercase boldface letters to represent matrices and vectors, respectively. We use $\|\cdot\|_2$ for the matrix operator norm, $\|\cdot\|_{\mathrm{F}}$ for the Frobenius norm, $\boldsymbol{I}_n$ for the $n$-dimensional identity matrix, $\boldsymbol{1}_n$ for the $n$-dimensional all-one vector, and $\boldsymbol{0}_n$ for the $n$-dimensional zero matrix. For two real functions $f$ and $g$ defined on $\mathbb{R}^+$, we say $f = O(g)$ or $f \lesssim g$ if there exists some universal constant $c > 0$ such that $f(x) \le c\, g(x)$. The notation $f = \Omega(g)$ or $f \gtrsim g$ means $g \lesssim f$.
2 Preliminaries
Mixing.
The information mixing between agents is conducted by updating the local information via a weighted sum of information from neighbors, which is characterized by a mixing (gossiping) matrix. An important quantity associated with this matrix is the mixing rate, defined in Definition 1.
Definition 1 (Mixing matrix and mixing rate).
The mixing matrix is a matrix $\boldsymbol{W} = [w_{ij}] \in \mathbb{R}^{n \times n}$, such that $w_{ij} = 0$ if agents $i$ and $j$ are not connected according to the communication graph $\mathcal{G}$. Furthermore, $\boldsymbol{W} \boldsymbol{1}_n = \boldsymbol{1}_n$ and $\boldsymbol{1}_n^\top \boldsymbol{W} = \boldsymbol{1}_n^\top$. The mixing rate of a mixing matrix $\boldsymbol{W}$ is defined as

$$\rho := \Big\| \boldsymbol{W} - \frac{1}{n} \boldsymbol{1}_n \boldsymbol{1}_n^\top \Big\|_2. \tag{2}$$
The mixing rate describes the connectivity of a communication graph and the speed of information sharing. Generally, a better connected graph leads to a smaller mixing rate; for example, $\boldsymbol{W}$ can be the averaging matrix $\frac{1}{n}\boldsymbol{1}_n \boldsymbol{1}_n^\top$ for a fully connected communication network, which results in $\rho = 0$. A comprehensive list of bounds on $\rho$ is provided by [NOR18, Proposition 5]. Our analysis does not require the mixing matrix to be doubly stochastic, which allows us to use a non-symmetric matrix with negative entries as the mixing matrix, such as the FDLA matrix [XB04], which achieves a smaller mixing rate under the same connectivity pattern.
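To make Definition 1 concrete, the following numpy sketch builds a Metropolis-Hastings mixing matrix (one standard construction satisfying Definition 1; the helper names are ours) for a ring graph, and evaluates the mixing rate in (2).

```python
import numpy as np

def metropolis_mixing(adj: np.ndarray) -> np.ndarray:
    """Build a Metropolis-Hastings mixing matrix from a 0/1 adjacency matrix;
    the result is symmetric with rows and columns summing to one."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

def mixing_rate(W: np.ndarray) -> float:
    """rho = ||W - (1/n) 1 1^T||_2, cf. (2)."""
    n = W.shape[0]
    return np.linalg.norm(W - np.ones((n, n)) / n, 2)

# Ring of n = 10 agents; better-connected graphs give a smaller rho.
n = 10
adj = np.zeros((n, n), dtype=int)
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1
print(mixing_rate(metropolis_mixing(adj)))  # < 1 since the ring is connected
```

Denser topologies drive $\rho$ toward $0$; the FDLA matrix mentioned above further reduces $\rho$ for a fixed topology, at the cost of solving a semidefinite program offline.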
Gradient clipping.
In practice, gradient clipping is frequently adopted to ensure the gradients stay within a predetermined region, so that the variance of the privacy perturbation can be decided accordingly. The clipping operator we adopt is a smooth clipping operator [YZCL22], defined in Definition 2, which scales a vector into a ball of radius $\tau$ centered at the origin.
Definition 2 (Smooth clipping operator).
For $\tau > 0$, the clipping operator $\mathrm{clip}_\tau: \mathbb{R}^d \to \mathbb{R}^d$ is defined as
$$\mathrm{clip}_\tau(\boldsymbol{x}) := \frac{\tau}{\tau + \|\boldsymbol{x}\|_2}\, \boldsymbol{x}.$$
For $\boldsymbol{X} = [\boldsymbol{x}_1, \ldots, \boldsymbol{x}_n] \in \mathbb{R}^{d \times n}$, the distributed clipping operator is defined as
$$\mathrm{clip}_\tau(\boldsymbol{X}) := \big[\mathrm{clip}_\tau(\boldsymbol{x}_1), \ldots, \mathrm{clip}_\tau(\boldsymbol{x}_n)\big].$$
Remark 1.
Another widely used clipping operator is the piece-wise linear clipping operator, which rescales its input whenever the input's norm is larger than $\tau$ and does nothing otherwise, defined by
$$\mathrm{clip}_\tau^{\mathrm{pl}}(\boldsymbol{x}) := \min\Big\{1, \frac{\tau}{\|\boldsymbol{x}\|_2}\Big\}\, \boldsymbol{x}.$$
Figure 1 plots the norm of a vector before and after clipping for these two clipping operators, showing that they behave quite similarly.
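As an illustration, here is a small numpy sketch of the two operators, assuming the smooth clipping takes the form $\mathrm{clip}_\tau(\boldsymbol{x}) = \frac{\tau}{\tau + \|\boldsymbol{x}\|_2}\boldsymbol{x}$ reconstructed in Definition 2; both outputs approach norm $\tau$ as the input norm grows, matching the behavior plotted in Figure 1.

```python
import numpy as np

def clip_smooth(x: np.ndarray, tau: float) -> np.ndarray:
    # Smooth clipping (Definition 2): output norm is tau*||x|| / (tau + ||x||) < tau.
    return tau / (tau + np.linalg.norm(x)) * x

def clip_piecewise(x: np.ndarray, tau: float) -> np.ndarray:
    # Piece-wise linear clipping (Remark 1): rescale only when ||x|| > tau.
    nrm = np.linalg.norm(x)
    return x if nrm <= tau else (tau / nrm) * x

rng = np.random.default_rng(0)
x = rng.standard_normal(5)
for scale in [0.1, 1.0, 10.0, 100.0]:
    v = scale * x
    print(np.linalg.norm(clip_smooth(v, 1.0)), np.linalg.norm(clip_piecewise(v, 1.0)))
```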

Compression operators.
Following [RSF21, FSG+21], Definition 3 defines a randomized general compression operator, which only guarantees that the expected squared compression error is at most a constant fraction of the squared magnitude of the original message $\boldsymbol{x}$.
Definition 3 (General compression operator).
A randomized map $\mathcal{C}: \mathbb{R}^d \to \mathbb{R}^d$ is an $\alpha$-compression operator if, for all $\boldsymbol{x} \in \mathbb{R}^d$ and some $\alpha \in (0, 1]$, the following inequality holds:
$$\mathbb{E}\big\|\mathcal{C}(\boldsymbol{x}) - \boldsymbol{x}\big\|_2^2 \le (1 - \alpha)\, \|\boldsymbol{x}\|_2^2.$$
Many widely used compression schemes can be modeled as special cases, for example, random sparsification and top-$k$ compression.
Example 1 (Random sparsification).
Random sparsification keeps each element of a $d$-dimensional vector independently with probability $p$. Let $\boldsymbol{m} \in \{0, 1\}^d$, where $m_j \sim \mathrm{Bernoulli}(p)$ independently for $j \in [d]$; then random sparsification is defined as $\mathcal{C}(\boldsymbol{x}) := \boldsymbol{m} \circ \boldsymbol{x}$, where $\circ$ denotes the entrywise product, which satisfies Definition 3 with $\alpha = p$.
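A minimal numpy sketch of this compressor, together with a Monte Carlo check of the bound in Definition 3 (function names are ours):

```python
import numpy as np

def random_sparsify(x: np.ndarray, p: float, rng) -> np.ndarray:
    # Biased random sparsification: keep each coordinate independently w.p. p,
    # without rescaling, so E||C(x) - x||^2 = (1 - p) ||x||^2 (alpha = p).
    mask = rng.random(x.shape) < p
    return np.where(mask, x, 0.0)

rng = np.random.default_rng(1)
x = rng.standard_normal(1000)
p = 0.05
ratios = [np.linalg.norm(random_sparsify(x, p, rng) - x) ** 2
          / np.linalg.norm(x) ** 2 for _ in range(2000)]
print(np.mean(ratios))  # concentrates around 1 - p = 0.95
```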
Local differential privacy.
In decentralized learning systems, all agents share potentially sensitive information with their neighbors. If some agents are exploited by adversaries, the system faces a risk of privacy leakage even when system-level privacy is protected. Therefore, we introduce local differential privacy (LDP), defined in Definition 4 following [DJW13, CABP13, XYD19], which protects each agent's privacy from leaking to other agents.
Definition 4 (Local differential privacy (LDP)).
A randomized mechanism $\mathcal{M}_i$ with domain $\mathcal{D}$ and range $\mathcal{R}$ satisfies $(\epsilon, \delta)$-local differential privacy for client $i$, if for any two neighboring datasets $\mathcal{Z}_i, \mathcal{Z}_i'$ at client $i$ and for any subset of outputs $\mathcal{S} \subseteq \mathcal{R}$, it holds that

$$\mathbb{P}\big[\mathcal{M}_i(\mathcal{Z}_i) \in \mathcal{S}\big] \le e^{\epsilon}\, \mathbb{P}\big[\mathcal{M}_i(\mathcal{Z}_i') \in \mathcal{S}\big] + \delta. \tag{3}$$

The two datasets $\mathcal{Z}_i$ and $\mathcal{Z}_i'$ are called neighboring if they differ in only one data point at client $i$.
Definition 4 is a stricter privacy guarantee than general differential privacy (DP), which it implies. Consequently, LDP requires a larger perturbation variance than general DP. To identify the impact of the decentralized LDP setting compared to the centralized DP setting, we define the baseline utility

$$\zeta := \frac{\sqrt{d \log(1/\delta)}}{n m \epsilon}, \tag{4}$$

which can be understood as the final utility of a centralized system with $nm$ data samples that guarantees $(\epsilon, \delta)$-DP. For typical private problems, the local sample size has to be large enough for the privacy perturbation to deliver meaningful guarantees; we impose a mild lower bound on $m$ to simplify the presentation. For example, the problem defined in (1) has $nm$ data samples in total, and running an $(\epsilon, \delta)$-DP algorithm on one server that can access all data achieves utility $\zeta$ after sufficiently many iterations.
3 Proposed algorithm
We propose PORTER, a novel stochastic decentralized optimization algorithm for finding first-order stationary points of nonconvex finite-sum problems with gradient clipping and communication compression; the details are described in Algorithm 1. On a high level, PORTER is composed of local stochastic gradient updates and neighboring information sharing, following a similar structure as BEER [ZLL+22] in its use of error feedback [RSF21], which accelerates convergence with biased compression operators, and stochastic gradient tracking to track the global gradient locally at each agent. A key difference, however, is the use of gradient clipping. Motivated by efficient training and privacy preservation, we consider two variants, corresponding to clipping before taking a mini-batch of stochastic gradients with privacy perturbation (PORTER-DP), and clipping after taking the mini-batch (PORTER-GC), respectively.
Before proceeding, we introduce some notation convenient for describing decentralized algorithms. Let $\boldsymbol{x}_i \in \mathbb{R}^d$ be the optimization variable at agent $i$; we define the collection of all optimization variables as a matrix $\boldsymbol{X} = [\boldsymbol{x}_1, \ldots, \boldsymbol{x}_n] \in \mathbb{R}^{d \times n}$, and the average as $\bar{\boldsymbol{x}} := \frac{1}{n} \boldsymbol{X} \boldsymbol{1}_n$. The gradient estimates, stochastic gradients, perturbation noise, the compressed surrogates, and their corresponding agent-wise values are defined analogously. The distributed gradient is defined as $\nabla F(\boldsymbol{X}) := [\nabla f_1(\boldsymbol{x}_1), \ldots, \nabla f_n(\boldsymbol{x}_n)]$.
To provide more detail, PORTER initializes the gradient-related variables to zero and all other variables to the same random value, which improves stability in early iterations and simplifies the analysis, but has no impact on the convergence rates. Each iteration of PORTER consists of three major steps.
- (1) Computing clipped stochastic gradients. We consider two options. The first option, PORTER-DP, corresponds to differentially-private SGD (lines 6-7), where line 6 computes a batch of clipped stochastic gradients on each agent, and line 7 adds Gaussian noise to ensure privacy. The second option, PORTER-GC, corresponds to SGD with gradient clipping (lines 9-10), where line 9 computes a mini-batch stochastic gradient on each agent, and line 10 applies clipping to each mini-batch stochastic gradient.
- (2) Updating gradient estimates. Line 11 updates the auxiliary variable, which is a compressed surrogate of the gradient estimate, by adding the compressed difference to itself. Meanwhile, each agent sends its compressed difference to all of its neighbors, so that every neighbor can reconstruct the auxiliary variable by accumulating these differences. Line 12 then adds a correction term and applies stochastic gradient tracking to update the gradient estimates.
- (3) Updating local variables. Mirroring the gradient update, each agent takes a descent step along its gradient estimate, mixes its local variable with the compressed surrogates received from its neighbors, and maintains the corresponding compressed surrogate via error feedback; a schematic sketch of one full iteration is given below.
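To summarize the three steps, the following numpy sketch paraphrases one PORTER-DP iteration in the matrix notation above, following the BEER-style template. It is an illustration under our reading of Algorithm 1, not a verbatim transcription; in particular, for brevity it clips the aggregated mini-batch gradient, whereas PORTER-DP clips each per-sample gradient before averaging.

```python
import numpy as np

def porter_dp_step(X, H, V, G, g_prev, W, grad_oracle, compress,
                   eta, gamma, tau, sigma_p, rng):
    # X: d x n local variables;          H: compressed surrogate of X.
    # V: d x n gradient-tracking terms;  G: compressed surrogate of V.
    # g_prev: perturbed local gradients from the previous iteration (d x n).
    n = X.shape[1]
    I = np.eye(n)
    # Step (3)-style update: mix compressed surrogates, descend along trackers.
    X = X + gamma * H @ (W - I) - eta * V
    H = H + compress(X - H)                  # error feedback on variables
    # Step (1), PORTER-DP option: clip, then add Gaussian privacy noise.
    g = grad_oracle(X)                       # d x n stochastic gradients
    g = g * (tau / (tau + np.linalg.norm(g, axis=0, keepdims=True)))
    g = g + sigma_p * rng.standard_normal(g.shape)
    # Step (2): stochastic gradient tracking with compressed surrogates.
    V = V + gamma * G @ (W - I) + g - g_prev
    G = G + compress(V - G)                  # error feedback on trackers
    return X, H, V, G, g

# Toy run on f_i(x) = ||x||^2 / 2, so the gradient oracle is the identity map.
rng = np.random.default_rng(0)
d, n = 5, 4
W = np.full((n, n), 1.0 / n)                 # fully connected topology: rho = 0
compress = lambda M: np.where(rng.random(M.shape) < 0.5, M, 0.0)
X = rng.standard_normal((d, n))
H, V, G, g_prev = X.copy(), X.copy(), np.zeros((d, n)), X.copy()
for _ in range(10):
    X, H, V, G, g_prev = porter_dp_step(X, H, V, G, g_prev, W, lambda Z: Z,
                                        compress, 0.05, 0.5, 1.0, 0.01, rng)
print(np.linalg.norm(X))                     # final iterate norm of the toy run
```

Setting `sigma_p = 0` turns this sketch into the PORTER-GC option, up to the placement of the clipping step.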
4 Theoretical guarantees
This section theoretically analyzes the privacy and convergence properties of PORTER under various assumptions. Section 4.1 lists all assumptions required for the convergence analysis, Section 4.2 shows the privacy and convergence of PORTER-DP using a specific perturbation variance, and Section 4.3 shows the convergence of PORTER-GC, i.e., clipped SGD without privacy perturbation.
4.1 Assumptions
We start with the smoothness assumption in Assumption 1, which is standard and required for all of our analyses.
Assumption 1 ($L$-smoothness).
For any $\boldsymbol{x}, \boldsymbol{y} \in \mathbb{R}^d$ and any datum $z$ in dataset $\mathcal{Z}_i$, $i \in [n]$,
$$\big\|\nabla \ell(\boldsymbol{x}; z) - \nabla \ell(\boldsymbol{y}; z)\big\|_2 \le L\, \|\boldsymbol{x} - \boldsymbol{y}\|_2.$$
Note that the gradient clipping operator is utilized to ensure the gradients are bounded, and the boundedness of the gradients in turn enables the application of differentially-private mechanisms. However, stochastic gradients at different agents lose their correct scaling after clipping, which breaks the stationary-point property at local minima and introduces bias. To simplify the analysis, one assumption that has been adopted widely in theoretical analyses [LZLC22] is the following bounded gradient assumption.
Assumption 2 (Bounded gradient).
For any $\boldsymbol{x} \in \mathbb{R}^d$ and any datum $z$ in dataset $\mathcal{Z}_i$, $i \in [n]$, $\|\nabla \ell(\boldsymbol{x}; z)\|_2 \le G$.
Under Assumption 2, PORTER can skip the clipping operator, and the stochastic gradient becomes an unbiased estimator of the local gradient $\nabla f_i$, while still allowing privacy analysis. However, Assumption 2 is rather strong and seldom met in practice. For example, the gradient of a quadratic loss function is not bounded. In addition, it may result in an overly pessimistic clipping operation when there are possibly adversarial gradients with large norms among the samples. Going beyond the strong bounded gradient assumption, we consider a much milder assumption that bounds the local variance as follows, which is more standard in the analysis of unclipped stochastic algorithms.
Assumption 3 (Bounded local variance).
For any $i \in [n]$ and $\boldsymbol{x} \in \mathbb{R}^d$,
$$\frac{1}{m} \sum_{z \in \mathcal{Z}_i} \big\|\nabla \ell(\boldsymbol{x}; z) - \nabla f_i(\boldsymbol{x})\big\|_2^2 \le \sigma^2.$$
An additional challenge is associated with the decentralized environment, where the local objective functions can be rather distinct from the global one. Our analysis identifies the following assumption, called bounded gradient dissimilarity, which says that the difference between the local gradients and the global gradient should be small relative to the global one.
Assumption 4 (Bounded gradient dissimilarity).
For any $i \in [n]$ and $\boldsymbol{x} \in \mathbb{R}^d$,
$$\big\|\nabla f_i(\boldsymbol{x}) - \nabla f(\boldsymbol{x})\big\|_2^2 \le \nu^2\, \big\|\nabla f(\boldsymbol{x})\big\|_2^2.$$
4.2 Privacy and convergence guarantees of PORTER-DP
We start by analyzing the privacy and convergence guarantees of PORTER-DP, assuming a mini-batch of size $b$ is sampled at each agent in every iteration.
Privacy guarantee.
Theorem 1 proves that PORTER-DP is $(\epsilon, \delta)$-LDP when the variance of the Gaussian perturbation is set properly.
Theorem 1 (Local differential privacy).
Let $b$ denote the mini-batch size and $q := b/m$ the per-sample sampling probability. There exist universal constants $c_1, c_2 > 0$ such that, for any $\epsilon < c_1 q^2 T$ and $\delta \in (0, 1)$, PORTER-DP is $(\epsilon, \delta)$-LDP after $T$ iterations if we set

$$\sigma_p = \frac{c_2\, \tau\, q \sqrt{T \log(1/\delta)}}{\epsilon}, \tag{5}$$

where $\tau$ is the clipping threshold and $\sigma_p$ is the standard deviation of the Gaussian perturbation.
Theorem 1 shows that PORTER-DP can achieve $(\epsilon, \delta)$-LDP regardless of whether the bounded gradient assumption holds, because the clipping operator guarantees that the norms of all stochastic gradients are bounded by the clipping threshold $\tau$, so that the perturbation variance can be set accordingly.
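Assuming the calibration rule reconstructed in (5), a short helper shows how the perturbation scale grows with the clipping threshold $\tau$, the sampling rate $q = b/m$, and the iteration count $T$; the universal constant is unspecified in the theorem and set to $1$ here purely for illustration.

```python
import numpy as np

def dp_noise_std(tau, q, T, eps, delta, c2=1.0):
    # sigma_p = c2 * tau * q * sqrt(T * log(1/delta)) / eps, cf. (5).
    # c2 stands in for the unspecified universal constant.
    return c2 * tau * q * np.sqrt(T * np.log(1.0 / delta)) / eps

# Stricter privacy (smaller eps) or longer runs (larger T) require more noise.
for eps in [0.5, 1.0, 2.0]:
    print(eps, dp_noise_std(tau=1.0, q=0.01, T=1000, eps=eps, delta=1e-5))
```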
Convergence with bounded gradient assumption.
We start by analyzing the convergence when the gradients are bounded under Assumption 2, in which case PORTER-DP can omit the clipping operator. Theorem 2 presents the convergence result of PORTER-DP using general compression operators (Definition 3).
Theorem 2 (Convergence of PORTER-DP with bounded gradient assumption).
Assume Assumptions 1 and 2 hold, and use general compression operators (Definition 3). With appropriate choices of the step size, mixing parameter, batch size, and perturbation variance (specified in Appendix B), PORTER-DP converges in average squared gradient norm as

$$\frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}\big\|\nabla f(\bar{\boldsymbol{x}}^t)\big\|_2^2 \lesssim \mathrm{poly}\Big(\frac{1}{\alpha}, \frac{1}{1 - \rho}\Big)\, \zeta, \tag{6}$$

where the polynomial factors in the compression ratio $\alpha$ and the mixing rate $\rho$ are made explicit in Appendix B.
Theorem 2 shows the convergence of the average squared gradient norm with explicit dependency on the compression ratio $\alpha$ and mixing rate $\rho$. When we fix $\alpha$ and $\rho$, Theorem 2 reaches an $O(\zeta)$ final average squared gradient norm, which matches the result of SoteriaFL-SGD [LZLC22], the state-of-the-art stochastic algorithm with local differential privacy guarantees and unbiased communication compression in the server/client setting. However, due to extra complexities induced by the decentralized setting and biased compression, PORTER-DP takes more iterations to converge than SoteriaFL-SGD; in addition, PORTER-DP has a slightly worse dependency on the compression ratio $\alpha$.
Convergence without bounded gradient assumption.
A more interesting and challenging scenario is when Assumption 2 does not hold, in which case PORTER-DP applies gradient clipping to ensure the gradients are bounded to suit the privacy constraints. Fortunately, Theorem 3 characterizes the convergence behavior of Algorithm 1 in this case, under the much weaker bounded local variance and bounded gradient dissimilarity assumptions.
Theorem 3 (Convergence of PORTER-DP without bounded gradient assumption).
Assume Assumptions 1, 3 and 4 hold, and use general compression operators (Definition 3). With appropriate choices of the step size, clipping threshold, mixing parameter, batch size, and perturbation variance (specified in Appendix C), Algorithm 1 converges in minimum gradient norm as

$$\min_{0 \le t < T} \mathbb{E}\big\|\nabla f(\bar{\boldsymbol{x}}^t)\big\|_2 \lesssim \mathrm{poly}\Big(\frac{1}{\alpha}, \frac{1}{1 - \rho}, \sigma\Big)\, \zeta^{1/2}, \tag{7}$$

where the polynomial factors are made explicit in Appendix C.
Theorem 3 shows that PORTER-DP converges in minimum gradient norm with explicit dependency on the compression ratio $\alpha$, the mixing rate $\rho$, and the gradient variance $\sigma$, under much weaker assumptions. To compare with Theorem 2, we can take the square root of (6), which translates to minimum gradient norm convergence on the order of $\zeta^{1/2}$. In comparison, although Theorem 3 has worse dependencies on the compression ratio $\alpha$ and the mixing rate $\rho$, it matches the dependency on the baseline utility $\zeta$.
4.3 Convergence guarantees of PORTER-GC
Theorem 4 further establishes the convergence of PORTER-GC, which applies the clipping operator to mini-batch stochastic gradients without privacy perturbation, in the absence of the bounded gradient assumption.
Theorem 4 (Convergence of PORTER-GC without bounded gradient assumption).
Assume Assumptions 1, 3 and 4 hold, and use general compression operators (Definition 3). With appropriate choices of the step size, clipping threshold, mixing parameter, and batch size (specified in Appendix D), PORTER-GC converges in minimum gradient norm as
$$\min_{0 \le t < T} \mathbb{E}\big\|\nabla f(\bar{\boldsymbol{x}}^t)\big\|_2 \lesssim \mathrm{poly}\Big(\frac{1}{\alpha}, \frac{1}{1 - \rho}\Big)\, \frac{1}{T^{1/4}},$$
where the polynomial factors are made explicit in Appendix D.
Theorem 4 suggests that by picking the clipping threshold and batch size properly, PORTER-GC converges at an $O(1/T^{1/4})$ rate. In comparison, standard centralized SGD converges in average squared gradient norm at an $O(1/\sqrt{T})$ rate, which also translates to minimum gradient norm convergence on the order of $O(1/T^{1/4})$. Therefore, using gradient clipping in decentralized SGD does not affect the convergence rate, provided the hyperparameters are chosen properly.
When the gradients are bounded, we can omit the clipping operator in PORTER-GC, which then becomes the same as BEER [ZLL+22]. Recall that BEER guarantees minimum gradient norm convergence at an $O(1/\sqrt{T})$ rate, up to compression and mixing factors. In comparison, Theorem 4 has a better dependency on the mixing rate $\rho$, but a slightly worse dependency on the compression ratio $\alpha$, which again emphasizes that gradient clipping does not harm convergence.
5 Numerical experiments
This section presents numerical experiments to examine the performance of PORTER-DP, with comparison to the state-of-the-art server/client private stochastic algorithm SoteriaFL-SGD, which also utilizes communication compression and guarantees local differential privacy. More specifically, we evaluate the convergence of utility and accuracy in terms of communication rounds and communication bits, to analyze the privacy-utility-communication trade-offs of different algorithms.
For all experiments, we split shuffled datasets evenly across the agents, which are connected by an Erdős-Rényi random graph with a fixed connection probability. We use the FDLA matrix [XB04] as the mixing matrix to perform weighted information aggregation and accelerate convergence. We use biased random sparsification (cf. Example 1) for all algorithms with $p = 0.05$, i.e., the compressor randomly selects 5% of the elements of each vector. We also apply gradient clipping with the same threshold to all algorithms for simplicity. For each experiment, all algorithms are initialized to the same starting point, and use best-tuned learning rates and batch sizes.
5.1 Logistic regression with nonconvex regularization
We run experiments on logistic regression with nonconvex regularization on the a9a dataset [CL11]. Following [WJZ+19], the objective function is defined as
$$\ell(\boldsymbol{x}; \{\boldsymbol{a}, y\}) = \log\big(1 + \exp(-y\, \boldsymbol{a}^\top \boldsymbol{x})\big) + \lambda \sum_{j=1}^{d} \frac{x_j^2}{1 + x_j^2},$$
where $\{\boldsymbol{a}, y\}$ represents a training tuple, $\boldsymbol{a}$ is the feature vector and $y \in \{-1, +1\}$ is the label, and $\lambda$ is the regularization parameter, which is set to a fixed small value in our experiments.
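For reference, below is a numpy sketch of this objective and its gradient for a single sample, assuming the regularizer $\lambda \sum_j x_j^2/(1 + x_j^2)$ of [WJZ+19] written above (the dimension 123 matches the a9a feature count; the remaining values are illustrative).

```python
import numpy as np

def loss_and_grad(x, a, y, lam):
    # Logistic loss with the nonconvex regularizer sum_j x_j^2 / (1 + x_j^2).
    margin = y * (a @ x)
    loss = np.log1p(np.exp(-margin)) + lam * np.sum(x**2 / (1 + x**2))
    sigma = 1.0 / (1.0 + np.exp(margin))          # derivative of the logistic term
    grad = -sigma * y * a + lam * 2 * x / (1 + x**2) ** 2
    return loss, grad

rng = np.random.default_rng(0)
d = 123                                           # a9a feature dimension
x, a = rng.standard_normal(d), rng.standard_normal(d)
print(loss_and_grad(x, a, y=1, lam=0.1))
```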
Figure 2. Train utility and test accuracy of PORTER-DP and SoteriaFL-SGD for logistic regression with nonconvex regularization on the a9a dataset: (a) under the stricter LDP budget; (b) under the looser LDP budget.
Figure 2 shows the convergence results of PORTER-DP and SoteriaFL-SGD for logistic regression with nonconvex regularization on the a9a dataset under two LDP budgets. Under the stricter privacy budget, PORTER-DP converges faster than SoteriaFL-SGD in test accuracy, while converging slightly slower in train utility. Under the looser privacy budget, PORTER-DP performs slightly worse than SoteriaFL-SGD. Given that PORTER-DP operates over a decentralized topology with much weaker information exchange, the results highlight PORTER-DP's communication efficiency by showing that it can achieve similar performance as its server/client counterpart, i.e., SoteriaFL-SGD, especially under strict privacy constraints.
5.2 One-hidden-layer neural network training
We evaluate PORTER-DP by training a one-hidden-layer neural network on the MNIST dataset [LJB+95]. The network uses a sigmoid-activated hidden layer and the cross-entropy loss over each training pair. Here, the model parameter $\boldsymbol{x}$ collects the network parameters $(\boldsymbol{W}_1, \boldsymbol{b}_1, \boldsymbol{W}_2, \boldsymbol{b}_2)$ of the hidden and output layers, respectively.
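A small numpy sketch of this loss for a single training pair, with illustrative layer sizes (not necessarily the paper's):

```python
import numpy as np

def nn_loss(params, a, y):
    # One-hidden-layer network with sigmoid activation and cross-entropy loss.
    W1, b1, W2, b2 = params
    h = 1.0 / (1.0 + np.exp(-(W1 @ a + b1)))      # sigmoid hidden layer
    logits = W2 @ h + b2
    logits = logits - logits.max()                # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[y]

rng = np.random.default_rng(0)
d, hidden, classes = 784, 64, 10                  # MNIST-like input; width assumed
params = (0.1 * rng.standard_normal((hidden, d)), np.zeros(hidden),
          0.1 * rng.standard_normal((classes, hidden)), np.zeros(classes))
print(nn_loss(params, rng.standard_normal(d), y=3))
```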
Figure 3. Train utility and test accuracy of PORTER-DP and SoteriaFL-SGD for one-hidden-layer neural network training on MNIST: (a) under the stricter LDP budget; (b) under the looser LDP budget.
Figure 3 shows the convergence results of PORTER-DP and SoteriaFL-SGD for training a one-hidden-layer neural network on the MNIST dataset under the same two LDP budgets. Under both privacy settings, PORTER-DP converges at a similar rate as SoteriaFL-SGD in train utility. However, in terms of convergence in test accuracy, PORTER-DP outperforms SoteriaFL-SGD under the stricter budget, while the two algorithms perform similarly under the looser one. This experiment again emphasizes PORTER-DP's communication efficiency in comparison with the server/client algorithm SoteriaFL-SGD.
6 Conclusions
In this paper, we propose an algorithmic framework called PORTER, which incorporates practically-relevant gradient clipping and communication compression simultaneously in the design of decentralized nonconvex optimization algorithms. We propose two variants: PORTER-DP and PORTER-GC. While they share a similar structure that makes use of gradient tracking, communication compression, and error feedback, their focuses are motivated by different applications, namely privacy preservation and neural network training, respectively. PORTER-DP adds privacy perturbation to clipped gradients to protect the local differential privacy of each agent, with explicit utility and communication complexities. PORTER-GC applies gradient clipping to mini-batch stochastic gradients, and converges in minimum gradient norm at a similar rate as centralized SGD without clipping under proper choices of hyperparameters. The development of PORTER offers a simple analysis framework to understand gradient clipping in decentralized nonconvex optimization without bounded gradient assumptions, highlighting the potential of achieving both communication efficiency and privacy preservation in the decentralized setting.
Acknowledgements
The work of B. Li and Y. Chi is supported in part by ONR under the grant N00014-19-1-2404, by AFRL under FA8750-20-2-0504, and by NSF under the grants CCF-1901199, CCF-2007911 and CNS-2148212. B. Li is also gratefully supported by the Wei Shen and Xuehong Zhang Presidential Fellowship at Carnegie Mellon University.
References
- [ACCÖ21] S. Asoodeh, W.-N. Chen, F. P. Calmon, and A. Özgür. Differentially private federated learning: An information-theoretic perspective. In 2021 IEEE International Symposium on Information Theory (ISIT), pages 344–349, July 2021.
- [ACG+16] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pages 308–318, October 2016.
- [AGL+17] D. Alistarh, D. Grubic, J. Li, R. Tomioka, and M. Vojnovic. QSGD: Communication-efficient sgd via gradient quantization and encoding. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
- [AHJ+18] D. Alistarh, T. Hoefler, M. Johansson, N. Konstantinov, S. Khirirat, and C. Renggli. The convergence of sparsified gradient methods. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.
- [ALBR19] M. Assran, N. Loizou, N. Ballas, and M. Rabbat. Stochastic gradient push for distributed deep learning. In International Conference on Machine Learning, pages 344–353. PMLR, 2019.
- [BBP13] Y. Bengio, N. Boulanger-Lewandowski, and R. Pascanu. Advances in optimizing recurrent networks. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 8624–8628, May 2013.
- [BJ13] P. Bianchi and J. Jakubowicz. Convergence of a multi-agent projected stochastic gradient algorithm for non-convex optimization. IEEE Transactions on Automatic Control, 58(2):391–405, February 2013.
- [CABP13] K. Chatzikokolakis, M. E. Andrés, N. E. Bordenabe, and C. Palamidessi. Broadening the scope of differential privacy using metrics. In E. De Cristofaro and M. Wright, editors, Privacy Enhancing Technologies, Lecture Notes in Computer Science, pages 82–102, Berlin, Heidelberg, 2013. Springer.
- [CL11] C.-C. Chang and C.-J. Lin. LIBSVM: a library for support vector machines. ACM transactions on intelligent systems and technology (TIST), 2(3):1–27, May 2011.
- [CWH20] X. Chen, S. Z. Wu, and M. Hong. Understanding gradient clipping in private SGD: A geometric perspective. In Advances in Neural Information Processing Systems, volume 33, pages 13773–13782. Curran Associates, Inc., 2020.
- [DJW13] J. C. Duchi, M. I. Jordan, and M. J. Wainwright. Local privacy and statistical minimax rates. In 2013 IEEE 54th Annual Symposium on Foundations of Computer Science, pages 429–438, October 2013.
- [DKX+22] R. Das, S. Kale, Z. Xu, T. Zhang, and S. Sanghavi. Beyond uniform Lipschitz condition in differentially private optimization. arXiv preprint arXiv:2206.10713, 2022.
- [DLC+22] J. Du, S. Li, X. Chen, S. Chen, and M. Hong. Dynamic differential-privacy preserving sgd. arXiv preprint arXiv:2111.00173, 2022.
- [DLS16] P. Di Lorenzo and G. Scutari. Next: In-network nonconvex optimization. IEEE Transactions on Signal and Information Processing over Networks, 2(2):120–136, 2016.
- [DMNS06] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In S. Halevi and T. Rabin, editors, Theory of Cryptography, Lecture Notes in Computer Science, pages 265–284, Berlin, Heidelberg, 2006. Springer.
- [DSZOR15] C. M. De Sa, C. Zhang, K. Olukotun, and C. Ré. Taming the wild: A unified analysis of Hogwild-style algorithms. Advances in Neural Information Processing Systems, 28, 2015.
- [Dwo08] C. Dwork. Differential privacy: A survey of results. In M. Agrawal, D. Du, Z. Duan, and A. Li, editors, Theory and Applications of Models of Computation, Lecture Notes in Computer Science, pages 1–19, Berlin, Heidelberg, 2008. Springer.
- [FKT20] V. Feldman, T. Koren, and K. Talwar. Private stochastic convex optimization: Optimal rates in linear time. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2020, pages 439–449, New York, NY, USA, June 2020. Association for Computing Machinery.
- [FLFL23] H. Fang, X. Li, C. Fan, and P. Li. Improved convergence of differential private SGD with gradient clipping. In The Eleventh International Conference on Learning Representations, 2023.
- [FSG+21] I. Fatkhullin, I. Sokolov, E. Gorbunov, Z. Li, and P. Richtárik. EF21 with bells & whistles: Practical algorithmic extensions of modern error feedback. arXiv preprint arXiv:2110.03294, 2021.
- [GBLR21] E. Gorbunov, K. P. Burlachenko, Z. Li, and P. Richtarik. MARINA: Faster non-convex distributed learning with compression. In Proceedings of the 38th International Conference on Machine Learning, pages 3788–3798. PMLR, July 2021.
- [HDJ+20] X. Huang, Y. Ding, Z. L. Jiang, S. Qi, X. Wang, and Q. Liao. DP-FL: A novel differentially private federated learning framework for the unbalanced data. World Wide Web, 23(4):2529–2545, July 2020.
- [HP23] K. Huang and S. Pu. CEDAS: A compressed decentralized stochastic gradient method with improved convergence. arXiv preprint arXiv:2301.05872, 2023.
- [HSZ+22] Y. Huang, Y. Sun, Z. Zhu, C. Yan, and J. Xu. Tackling data heterogeneity: A new unified framework for decentralized sgd with sample-induced topology. In Proceedings of the 39th International Conference on Machine Learning, pages 9310–9345. PMLR, June 2022.
- [INS+19] R. Iyengar, J. P. Near, D. Song, O. Thakkar, A. Thakurta, and L. Wang. Towards practical differentially private convex optimization. In 2019 IEEE Symposium on Security and Privacy (SP), pages 299–316, May 2019.
- [KDG03] D. Kempe, A. Dobra, and J. Gehrke. Gossip-based computation of aggregate information. In 44th Annual IEEE Symposium on Foundations of Computer Science, 2003. Proceedings., pages 482–491, October 2003.
- [KFI17] S. Kanai, Y. Fujiwara, and S. Iwamura. Preventing gradient explosions in gated recurrent units. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
- [KHS23] A. Koloskova, H. Hendrikx, and S. U. Stich. Revisiting gradient clipping: Stochastic bias and tight convergence guarantees. In Proceedings of the 40th International Conference on Machine Learning, Proceedings of Machine Learning Research. PMLR, 23–29 Jul 2023.
- [KLL16] J. Kim, J. K. Lee, and K. M. Lee. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1646–1654, 2016.
- [KSJ19] A. Koloskova, S. Stich, and M. Jaggi. Decentralized stochastic optimization and gossip algorithms with compressed communication. In K. Chaudhuri and R. Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 3478–3487. PMLR, 09–15 Jun 2019.
- [LCCC20] B. Li, S. Cen, Y. Chen, and Y. Chi. Communication-efficient distributed optimization in networks with gradient tracking and variance reduction. Journal of Machine Learning Research, 21(180):1–51, 2020.
- [LJB+95] Y. LeCun, L. D. Jackel, L. Bottou, C. Cortes, J. S. Denker, H. Drucker, I. Guyon, U. A. Muller, E. Sackinger, and P. Simard. Learning algorithms for classification: A comparison on handwritten digit recognition. Neural networks: the statistical mechanics perspective, 261(276):2, 1995.
- [LKQR20] Z. Li, D. Kovalev, X. Qian, and P. Richtarik. Acceleration for compressed gradient descent in distributed and federated optimization. In Proceedings of the 37th International Conference on Machine Learning, pages 5895–5904. PMLR, November 2020.
- [LLC22] B. Li, Z. Li, and Y. Chi. DESTRESS: Computation-optimal and communication-efficient decentralized nonconvex finite-sum optimization. SIAM Journal on Mathematics of Data Science, 4(3):1031–1051, September 2022.
- [LLP23] Y. Liao, Z. Li, and S. Pu. A linearly convergent robust compressed Push-Pull method for decentralized optimization. arXiv preprint arXiv:2303.07091, 2023.
- [LR21] Z. Li and P. Richtarik. CANITA: Faster rates for distributed convex optimization with communication compression. In Advances in Neural Information Processing Systems, volume 34, pages 13770–13781. Curran Associates, Inc., 2021.
- [LY22] L. Luo and H. Ye. An optimal stochastic algorithm for decentralized nonconvex finite-sum optimization. arXiv preprint arXiv:2210.13931, 2022.
- [LZLC22] Z. Li, H. Zhao, B. Li, and Y. Chi. SoteriaFL: A unified framework for private federated learning with communication compression. In Advances in Neural Information Processing Systems, volume 35, pages 4285–4300. Curran Associates, Inc., 2022.
- [LZZ+17] X. Lian, C. Zhang, H. Zhang, C.-J. Hsieh, W. Zhang, and J. Liu. Can decentralized algorithms outperform centralized algorithms? a case study for decentralized parallel stochastic gradient descent. In Advances in Neural Information Processing Systems, pages 5330–5340, 2017.
- [MGTR19] K. Mishchenko, E. Gorbunov, M. Takáč, and P. Richtárik. Distributed learning with compressed gradient differences. arXiv preprint arXiv:1901.09269, 2019.
- [MS23] T. Murata and T. Suzuki. DIFF2: Differential private optimization via gradient differences for nonconvex distributed learning. arXiv preprint arXiv:2302.03884, 2023.
- [NBD22] M. Noble, A. Bellet, and A. Dieuleveut. Differentially private federated learning on heterogeneous data. In Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, pages 10110–10145. PMLR, May 2022.
- [NOR18] A. Nedić, A. Olshevsky, and M. G. Rabbat. Network topology and communication-computation tradeoffs in decentralized optimization. Proceedings of the IEEE, 106(5):953–976, 2018.
- [NOS17] A. Nedić, A. Olshevsky, and W. Shi. Achieving geometric convergence for distributed optimization over time-varying graphs. SIAM Journal on Optimization, 27(4):2597–2633, 2017.
- [NRB20] M. Nokleby, H. Raja, and W. U. Bajwa. Scaling-Up Distributed Processing of Data Streams for Machine Learning. Proceedings of the IEEE, 108(11):1984–2012, November 2020.
- [PMB13] R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning, pages 1310–1318. PMLR, May 2013.
- [QL18] G. Qu and N. Li. Harnessing smoothness to accelerate distributed optimization. IEEE Transactions on Control of Network Systems, 5(3):1245–1260, 2018.
- [RLDJ23] A. Reisizadeh, H. Li, S. Das, and A. Jadbabaie. Variance-reduced clipping for non-convex optimization. arXiv preprint arXiv:2303.00883, 2023.
- [RSF21] P. Richtarik, I. Sokolov, and I. Fatkhullin. EF21: A new, simpler, theoretically better, and practically faster error feedback. In Advances in Neural Information Processing Systems, volume 34, pages 4384–4396. Curran Associates, Inc., 2021.
- [SCJ18] S. U. Stich, J.-B. Cordonnier, and M. Jaggi. Sparsified SGD with memory. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.
- [SDGD21] N. Singh, D. Data, J. George, and S. Diggavi. SQuARM-SGD: Communication-Efficient Momentum SGD for Decentralized Optimization. IEEE Journal on Selected Areas in Information Theory, 2(3):954–969, September 2021.
- [SFD+14] F. Seide, H. Fu, J. Droppo, G. Li, and D. Yu. 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs. In Interspeech 2014, pages 1058–1062. ISCA, September 2014.
- [Sha07] D. Shah. Gossip algorithms. Foundations and Trends® in Networking, 3(1):1–125, 2007.
- [SLH20] H. Sun, S. Lu, and M. Hong. Improving the sample and communication complexity for decentralized non-convex optimization: Joint gradient estimation and tracking. In International Conference on Machine Learning, pages 9217–9228. PMLR, 2020.
- [TGZ+18] H. Tang, S. Gan, C. Zhang, T. Zhang, and J. Liu. Communication compression for decentralized training. Advances in Neural Information Processing Systems, 31:7652–7662, 2018.
- [TLQ+19] H. Tang, X. Lian, S. Qiu, L. Yuan, C. Zhang, T. Zhang, and J. Liu. Deepsqueeze: Decentralization meets error-compensated compression. arXiv preprint arXiv:1907.07346, 2019.
- [TMHP20] H. Taheri, A. Mokhtari, H. Hassani, and R. Pedarsani. Quantized decentralized stochastic learning over directed graphs. In Proceedings of the 37th International Conference on Machine Learning, pages 9324–9333. PMLR, November 2020.
- [WJ21] J. Wang and G. Joshi. Cooperative SGD: A unified framework for the design and analysis of local-update SGD algorithms. The Journal of Machine Learning Research, 22(1):213:9709–213:9758, January 2021.
- [WJEG19] L. Wang, B. Jayaraman, D. Evans, and Q. Gu. Efficient privacy-preserving stochastic nonconvex optimization. arXiv preprint arXiv:1910.13659, 2019.
- [WJZ+19] Z. Wang, K. Ji, Y. Zhou, Y. Liang, and V. Tarokh. SpiderBoost and momentum: Faster variance reduction algorithms. In Advances in Neural Information Processing Systems, pages 2406–2416, 2019.
- [WXDX20] D. Wang, H. Xiao, S. Devadas, and J. Xu. On differentially private stochastic convex optimization with heavy-tailed data. In Proceedings of the 37th International Conference on Machine Learning, pages 10081–10091. PMLR, November 2020.
- [WYX17] D. Wang, M. Ye, and J. Xu. Differentially private empirical risk minimization revisited: Faster and more general. Advances in Neural Information Processing Systems, 30, 2017.
- [XB04] L. Xiao and S. Boyd. Fast linear iterations for distributed averaging. Systems & Control Letters, 53(1):65–78, September 2004.
- [XKK22a] R. Xin, U. A. Khan, and S. Kar. Fast decentralized nonconvex finite-sum optimization with recursive variance reduction. SIAM Journal on Optimization, 32(1):1–28, March 2022.
- [XKK22b] R. Xin, U. A. Khan, and S. Kar. A Fast Randomized Incremental Gradient Method for Decentralized Nonconvex Optimization. IEEE Transactions on Automatic Control, 67(10):5150–5165, October 2022.
- [XPNK20] R. Xin, S. Pu, A. Nedić, and U. A. Khan. A general framework for decentralized optimization with first-order methods. Proceedings of the IEEE, 108(11):1869–1889, 2020.
- [XYD19] H. Xiao, Y. Ye, and S. Devadas. Local differential privacy in decentralized optimization. arXiv preprint arXiv:1902.06101, 2019.
- [YCC+23] Y. Yan, J. Chen, P.-Y. Chen, X. Cui, S. Lu, and Y. Xu. Compressed decentralized proximal stochastic gradient method for nonconvex composite problems with heterogeneous data. arXiv preprint arXiv:2302.14252, 2023.
- [YGG17] Y. You, I. Gitman, and B. Ginsburg. Scaling SGD batch size to 32k for ImageNet training. Technical Report UCB/EECS-2017-156, EECS Department, University of California, Berkeley, Sep 2017.
- [YZCL22] X. Yang, H. Zhang, W. Chen, and T.-Y. Liu. Normalized/clipped SGD with perturbation for differentially private non-convex optimization. arXiv preprint arXiv:2206.13033, 2022.
- [ZBLR21] H. Zhao, K. Burlachenko, Z. Li, and P. Richtárik. Faster rates for compressed federated learning with client-variance reduction. arXiv preprint arXiv:2112.13097, 2021.
- [ZCH+22] X. Zhang, X. Chen, M. Hong, S. Wu, and J. Yi. Understanding clipping for federated learning: Convergence and client-level differential privacy. In Proceedings of the 39th International Conference on Machine Learning, pages 26048–26067. PMLR, June 2022.
- [ZHSJ20a] J. Zhang, T. He, S. Sra, and A. Jadbabaie. Why gradient clipping accelerates training: A theoretical justification for adaptivity. In International Conference on Learning Representations, 2020.
- [ZHSJ20b] J. Zhang, T. He, S. Sra, and A. Jadbabaie. Why gradient clipping accelerates training: A theoretical justification for adaptivity. In International Conference on Learning Representations, March 2020.
- [ZJFW20] B. Zhang, J. Jin, C. Fang, and L. Wang. Improved analysis of clipping algorithms for non-convex optimization. Advances in Neural Information Processing Systems, 33:15511–15521, 2020.
- [ZKV+20] J. Zhang, S. P. Karimireddy, A. Veit, S. Kim, S. J. Reddi, S. Kumar, and S. Sra. Why are adaptive methods good for attention models? arXiv preprint arXiv:1912.03194, October 2020.
- [ZLL+22] H. Zhao, B. Li, Z. Li, P. Richtárik, and Y. Chi. BEER: Fast $O(1/T)$ rate for decentralized nonconvex optimization with communication compression. Advances in Neural Information Processing Systems, 35, 2022.
- [ZM10] M. Zhu and S. Martínez. Discrete-time dynamic average consensus. Automatica, 46(2):322–329, 2010.
Appendix A Proof of Theorem 1
This section proves Theorem 1 in the following steps: 1) define the privacy loss and its moment generating function, 2) define the mechanisms and sub-mechanisms, and 3) bound the overall moment generating function and show that the chosen perturbation variance satisfies all conditions.
Moment generating function.
Let $o$ and $\mathsf{aux}$ denote an outcome and an auxiliary input, respectively. Then, we can define the privacy loss of an outcome $o$ on neighboring datasets $\mathcal{Z}$ and $\mathcal{Z}'$ as
$$c(o; \mathcal{M}, \mathsf{aux}, \mathcal{Z}, \mathcal{Z}') := \log \frac{\mathbb{P}\big[\mathcal{M}(\mathsf{aux}, \mathcal{Z}) = o\big]}{\mathbb{P}\big[\mathcal{M}(\mathsf{aux}, \mathcal{Z}') = o\big]},$$
and its log moment generating function as
$$\alpha_{\mathcal{M}}(\lambda; \mathsf{aux}, \mathcal{Z}, \mathcal{Z}') := \log \mathbb{E}_{o \sim \mathcal{M}(\mathsf{aux}, \mathcal{Z})}\Big[\exp\big(\lambda\, c(o; \mathcal{M}, \mathsf{aux}, \mathcal{Z}, \mathcal{Z}')\big)\Big].$$
Taking the maximum over all conditions, the unconditioned log moment generating function is
$$\alpha_{\mathcal{M}}(\lambda) := \max_{\mathsf{aux}, \mathcal{Z}, \mathcal{Z}'} \alpha_{\mathcal{M}}(\lambda; \mathsf{aux}, \mathcal{Z}, \mathcal{Z}').$$
Sub-mechanisms.
Definition 4 defines the LDP mechanism, but by itself it is not enough to model decentralized algorithms. To model the perturbation operation that happens on agent $i$ at time $t$, we define a sub-mechanism $\mathcal{M}_i^t$, which can be understood as the Gaussian perturbation added on agent $i$ at time $t$. In addition, we define another mechanism to model the compression operator, compose the two to represent the full update at an agent, and compose over all iterations to represent the full algorithm.
Proof of LDP.
The overall log moment generating function for agent $i$ can be bounded using [LZLC22, Lemma 2] by composition as
$$\alpha_{\mathcal{M}_i}(\lambda) \le \sum_{t=0}^{T-1} \alpha_{\mathcal{M}_i^t}(\lambda).$$
Let $q = b/m$ denote the probability that each data sample is chosen in a mini-batch. For agent $i$ and $t \in [T]$, under the conditions of [ACG+16, Lemma 3] on $q$, $\lambda$, and the noise multiplier $\sigma_p/\tau$, we can apply [ACG+16, Lemma 3] to bound each term as
$$\alpha_{\mathcal{M}_i^t}(\lambda) \le \frac{q^2 \lambda (\lambda + 1)}{(1 - q)\, (\sigma_p/\tau)^2} + O\bigg(\frac{q^3 \lambda^3}{(\sigma_p/\tau)^3}\bigg).$$
To conclude the proof, we can verify that there exists some $\lambda$ satisfying the following inequalities under the choice of $\sigma_p$ in (5), namely
$$\sum_{t=0}^{T-1} \alpha_{\mathcal{M}_i^t}(\lambda) \le \frac{\lambda \epsilon}{2}, \qquad e^{-\lambda \epsilon / 2} \le \delta.$$
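As a numerical sanity check on this recipe, the sketch below converts the (reconstructed) per-step moment bound into a privacy estimate via the standard tail bound $\epsilon = \min_\lambda \big(\alpha(\lambda) + \log(1/\delta)\big)/\lambda$, dropping the higher-order term; it is illustrative only and not a substitute for the formal accounting above.

```python
import numpy as np

def eps_estimate(q, sigma, T, delta, lam_max=64):
    # alpha(lam) <= T * q^2 * lam * (lam + 1) / ((1 - q) * sigma^2), where
    # sigma is the noise multiplier (perturbation std divided by the clipping
    # threshold); higher-order O(q^3 lam^3 / sigma^3) terms are dropped.
    lams = np.arange(1, lam_max + 1)
    alpha = T * q**2 * lams * (lams + 1) / ((1 - q) * sigma**2)
    return np.min((alpha + np.log(1.0 / delta)) / lams)

print(eps_estimate(q=0.01, sigma=4.0, T=1000, delta=1e-5))
```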
Appendix B Proof of Theorem 2
This section proves Theorem 2 in the following subsections: Section B.1 derives the descent inequality, Sections B.2 and B.3 create two linear systems to bound the sums of consensus errors appearing in the descent inequality, and finally Section B.4 specifies the hyperparameters to obtain the convergence rate.
With Assumption 2, we can skip the clipping operator in PORTER-DP. To reuse this section's results in later analyses, we assume Assumption 3 when deriving the descent lemma and the linear systems, and lift this assumption when computing the convergence rate in Section B.4, using the fact that Assumption 2 implies Assumption 3.
We first introduce some shorthand notation for the consensus and quantization error terms used throughout this appendix.
B.1 Function value descent
Using Taylor expansion and taking expectation conditioned on time $t$, we can expand the function value descent around the average iterate, where the last equality follows from the update rule of the averages, which can be proved by induction.

Because the perturbation is zero-mean and the stochastic gradients are unbiased, we obtain the descent inequality (8), where the last inequality is due to Assumption 1.

Taking full expectation and averaging (8) over $t \in [T]$, the expected utility can be bounded as in (9).
B.2 Sum of variable consensus errors
This subsection creates a linear system to bound the sum of variable consensus errors by gradient-related quantities. To simplify notation, denote the regularized mixing matrix by $\boldsymbol{W}_\gamma := (1 - \gamma)\boldsymbol{I}_n + \gamma \boldsymbol{W}$, and denote its mixing rate by $\rho_\gamma$. Lemma 1 analyzes the mixing rate of the regularized mixing matrix.
Lemma 1 (Mixing rate of regularized mixing matrix).
Assume $\gamma \in (0, 1]$. The mixing rate of $\boldsymbol{W}_\gamma = (1 - \gamma)\boldsymbol{I}_n + \gamma \boldsymbol{W}$ can be bounded as

$$\rho_\gamma \le 1 - (1 - \rho)\gamma. \tag{10}$$

Proof of Lemma 1.

Let $\lambda_1, \ldots, \lambda_n$ denote the eigenvalues of $\boldsymbol{W}$. The corresponding eigenvalues of $\boldsymbol{W}_\gamma$ are $1 - \gamma + \gamma \lambda_i$, $i \in [n]$.

The mixing rate of $\boldsymbol{W}_\gamma$ is then
$$\rho_\gamma = \max_{i \ge 2} \big|1 - \gamma + \gamma \lambda_i\big| \le 1 - \gamma + \gamma \rho = 1 - (1 - \rho)\gamma.$$
∎
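A quick numerical check of the (reconstructed) statement of Lemma 1, using a lazy random walk on a ring as the mixing matrix:

```python
import numpy as np

n = 8
# Lazy random walk on a ring: doubly stochastic and symmetric.
W = 0.5 * np.eye(n) + 0.25 * (np.roll(np.eye(n), 1, 0) + np.roll(np.eye(n), -1, 0))
J = np.ones((n, n)) / n
rho = np.linalg.norm(W - J, 2)
for gamma in [0.1, 0.5, 1.0]:
    Wg = (1 - gamma) * np.eye(n) + gamma * W
    lhs, rhs = np.linalg.norm(Wg - J, 2), 1 - gamma * (1 - rho)
    print(lhs <= rhs + 1e-12)  # True: the bound (10) holds
```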
B.2.1 Variable consensus error
Taking expectation conditioned on time $t$ and using Young's inequality, the variable consensus error can be bounded as in (11), where (i) follows from the update rule, and (ii) uses Lemma 1 and Definition 3.
B.2.2 Variable quantization error
Assume the mixing step size $\gamma$ satisfies the inequality (12), which will be verified in Section B.4. Taking expectation conditioned on time $t$, the variable quantization error can be decomposed and bounded as in (13), where (i) is obtained by applying Young's inequality, (ii) uses Definition 3, (iii) uses the update rule, and (iv) uses (12).
B.2.3 Linear system
We can compute the inverse of the system matrix and verify that all its entries are positive, as shown in (15), where we assume an additional inequality (16) on the hyperparameters in order to prove (15), to be validated in Section B.4.
Summing the expected error vectors over $t \in [T]$, reorganizing terms, and multiplying both sides by the inverse of the system matrix, the sum of error vectors can be bounded as in (17).
The sum of consensus errors can then be computed as in (18), where we use the averaging identity for (i) and (12) for (ii).
B.3 Sum of gradient consensus errors
This subsection creates a linear system to bound the sum of gradient consensus errors by gradient-related quantities and constant terms.
B.3.1 Gradient consensus error
Taking expectation conditioned on time $t$ and reorganizing terms, the gradient consensus error can be expanded accordingly. Then, taking full expectation and using the update formula and Young's inequality similarly to (11), we obtain (19), where (i) is proved using smoothness of the objectives, and (ii) is due to Definition 3 and (20).
B.3.2 Gradient quantization error

The gradient quantization error can be bounded analogously to the variable quantization error in Section B.2.2, yielding (21).
B.3.3 Linear system
Because the gradient updates share the same structure as the variable updates, we can use the same argument as in Section B.2.3 and use (15) to prove (22), where we use (12) to prove the last inequality.
B.4 Convergence rate
Note that the bounded gradient assumption implies Assumption 3 for some $\sigma$ proportional to the gradient bound; we can therefore bound the expected norm of the average gradient estimate as in (24).
We assume the step size condition (25) holds.
Using (23) and (24), the expected utility in (9) can be bounded as in (26), where we use (25) for (i) and substitute the hyperparameter choices for (ii) and (iii).
We then set the step size appropriately, so that (26) can be further bounded, using Lemma 1 to reach the last inequality. Lastly, setting the remaining hyperparameters as in the theorem statement completes the proof.
Appendix C Proof of Theorem 3
This section proves Theorem 3 in the following subsections. Section C.1 derives the descent inequality using results from Sections B.2 and B.3. Section C.2 first assumes that all expected gradient norms are greater than a threshold for all $t < T$, then specifies parameters and proves that the average of the expected gradient norms is smaller than that threshold, which contradicts the assumption and hence proves that the algorithm reaches the threshold accuracy within $T$ steps.
C.1 Function value descent
Similar to Section B.1, using Taylor expansion and taking expectation conditioned on time $t$, we can expand the function value descent as in (27). The error term in (27) introduced by gradient clipping can be analyzed by splitting it into the four terms (28)-(31).
Next, we bound each term separately using the triangle inequality and Assumptions 3 and 4.
The first term (28) is bounded as in (32), the second term (29) as in (33), the third term (30) as in (34), and the last term (31) as in (35).
C.2 Convergence rate
Different from (24), with the use of the gradient clipping operator, we can only bound the expected norm of the average gradient estimate as in (37). The techniques used are similar to those in Appendix B, so we can reuse the results from Sections B.2 and B.3, namely (23), in the following proof. Taking full expectation, using (37), and summing (36) over $t \in [T]$, we obtain (38).
To analyze the expected clipped gradient norm, we need the convexity and monotonicity properties from Lemma 2.
Lemma 2.
Let $\varphi(u) := \frac{\tau u}{\tau + u}$ and $\psi(u) := \frac{u^2}{\tau + u}$. When $u \ge 0$, $\varphi$ and $\psi$ increase monotonically, while $\varphi$ is concave and $\psi$ is convex.
Proof of Lemma 2.
It suffices to evaluate the first-order and second-order derivatives of $\varphi$ and $\psi$.

Because $\varphi'(u) > 0$ and $\psi'(u) \ge 0$ for $u \ge 0$, both $\varphi$ and $\psi$ increase monotonically.

$\varphi$ is concave because $\varphi''(u) < 0$, and $\psi$ is convex because $\psi''(u) > 0$; the worked derivatives are given below. ∎
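For completeness, the worked derivatives below assume the functional forms $\varphi(u) = \frac{\tau u}{\tau + u}$ and $\psi(u) = \frac{u^2}{\tau + u}$ stated above (our reconstruction of the lemma's functions):

```latex
\[
\varphi'(u) = \frac{\tau^2}{(\tau + u)^2} > 0, \qquad
\varphi''(u) = -\frac{2\tau^2}{(\tau + u)^3} < 0,
\]
\[
\psi'(u) = \frac{u^2 + 2\tau u}{(\tau + u)^2} \ge 0, \qquad
\psi''(u) = \frac{2\tau^2}{(\tau + u)^3} > 0,
\]
```

so both functions increase monotonically on $u \ge 0$, $\varphi$ is concave, and $\psi$ is convex.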
Next, we substitute the hyperparameter choices (cf. Theorem 2) and assume the inequality (39) holds. By Lemma 2, the expectation of the clipped gradients can then be bounded as in (40).
Using (37), (40), and Assumptions 3 and 4, and setting the clipping threshold appropriately, we can further bound the right-hand side of (38) as in (41) and (42), where we use the threshold condition to prove (i) and substitute the hyperparameter choices to prove (42).
Reorganizing terms, (42) can be further bounded as in (43), where we substitute the hyperparameter choices to prove (i) and use (25) for (ii).
Setting the step size and batch size appropriately, (43) can be further bounded as in (44), which contradicts the assumption that all expected gradient norms exceed the threshold, and completes the proof.
Appendix D Proof of Theorem 4
This section proves Theorem 4 based on the results from Appendix C. We first assume that all expected gradient norms are greater than a threshold for all $t < T$. Setting the perturbation variance to zero (PORTER-GC adds no privacy noise) and choosing the clipping threshold in (41), we reach the descent inequality (45). Setting the step size and batch size appropriately, the average norm of the gradients can then be bounded below the threshold, which implies that there exists some $t < T$ whose expected gradient norm is below the threshold. This contradicts the assumption, and proves that PORTER-GC reaches the claimed accuracy within $T$ iterations.