Stability and Generalization of the Decentralized Stochastic Gradient Descent
†This work is sponsored in part by the National Key R&D Program of China (2018YFB0204300) and the National Science Foundation of China (Nos. 61932001 and 61906200).
Abstract
The stability and generalization of stochastic gradient-based methods provide valuable insights into understanding the algorithmic performance of machine learning models. As the main workhorse for deep learning, stochastic gradient descent has received a considerable amount of study. Nevertheless, the community has paid little attention to its decentralized variants. In this paper, we provide a novel formulation of the decentralized stochastic gradient descent. Leveraging this formulation together with (non)convex optimization theory, we establish the first stability and generalization guarantees for the decentralized stochastic gradient descent. Our theoretical results are built on top of a few common and mild assumptions and reveal, for the first time, that decentralization deteriorates the stability of SGD. We verify our theoretical findings by using a variety of decentralized settings and benchmark machine learning models.
Introduction
The great success of deep learning (LeCun, Bengio, and Hinton 2015) gives impetus to the development of stochastic gradient descent (SGD) (Robbins and Monro 1951) and its variants (Nemirovski et al. 2009; Duchi, Hazan, and Singer 2011; Rakhlin, Shamir, and Sridharan 2012; Kingma and Ba 2014; Wang et al. 2020). Although the convergence results of SGD are abundant, they say little about the effect of the training data on the learned model. To this end, the generalization error (Hardt, Recht, and Singer 2016; Lin, Camoriano, and Rosasco 2016; Bousquet and Elisseeff 2002; Bottou and Bousquet 2008) has been developed as an alternative lens for analyzing SGD. The generalization bound reveals the performance of stochastic algorithms and characterizes how the training data and the stochastic algorithm jointly affect the target machine learning model. To mathematically describe generalization, Hardt, Recht, and Singer (2016); Bousquet and Elisseeff (2002); Elisseeff, Evgeniou, and Pontil (2005) introduce algorithmic stability, which mainly depends on the landscape of the underlying loss function, to study the generalization bound of SGD. The stability theory of SGD has been further developed by (Charles and Papailiopoulos 2018; Kuzborskij and Lampert 2018; Lei and Ying 2020).
SGD has already been widely used in parallel and distributed settings (Agarwal and Duchi 2011; Dekel et al. 2012; Recht et al. 2011), e.g., the decentralized SGD (D-SGD) (Ram, Nedić, and Veeravalli 2010b; Lan, Lee, and Zhou 2020; Srivastava and Nedic 2011; Lian et al. 2017). D-SGD is implemented without a centralized parameter server, and all nodes are connected through an undirected graph. Compared to centralized SGD, the decentralized variant requires much less communication at the busiest node (Lian et al. 2017), which accelerates the whole computational system.
From the theoretical viewpoint, although there exist plenty of convergence analyses of D-SGD (Sirb and Ye 2016; Lan, Lee, and Zhou 2020; Lian et al. 2017, 2018), stability and generalization analyses of D-SGD remain rare.
Contributions
In this paper, we establish the first theoretical result on the stability and generalization of the D-SGD. We elaborate on our contributions below.
-
1.
Stability of D-SGD: We provide the uniform stability of D-SGD in the general convex, strongly convex, and nonconvex cases. Our theory shows that, besides the learning rate, data size, and iteration number, the stability and generalization of D-SGD also depend on the structure of the connected graph. To the best of our knowledge, our result is the first theoretical stability guarantee for D-SGD. In the general convex setting, we also present the stability of D-SGD in terms of the ergodic average instead of the last iterate, which serves the excess generalization analysis.
-
2.
Computational errors for D-SGD with convexity and projection: We consider a more general scheme of D-SGD, namely, D-SGD with projection. In the previous work (Ram, Nedić, and Veeravalli 2010b), to get the convergence rate, the authors need to make additional assumptions on the graph [Assumptions 2 and 3, (Ram, Nedić, and Veeravalli 2010b)]. In this paper, we remove these assumptions and present the computational errors of D-SGD with projections in the strongly convex setting.
-
3.
Generalization bounds for D-SGD with convexity: We derive (excess) generalization bounds for convex D-SGD. The excess generalization error is controlled by the computational error and the generalization bound, the latter of which follows directly from stability.
-
4.
Numerical results: We numerically verify our theoretical results by using various benchmark machine learning models, ranging from strongly convex and convex to nonconvex settings, in different decentralized settings.
Prior Art
In this section, we briefly review two kinds of related works: decentralized optimization and stability and generalization analysis of SGD.
Decentralized and distributed optimization Decentralized algorithms arise in calculating the mean of data distributed over multiple sensors (Boyd et al. 2005; Olfati-Saber, Fax, and Murray 2007). The decentralized (sub)gradient descent (DGD) algorithms were proposed and studied by (Nedic and Ozdaglar 2009; Yuan, Ling, and Yin 2016). Recently, DGD has been generalized to stochastic settings. With a local Poisson clock assumption on each agent, Ram, Nedić, and Veeravalli (2010a) propose an asynchronous gossip algorithm. Decentralized algorithms with a random communication graph are proposed in (Srivastava and Nedic 2011; Ram, Nedić, and Veeravalli 2010b). Sirb and Ye (2016); Lan, Lee, and Zhou (2020); Lian et al. (2017) consider the randomness caused by the stochastic gradients and propose the decentralized SGD (D-SGD). The complexity analysis of D-SGD is carried out in (Sirb and Ye 2016). In (Lan, Lee, and Zhou 2020), the authors propose another kind of D-SGD that leverages dual information and provide the related computational complexity. In the paper (Lian et al. 2017), the authors show the advantage of D-SGD over centralized SGD. In a recent paper (Lian et al. 2018), the authors develop an asynchronous D-SGD with theoretical convergence guarantees. The biased decentralized SGD is proposed and studied by (Sun et al. 2019). In (Richards and others 2020), the authors study the stability of a non-fully decentralized training method, in which each node needs to communicate extra gradient information. The paper (Richards and others 2020) is the closest to ours, but we consider D-SGD, which is different from, and more general than, the algorithm investigated by (Richards and others 2020). Furthermore, we also study the nonconvex setting.
Stability and Generalization of SGD In (Shalev-Shwartz et al. 2010), on-average stability is proposed; it is further studied by Kuzborskij and Lampert (2018). The uniform stability of empirical risk minimization (ERM) under strongly convex objectives is considered by Bousquet and Elisseeff (2002). Extended results are proved under the pointwise-hypothesis stability assumption, showing that a class of learning algorithms that converge to global optima is stable (Charles and Papailiopoulos 2018). To prove the uniform stability of SGD, Hardt, Recht, and Singer (2016) reformulate SGD as a contractive iteration. In (Lei and Ying 2020), a new stability notion is proposed to remove the bounded gradient assumption. In (Bottou and Bousquet 2008), the authors establish a framework for the generalization performance of SGD. Hardt, Recht, and Singer (2016) connect uniform stability with the generalization error. Generalization errors under strong convexity are established in (Hardt, Recht, and Singer 2016; Lin, Camoriano, and Rosasco 2016). Stability and generalization have also been studied for Langevin dynamics (Li, Luo, and Qiao 2019; Mou et al. 2018).
Setup
This part contains preliminaries and the mathematical description of our problem. Analyzing the stability of D-SGD is more complicated than that of SGD due to the challenge arising from the mixing matrix in D-SGD; we cannot directly adapt the analysis for SGD to D-SGD. To this end, we reformulate D-SGD as an operator iteration with an error term, and then bound the error in each iteration.
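Concretely, stacking the local iterates of the clients row-wise into a matrix $X_t$ and the local stochastic gradients into $\widetilde{\nabla}_t$, the network-wide update can be written compactly (in our notation, matching Eq. (9) in the appendix) as
\[
X_{t+1} = W X_t - \gamma_t \widetilde{\nabla}_t + e_t,
\]
where $W$ is the mixing matrix and $e_t$ collects the per-node projection errors. Multiplying both sides by the averaging vector $\frac{1}{m}\mathbf{1}^{\top}$ recovers a perturbed centralized SGD step, and the analysis amounts to bounding the perturbation via the mixing rate of $W$.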
Stability and Generalization
The population risk minimization is an important model in machine learning and statistics, whose mathematical formulation reads as
\[
\min_{x}\; F(x) := \mathbb{E}_{\xi \sim \mathcal{D}}\, f(x;\xi),
\]
where $f(x;\xi)$ denotes the loss of the model $x$ associated with the data point $\xi$ and $\mathcal{D}$ is the data distribution. Due to the fact that $\mathcal{D}$ is usually unknown or very complicated, we consider the following surrogate ERM
\[
\min_{x}\; F_S(x) := \frac{1}{N}\sum_{i=1}^{N} f(x;\xi_i),
\]
where $S = \{\xi_1,\dots,\xi_N\}$ is a given data set.
For a specific stochastic algorithm $A$ acting on $F_S$ with output $A(S)$, the generalization error of $A$ is defined as $\epsilon_{\mathrm{gen}} := \mathbb{E}_{S,A}[F(A(S)) - F_S(A(S))]$. Here, the expectation is taken over the algorithm and the data. The generalization bound reflects the joint effects caused by the data $S$ and the algorithm $A$. We are also interested in the excess generalization error, which is defined as $\mathbb{E}_{S,A}[F(A(S))] - F(x^*)$, where $x^*$ is the minimizer of $F$. Let $x_S^*$ be the minimizer of $F_S$. Due to the unbiased expectation of the data, we have $\mathbb{E}_S[F_S(x^*)] = F(x^*)$. Thus, Bottou and Bousquet (2008) point out that the excess generalization error can be decomposed as follows
\[
\mathbb{E}[F(A(S))] - F(x^*) = \mathbb{E}\big[F(A(S)) - F_S(A(S))\big] + \mathbb{E}\big[F_S(A(S)) - F_S(x_S^*)\big] + \mathbb{E}\big[F_S(x_S^*) - F_S(x^*)\big].
\]
Notice that $\mathbb{E}[F_S(x_S^*) - F_S(x^*)] \le 0$; therefore, the excess generalization error is bounded by the sum of the generalization error and the optimization error $\mathbb{E}[F_S(A(S)) - F_S(x_S^*)]$.
The uniform stability is used to bound the generalization error of a given algorithm (Hardt, Recht, and Singer 2016; Elisseeff, Evgeniou, and Pontil 2005).
Definition 1
We say that the randomized algorithm $A$ is $\epsilon$-uniformly stable if, for any two data sets $S, S'$ with $N$ samples that differ in one example, we have
\[
\sup_{\xi}\, \mathbb{E}_{A}\big[f(A(S);\xi) - f(A(S');\xi)\big] \le \epsilon.
\]
It has been proved that the uniform stability directly implies the generalization bound.
Lemma 1 ((Hardt, Recht, and Singer 2016))
Let $A$ be $\epsilon$-uniformly stable; it follows that
\[
\big|\mathbb{E}_{S,A}\big[F_S(A(S)) - F(A(S))\big]\big| \le \epsilon.
\]
Thus, to get the generalization bound of a randomized algorithm, we just need to compute its uniform stability bound $\epsilon$.
Problem Formulation
Notation: We use the following notation throughout the paper. We denote the Euclidean norm of $x \in \mathbb{R}^d$ by $\|x\|$. For a matrix $A$, $A^{\top}$ denotes its transpose, and we denote the spectral norm of $A$ by $\|A\|_{\mathrm{op}}$. Given another matrix $B$, $A \succ B$ means that $A - B$ is positive definite, and $A \succeq B$ means $A - B$ is positive semidefinite. The identity matrix is denoted by $\mathbf{I}$. We use $\mathbb{E}[\cdot]$ to denote the expectation with respect to the underlying probability space. For two positive constants $a$ and $b$, we write $a = \mathcal{O}(b)$ if there exists $C > 0$ such that $a \le C b$, and $\tilde{\mathcal{O}}(b)$ hides a logarithmic factor of $b$.
Let $\xi_{i,j}$ ($1 \le i \le m$, $1 \le j \le n$) denote the data stored on the $i$th client, which follow the same distribution $\mathcal{D}$ (for simplicity, we assume all clients hold the same number of samples). In this paper, we consider solving the objective function (1) by D-SGD, where
\[
f_S(x) := \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n} f(x;\xi_{i,j}). \tag{1}
\]
Note that (1) is a decentralized approximation to the following population risk function
\[
f(x) := \mathbb{E}_{\xi \sim \mathcal{D}}\, f(x;\xi). \tag{2}
\]
To distinguish from the objective functions in the last subsection, we use $f$ rather than $F$ here. Decentralized optimization is usually associated with a mixing matrix, which is designed by the user according to a given graph structure. In particular, we consider a connected graph $\mathcal{G} = (V, E)$ with vertex set $V = \{1, 2, \dots, m\}$ and edge set $E \subseteq V \times V$, where an edge $(i,j) \in E$ represents a communication link between nodes $i$ and $j$. Before proceeding, let us recall the definition of the mixing matrix.
Definition 2 (Mixing matrix)
For any given graph $\mathcal{G} = (V, E)$, the mixing matrix $W = [w_{i,j}] \in \mathbb{R}^{m \times m}$ is defined on the edge set and satisfies: (1) if $i \ne j$ and $(i,j) \notin E$, then $w_{i,j} = 0$; otherwise, $w_{i,j} > 0$; (2) $W = W^{\top}$; (3) $\sum_{j=1}^{m} w_{i,j} = 1$ for every $i$; (4) $\mathrm{null}(\mathbf{I} - W) = \mathrm{span}\{\mathbf{1}\}$.
Note that $W$ is a doubly stochastic matrix (Marshall, Olkin, and Arnold 1979), and the mixing matrix is non-unique for a given graph. Several common examples of $W$ include the Laplacian-based matrix and the maximum-degree matrix (Boyd, Diaconis, and Xiao 2004). A crucial constant that characterizes the mixing matrix is
\[
\lambda := \max\big\{|\lambda_2(W)|,\, |\lambda_m(W)|\big\},
\]
where $\lambda_i(W)$ denotes the $i$th largest eigenvalue of $W$. The definition of the mixing matrix implies that $0 \le \lambda < 1$.
Lemma 2 (Corollary 1.14, (Montenegro and Tetali 2006))
Let $\frac{\mathbf{1}\mathbf{1}^{\top}}{m}$ be the matrix whose elements are all $\frac{1}{m}$. Given any $t \in \mathbb{N}$, the mixing matrix satisfies
\[
\Big\| W^{t} - \frac{\mathbf{1}\mathbf{1}^{\top}}{m} \Big\|_{\mathrm{op}} \le \lambda^{t}.
\]
Note the fact that the stationary distribution of an irreducible aperiodic finite Markov chain is uniform if and only if its transition matrix is doubly stochastic. Thus, $W$ corresponds to the transition matrix of some Markov chain, and the parameter $\lambda$ characterizes the speed of convergence to the stationary state.
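To make Definition 2 and Lemma 2 concrete, the following sketch (ours, not the paper's code) builds one admissible mixing matrix for a ring graph with 10 nodes, computes $\lambda$, and numerically checks the spectral bound of Lemma 2; the uniform 1/3 weights are one choice among many.

```python
import numpy as np

def ring_mixing_matrix(m):
    """One admissible mixing matrix for a ring of m nodes:
    each node averages itself and its two neighbors with weight 1/3."""
    W = np.zeros((m, m))
    for i in range(m):
        for j in (i - 1, i, i + 1):
            W[i, j % m] = 1.0 / 3.0
    return W

m = 10
W = ring_mixing_matrix(m)
eigs = np.sort(np.linalg.eigvalsh(W))[::-1]       # eigenvalues in descending order
lam = max(abs(eigs[1]), abs(eigs[-1]))            # lambda = max(|lambda_2|, |lambda_m|)

# Numerical check of Lemma 2: ||W^t - (1/m) 1 1^T||_op <= lambda^t.
J = np.ones((m, m)) / m
for t in (1, 5, 20):
    gap = np.linalg.norm(np.linalg.matrix_power(W, t) - J, ord=2)
    print(f"t={t:2d}: gap={gap:.3e} <= lambda^t={lam**t:.3e}", gap <= lam**t + 1e-12)
```

For the complete graph, $W = \frac{\mathbf{1}\mathbf{1}^{\top}}{m}$ gives $\lambda = 0$, while sparser topologies such as the ring push $\lambda$ closer to $1$.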
We consider a general decentralized stochastic gradient descent with projection, which is carried out in the following manner: in the $t$-th iteration, 1) client $i$ applies its local copy $x_t^i$ to calculate an unbiased gradient estimate $\nabla f(x_t^i;\xi_{i,j_t^i})$, where $j_t^i$ is the local random index; 2) client $i$ replaces its local parameters with the weighted average of its neighbors, i.e.,
\[
x_{t+\frac{1}{2}}^{i} = \sum_{j=1}^{m} w_{i,j}\, x_t^{j}; \tag{3}
\]
3) client $i$ updates its parameters as
\[
x_{t+1}^{i} = \mathbf{Proj}_{\mathcal{X}}\Big( x_{t+\frac{1}{2}}^{i} - \gamma_t\, \nabla f(x_t^i;\xi_{i,j_t^i}) \Big), \tag{4}
\]
with learning rate $\gamma_t > 0$, where $\mathbf{Proj}_{\mathcal{X}}$ stands for projecting the quantity onto the set $\mathcal{X}$. We stress that, in practice, we do not need to compute the node average $\bar{x}_t := \frac{1}{m}\sum_{i=1}^{m} x_t^i$ in each iteration; we take the average only in the last iteration.
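For readers who prefer code, below is a minimal NumPy sketch of the projected D-SGD scheme (3)-(4); the interfaces (a callable step-size schedule `stepsize(t)`, a centered-ball constraint set of radius `radius`) are illustrative choices of ours, not prescriptions from the paper.

```python
import numpy as np

def project_ball(x, radius):
    """Euclidean projection onto the centered closed ball {x : ||x|| <= radius}."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

def dsgd(grad, local_data, W, T, stepsize, radius, dim, seed=0):
    """Projected D-SGD, Eqs. (3)-(4): gossip averaging, local stochastic
    gradient step, then projection. grad(x, xi) must return an unbiased
    gradient of the loss at x for the sample xi."""
    rng = np.random.default_rng(seed)
    m = W.shape[0]
    X = np.zeros((m, dim))                              # row i = iterate of client i
    for t in range(T):
        G = np.stack([grad(X[i], local_data[i][rng.integers(len(local_data[i]))])
                      for i in range(m)])               # local unbiased gradients
        X = W @ X - stepsize(t) * G                     # Eq. (3) + gradient step
        X = np.stack([project_ball(x, radius) for x in X])   # Eq. (4)
    return X.mean(axis=0)                               # node average, taken once at the end
```

Note that, as stressed above, the node average is formed only once, after the final iteration.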
In the following, we state the necessary assumptions, which are all common and widely used in the nonconvex analysis community.
Assumption 1
The loss function $f(x;\xi)$ is nonnegative and differentiable with respect to $x$, and its gradient is bounded by the constant $L$ over $\mathcal{X}$, i.e., $\sup_{x \in \mathcal{X},\, \xi}\|\nabla f(x;\xi)\| \le L$.

Assumption 1 implies that $|f(x;\xi) - f(y;\xi)| \le L\|x - y\|$ for all $x, y \in \mathcal{X}$ and any $\xi$.
Assumption 2
The gradient of $f(x;\xi)$ with respect to $x$ is $\beta$-Lipschitz, i.e., $\|\nabla f(x;\xi) - \nabla f(y;\xi)\| \le \beta\|x - y\|$ for all $x, y$ and any $\xi$.
Assumption 3
The set $\mathcal{X}$ forms a closed ball in $\mathbb{R}^d$.
Compared with the scheme presented in (Lian et al. 2017), our algorithm accommodates a projection after each update on each client. When $f$ is non-strongly convex, $\mathcal{X}$ can be set as the full space, and Algorithm 1 reduces to the scheme given in (Lian et al. 2017), whose convergence has been well studied. Such a projection is more general and is necessary for the strongly convex analysis; we explain this necessity as follows: if $f$ is $\nu$-strongly convex, then $\|\nabla f(x)\| \ge \nu\|x - x^*\|$ with $x^*$ being the minimizer of $f$ (Karimi, Nutini, and Schmidt 2016). Thus, when $x$ is far from $x^*$, the gradient is unbounded, which breaks Assumption 1. However, with the projection procedure, D-SGD (Algorithm 1) actually minimizes the function (1) over the set $\mathcal{X}$. The strong convexity gives us $\|\nabla f(0)\| \ge \nu\|x^*\|$, which indicates $\|x^*\| \le \|\nabla f(0)\|/\nu$. Thus, when the radius of $\mathcal{X}$ is large enough, the projection does not change the output of D-SGD.
Stability of D-SGD
In this section, we prove the stability theory for D-SGD in strongly convex, convex, and nonconvex settings.
General Convexity
This part contains the stability result of D-SGD when $f$ is generally convex.
Theorem 1
Compared with minimizing (1) by centralized SGD with step sizes $\gamma_t$, whose uniform stability bound is given in [Theorem 3.8, (Hardt, Recht, and Singer 2016)], Theorem 1 shows that D-SGD suffers from an additional term caused by decentralization, which does not vanish in general.

If we set diminishing step sizes, with Lemma 3, it is easy to check that the additional term remains under control; however, if we use a constant learning rate (i.e., $\gamma_t \equiv \gamma$), when $\lambda \to 1$, the additional term, and hence the stability bound, blows up. The result indicates that although decentralization reduces the communication at the busiest node, it hurts stability.
Theorem 1 provides the uniform stability for the last iterate of D-SGD. However, the computational error of D-SGD in the general convex case is stated for the following ergodic average
\[
\hat{x}_T := \frac{\sum_{t=1}^{T} \gamma_t \bar{x}_t}{\sum_{t=1}^{T} \gamma_t}. \tag{5}
\]
Such a mismatch makes it difficult to characterize the excess generalization bound. It is thus necessary to describe the uniform stability in terms of $\hat{x}_T$. To this end, we consider D-SGD that outputs $\hat{x}_T$ instead of $\bar{x}_T$ at the $T$-th iteration. The uniform stability in this case is defined accordingly, and we have the following result.
Proposition 1
Unlike the uniform stability for the last iterate, the bound for the average $\hat{x}_T$ turns out to be rather complicated. We thus present the bounds only for two classical kinds of step size, diminishing and constant.
Strong Convexity
In the convex setting, for a fixed iteration number $T$, the stability bound gets smaller as the data size increases and $\lambda$ decreases, for both diminishing and constant learning rates. However, similar to SGD, D-SGD also fails to keep the stability bound under control when $T$ increases. This drawback does not exist in the strongly convex setting.
Strongly convex loss functions appear in regularized machine learning models. As mentioned in Section 2, to guarantee a bounded gradient, the set $\mathcal{X}$ should be restricted to a closed ball. We formulate the uniform stability results for this case in Theorem 2.
Theorem 2
The uniform stability bound for SGD with strong convexity (Hardt, Recht, and Singer 2016) is smaller than that of D-SGD. From Theorem 2, we see that the uniform stability bound of D-SGD is independent of the iteration number $T$. Moreover, D-SGD enjoys a smaller uniform stability bound when the data size is larger and $\lambda$ is smaller.
Nonconvexity
We now present the stability result for nonconvex loss functions.
Theorem 3
Without the convexity assumption, the uniform stability bound of D-SGD deteriorates: Theorem 3 gives a bound that is much larger than the bounds in the convex case.
Excess Generalization for Convex Problems
In the nonconvex case, the optimization error of the function value is unclear; thus, the excess generalization error is absent there. We are, however, interested in the excess generalization error, which involves the computational optimization error. The existing computational error bounds for Algorithm 1 require extra assumptions on the graph to handle the projections, yet these assumptions may fail to hold in many applications. Thus, we first present the optimization error of D-SGD when $f$ is convex, without extra assumptions.
Optimization Error of Convex D-SGD
This part consists of the optimization errors of D-SGD in the convex and strongly convex settings. Assume $x^{\star}$ is the minimizer of $f_S$ over the set $\mathcal{X}$, i.e., $x^{\star} \in \operatorname{arg\,min}_{x \in \mathcal{X}} f_S(x)$.
Lemma 3
It is worth mentioning that the optimization error is established at the average point $\hat{x}_T$ for technical reasons.
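In code, maintaining this ergodic average is cheap; the step-size weighting below reflects our reading of the construction in (5) and is a sketch, not the paper's implementation.

```python
import numpy as np

def ergodic_average(avg_iterates, stepsizes):
    """Step-size-weighted average of the node-averaged iterates, cf. Eq. (5)."""
    g = np.asarray(stepsizes, dtype=float)[:, None]   # shape (T, 1)
    X = np.asarray(avg_iterates, dtype=float)         # shape (T, d)
    return (g * X).sum(axis=0) / g.sum()
```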
In the following, we provide the results for the strongly convex setting.
Lemma 4
The result shows that D-SGD with projection converges sublinearly in the strongly convex case: to reach an $\epsilon$ error, we shall set the iteration number as $T = \mathcal{O}(1/\epsilon)$. Our result coincides with the existing convergence results of SGD under strong convexity (Rakhlin, Shamir, and Sridharan 2012). What is different is that D-SGD is affected by the parameter $\lambda$, which is determined by the structure of the connected graph. (For the strongly convex case, we avoid showing the result under a general step size due to its complicated form.)
General Convexity
Notice that the computational error of D-SGD in this case is described at the average point $\hat{x}_T$ defined in (5). Thus, we need to estimate the generalization bound with respect to $\hat{x}_T$.
Theorem 4
Strong Convexity
Now, we present the excess generalization of D-SGD under strong convexity.
Numerical Results
We numerically verify our theoretical findings in this section, focusing on three kinds of models: strongly convex, convex, and nonconvex. For all three scenarios, we set the number of nodes to 10 and conduct two kinds of experiments. The first kind verifies the stability and generalization results: given a fixed graph, we use two sample sets of the same size whose entries differ in a small portion, and we compare the training loss and training accuracy of D-SGD on these two datasets. The second kind demonstrates the effects of the structure of the connected graph: we run D-SGD on different types of connected graphs with the same dataset. In particular, we test six different connected graphs, as shown in Figure 1.
Convex case
We consider the following optimization problem
\[
\min_{x}\; \frac{1}{N}\sum_{i=1}^{N} \big(a_i^{\top} x - b_i\big)^2,
\]
which arises from a simple regression problem. Here, we use the Body Fat dataset (Johnson 1996), which contains 252 samples. We run D-SGD on two subsets of the Body Fat dataset, both of size 200. Let $x_T$ and $y_T$ be the outputs of D-SGD on the two different subsets. We define the absolute loss difference as $\Delta := |f(x_T) - f(y_T)|$. For the above six graphs, we record the absolute loss difference for a set of learning rates in Figure 3. In the second test, we fix the learning rate and compare the absolute loss difference across different graphs in Figure 2 (a). Our results show that a smaller learning rate usually yields a smaller loss difference, and the complete graph achieves the smallest bound. These observations are consistent with our theoretical results for convex D-SGD.
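To make the protocol concrete, here is a self-contained sketch of the first kind of experiment for the least-squares loss; synthetic Gaussian data stands in for the Body Fat set, the complete-graph mixing matrix and the $0.5/(t+1)$ schedule are illustrative, and both runs share the same random seed so that the sample paths coincide.

```python
import numpy as np

m, n, d, T = 10, 20, 5, 500            # clients, local samples, dimension, iterations
W = np.full((m, m), 1.0 / m)           # mixing matrix of the complete graph

def make_data(seed):
    g = np.random.default_rng(seed)
    A = g.standard_normal((m * n, d))
    b = A @ np.ones(d) + 0.1 * g.standard_normal(m * n)
    return A, b

A1, b1 = make_data(0)
A2, b2 = A1.copy(), b1.copy()          # S' equals S except for its first sample
A2[0] = np.random.default_rng(99).standard_normal(d)

def loss(x, A, b):
    return np.mean((A @ x - b) ** 2)

def run_dsgd(A, b, seed=7):            # same seed => identical sample paths
    g = np.random.default_rng(seed)
    X = np.zeros((m, d))
    for t in range(T):
        rows = np.arange(m) * n + g.integers(n, size=m)   # one local sample per client
        resid = np.sum(A[rows] * X, axis=1) - b[rows]
        X = W @ X - (0.5 / (t + 1)) * 2 * resid[:, None] * A[rows]
    return X.mean(axis=0)

x1, x2 = run_dsgd(A1, b1), run_dsgd(A2, b2)
print("absolute loss difference:", abs(loss(x1, A1, b1) - loss(x2, A1, b1)))
```

Swapping `W` for a sparser topology (e.g., the ring matrix sketched earlier) increases $\lambda$ and, per our theory, should enlarge the recorded loss difference.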
Strongly convex case
To verify our theory in the strongly convex case, we consider the following regularized logistic regression model
\[
\min_{x}\; \frac{1}{N}\sum_{i=1}^{N} \log\big(1 + \exp(-b_i\, a_i^{\top} x)\big) + \frac{\mu}{2}\|x\|^2, \qquad \mu > 0.
\]
We use the benchmark ijcnn1 dataset (Rennie and Rifkin 2001) with a fixed regularization parameter. Two 8000-sample sub-datasets that differ in 1000 samples are used, and the losses are evaluated on the test set. We conduct experiments on the two datasets with the same set of learning rates used in the last subsection. The absolute loss difference under different learning rates is plotted in Figure 4, and the performance under different graphs is reported in Figure 2 (b). The results of D-SGD in the strongly convex case are similar to the convex case. Also, note that the absolute loss difference increases as the learning rate grows.
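For reference, here is a sketch of the per-sample loss and stochastic gradient of this model, suitable for plugging into the D-SGD sketch above; the label convention $b_i \in \{-1,+1\}$ and the symbol $\mu$ for the regularization weight are our notation.

```python
import numpy as np

def reg_logistic_loss(x, A, b, mu):
    """l2-regularized logistic loss; labels b take values in {-1, +1}."""
    return np.mean(np.logaddexp(0.0, -b * (A @ x))) + 0.5 * mu * (x @ x)

def reg_logistic_grad(x, a, y, mu):
    """Stochastic gradient at a single sample (a, y); the mu-term makes the
    objective mu-strongly convex."""
    s = 1.0 / (1.0 + np.exp(y * (a @ x)))    # = sigmoid(-y * a^T x)
    return -y * s * a + mu * x
```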
Nonconvex case
We test ResNet-20 (He et al. 2016) on the CIFAR10 classification task (Krizhevsky 2009). We adopt two different 40000-sample subsets, and the loss values are computed on the test set. The absolute loss difference under the chosen set of learning rates versus the number of epochs is presented in Figure 5, and the absolute loss differences under different graphs are shown in Figure 5 (c). We run 100 epochs in the nonconvex test. The results show that nonconvex D-SGD is much more unstable than the convex ones, which matches our theoretical findings.
Conclusion
In this paper, we develop the stability and generalization bounds for the (projected) decentralized stochastic gradient descent (D-SGD) in the strongly convex, convex, and nonconvex settings. In contrast to previous works on the analysis of the projected decentralized gradient descent, our theory is built on much more relaxed assumptions. Our theoretical results show that the stability and generalization of D-SGD depend on the learning rate and the structure of the connected graph. Furthermore, we prove that decentralization deteriorates the stability of D-SGD. Our theoretical results are empirically supported by experiments on training different machine learning models in different decentralized settings. There are numerous avenues for future work: 1) deriving improved stability and generalization bounds of D-SGD in the general convex and nonconvex cases, 2) proving high-probability bounds, and 3) studying the stability and generalization bounds of momentum variants of D-SGD.
References
- Agarwal and Duchi (2011) Agarwal, A., and Duchi, J. C. 2011. Distributed delayed stochastic optimization. In Advances in Neural Information Processing Systems, 873–881.
- Bottou and Bousquet (2008) Bottou, L., and Bousquet, O. 2008. The tradeoffs of large scale learning. In Advances in neural information processing systems, 161–168.
- Bousquet and Elisseeff (2002) Bousquet, O., and Elisseeff, A. 2002. Stability and generalization. Journal of machine learning research 2(Mar):499–526.
- Boyd et al. (2005) Boyd, S.; Ghosh, A.; Prabhakar, B.; and Shah, D. 2005. Gossip algorithms: Design, analysis and applications. In Proceedings IEEE 24th Annual Joint Conference of the IEEE Computer and Communications Societies., volume 3, 1653–1664. IEEE.
- Boyd, Diaconis, and Xiao (2004) Boyd, S.; Diaconis, P.; and Xiao, L. 2004. Fastest mixing Markov chain on a graph. SIAM Review 46(4):667–689.
- Charles and Papailiopoulos (2018) Charles, Z., and Papailiopoulos, D. 2018. Stability and generalization of learning algorithms that converge to global optima. In International Conference on Machine Learning, 745–754.
- Dekel et al. (2012) Dekel, O.; Gilad-Bachrach, R.; Shamir, O.; and Xiao, L. 2012. Optimal distributed online prediction using mini-batches. The Journal of Machine Learning Research 13:165–202.
- Duchi, Hazan, and Singer (2011) Duchi, J.; Hazan, E.; and Singer, Y. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of machine learning research 12(7).
- Elisseeff, Evgeniou, and Pontil (2005) Elisseeff, A.; Evgeniou, T.; and Pontil, M. 2005. Stability of randomized learning algorithms. Journal of Machine Learning Research 6(Jan):55–79.
- Hardt, Recht, and Singer (2016) Hardt, M.; Recht, B.; and Singer, Y. 2016. Train faster, generalize better: Stability of stochastic gradient descent. ICML 2016.
- He et al. (2016) He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778.
- Johnson (1996) Johnson, R. W. 1996. Fitting percentage of body fat to simple body measurements. Journal of Statistics Education 4(1).
- Karimi, Nutini, and Schmidt (2016) Karimi, H.; Nutini, J.; and Schmidt, M. 2016. Linear convergence of gradient and proximal-gradient methods under the polyak-łojasiewicz condition. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 795–811. Springer.
- Kingma and Ba (2014) Kingma, D. P., and Ba, J. 2014. Adam: A method for stochastic optimization. ICLR.
- Kuzborskij and Lampert (2018) Kuzborskij, I., and Lampert, C. 2018. Data-dependent stability of stochastic gradient descent. In International Conference on Machine Learning, 2815–2824.
- Lan, Lee, and Zhou (2020) Lan, G.; Lee, S.; and Zhou, Y. 2020. Communication-efficient algorithms for decentralized and stochastic optimization. Mathematical Programming 180(1):237–284.
- LeCun, Bengio, and Hinton (2015) LeCun, Y.; Bengio, Y.; and Hinton, G. 2015. Deep learning. Nature 521(7553):436–444.
- Lei and Ying (2020) Lei, Y., and Ying, Y. 2020. Fine-grained analysis of stability and generalization for sgd. In Proceedings of the 37th International Conference on Machine Learning, 2020.
- Li, Luo, and Qiao (2019) Li, J.; Luo, X.; and Qiao, M. 2019. On generalization error bounds of noisy gradient methods for non-convex learning. Proceedings of Machine Learning Research 1:37.
- Lian et al. (2017) Lian, X.; Zhang, C.; Zhang, H.; Hsieh, C.-J.; Zhang, W.; and Liu, J. 2017. Can decentralized algorithms outperform centralized algorithms? a case study for decentralized parallel stochastic gradient descent. In Advances in Neural Information Processing Systems, 5330–5340.
- Lian et al. (2018) Lian, X.; Zhang, W.; Zhang, C.; and Liu, J. 2018. Asynchronous decentralized parallel stochastic gradient descent. In International Conference on Machine Learning, 3043–3052.
- Lin, Camoriano, and Rosasco (2016) Lin, J.; Camoriano, R.; and Rosasco, L. 2016. Generalization properties and implicit regularization for multiple passes sgm. In International Conference on Machine Learning, 2340–2348.
- Marshall, Olkin, and Arnold (1979) Marshall, A. W.; Olkin, I.; and Arnold, B. C. 1979. Inequalities: theory of majorization and its applications, volume 143. Springer.
- Montenegro and Tetali (2006) Montenegro, R. R., and Tetali, P. 2006. Mathematical aspects of mixing times in Markov chains. Now Publishers Inc.
- Mou et al. (2018) Mou, W.; Wang, L.; Zhai, X.; and Zheng, K. 2018. Generalization bounds of sgld for non-convex learning: Two theoretical viewpoints. In Conference on Learning Theory, 605–638.
- Nedic and Ozdaglar (2009) Nedic, A., and Ozdaglar, A. 2009. Distributed subgradient methods for multi-agent optimization. IEEE Transactions on Automatic Control 54(1):48–61.
- Nemirovski et al. (2009) Nemirovski, A.; Juditsky, A.; Lan, G.; and Shapiro, A. 2009. Robust stochastic approximation approach to stochastic programming. SIAM Journal on optimization 19(4):1574–1609.
- Olfati-Saber, Fax, and Murray (2007) Olfati-Saber, R.; Fax, J. A.; and Murray, R. M. 2007. Consensus and cooperation in networked multi-agent systems. Proceedings of the IEEE 95(1):215–233.
- Rakhlin, Shamir, and Sridharan (2012) Rakhlin, A.; Shamir, O.; and Sridharan, K. 2012. Making gradient descent optimal for strongly convex stochastic optimization. In Proceedings of the 29th International Coference on International Conference on Machine Learning, 1571–1578.
- Ram, Nedić, and Veeravalli (2010a) Ram, S. S.; Nedić, A.; and Veeravalli, V. V. 2010a. Asynchronous gossip algorithm for stochastic optimization: Constant stepsize analysis. In Recent Advances in Optimization and its Applications in Engineering. Springer. 51–60.
- Ram, Nedić, and Veeravalli (2010b) Ram, S. S.; Nedić, A.; and Veeravalli, V. V. 2010b. Distributed stochastic subgradient projection algorithms for convex optimization. Journal of optimization theory and applications 147(3):516–545.
- Recht et al. (2011) Recht, B.; Re, C.; Wright, S.; and Niu, F. 2011. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In Advances in neural information processing systems, 693–701.
- Rennie and Rifkin (2001) Rennie, J. D., and Rifkin, R. 2001. Improving multiclass text classification with the support vector machine.
- Richards and others (2020) Richards, D., et al. 2020. Graph-dependent implicit regularisation for distributed stochastic subgradient descent. Journal of Machine Learning Research 21(2020).
- Robbins and Monro (1951) Robbins, H., and Monro, S. 1951. A stochastic approximation method. The Annals of Mathematical Statistics 400–407.
- Shalev-Shwartz et al. (2010) Shalev-Shwartz, S.; Shamir, O.; Srebro, N.; and Sridharan, K. 2010. Learnability, stability and uniform convergence. The Journal of Machine Learning Research 11:2635–2670.
- Sirb and Ye (2016) Sirb, B., and Ye, X. 2016. Consensus optimization with delayed and stochastic gradients on decentralized networks. In 2016 IEEE International Conference on Big Data (Big Data), 76–85. IEEE.
- Srivastava and Nedic (2011) Srivastava, K., and Nedic, A. 2011. Distributed asynchronous constrained stochastic optimization. IEEE Journal of Selected Topics in Signal Processing 5(4):772–790.
- Sun et al. (2019) Sun, T.; Chen, T.; Sun, Y.; Liao, Q.; and Li, D. 2019. Decentralized markov chain gradient descent. arXiv preprint arXiv:1909.10238.
- Wang et al. (2020) Wang, B.; Nguyen, T. M.; Sun, T.; Bertozzi, A. L.; Baraniuk, R. G.; and Osher, S. J. 2020. Scheduled restart momentum for accelerated stochastic gradient descent. arXiv preprint arXiv:2002.10583.
- Yuan, Ling, and Yin (2016) Yuan, K.; Ling, Q.; and Yin, W. 2016. On the convergence of decentralized gradient descent. SIAM Journal on Optimization 26(3):1835–1854.
- Krizhevsky (2009) Krizhevsky, A. 2009. Learning Multiple Layers of Features from Tiny Images. Technical Report.
Supplementary materials for
Stability and Generalization Bounds of Decentralized Stochastic Gradient Descent
Appendix A More results of the test
We present the absolute accuracy difference in decentralized neural network training.
Appendix B Technical Lemmas
Lemma 5
For any and , it holds
with .
Lemma 6
[Lemmas 2.5 and 3.7, (Hardt, Recht, and Singer 2016)] Fix an arbitrary sequence of updates $G_1, \dots, G_T$ and another sequence $G'_1, \dots, G'_T$. Let $x_0 = x'_0$ be a starting point in $\mathbb{R}^d$ and define $\delta_t := \|x_t - x'_t\|$, where $x_t, x'_t$ are defined recursively through
\[
x_{t+1} = G_t(x_t), \qquad x'_{t+1} = G'_t(x'_t).
\]
Then, we have the recurrence relation $\delta_0 = 0$ and
\[
\delta_{t+1} \le
\begin{cases}
\eta\, \delta_t, & \text{if } G_t = G'_t \text{ is } \eta\text{-expansive},\\
\min(1,\eta)\, \delta_t + 2\sigma_t, & \text{if } G_t, G'_t \text{ are } \eta\text{-expansive and } \sup_x \|x - G_t(x)\| \le \sigma_t,\ \sup_x \|x - G'_t(x)\| \le \sigma_t.
\end{cases}
\]
For a nonnegative step size $\gamma \ge 0$ and a function $f$, we define the gradient update $G_{f,\gamma}$ as
\[
G_{f,\gamma}(x) := \mathbf{Proj}_{\mathcal{X}}\big(x - \gamma \nabla f(x)\big),
\]
where $\mathcal{X}$ is a closed convex set. Assume that $f$ is $\beta$-smooth. Then, the following properties hold.
-
•
$G_{f,\gamma}$ is $(1+\gamma\beta)$-expansive.
-
•
If $f$ is convex, then for any $\gamma \le 2/\beta$, the gradient update $G_{f,\gamma}$ is $1$-expansive.
-
•
If $f$ is $\nu$-strongly convex, then for $\gamma \le \frac{2}{\beta+\nu}$, $G_{f,\gamma}$ is $\big(1 - \frac{\gamma\beta\nu}{\beta+\nu}\big)$-expansive.
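As a sanity check (ours, under the stated assumptions), the expansiveness factor in the strongly convex case can be verified numerically on a quadratic, for which the gradient update is a linear map.

```python
import numpy as np

# Check the expansiveness claim of Lemma 6 on a strongly convex quadratic
# f(x) = 0.5 x^T H x, where beta/nu are the largest/smallest eigenvalues of H.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
H = Q @ np.diag(np.linspace(0.5, 4.0, 5)) @ Q.T
nu, beta = 0.5, 4.0

def G(x, gamma):                                   # gradient update (projection omitted)
    return x - gamma * (H @ x)

gamma = 2.0 / (beta + nu)
eta = 1.0 - gamma * beta * nu / (beta + nu)        # predicted expansiveness factor
x, y = rng.standard_normal(5), rng.standard_normal(5)
ratio = np.linalg.norm(G(x, gamma) - G(y, gamma)) / np.linalg.norm(x - y)
print(f"contraction ratio {ratio:.4f} <= eta {eta:.4f}")
assert ratio <= eta + 1e-12
```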
Lemma 7
Let Assumption 1 hold. Assume two sample sets $S$ and $\tilde{S}$ differ only in one sample (say, the first one), and let $\{x_t^i\}$ and $\{\tilde{x}_t^i\}$ be the corresponding outputs of D-SGD applied to these two sets after $t$ steps. Then, for every $t$ and every $i$, under both the random update rule and the random permutation rule, the expected difference between the two trajectories can be bounded.
Lemma 8
Given the step sizes $\{\gamma_t\}$, assume $\{x_t^i\}$ are generated by D-SGD for all $t$. If Assumption 3 holds, we have the bound (6).
Lemma 9
Denote the matrix $X_t := (x_t^1, x_t^2, \dots, x_t^m)^{\top}$. Assume the conditions of Lemma 8 hold; we then get the bound (7).
Appendix C Proofs of Technical Lemmas
Proof of Lemma 5
For any , and , we then get
(8)
We turn to the bound
With (8), we are then led to
where we used the fact . Now, we provide the bounds for . Note that , whose derivative is . It is easy to check that achieves the maximum, which indicates
Similarly, we get
Proof of Lemma 7
Proof of Lemma 8
We denote
\[
X_t := (x_t^1, x_t^2, \dots, x_t^m)^{\top}, \qquad \widetilde{\nabla}_t := \big(\nabla f(x_t^1;\xi_{1,j_t^1}), \dots, \nabla f(x_t^m;\xi_{m,j_t^m})\big)^{\top}.
\]
With Assumption 1, every row of $\widetilde{\nabla}_t$ is bounded by $L$ in norm. Then the global scheme can be presented as
\[
X_{t+1} = W X_t - \gamma_t \widetilde{\nabla}_t + e_t, \tag{9}
\]
where $e_t$ accounts for the projections. Noticing the fact
\[
\frac{\mathbf{1}\mathbf{1}^{\top}}{m}\, W = W\, \frac{\mathbf{1}\mathbf{1}^{\top}}{m} = \frac{\mathbf{1}\mathbf{1}^{\top}}{m}, \tag{10}
\]
With Lemma 2, we have
Multiplying both sides of (9) by $\big(\mathbf{I} - \frac{\mathbf{1}\mathbf{1}^{\top}}{m}\big)$ and using (10), we obtain
(11)
It is easy to check that and . Thus, it follows . From (11), letting $\mathbf{I}$ be the identity matrix,
(12)
where the first relation uses the boundedness of the stochastic gradients, and the second depends on Lemma 2. Then, we derive
(13)
where we used the initialization.
Proof of Lemma 9
Direct computations give
where we used . Thus, we obtain
Appendix D Proof of Theorem 1
Without loss of generality, assume the two sample sets differ only in the first sample, and let $\{x_t^i\}$ and $\{\tilde{x}_t^i\}$ be the corresponding outputs of D-SGD applied to these two sets. Denote
At iteration $t$, with probability $1 - \frac{1}{n}$, the sample selected by the first client is the same in both sets, and
(14)
With the gradient smoothness assumption (Assumption 2),
(15)
Due to the convexity, the gradient update is $1$-expansive when $\gamma_t \le 2/\beta$ (Lemma 6). Thus, we have
With probability $\frac{1}{n}$, the differing sample is selected, and
(16)
It is easy to check that the gradient updates satisfy $\sup_x \|x - G_t(x)\| \le \gamma_t L$. With Lemma 6, we have
Thus, we derive
That also means
Noticing the fact
\[
\mathbb{E}\,\big|f(x_T;\xi) - f(\tilde{x}_T;\xi)\big| \le L\,\mathbb{E}\,\|x_T - \tilde{x}_T\|, \tag{17}
\]
which follows from Assumption 1, we then prove the result.
Appendix E Proof of Proposition 1
Let $\hat{x}_T$ and $\hat{y}_T$ be the outputs (ergodic averages) of D-SGD applied to two data sets that differ in only one sample, after $T$ iterations. We have
Following the same arguments as in the proof of Theorem 1, we can derive:
1) for the diminishing step size, we have
Thus, we get
2) for the constant step size, we have
Thus, we get
The Lipschitz continuity of the loss function then proves the result.
Appendix F Proof of Theorem 2
Without loss of generality, we keep the setting of the proof of Theorem 1. Due to the strong convexity, the gradient update is $\big(1 - \frac{\gamma_t \beta\nu}{\beta+\nu}\big)$-expansive when $\gamma_t \le \frac{2}{\beta+\nu}$ (Lemma 6). Thus, in (14), it holds
and in (16),
Therefore, we can get
(18)
a) When the step size is constant, we are led to
Unrolling this inequality recursively gives us
Once using (17), we then get the result.
Appendix G Proof of Theorem 3
Appendix H Proof of Lemma 3
Appendix I Proof of Lemma 4
Appendix J Proofs of Theorem 4 and 5
Using the decomposition of the excess generalization error and noticing that the optimization error term is exactly the computational error, we then get the results by combining the bounds proved above.