Tianyi Chen \Email[email protected]
\addrRensselaer Polytechnic Institute
\NameZiye Guo \Email[email protected]
\addrRensselaer Polytechnic Institute
\NameYuejiao Sun \Email[email protected]
\addrUniversity of California, Los Angeles
\NameWotao Yin \Email [email protected]
\addrUniversity of California, Los Angeles
CADA: Communication-Adaptive Distributed Adam
Abstract
Stochastic gradient descent (SGD) has taken the stage as the primary workhorse for large-scale machine learning. It is often used with its adaptive variants such as AdaGrad, Adam, and AMSGrad. This paper proposes an adaptive stochastic gradient descent method for distributed machine learning that can be viewed as the communication-adaptive counterpart of the celebrated Adam method, which justifies its name CADA. The key components of CADA are a set of new rules tailored to adaptive stochastic gradients that can be implemented to save communication uploads. The new algorithms adaptively reuse stale Adam gradients, thus saving communication, while still enjoying convergence rates comparable to those of the original Adam. In numerical experiments, CADA achieves impressive empirical performance in terms of total communication reduction.
1 Introduction
The stochastic gradient descent (SGD) method [Robbins and Monro(1951)] has been prevalent in solving large-scale machine learning problems over the last decades. Although simple to use, plain-vanilla SGD is often sensitive to the choice of hyper-parameters and sometimes suffers from slow convergence. Among various efforts to improve SGD, adaptive methods such as AdaGrad [Duchi et al.(2011)Duchi, Hazan, and Singer], Adam [Kingma and Ba(2014)] and AMSGrad [Reddi et al.(2018)Reddi, Kale, and Kumar] have well-documented empirical performance, especially in training deep neural networks.
To achieve “adaptivity,” these algorithms adaptively adjust the update direction, tune the learning rate, or combine both. While adaptive SGD methods have mostly been studied in the setting where data and computation are centralized in a single node, their performance in the distributed learning setting is less understood. As this setting brings new challenges to machine learning, can we add an additional dimension of adaptivity to Adam in this regime?
In this context, we aim to develop a fully adaptive SGD algorithm tailored to distributed learning. We consider a setting composed of a central server and a set of \(M\) workers, where worker \(m\) has local data drawn from a distribution \(\Xi_m\). Workers may have different data distributions \(\{\Xi_m\}\), and they collaboratively solve the following problem
\[
\min_{\theta \in \mathbb{R}^p}\ \mathcal{L}(\theta) := \sum_{m=1}^{M} \mathcal{L}_m(\theta)
\quad \text{with} \quad
\mathcal{L}_m(\theta) := \mathbb{E}_{\xi_m \sim \Xi_m}\big[\ell(\theta; \xi_m)\big]
\tag{1}
\]
where \(\theta\in\mathbb{R}^p\) is the sought variable and \(\{\mathcal{L}_m\}_{m=1}^M\) are smooth (but not necessarily convex) functions. We focus on the setting where the local data at each worker cannot be uploaded to the server, so collaboration is needed through communication between the server and the workers. This setting often emerges due to data privacy concerns, e.g., federated learning [McMahan et al.(2017a)McMahan, Moore, Ramage, Hampson, and y Arcas, Kairouz et al.(2019)Kairouz, McMahan, Avent, Bellet, Bennis, Bhagoji, Bonawitz, Charles, Cormode, Cummings, et al.].
To solve (1), we can in principle apply the single-node version of adaptive SGD methods such as Adam [Kingma and Ba(2014)]: at iteration \(k\), the server broadcasts \(\theta^k\) to all workers; each worker \(m\) computes the stochastic gradient \(\nabla\ell(\theta^k;\xi_m^k)\) using a randomly selected sample or a minibatch \(\xi_m^k\), and then uploads it to the server; once the server receives the stochastic gradients from all workers, it can simply use the aggregated stochastic gradient to update the parameter via the plain-vanilla single-node Adam. When \(\nabla\ell(\theta^k;\xi_m^k)\) is an unbiased estimate of \(\nabla\mathcal{L}_m(\theta^k)\), the convergence of this distributed implementation of Adam follows from the original analyses [Reddi et al.(2018)Reddi, Kale, and Kumar, Chen et al.(2019)Chen, Liu, Sun, and Hong]. To implement this, however, all workers have to upload fresh gradients at each iteration. This prevents the efficient implementation of Adam in scenarios where the communication uplink and downlink are not symmetric, and communication, especially the upload from workers to the server, is costly; e.g., in cellular networks [Park et al.(2019)Park, Samarakoon, Bennis, and Debbah]. Therefore, our goal is to endow Adam with an additional dimension of adaptivity for solving the distributed problem (1). In short, on top of its adaptive learning rate and update direction, we want Adam to be communication-adaptive.
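For concreteness, a minimal sketch of this baseline is given below (our own illustration in Python/NumPy; the function names, the dict-based state, and the omission of Adam's bias correction are our simplifications, not an existing implementation).

```python
import numpy as np

def distributed_adam_step(theta, worker_grads, state, alpha=1e-3,
                          beta1=0.9, beta2=0.999, eps=1e-8):
    """One server-side Adam step using fresh stochastic gradients from all workers."""
    g = np.sum(worker_grads, axis=0)                          # aggregate fresh uploads, cf. (1)
    state["h"] = beta1 * state["h"] + (1 - beta1) * g         # first moment (bias correction omitted)
    state["v"] = beta2 * state["v"] + (1 - beta2) * g ** 2    # second moment
    theta = theta - alpha * state["h"] / (np.sqrt(state["v"]) + eps)
    return theta, state

# usage: state = {"h": np.zeros_like(theta), "v": np.zeros_like(theta)}
# Every worker must upload a fresh gradient each iteration, which is exactly
# the per-iteration communication cost that CADA later reduces.
```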
1.1 Related work
To put our work in context, we review prior contributions that we group in two categories.
1.1.1 SGD with adaptive gradients
A variety of SGD variants have been developed recently, including momentum and acceleration [Polyak(1964), Nesterov(1983), Ghadimi and Lan(2016)]. However, these methods are relatively sensitive to hyper-parameters such as stepsizes, and require significant effort to find the optimal parameters.
Adaptive learning rate. One limitation of SGD is that it scales the gradient uniformly in all directions by a pre-determined constant or a sequence of constants (a.k.a. learning rates). This may lead to poor performance when the training data are sparse [Duchi et al.(2011)Duchi, Hazan, and Singer]. To address this issue, adaptive learning rate methods have been developed that scale the gradient in an entry-wise manner by using past gradients, which include AdaGrad [Duchi et al.(2011)Duchi, Hazan, and Singer, Ward et al.(2019)Ward, Wu, and Bottou], AdaDelta [Zeiler(2012)] and other variants [Li and Orabona(2019)]. This simple technique has improved the performance of SGD in some scenarios.
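As a minimal illustration of this entry-wise scaling (our own sketch, not tied to any particular library), an AdaGrad step divides each gradient coordinate by the square root of its accumulated squared history, so coordinates with sparse or small past gradients receive larger effective learning rates.

```python
import numpy as np

def adagrad_step(theta, grad, accum, alpha=0.1, eps=1e-8):
    """One AdaGrad step: each coordinate is scaled by its own gradient history."""
    accum = accum + grad ** 2                            # per-entry accumulated squared gradients
    theta = theta - alpha * grad / (np.sqrt(accum) + eps)
    return theta, accum
```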
Adaptive SGD. Adaptive SGD methods achieve the best of both worlds, which update the search directions and the learning rates simultaneously using past gradients. Adam [Kingma and Ba(2014)] and AMSGrad [Reddi et al.(2018)Reddi, Kale, and Kumar] are the representative ones in this category. While these methods are simple-to-use, analyzing their convergence is challenging [Reddi et al.(2018)Reddi, Kale, and Kumar, Wang et al.(2020a)Wang, Lu, Tu, and Zhang]. Their convergence in the nonconvex setting has been settled only recently [Chen et al.(2019)Chen, Liu, Sun, and Hong, Défossez et al.(2020)Défossez, Bottou, Bach, and Usunier]. However, most adaptive SGD methods are studied in the single-node setting where data and computation are both centralized. Very recently, adaptive SGD has been studied in the shared memory setting [Xu et al.(2020)Xu, Sutcher-Shepard, Xu, and Chen], where data is still centralized and communication is not adaptive.
1.1.2 Communication-efficient distributed optimization
Popular communication-efficient distributed learning methods belong to two categories: c1) reduce the number of bits per communication round; and, c2) save the number of communication rounds.
For c1), methods are centered around the ideas of quantization and sparsification.
Reducing communication bits.
Quantization has been successfully applied to distributed machine learning. The 1-bit and multi-bit quantization methods have been developed in [Seide et al.(2014)Seide, Fu, Droppo, Li, and Yu, Alistarh et al.(2017)Alistarh, Grubic, Li, Tomioka, and Vojnovic, Tang et al.(2018)Tang, Gan, Zhang, Zhang, and Liu]. More recently, signSGD with majority vote has been developed in [Bernstein et al.(2018)Bernstein, Wang, Azizzadenesheli, and Anandkumar]. Other advances in quantized gradient schemes include error compensation [Wu et al.(2018)Wu, Huang, Huang, and Zhang, Karimireddy et al.(2019)Karimireddy, Rebjock, Stich, and Jaggi], variance-reduced quantization [Zhang et al.(2017)Zhang, Li, Kara, Alistarh, Liu, and Zhang, Horváth et al.(2019)Horváth, Kovalev, Mishchenko, Stich, and Richtárik], and quantization to a ternary vector [Wen et al.(2017)Wen, Xu, Yan, Wu, Wang, Chen, and Li, Reisizadeh et al.(2019)Reisizadeh, Taheri, Mokhtari, Hassani, and Pedarsani].
Sparsification amounts to transmitting only gradient coordinates with large enough magnitudes exceeding a certain threshold [Strom(2015), Aji and Heafield(2017)].
To avoid losing the information in skipped communication, small gradient components are accumulated and transmitted once they become large enough [Lin et al.(2018)Lin, Han, Mao, Wang, and Dally, Stich et al.(2018)Stich, Cordonnier, and Jaggi, Alistarh et al.(2018)Alistarh, Hoefler, Johansson, Konstantinov, Khirirat, and Renggli, Wangni et al.(2018)Wangni, Wang, Liu, and Zhang, Jiang and Agrawal(2018), Tang et al.(2020)Tang, Gan, Rajbhandari, Lian, Zhang, Liu, and He]. Other compression methods include low-rank approximation [Vogels et al.(2019)Vogels, Karimireddy, and Jaggi] and sketching [Ivkin et al.(2019)Ivkin, Rothchild, Ullah, Stoica, Arora, et al.].
However, all these methods address only c1). In some cases, other latencies dominate the bandwidth-dependent transmission latency, which motivates c2).
Reducing communication rounds. One of the most popular techniques in this category is periodic averaging, e.g., elastic averaging SGD [Zhang et al.(2015)Zhang, Choromanska, and LeCun], local SGD (a.k.a. FedAvg) [McMahan et al.(2017b)McMahan, Moore, Ramage, Hampson, and y Arcas, Lin et al.(2020)Lin, Stich, Patel, and Jaggi, Kamp et al.(2018)Kamp, Adilova, Sicking, Hüger, Schlicht, Wirtz, and Wrobel, Stich(2019), Wang and Joshi(2019), Karimireddy et al.(2020)Karimireddy, Kale, Mohri, Reddi, Stich, and Suresh, Haddadpour et al.(2019)Haddadpour, Kamani, Mahdavi, and Cadambe] or local momentum SGD [Yu et al.(2019)Yu, Jin, and Yang, Wang et al.(2020b)Wang, Tantia, Ballas, and Rabbat]. In local SGD, workers perform local model updates independently and the models are averaged periodically; therefore, the communication frequency is reduced. However, except for [Kamp et al.(2018)Kamp, Adilova, Sicking, Hüger, Schlicht, Wirtz, and Wrobel, Wang and Joshi(2019), Haddadpour et al.(2019)Haddadpour, Kamani, Mahdavi, and Cadambe], most local SGD methods follow a pre-determined communication schedule that is nonadaptive. Some of them are tailored for homogeneous settings, where the data are independent and identically distributed over all workers. To tackle the heterogeneous case, FedProx has been developed in [Li et al.(2018)Li, Sahu, Zaheer, Sanjabi, Talwalkar, and Smith] by solving local subproblems. For learning tasks where the loss function is convex and its conjugate dual is expressible, dual coordinate ascent-based approaches have been demonstrated to yield impressive empirical performance [Jaggi et al.(2014)Jaggi, Smith, Takác, Terhorst, Krishnan, Hofmann, and Jordan, Ma et al.(2017)Ma, Konečnỳ, Jaggi, Smith, Jordan, Richtárik, and Takáč]. Higher-order methods have also been considered [Shamir et al.(2014)Shamir, Srebro, and Zhang, Zhang and Lin(2015)]. Roughly speaking, the algorithms in [Li et al.(2018)Li, Sahu, Zaheer, Sanjabi, Talwalkar, and Smith, Jaggi et al.(2014)Jaggi, Smith, Takác, Terhorst, Krishnan, Hofmann, and Jordan, Ma et al.(2017)Ma, Konečnỳ, Jaggi, Smith, Jordan, Richtárik, and Takáč, Shamir et al.(2014)Shamir, Srebro, and Zhang, Zhang and Lin(2015)] reduce communication by increasing local gradient computation.
The most related line of work to this paper is the lazily aggregated gradient (LAG) approach [Chen et al.(2018)Chen, Giannakis, Sun, and Yin, Sun et al.(2019)Sun, Chen, Giannakis, and Yang]. In contrast to periodic communication, the communication in LAG is adaptive and tailored for heterogeneous settings. Parameters in LAG are updated at the server, and workers only adaptively upload information that is determined to be informative enough. Unfortunately, while LAG has good performance in the deterministic settings (e.g., with full gradients), as shown in Section 2.1, its performance is significantly degraded in the stochastic settings [Chen et al.(2020)Chen, Sun, and Yin]. In contrast, our approach generalizes LAG to the regime of running adaptive SGD. Very recently, FedAvg with local adaptive SGD updates has been proposed in [Reddi et al.(2020)Reddi, Charles, Zaheer, Garrett, Rush, Konečnỳ, Kumar, and McMahan], which sets a strong benchmark for communication-efficient learning. While the new algorithm in [Reddi et al.(2020)Reddi, Charles, Zaheer, Garrett, Rush, Konečnỳ, Kumar, and McMahan] achieves a sweet spot between local SGD and adaptive momentum SGD, it is very different from ours, and its averaging period and selection of participating workers are nonadaptive.
1.2 Our approach
We develop a new adaptive SGD algorithm for distributed learning, called Communication-Adaptive Distributed Adam (CADA). Akin to the dynamic scaling of every gradient coordinate in Adam, the key motivation of adaptive communication is that, during distributed learning, not all communication rounds between the server and workers are equally important. A natural solution is thus to use a condition that decides whether a communication round is important, and then adjust the communication frequency between a worker and the server accordingly. If some workers are not communicating, the server uses their stale information instead of fresh information. We will show that this adaptive communication technique can reduce the less informative communication of distributed Adam.
Analogous to the original Adam [Kingma and Ba(2014)] and its modified version AMSGrad [Reddi et al.(2018)Reddi, Kale, and Kumar], our new CADA approach also uses the exponentially weighted stochastic gradient \(h^k\) as the update direction of \(\theta^k\), and leverages the weighted stochastic gradient magnitude \(\hat{v}^k\) to inversely scale the update direction \(h^k\). Different from the direct distributed implementation of Adam that incorporates the fresh (thus unbiased) stochastic gradients \(\nabla\ell(\theta^k;\xi_m^k)\), CADA exponentially combines the aggregated stale stochastic gradients \(\nabla^k := \sum_{m=1}^{M}\nabla\ell(\theta^{k-\tau_m^k};\xi_m^{k-\tau_m^k})\), where each summand is either the fresh stochastic gradient \(\nabla\ell(\theta^k;\xi_m^k)\) when \(\tau_m^k=0\), or an old copy when \(\tau_m^k>0\). Informally, with \(\alpha_k\) denoting the stepsize at iteration \(k\), CADA has the following update
\begin{align}
h^k &= \beta_1 h^{k-1} + (1-\beta_1)\nabla^k, \tag{2a}\\
\hat{v}^k &= \beta_2 \hat{v}^{k-1} + (1-\beta_2)\big(\nabla^k\big)^2, \tag{2b}\\
\theta^{k+1} &= \theta^k - \alpha_k\big(\epsilon I + \hat{V}^k\big)^{-\frac{1}{2}} h^k \tag{2c}
\end{align}
where \(\beta_1,\beta_2\in[0,1)\) are the momentum weights, \(\hat{V}^k:=\mathrm{diag}(\hat{v}^k)\) is a diagonal matrix whose diagonal vector is \(\hat{v}^k\), the constant \(\epsilon>0\) is small, and \(I\) is the identity matrix. To reduce the memory requirement of storing all the stale stochastic gradients \(\{\nabla\ell(\theta^{k-\tau_m^k};\xi_m^{k-\tau_m^k})\}_{m=1}^M\), we can obtain \(\nabla^k\) by refining the previously aggregated stochastic gradient \(\nabla^{k-1}\) stored at the server via
\[
\nabla^k = \nabla^{k-1} + \sum_{m\in\mathcal{M}^k}\delta_m^k
\quad\text{with}\quad
\delta_m^k := \nabla\ell(\theta^k;\xi_m^k)-\nabla\ell\big(\theta^{k-\tau_m^k};\xi_m^{k-\tau_m^k}\big)
\tag{3}
\]
where \(\delta_m^k\) is the stochastic gradient innovation of worker \(m\) (the difference between its fresh stochastic gradient and the stale copy currently stored at the server), and \(\mathcal{M}^k\) is the set of workers that upload their stochastic gradients to the server at iteration \(k\). Henceforth, the staleness \(\tau_m^k\) is reset to zero once worker \(m\) uploads, and it increases by one for every iteration in which worker \(m\) skips the upload. See CADA’s implementation in Figure 1.
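A minimal server-side sketch of (2)-(3) is given below (our own illustration in Python/NumPy; the dict-based state, the function names, and the omission of bias correction are our simplifications, not the authors' implementation). The server keeps the running aggregate and only adds the innovations received from the uploading workers.

```python
import numpy as np

def cada_server_step(theta, nabla, innovations, state, alpha,
                     beta1=0.9, beta2=0.999, eps=1e-8):
    """One CADA iteration at the server, following updates (2)-(3).

    nabla       : running aggregate of the (possibly stale) worker gradients
    innovations : dict {worker_id: delta_m^k} from the workers in M^k;
                  workers that skip communication simply do not appear here
    """
    for delta in innovations.values():
        nabla = nabla + delta                                       # update (3)
    state["h"] = beta1 * state["h"] + (1 - beta1) * nabla           # update (2a)
    state["v"] = beta2 * state["v"] + (1 - beta2) * nabla ** 2      # update (2b)
    theta = theta - alpha * state["h"] / np.sqrt(eps + state["v"])  # update (2c)
    return theta, nabla, state
```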
Clearly, the selection of the subset \(\mathcal{M}^k\) is both critical and challenging. It is critical because it adaptively determines the number of communication rounds per iteration. However, it is challenging since 1) the staleness introduced in the Adam update propagates not only through the momentum gradients but also through the adaptive learning rate; 2) the importance of each communication round is dynamic, so a fixed or nonadaptive condition is ineffective; and 3) the condition needs to be checked efficiently without extra overhead. To overcome these challenges, we develop two adaptive conditions to select \(\mathcal{M}^k\) in CADA.
With details deferred to Section 2, the contributions of this paper are listed as follows.
c1) We introduce a novel communication-adaptive distributed Adam (CADA) approach that reuses stale stochastic gradients to reduce communication for distributed implementation of Adam.
c2) We develop a new Lyapunov function to establish convergence of CADA under both the nonconvex and Polyak-Łojasiewicz (PL) conditions even when the datasets are non-i.i.d. across workers. The convergence rate matches that of the original Adam.
c3) We confirm in numerical tests on logistic regression and neural network training that our fully adaptive CADA algorithms achieve significant savings in communication uploads over popular alternatives.
2 CADA: Communication-Adaptive Distributed Adam
In this section, we revisit the recent LAG method [Chen et al.(2018)Chen, Giannakis, Sun, and Yin], provide insights into why it does not work well in stochastic settings, and then develop our communication-adaptive distributed Adam approach. To be precise in our notation, we henceforth use \(\tau_m^k\) for the staleness or age of the information from worker \(m\) used by the server at iteration \(k\), e.g., \(\nabla\ell(\theta^{k-\tau_m^k};\xi_m^{k-\tau_m^k})\). An age of \(0\) means “fresh.”
2.1 The ineffectiveness of LAG with stochastic gradients
The LAG method [Chen et al.(2018)Chen, Giannakis, Sun, and Yin] modifies the distributed gradient descent update. Instead of communicating with all workers per iteration, LAG selects a subset \(\mathcal{M}^k\) of workers to obtain fresh full gradients and reuses stale full gradients from the others, that is
\[
\theta^{k+1} = \theta^k - \alpha\Big(\sum_{m\in\mathcal{M}^k}\nabla\mathcal{L}_m(\theta^k) + \sum_{m\notin\mathcal{M}^k}\nabla\mathcal{L}_m\big(\theta^{k-\tau_m^k}\big)\Big)
\tag{4}
\]
where \(\mathcal{M}^k\) is adaptively decided by comparing the gradient difference \(\|\nabla\mathcal{L}_m(\theta^k)-\nabla\mathcal{L}_m(\theta^{k-\tau_m^k})\|\) with the recent changes of the iterates. Following this principle, the direct (or “naive”) stochastic version of LAG selects the subset \(\mathcal{M}^k\) of workers to obtain fresh stochastic gradients \(\nabla\ell(\theta^k;\xi_m^k),\,m\in\mathcal{M}^k\). The stochastic LAG also follows the distributed SGD update, but it selects \(\mathcal{M}^k\) as follows: if worker \(m\) finds that the innovation of the fresh stochastic gradient is small in the sense that it satisfies
\[
\big\|\nabla\ell(\theta^k;\xi_m^k)-\nabla\ell\big(\theta^{k-\tau_m^k};\xi_m^{k-\tau_m^k}\big)\big\|^2
\le \frac{c}{d_{\max}}\sum_{d=1}^{d_{\max}}\big\|\theta^{k+1-d}-\theta^{k-d}\big\|^2
\tag{5}
\]
where \(c\ge 0\) and \(d_{\max}\) are pre-fixed constants, then worker \(m\) reuses the old gradient \(\nabla\ell(\theta^{k-\tau_m^k};\xi_m^{k-\tau_m^k})\), and its staleness increases by one; otherwise, worker \(m\) uploads the fresh gradient \(\nabla\ell(\theta^k;\xi_m^k)\) and resets its staleness to zero.
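For concreteness, the worker-side check implied by (5) can be sketched as follows (an illustration under the reconstruction above; the constants c and d_max and all names are ours).

```python
import numpy as np

def lag_skip(fresh_grad, stored_grad, recent_diffs, c, d_max):
    """Return True if condition (5) holds, i.e., worker m may skip the upload.

    fresh_grad, stored_grad : stochastic gradients at the current iterate and at the
                              last uploaded iterate, computed on *different* samples
    recent_diffs            : the last d_max values of ||theta^{k+1-d} - theta^{k-d}||^2
    """
    lhs = np.sum((fresh_grad - stored_grad) ** 2)
    rhs = (c / d_max) * sum(recent_diffs)
    return lhs <= rhs
```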
If the stochastic gradients were full gradients, the LAG condition (5) would compare the error induced by using stale gradients with the progress of the distributed gradient descent algorithm, which has proved to be effective in skipping redundant communication [Chen et al.(2018)Chen, Giannakis, Sun, and Yin]. Nevertheless, the observation here is that the two stochastic gradients in (5) are evaluated not only at two different iterates (\(\theta^k\) and \(\theta^{k-\tau_m^k}\)) but also on two different samples (\(\xi_m^k\) and \(\xi_m^{k-\tau_m^k}\)), and thus on two different loss functions.
This subtle difference leads to the ineffectiveness of (5). We can see this by expanding the left-hand side (LHS) of (5) as (see details in the supplementary material)
\begin{align}
\nabla\ell(\theta^k;\xi_m^k)-\nabla\ell\big(\theta^{k-\tau_m^k};\xi_m^{k-\tau_m^k}\big)
&= \nabla\ell(\theta^k;\xi_m^k)-\nabla\mathcal{L}_m(\theta^k) \tag{6a}\\
&\quad+ \nabla\mathcal{L}_m\big(\theta^{k-\tau_m^k}\big)-\nabla\ell\big(\theta^{k-\tau_m^k};\xi_m^{k-\tau_m^k}\big) \tag{6b}\\
&\quad+ \nabla\mathcal{L}_m(\theta^k)-\nabla\mathcal{L}_m\big(\theta^{k-\tau_m^k}\big) \tag{6c}
\end{align}
Even if the iterates converge, e.g., \(\theta^k\to\theta^{k-\tau_m^k}\), and thus the right-hand side (RHS) of (5) vanishes, the LHS of (5) does not, because the variance inherited in (6a) and (6b) does not vanish even though the gradient difference of the same function in (6c) diminishes. Therefore, the key insight here is that the non-diminishing variance of the stochastic gradients eventually makes the LAG rule (5) ineffective. This will also be verified in our simulations when we compare CADA with stochastic LAG.
2.2 Algorithm development of CADA
In this section, we formally develop our CADA method, and present the intuition behind its design.
The key to the CADA design is to reduce the variance of the innovation measure in the adaptive condition. We introduce two CADA variants, both of which follow the update (2) but differ in their variance-reduced communication rules.
The first variant, termed CADA1, calculates two stochastic gradient innovations, one on the sample \(\xi_m^k\) and one on the sample \(\xi_m^{k-\tau_m^k}\), where \(\tilde{\theta}\) is a snapshot of a previous iterate that is updated every \(D\) iterations. As we will show in (8), the resulting quantity can be viewed as the difference of two variance-reduced gradients calculated at \(\theta^k\) and \(\theta^{k-\tau_m^k}\). Using it as the error induced by using stale information, CADA1 will exclude worker \(m\) from \(\mathcal{M}^k\) if worker \(m\) finds
\[
\Big\|\big(\nabla\ell(\theta^k;\xi_m^k)-\nabla\ell(\tilde{\theta};\xi_m^k)\big)-\big(\nabla\ell(\theta^{k-\tau_m^k};\xi_m^{k-\tau_m^k})-\nabla\ell(\tilde{\theta};\xi_m^{k-\tau_m^k})\big)\Big\|^2
\le \frac{c}{d_{\max}}\sum_{d=1}^{d_{\max}}\big\|\theta^{k+1-d}-\theta^{k-d}\big\|^2
\tag{7}
\]
In (7), we use the change of the parameters averaged over the past \(d_{\max}\) consecutive iterations to measure the progress of the algorithm. Intuitively, if (7) is satisfied, the error induced by using stale information will not greatly affect the learning algorithm. In this case, worker \(m\) does not upload, and the staleness of the information from worker \(m\) increases by one; otherwise, worker \(m\) belongs to \(\mathcal{M}^k\), uploads the stochastic gradient innovation \(\delta_m^k\), and resets its staleness to zero.
The rationale of CADA1. In contrast to the non-vanishing variance in the LAG rule (see (6)), the CADA1 rule (7) reduces its inherent variance. To see this, we can decompose the LHS of (7) as the difference of two variance-reduced stochastic gradients at iterations \(k\) and \(k-\tau_m^k\). Using the stochastic gradient of SVRG as an example [Johnson and Zhang(2013)], the innovation can be written as
\begin{align}
&\big(\nabla\ell(\theta^k;\xi_m^k)-\nabla\ell(\tilde{\theta};\xi_m^k)\big)-\big(\nabla\ell(\theta^{k-\tau_m^k};\xi_m^{k-\tau_m^k})-\nabla\ell(\tilde{\theta};\xi_m^{k-\tau_m^k})\big) \nonumber\\
&= \big(\nabla\ell(\theta^k;\xi_m^k)-\nabla\ell(\tilde{\theta};\xi_m^k)+\nabla\mathcal{L}_m(\tilde{\theta})\big)
-\big(\nabla\ell(\theta^{k-\tau_m^k};\xi_m^{k-\tau_m^k})-\nabla\ell(\tilde{\theta};\xi_m^{k-\tau_m^k})+\nabla\mathcal{L}_m(\tilde{\theta})\big)
\tag{8}
\end{align}
Define a minimizer of (1) as \(\theta^\star\). With derivations given in the supplementary document, the expectation of the LHS of (7) can be upper-bounded by
(9) |
If the iterates converge, e.g., \(\theta^k\to\theta^\star\), the RHS of (9) diminishes, and thus the LHS of (7) diminishes. This is in contrast to the LAG rule, whose LHS in (6) is lower-bounded by a non-vanishing variance. Notice that, while enjoying the benefit of variance reduction, our communication rule does not need to repeatedly calculate the full gradient \(\nabla\mathcal{L}_m(\tilde{\theta})\), which is used here only for illustration purposes.
In addition to (7), we term the second rule CADA2. The key difference relative to CADA1 is that CADA2 uses \(\nabla\ell(\theta^k;\xi_m^k)-\nabla\ell(\theta^{k-\tau_m^k};\xi_m^k)\) to estimate the error of using stale information. CADA2 will reuse the stale stochastic gradient, i.e., exclude worker \(m\) from \(\mathcal{M}^k\), if worker \(m\) finds
\[
\big\|\nabla\ell(\theta^k;\xi_m^k)-\nabla\ell\big(\theta^{k-\tau_m^k};\xi_m^k\big)\big\|^2
\le \frac{c}{d_{\max}}\sum_{d=1}^{d_{\max}}\big\|\theta^{k+1-d}-\theta^{k-d}\big\|^2
\tag{10}
\]
If (10) is satisfied, then worker \(m\) does not upload and its staleness increases by one; otherwise, worker \(m\) uploads the stochastic gradient innovation \(\delta_m^k\) and resets its staleness to zero. Notice that, different from the naive LAG rule (5), the CADA2 condition (10) is evaluated at two different iterates but on the same sample \(\xi_m^k\).
The rationale of CADA2. Similar to CADA1, the CADA2 rule (10) also reduces its inherent variance, since the LHS of (10) can be written as the difference between a variance reduced stochastic gradient and a deterministic gradient, that is
\[
\nabla\ell(\theta^k;\xi_m^k)-\nabla\ell\big(\theta^{k-\tau_m^k};\xi_m^k\big)
= \Big(\nabla\ell(\theta^k;\xi_m^k)-\nabla\ell\big(\theta^{k-\tau_m^k};\xi_m^k\big)+\nabla\mathcal{L}_m\big(\theta^{k-\tau_m^k}\big)\Big)
-\nabla\mathcal{L}_m\big(\theta^{k-\tau_m^k}\big)
\tag{11}
\]
With derivations deferred to the supplementary document, and similar to (9), the LHS of (10) also diminishes as the iterates converge.
For either (7) or (10), worker \(m\) can check the condition locally with a small memory cost by recursively updating the RHS of (7) or (10). In addition, worker \(m\) uploads a fresh stochastic gradient whenever its staleness reaches the maximum delay \(D\). We summarize CADA in Algorithm 1.
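A minimal worker-side sketch of both rules is given below (illustrative only; the bookkeeping and names are our assumptions, not the authors' code). Worker m keeps the iterate of its last upload, the snapshot needed by CADA1, its current staleness, and the d_max most recent squared parameter changes that form the RHS of (7) and (10).

```python
import numpy as np
from collections import deque

def cada_worker_check(grad_fn, theta, theta_old, theta_snap, xi, xi_old,
                      recent_diffs, staleness, c, d_max, D, rule="CADA2"):
    """Decide whether worker m should upload at iteration k (rules (7)/(10)).

    grad_fn(theta, xi) : stochastic gradient of the local loss at theta on sample xi
    theta_old          : iterate of worker m's last upload (its copy stored at the server)
    theta_snap         : snapshot iterate used by CADA1, refreshed every D iterations
    recent_diffs       : deque with the last d_max squared parameter changes
    staleness          : current age of the stored copy
    """
    if rule == "CADA1":
        # difference of two gradient innovations, on samples xi (fresh) and xi_old (stale)
        err = (grad_fn(theta, xi) - grad_fn(theta_snap, xi)) \
            - (grad_fn(theta_old, xi_old) - grad_fn(theta_snap, xi_old))
    else:
        # CADA2: two different iterates, but the *same* fresh sample xi
        err = grad_fn(theta, xi) - grad_fn(theta_old, xi)

    rhs = (c / d_max) * sum(recent_diffs)
    return np.sum(err ** 2) > rhs or staleness >= D   # True -> upload the innovation

# The RHS only needs the d_max most recent squared parameter changes, e.g.,
# recent_diffs = deque(maxlen=d_max); recent_diffs.append(np.sum((theta_new - theta) ** 2))
```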
Computational and memory cost of CADA. In CADA, checking (7) or (10) will double the computational cost (gradient evaluations) per iteration. Aware of this fact, we compare both the number of iterations and the number of gradient evaluations in the simulations (see Figures 2-5), which demonstrate that CADA requires fewer iterations and also fewer gradient queries to achieve a target accuracy; thus the extra computation is small. In addition, the extra memory cost is low: to compute the RHS of (7) or (10), each worker only stores the norms of recent model changes (\(d_{\max}\) scalars).
3 Convergence Analysis of CADA
We present the convergence results of CADA. For all the results, we make some basic assumptions.
Assumption 1
Each local loss function \(\mathcal{L}_m\) is smooth with constant \(L\).
Assumption 2
The samples \(\{\xi_m^k\}\) are independent, and the stochastic gradient satisfies \(\mathbb{E}\big[\nabla\ell(\theta;\xi_m^k)\big]=\nabla\mathcal{L}_m(\theta)\) and \(\mathbb{E}\big[\|\nabla\ell(\theta;\xi_m^k)-\nabla\mathcal{L}_m(\theta)\|^2\big]\le\sigma_m^2\).
Note that Assumptions 1-2 are standard in analyzing Adam and its variants [Kingma and Ba(2014), Reddi et al.(2018)Reddi, Kale, and Kumar, Chen et al.(2019)Chen, Liu, Sun, and Hong, Xu et al.(2020)Xu, Sutcher-Shepard, Xu, and Chen].
3.1 Key steps of Lyapunov analysis
The convergence results of CADA build critically on the following Lyapunov analysis. We start by analyzing the expected descent of the objective after applying one CADA update.
Lemma 3.1.
Lemma 3.1 contains four terms on the RHS of (3.1): the first two terms quantify the correlations between the gradient direction and the stale aggregated stochastic gradient \(\nabla^k\) as well as the stale momentum stochastic gradient \(h^k\); the third term captures the drift of two consecutive iterates; and the last term estimates the maximum drift of the adaptive stepsizes over iterations.
In view of Lemma 3.1, analyzing the progress of the objective under CADA is challenging, especially since the effects of staleness and momentum are coupled with each other. Because the stale momentum gradient \(h^k\) is recursively updated by the stale aggregated gradient \(\nabla^k\), we first need the following lemma to characterize the regularity of the stale aggregated stochastic gradients \(\nabla^k\), which lays the theoretical foundation for incorporating properly controlled staleness into Adam's momentum update.
Lemma 3.2 justifies the relevance of the stale yet properly selected stochastic gradients. Intuitively, the first term on the RHS of (3.2) resembles the descent obtained by SGD with an unbiased stochastic gradient, and the second and third terms will diminish if the stepsizes are diminishing. This is ensured by our designed communication rules.
In view of Lemmas 3.1 and 3.2, we introduce the following Lyapunov function:
(14)
where \(\theta^\star\) is a solution of (1), and the constants will be specified in the proof.
The design of the Lyapunov function in (3.1) is motivated by the progress of the objective in Lemmas 3.1-3.2, and it is also coupled with our communication rules (7) and (10), which contain the parameter-difference terms. We find that this new Lyapunov function leads to a much simpler proof for Adam and AMSGrad, which is of independent interest. The following lemma captures the progress of the Lyapunov function.
Lemma 3.3.
The first term on the RHS of (15) is strictly negative, and the second term is positive but potentially small since it vanishes with the stepsizes. This implies that the Lyapunov function will eventually converge if we choose the stepsizes appropriately. Lemma 3.3 is a generalization of SGD's descent lemma: if the momentum and adaptive scaling in (2) are removed and the extra terms in (3.1) are dropped, then Lemma 3.3 reduces to that of SGD; see e.g., [Bottou et al.(2018)Bottou, Curtis, and Nocedal, Lemma 4.4].
3.2 Main convergence results
Building upon our Lyapunov analysis, we first present the convergence in nonconvex case.
Theorem 3.4 (nonconvex).
From Theorem 3.4, the convergence rate of CADA in terms of the average gradient norm is \(\mathcal{O}(1/\sqrt{K})\), which matches that of the plain-vanilla Adam [Reddi et al.(2018)Reddi, Kale, and Kumar, Chen et al.(2019)Chen, Liu, Sun, and Hong]. Unfortunately, due to the complicated nature of Adam-type analysis, the bound in (16) does not achieve the linear speedup established for asynchronous nonadaptive SGD, e.g., in [Lian et al.(2016)Lian, Zhang, Hsieh, Huang, and Liu]. However, our analysis is tailored to adaptive SGD and does not make any assumption on the asynchrony, e.g., that the set of uploading workers is independent of the past or even independent and identically distributed.
Next we present the convergence results under a slightly stronger assumption on the loss function.
Assumption 3
The loss function \(\mathcal{L}\) satisfies the Polyak-Łojasiewicz (PL) condition with constant \(\mu>0\), that is, \(\mathcal{L}(\theta)-\mathcal{L}(\theta^\star)\le\frac{1}{2\mu}\|\nabla\mathcal{L}(\theta)\|^2\) for all \(\theta\).
The PL condition is weaker than strong convexity and does not even require convexity [Karimi et al.(2016)Karimi, Nutini, and Schmidt]. It is satisfied by a wide range of problems such as least squares for underdetermined linear systems, logistic regression, and certain types of neural networks.
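As a simple worked example (ours, following the standard argument in [Karimi et al.(2016)Karimi, Nutini, and Schmidt]), least squares satisfies the PL condition even when the system is underdetermined:

```latex
% Least squares: \mathcal{L}(\theta) = \tfrac12 \|A\theta - b\|^2 with b in the range of A.
% Writing \theta^\star for any minimizer and \lambda_{\min}^{+}(A^\top A) for the smallest
% nonzero eigenvalue of A^\top A, one has
\[
\|\nabla \mathcal{L}(\theta)\|^2
  = \|A^\top (A\theta - b)\|^2
  \;\ge\; \lambda_{\min}^{+}(A^\top A)\,\|A\theta - b\|^2
  \;=\; 2\,\lambda_{\min}^{+}(A^\top A)\,\bigl(\mathcal{L}(\theta)-\mathcal{L}(\theta^\star)\bigr),
\]
% so the PL condition holds with \mu = \lambda_{\min}^{+}(A^\top A), without requiring
% A^\top A to be invertible (i.e., without strong convexity).
```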
We next establish the convergence of CADA under this condition.
Theorem 3.5 (PL-condition).
Theorem 3.5 implies that, under the PL condition on the loss function, CADA achieves global convergence in terms of the loss function at a fast rate. Compared with the previous analysis for LAG [Chen et al.(2018)Chen, Giannakis, Sun, and Yin], and as we highlighted in Section 3.1, the analysis for CADA is more involved, since it needs to deal not only with the outdated gradients but also with the stochastic momentum gradients and the adaptive matrix learning rates. We tackle this issue by i) considering a new set of communication rules (7) and (10) with reduced variance; and ii) incorporating the effect of momentum gradients and the drift of adaptive learning rates into the new Lyapunov function (3.1).
[Figures 2-5: comparison of CADA1, CADA2, Adam, stochastic LAG, local momentum, and FedAdam on logistic regression (ijcnn1, covtype) and neural network training (mnist, cifar10); panels report performance versus iterations, gradient evaluations, and communication uploads.]
4 Simulations
In order to verify our analysis and show the empirical performance of CADA, we conduct simulations using logistic regression and neural network training. Data are distributed across the workers in all tests. We benchmark CADA against popular methods such as Adam [Kingma and Ba(2014)], the stochastic version of LAG [Chen et al.(2018)Chen, Giannakis, Sun, and Yin], local momentum [Yu et al.(2019)Yu, Jin, and Yang] and the state-of-the-art FedAdam [Reddi et al.(2020)Reddi, Charles, Zaheer, Garrett, Rush, Konečnỳ, Kumar, and McMahan]. For local momentum and FedAdam, workers perform model updates independently, and the models are averaged over all workers periodically. In the simulations, critical parameters are optimized for each algorithm by a grid search. Due to space limitations, please see the detailed choice of parameters and additional experiments in the supplementary document.
Logistic regression. For CADA, the maximal delay and the remaining hyperparameters are listed in the supplementary document. For local momentum and FedAdam, we manually optimize the averaging period separately for ijcnn1 and covtype. Results are averaged over 10 Monte Carlo runs.
Tests on logistic regression are reported in Figures 2-3. In our tests, the two CADA variants achieve an iteration complexity similar to that of the original Adam and outperform all other baselines in most cases. Since CADA requires two gradient evaluations per iteration, the gradient complexity (e.g., computational complexity) of CADA is higher than that of Adam, but still smaller than that of the other baselines. For the logistic regression task, CADA1 and CADA2 reduce the number of communication uploads by at least one order of magnitude.
Training neural networks. We train a neural network with two convolution-ELU-maxpooling layers followed by two fully-connected layers for 10-class classification on mnist. We use the popular ResNet20 model on the CIFAR10 dataset, which has 20 layers and roughly 0.27 million parameters. We searched the best hyperparameter values over a grid to optimize the testing accuracy versus the number of communication rounds for each algorithm. In CADA, the maximum delay and the averaging interval are specified in the supplementary material, where we also report tests under different averaging intervals.
Tests on training neural networks are reported in Figures 4-5. On mnist, CADA1 and CADA2 reduce the number of communication uploads by roughly 60% relative to local momentum, and by slightly more relative to FedAdam. On cifar10, CADA1 and CADA2 achieve competitive performance relative to the state-of-the-art FedAdam and local momentum algorithms. We found that if we further enlarge the averaging interval, FedAdam and local momentum converge fast at the beginning but reach a worse test accuracy (e.g., lower by 5%-15%). It is also evident that the CADA1 and CADA2 rules achieve more communication reduction than the direct stochastic version of LAG, which verifies the intuition in Section 2.1.
5 Conclusions
While adaptive SGD methods have been widely applied in the single-node setting, their performance in the distributed learning setting is less understood. In this paper, we have developed a communication-adaptive distributed Adam method, termed CADA, which endows Adam with an additional dimension of adaptivity tailored to its distributed implementation. CADA leverages a set of adaptive communication rules to detect and then skip less informative communication rounds between the server and the workers during distributed learning. All CADA variants are simple to implement and have convergence rates comparable to the original Adam.
References
- [Aji and Heafield(2017)] Alham Fikri Aji and Kenneth Heafield. Sparse communication for distributed gradient descent. In Proc. Conf. Empirical Methods Natural Language Process., pages 440–445, Copenhagen, Denmark, Sep 2017.
- [Alistarh et al.(2017)Alistarh, Grubic, Li, Tomioka, and Vojnovic] Dan Alistarh, Demjan Grubic, Jerry Li, Ryota Tomioka, and Milan Vojnovic. QSGD: Communication-efficient SGD via gradient quantization and encoding. In Proc. Conf. Neural Info. Process. Syst., pages 1709–1720, Long Beach, CA, Dec 2017.
- [Alistarh et al.(2018)Alistarh, Hoefler, Johansson, Konstantinov, Khirirat, and Renggli] Dan Alistarh, Torsten Hoefler, Mikael Johansson, Nikola Konstantinov, Sarit Khirirat, and Cédric Renggli. The convergence of sparsified gradient methods. In Proc. Conf. Neural Info. Process. Syst., pages 5973–5983, Montreal, Canada, Dec 2018.
- [Bernstein et al.(2018)Bernstein, Wang, Azizzadenesheli, and Anandkumar] Jeremy Bernstein, Yu-Xiang Wang, Kamyar Azizzadenesheli, and Animashree Anandkumar. SignSGD: Compressed optimisation for non-convex problems. In Proc. Intl. Conf. Machine Learn., pages 559–568, Stockholm, Sweden, Jul 2018.
- [Bottou et al.(2018)Bottou, Curtis, and Nocedal] Léon Bottou, Frank E Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. Siam Review, 60(2):223–311, 2018.
- [Chen et al.(2018)Chen, Giannakis, Sun, and Yin] Tianyi Chen, Georgios Giannakis, Tao Sun, and Wotao Yin. LAG: Lazily aggregated gradient for communication-efficient distributed learning. In Proc. Conf. Neural Info. Process. Syst., pages 5050–5060, Montreal, Canada, Dec 2018.
- [Chen et al.(2020)Chen, Sun, and Yin] Tianyi Chen, Yuejiao Sun, and Wotao Yin. LASG: Lazily aggregated stochastic gradients for communication-efficient distributed learning. arXiv preprint:2002.11360, 2020.
- [Chen et al.(2019)Chen, Liu, Sun, and Hong] Xiangyi Chen, Sijia Liu, Ruoyu Sun, and Mingyi Hong. On the convergence of a class of Adam-type algorithms for non-convex optimization. In Proc. Intl. Conf. Learn. Representations, New Orleans, LA, May 2019.
- [Défossez et al.(2020)Défossez, Bottou, Bach, and Usunier] Alexandre Défossez, Léon Bottou, Francis Bach, and Nicolas Usunier. On the convergence of Adam and Adagrad. arXiv preprint:2003.02395, March 2020.
- [Duchi et al.(2011)Duchi, Hazan, and Singer] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. J. Machine Learning Res., 12(Jul):2121–2159, 2011.
- [Ghadimi and Lan(2013)] Saeed Ghadimi and Guanghui Lan. Stochastic first-and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341–2368, 2013.
- [Ghadimi and Lan(2016)] Saeed Ghadimi and Guanghui Lan. Accelerated gradient methods for nonconvex nonlinear and stochastic programming. Mathematical Programming, 156(1-2):59–99, 2016.
- [Haddadpour et al.(2019)Haddadpour, Kamani, Mahdavi, and Cadambe] Farzin Haddadpour, Mohammad Mahdi Kamani, Mehrdad Mahdavi, and Viveck Cadambe. Local sgd with periodic averaging: Tighter analysis and adaptive synchronization. In Proc. Conf. Neural Info. Process. Syst., pages 11080–11092, Vancouver, Canada, December 2019.
- [Horváth et al.(2019)Horváth, Kovalev, Mishchenko, Stich, and Richtárik] Samuel Horváth, Dmitry Kovalev, Konstantin Mishchenko, Sebastian Stich, and Peter Richtárik. Stochastic distributed learning with gradient quantization and variance reduction. arXiv preprint:1904.05115, April 2019.
- [Ivkin et al.(2019)Ivkin, Rothchild, Ullah, Stoica, Arora, et al.] Nikita Ivkin, Daniel Rothchild, Enayat Ullah, Ion Stoica, Raman Arora, et al. Communication-efficient distributed SGD with sketching. In Proc. Conf. Neural Info. Process. Syst., pages 13144–13154, Vancouver, Canada, December 2019.
- [Jaggi et al.(2014)Jaggi, Smith, Takác, Terhorst, Krishnan, Hofmann, and Jordan] Martin Jaggi, Virginia Smith, Martin Takác, Jonathan Terhorst, Sanjay Krishnan, Thomas Hofmann, and Michael I Jordan. Communication-efficient distributed dual coordinate ascent. In Proc. Advances in Neural Info. Process. Syst., pages 3068–3076, Montreal, Canada, December 2014.
- [Jiang and Agrawal(2018)] Peng Jiang and Gagan Agrawal. A linear speedup analysis of distributed deep learning with sparse and quantized communication. In Proc. Conf. Neural Info. Process. Syst., pages 2525–2536, Montreal, Canada, Dec 2018.
- [Johnson and Zhang(2013)] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Proc. Conf. Neural Info. Process. Syst., pages 315–323, 2013.
- [Kairouz et al.(2019)Kairouz, McMahan, Avent, Bellet, Bennis, Bhagoji, Bonawitz, Charles, Cormode, Cummings, et al.] Peter Kairouz, H Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Keith Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al. Advances and open problems in federated learning. arXiv preprint:1912.04977, December 2019.
- [Kamp et al.(2018)Kamp, Adilova, Sicking, Hüger, Schlicht, Wirtz, and Wrobel] Michael Kamp, Linara Adilova, Joachim Sicking, Fabian Hüger, Peter Schlicht, Tim Wirtz, and Stefan Wrobel. Efficient decentralized deep learning by dynamic model averaging. In Euro. Conf. Machine Learn. Knowledge Disc. Data.,, pages 393–409, Dublin, Ireland, 2018.
- [Karimi et al.(2016)Karimi, Nutini, and Schmidt] Hamed Karimi, Julie Nutini, and Mark Schmidt. Linear convergence of gradient and proximal-gradient methods under the polyak-łojasiewicz condition. In Proc. Euro. Conf. Machine Learn., pages 795–811, Riva del Garda, Italy, September 2016.
- [Karimireddy et al.(2019)Karimireddy, Rebjock, Stich, and Jaggi] Sai Praneeth Karimireddy, Quentin Rebjock, Sebastian Stich, and Martin Jaggi. Error feedback fixes signsgd and other gradient compression schemes. In Proc. Intl. Conf. Machine Learn., pages 3252–3261, Long Beach, CA, June 2019.
- [Karimireddy et al.(2020)Karimireddy, Kale, Mohri, Reddi, Stich, and Suresh] Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank J Reddi, Sebastian U Stich, and Ananda Theertha Suresh. SCAFFOLD: Stochastic controlled averaging for on-device federated learning. In Proc. Intl. Conf. Machine Learn., July 2020.
- [Kingma and Ba(2014)] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint:1412.6980, December 2014.
- [Li et al.(2018)Li, Sahu, Zaheer, Sanjabi, Talwalkar, and Smith] Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization in heterogeneous networks. arXiv preprint arXiv:1812.06127, 2018.
- [Li and Orabona(2019)] Xiaoyu Li and Francesco Orabona. On the convergence of stochastic gradient descent with adaptive stepsizes. In Proc. Intl. Conf. on Artif. Intell. and Stat., pages 983–992, Okinawa, Japan, April 2019.
- [Lian et al.(2016)Lian, Zhang, Hsieh, Huang, and Liu] Xiangru Lian, Huan Zhang, Cho-Jui Hsieh, Yijun Huang, and Ji Liu. A comprehensive linear speedup analysis for asynchronous stochastic parallel optimization from zeroth-order to first-order. In Proc. Conf. Neural Info. Process. Syst., pages 3054–3062, Barcelona, Spain, December 2016.
- [Lin et al.(2020)Lin, Stich, Patel, and Jaggi] Tao Lin, Sebastian U Stich, Kumar Kshitij Patel, and Martin Jaggi. Don’t use large mini-batches, use local SGD. In Proc. Intl. Conf. Learn. Representations, Addis Ababa, Ethiopia, April 2020.
- [Lin et al.(2018)Lin, Han, Mao, Wang, and Dally] Yujun Lin, Song Han, Huizi Mao, Yu Wang, and William J Dally. Deep gradient compression: Reducing the communication bandwidth for distributed training. In Proc. Intl. Conf. Learn. Representations, Vancouver, Canada, Apr 2018.
- [Ma et al.(2017)Ma, Konečnỳ, Jaggi, Smith, Jordan, Richtárik, and Takáč] Chenxin Ma, Jakub Konečnỳ, Martin Jaggi, Virginia Smith, Michael I Jordan, Peter Richtárik, and Martin Takáč. Distributed optimization with arbitrary local solvers. Optimization Methods and Software, 32(4):813–848, July 2017.
- [McMahan et al.(2017a)McMahan, Moore, Ramage, Hampson, and y Arcas] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Proc. Intl. Conf. Artificial Intell. and Stat., pages 1273–1282, Fort Lauderdale, FL, April 2017a.
- [McMahan et al.(2017b)McMahan, Moore, Ramage, Hampson, and y Arcas] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Proc. Intl. Conf. on Artif. Intell. and Stat., pages 1273–1282, Fort Lauderdale, Florida, Apr 2017b.
- [Nesterov(1983)] Yurii E Nesterov. A method for solving the convex programming problem with convergence rate . In Doklady AN USSR, volume 269, pages 543–547, 1983.
- [Park et al.(2019)Park, Samarakoon, Bennis, and Debbah] Jihong Park, Sumudu Samarakoon, Mehdi Bennis, and Mérouane Debbah. Wireless network intelligence at the edge. Proc. of the IEEE, 107(11):2204–2239, November 2019.
- [Polyak(1964)] Boris T Polyak. Some methods of speeding up the convergence of iteration methods. Computational Mathematics and Mathematical Physics, 4(5):1–17, 1964.
- [Reddi et al.(2018)Reddi, Kale, and Kumar] Sashank Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of adam and beyond. In Proc. Intl. Conf. Learn. Representations, Vancouver, Canada, April 2018.
- [Reddi et al.(2020)Reddi, Charles, Zaheer, Garrett, Rush, Konečnỳ, Kumar, and McMahan] Sashank Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečnỳ, Sanjiv Kumar, and H Brendan McMahan. Adaptive federated optimization. arXiv preprint:2003.00295, March 2020.
- [Reisizadeh et al.(2019)Reisizadeh, Taheri, Mokhtari, Hassani, and Pedarsani] Amirhossein Reisizadeh, Hossein Taheri, Aryan Mokhtari, Hamed Hassani, and Ramtin Pedarsani. Robust and communication-efficient collaborative learning. In Proc. Conf. Neural Info. Process. Syst., pages 8386–8397, Vancouver, Canada, December 2019.
- [Robbins and Monro(1951)] Herbert Robbins and Sutton Monro. A stochastic approximation method. Annals of Mathematical Statistics, 22(3):400–407, September 1951.
- [Seide et al.(2014)Seide, Fu, Droppo, Li, and Yu] Frank Seide, Hao Fu, Jasha Droppo, Gang Li, and Dong Yu. 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs. In Proc. Conf. Intl. Speech Comm. Assoc., Singapore, Sept 2014.
- [Shamir et al.(2014)Shamir, Srebro, and Zhang] Ohad Shamir, Nati Srebro, and Tong Zhang. Communication-efficient distributed optimization using an approximate newton-type method. In Proc. Intl. Conf. Machine Learn., pages 1000–1008, Beijing, China, Jun 2014.
- [Stich et al.(2018)Stich, Cordonnier, and Jaggi] Sebastian U. Stich, Jean-Baptiste Cordonnier, and Martin Jaggi. Sparsified SGD with memory. In Proc. Conf. Neural Info. Process. Syst., pages 4447–4458, Montreal, Canada, Dec 2018.
- [Stich(2019)] Sebastian Urban Stich. Local SGD converges fast and communicates little. In Proc. Intl. Conf. Learn. Representations, New Orleans, LA, May 2019.
- [Strom(2015)] Nikko Strom. Scalable distributed DNN training using commodity GPU cloud computing. In Proc. Conf. Intl. Speech Comm. Assoc., Dresden, Germany, September 2015.
- [Sun et al.(2019)Sun, Chen, Giannakis, and Yang] Jun Sun, Tianyi Chen, Georgios Giannakis, and Zaiyue Yang. Communication-efficient distributed learning via lazily aggregated quantized gradients. In Proc. Conf. Neural Info. Process. Syst., page to appear, Vancouver, Canada, Dec 2019.
- [Tang et al.(2018)Tang, Gan, Zhang, Zhang, and Liu] Hanlin Tang, Shaoduo Gan, Ce Zhang, Tong Zhang, and Ji Liu. Communication compression for decentralized training. In Proc. Conf. Neural Info. Process. Syst., pages 7652–7662, Montreal, Canada, December 2018.
- [Tang et al.(2020)Tang, Gan, Rajbhandari, Lian, Zhang, Liu, and He] Hanlin Tang, Shaoduo Gan, Samyam Rajbhandari, Xiangru Lian, Ce Zhang, Ji Liu, and Yuxiong He. Apmsqueeze: A communication efficient adam-preconditioned momentum sgd algorithm. arXiv preprint:2008.11343, August 2020.
- [Vogels et al.(2019)Vogels, Karimireddy, and Jaggi] Thijs Vogels, Sai Praneeth Karimireddy, and Martin Jaggi. PowerSGD: Practical low-rank gradient compression for distributed optimization. In Proc. Conf. Neural Info. Process. Syst., pages 14236–14245, Vancouver, Canada, December 2019.
- [Wang et al.(2020a)Wang, Lu, Tu, and Zhang] Guanghui Wang, Shiyin Lu, Weiwei Tu, and Lijun Zhang. SAdam: A variant of adam for strongly convex functions. In Proc. Intl. Conf. Learn. Representations, 2020a.
- [Wang and Joshi(2019)] Jianyu Wang and Gauri Joshi. Cooperative SGD: A unified framework for the design and analysis of communication-efficient SGD algorithms. In ICML Workshop on Coding Theory for Large-Scale ML, Long Beach, CA, June 2019.
- [Wang et al.(2020b)Wang, Tantia, Ballas, and Rabbat] Jianyu Wang, Vinayak Tantia, Nicolas Ballas, and Michael Rabbat. SlowMo: Improving communication-efficient distributed SGD with slow momentum. In Proc. Intl. Conf. Learn. Representations, 2020b.
- [Wangni et al.(2018)Wangni, Wang, Liu, and Zhang] Jianqiao Wangni, Jialei Wang, Ji Liu, and Tong Zhang. Gradient sparsification for communication-efficient distributed optimization. In Proc. Conf. Neural Info. Process. Syst., pages 1299–1309, Montreal, Canada, Dec 2018.
- [Ward et al.(2019)Ward, Wu, and Bottou] Rachel Ward, Xiaoxia Wu, and Leon Bottou. Adagrad stepsizes: Sharp convergence over nonconvex landscapes. In Proc. Intl. Conf. Machine Learn., pages 6677–6686, Long Beach, CA, June 2019.
- [Wen et al.(2017)Wen, Xu, Yan, Wu, Wang, Chen, and Li] Wei Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Terngrad: Ternary gradients to reduce communication in distributed deep learning. In Proc. Conf. Neural Info. Process. Syst., pages 1509–1519, Long Beach, CA, Dec 2017.
- [Wu et al.(2018)Wu, Huang, Huang, and Zhang] Jiaxiang Wu, Weidong Huang, Junzhou Huang, and Tong Zhang. Error compensated quantized sgd and its applications to large-scale distributed optimization. In Proc. Intl. Conf. Machine Learn., pages 5325–5333, Stockholm, Sweden, Jul 2018.
- [Xu et al.(2020)Xu, Sutcher-Shepard, Xu, and Chen] Yangyang Xu, Colin Sutcher-Shepard, Yibo Xu, and Jie Chen. Asynchronous parallel adaptive stochastic gradient methods. arXiv preprint:2002.09095, February 2020.
- [Yu et al.(2019)Yu, Jin, and Yang] Hao Yu, Rong Jin, and Sen Yang. On the linear speedup analysis of communication efficient momentum SGD for distributed non-convex optimization. In Proc. Intl. Conf. Machine Learn., pages 7184–7193, Long Beach, CA, June 2019.
- [Zeiler(2012)] Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint:1212.5701, December 2012.
- [Zhang et al.(2017)Zhang, Li, Kara, Alistarh, Liu, and Zhang] Hantian Zhang, Jerry Li, Kaan Kara, Dan Alistarh, Ji Liu, and Ce Zhang. Zipml: Training linear models with end-to-end low precision, and a little bit of deep learning. In Proc. Intl. Conf. Machine Learn., pages 4035–4043, Sydney, Australia, Aug 2017.
- [Zhang et al.(2015)Zhang, Choromanska, and LeCun] Sixin Zhang, Anna E Choromanska, and Yann LeCun. Deep learning with elastic averaging SGD. In Proc. Conf. Neural Info. Process. Syst., pages 685–693, Montreal, Canada, Dec 2015.
- [Zhang and Lin(2015)] Yuchen Zhang and Xiao Lin. DiSCO: Distributed optimization for self-concordant empirical loss. In Proc. Intl. Conf. Machine Learn., pages 362–370, Lille, France, June 2015.
Supplementary materials for “CADA: Communication-Adaptive Distributed Adam”
In this supplementary document, we first present some basic inequalities that will be used frequently in this document, and then present the missing derivations of some claims, as well as the proofs of all the lemmas and theorems in the paper, which is followed by details on our experiments. The content of this supplementary document is summarized as follows.
6 Supporting Lemmas
Define the \(\sigma\)-algebra \(\mathcal{F}^k\) that collects all the randomness up to iteration \(k\). For convenience, we also specify the initialization of the parameters. Some basic facts used in the proof are reviewed as follows.
Fact 1. Assume that \(x_1,\ldots,x_n\) are independent random variables with \(\mathbb{E}[x_i]=0\). Then
\[
\mathbb{E}\Big[\big\|\textstyle\sum_{i=1}^n x_i\big\|^2\Big] = \sum_{i=1}^n \mathbb{E}\big[\|x_i\|^2\big].
\tag{18}
\]
Fact 2. (Young’s inequality) For any \(a,b\in\mathbb{R}^p\) and \(\epsilon>0\),
\[
\langle a,b\rangle \le \frac{\|a\|^2}{2\epsilon} + \frac{\epsilon\|b\|^2}{2}.
\tag{19}
\]
As a consequence, we have
\[
\|a+b\|^2 \le (1+\epsilon)\|a\|^2 + \Big(1+\frac{1}{\epsilon}\Big)\|b\|^2.
\tag{20}
\]
Fact 3. (Cauchy-Schwarz inequality) For any \(a_1,\ldots,a_n\in\mathbb{R}^p\), we have
\[
\Big\|\sum_{i=1}^n a_i\Big\|^2 \le n\sum_{i=1}^n \|a_i\|^2.
\tag{21}
\]
Lemma 6.1.
For , if are the iterates generated by CADA, we have
(22) |
and similarly, we have
(23) |
Proof 6.2.
We first show the following holds.
(24) |
where (a) follows from the law of total probability, and (b) holds because is deterministic conditioned on when .
We first prove (6.1) by decomposing it as
(25) |
where (c) holds due to (6.2), (d) uses Assumption 1, and (e) applies the Young’s inequality.
Proof 6.4.
Using Assumption 2, it follows that
(29) |
Therefore, from the update (2a), we have
Since , it follows by induction that .
Using Assumption 2, it follows that
(30) |
Lemma 6.5.
7 Missing Derivations in Section 2.2
The analysis in this part is analogous to that in [Ghadimi and Lan(2013)]. We define an auxiliary function as
where is a minimizer of . Assume that is -Lipschitz continuous for all , we have
Taking expectation with respect to , we can obtain
Note that is also -Lipschitz continuous and thus
7.1 Derivations of (6)
7.2 Derivations of (9)
Recall that
And by (21), we have . We decompose the first term as
By nonnegativity of , we have
(35) |
Similarly, we can prove
(36) |
Therefore, it follows that
7.3 Derivations of (11)
8 Proof of Lemma 3.1
Using the smoothness of in Assumption 1, we have
(37) |
We can further decompose the inner product as
(38) |
where we again decompose the first inner product as
(39) |
Next, we bound the terms separately.
Taking expectation on conditioned on , we have
(40) |
where follows from the -smoothness of implied by Assumption 1; and (b) uses again the decomposition (8) and (8).
Taking expectation on over all the randomness, we have
(41) |
where (d) follows from the Cauchy-Schwarz inequality and (e) is due to Assumption 2.
Regarding , we will bound separately in Lemma 3.2.
Taking expectation on over all the randomness, we have
(42) |
9 Proof of Lemma 3.2
We first analyze the inner product under CADA2 and then CADA1.
First recall that . Using the law of total probability implies that
(45) |
Decomposing the inner product, for the CADA2 rule (10), we have
(47) |
where (b) follows from Lemma 6.1.
Using the Young’s inequality, we can bound the last inner product in (9) as
(48) |
where (g) follows from the Cauchy-Schwarz inequality, and (h) uses the adaptive communication condition (10) in CADA2, and (i) follows since is entry-wise nonnegative and is nonnegative.
10 Proof of Lemma 3.3
For notational brevity, we re-write the Lyapunov function (3.1) as
(51) |
where are some positive constants.
Therefore, taking expectation on the difference of and in (10), we have (with )
(52) |
where (a) uses the smoothness in Assumption 1 and the definition of in (8) and (8).
Note that we can bound the same as (8) in the proof of Lemma 3.1. In addition, Lemma 3.2 implies that
(53) |
Select and so that and
In addition, select to ensure that . Then it follows from (10) that
(55) |
where we have also used the fact that since .
If we choose for , then it follows from (10) that
(56) |
To ensure and , it is sufficient to choose and satisfying (with )
(57) | |||
(58) |
11 Proof of Theorem 3.4
By taking summation on (10) over , it follows from that
(62) |
Specifically, if we choose a constant stepsize , where is a constant, and define
(63) |
and
(64) |
and
(65) |
and
(66) |
we can obtain from (11) that
where we define , , , and .
12 Proof of Theorem 3.5
By the PL-condition of , we have
(67) |
where (a) uses the definition of in (66), and (b) uses Assumption 2 and Lemma 6.3.
Plugging (12) into (10), we have
(68) | ||||
If we choose to ensure that , then we can obtain from (68) that
(69) | ||||
If , we choose parameters to guarantee that
(70) | |||
(71) |
and choose to ensure that .
Then we have
(72) |
If we choose , where is a sufficiently large constant to ensure that satisfies the aforementioned conditions, then we have
from which the proof is complete.
13 Additional Numerical Results
13.1 Simulation setup
In order to verify our analysis and show the empirical performance of CADA, we conduct experiments on logistic regression and neural network training tasks, respectively.
In logistic regression, we tested the covtype and ijcnn1 in the main paper, and MNIST in the supplementary document. In training neural networks, we tested MNIST dataset in the main paper, and CIFAR10 in the supplementary document. To benchmark CADA, we compared it with some state-of-the-art algorithms, namely ADAM [Kingma and Ba(2014)], stochastic LAG, local momentum [Yu et al.(2019)Yu, Jin, and Yang, Wang et al.(2020b)Wang, Tantia, Ballas, and Rabbat] and FedAdam [Reddi et al.(2020)Reddi, Charles, Zaheer, Garrett, Rush, Konečnỳ, Kumar, and McMahan].
All experiments are run on a workstation with an Intel i9-9960x CPU with 128GB memory and four NVIDIA RTX 2080Ti GPUs each with 11GB memory using Python 3.6.
13.2 Simulation details
13.2.1 Logistic regression.
Objective function. For the logistic regression task, we use either the logistic loss for the binary case or the cross-entropy loss for the multi-class case, both of which are augmented with an \(\ell_2\)-norm regularizer with a fixed coefficient.
Data pre-processing. The ijcnn1 and covtype datasets are imported from the popular LIBSVM library (https://www.csie.ntu.edu.tw/~cjlin/libsvm/) without further preprocessing. For MNIST, we normalize the data and subtract the mean. We uniformly partition the ijcnn1 dataset with 91,701 samples and the MNIST dataset with 60,000 samples across the workers. To simulate the heterogeneous setting, we partition the covtype dataset with 581,012 samples randomly across the workers with a different number of samples per worker.
For covtype, we fix the batch ratio to be 0.001 uniformly across all workers; and for ijcnn1 and MNIST, we fix the batch ratio to be 0.01 uniformly across all workers.
Choice of hyperparameters. For the logistic regression task, the hyperparameters in each algorithm are chosen by hand to roughly optimize the training loss performance of each algorithm. We list the values of parameters used in each test in Tables 1-2.
Algorithm | stepsize | momentum weight | averaging interval |
---|---|---|---|
FedAdam | |||
Local momentum | |||
ADAM | / | ||
CADA1&2 | , | ||
Stochastic LAG | / |
Algorithm | stepsize | momentum weight | averaging interval |
---|---|---|---|
FedAdam | |||
Local momentum | |||
ADAM | / | ||
CADA | , | ||
Stochastic LAG | / |
13.2.2 Training neural networks.
For training neural networks, we use the cross-entropy loss but with different network models.
Neural network models. For the MNIST dataset, we use a convolutional neural network with two convolution-ELU-maxpooling layers (ELU is a smoothed ReLU) followed by two fully-connected layers. Both convolution layers use padding, and the output of the second block is fed to the two fully-connected layers, whose output goes through a softmax function. For the CIFAR10 dataset, we use the popular ResNet20 architecture (https://github.com/akamaster/pytorch_resnet_cifar10), which has 20 layers and roughly 0.27 million parameters. We do not use a pre-trained model.
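Since the exact layer sizes are omitted above, the following PyTorch sketch mirrors only the described structure (two convolution-ELU-maxpooling blocks followed by two fully-connected layers); the channel and width numbers are placeholders of ours, not the values used in the experiments.

```python
import torch.nn as nn

class SmallConvNet(nn.Module):
    """Two convolution-ELU-maxpooling blocks followed by two fully-connected layers;
    channel/width values are illustrative placeholders."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ELU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ELU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, 128), nn.ELU(),
            nn.Linear(128, num_classes),   # softmax is folded into the cross-entropy loss
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```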
Data pre-processing. We uniformly partition the MNIST and CIFAR10 datasets across the workers. For MNIST, we use the raw data without preprocessing; the minibatch size per worker is 12. For CIFAR10, in addition to normalizing the data and subtracting the mean, we randomly flip and crop part of the original image every time it is used for training. This is a standard data-augmentation technique to avoid over-fitting. The minibatch size for CIFAR10 is 50 per worker.
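The flip-and-crop augmentation described above corresponds to the standard torchvision recipe sketched below; the normalization statistics are commonly used CIFAR10 values and are our assumption, not taken from the paper.

```python
import torchvision.transforms as T

# Training-time augmentation for CIFAR10: random crop with padding, random
# horizontal flip, then normalization (mean subtraction and scaling).
train_transform = T.Compose([
    T.RandomCrop(32, padding=4),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    T.Normalize(mean=(0.4914, 0.4822, 0.4465), std=(0.2470, 0.2435, 0.2616)),
])
```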
Choice of hyperparameters. For MNIST dataset which is relatively easy, the hyperparameters in each algorithm are chosen by hand to optimize the performance of each algorithm. We list the values of parameters used in each test in Table 3.
Algorithm | stepsize | momentum weight | averaging interval |
---|---|---|---|
FedAdam | |||
Local momentum | |||
ADAM | / | ||
CADA1&2 | |||
Stochastic LAG | / |
For CIFAR10 dataset, we search the best values of hyperparameters from the following search grid on a per-algorithm basis to optimize the testing accuracy versus the number of communication rounds. The chosen values of parameter are listed in Table 4.
FedAdam: ; ; .
Local momentum: ; .
CADA1: ; .
CADA2: ; .
LAG: ; .
[Figures 6-7: performance of FedAdam and local momentum on MNIST and CIFAR10 under different averaging intervals.]
Algorithm | stepsize | momentum weight | averaging interval |
---|---|---|---|
FedAdam | |||
Local momentum | |||
CADA1 | , | ||
CADA2 | , | ||
Stochastic LAG | / |
Additional results. In addition to the results presented in the main paper, we report a new set of simulations on the performance of local-update-based algorithms under different averaging intervals. Since some averaging intervals do not perform as well as others, we only plot a subset of them in Figures 6 and 7 to ease the comparison. Figure 6 compares the performance of FedAdam and local momentum on the MNIST dataset under different averaging intervals, and Figure 7 does the same on the CIFAR10 dataset. FedAdam and local momentum with a larger averaging interval converge faster at the initial stage, but they reach a slightly lower test accuracy. This reduced test accuracy is common among local SGD-type methods and has also been studied theoretically; see, e.g., [Haddadpour et al.(2019)Haddadpour, Kamani, Mahdavi, and Cadambe].