OledFL: Unleashing the Potential of Decentralized Federated Learning via Opposite Lookahead Enhancement
Abstract
Decentralized Federated Learning (DFL) surpasses Centralized Federated Learning (CFL) in terms of faster training, privacy preservation, and light communication, making it a promising alternative in the field of federated learning. However, DFL still exhibits significant disparities from CFL in terms of generalization ability: theoretical understanding is scarce, and empirical performance is degraded by severe inconsistency among clients. In this paper, we enhance the consistency of DFL by developing an opposite lookahead enhancement technique (Ole), yielding OledFL, which optimizes the initialization of each client in each communication round and thus significantly improves both generalization and convergence speed. Moreover, we rigorously establish its convergence rate in the non-convex setting and characterize its generalization bound through uniform stability, which provides concrete reasons why OledFL can achieve both fast convergence and high generalization ability. Extensive experiments conducted on the CIFAR10 and CIFAR100 datasets with Dirichlet and Pathological distributions illustrate that OledFL can achieve up to 5% performance improvement and an 8× speedup compared to the most popular DFedAvg optimizer in DFL.
Index Terms:
Decentralized Federated Learning, Non-convex Optimization, Acceleration, Convergence Analysis, Generalization Analysis.

I Introduction
Federated Learning (FL) is a novel distributed machine learning paradigm that ensures privacy protection [1, 2, 3]. It enables multiple participants to collaborate on training models without sharing their raw data. Currently, most research efforts [4, 5, 6, 7, 8, 9] have focused on Centralized Federated Learning (CFL). However, the presence of a central server in CFL introduces challenges such as communication burden, single point of failure [10], and privacy breaches [11]. In contrast, Decentralized Federated Learning (DFL) offers improved privacy protection [12], faster model training [13], and robustness to slow client devices [14]. Therefore, DFL has emerged as a popular alternative solution [10, 13].
However, there still exists a significant performance gap between CFL and existing DFL methods, which we attribute to the decentralized communication approach of DFL. The lack of central server coordination results in growing inconsistency between the model parameters of different clients as the communication rounds increase; this severe inconsistency produces a significant gap between the final output of the model and the global optimum, leading to performance disparities compared to CFL. Although existing methods such as DFedAvgM [15] and DFedSAM [16] have improved the performance of DFL algorithms by introducing new local optimizers to accelerate convergence and address heterogeneous data overfitting, they have not tackled the issue from the critical perspective of the communication discrepancies between DFL and CFL, resulting in a persistent performance gap between DFL and CFL. Additionally, existing theoretical analyses of DFL methods are limited to convergence analysis, while generalization theory is still absent.
In this work, we propose a plug-in method named the opposite Lookahead enhancement (Ole) technique to enhance the consistency of DFL, dubbed OledFL, which reduces the performance gap between CFL and DFL. The OledFL framework can seamlessly integrate existing DFL optimizers to significantly improve the convergence speed and generalization performance of DFL empirically. We also theoretically prove that OledFL can enhance the convergence speed and reduce generalization error compared to the original DFL methods. Specifically, OledFL performs a retraction operation during the initialization phase of optimizing each client model parameter (see Figure 1). This ensures that each client will not stray too far from its initialization during the subsequent optimization process. By adopting this initialization operation for each client before optimization, the mutual consistency among them is naturally strengthened.
Theoretically, we jointly analyze the optimization error and generalization error by introducing the excess error. Specifically, we characterize the optimization error and convergence rate of OledFL-SGD (integrating OledFL with DFedAvg) and OledFL-SAM (integrating OledFL with DFedSAM) in the non-convex setting, and the theoretical analysis indicates that "Ole" can reduce the algorithm's optimization error (Remark 2). Additionally, through uniform stability analysis, we assess the algorithm's generalization error and find that "Ole" can significantly reduce the generalization error (Remark 3). In experiments on the CIFAR10&100 datasets under Dirichlet and Pathological distributions, OledFL consistently demonstrates notable improvements in convergence speed (see Table III) and generalization performance (see Table II) over existing DFL methods and outperforms state-of-the-art CFL methods such as FedSAM [17] and SCAFFOLD [18].
In summary, our main contributions are three-fold:
• We propose an opposite Lookahead enhancement (Ole) technique to address the inconsistency issue in DFL, dubbed OledFL. OledFL can seamlessly integrate existing DFL optimizers such as DFedAvg and DFedSAM to significantly improve their convergence speed and generalization performance, thereby reducing the performance gap between CFL and DFL. Furthermore, we provide a comprehensive explanation of the role of Ole, including intuitive (Figure 1), theoretical (Remark 3), and experimental interpretations (Section V & Figure 4).
• We establish a pioneering generalization analysis (Section IV) in the field of DFL, which theoretically demonstrates the effectiveness of OledFL from the perspective of both optimization error bounds and generalization error bounds.
• We conduct extensive experiments on the CIFAR10&100 datasets under Dirichlet and Pathological distributions, respectively, which demonstrate that OledFL significantly enhances the convergence speed and generalization performance of existing DFL methods. With the assistance of Ole, it achieves up to 5% performance improvement and an 8× speedup compared to the most popular DFedAvg optimizer in DFL, notably reducing the gap between CFL and DFL (Table II & Figure 2).
II Related work
Below, we will briefly review the most relevant work to our research, which includes Decentralized Federated Learning (DFL), acceleration techniques in optimization, and theoretical guarantees of DFL.
Decentralized Federated Learning (DFL). To mitigate the communication burden on the server in centralized scenarios, decentralized communication methods distribute the communication load to each node while maintaining overall communication complexity equivalent to that of centralized scenarios [13]. Additionally, decentralized communication methods afford improved privacy protection compared to CFL [20, 21, 22]. DFL has emerged as a promising field of research, recognized as a challenge in various review articles in recent years [23, 24]. Within DFL, Sun et al. [15] extend the FedAvg algorithm [1] to decentralized scenarios and complement it with local momentum acceleration to enhance convergence. Furthermore, Dai et al. [25] introduce sparse training into DFL to reduce communication and computation costs, while Shi et al. [16] apply SAM to DFL and enhance the consistency among clients by incorporating Multiple Gossip Steps. These endeavors gradually improve the performance of DFL from different perspectives. However, a significant performance gap still exists compared to CFL. For further related work on DFL, please refer to the survey papers [11, 26, 23] and the references therein.
Acceleration Techniques for Deep Learning. In deep learning, momentum and restart are typical acceleration techniques. Momentum techniques focus more on the design of the optimizer, while restart techniques focus more on the selection of initialization points. In Federated Learning (FL), one type of method used in the centralized FL (CFL) domain is global momentum, such as FedCM [27] and MimeLite [28], which estimates the global momentum at the server and applies it to each client update, thereby alleviating the problem of client heterogeneity. Another type is the local momentum used in DFL, such as the DFedAvgM algorithm proposed by [15], which utilizes local momentum to accelerate the convergence process. Restart techniques have nearly negligible computational cost but significantly enhance the effectiveness of the algorithm. Zhou [29] introduces the Lookahead optimizer, which improves learning stability and reduces the variance of its inner optimizer through a "$k$ steps forward, 1 step back" approach. Additionally, Lin et al. [30] develop a universal framework called Catalyst, which can accelerate first-order optimization methods such as SAG [31], SAGA [32], and SVRG [33]. Subsequently, Trimbach et al. [34] extend Catalyst to the field of distributed learning and use Catalyst to accelerate the DSGD algorithm [35].
Theoretical Guarantees of DFL. A more comprehensive comparison is presented in Table I. Current DFL works such as DFedAvg [15], DFedAvgM [15], and DFedSAM [16] have concentrated solely on convergence analysis. DFedSAM, by abandoning the bounded-gradient assumption used in DFedAvg, represents an advance in the convergence analysis of the algorithm. However, in machine learning, algorithms are often evaluated based on their ability to perform well on new, unseen data. This is known as generalization performance, and algorithms that exhibit higher generalization performance are considered state-of-the-art. In the field of DFL, there is currently no analysis of how well the proposed algorithms generalize to new data, which leaves a lack of theoretical support for why these algorithms perform well.
III Methodology
In this section, we begin by elucidating the meanings of several notations. We then introduce the problem setup; next, we propose OledFL and compare it with existing methods to demonstrate its novelty. Finally, we elucidate the relation with Chebyshev Acceleration and provide an effective and intuitive explanation of OledFL.
III-A Notations
Let $m$ be the total number of clients and $T$ the number of communication rounds. $x_i^{t,k}$ indicates the variable at the $k$-th iteration of the $t$-th round in the $i$-th client. $x$ denotes the model parameters. The communication topology between clients can be represented as a graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ represents the set of clients, $\mathcal{E}$ denotes the links between clients, $w_{ij}$ represents the weight of the link between client $i$ and client $j$, and $W = [w_{ij}]$ represents the mixing matrix. The inner product of two vectors is denoted by $\langle \cdot, \cdot \rangle$, and $\|\cdot\|$ represents the Euclidean norm of a vector. Other symbols will be explained in their respective contexts.
III-B Problem Setup
In this paper, we consider a network of $m$ clients whose objective is to jointly solve the following distributed population risk minimization problem:
$$\min_{x \in \mathbb{R}^d} F(x) := \frac{1}{m}\sum_{i=1}^{m} f_i(x), \qquad f_i(x) := \mathbb{E}_{\xi \sim \mathcal{D}_i}\, F_i(x; \xi), \qquad (1)$$
where $\mathcal{D}_i$ represents the data distribution on the $i$-th client, which exhibits heterogeneity across clients. Each client's local objective function $f_i(x)$ is associated with the training data samples $\xi$. We denote $F^*$ as the optimal value of (1). Unlike CFL, we address (1) by enabling clients to collaborate through a mixing matrix $W$ in a decentralized manner, leveraging peer-to-peer communication among clients without the need for server coordination.
Practically, we consider the empirical risk minimization of the non-convex finite-sum problem in DFL as:
$$\min_{x \in \mathbb{R}^d} f(x) := \frac{1}{m}\sum_{i=1}^{m} f_i(x), \qquad f_i(x) := \frac{1}{n}\sum_{j=1}^{n} F_i(x; \xi_{i,j}), \qquad (2)$$
where each client $i$ stores a private dataset $\mathcal{S}_i = \{\xi_{i,1}, \dots, \xi_{i,n}\}$, with each $\xi_{i,j}$ drawn from an unknown distribution $\mathcal{D}_i$. We denote $f^*$ as the optimal value of problem (2).
III-C OledFL Algorithm
The most popular decentralized FL optimizers for solving the problem (2) are DFedSAM [16] and DFedAvg [15]. However, the decentralized approach, lacking central server coordination, leads to increased client inconsistency. To enhance consistency, we design the opposite lookahead enhanced (Ole) initialization method before performing local optimization at each client:
$$x_i^{t,0} = z_i^t + \phi\,\big(z_i^t - x_i^{t-1,K}\big), \qquad z_i^t = \sum_{j=1}^{m} w_{ij}\, x_j^{t-1,K}, \qquad (3)$$
where $z_i^t$ is the aggregated model. Intuitively, at each initialization, clients step opposite to the direction of their most recent iterate through Ole. This ensures that the values obtained in the subsequent optimization, $x_i^{t,k}$, do not deviate too far from the initial value $x_i^{t,0}$, thereby strengthening the consistency among clients. A more intuitive explanation of the effect of Ole is shown in Figure 1. The complete algorithm is presented in Algorithm 1.¹

¹Here we provide the default form of OledFL that is composed with SAM. In the experiments, we will instantiate OledFL as OledFL-SGD (by setting the SAM perturbation radius to zero) and OledFL-SAM if necessary.
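To make the round structure concrete, the sketch below implements one communication round under an assumed Ole form $x_i^{t,0} = z_i^t + \phi(z_i^t - x_i^{t-1,K})$ (a retraction opposite to each client's most recent iterate) with plain local SGD. The function names and hyperparameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def oledfl_round(x, W, grad_fn, phi=0.5, eta=0.05, K=5):
    """One communication round of the Ole-enhanced scheme (sketch).

    x: (m, d) array of client parameters after the previous round.
    W: (m, m) symmetric, doubly stochastic mixing matrix.
    grad_fn(i, x_i): stochastic gradient of client i's local loss.
    """
    z = W @ x                      # gossip aggregation with neighbors
    x_init = z + phi * (z - x)     # Ole: step opposite to the last iterate
    x_new = x_init.copy()
    for i in range(x.shape[0]):    # K local SGD steps per client
        for _ in range(K):
            x_new[i] -= eta * grad_fn(i, x_new[i])
    return x_new
```

With a symmetric doubly stochastic $W$ and, say, quadratic local losses, the client average converges to the mean of the local optima while the Ole step keeps clients clustered around their common initialization.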
Discussion on the Lookahead Optimizer. Zhang et al. [36] propose the Lookahead optimizer with the initial slow weights denoted as $\varphi_0$, whose core iterative form is given by:
$$\theta_{t,0} = \varphi_t, \quad \theta_{t,k} = \theta_{t,k-1} - \eta\, \nabla \ell(\theta_{t,k-1}),\; k = 1, \dots, k_{\max}, \qquad \varphi_{t+1} = \varphi_t + \alpha\,(\theta_{t,k_{\max}} - \varphi_t), \qquad (4)$$
It optimizes the sequence of "fast weights" $\theta$ through an inner loop of $k$ steps and then utilizes these fast weights to determine the initial search direction of the "slow weights" $\varphi$, which reduces variance and facilitates rapid convergence of Lookahead in practice. When we omit the client subscript in (3), the Ole component in (3) carries exactly the opposite sign of the corresponding interpolation term in (4), which is the origin of the name "Opposite lookahead enhancement".
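For contrast, here is a minimal sketch of the Lookahead outer step; `fast_opt_steps` is a stand-in for any inner optimizer, and the sign of the final interpolation (a step *toward* the fast weights) is precisely what Ole flips.

```python
import numpy as np

def lookahead_step(slow, fast_opt_steps, alpha=0.5, k=5):
    """One outer iteration of the Lookahead optimizer (sketch).

    fast_opt_steps(theta, k): runs k inner-optimizer steps from theta
    and returns the resulting fast weights.
    """
    theta_k = fast_opt_steps(slow.copy(), k)   # k steps forward
    return slow + alpha * (theta_k - slow)     # 1 step back toward slow
```

On a toy quadratic $\ell(x) = x^2/2$ with inner SGD of rate $0.1$, each inner step multiplies the iterate by $0.9$, and repeated outer steps contract toward the minimizer.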
Discussion on Catalyst. In distributed optimization, Lin et al. [30] propose a generic acceleration scheme for a large class of optimization methods, with the initial optimization value denoted as $y_0$, whose core iterative form is as follows:
$$x_t \approx \arg\min_{x}\Big\{ h(x) + \frac{\kappa}{2}\,\|x - y_{t-1}\|^2 \Big\}, \qquad y_t = x_t + \beta_t\,(x_t - x_{t-1}), \qquad (5)$$
Through (5), significant acceleration in the convergence of algorithms such as SAG [31], SAGA [32], and SVRG [33] can be achieved. By comparing (5) with (3), it is evident that the initialization coefficient $\phi$ in (3) is a simpler fixed value. When considering the client subscript in (5) [34], the subtraction in Ole is performed using the client's most recently obtained parameter value $x_i^{t-1,K}$ rather than the value after the previous communication. This difference leads to a fundamental distinction between Catalyst and Ole: Ole initializes the point in the direction opposite to the line from the current point toward the local optimal value point, as shown in Figure 1 (OledFL), while Catalyst, similar to the momentum method, extrapolates the initialization point along the direction of the previous update.
Discussion of the relation with Chebyshev Acceleration (CA): Chebyshev Acceleration has been widely utilized to expedite the attainment of consensus among networked agents during the distributed optimization process [37]. When CA is integrated with stochastic gradient accumulation employing a large batch size at each iteration, it can ensure optimal convergence [38, 39]. A key idea of CA involves transforming the mixing matrix $W$ into a matrix polynomial $P_K(W)$ with a smaller second-largest eigenvalue modulus.
In simple terms, CA transforms the communication topology $W$ into $P_K(W)$ during the communication period. It can be proven that $P_K(W)$ facilitates faster achievement of consensus among multiple clients compared to $W$ [40, 41]. The update form of Ole is equivalent to aggregation using the communication matrix $(1+\phi)W - \phi I$, demonstrating a connection between Ole and the core idea of CA. In light of the relationship with CA, we constrain $\phi$ in Algorithm 1. Figure 1 shows that Ole enhances consistency, and the relationship between Ole and CA suggests that the acceleration effect of CA originates from the enhancement of consistency among clients.
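This connection can be checked numerically. Under the assumption that the Ole aggregation behaves like the matrix $(1+\phi)W - \phi I$, its second-largest eigenvalue modulus is smaller than that of $W$ (implying faster consensus), while an overly large $\phi$ pushes an eigenvalue below $-1$, motivating the constraint on $\phi$. The ring topology and values below are illustrative.

```python
import numpy as np

def ring_mixing(m):
    """Symmetric doubly stochastic mixing matrix on a ring graph."""
    W = 0.5 * np.eye(m)
    W += 0.25 * (np.roll(np.eye(m), 1, axis=1) + np.roll(np.eye(m), -1, axis=1))
    return W

def slem(W):
    """Second-largest eigenvalue modulus: the consensus rate lambda."""
    lam = np.sort(np.abs(np.linalg.eigvalsh(W)))[::-1]
    return lam[1]

m, phi = 10, 0.3
W = ring_mixing(m)
W_phi = (1 + phi) * W - phi * np.eye(m)   # assumed Ole-equivalent matrix
print(slem(W), slem(W_phi))               # smaller lambda => faster consensus
```

Note that $W_\phi$ keeps row sums equal to one (its eigenvalue at $1$ is preserved), so it remains a valid averaging operator as long as its spectrum stays above $-1$.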
IV Theoretical Analysis
In this section, we begin by presenting the theoretical analysis, which offers a comprehensive examination of the combined performance in optimization and generalization. Then, we introduce the necessary assumptions utilized in our proofs. At last, we outline the main theorems, encompassing both optimization and generalization, respectively.
IV-A Excess Risk Error
In the existing literature on DFL, most analyses study the problem solely from the perspective of convergence but ignore its impact on generalization [16, 15, 42]. Additionally, some studies in distributed learning exclusively analyze the algorithm's generalization while overlooking the effect on convergence [43, 44, 45]. To offer a more comprehensive examination of the joint performance of both optimization and generalization in FL, we introduce the well-known concept of excess risk in our analysis.
Firstly, we define $\bar{x}^T = \frac{1}{m}\sum_{i=1}^m x_i^T$ as the final averaged model generated by the OledFL method after $T$ communication rounds. Our primary focus is on the efficiency of $\bar{x}^T$, which corresponds to its generalization performance and test accuracy. Consequently, we analyze $\bar{x}^T$ based on the excess risk $\mathcal{E}_E$:
$$\mathcal{E}_E = \mathbb{E}\big[F(\bar{x}^T)\big] - f^* = \underbrace{\mathbb{E}\big[F(\bar{x}^T) - f(\bar{x}^T)\big]}_{\mathcal{E}_G} + \underbrace{\mathbb{E}\big[f(\bar{x}^T)\big] - f^*}_{\mathcal{E}_O}. \qquad (6)$$
Generally, $f^*$ is expected to be very small, and even zero if the model can fit the dataset. Thus $\mathcal{E}_E$ can be considered the joint efficiency of the generated model $\bar{x}^T$. Thereinto, $\mathcal{E}_G$ measures the performance difference of $\bar{x}^T$ between the training dataset and the test dataset, and $\mathcal{E}_O$ measures the closeness between $f(\bar{x}^T)$ and the optimization optimum $f^*$. From the perspective of the excess risk, $\mathcal{E}_E$ approximates our focus $\mathbb{E}[F(\bar{x}^T)]$. It is worth noting that, in non-convex settings, $\mathcal{E}_O$ is measured via the gradient residual rather than the function-value gap. We investigate the optimization and generalization performance respectively in the following subsections.
IV-B Definition And Assumptions
Below, we introduce the definition of the mixing matrix and several assumptions utilized in our analysis.
Definition 1
(Gossip/Mixing matrix). [Definition 1, [15]] The gossip matrix $W = [w_{ij}] \in [0,1]^{m \times m}$ is assumed to have the following properties: (i) (Graph) If $i \neq j$ and $(i, j) \notin \mathcal{E}$, then $w_{ij} = 0$; otherwise, $w_{ij} > 0$; (ii) (Symmetry) $W = W^\top$; (iii) (Null space property) $\mathrm{null}\{I - W\} = \mathrm{span}\{\mathbf{1}\}$; (iv) (Spectral property) $I \succeq W \succ -I$. Under these properties, the eigenvalues of $W$ satisfy $1 = \lambda_1(W) > \lambda_2(W) \geq \cdots \geq \lambda_m(W) > -1$. Furthermore, we define $\lambda := \max\{|\lambda_2(W)|, |\lambda_m(W)|\}$ and $1 - \lambda$ as the spectral gap of $W$.
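A small checker for Definition 1 (a sketch; the tolerance and the test matrices are assumptions) verifies symmetry, the null-space property via $W\mathbf{1} = \mathbf{1}$, and the spectral property, and returns the spectral gap $1 - \lambda$:

```python
import numpy as np

def check_mixing(W, tol=1e-10):
    """Verify the Definition 1 properties of a gossip matrix W
    and return its spectral gap 1 - lambda."""
    m = W.shape[0]
    assert np.allclose(W, W.T, atol=tol)                       # symmetry
    assert np.allclose(W @ np.ones(m), np.ones(m), atol=tol)   # W 1 = 1
    lam = np.linalg.eigvalsh(W)                                # ascending order
    assert lam[-1] <= 1 + tol and lam[0] > -1                  # spectral property
    lam_mix = max(abs(lam[0]), abs(lam[-2]))                   # max(|lambda_2|, |lambda_m|)
    return 1.0 - lam_mix
```

For instance, the fully connected averaging matrix has spectral gap $1$, while sparser topologies such as a ring have a much smaller gap, matching the intuition that denser connectivity mixes faster.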
Assumption 1
(L-Smoothness) Each non-convex local function $f_i$ satisfies the smoothness property for all $i \in \{1, \dots, m\}$, i.e., $\|\nabla f_i(x) - \nabla f_i(y)\| \leq L \|x - y\|$, for all $x, y \in \mathbb{R}^d$.
Assumption 2
(Bounded Variance) The stochastic gradient $\nabla F_i(x; \xi)$ with the randomly sampled data $\xi$ on the local client $i$ is unbiased and has bounded variance, i.e., $\mathbb{E}[\nabla F_i(x; \xi)] = \nabla f_i(x)$ and $\mathbb{E}\|\nabla F_i(x; \xi) - \nabla f_i(x)\|^2 \leq \sigma^2$, for all $x \in \mathbb{R}^d$.
Assumption 3
(Bounded Heterogeneity) For all $x \in \mathbb{R}^d$, the heterogeneous similarity is bounded on the gradient norm as $\frac{1}{m}\sum_{i=1}^m \|\nabla f_i(x)\|^2 \leq G^2 + B^2 \|\nabla f(x)\|^2$, where $G$ and $B$ are two constants.
Assumption 4
(Lipschitz Continuity). The global function $f$ satisfies the $L_G$-Lipschitz property, i.e., $|f(x) - f(y)| \leq L_G \|x - y\|$ for all $x, y \in \mathbb{R}^d$.
Definition 1 is commonly used to describe the communication topology in DFL, where the gossip matrix can also be viewed as a Markov transition matrix. The term $1 - \lambda$ measures the speed of convergence to the equilibrium state. Assumptions 1–3 are mild and commonly used in analyzing non-convex objectives in DFL [15, 16]. Assumption 4 is utilized to bound the uniform stability for the non-convex objective [43, 46, 47]. It is important to note that when characterizing the algorithm's convergence speed, we only use Assumptions 1–3, whereas for the generalization bound we utilize Assumptions 1, 2, and 4. This is because Assumption 4 implies bounded gradients, i.e., $\|\nabla f(x)\| \leq L_G$, while Assumption 3 is more general than the bounded-gradient assumption.
IV-C Optimization Analysis
In this part, we present the optimization error and convergence rate of OledFL under Assumptions 1–3. All proofs can be found in Appendix B-A.
Theorem 1
Under Assumptions 1–3, let the learning rate $\eta$ satisfy the stepsize condition stated in Appendix B-A, let the Ole parameter $\phi$ lie in its admissible range, and after $T$ training rounds, the averaged model parameters $\bar{x}^t$ generated by our proposed algorithm satisfy:
where the constants and the consistency term are specified in Appendix B-A.
Further, by selecting a proper learning rate and denoting the initialization bias accordingly, the averaged model parameters satisfy:
Remark 1
When setting the SAM perturbation radius to zero in Theorem 1, we obtain the optimization error and convergence rate of OledFL with local SGD. Moreover, by choosing the perturbation radius sufficiently small, the perturbation term generated by local SAM can be neglected in comparison to the dominant term of the convergence rate. Furthermore, OledFL achieves a convergence rate of the same order as the rate demonstrated to be optimal for stochastic methods in DFL under general assumptions. Additionally, from the dominant term of the convergence rate, it can be observed that better topological connectivity (a smaller $\lambda$) leads to a faster convergence rate. We verify this conclusion in Section V-C. Moreover, a larger number of local epochs also leads to a faster convergence rate, as confirmed in Section V-D (see Figure 8 (b)).
Remark 2
Theorem 1 provides a general convergence bound for OledFL. When $\phi = 0$, it degrades to the vanilla DFedAvg method [15]. Similar to DFedAvg, the bound is influenced by the initialization bias and the intrinsic variance $\sigma^2$. However, with the adoption of the Ole initialization, the consistency term assists in reducing the upper bound of the convergence rate. Furthermore, from Theorem 1, it can be observed that a larger $\phi$ leads to a smaller optimization error, while the admissible range of $\phi$ provides an upper bound. By neglecting specific numerical relations and focusing on the associated relations, we can obtain an important conclusion: a better-connected communication topology (implying a smaller $\lambda$) corresponds to a smaller optimal value of $\phi$ for the algorithm. Section V-C verifies this conclusion (see Figure 7).
| Algorithm | CIFAR-10 Dir 0.3 | CIFAR-10 Dir 0.6 | CIFAR-10 IID | CIFAR-100 Dir 0.3 | CIFAR-100 Dir 0.6 | CIFAR-100 IID |
| --- | --- | --- | --- | --- | --- | --- |
| FedAvg | 78.10 ± 0.81 | 79.00 ± 1.04 | 81.16 ± 0.20 | 54.50 ± 0.70 | 55.69 ± 0.72 | 56.94 ± 0.41 |
| FedSAM | 80.22 ± 0.70 | 81.43 ± 0.28 | 82.83 ± 0.23 | 57.76 ± 0.36 | 58.62 ± 0.51 | 59.97 ± 0.19 |
| SCAFFOLD | 78.39 ± 0.35 | 79.85 ± 0.18 | 81.75 ± 0.23 | 60.23 ± 0.30 | 61.42 ± 0.52 | 62.75 ± 0.31 |
| D-PSGD | 59.76 ± 0.04 | 60.03 ± 0.13 | 62.93 ± 0.12 | 55.68 ± 0.20 | 56.68 ± 0.10 | 57.60 ± 0.12 |
| DFedAvg | 77.25 ± 0.12 | 77.83 ± 0.11 | 79.97 ± 0.08 | 58.17 ± 0.10 | 58.22 ± 0.50 | 59.00 ± 0.31 |
| DFedAvgM | 79.30 ± 0.24 | 80.66 ± 0.07 | 82.72 ± 0.20 | 57.79 ± 0.29 | 58.13 ± 0.39 | 58.90 ± 0.47 |
| DFedSAM | 79.37 ± 0.07 | 80.47 ± 0.09 | 82.14 ± 0.09 | 57.67 ± 0.20 | 58.55 ± 0.23 | 59.53 ± 0.24 |
| OledFL-SGD | 82.20 ± 0.20 | 82.72 ± 0.22 | 84.10 ± 0.24 | 60.00 ± 0.15 | 60.65 ± 0.30 | 62.58 ± 0.21 |
| OledFL-SAM | 84.45 ± 0.19 | 84.70 ± 0.10 | 85.75 ± 0.20 | 61.22 ± 0.12 | 61.78 ± 0.22 | 63.55 ± 0.20 |
| Algorithm | CIFAR-10 Path 2 | CIFAR-10 Path 4 | CIFAR-10 Path 6 | CIFAR-100 Path 10 | CIFAR-100 Path 20 | CIFAR-100 Path 30 |
| --- | --- | --- | --- | --- | --- | --- |
| FedAvg | 69.01 ± 0.96 | 75.80 ± 1.01 | 76.76 ± 1.11 | 52.12 ± 0.70 | 55.70 ± 0.63 | 57.14 ± 0.26 |
| FedSAM | 69.33 ± 1.32 | 75.78 ± 1.04 | 77.55 ± 0.68 | 52.46 ± 0.69 | 55.75 ± 0.55 | 57.60 ± 0.36 |
| SCAFFOLD | 64.28 ± 2.12 | 78.08 ± 0.39 | 80.35 ± 0.19 | 54.17 ± 0.51 | 58.32 ± 0.67 | 60.50 ± 0.35 |
| D-PSGD | 59.53 ± 0.40 | 63.80 ± 0.22 | 64.86 ± 0.48 | 52.66 ± 0.58 | 55.79 ± 0.11 | 56.82 ± 0.35 |
| DFedAvg | 74.73 ± 0.26 | 77.50 ± 0.25 | 79.00 ± 0.25 | 54.72 ± 0.19 | 57.57 ± 0.11 | 58.30 ± 0.24 |
| DFedAvgM | 75.10 ± 0.27 | 79.50 ± 0.33 | 81.07 ± 0.27 | 48.19 ± 0.45 | 53.60 ± 0.64 | 53.95 ± 0.85 |
| DFedSAM | 75.08 ± 0.11 | 79.85 ± 0.09 | 81.36 ± 0.11 | 53.86 ± 0.17 | 57.58 ± 0.22 | 58.80 ± 0.21 |
| OledFL-SGD | 78.20 ± 0.24 | 81.54 ± 0.25 | 83.10 ± 0.30 | 55.03 ± 0.11 | 58.87 ± 0.13 | 59.75 ± 0.10 |
| OledFL-SAM | 79.58 ± 0.18 | 83.45 ± 0.21 | 84.90 ± 0.18 | 56.68 ± 0.16 | 60.22 ± 0.24 | 61.57 ± 0.21 |
IV-D Generalization Analysis
In this section, we primarily present a generalization analysis of our OledFL method under two gradient properties (bounded variance and Lipschitz continuity). We utilize uniform stability analysis, which is widely adopted in previous literature, to analyze the generalization error of OledFL. Next, we introduce the definition of uniform stability and then demonstrate the effectiveness of OledFL. All detailed proofs are available in Appendix B-B.
In the DFL framework, we suppose there are $m$ clients participating in the training process, with local datasets collected as a set $\mathcal{C} = \{\mathcal{S}_1, \dots, \mathcal{S}_m\}$. Each client $i$ has a local dataset $\mathcal{S}_i$ with $n$ data samples in total, drawn from a specific unknown distribution $\mathcal{D}_i$. Now we define a re-sampled dataset $\widetilde{\mathcal{S}}_{i^*}$ which only differs from the dataset $\mathcal{S}_{i^*}$ in the $j^*$-th data sample. We replace $\mathcal{S}_{i^*}$ with $\widetilde{\mathcal{S}}_{i^*}$ and keep the other local datasets, which composes a new set $\widetilde{\mathcal{C}}$. $\widetilde{\mathcal{C}}$ only differs from $\mathcal{C}$ at the $j^*$-th data sample on the $i^*$-th client. Then, based on these two sets, OledFL generates two solutions, $\bar{x}^T$ and $\widetilde{x}^T$ respectively, after $T$ rounds. By bounding the difference between these two models, we can obtain the stability and generalization efficiency.
Definition 2

(Uniform Stability). A randomized algorithm $\mathcal{A}$ is $\epsilon$-uniformly stable if, for any two datasets $\mathcal{C}$ and $\widetilde{\mathcal{C}}$ that differ in at most one data sample, the outputs satisfy $\sup_{\xi} \mathbb{E}\big[F(\mathcal{A}(\mathcal{C}); \xi) - F(\mathcal{A}(\widetilde{\mathcal{C}}); \xi)\big] \leq \epsilon$; an $\epsilon$-uniformly stable algorithm has its generalization error bounded by $\epsilon$ in expectation.
Theorem 2
Under Assumptions 2, 3, and 4, let all conditions in the optimization process be satisfied, let the learning rate be selected as stated in Appendix B-B with a constant factor, let $t_0$ be the specific round at which the differing data sample is first selected, and let the relevant quantities be upper bounded; then, for an arbitrary data sample following the joint distribution, we have:
where $n$ denotes the amount of data owned by each client. Furthermore, to minimize the stability error, we can select the proper observation point $t_0$; we then have:
Remark 3
We provide further discussion on the impact of $\phi$ on the convergence rate and generalization performance of the algorithm. Firstly, from the negative coefficient of $\phi$ in the last term of Theorem 1, it can be inferred that a larger $\phi$ will reduce the optimization error, thereby accelerating the convergence rate. As depicted in Figure 8 (a), as $\phi$ increases, the convergence curve becomes steeper, demonstrating a faster convergence speed. Additionally, the expression of the generalization error derived from Theorem 2 indicates that as $\phi$ increases, the corresponding terms decrease. This implies that with an increase in $\phi$, the generalization error of the algorithm will decrease, thereby enhancing its generalization performance. It is evident from Figure 8 (a) that as $\phi$ increases, the maximum value of the convergence curve (indicating generalization performance) also increases. In summary, a larger $\phi$ simultaneously enhances the convergence speed and generalization of the algorithm. Furthermore, from Theorem 2, it can be observed that better topological connectivity (a smaller $\lambda$) leads to a smaller generalization error, which is validated in Section V-C (see Figure 6).
Remark 4
Theorem 2 establishes the generalization error of OledFL. When $\phi = 0$, it reduces to the vanilla DFedAvg [15], and the resulting bound fills the gap in the generalization analysis of DFedAvg [15]. In the generalization error, we mainly focus on the terms involving the total number of data samples, the training length $T$, and the number of local epochs. Compared with the generalization bound of D-SGD proposed by [43], the local epochs in DFedAvg can enhance its generalization, which is consistent with the observation in Figures 2 & 3.
V Experiment
In this section, we conduct extensive experiments to verify the effectiveness of the proposed OledFL.
Communication rounds required to reach the target accuracy, with the speedup relative to DFedAvg in parentheses:

| Methods | Dir 0.3 Acc@75 | Dir 0.3 Acc@77 | Dir 0.3 Acc@80 | Dir 0.6 Acc@75 | Dir 0.6 Acc@77 | Dir 0.6 Acc@80 | IID Acc@78 | IID Acc@80 | IID Acc@82 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FedAvg | 141 (1.3×) | 244 (1.7×) | >500 | 111 (1.4×) | 166 (1.7×) | 485 (1.0×) | 150 (1.2×) | 243 (2.0×) | >500 |
| FedSAM | 141 (1.3×) | 202 (2.1×) | 402 (1.2×) | 121 (1.3×) | 165 (1.7×) | 296 (1.7×) | 142 (1.2×) | 199 (2.4×) | 363 (0.8×) |
| SCAFFOLD | 264 (0.7×) | 356 (1.2×) | >500 | 202 (0.8×) | 262 (1.1×) | 470 (1.1×) | 180 (1.0×) | 273 (1.8×) | >500 |
| D-PSGD | >500 | >500 | >500 | >500 | >500 | >500 | >500 | >500 | >500 |
| DFedAvg | 179 (1.0×) | 419 (1.0×) | >500 | 152 (1.0×) | 283 (1.0×) | >500 | 176 (1.0×) | 479 (1.0×) | >500 |
| DFedAvgM | 93 (1.9×) | 141 (3.0×) | >500 | 64 (2.4×) | 99 (2.9×) | 305 (1.6×) | 59 (3.0×) | 117 (4.1×) | 303 (1.7×) |
| DFedSAM | 187 (1.0×) | 265 (1.6×) | >500 | 155 (1.0×) | 203 (1.4×) | 414 (1.2×) | 143 (1.2×) | 212 (2.3×) | 452 (1.1×) |
| OledFL-SGD | 54 (3.3×) | 77 (5.4×) | 146 (3.4×) | 41 (3.7×) | 58 (4.9×) | 110 (4.5×) | 37 (4.8×) | 59 (8.1×) | 91 (5.5×) |
| OledFL-SAM | 57 (3.1×) | 68 (6.2×) | 104 (4.8×) | 45 (2.9×) | 53 (5.3×) | 82 (6.1×) | 48 (3.7×) | 63 (7.6×) | 90 (5.6×) |
| Methods | Path 2 Acc@55 | Path 2 Acc@65 | Path 2 Acc@70 | Path 4 Acc@65 | Path 4 Acc@70 | Path 4 Acc@75 | Path 6 Acc@70 | Path 6 Acc@75 | Path 6 Acc@80 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FedAvg | 137 (0.5×) | 283 (0.5×) | >500 | 113 (0.5×) | 180 (0.5×) | 327 (0.6×) | 129 (0.5×) | 208 (0.5×) | >500 |
| FedSAM | 156 (0.4×) | 313 (0.4×) | 453 (0.5×) | 119 (0.5×) | 172 (0.5×) | 333 (0.6×) | 132 (0.4×) | 225 (0.5×) | >500 |
| SCAFFOLD | 183 (0.3×) | 439 (0.3×) | >500 | 122 (0.5×) | 172 (0.5×) | 287 (0.7×) | 114 (0.5×) | 182 (0.6×) | 439 (1.1×) |
| D-PSGD | 340 (0.2×) | >500 | >500 | >500 | >500 | >500 | >500 | >500 | >500 |
| DFedAvg | 62 (1.0×) | 139 (1.0×) | 223 (1.0×) | 59 (1.0×) | 92 (1.0×) | 187 (1.0×) | 59 (1.0×) | 113 (1.0×) | >500 |
| DFedAvgM | 126 (0.5×) | 179 (0.8×) | 244 (0.9×) | 46 (1.3×) | 64 (1.4×) | 119 (1.6×) | 32 (1.8×) | 65 (1.7×) | 257 (1.9×) |
| DFedSAM | 75 (0.8×) | 157 (0.9×) | 248 (0.9×) | 75 (0.8×) | 111 (0.8×) | 187 (1.0×) | 84 (0.7×) | 128 (0.9×) | 328 (1.5×) |
| OledFL-SGD | 34 (1.8×) | 66 (2.1×) | 94 (2.4×) | 25 (2.4×) | 36 (2.6×) | 57 (3.3×) | 24 (2.5×) | 37 (3.1×) | 89 (5.6×) |
| OledFL-SAM | 44 (1.4×) | 88 (1.6×) | 117 (1.9×) | 30 (2.0×) | 41 (2.2×) | 63 (3.0×) | 31 (1.9×) | 43 (2.6×) | 83 (6.0×) |
V-A Experiment Setup
Dataset. We evaluate the proposed OledFL on the CIFAR-10&100 datasets [49] in both IID and non-IID settings. To simulate non-IID data distribution among federated clients, we utilize the Dirichlet [50] and Pathological distributions [51]. Specifically, in the Dirichlet distribution, the local data of each client is partitioned by sampling label ratios from the Dirichlet distribution Dir($\alpha$). A smaller value of $\alpha$ indicates a higher degree of non-IID. In our experiments, we set $\alpha = 0.3$ and $\alpha = 0.6$ to represent different levels of non-IID. In the Pathological distribution, the local data of each client is partitioned by sampling label ratios from the Pathological distribution Path($c$), where the value of $c$ represents the number of classes owned by each client. For example, in the CIFAR-10 dataset, we set $c = 2$ to indicate that each client possesses only 2 randomly selected classes out of the 10 available. In our experimental setup, we respectively set $c = 2$, $4$, and $6$ for the CIFAR-10 dataset, and $c = 10$, $20$, and $30$ for the CIFAR-100 dataset.
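A common way to realize the Dirichlet partition described above (a sketch consistent with the description; the seed and helper names are assumptions) is to draw per-class client proportions from Dir($\alpha$) and split each class's sample indices accordingly:

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Partition sample indices among clients via Dir(alpha) label ratios.

    Smaller alpha -> more skewed (more non-IID) per-client label mixes.
    """
    rng = np.random.default_rng(seed)
    num_classes = labels.max() + 1
    client_idx = [[] for _ in range(num_clients)]
    for c in range(num_classes):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(num_clients))  # class split ratios
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for cid, part in enumerate(np.split(idx, cuts)):
            client_idx[cid].extend(part.tolist())
    return client_idx
```

Every sample is assigned to exactly one client, so the result is a true partition; with $\alpha = 0.3$ most clients end up dominated by a few classes.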
Baselines. The baselines used for comparison include several state-of-the-art (SOTA) methods in both the CFL and DFL settings. Specifically, the centralized baselines include FedAvg [1], FedSAM [52, 17], and SCAFFOLD [18] . In the decentralized setting, D-PSGD [13], DFedAvg, and DFedAvgM [15], along with DFedSAM [16], are used for comparison. Note that both our baseline and proposed algorithms are designed to operate synchronously.
Implementation Details. The total number of clients is set to 100, with 10% of the clients participating in the communication, creating a random bidirectional topology. For decentralized methods, all clients perform the local iteration step, while for centralized methods, only the participating clients perform the local update [16]. The local learning rate is initialized to 0.1 with a decay rate of 0.998 per communication round for all experiments. For SAM-based algorithms, such as DFedSAM and OledFL-SAM, we set the perturbation weight following [16]. As for the Ole parameter $\phi$, we use separately tuned values for CIFAR-10 and CIFAR-100 in the random topology. The maximum number of communication rounds is set to 500 for all experiments on CIFAR-10&100. Additionally, all ablation studies are conducted on the CIFAR-10 dataset with a data partition method of Dir 0.3 and 500 communication rounds.
Communication Configurations. To ensure a fair comparison between decentralized and centralized approaches, we implement a dynamic, time-varying connection topology for the DFL methods. This approach ensures that the number of connections in each round does not exceed the number of connections of the centralized server, thus matching the communication volume between the decentralized and centralized methods as in [25]. In addition, we regulate the number of neighbors for each client in DFL using the client participation rate: each client randomly selects 10 neighbors during each communication round. Other communication topologies, such as Grid, are generated according to the corresponding rules.
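One plausible way to build such a random symmetric round topology (an illustrative assumption; the paper does not specify its weighting rule) is to sample neighbors per client, symmetrize the links, and assign Metropolis-Hastings weights, which yields a symmetric, doubly stochastic mixing matrix satisfying Definition 1:

```python
import numpy as np

def random_topology_mixing(m, num_neighbors, seed=0):
    """Build a symmetric mixing matrix for one random, time-varying round."""
    rng = np.random.default_rng(seed)
    A = np.zeros((m, m), dtype=bool)
    for i in range(m):
        peers = rng.choice([j for j in range(m) if j != i],
                           size=num_neighbors, replace=False)
        A[i, peers] = True
    A |= A.T                                   # symmetrize: undirected links
    deg = A.sum(axis=1)
    W = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            if A[i, j]:                        # Metropolis-Hastings weights
                W[i, j] = 1.0 / (max(deg[i], deg[j]) + 1)
    W[np.diag_indices(m)] = 1.0 - W.sum(axis=1)
    return W
```

The Metropolis rule guarantees nonnegative diagonal entries and, by symmetry, column sums of one, so the matrix mixes mass without a coordinating server.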
V-B Performance Evaluation
Figures 2 and 3 display the comparison between the baseline methods and OledFL on the CIFAR10&100 datasets under Dirichlet and Pathological distributions. From the figures, it is evident that OledFL converges rapidly within fewer communication rounds, and OledFL-SAM also surpasses SCAFFOLD in terms of generalization performance. It is worth noting that, under the Pathological data distribution, CFL methods exhibit instability during training, whereas DFL methods provide a more stable training process.
Performance Analysis. In Table II, we conduct a series of experiments to compare the performance of our method with baseline methods. The results show that under various non-IID settings, our method consistently outperforms other methods, for example, on the CIFAR-10 dataset, our method outperforms the previous SOTA method DFedSAM by at least 4.5% in both Dirichlet and Pathological distributions. Even in the IID case, our method also outperforms DFedSAM by at least 3.5%. Similarly, these results are also observed on the more complex CIFAR-100 dataset, where compared to DFedSAM, our method provides at least a 3% improvement under Dirichlet distribution and at least a 2.6% improvement under Pathological distribution.
Impact of non-IID levels (α). The robustness of OledFL in various non-IID settings is evident from Figures 2 & 3 and Table II. A smaller value of α indicates a higher level of non-IID-ness, which makes optimizing the consensus problem more challenging. However, our algorithm consistently outperforms all baselines across different levels of non-IID. Furthermore, methods based on the SAM optimizer consistently achieve higher accuracy across these levels. For instance, on the CIFAR-10 dataset under the Dirichlet distribution, the performance of our algorithm OledFL-SAM decreases by only 1.3% from IID to Dir 0.3, which significantly outperforms DFedSAM (2.76%), DFedAvgM (3.42%), DFedAvg (2.72%), D-PSGD (3.17%), and FedAvg (3.06%). This demonstrates the robustness of our method to heterogeneous data.

Impact of Ole Initialization. In Table II and Figures 2 & 3, we observe a significant improvement in generalization and convergence speed due to Ole initialization. Taking CIFAR-10 under Dirichlet 0.3 as an example, the Ole initialization helps the DFedAvg algorithm achieve an improvement of around 4% in accuracy, and the DFedSAM algorithm an improvement of around 5%. In terms of convergence speed, it helps both DFedAvg and DFedSAM achieve at least a 3× speedup, and in some cases up to an 8× speedup, which coincides with our theoretical analysis and demonstrates the efficacy of our proposed OledFL (see Remark 3).
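Conceptually, the Ole-enhanced initialization can be pictured as extrapolating each client's round-start point away from its previous state. The sketch below is purely illustrative under our own assumptions; the names `x_agg`, `x_prev`, and `phi`, as well as the exact update form, are not taken from the paper (see Section III for the actual rule):

```python
import numpy as np

def ole_init(x_agg, x_prev, phi=0.99):
    """Illustrative opposite-lookahead-style initialization (assumed form):
    push the aggregated model x_agg further along the direction it moved
    away from the previous state x_prev, scaled by the Ole coefficient phi."""
    return x_agg + phi * (x_agg - x_prev)

# Toy example: the aggregated model moved from 0.0 to 1.0; with phi = 0.5
# the next round would start at 1.5 instead of 1.0.
start = ole_init(np.array([1.0]), np.array([0.0]), phi=0.5)
```

Under this picture, a larger coefficient pushes the initialization further from the previous state, consistent with the ablation behavior reported in Section V-D.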




Explanation of the Effectiveness of Ole. To explore the effectiveness of Ole, we plot the loss landscapes and corresponding contour maps of OledFL-SAM and DFedSAM on the CIFAR10 dataset under Dirichlet 0.3, as shown in Figure 5. It is evident that OledFL-SAM can find parameters with lower loss values. Furthermore, from Figure 4, it can be observed that the loss landscape of OledFL-SAM is flatter. According to the research findings of Keskar et al. [53] and Dziugaite et al. [54], flatter loss surfaces lead to better generalization performance. This observation can explain why OledFL-SAM may exhibit superior generalization performance. This is also consistent with the theoretical analysis results in Remark 3.
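The SAM-based variants (DFedSAM, OledFL-SAM) seek such flat minima by evaluating the gradient at an adversarially perturbed point. Below is a minimal NumPy sketch of one generic SAM step on a toy quadratic; it illustrates the mechanism only and is not the paper's implementation:

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One generic sharpness-aware minimization (SAM) step on parameters w:
    ascend to the (approximate) worst-case point within an L2 ball of
    radius rho, then descend using the gradient evaluated there."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # normalized ascent direction
    g_sharp = grad_fn(w + eps)                   # gradient at perturbed point
    return w - lr * g_sharp

# Toy quadratic f(w) = 0.5 * ||w||^2, whose gradient is simply w.
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, lambda v: v)
```

Because the descent gradient is taken at the worst-case neighbor, sharp regions of the loss surface incur a larger effective gradient, biasing training toward flatter minima.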
V-C Topology-aware Performance
Below, we explore the effects of communication topologies on different DFL methods on the CIFAR-10 dataset under the Dirichlet distribution.
Algorithm | Ring | Grid | Exp | Full |
D-PSGD | 51.24 | 52.06 | 55.58 | 65.46 |
DFedAvg | 63.30 | 74.72 | 77.11 | 78.52 |
DFedAvgM | 65.49 | 76.89 | 78.01 | 80.14 |
DFedSAM | 65.12 | 77.86 | 78.88 | 81.04 |
OledFL-SGD | 71.78 | 81.05 | 78.73 | 81.52 |
OledFL-SAM | 74.48 | 82.95 | 79.99 | 83.13 |

Topology | ||
Fully connected | 0 | 0 |
Exponential | ||
Grid | ||
Ring |
Impact of Sparse Connectivity. Each client in the network only communicates with its predetermined neighbors, and the specific communication pattern is determined by the corresponding topology. In Table V, the degree of sparsity follows the order: Ring > Grid > Exponential > Fully connected [16, 45, 55]. From Figure 6 and Table IV, a general trend can be observed: as the sparsity of the connectivity decreases, the test accuracy of our proposed algorithm increases. This can be attributed to the fact that a well-designed communication topology enables clients to obtain better initial optimization points through communication, leading to improved results. Furthermore, OledFL consistently achieves higher test accuracy than the other DFL baselines across all topologies; for example, Ole enhances DFedSAM's performance by over 9% on the Ring topology. Additionally, Figure 6 shows that the convergence rate and generalization performance of OledFL generally improve with better topological connectivity, confirming the conclusions of Remarks 1 & 3.
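The connectivity ordering above can be checked numerically: for a doubly stochastic mixing matrix, the second-largest eigenvalue magnitude governs consensus speed, and denser topologies yield a smaller value. A small sketch (the uniform ring weights are our simplifying assumption):

```python
import numpy as np

def mixing_matrix_ring(n):
    """Uniform-weight ring: each client averages with itself and its two
    neighbors, giving a doubly stochastic mixing matrix."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3
    return W

def second_largest_eig(W):
    """Second-largest eigenvalue magnitude of a doubly stochastic matrix;
    smaller values mean tighter connectivity and faster consensus."""
    vals = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
    return vals[1].real

n = 16
ring_lam = second_largest_eig(mixing_matrix_ring(n))
full_lam = second_largest_eig(np.full((n, n), 1 / n))  # fully connected
```

For the fully connected topology the value is exactly 0, while the ring stays close to 1, matching the accuracy ordering observed in Table IV.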

Relationship between the Ole Coefficient and Connectivity. In Section IV, we concluded that tighter topological connectivity (i.e., a smaller spectral quantity of the mixing matrix) corresponds to a smaller optimal Ole coefficient. To validate this theoretical result, we run OledFL over the candidate parameter set on each of the four topologies presented in Table V and select the coefficient yielding the best performance. The optimal values under the different topologies are presented in Figure 7. Comparing the relationships in Figure 7 with the corresponding values in Table V, the experimental results validate our conclusion in Remark 2 of Theorem 1.
V-D Ablation Study
We verify the influence of each component and hyperparameter in OledFL. All the ablation studies are conducted with the “Random” topology, which is consistent with the communication configuration used in Section V-B.

Impact of Ole Parameter . The convergence curves under different Ole coefficients after 500 communication rounds are displayed in Figure 8 (a) on the CIFAR-10 dataset under the Dirichlet 0.3 distribution. A larger value implies a greater deviation of each client’s initialization point from the corresponding optimal point. We verify the performance curves of OledFL-SAM under the parameter set . It can be seen that under the random bidirectional topology, the optimal is 0.99. Furthermore, it is clear that as increases, the algorithm’s convergence speed and generalization performance both improve. When , the algorithm diverges, which is consistent with the upper bound of obtained in the discussion of its relationship with CA in Section III-C.
Impact of Client Number. In Figure 8 (c), we present the performance with different numbers of participating clients. We observe that OledFL achieves the best performance when the number of clients is 100 or fewer, while performance decreases when the client number is relatively large. This can be attributed to the fact that a smaller number of clients leaves each client with a larger number of local training samples, leading to improved training performance.
Impact of Perturbation Radius. The perturbation radius is an additional factor influencing the convergence of OledFL: a larger perturbation radius implies a more ambiguous direction of parameter descent, thereby affecting the convergence speed. We evaluate OledFL's generalization performance using different values of the radius from the candidate set. Figure 8 (d) illustrates the highest accuracy achieved at the optimal radius. Additionally, we observe that a larger radius leads to slower convergence, which is consistent with our previous analysis.
Impact of Local Epochs. The number of local epochs represents the number of optimization steps performed by each client per round. A larger value is more likely to induce the "client drift" phenomenon, increasing the inconsistency between clients and thereby degrading the algorithm's performance. We assess the generalization performance of OledFL using different numbers of local epochs from the candidate set. Figure 8 (b) shows the highest accuracy achieved at the optimal setting. Additionally, we observe in Figure 8 (b) that a larger number of local epochs leads to faster convergence, consistent with our theoretical analysis.
VI Conclusion
In this paper, we propose a plug-in method named OledFL, which integrates existing DFL optimizers to enhance the consistency among clients and significantly improve convergence speed and generalization performance at almost negligible computational cost. Theoretically, we are the first to conduct a joint analysis of algorithm convergence and generalization in the field of DFL, and we demonstrate the effectiveness of OledFL in reducing optimization error and generalization error. Moreover, a comprehensive explanation of the mechanism of action of Ole has been provided, encompassing intuitive, experimental, and theoretical perspectives. Furthermore, certain conclusions derived from theoretical deductions have been validated experimentally, thus offering additional insights into the Ole plugin methodology. Finally, extensive experiments on the CIFAR10&100 datasets under Dirichlet and Pathological distributions demonstrate that OledFL can significantly reduce the performance gap between CFL and DFL, and even surpass CFL optimizers such as FedSAM and SCAFFOLD, which is crucial for further advancement in the field of DFL.
References
- [1] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-efficient learning of deep networks from decentralized data,” in Artificial intelligence and statistics. PMLR, 2017, pp. 1273–1282.
- [2] B. Gu, A. Xu, Z. Huo, C. Deng, and H. Huang, “Privacy-preserving asynchronous vertical federated learning algorithms for multiparty collaborative learning.” IEEE Transactions on Neural Networks and Learning Systems, p. 6103–6115, Nov 2022. [Online]. Available: http://dx.doi.org/10.1109/tnnls.2021.3072238
- [3] P. Zhou, K. Wang, L. Guo, S. Gong, and B. Zheng, “A privacy-preserving distributed contextual federated online learning framework with big data support in social recommender systems,” IEEE Transactions on Knowledge and Data Engineering, p. 1–1, Jan 2019. [Online]. Available: http://dx.doi.org/10.1109/tkde.2019.2936565
- [4] S. Zhou and G. Y. Li, “Federated learning via inexact admm,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023.
- [5] Y. Sun, L. Shen, T. Huang, L. Ding, and D. Tao, “Fedspeed: Larger local interval, less communication round, and higher generalization accuracy,” arXiv preprint arXiv:2302.10429, 2023.
- [6] T. Li, A. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith, “Federated optimization in heterogeneous networks,” arXiv preprint, Dec 2018.
- [7] D. A. E. Acar, Y. Zhao, R. M. Navarro, M. Mattina, P. N. Whatmough, and V. Saligrama, “Federated learning based on dynamic regularization,” arXiv preprint arXiv:2111.04263, 2021.
- [8] X. Zhang, M. Hong, S. Dhople, W. Yin, and Y. Liu, “Fedpd: A federated learning framework with adaptivity to non-iid data,” IEEE Transactions on Signal Processing, vol. 69, pp. 6055–6070, 2021.
- [9] R. Dai, X. Yang, Y. Sun, L. Shen, X. Tian, M. Wang, and Y. Zhang, “Fedgamma: Federated learning with global sharpness-aware minimization,” IEEE Transactions on Neural Networks and Learning Systems, 2023.
- [10] M. Chen, Y. Xu, H. Xu, and L. Huang, “Enhancing decentralized federated learning for non-iid data on heterogeneous devices,” in 2023 IEEE 39th International Conference on Data Engineering (ICDE). IEEE, 2023, pp. 2289–2302.
- [11] E. Gabrielli, G. Pica, and G. Tolomei, “A survey on decentralized federated learning,” arXiv preprint arXiv:2308.04604, 2023.
- [12] E. Cyffers and A. Bellet, “Privacy amplification by decentralization,” in International Conference on Artificial Intelligence and Statistics. PMLR, 2022, pp. 5334–5353.
- [13] X. Lian, C. Zhang, H. Zhang, C.-J. Hsieh, W. Zhang, and J. Liu, “Can decentralized algorithms outperform centralized algorithms? a case study for decentralized parallel stochastic gradient descent,” in Advances in Neural Information Processing Systems, 2017, pp. 5330–5340.
- [14] G. Neglia, G. Calbi, D. Towsley, and G. Vardoyan, “The role of network topology for distributed machine learning,” in IEEE INFOCOM 2019-IEEE Conference on Computer Communications. IEEE, 2019, pp. 2350–2358.
- [15] T. Sun, D. Li, and B. Wang, “Decentralized federated averaging,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
- [16] Y. Shi, L. Shen, K. Wei, Y. Sun, B. Yuan, X. Wang, and D. Tao, “Improving the model consistency of decentralized federated learning,” arXiv preprint arXiv:2302.04083, 2023.
- [17] Z. Qu, X. Li, R. Duan, Y. Liu, B. Tang, and Z. Lu, “Generalized federated learning via sharpness aware minimization,” in International Conference on Machine Learning. PMLR, 2022, pp. 18 250–18 280.
- [18] S. Karimireddy, S. Kale, M. Mohri, S. Reddi, S. Stich, and A. Suresh, “Scaffold: Stochastic controlled averaging for federated learning,” in International Conference on Machine Learning, Jul 2020.
- [19] T. Sun, D. Li, and B. Wang, “Stability and generalization of decentralized stochastic gradient descent,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 11, 2021, pp. 9756–9764.
- [20] Q. Yang, Y. Liu, Y. Cheng, Y. Kang, T. Chen, and H. Yu, Federated Learning. Morgan & Claypool Publishers, 2019.
- [21] A. Lalitha, S. Shekhar, T. Javidi, and F. Koushanfar, “Fully decentralized federated learning,” in Third workshop on Bayesian Deep Learning (NeurIPS), 2018.
- [22] A. Lalitha, O. C. Kilinc, T. Javidi, and F. Koushanfar, “Peer-to-peer federated learning on graphs,” arXiv preprint arXiv:1901.11173, 2019.
- [23] E. T. M. Beltrán, M. Q. Pérez, P. M. S. Sánchez, S. L. Bernal, G. Bovet, M. G. Pérez, G. M. Pérez, and A. H. Celdrán, “Decentralized federated learning: Fundamentals, state-of-the-art, frameworks, trends, and challenges,” arXiv preprint arXiv:2211.08413, 2022.
- [24] P. Kairouz, H. B. McMahan, B. Avent, A. Bellet, M. Bennis, A. N. Bhagoji, K. Bonawitz, Z. Charles, G. Cormode, R. Cummings et al., “Advances and open problems in federated learning,” Foundations and trends® in machine learning, vol. 14, no. 1–2, pp. 1–210, 2021.
- [25] R. Dai, L. Shen, F. He, X. Tian, and D. Tao, “Dispfl: Towards communication-efficient personalized federated learning via decentralized sparse training,” arXiv preprint arXiv:2206.00187, 2022.
- [26] L. Yuan, L. Sun, P. S. Yu, and Z. Wang, “Decentralized federated learning: A survey and perspective,” arXiv preprint arXiv:2306.01603, 2023.
- [27] J. Xu, S. Wang, L. Wang, and A. C.-C. Yao, “Fedcm: Federated learning with client-level momentum,” arXiv preprint arXiv:2106.10874, 2021.
- [28] S. P. Karimireddy, M. Jaggi, S. Kale, M. Mohri, S. J. Reddi, S. U. Stich, and A. T. Suresh, “Mime: Mimicking centralized stochastic algorithms in federated learning,” arXiv preprint arXiv:2008.03606, 2020.
- [29] P. Zhou, H. Yan, X. Yuan, J. Feng, and S. Yan, “Towards understanding why lookahead generalizes better than sgd and beyond,” in Neural Information Processing Systems, Dec 2021.
- [30] H. Lin, J. Mairal, and Z. Harchaoui, “A universal catalyst for first-order optimization,” Advances in neural information processing systems, vol. 28, 2015.
- [31] M. Schmidt, N. Le Roux, and F. Bach, “Minimizing finite sums with the stochastic average gradient,” Mathematical Programming, vol. 162, pp. 83–112, 2017.
- [32] A. Defazio, F. Bach, and S. Lacoste-Julien, “Saga: A fast incremental gradient method with support for non-strongly convex composite objectives,” Advances in neural information processing systems, vol. 27, 2014.
- [33] L. Xiao and T. Zhang, “A proximal stochastic gradient method with progressive variance reduction,” SIAM Journal on Optimization, vol. 24, no. 4, pp. 2057–2075, 2014.
- [34] E. Trimbach and A. Rogozin, “An acceleration of decentralized sgd under general assumptions with low stochastic noise,” in International Conference on Mathematical Optimization Theory and Operations Research. Springer, 2021, pp. 117–128.
- [35] A. Koloskova, N. Loizou, S. Boreiri, M. Jaggi, and S. Stich, “A unified theory of decentralized sgd with changing topology and local updates,” in International Conference on Machine Learning. PMLR, 2020, pp. 5381–5393.
- [36] M. Zhang, J. Lucas, J. Ba, and G. E. Hinton, “Lookahead optimizer: k steps forward, 1 step back,” Advances in neural information processing systems, vol. 32, 2019.
- [37] K. Scaman, F. Bach, S. Bubeck, Y. Lee, and L. Massoulié, “Optimal algorithms for smooth and strongly convex distributed optimization in networks,” HAL preprint, Feb 2017.
- [38] Y. Lu and C. Sa, “Optimal complexity in decentralized training,” arXiv preprint, Jun 2020.
- [39] K. Yuan, X. Huang, Y. Chen, X. Zhang, Y. Zhang, and P. Pan, “Revisiting optimal convergence rate for smooth and non-convex stochastic decentralized optimization,” Oct 2022.
- [40] K. Huang, S. Pu, and A. Nedić, “An accelerated distributed stochastic gradient method with momentum,” arXiv preprint arXiv:2402.09714, 2024.
- [41] Z. Song, L. Shi, S. Pu, and M. Yan, “Optimal gradient tracking for decentralized optimization,” arXiv preprint, Oct 2021.
- [42] Q. Li, L. Shen, G. Li, Q. Yin, and D. Tao, “Dfedadmm: Dual constraints controlled model inconsistency for decentralized federated learning,” arXiv preprint arXiv:2308.08290, 2023.
- [43] T. Sun, D. Li, and B. Wang, “Stability and generalization of the decentralized stochastic gradient descent,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 11, p. 9756–9764, Sep 2022. [Online]. Available: http://dx.doi.org/10.1609/aaai.v35i11.17173
- [44] M. Zhu, L. Shen, B. Du, and D. Tao, “Stability and generalization of the decentralized stochastic gradient descent ascent algorithm,” Oct 2023.
- [45] T. Zhu, F. He, L. Zhang, Z. Niu, M. Song, and D. Tao, “Topology-aware generalization of decentralized sgd,” in International Conference on Machine Learning. PMLR, 2022, pp. 27 479–27 503.
- [46] Y. Sun, L. Shen, and D. Tao, “Understanding how consistency works in federated learning via stage-wise relaxed initialization,” arXiv preprint arXiv:2306.05706, 2023.
- [47] M. Hardt, B. Recht, and Y. Singer, “Train faster, generalize better: stability of stochastic gradient descent,” in International Conference on Machine Learning, Jun 2016.
- [48] Y. Zhang, W. Zhang, S. Bald, V. Pingali, C. Chen, and M. Goswami, “Stability of sgd: Tightness analysis and improved bounds,” arXiv preprint, Feb 2021.
- [49] A. Krizhevsky, G. Hinton et al., “Learning multiple layers of features from tiny images,” 2009.
- [50] H.-H. Hsu, H. Qi, and M. Brown, “Measuring the effects of non-identical data distribution for federated visual classification,” arXiv preprint, Sep 2019.
- [51] M. Zhang, K. Sapra, S. Fidler, S. Yeung, and J. Alvarez, “Personalized federated learning with first order model optimization,” arXiv preprint, Dec 2020.
- [52] D. Caldarola, B. Caputo, and M. Ciccone, “Improving generalization in federated learning by seeking flat minima,” in European Conference on Computer Vision. Springer, 2022, pp. 654–672.
- [53] N. Keskar, D. Mudigere, J. Nocedal, M. Smelyanskiy, and P. Tang, “On large-batch training for deep learning: Generalization gap and sharp minima,” in International Conference on Learning Representations, Sep 2016.
- [54] G. Dziugaite and D. Roy, “Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data,” in Uncertainty in Artificial Intelligence, Mar 2017.
- [55] B. Ying, K. Yuan, Y. Chen, H. Hu, P. Pan, and W. Yin, “Exponential graph is provably efficient for decentralized deep training,” Oct 2021.
- [56] A. Nedic and A. Ozdaglar, “Distributed subgradient methods for multi-agent optimization,” IEEE Transactions on Automatic Control, vol. 54, no. 1, pp. 48–61, 2009.
- [57] H. Yang, M. Fang, and J. Liu, “Achieving linear speedup with partial worker participation in non-iid federated learning,” arXiv preprint, Jan 2021.
In this part, we provide supplementary materials including the proof of the optimization and generalization error. Here’s the table of contents for the Appendix.
Appendix A Supplementary Material for Experiment
Appendix B Proof of the theoretical analysis.
B-A Proofs for the Optimization Error
In this part, we prove the training error for our proposed method. We assume the objective is smooth with respect to the model parameters. Then we can upper bound the training error in FL. Some useful notations used in the proof are introduced in Table VI, after which we introduce some important lemmas used in the proof.
Notation | Formulation | Description |
| | parameters at the -th iteration in round on client |
| | global parameters in round on client |
| | averaged norm of the local updates in round |
| | norm of the averaged global updates in round |
| | inconsistency / divergence term in round |
| | bias between the initialization state and the optimum |
B-A1 Important Lemmas
Lemma 1
(Bounded local updates) We first bound the local training updates in the local training. Under the Assumptions stated, the averaged norm of the local updates of total clients could be bounded as:
(8) |
Proof 1
measures the norm of the local offset during the local training stage. It could be bounded by two major steps. Firstly, we bound the separated term on the single client at iteration as:
(9) | ||||
where the learning rate is required .
Computing the average of the separated term on client , we have:
(10) | ||||
Unrolling the aggregated term over the iterations, with local interval , we have:
(11) | ||||
Summing the iteration on .
(12) |
This completes the proof.
Lemma 2
(Bounded global updates) Under assumptions stated above, the norm of the global update of clients could be bounded as:
(13) |
Proof 2
(14) | ||||
As is a doubly stochastic matrix, we have . Then we have
(15) | ||||
Because of , we have . Then, we get
(16) | ||||
Since , we have completed the proof.
Lemma 3
[Lemma 4, [13]] For any , the mixing matrix satisfies where and for a matrix , we denote its spectral norm as . Furthermore, and
In [Proposition 1, [56]], the author also proved that for some that depends on the matrix.
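As a numerical sanity check on Lemma 3, the contraction of powers of the mixing matrix toward the averaging matrix P = (1/m)11^T can be verified for a small doubly stochastic matrix; the uniform-weight ring below is our own choice of example:

```python
import numpy as np

# Build a uniform-weight ring mixing matrix (doubly stochastic).
m = 8
W = np.zeros((m, m))
for i in range(m):
    W[i, i] = W[i, (i - 1) % m] = W[i, (i + 1) % m] = 1 / 3

P = np.full((m, m), 1 / m)                          # averaging matrix
lam = np.sort(np.linalg.svd(W, compute_uv=False))[-2]  # 2nd-largest singular value

# Check the contraction ||W^t - P||_2 <= lam^t for a range of powers t.
for t in range(1, 20):
    gap = np.linalg.norm(np.linalg.matrix_power(W, t) - P, 2)
    assert gap <= lam**t + 1e-10
```

For a symmetric mixing matrix the bound holds with equality, which makes the check a tight verification of the stated decay rate.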
Lemma 4
Let be generated by our proposed Algorithm for all and any learning rate , we have following bound:
(17) |
Where .
Proof 3
Following [Lemma D.5, [16]], we denote . With this notation, we have
(18) |
where . The iteration equation (29) can be rewritten as the following expression
(19) |
Obviously, it follows
(20) |
According to Lemma 3, it holds
Multiplying both sides of equation (19) with and using equation (20), we then get
(21) |
where we used initialization . Then, we are led to
(22) | ||||
With Cauchy inequality,
Direct calculation gives us
With Lemma 1, for any :
(23) |
Thus, we get:
where .
The fact that then proves the result.
Lemma 5
(Bounded divergence term) Under the assumptions stated above and , the divergence term can be bounded by the recursion:
Where
Proof 4
The divergence term measures the inconsistency level in the FL framework. According to the local updates, we have the following recursive formula:
(24) |
By taking the squared norm and expectation on both sides, we have:
The second term in the above inequality has been bounded in Lemma 2, and the third and fourth terms have been bounded in Lemma 4. Then we bound the stochastic gradient term. We have:
Taking the average over the clients, we have:
Combining this and the squared norm inequality, we have:
where (a) uses Lemma 1 and the fact that is a very small value, i.e., we have omitted terms of . Let denote the term that is independent of , i.e., . Then we have:
Let where . Let and dividing both sides by , we obtain:
Utilizing the learning rate condition , we obtain . Then, letting where , we add on both sides and get the recursive formulation:
B-A2 Expanding the Smoothness Inequality for the Non-convex Objective
For the non-convex and -smooth function , we firstly expand the smoothness inequality at round as:
(25) | ||||
According to Lemma 1 and lemma 2 to bound the and , we can get the following recursive formula:
Furthermore, with Lemma 4, we can get:
where . Therefore, we have:
And then, (25) can be represented as
(26) | ||||
where we use the fact that is very small [16]. Furthermore, in Lemma 5, . By setting , we can obtain . Next, with Lemma 5, we get:
(27) |
B-A3 Proof of Theorem 1
Theorem 3
Under Assumption 1 - 3, let the learning rate satisfy where , let the relaxation coefficient , and after training rounds, the averaged model parameters generated by our proposed algorithm satisfies:
where is a constant.
Further, by selecting a proper learning rate and letting denote the initialization bias, the averaged model parameters satisfy:
Proof 5
According to the expansion of the smoothness inequality (28), we have:
Taking the accumulation from to , we have:
We select the learning rate and let as the initialization bias, then we have:
This completes the proof.
B-B Proofs for the Generalization Error
In this part, we prove the generalization error for our proposed method. We assume the objective function is smooth and Lipschitz as defined in [47, 29]. We follow the uniform stability framework to upper bound the generalization error in DFL.
We suppose there are clients participating in the training process as a set . Each client has a local dataset with total data sampled from a specific unknown distribution . Now we define a re-sampled dataset which differs from the dataset only in the -th data point. We replace with and keep the other local datasets, which composes a new set that differs from only in the -th data point on the -th client. Then, based on these two sets, our method generates two output models, and respectively, after training rounds. We first introduce some notations used in the proof of the generalization error.
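For reference, the uniform stability notion used here (following Hardt et al. [47]) can be stated as below; the symbols follow the standard formulation and the stability constant name is ours:

```latex
% A randomized algorithm A is \epsilon-uniformly stable if, for all
% neighboring sets \mathcal{C} and \widetilde{\mathcal{C}} differing
% in a single sample,
\sup_{z}\ \mathbb{E}\left[\, f\big(A(\mathcal{C});z\big) - f\big(A(\widetilde{\mathcal{C}});z\big) \,\right] \le \epsilon,
% in which case the expected generalization gap is bounded by \epsilon.
```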
Notation | Formulation | Description |
| | parameters trained with set |
| | parameters trained with set |
| | stability difference at the -th iteration in round |
Then we introduce some important lemmas in our proofs.
B-B1 Important Lemmas
Lemma 6
(Lemma 3.11 in [47]) We follow the definition in [47, 29] to upper bound the uniform stability term after each communication round in the DFL paradigm. Different from their vanilla calculations, DFL considers the finite-sum function on heterogeneous clients. Let the non-negative objective be smooth and Lipschitz. After training rounds on and , our method generates two models and , respectively. For each data point and every , we have:
Proof 6
Let denote the event and , we have:
Before the -th data point on the -th client is sampled, the iterative states are identical on both and . Let be the index of the first different sampling; if , then holds for . Therefore, we have:
We complete the proof.
Lemma 7
(Lemma 1.1 in [29]) Different from their calculations, we prove similar inequalities in the stochastic optimization setting. Under Assumptions 1 and 2, the local updates satisfy on and on . If at the -th iteration of each round we sample the same data point in and , then we have:
Proof 7
In each round , by the triangle inequality and omitting the same data , we have:
The final inequality adopts assumptions of . This completes the proof.
Lemma 8
(Lemma 1.2 in [29]) Different from their calculations, we prove similar inequalities in the stochastic optimization setting. Under Assumptions 1, 2 and 4, the local updates satisfy on and on . If at the -th iteration of each round we sample different data points in and , then we have:
Proof 8
In each round , by the triangle inequality and denoting the differing data points as and , we have:
The final inequality adopts the -Lipschitz. This completes the proof.
Lemma 9
Given the stepsize , we have following bound:
Where .
Proof 9
Following [Lemma 4, [15]], we denote . With this notation, we have
(29) |
where . Following [Lemma 8, [43]], we have:
(30) |
Assuming , it means that . Next, we will bound . According to the equation and Assumption 4, we have:
Next, we need to bound . According to (24), we have:
where (a) uses , and . Then we get
This implies:
According to (30), we have:
(31) |
Letting the term on the right-hand side of (31) be denoted as , we obtain that, when and let , we have
We have completed the proof.
Lemma 10
B-B2 Bounded Uniform Stability
According to Lemma 6, we first bound the recursive stability on within one round. If the sampled data point is the same, we adopt Lemma 7; otherwise, we adopt Lemma 8. Thus we can bound the first term in Lemma 6 as:
Balancing the LHS and RHS, we have the following recursive formulation:
Therefore, in one single communication round, by generally defining learning rate :
The next important relationship is between and . According to the update rule , the difference satisfies:
By taking the expectation on the norm, we have:
Next, we bound R1.
where we use Lemma 9 and Lemma 10 to establish the final inequality; letting the learning rate follow the same selection as in the optimization analysis, we get . Finally, we get
By denoting be the combination of learning rate , we can provide an upper bound of the recursive formulation as:
To balance the constant part, we assume the learning rate decays with the communication round, which indicates , and let be the upper bound; small learning rates ensure that . Then we have the following recursive formulation:
where . Unrolling from to , we have:
According to the preceding selection of the learning rate, , we have:
Combining the above inequalities with Lemma 6, we have:
Setting , we then have