Does Worst-Performing Agent Lead the Pack? Analyzing Agent Dynamics in Unified Distributed SGD
Abstract
Distributed learning is essential to train machine learning algorithms across heterogeneous agents while maintaining data privacy. We conduct an asymptotic analysis of Unified Distributed SGD (UD-SGD), exploring a variety of communication patterns, including decentralized SGD and local SGD within Federated Learning (FL), as well as the increasing communication interval in the FL setting. In this study, we assess how different sampling strategies, such as i.i.d. sampling, shuffling, and Markovian sampling, affect the convergence speed of UD-SGD by considering the impact of agent dynamics on the limiting covariance matrix as described in the Central Limit Theorem (CLT). Our findings not only support existing theories on linear speedup and asymptotic network independence, but also theoretically and empirically show how efficient sampling strategies employed by individual agents contribute to overall convergence in UD-SGD. Simulations reveal that a few agents using highly efficient sampling can achieve or surpass the performance of the majority employing moderately improved strategies, providing new insights beyond traditional analyses focusing on the worst-performing agent.
1 Introduction
Distributed learning deals with the training of models across multiple agents over a communication network in a distributed manner, while addressing the challenges of privacy, scalability, and high-dimensional data [11, 55]. Each agent $i$ holds a private dataset and an agent-specified loss function $F_i(\theta, x)$ that depends on the model parameter $\theta \in \mathbb{R}^d$ and a data point $x$. The goal is then to find a local minimum $\theta^*$ of the objective function $f(\theta) \triangleq \frac{1}{N}\sum_{i=1}^{N} f_i(\theta)$, where agent $i$'s loss function is $f_i(\theta) \triangleq \mathbb{E}_{X\sim\mu_i}[F_i(\theta, X)]$ and $\mu_i$ represents the target distribution of data for agent $i$.¹ Each agent can locally compute the gradient $\nabla F_i(\theta, x)$ w.r.t. $\theta$ for every sampled data point $x$. Due to the distributed nature, the distributions $\mu_i$ are not necessarily identical across agents, so the minimum of each local function $f_i$ can be far away from $\theta^*$. This is particularly relevant in decentralized training data, e.g., Federated Learning (FL) with heterogeneous data across data centers or devices [81, 31].

¹Throughout the paper we do not impose a convexity assumption on $f$. For convex $f$, $\theta^*$ is the global minimum. For non-convex $f$, $\theta^*$ represents the collection of local minima, which is of great interest in neural network training for sufficiently good performance [20, 19]. With an additional condition such as the Polyak-Lojasiewicz inequality, a non-convex $f$ is ensured to have a unique minimum [1, 75, 79].
In this paper, we focus on Unified Distributed SGD (UD-SGD), where each agent updates its model parameter in a two-step process:
(1a)    $\theta^i_{t+\frac{1}{2}} \;=\; \theta^i_t \;-\; \gamma_{t+1}\,\nabla F_i\!\left(\theta^i_t,\, X^i_t\right),$

(1b)    $\theta^i_{t+1} \;=\; \textstyle\sum_{j=1}^{N} W_{ij}(t)\,\theta^j_{t+\frac{1}{2}},$
where $\gamma_{t+1}$ denotes the step size, $X^i_t$ is the data sampled by agent $i$ at time $t$ (i.e., agent dynamics), and $W(t) = [W_{ij}(t)]$ represents the doubly-stochastic communication matrix satisfying $\sum_{j} W_{ij}(t) = 1$ and $\sum_{i} W_{ij}(t) = 1$ for all $i, j$. In the special case of $N = 1$, (1) simplifies to vanilla SGD with $W(t) = 1$ for all $t$. UD-SGD covers a wide range of distributed algorithms, e.g., decentralized SGD (DSGD) [71, 80, 61, 69], distributed SGD with changing topology (DSGD-CT) [24, 43], local SGD (LSGD) in FL [55, 76], and its variant aimed at reducing communication costs (LSGD-RC) [51].
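To make the two-step update (1) concrete, the following is a minimal Python sketch of UD-SGD, assuming a synthetic least-squares-type per-sample loss, i.i.d. sampling as a stand-in for general agent dynamics, and a fixed ring-topology doubly-stochastic matrix; all names and data here are illustrative, not taken from the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, T = 8, 5, 2000                      # agents, dimension, iterations

# Illustrative per-agent data: agent i holds a small least-squares-type problem.
A = rng.normal(size=(N, 20, d))
b = rng.normal(size=(N, 20))

def local_grad(i, theta, x_idx):
    """Per-sample gradient of agent i's loss at data index x_idx."""
    a = A[i, x_idx]
    return (a @ theta - b[i, x_idx]) * a

# Doubly-stochastic communication matrix: here, simple ring averaging.
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i + 1) % N] = 0.25
    W[i, (i - 1) % N] = 0.25

theta = np.zeros((N, d))                  # theta[i] is agent i's parameter
for t in range(1, T + 1):
    gamma = 0.5 / t ** 0.6                # polynomial step size
    x = rng.integers(0, 20, size=N)       # i.i.d. sampling as a placeholder for agent dynamics
    half = np.array([theta[i] - gamma * local_grad(i, theta[i], x[i]) for i in range(N)])  # step (1a)
    theta = W @ half                      # step (1b): communication/averaging

print("consensus spread:", np.max(np.linalg.norm(theta - theta.mean(0), axis=1)))
```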
[Figure 1: Illustration of the scenarios covered by UD-SGD (1): communication patterns among agents (top) and agents' sampling strategies, i.i.d. or Markovian (bottom).]
Versatile Communication Patterns: For visualization, we depict the scenarios of UD-SGD (1) in Figure 1. In DSGD, each agent (node) in the graph communicates with its neighbors after each SGD computation via $W(t)$, which represents the underlying network topology. As a special case, central server-based aggregation, forming a fully connected network, translates into a rank-1 matrix $\frac{1}{N}\mathbf{1}\mathbf{1}^{\top}$. To minimize communication expenses, FL variants allow each agent to perform multiple SGD steps before aggregation [55, 67, 76], resulting in a communication interval during which $W(t)$ equals the identity, with a consistent aggregation pattern applied only at the end of each interval. In particular, i) the rank-1 averaging matrix corresponds to LSGD with full agent participation (LSGD-FP) [76, 42, 51]; ii) a random matrix generated by partial agent participation gives LSGD-PP [55, 18, 74]; iii) a matrix generated by the Metropolis-Hastings algorithm in the decentralized setting covers, e.g., hybrid LSGD (HLSGD) [37, 32] and decentralized FL (DFL) [46, 77, 16]. We defer further discussion of $W(t)$ to Section F.1.
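As an illustration of these communication patterns, the sketch below builds three candidate matrices: the identity (a purely local step), the rank-one full-averaging matrix, and a Metropolis-Hastings matrix for an arbitrary undirected graph. The graph, function names, and weights are hypothetical examples, not the paper's exact constructions.

```python
import numpy as np

N = 6
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (1, 4)]  # example topology

W_local = np.eye(N)                        # local step: no mixing between agents
W_full = np.ones((N, N)) / N               # rank-1 matrix: central-server-style full averaging

def metropolis_hastings(edges, N):
    """Doubly-stochastic matrix for decentralized averaging on an undirected graph."""
    deg = np.zeros(N, dtype=int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    W = np.zeros((N, N))
    for u, v in edges:
        w = 1.0 / (1 + max(deg[u], deg[v]))      # Metropolis-Hastings weight
        W[u, v] = W[v, u] = w
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))     # self-weights make each row sum to one
    return W

W_mh = metropolis_hastings(edges, N)
for name, W in [("local", W_local), ("full", W_full), ("MH", W_mh)]:
    print(name, "rows sum to 1:", np.allclose(W.sum(1), 1), "| cols sum to 1:", np.allclose(W.sum(0), 1))
```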
Markovian vs i.i.d. Sampling: Agents typically employ i.i.d. or Markovian sampling, as illustrated in the bottom brown box of Figure 1. In cases where agents have full access to their data, DSGD with i.i.d. sampling has been extensively studied [60, 43, 61, 47]. In FL, many application-oriented LSGD variants have been investigated [51, 18, 77, 32, 37, 53]. However, these works focus solely on i.i.d. sampling, which limits their applicability to Markovian sampling scenarios.
Markovian sampling, which has received increased attention in limited settings (see Table 1), is vital in settings where agents lack independent access to their data. For instance, in statistical applications, agents with an unknown a priori distribution often use Markovian sampling over i.i.d. sampling [40, 63]. In HLSGD across device-to-device (D2D) networks [32, 37], random walks reduce communication costs compared to the frequent aggregations required by Gossip algorithms [38, 28, 4]. For single-agent scenarios, vanilla SGD with Markovian noise, as applied in a D2D network, has shown improved communication efficiency and privacy [68, 28, 35]. In contrast, for agents with full data access, Markov Chain Monte Carlo (MCMC) methods can be more efficient than i.i.d. sampling, especially in high-dimensional spaces with constraints [27, 40], where acceptance-rejection methods [12] lead to computational inefficiency (e.g., wasted samples) due to multiple rejections before obtaining a sample that satisfies the constraints [26, 68]. In addition, shuffling methods can be viewed as high-order Markov chains [38], which achieve faster convergence than i.i.d. sampling [1, 78, 79].
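For intuition, here is a minimal sketch contrasting i.i.d. sampling with Markovian sampling via a simple random walk on a graph-structured local dataset. Both target the same (uniform) distribution in the long run, but the Markovian samples are correlated; the adjacency list and test function are illustrative, and the degree reweighting mirrors the reweighting mentioned in Section 4.

```python
import numpy as np

rng = np.random.default_rng(1)
# Local dataset laid out on a small undirected graph (adjacency list); hypothetical example.
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}
n = len(adj)
T = 200_000

# i.i.d. sampling: every step draws a data index uniformly at random.
iid_samples = rng.integers(0, n, size=T)

# Markovian sampling: a simple random walk moves to a uniformly chosen neighbor.
walk = np.empty(T, dtype=int)
walk[0] = 0
for t in range(1, T):
    walk[t] = rng.choice(adj[walk[t - 1]])

# A simple random walk's stationary distribution is proportional to node degrees,
# so importance reweighting by 1/degree recovers uniform-target averages.
deg = np.array([len(adj[i]) for i in range(n)])
f = np.arange(n, dtype=float)                      # some test function of the data index
iid_est = f[iid_samples].mean()
rw_est = np.sum(f[walk] / deg[walk]) / np.sum(1.0 / deg[walk])
print("target:", f.mean(), "| i.i.d.:", round(iid_est, 3), "| reweighted walk:", round(rw_est, 3))
```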
Reference | Analysis | Sampling | Communication Pattern | D.A.B. (distinguishes agents' behavior) | L.S. (linear speedup) | A.N.I. (asymptotic network independence)
[58] | Asym. | i.i.d. | DSGD | ✓ | ✓ | ✓ |
[51] | Asym. | i.i.d. | LSGD-RC | ✓ | ✓ | N/A |
[43, 47] | Non-Asym. | i.i.d. | DSGD-CT | ✓ | ||
[61] | Non-Asym. | i.i.d. | DSGD | ✓ | ||
[18, 53] | Non-Asym. | i.i.d. | LSGD-PP | ✓ | N/A | |
[37, 32] | Non-Asym. | i.i.d. | HLSGD | ✓ | ||
[77, 16] | Non-Asym. | i.i.d. | DFL | ✓ | ||
[71, 80, 69] | Non-Asym. | Markov | DSGD | |||
[42, 72] | Non-Asym. | Markov | LSGD-FP | ✓ | N/A | |
[68, 4, 28] | Non-Asym. | Markov | N/A (single agent) | N/A | N/A | N/A |
[38, 52] | Asym. | Markov | N/A (single agent) | N/A | N/A | N/A |
Our Work | Asym. | Markov | UD-SGD | ✓ | ✓ | ✓ |
Limitations of Non-Asymptotic Analysis on Agents' Sampling Strategies: Recent studies on the non-asymptotic behavior of DSGD and LSGD variants under Markovian sampling, as summarized in Table 1, have made significant strides. However, these works often fall short in accurately revealing the statistical influence of each agent's dynamics on the performance of UD-SGD. For instance, [71, 69] proposed error bounds that depend on a single mixing rate assumed to be identical for all agents, overlooking agent heterogeneity in sampling strategies. A similar assumption is also evident in [42]. More recent contributions from [80, 72] have attempted to relax these constraints by considering a finite-time bound that depends on the mixing time of the slowest agent. This approach, however, inherently focuses on the worst-performing agent, neglecting how other agents with faster mixing rates might positively influence the system.² Such an analysis fails to capture the collective impact of other agents on the overall system performance, a crucial aspect in large-scale applications where identifying and managing the worst-performing agent is challenging due to privacy concerns or sporadic unreachability. Since agents in distributed learning have the freedom to choose their sampling strategies, it is vital to understand how each agent's improved sampling approach contributes to the overall convergence speed of the UD-SGD algorithm. This understanding is key to enhancing system performance, particularly in large-scale machine learning scenarios where agent heterogeneity is a defining feature.

²Although improving the finite-time upper bound to distinguish each agent may not be the focus of the aforementioned works, their analyses require every Markov chain to be close to some neighborhood of its stationary distribution. This naturally incurs a maximum operator, and thus convergence is strongly influenced by the slowest mixing rate, i.e., the worst-performing agent.
Rationale for Asymptotic Analysis: Recent trends in convergence analysis have leaned towards non-asymptotic methods, yet it is crucial to recognize the complementary role of asymptotic analysis for a better understanding of convergence behaviors, as highlighted in [9, 56, 25, 39]. For vanilla SGD, [59, 17] emphasized that the central limit theorem (CLT) is far less asymptotic than it may appear under both i.i.d. and Markovian sampling. Notably, the limiting covariance matrix, a key statistical feature in vanilla SGD's CLT, also prominently features in the high-probability bound of [59], the explicit finite-time bound of [17], and the Wasserstein distance in the non-asymptotic CLT of [66]. [38] further underscored this by numerically showing that the limiting covariance matrix provides a more precise depiction of convergence than the mixing rates often used in finite-time upper bounds [26, 68]. Moreover, they argued that finite-time analysis may not suitably apply to certain efficient high-order Markov chains, due to the lack of comparative mixing-rate metrics.
Our Contributions: We present an asymptotic analysis of the UD-SGD algorithm (1) under heterogeneous agent dynamics and a large family of communication patterns $\{W(t)\}$. Specifically,
Under appropriate assumptions, all agents performing (1) asymptotically reach consensus and find a local minimum $\theta^*$: letting $\bar{\theta}_t \triangleq \frac{1}{N}\sum_{i=1}^{N}\theta^i_t$ denote the average model parameter among all agents, we have
(2)    $\lim_{t\to\infty} \theta^i_t \;=\; \lim_{t\to\infty} \bar{\theta}_t \;=\; \theta^{*} \quad \text{a.s.,} \qquad \forall\, i \in \{1,\dots,N\}.$
Moreover, we derive the CLT of UD-SGD in the form of
(3)    $\sqrt{\gamma_t^{-1}}\,\bigl(\bar{\theta}_t - \theta^{*}\bigr) \;\xrightarrow[t\to\infty]{\text{dist}}\; N\bigl(0,\; \mathbf{V}\bigr).$
Our framework addresses technical challenges in quantifying the consensus error under various communication patterns and a slowly increasing communication interval. This is a substantial extension compared to previous studies [58, 43, 51], particularly in regulating the growth of communication intervals (Assumption 2.3-ii) and in proving the boundedness of the scaled consensus error (Lemma B.1). Furthermore, we reformulate UD-SGD as a stochastic-approximation-like iteration and tackle the Markovian noise term using the Poisson equation, a technique previously confined to vanilla SGD with Markovian sampling [17, 38, 52]. The key here is to devise a noise decomposition that separates the consensus error among all agents from the error caused by the bias of the Markov chain, which aligns with the target distribution only asymptotically at infinity, not at finite times.
In analyzing (3), we derive the exact form of the limiting covariance matrix $\mathbf{V}$ in terms of per-agent asymptotic covariance matrices $\mathbf{U}_i$. Here, $\mathbf{U}_i$ is the limiting covariance matrix of agent $i$, which depends mainly on its sampling strategy $\boldsymbol{P}_i$. This allows us to show that improving individual agents' sampling strategies can reduce the covariance in the CLT, which in turn implies a smaller mean-square error (MSE) for large time $t$. This is a significant advancement over previous finite-sample bounds that only account for the worst-performing agent and do not fully capture the effect of individual agent dynamics on overall system performance. Our CLT result (3) also treats the recent findings in [38] as a very special case with $N = 1$, where the relationship therein between the sampling efficiency of the Markov chain and the limiting covariance matrix in the CLT of vanilla SGD carries over to our UD-SGD.
We demonstrate that our analysis supports recent findings from studies such as [42], which exhibited linear speedup scaling with the number of agents under LSGD-FP with Markovian sampling; and [62, 61], which examined the notion of 'asymptotic network independence' for DSGD with i.i.d. sampling, where the convergence of the algorithm (1) at large time depends solely on the left eigenvector of the communication matrix (the uniform vector here, since $W(t)$ is doubly stochastic) rather than the specific communication network topology encoded in $W(t)$, but now under Markovian sampling. We extend these findings, in view of the CLT, to a broader range of communication patterns and general sampling strategies.
We conduct numerical experiments using logistic regression and neural network training with several choices of agents’ sampling strategies, including a recently proposed one via nonlinear Markov chain [25]. Our results uncover a key phenomenon: a handful of compliant agents adopting highly efficient sampling strategies can match or exceed the performance of the majority using moderately improved strategies. This finding is crucial for practical optimization in large-scale learning systems, moving beyond the current literature that only considers the worst-performing agent in more restrictive settings.
2 Preliminaries
Basic Notations: We use $\|\cdot\|$ to indicate the Euclidean norm of a vector and the spectral norm of a matrix. The identity matrix of dimension $d$ is denoted by $I_d$, and the all-one (resp. all-zero) vector of dimension $N$ is denoted by $\mathbf{1}_N$ (resp. $\mathbf{0}_N$). Let $[N] \triangleq \{1, 2, \dots, N\}$. The diagonal matrix with the entries of a vector $x$ on the main diagonal is written as $\mathrm{diag}(x)$. We also use '$\preceq$' for the Loewner ordering, where $A \preceq B$ is equivalent to $x^{\top} A x \le x^{\top} B x$ for any vector $x$.
Asymptotic Covariance Matrix: The asymptotic variance is a widely used metric for evaluating the second-order properties of Markov chains associated with a scalar-valued test function in the MCMC literature, e.g., Chapter 6.3 of [12], and the asymptotic covariance matrix is its multivariate version for a vector-valued function. Specifically, we consider a finite, irreducible, aperiodic and positive recurrent (ergodic) Markov chain $\{X_k\}_{k\ge0}$ with transition matrix $\boldsymbol{P}$ and stationary distribution $\boldsymbol{\pi}$, and the estimator $\hat{\mu}_n \triangleq \frac{1}{n}\sum_{k=1}^{n} f(X_k)$ for any vector-valued function $f$. According to the ergodic theorem [12, 13], we have $\hat{\mu}_n \to \mu \triangleq \mathbb{E}_{\pi}[f(X)]$ a.s. As defined in [13, 38], the asymptotic covariance matrix for a vector-valued function $f$ is given by
(4)    $\Sigma(\boldsymbol{P}, f) \;\triangleq\; \lim_{n\to\infty}\, n\,\mathbb{E}\!\left[\bigl(\hat{\mu}_n - \mu\bigr)\bigl(\hat{\mu}_n - \mu\bigr)^{\!\top}\right],$
where $\mu \triangleq \mathbb{E}_{\pi}[f(X)] = \sum_{i}\pi_i f(i)$. By following the algebraic manipulations in [12, Theorem 6.3.7] for the asymptotic variance (the univariate version), we can rewrite (4) in a matrix form such that
(5)    $\Sigma(\boldsymbol{P}, f) \;=\; F_c^{\top}\!\left(D_{\pi} Z + Z^{\top} D_{\pi} - D_{\pi}\right) F_c,$
where $Z \triangleq (I - \boldsymbol{P} + \mathbf{1}\boldsymbol{\pi}^{\top})^{-1}$ is the fundamental matrix of the chain, $D_{\pi} \triangleq \mathrm{diag}(\boldsymbol{\pi})$, and $F_c$ is the matrix whose $i$-th row is $(f(i) - \mu)^{\top}$. This matrix form explicitly shows the dependence on the transition matrix $\boldsymbol{P}$ and its stationary distribution $\boldsymbol{\pi}$, and will be utilized in our Theorem 3.3.
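The matrix form (5) is easy to evaluate numerically. The sketch below is a minimal numpy example, assuming the centered-function convention stated above: it computes the asymptotic covariance through the fundamental matrix and cross-checks it against a truncated autocovariance series; the transition matrix and test function are arbitrary illustrations.

```python
import numpy as np

# A small ergodic chain on 3 states (hypothetical transition matrix).
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
n = P.shape[0]

# Stationary distribution: left Perron eigenvector of P.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

F = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # vector-valued test function f: state -> R^2
mu = pi @ F
Fc = F - np.outer(np.ones(n), mu)                     # centered function values

D = np.diag(pi)
Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))   # fundamental matrix
Sigma = Fc.T @ (D @ Z + Z.T @ D - D) @ Fc                     # closed form (5)

# Cross-check: truncated series of stationary autocovariances.
Sigma_series = Fc.T @ D @ Fc
Pk = np.eye(n)
for _ in range(2000):
    Pk = Pk @ P
    C = Fc.T @ D @ Pk @ Fc
    Sigma_series += C + C.T
print(np.round(Sigma, 4))
print("closed form matches series:", np.allclose(Sigma, Sigma_series, atol=1e-6))
```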
Model Description: The UD-SGD in (1) can be expressed in a compact iterative form, i.e., we have
(6)    $\boldsymbol{\theta}_{t+1} \;=\; \bigl(W(t)\otimes I_d\bigr)\bigl(\boldsymbol{\theta}_t \;-\; \gamma_{t+1}\,\nabla\mathbf{F}(\boldsymbol{\theta}_t, \mathbf{X}_t)\bigr), \qquad \boldsymbol{\theta}_t \triangleq \bigl[(\theta^1_t)^{\top},\dots,(\theta^N_t)^{\top}\bigr]^{\top},\;\; \nabla\mathbf{F}(\boldsymbol{\theta}_t, \mathbf{X}_t) \triangleq \bigl[\nabla F_1(\theta^1_t, X^1_t)^{\top},\dots,\nabla F_N(\theta^N_t, X^N_t)^{\top}\bigr]^{\top},$
at each time $t$, where each agent $i$ samples $X^i_t$ according to its own Markovian trajectory with stationary distribution $\pi_i$ matching its target data distribution. Let $K_l$ denote the communication interval between the $(l-1)$-th and $l$-th aggregation among agents, and let $t_l$ be the time instance of the $l$-th aggregation. We also define $l(t)$ as the index of the upcoming aggregation at time $t$, so that $K_{l(t)}$ indicates the communication interval for the $l(t)$-th aggregation, or more precisely, the length of the communication interval that includes the time index $t$. The communication pattern follows $W(t) = W_{l(t)}$ if $t = t_{l(t)}$ and $W(t) = I_N$ otherwise, where examples of $W_l$ will be discussed in Section F.1. Note that i) when the communication interval is always one, (6) reduces to DSGD; ii) when the interval is a constant larger than one, (6) becomes the local SGD in FL; iii) when the interval increases with the number of aggregations, we recover some choices of $K_l$ studied in [51] beyond LSGD-RC with i.i.d. sampling. This increasing communication interval aims to further reduce the frequency of aggregation among agents for lower communication costs, but now under a Markovian sampling setting and a wider range of communication patterns. We state below the assumptions needed for the main theoretical results.
Assumption 2.1 (Regularity of the gradient).
For each agent $i$ and any data point $x$, the function $F_i(\cdot, x)$ is $L$-smooth in terms of the model parameter, i.e., for any $\theta_1, \theta_2 \in \mathbb{R}^d$,
(7)    $\bigl\|\nabla F_i(\theta_1, x) - \nabla F_i(\theta_2, x)\bigr\| \;\le\; L\,\|\theta_1 - \theta_2\|.$
In addition, we assume that the objective function $f$ is twice continuously differentiable and $\delta$-strongly convex only around the local minimum $\theta^*$, i.e.,
(8)    $\nabla^2 f(\theta) \;\succeq\; \delta\, I_d \qquad \text{for all } \theta \text{ in some neighborhood of } \theta^{*}, \ \delta > 0.$
2.1 imposes regularity conditions on the gradient and Hessian matrix of the objective function $f$, as is commonly assumed in [10, 45, 29, 38]. Note that (7) requires per-sample Lipschitzness of $\nabla F_i(\cdot, x)$ and is stronger than Lipschitzness of its expected version $\nabla f_i(\cdot)$, which is commonly assumed under the i.i.d. sampling setting [73, 50, 30]. However, we remark that this is in line with previous work on DSGD and LSGD-FP under Markovian sampling as well [71, 42, 80], because $\nabla F_i(\theta, X^i_t)$ is no longer an unbiased stochastic version of $\nabla f_i(\theta)$ and the effect of the Markovian noise has to be taken into account in the analysis. The local strong convexity at the minimizer $\theta^*$ is commonly assumed to analyze the convergence of the algorithm under both asymptotic and non-asymptotic analysis [10, 29, 38, 45, 52, 80].
Assumption 2.2 (Ergodicity of Markovian sampling).
For each agent $i$, $\{X^i_t\}_{t\ge0}$ is an ergodic Markov chain with stationary distribution $\pi_i$ such that $\mathbb{E}_{X\sim\pi_i}[\nabla F_i(\theta, X)] = \nabla f_i(\theta)$, and the chains of different agents are mutually independent.
The ergodicity of the underlying Markov chains, as stated in 2.2, is commonly assumed in the literature [26, 68, 80, 42, 38]. This assumption ensures the asymptotic unbiasedness of the loss function , which takes i.i.d. sampling as a special case.
Assumption 2.3 (Decreasing step size and slowly increasing communication interval).
i) For bounded communication interval , we assume the polynomial step size and ; Or ii) If as , we assume and define , where the sequence satisfies , , and .
In 2.3, the polynomial step size is standard in the literature and satisfies $\sum_t \gamma_t = \infty$ and $\sum_t \gamma_t^2 < \infty$ [17, 38]. Inspired by [51], we introduce an auxiliary sequence to control the step size within each communication interval and to restrict the growth of the interval length $K_l$. Specifically, the conditions in 2.3-ii) ensure that the step size does not change too much within one interval and that $K_l$ does not increase too fast in $l$, while the last condition restricts the increment from one interval length to the next. Several practical forms of $K_l$ suggested by [51] also satisfy 2.3-ii). We defer to Appendix A the mathematical verification of these choices of $K_l$, together with the practical implications of an increasing communication interval.
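As a concrete, hypothetical instance of such a schedule, the sketch below pairs a polynomial step size with a slowly growing communication interval and prints the resulting aggregation times; the exponents are illustrative choices, not prescriptions from the paper.

```python
import numpy as np

gamma0, a = 1.0, 0.9          # polynomial step size gamma_t = gamma0 / t^a  (illustrative)
delta = 0.2                   # slow polynomial growth of the communication interval (illustrative)

def step_size(t):
    return gamma0 / t ** a

def interval(l):
    """Length K_l of the l-th communication interval; grows slowly with l."""
    return int(np.ceil(l ** delta))

# Aggregation times t_1 < t_2 < ... induced by the intervals K_1, K_2, ...
agg_times, t = [], 0
for l in range(1, 31):
    t += interval(l)
    agg_times.append(t)

print("first aggregation times:", agg_times[:10])
print("interval lengths:       ", [interval(l) for l in range(1, 11)])
# The step size should not vary much within one interval: ratio of first/last step size per interval.
for l in [5, 15, 30]:
    t_end = agg_times[l - 1]
    t_start = t_end - interval(l) + 1
    print(f"interval {l}: step-size ratio within interval = {step_size(t_start) / step_size(t_end):.4f}")
```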
Remark 1.
In 2.3, we incorporate an increasing communication interval along with a step size . This complements the choice of step size in [51, Assumption ], where for . It is important to note, however, that the increasing communication interval specified in [51, Assumption ] is applicable only in i.i.d sampling. Under the Markovian sampling framework, the expression loses its unbiased and Martingale difference properties. Consequently, the Martingale CLT application as utilized by [51] does not directly extend to Markovian sampling. To address this, we adapted techniques from [29, 58] to accommodate the increasing communication interval within the Markovian sampling setting and various communication patterns. This adaptation necessitates , a specification not covered in [51]. Exploring more general forms of that could relax this assumption is outside the scope of our current study.
Assumption 2.4 (Stability on model parameter).
We assume that $\sup_{t\ge0}\|\theta^i_t\| < \infty$ almost surely for every agent $i$.
2.4 states that the sequence of iterates always remains in a path-dependent compact set. It ensures the stability of the algorithm for the purpose of analyzing convergence, and is often assumed in the asymptotic analysis of vanilla SGD with Markovian noise [23, 29, 52]. As mentioned in [58, 70], checking 2.4 is challenging and requires case-by-case analysis, even under i.i.d. sampling. Only recently has the stability of SGD under Markovian sampling been studied in [9], and the corresponding result for UD-SGD remains unknown in the literature. Thus, we analyze each agent's sampling strategy in the asymptotic regime under this stability condition.
Assumption 2.5 (Contraction property of communication matrix).
i) $W(t)$ is independent of the sampling strategies $\{X^i_t\}_{i,t}$ and is assumed to be doubly stochastic for all $t$; ii) at each aggregation step, $W(t)$ is independently generated from some distribution such that, in expectation, multiplication by $W(t)$ contracts the disagreement among agents by a factor bounded away from one.
The doubly-stochasticity of in 2.5-i) is widely assumed in the literature [54, 24, 43, 80]. 2.5-ii) is a contraction property to ensure that agents employing UD-SGD will asymptotically achieve the consensus, which is also common in [7, 24, 80]. Examples of that satisfy 2.5-ii), e.g., Metropolis-Hasting matrix, partial agent participation in FL, are deferred to Appendix F.1 due to space constraint.
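One way to probe the contraction property numerically is to check how much a communication matrix shrinks the component of the parameters orthogonal to consensus. The sketch below does this for a Metropolis-Hastings matrix and for random partial-participation averaging, using the common formulation that the expected squared disagreement contracts; the constructions and constants are illustrative, not the paper's exact assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 6
J = np.ones((N, N)) / N                      # projector onto the consensus direction

def mh_matrix(edges, N):
    """Metropolis-Hastings (doubly-stochastic) matrix of an undirected graph."""
    deg = np.zeros(N, dtype=int)
    for u, v in edges:
        deg[u] += 1; deg[v] += 1
    W = np.zeros((N, N))
    for u, v in edges:
        W[u, v] = W[v, u] = 1.0 / (1 + max(deg[u], deg[v]))
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))
    return W

def partial_participation(N, k, rng):
    """A random subset of k agents averages its parameters; the others keep their own."""
    W = np.eye(N)
    S = rng.choice(N, size=k, replace=False)
    W[np.ix_(S, S)] = 1.0 / k
    return W

def expected_sq_contraction(W_samples):
    """Spectral norm of (I-J) E[W^T W] (I-J): expected squared shrinkage of disagreement."""
    M = np.mean([W.T @ W for W in W_samples], axis=0)
    P = np.eye(N) - J
    return np.linalg.norm(P @ M @ P, 2)

W_mh = mh_matrix([(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)], N)
print("MH (deterministic):        ", round(expected_sq_contraction([W_mh]), 4))   # strictly below 1
W_pp = [partial_participation(N, 3, rng) for _ in range(5000)]
print("partial participation k=3: ", round(expected_sq_contraction(W_pp), 4))     # about 0.6 for N=6, k=3
```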
3 Asymptotic Analysis of UD-SGD
Almost Sure Convergence: Let $\bar{\theta}_t \triangleq \frac{1}{N}\sum_{i=1}^{N}\theta^i_t$ represent the consensus (average) parameter among all the agents at time $t$. We establish the asymptotic consensus of the local parameters $\theta^i_t$ around $\bar{\theta}_t$, as stated in Lemma 3.1.
Lemma 3.1.
Lemma 3.1 indicates that all agents asymptotically reach consensus at a rate governed by the step size. This finding extends the scope of [58, Proposition 1], incorporating considerations for Markovian sampling, FL settings, and an increasing communication interval. The proof, detailed in Appendix B, primarily tackles the challenge of establishing the boundedness of the scaled consensus-error sequences almost surely for all agents; this is specifically analyzed in Lemma B.1. Next, with the additional 2.2, we are able to obtain the almost sure convergence of $\bar{\theta}_t$ to a local minimum $\theta^{*}$.
Theorem 3.2 is achieved by decomposing the Markovian noise term, using the Poisson equation technique as discussed in [6, 29, 17], into a martingale-difference noise term along with additional noise terms. We then reformulate (6) into an iteration akin to stochastic approximation, as depicted in (56). The subsequent step involves verifying the conditions on these noise terms under our stated assumptions. Crucially, this theorem also establishes that UD-SGD ensures almost sure convergence of each agent to a local minimum, even in scenarios where the communication interval gradually increases, in accordance with Assumption 2.3-ii). The detailed proof of this theorem is provided in Appendix C.
Central Limit Theorem: Let $\mathbf{U}_i$ represent the asymptotic covariance matrix (defined in (5)) associated with each agent $i$, given its sampling strategy $\boldsymbol{P}_i$ and the test function $\nabla F_i(\theta^{*}, \cdot)$. Define $\mathbf{U} \triangleq \frac{1}{N^2}\sum_{i=1}^{N}\mathbf{U}_i$. We assume the polynomial step size $\gamma_t = \gamma_0/t^{a}$ with $a \in (0.5, 1]$. In the case of $a = 1$, we further assume that $\gamma_0$ is large enough relative to the local strong-convexity parameter $\delta$ defined in (8), so that the matrix $\mathbf{M}$ below is Hurwitz. For notational simplicity, and without loss of generality, our remaining CLT result is stated while conditioning on the event that $\lim_{t\to\infty}\bar{\theta}_t = \theta^{*}$ for some local minimum $\theta^{*}$.
Theorem 3.3.
Let Assumptions 2.1 - 2.5 hold. Then,
(11)    $\sqrt{\gamma_t^{-1}}\,\bigl(\bar{\theta}_t - \theta^{*}\bigr) \;\xrightarrow[t\to\infty]{\text{dist}}\; N\bigl(0,\; \mathbf{V}\bigr),$
where the limiting covariance matrix is in the form of
(12)    $\mathbf{V} \;=\; \int_{0}^{\infty} e^{\,t\mathbf{M}}\; \mathbf{U}\; e^{\,t\mathbf{M}^{\top}}\, dt.$
Here, $\mathbf{M} = -\nabla^2 f(\theta^{*})$ if $a < 1$, while for $a = 1$ the matrix $\mathbf{M}$ additionally includes a positive multiple of the identity determined by $\gamma_0$, which is why the condition relating $\gamma_0$ and the local strong-convexity parameter defined in (8) is needed.
Moreover, let $\tilde{\theta}_t \triangleq \frac{1}{t}\sum_{s=1}^{t}\bar{\theta}_s$ denote the Polyak-Ruppert average of the consensus iterates. For $a \in (0.5, 1)$, we have
(13)    $\sqrt{t}\,\bigl(\tilde{\theta}_t - \theta^{*}\bigr) \;\xrightarrow[t\to\infty]{\text{dist}}\; N\bigl(0,\; \mathbf{V}_{\infty}\bigr), \qquad \mathbf{V}_{\infty} \;\triangleq\; \nabla^2 f(\theta^{*})^{-1}\, \mathbf{U}\, \nabla^2 f(\theta^{*})^{-1}.$
The proof, presented in Appendix D, addresses the technical challenges in deriving the CLT for UD-SGD, specifically the second-order conditions in decomposing the Markovian noise term, which are not present in the i.i.d. sampling case [58, 43, 51]. We decompose the Markovian noise into three parts in (48) using the Poisson equation: a martingale-difference term and two residual noise terms. The consensus error embedded in the residual noise terms is a new factor, whose characteristics have been quantified in our Lemma 3.1 but are not present in the single-agent scenario analyzed as an application of stochastic approximation in [22, 29]. The specifics of this analysis are expanded upon in Appendices D.1 to D.3. We require the step-size condition for $a = 1$ to ensure that the largest eigenvalue of $\mathbf{M}$ is negative, as this is a necessary condition for the existence of the integral in (12) (otherwise the integration diverges). In the case where there is only one agent ($N = 1$), $\mathbf{V}$ and $\mathbf{U}$ reduce to the matrices specified in the CLT result of vanilla SGD [29, 38, 52]. In addition, for the special case of a constant communication interval in Assumption 2.3-i) and i.i.d. sampling as shown in Table 1, we recover the CLT of LSGD-RC in [51]. See Appendix E for detailed discussions.
Theorem 3.3 has significant implications for the MSE of $\bar{\theta}_t$ for large time $t$: $\mathbb{E}\|\bar{\theta}_t - \theta^{*}\|^2 \approx \gamma_t\,\mathrm{Tr}(\mathbf{V}) = \gamma_t \sum_{k=1}^{d} e_k^{\top}\mathbf{V}e_k$, where $e_k$ is the $d$-dimensional vector of all zeros except for a one at the $k$-th entry. This indicates that a smaller limiting covariance matrix $\mathbf{V}$, according to the Loewner order, results in a smaller trace of $\mathbf{V}$ and consequently in a reduced MSE for large $t$. Considerations for a smaller $\mathbf{V}$ will be presented in the next section, where agents have the opportunity to improve their sampling strategies.
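To illustrate how a covariance of the Lyapunov-integral form discussed around (12), and the resulting large-$t$ MSE heuristic, can be evaluated numerically, here is a minimal sketch; the matrices M and U and the step-size values are hypothetical stand-ins, not quantities from the paper.

```python
import numpy as np

d = 3
rng = np.random.default_rng(3)

# Hypothetical Hurwitz matrix M (negative-definite Hessian-type term) and noise covariance U.
H = rng.normal(size=(d, d)); H = H @ H.T + np.eye(d)   # a positive-definite "Hessian"
M = -H                                                  # all eigenvalues of M are negative
B = rng.normal(size=(d, d)); U = B @ B.T / d            # PSD covariance of the averaged noise

# V = int_0^inf exp(tM) U exp(tM^T) dt solves M V + V M^T = -U,
# i.e. (I kron M + M kron I) vec(V) = -vec(U) with column-major vec.
A = np.kron(np.eye(d), M) + np.kron(M, np.eye(d))
V = np.linalg.solve(A, -U.flatten(order="F")).reshape((d, d), order="F")

print("V symmetric:", np.allclose(V, V.T), "| Lyapunov residual:", np.linalg.norm(M @ V + V @ M.T + U))
# Large-t MSE heuristic: E||theta_bar_t - theta*||^2 ~ gamma_t * tr(V).
for t in [1e3, 1e4, 1e5]:
    gamma_t = 1.0 / t ** 0.9
    print(f"t={int(t):>7d}   gamma_t * tr(V) = {gamma_t * np.trace(V):.3e}")
```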
Remark 2.
Studies by [62, 61] have shown that in DSGD with a fixed doubly-stochastic matrix $W$, the influence of the communication topology diminishes after a transient period. Our Theorem 3.3 extends these findings to Markovian sampling and a broader spectrum of communication patterns as in Table 1. This extension is based on the fact that the consensus error, impacted primarily by the communication pattern, decreases faster than the CLT scale and is thus not the dominant factor in the asymptotic regime, as suggested by Lemma 3.1.
Remark 3.
Recent studies have highlighted linear speedup with an increasing number of agents in the dominant term of their finite-sample error bounds under DSGD-CT with i.i.d. sampling [43] and LSGD-FP with Markovian sampling [42]. Our Theorem 3.3 demonstrates this phenomenon under the more diverse communication patterns and Markovian sampling in Table 1 via the leading term in our CLT. Specifically, the driving covariance scales inversely with the number of agents: $\mathbf{U} = \frac{1}{N^2}\sum_{i=1}^{N}\mathbf{U}_i = \frac{1}{N}\bar{\mathbf{U}}$, where $\bar{\mathbf{U}}$ denotes the average of the limiting covariance matrices across all agents, suggesting that the MSE will be improved by a factor of $N$. A similar argument also applies to the covariance in (13).
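A small numerical sketch of this linear-speedup behavior, under the simplifying assumptions that the driving covariance is the scaled sum of per-agent covariances as in Remark 3 and that all agents have comparable (randomly generated, hypothetical) per-agent covariances: the trace of the limiting covariance shrinks roughly like $1/N$.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 3
H = np.eye(d)                                  # hypothetical Hessian at the minimizer

def limiting_V(U):
    """Solve H V + V H = U (the a<1 case with M = -H) via the Kronecker/vec identity."""
    A = np.kron(np.eye(d), -H) + np.kron(-H, np.eye(d))
    return np.linalg.solve(A, -U.flatten(order="F")).reshape((d, d), order="F")

def random_psd():
    """Per-agent asymptotic covariance of comparable size (hypothetical)."""
    B = rng.normal(size=(d, d))
    return B @ B.T / d + 0.5 * np.eye(d)

for N in [1, 2, 4, 8, 16, 32]:
    U_agents = [random_psd() for _ in range(N)]
    U = sum(U_agents) / N ** 2                 # covariance driving the averaged Markovian noise
    V = limiting_V(U)
    print(f"N={N:>3d}   N * tr(V) = {N * np.trace(V):.3f}")   # roughly constant => tr(V) ~ 1/N
```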
Impact of Agents' Sampling Strategies: In the literature, the mixing-time-based technique has been widely used in the non-asymptotic analysis of SGD, DSGD and various LSGD variants in FL [26, 68, 69, 80, 42], i.e., for each agent $i$ and some constant $C_i > 0$,
(14)    $\max_{x}\,\bigl\|P_i^{t}(x, \cdot) - \pi_i\bigr\|_{TV} \;\le\; C_i\,\rho_i^{\,t}, \qquad t \ge 0,$
where $\rho_i \in (0,1)$ is the mixing rate of the underlying Markov chain of agent $i$. However, typical non-asymptotic analyses often rely on $\max_i \rho_i$, i.e., the worst-performing agent, in their finite-time bounds [80, 72], or assume an identical mixing rate across all agents [42, 69].
In contrast, Remark 3 highlights that each agent $i$ holds its own limiting covariance matrix $\mathbf{U}_i$, which is predominantly governed by its transition matrix $\boldsymbol{P}_i$, capturing the agent's sampling strategy and contributing equally to the overall performance of UD-SGD. For each agent $i$, denote by $\mathbf{U}_i$ and $\mathbf{U}_i'$ the asymptotic covariance matrices associated with two candidate sampling strategies $\boldsymbol{P}_i$ and $\boldsymbol{P}_i'$, respectively. Let $\mathbf{V}$ and $\mathbf{V}'$ be the limiting covariance matrices of the distributed system in (12), where agent $i$ employs $\boldsymbol{P}_i$ and $\boldsymbol{P}_i'$, respectively, while keeping the other agents' sampling strategies unchanged. Then, we have the following result.
Corollary 3.4.
For agent $i$, if there exist two sampling strategies $\boldsymbol{P}_i$ and $\boldsymbol{P}_i'$ such that $\mathbf{U}_i' \preceq \mathbf{U}_i$, we have $\mathbf{V}' \preceq \mathbf{V}$.
Corollary 3.4 directly follows from the definition of the Loewner ordering and the fact that the Loewner ordering is closed under addition (i.e., $A \preceq B$ implies $A + C \preceq B + C$). It demonstrates that even if a single agent improves its sampling strategy from $\boldsymbol{P}_i$ to $\boldsymbol{P}_i'$, this leads to an overall reduction of $\mathbf{V}$ (in terms of the Loewner ordering), thereby decreasing the MSE and benefiting the entire group of agents. The subsequent question arises: how do we identify an improved sampling strategy $\boldsymbol{P}_i'$ over the baseline $\boldsymbol{P}_i$?
This question has been partially addressed by [57, 48, 38], which qualitatively investigate the 'efficiency ordering' of two sampling strategies. In particular, [38, Theorem (i)] shows that a sampling strategy $\boldsymbol{P}'$ is more efficient than $\boldsymbol{P}$ if and only if $\Sigma(\boldsymbol{P}', f) \preceq \Sigma(\boldsymbol{P}, f)$ for any vector-valued function $f$. Consequently, in the UD-SGD framework, an agent $i$ employing a more efficient sampling strategy $\boldsymbol{P}_i'$ over the baseline $\boldsymbol{P}_i$ leads to $\mathbf{U}_i' \preceq \mathbf{U}_i$, thus satisfying the condition of Corollary 3.4. This finding, as per Corollary 3.4, implies an overall improvement in UD-SGD.
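The Loewner-ordering condition and its closure under addition, which drive Corollary 3.4, can be checked mechanically. A minimal sketch with hypothetical matrices:

```python
import numpy as np

def loewner_leq(A, B, tol=1e-10):
    """True if A <= B in the Loewner order, i.e., B - A is positive semi-definite."""
    return np.all(np.linalg.eigvalsh(B - A) >= -tol)

# Hypothetical per-agent covariances: U_new is U_old shrunk in every direction.
U_old = np.array([[2.0, 0.3], [0.3, 1.5]])
U_new = 0.6 * U_old

print("U_new <= U_old:", loewner_leq(U_new, U_old))
# Loewner ordering is preserved under addition: the system-level covariances inherit it.
other_agents = np.array([[1.0, 0.1], [0.1, 0.8]])      # sum of the remaining agents' covariances
print("system ordering preserved:", loewner_leq(U_new + other_agents, U_old + other_agents))
```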
For illustration purposes, we list a few examples where two competing sampling strategies follow the efficiency ordering: i) when an agent has complete access to the entire dataset (e.g., deep learning), shuffling techniques like single shuffling and random reshuffling are more efficient than i.i.d. sampling [38, 79]; ii) when an agent works with a graph-like data structure and employs a random walk, as depicted in Figure 1, using a non-backtracking random walk (NBRW) is more efficient than a simple random walk (SRW) [48]; iii) a recently proposed self-repellent random walk (SRRW) is shown to achieve near-zero sampling variance, indicating even higher sampling efficiency than NBRW and SRW [25].³ This random-walk-based sampling finds a particular application in large-scale FL within D2D networks (e.g., mobile networks, wireless sensor networks), where each agent acts as an edge server or access point, gathering information from the local D2D network [37, 32]. Employing a random walk over the local D2D network for each agent constitutes the sampling strategy.

³Note that SRRW is a nonlinear Markov chain that depends on the relative visit counts of each node in the graph. While its application in single-agent optimization has been studied in [39], expanding the theoretical examination of SRRW to multi-agent scenarios is beyond the scope of this paper. However, we can still numerically evaluate the performance of UD-SGD with multiple agents on general communication matrices using SRRW as a highly efficient sampling strategy in Section 4.
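To give a flavor of the efficiency ordering between SRW and NBRW, the sketch below empirically estimates the asymptotic variance of a sample average under both walks on a small regular graph (many independent runs of finite length); the graph and test function are illustrative, and SRRW is omitted since it requires tracking visit counts. NBRW typically yields the smaller estimate.

```python
import numpy as np

rng = np.random.default_rng(5)
# Petersen graph (3-regular): SRW and NBRW share the uniform stationary distribution here.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 5), (1, 6), (2, 7), (3, 8), (4, 9),
         (5, 7), (7, 9), (9, 6), (6, 8), (8, 5)]
n = 10
adj = [[] for _ in range(n)]
for u, v in edges:
    adj[u].append(v); adj[v].append(u)

f = np.arange(n, dtype=float)          # test function; the target is its uniform mean

def walk_mean(T, non_backtracking):
    """Average of f along a walk of length T, starting from a uniformly random node."""
    prev, cur = -1, rng.integers(n)
    total = 0.0
    for _ in range(T):
        nbrs = adj[cur]
        if non_backtracking and prev in nbrs and len(nbrs) > 1:
            nbrs = [x for x in nbrs if x != prev]   # never step back to the previous node
        prev, cur = cur, nbrs[rng.integers(len(nbrs))]
        total += f[cur]
    return total / T

T, R = 2000, 400
for name, nb in [("SRW ", False), ("NBRW", True)]:
    means = np.array([walk_mean(T, nb) for _ in range(R)])
    print(f"{name}: est. asymptotic variance ~ {T * means.var():.2f}   (mean {means.mean():.2f}, target {f.mean():.2f})")
```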
Theorem 3.3 and Corollary 3.4 not only qualitatively compare these sampling strategies but also allow for a quantitative assessment of the overall system enhancement. Since every agent contributes equally to the limiting covariance matrix of the distributed system as in Remark 3, a key application scenario is to encourage a subset of compliant agents to adopt highly efficient strategies like SRRW, potentially yielding better performance than universally upgrading to slightly improved strategies like NBRW. This approach, more feasible and impactful in large-scale machine learning scenarios where some agents cannot freely modify their sampling strategies, is a unique aspect of our framework not addressed in previous works focusing on the worst-performing agent [80, 42, 69, 72].
4 Experiments
In this section, we empirically evaluate the effect of agents' sampling strategies under various communication patterns in UD-SGD. We consider the $\ell_2$-regularized binary classification problem
(15)    $\min_{\theta\in\mathbb{R}^d}\;\; \frac{1}{N}\sum_{i=1}^{N} \frac{1}{|\mathcal{D}_i|}\sum_{(\mathbf{x}_j, y_j)\in\mathcal{D}_i} \log\!\bigl(1 + e^{-y_j \mathbf{x}_j^{\top}\theta}\bigr) \;+\; \frac{\kappa}{2}\,\|\theta\|^2,$
where the feature vectors $\mathbf{x}_j$ and corresponding labels $y_j$ in the local dataset $\mathcal{D}_i$ are held by agent $i$, and $\kappa$ is a small penalty parameter. We use the ijcnn1 dataset [14], which is evenly split into two groups of agents, with each agent holding a distinct subset of data points. Each agent in the first group has full access to its entire local dataset, and thus can employ i.i.d. sampling (baseline) or single shuffling. On the other hand, each agent in the other group has a graph-like data structure and uses SRW (baseline), NBRW, or SRRW with reweighting to sample its local dataset with uniform weight. In this simulation, we assume that agents can only communicate through a communication network using the DSGD algorithm. This scenario with heterogeneous agents, as depicted in Figure 1, is of great interest in large-scale machine learning [37, 32]. In addition, we employ a decreasing step size in our UD-SGD framework (1) because it is typically used for strongly convex objective functions and was found to give the fastest convergence in this simulation setup. Due to space constraints, we defer the detailed simulation setup, including the introduction of SRW, NBRW, and SRRW, to Section G.1.
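For reference, here is a minimal sketch of the per-sample loss and gradient used in this type of $\ell_2$-regularized logistic classification; synthetic data stands in for ijcnn1, and the penalty value and step-size schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
d, m = 22, 500                               # feature dimension and local dataset size (illustrative)
X = rng.normal(size=(m, d))
y = rng.choice([-1.0, 1.0], size=m)
kappa = 1e-4                                 # l2 penalty parameter (illustrative)

def loss(theta, i):
    """Per-sample regularized logistic loss."""
    return np.log1p(np.exp(-y[i] * (X[i] @ theta))) + 0.5 * kappa * theta @ theta

def grad(theta, i):
    """Gradient of the per-sample loss; this is what each agent computes in step (1a)."""
    margin = y[i] * (X[i] @ theta)
    return -y[i] * X[i] / (1.0 + np.exp(margin)) + kappa * theta

# One local SGD pass with a decreasing step size, as in the experiments.
theta = np.zeros(d)
for t in range(1, 5001):
    i = rng.integers(m)                      # i.i.d. sampling of a local data point
    theta -= (0.5 / t ** 0.7) * grad(theta, i)
print("final average loss:", round(float(np.mean([loss(theta, i) for i in range(m)])), 4))
```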
[Figure 2: MSE (with error bars) versus the number of iterations for the binary classification task, comparing agents' sampling strategies (i.i.d. sampling, shuffling, SRW, NBRW, SRRW) and communication patterns (DSGD, centralized SGD, LSGD-FP, DSGD-VT, DFL) across panels.]
The simulation results are obtained through independent trials. In Figure 2, we assume that the first group of agents performs either i.i.d. sampling or the shuffling method, while the agents in the other group all change their sampling strategies from the baseline SRW to NBRW and SRRW, as shown in the legend. This plot shows that an improved sampling strategy leads to an overall convergence speedup, since NBRW and SRRW are more efficient than SRW [38, 25]. Furthermore, it illustrates that SRRW is significantly more efficient than NBRW in this simulation setup, i.e., SRRW > NBRW > SRW in terms of sampling efficiency. While keeping the second group of agents unchanged, we can see that the shuffling method outperforms i.i.d. sampling with a smaller asymptotic MSE. However, the shuffling method may not perform well for small time due to slow mixing behavior in the initial period, which is also observed in the single-agent scenario in [65, 1, 38]. The error bars therein also indicate that the random-walk sampling strategy has a significant impact on the overall system performance, and SRRW has a smaller variance than NBRW and SRW.
In Figure 2, we let the first group of agents perform i.i.d. sampling while only changing a portion of agents in the second group to upgrade from SRW to SRRW; e.g., "SRW + SRRW" in the legend means that there are 30 agents using SRW while the remaining agents in the second group upgrade to SRRW. We observe that the more agents are willing to upgrade from SRW to SRRW, the smaller the asymptotic MSE, as predicted by Theorem 3.3 and Remark 3. This improvement in MSE does not scale linearly with the number of agents adopting SRRW because each agent holds its own dataset, and these datasets are not necessarily identical, resulting in different individual limiting covariance matrices $\mathbf{U}_i$.
While maintaining i.i.d. sampling for the first group of agents, we compare the performance when the second group of agents in Figure 2 employs NBRW or SRRW. Remarkably, the case where only a small number of agents in the second group adopt the far more efficient sampling strategy SRRW (the rest keeping SRW), through incentives or compliance, already produces a smaller MSE than all agents using the slightly better strategy NBRW. The performance gap becomes even more pronounced when more agents upgrade from SRW to SRRW. This shows that the performance of a distributed system can be improved significantly when a small proportion of agents adopt highly efficient sampling strategies.
Figure 2 also empirically illustrates the asymptotic network independence property via four algorithms under our UD-SGD framework: centralized SGD (communication interval of one, rank-1 communication matrix); LSGD-FP (FL with full client participation); DSGD-VT (DSGD with time-varying topologies, randomly chosen from doubly-stochastic matrices); and DFL (decentralized FL with a fixed MH-generated matrix and an increasing communication interval). We fix the sampling strategies (shuffling, SRRW) throughout this plot. All four algorithms overlap after enough steps, implying that they have entered the asymptotic regime where the CLT result dominates and exhibit similar performance, i.e., asymptotic network independence in the long run.
The remaining panels of Figure 2 show the performance of different sampling strategies under the DSGD-VT and DFL algorithms in terms of MSE. Both plots consistently demonstrate that improving agents' sampling strategies (e.g., shuffling > i.i.d. sampling, and SRRW > NBRW > SRW) leads to faster convergence with smaller MSE, supporting our theory.
Furthermore, in Section G.2, we simulate an image classification task with CIFAR-10 dataset [44] by training a -layer CNN and ResNet-18 model collaboratively through a -agent network. The result is illustrated in Figure 3, where SRRW outperforms NBRW and SRW as expected. In summary, we find that upgrading even a small portion of agents to efficient sampling strategies (e.g., shuffling method, NBRW, SRRW under different dataset structures) improves system performance in UD-SGD. These results are consistent in binary and image classification tasks, underscoring that every agent matters in distributed learning.
5 Conclusion
In this work, we develop a UD-SGD framework that establishes the CLT of various distributed algorithms with Markovian sampling. We overcome technical challenges such as quantifying the consensus error under very general communication patterns and decomposing the Markovian noise through the Poisson equation, which extends the analysis beyond the single-agent scenario. We demonstrate that even if only a few agents optimize their sampling strategies, the entire distributed system benefits from a smaller limiting covariance in the CLT, suggesting a reduced MSE. This finding challenges the currently established upper bounds in which the worst-performing agent leads the pack. Future studies could pivot towards developing fine-grained finite-time bounds that individually characterize each agent's behavior, and towards theoretically analyzing the effect of SRRW in UD-SGD.
6 Acknowledgments and Disclosure of Funding
We thank the anonymous reviewers for their constructive comments. This work was supported in part by National Science Foundation under Grant Nos. CNS-2007423, IIS-1910749, and IIS-2421484.
References
- Ahn et al. [2020] Kwangjun Ahn, Chulhee Yun, and Suvrit Sra. Sgd with shuffling: optimal rates without component convexity and large epoch requirements. In Proceedings of the 34th International Conference on Neural Information Processing Systems, pages 17526–17535, 2020.
- Alon et al. [2007] Noga Alon, Itai Benjamini, Eyal Lubetzky, and Sasha Sodin. Non-backtracking random walks mix faster. Communications in Contemporary Mathematics, 9(04):585–603, 2007.
- Andrieu et al. [2005] Christophe Andrieu, Éric Moulines, and Pierre Priouret. Stability of stochastic approximation under verifiable conditions. SIAM Journal on control and optimization, 44(1):283–312, 2005.
- Ayache et al. [2023] Ghadir Ayache, Venkat Dassari, and Salim El Rouayheb. Walk for learning: A random walk approach for federated learning from heterogeneous data. IEEE Journal on Selected Areas in Communications, 41(4):929–940, 2023.
- Ben-Hamou et al. [2018] Anna Ben-Hamou, Eyal Lubetzky, and Yuval Peres. Comparing mixing times on sparse random graphs. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1734–1740. SIAM, 2018.
- Benveniste et al. [2012] Albert Benveniste, Michel Métivier, and Pierre Priouret. Adaptive algorithms and stochastic approximations, volume 22. Springer Science & Business Media, 2012.
- Bianchi et al. [2013] Pascal Bianchi, Gersende Fort, and Walid Hachem. Performance of a distributed stochastic approximation algorithm. IEEE Transactions on Information Theory, 59(11):7405–7418, 2013.
- Billingsley [2013] Patrick Billingsley. Convergence of probability measures. John Wiley & Sons, 2013.
- Borkar et al. [2021] Vivek Borkar, Shuhang Chen, Adithya Devraj, Ioannis Kontoyiannis, and Sean Meyn. The ode method for asymptotic statistics in stochastic approximation and reinforcement learning. arXiv preprint arXiv:2110.14427, 2021.
- Borkar [2022] V.S. Borkar. Stochastic Approximation: A Dynamical Systems Viewpoint: Second Edition. Texts and Readings in Mathematics. Hindustan Book Agency, 2022.
- Boyd et al. [2011] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, Jonathan Eckstein, et al. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends® in Machine learning, 3(1):1–122, 2011.
- Brémaud [2013] Pierre Brémaud. Markov chains: Gibbs fields, Monte Carlo simulation, and queues, volume 31. Springer Science & Business Media, 2013.
- Brooks et al. [2011] Steve Brooks, Andrew Gelman, Galin Jones, and Xiao-Li Meng. Handbook of markov chain monte carlo. CRC press, 2011.
- Chang and Lin [2011] Chih-Chung Chang and Chih-Jen Lin. Libsvm: a library for support vector machines. ACM transactions on intelligent systems and technology (TIST), 2(3):1–27, 2011.
- Chellaboina and Haddad [2008] VijaySekhar Chellaboina and Wassim M Haddad. Nonlinear dynamical systems and control: A Lyapunov-based approach. Princeton University Press, 2008.
- Chellapandi et al. [2023] Vishnu Pandi Chellapandi, Antesh Upadhyay, Abolfazl Hashemi, and Stanislaw H Zak. On the convergence of decentralized federated learning under imperfect information sharing. arXiv preprint arXiv:2303.10695, 2023.
- Chen et al. [2020] Shuhang Chen, Adithya Devraj, Ana Busic, and Sean Meyn. Explicit mean-square error bounds for monte-carlo and linear stochastic approximation. In International Conference on Artificial Intelligence and Statistics, pages 4173–4183. PMLR, 2020.
- Chen et al. [2022] Wenlin Chen, Samuel Horváth, and Peter Richtárik. Optimal client sampling for federated learning. Transactions on Machine Learning Research, 2022.
- Choromanska et al. [2015] Anna Choromanska, Mikael Henaff, Michael Mathieu, Gérard Ben Arous, and Yann LeCun. The loss surfaces of multilayer networks. In Artificial intelligence and statistics, pages 192–204, 2015.
- Dauphin et al. [2014] Yann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in neural information processing systems, volume 27, 2014.
- Davis [1970] Burgess Davis. On the intergrability of the martingale square function. Israel Journal of Mathematics, 8:187–190, 1970.
- Delyon [2000] Bernard Delyon. Stochastic approximation with decreasing gain: Convergence and asymptotic theory. Technical report, 2000.
- Delyon et al. [1999] Bernard Delyon, Marc Lavielle, and Eric Moulines. Convergence of a stochastic approximation version of the em algorithm. Annals of statistics, pages 94–128, 1999.
- Doan et al. [2019] Thinh Doan, Siva Maguluri, and Justin Romberg. Finite-time analysis of distributed td (0) with linear function approximation on multi-agent reinforcement learning. In International Conference on Machine Learning, pages 1626–1635. PMLR, 2019.
- Doshi et al. [2023] Vishwaraj Doshi, Jie Hu, and Do Young Eun. Self-repellent random walks on general graphs–achieving minimal sampling variance via nonlinear markov chains. In International Conference on Machine Learning. PMLR, 2023.
- Duchi et al. [2012] John C Duchi, Alekh Agarwal, Mikael Johansson, and Michael I Jordan. Ergodic mirror descent. SIAM Journal on Optimization, 22(4):1549–1578, 2012.
- Dyer et al. [1993] Martin Dyer, Alan Frieze, Ravi Kannan, Ajai Kapoor, Ljubomir Perkovic, and Umesh Vazirani. A mildly exponential time algorithm for approximating the number of solutions to a multidimensional knapsack problem. Combinatorics, Probability and Computing, 2(3):271–284, 1993.
- Even [2023] Mathieu Even. Stochastic gradient descent under markovian sampling schemes. In International Conference on Machine Learning, 2023.
- Fort [2015] Gersende Fort. Central limit theorems for stochastic approximation with controlled markov chain dynamics. ESAIM: Probability and Statistics, 19:60–80, 2015.
- Fraboni et al. [2022] Yann Fraboni, Richard Vidal, Laetitia Kameni, and Marco Lorenzi. A general theory for client sampling in federated learning. In International Workshop on Trustworthy Federated Learning in conjunction with IJCAI, 2022.
- Ghosh et al. [2020] Avishek Ghosh, Jichan Chung, Dong Yin, and Kannan Ramchandran. An efficient framework for clustered federated learning. In Proceedings of the 34th International Conference on Neural Information Processing Systems, pages 19586–19597, 2020.
- Guo et al. [2022] Yuanxiong Guo, Ying Sun, Rui Hu, and Yanmin Gong. Hybrid local SGD for federated learning with heterogeneous communications. In International Conference on Learning Representations, 2022.
- Hall et al. [2014] P. Hall, C.C. Heyde, Z.W. Birnbauam, and E. Lukacs. Martingale Limit Theory and Its Application. Communication and Behavior. Elsevier Science, 2014.
- He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
- Hendrikx [2023] Hadrien Hendrikx. A principled framework for the design and analysis of token algorithms. In International Conference on Artificial Intelligence and Statistics, pages 470–489. PMLR, 2023.
- Horn and Johnson [1991] Roger A. Horn and Charles R. Johnson. Topics in Matrix Analysis. Cambridge University Press, 1991. doi: 10.1017/CBO9780511840371.
- Hosseinalipour et al. [2022] Seyyedali Hosseinalipour, Sheikh Shams Azam, Christopher G Brinton, Nicolo Michelusi, Vaneet Aggarwal, David J Love, and Huaiyu Dai. Multi-stage hybrid federated learning over large-scale d2d-enabled fog networks. IEEE/ACM Transactions on Networking, 2022.
- Hu et al. [2022] Jie Hu, Vishwaraj Doshi, and Do Young Eun. Efficiency ordering of stochastic gradient descent. In Advances in Neural Information Processing Systems, 2022.
- Hu et al. [2024] Jie Hu, Vishwaraj Doshi, and Do Young Eun. Accelerating distributed stochastic optimization via self-repellent random walks. In The Twelfth International Conference on Learning Representations, 2024.
- Jerrum and Sinclair [1996] Mark Jerrum and Alistair Sinclair. The markov chain monte carlo method: an approach to approximate counting and integration. Approximation Algorithms for NP-hard problems, PWS Publishing, 1996.
- Kailath et al. [2000] Thomas Kailath, Ali H Sayed, and Babak Hassibi. Linear estimation. Prentice Hall, 2000.
- Khodadadian et al. [2022] Sajad Khodadadian, Pranay Sharma, Gauri Joshi, and Siva Theja Maguluri. Federated reinforcement learning: Linear speedup under markovian sampling. In International Conference on Machine Learning, pages 10997–11057, 2022.
- Koloskova et al. [2020] Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, and Sebastian Stich. A unified theory of decentralized sgd with changing topology and local updates. In International Conference on Machine Learning, pages 5381–5393, 2020.
- Krizhevsky and Hinton [2009] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, 2009.
- Kushner and Yin [2003] Harold Kushner and G George Yin. Stochastic approximation and recursive algorithms and applications, volume 35. Springer Science & Business Media, 2003.
- Lalitha et al. [2018] Anusha Lalitha, Shubhanshu Shekhar, Tara Javidi, and Farinaz Koushanfar. Fully decentralized federated learning. In Advances in neural information processing systems, 2018.
- Le Bars et al. [2023] Batiste Le Bars, Aurélien Bellet, Marc Tommasi, Erick Lavoie, and Anne-Marie Kermarrec. Refined convergence and topology learning for decentralized sgd with heterogeneous data. In International Conference on Artificial Intelligence and Statistics, pages 1672–1702. PMLR, 2023.
- Lee et al. [2012] Chul-Ho Lee, Xin Xu, and Do Young Eun. Beyond random walk and metropolis-hastings samplers: why you should not backtrack for unbiased graph sampling. ACM SIGMETRICS Performance evaluation review, 40(1):319–330, 2012.
- Leskovec and Krevl [2014] Jure Leskovec and Andrej Krevl. Snap datasets: Stanford large network dataset collection, 2014.
- Li et al. [2020] Xiang Li, Kaixuan Huang, Wenhao Yang, Shusen Wang, and Zhihua Zhang. On the convergence of fedavg on non-iid data. In International Conference on Learning Representations, 2020.
- Li et al. [2022] Xiang Li, Jiadong Liang, Xiangyu Chang, and Zhihua Zhang. Statistical estimation and online inference via local sgd. In Proceedings of Thirty Fifth Conference on Learning Theory, volume 178 of Proceedings of Machine Learning Research, pages 1613–1661, 02–05 Jul 2022.
- Li et al. [2023] Xiang Li, Jiadong Liang, and Zhihua Zhang. Online statistical inference for nonlinear stochastic approximation with markovian data. arXiv preprint arXiv:2302.07690, 2023.
- Liao et al. [2023] Yunming Liao, Yang Xu, Hongli Xu, Lun Wang, and Chen Qian. Adaptive configuration for heterogeneous participants in decentralized federated learning. In IEEE INFOCOM 2023. IEEE, 2023.
- Mathkar and Borkar [2016] Adwaitvedant S Mathkar and Vivek S Borkar. Nonlinear gossip. SIAM Journal on Control and Optimization, 54(3):1535–1557, 2016.
- McMahan et al. [2017] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial intelligence and statistics, pages 1273–1282. PMLR, 2017.
- Meyn [2022] Sean Meyn. Control systems and reinforcement learning. Cambridge University Press, 2022.
- Mira [2001] Antonietta Mira. Ordering and improving the performance of monte carlo markov chains. Statistical Science, pages 340–350, 2001.
- Morral et al. [2017] Gemma Morral, Pascal Bianchi, and Gersende Fort. Success and failure of adaptation-diffusion algorithms with decaying step size in multiagent networks. IEEE Transactions on Signal Processing, 65(11):2798–2813, 2017.
- Mou et al. [2020] Wenlong Mou, Chris Junchi Li, Martin J Wainwright, Peter L Bartlett, and Michael I Jordan. On linear stochastic approximation: Fine-grained polyak-ruppert and non-asymptotic concentration. In Conference on Learning Theory, pages 2947–2997. PMLR, 2020.
- Neglia et al. [2020] Giovanni Neglia, Chuan Xu, Don Towsley, and Gianmarco Calbi. Decentralized gradient methods: does topology matter? In International Conference on Artificial Intelligence and Statistics, pages 2348–2358. PMLR, 2020.
- Olshevsky [2022] Alex Olshevsky. Asymptotic network independence and step-size for a distributed subgradient method. Journal of Machine Learning Research, 23(69):1–32, 2022.
- Pu et al. [2020] Shi Pu, Alex Olshevsky, and Ioannis Ch Paschalidis. Asymptotic network independence in distributed stochastic optimization for machine learning: Examining distributed and centralized stochastic gradient descent. IEEE signal processing magazine, 37(3):114–122, 2020.
- Robert et al. [1999] Christian P Robert, George Casella, and George Casella. Monte Carlo statistical methods, volume 2. Springer, 1999.
- Ross et al. [1996] Sheldon M Ross, John J Kelly, Roger J Sullivan, William James Perry, Donald Mercer, Ruth M Davis, Thomas Dell Washburn, Earl V Sager, Joseph B Boyce, and Vincent L Bristow. Stochastic processes, volume 2. Wiley New York, 1996.
- Safran and Shamir [2020] Itay Safran and Ohad Shamir. How good is sgd with random shuffling? In Conference on Learning Theory, pages 3250–3284. PMLR, 2020.
- Srikant [2024] R. Srikant. Rates of Convergence in the Central Limit Theorem for Markov Chains, with an Application to TD Learning. arXiv e-prints, art. arXiv:2401.15719, January 2024. doi: 10.48550/arXiv.2401.15719.
- Stich [2018] Sebastian U Stich. Local sgd converges fast and communicates little. In International Conference on Learning Representations, 2018.
- Sun et al. [2018] Tao Sun, Yuejiao Sun, and Wotao Yin. On markov chain gradient descent. In Advances in neural information processing systems, volume 31, 2018.
- Sun et al. [2023] Tao Sun, Dongsheng Li, and Bao Wang. On the decentralized stochastic gradient descent with markov chain sampling. IEEE Transactions on Signal Processing, PP, 07 2023. doi: 10.1109/TSP.2023.3297053.
- Vidyasagar [2022] M Vidyasagar. Convergence of stochastic approximation via martingale and converse lyapunov methods. arXiv preprint arXiv:2205.01303, 2022.
- Wai [2020] Hoi-To Wai. On the convergence of consensus algorithms with markovian noise and gradient bias. In 2020 59th IEEE Conference on Decision and Control (CDC), pages 4897–4902. IEEE, 2020.
- Wang et al. [2023] Han Wang, Aritra Mitra, Hamed Hassani, George J Pappas, and James Anderson. Federated temporal difference learning with linear function approximation under environmental heterogeneity. arXiv preprint arXiv:2302.02212, 2023.
- Wang et al. [2020] Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, and H. Vincent Poor. Tackling the objective inconsistency problem in heterogeneous federated optimization. In Advances in Neural Information Processing Systems, volume 33, pages 7611–7623, 2020.
- Wang and Ji [2022] Shiqiang Wang and Mingyue Ji. A unified analysis of federated learning with arbitrary client participation. In Advances in neural information processing systems, 2022.
- Wojtowytsch [2023] Stephan Wojtowytsch. Stochastic gradient descent with noise of machine learning type part i: Discrete time analysis. Journal of Nonlinear Science, 33(3):45, 2023.
- Woodworth et al. [2020] Blake Woodworth, Kumar Kshitij Patel, Sebastian Stich, Zhen Dai, Brian Bullins, Brendan Mcmahan, Ohad Shamir, and Nathan Srebro. Is local sgd better than minibatch sgd? In International Conference on Machine Learning, pages 10334–10343. PMLR, 2020.
- Ye et al. [2022] Hao Ye, Le Liang, and Geoffrey Ye Li. Decentralized federated learning with unreliable communications. IEEE Journal of Selected Topics in Signal Processing, 16(3):487–500, 2022.
- Yun et al. [2021] Chulhee Yun, Suvrit Sra, and Ali Jadbabaie. Open problem: Can single-shuffle sgd be better than reshuffling sgd and gd? In Proceedings of Thirty Fourth Conference on Learning Theory, volume 134, pages 4653–4658. PMLR, Aug 2021.
- Yun et al. [2022] Chulhee Yun, Shashank Rajput, and Suvrit Sra. Minibatch vs local SGD with shuffling: Tight convergence bounds and beyond. In International Conference on Learning Representations, 2022.
- Zeng et al. [2022] Sihan Zeng, Thinh T Doan, and Justin Romberg. Finite-time convergence rates of decentralized stochastic approximation with applications in multi-agent and multi-task learning. IEEE Transactions on Automatic Control, 2022.
- Zhao et al. [2018] Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. Federated learning with non-iid data. arXiv preprint arXiv:1806.00582, 2018.
Appendix A Discussion of 2.3-ii)
A.1 Suitable choices of
When we let (resp. ), as suggested by [51], it trivially satisfies since by definition (resp. ), and (resp. ) for any . Besides, . To ensure , it is sufficient to have , or equivalently, . Since can be arbitrarily small to satisfy the condition, is satisfied. When , we can rewrite the last condition as
(16) |
where we have such that and as , which leads to . Similarly, for , we have such that and as , which also leads to .
A.2 Practical implications of 2.3-ii)
In this assumption, we allow the number of local iterations to go to infinity asymptotically. In distributed learning environments such as mobile, IoT, and wireless sensor networks, where nodes are often constrained by battery life, the increasing communication interval in 2.3-ii) plays a crucial role in balancing energy costs with communication effectiveness. It allows agents to communicate more frequently early on, leading to a faster initial convergence to a neighborhood of the solution. Then, we slow down the communication frequency between agents to conserve energy, leveraging the diminishing returns on accuracy improvements from additional communications.
Consider the scenario where devices across multiple clusters collaborate on a distributed optimization task, utilizing local datasets. Devices within each cluster form a communication network that allows a virtual agent to perform a heterogeneous Markov chain trajectory via a random walk, or an i.i.d. sequence in a complete graph with self-loops, depending on the application context. Each cluster features an edge server that supports the exchange of model estimates with neighboring clusters. By performing local updates before uploading these to the cluster's edge server, the model benefits from reduced communication overhead. As the frequency of updates between devices and edge servers decreases (optimized by gradually increasing the communication interval), we effectively lower communication costs, particularly as the model estimate gets close to the solution.
Appendix B Proof of Lemma 3.1
Let and , where is the Kronecker product. Let . Then, motivated by [58], we define a sequence in the increasing communication interval case (resp. in the bounded communication interval case), where is defined in 2.3-ii). represents the consensus error of the model.
We first give the following lemma that shows the pathwise boundedness of .
Lemma B.1.
Lemma B.1 and 2.3-ii) imply that for any , for some constant that depends on and . Along with Assumption 2.4 such that is always bounded per each trajectory, it means
Let be a sequence of increasing compact subset of such that . Then, we know that for any ,
(17) |
(17) indicates either one of the following two cases:
- there exists some trajectory-dependent index such that each trajectory is always within the compact set (satisfied by the construction of increasing compact sets and 2.4), and the desired bound holds;
- the trajectory will escape the compact set eventually, for any index, in finite time once the time is large enough.
We can see the second case contradicts 2.4 because we assume every trajectory is within some compact set. Therefore, (17) for any is equivalent to showing and . Under Assumption 2.3-i) we can obtain similar result by following the same steps as above, which completes the proof of Lemma 3.1.
Proof of Lemma B.1.
Case 1 (Increasing communication interval ): By left multiplying (18) with , along with in 2.3-ii), we have the following iteration
(19) |
where the equality comes from . With (18) and (19), we have
(20) |
where the second equality comes from and . Let , dividing both sides of (20) by gives
(21) |
Define the filtration as . Recursively computing (21) w.r.t the time interval gives
(22) |
where is the backward multiplier, the second equality comes from and for . In 2.5, we have . Then,
(23) |
where the first inequality comes from and for . Then, we analyze the norm of the gradient in the second term on the RHS of (23) conditioned on . By 2.4, we assume is within some compact set at time such that for some constant . For and any , we have
Considering , we have such that . In addition, we have
(24) |
such that . Thus, for any ,
(25) |
For and any , we have
Similar to the steps in (24), we have
(26) |
Then, and, together with (25), we have
(27) |
By induction, for .
The next step is to analyze the growth rate of . By for , we have
For step size , we have such that . Then,
(28) |
where the last inequality comes from . We can see that the sum of the norms of the gradients is bounded by , which only depends on the compact set at time .
Let . Since from 2.3-ii), , there exists some large enough such that for any . Note that depends only on and is independent of . Then, let , we can rewrite (23) as
(29) |
where satisfies , which is derived from rearranging (29) as and upper bounding the RHS. Upon noting that , we obtain
(30) |
The induction leads to for any . Besides, for , by following the above steps (23) applied to (21), we have
(31) |
By (28) we have already shown that conditioned on . Therefore, for . This completes the boundedness analysis of .
Case 2 (Bounded communication interval ): In this case, we do not need the auxiliary step size and can directly work on for . Similar to (20), we have
(32) |
and letting , dividing both sides of the above equation by gives
(33) |
Then, by following the similar steps in (22) and (23), we obtain
(34) |
Also, similar to (25) - (28), we can bound the sum of the norms of the gradients as
(35) |
Now that is bounded above by , . Then, we further bound (35) as
(36) |
The subsequent proof essentially replicates (29) - (31) and is therefore omitted. ∎
Appendix C Proof of Theorem 3.2
We focus on analyzing the convergence property of , which is obtained by left multiplying (18) with , i.e.,
(37) |
where the second equality comes from being doubly stochastic and .
To keep the paper self-contained, we first state the almost sure convergence result for stochastic approximation that will be used in our proof.
Theorem C.1 (Theorem 2 [23]).
Consider the stochastic approximation in the form of
(38) |
Assume that
C1. w.p.1, the closure of is a compact subset of ;
C2. is a decreasing sequence of positive numbers such that ;
C3. w.p.1, exists and is finite. Moreover, .
C4. the vector-valued function is continuous on and there exists a continuously differentiable function such that for all . Besides, the interior of is empty, where .
Then, w.p.1, . ∎
We can rewrite (37) as
(39) |
and work on the convergence behavior of the third and fourth terms. By definition of the function , we have
(40) |
By the Lipschitz continuity of function in (7), we have
(41) |
where the second inequality comes from the Cauchy-Schwarz inequality. In Appendix B, we have shown almost surely such that almost surely.
Next, we further decompose the fourth term in (39). For an ergodic transition matrix and a function associated with the same state space , define the operator for the -step transition probability . Denote by the underlying transition matrices of all agents with corresponding stationary distribution . Then, for every function , there exists a corresponding function such that
(42) |
The solution of the Poisson equation (42) has been studied in the literature, e.g., [17, 38]. To keep the paper self-contained, we derive the closed form from scratch. First, we can obtain the function in the following recursive form:
(43) |
It is not hard to check that (43) satisfies (42). Note that by induction we get
(44) |
Then, we can further simplify (43), and the closed-form expression of is given as
(45) |
where the fourth equality comes from (44). Note that the so-called ‘fundamental matrix’ exists for every ergodic Markov chain from 2.2. Since function is Lipschitz continuous, we have the following lemma.
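For readers less familiar with this construction, the display below records a standard form of such a Poisson equation and its solution via the fundamental matrix; the symbols $P$ (ergodic transition matrix), $\pi$ (its stationary distribution), $H$ (test function) and $h = \sum_i \pi_i H(i)$ are generic placeholders rather than the exact notation of (42) - (45).

m(i) - (Pm)(i) = H(i) - h, \qquad
m(i) = \sum_{k \ge 0}\big((P^k H)(i) - h\big)
     = \big[(I - P + \mathbf{1}\pi^{\top})^{-1} H\big](i) - h .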
Lemma C.2.
Under assumption (A1), functions and are both Lipschitz continuous in for any .
Proof.
By (45), for any and , we have
(46) |
where the second inequality holds for a constant that is the largest absolute value of the entries of the matrix . Therefore, is Lipschitz continuous in . Moreover, following similar steps as above, we have
(47) |
such that is also Lipschitz continuous in , which completes the proof. ∎
Now with (42) we can decompose as
(48) |
Here is a martingale difference sequence, and we need the martingale convergence theorem stated in Theorem C.3.
Theorem C.3 (Theorem [64]).
For an -Martingale , set . If for some ,
(49) |
then converges almost surely. ∎
We want to show that such that converges almost surely by Theorem C.3. As we can see in (45), with Lemma C.2 and 2.4, for a sample path ( within a compact set ), and almost surely for all . This ensures that is an -bounded martingale difference sequence, i.e., . Together with 2.3, we get
(50) |
and thus converges almost surely.
Next, for the term we have
(51) |
As shown before, for a given sample path, is bounded almost surely for all and such that almost surely. Since , we have . Note that there exists a path-dependent constant (that bounds ) such that for any ,
(52) |
Since , there exists a positive integer such that for all , and for every . Therefore, is a Cauchy sequence and converges by the Cauchy convergence criterion. The last term of (51) tends to zero, and hence converges and is finite.
For the last term , Lemma C.2 leads to
(53) |
for the Lipschitz constant of . However, the relationship between and is not obvious in the D-SGD and FL settings due to the update rule (18) with communication matrix , unlike the classical stochastic approximation shown in (38). To resolve this issue, we introduce a novel decomposition of that takes the consensus error into account, i.e.,
(54) |
Using the Lipschitzness property of in Lemma C.2, we have
(55) |
In Appendix B we have shown almost surely. Moreover, is bounded per sample path. Therefore, such that almost surely.
Appendix D Proof of Theorem 3.3
To obtain Theorem 3.3, we need to utilize the existing CLT result for general SA in Theorem D.1 and check all the necessary conditions therein.
Theorem D.1 (Theorem 2.1 [29]).
Consider the stochastic approximation iteration (38), assume
C1. Let be the root of function , i.e., , and assume . Moreover, assume the mean field is twice continuously differentiable in a neighborhood of , and the Jacobian is Hurwitz, i.e., the largest real part of its eigenvalues ;
C2. The step size , , and either (i) , or (ii) for some ;
C3. almost surely for any ;
C4. (a) is an -Martingale difference sequence, i.e., , and there exists such that ;
(b) , where is a symmetric positive semi-definite matrix and
(57)
C5. Let , is -adapted, and
(58)
Then,
(59) |
where
(60) |
∎
Note that the matrix in condition C4(b) of Theorem D.1 was assumed to be positive definite in the original Theorem 2.1 [29]. This was only to ensure that the solution to the Lyapunov equation (60) is positive definite, which in turn was only used for the stability of the related autonomous linear ODE (e.g., Theorem 3.16 [15] or Theorem 2.2.3 [36]). However, in this paper, we do not need a strictly positive definite matrix . Therefore, we extend to be positive semi-definite such that is also positive semi-definite (see Lemma D.2 for the closed form of matrix ). This extension does not change any of the proof steps in [29].
D.1 Discussion about C1-C3
Our Assumption 2.1 corresponds to C1 by letting function therein. We can also let in Theorem 3.3 large enough to satisfy C2. The typical form of step size, also indicated in [29], is polynomial step size for . Note that satisfies C2 (i) and satisfies C2 (ii). Assumption 2.4 corresponds to C3.444Theorem D.1 is slightly modified in terms of condition C3, which is mentioned as a special case in Section 2.2 [29]. For the sake of mathematical simplicity, we stick to condition C3 in the proof.
D.2 Analysis of C4
To check condition C4, we need to analyze the Martingale difference sequence . Recall such that there exists a constant ,
(61) |
Since almost surely by 2.4 and is a finite state space, at all time , we have
(62) |
and there exists another constant such that by definition of , we have
(63) |
Therefore, a.s. for all and C4.(a) is satisfied.
We now turn to C4.(b). Note that for any , we have due to the independence between agent and , and . Then, we have
(64) |
The analysis of is inspired by Section 4 [29] and Section 4.3.3 [22], where they constructed another Poisson equation to further decompose the noise terms therein.555However, we note that [29, 22] considered the Lipschitz continuity of function defined in (68) as an assumption instead of a conclusion, where we give a detailed proof for this. We also obtain matrix in an explicit form, which coincides with the definition of asymptotic covariance matrix and was not simplified in [29]. The discussion on the improvement of is outlined in Section 3.2, which was not the focus of [29, 22] and was not covered therein. Here, expanding gives
(65) |
Denote by
(66) |
and letting its expectation w.r.t. the stationary distribution be , we can construct another Poisson equation, i.e.,
(67) |
for some matrix-valued function . Following the similar steps shown in (42) - (45), we can obtain the closed-form expression
(68) |
Then, we can decompose (65) into
(69) |
Letting , , , and , we want to prove that satisfies the first condition in C4, and meet the second condition in C4.
We now show that for all , is Lipschitz continuous in for some compact subset . For any and , we can get
(70) |
for some constant , where the last inequality comes from since and the Lipschitz continuous function . Similarly, we can get . Therefore, and are Lipschitz continuous in for any .
For the sequence , by applying Theorem 3.2 and conditioned on for an optimal point , we have . This implies for every and thus as almost surely, which satisfies the first condition in (57).
For the Martingale difference sequence , we use the Burkholder inequality (e.g., Theorem 2.10 [33], [21]) such that for and some constant ,
(71) |
By the definition (66) and Assumption 2.4, for a sample path, for any , as well as , which leads to for any because of (68). Then, we have for the path-dependent constant . Taking and we have
(72) |
Thus, the Lebesgue dominated convergence theorem gives
and we have .
For the sequence , we have
(73) |
Since and are Lipschitz continuous in , is also Lipschitz continuous in and is bounded. We have
(74) |
where for a given sample path, is the Lipschitz constant of , and for any and . Then,
(75) |
because by assumption 2.3. Therefore, the second condition of C4 is satisfied.
D.3 Analysis of C5
We now analyze condition C5. The decreasing rate of each term in (56) has been proved in Appendix C. Specifically, by assumption 2.4, there exists a compact subset for a given sample path, and
• We have shown that a.s., which implies a.s.
• For , in the case of increasing communication interval, , by Assumption 2.3-ii), we know such that almost surely. On the other hand, in the case of bounded communication interval, such that a.s.
• Since almost surely, we have almost surely. Then, leads to a.s.
Let and . From the above, C5 in Theorem D.1 is satisfied, and hence all the conditions of Theorem D.1 hold.
D.4 Closed Form of Limiting Covariance Matrix
Lastly, we need to analyze the closed-form expression of as in C4 (b) of Theorem D.1. Recall that and in (69). We now give the exact form of function as follows:
(76) |
where the second equality comes from the recursive form of in (45), and that the process is in its stationary regime, i.e., from the beginning. The last equality comes from rewriting in a matrix form. Note that is exactly the asymptotic covariance matrix of the underlying Markov chain associated with the test function . By utilizing the following lemma, we can obtain the explicit form of as defined in (60).
Lemma D.2 (Lemma D.2.2 [41]).
If all the eigenvalues of matrix have negative real part, then for every positive semi-definite matrix there exists a unique positive semi-definite matrix satisfying . The explicit solution is given as
(77) |
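For completeness, writing $K$ for the Hurwitz matrix and $U$ for the positive semi-definite matrix in Lemma D.2 (generic symbols assumed here), the Lyapunov equation and its explicit solution take the standard form

K V + V K^{\top} + U = 0, \qquad
V = \int_{0}^{\infty} e^{tK}\, U\, e^{tK^{\top}}\, \mathrm{d}t .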
D.5 CLT of Polyak-Ruppert Averaging
We now consider the CLT result for the Polyak-Ruppert average . The proof proceeds in a similar way, by verifying that the conditions of the related CLT of Polyak-Ruppert averaging for stochastic approximation are satisfied. The additional assumption is given below.
C6. For the sequence in (38), with probability .
Then, the CLT of Polyak-Ruppert averaging is as follows.
Theorem D.3 (Theorem 3.2 of [29]).
Consider the iteration (38), assume C1, C3, C4, C5 in Theorem D.1 are satisfied. Moreover, assume C6 is satisfied. Then, with step size for , we have
(78) |
where .
Discussion about C1 and C3 can be found in Section D.1. Condition C4 has been analyzed in Section D.2 and condition C5 has been examined in Section D.3. The only condition left to analyze is C6, which is based on the results obtained in Section D.3. In view of (56), , so C6 is equivalent to
(79) |
In Section D.3, we have shown that , . Note that by Assumption 2.3, we consider bounded communication interval for step size for , and hence, such that . We then know that
(80) |
such that
(81) |
which proves (79), so C6 is verified. Therefore, Theorem D.3 holds under our Assumptions 2.1 - 2.5.
Appendix E Discussion on the comparison of Theorem 3.3 to the CLT result in [51]
As a byproduct of our Theorem 3.3, we have the following corollary.
Proof.
Since for all , we have . There is an existing result showing that the CLT of the (normalized) partial sum along a sub-sequence yields the same normal distribution as that of the original sequence.
Theorem E.2 (Theorem 14.4 of [8]).
Given a sequence of random variables with partial sum such that . Let be some positive random variable taking integer values such that is on the same space as . In addition, for some sequence going to infinity, for a positive constant . Then, .
From Theorem E.2 and our Theorem 3.3, we have . ∎
Recently, [51] studied the CLT under the LSGD-FP algorithm with i.i.d. sampling (with a slightly different setting of the step size). We are able to recover Theorem 3.1 of [51] under the constant communication interval while adjusting their step size to make a fair comparison. We state their algorithm below to keep the paper self-contained. During each communication interval ,
(83) |
The CLT result associated with (83) is given below.
Theorem E.3 (Theorem 3.1 of [51]).
Under LSGD-FP algorithm with i.i.d. sampling, we have
(84) |
where .
Note that for constant . We can rewrite (84) as
(85) |
such that
(86) |
Note that the step size in (83) remains unchanged during each communication interval, while ours in (1) continues to decrease even within the same communication interval. This makes our step size decrease faster than theirs. To make a fair comparison, we choose only a sub-sequence in (86) so that, equivalently, our step sizes coincide with theirs at each aggregation step. In this case, we again use Theorem E.2 to obtain
(87) |
such that
(88) |
Therefore, our Corollary E.1 also recovers Theorem 3.1 of [51] under the constant communication interval , but with more general communication patterns and Markovian sampling.
Appendix F Discussion on Communication Patterns
F.1 Examples of Communication Matrix
Metropolis-Hastings Algorithm: In decentralized learning settings such as D-SGD, HLSGD and DFL, at the aggregation step can be generated locally using the Metropolis-Hastings algorithm based on the underlying communication topology, and is deterministic [62, 43, 80]. Specifically, each agent exchanges its degree with its neighbors , forming the weight for and . In this case, is doubly stochastic and symmetric. By the Perron-Frobenius theorem, its SLEM . Then, , which satisfies 2.5-ii). It is worth noting that this algorithm is robust to time-varying communication topologies.
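For illustration, a minimal sketch of this construction from a 0/1 adjacency matrix is given below, using the standard Metropolis-Hastings weights min(1/d_i, 1/d_j) for neighboring agents and placing the remaining mass on the diagonal; the function and variable names are ours.

import numpy as np

def metropolis_hastings_matrix(adj):
    # Build a doubly-stochastic, symmetric matrix from a 0/1 adjacency matrix.
    deg = adj.sum(axis=1)                       # node degrees
    n = adj.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adj[i, j]:
                W[i, j] = min(1.0 / deg[i], 1.0 / deg[j])
        W[i, i] = 1.0 - W[i].sum()              # self-weight absorbs the remaining mass
    return W

# Example: a 4-node path graph 0-1-2-3.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
W = metropolis_hastings_matrix(adj)
assert np.allclose(W.sum(axis=0), 1) and np.allclose(W.sum(axis=1), 1)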
Client Sampling in FL: For LSGD-FP studied in [67, 76, 42], trivially satisfies 2.5-ii). For LSGD-PP, on the other hand, only a small fraction of agents participate in each aggregation step for consensus [50, 30]. Denote by a randomly selected set of agents (without replacement) of fixed size at time , which plays the role of aggregating for agent . Additionally, the central server needs to broadcast the updated parameter to the newly selected set with the same size, which results in a bijective mapping (for and ) and a corresponding permutation matrix . Then, the communication matrix becomes .666In Section F.2, we will discuss the mathematical equivalence between our client sampling strategy and the commonly used one in the FL literature [50, 30]. Specifically, if and otherwise. Besides, for , for , and otherwise. Note that is now a random matrix, since is a randomly chosen subset of size . Clearly, for each choice of , is doubly stochastic, symmetric and . Taking the expectation of w.r.t. the randomly selected set gives for and for . Note that has all positive entries. Therefore, we use the fact for permutation matrix such that by the Perron-Frobenius theorem and eigendecomposition, which satisfies 2.5-ii).
F.2 Discussion on partial client sampling
The commonly used partial client sampling algorithm in the FL literature [50, 30] is FedAvg as follows:
1. At time , the central server updates its global parameter from the agents in the previous set . Then, the central server selects a new subset of agents and broadcasts to agent , i.e., ;
2. Each selected agent computes steps of SGD locally and consecutively updates its local parameter according to (1a);
3. Each selected agent uploads to the central server.
Then, the central server repeats the above three steps with and a new set of selected agents.
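A minimal sketch of these three steps is given below; the toy quadratic losses, the uniform selection of the subset, and all variable names are illustrative assumptions rather than the exact FedAvg configuration used in [50, 30].

import numpy as np

rng = np.random.default_rng(1)
N, S, K, d = 10, 3, 5, 2                     # agents, selected per round, local steps, dimension
targets = rng.normal(size=(N, d))            # hypothetical heterogeneous local optima
theta_global = np.zeros(d)

for t in range(100):
    selected = rng.choice(N, size=S, replace=False)       # step 1: sample a new subset
    uploads = []
    for i in selected:
        theta_i = theta_global.copy()                      # broadcast global parameter to agent i
        for k in range(K):                                 # step 2: K local SGD steps (1a)
            g = (theta_i - targets[i]) + 0.1 * rng.normal(size=d)
            theta_i -= 0.05 * g
        uploads.append(theta_i)                            # step 3: upload to the server
    theta_global = np.mean(uploads, axis=0)                # server averages the uploads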
In our client sampling scheme, at the aggregation step , the design of results in for a selected agent , and for an unselected agent . Meanwhile, the central server updates the global parameter for . Then, the permutation matrix ensures that the newly selected agent will use as the initial point for its subsequent SGD iterations. Consequently, from the selected agents’ perspective, the communication matrix corresponds to step 1 in FedAvg. As we can observe, both algorithms update the global parameter identically from the central server’s viewpoint, rendering them mathematically equivalent regarding the global parameter update.
We acknowledge that under the i.i.d. sampling strategy, the behavior of unselected agents in our algorithm differs from FedAvg. Specifically, unselected agents are idle in FedAvg, while they continue the SGD computation in our algorithm (despite not contributing to the global parameter update). Importantly, when an unselected agent is later selected, the central server overwrites its local parameter during the broadcasting process. This ensures that the activities of agents while unselected have no impact on the global parameter update.
To our knowledge, the FedAvg algorithm under the Markovian sampling strategy remains unexplored in the FL literature. Extrapolating the behavior of unselected agents in FedAvg from i.i.d. sampling to Markovian sampling suggests that unselected agents would remain idle. In contrast, our algorithm enables unselected agents to continue evolving . These additional transitions contribute to faster mixing of the Markov chain for each unselected agent and a smaller bias of relative to its mean field , potentially accelerating the convergence.
Appendix G Additional Simulation
G.1 Simulation Setup in Section 4
This simulation is performed on a PC with an AMD R9 5950X, an RTX 3080, and 128 GB of RAM. We assume that agents follow the DSGD algorithm (1). In Figure 2 - 2, each agent holds a disjoint local dataset (non-overlapping data points for every agent), while in Figure 2 - 2 we distribute the ijcnn1 dataset [14] among agents with a more heterogeneous distribution by leveraging a Dirichlet distribution with the default alpha value of .
Moreover, we assume that all agents are distributed over a communication network. In order to create this network among agents and the graph-like dataset structure held by each agent, we utilize connected sub-graphs from the real-world graph Facebook in SNAP [49]. All agents collaborate to generate a deterministic communication matrix with the Metropolis-Hastings algorithm of the following form: For , we have
where represents the degree of agent in the graph. The communication interval is set to , as is the usual choice in DSGD [71, 80, 61, 69].
For the first group of agents, we assume they have full access to their datasets, thus performing i.i.d. sampling or single shuffling. In particular,
• i.i.d. sampling employed by agent means that the data point is independently and uniformly sampled from its dataset at each time .
• Single shuffling, as its name suggests, shuffles the dataset only once and adheres to that specific order throughout the training process (both access patterns are sketched right after this list).
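For concreteness, the two access patterns can be sketched as follows (the toy dataset and generator names are illustrative):

import random

dataset = list(range(100))            # indices of an agent's local data points

def iid_sampler(data):
    # i.i.d. sampling: draw uniformly at random, with replacement, at every step.
    while True:
        yield random.choice(data)

def single_shuffling(data):
    # Single shuffling: shuffle once, then repeatedly traverse in that fixed order.
    order = data[:]
    random.shuffle(order)
    while True:
        for x in order:
            yield x

iid_stream = iid_sampler(dataset)
shuffled_stream = single_shuffling(dataset)
print(next(iid_stream), next(shuffled_stream))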
On the other hand, within the second group of agents, we assume that they hold graph-like datasets. Now, we introduce simple random walk (SRW), non-backtracking random walk (NBRW), and self-repellent random walk (SRRW) in order:
• SRW refers to the walker that chooses one of the neighboring nodes uniformly at random.
• NBRW modifies SRW by choosing uniformly at random among the neighboring nodes excluding the one visited in the previous step, thereby avoiding immediate backtracking and improving sampling efficiency over SRW.
• SRRW, recently proposed by [25], is designed with a nonlinear transition kernel of the following form (a sketch of the sampler is given after this list):
(89) where matrix is the transition kernel of the baseline Markov chain and is its corresponding stationary distribution. Additionally, denotes the force of self repellence, and larger leads to stronger force of self repellence, thus higher sampling efficiency [25, Corollary 4.3]. Moreover, vector is in the interior of the probability simplex, representing the empirical distribution, in other words, the visit frequency of each node in the graph. The update rule of this empirical distribution is in the following form:
(90) where is the step size of SRRW iterates. was originally proposed in [25] and was recently extended to in [39]. In this simulation, we use SRW as the baseline Markov chain of SRRW, and in turn is proportional to the degree distribution. We also assume , i.e., each node has been visited once, and choose the step size , force of self repellence according to the suggestion in [39, Section 4].
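Below is a minimal sketch of an SRRW sampler on a toy graph, assuming the kernel form in which the baseline transition probability to node j is reweighted by (x_j / mu_j)^(-alpha) and the empirical distribution is updated as in (90); the graph, the step size, and all names are illustrative assumptions.

import numpy as np

def srrw_step(P, mu, x, current, alpha, rng):
    # One SRRW transition from `current`, assuming the kernel
    # K_x(i, j) proportional to P[i, j] * (x[j] / mu[j]) ** (-alpha).
    weights = P[current] * (x / mu) ** (-alpha)
    probs = weights / weights.sum()
    return rng.choice(len(mu), p=probs)

rng = np.random.default_rng(0)
# Baseline SRW on a 4-node path graph; its stationary distribution is degree-proportional.
P = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0]])
deg = np.array([1.0, 2.0, 2.0, 1.0])
mu = deg / deg.sum()

alpha = 5.0                             # hypothetical force of self repellence
x = np.ones(4) / 4                      # empirical distribution, initialized uniform
node, visits = 0, np.zeros(4)
for n in range(1, 5001):
    node = srrw_step(P, mu, x, node, alpha, rng)
    gamma = 1.0 / (n + 1)               # hypothetical SRRW step size
    x = x + gamma * (np.eye(4)[node] - x)   # empirical-distribution update as in (90)
    visits[node] += 1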
Since SRW, NBRW, and SRRW all admit a stationary distribution proportional to the degree distribution, in order to obtain the uniform target in (15), we need to reweight the gradient computed by each agent so as to maintain an asymptotically unbiased gradient. Thus, agent should modify its SGD update from (1a) to the following:
(91) |
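A minimal sketch of such a reweighted update is given below: the sampled gradient is scaled by a weight proportional to the inverse degree of the visited node so that the update is asymptotically unbiased w.r.t. the uniform target; the exact normalization constant in (91) is not reproduced here and the names are ours.

import numpy as np

def reweighted_sgd_step(theta, grad_fn, node, deg, beta):
    # One SGD step with importance reweighting: the walk visits node i with
    # probability proportional to deg[i], while the target is uniform, so the
    # sampled gradient is scaled by a weight proportional to 1 / deg[node].
    weight = (1.0 / deg[node]) * deg.mean()   # normalized so that the weights average to 1
    return theta - beta * weight * grad_fn(theta, node)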
G.2 Image Classification Task
In this part, we perform the image classification task with a -layer neural network, where the CIFAR10 dataset [44] with k images is evenly distributed to agents. Each agent possesses k images, which are further divided into batches, each containing images.
The Convolutional Neural Network (CNN) model used in this simulation encompasses:
• Two convolutional layers (i.e., nn.Conv2d(3, 6, 5) and nn.Conv2d(6, 16, 5)), each followed by ReLU activation functions to introduce non-linearity and max pooling (i.e., nn.MaxPool2d(2, 2)) to reduce spatial dimensions.
• Three fully connected (linear) layers, concluding with a softmax output to handle the multi-class classification problem (a sketch of this architecture is given after the list).
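A PyTorch sketch of this architecture is given below; the fully connected widths (120 and 84) follow the standard CIFAR-10 tutorial sizing and are our assumption, as the exact widths are not specified above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.pool = nn.MaxPool2d(2, 2)
        # Fully connected widths (120, 84) are assumed, not taken from the paper.
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, num_classes)

    def forward(self, x):                        # x: (batch, 3, 32, 32) CIFAR-10 images
        x = self.pool(F.relu(self.conv1(x)))     # conv -> ReLU -> max pool
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return F.softmax(self.fc3(x), dim=1)     # softmax output for multi-class classification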
Similar to the simulation setup in Section 4, among the participating agents, five have unrestricted access to their respective data allocations, enabling them to utilize the shuffling method to iterate through their batches. The other five agents are designed to simulate limited data access scenarios. Their data access is structured using five distinct graph topologies extracted from the SNAP dataset collection [49], each graph simulating a unique communication pattern among the batches (nodes) of data. Within these topologies, agents adopt one of three random walk strategies — SRW, NBRW, and SRRW, all with importance reweighting — to sample the batches for training.
Local model training is conducted for five epochs at each agent before aggregating the updates at a central server — a process repeated for a total of communication rounds. Each epoch consists of a full traversal of the local dataset of agents in the first group, or batches sampled for training in the second group of agents. To mimic realistic conditions, we also introduce partial agent participation where only of agents are selected randomly to transmit their updates in each round, reflecting the intermittent communication in real-world FL deployments. Lastly, the selection of the step size for SRRW iterates (90) is a critical aspect of our experiments. In this simulation, we experiment with various values of to determine the most advantageous setting for maximizing the benefits of the SRRW strategy. Based on our findings, the best choice for the SRRW step size is , in other words, .
The simulation result is quantified by averaging the training loss across ten repeated trials for each configuration. As depicted in Figure 3, the training results are consistent with our previous findings in Figure 2 in the context of the FL framework and neural network training: using a more effective sampling approach, even for only a portion of the agents, yields significant improvements in the overall training of the model, and these gains are amplified by the highly efficient sampling strategy SRRW.
In Figure 3 and 3, we perform image classification experiments in the FL setting with partial client participation. Only random agents participate in the training process at each aggregation phase. In Figure 3, we fix the sampling strategy (shuffling, SRRW with ) and test the linear speedup effect for the -layer CNN model by duplicating agents to agents with , keeping the same participation ratio . As can be seen from the plot, the training loss is inversely proportional to the number of agents, i.e., at rounds, the training loss is for agents, for agents, for agents, and for agents. In Figure 3, we extend the simulation from the 5-layer CNN model to the ResNet-18 model [34] in order to numerically test the performance of different sampling strategies in a more complex neural network training task. Fixing shuffling in the first group of agents, we observe that improving the Markovian sampling from SRW to NBRW, and then to SRRW, accelerates the training process and yields a smaller training loss.
Appendix H Limitations
Our study provides crucial insights into the nuanced effects of agents' sampling behaviors in UD-SGD, where improving each agent's sampling strategy speeds up overall system performance with no additional computational burden beyond the extra storage for the visit counts used in sampling their datasets. UD-SGD is also scalable to larger datasets, as the sampling strategy (i.e., random walk) employed by each agent leverages only local information about its dataset. However, our work has two limitations that should be acknowledged.
1. Assumption 2.4 posits that the parameter trajectory is almost surely bounded, which is a strong assumption. This is crucial for guaranteeing the well-defined nature of all related quantities. Mechanisms such as projections onto a compact subset [45, Chapter 5.1], or truncation-based methods with expanding compact subsets, can ensure that the iteration remains bounded [3]. As mentioned in our discussion after Assumption 2.4, only recently has the stability of SGD under Markovian sampling been guaranteed for some class of objective functions [9], while the stability issue in the multi-agent scenario with Markovian sampling remains an open problem; we do not pursue removing this stability assumption in this paper.
2. Asymptotic analysis: The main results of our work, i.e., almost sure convergence and the central limit theorem in distributed optimization, are based on asymptotic analysis and might not accurately represent the finite-sample performance of each contributing agent in the system. The state-of-the-art finite-sample analysis in the literature focuses only on the worst-performing agent and cannot capture the nuanced differences between agents' sampling strategies, as explained in Footnote 2. Thus, a finite-sample error bound that distinguishes every agent's dynamics is still unknown and regarded as a future direction.
Checklist
1. Claims
- Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
- Answer: [Yes]
- Justification: The abstract and the introduction clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. The claims made match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
2. Limitations
- Question: Does the paper discuss the limitations of the work performed by the authors?
- Answer: [Yes]
- Justification: The 'Limitations' section is provided in Appendix H.
3. Theory Assumptions and Proofs
- Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?
- Answer: [Yes]
- Justification: All the theorems, formulas, and proofs in the paper have been numbered and cross-referenced. All assumptions have been clearly stated or referenced in the statement of any theorems. The complete and correct proofs have been provided in appendices.
4. Experimental Result Reproducibility
- Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
- Answer: [Yes]
- Justification: The detailed simulation setups have been provided in Appendix G.
5. Open access to data and code
- Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
- Answer: [No]
- Justification: We do not provide clean source codes, but detailed instructions to perform the experiment have been provided in Appendix G.
6. Experimental Setting/Details
- Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
- Answer: [Yes]
- Justification: The full details have been provided in Appendix G.
7. Experiment Statistical Significance
- Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
- Answer: [Yes]
8. Experiments Compute Resources
- Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
- Answer: [Yes]
- Justification: We specify the platform that runs the simulation in Appendix G.1.
9. Code Of Ethics
- Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?
- Answer: [Yes]
- Justification: We fully obey the NeurIPS Code of Ethics.
10. Broader Impacts
- Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
- Answer: [N/A]
- Justification: This work mainly aims to demonstrate how the improvement of partial agents' sampling strategies can accelerate the overall system's performance. It advocates for improving common wealth and has no negative social impact.
11. Safeguards
- Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
- Answer: [N/A]
- Justification: The paper poses no such risks.
12. Licenses for existing assets
- Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?
- Answer: [Yes]
- Justification: We have cited correctly the datasets ijcnn1 and CIFAR-10 used for experiments in our paper.
13. New Assets
- Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
- Answer: [N/A]
- Justification: The paper does not release new assets.
14. Crowdsourcing and Research with Human Subjects
- Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
- Answer: [N/A]
- Justification: The paper does not involve crowdsourcing nor research with human subjects.
15. Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects
- Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
- Answer: [N/A]
- Justification: The paper does not involve crowdsourcing nor research with human subjects.