The Sample-Communication Complexity Trade-off in Federated Q-Learning
We consider the problem of federated Q-learning, where agents aim to collaboratively learn the optimal Q-function of an unknown infinite-horizon Markov decision process with finite state and action spaces. We investigate the trade-off between sample and communication complexities for the widely used class of intermittent communication algorithms. We first establish a converse result showing that a federated Q-learning algorithm that offers any speedup with respect to the number of agents in the per-agent sample complexity needs to incur a communication cost of at least an order of $\frac{1}{1-\gamma}$ up to logarithmic factors, where $\gamma$ is the discount factor. We also propose a new algorithm, called Fed-DVR-Q, which is the first federated Q-learning algorithm to simultaneously achieve order-optimal sample and communication complexities. Together, these results provide a complete characterization of the sample-communication complexity trade-off in federated Q-learning.
1 Introduction
Reinforcement Learning (RL) (Sutton2018RLBook) refers to a paradigm of sequential decision making where an agent aims to learn an optimal policy, i.e., a policy that maximizes the long-term total reward, through repeated interactions with an unknown environment. RL finds applications across a diverse array of fields including, but not limited to, autonomous driving, games, recommendation systems, robotics and Internet of Things (IoT) (Kober2013RoboticsSurvey; Yurtsever2020SelfDriving; Huang2016Go; Lim2020IoT).
The primary hurdle in RL applications is often the high-dimensional nature of the decision space, which requires the learning agent to have access to an enormous amount of data in order to have any hope of learning the optimal policy. Moreover, the sequential collection of such an enormous amount of data through a single agent is extremely time-consuming and often infeasible in practice (mnih2016a3c). Consequently, practical implementations of RL involve deploying multiple agents to collect data in parallel. This decentralized approach to data collection has fueled the design and development of distributed or federated RL algorithms that can collaboratively learn the optimal policy without actually transferring the collected data to a centralized server, while achieving a linear speedup in terms of the number of agents. Such a federated approach to RL, which does not require the transfer of local data, is gaining interest due to lower bandwidth requirements and lower security and privacy risks. In this work, we focus on federated variants of the vastly popular Q-learning algorithm (Watkins1992QL), where the agents collaborate to directly learn the optimal Q-function without forming an estimate of the underlying unknown environment.
A particularly important aspect of designing federated RL algorithms, including federated Q-learning algorithms, is to address the natural tension between sample and communication complexities. At one end of the spectrum lies the naïve approach of running a centralized algorithm with an optimal sample complexity after transferring and combining all the collected data at a central server. Such an approach trivially achieves the optimal sample complexity while suffering from a very high and prohibitive communication complexity. On the other hand, several recently proposed algorithms (Khodadadian2022FederatedQL; Woo2023FedSynQ; Woo2024FedOffline; zheng2024federated) operate in more practical regimes, offering significantly lower communication complexities when compared to the naïve approach at the cost of sub-optimal sample complexities. These results suggest the existence of an underlying trade-off between sample and communication complexities of federated RL algorithms. The primary goal of this work is to better understand this trade-off in the context of federated Q-learning by investigating the following fundamental questions.
• Fundamental limit of communication: What is the minimum amount of communication required by a federated Q-learning algorithm to achieve any statistical benefit of collaboration?
• Optimal algorithm design: How does one design a federated Q-learning algorithm that simultaneously offers optimal sample and communication complexity guarantees, i.e., operates on the optimal frontier of the sample-communication complexity trade-off?
1.1 Main results
In this work, we consider a setup where $M$ distributed agents, each with access to a local generative model (Kearns1998FiniteSampleQL), collaborate to learn the optimal Q-function of an infinite-horizon Markov decision process (MDP) (Puterman2014MDPBook), which is defined over a finite state space $\mathcal{S}$ and a finite action space $\mathcal{A}$, and has a discount factor $\gamma \in (0, 1)$. To probe the communication complexity, we consider a common setup in federated learning called the intermittent communication setting (Woodworth2021IntermittentComm), where the agents intermittently share information among themselves with the help of a central server. We provide a complete characterization of the trade-off between sample and communication complexities under the aforementioned setting by providing answers to both questions. Summarized below, the main result of this work is twofold.
• Fundamental lower bounds on the communication complexity of federated Q-learning. We establish lower bounds on the communication complexity of federated Q-learning, both in terms of the number of communication rounds and the overall number of bits that need to be transmitted in order to achieve any speedup in the convergence rate with respect to the number of agents. Specifically, we show that in order for an intermittent communication algorithm to obtain any benefit of collaboration, i.e., any order of speedup with respect to the number of agents, the number of communication rounds must be at least of order $\frac{1}{1-\gamma}$ (up to logarithmic factors), and the number of bits sent by each agent to the server must satisfy a corresponding lower bound stated in terms of $N$, the number of samples taken by the algorithm for each state-action pair.
• Achieving the optimal sample-communication complexity trade-off of federated Q-learning. We propose a new federated Q-learning algorithm called Federated Doubly Variance Reduced Q-Learning (dubbed Fed-DVR-Q), which simultaneously achieves order-optimal sample complexity and communication complexity as dictated by the lower bound. We show that Fed-DVR-Q learns an $\varepsilon$-accurate estimate of the optimal Q-function in the $\ell_\infty$ sense with $\widetilde{O}\big(\tfrac{|\mathcal{S}||\mathcal{A}|}{M\varepsilon^2(1-\gamma)^3}\big)$ i.i.d. samples from the generative model at each agent, while incurring a total communication cost of $\widetilde{O}\big(\tfrac{|\mathcal{S}||\mathcal{A}|}{1-\gamma}\big)$ bits per agent across $\widetilde{O}\big(\tfrac{1}{1-\gamma}\big)$ rounds of communication. Thus, Fed-DVR-Q not only improves upon both the sample and communication complexities of existing algorithms, but is also the first algorithm to achieve order-optimal sample and communication complexities (see Table 1 for a comparison).
| Algorithm/Reference | Number of Agents | Sample Complexity | Communication Complexity |
| --- | --- | --- | --- |
| Q-learning (Li2023QLMinimax) | N/A | | |
| Variance Reduced Q-learning (Wainwright2019VarianceReduced) | N/A | | |
| Fed-SynQ (Woo2023FedSynQ) | | | |
| Fed-DVR-Q (This work) | | | |
| Lower bound ((Azar2013MinimaxPAC), This work) | | | |
1.2 Related work
Single-agent Q-learning.
Q-learning has been extensively studied in the single-agent setting in terms of its asymptotic convergence (Jaakkola1993StochasticIterativeDP; Tsitsiklis1994AsynchronousQL; Szepesvari1997AsymptoticQL; Borkar2000ODE-RL) and its finite-time sample complexity in synchronous (EvenDar2004LearningRates; Kearns1998FiniteSampleQL; Beck2012ConstantStepSize; sidford2018near; Wainwright2019ConeContractive; Wainwright2019VarianceReduced; Chen2020FiniteSampleAnalysis; Li2023QLMinimax) and asynchronous settings (Chen2021Lyapunov; Li2023QLMinimax; Li2021SampleComplexityAsynchronousQL; Qu2020FiniteAsychronousQL; xia2024instance) in terms of convergence in the sense. On the other hand, regret analysis of Q-learning has been carried out in both online settings (jin2018q; bai2019provably; menard2021ucb; li2021breaking) and offline settings (shi2022pessimistic; yan2022efficacy), to name a few.
Federated and distributed RL.
There has also been a considerable effort towards developing distributed and federated RL algorithms. The distributed variants of the classical temporal difference (TD) learning algorithm have been investigated in a series of studies (Chen2021OffPolicyTDC; Doan2019TD0LinearFunctionMARL; Doan2021DistributedTD0; Sun2020FiniteDecentralizedTDLinear; Wai2020ConvergenceConsensus; liu2023distributed; Wang2020DecentralizedTDTracking; Zeng2021FiniteDecentralizedSA). The impact of environmental heterogeneity in federated RL was studied in Wang2023FederatedTDHeterogeneity for TD learning, and in Jin2022FederatedRLHeterogeneity when the local environments are known. A distributed version of the actor-critic algorithm was studied by Shen2023AsynchronousAdvantageActorCritic where the authors established convergence of their algorithm and demonstrated a linear speedup in the number of agents in their sample complexity bound. Chen2022DecentralizedActorCritic proposed a new distributed actor-critic algorithm which improved the dependence of sample complexity on the error with a communication cost of . Chen2021CommEfficientPolicyGrad proposed a communication-efficient distributed policy gradient algorithm with convergence analysis and established a communication complexity of . Xie2023FedKL adopts a distributed policy optimization perspective, which is different from the Q-learning paradigm considered in this work. Moreover, the algorithm in Xie2023FedKL obtains a linear communication cost, which is worse than that obtained in our work. Similarly, Zhang2024FiniteTimeOnPolicy focuses on on-policy learning and incurs a communication cost that depends polynomially on the required error . Several additional studies (Yang2023FederatedNPG; Zeng2021DecentralizedNPG; Lan2024AsynchronousPG) have also developed and analyzed other distributed/federated variants of the classical natural policy gradient method (Kakade2001PolicyGradient). Assran2019GossipActorCritic; Espeholt2018Impala; Mnih2016AsynchronousDeepRL developed distributed algorithms to train deep RL networks more efficiently.
Federated Q-learning.
Federated Q-learning has been explored relatively recently with theoretical sample and communication complexity guarantees. Khodadadian2022FederatedQL proposed and analyzed a federated Q-learning algorithm in the asynchronous setting; however, its sample complexity guarantee exhibits pessimistic dependencies with respect to salient problem-dependent parameters. Woo2023FedSynQ provided improved analyses for federated Q-learning under both synchronous and asynchronous settings, and introduced importance averaging to tame the heterogeneity of local behavior policies in the asynchronous setting to further improve the sample complexity, showing that a collaborative coverage of the entire state-action space suffices for federated Q-learning. Moving to the offline setting, Woo2024FedOffline proposed a federated Q-learning algorithm for offline RL in the finite-horizon setting and established sample and communication complexities that only require a collaborative coverage of the state-action pairs visited by the optimal policy. zheng2024federated, on the other hand, established a linear speedup for federated Q-learning in the online setting from the regret minimization perspective.
Accuracy-communication trade-off in federated learning.
The trade-off between communication complexity and accuracy (or equivalently, sample complexity) has been studied in various federated and distributed learning problems, including stochastic approximation algorithms for convex optimization. Duchi2014DistributedEstimation; Braverman2016CommunicationLowerBounds established the celebrated inverse linear relationship between the error and the communication cost for the problem of distributed mean estimation. Similar trade-offs for distributed stochastic optimization, multi-armed bandits and linear bandits have been studied and established across numerous studies, e.g., (Woodworth2018GraphOracle; Woodworth2021IntermittentComm; Tsitsiklis1987; Shi2021FMAB; Salgia2023LinearBandits).
2 Background and Problem Formulation
In this section, we provide a brief background of MDPs, outline the performance measures for federated Q-learning algorithms and describe the class of intermittent communication algorithms considered in this work.
2.1 Markov decision processes
We focus on an infinite-horizon MDP, denoted by $\mathcal{M}$, over a state space $\mathcal{S}$ and an action space $\mathcal{A}$ with a discount factor $\gamma \in (0, 1)$. Both the state and action spaces are assumed to be finite sets. In an MDP, the state evolves dynamically under the influence of actions based on a probability transition kernel $P : \mathcal{S} \times \mathcal{A} \to \Delta(\mathcal{S})$. The entry $P(s' \mid s, a)$ denotes the probability of moving to state $s'$ when action $a$ is taken in state $s$. An MDP is also associated with a deterministic reward function $r : \mathcal{S} \times \mathcal{A} \to [0, 1]$, where $r(s, a)$ denotes the immediate reward obtained for taking action $a$ in state $s$. Thus, the transition kernel $P$ along with the reward function $r$ completely characterizes an MDP.
A policy $\pi : \mathcal{S} \to \Delta(\mathcal{A})$ is a rule for selecting actions across different states, where $\Delta(\mathcal{A})$ denotes the simplex over $\mathcal{A}$ and $\pi(a \mid s)$ denotes the probability of choosing action $a$ in state $s$. Each policy $\pi$ is associated with a state value function and a state-action value function, or the Q-function, denoted by $V^\pi$ and $Q^\pi$ respectively. Specifically, $V^\pi(s)$ and $Q^\pi(s, a)$ measure the expected discounted cumulative reward attained by policy $\pi$ starting from state $s$ and state-action pair $(s, a)$ respectively. Mathematically, $V^\pi$ and $Q^\pi$ are given as
$$V^\pi(s) = \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^t\, r(s_t, a_t) \,\Big|\, s_0 = s\Big], \qquad Q^\pi(s, a) = \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^t\, r(s_t, a_t) \,\Big|\, s_0 = s,\, a_0 = a\Big], \qquad (1)$$
where $a_t \sim \pi(\cdot \mid s_t)$ and $s_{t+1} \sim P(\cdot \mid s_t, a_t)$ for all $t \geq 0$, and the expectation is taken w.r.t. the randomness in the trajectory $\{(s_t, a_t)\}_{t \geq 0}$. Since the rewards lie in $[0, 1]$, it follows immediately that both the value function and the Q-function lie in the range $\big[0, \frac{1}{1-\gamma}\big]$.
An optimal policy $\pi^\star$ is a policy that maximizes the value function uniformly over all the states, and it has been shown that such an optimal policy always exists (Puterman2014MDPBook). The optimal value and Q-functions are those corresponding to an optimal policy $\pi^\star$, denoted as $V^\star$ and $Q^\star$ respectively. The optimal Q-function $Q^\star$ is also the unique fixed point of the Bellman optimality operator $\mathcal{T}$, given by
$$\mathcal{T}(Q)(s, a) := r(s, a) + \gamma\, \mathbb{E}_{s' \sim P(\cdot \mid s, a)}\Big[\max_{a' \in \mathcal{A}} Q(s', a')\Big]. \qquad (2)$$
The popular Q-learning algorithm (Watkins1992QL) aims to learn the optimal policy by first learning $Q^\star$ as the solution to the above fixed-point equation (via stochastic approximation when $\mathcal{T}$ is only accessed through samples), and then obtaining a deterministic optimal policy via greedy action selection, i.e., $\pi^\star(s) \in \arg\max_{a \in \mathcal{A}} Q^\star(s, a)$.
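To make the fixed-point view concrete, the following minimal tabular sketch (Python with NumPy; the names `bellman_optimality_operator`, `P`, and `r`, as well as the random MDP, are illustrative and not taken from the paper) repeatedly applies the exact Bellman optimality operator of Eqn. (2), which converges to $Q^\star$ since $\mathcal{T}$ is a $\gamma$-contraction in the $\ell_\infty$ norm, and then extracts a greedy policy.

```python
import numpy as np

def bellman_optimality_operator(Q, P, r, gamma):
    """Apply the exact Bellman optimality operator T of Eqn. (2) to a tabular Q-function.

    Q: (S, A) array; P: (S, A, S) transition kernel; r: (S, A) rewards; gamma: discount.
    """
    V = Q.max(axis=1)            # V(s') = max_{a'} Q(s', a')
    return r + gamma * (P @ V)   # (T Q)(s, a) = r(s, a) + gamma * E_{s' ~ P(.|s,a)}[V(s')]

def greedy_policy(Q):
    """Deterministic greedy policy extracted from a Q-function."""
    return Q.argmax(axis=1)

# Value iteration on Q: repeatedly applying T converges to Q* by contraction.
rng = np.random.default_rng(0)
S, A, gamma = 4, 2, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))   # random transition kernel, rows sum to 1
r = rng.uniform(size=(S, A))                 # rewards in [0, 1]
Q = np.zeros((S, A))
for _ in range(500):
    Q = bellman_optimality_operator(Q, P, r, gamma)
print(greedy_policy(Q))
```

Q-learning replaces the exact expectation inside $\mathcal{T}$ with samples, which is formalized through the sample Bellman operator introduced in the next subsection.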
2.2 Performance measures in federated Q-learning
We consider a federated learning setup consisting of $M$ agents, where all the agents face a common, unknown MDP, i.e., the transition kernel $P$ and the reward function $r$ are the same across agents. In addition, we consider the synchronous setting (Wainwright2019ConeContractive), where each agent has access to an independent generative model or simulator from which it can simultaneously draw independent samples from the unknown underlying distribution $P(\cdot \mid s, a)$ for all state-action pairs $(s, a)$ (Kearns1998FiniteSampleQL). Let $Z$ be a corresponding random vector whose $(s, a)$-th coordinate is drawn from the distribution $P(\cdot \mid s, a)$, independently of all other coordinates. We define the random operator $\widehat{\mathcal{T}}$ as
$$\widehat{\mathcal{T}}(Q)(s, a) := r(s, a) + \gamma \max_{a' \in \mathcal{A}} Q\big(Z(s, a), a'\big), \qquad (3)$$
where $Z(s, a) \sim P(\cdot \mid s, a)$. The operator $\widehat{\mathcal{T}}$ can be interpreted as the sample Bellman operator, where we have the relation $\mathbb{E}\big[\widehat{\mathcal{T}}(Q)\big] = \mathcal{T}(Q)$ for all Q-functions $Q$. For a given value of $\varepsilon > 0$, the objective of the agents is to collaboratively learn an $\varepsilon$-optimal estimate (in the $\ell_\infty$ sense) of the optimal Q-function $Q^\star$ of the unknown MDP.
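The unbiasedness relation $\mathbb{E}[\widehat{\mathcal{T}}(Q)] = \mathcal{T}(Q)$ can also be checked numerically; the sketch below (illustrative names and a random MDP, not taken from the paper) averages many independent draws of the sample Bellman operator of Eqn. (3) and compares the result with the exact operator.

```python
import numpy as np

rng = np.random.default_rng(1)
S, A, gamma = 4, 2, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))   # generative model: next-state distributions
r = rng.uniform(size=(S, A))
Q = rng.uniform(size=(S, A)) / (1 - gamma)   # an arbitrary Q-function in [0, 1/(1-gamma)]

def sample_bellman(Q, P, r, gamma, rng):
    """One draw of the empirical Bellman operator of Eqn. (3): for every (s, a),
    sample Z(s, a) ~ P(. | s, a) and bootstrap with max_{a'} Q(Z(s, a), a')."""
    S, A = r.shape
    out = np.empty((S, A))
    for s in range(S):
        for a in range(A):
            z = rng.choice(S, p=P[s, a])
            out[s, a] = r[s, a] + gamma * Q[z].max()
    return out

# Monte-Carlo check of unbiasedness: the average of many draws approaches the exact operator.
exact = r + gamma * (P @ Q.max(axis=1))
avg = np.mean([sample_bellman(Q, P, r, gamma, rng) for _ in range(5000)], axis=0)
print(np.abs(avg - exact).max())   # small, and shrinking as the number of draws grows
```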
We measure the performance of a federated Q-learning algorithm using two metrics: sample complexity and communication complexity. For a given MDP $\mathcal{M}$, let $\widehat{Q}_N$ denote the estimate of $Q^\star$, the optimal Q-function of the MDP $\mathcal{M}$, returned by an algorithm when given access to $N$ i.i.d. samples from the generative model for each $(s, a)$ pair at all the agents. The error rate of the algorithm, denoted by $\mathcal{E}_N$, is defined as
$$\mathcal{E}_N := \mathbb{E}\big[\,\|\widehat{Q}_N - Q^\star\|_\infty\,\big], \qquad (4)$$
where the expectation is taken over the samples and any randomness in the algorithm. Given a value of $\varepsilon > 0$, the sample complexity of the algorithm, denoted by $N(\varepsilon)$, is given by
$$N(\varepsilon) := \min\big\{ N \in \mathbb{N} \,:\, \mathcal{E}_N \le \varepsilon \big\}. \qquad (5)$$
Similarly, for any $\delta \in (0, 1)$, we can also define a high-probability version as follows:
$$N_\delta(\varepsilon) := \min\big\{ N \in \mathbb{N} \,:\, \mathbb{P}\big(\|\widehat{Q}_N - Q^\star\|_\infty \le \varepsilon\big) \ge 1 - \delta \big\}.$$
We measure the communication complexity of any federated learning algorithm both in terms of the frequency of information exchange and the total number of bits uploaded by the agents. For each agent $m$, let $C^{\mathrm{r}}_m$ and $C^{\mathrm{b}}_m$ respectively denote the number of times agent $m$ sends a message and the total number of bits uploaded by agent $m$ to the server when an algorithm is run with $N$ i.i.d. samples from the generative model for each $(s, a)$ pair at all the agents. The communication complexity of the algorithm, when measured in terms of the frequency of communication and the total number of bits exchanged, is given by
(6)
respectively. Similarly, for a given value of $\varepsilon$, we can also define the corresponding round and bit communication complexities when the algorithm is run to guarantee an error of at most $\varepsilon$, as well as their high-probability versions for any $\delta \in (0, 1)$.
2.3 Intermittent-communication algorithm protocols
We consider a popular class of federated learning algorithms with intermittent communication. The intermittent communication setting provides a natural framework to extend single-agent Q-learning algorithms to the distributed setting. As the name suggests, under this setting, the agents intermittently communicate with each other or a central server, sharing their updated beliefs about $Q^\star$. Between two communication rounds, each agent updates its belief about $Q^\star$ using stochastic approximation iterations based on the locally available data, similar to a single-agent setup. Such intermittent communication algorithms have been extensively studied and used to establish lower bounds on the communication complexity of distributed stochastic convex optimization (Woodworth2018GraphOracle; Woodworth2021IntermittentComm).
A generic federated Q-learning algorithm with intermittent communication is outlined in Algorithm 1. It is characterized by the following five parameters: (i) the total number of updates $T$; (ii) the number of communication rounds $R$; (iii) a step size schedule $\{\eta_t\}_{t=1}^{T}$; (iv) a communication schedule $\mathcal{C} \subseteq \{1, 2, \dots, T\}$ with $|\mathcal{C}| = R$; (v) a batch size $B$. During the $t$-th iteration, each agent $m$ computes $\widehat{\mathcal{T}}^m_t(Q^m_{t-1})$, a minibatch average of sample Bellman operators evaluated at the current estimate $Q^m_{t-1}$, using $B$ samples from the generative model for each $(s, a)$ pair, and obtains an intermediate local estimate $Q^m_{t-1/2}$ using the Q-learning update rule as follows:
$$Q^m_{t-1/2} = (1 - \eta_t)\, Q^m_{t-1} + \eta_t\, \widehat{\mathcal{T}}^m_t\big(Q^m_{t-1}\big). \qquad (7)$$
Here, $\eta_t$ is the step size chosen corresponding to the $t$-th time step. The intermediate estimates are averaged based on a communication schedule $\mathcal{C}$ consisting of $R$ rounds, i.e.,
$$Q^m_t = \begin{cases} \dfrac{1}{M}\sum_{m'=1}^{M} Q^{m'}_{t-1/2} & \text{if } t \in \mathcal{C}, \\[4pt] Q^m_{t-1/2} & \text{otherwise}. \end{cases} \qquad (8)$$
In the above equation, the averaging step can also be replaced with any distributed mean estimation routine that includes compression to control the bit level costs. Without loss of generality, we assume that $T \in \mathcal{C}$, i.e., the last iterates are always averaged. It is straightforward to note that the number of samples taken per agent for each state-action pair by an intermittent communication algorithm is $N = BT$, and that its communication complexity is $R$ rounds.
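For concreteness, the following Python sketch mirrors the structure of Algorithm 1 as described above: minibatched local Q-learning updates as in Eqn. (7), interleaved with exact server-side averaging at the instants of the communication schedule as in Eqn. (8). All function and parameter names are illustrative, and compression of the exchanged iterates is omitted.

```python
import numpy as np

def federated_q_learning(P, r, gamma, num_agents, T, batch_size, comm_schedule, step_sizes, rng):
    """Generic intermittent-communication federated Q-learning (sketch of Algorithm 1).

    Each agent runs minibatched local Q-learning updates; at iterations contained in
    `comm_schedule`, the server averages the local iterates across agents.
    """
    S, A = r.shape
    Q = np.zeros((num_agents, S, A))            # local iterates Q_t^m
    for t in range(1, T + 1):
        eta = step_sizes[t - 1]
        for m in range(num_agents):
            # Minibatched empirical Bellman operator at the current local iterate.
            T_hat = np.zeros((S, A))
            for _ in range(batch_size):
                for s in range(S):
                    for a in range(A):
                        z = rng.choice(S, p=P[s, a])
                        T_hat[s, a] += r[s, a] + gamma * Q[m, z].max()
            T_hat /= batch_size
            Q[m] = (1 - eta) * Q[m] + eta * T_hat          # local update, Eqn. (7)
        if t in comm_schedule:                             # averaging round, Eqn. (8)
            Q[:] = Q.mean(axis=0)
    return Q[0]

# Usage: 5 agents, 50 iterations, averaging every 10 iterations, constant step size.
rng = np.random.default_rng(0)
S, A, gamma = 4, 2, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))
r = rng.uniform(size=(S, A))
Q_hat = federated_q_learning(P, r, gamma, num_agents=5, T=50, batch_size=4,
                             comm_schedule=set(range(10, 51, 10)),
                             step_sizes=[0.5] * 50, rng=rng)
```

With `comm_schedule` containing the final iteration, all agents hold the same estimate at the end, matching the convention adopted above.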
3 Communication Complexity Lower Bound
In this section, we investigate the first of the two questions regarding the lower bound on communication complexity. The following theorem establishes a lower bound on the communication complexity of a federated Q-learning algorithm with intermittent communication as described in Algorithm 1.
Theorem 1.
Assume that and the state and action spaces satisfy and . Let be a federated Q-learning algorithm with intermittent communication (as described in Algorithm 1) that is run for steps with a step size schedule of either or for all . If
for some universal constants , then for all choices of communication schedule, batch size , and , the error of satisfies
for all and . Here is a constant that depends only on .
The above theorem states that for any federated Q-learning algorithm with intermittent communication to obtain any benefit of collaboration, i.e., for the error rate to decrease w.r.t. the number of agents, it must use at least order $\frac{1}{1-\gamma}$ rounds of communication, with a corresponding lower bound on the number of bits transmitted per agent; both quantities scale with the effective horizon $\frac{1}{1-\gamma}$ of the MDP. The above lower bound on the communication complexity also immediately applies to federated Q-learning algorithms that offer order-optimal sample complexity, and thereby a linear speedup with respect to the number of agents. Therefore, this characterizes the converse relation for the sample-communication trade-off in federated Q-learning.
The above lower bound on the communication complexity of federated Q-learning is a consequence of the bias-variance trade-off that governs the convergence of the Q-learning algorithm. While a careful choice of step sizes alone is sufficient to balance this trade-off in the centralized setting, the choice of communication schedule also plays an important role in balancing this trade-off in the federated setting. The local steps between two communication rounds induce a positive estimation bias that depends on the standard deviation of the iterates, a well-documented "over-estimation" issue in Q-learning (Hasselt2010DoubleQL). Since such a bias is driven by local updates, it does not reflect any benefit of collaboration. During a communication round, the averaging of iterates across agents gives the algorithm an opportunity to counter this bias by reducing the effective variance of the updates. In our analysis, we show that if the communication is infrequent, the local bias becomes the dominant term and the averaging of iterates is insufficient to counter the impact of the positive bias induced by the local steps. As a result, we do not observe any statistical gains when the communication is infrequent. Our argument is inspired by the analysis of Q-learning in Li2023QLMinimax and is based on analyzing the convergence of an intermittent communication algorithm on a specifically chosen "hard" MDP instance. The detailed proof is deferred to Appendix A.
Remark 1 (Communication complexity of policy evaluation).
Several recent studies (liu2023distributed; tian2024one) established that a single round of communication is sufficient to achieve a linear speedup of TD learning for policy evaluation; this does not contradict our results, which focus on Q-learning for policy learning. The latter is more involved due to the nonlinearity of the Bellman optimality operator. Specifically, if the operator whose fixed point is to be found is linear in the decision variable (e.g., the value function in TD learning), then the fixed-point update only induces a variance term corresponding to the noise. However, if the operator is non-linear, then in addition to the variance term, we also obtain a bias term in the fixed-point update. While the variance term can be controlled with one-shot averaging, more frequent communication is necessary to ensure that the bias term remains small.
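The distinction drawn in this remark can be illustrated with a toy Monte-Carlo experiment (purely illustrative, not part of the paper's analysis): averaging noisy unbiased estimates of a linear functional removes noise without introducing bias, whereas taking a maximum over noisy estimates, as in the Q-learning target, is biased upward, and averaging the local maxima does not remove that bias.

```python
import numpy as np

rng = np.random.default_rng(0)
true_q = np.array([1.0, 1.0])          # two actions with identical true values
sigma, num_agents, trials = 0.5, 10, 100_000

# Linear functional (a fixed action, as in policy evaluation): noise averages out.
lin_single = rng.normal(true_q[0], sigma, size=trials)
lin_avg = rng.normal(true_q[0], sigma / np.sqrt(num_agents), size=trials)
print(lin_single.mean(), lin_avg.mean())          # both ~1.0: no bias, only less variance

# Nonlinear max (as in Q-learning targets): each noisy estimate is biased upward,
# and averaging the *local maxima* across agents does not remove the bias.
noisy = rng.normal(true_q, sigma, size=(trials, num_agents, 2))
local_max = noisy.max(axis=2)                     # each agent's max_a Q(s, a) estimate
print(local_max.mean())                           # > 1.0: over-estimation bias
print(local_max.mean(axis=1).mean())              # averaging the maxima keeps the bias
# Averaging the Q-estimates *before* taking the max reduces the bias instead:
print(noisy.mean(axis=1).max(axis=1).mean())      # much closer to 1.0
```

The last line shows that averaging the estimates before applying the max, which is what a communication round enables, is what actually reduces the over-estimation bias.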
Remark 2 (Extension to asynchronous Q-learning).
We would like to point out that our lower bound extends to the asynchronous setting (Li2023QLMinimax) as the assumption of i.i.d. noise corresponding to a generative model is a special case of Markovian noise observed in the asynchronous setting.
4 The Fed-DVR-Q Algorithm
Having characterized the lower bound on the communication complexity of federated Q-learning, we explore our second question of interest — designing a federated Q-learning algorithm that achieves this lower bound while simultaneously offering an optimal order of sample complexity (up to logarithmic factors).
We propose a new federated Q-learning algorithm, Fed-DVR-Q, that achieves not only order-optimal communication complexity, in terms of both the number of communication rounds and the number of bits transmitted, but also order-optimal sample complexity (up to logarithmic factors), thereby providing a tight characterization of the achievability frontier that matches the converse result derived in the previous section.
4.1 Algorithm description
Algorithm 2: Fed-DVR-Q
1: Input: error bound ε, failure probability δ
2: Initialize the estimate Q(0)
3: // Set parameters as described in Section 4.1.3
4: for k = 1, 2, …, K do
5:     Q(k) ← RefineEstimate(Q(k−1); parameters of epoch k)
6: end for
7: return Q(K)
Fed-DVR-Q proceeds in epochs. During epoch $k$, the agents collaboratively update $Q^{(k-1)}$, the estimate of $Q^\star$ obtained at the end of the previous epoch, to a new estimate $Q^{(k)}$, with the aid of the sub-routine called RefineEstimate. The sub-routine RefineEstimate is designed to ensure that the suboptimality gap $\|Q^{(k)} - Q^\star\|_\infty$ reduces by a constant factor at the end of every epoch. Thus, at the end of $K$ epochs, Fed-DVR-Q obtains an $\varepsilon$-optimal estimate of $Q^\star$, which is then set to be the output of the algorithm. Please refer to Algorithm 2 for the pseudocode.
4.1.1 The RefineEstimate sub-routine
RefineEstimate, starting from $\overline{Q}$, an initial estimate of $Q^\star$, uses variance-reduced Q-learning updates (Wainwright2019VarianceReduced; sidford2018near) to obtain an improved estimate of $Q^\star$. It is characterized by four parameters, namely the initial estimate $\overline{Q}$, the number of local iterations $I$, the re-centering sample size $N_{\mathrm{rc}}$ and the batch size $B$, which can be appropriately tuned to control the quality of the returned estimate. Additionally, it also takes as input two parameters, $U$ and $d$, required by the compressor $\mathcal{D}$, whose description is deferred to Section 4.1.2.
The first step in RefineEstimate is to collaboratively approximate $\mathcal{T}(\overline{Q})$ for the variance-reduced updates. To this effect, each agent $m$ builds an approximation of $\mathcal{T}(\overline{Q})$ as follows:
$$\widetilde{\mathcal{T}}^m(\overline{Q}) = \frac{1}{N_{\mathrm{rc}}}\sum_{i=1}^{N_{\mathrm{rc}}} \widehat{\mathcal{T}}^m_i\big(\overline{Q}\big), \qquad (9)$$
where $\widehat{\mathcal{T}}^m_1, \dots, \widehat{\mathcal{T}}^m_{N_{\mathrm{rc}}}$ are i.i.d. copies of the sample Bellman operator obtained from independent draws of the generative model. Each agent then sends a compressed version, $\mathcal{D}\big(\widetilde{\mathcal{T}}^m(\overline{Q}) - \overline{Q}\big)$, of the difference $\widetilde{\mathcal{T}}^m(\overline{Q}) - \overline{Q}$ to the server, which collects all the estimates from the agents and constructs the estimate
$$\widetilde{\mathcal{T}}\big(\overline{Q}\big) = \overline{Q} + \frac{1}{M}\sum_{m=1}^{M} \mathcal{D}\big(\widetilde{\mathcal{T}}^m(\overline{Q}) - \overline{Q}\big), \qquad (10)$$
and sends it back to the agents for the variance-reduced updates. Equipped with the estimate $\widetilde{\mathcal{T}}(\overline{Q})$, RefineEstimate constructs a sequence of iterates $\{Q_\ell\}_{\ell = 1}^{I}$ using the following iterative update scheme, initialized with $Q_0 = \overline{Q}$. During the $\ell$-th iteration, each agent $m$ carries out the following update:
$$Q^m_{\ell - 1/2} = (1 - \eta)\, Q_{\ell - 1} + \eta\Big(\widehat{\mathcal{T}}^m_\ell\big(Q_{\ell-1}\big) - \widehat{\mathcal{T}}^m_\ell\big(\overline{Q}\big) + \widetilde{\mathcal{T}}\big(\overline{Q}\big)\Big), \qquad (11)$$
In the above equation, $\eta$ is the step size and $\widehat{\mathcal{T}}^m_\ell$ is the minibatch-averaged sample Bellman operator built from a minibatch of $B$ i.i.d. samples drawn according to the generative model, independently at each agent for all iterations $\ell$. Each agent then sends a compressed version of the update, i.e., $\mathcal{D}\big(Q^m_{\ell-1/2} - Q_{\ell-1}\big)$, to the server, which uses them to compute the next iterate
$$Q_\ell = Q_{\ell - 1} + \frac{1}{M}\sum_{m=1}^{M} \mathcal{D}\big(Q^m_{\ell - 1/2} - Q_{\ell - 1}\big), \qquad (12)$$
and broadcasts it to the agents. After $I$ such updates, the obtained iterate $Q_I$ is returned by the sub-routine. A pseudocode of RefineEstimate is given in Algorithm 3.
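The following Python sketch summarizes one epoch of a RefineEstimate-style routine under the description above. It is a schematic rendering under simplifying assumptions: compression is omitted, parameter names such as `recenter_n` and `num_iters` are illustrative, and the specific parameter choices of Section 4.1.3 are not encoded.

```python
import numpy as np

def minibatch_bellman_pair(Q, Q_ref, P, r, gamma, batch, rng):
    """Minibatched empirical Bellman operators at Q and Q_ref that reuse the *same*
    generative-model samples, so that the difference T(Q) - T(Q_ref) has small variance."""
    S, A = r.shape
    T_Q, T_Qref = np.zeros((S, A)), np.zeros((S, A))
    for _ in range(batch):
        for s in range(S):
            for a in range(A):
                z = rng.choice(S, p=P[s, a])                 # Z(s, a) ~ P(. | s, a)
                T_Q[s, a] += r[s, a] + gamma * Q[z].max()
                T_Qref[s, a] += r[s, a] + gamma * Q_ref[z].max()
    return T_Q / batch, T_Qref / batch

def refine_estimate(Q_bar, P, r, gamma, num_agents, num_iters, recenter_n, batch, eta, rng):
    """One epoch of a RefineEstimate-style routine (compression omitted)."""
    # Step 1: collaborative re-centering estimate of T(Q_bar), cf. Eqns. (9)-(10).
    T_tilde = np.mean(
        [minibatch_bellman_pair(Q_bar, Q_bar, P, r, gamma, recenter_n, rng)[0]
         for _ in range(num_agents)], axis=0)

    # Step 2: variance-reduced iterations with server-side averaging, cf. Eqns. (11)-(12).
    Q = Q_bar.copy()
    for _ in range(num_iters):
        local = []
        for _ in range(num_agents):
            T_Q, T_Qbar = minibatch_bellman_pair(Q, Q_bar, P, r, gamma, batch, rng)
            local.append((1 - eta) * Q + eta * (T_Q - T_Qbar + T_tilde))
        Q = np.mean(local, axis=0)     # the server averages the (uncompressed) local updates
    return Q
```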
4.1.2 The compression operator
The compressor, $\mathcal{D}$, used in the proposed algorithm Fed-DVR-Q is based on the popular stochastic quantization scheme. In addition to the input vector $x$ to be quantized, the compressor or quantizer takes two input parameters $U$ and $d$: (i) $U$ corresponds to an upper bound on the $\ell_\infty$ norm of $x$, i.e., $\|x\|_\infty \le U$; (ii) $d$ corresponds to the resolution of the compressor, i.e., the number of bits used by the compressor to represent each coordinate of the output vector.
The compressor first splits the interval $[-U, U]$ into $2^d - 1$ intervals of equal length, where $-U = U_0 < U_1 < \dots < U_{2^d - 1} = U$ correspond to the end points of the intervals. Each coordinate of $x$ is then separately quantized as follows. The value of the $j$-th coordinate, $x_j \in [U_i, U_{i+1}]$, is set to $U_{i+1}$ with probability $\frac{x_j - U_i}{U_{i+1} - U_i}$ and to $U_i$ with the remaining probability. It is straightforward to note that each coordinate of the output can be represented using $d$ bits.
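A minimal sketch of such a stochastic quantizer is given below (Python; the argument names `bound` and `bits` stand for the two compressor parameters described above and are illustrative). Rounding each coordinate up or down with probability proportional to its offset within the enclosing sub-interval makes the quantizer unbiased, which is the property exploited in the analysis.

```python
import numpy as np

def stochastic_quantize(x, bound, bits, rng):
    """Stochastic quantizer: maps each coordinate of x (with |x_i| <= bound) to one of
    2**bits equally spaced levels in [-bound, bound], unbiasedly, using `bits` bits/coordinate."""
    levels = 2 ** bits - 1                               # number of equal-length sub-intervals
    step = 2 * bound / levels
    scaled = (np.asarray(x, dtype=float) + bound) / step # position measured in sub-intervals
    lower = np.floor(scaled)
    prob_up = scaled - lower                             # round up with probability = offset
    idx = lower + (rng.random(scaled.shape) < prob_up)
    return idx * step - bound                            # map the level index back to [-U, U]

# Unbiasedness check: the quantizer's expectation matches the input vector.
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=1000)
q = np.mean([stochastic_quantize(x, bound=2.0, bits=3, rng=rng) for _ in range(2000)], axis=0)
print(np.abs(q - x).max())    # small Monte-Carlo error
```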
4.1.3 Setting the parameters
The desired convergence of the iterates is obtained by carefully choosing the parameters of the sub-routine RefineEstimate and the compression operator $\mathcal{D}$. Given a target accuracy $\varepsilon$ and a failure probability $\delta$, the total number of epochs $K$ is set to
(13)
For all epochs $k$, we set the number of iterations $I$, the batch size $B$, and the number of bits $d$ of the compressor to be
(14a)
(14b)
(14c)
respectively. The re-centering sample sizes and bounds of the compressor are set to be the following functions of epoch index respectively:
(15)
The piecewise definition of the re-centering sample size is crucial to obtain the optimal dependence with respect to the effective horizon $\frac{1}{1-\gamma}$, similar to the two-step procedure outlined in Wainwright2019VarianceReduced.
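As a rough illustration of how the epoch count arises, suppose, as an assumption for this sketch, that each call to RefineEstimate halves the $\ell_\infty$ error, and recall that the initial error is at most $\frac{1}{1-\gamma}$. Then it suffices to take
$$\|Q^{(K)} - Q^\star\|_\infty \;\le\; 2^{-K}\,\|Q^{(0)} - Q^\star\|_\infty \;\le\; \frac{2^{-K}}{1-\gamma} \;\le\; \varepsilon \quad\Longleftarrow\quad K \;\ge\; \log_2\!\Big(\frac{1}{\varepsilon\,(1-\gamma)}\Big),$$
which explains the logarithmic dependence of the number of epochs on $\frac{1}{\varepsilon(1-\gamma)}$, up to the exact constants chosen in (13).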
4.2 Performance guarantees
The following theorem characterizes the sample and communication complexities of Fed-DVR-Q .
Theorem 2.
Consider any $\varepsilon > 0$ and $\delta \in (0, 1)$. The sample and communication complexities of the Fed-DVR-Q algorithm, when run with the choice of parameters described in Section 4.1.3 and a suitably chosen learning rate, satisfy the following relations for some universal constant $C > 0$:
A proof of Theorem 2 can be found in the appendix. A few implications of the theorem are in order.
Optimal sample-communication complexity trade-off.
As shown by the above theorem, Fed-DVR-Q offers a linear speedup in the sample complexity with respect to the number of agents while simultaneously achieving the same order of communication complexity as dictated by the lower bound derived in Theorem 1, both in terms of frequency and bit level complexity. Moreover, Fed-DVR-Q is the first federated Q-learning algorithm that achieves a sample complexity with optimal dependence on all the salient parameters, i.e., $|\mathcal{S}|$, $|\mathcal{A}|$, $\frac{1}{1-\gamma}$ and $\varepsilon$, in addition to a linear speedup w.r.t. the number of agents, thereby bridging the existing gap between upper and lower bounds on the sample complexity for federated Q-learning. Thus, Theorems 1 and 2 together provide a characterization of the optimal operating point of the sample-communication complexity trade-off in federated Q-learning.
Role of minibatching.
The commonly adopted approach in intermittent communication algorithms is to use a local update scheme that takes multiple small, noisy updates between communication rounds, as evident from the algorithm design in Khodadadian2022FederatedQL; Woo2023FedSynQ and even numerous FL algorithms for stochastic optimization (Mcmahan2017FedAvg; Haddadpour2019LUPASGD; Khaled2020LocalSGDHeterogenous). In Fed-DVR-Q, we replace the local update scheme of taking multiple small, noisy updates with a single, large update with smaller variance, obtained by averaging the noisy updates over a minibatch of samples. The use of updates with smaller variance in variance-reduced Q-learning gives the algorithm its name. While both approaches result in similar sample complexity guarantees, the local update scheme requires more frequent averaging across clients to ensure that the bias of the estimate, also commonly referred to as "client drift", is not too large. On the other hand, the minibatching approach does not encounter the problem of bias accumulation from local updates and hence can afford more infrequent averaging, allowing Fed-DVR-Q to achieve an optimal order of communication complexity.
Compression.
Fed-DVR-Q is the first federated Q-learning algorithm to analyze and establish communication complexity at the bit level. While all existing studies on federated Q-learning focus only on the frequency of communication and assume transmission of real numbers with infinite bit precision, our analysis provides a more holistic viewpoint of communication complexity at the bit level, which is of great practical significance. While some other recent studies (Wang2023FederatedTDHeterogeneity) have also considered quantization in federated RL, their objective is to understand the impact of message size on convergence with no constraint on the frequency of communication, unlike the holistic viewpoint adopted in this work.
5 Conclusion
We presented a complete characterization of the sample-communication trade-off for federated Q-learning algorithms with intermittent communication. We showed that no federated Q-learning algorithm with intermittent communication can achieve any speedup with respect to the number of agents if its number of communication rounds is sublinear in $\frac{1}{1-\gamma}$. We also proposed a new federated Q-learning algorithm called Fed-DVR-Q that uses variance reduction along with minibatching to achieve order-optimal sample and communication complexities. In particular, we showed that Fed-DVR-Q has a per-agent sample complexity of $\widetilde{O}\big(\tfrac{|\mathcal{S}||\mathcal{A}|}{M\varepsilon^2(1-\gamma)^3}\big)$, which is order-optimal in all salient problem parameters, and a communication complexity of $\widetilde{O}\big(\tfrac{1}{1-\gamma}\big)$ rounds and $\widetilde{O}\big(\tfrac{|\mathcal{S}||\mathcal{A}|}{1-\gamma}\big)$ bits per agent.
The results in this work raise several interesting questions that are worth exploring. While we focus on the tabular setting in this work, it is of great interest to investigate the trade-off in other settings where function approximation is used to model the value and Q-functions. Moreover, it is interesting to explore the trade-off in the finite-horizon setting, where there is no discount factor. Furthermore, it is also worthwhile to explore whether the communication complexity can be further reduced by going beyond the class of intermittent communication algorithms.
Appendix A Proof of Theorem 1
In this section, we prove the main result of the paper, the lower bound on the communication complexity of federated Q-learning algorithms. At a high level, the proof consists of the following three steps.
Introducing the “hard” MDP instance.
The proof builds upon analyzing the behavior of a generic algorithm outlined in Algorithm 1 over a particular instance of MDP. The particular choice of MDP is inspired by, and borrowed from, other lower bound proofs in the single-agent setting (Li2023QLMinimax) and helps highlight core issues that lie at the heart of the sample-communication complexity trade-off. Following Li2023QLMinimax, the construction is first over a small state-action space that allows us to focus on a simpler problem before generalizing it to larger state-action spaces.
Establishing the performance of intermittent communication algorithms.
In the second step, we analyze the error of the iterates generated by an intermittent communication algorithm . The analysis is inspired by the single-agent analysis in Li2023QLMinimax, which highlights the underlying bias-variance trade-off. Through careful analysis of the algorithm dynamics in the federated setting, we uncover the impact of communication on the bias-variance trade-off and the resulting error of the iterates to obtain the lower bound on the communication complexity.
Generalization to larger MDPs.
As the last step, we generalize our construction of the “hard” instance to more general state-action space and extend our insights to obtain the statement of the theorem.
A.1 Introducing the “hard” instance
We first introduce an MDP instance that we will use throughout the proof to establish the result. Note that this MDP is identical to the one considered in Li2023QLMinimax to establish the lower bounds on the performance of the single-agent Q-learning algorithm. It consists of four states . Let denote the action set associated with the state . The probability transition kernel and the reward function of are given as follows:
(16a)
(16b)
(16c)
(16d)
(16e)
where the parameter . We have the following results about the optimal value and Q-functions of this hard MDP instance.
Lemma 1 ((Li2023QLMinimax, Lemma 3)).
Consider the MDP constructed in Eqn. (16). We have,
Throughout the next section of the proof, we focus on this MDP with four states and two actions. In Appendix A.4, we generalize the proof to larger state-action spaces.
A.2 Notation and preliminary results
For convenience, we first define some notation that will be used throughout the proof.
Useful relations of the learning rates.
We consider two kinds of step size sequences that are commonly used in Q-learning: the constant step size schedule, i.e., $\eta_t = \eta$ for all $t$, and the rescaled linear step size schedule, i.e., $\eta_t = \frac{1}{1 + c_\eta (1-\gamma) t}$, where $c_\eta$ is a universal constant that is independent of the problem parameters.
We define the following quantities:
(17) |
where we take and use the convention throughout the proof that if a product operation does not have a valid index, we take the value of that product to be $1$. For any integer , we have the following relation, which will be proved at the end of this subsection for completeness:
(18) |
Similarly, we also define,
(19) |
which satisfies the relation
(20) |
for any integer . The claim follows immediately by plugging in (18). Note that for constant step size, the sequence is clearly increasing. For the rescaled linear step size, we have,
(21) |
whenever . Thus, is an increasing sequence as long as . Similarly, is also clearly increasing for the constant step size schedule. For the rescaled linear step size schedule, we have,
whenever . The last bound follows from Eqn. (21).
Proof of (18).
We can show the claim using backward induction. For the base case, note that,
as required. Assume (18) is true for some . We have,
thus completing the induction step.
Sample transition matrix.
Recall is a random vector whose -th coordinate is drawn from the distribution . We use to denote the sample transition at time and agent obtained by averaging i.i.d. samples from the generative model. Specifically let denote a collection of i.i.d. copies of collected at time at agent . Then, for all ,
(22) |
where for .
Preliminary relations of the iterates.
We state some preliminary relations regarding the evolution of the Q-function and the value function across different agents that will be helpful for the analysis later.
We begin with the state , where we have for all agents and . This follows almost immediately from the fact that state is an absorbing state with zero reward. Note that holds for all clients . Assuming that for all clients for some time instant , by induction, we have,
Consequently, and , for all agents , irrespective of whether there is averaging.
For state , the iterates satisfy the following relation:
where the second step follows by noting . Once again, one can note that the averaging step does not affect the update rule, implying that the following holds for all and :
(23) |
where the last step follows from Eqn. (18) with .
Similarly, for state and , we have,
(24)
(25)
(26)
Since the averaging makes a difference in the update rule, we further analyze the update as required in later proofs.
A.3 Main analysis
We first focus on establishing a bound on the number of communication rounds, i.e., (where we drop the dependency on other parameters for notational simplicity), and then use this lower bound to establish the bound on the bit level communication complexity .
To establish the lower bound on for any intermittent communication algorithm , we analyze the convergence behavior of on the MDP . We assume that the averaging step in line of Algorithm 1 is carried out exactly. Since the use of compression only makes the problem harder, it is sufficient for us to consider the case where there is no loss of information in the averaging step for establishing a lower bound. Lastly, throughout the proof, without loss of generality we assume that
(27) |
otherwise, the lower bound in Theorem 1 reduces to the trivial lower bound.
We divide the proof into following three parts based on the choice of learning rates and batch sizes:
1. Small learning rates: For constant learning rates, and for rescaled linear learning rates, the constant satisfies .
2. Large learning rates with small : For constant learning rates, and for rescaled linear learning rates, the constant satisfies (cf. (27)). Additionally, the ratio satisfies .
3. Large learning rates with large : We have the same condition on the learning rates as above. However, in this case the ratio satisfies .
We consider each of the cases separately in the following three subsections.
A.3.1 Small learning rates
In this subsection, we prove the lower bound for small learning rates, which follows from similar arguments in Li2023QLMinimax.
For this case, we focus on the dynamics of state . We claim that the same relation established in Li2023QLMinimax continues to hold, which will be established momentarily:
(28) |
Consequently, for all , we have
(29) |
To obtain a lower bound on , we need a lower bound on , for which, from (Li2023QLMinimax, Eqn. (120)), we have
when for both choices of learning rates. On plugging this bound in (29), we obtain,
(30) |
holds for all , and . Thus, it can be noted that the error rate remains bounded away from zero by a constant, irrespective of the number of agents and the number of communication rounds. Thus, even with , we will not observe any collaborative gain if the step size is too small.
Proof of (28).
Recall that from (26), we have,
Here, as the second state has only a single action.
•
When is not an averaging instant, we have,
(31) On taking expectation on both sides of the equation, we obtain,
(32) In the second step, we used the fact that is independent of .
•
Similarly, if is an averaging instant, we have,
(33) Once again, upon taking expectation we obtain,
(34)
Eqns. (32) and (34) together imply that for all ,
(35) |
On unrolling the above recursion with for all , we obtain the desired claim (28).
A.3.2 Large learning rates with small
In this subsection, we prove the lower bound for the case of large learning rates when the ratio is small. For the analysis in this part, we focus on the dynamics of state . Unless otherwise specified, throughout the section we implicitly assume that the state is .
We further define a parameter that will play a key role in the analysis:
(36) |
It can be noted that for constant step size sequence and for rescaled linear stepsize .
Step 1: introducing an auxiliary sequence.
We define an auxiliary sequence for and all to aid our analysis, where we drop the dependency on the state for simplicity. The evolution of the sequence is defined in Algorithm 4, where . In other words, the iterates evolve exactly as the iterates of Algorithm 1, except for the fact that the sequence is initialized at the optimal Q-function of the MDP. We would like to point out that we assume that the underlying stochasticity is also identical, in the sense that the evolution of both and is governed by the same matrices. The following lemma controls the error between the iterates and , allowing us to focus only on .
Lemma 2.
The following relation holds for all agents , all and :
By Lemma 2, bounding the error of the sequence allows us to obtain a bound on the error of . To that effect, we define the following terms for any and all :
In addition, we use to denote the error of the averaged iterate (we use this different notation in the appendix, as opposed to the half-time indices used in the main text, to improve the readability of the proof), and similarly,
(37) |
We first derive a basic recursion regarding . From the iterative update rule in Algorithm 4, we have,
Here in the last line, we used the following relation:
as .
Recursively unrolling the above expression, and using the expression (19), we obtain the following relation: for any when there is no averaging during the interval
(38) |
For any with , we define the notation
(39)
(40)
(41)
Note that by definition, for and all , and . Plugging them into the previous expression leads to the simplified expression
We specifically choose and to be the consecutive averaging instants to analyze the behaviour of across two averaging instants. Consequently, we can rewrite the above equation as
(42) |
Furthermore, after averaging, we obtain,
(43) |
Step 2: deriving a recursive bound for .
Bounding (42), we obtain,
(44a)
(44b)
where in the first step we used the fact that
(45) |
On taking expectation, we obtain,
(46a)
(46b)
Similarly, using (43) and (45) we can write,
(47a)
(47b)
On combining (46b) and (47b), we obtain,
(48) |
In order to simplify (48), we make use of the following lemmas.
Lemma 3.
Let be two consecutive averaging instants. Then for all ,
where .
Lemma 4.
For all consecutive averaging instants satisfying and all , we have,
where .
Lemma 5.
For all , we have
Proof of (50).
We have,
(51) |
To show (50), it is sufficient to show that for . Thus, for we have,
(52) |
Since , . For , the function , proving the claim.
Step 3: lower bounding .
We are now interested in evaluating based on the recursion (49). To this effect, we introduce some notation to simplify the presentation. Let
(53) |
For , we define the following terms:
With these notations in place, the recursion in (49) can be rewritten as
(54) |
for all . We claim that satisfies the following relation for all (whose proof is deferred to the end of this step):
(55) |
where we recall that if there is no valid index for a product, its value is taken to be $1$.
Invoking (55) for and using the relation , we obtain,
(56) |
where we used the fact and that . Consider the expression
(57) |
Consequently,
(58) |
Note that satisfies the following bound
(59) |
We split the remainder of the analysis based on the step size schedule.
•
For the constant step size schedule, i.e., , we have, , with and (as all agents start at the same point). If , then, (57), (58) and (59) yield the following relations:
On plugging the above relations into (56), we obtain
(60) where recall that . Consider the function . We claim that for and all ,
(61) The proof of the above claim is deferred to the end of the section. In light of the above claim, we have,
(62) where we used the fact that , is an increasing function and the relation .
•
Next, we consider the rescaled linear step size schedule, where (cf. (36)). To begin, we assume . It is straightforward to note that
If then, (57), (58) and (59) yield the following relations:
For , we have,
where we used in the second step. Similarly, for , we have,
On plugging the above relations into (56), we obtain
(63) where we again used the facts that , , is an increasing function and the relation .
•
Last but not least, let us consider the rescaled linear step size schedule case when . The condition implies that the time between the communication rounds and is at least . Thus, (49) yields that
(64) Using the above relation along with (55), we can conclude that
(65) In the above relation, we used the trivial bounds and a crude bound on the term corresponding to , similar to (56). Let us first consider the case of . We have,
Similarly, for , we have,
The above relations imply that for some constant , which only depends on . On plugging this into (65), we obtain a relation that is identical to that in (56) up to leading constants. Thus, by using a similar sequence of arguments as used to obtain (63), we arrive at the same conclusion as for the case of .
Step 4: finishing up the proof.
Thus, (62), (63) along with the above conclusion together imply that there exists a numerical constant such that
(66) |
The above equation along with Lemma 2 implies
(67) |
On the other hand, from (23) we know that
(68) |
Hence,
(69) |
where the third step follows from (67) and (68) and the fourth step uses .
Proof of (55).
We now return to establish (55) using induction. For the base case, (54) yields
(70) |
Note that this is identical to the expression in (55) for as
based on the adopted convention for products with no valid indices. For the induction step, assume (55) holds for some . On combining (54) and (55), we obtain,
(71) |
If , then and . Consequently,
(72) |
On the other hand, if , then . Consequently, we have,
(73) |
Proof of (61).
To establish this result, we separately consider the cases and .
•
When , we have
(74) where in the last step, we used the relation .
•
Let us now consider the case . The second derivative of is given by . Clearly, for all , implying that is a concave function. It is well-known that a continuous, bounded, concave function achieves its minimum values over a compact interval at the end points of the interval (Bauer’s minimum principle). For all , we have,
Consequently, we can conclude that for all ,
(75)
A.3.3 Large learning rates with large
In order to bound the error in this scenario, note that controls the variance of the stochastic updates in the fixed point iteration. Thus, when is large, the variance of the iterates is large, resulting in a large error. To demonstrate this effect, we focus on the dynamics of state . This part of the proof is similar to the large learning rate case of Li2023QLMinimax. For all , define:
(76) |
Thus, from (35), we know that obeys the following recursion:
Upon unrolling the recursion, we obtain,
Thus, the above relation along with (18) and the value of yields us,
(77) |
Similar to Li2023QLMinimax, we define
If such a does not exist, it implies that either or . If the former is true, then,
(78) |
Similarly, if , it implies . Using (35), we have,
Consequently,
(79) |
For the case when exists, we divide the proof into two cases.
•
We first consider the case when the learning rates satisfy:
(80) The analysis for this case is identical to that considered in Li2023QLMinimax. We explicitly write the steps for completeness. Specifically,
(81) where the first line follows from (77), the second line from the condition on step sizes and the third line from the definition of .
•
We now consider the other case where,
(82) Using (Li2023QLMinimax, Eqn.(134)), for any and all agents , we have the relation
The above equation is directly obtained by unrolling the recursion in (26) along with noting that for all . Consequently, we have,
(83) Let be a filtration such that is the -algebra corresponding to . It is straightforward to note that is a martingale sequence adapted to the filtration . Thus, using the result from (Li2023QLMinimax, Eqn.(139)), we can conclude that
(84) We have,
(85) where the first line follows from the fact that the variance of a sum of i.i.d. random variables is the sum of their variances, the second line from the variance of a Binomial random variable, and the third line from Jensen's inequality. Thus, (84) and (85) together yield,
(86) where the second line follows from the definition of . We focus on bounding the third term in the above relation. We have,
(87) where the second line follows from monotonicity of and the numerical constant in the fifth step is given by the following claim whose proof is deferred to the end of the section:
(88) Thus, (86) and (87) together imply
(89) where the last inequality follows from the bound on .
Thus, for all , we have,
for some numerical constant . Similar to the small learning rate case, the error rate remains bounded away from zero by a constant, irrespective of the number of agents and the number of communication rounds. Thus, even with , we will not observe any collaborative gain in this scenario.
Proof of (88).
To establish the claim, we consider two cases:
- •
- •
A.4 Generalizing to larger state action spaces
We now elaborate on how we can extend the result to general state-action spaces, along with obtaining the lower bound on the bit level communication complexity. For the general case, we instead consider the following MDP. For the first four states , the probability transition kernel and reward function are given as follows.
(94a)
(94b)
(94c)
(94d)
where the parameter . The overall MDP is obtained by creating copies of the above MDP for all sets of the form for . It is straightforward to note that the lower bound on the number of communication rounds immediately transfers to the general case as well. Moreover, note that the bound on implies the bound as every communication entails sending bits.
To obtain the general lower bound on the bit level communication complexity, note that we can carry out the analysis in the previous section for all pairs of actions in state corresponding to the set of states . Moreover, the algorithm needs to ensure that the error is low across all the pairs. Since the errors are independent across all these pairs, each of them requires bits of information to be transmitted during the learning horizon, leading to a lower bound of . Note that since we require a low error, the algorithm needs to ensure that the error is low across all the pairs, resulting in a communication cost growing linearly with . Upon repeating the argument across all copies of the MDP, we arrive at the lower bound of .
A.5 Proofs of auxiliary lemmas
A.5.1 Proof of Lemma 2
Note that a similar relationship is also derived in Li2023QLMinimax, but since we need to take care of the averaging over multiple agents, we present the entire argument for completeness. We prove the claim using induction over . It is straightforward to note that the claim is true for and all agents . For the inductive step, we assume that the claim holds for for all clients. Using the induction hypothesis, we have the following relation between and :
(95) |
For and , we have,
(96) |
For and , we have,
(97) |
where the last step follows using the same set of arguments as used in (96). The inductive step follows from (96) and (97).
A.5.2 Proof of Lemma 3
In order to bound the term , we make use of the relation in (46a), which we recall
•
To aid the analysis, we consider the following recursive relation for any fixed agent :
(98) Upon unrolling the recursion, we obtain,
(99) Initializing in (99) and plugging this into (46a), we have
where we used (cf. (20)). We now further simplify the expression of . By rewriting (98) as
it is straightforward to note that is given as
(100) Consequently, we have,
(101) - •
A.5.3 Proof of Lemma 4
We begin with bounding the first term ; the second bound follows in an almost identical derivation.
Step 1: applying Freedman’s inequality.
Using the relation , we can rewrite as
(104) |
where we used the definition in (40) and the fact that . Decompose as
(105) |
where for all
with
Let be a filtration such that is the -algebra corresponding to . It is straightforward to note that is a martingale sequence adapted to the filtration . We will use Freedman's inequality (Freedman1975Martingale; Li2023QLMinimax) to obtain a high-probability bound on .
•
To that effect, note that
(106) where the second step follows from the bounds and and the third step uses and the fact that is increasing in in this regime. (cf. (21)).
•
Similarly,
(107)
Using the above bounds (106) and (107) along with Freedman’s inequality yield that
(108) |
Setting , with probability at least , it holds
(109) |
Consequently, plugging this back to (104), we obtain
(110) |
Here, the penultimate step used the fact that , and the last step used the definition of . Thus, it is sufficient to obtain a lower bound on in order to obtain a lower bound for .
Step 2: lower bounding .
To proceed, we introduce the following lemma pertaining to lower bounding that will be useful later.
Lemma 6.
For all time instants and all agent :
Step 3: finishing up.
We finish up the proof by bounding for . We have
(112) |
where (i) follows from the monotonicity of . Plugging (112) into the expressions of (cf. (109)) we have
where the second line follows from (111) and (112), and the third line follows from (112). On combining the above bound with (110), we obtain,
(113) |
where . Note that we have,
Combining the above bound with (113) yields us the required bound.
Step 4: repeating the argument for the second claim.
A.5.4 Proof of Lemma 5
Using Eqns. (43) and (40), we can write
Upon unrolling the recursion, we obtain,
If we define a filtration as the -algebra corresponding to , then it is straightforward to note that is a martingale sequence adapted to the filtration . Using Jensen’s inequality, we know that if is a martingale adapted to a filtration , then for a convex function such that is integrable for all , is a sub-martingale adapted to . Since is a convex function, is a submartingale adapted to the filtration . As a result,
(114) |
We use the following observation about a martingale sequence adapted to a filtration to evaluate the above expression. We have,
(115) |
where the third step uses the facts that is measurable and , and the fourth step is obtained by recursively applying the second and third steps. Using the relation in Eqn. (115) in Eqn. (114), we obtain,
(116) |
Let us focus on the term involving the step sizes. We separately consider the scenario for constant step sizes and linearly rescaled step sizes. For constant step sizes, we have,
(117) |
Similarly, for linearly rescaled step sizes, we have,
(118) |
where the second step uses and the fact that is increasing in in this regime (see Eqn. (21)), and the fifth step uses . On plugging the results from Eqns. (117) and (118) into Eqn. (116) along with the value of , we obtain,
(119) |
as required.
A.5.5 Proof of Lemma 6
For the proof, we fix an agent . In order to obtain the required lower bound on , we define an auxiliary sequence that evolves as described in Algorithm 5. Essentially, evolves in a manner almost identical to except for the fact that there is only one action and hence there is no maximization step in the update rule.
It is straightforward to note that , which can be shown using induction. From the initialization, it follows that . Assuming the relation holds for , we have,
Since and follow the same averaging schedule, it immediately follows from the above relation that . Since , we will use the sequence to establish the required lower bound on .
We claim that for all time instants and all agents ,
(120) |
Assuming (120) holds, we have
as required. In the above expression, the first inequality follows from Jensen’s inequality, the second from the relation and the third from (120).
We now prove the claim (120) using induction. For the base case, holds by choice of initialization. Assume that holds for some for all .
•
If is not an averaging instant, then for any client ,
(121) The third line follows from the independence of and and the fourth line uses the inductive hypothesis.
•
If is an averaging instant, then for all clients ,
(122) where we again make use of independence and the inductive hypothesis.