Joint Optimization of Communications and Federated Learning Over the Air
Abstract
Federated learning (FL) is an attractive paradigm for making use of rich distributed data while protecting data privacy. Nonetheless, nonideal communication links and limited transmission resources may hinder the implementation of fast and accurate FL. In this paper, we study joint optimization of communications and FL based on analog aggregation transmission in realistic wireless networks. We first derive closed-form expressions for the expected convergence rate of FL over the air, which theoretically quantify the impact of analog aggregation on FL. Based on the analytical results, we develop a joint optimization model for accurate FL implementation, which allows a parameter server to select a subset of workers and determine an appropriate power scaling factor. Since the practical setting of FL over the air involves unobservable parameters, we reformulate the joint optimization of worker selection and power allocation using controlled approximation. Finally, we efficiently solve the resulting mixed-integer programming problem via a simple yet optimal finite-set search method by reducing the search space. Simulation results show that the proposed solutions developed for realistic wireless analog channels outperform a benchmark method, and achieve performance comparable to that of the ideal case where FL is implemented over noise-free wireless channels.
Index Terms:
Federated learning, analog aggregation, convergence analysis, joint optimization, worker scheduling, power scaling.
I Introduction
In recent years, with the development of IoT and social networks, huge amounts of data have been generated at the edge of networks [1]. To obtain useful information from big data, machine learning has been widely applied to deal with complex models and tasks in emerging data-driven applications, such as autonomous driving and virtual and augmented reality [2]. Standard machine learning is usually developed under a centralized architecture, where each node located at the edge sends its collected data to a central node for centralized data processing. However, with the exponential growth of the volume of local data and the increasing concerns about user data privacy, it is neither practical nor safe to directly transmit the data of local devices to a central node, due to limited communication and processing capabilities as well as the lack of privacy protection. As such, distributed machine learning is well motivated to overcome these issues.
In the regime of distributed machine learning, federated learning (FL) has been proposed as a widely noted approach for collaborative learning [3]. In FL, local workers train local models from their own data, and then transmit their local updates to a parameter server (PS). The PS aggregates these received local updates and sends the averaged update back to the local workers. These iterative updates exchanged between the PS and workers can be either model parameters or their gradients, corresponding to model averaging [4] and gradient averaging [5], respectively. In this way, FL reduces communication overhead and protects user privacy compared to traditional data-sharing based collaborative learning, especially when the local data are large in volume and privacy-sensitive. Existing research on FL mostly focuses on FL algorithms under idealized link assumptions. However, the impacts of wireless environments on FL performance should be taken into account in the design of FL deployed in practical wireless systems. Otherwise, such impacts may introduce unwanted training errors that dramatically degrade the learning performance in terms of accuracy and convergence rate [6].
To solve this problem, research efforts have been devoted to optimizing the network resources used for transmitting model updates in FL [7, 8]. These works on FL over wireless networks adopt digital communications with a transmission-then-aggregation policy. Unfortunately, the communication overhead and transmission latency become large as the number of active workers increases. On the other hand, it is worth noting that FL aims for global aggregation and hence only utilizes the averaged updates of distributed workers rather than the individual local updates from workers. Alternatively, the nature of waveform superposition in the wireless multiple access channel (MAC) [9, 10, 11, 12] provides a direct and efficient way to transmit the averaged updates in FL, also known as analog aggregation based FL [13, 14, 15, 16, 17, 18]. As a joint transmission-and-aggregation policy, analog aggregation transmission enables all participating workers to simultaneously upload their local model updates to the PS over the same time-frequency resources, as long as the aggregated waveform represents the averaged updates, thus substantially reducing the communication overhead of FL.
The research on analog aggregation based FL is still at an early stage, leaving some fundamental questions unexplored, such as its convergence behavior and the design of efficient algorithms. Given the limited transmit power and communication bandwidth at user devices, users may have to contend for communication resources when transmitting their local updates to the PS. This gives rise to the need for an efficient transmission paradigm, along with network resource allocation in terms of worker selection and transmit power control. All these practical issues motivate our work to study FL from the perspectives of both wireless communications and machine learning. In this paper, we quantify the impact of analog aggregation on the convergence behavior and performance of FL. Such quantitative results are essential in guiding the joint optimization of communication and computing resources. This paper aims at a comprehensive study on problem formulation, solution development, and algorithm implementation for the joint design and optimization of wireless communication and FL. Our key contributions are summarized as follows:
•
We derive closed-form expressions for the expected convergence rate of FL over the air for both the convex and non-convex cases, which not only interpret but also quantify the impact of wireless communications on the convergence and accuracy of FL over the air. Both full gradient descent (GD) and mini-batch stochastic gradient descent (SGD) methods are considered in this work. These closed-form expressions unveil a fundamental connection between analog wireless communication and FL with analog aggregation, providing a fresh perspective on how the parameter design of analog wireless systems affects the performance of FL over the air.
•
Based on the closed-form theoretical results, we formulate a joint optimization problem of learning, worker selection, and power control, with the goal of minimizing the global FL loss function under limited transmit power and bandwidth. The optimization formulation turns out to be universal for the convex and non-convex cases with either GD or SGD. Further, for practical implementation of the joint optimization in the presence of unobservable parameters, we develop an alternative reformulation that approximates the original problem, which cannot be solved directly, as a feasible optimization problem under the operational constraints of analog aggregation.
•
To efficiently solve the approximate problem, we identify a tight solution space by exploring the relationship between the number of selected workers and the power scaling factor. Thanks to the reduced search space, we propose a simple discrete enumeration method that efficiently finds the globally optimal solution.
We evaluate the proposed joint optimization scheme for FL with analog aggregation on linear regression and image classification problems, respectively. Simulation results show that our proposed scheme outperforms a benchmark that uses random worker selection and power control, and achieves performance comparable to the ideal case where FL is implemented over noise-free wireless channels.
The remainder of this paper is organized as follows. Related work is presented in Section II. The system model for FL over the air and the associated joint communication and learning optimization formulation are presented in Section III. Section IV derives the closed-form expressions of the expected convergence rate of the FL over the air as the foundation for algorithm design and performance analysis. Section V provides a framework of joint optimization of communication and FL, and develops the corresponding algorithms. Section VI presents numerical results, followed by conclusions in Section VII.
II Related Work
This section reviews the literature and highlights the novelty of this paper with respect to related works.
To achieve communication efficiency in distributed learning, most of the existing strategies focus on digital communications, which may involve the preprocessing of transmitted updates or the management of wireless resources. For example, a popular line of work is to reduce the communication load per worker by compression of the updates under the assumptions of ideal communication links, such as exploiting coding schemes [19], utilizing the sparsity of updates [20], employing quantization of the updates [21], and avoiding less informative local updates via communication censoring schemes [22, 23, 24, 25, 26]. Another line of work is to support FL through communication resource management, such as worker scheduling schemes to maximize the number of participating workers[27], joint optimization of resource allocation and worker scheduling[7], and communication and computation resource allocation and scheduling for cell-free networks[8].
There are some pioneering works on analog aggregation based FL [13, 14, 16, 15, 17, 18], most of which focus on designing transmission schemes [13, 14, 16, 15]. They adopt preselected participating workers and fix their power allocation without further optimization along the FL iterations. Optimization issues are considered in [17, 18], but they are mainly addressed on the communication side alone, without an underlying connection to FL. When communication-based metrics are used, the optimization in existing works often suggests maximizing the number of selected workers that participate in FL; however, our theoretical results indicate that selecting more workers is not necessarily better over imperfect links or under limited communication resources. Thus, unlike these existing works, we seek to analyze the convergence behavior of analog aggregation based FL, which provides a fresh angle to interpret the specific relationship between communications and FL in the paradigm of analog aggregation. Such a connection leads to this work on a joint optimization framework for analog communications and FL, in which the worker selection and power allocation decisions are optimized during the iterative FL process.
III System Model

As shown in Fig. 1, we consider a one-hop wireless network consisting of a single parameter server (PS) at a base station and user devices as distributed local workers. Through federated learning, the PS and all workers collaborate to train a common model for supervised learning and data inference, without sharing local data.
III-A FL Model
Let denote the local dataset at the -th worker, , where is the input data vector, is the labeled output vector, , and is the number of data samples available at the -th worker. With samples in total, these workers seek to collectively train a learning model parameterized by a global model parameter of dimension , by minimizing the following loss function
(1)
where the global loss function is a summation of data-dependent components, each component is a sample-wise local function that quantifies the model prediction error of the same data model parameterized by the shared model parameter , and .
In distributed learning, each worker trains a local model from its local data , which can be viewed as a local copy of the global model . That is, the local loss function is
(2)
where is the local model parameter. Through collaboration, it is desired to achieve , , so that all workers reach the globally optimal model . Such a distributed learning can be formulated via consensus optimization as[4, 28]
(3a)
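Since the displayed formulation follows the standard consensus form of FL, a commonly used version of P1 is sketched below for reference in assumed notation (worker $i$ holds $K_i$ samples, $K=\sum_i K_i$, and there are $U$ workers); the exact symbols may differ from those used in this paper.

```latex
% Sketch in assumed notation; the paper's own symbols may differ.
\min_{\boldsymbol{w},\,\{\boldsymbol{w}_i\}} \;
F(\boldsymbol{w}) = \frac{1}{K}\sum_{i=1}^{U} K_i F_i(\boldsymbol{w}_i),
\qquad
F_i(\boldsymbol{w}_i) = \frac{1}{K_i}\sum_{k=1}^{K_i}
f\!\left(\boldsymbol{w}_i;\,\boldsymbol{x}_{i,k},\,y_{i,k}\right),
\qquad
\text{s.t.}\;\; \boldsymbol{w}_i=\boldsymbol{w},\;\;\forall i .
```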
To solve P1, this paper adopts a model-averaging algorithm for FL [4, 28]. It is essentially an iterative process consisting of both computing and communication steps at each iteration. Specifically, in each communication round, the PS broadcasts the current to all workers. Then, the -th worker uses a learning algorithm to update its by minimizing its local data-dependent loss function in (2). In this work, gradient descent is applied (we take basic gradient descent as an example; the proposed methodology can be extended to mini-batch gradient descent as well), in which the local model at the -th local worker is updated as
(4)
where is the learning rate, and is the gradient of with respect to .
When local updating is completed, each worker transmits its updated parameter to the PS via wireless uplinks to update the global as
(5)
Then, the PS broadcasts in (5) to all participating workers as their initial value for the next round. FL iterates between the local model update in (III-A) and the global model averaging in (5) until convergence. It has been shown that this FL algorithm converges to the globally optimal solution of the original problem P1 under the conditions that is a convex function and the data transmission between the PS and workers is error-free [4, 28].
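To illustrate only the computational steps in (4)–(5), a minimal Python sketch of one communication round of model averaging with local gradient descent is given below, assuming error-free links; the helper names (`local_gd_step`, `fl_round`, `datasets`) and the MSE local loss are illustrative assumptions, not the paper's Algorithm 1.

```python
import numpy as np

def local_gd_step(w_global, X, y, lr):
    """One full-batch GD step on a local MSE loss (illustrative local learner), cf. (4)."""
    pred = X @ w_global
    grad = X.T @ (pred - y) / len(y)   # gradient of (1/(2*K_i)) * ||X w - y||^2
    return w_global - lr * grad

def fl_round(w_global, datasets, lr=0.01):
    """One FL round: local updates followed by sample-size-weighted averaging, cf. (5)."""
    updates, sizes = [], []
    for X, y in datasets:              # each worker holds its own (X_i, y_i)
        updates.append(local_gd_step(w_global, X, y, lr))
        sizes.append(len(y))
    sizes = np.asarray(sizes, dtype=float)
    return np.average(np.stack(updates), axis=0, weights=sizes / sizes.sum())
```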
Note that the implementation steps in (III-A) and (5) only concern the computational aspect of FL, by assuming perfect communications for both the global and local between the PS and all workers. However, the communication impacts on FL performance should not be ignored. Especially in practical wireless network environments, certain errors are inevitably introduced during transmissions of the updates due to the imperfect characteristics of wireless channels.
III-B Analog Aggregation Transmission Model
To avoid heavy communication overhead and to save transmission bandwidth for FL over wireless channels, we adopt uncoded analog aggregation, which allows multiple workers to simultaneously upload their local model updates to the PS over the same time-frequency resources. All workers transmit their local ’s in an analog form with perfect time synchronization among them (the implementation of time synchronization and the impact of imperfect synchronization are beyond the scope of this work; interested readers are referred to [12, 29]). In this way, the local updates ’s are aggregated over the air to implement the global model updating step in (5). Such analog aggregation is conducted in an entry-wise manner. That is, the -th entries from all workers, , are aggregated to compute in (5), for any .
Let denote the power control vector of worker at the -th iteration. Notably, the choice of in FL over the air should be made not only to effectively implement the aggregation rule in (5), but also to properly accommodate the need for network resource allocation. Accordingly, we set the power control policy as
(6)
where is the channel gain between the -th worker and the PS at the -th iteration (we assume the channel state information (CSI) to be constant within each iteration but possibly varying across iterations; we also assume that the CSI is perfectly known at the PS and leave the imperfect-CSI case for future work), is the power scaling factor, and is a transmission scheduling indicator. That is, means that the -th entry of the local model parameter of the -th worker is scheduled to contribute to the FL algorithm at the -th iteration, and , otherwise. Through power scaling, the transmit power used for uploading the -th entry from the -th worker should not exceed a maximum power limit for any , as follows:
(7)
At the PS side, the received signal at the -th iteration can be written as
(8)
where represents the Hadamard product, , , , and is additive white Gaussian noise (AWGN).
Given the received , the PS estimates via a post-processing operation as
(9)
where is a properly chosen scaling vector to produce equal weighting for the participating ’s in (III-B) as desired in (5), and represents the inverse Hadamard operation of that calculates its entry-wise reciprocal. Notably, in order to implement the averaging of (5) in FL over the air, such a post-processing operation requires to be the same for all workers for given and , which makes it possible to eliminate from the first term in (III-B).
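To make the entry-wise analog aggregation in (6)–(9) concrete, the following Python sketch simulates one upload round under the stated model: scheduled workers apply channel-inversion power control scaled by a common factor, their signals superimpose over the MAC with AWGN, and the PS rescales the received sum to recover an approximate weighted average. The variable names and the exact scaling convention are assumptions for illustration, not the paper's exact expressions.

```python
import numpy as np

def ota_aggregate(w_locals, sizes, h, selected, eta, noise_std, rng):
    """Simulate over-the-air aggregation of local parameter vectors.

    w_locals: (U, d) local models; sizes: (U,) sample counts; h: (U,) channel gains;
    selected: (U,) 0/1 schedule; eta: common power scaling factor.
    """
    U, d = w_locals.shape
    # Channel-inversion precoding so that scheduled workers add up coherently,
    # weighted by their data sizes (illustrative convention).
    tx = (selected[:, None] * np.sqrt(eta) * sizes[:, None] / h[:, None]) * w_locals
    # Superposition over the MAC: received = sum_i h_i * tx_i + noise.
    y = (h[:, None] * tx).sum(axis=0) + rng.normal(0.0, noise_std, size=d)
    # PS post-processing: divide by the total weight of the scheduled workers.
    return y / (np.sqrt(eta) * (selected * sizes).sum())

rng = np.random.default_rng(0)
U, d = 20, 2
w_locals = rng.normal(size=(U, d))
sizes = rng.integers(50, 150, size=U).astype(float)
h = np.sqrt(rng.exponential(1.0, size=U))      # Rayleigh-type channel amplitudes
w_hat = ota_aggregate(w_locals, sizes, h, np.ones(U), eta=1.0, noise_std=0.1, rng=rng)
```

Note that in this sketch the residual error scales with the noise divided by the square root of the power scaling factor and the total weight of selected workers, which mirrors the role these quantities play in the analysis below.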
Comparing (III-B) with (5), there exist differences between and due to the effect of wireless communications. This work aims to mitigate such a gap through optimizing the worker selection and power scaling for FL over the air. To this end, our next step is to unveil an important but unexplored foundation, i.e., how wireless communications affect the convergence behavior of FL over the air.
IV The Convergence Analysis of FL with Analog Aggregation
In this section, we study the effect of analog aggregation transmission on FL over the air, by analyzing its convergence behavior for both the convex and the non-convex cases. To average the effects of instantaneous SNRs, we derive the expected convergence rate of FL over the air, which quantifies the impact of wireless communications on FL using analog aggregation transmissions.
IV-A Convex Case
We first make the following assumptions that are commonly adopted in the optimization literature [30, 31, 32, 33, 7, 34].
Assumption 1 (Lipschitz continuity, smoothness): The gradient of the loss function is uniformly Lipschitz continuous with respect to , that is,
(10)
where is a positive constant, referred to as a Lipschitz constant for the function .
Assumption 2 (strong convexity): is strongly convex with a positive parameter , obeying
(11)
Assumption 3 (bounded local gradients): The sample-wise local gradients at the local workers are bounded by their global counterpart [32, 33]
(12)
where .
According to [5, 35], the FL algorithm applied over ideal wireless channels is able to solve P1 and converges to an optimal . In the presence of wireless transmission errors, we derive the expected convergence rate of the FL over the air with analog aggregation, as in Theorem 1.
Theorem 1.
Adopt Assumptions 1-3, and denote the globally optimal learning model in (3) as . The model updating rule for of the FL-over-the-air scheme is given by (III-B), . Given the transmit power scaling factors , worker selection vectors , and setting the learning rate to be , the expected performance gap of at the -th iteration is given by
(13)
with
(14)
(15)
where the expectation is over the channel AWGN of zero mean and variance .
Based on Theorem 1, we further derive the cumulative performance gap resulting from wireless communications and worker selection over the whole FL process, summarized in the following Lemma 1.
Lemma 1.
Given an initial global model , the cumulative performance gap of FL after iterations is bounded by
(16)
Proof.
Given the expected performance gap at the -th iteration in (13), we carry out recursions as follows:
(17)
. ∎
Lemma 1 reveals that the FL algorithm converges asymptotically in under mild conditions, as stated in Proposition 1.
Proposition 1.
Given the learning rate , the convergence of the FL algorithm is guaranteed with , as long as in (12) satisfies the following condition:
(18)
where .
Proof.
Proposition 1 indicates that the convergence behavior of the FL algorithm depends on both the learning-related parameters, i.e., , and communication-related parameters, including , and . Interestingly, the channel noise and do not affect , and hence they do not affect the convergence of the FL algorithm but determine the steady state that the FL algorithm converges to.
Lemma 1 also provides the expected convergence rate of an FL algorithm when the transmission link is error-free. In this ideal case, it offers the fastest convergence rate achievable by the FL algorithm, which is derived by the following Lemma 2.
Lemma 2.
Consider a resource-unconstrained and error-free mode where the effects of wireless channels, as well as that of noise, are already mitigated or fully compensated. Given the optimal global and the learned in (III-B), the upper bound of for the FL over the air is given by
(21)
Proof.
It is worth noting that Lemma 2 provides the convergence rate for the ideal case, which assumes that the impacts of wireless communications, including noise, channels, and constrained resources, are all mitigated to result in error-free transmission. According to (16), in the realistic case the trajectory of exhibits a jump discontinuity with a gap term at each step , as defined in (16).
This gap reflects the impact of wireless communication factors on FL, by means of the worker selection, transmit power scaling and AWGN. Intuitively, this gap diminishes as the number of selected workers increases, which reduces the value of . Meanwhile, as the power scaling factor increases, is decreased, which leads to a reduction of the gap as well. Hence, it is necessary to optimize transmit power scaling factors and worker selection in order to minimize the gap in (16) for the implementation of FL algorithms over a realistic wireless network.
IV-B Non-convex Case
When the loss function is nonconvex, such as in the case of convolutional neural networks, we derive the convergence rate of the FL over the air with analog aggregation for the nonconvex case without Assumption 2, which is summarized in Theorem 2.
Theorem 2.
Under the Assumptions 1 and 3 for the non-convex case, given the transmit power scaling factors , worker selection vectors , the optimal global FL model , and the learning rate , the convergence at the -th iteration is given by
(22)
Proof.
Please refer to Appendix B. ∎
IV-C Stochastic gradient descent
Our work can be extended to stochastic gradient descent (SGD) as well. Here, we provide a convergence analysis for mini-batch gradient descent with a constant mini-batch size , while the results directly apply to standard SGD by setting . Theorem 3 summarizes the convergence behavior of SGD for the strongly convex case.
Theorem 3.
Under the Assumptions 1, 2 and 3 for the convex case, and given the transmit power scaling factors , worker selection vectors , the optimal global FL model , the learning rate and the mini-batch size , the convergence behavior of the SGD implementation of FL over the air is given by
(25)
where
(26)
(27)
Proof.
Please refer to Appendix C. ∎
From Theorem 3, the cumulative performance gap of FL after the -th iteration for the SGD case is bounded by
(28)
Remark 1.
If is set to be , then Theorem 3 for SGD coincides with Theorem 1 for GD. In addition, since the common mini-batch size is no larger than the minimum local data size, i.e., , both in (26) and in (27) decrease as increases, which leads to a smaller in (25). In other words, FL achieves better convergence performance with a larger . On the other hand, such a performance improvement comes at the cost of a higher computation load at the local workers in each communication round, reflecting the tradeoff between training performance and computational complexity in SGD.
Similarly, we can also derive the mild convergence condition for SGD, given by the following Proposition 2.
Proposition 2.
Given the learning rate , the convergence of the FL algorithm for SGD cases is guaranteed with , as long as in (12) satisfies the following condition:
(29)
V Performance Optimization for Federated Learning over the Air
In this section, we first formulate a joint optimization problem to reduce the gap for FL over the air, which turns out to be applicable for both the convex and non-convex cases, using either GD or SGD implementations. To make it applicable in practice in the presence of some unobservable parameters at the PS, we reformulate it to an approximate problem by imposing a conservative power constraint. To efficiently solve such an approximate problem, we first identify a tight solution space and then develop an optimal solution via discrete programming.
V-A Problem Formulation for Joint Optimization
Since we are concerned with convergence accuracy, our optimization problem boils down to minimizing the performance gap for different cases (i.e., , , and ) at each iteration under the corresponding convergence conditions (i.e., Proposition 1 and Proposition 2).
We recognize that solving P1 amounts to iteratively minimizing these gaps , , and under the transmit power constraint in (7). At the -th iteration, the objective functions for the three cases are given by
(32)
(33)
(34)
where and . Note that when the optimization is executed at the -th iteration, and can be treated as constants.
Considering the entry-wise transmission of analog aggregation, we remove the irrelevant terms and extract the component of the -th entry from the gaps in (32), (33) and (34) as the objective to minimize, which is given by
(35)
(36)
(37)
Since all entries indexed by are separable with respect to the design parameters, we perform entry-wise optimization by considering and one entry at a time, where the superscript and the index of different cases are omitted hereafter. To determine the worker selection vector and the power scaling factor at the -th iteration, the PS carries out a joint optimization problem formulated as follows:
(38a)
s.t. (38b)
where should be in (38b) for the SGD case.
However, in (38b), the knowledge of is needed but is unavailable to the PS due to analog aggregation.
To overcome this issue, we reformulate a practical optimization problem via an approximation that , in light of the consensus constraint in P1. According to (47) in the proof of Theorem 1, each local parameter is updated from the broadcast along the direction of the averaged gradient over its local data . Hence, it is reasonable to make the following common assumption on bounded local gradients, considering that the local gradients can be controlled by adjusting the learning rate or through simple clipping [21, 28, 37, 38].
Assumption 4 (bounded local gradients): The gap between the global parameter and the local parameter update is bounded by
(39)
where is related to the learning rate and satisfies the following condition (this implies the value range of ; in practice, can take ). In addition, for the SGD case, we have
(40)
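As the text notes, Assumption 4 can be enforced in practice by simple clipping of the local update before transmission. A minimal sketch is given below, where the bound `delta` plays the role of the constant in (39) and is an assumed name.

```python
import numpy as np

def clip_local_update(w_local, w_global, delta):
    """Project the local model back into a ball of radius delta around the
    broadcast global model, so that ||w_local - w_global|| <= delta (cf. (39))."""
    diff = w_local - w_global
    norm = np.linalg.norm(diff)
    if norm > delta:
        diff *= delta / norm
    return w_global + diff
```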
Under Assumption 4, we reformulate the original optimization problem (P2) into the following problem (P3), by replacing in (38b) by its approximation:
(41a)
s.t. (41b)
(41c)
where the power constraint (41b) is constructed based on the fact that
(42)
Since is always available at the PS, P3 becomes a feasible formulation for adoption in practice. Next, we develop the optimal solution to P3.
V-B Optimal Solution to P3 via Discrete Programming
At first glance, a direct solution to P3 leads to a mixed-integer programming (MIP) problem, which unfortunately incurs high complexity. To solve P3 efficiently, we develop a simple solution by identifying a tight search space without loss of optimality. The tight search space, given in the following Theorem 4, is a result of the constraints in (41b) and (41c), irrespective of the objective function (41a). Hence, it holds universally for any , such as those in (35)-(37).
Theorem 4.
When all the required parameters in P3, i.e., , are available at the PS, the solution space of in P3 can be reduced, without loss of optimality, to the following tight search space:
(43)
where is a function of , in the form:
(44)
and is the Heaviside step function, i.e., for , and otherwise.
Proof.
Please see Appendix D. ∎
Thanks to Theorem 4, we equivalently transform P3 from an MIP into the following discrete programming (DP) problem P4:
(45)
According to P4, the objective can only take on possible values corresponding to the feasible values of ; meanwhile, given each , the value of is uniquely determined. Hence the minimum can be obtained via line search over the feasible points in (43). Note that the feasible points in (43) are determined by the channel gains, power limits and data sizes of the workers. Hence, the optimal transmission policy decided by P4 reflects the tradeoff among workers in terms of their channel quality, available power resource, and data quality.
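A minimal sketch of the resulting finite-set search is given below: each worker's maximum acceptable power scaling factor serves as a candidate, the schedule induced by a candidate selects every worker whose own limit is at least that value, and the candidate with the smallest objective value wins. Since the closed forms of the per-worker limit and of the objective are not reproduced above, they enter here as assumed inputs (`eta_max_per_worker`, `gap_objective`).

```python
import numpy as np

def solve_p4(eta_max_per_worker, gap_objective):
    """Enumerate the candidate power scaling factors of P4 and pick the best.

    eta_max_per_worker: (U,) array, the largest scaling factor each worker can
                        support under its power budget (cf. (44)).
    gap_objective:      callable (eta, selected) -> scalar gap value (cf. (35)-(37)).
    """
    candidates = np.sort(eta_max_per_worker)            # at most U feasible points
    best = (np.inf, None, None)
    for eta in candidates:
        selected = (eta_max_per_worker >= eta).astype(int)   # schedule induced by eta
        val = gap_objective(eta, selected)
        if val < best[0]:
            best = (val, eta, selected)
    return best  # (optimal gap value, chosen scaling factor, selection vector)
```

This enumeration reflects the tradeoff described above: a larger scaling factor shrinks the noise term but disqualifies workers whose power budgets cannot support it.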
It is worth noting that the solution of P4 may exceed the maximum value allowed at a worker, due to the approximation introduced in Assumption 4. To strictly comply with the power constraint, each worker needs to take the following bounding step when sending its local parameters:
1) if , then the -th worker sends ;
2) otherwise, it sends , where is the sign function.
Putting everything together, we propose a joint optimization scheme for FL over the air (INFLOTA), summarized in Algorithm 1, which is a dynamic scheduling and power scaling policy. By using different , INFLOTA can be adjusted to all the considered cases, including the convex and non-convex ones, using either GD or SGD implementations.
Remark 2.
(Optimality) P3 is equivalently reformulated as P4, which is solved by a search method with much reduced computational complexity thanks to the reduced search space. Comparing P3 with P2, the constraint (41b) of P3 is more restrictive than the constraint (38b) of P2. Since P3 reduces the feasible domain of P2, the solution to P3 cannot be superior to that of P2. Therefore, the optimal value of P3 is an upper bound on that of P2, i.e., the gap calculated by solving P3 is larger than the actual one.
Remark 3.
(Complexity) Algorithm 1 provides a holistic solution for implementation of the overall FL at both the PS and workers sides. Its computational complexity is mainly determined by that of the optimization step in P4. The complexity order of the optimization step is low at , since the search space is reduced to points only via P4.
Remark 4.
(Implementation) To implement FL over the air in Algorithm 1, the PS must know the CSI, the number of data samples, and the maximum transmit power of all local workers. This information can be obtained by the PS when the local workers initially connect to it. Before running Algorithm 1, the PS must first broadcast the global model information to the workers. Notably, when P4 is incorporated into the FL implementation, some workers need to send to meet the maximum transmit power requirement. Such a bounding step can be viewed as a quantization measure, which does not affect convergence [39].
VI Simulation Results And Analysis
In the simulations, we evaluate the performance of the proposed INFLOTA for both linear regression and image classification tasks, which are based on a synthetic dataset and the MNIST dataset, respectively.
The considered network has workers, whose maximum power is set to mW for any . The receiver noise power at the PS is set to mW, i.e., dB. The wireless channel gains between the workers and the PS are generated from a Rayleigh fading model. Here, is generated from an exponential distribution with unit mean for different and .
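The Rayleigh fading setup can be reproduced in a few lines under the convention, assumed here, that the squared channel gains are exponentially distributed with unit mean; the numbers of workers and iterations below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
U, T = 20, 500                                   # placeholder numbers of workers and iterations
h_sq = rng.exponential(scale=1.0, size=(T, U))   # squared gains ~ Exp(1), i.e., Rayleigh fading
h = np.sqrt(h_sq)                                # channel amplitudes used for power control
```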
We use two baseline methods for comparison: a) an FL algorithm that assumes idealized wireless transmissions with error-free links to achieve perfect aggregation, and b) an FL algorithm that randomly determines the power scaling factor and the user selection. They are referred to as Perfect aggregation and Random policy, respectively. In the Random policy, each worker is selected with probability 50%, and the power scaling factor is generated from an exponential distribution with unit mean.
VI-A Linear regression experiments
In linear regression experiments, the synthetic data used to train the FL algorithm is generated randomly from . The input and the output follow the function where follows a Gaussian distribution . The FL algorithm is used to model the relationship between and .

Since linear regression only involves two parameters, we train a simple two-layer neural network with one neuron in each layer and no activation functions, which corresponds to the convex case. The loss function is the MSE between the model prediction and the labeled true output . The learning rate is set to 0.01.
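A minimal sketch of the synthetic regression setup is given below; the slope, intercept, noise level, and per-worker sample count are illustrative placeholders rather than the paper's exact values, and the resulting datasets can be fed to an FL training loop such as the model-averaging sketch in Section III.

```python
import numpy as np

rng = np.random.default_rng(2)
K_i = 100                                  # samples per worker (placeholder)
a_true, b_true, sigma = 2.0, 1.0, 0.1      # placeholder ground truth: y = a*x + b + noise

def make_local_dataset():
    x = rng.uniform(0.0, 1.0, size=K_i)
    y = a_true * x + b_true + rng.normal(0.0, sigma, size=K_i)
    X = np.column_stack([x, np.ones(K_i)]) # two parameters: slope and intercept
    return X, y

datasets = [make_local_dataset() for _ in range(20)]
w = np.zeros(2)                            # [slope, intercept], trained with MSE loss, lr = 0.01
```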
Fig. 2 shows an example of using FL for linear regression. The optimal result of linear regression is , because the original data generation function is . In Fig. 2, we can see that the most accurate approximation is achieved by Perfect aggregation, which is the ideal case that ignores the influence of wireless communication. The Random policy accounts for the influence of wireless communication but performs no optimization; thus, its performance is the worst. Our proposed INFLOTA, which jointly considers learning and the influence of wireless communication, performs close to the ideal case. This is because INFLOTA optimizes worker selection and power control so as to reduce the effect of wireless transmission errors on FL.

In Fig. 3, we show how wireless transmission affects the convergence behavior of the global FL model training in terms of the value of the loss function; once the loss value remains unchanged, the global FL model has converged. As we can see, as the number of iterations increases, the MSE values of all the considered learning algorithms decrease at different rates and eventually flatten out to reach their steady states. All schemes converge, but to different steady-state values. This behavior corroborates the results in Lemma 1 and Proposition 1: the channel noise does not affect whether the FL algorithm converges, but it does affect the value to which the FL algorithm converges.

Fig. 4 shows how the MSE varies with the total number of workers . In general, the MSE of all considered FL schemes decreases as increases. This is because an increase in the number of workers leads to an increased volume of data available for FL training and hence improved accuracy of the estimated model parameters. Moreover, as the number of workers increases, the effect of wireless transmission on the global FL model accuracy starts to diminish. This is because the data samples may already be sufficient for accurate training once exceeds a certain level.

In Fig. 5, we present how the MSE changes with the average number of samples per worker . The number of data samples per worker fluctuates around this average, i.e., we set . As increases, all of the considered learning algorithms have more data samples available for training, and hence the MSE values of all considered FL algorithms decrease in Fig. 5. As the average number of data samples per worker continues to increase, the MSE improvement slows down and eventually saturates. This is because the data samples become sufficient for training the FL model.

Fig. 6 presents how the AWGN received at the PS affects the MSE. We can see that as the noise variance increases, the MSE values of all considered FL algorithms increase, except for Perfect aggregation. When the noise variance is small (e.g., less than ), it has little effect on the performance of the FL algorithms.
VI-B Evaluation on the MNIST dataset
In order to evaluate the performance of our proposed INFLOTA in realistic application scenarios with real data, we train a multilayer perceptron (MLP) on the MNIST dataset (http://yann.lecun.com/exdb/mnist/) with a 784-neuron input layer, a 64-neuron hidden layer, and a 10-neuron softmax output layer, which is a non-convex case. We adopt cross entropy as the loss function and the rectified linear unit (ReLU) as the activation function. The total number of parameters in the MLP is 50890. The learning rate is set to 0.1. The MNIST dataset contains 60000 training samples and 10000 test samples. We randomly draw training samples and distribute them to 20 local workers as their local data. The three trained FL models are then tested on the 10000 test samples. We provide the results of cross entropy and test accuracy versus the iteration index in Fig. 7 and Fig. 8, respectively. Since the MNIST dataset is designed for handwritten digit identification, the test accuracy represents the identification accuracy. As we can see, our proposed INFLOTA outperforms the Random policy and achieves performance comparable to Perfect aggregation.
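The MLP described above (784–64–10, ReLU hidden activation, cross-entropy loss) can be written in a few lines. The sketch below uses PyTorch as an assumed framework, since the paper does not specify its implementation.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """784-64-10 perceptron: 784*64 + 64 + 64*10 + 10 = 50890 parameters."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(784, 64),
            nn.ReLU(),
            nn.Linear(64, 10),   # CrossEntropyLoss applies log-softmax internally
        )

    def forward(self, x):
        return self.net(x)

model = MLP()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
assert sum(p.numel() for p in model.parameters()) == 50890
```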


VII Conclusion
In this paper, we have studied the joint optimization of communications and FL over the air with analog aggregation, in which both worker selection and transmit power control are considered under the constraints of limited communication resources. For the convex and non-convex cases with either GD or SGD implementations, we have derived closed-form expressions for the expected convergence rate of the FL algorithm, which quantify the impact of resource-constrained wireless communications on FL under the analog aggregation paradigm. By analyzing the expected convergence rate, we have proposed a joint optimization scheme of worker selection and power control, which mitigates the impact of wireless communications on the convergence and performance of the FL algorithm. More significantly, our joint optimization scheme is applicable to both the convex and non-convex cases, using either GD or SGD implementations. Simulation results show that the proposed optimization scheme is effective in mitigating the impact of wireless communications on FL.
Acknowledgments
We are very grateful to all reviewers who have helped improve the quality of this paper. This work was partly supported by the National Natural Science Foundation of China (Grant Nos. 61871023 and 61931001), Beijing Natural Science Foundation (Grant No. 4202054), and the National Science Foundation of the US (Grant Nos. 1741338 and 1939553).
Appendix A Proof of Theorem 1
Theorem 1 considers the full GD method for convex problems. Following the proof for gradient methods with noise in [33], we first present the inequality implied by Assumption 1, as follows:
(46)
Employing a standard full GD method, the -th worker updates its local FL model parameter at the -th iteration by
(47)
Given the learning rate (a special setting chosen for simplicity of exposition, without loss of generality), the expected optimization function of can be expressed as
(50)
where the step (a) is derived from the fact that
(51)
can be derived as follows
(52)
where is the all- vector of length . The dimension of is the same as that of .
Employing the triangle inequality of norms , the submultiplicative property of norms , and Jensen’s inequality, (A) can be further bounded as follows:
(53)
(54)
Subtracting from both sides of (A), we have
(56)
Minimizing both sides of (11), we have
(57)
The minimization of the left-hand side is achieved by , while the minimization of the right-hand side is achieved by . Thus, we have
(58)
Then
(59)
The proof is completed.
Appendix B Proof of Theorem 2
Theorem 2 considers the full GD method for non-convex problems. The proof of Theorem 2 follows that of Theorem 1 until (A). From (A), we have
(65)
Summing up the above inequality from to , we get
(66)
which leads to
(67)
Recalling Proposition 1, we have
(68)
As a result, we have the conclusion in Theorem 2,
(70)
Appendix C Proof of Theorem 3
Exploiting the SGD method, the local parameter of the -th worker is updated at the -th iteration by
(71)
where denotes the expectation, which indicates that the -th worker randomly chooses samples from its local dataset to compute the local gradient.
Let denote the set of samples that are not chosen by the -th worker at the -th iteration. Then, can be derived as follows:
(74)
Applying Assumption 3, we get
(75)
Appendix D Proof of Theorem 4
To minimize , it can be seen from (35), (36) and (37) that we should maximize the number of selected workers and the transmit power scaling factor at the -th iteration. Thus, the selected workers should send their parameters at their maximum power. In order to reach the desired parameter aggregation at the PS as in (5), each worker needs to use the same transmit power scaling factor , which is a parameter to be optimized ( determines the worker selection). According to (35), (36) and (37), a larger leads to a smaller . On the other hand, (41b) indicates that a larger results in fewer workers being selected, which in turn increases .
Rewriting (38b) and replacing with , we obtain the maximum acceptable of the -th worker as
(81)
Accordingly, should be chosen from . Once is determined, can be determined by verifying whether the transmit power meets the condition in (7). As a result, we obtain a reduced solution space of the optimization problem P2 as
(82)
with
(83)
where
(84)
is the Heaviside step function.
References
- [1] M. Chiang and T. Zhang, “Fog and IoT: An overview of research opportunities,” IEEE Internet of Things Journal, vol. 3, no. 6, pp. 854–864, 2016.
- [2] J. Park, S. Samarakoon, M. Bennis, and M. Debbah, “Wireless network intelligence at the edge,” Proceedings of the IEEE, vol. 107, no. 11, pp. 2204–2239, 2019.
- [3] T. Li, A. K. Sahu, A. Talwalkar, and V. Smith, “Federated learning: Challenges, methods, and future directions,” IEEE Signal Processing Magazine, vol. 37, no. 3, pp. 50–60, 2020.
- [4] H. B. McMahan, E. Moore, D. Ramage, S. Hampson et al., “Communication-efficient learning of deep networks from decentralized data,” arXiv preprint arXiv:1602.05629, 2016.
- [5] J. Konečnỳ, H. B. McMahan, D. Ramage, and P. Richtárik, “Federated optimization: Distributed machine learning for on-device intelligence,” arXiv preprint arXiv:1610.02527, 2016.
- [6] G. Zhu, D. Liu, Y. Du, C. You, J. Zhang, and K. Huang, “Toward an intelligent edge: wireless communication meets machine learning,” IEEE Communications Magazine, vol. 58, no. 1, pp. 19–25, 2020.
- [7] M. Chen, Z. Yang, W. Saad, C. Yin, H. V. Poor, and S. Cui, “A joint learning and communications framework for federated learning over wireless networks,” IEEE Transactions on Wireless Communications, 2020.
- [8] T. T. Vu, D. T. Ngo, N. H. Tran, H. Q. Ngo, M. N. Dao, and R. H. Middleton, “Cell-free massive MIMO for wireless federated learning,” IEEE Transactions on Wireless Communications, 2020.
- [9] B. Nazer and M. Gastpar, “Computation over multiple-access channels,” IEEE Transactions on information theory, vol. 53, no. 10, pp. 3498–3516, 2007.
- [10] L. Chen, N. Zhao, Y. Chen, F. R. Yu, and G. Wei, “Over-the-air computation for IoT networks: Computing multiple functions with antenna arrays,” IEEE Internet of Things Journal, vol. 5, no. 6, pp. 5296–5306, 2018.
- [11] M. Goldenbaum, H. Boche, and S. Stańczak, “Harnessing interference for analog function computation in wireless sensor networks,” IEEE Transactions on Signal Processing, vol. 61, no. 20, pp. 4893–4906, 2013.
- [12] O. Abari, H. Rahul, D. Katabi, and M. Pant, “Airshare: Distributed coherent transmission made seamless,” in 2015 IEEE Conference on Computer Communications (INFOCOM). IEEE, 2015, pp. 1742–1750.
- [13] M. M. Amiri and D. Gündüz, “Machine learning at the wireless edge: Distributed stochastic gradient descent over-the-air,” IEEE Transactions on Signal Processing, vol. 68, pp. 2155–2169, 2020.
- [14] ——, “Federated learning over wireless fading channels,” IEEE Transactions on Wireless Communications, vol. 19, no. 5, pp. 3546–3557, 2020.
- [15] M. M. Amiri, T. M. Duman, and D. Gündüz, “Collaborative machine learning at the wireless edge with blind transmitters,” arXiv preprint arXiv:1907.03909, 2019.
- [16] G. Zhu, Y. Wang, and K. Huang, “Broadband analog aggregation for low-latency federated edge learning,” IEEE Transactions on Wireless Communications, vol. 19, no. 1, pp. 491–506, 2019.
- [17] Y. Sun, S. Zhou, and D. Gündüz, “Energy-aware analog aggregation for federated learning with redundant data,” arXiv preprint arXiv:1911.00188, 2019.
- [18] K. Yang, T. Jiang, Y. Shi, and Z. Ding, “Federated learning via over-the-air computation,” IEEE Transactions on Wireless Communications, vol. 19, no. 3, pp. 2022–2035, 2020.
- [19] M. Ye and E. Abbe, “Communication-computation efficient gradient coding,” arXiv preprint arXiv:1802.03475, 2018.
- [20] A. F. Aji and K. Heafield, “Sparse communication for distributed gradient descent,” arXiv preprint arXiv:1704.05021, 2017.
- [21] Y. Liu, K. Yuan, G. Wu, Z. Tian, and Q. Ling, “Decentralized dynamic ADMM with quantized and censored communications,” in 2019 53rd Asilomar Conference on Signals, Systems, and Computers. IEEE, 2019, pp. 1496–1500.
- [22] Y. Liu, W. Xu, G. Wu, Z. Tian, and Q. Ling, “Communication-censored ADMM for decentralized consensus optimization,” IEEE Transactions on Signal Processing, vol. 67, no. 10, pp. 2565–2579, 2019.
- [23] P. Xu, Z. Tian, Z. Zhang, and Y. Wang, “COKE: Communication-censored kernel learning via random features,” in 2019 IEEE Data Science Workshop (DSW), 2019, pp. 32–36.
- [24] T. Chen, G. Giannakis, T. Sun, and W. Yin, “LAG: Lazily aggregated gradient for communication-efficient distributed learning,” in Advances in Neural Information Processing Systems, 2018, pp. 5050–5060.
- [25] P. Xu, Z. Tian, and Y. Wang, “An energy-efficient distributed average consensus scheme via infrequent communication,” in 2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP), 2018, pp. 648–652.
- [26] P. Xu, Y. Wang, X. Chen, and T. Zhi, “COKE: Communication-censored kernel learning for decentralized non-parametric learning,” arXiv preprint arXiv:2001.10133, 2020.
- [27] Q. Zeng, Y. Du, K. K. Leung, and K. Huang, “Energy-efficient radio resource allocation for federated edge learning,” arXiv preprint arXiv:1907.06040, 2019.
- [28] J. Wang and G. Joshi, “Cooperative SGD: A unified framework for the design and analysis of communication-efficient SGD algorithms,” arXiv preprint arXiv:1808.07576, 2018.
- [29] M. Goldenbaum and S. Stanczak, “Robust analog function computation via wireless multiple-access channels,” IEEE Transactions on Communications, vol. 61, no. 9, pp. 3863–3877, 2013.
- [30] O. Shamir, N. Srebro, and T. Zhang, “Communication-efficient distributed optimization using an approximate Newton-type method,” in International Conference on Machine Learning, 2014, pp. 1000–1008.
- [31] S. Magnússon, H. Shokri-Ghadikolaei, and N. Li, “On maintaining linear convergence of distributed learning and optimization under limited communication,” IEEE Transactions on Signal Processing, 2020.
- [32] D. P. Bertsekas, J. N. Tsitsiklis, and J. Tsitsiklis, Neuro-Dynamic Programming. Athena Scientific, 1996.
- [33] M. P. Friedlander and M. Schmidt, “Hybrid deterministic-stochastic methods for data fitting,” SIAM Journal on Scientific Computing, vol. 34, no. 3, pp. A1380–A1405, 2012.
- [34] D. Alistarh, T. Hoefler, M. Johansson, N. Konstantinov, S. Khirirat, and C. Renggli, “The convergence of sparsified gradient methods,” in Advances in Neural Information Processing Systems, 2018, pp. 5973–5983.
- [35] K. Yuan, Q. Ling, and W. Yin, “On the convergence of decentralized gradient descent,” SIAM Journal on Optimization, vol. 26, no. 3, pp. 1835–1854, 2016.
- [36] L. Bottou, F. E. Curtis, and J. Nocedal, “Optimization methods for large-scale machine learning,” Siam Review, vol. 60, no. 2, pp. 223–311, 2018.
- [37] S. U. Stich, J.-B. Cordonnier, and M. Jaggi, “Sparsified SGD with memory,” in Advances in Neural Information Processing Systems, 2018, pp. 4447–4458.
- [38] H. Tang, C. Yu, X. Lian, T. Zhang, and J. Liu, “DoubleSqueeze: Parallel stochastic gradient descent with double-pass error-compensated compression,” in International Conference on Machine Learning. PMLR, 2019, pp. 6155–6165.
- [39] D. Alistarh, D. Grubic, J. Li, R. Tomioka, and M. Vojnovic, “QSGD: Communication-efficient SGD via gradient quantization and encoding,” in Advances in Neural Information Processing Systems, 2017, pp. 1709–1720.