Resource Consumption for Supporting Federated Learning in Wireless Networks
Abstract
Federated learning (FL) has recently become one of the hottest research foci in wireless edge networks, driven by the ever-increasing computing capability of user equipment (UE). In FL, UEs train local machine learning models and transmit them to an aggregator, where a global model is formed and then sent back to the UEs. In wireless networks, local training and model transmission can fail due to constrained computing resources, wireless channel impairments, bandwidth limitations, etc., which degrades FL performance in terms of model accuracy and/or training time. Moreover, since model training and transmission consume a certain amount of resources, we need to quantify the benefits and costs of deploying edge intelligence. Therefore, it is imperative to deeply understand the relationship between FL performance and multi-dimensional resources. In this paper, we construct an analytical model to investigate the relationship between the FL model accuracy and the consumed resources in FL empowered wireless edge networks. Based on the analytical model, we explicitly quantify the relationship between the model accuracy and the available computing and communication resources. Numerical results validate the effectiveness of our theoretical modeling and analysis, and demonstrate the trade-off between the communication and computing resources for achieving a certain model accuracy.
Index Terms:
Federated learning, edge intelligence, resource consumption, FL performance.
I Introduction
Edge intelligence is boosted by the unprecedented computing capability of smart devices. Nowadays, more than 10 billion Internet-of-Things (IoT) devices and 5 billion smartphones are equipped with artificial intelligence (AI)-empowered computing modules, such as AI chips and graphics processing units (GPUs) [1]. On the one hand, user equipment (UE) can potentially be deployed as computing nodes to support emerging services, such as collaborative tasks, which paves the way for applying AI in wireless edge networks. On the other hand, in the paradigm of machine learning (ML), the powerful computing capability of these UEs can free ML from the conventional practice of acquiring, storing, and training on data in centralized data centers.
Federated learning (FL) has recently been widely acknowledged as one of the most essential enablers to bring edge intelligence into reality, as it facilitates collaborative training of ML models while enhancing individual user privacy and data security [2, 3]. In FL, ML models are trained locally, so raw data remains on the device. Specifically, FL uses an iterative approach that requires a number of global iterations to achieve a certain global model accuracy. In each global iteration, UEs perform several local iterations to reach a local model accuracy [2, 3]. As a result, the implementation of FL in wireless networks can also reduce the cost of transmitting raw data, relieve the burden on backbone networks, and reduce the latency of real-time decisions.
While FL offers these attractive and valuable benefits, it also faces many challenges, especially when deployed in wireless edge networks. For example, both local training and model transmission can fail due to constrained resources and unstable transmission. Moreover, unlike conventional ML approaches, where raw datasets are sent to a central server, only the lightweight model parameters (i.e., weights, gradients, etc.) are exchanged in FL. Nevertheless, the communication cost of FL can still be fairly large and cannot be ignored. The experimental results in [4] show that the model size of a 5-layer convolutional neural network used for MNIST classification is about 4.567 MB per global iteration, while the model size of ResNet-110 used for CIFAR-10 classification is around 4.6 MB per global iteration [5]. Therefore, before deploying FL empowered wireless edge networks, we need to answer two fundamental questions: (1) how accurate an ML model can be achieved by using FL, and (2) how much cost is incurred to guarantee a certain required FL performance? Obviously, answering these two questions is of paramount importance for facilitating edge network intelligence. Therefore, we need to deeply understand the relationship between FL performance and the consumed multi-dimensional resources.
In this paper, we theoretically analyze how many resources are needed to support an FL empowered wireless edge network by assuming Poisson distributions in the spatial and temporal domains. We first derive the distributions of the signal-to-interference-plus-noise ratio (SINR) and signal-to-noise ratio (SNR), the model transmission success probability, and the resource consumption. Then, we evaluate the impact of the amount of resources on FL performance. Numerical results validate the accuracy of our theoretical modeling and analysis. The main contributions of this paper can be summarized as follows,
- (1) We develop an analytical model for FL empowered wireless edge networks, where the UE geographical distribution and the arrival rate of interfering UEs are modeled as a Poisson Point Process (PPP).
- (2) We theoretically analyze the SINR, the SNR, and the local/global model transmission success probability. Specifically, we derive the probability density functions (PDFs) of the SINR and the SNR, from which we obtain the transmission success probability of the local/global models.
- (3) Based on the analytical model, we derive the explicit expression of the model accuracy, as a function of the amount of resources (including communication resources and computing resources) under the FL framework.
- (4) We investigate three specific cases according to the sufficiency of the respective communication and computing resources. We use simulation experiments to validate the effectiveness of our theoretical modeling and analysis, and demonstrate the trade-off between the communication resources and computing resources for achieving a certain machine learning model accuracy.
In the rest of this paper, we review related work in Section II. Then we present the FL empowered edge network model in Section III and the analysis for the communication and computing resource consumption in Section IV. In Section V, the relationship between FL performance and consumed resources is derived. In Section VI, different cases based on the sufficiency of respective communication and computing resources are discussed. Finally, we present the numerical results in Section VII and conclude the paper in Section VIII.
II Related Work
Currently, there is a large body of work on developing various FL algorithms for FL empowered wireless edge networks. The authors of [6] designed an appropriate user selection scheme to minimize FL convergence time and training loss by jointly optimizing user selection and resource block allocation. The authors of [7] proposed a collaborative FL framework to enable UEs to implement FL with less reliance on a central server by aggregating the local FL models received from the associated UEs. In [8], the authors presented a Stackelberg-game-based approach to develop an FL incentive scheme by modeling the incentive-based interaction between a global server and participating UEs. In [9], we proposed a hybrid FL scheme to make a global UE association decision for heterogeneous models by exploiting two levels of model aggregation. All the aforementioned investigations aimed to facilitate edge intelligence by developing suitable FL algorithms in wireless edge networks. However, these works do not explicitly address the resource cost under the FL framework. In fact, the communication cost under the FL framework can still be fairly large and cannot be ignored, even though only the lightweight model parameters are exchanged.
To support these improved FL algorithms, or even the legacy ones, resource efficiency and guaranteed FL performance are an indispensable basis for achieving FL empowered wireless edge intelligence. The authors of [10] developed a low-cost sampling-based algorithm by adapting various control variables to minimize cost components (e.g., learning time, energy consumption), and considered a multivariate control problem for energy-efficient FL to guarantee convergence by designing principles for different optimization goals. The authors of [11] proposed an over-the-air computation based approach to improve communication efficiency by modeling joint device selection and beamforming design as a sparse and low-rank optimization problem. In [12], the authors introduced update-importance-based client scheduling schemes to reduce the required number of model training rounds by selecting a subset of clients for local updates in each round of training. The authors of [13] proposed a convergent over-the-air FL scheme to reduce bandwidth and energy consumption by introducing precoding and scaling upon transmissions to gradually mitigate the effect of the noisy channel. In [14], the authors proposed a federated dropout scheme to enable FL on resource-constrained devices by tackling both the communication and computation resource bottlenecks. All these investigations aimed to achieve resource-efficient FL algorithms in wireless edge networks. However, the vulnerability of wireless links is largely ignored, even though it directly degrades FL performance by affecting local training and model transmission. Therefore, deeply understanding the relationship among FL performance, wireless factors, and multi-dimensional resources is essential for enabling wireless edge intelligence.
So far, there has been little attention on quantifying the relationship between FL performance and consumed resources, while considering the vulnerability of wireless links. The authors of [15] investigated the trade-off between the number of local iterations and the number of global iterations to capture the relationship between FL training time and energy consumption. However, they have not considered the unsuccessful model transmission in a real network. In [16], the authors studied a joint learning and communication optimization scheme to minimize an FL loss function, where the limited resources and unstable wireless links were considered. However, they only focused on optimizing FL performance without comprehensively quantifying the relationship between FL performance and consumed resources.
III FL empowered Wireless Network Model
We consider an FL empowered wireless edge network consisting of a central base station (BS) and multiple UEs, as shown in Fig. 1. The UEs can be regarded as local computing nodes for local model training, while the server (e.g., an edge server) co-located with the BS serves as the model aggregator [2, 9, 3]. To quantitatively present the FL empowered wireless network model, we need to model the distribution of the UEs and the arrival rate of the interfering UEs. As one of the most commonly used point processes, the PPP model has been widely used to model UE distribution and/or the arrival rate of interfering UEs in wireless networks, where a huge amount of data has validated the accuracy of the model [17, 18, 19, 20]. Nevertheless, other point processes, such as the Poisson cluster process (PCP) [21] and the Cox process [22] for some specific scenarios, can also be used in our analytical model.
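To make the spatial model concrete, the following minimal Python sketch samples UE positions from a homogeneous PPP on the BS coverage disc; the intensity and radius values are illustrative placeholders, not the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ppp_disc(lam, R):
    """Sample (x, y) UE positions from a homogeneous PPP with intensity
    `lam` (UEs per m^2) on a disc of radius `R` centred at the BS."""
    n = rng.poisson(lam * np.pi * R**2)        # N ~ Poisson(lambda * pi * R^2)
    r = R * np.sqrt(rng.uniform(size=n))       # sqrt transform => uniform over the area
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
    return r * np.cos(phi), r * np.sin(phi)

x, y = sample_ppp_disc(lam=1e-4, R=500.0)
print(f"{x.size} UEs sampled in the cell")
```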

III-A FL Model
III-A1 Loss Function
Let random variable $N$ denote the number of UEs, which are geographically distributed as a homogeneous PPP with intensity $\lambda$, and let $n$ denote the value of $N$. Similarly, in the following we use a capital letter to denote a random variable and the corresponding lower-case letter to denote its value. For convenience, the frequently used notations are summarized in TABLE I.
| Notation | Definition | Notation | Definition |
| --- | --- | --- | --- |
| $N$ | Number of UEs (random variable) | $n$ | The value of $N$ |
| $N_I$ | Number of interfering UEs | $n_I$ | The value of $N_I$ |
| $M$ | Number of UEs in interfering area | $m$ | The value of $M$ |
| $\lambda$ | UE density | $\lambda_I$ | Interfering UE arrival rate |
| $\mathcal{D}_k$ | Dataset of UE $k$ | $D_k$ | The amount of $\mathcal{D}_k$ |
| $t$ | Index of local iterations | $g$ | Index of communication rounds |
| $T_l$ | Number of local iterations | $T_g$ | Number of communication rounds |
| $\boldsymbol{w}^{(g)}$ | Global model at $g$-th round | $\boldsymbol{w}_k$ | Local model of UE $k$ |
| $f_k$ | Computing capacity of UE $k$ | $I_j$ | Interference of UE $j$ |
| $P_U$ | Transmit power of the UE | $P_B$ | Transmit power of the BS |
| $d$ | Distance between the UE and the BS | $R$ | Radius of the BS coverage |
| $\boldsymbol{d}_I$ | Distance vector for interfering UEs | $R_I$ | Radius of interfering area |
| $\gamma^U$ | SINR of uplink | $\gamma^D$ | SNR of downlink |
| $\gamma^D_{th}$ | SNR threshold | $\gamma^U_{th}$ | SINR threshold |
| $r^D$ | Transmission rate (downlink) | $r^U$ | Transmission rate (uplink) |
| $b^D$ | Bandwidth consumption (downlink) | $b^U$ | Bandwidth consumption (uplink) |
| $\tau_k$ | Training time of UE $k$ for one local iteration | $c_k$ | Number of CPU cycles for computing a sample |
A specific UE $k$ has a local dataset $\mathcal{D}_k$ with $D_k$ data samples, where $D = \sum_{k=1}^{n} D_k$ denotes the total number of samples. Moreover, we define $f(\boldsymbol{w}_k, x_i, y_i)$ as the loss function for data sample $(x_i, y_i)$ of UE $k$, where $\boldsymbol{w}_k$ represents the model parameter of UE $k$ at the $t$-th local iteration during the $g$-th global iteration. The loss function differs across FL learning tasks [23]. For example, for linear regression, the loss function is $f(\boldsymbol{w}_k, x_i, y_i) = \frac{1}{2}(x_i^{\mathsf{T}}\boldsymbol{w}_k - y_i)^2$. For a neural network, the loss function could be the mean squared error, i.e., $f(\boldsymbol{w}_k, x_i, y_i) = \frac{1}{2}(\hat{y}_i - y_i)^2$, where $\hat{y}_i$ represents the predicted value of $y_i$. Based on $f(\cdot)$, we define $F_k(\boldsymbol{w}_k)$ as the local loss function to capture the local training performance, which is as follows,

$$F_k(\boldsymbol{w}_k) = \frac{1}{D_k} \sum_{(x_i, y_i) \in \mathcal{D}_k} f(\boldsymbol{w}_k, x_i, y_i). \tag{1}$$

In addition, we define $F(\boldsymbol{w})$ as the global loss function on all distributed datasets to measure the global training performance, which is expressed by

$$F(\boldsymbol{w}) = \sum_{k=1}^{n} \frac{D_k}{D} F_k(\boldsymbol{w}), \tag{2}$$

where the weights $D_k / D$ reflect each UE's share of the data. The goal of the BS is to derive a vector $\boldsymbol{w}^{*}$ satisfying $\boldsymbol{w}^{*} = \arg\min_{\boldsymbol{w}} F(\boldsymbol{w})$.
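As an illustration of (1) and (2), the short sketch below computes a local loss as the average per-sample loss and the global loss as the data-size-weighted average of the local losses; the linear-regression loss and all names are ours, chosen only for concreteness.

```python
import numpy as np

def local_loss(w, X, y):
    """F_k(w): mean of the per-sample squared-error loss over UE k's data."""
    return 0.5 * np.mean((X @ w - y) ** 2)

def global_loss(w, datasets):
    """F(w) = sum_k (D_k / D) * F_k(w), with datasets = [(X_1, y_1), ...]."""
    D = sum(len(y) for _, y in datasets)
    return sum(len(y) / D * local_loss(w, X, y) for X, y in datasets)
```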
III-A2 Updating Model
In FL, each global iteration is called a communication round [9], as shown in Fig. 2. A communication round consists of five phases: local model updating, local iterations (also called local training), local model transmission, global model updating, and global model transmission. In the following, we present the details of the local and global model updating respectively.

- (1) Local Model Updating: The local model updating can be performed based on a local learning algorithm, such as gradient descent (GD), actor-critic (AC), etc. Specifically, if $t = 0$, the local model is initialized by $\boldsymbol{w}_k^{(0)} = \boldsymbol{w}^{(g)}$, while if $1 \le t \le T_l$, the local model is updated by $\boldsymbol{w}_k^{(t)} = \boldsymbol{w}_k^{(t-1)} - \delta \nabla F_k\!\left(\boldsymbol{w}_k^{(t-1)}\right)$, where $t$ represents the index of local iterations and $T_l$ represents the total number of local iterations during a communication round. Moreover, $\delta$ is the step size, and $\boldsymbol{w}^{(g)}$ represents the global model at the $g$-th communication round.
- (2) Global Model Updating: After $T_l$ local iterations, i.e., $t = T_l$, the UEs will achieve a certain local accuracy and send their local models to the aggregator. Then the global aggregation is performed at the aggregator according to

$$\boldsymbol{w}^{(g+1)} = \sum_{k=1}^{n} \frac{D_k}{D}\, \boldsymbol{w}_k^{(T_l)}. \tag{3}$$
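The following sketch puts the two updating steps together for one communication round, using plain GD for the local updates and the weighted aggregation in (3); wireless impairments are deliberately ignored here, and the function names are ours.

```python
import numpy as np

def local_gd(w_global, X, y, steps, delta):
    """Run T_l local GD iterations starting from the global model (step size delta)."""
    w = w_global.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of the MSE local loss
        w -= delta * grad
    return w

def communication_round(w_global, datasets, steps=30, delta=0.01):
    """One round: local training on every UE, then aggregation as in (3)."""
    D = sum(len(y) for _, y in datasets)
    local_models = [local_gd(w_global, X, y, steps, delta) for X, y in datasets]
    return sum(len(y) / D * w for w, (X, y) in zip(local_models, datasets))
```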
III-B Computing Resource Consumption Model
For a specific UE $k$, let $f_k$ denote its computing capacity in cycles/s. $c_k$ (cycles/sample) denotes the number of CPU cycles required for computing one data sample at UE $k$. $\tau_k = \frac{c_k D_k}{f_k}$ represents the local computing time (training time) needed for one local iteration. Therefore, similar to [24], the computing resources consumed during one local iteration by UE $k$ are given by $f_k \tau_k = c_k D_k$ cycles.
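For instance, with illustrative numbers (not the paper's settings), the per-iteration training time and cycle count follow directly from this model:

```python
# Toy numbers illustrating tau_k = c_k * D_k / f_k; all values are ours.
c_k = 2.0e4      # cycles required per sample
D_k = 500        # number of local samples
f_k = 1.0e9      # CPU capacity in cycles/s
tau_k = c_k * D_k / f_k
print(f"one local iteration: {tau_k * 1e3:.1f} ms, {c_k * D_k:.2e} CPU cycles")
```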
III-C Communication Resource Consumption Model
III-C1 Uplink
The transmission time for UE $k$ to transmit the local model (uplink direction) is denoted by $\tau^U_k$. Since the dimensions of the local models are fixed for all UEs that participate in local training, the data size of the local model on each UE is constant and is denoted by $s$ [24]. The transmission rate of UE $k$ on the wireless channel to the BS during the $g$-th communication round is represented by $r^U_k$. Therefore, we have $\tau^U_k = s / r^U_k$, where

$$r^U_k = b^U \log_2\!\left(1 + \gamma^U_k\right), \tag{4}$$

where $b^U$ represents the amount of consumed bandwidth for transmitting the local model $\boldsymbol{w}_k$. In addition, $\gamma^U_k = \frac{P_U h(d)}{\sum_{j=1}^{n_I} I_j + \sigma^2}$ is the SINR, where $d$ represents the distance between the UE and the BS, $\boldsymbol{d}_I$ denotes the distance vector for all interfering UEs of UE $k$, $n_I$ represents the number of interfering UEs with $I_j$ denoting the interference generated by interfering UE $j$, $\sigma^2$ denotes the noise power, and $P_U$ represents the transmit power of the UE. $h(d)$ represents the large-scale channel gain between the BS and the UEs. Indeed, the channel gain model could be either large-scale (e.g., path loss) or small-scale (e.g., Rayleigh fading, Rician fading), which only affects the SINR/SNR distribution. Furthermore, let $\gamma^U_{th}$ denote the SINR threshold above which the BS can successfully decode the received updates from UE $k$. Therefore, local model transmission is successful only if $\gamma^U_k \ge \gamma^U_{th}$.
III-C2 Downlink
Let us analyze the SNR for the BS to transmit the global model (downlink direction). The transmission time for transmitting the global model in the downlink is denoted by $\tau^D$. From equation (3), we see that the dimensions of the global model are the same as those of the local models. Therefore, the data size of the global model that the BS sends to each UE is also equal to $s$ [16]. We assume that the transmission rate of the BS during the $g$-th communication round is represented by $r^D$. Therefore, we have $\tau^D = s / r^D$, where

$$r^D = b^D \log_2\!\left(1 + \gamma^D\right), \tag{5}$$

where $b^D$ represents the consumed bandwidth for transmitting the global model. In addition, $\gamma^D = \frac{P_B h(d)}{\sigma^2}$, where $P_B$ represents the transmit power of the BS allocated to all UEs. Furthermore, let $\gamma^D_{th}$ denote the SNR threshold above which the UEs can successfully decode the received updates from the BS. Therefore, the global model transmission is successful only if $\gamma^D \ge \gamma^D_{th}$.
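A minimal sketch of the link model in (4) and (5): the Shannon rate as a function of the SINR/SNR, and the threshold-based success rule for decoding; all dB and bandwidth values below are placeholders.

```python
import numpy as np

def shannon_rate(bandwidth_hz, snr_linear):
    """Transmission rate r = b * log2(1 + SINR) as in (4)-(5)."""
    return bandwidth_hz * np.log2(1.0 + snr_linear)

def from_db(x_db):
    """Convert dB to linear scale."""
    return 10.0 ** (x_db / 10.0)

sinr_ul, snr_dl = from_db(8.0), from_db(12.0)
uplink_ok = sinr_ul >= from_db(5.0)     # local model decodable at the BS
downlink_ok = snr_dl >= from_db(10.0)   # global model decodable at the UE
print(uplink_ok, downlink_ok, f"{shannon_rate(1e6, sinr_ul) / 1e6:.2f} Mbit/s")
```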
IV Wireless Bandwidth and Computing Resources Consumed for Supporting FL empowered Edge Intelligence
A certain amount of wireless bandwidth is consumed in the uplink/downlink when models are exchanged between the BS and the UEs. Specifically, in the uplink, the UEs send their local models to the BS via some channel partitioning scheme, such as orthogonal frequency division multiplexing (OFDM); in the downlink, the BS sends the global model to individual UEs. Indeed, the wireless transmission environment of the uplink/downlink affects the transmission of the local/global models, and thus affects the global aggregation and local training. In this section, we theoretically analyze the SINR, the SNR, and the wireless bandwidth consumed in the uplink/downlink to support FL empowered wireless edge networks.
IV-A SINR Analysis for Uplink
IV-A1 Probability Density Function (PDF) of SINR
To derive the PDF of the SINR, we separately investigate the signal power and the interference. We have assumed that the UEs are geographically distributed as a homogeneous PPP with intensity $\lambda$, and thus the number of UEs $N$ is a Poisson random variable with density parameter $\lambda \pi R^2$, where $R$ represents the radius of the BS coverage. For a specific UE $k$, the signal power $S = P_U h(d)$ is also a random variable, as it relates only to the distance $d$; the transmit power $P_U$ is fixed for each UE.
Proposition 1.
The PDF of the distance $d$ between a specific UE and the serving BS is $f_d(x) = \frac{2x}{R^2}$, $0 \le x \le R$.
Proof:
As UE $k$ is uniformly located in the coverage disc, the CDF of the distance $d$ is $F_d(x) = \frac{\pi x^2}{\pi R^2} = \frac{x^2}{R^2}$. Therefore, the PDF of $d$ is $f_d(x) = \frac{\mathrm{d} F_d(x)}{\mathrm{d} x} = \frac{2x}{R^2}$. ∎
Therefore, we can obtain the PDF of the signal power (i.e., $f_S(x)$) for deriving the closed-form expression of the SINR. Next, we investigate the distribution of the received interference in the uplink. Note that only the transmitting UEs located in the interfering area with radius $R_I$ can contribute to the interference. We assume that the number of UEs within the interfering area is represented by $M$, which is also a Poisson random variable with density parameter $\lambda \pi R_I^2$. Moreover, the transmission time for the UEs is represented by $\tau$, where only the UEs transmitting during time $\tau$ can contribute to interference. Therefore, for a specific UE, the number of interfering UEs $N_I$ is distributed as a PPP governed by the arrival rate $\lambda_I$ of interfering UEs. Accordingly, we denote by $p$ the probability that a UE in the interfering area transmits, and hence interferes, during time $\tau$.
Therefore, the probability of the number of interfering UEs given $M = m$ is

$$P(N_I = n_I \mid M = m) = \binom{m}{n_I} p^{n_I} (1-p)^{m - n_I}, \tag{6}$$

where $\binom{m}{n_I}$ is the combination number. Therefore, the PDF of $N_I$ is

$$P(N_I = n_I) = \sum_{m = n_I}^{\infty} \binom{m}{n_I} p^{n_I} (1-p)^{m - n_I}\, P(M = m), \tag{7}$$

where $P(M = m) = \frac{(\lambda \pi R_I^2)^m}{m!} e^{-\lambda \pi R_I^2}$. Based on Proposition 1, we can derive the PDF of the interference generated by interfering UE $j$, i.e., $f_{I_j}(x)$. As the total interference $I = \sum_{j=1}^{N_I} I_j$ is affected by the number of interfering UEs $N_I$ as well as the distances of these interfering UEs, we have the PDF of $I$ as follows,

$$f_I(x) = \sum_{n_I = 0}^{\infty} P(N_I = n_I)\, f_{I \mid N_I = n_I}(x), \tag{8}$$

where $f_{I \mid N_I = n_I}$ is the $n_I$-fold convolution of $f_{I_j}$. Therefore, the PDF of the SINR $\gamma^U = \frac{S}{I + \sigma^2}$ can be given by

$$f_{\gamma^U}(x) = \int_0^{\infty} \left(y + \sigma^2\right) f_S\!\big(x \left(y + \sigma^2\right)\big)\, f_I(y)\, \mathrm{d}y. \tag{9}$$
IV-A2 Transmission Success Probability of Local Models
Local model transmission is successful if $\gamma^U \ge \gamma^U_{th}$. Therefore, the transmission success probability of local models is given by

$$P^U_s = P\!\left(\gamma^U \ge \gamma^U_{th}\right) = \int_{\gamma^U_{th}}^{\infty} f_{\gamma^U}(x)\, \mathrm{d}x, \tag{10}$$

where the integration covers the range of SINR values that satisfy successful decoding. As $f_{\gamma^U}(x)$ in equation (10) is determined by the interference, we only need to characterize the interfering area.
For the distance between an interfering UE and the serving BS, intuitively the interference is non-negligible only within the interfering area [20]. Therefore, the satisfying range of the interferer distance is $(0, R_I]$. Therefore, when given $R_I$, we can obtain the number of interfering UEs and the locations of these interfering UEs. Let $\bar{n}_I$ represent the mean of random variable $N_I$. Based on the UE distribution and interfering UE arrival models, we can derive $\bar{n}_I$ as follows,

$$\bar{n}_I = \mathbb{E}[N_I] = p\, \lambda \pi R_I^2. \tag{11}$$

Therefore, the SINR is only related to $d$ and $I$, expressed as $\gamma^U = \frac{P_U h(d)}{\sum_{j=1}^{N_I} I_j + \sigma^2}$, where $I_j$ represents the interference generated by interfering UE $j$. Therefore, we have

$$P^U_s = P\!\left(\gamma^U \ge \gamma^U_{th}\right) = P\!\left(I \le \frac{P_U h(d)}{\gamma^U_{th}} - \sigma^2\right). \tag{12}$$

In a typical FL framework, the number of UEs involved in local model training is fairly large, say at least hundreds of UEs [25]. Therefore, based on the central limit theorem, $I = \sum_{j=1}^{N_I} I_j$ follows a normal distribution [20, 26]. Furthermore, we have $\mu_I$ and $\sigma_I^2$, which are the mean and variance of $I$ respectively [26]. More details about $\mu_I$ and $\sigma_I^2$ can be found in Appendix A.

Let $Z = \frac{I - \mu_I}{\sigma_I}$, where $\mathbb{E}[Z] = 0$ and $\mathrm{Var}[Z] = 1$. Therefore, we have $P^U_s = \mathbb{E}_d\!\left[\Phi\!\left(z_{th}(d)\right)\right]$, where

$$\Phi\!\left(z_{th}(d)\right) = P\!\left(Z \le \frac{P_U h(d)/\gamma^U_{th} - \sigma^2 - \mu_I}{\sigma_I}\right), \tag{13}$$

where $z_{th}(d) = \frac{P_U h(d)/\gamma^U_{th} - \sigma^2 - \mu_I}{\sigma_I}$ and $\Phi(\cdot)$ represents the cumulative distribution function (CDF) of the standard normal distribution. Therefore, we have

$$P^U_s = \int_0^{R} \Phi\!\left(z_{th}(x)\right) f_d(x)\, \mathrm{d}x. \tag{14}$$
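The normal approximation makes the success probability easy to evaluate numerically. The sketch below implements the reduction behind (13) with SciPy; the signal, noise, and moment values are placeholders for the quantities derived in Appendix A.

```python
from scipy.stats import norm

def uplink_success_prob(signal_power, noise, gamma_th, mu_I, sigma_I):
    """P(S / (I + noise) >= gamma_th) = P(I <= S/gamma_th - noise)
    under a Gaussian approximation of the aggregate interference I."""
    return norm.cdf((signal_power / gamma_th - noise - mu_I) / sigma_I)

# Placeholder values, in watts; gamma_th = 3.16 is roughly 5 dB.
print(uplink_success_prob(signal_power=1e-9, noise=1e-13,
                          gamma_th=3.16, mu_I=1e-10, sigma_I=5e-11))
```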
IV-B SNR Analysis for Downlink
As $\gamma^D = \frac{P_B h(d)}{\sigma^2}$, we can obtain the PDF of the signal power from Proposition 1, when given the transmit power $P_B$ and the noise level $\sigma^2$. Therefore, given $\gamma^D_{th}$, and assuming $\gamma^D$ is monotonic in $d$, we have

$$P^D_s = P\!\left(\gamma^D \ge \gamma^D_{th}\right) = P\!\left(d \le h^{-1}\!\left(\frac{\gamma^D_{th}\, \sigma^2}{P_B}\right)\right) = F_d\!\left(h^{-1}\!\left(\frac{\gamma^D_{th}\, \sigma^2}{P_B}\right)\right). \tag{15}$$
IV-C Wireless Bandwidth Consumed for Transmitting Local/Global Models
Based on equation (4), the bandwidth consumed for transmitting the local model during the $g$-th communication round is given by $b^U_k = \frac{s}{\tau^U_k \log_2(1 + \gamma^U_k)}$. As $s$ and $\tau^U_k$ are constant, the PDF of $b^U_k$ for UE $k$ in the uplink can be obtained from the PDF of $\gamma^U_k$. Therefore, the mean of the bandwidth for all UEs transmitting local models during $T_g$ communication rounds is as follows,

$$\bar{B}^U = T_g\, \bar{b}^U, \quad \text{with } \bar{b}^U = \sum_{k=1}^{n} \mathbb{E}\!\left[\frac{s}{\tau^U_k \log_2\!\left(1 + \gamma^U_k\right)}\right]. \tag{16}$$

Similarly, the mean of the bandwidth for transmitting the global models during $T_g$ communication rounds is given by

$$\bar{B}^D = T_g\, \bar{b}^D, \quad \text{with } \bar{b}^D = n\, \mathbb{E}\!\left[\frac{s}{\tau^D \log_2\!\left(1 + \gamma^D\right)}\right], \tag{17}$$

where $\bar{b}^U$ and $\bar{b}^D$ denote the mean bandwidth consumption in the uplink and the downlink during one communication round, respectively.
IV-D Consumed Computing Resources in FL
We assume the number of CPU cycles required for computing one data sample is a constant $c$ (cycles/sample) for all UEs. Moreover, as different ML models pose different degrees of complexity, we assume all UEs train the same FL task in our analytical model, where the local ML models have the same size and structure. Therefore, the total amount of computing resources needed to support local model training for all UEs is affected by the number of training UEs as well as the amount of data on the UEs. On the one hand, the number of training UEs is affected by the wireless transmission of the global model. On the other hand, many existing studies explicitly indicate that the amounts of data distributed across UEs are imbalanced, as the data is collected directly and stored persistently [27, 28]. Note that here data imbalance refers to different amounts of local data rather than different dataset contents, so our analytical model is based on i.i.d. data; the non-i.i.d. data case is left for future work. Therefore, in this section we theoretically analyze the computing resource consumption for supporting local training from the perspective of SNR and imbalanced datasets.
We assume that the amount of data on each UE follows a normal distribution [29, 30], i.e., $D_k \sim \mathcal{N}(\mu_D, \sigma_D^2)$, where $\mu_D$ or/and $\sigma_D$ could be different for specific UEs. Indeed, other distributions, such as the Beta distribution and the Gamma distribution, can also be used in our analytical model. Moreover, as the computing resource consumption of UE $k$ for one local iteration is $c D_k$, the PDF of $c D_k$ follows from that of $D_k$, i.e., $c D_k \sim \mathcal{N}\!\left(c \mu_D, c^2 \sigma_D^2\right)$.
For a specific UE $k$, if $\gamma^D \ge \gamma^D_{th}$, we say UE $k$ can successfully receive the global model. In other words, UE $k$ will continue to perform local training in the next communication round and consume a certain amount of computing resources. Let $X_k$ indicate the computing resources consumed by UE $k$ in one local iteration, where the value of $X_k$ is set to $c D_k$ if UE $k$ successfully receives the global model and 0 otherwise. Therefore, we can obtain the PDF of $X_k$ as follows,

$$f_{X_k}(x) = \left(1 - P^D_s\right) \delta_0(x) + P^D_s\, f_{c D_k}(x), \tag{18}$$

where $\delta_0(\cdot)$ denotes a unit point mass at zero. Therefore, based on equation (18), we can derive the mean of the computing resources consumed by all UEs for one local iteration, which is given by $\bar{c} = \sum_{k=1}^{n} \mathbb{E}[X_k] = \sum_{k=1}^{n} P^D_s\, c\, \mu_D$. Therefore, the total computing resources consumed for local model training are given by

$$C_{total} = T_l\, T_g\, \bar{c}, \tag{19}$$
where $T_l$ and $T_g$ represent the number of local iterations per round and the total number of communication rounds, respectively. Armed with the above preparation, we now start to analyze how the resources affect the FL performance by evaluating the local and global model accuracy.
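A Monte Carlo sketch of (19) under the assumptions of this subsection (normally distributed dataset sizes, a common per-sample cycle count, and Bernoulli downlink success); every number below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ue = 200
D = rng.normal(600, 100, n_ue).clip(min=1)   # dataset sizes ~ Normal (Sec. IV-D)
c = 2.0e4                                    # cycles/sample, same task on all UEs
p_dl = 0.9                                   # downlink success probability (Sec. IV-B)
received = rng.uniform(size=n_ue) < p_dl     # UEs that actually got the global model
cycles_per_iter = np.sum(c * D * received)   # cycles consumed in one local iteration
T_l, T_g = 30, 12                            # local iterations, communication rounds
print(f"total: {cycles_per_iter * T_l * T_g:.3e} CPU cycles")
```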
V The Relationship between FL Performance and Consumed Resources
Indeed, the unsuccessful transmission of local models in the uplink affects the aggregation of the global model, while the unsuccessful transmission in the downlink affects the updating and training of local models. Therefore, it is necessary to analyze how the computing resources and communication resources affect the FL performance by evaluating both the local and global model accuracy.
V-A Local Model Accuracy
In an FL framework, no matter what local machine learning algorithm is used, each UE solves the following local optimization problem for local training [24, 15, 31], i.e.,

$$\min_{\boldsymbol{h}_k}\; G_k(\boldsymbol{w}, \boldsymbol{h}_k) \triangleq F_k(\boldsymbol{w} + \boldsymbol{h}_k) - \left(\nabla F_k(\boldsymbol{w}) - \xi \nabla F(\boldsymbol{w})\right)^{\mathsf{T}} \boldsymbol{h}_k, \tag{20}$$

where $\xi$ is constant and $\boldsymbol{h}_k$ represents the difference between the global model and the local model for UE $k$. Without loss of generality, we use the GD algorithm to update local models, as it can achieve the required high accuracy and facilitate the convergence analysis [24], as follows,

$$\boldsymbol{h}_k^{(t+1)} = \boldsymbol{h}_k^{(t)} - \delta \nabla G_k\!\left(\boldsymbol{w}, \boldsymbol{h}_k^{(t)}\right), \tag{21}$$

where $\delta$ represents the step size and $\boldsymbol{h}_k^{(t)}$ denotes the value of $\boldsymbol{h}_k$ at the $t$-th local iteration with given global model vector $\boldsymbol{w}$. Moreover, $\nabla G_k(\boldsymbol{w}, \boldsymbol{h}_k^{(t)})$ is the gradient of $G_k$ at point $\boldsymbol{h}_k^{(t)}$. In addition, $\boldsymbol{w} + \boldsymbol{h}_k^{(t)}$ represents the local model of UE $k$ at the $t$-th local iteration. For a small step size $\delta$, we can derive a set of solutions $\{\boldsymbol{h}_k^{(t)}\}_{t \ge 0}$, which satisfies $G_k(\boldsymbol{w}, \boldsymbol{h}_k^{(0)}) \ge G_k(\boldsymbol{w}, \boldsymbol{h}_k^{(1)}) \ge \cdots$

To provide the convergence condition for the GD method, we introduce the local model accuracy loss $\eta$ [24, 15], which resembles the approximation factors in [15], [24], as follows,

$$G_k\!\left(\boldsymbol{w}, \boldsymbol{h}_k^{(t)}\right) - G_k\!\left(\boldsymbol{w}, \boldsymbol{h}_k^{*}\right) \le \eta \left( G_k\!\left(\boldsymbol{w}, \boldsymbol{h}_k^{(0)}\right) - G_k\!\left(\boldsymbol{w}, \boldsymbol{h}_k^{*}\right) \right), \tag{22}$$

where $\boldsymbol{h}_k^{*}$ represents the optimal solution of problem (20). Note that each UE aims to solve the local optimization problem with a target local model accuracy loss $\eta$ (i.e., local model accuracy $1 - \eta$). To achieve the local model accuracy loss $\eta$ and the global model accuracy loss $\epsilon$ in the following, we first make the following three assumptions on the local loss function $F_k(\boldsymbol{w})$, as in [24, 16, 31].
Assumption 1: Function $F_k(\boldsymbol{w})$ is $L$-Lipschitz smooth, i.e., $\|\nabla F_k(\boldsymbol{w}) - \nabla F_k(\boldsymbol{w}')\| \le L \|\boldsymbol{w} - \boldsymbol{w}'\|$, $\forall \boldsymbol{w}, \boldsymbol{w}'$.
Assumption 2: Function $F_k(\boldsymbol{w})$ is $\mu$-strongly convex, i.e., $F_k(\boldsymbol{w}') \ge F_k(\boldsymbol{w}) + \nabla F_k(\boldsymbol{w})^{\mathsf{T}} (\boldsymbol{w}' - \boldsymbol{w}) + \frac{\mu}{2}\|\boldsymbol{w}' - \boldsymbol{w}\|^2$.
Assumption 3: $F_k(\boldsymbol{w})$ is twice-continuously differentiable, and $\mu \boldsymbol{I} \preceq \nabla^2 F_k(\boldsymbol{w}) \preceq L \boldsymbol{I}$.
Based on the three assumptions, we can obtain the lower bound on the number of local iterations during each communication round, which is shown as Proposition 2.
Proposition 2.
Local model accuracy loss $\eta$ is achieved if the step size satisfies $\delta < \frac{2}{L}$ and we run the GD method

$$T_l \ge \frac{2}{(2 - L\delta)\,\delta\mu} \ln\frac{1}{\eta}$$

iterations during each communication round at each UE that participates in local training.
Proof:
See Appendix B. ∎
The lower bound in Proposition 2 reflects the growing trend of the number of local iterations as the required local model accuracy increases (i.e., as $\eta$ decreases), which can be used to approximate the consumption of computing resources for training local models.
V-B Global Model Accuracy
In FL algorithms, a global model accuracy is also needed. For a specific FL task, we define $\epsilon$ as its global model accuracy loss (the global model accuracy is $1 - \epsilon$), as follows,

$$F\!\left(\boldsymbol{w}^{(g)}\right) - F\!\left(\boldsymbol{w}^{*}\right) \le \epsilon \left( F\!\left(\boldsymbol{w}^{(0)}\right) - F\!\left(\boldsymbol{w}^{*}\right) \right), \tag{23}$$

where $\boldsymbol{w}^{*}$ represents the actual optimal solution. Moreover, we provide the following Proposition 3 about the number of communication rounds for achieving the global model accuracy loss $\epsilon$.
Proposition 3.
Global model accuracy loss $\epsilon$ is achieved if the number of communication rounds meets

$$T_g \ge \frac{a \ln(1/\epsilon)}{1 - \eta},$$

where $a$ is a constant determined by $L$, $\mu$, and $\xi$, when running the FL algorithm shown as Algorithm 1 with step size $\delta < \frac{2}{L}$.
Proof:
See Appendix C. ∎
Note that it is very hard to derive a closed-form expression for the global model during each communication round due to the dynamic nature of the wireless channel and the uncertainty of multiple random variables. Therefore, we assume the amount of data on each UE is fixed to facilitate the proof of Proposition 3. In addition, from Proposition 2 and Proposition 3, we can see that there is a trade-off between the number of communication rounds and the number of local iterations, characterized by $\eta$: a small $\eta$ leads to a large $T_l$, yet a small $T_g$, from which we can jointly approximate the communication and computing resources consumed by training FL tasks (a numerical sketch is given after Algorithm 1). The details can be found in the next section.
Input:
The required local model accuracy loss $\eta$, the required global model accuracy loss $\epsilon$.
Output:
The global model $\boldsymbol{w}$, the number of local iterations $T_l$, and the number of communication rounds $T_g$.
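The trade-off above can be illustrated numerically. The sketch below assumes the forms $T_l \propto \ln(1/\eta)$ and $T_g \propto \ln(1/\epsilon)/(1-\eta)$ from Propositions 2 and 3, with placeholder constants `a_l` and `a_g` standing in for the expressions involving $L$, $\mu$, $\delta$, and $\xi$.

```python
import numpy as np

a_l, a_g, eps = 3.0, 2.0, 0.01   # placeholder constants and target global loss
for eta in (0.05, 0.1, 0.3, 0.5):
    T_l = np.ceil(a_l * np.log(1.0 / eta))              # local iterations per round
    T_g = np.ceil(a_g * np.log(1.0 / eps) / (1 - eta))  # communication rounds
    print(f"eta={eta:.2f}: T_l={T_l:.0f}, T_g={T_g:.0f}, total={T_l * T_g:.0f}")
```

As expected, tightening the local accuracy (smaller $\eta$) inflates $T_l$ while shrinking $T_g$, and vice versa.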
VI Discussion of Three Cases
In general, the resources available for training FL tasks at the wireless edge are limited, since 1) communication and computing resources at the wireless edge are limited and precious, and 2) resource consumption quickly increases with the widespread use of smart terminals. In this section, we discuss three specific cases according to the sufficiency of the communication and computing resources. Furthermore, we derive the explicit expression of the model accuracy under the FL framework, as a function of the amount of the consumed resources, based on the sufficiency of the respective communication and computing resources.
VI-A Case 1: Sufficient Communication Resources and Computing Resources
When both communication resources and computing resources are sufficient, we can approximate the amount of communication/computing resources needed for the FL algorithm based on Proposition 2 and Proposition 3. Specifically, the bandwidth needed for transmitting local models in the uplink should meet

$$B^U \ge \bar{B}^U = \frac{a \ln(1/\epsilon)}{1 - \eta}\, \bar{b}^U. \tag{24}$$

Similarly, based on equation (17), we can obtain the bandwidth needed for transmitting the global model in the downlink, as follows,

$$B^D \ge \bar{B}^D = \frac{a \ln(1/\epsilon)}{1 - \eta}\, \bar{b}^D. \tag{25}$$

Furthermore, based on equation (19), given local accuracy loss $\eta$ with $T_l = \frac{2}{(2 - L\delta)\delta\mu} \ln\frac{1}{\eta}$, the total amount of computing resources should meet the following constraint,

$$C \ge C_{total} = \frac{a \ln(1/\epsilon)}{1 - \eta} \cdot \frac{2 \ln(1/\eta)}{(2 - L\delta)\,\delta\mu} \cdot \bar{c}. \tag{26}$$
VI-B Case 2: Sufficient Computing Resources and Insufficient Communication Resources
When computing resources are sufficient while communication resources are insufficient, we aim to reduce bandwidth consumption by reducing the number of communication rounds. In this case, the number of local iterations still follows Proposition 2, as computing resources are sufficient. However, Proposition 3 may not be met due to the lack of communication resources, which decreases the number of communication rounds $T_g$. As a result, the required global model accuracy cannot be achieved. Specifically, the maximal number of communication rounds is limited by the communication resources, i.e., $T_g \le T_g^{max} = \min\!\left(\frac{B^U_{max}}{\bar{b}^U}, \frac{B^D_{max}}{\bar{b}^D}\right)$, where $B^U_{max}$ and $B^D_{max}$ are the maximal available bandwidth that can be used for FL in the uplink and the downlink respectively, and $\bar{b}^U$ and $\bar{b}^D$ are the mean bandwidth consumption in the uplink and downlink at one global iteration, respectively. To achieve the required global accuracy loss $\epsilon$ when the number of communication rounds is limited, based on Appendix C, we first give the following relationship,

$$T_g \ge \frac{a \ln(1/\epsilon)}{1 - \eta}, \tag{27}$$

from which we reasonably expect that the realistically achieved global accuracy loss $\hat{\epsilon}$ can be expressed by

$$\hat{\epsilon} = \exp\!\left(-\frac{T_g (1 - \eta)}{a}\right). \tag{28}$$

Therefore, we have $\hat{\eta} = 1 - \frac{a \ln(1/\epsilon)}{T_g}$, where $\hat{\eta}$ is the realistic local model accuracy loss. Moreover, we can see that when $\epsilon$ is fixed, $\hat{\eta}$ will decrease if $T_g$ decreases. If we want to reduce $T_g$ and thus reduce bandwidth consumption, while keeping $\epsilon$ unchanged, we should decrease the local accuracy loss by increasing the number of local iterations. As a result, the computing resource consumption will increase. In other words, there exists a trade-off to some extent between the communication resources and computing resources for achieving a certain ML model accuracy. In addition, from the perspective of communication resources, the number of communication rounds should meet $T_g \le T_g^{max}$. Therefore, we have $\hat{\eta} = 1 - \frac{a \ln(1/\epsilon)}{T_g^{max}}$.

Therefore, based on Proposition 2, the number of local iterations should meet

$$T_l \ge \frac{2}{(2 - L\delta)\,\delta\mu} \ln\frac{1}{\hat{\eta}}. \tag{29}$$

As a result, the total computing resources consumed for the FL task are given by

$$C_{total} = T_g^{max} \cdot \frac{2}{(2 - L\delta)\,\delta\mu} \ln\frac{1}{\hat{\eta}} \cdot \bar{c}. \tag{30}$$
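A one-line budget check captures the Case-2 constraint: the achievable number of communication rounds is capped by whichever link budget is exhausted first. All figures below are illustrative.

```python
# T_g <= min(B_ul_max / b_ul, B_dl_max / b_dl), as above.
B_ul_max, B_dl_max = 2.0e9, 2.0e9   # total uplink/downlink bandwidth budgets
b_ul, b_dl = 1.5e8, 1.2e8           # mean bandwidth consumption per round
T_g_max = int(min(B_ul_max / b_ul, B_dl_max / b_dl))
print(f"at most {T_g_max} communication rounds under the bandwidth budget")
```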
VI-C Case 3: Sufficient Communication Resources and Insufficient Computing Resources
When communication resources are sufficient while computing resources are insufficient, we aim to reduce the computing resource consumption by reducing the number of local iterations. As communication resources are sufficient, the number of communication rounds still follows Proposition 3, while Proposition 2 may not be met due to the lack of computing resources, which decreases the number of local iterations. As a result, the required local model accuracy cannot be achieved. In addition, from the perspective of computing resources, the number of local iterations should meet $T_l \le \frac{C_k^{max}}{c D_k}$, where $C_k^{max}$ represents the maximal computing resources that can be used for local training on UE $k$. To achieve the required local accuracy although the number of local iterations is limited, we give the following relationship based on Appendix B,

$$\eta \ge \exp\!\left(-\frac{(2 - L\delta)\,\delta\mu}{2}\, T_l\right), \tag{31}$$

from which we can reasonably expect that the realistic local model accuracy loss is expressed by $\hat{\eta} = \exp\!\left(-\frac{(2 - L\delta)\delta\mu}{2} T_l\right)$, where we can obtain the number of local iterations, i.e., $T_l = \frac{2}{(2 - L\delta)\delta\mu} \ln\frac{1}{\hat{\eta}}$.

Therefore, when $T_l$ decreases, $\hat{\eta}$ will increase. In other words, when the total amount of available computing resources decreases, the lower bound of $\hat{\eta}$ will increase. Moreover, based on $C_{total} = T_l T_g \bar{c}$ in Section IV-D, we can derive the lower bound of the number of communication rounds, as follows,

$$T_g \ge \frac{a \ln(1/\epsilon)}{1 - \hat{\eta}}. \tag{32}$$

Therefore, the bandwidth for transmitting the local models and the global model in the uplink and the downlink is respectively given by

$$\bar{B}^U = \frac{a \ln(1/\epsilon)}{1 - \hat{\eta}}\, \bar{b}^U, \tag{33}$$

$$\bar{B}^D = \frac{a \ln(1/\epsilon)}{1 - \hat{\eta}}\, \bar{b}^D. \tag{34}$$
Therefore, based on the analysis aforementioned, we provide Proposition 4 about the resource consumption for the three cases discussed above.
Proposition 4.
- (1) Case 1 - Sufficient Communication and Computing Resources: To achieve the required model accuracy losses $\eta$ and $\epsilon$, the consumption of bandwidth in the uplink is given by (24), the consumption of bandwidth in the downlink is given by (25), and the consumption of computing resources used for local training is given by (26).
- (2) Case 2 - Sufficient Computing Resources and Insufficient Communication Resources: To achieve the required global model accuracy loss $\epsilon$, the consumption of computing resources is given by (30).
- (3) Case 3 - Sufficient Communication Resources and Insufficient Computing Resources: To achieve the required global model accuracy loss $\epsilon$, the consumption of bandwidth in the uplink is given by (33), while the consumption of bandwidth in the downlink is given by (34).
VII Numerical Results and Discussion
In this section, we verify our analytical modeling using numerical simulations by (1) verifying the analytical results of the transmission success probability (uplink and downlink) and the resource (bandwidth and computing) consumption; (2) measuring the performance of the FL settings; and (3) examining the trade-off between the computing resources and communication resources under the FL framework.
VII-A Simulation Setting
We consider an FL empowered wireless network composed of multiple UEs that are randomly generated and one central BS with a cloud server that serves as the FL model aggregator. The coverage of the BS is a circular area with a radius of . The radius of the interfering area is set to m. The transmit power of UEs and the serving BS is set to dBm and dBm respectively [20]. Moreover, the noise power is set to dBm [20]. The density of interfering UEs is set to and is randomly chosen within ms. The path loss is modeled as [9]. The number of CPU cycles required for computing one sample data is randomly chosen within cycles/sample [24]. and are randomly chosen within and .
We consider using FL to solve the multi-class classification problem over the MNIST dataset [32] for model training, where the dataset of each UE is randomly split with a 75-25 ratio for training and testing respectively [28]. Moreover, we use a fully-connected two-layer network built with PyTorch, where the sizes of the input layer, hidden layer, and output layer are set to 784 (i.e., 28 × 28), 40, and 10, respectively. The activation function is ReLU, as it can greatly accelerate the convergence of gradient descent and increase the number of activated neurons [33, 34]. In addition, a constant learning rate has always been the conventional choice [35, 36, 37]. Inspired by the hyper-parameter analysis and the corresponding experimental results in [35, 36], we set learning rate . In addition, according to our neural network settings, the transmitted model size is around 1.156 MB when using 32-bit floating-point representation. In addition, we set , , , [24].
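For reference, a minimal PyTorch sketch of the network described above; the SGD wiring with a fixed learning rate of 0.01 is our assumption, used only to make the snippet complete.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 40),   # input layer (28 x 28 flattened) -> hidden layer
    nn.ReLU(),            # activation used in the paper's settings
    nn.Linear(40, 10),    # hidden layer -> 10 MNIST classes
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # constant learning rate
print(sum(p.numel() for p in model.parameters()), "trainable parameters")
```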
VII-B Simulation Results
VII-B1 Verifying Analytical Results
First, we examine the local and the global model transmission success probability with varying UE density. In the two simulations, based on the PPP model, we randomly generate 30 specific point distributions for each UE density (at 0.1 intervals), where the simulation results of both and are averaged over these 30 different channel instances for each UE density. Fig. 3 and Fig. 4 show the probability in the uplink and in the downlink respectively, for both analytical and simulation results, with varying UE density under different threshold parameters. The analytical results of and are computed based on equation (14) and equation (15) respectively. From Fig. 3 and Fig. 4, we can see that the curves of the analytical results match the simulations closely for both the uplink and the downlink. Moreover, we can see that the smaller the threshold ( in Fig. 3, in Fig. 4), the larger the transmission success probability.



Next, we examine the bandwidth consumption in the uplink and the downlink respectively, for both analytical and simulation results, with respect to the global accuracy loss . We first randomly select the UE density within and randomly generate 10 specific point distributions under the corresponding UE density. Then, we train the same FL task (i.e., classification on the MNIST dataset) for each point distribution, where the simulation results are averaged over the 10 point distributions. Fig. 5 and Fig. 6 show how the bandwidth consumption in the uplink and the downlink changes with the global accuracy loss respectively. From Fig. 5 and Fig. 6, we can see that the curves of the analytical results match the simulations closely for both the uplink and the downlink. Moreover, the bandwidth consumption in both the uplink and the downlink decreases with the global accuracy loss. In addition, we also find that a lower local accuracy leads to more bandwidth consumption to guarantee a specific global accuracy when training on i.i.d. data. Specifically, as shown in Fig. 5, in the uplink, the amount of bandwidth consumed to guarantee is Mbps more than that to guarantee on average, while the amount of bandwidth consumed to guarantee is Mbps more than that to guarantee on average. As shown in Fig. 6, in the downlink, the amount of bandwidth consumed to guarantee is Mbps more than that to guarantee on average, while the amount of bandwidth to guarantee is Mbps more than that to guarantee on average. The reason is that a lower local accuracy needs more communication rounds to aggregate the local models to achieve a certain global accuracy, and thus consumes more bandwidth.



In the following, we examine the computing resource consumption for both analytical and simulation results with respect to the density of UEs. Specifically, the analytical results of the computing resource consumption are computed based on equation (19), while the simulation results are averaged over 10 randomly generated point distributions for each UE density. Fig. 7 shows how the computing resource consumption changes with the density of UEs. From Fig. 7, we can see that the amount of computing resource consumption increases in the beginning and then decreases with the density of UEs. Specifically, the amount of computing resource consumption increases with the UE density when it is approximately below 2 (i.e., ), as the number of UEs that participate in local training increases with the UE density. When approximately , the amount of computing resource consumption decreases with the UE density, as poor SNR causes some UEs to fail to successfully receive the global model. As a result, the number of UEs that participate in local training decreases in the next communication round, and thus the amount of computing resource consumption decreases. Moreover, we also find that achieving a higher local accuracy needs more computing resources to train local models. Specifically, the amount of computing resources consumed to guarantee is cycles/s more than that to guarantee on average, while the amount of computing resources consumed to guarantee is cycles/s more than that to guarantee on average.
VII-B2 Measuring the performance of FL settings
First, we examine the convergence property using simulation experiments. In this simulation experiment, the UE density is randomly chosen within and data points are randomly generated (the same settings are used for the following simulations). Moreover, we set , dB, and dB. As shown in Fig. 8, we randomly choose UEs to observe the changes of the local optimization function, where the local optimization function converges in about 40 epochs. In addition, as shown in Fig. 9, we can observe that the global loss function converges in around 12 communication rounds.



Next, we examine the global accuracy loss with the number of communication rounds for a fixed local accuracy loss . In this simulation, we still set dB and dB. Fig. 10 shows how the global model accuracy loss changes with the number of communication rounds. From Fig. 10, we can see that the global model accuracy loss decreases with the number of communication rounds. Moreover, the difference between the actual global accuracy loss and is always within once the learning converges. Please note that the SINR and the SNR practically affect the global aggregation and local training respectively.
Next, we examine whether the well-trained model is effective on the test datasets. In this simulation experiment, the test datasets are drawn from the same distribution as the training data. We randomly select 3 UEs and calculate the testing accuracy every 2 communication rounds. As shown in Fig. 11, we can see that the testing accuracy increases with the training accuracy, where is the local training accuracy and is the testing accuracy. We can also see that, in general, the difference between the training accuracy and the testing accuracy is within .
VII-B3 Examining the trade-off between the computing resources and communication resources under FL framework
First, we examine the relationship between the global model accuracy and the available bandwidth in the uplink. In this simulation experiment, we first assume that the bandwidth in the downlink and the computing resources are sufficient, and then we fix the required local model accuracy () to verify the relationship between the global model accuracy and the amount of available bandwidth in the uplink. From Fig. 12, we can see that the global model accuracy sharply increases in the beginning and then increases slowly with the amount of available bandwidth in the uplink, as the number of local UEs that can participate in global aggregation quickly increases with the amount of bandwidth in the beginning. When the amount of bandwidth becomes sufficient, it has little effect on the transmission success probability of local models, and thus the global model accuracy stays fairly steady. Moreover, in the beginning, a higher local model accuracy (lower ) leads to a higher global model accuracy. Specifically, the global model accuracy when is higher than that when , while the global model accuracy when is higher than that when .



After that, we examine the relationship between the local model accuracy and the computing resources, as shown in Fig. 13, where we randomly select 3 different UEs and assume that the bandwidth in the uplink/downlink is sufficient. From Fig. 13, we can see that the local model accuracy quickly increases in the beginning and then stays fairly steady with the amount of computing resources. The reason is similar to that in Fig. 12, i.e., more computing resources lead to more local iterations in the beginning, while sufficient computing resources have little effect on local iterations. Please note that the fluctuations of the curves in Fig. 13 arise because the local model accuracy in Fig. 13 is recorded per local iteration, while the global accuracy in Fig. 12 is recorded per communication round, which is composed of 30 local iterations.
Finally, we examine the trend of the global model accuracy with respect to the amount of computing resources and the amount of bandwidth. Fig. 14 shows the relationship among the global model accuracy, computing resources, and bandwidth, where the trade-off between the amount of computing resources and the amount of bandwidth is verified. As shown in Fig. 14, both the amount of bandwidth and the amount of computing resources affect the global model accuracy, where we can flexibly adjust the amount of computing resources and the amount of bandwidth to guarantee a specific global model accuracy. Specifically, when we fix the amount of bandwidth used for transmitting the local models, we can increase the global model accuracy by increasing the amount of computing resources. When we fix the amount of computing resources used for local training, we can increase the global model accuracy by increasing the amount of bandwidth.
VIII Conclusion
Wireless edge network intelligence enabled by FL has been widely acknowledged as a very promising means to address a wide range of challenging network issues. In this paper, we have theoretically analyzed how accurate an ML model can be made by using FL and how many resources are consumed to guarantee a certain local/global accuracy. Specifically, we have derived the explicit expression of the model accuracy under the FL framework, as a function of the amount of computing/communication resources for FL empowered wireless edge networks. Numerical results validate the effectiveness of our theoretical modeling. The modeling and results provide some fundamental understanding of the trade-off between the learning performance and the consumed resources, which is useful for promoting FL empowered wireless network edge intelligence.
Appendix A Calculation of $\mu_I$ and $\sigma_I^2$
Here $\mu_I$ and $\sigma_I^2$ denote the mean and variance of the aggregate interference $I$, which can be computed from the PDFs of the interferer distances and of the number of interfering UEs derived in Section IV-A,
where $d_{min}$ is the minimum distance between the UEs and the BS.
Appendix B Proof of Proposition 2
First, based on the definition of $G_k$ in problem (20), we have $\nabla G_k(\boldsymbol{w}, \boldsymbol{h}_k) = \nabla F_k(\boldsymbol{w} + \boldsymbol{h}_k) - \left(\nabla F_k(\boldsymbol{w}) - \xi \nabla F(\boldsymbol{w})\right)$. Therefore, $G_k$ also meets the $L$-smooth, $\mu$-strongly convex, and twice-differentiable assumptions, i.e.,

$$\left\|\nabla G_k(\boldsymbol{w}, \boldsymbol{h}) - \nabla G_k(\boldsymbol{w}, \boldsymbol{h}')\right\| \le L \left\|\boldsymbol{h} - \boldsymbol{h}'\right\|, \tag{B.1}$$

$$G_k(\boldsymbol{w}, \boldsymbol{h}') \ge G_k(\boldsymbol{w}, \boldsymbol{h}) + \nabla G_k(\boldsymbol{w}, \boldsymbol{h})^{\mathsf{T}} (\boldsymbol{h}' - \boldsymbol{h}) + \frac{\mu}{2}\left\|\boldsymbol{h}' - \boldsymbol{h}\right\|^2, \tag{B.2}$$

$$G_k(\boldsymbol{w}, \boldsymbol{h}') \le G_k(\boldsymbol{w}, \boldsymbol{h}) + \nabla G_k(\boldsymbol{w}, \boldsymbol{h})^{\mathsf{T}} (\boldsymbol{h}' - \boldsymbol{h}) + \frac{L}{2}\left\|\boldsymbol{h}' - \boldsymbol{h}\right\|^2, \tag{B.3}$$

$$\mu \boldsymbol{I} \preceq \nabla^2 G_k(\boldsymbol{w}, \boldsymbol{h}) \preceq L \boldsymbol{I}. \tag{B.4}$$
Then, when given $\boldsymbol{h}^{(t)}$, we rewrite $G_k(\boldsymbol{w}, \boldsymbol{h}^{(t+1)})$ using the second-order Taylor expansion together with (B.4) as

$$G_k\!\left(\boldsymbol{w}, \boldsymbol{h}^{(t+1)}\right) \le G_k\!\left(\boldsymbol{w}, \boldsymbol{h}^{(t)}\right) + \nabla G_k\!\left(\boldsymbol{w}, \boldsymbol{h}^{(t)}\right)^{\mathsf{T}}\!\left(\boldsymbol{h}^{(t+1)} - \boldsymbol{h}^{(t)}\right) + \frac{L}{2}\left\|\boldsymbol{h}^{(t+1)} - \boldsymbol{h}^{(t)}\right\|^2.$$

As in the GD method we have $\boldsymbol{h}^{(t+1)} = \boldsymbol{h}^{(t)} - \delta \nabla G_k(\boldsymbol{w}, \boldsymbol{h}^{(t)})$, based on (B.1) and (B.3) we have

$$G_k\!\left(\boldsymbol{w}, \boldsymbol{h}^{(t+1)}\right) \le G_k\!\left(\boldsymbol{w}, \boldsymbol{h}^{(t)}\right) - \delta\!\left(1 - \frac{L\delta}{2}\right)\left\|\nabla G_k\!\left(\boldsymbol{w}, \boldsymbol{h}^{(t)}\right)\right\|^2. \tag{B.5}$$

Next, we start to find the lower bound of $\|\nabla G_k(\boldsymbol{w}, \boldsymbol{h}^{(t)})\|^2$. For the optimal solution $\boldsymbol{h}^{*}$ of $G_k$, we always have $\nabla G_k(\boldsymbol{w}, \boldsymbol{h}^{*}) = \boldsymbol{0}$. Therefore, by the strong convexity in (B.2), we have

$$\left\|\nabla G_k\!\left(\boldsymbol{w}, \boldsymbol{h}^{(t)}\right)\right\|^2 \ge \mu\left( G_k\!\left(\boldsymbol{w}, \boldsymbol{h}^{(t)}\right) - G_k\!\left(\boldsymbol{w}, \boldsymbol{h}^{*}\right) \right). \tag{B.6}$$

Therefore, combining (B.5) and (B.6), we have

$$G_k\!\left(\boldsymbol{w}, \boldsymbol{h}^{(t+1)}\right) - G_k\!\left(\boldsymbol{w}, \boldsymbol{h}^{*}\right) \le \left(1 - \frac{(2 - L\delta)\delta\mu}{2}\right)\left( G_k\!\left(\boldsymbol{w}, \boldsymbol{h}^{(t)}\right) - G_k\!\left(\boldsymbol{w}, \boldsymbol{h}^{*}\right) \right) \overset{(c)}{\le} e^{-\frac{(2 - L\delta)\delta\mu}{2}} \left( G_k\!\left(\boldsymbol{w}, \boldsymbol{h}^{(t)}\right) - G_k\!\left(\boldsymbol{w}, \boldsymbol{h}^{*}\right) \right),$$

where (c) can be obtained from $1 - x \le e^{-x}$. Therefore, to ensure that $G_k(\boldsymbol{w}, \boldsymbol{h}^{(T_l)}) - G_k(\boldsymbol{w}, \boldsymbol{h}^{*}) \le \eta \left( G_k(\boldsymbol{w}, \boldsymbol{h}^{(0)}) - G_k(\boldsymbol{w}, \boldsymbol{h}^{*}) \right)$, we need $e^{-\frac{(2 - L\delta)\delta\mu}{2} T_l} \le \eta$. Therefore, when $\delta < \frac{2}{L}$ (as $T_l > 0$), we have $T_l \ge \frac{2}{(2 - L\delta)\,\delta\mu} \ln\frac{1}{\eta}$.
Appendix C Proof of Proposition 3
Under Assumption 1, Assumption 2, and Assumption 3, the following conditions hold on $F(\boldsymbol{w})$:

$$F\!\left(\boldsymbol{w}'\right) \le F\!\left(\boldsymbol{w}\right) + \nabla F\!\left(\boldsymbol{w}\right)^{\mathsf{T}}\!\left(\boldsymbol{w}' - \boldsymbol{w}\right) + \frac{L}{2}\left\|\boldsymbol{w}' - \boldsymbol{w}\right\|^2, \tag{C.1}$$

$$\left\|\nabla F\!\left(\boldsymbol{w}\right)\right\|^2 \ge \mu\left( F\!\left(\boldsymbol{w}\right) - F\!\left(\boldsymbol{w}^{*}\right) \right). \tag{C.2}$$

The proof of (C.2) is similar to that of (B.6) in Appendix B. For (C.1), based on the Lagrange mean value theorem, we always have a point between $\boldsymbol{w}$ and $\boldsymbol{w}'$ such that

(C.3)

For the optimal solution of the local optimization problem, we always have

(C.4)

With the above equalities and inequalities, we now start to prove Proposition 3.
(C.5)

As

(C.6)

(C.7)

Therefore, we have

According to equation (C.6), we can calculate the corresponding terms as follows,

Therefore, we have

Therefore, based on (C.7), we have
To guarantee the global accuracy, i.e., , we have
Therefore, when , we have .
References
- [1] B. Jovanović, “Internet of Things Statistics for 2021 – Taking Things Apart.” [Online]. Available: https://dataprot.net/statistics/iot-statistics/
- [2] W. Y. B. Lim, N. C. Luong, D. T. Hoang, Y. Jiao, Y.-C. Liang, Q. Yang, D. Niyato, and C. Miao, “Federated Learning in Mobile Edge Networks: A Comprehensive Survey,” IEEE Communications Surveys & Tutorials, vol. 22, no. 3, pp. 2031–2063, 2020.
- [3] Q. Yang, Y. Liu, T. Chen, and Y. Tong, “Federated Machine Learning: Concept and Applications,” ACM Transactions on Intelligent Systems and Technology (TIST), vol. 10, no. 2, pp. 1–19, 2019.
- [4] E. Jeong, S. Oh, H. Kim, J. Park, M. Bennis, and S.-L. Kim, “Communication-efficient On-device Machine Learning: Federated Distillation and Augmentation under Non-iid Private Data,” arXiv preprint arXiv:1811.11479, 2018.
- [5] C. He and M. Annavaram, “Group Knowledge Transfer: Federated Learning of Large CNNs at the Edge,” in Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020.
- [6] M. Chen, H. V. Poor, W. Saad, and S. Cui, “Convergence Time Optimization for Federated Learning over Wireless Networks,” IEEE Transactions on Wireless Communications, vol. 20, no. 4, pp. 2457–2471, 2020.
- [7] M. Chen, H. V. Poor, W. Saad, and S. Cui, “Wireless Communications for Collaborative Federated Learning,” IEEE Communications Magazine, vol. 58, no. 12, pp. 48–54, 2020.
- [8] L. U. Khan, S. R. Pandey, N. H. Tran, W. Saad, Z. Han, M. N. H. Nguyen, and C. S. Hong, “Federated Learning for Edge Networks: Resource Optimization and Incentive Mechanism,” IEEE Communications Magazine, vol. 58, no. 10, pp. 88–93, 2020.
- [9] Y.-J. Liu, G. Feng, Y. Sun, S. Qin, and Y.-C. Liang, “Device Association for RAN Slicing based on Hybrid Federated Deep Reinforcement Learning,” IEEE Transactions on Vehicular Technology, vol. 69, no. 12, pp. 15731–15745, 2020.
- [10] B. Luo, X. Li, S. Wang, J. Huang, and L. Tassiulas, “Cost-Effective Federated Learning Design,” in Proceedings of IEEE INFOCOM 2021 - IEEE Conference on Computer Communications, 2021, pp. 1–10.
- [11] K. Yang, T. Jiang, Y. Shi, and Z. Ding, “Federated Learning via Over-the-Air Computation,” IEEE Transactions on Wireless Communications, vol. 19, no. 3, pp. 2022–2035, 2020.
- [12] W. Xia, W. Wen, K.-K. Wong, T. Q. Quek, J. Zhang, and H. Zhu, “Federated-Learning-Based Client Scheduling for Low-Latency Wireless Communications,” IEEE Wireless Communications, vol. 28, no. 2, pp. 32–38, 2021.
- [13] T. Sery, N. Shlezinger, K. Cohen, and Y. C. Eldar, “COTAF: Convergent Over-the-Air Federated Learning,” in Proceedings of IEEE Global Communications Conference (GLOBECOM), 2020, pp. 1–6.
- [14] D. Wen, K.-J. Jeon, and K. Huang, “Federated Dropout–A Simple Approach for Enabling Federated Learning on Resource Constrained Devices,” arXiv preprint arXiv:2109.15258, 2021.
- [15] C. T. Dinh, N. H. Tran, M. N. Nguyen, C. S. Hong, W. Bao, A. Y. Zomaya, and V. Gramoli, “Federated Learning over Wireless Networks: Convergence Analysis and Resource Allocation,” IEEE/ACM Transactions on Networking, vol. 29, no. 1, pp. 398–409, 2020.
- [16] M. Chen, Z. Yang, W. Saad, C. Yin, H. V. Poor, and S. Cui, “A Joint Learning and Communications Framework for Federated Learning Over Wireless Networks,” IEEE Transactions on Wireless Communications, vol. 20, no. 1, pp. 269–283, 2021.
- [17] I. Flint, H.-B. Kong, N. Privault, P. Wang, and D. Niyato, “Analysis of Heterogeneous Wireless Networks using Poisson Hard-core Hole Process,” IEEE Transactions on Wireless Communications, vol. 16, no. 11, pp. 7152–7167, 2017.
- [18] A. M. Hunter, J. G. Andrews, and S. Weber, “Transmission Capacity of Ad Hoc Networks with Spatial Diversity,” IEEE Transactions on Wireless Communications, vol. 7, no. 12, pp. 5058–5071, 2008.
- [19] S. Weber, J. G. Andrews, and N. Jindal, “An Overview of the Transmission Capacity of Wireless Networks,” IEEE Transactions on Communications, vol. 58, no. 12, pp. 3593–3604, 2010.
- [20] Y. Sun, L. Zhang, G. Feng, B. Yang, B. Cao, and M. A. Imran, “Blockchain-Enabled Wireless Internet of Things: Performance Analysis and Optimal Communication Node Deployment,” IEEE Internet of Things Journal, vol. 6, no. 3, pp. 5791–5802, 2019.
- [21] Y. J. Chun, M. O. Hasna, and A. Ghrayeb, “Modeling heterogeneous cellular networks interference using poisson cluster processes,” IEEE Journal on Selected Areas in Communications, vol. 33, no. 10, pp. 2182–2195, 2015.
- [22] V. V. Chetlur and H. S. Dhillon, “Coverage Analysis of a Vehicular Network Modeled as Cox Process Driven by Poisson Line Process,” IEEE Transactions on Wireless Communications, vol. 17, no. 7, pp. 4401–4416, 2018.
- [23] C. Hennig and M. Kutlukaya, “Some Thoughts about the Design of Loss Functions,” REVSTAT–Statistical Journal, vol. 5, no. 1, pp. 19–39, 2007.
- [24] Z. Yang, M. Chen, W. Saad, C. S. Hong, and M. Shikh-Bahaei, “Energy Efficient Federated Learning over Wireless Communication Networks,” IEEE Transactions on Wireless Communications, vol. 20, no. 3, pp. 1935–1949, 2020.
- [25] J. Wang, Z. Charles, Z. Xu, G. Joshi, H. B. McMahan, M. Al-Shedivat, G. Andrew, S. Avestimehr, K. Daly, D. Data et al., “A field Guide to Federated Optimization,” arXiv preprint arXiv:2107.06917, 2021.
- [26] P. L. Hsu and H. Robbins, “Complete Convergence and the Law of Large Numbers,” Proceedings of the National Academy of Sciences, vol. 33, no. 2, pp. 25–31, 1947.
- [27] T. H. Hsu, H. Qi, and M. Brown, “Measuring the Effects of Non-Identical Data Distribution for Federated Visual Classification,” CoRR, vol. abs/1909.06335, 2019. [Online]. Available: http://arxiv.org/abs/1909.06335
- [28] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-Efficient Learning of Deep Networks from Decentralized Data,” in Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, vol. 54, pp. 1273–1282, 20–22 Apr 2017. [Online]. Available: http://proceedings.mlr.press/v54/mcmahan17a.html
- [29] Y. Gao, M. Kim, S. Abuadbba, Y. Kim, C. Thapa, K. Kim, S. A. Camtep, H. Kim, and S. Nepal, “End-to-End Evaluation of Federated Learning and Split Learning for Internet of Things,” in Proceedings of 2020 IEEE Computer Society International Symposium on Reliable Distributed Systems (SRDS), pp. 91–100, 2020.
- [30] Y. Liu, S. Garg, J. Nie, Y. Zhang, Z. Xiong, J. Kang, and M. S. Hossain, “Deep Anomaly Detection for Time-series Data in Industrial IoT: a Communication-efficient on-device Federated Learning Approach,” IEEE Internet of Things Journal, vol. 8, no. 8, pp. 6348–6358, 2020.
- [31] H. H. Yang, Z. Liu, T. Q. S. Quek, and H. V. Poor, “Scheduling Policies for Federated Learning in Wireless Networks,” IEEE Transactions on Communications, vol. 68, no. 1, pp. 317–333, 2020.
- [32] Y. LeCun, “The MNIST Database of Handwritten Digits,” http://yann.lecun.com/exdb/mnist/, 1998.
- [33] Y. Li and Y. Yuan, “Convergence Analysis of Two-layer Neural Networks with ReLU Activation,” in Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017, pp. 597–607.
- [34] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” Advances in Neural Information Processing Systems, vol. 25, pp. 1097–1105, 2012.
- [35] C. Darken and J. E. Moody, “Note on Learning Rate Schedules for Stochastic Optimization,” in Proceedings of the 4th International Conference on Neural Information Processing Systems, vol. 91, 1990, pp. 832–838.
- [36] L. Liu, H. Jiang, P. He, W. Chen, X. Liu, J. Gao, and J. Han, “On the Variance of the Adaptive Learning Rate and Beyond,” in Proceedings of International Conference on Learning Representations, 2019.
- [37] Y.-J. Liu, G. Feng, J. Wang, Y. Sun, and S. Qin, “Access Control for RAN Slicing based on Federated Deep Reinforcement Learning,” in Proceedings of ICC 2021 - IEEE International Conference on Communications, 2021, pp. 1–6.