Toward Multiple Federated Learning Services Resource Sharing in Mobile Edge Networks
Abstract
Federated Learning is a new learning scheme for collaboratively training a shared prediction model while keeping data locally on participating devices. In this paper, we study a new model of multiple federated learning services coexisting at a multi-access edge computing server. Accordingly, the sharing of CPU resources among learning services at each mobile device for the local training process and the allocation of communication resources among mobile devices for exchanging learning information must be considered. Furthermore, the convergence performance of each learning service depends on its hyper-learning rate parameter, which needs to be precisely decided. Towards this end, we propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL, regarding the energy consumption of mobile devices and the overall learning time. We design a centralized algorithm based on the block coordinate descent method and a decentralized JP-miADMM algorithm for solving the MS-FEDL problem. Different from the centralized approach, the decentralized approach requires many iterations to reach convergence, but it allows each learning service to independently manage its local resources and learning process without revealing the learning service information. Our simulation results demonstrate the convergence of the proposed algorithms and their superior performance compared to a heuristic strategy.
Index Terms:
Federated Learning, resource allocation, multi-access edge computing, decentralized optimization.
I Introduction
Nowadays, following the great success of Machine Learning (ML) and Artificial Intelligence (AI) applications, more and more intelligent services are transforming our lives. This progress has been drastically elevated by the ubiquity of device-generated data available to service operators and by stronger computing power at cloud data centers and mobile devices. Recently, the deployment of multi-access edge computing (MEC) servers at the edge networks has been acknowledged as one of the key pillars to revolutionize mobile communication by assisting cellular base stations with low-latency computing capability. Compared to cloud data centers, the machine learning training process can be done at the mobile edge network with the help of MEC servers, resulting in lower communication latency for exchanging learning information. Therefore, these enablers unlock the full potential of edge ML applications for the vision of truly intelligent next-generation communication systems in 6G [1]. However, ML applications raise a privacy concern in the data collection for training purposes. In many ML applications (e.g., Crowdtracker [2], Waze Carpool [3], etc.), users are required to share sensitive personal information (i.e., user location, user identity, user photos, etc.) with the server. Furthermore, uploading a massive amount of data over radio access links or the Internet to the cloud data centers is costly. Hence, the strong computation capabilities of increasingly powerful mobile devices empower local inference and fine-tuning of Deep Neural Network (DNN) models on device without sending the training data to the server [4, 5]. On the other hand, using solely the personalized local data could lead to overfitting of the local training models. Thus, sharing the local learning model parameters among user equipments (UEs) to build up a generalized global model is the primary idea of a brand-new ML scheme, namely federated learning [6, 7, 8]. The deployment of this learning scheme at the edge networks reduces latency and transmission cost by sending the weight parameters of local learning models to the MEC server instead of sending device-generated data to the cloud, and it enhances user privacy compared to conventional centralized training [9]. Inevitably, the federated learning scheme is one of the vital enablers to bring edge intelligence into reality.

In the typical federated learning scheme, the actual training process is decentralized as each UE constructs a local model based on its local dataset. A shared global model is then updated at the server by aggregating the local learning information from all UEs, such as gradients or learning weight parameters. After that, the updated global model is broadcast back to the UEs. As an example, the work in [6] provides the simplest form of a federated learning algorithm in which the learning parameters of the global model are averaged from the local ones at UEs. However, the performance of this learning scheme at the edge networks firmly depends on the allocated computation resources for local training and the communication resources for uploading the updated local model parameters to the MEC server. Therefore, the allocation problem for both computation and communication resources is crucial for deploying a federated learning scheme at the edge networks. This type of problem is well studied in [10, 11], which design resource allocation frameworks regarding learning performance and resource cost. Since these works focus on the model of a single federated learning service in the edge networks, an extensive study and analysis is necessary for the upcoming multiple learning services systems. Indeed, more data consisting of network-level mobile data (e.g., call detail records, radio information, performance indicators, etc.) and app-level data (e.g., device profiles, sensing data, photos, video streaming, voice data, etc.) [12] can be collected on user equipment such as mobile devices, wearable devices, augmented reality headsets, and IoT devices. As a result, many ML applications and services can be deployed on UEs or in the edge networks. In addition to the independent deployment of different learning tasks, multiple deep learning models can be deployed together and provide better-performing systems such as mobile vision systems [4, 5].
For such a deployment of multiple federated learning services, the computation resources must be shared among these services for local training, while the communication resources are shared among mobile devices for exchanging the learning information of different services. Moreover, the performance of each learning service depends on its learning parameters, which need to be precisely decided regarding the resource allocation cost and the overall learning time. In this paper, we study this under-explored problem: the shared computation and communication resource allocation and the learning parameter control for multiple federated learning services coexisting at the edge networks.
In Fig. 1, we depict the system of multiple federated learning services where a Federated Learning Orchestrator (FLO) at the MEC server is in charge of computation and communication resource management, controls the hyper-learning rate parameter of the learning services, and operates these federated learning services. Accordingly, FLO performs two main processes as follows:
Resource allocation process: In this work, we consider the flexible CPU sharing model such as the CPU frequency sharing among virtual machines or containers to perform the local learning updates. Since those virtual instances often require a high deployment cost, we consider the pre-allocating CPU strategy for different services. To capture the trade-off between the energy consumption of mobile devices and overall learning time, we propose a resource optimization problem, namely MS-FEDL that decides the optimal CPU frequency for each learning service and the fraction of total uplink bandwidth for each UE. As shown in Fig. 1, computing (i.e., CPU cycles) and communication (i.e., bandwidth) resources are shared among learning services and UEs, respectively. In addition to resource allocation, it also controls the hyper-learning rate of learning services such as the relative accuracy of the local learning problem at the UEs.
Federated learning process: After the shared computation and communication resources are allocated, FLO performs the federated learning process iteratively according to the following steps: training local models, transmitting local learning models to FLO, updating the global model, and broadcasting the global model to UEs until convergence is observed.
In order to provide an efficient approach for the resource allocation and learning parameter control of multiple federated learning services, in this work, we develop the problem design and analysis for FLO, which can be summarized as follows
• In Section III, we first propose a resource-sharing model for multiple learning services. Then, we pose the resource allocation and learning parameter control problem for FLO to manage multiple learning services with the MS-FEDL problem, which is in the form of a multi-convex problem.
• In Section IV, we develop both centralized and decentralized approaches to solve the MS-FEDL problem. Specifically, we first propose a centralized algorithm based on the block coordinate descent framework that provides a quick scheme by decoupling the global problem into three subproblems, namely the CPU allocation, bandwidth allocation, and hyper-learning rate decision subproblems. After that, we develop a decentralized approach based on the JP-miADMM algorithm, which allows each learning service to independently manage its resource allocation and local learning process while cooperatively operating under the management of FLO. Although the decentralized approach requires many iterations to converge, it provides a more flexible and scalable method for resource allocation without revealing the learning service information.
II Related Works
Many attempts at distributed training over multiple machines have recently given rise to research on decentralized machine learning [13, 11, 14]. However, most of the algorithms in these works are designed for machines that have balanced and i.i.d. data and are connected to high-throughput networks in data centers. With a different scheme, Federated Learning (and related on-device intelligence approaches), which has attracted much attention recently [15, 16, 7, 6, 17, 18, 19], exploits the collaboration of mobile devices that can be large in number, slow and/or unstable in Internet connections, and hold non-i.i.d. and unbalanced data locally. In the simplest form of a federated learning algorithm (i.e., FedAvg [6]), the authors provide a simple mechanism of averaging the updated learning parameters of the local models trained on local data at individual UEs. On the other hand, CoCoA+ [17] provides a general framework with a strong theoretical convergence analysis, in which the local learning problems of UEs are transformed into dual problems and can be solved by arbitrary solvers. Different from the CoCoA+ framework, in our recent work [20], we develop a new federated learning algorithm, namely FEDL, that uses an additional Bregman divergence in the learning objective [21] to encourage the local model to be close to the global model. Most of these existing works aim to provide learning algorithms that focus on learning performance and provide theoretical convergence analysis. However, for practical deployment in mobile edge networks, a resource allocation problem for the federated learning service needs to be carefully designed to manage the computation and communication resources. Furthermore, it is important to control the learning parameters by considering energy consumption and learning time convergence. The work of [11] introduced this type of problem for distributed gradient descent analysis, in which the authors propose a control algorithm that determines the best trade-off between local updates and global parameter aggregation to minimize the loss function under a given resource budget.
In our previous work [10, 20], we proposed a resource allocation problem among UEs and the hyper-learning rate control of a learning service in the wireless environment regarding the computation and communication latency, UE energy consumption, and the heterogeneity of UEs. In this paper, we study the extended design for multiple federated learning services that co-exist at the edge networks, with bandwidth sharing based on OFDMA instead of the transmission power control in our previous work. We also consider the additional broadcast time, extra communication overhead, and the time for the averaging operation at the edge server. Furthermore, both resource allocation and learning processes are controlled by a FLO at the MEC server, which performs resource allocation in a centralized or decentralized manner and acts as an aggregator for the global learning update. For the centralized solution approach, the MS-FEDL problem is bi-convex, and we adopt an alternating minimization algorithm, a solid approach in which the subproblems of the MS-FEDL problem can be solved by arbitrary convex solvers. When all services belong to the same owner, such as multiple deep learning models that are deployed together to provide better-performing mobile vision systems as in [4, 5], this approach can be sufficient to provide an efficient resource allocation mechanism in which sharing the service information in the MS-FEDL problem is not an issue. However, the centralized approach is limited when scaling up to a large number of users and requires sharing all information among services. To resolve these problems, we develop another flexible, decentralized solution approach based on the combination of the multi-convex and parallel settings of ADMM. The conventional ADMM algorithm can be applied to convex problems [22] and was later extended to solve subproblems in parallel with JP-ADMM [23]. Recently, a new convergence analysis was provided for the multi-convex ADMM problem [24]. To the best of our knowledge, the combination of these two extended ADMM algorithms has not been applied in other works as in our decentralized approach. Our decentralized approach can be useful for independent service providers, such as multi-tenant FL services that use common shared resources from a third-party edge provider. The decisions can be made by each service and shared with FLO only, without revealing the learning service information (i.e., dataset information, the local update information exchanged between UEs and the MEC server, and the number of CPU cycles for each UE to execute one sample of data).
III Multi-Service Federated Learning at the Edge
Algorithm 1 (FEDL), one global round:
1. Local Training: Each UE solves its local learning problem (3) to achieve an approximate solution satisfying (5).
2. Communication: All UEs transmit their local model and gradient updates to the edge server.
3. Aggregation and Feedback: The edge server updates the global model as in (9) and then feeds the updated global model and gradient back to all UEs.
III-A Federated Learning Algorithm Design
In this subsection, we summarize our recent federated learning algorithm design according to [20], detailed in Algorithm 1. In the typical setting of federated learning for a general supervised learning problem, given a data sample with an input, the learning task is to train the model parameters that predict the correct label by minimizing a loss function. The training data at UEs can be usage information or sensing data from integrated sensors. Different from conventional centralized learning, the dataset of the federated learning scheme is distributed over a set of UEs, where each participating UE collects training data samples and stores a local dataset; the global dataset is the union of these local datasets.
The local loss function of the learning problem using the local dataset of each UE is defined as
(1)
Assumption 1.
The local loss function is assumed to be L-smooth and β-strongly convex, with constants L and β, respectively.
Here, ⟨·,·⟩ denotes the inner product of two vectors and ‖·‖ is the Euclidean norm. These strong convexity and smoothness assumptions are also used in [25] and are satisfied in popular ML problems such as ℓ2-regularized linear regression and ℓ2-regularized logistic regression. Accordingly, we denote ρ = L/β as the condition number of the local loss function's Hessian matrix.
Then, the global loss function of the global learning problem is as follows
(2)
Accordingly, the global learning model is obtained by solving the global problem (2) using an iterative update process at the server and the UEs in the federated learning scheme. These updates are performed alternately over a number of global rounds, each consisting of the following four steps (a simplified code sketch of one round is given after the list):
• S1. Local Training: Every UE trains a local model using its local training data.
• S2. Upload local model: UEs transmit the local learning model and global gradient updates to the server.
• S3. Update global model: The server constructs the global model based on the weight parameters of the local models.
• S4. Broadcast global model: The updated global model and gradient are broadcast to all UEs.
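To make these steps concrete, the following minimal Python sketch simulates one synchronous global round under simplifying assumptions: each UE runs a few plain gradient-descent steps on an illustrative quadratic local loss, and the server aggregates the local models by data-size-weighted averaging in the spirit of the update in (9). The function names, the quadratic losses, and the fixed step size are illustrative assumptions rather than the exact FEDL surrogate objective and aggregation rule.

```python
import numpy as np

def local_training(w_global, A, b, steps=10, lr=0.1):
    """S1: a few gradient-descent steps on an illustrative quadratic local loss
    0.5/n * ||A w - b||^2 (a stand-in for the actual local objective)."""
    w = w_global.copy()
    for _ in range(steps):
        grad = A.T @ (A @ w - b) / len(b)
        w -= lr * grad
    return w

def one_global_round(w_global, ue_datasets):
    """S1-S4: local training, upload, data-size-weighted aggregation, broadcast."""
    sizes, local_models = [], []
    for A, b in ue_datasets:                     # each (A, b) plays the role of one UE's local data
        local_models.append(local_training(w_global, A, b))  # S1, then S2: upload to the server
        sizes.append(len(b))
    coefs = np.array(sizes) / sum(sizes)
    w_new = sum(c * w for c, w in zip(coefs, local_models))  # S3: weighted averaging at the server
    return w_new                                 # S4: broadcast the new global model to all UEs

# Usage: 3 UEs with random local datasets and a 2-dimensional model
rng = np.random.default_rng(0)
ues = [(rng.normal(size=(20, 2)), rng.normal(size=20)) for _ in range(3)]
w = np.zeros(2)
for _ in range(5):
    w = one_global_round(w, ues)
print(w)
```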
Local Training at UEs: According to [20], instead of solving the local objective in equation (1), a surrogate problem is solved to attain the local model in each global round as follows
(3)
where the first term denotes the Bregman divergence [21] between the local model and the latest global model, the second term denotes the first-order approximation of the global loss function at the latest global model, and the weight that balances these two objectives is the hyper-learning rate, which is also our controlled learning parameter. The Bregman divergence is a generalized distance between the local model solution and the latest global model parameter (e.g., the squared Euclidean distance) that is widely applied in machine learning, statistics, and information geometry [21]. Thus, the local model at each UE is constructed by minimizing the surrogate objective with the approximated loss function such that its parameters stay close to the latest global model parameters. The equivalent local learning problem is then derived as follows
(4)
Since it is usually difficult to obtain the optimal solution of the learning problem (4), UEs are required to find a (possibly weak) approximate solution instead. Analogously to the definition of relative accuracy in [26, 14], the approximate local weight parameters at all UEs satisfy
(5)
where the relative local accuracy is common to all UEs. This parameter defines the quality of the approximate solutions when solving the local learning problem (3): a value of zero means the optimal solution is obtained, while a value of one means no progress is made. Since the original and surrogate objectives have the same Hessian matrix, the surrogate objective is also β-strongly convex and L-smooth. Accordingly, the gradient descent (GD) method is a reasonable choice to solve (4) and requires a certain number of local iterations to achieve the target relative accuracy of the solution.
(6)
where the step size is a local learning rate. Note that each UE holds only a small portion of the total samples. In the case of a large local dataset, mini-batch SGD can be used to alleviate the computation burden on UEs, but the convergence rate will be different. We assume that the generated sequence of local model updates satisfies a linear convergence rate [27] as follows
(7)
where the reference point is the optimal solution of the local problem (4), and the remaining quantities are constants depending on the condition number.
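The effect of the relative accuracy can be illustrated with a small sketch: gradient descent on a strongly convex quadratic is stopped once the gradient norm has shrunk to a target fraction of its initial value, which is one common proxy for a relative-accuracy criterion. The quadratic objective and the gradient-norm stopping rule are assumptions for illustration; they are not the exact condition (5).

```python
import numpy as np

def solve_local_gd(grad, w0, theta=0.1, lr=0.05, max_iters=10_000):
    """Run gradient descent until the gradient norm drops below a
    theta-fraction of its initial value (a proxy for relative accuracy)."""
    w = w0.copy()
    g0 = np.linalg.norm(grad(w))
    iters = 0
    while np.linalg.norm(grad(w)) > theta * g0 and iters < max_iters:
        w -= lr * grad(w)
        iters += 1
    return w, iters

# Example: strongly convex quadratic 0.5 * w^T Q w - c^T w (illustrative local surrogate)
Q = np.array([[3.0, 0.5], [0.5, 1.0]])
c = np.array([1.0, -2.0])
grad = lambda w: Q @ w - c

for theta in (0.5, 0.1, 0.01):
    _, iters = solve_local_gd(grad, np.zeros(2), theta=theta)
    print(f"theta={theta}: {iters} local iterations")  # smaller relative accuracy -> more iterations
```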
Lemma 1.
Global model update at the server: Considering a synchronous federated learning scheme, the global model parameters are then updated by aggregating the local model parameters from all UEs as follows
(9)
This updated global model is then broadcast, along with the global gradient, to all UEs (line 5 of Algorithm 1). The convergence of the global problem (2) is achieved by satisfying
(10)
where the threshold is an arbitrarily small constant and the reference point is the optimal solution of problem (2). The algorithm enhances data privacy by exchanging only learning information rather than the local training data. The convergence analysis of FEDL is developed in [20].
Theorem 1.
With Assumption 1, the convergence of the FEDL algorithm is achieved with a linear rate
(11)
where the rate constant is defined as
(12)
Corollary 1.
The number of global rounds for FEDL to achieve the convergence satisfying (10) is
(13)
Learning Time Model: According to the convergence analysis of the federated learning algorithm, the convergence rate and the number of global rounds depend on the hyper-learning rate and the relative accuracy of the local learning problem, as in Corollary 1. Therefore, the total learning time can be defined in the general form as follows
(14)
where the first term is the communication time of one global round, the second term is the time required to attain the target relative accuracy of the local learning algorithm, and the number of global rounds is given by Corollary 1 with the rate constant defined in equation (12). In a common setting of many federated learning frameworks [6, 18], the number of local iterations is often fixed for each UE; thus, the remaining control parameter is the hyper-learning rate that affects the number of global rounds. Accordingly, we substitute the constants and obtain the simplified form as follows
(15)
where the constants follow from the substitution above.
Var | Definition
---|---
– | Index denoting an FL service
– | Index denoting a participating UE
– | The size of the local dataset of a service at a UE
– | The number of CPU cycles required to process 1 bit of a data sample of a service
– | The energy consumption of a service at a UE to compute one local iteration
– | The energy consumption of a service at a UE to transmit its local updates
– | The local training time for one local iteration of a service
– | The transmission time of a service
– | The total uplink bandwidth
– | The downlink bandwidth
– | The size of the local information updates of a service
– | The number of local iterations of a service
– | The number of global rounds using FEDL
– | The total cost of a learning service
– | The hyper-learning rate using FEDL
– | The allocated CPU-cycle frequency for a service at a UE
– | The allocated fraction of the uplink bandwidth for a UE
III-B Multi-Service Sharing Model
In this paper, we consider a multi-service federated learning scheme with one Federated Learning Orchestrator (FLO) at the MEC server and a set of UEs, as shown in Fig. 1. Each participating UE stores a local dataset for each federated learning service, and the total data size of a service is the sum of its local dataset sizes over all UEs. The CPU resource of each UE is consumed to perform the local learning problem in (4) for each service by using the local data. Therefore, it is crucial to efficiently share the CPU resource of each UE amongst the local learning problems of multiple services. After the local training, all UEs upload their updated local model parameters to the MEC server over the wireless medium. Hence, it is also important to efficiently share the communication resource (i.e., bandwidth) among the UEs. At the MEC server, FLO is deployed to manage the sharing of computation (i.e., CPU) and communication resources among learning services and UEs. In addition to resource allocation, FLO also controls the hyper-learning rate of the learning services.
III-B1 Local Computation Model
We denote the number of CPU cycles required for each UE to execute one sample of data belonging to a service, which can be measured offline [28]. The required CPU cycles are directly proportional to the number of samples in the local dataset. Since all samples have the same size (i.e., number of bits), the number of CPU cycles required for a UE to run one local iteration of a learning service is the product of the per-sample CPU cycles and the local dataset size. Given the CPU-cycle frequency allocated to the service, the energy consumption of a UE to compute one local iteration for the learning service can be expressed as follows [29]
(16)
where the effective capacitance coefficient of the UE's computing chipset scales the energy per cycle. In addition, the computation time per local iteration of a UE for a service is the required number of CPU cycles divided by the allocated CPU frequency. Using a synchronous federated learning scheme, the local training time for one local iteration of a learning service equals the computation time of the slowest UE, i.e.,
(17)
where an additional term accounts for the extra overhead of memory access. We collect the allocated CPU frequencies of a service over all UEs into a vector.
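As a reference point for this model, the sketch below evaluates the per-iteration computation energy and time of one service across UEs, assuming the commonly used dynamic-power form (energy proportional to the number of CPU cycles times the squared CPU frequency, following [29]) and taking the per-iteration time as cycles divided by frequency, with the memory-access overhead omitted; all numeric values are placeholders.

```python
def local_computation_cost(cycles_per_bit, data_bits, freqs, capacitance=2e-28):
    """Per-iteration computation energy and time of one service across UEs.

    cycles_per_bit : CPU cycles to process one bit of a sample (measured offline)
    data_bits      : list of local dataset sizes in bits, one per UE
    freqs          : list of allocated CPU frequencies in Hz, one per UE
    """
    energies, times = [], []
    for d, f in zip(data_bits, freqs):
        cycles = cycles_per_bit * d                           # CPU cycles for one local iteration
        energies.append(0.5 * capacitance * cycles * f ** 2)  # dynamic-power form used for (16)
        times.append(cycles / f)                              # per-UE computation time
    # Synchronous scheme: the per-iteration training time is set by the slowest UE, cf. (17)
    return sum(energies), max(times)

# Hypothetical numbers: 3 UEs, 20 cycles/bit, 1-8 Mbit local data, 0.5-1.5 GHz allocated
energy, time = local_computation_cost(20, [1e6, 4e6, 8e6], [0.5e9, 1.0e9, 1.5e9])
print(f"energy per iteration: {energy:.4f} J, time per iteration: {time*1e3:.2f} ms")
```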
III-B2 Communication Model
After processing the local learning problem, UEs need to send their local information to FLO over a shared wireless medium via a multi-access protocol (i.e., OFDMA). Therefore, the achievable transmission rate (in bps) of a UE, given its allocated fraction of the total uplink bandwidth, is defined as follows:
(18)
where the remaining terms are the background noise power, the transmission power, and the channel gain of the UE, respectively. Since the dimensions of the local models and the global gradient updates in line 4 of Algorithm 1 are fixed for all UEs, the data size (in bits) of the local information updates does not change and is fixed for each learning service. Thus, the uplink transmission time of each UE for a service is
(19)
In addition, the downlink broadcast delay should be taken into account to transmit the global changes to all users using the downlink bandwidth as follows
(20)
Since the global information and the local information have the same size, the downlink transmission time for the updated global information follows the same form as (19) with the downlink rate in (20). Thus, the downlink transmission time of a service is determined by the slowest UE.
Accordingly, the communication time of a learning service consisting of the uplink transmission time and downlink transmission time is defined as
(21)
where an extra communication overhead during the transmission (e.g., establishing a TCP connection) is assumed to be a random constant for each FL service communication.
Furthermore, the energy consumption for the uplink communication of a UE for a service is defined as the product of its transmission power and the uplink transmission time in (19).
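The uplink part of the communication model can be sketched similarly, assuming a Shannon-capacity rate on the allocated bandwidth fraction as suggested by (18), transmission time as update size over rate as in (19), and transmission energy as power times transmission time; the bandwidth, power, noise, and update-size numbers are illustrative placeholders.

```python
import math

def uplink_cost(frac_bw, total_bw_hz, tx_power_w, channel_gain, noise_w_per_hz, update_bits):
    """Uplink rate, transmission time, and transmission energy of one UE.

    Rate follows a Shannon-capacity form on the allocated bandwidth fraction (cf. (18));
    time = bits / rate (cf. (19)); energy = power * time.
    """
    bw = frac_bw * total_bw_hz
    rate = bw * math.log2(1.0 + tx_power_w * channel_gain / (noise_w_per_hz * bw))
    t_up = update_bits / rate
    e_up = tx_power_w * t_up
    return rate, t_up, e_up

# Hypothetical numbers: 10% of 20 MHz, 0.2 W transmit power, 25 kB model update
rate, t_up, e_up = uplink_cost(0.1, 20e6, 0.2, 1e-7, 4e-21, 25_000 * 8)
print(f"rate: {rate/1e6:.2f} Mbps, uplink time: {t_up*1e3:.2f} ms, energy: {e_up*1e3:.3f} mJ")
```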
III-B3 Global Round Model
As mentioned in subsection III-A, the number of global rounds and the number of local iterations are determined by the FEDL convergence analysis. Then, the running time of one global round of a learning service, which includes the local learning time and the transmission time, is defined as follows
(22)
where the last term is the computation time of the averaging operation at the edge server. Since the simple averaging operation can be performed very quickly with the strong computation capability of the edge server, and the size of the local update information (i.e., the local learning model and global gradient updates) is the same for every global round, this term is assumed to be a small constant for each service.
Furthermore, in one global round of each service , the total energy consumption of all UEs for learning and uplink transmission is expressed as follows
(23)
Finally, the total cost of a learning service is defined as a weighted combination of its running time and the energy consumption of its UEs,
where the trade-off parameter balances the running time against the energy consumption of UEs; this total cost is to be minimized.
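Putting the pieces together, the sketch below combines an assumed per-round time of the form in (22) and per-round energy of the form in (23) into a weighted total cost over the given number of global rounds. The weighted-sum form with a trade-off weight kappa is an assumption matching the stated time/energy trade-off, and all input quantities are hypothetical placeholders.

```python
def service_total_cost(local_iter_time, local_iters, comm_time, agg_time,
                       energy_per_iter, energy_per_upload, global_rounds, kappa):
    """Weighted total cost of one FL service (assumed form).

    Per global round: time  = local_iters * local_iter_time + comm_time + agg_time  (cf. (22))
    Per global round: energy = local_iters * energy_per_iter + energy_per_upload    (cf. (23))
    Total cost = global_rounds * (kappa * time + (1 - kappa) * energy); kappa is the
    assumed time/energy trade-off weight.
    """
    t_round = local_iters * local_iter_time + comm_time + agg_time
    e_round = local_iters * energy_per_iter + energy_per_upload
    return global_rounds * (kappa * t_round + (1.0 - kappa) * e_round)

# Hypothetical per-round quantities for one service
cost = service_total_cost(local_iter_time=0.1, local_iters=30, comm_time=0.05,
                          agg_time=0.01, energy_per_iter=0.02, energy_per_upload=0.4,
                          global_rounds=50, kappa=0.5)
print(f"total cost: {cost:.2f}")
```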
III-C Problem Formulation
The learning services jointly occupy the shared CPU resource on each UE and the shared uplink bandwidth resource to upload the weight parameters of their local models. Thus, FLO takes the role of managing these shared resources and controlling the hyper-learning rate of the learning services. To minimize the running time cost and the energy consumption of UEs, we propose the multi-service federated learning optimization problem for FLO, MS-FEDL, as follows
s.t. constraints (24)–(30).
where the right-hand side of (24) is the total CPU frequency of each UE. The main decision variables include the allocated CPU frequency for each service at each UE, the allocated fraction of the total uplink bandwidth for each UE, and the learning parameter (i.e., the hyper-learning rate) of each learning service. According to the constraints in (26) and (27), every learning service and UE must be allocated at least a minimum amount of CPU frequency and bandwidth to train and upload the learning parameters of the local model. Besides, the auxiliary variables are the computation time of one local iteration and the uplink transmission time, which depend on the CPU and bandwidth allocations through constraints (29) and (30), respectively. The constraint (24) captures the sharing of CPU resources among learning services and the constraint (25) captures the sharing of the uplink bandwidth among UEs. Lastly, the constraint (28) provides the feasible range of the learning parameter of the FEDL algorithm for each learning service.
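The coupling structure of these constraints can be checked programmatically. The sketch below verifies that a candidate allocation respects the per-UE shared-CPU constraint (24), the shared-bandwidth constraint (25), and minimum CPU/bandwidth bounds in the spirit of (26) and (27); the array layout, the function name, and the capacity values are assumptions for illustration.

```python
import numpy as np

def is_feasible(f, w, f_max, f_min, w_min, tol=1e-9):
    """Check the resource-sharing constraints of MS-FEDL for a candidate point.

    f     : (S, N) array, CPU frequency allocated to service s on UE n
    w     : (N,)   array, fraction of uplink bandwidth allocated to UE n
    f_max : (N,)   array, total CPU frequency of each UE (cf. (24))
    f_min : scalar, minimum CPU frequency per service and UE (cf. (26)-(27))
    w_min : scalar, minimum bandwidth fraction per UE (cf. (26)-(27))
    """
    cpu_shared = np.all(f.sum(axis=0) <= f_max + tol)   # services share each UE's CPU, cf. (24)
    bw_shared = w.sum() <= 1.0 + tol                    # UEs share the uplink bandwidth, cf. (25)
    cpu_lower = np.all(f >= f_min - tol)
    bw_lower = np.all(w >= w_min - tol)
    return cpu_shared and bw_shared and cpu_lower and bw_lower

# Hypothetical check: 3 services, 4 UEs
S, N = 3, 4
f = np.full((S, N), 0.3e9)   # 0.3 GHz per service per UE
w = np.full(N, 0.25)         # equal bandwidth split
print(is_feasible(f, w, f_max=np.full(N, 1.0e9), f_min=0.1e9, w_min=0.05))
```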
IV Solutions to MS-FEDL
IV-A Centralized Approach
Even though the MS-FEDL problem is non-convex, we show that its specific form is bi-convex [30]. The convexity proofs of the subproblems are given in the appendices. To solve this type of problem, we adopt the popular Block Coordinate Descent (BCD) algorithm [30]. The block coordinate descent method cyclically solves the optimization problem for each block of variables while fixing the remaining blocks at their last updated values, following the Gauss-Seidel update scheme. In the MS-FEDL problem, there are two blocks of variables: the block of learning parameters (hyper-learning rates) and the block of resource allocation variables (CPU frequencies and bandwidth fractions). The detail of the centralized algorithm is shown in Algorithm 2.
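A compact view of the centralized procedure is the generic Gauss-Seidel block update sketched below: each block is minimized with the other block fixed, cycling until the objective stops improving. The block names, the toy bi-convex cost, and the closed-form block solvers are illustrative assumptions; in MS-FEDL, each block update would be handed to a convex solver such as IpOpt.

```python
def block_coordinate_descent(cost, solve_block_a, solve_block_b, a0, b0,
                             max_iters=50, tol=1e-6):
    """Generic Gauss-Seidel BCD over two blocks of variables.

    In MS-FEDL, block a plays the role of the learning-parameter block and block b
    the resource-allocation block (itself split into CPU and bandwidth subproblems).
    """
    a, b = a0, b0
    prev = cost(a, b)
    for _ in range(max_iters):
        a = solve_block_a(b)    # minimize over block a with block b fixed
        b = solve_block_b(a)    # minimize over block b with block a fixed
        cur = cost(a, b)
        if abs(prev - cur) <= tol * max(1.0, abs(prev)):
            break
        prev = cur
    return a, b

# Toy usage with a bi-convex cost (illustration only)
cost = lambda a, b: (a * b - 1.0) ** 2 + 0.1 * (a - b) ** 2
solve_a = lambda b: (b + 0.1 * b) / (b * b + 0.1)   # closed-form argmin over a for fixed b
solve_b = lambda a: (a + 0.1 * a) / (a * a + 0.1)   # closed-form argmin over b for fixed a
a, b = block_coordinate_descent(cost, solve_a, solve_b, a0=2.0, b0=2.0)
print(round(a, 3), round(b, 3))   # converges toward a = b = 1
```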
Given fixed values for the resource allocation variable block, the corresponding computation and communication times, and the resulting total cost for each learning service, we have the learning parameter decision problem as follows
SUB1-c : Learning Parameter Decision Problem
s.t.
where the objective terms are functions of the hyper-learning rate as defined in equation (15).
In fact, the hyper-learning rate decision subproblem SUB1-c can be decomposed for each learning service since there is no coupling among services in its constraints. Thus, each service can make its decision independently by solving the following decentralized subproblem.
SUB1-d : Decentralized Learning Parameter Decision
s.t. (31), (32)
Lemma 2.
There exists a unique solution of the convex problem SUB1-d satisfying the following equation:
where the relative accuracy of the local learning problem is assumed to be sufficiently small (close to zero).

Accordingly, we provide the proof of Lemma 2 in the appendix. In addition, Fig. 2 illustrates the convexity of the SUB1-d subproblem for three learning services. The optimal solutions are obtained by using Lemma 2 and marked as circles, which are also the lowest points of the objective curves. We observe that both excessively high and excessively low values of the hyper-learning rate cause a higher number of global rounds and a higher total cost for each learning service.
According to Lemma 2, the optimal hyper-learning rate solutions do not depend on the total cost of each learning service. Thus, the optimal solution of this problem is independent of the other decisions. Then, given the optimal learning parameter and the corresponding number of global rounds, the problem can be decomposed into two independent subproblems for CPU frequency allocation and bandwidth allocation as follows
SUB2-c : CPU Allocation Problem
s.t.
This problem decides the amount of CPU frequency allocated to each learning service at the UEs.
SUB3-c : Bandwidth Allocation Problem
s.t.


Using block coordinate descent, FLO alternately solves the three convex subproblems of hyper-learning rate decision, CPU allocation, and bandwidth allocation. As discussed above, the optimal hyper-learning rate decision can be obtained independently, so only one iteration is required in the block-coordinate-descent-style algorithm. These convex problems can be easily solved using an off-the-shelf convex solver (e.g., the IpOpt solver [31]). After solving these problems, the resource allocation solutions and the hyper-learning rate decisions are sent to the UEs to start the learning process. The whole operation process is illustrated in Fig. 3.
IV-B Decentralized Approach
In addition to the centralized algorithm, we develop a decentralized algorithm, namely JP-miADMM, which leverages the parallel subproblem-update structure of Jacobi-Proximal ADMM [23] within the multi-convex ADMM framework [24]. Since the original form of multi-convex ADMM uses the conventional Gauss-Seidel scheme and does not allow solving the CPU allocation subproblem independently, the integrated Jacobi-Proximal ADMM form provides the parallel structure for this subproblem. The JP-miADMM algorithm consists of two procedures: a primal update, which can be solved independently by each service, and a dual update, which acts as a coordinator over the solutions of the learning services. Note that, in the following primal subproblems of JP-miADMM, the objectives comprise additive norm-2 terms, namely the augmented term originally introduced in ADMM and the proximal term of Jacobi-Proximal ADMM. These two updates are performed iteratively until the convergence condition is satisfied. In this algorithm, we introduce the dual variables and the auxiliary consensus variable that are used in the subproblems below. The detail of the decentralized algorithm is shown in Algorithm 3, which alternately updates the primal variables, the primal residuals, and the dual variables until the convergence conditions in line 14 are satisfied. Specifically, the first condition checks the CPU allocation based on the Frobenius norm of the allocation matrix, while the second one is based on the vector norm of the bandwidth allocation solution.
Since the optimal hyper-learning rate decision is obtained independently according to the closed form in Lemma 2 for each learning service, the JP-miADMM algorithm consists of an iterative process on the shared CPU and bandwidth allocation as follows
IV-B1 Primal Update
In the primal update, each service solves iteratively its CPU allocation, bandwidth allocation, and hyper-learning rate decision subproblems.
SUB2-d : Decentralized CPU Allocation (finding )
s.t.
where the objective is a function of the CPU allocation of the service and the decision variable is the vector of CPU frequencies allocated to that service over all UEs. Under mild conditions, i.e., the splittable objective functions are closed, proper, and convex and there exists a saddle point satisfying the KKT conditions, the sufficient condition of JP-ADMM for global convergence to the saddle point, according to Theorem 2.1 in [23], can be guaranteed by choosing the penalty and proximal parameters appropriately.
In the bandwidth allocation subproblem, there is a coupling among services because all services share the same allocated bandwidth at each UE. Thus, we transform the SUB3-c subproblem into the following equivalent problem by introducing an auxiliary consensus variable and the consensus constraint (33). This transformation is commonly used to handle global consensus variables in the ADMM framework [22].
s.t. (33)
Accordingly, each service decides the allocated bandwidth by solving individually its subproblem as follows
SUB3-d : Decentralized Bandwidth Allocation
s.t.
The optimal solutions of these convex problems can be easily obtained by using a convex solver (e.g., the IpOpt solver [31]).
IV-B2 Dual Update
After independently updating the resource allocation and hyper-learning rate decision for each learning service, the dual update is performed to coordinate these solutions and update the dual variables for the next iteration. We first update the global consensus variable of the allocated bandwidth for each UE as follows
(34)
Update primal residual:
(35)
(36)
where the corresponding allocations are stacked into vectors over all devices.
Update dual variable:
(37)
(38)
where the dual step size is a positive penalty-related parameter.
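The aggregator-side coordination can be sketched generically for the bandwidth-consensus part: the consensus variable is taken as the average of the per-service proposals plus their scaled duals, the primal residuals are the deviations from the consensus, and the scaled duals are advanced by the residuals. This follows the standard scaled consensus-ADMM pattern and simplifies the exact updates in (34)–(38); in particular, the CPU-related residual and dual in (35) and (37) and the proximal weighting of JP-ADMM are omitted here.

```python
import numpy as np

def coordinate_bandwidth(w_proposals, scaled_duals):
    """One aggregator-side coordination step for the bandwidth consensus (illustrative).

    w_proposals  : (S, N) array, bandwidth fractions proposed by each of S services
    scaled_duals : (S, N) array, scaled dual variables of each service
    Returns the updated consensus, the primal residuals, and the updated duals.
    """
    z = np.mean(w_proposals + scaled_duals, axis=0)   # consensus update (cf. (34))
    residuals = w_proposals - z                       # primal residuals (cf. (36))
    scaled_duals = scaled_duals + residuals           # scaled dual ascent (cf. (38))
    return z, residuals, scaled_duals

# Hypothetical round: 2 services, 3 UEs, slightly disagreeing bandwidth proposals
w_prop = np.array([[0.3, 0.4, 0.3],
                   [0.2, 0.5, 0.3]])
duals = np.zeros_like(w_prop)
z, r, duals = coordinate_bandwidth(w_prop, duals)
print(z)                    # consensus bandwidth fractions per UE
print(np.linalg.norm(r))    # residual norm used in the stopping condition
```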
Using JP-miADMM, FLO needs a Management Aggregator and a dedicated module for each learning service. First, each FL service module performs its CPU allocation, bandwidth allocation, and hyper-learning rate decision, then sends the CPU and bandwidth allocation decisions to the Management Aggregator, which runs the aggregation process for the consensus variable, the primal residuals, and the dual variables. This process is performed iteratively until the convergence condition is achieved. Then, the resource allocation solutions and the hyper-learning rate decisions are sent to the UEs that participate in the learning process. The whole deployment process of the algorithm is illustrated in Fig. 4. In the MEC server, each service can run its own virtual instance to manage resource allocation and the learning aggregator. Although the decentralized approach requires many iterations to converge, it provides a more flexible and scalable approach for resource allocation without revealing the learning service information (i.e., dataset information, the learning weight parameters exchanged between UEs and the MEC server, and the number of CPU cycles for each UE to execute one sample of data). The decisions can be made by each service and shared with FLO only.







Note that the chosen parameters in the augmented and proximal terms could affect the convergence performance of the decentralized approach. Furthermore, for a general global convex problem with additional regularity conditions, JP-ADMM obtains an o(1/k) convergence rate according to Theorem 2.2 in [23], where k denotes the number of iterations; specifically, the gap between consecutive primal solutions vanishes at this rate, as defined in [23]. Accordingly, the gaps between updated primal variables become gradually smaller throughout the iterative updates and the solutions converge toward the optimal ones. Conventionally, for a global convex problem, JP-ADMM converges faster than the dual decomposition method [23] but still requires a higher number of iterations compared to Gauss-Seidel ADMM, as shown in the simulation results of [32]. For the multi-convex case, miADMM guarantees global convergence to a Nash point (i.e., a stationary point) [24]. In the next section, we provide numerical results for the convergence performance and the efficiency of the proposed algorithms.
V Performance Evaluation
V-A Numerical Settings
In this work, we assume that three learning services are deployed at the edge networks. Moreover, 50 heterogeneous UEs are positioned within the coverage area of the base station to participate in the federated learning system. Similar to our prior works [20, 10], we consider that the channel gain between the base station and a UE follows an exponential distribution whose mean is determined by the reference distance between the BS and the UEs, and the actual distances between the UEs and the base station are generated uniformly at random. Furthermore, the uplink system bandwidth is shared amongst the UEs, and the noise power spectral density and the transmit powers of the UEs and the BS are set accordingly. In this work, we assume that the sizes of the uploaded local model and the downloaded global model are the same and fixed for each service.
For the local training model at the UEs, we first set the training data size of the UEs in each learning service following a uniform distribution. The maximum computation capacity (i.e., CPU frequency) at each UE is also uniformly distributed. The number of CPU cycles required to train one bit of data at a UE is fixed for each learning service, as is the minimum required CPU frequency for each service. We consider that the effective capacitance coefficient is the same for all UEs, and the trade-off parameter between time and energy is fixed. For the federated learning parameters, we set the relative accuracy of the local problem at the UEs for each service accordingly; this setting reflects the CPU frequency requirements and model sizes above and correspondingly defines the required number of local iterations for each learning service. Finally, for the algorithm setting, we set the convergence thresholds of the two algorithms as well as the remaining parameters of the decentralized algorithm.






V-B Numerical Results
We first illustrate one realization of the random locations and local dataset sizes of the UEs, as shown in Fig. 5. For this realization, we demonstrate the convergence of the total cost, primal residual, CPU allocation, and bandwidth allocation for the two solution approaches in Fig. 6. Accordingly, the centralized approach requires only one iteration to obtain the optimal solution, while the decentralized approach performs an iterative update process with many iterations until the convergence condition is achieved, i.e., the changes of the solutions fall below small thresholds. Starting from the same initial points, the centralized approach converges within one iteration while the decentralized approach requires a number of iterations to achieve the same solution in Fig. 6(a). Even though the decentralized algorithm needs only 35 iterations to obtain a cost very close to the optimal one, the allocated CPU frequency in Fig. 6(d) needs more iterations to reach the same optimal solution as the solver or the centralized algorithm. Different from the slow convergence of the CPU frequency, the bandwidth solutions converge quickly after 3 iterations, and so does the primal residual in equation (36). In practical usage, we can stop when the primal residual starts converging to zero after a few iterations, as illustrated in Figs. 6(b) and 6(c). As a result, these solutions are still guaranteed to be feasible and attain a cost very close to the optimal one. In light of this observation, we later test the convergence performance of the integrated early stopping strategy in the decentralized algorithm.
Now, we discuss the characteristics of the optimal solution in Fig. 6. As an example, the service having the smallest local model parameter size to update has the highest hyper-learning rate and correspondingly takes more global rounds and fewer local iterations. Thereby, in order to proceed with the local learning and perform more global rounds quickly, Service 1 occupies most of the CPU frequency of the UEs. Unlike Service 1, Service 3 has the lowest hyper-learning rate, takes the least CPU frequency, and performs fewer global rounds. As the learning scheme in this work follows synchronous federated learning, all UEs have to complete the local update and send the local weight parameters to the server before the model is updated at the MEC server. Thus, the users who are far from the base station and have more training data to process require a longer time to train and then upload the local model. Accordingly, these devices receive a larger fraction of the uplink system bandwidth to upload their local parameters to the server. As shown in Fig. 5, UE48 is one of the furthest UEs from the BS and has a larger amount of local data. Therefore, the largest fraction of the uplink system bandwidth is allocated to UE48, as shown in Fig. 6(f).
Moreover, we compare the optimal learning time, energy consumption, and total cost with two heuristic approaches, as shown in Fig. 7. In the first heuristic approach, the CPU frequency of the UEs is equally allocated to the learning services, and the uplink system bandwidth is equally allocated to the UEs to upload the local learning weight parameters to the server. Then, FLO decides only the optimal hyper-learning rate for each learning service by solving the SUB1-c problem. The second heuristic approach proportionally allocates the local CPU based on the local data size of each service at the UEs and allocates bandwidth based on the transmission capacity of each UE. From the figures, we observe that the cost, including the learning time and energy consumption of the UEs for all services, is considerably reduced compared to that of the Heuristic 1 and Heuristic 2 strategies. Among the three learning services, Service 1 has the lowest CPU requirement and the smallest size of local model parameters. Therefore, it needs the lowest learning time and energy consumption as well. However, the minimum local performance at the UEs of Service 1 compels more global rounds and lower values of the hyper-learning rate than those of Services 2 and 3.
Furthermore, we vary the trade-off parameter to study the effects of the conflicting goals of minimizing the time cost and the energy cost of each FL service. As the trade-off parameter increases, MS-FEDL gives higher priority to minimizing the running time than the energy consumption of the UEs. Consequently, Fig. 8(c) shows the decreasing curve of the running time, while the services require more energy consumption from the UEs. Moreover, the priority of services in terms of running time can be controlled by varying the trade-off parameter of each service. Accordingly, a service with higher priority can be assigned a higher value of the trade-off parameter to reduce its learning time when solving the MS-FEDL problem. In Fig. 8, we only increase the trade-off parameter of Service 3 to demonstrate its higher priority over the other services. As a result, the total running time of Service 3 decreases while the total times of Services 1 and 2 increase. On the other hand, the energy consumption of the UEs serving Service 3 increases while the energy consumption of the UEs serving the other services slightly decreases.
To boost the convergence speed, we can apply the early stopping strategy for the decentralized algorithm, namely the Decentralized-ES algorithm, by using the convergence condition on the primal residuals instead of the primal variables, as discussed above. In Fig. 9, we run multiple realizations of the random locations and local dataset sizes of the UEs while keeping the other settings to validate the convergence speed of the decentralized approach using the Jacobi-Proximal scheme for multi-convex ADMM, the early stopping version of the decentralized algorithm, and the original multi-convex ADMM algorithm using the Gauss-Seidel scheme. Accordingly, we compare the median numbers of iterations required for convergence by the decentralized, Decentralized-ES, and miADMM algorithms. The decentralized algorithm requires a higher number of iterations on average than the original miADMM algorithm. However, the early stopping strategy helps to speed up the convergence and obtain nearly the optimal cost; statistically, there is only a marginal difference from the optimal cost. Note that the original miADMM does not fully support parallel operation in the primal problem and requires the learning services to operate cyclically based on the Gauss-Seidel update scheme. Besides, even though the proposed decentralized algorithms require a higher number of iterations to converge, they provide privacy-preserving and flexible approaches to independently control the learning process and resource allocation for each learning service. Note that, if a stringent time constraint is imposed, the centralized algorithm is applicable because it can provide the solution within two iterations.

VI Conclusion
In this paper, we analyzed a multi-service federated learning scheme that is managed by a federated learning orchestrator to provide the optimal computation and communication resources and to control the learning process. We first formulated the optimization model for computation and communication resource allocation and the hyper-learning rate decision among learning services regarding the learning time and the energy consumption of UEs. We then decomposed the proposed multi-convex problem into three convex subproblems and solved them alternately by using the block coordinate descent algorithm in a centralized manner. Besides, we developed a decentralized algorithm to preserve the privacy of each learning service without revealing the learning service information (i.e., dataset information, the local update information exchanged between UEs and the MEC server, and the number of CPU cycles for each UE to execute one sample of data) to FLO. The simulation results demonstrate the superior convergence performance of the centralized algorithm and the efficiency of the proposed approach compared to the heuristic strategies. Furthermore, our experiments show that the early stopping strategy can boost the convergence speed of the decentralized algorithm with only a marginally higher cost than the optimal solution.
To scale up our design, different MEC servers in different cells can independently allocate their resources. Recent works have analyzed hierarchical federated learning frameworks [33, 34] and game-theoretic formulations, which provide promising directions to extend this work regarding the scalability issue. Incorporating the impact of wireless dynamics and packet losses, as in [35], on the bounds of global iterations in the FEDL algorithm and the MS-FEDL problem is also important for employing the FL scheme in realistic communication scenarios. We leave such extended analysis of the proposed approaches for future work.
References
- [1] J. Park, S. Samarakoon, M. Bennis, and M. Debbah, “Wireless Network Intelligence at the Edge,” Proceedings of the IEEE, vol. 107, no. 11, pp. 2204–2239, Nov 2019.
- [2] “CrowdTracker: Optimized Urban Moving Object Tracking Using Mobile Crowd Sensing,” IEEE Internet of Things Journal, vol. 5, no. 5, pp. 3452–3463, Oct 2018.
- [3] “Free Community-based GPS, Maps & Traffic Navigation App — Waze,” https://www.waze.com/.
- [4] A. Mathur, N. D. Lane, S. Bhattacharya, A. Boran, C. Forlivesi, and F. Kawsar, “DeepEye: Resource efficient local execution of multiple deep vision models using wearable commodity hardware,” in MobiSys 2017 - Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services. Association for Computing Machinery, Inc, Jun 2017, pp. 68–81.
- [5] B. Fang, X. Zeng, and M. Zhang, “NestDNN: Resource-aware multi-tenant on-device deep learning for continuous mobile vision,” in Proceedings of the Annual International Conference on Mobile Computing and Networking, MOBICOM. Association for Computing Machinery, Oct 2018, pp. 115–127.
- [6] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-efficient learning of deep networks from decentralized data,” in Artificial Intelligence and Statistics, 2017, pp. 1273–1282.
- [7] “Federated Learning: Collaborative Machine Learning without Centralized Training Data,” http://ai.googleblog.com/2017/04/federated-learning-collaborative.html.
- [8] “We are making on-device AI ubiquitous,” https://www.qualcomm.com/news/onq/2017/08/16/we-are-making-device-ai-ubiquitous, Aug. 2017.
- [9] W. Y. B. Lim, N. C. Luong, D. T. Hoang, Y. Jiao, Y.-C. Liang, Q. Yang, D. Niyato, and C. Miao, “Federated learning in mobile edge networks: A comprehensive survey,” IEEE Communications Surveys & Tutorials, 2020.
- [10] N. H. Tran, W. Bao, A. Zomaya, M. N. Nguyen, and C. S. Hong, “Federated Learning over Wireless Networks: Optimization Model Design and Analysis,” in IEEE INFOCOM 2019, Paris, France, Apr. 2019.
- [11] S. Wang, T. Tuor, T. Salonidis, K. K. Leung, C. Makaya, T. He, and K. Chan, “When Edge Meets Learning: Adaptive Control for Resource-Constrained Distributed Machine Learning,” in IEEE INFOCOM 2018.
- [12] C. Zhang, P. Patras, and H. Haddadi, “Deep Learning in Mobile and Wireless Networking: A Survey,” IEEE Communications Surveys and Tutorials, vol. 21, no. 3, pp. 2224–2287, 2019.
- [13] C. Ma, J. Konečný, M. Jaggi, V. Smith, M. I. Jordan, P. Richtárik, and M. Takáč, “Distributed optimization with arbitrary local solvers,” Optimization Methods and Software, vol. 32, no. 4, pp. 813–848, Jul. 2017.
- [14] S. J. Reddi, J. Konečnỳ, P. Richtárik, B. Póczós, and A. Smola, “Aide: Fast and communication efficient distributed optimization,” arXiv:1608.06879, 2016.
- [15] J. Konečnỳ, H. B. McMahan, D. Ramage, and P. Richtárik, “Federated optimization: Distributed machine learning for on-device intelligence,” arXiv:1610.02527, 2016.
- [16] J. Konečný, H. B. McMahan, F. X. Yu, P. Richtarik, A. T. Suresh, and D. Bacon, “Federated learning: Strategies for improving communication efficiency,” in NIPS Workshop on Private Multi-Party Machine Learning, 2016. [Online]. Available: https://arxiv.org/abs/1610.05492
- [17] J. Konečný, Z. Qu, and P. Richtárik, “Semi-stochastic coordinate descent,” Optimization Methods and Software, vol. 32, no. 5, pp. 993–1005, Sep. 2017.
- [18] T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith, “Federated optimization in heterogeneous networks,” in Proceedings of Machine Learning and Systems 2020, 2020, pp. 429–450.
- [19] T. Li, A. K. Sahu, A. Talwalkar, and V. Smith, “Federated learning: Challenges, methods, and future directions,” IEEE Signal Processing Magazine, vol. 37, no. 3, pp. 50–60, 2020.
- [20] C. Dinh, N. H. Tran, M. N. H. Nguyen, C. S. Hong, W. Bao, A. Y. Zomaya, and V. Gramoli, “Federated Learning over Wireless Networks: Convergence Analysis and Resource Allocation,” arXiv:1910.13067, 2019.
- [21] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh, “Clustering with Bregman Divergences,” Journal of Machine Learning Research, vol. 6, pp. 1705–1749, Dec. 2005.
- [22] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers,” Foundations and Trends® in Machine Learning, vol. 3, no. 1, pp. 1–122, 2010.
- [23] W. Deng, M.-J. Lai, Z. Peng, and W. Yin, “Parallel Multi-Block ADMM with o(1/k) Convergence,” Journal of Scientific Computing, vol. 71, no. 2, pp. 712–736, May 2017.
- [24] J. Wang, L. Zhao, and L. Wu, “Multi-convex inequality-constrained alternating direction method of multipliers,” arXiv:1902.10882, 2019.
- [25] S. Wang, T. Tuor, T. Salonidis, K. K. Leung, C. Makaya, T. He, and K. Chan, “Adaptive Federated Learning in Resource Constrained Edge Computing Systems,” IEEE Journal on Selected Areas in Communications, vol. 37, no. 6, pp. 1205–1221, Jun. 2019.
- [26] V. Smith, S. Forte, C. Ma, M. Takáč, M. I. Jordan, and M. Jaggi, “CoCoA: A General Framework for Communication-Efficient Distributed Optimization,” Journal of Machine Learning Research, vol. 18, no. 230, pp. 1–49, 2018.
- [27] Y. Nesterov, Lectures on Convex Optimization. Springer International Publishing, 2018, vol. 137.
- [28] A. P. Miettinen and J. K. Nurminen, “Energy Efficiency of Mobile Clients in Cloud Computing,” in USENIX HotCloud’10, Berkeley, CA, USA, 2010, pp. 4–4.
- [29] T. D. Burd and R. W. Brodersen, “Processor Design for Portable Systems,” Journal of VLSI Signal Processing System, vol. 13, no. 2-3, pp. 203–221, Aug. 1996.
- [30] Y. Xu and W. Yin, “A Block Coordinate Descent Method for Regularized Multiconvex Optimization with Applications to Nonnegative Tensor Factorization and Completion,” SIAM Journal on Imaging Sciences, vol. 6, no. 3, pp. 1758–1789, Jan 2013.
- [31] A. Wächter and L. T. Biegler, “On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming,” Mathematical Programming, vol. 106, no. 1, pp. 25–57, Mar 2006.
- [32] T. Lin, S. Ma, and S. Zhang, “On the Global Linear Convergence of the ADMM with MultiBlock Variables,” SIAM Journal on Optimization, vol. 25, no. 3, pp. 1478–1497, Jan 2015.
- [33] M. S. H. Abad, E. Ozfatura, D. Gunduz, and O. Ercetin, “Hierarchical federated learning across heterogeneous cellular networks,” in ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020, pp. 8866–8870.
- [34] L. Liu, J. Zhang, S. Song, and K. B. Letaief, “Client-edge-cloud hierarchical federated learning,” in ICC 2020-2020 IEEE International Conference on Communications (ICC). IEEE, 2020, pp. 1–6.
- [35] M. Chen, Z. Yang, W. Saad, C. Yin, H. V. Poor, and S. Cui, “A joint learning and communications framework for federated learning over wireless networks,” arXiv preprint arXiv:1909.07972, 2019.
Appendix A: Proof for Lemma 2
In this proof, we omit the service subscript for brevity. According to equation (15), the total cost is expressed as a function of the hyper-learning rate, where the relative accuracy is defined in (5) and the smoothness and strong-convexity constants are defined in Assumption 1. Substituting these quantities and simplifying, we obtain the expression used below.
We first verify the convexity of the SUB1-d problem and then derive a closed-form solution. The problem is strictly convex, which can be validated through the first and second derivatives of the objective function as follows
(39)
(40)
Thus, the problem is strictly convex and has a unique solution.
(41)
Note that, as in our prior work, a sufficiently small value of the relative accuracy guarantees the feasibility of the problem and the closed-form solution (41). This happens when many local iterations are required to fit the local models, which corresponds to small values of the relative accuracy.
Appendix B: Convexity Proof for the SUB2-c Problem
First, let us introduce the Lagrangian function of the SUB2-c problem as follows
(42)
where the multipliers associated with the constraints are Lagrange multipliers. The first-order partial derivative of the Lagrangian function in (42) is as follows
Then, the second-order derivatives are as follows
(43)
(44)
The Hessian matrix of the Lagrangian function is positive semi-definite. Therefore, we can conclude that the SUB2-c problem is a convex problem.
Appendix C: Convexity Proof for the SUB3-c Problem
The Lagrangian function of the SUB3-c problem of bandwidth allocation is as follows
(45)
where the multipliers associated with the constraints are Lagrange multipliers. Then, the first-order derivative of the Lagrangian function in (45) is as follows
(46)
The second-order derivative is as follows:
(47)
(48)
Therefore, we can conclude that the SUB3-c problem is a convex problem.