User-Centric Cooperative MEC Service Offloading
Abstract
Mobile edge computing provides users with a cloud environment close to the edge of the wireless network, supporting the computing intensive applications that have low latency requirements. The combination of offloading with the wireless communication brings new challenges. This paper investigates the service caching problem during the long-term service offloading in the user-centric wireless network. To meet the time-varying service demands of a typical user, a cooperative service caching strategy in the unit of the base station (BS) cluster is proposed. We formulate the caching problem as a time-averaged completion delay minimization problem and transform it into time-decoupled instantaneous problems with a virtual caching cost queue at first. Then we propose a distributed algorithm which is based on the consensus-sharing alternating direction method of multipliers to solve each instantaneous problem. The simulations validate that the proposed online distributed service caching algorithm can achieve the optimal time-averaged completion delay of offloading tasks with the smallest caching cost in the unit of a BS cluster.
Index Terms:
User-centric network, MEC service offloading, online distributed caching.I Introduction
With more intelligent and big-data applications developed on user terminals, the resources required by the computing intensive tasks from these applications are getting higher. For example, the popular virtual reality applications consume a lot of computing and storage resources to construct realistic images. However, the user terminals with extremely limited resources are helpless. Meanwhile, offloading these tasks to the remote cloud incurs non-negligible transmission delay through the already-congested backbone network. Hopefully, mobile edge computing (MEC) [1, 2] deploys small servers at the edge of the network, which provides a cloud environment in proximity of the user. Then the computation-intensive tasks can be offloaded to MEC with a low response latency.
[3, 4, 5, 6] have made some contributions to the MEC task offloading. MEC-assisted vehicle platooning is considered in [4], where the authors minimize the average total energy consumption under the constraint of meeting the deadlines of tasks, and propose a Lyapunov optimization algorithm to solve it. Amit Samanta et al. in [6] try to solve the service offloading problem considering tasks that have different delay requirements. Specifically, service offloading means prefetching the dependent databases and libraries, i.e., services, to support the computing-intensive tasks at first and then performing computing. The caching process during service offloading involves the joint allocation of both computing and storage resources, which brings challenges that has not been solved perfectly.
Furthermore, the wireless network plays a significant role in many end-to-end services [7, 8, 9, 10], which can also be performance bottleneck in the offloading process. But it is ignored and not well studied in existing works. Traditionally, multiple users associate with one base station (BS) and offload their tasks to the server deployed in the associated BS. The competition for limited wireless communication resources causes a non-negligible delay compared with the processing delay when the load on servers is light. Furthermore, users located at the edge of the conventional cells experience poor signal-to-interference-plus-noise ratio (SINR). With the evolution of 5G ultra-dense network, the user-centric architecture [11], [12] which replaces the traditional network-centric architecture, enables each user to be served by a proprietary virtual network. To implement this, multiple BSs form a cluster to serve a user cooperatively [13], [14]. The data sent by the user will be jointly decoded by the BS cluster. Combined with interference cancellation technology, the uplink data rate can be increased. Consequently, the wireless communication process will not be the bottleneck of delay performance.
Nevertheless, service offloading in the user-centric network brings new challenges. In order to meet the time-varying offloading demands of the typical user in a long time span and make full use of the limited resources on the edge servers, BSs in the cluster that serves a typical user should conduct service caching cooperatively. Although there exist some studies on the cooperative service caching problem [15], [16], most of them discuss the caching strategies based on the popularity of multiple users, where the individual demands are not considered. Combining the benefits of the user-centric network for offloading process, we study the service caching problem in the user-centric wireless network to provide customized offloading for each user. In a word, this paper investigates the cooperative caching problem from a long-term perspective for a typical user. The main contributions are summarized as follows:
1) We discuss the service offloading in the user-centric network. The service caching strategy is developed by minimizing the time-averaged completion delay of the offloading tasks in a long time span, for a typical user who is served by a BS cluster cooperatively.
2) In the absence of future information, a Lyapunov optimization based online algorithm is designed, which transforms the long-term service caching problem into multiple instantaneous optimization problems. Further to implement the cooperative service caching in each time slot, a distributed consensus-sharing algorithm under the alternating direction method of multipliers (ADMM) framework is proposed.
3) We carry out extensive simulations to validate the optimality and the superior convergence performance of the proposed algorithm. The results demonstrate that our algorithm outperforms the instantaneous optimal caching strategy and iteration based algorithm in terms of different metrics.
The rest of this paper is organized as follows. Section II describes the system model. The problem is formulated and analyzed in section III. Section IV proposes the algorithm to solve the problem. The simulations are presented in Section V. Finally, this paper is concluded in section VI.
II System Model
We consider a wireless network with BSs, denoted by set . Each BS is equipped with antennas and endowed with cloud resources, i.e., processing and storage capabilities. Constrained by the deployment overhead, the BS () has a CPU with maximum processing capacity (CPU cycles per Hz) and a storage with maximum space .
There are active users in this network, denoted by the set . Each user is served by a cluster of BSs. BSs in the cluster receive user data and decode it jointly by exchanging channel state information through the backhaul link. We assume that each user generates the computing tasks that request for one type of service each time. All types of the services are stored at remote cloud, denoted by the set . The service () has a size of , which will take up the storage space on edge servers of size if it is cached. Furthermore, it has a computing requirement of to process per unit of data. Fig. 1 describes the main elements of the system vividly. Each user is served by 3 BSs, and overlapping between users is allowed for maximizing the utility of resources.
To describe the dynamic of the communication system and users’ requests, we separate the long time span into discrete time slots (), and assume that the channel state and users’ requests in each time slot remain constant. The details of the system model are illustrated next.

II-A User-centric Communication model
Firstly, the user-centric wireless network model requires a clear statement of BS clustering. We use a binary variable which indicates that the BS belongs to the cluster of the user at time with value 1, otherwise 0. The BS cluster of the user is denoted by and the users served by is denoted by . Taking user as the typical user, the users belonging to are called intra-cluster users, while the remaining users are called inter-cluster users. With the clustering model clarified, the signal received by BS is
(1) | ||||
where is the signal power of user , is the complex channel coefficient between BS and user . is the complex symbol transmitted by user at time and is the white Gaussian noise with variance and zero mean. All the signals received by the BS can be divided into three parts: the first part is the useful signal of user , the second part consists of the signals sent by intra-cluster users, i.e., intra-cluster interference, while the last part is the inter-cluster interference.
There are multiple technologies that can mitigate the interference nowadays, e.g., non-orthogonal multiple access [17]. In this paper, to reflect the gain of BS clustering under the user-centric mode, beamforming (or precoding) is an effective interference cancellation technology. By designing the beamforming vector, the intra-cluster interference can be eliminated at the receiving BS cluster [13].The projection transformation zero-forcing beamformer for user is calculated as follows
(2) |
where and . Then we have the uplink SINR of user as
(3) |
The uplink data rate of user is .
II-B Offloading Task Model
The task generated by user at time is denoted by , which is characterized by representing the size of data and workload separatively. The workload refers to the computing resources required to process each unit of data, e.g., CPU cycles per bit, which can be acquired from the profile of the task. To clarify the type of service that the task requests for, we use a binary variable to indicate that the task generated by user at time requests for service with value 1, otherwise 0. We assume that each user uploads a computing task that requests for one type of service at each time slot. A task which consists of multiple types of services will be considered as several independent tasks generated by different users. According to the above assumption, we have the constraint .
II-C Service Offloading Model
The process of service offloading includes uploading tasks, processing tasks, and returning the computing results. Above all, the BSs have to prefetch the services from remote cloud to support the processing according to a caching strategy at the beginning of each time slot. Then the tasks arrive in each time slot.
Let be the caching strategy for service in BS at time . It indicates caching () or removing () by the sign, and the absolute value represent the corresponding probability. represents the caching strategy of BS . denote the caching strategy of the BS cluster.
We use to denote the probability that service has been cached on BS until time . Services that cached on servers will occupy resources. Consequently, limited resources on the edge servers limit the number of services that can be cached, which is represented by the constraints and . Services can be cached or removed by a certain probability. Consequently, the next caching status is only determined by the current caching status and decision, which means that the probabilistic service caching process is a Markov decision process. The state transition function in our service caching model is
(4) | ||||
where the means . The last equation comes from the independence between and .
Obtaining services from the cloud costs money, e.g., purchasing or transmitting. We assume that these costs differentiated by service types are proportional to the size of services. Let denote the cost for prefetching service , where is the cost coefficient of per unit of service data, then the expected prefetching cost of BS is calculated as
In each time slot, the user sends the offloading task with the size of , which leads to an uplink communication delay
(5) |
The data will be received by all the BSs in the cluster and decoded jointly, and the task dispatching strategy adopted in this paper is that the BS in the cluster which has the highest probability cached the requested service will process the task. Other task dispatching strategy can be integrated, but it is not our focus here. Let denote the service that requests for, and denotes the BS that will process . The processing delay on the edge server is
(6) |
Meanwhile, the task will be uploaded to the remote cloud with probability . Transmitting through the backbone network incurs a large delay
(7) |
where is the data rate of the backbone network which is set as a small constant. We assume that the resources on the remote cloud is ample, thus the processing delay on the remote cloud is negligible. Then the delay that offloading to the remote cloud mainly comes from the transmission.
Generally, the results of calculations are very small, so we ignore the delay caused by returning the results. Summing up all the delay generated during the offloading process, the completion delay of is
(8) | ||||
III Problem Formulation
After clarifying the system model, the long-term service caching problem is formulated as a minimization of the average task completion delay of the whole offloading process for multiple time-slots in a long time span, which is illustrated by problem P.
(9a) | ||||
(9b) | ||||
(9c) | ||||
(9d) | ||||
(9e) |
Among where, is a hyperparameter deciding the cost threshold of the service provider, we hope for a small caching cost to guarantee the response latency during the service offloading process. Constraint (9b) is the average service caching cost for multiple time slots. (9c) and (9d) are the expected storage resources and computing resources constraints on the edge servers, and (9e) indicates the domain of the variables.
Analyzing the above problem, the service caching strategy is the decision variable of the user-centric service caching problem. From (8) we can tell that the delay of wireless communication is independent of the caching process, while the processing delay is affected by the caching strategy of the BS cluster, which means that the service caching strategy depends on the clustering. Consequently, we solve the service caching problem based on the optimal BS clustering [18]. Naturally, our optimization goal is to minimize the averaged processing delay
The challenges of solving this problem come from the unknown dynamic requests of the user and the huge decision space of the BS cluster. For the unknown dynamic requests, the prediction of requests in next several time slots is generally more accurate, but it is difficult to predict the long-term requests. An online algorithm is necessary in the absence of future information. Troubled by the huge decision space, a distributed solution is more effective. Taking the above two points into consideration, we propose an online distributed service caching algorithm which is elaborated next.
IV Algorithms: On-ConShAD
IV-A Online Service Caching
Assuming that we have acquired the optimal clustering strategy for the typical user at the beginning of each time slot, then we solve the service caching problem (9).
The long-term cluster-based service caching problem hopes to improve the diversity of cached services through the collaborative caching between BSs, to handle the phenomenon that services cached may not be requested immediately but repeatedly in the future. To achieve this, the caching cost should be as low as possible, which is reflected in the form of constraint (9b) where the caching variables of different time slots are coupled. Based on Lyapunov optimization, we transform the long-term constraint (9b) into a queue stability problem. Specifically, we construct a virtual caching cost queue with the assumption that , indicating the deviation of current caching cost, and its dynamic evolves as follows
(10) |
where , and .
The Lyapunov function is defined as and the Lyapunov drift is . According to the above definitions, we have
(11) | ||||
where , which can be proved that . Consequently, the long-term constrained problem (9) is transformed into an instantaneous problem of minimizing an upper bound on the drift-plus-penalty expression under Lyapunov optimization framework
(12a) | ||||
(12b) |
where is a non-negative weight that is chosen as desired to trade off between the completion delay of the users and the caching cost of the edge computing service providers.
IV-B Distributed Parallel Caching: An ADMM Approach
As the dynamics of BS clustering, the service caching problem of one user involves the strategies of the whole BS set, which has the size of . The large-scale problem calls for a decentralized algorithm. According to [19], ADMM is a widely used algorithm for solving large-scale optimization problems. Furthermore, in order to improve the diversity of cached services with a small caching cost from the perspective of BS cluster, BSs in the same cluster should cooperate with each other. To handle it, we introduce a cooperative service caching algorithm based on the consensus-sharing framework of ADMM.
To solve the online caching problem (12) using ADMM, we analysis whether the optimization target (12a) is decomposable at first. On one hand, the drift item can be calculated independently between each BS and then get together. It can be rewritten as the sum of the decomposed items (the time index is omitted here)
(13) | ||||
where is the caching strategy of all the BSs in the system. is the service cost vector and is the caching state vector of BS , means that the corresponding elements of two vectors or matrices are multiplied.
On the other hand, the penalty item in (12a), i.e., the expected processing delay of , is determined by the probability . Hence, the penalty depends on the caching result of the whole BS cluster. This dependency prevents the penalty item from being decomposed directly into multiple agents. Therefore, we design a sharing optimization goal according to this to promote the cooperation in the consensus-sharing framework.
The penalty item in (12a) can be expressed as follows
(14) |
where is the service type indicating vector of task . According to the analysis of processing delay, we design the shared minimization goal in the form of a function related to the maximum of all the local caching variables, which is elaborated as follows (the time index is omitted here and later)
(15) |
where is a nonlinear function of the local caching strategy of BS .
With the above transformations, we can reformulate (12) as a consensus-sharing problem by introducing a consensus variable set with constraint .
(16a) | ||||
(16b) | ||||
(16c) | ||||
(16d) |
where is the size vector of services. The augmented Lagrange function is
(17) | ||||
with dual variables .
By defining the residual , the variables are updated iteratively by solving the following scaled optimization problems
(18a) | ||||
(18b) | ||||
(18c) |
where . The minimization of (18a) can be solved completely decentralized, i.e., each BS solves the local minimization problem (18a) under the constraints (16c), (16d) in parallel.
Finally, the Online Consensus and Sharing ADMM based Distributed algorithm (On-ConShAD) which solves the problem (9) is performed as Algorithm. 1.
V Performance Evaluation


In this section, we evaluate the performance of On-ConShAD by some simulations. We set 10 BSs and 6 types of services. Each of the BS is equipped with 2 antennas and is endowed with a resources-limited server. Each service has specific configurations including size, cache overhead factor and resources requirement. We introduce interferences by setting 4 communication users.
First, we validate the superiority of serving user with BS cluster. In Fig. 2(a), the size 1 represents the case of serving user with single BS. On one hand, the decreasing circle line shows that serving user with BS cluster has a better delay performance. It is worth noting that the trend of the total delay and the uplink delay is basically the same, which means that the contribution of the BS cluster is mainly reflected in the uplink communication process. The joint decoding and interference elimination in the clustering mode lead to a higher uplink data rate, thus the uplink communication delay is smaller. On the other hand, the caching cost of BS cluster keeps decreasing until the size gets up to 4 while it increases when the size is larger. This reveals that the BSs in a larger cluster are more likely to perform redundant caching because of available resources, which aims at ensuring a minimum processing delay.
Further to explore the influence of cluster’s size on our algorithm, we test the convergence performance of distributed consensus-sharing ADMM. As shown in Fig. 2(b), the convergence always takes only several iterations as the size changes, which reveals that the size of the BS cluster has slight effect on the convergence performance of the distributed algorithm, indicating that the algorithm is scalable and can be applied to large-scale problems.




Second, although we take the advantage of the cooperations of BS clusters, we did not discuss the division of BS clusters. The proposed algorithm is based on the assumption that the BS cluster is divided optimally and dynamically. Therefore, we have to figure the degree how much the clusters’ division impacts on the caching performance. In Fig. 3, the circle line represents the dynamic division while the diamond line represents the fixed division. The size of cluster is set as 3. The delay performance is unaffected (Fig. 3(a)). However, the caching performance degraded when we divide the cluster dynamically (Fig. 3(b)). Though the dynamic clustering achieves optimal uplink performance, it interrupts the long-term nature of the caching status. The services cached in the history are no longer useful due to re-clustering, which results in redundant caching.
Further to verify the effectiveness and optimality of the proposed On-ConShAD, we set the size of BS cluster as 3 and compare it with two algorithms. Single BS: This is the baseline of our algorithm, in which the typical user is served by a single BS. At each time slot, the typical user chooses one BS following the rule of achieving the highest uplink data rate, and offloads to that BS. By minimizing the weighted sum of the completion delay and the caching cost on the single BS to meet user’s requirement in each time slot, the solution is instantaneous optimal. Gibbs: This is also an iteration-based algorithm which adopts the variation of Gibbs Sampling method [16]. It makes binary caching decisions (cache with probability 1 or 0) and fractional offloading decisions (percentage of tasks computed at the edge).
Fig. 4(a) shows the time-averaged completion delay of these three algorithms. From which we can tell that the On-ConShAD performs as excellent as the instantaneous delay-optimal single BS performs. The completion delay of the tasks obtained by On-ConShAD is a little bit smaller, which has been validated in Fig. 2(a). Fig. 4(b) illustrates the effectiveness of the online strategy, where the minimization of the drift of the virtual caching cost queue limits the long-term caching cost to a small value. At first, On-ConShAD and single BS have equivalent caching cost. As time grows, single BS performs frequent replacement of services to minimize the completion delay, while On-ConShAD is gradually getting lower by minimizing the cumulative drift of the virtual caching cost queue. The poor performance of Gibbs is due to the certain caching strategy and the slow convergence of gibbs sampling. The convergence performance in [16] shows that it takes hundreds of iterations to get convergence while ADMM only takes a few iterations (2(b)).
VI Conclusion
In this paper, we discussed the service offloading problem in the user-centric wireless network, where each user is served by several BSs cooperatively. We mainly studied the cooperative service caching strategy during the offloading process from a long-term perspective. An online distributed algorithm is proposed to solve the problem. At last, the simulations show the benefit of BS cluster and validate the effectiveness of the proposed algorithm. Nevertheless, there are still some limitations that can be completed in the future, e.g., the BS clustering problem should be optimized together with the caching problem thus acquiring the joint optimal offloading solutions.
Acknowledgment
This work was supported in part by the National Science Foundation of China (NSFC) (Grants 61771445, 61631017 and U19B2044).
References
- [1] N. Abbas, Y. Zhang, A. Taherkordi, and T. Skeie, “Mobile edge computing: A survey,” IEEE Internet of Things Journal, vol. 5, no. 1, pp. 450–465, 2017.
- [2] Q. Pham, F. Fang, V. N. Ha, M. J. Piran, M. Le, L. B. Le, W. Hwang, and Z. Ding, “A survey of multi-access edge computing in 5g and beyond: Fundamentals, technology integration, and state-of-the-art,” IEEE Access, vol. 8, pp. 116 974–117 017, 2020.
- [3] Y. Zhu, Y. Hu, and A. Schmeink, “Delay minimization offloading for interdependent tasks in energy-aware cooperative mec networks,” in 2019 IEEE Wireless Communications and Networking Conference (WCNC), April 2019, pp. 1–6.
- [4] Y. Hu, T. Cui, X. Huang, and Q. Chen, “Task offloading based on lyapunov optimization for mec-assisted platooning,” in 2019 11th International Conference on Wireless Communications and Signal Processing (WCSP), 2019, pp. 1–5.
- [5] Z. Qin, S. Leng, J. Zhou, and S. Mao, “Collaborative edge computing and caching in vehicular networks,” in 2020 IEEE Wireless Communications and Networking Conference (WCNC), May 2020, pp. 1–6.
- [6] A. Samanta and Z. Chang, “Adaptive service offloading for revenue maximization in mobile edge computing with delay-constraint,” IEEE Internet of Things Journal, vol. 6, no. 2, pp. 3864–3872, 2019.
- [7] H. Lu, M. Zhang, Y. Gui, and J. Liu, “Qoe-driven multi-user video transmission over sm-noma integrated systems,” IEEE Journal on Selected Areas in Communications, vol. 37, no. 9, pp. 2102–2116, 2019.
- [8] Y. Fu, C.-A. Jiang, Y. Qin, and L. Yin, “Secure routing and transmission scheme for space-ocean broadband wireless network,” Science China Information Sciences, vol. 63, 04 2020.
- [9] L. Gong and Z. Zhu, “Virtual optical network embedding (vone) over elastic optical networks,” Journal of Lightwave Technology, vol. 32, no. 3, pp. 450–460, 2014.
- [10] P. Lu, L. Zhang, X. Liu, J. Yao, and Z. Zhu, “Highly efficient data migration and backup for big data applications in elastic optical inter-data-center networks,” IEEE Network, vol. 29, no. 5, pp. 36–42, 2015.
- [11] S. Chen, F. Qin, B. Hu, X. Li, and Z. Chen, “User-centric ultra-dense networks for 5g: challenges, methodologies, and directions,” IEEE Wireless Communications, vol. 23, no. 2, pp. 78–85, 2016.
- [12] C. Pan, M. Elkashlan, J. Wang, J. Yuan, and L. Hanzo, “User-centric c-ran architecture for ultra-dense 5g networks: Challenges and methodologies,” IEEE Communications Magazine, vol. 56, no. 6, pp. 14–20, 2018.
- [13] C. Zhu and W. Yu, “Stochastic modeling and analysis of user-centric network mimo systems,” IEEE Transactions on Communications, vol. 66, no. 12, pp. 6176–6189, 2018.
- [14] D. Su and C. Yang, “User-centric downlink cooperative transmission with orthogonal beamforming based limited feedback,” IEEE Transactions on Communications, vol. 63, no. 8, pp. 2996–3007, 2015.
- [15] S. Zhang, P. He, K. Suto, P. Yang, L. Zhao, and X. Shen, “Cooperative edge caching in user-centric clustered mobile networks,” IEEE Transactions on Mobile Computing, vol. 17, no. 8, pp. 1791–1805, 2018.
- [16] J. Xu, L. Chen, and P. Zhou, “Joint service caching and task offloading for mobile edge computing in dense networks,” in IEEE INFOCOM 2018 - IEEE Conference on Computer Communications, April 2018, pp. 207–215.
- [17] H. Lu, X. Jiang, and C. W. Chen, “Distortion-aware cross-layer power allocation for video transmission over multi-user noma system,” IEEE Transactions on Wireless Communications, pp. 1–1, 2020.
- [18] J. Lin, Q. Li, Y. Li, and C. Jiang, “Dynamic base station clustering and beamforming for an uplink simo cloud radio access network,” in 2014 IEEE International Conference on Communiction Problem-solving, 2014, pp. 421–424.
- [19] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein et al., “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends® in Machine learning, vol. 3, no. 1, pp. 1–122, 2011.