
User-Centric Cooperative MEC Service Offloading

Ruoyun Chen, Hancheng Lu, Pengfei Ma
Department of Electrical Engineering and Information Science,
University of Science and Technology of China, Hefei, Anhui 230027 China
[email protected], [email protected], [email protected]
Abstract

Mobile edge computing (MEC) provides users with a cloud environment close to the edge of the wireless network, supporting computing-intensive applications with low latency requirements. Combining offloading with wireless communication brings new challenges. This paper investigates the service caching problem during long-term service offloading in the user-centric wireless network. To meet the time-varying service demands of a typical user, a cooperative service caching strategy at the granularity of a base station (BS) cluster is proposed. We first formulate the caching problem as a time-averaged completion delay minimization problem and transform it into time-decoupled instantaneous problems via a virtual caching cost queue. We then propose a distributed algorithm based on the consensus-sharing alternating direction method of multipliers to solve each instantaneous problem. Simulations validate that the proposed online distributed service caching algorithm achieves the optimal time-averaged completion delay of offloading tasks with the smallest caching cost at the granularity of a BS cluster.

Index Terms:
User-centric network, MEC service offloading, online distributed caching.

I Introduction

With more intelligent and big-data applications deployed on user terminals, the resource demands of the computing-intensive tasks generated by these applications keep growing. For example, popular virtual reality applications consume considerable computing and storage resources to construct realistic images. However, user terminals with extremely limited resources cannot handle such tasks, and offloading them to the remote cloud incurs non-negligible transmission delay through the already-congested backbone network. Fortunately, mobile edge computing (MEC) [1, 2] deploys small servers at the edge of the network, providing a cloud environment in proximity of the user. Computing-intensive tasks can then be offloaded to MEC with low response latency.

Several works [3, 4, 5, 6] have contributed to MEC task offloading. MEC-assisted vehicle platooning is considered in [4], where the authors minimize the average total energy consumption under the constraint of meeting task deadlines and propose a Lyapunov optimization algorithm to solve it. Samanta et al. [6] address the service offloading problem for tasks with different delay requirements. Specifically, service offloading means first prefetching the dependent databases and libraries, i.e., services, that support the computing-intensive tasks, and then performing the computation. The caching process during service offloading involves the joint allocation of computing and storage resources, which brings challenges that have not been fully solved.

Furthermore, the wireless network plays a significant role in many end-to-end services [7, 8, 9, 10] and can also be the performance bottleneck in the offloading process, yet it is largely ignored in existing works. Traditionally, multiple users associate with one base station (BS) and offload their tasks to the server deployed at that BS. The competition for limited wireless communication resources causes a delay that is non-negligible compared with the processing delay when the load on servers is light. Moreover, users located at the edge of conventional cells experience poor signal-to-interference-plus-noise ratio (SINR). With the evolution of the 5G ultra-dense network, the user-centric architecture [11], [12], which replaces the traditional network-centric architecture, enables each user to be served by a proprietary virtual network. To implement this, multiple BSs form a cluster to serve a user cooperatively [13], [14]. The data sent by the user is jointly decoded by the BS cluster. Combined with interference cancellation technology, the uplink data rate can be increased, so that the wireless communication process is no longer the bottleneck of delay performance.

Nevertheless, service offloading in the user-centric network brings new challenges. To meet the time-varying offloading demands of a typical user over a long time span and make full use of the limited resources on the edge servers, the BSs in the cluster serving that user should conduct service caching cooperatively. Although there exist some studies on the cooperative service caching problem [15], [16], most of them discuss caching strategies based on the popularity among multiple users, where individual demands are not considered. Exploiting the benefits of the user-centric network for the offloading process, we study the service caching problem in the user-centric wireless network to provide customized offloading for each user. In summary, this paper investigates the cooperative caching problem from a long-term perspective for a typical user. The main contributions are summarized as follows:

1) We discuss the service offloading in the user-centric network. The service caching strategy is developed by minimizing the time-averaged completion delay of the offloading tasks in a long time span, for a typical user who is served by a BS cluster cooperatively.

2) In the absence of future information, a Lyapunov optimization based online algorithm is designed, which transforms the long-term service caching problem into multiple instantaneous optimization problems. Further to implement the cooperative service caching in each time slot, a distributed consensus-sharing algorithm under the alternating direction method of multipliers (ADMM) framework is proposed.

3) We carry out extensive simulations to validate the optimality and the superior convergence performance of the proposed algorithm. The results demonstrate that our algorithm outperforms the instantaneous optimal caching strategy and the iteration-based algorithm in terms of different metrics.

The rest of this paper is organized as follows. Section II describes the system model. The problem is formulated and analyzed in Section III. Section IV proposes the algorithm to solve the problem. The simulations are presented in Section V. Finally, this paper is concluded in Section VI.

II System Model

We consider a wireless network with $M$ BSs, denoted by the set $\mathcal{M}$. Each BS is equipped with $A$ antennas and endowed with cloud resources, i.e., processing and storage capabilities. Constrained by the deployment overhead, BS $m$ ($m \in \mathcal{M}$) has a CPU with maximum processing capacity $C_m$ (CPU cycles per Hz) and a storage with maximum space $S_m$.

There are $U$ active users in this network, denoted by the set $\mathcal{U}$. Each user is served by a cluster of BSs. BSs in the cluster receive user data and decode it jointly by exchanging channel state information through the backhaul link. We assume that each user generates computing tasks that request one type of service at a time. All $K$ types of services are stored at the remote cloud, denoted by the set $\mathcal{K}$. Service $k$ ($k \in \mathcal{K}$) has a size of $s_k$, occupying storage space $s_k$ on an edge server if it is cached. Furthermore, it has a computing requirement of $f_k$ to process per unit of data. Fig. 1 illustrates the main elements of the system. Each user is served by 3 BSs, and overlapping between clusters is allowed to maximize resource utilization.

To describe the dynamics of the communication system and users' requests, we separate the long time span $T$ into discrete time slots $t$ ($1 \leq t \leq T$, $t \in \mathbb{Z}$), and assume that the channel state and users' requests remain constant within each time slot. The details of the system model are illustrated next.

Figure 1: MEC-enabled user-centric network model.

II-A User-Centric Communication Model

Firstly, the user-centric wireless network model requires a clear statement of BS clustering. We use a binary variable $c_{u,m}(t)$ that equals 1 if BS $m$ belongs to the cluster of user $u$ at time $t$, and 0 otherwise. The BS cluster of user $u$ is denoted by $\Phi_u(t)$ and the set of users served by $\Phi_u(t)$ is denoted by $\Omega_u(t)$. Taking user $u$ as the typical user, the users in $\{v : v \neq u, v \in \Omega_u(t)\}$ are called intra-cluster users, while the remaining users are called inter-cluster users. With the clustering model clarified, the signal received by BS $m$ is

$$b_m(t) = \sum_{u\in\mathcal{U}} \sqrt{p_u}\,\mathbf{g}_{mu}(t)\,a_u(t) + n(t) = \sqrt{p_u}\,\mathbf{g}_{mu}(t)\,a_u(t) + \sum_{\substack{v\neq u,\\ v\in\Omega_u(t)}} \sqrt{p_v}\,\mathbf{g}_{mv}(t)\,a_v(t) + \sum_{\substack{w\neq u,\\ w\notin\Omega_u(t)}} \sqrt{p_w}\,\mathbf{g}_{mw}(t)\,a_w(t) + n(t), \quad (1)$$

where $p_u$ is the signal power of user $u$ and $\mathbf{g}_{mu}(t) \in \mathcal{C}^{A\times 1}$ is the complex channel coefficient between BS $m$ and user $u$. $a_u(t)$ is the complex symbol transmitted by user $u$ at time $t$ and $n(t)$ is white Gaussian noise with zero mean and variance $\sigma_n^2$. The signals received by BS $m$ can be divided into three parts: the first part is the useful signal of user $u$, the second part consists of the signals sent by intra-cluster users, i.e., intra-cluster interference, and the last part is the inter-cluster interference.

There are multiple technologies that can mitigate the interference nowadays, e.g., non-orthogonal multiple access [17]. In this paper, to reflect the gain of BS clustering under the user-centric mode, we adopt beamforming (or precoding) as the interference cancellation technique. By designing the beamforming vector, the intra-cluster interference can be eliminated at the receiving BS cluster [13]. The projection-based zero-forcing beamformer for user $u$ is calculated as follows

$$\mathbf{w}_u(t) = \frac{\big(\mathbf{I}_{A|\Phi_u(t)|} - \mathbf{G}_{-u}(t)\mathbf{G}_{-u}^{\dagger}(t)\big)\,\mathbf{g}_u^u(t)}{\left\|\big(\mathbf{I}_{A|\Phi_u(t)|} - \mathbf{G}_{-u}(t)\mathbf{G}_{-u}^{\dagger}(t)\big)\,\mathbf{g}_u^u(t)\right\|_2}, \quad (2)$$

where $\mathbf{g}^u_v(t) = [\cdots, \mathbf{g}_{mv}(t), \cdots]^T_{m\in\Phi_u(t)}$ and $\mathbf{G}_{-u}(t) = [\cdots, \mathbf{g}_v^u(t)^T, \cdots]^T_{v\neq u,\, v\in\Omega_u}$. Then the uplink SINR of user $u$ is

$$\mathrm{SINR}_u(t) = \frac{p_u\left|\mathbf{w}_u(t)^H \mathbf{g}^u_u(t)\right|^2}{\sum_{w\notin\Omega_u(t)} p_w \left|\mathbf{w}_u(t)^H \mathbf{g}^u_w(t)\right|^2 + \left|\mathbf{w}_u(t)\right|^2 \sigma_n^2}. \quad (3)$$

The uplink data rate of user $u$ is $r_u(t) = \log_2\big(1 + \mathrm{SINR}_u(t)\big)$.
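To make the communication model concrete, the following NumPy sketch (illustrative dimensions, powers, and random Rayleigh-style channels of our own choosing, not values from the paper) builds the projection-based zero-forcing beamformer (2) and evaluates the SINR (3) and the uplink rate:

```python
# Minimal sketch of (2)-(3); all dimensions and channel draws are assumptions.
import numpy as np

rng = np.random.default_rng(0)
A, cluster_size = 2, 3                 # antennas per BS, BSs in the cluster
dim = A * cluster_size                 # stacked channel dimension A*|Phi_u|

g_u = (rng.standard_normal(dim) + 1j * rng.standard_normal(dim)) / np.sqrt(2)
# Stacked channels of two intra-cluster interferers (columns of G_{-u}).
G_minus_u = (rng.standard_normal((dim, 2))
             + 1j * rng.standard_normal((dim, 2))) / np.sqrt(2)

# Project g_u onto the orthogonal complement of the intra-cluster channels,
# then normalize: w_u = (I - G G^dagger) g_u / ||.||_2, as in (2).
P_orth = np.eye(dim) - G_minus_u @ np.linalg.pinv(G_minus_u)
w_u = P_orth @ g_u
w_u /= np.linalg.norm(w_u)

p_u, p_w, sigma2 = 1.0, 1.0, 0.1       # powers and noise variance (assumed)
g_w = (rng.standard_normal(dim) + 1j * rng.standard_normal(dim)) / np.sqrt(2)

signal = p_u * np.abs(w_u.conj() @ g_u) ** 2
interf = p_w * np.abs(w_u.conj() @ g_w) ** 2   # inter-cluster term only
sinr = signal / (interf + np.linalg.norm(w_u) ** 2 * sigma2)
rate = np.log2(1.0 + sinr)                      # uplink rate r_u(t)

# Intra-cluster interference is (numerically) nulled by the projection:
assert np.abs(w_u.conj() @ G_minus_u).max() < 1e-10
```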

II-B Offloading Task Model

The task generated by user $u$ at time $t$ is denoted by $T_u(t)$ and characterized by $(d_{T_u(t)}, w_{T_u(t)})$, representing the size of the data and the workload, respectively. The workload refers to the computing resources required to process each unit of data, e.g., CPU cycles per bit, which can be acquired from the profile of the task. To clarify the type of service that the task requests, we use a binary variable $o_{k,T_u(t)}$ that equals 1 if the task generated by user $u$ at time $t$ requests service $k$, and 0 otherwise. We assume that each user uploads a computing task that requests one type of service in each time slot. A task that involves multiple types of services is treated as several independent tasks generated by different users. Under this assumption, we have the constraint $\sum_{k\in\mathcal{K}} o_{k,T_u(t)} = 1$.

II-C Service Offloading Model

The process of service offloading includes uploading tasks, processing tasks, and returning the computing results. Before processing can take place, the BSs have to prefetch the services from the remote cloud according to a caching strategy at the beginning of each time slot; the tasks then arrive within the time slot.

Let $x_{k,m}(t) \in [-1, 1]$ be the caching strategy for service $k$ at BS $m$ at time $t$. Its sign indicates caching ($x_{k,m}(t) > 0$) or removing ($x_{k,m}(t) < 0$), and its absolute value represents the corresponding probability. $\mathbf{x}_m(t) = [x_{1,m}(t), \cdots, x_{K,m}(t)]^T$ represents the caching strategy of BS $m$, and $\mathbf{X}^u(t) = [\mathbf{x}_m(t) \,|\, m \in \Phi_u(t)]$ denotes the caching strategy of the BS cluster $\Phi_u(t)$.

We use $h_{k,m}(t) \in [0, 1]$ to denote the probability that service $k$ has been cached at BS $m$ up to time $t$. Cached services occupy resources, so the limited resources on the edge servers bound the number of services that can be cached, which is represented by the constraints $\sum_{k\in\mathcal{K}} h_{k,m}(t+1)\, s_k \leq S_m$ and $\sum_{k\in\mathcal{K}} h_{k,m}(t+1)\, f_k \leq C_m$, $\forall m \in \mathcal{M}$. Services are cached or removed with a certain probability, so the next caching status is determined only by the current caching status and decision, which means that the probabilistic service caching process is a Markov decision process. The state transition function in our service caching model is

$$\mathbb{P}[\mathrm{state}(t+1)] = \mathcal{P}\big(\mathrm{state}(t), \mathrm{action}(t)\big) = [x_{k,m}(t)]^+ + h_{k,m}(t) - \left|x_{k,m}(t)\right| h_{k,m}(t), \quad (4)$$

where $[\cdot]^+$ means $\max(\cdot, 0)$. The last equality follows from the independence between $x_{k,m}(t)$ and $h_{k,m}(t)$.
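A one-line sketch of the transition rule (4) (the function name is ours):

```python
# Probabilistic caching transition (4): given the current caching probability h
# and a decision x in [-1, 1], return the caching probability at t+1.
def next_caching_prob(h: float, x: float) -> float:
    """P[cached at t+1] = [x]^+ + h - |x| * h, with [x]^+ = max(x, 0)."""
    return max(x, 0.0) + h - abs(x) * h

# Caching with probability 0.5 from h = 0.2 raises the probability to 0.6;
# removing with probability 0.5 (x = -0.5) lowers it to 0.1:
assert abs(next_caching_prob(0.2, 0.5) - 0.6) < 1e-12
assert abs(next_caching_prob(0.2, -0.5) - 0.1) < 1e-12
```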

Obtaining services from the cloud incurs monetary costs, e.g., for purchasing or transmission. We assume that these costs, differentiated by service type, are proportional to the size of the services. Let $Cost_k = \xi_k s_k$ denote the cost of prefetching service $k$, where $\xi_k$ is the cost coefficient per unit of service data; the expected prefetching cost of BS $m$ is then

$$Cost_m(t) = \sum_{k\in\mathcal{K}} \xi_k s_k\, [x_{k,m}(t)]^+ \big(1 - h_{k,m}(t)\big).$$

In each time slot, the user sends the offloading task with data size $d_{T_u(t)}$, which leads to an uplink communication delay

$$D_{T_u(t)}^{UCN} = \frac{d_{T_u(t)}}{r_u(t)}. \quad (5)$$

The data is received by all the BSs in the cluster and decoded jointly. The task dispatching strategy adopted in this paper is that the BS in the cluster with the highest probability of having cached the requested service processes the task. Other task dispatching strategies can be integrated, but they are not our focus here. Let $\widehat{k} = \arg\max_k o_{k,T_u(t)}$ denote the service that $T_u(t)$ requests, and $\widehat{m} = \arg\max_{m\in\Phi_u(t)} h_{\widehat{k},m}(t+1) = \arg\max_m c_{u,m}(t)\, h_{\widehat{k},m}(t+1)$ denote the BS that will process $T_u(t)$. The processing delay on the edge server is

$$D_{T_u(t)}^{edge} = \frac{d_{T_u(t)}\, w_{T_u(t)}}{f_{\widehat{k}}}. \quad (6)$$

Meanwhile, the task will be uploaded to the remote cloud with probability $1 - h_{\widehat{k},\widehat{m}}(t+1)$. Transmitting through the backbone network incurs a large delay

$$D_{T_u(t)}^{BKB} = \frac{d_{T_u(t)}}{R}, \quad (7)$$

where $R$ is the data rate of the backbone network, which is set as a small constant. We assume that the resources on the remote cloud are ample, so the processing delay on the remote cloud is negligible; the delay of offloading to the remote cloud thus comes mainly from transmission.

Generally, the computation results are very small, so we ignore the delay caused by returning them. Summing up all the delays generated during the offloading process, the completion delay of $T_u(t)$ is

$$D_{T_u(t)}^{total} = D_{T_u(t)}^{UCN} + h_{\widehat{k},\widehat{m}}(t+1)\, D_{T_u(t)}^{edge} + \big(1 - h_{\widehat{k},\widehat{m}}(t+1)\big)\, D_{T_u(t)}^{BKB}. \quad (8)$$
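For concreteness, a minimal sketch of the delay model (5)-(8), with placeholder numbers of our own choosing:

```python
# Expected completion delay of one task following (5)-(8).
def completion_delay(d, w, r, R, f_k, h_best):
    """d: task size, w: workload per unit data, r: uplink rate (5),
    R: backbone rate (7), f_k: computing requirement of the service,
    h_best: caching probability h_{k^,m^}(t+1) of the chosen BS."""
    d_ucn = d / r                 # uplink delay (5)
    d_edge = d * w / f_k          # edge processing delay (6)
    d_bkb = d / R                 # backbone transmission delay (7)
    return d_ucn + h_best * d_edge + (1.0 - h_best) * d_bkb   # total (8)

# A higher caching probability shifts the task from the slow backbone to the edge:
assert completion_delay(1.0, 2.0, 4.0, 0.5, 8.0, 0.9) < \
       completion_delay(1.0, 2.0, 4.0, 0.5, 8.0, 0.1)
```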

III Problem Formulation

After clarifying the system model, the long-term service caching problem is formulated as the minimization of the average task completion delay of the whole offloading process over multiple time slots in a long time span, given as problem P.

$$\mathbf{P}: \quad \min_{\mathbf{X}^u(t)} \frac{1}{T}\sum_{t=1}^{T} D_{T_u(t)}^{total}, \quad (9a)$$
$$s.t. \quad \frac{1}{T}\sum_{t=1}^{T}\sum_{m\in\mathcal{M}} c_{u,m}(t)\, Cost_m(t) < Cost^{th}, \quad (9b)$$
$$\sum_{k\in\mathcal{K}} h_{k,m}(t+1)\, s_k \leq S_m, \quad (9c)$$
$$\sum_{k\in\mathcal{K}} h_{k,m}(t+1)\, f_k \leq C_m, \quad (9d)$$
$$x_{k,m}(t) \in [-1, 1]. \quad (9e)$$

Here, $Cost^{th}$ is a hyperparameter deciding the cost threshold of the service provider; we aim for a small caching cost while guaranteeing the response latency during the service offloading process. Constraint (9b) bounds the average service caching cost over multiple time slots, (9c) and (9d) are the expected storage and computing resource constraints on the edge servers, and (9e) specifies the domain of the variables.

Analyzing the above problem, the service caching strategy $\mathbf{X}^u(t) = [\mathbf{x}_m(t) \,|\, m \in \Phi_u(t)]$ is the decision variable of the user-centric service caching problem. From (8) we can tell that the delay of wireless communication is independent of the caching process, while the processing delay is affected by the caching strategy of the BS cluster, which means that the service caching strategy depends on the clustering. Consequently, we solve the service caching problem based on the optimal BS clustering [18]. Naturally, our optimization goal is to minimize the averaged processing delay

$$\frac{1}{T}\sum_{t=1}^{T} h_{\widehat{k},\widehat{m}}(t+1)\, D_{T_u(t)}^{edge} + \big(1 - h_{\widehat{k},\widehat{m}}(t+1)\big)\, D_{T_u(t)}^{BKB}.$$

The challenges of solving this problem come from the unknown dynamic requests of the user and the huge decision space of the BS cluster. Regarding the former, requests in the next several time slots can generally be predicted accurately, but long-term requests cannot, so an online algorithm is necessary in the absence of future information. Regarding the latter, a distributed solution is more effective for the huge decision space. Taking both points into consideration, we propose an online distributed service caching algorithm, elaborated next.

IV Algorithms: On-ConShAD

IV-A Online Service Caching

Assuming that we have acquired the optimal clustering strategy $\mathbf{C}^{u*}(t)$ for the typical user at the beginning of each time slot, we then solve the service caching problem (9).

The long-term cluster-based service caching problem aims to improve the diversity of cached services through collaborative caching between BSs, to handle the fact that cached services may not be requested immediately but repeatedly in the future. To achieve this, the caching cost should be as low as possible, which is reflected in constraint (9b), where the caching variables of different time slots are coupled. Based on Lyapunov optimization, we transform the long-term constraint (9b) into a queue stability problem. Specifically, we construct a virtual caching cost queue $C(t)$, initialized as $C(0) = 0$, indicating the deviation of the current caching cost, whose dynamics evolve as follows

$$C(t+1) = [C(t) + a(t) - b(t)]^+, \quad (10)$$

where $a(t) = \sum_{m\in\mathcal{M}} c_{u,m}(t)\Big(\sum_{k\in\mathcal{K}} \xi_k s_k\, [x_{k,m}(t)]^+ \big(1 - h_{k,m}(t)\big)\Big)$ and $b(t) = Cost^{th}$.

The Lyapunov function is defined as $L(t) = \frac{1}{2}C(t)^2$ and the Lyapunov drift is $\triangle(t) = L(t+1) - L(t)$. According to the above definitions, we have

$$\triangle(t) = \frac{1}{2}C(t+1)^2 - \frac{1}{2}C(t)^2 \leq \frac{1}{2}\big(C(t) + a(t) - b(t)\big)^2 - \frac{1}{2}C(t)^2 = Q(t) + C(t)\big(a(t) - b(t)\big), \quad (11)$$

where $Q(t) = \frac{1}{2}\big(a(t) - b(t)\big)^2$, which can be shown to satisfy $Q(t) \leq \overline{Q(t)} = \frac{1}{2}\Big(\big(\sum_{m\in\mathcal{M}} c_{u,m}(t) \sum_{k\in\mathcal{K}} \xi_k s_k (1 - h_{k,m}(t))\big)^2 + (Cost^{th})^2\Big)$. Consequently, the long-term constrained problem (9) is transformed into an instantaneous problem of minimizing an upper bound on the drift-plus-penalty expression under the Lyapunov optimization framework

$$\min_{\mathbf{X}^u(t)} \overline{Q(t)} + C(t)\big(a(t) - b(t)\big) + V D_{T_u(t)}^{pro}, \quad (12a)$$
$$s.t. \quad (9c), (9d), (9e), \quad (12b)$$

where $V$ is a non-negative weight chosen to trade off between the completion delay of the users and the caching cost of the edge computing service providers.
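As a sketch of how the online transformation is used (names and placeholder numbers are ours, not values from the paper), the virtual queue update (10) and the per-slot drift-plus-penalty score (12a) can be coded as:

```python
# Virtual caching-cost queue update (10) and drift-plus-penalty bound (12a).
# The search over candidate caching decisions is abstracted away here.
def update_queue(C, a, b):
    """C(t+1) = [C(t) + a(t) - b(t)]^+, with a(t) the expected caching cost
    of the cluster and b(t) = Cost^th."""
    return max(C + a - b, 0.0)

def drift_plus_penalty(Q_bar, C, a, b, V, delay_pro):
    """Upper bound minimized in (12a): Q_bar + C*(a - b) + V * D^pro."""
    return Q_bar + C * (a - b) + V * delay_pro

# Example: a large backlog C penalizes costly caching decisions more heavily.
C, b, V, Q_bar = 5.0, 1.0, 2.0, 0.5
cheap  = drift_plus_penalty(Q_bar, C, a=0.5, b=b, V=V, delay_pro=1.2)
costly = drift_plus_penalty(Q_bar, C, a=2.0, b=b, V=V, delay_pro=0.8)
assert cheap < costly            # the queue steers toward the cheaper decision
C = update_queue(C, a=0.5, b=b)  # backlog shrinks after a cheap slot
```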

IV-B Distributed Parallel Caching: An ADMM Approach

Due to the dynamics of BS clustering, the service caching problem of one user involves the strategies of the whole BS set, whose decision space has size $K \times M$. This large-scale problem calls for a decentralized algorithm, and ADMM is widely used for solving large-scale optimization problems [19]. Furthermore, to improve the diversity of cached services at a small caching cost from the perspective of the BS cluster, the BSs in the same cluster should cooperate with each other. To this end, we introduce a cooperative service caching algorithm based on the consensus-sharing framework of ADMM.

To solve the online caching problem (12) using ADMM, we first analyze whether the optimization objective (12a) is decomposable. On one hand, the drift term can be computed independently at each BS and then aggregated. It can be rewritten as the sum of the decomposed terms (the time index $t$ is omitted here)

$$f(\mathrm{X}) = \overline{Q} + \sum_{m\in\mathcal{M}} C c_{u,m}^{*}\, \Xi^T\big([\mathbf{x}_m]^+ \odot (1 - \mathbf{h}_m)\big) - C\, Cost^{th} = \sum_{m\in\mathcal{M}} \Big(C c_{u,m}^{*}\, \Xi^T\big([\mathbf{x}_m]^+ \odot (1 - \mathbf{h}_m)\big) + \frac{1}{M}\big(\overline{Q} - C\, Cost^{th}\big)\Big) = \sum_{m\in\mathcal{M}} f_m(\mathbf{x}_m), \quad (13)$$

where $\mathrm{X}$ is the caching strategy of all the BSs in the system, $\mathbf{\Xi}_{K\times 1} = [\xi_1 s_1, \cdots, \xi_K s_K]^T$ is the service cost vector, $\mathbf{h}_m(t)_{K\times 1} = [h_{1,m}(t), \cdots, h_{K,m}(t)]^T$ is the caching state vector of BS $m$, and $\odot$ denotes element-wise multiplication of vectors or matrices.

On the other hand, the penalty term in (12a), i.e., the expected processing delay of $T_u(t)$, is determined by the probability $h_{\widehat{k},\widehat{m}}(t+1) = \max_m\big\{[x_{\widehat{k},m}(t)]^+ + h_{\widehat{k},m}(t) - |x_{\widehat{k},m}(t)|\, h_{\widehat{k},m}(t)\big\}$. Hence, the penalty depends on the caching result of the whole BS cluster. This dependency prevents the penalty term from being decomposed directly across multiple agents. Therefore, we design a sharing optimization goal accordingly to promote cooperation in the consensus-sharing framework.

The penalty term in (12a) can be expressed as follows

$$P = D_{T_u(t)}^{pro} = \max_{m\in\mathcal{M}}\Big\{c_{u,m}(t)^{*}\, \mathbf{o}^T \mathbf{h}_m(t+1)\Big\}\big(D_{T_u(t)}^{edge} - D_{T_u(t)}^{BKB}\big), \quad (14)$$

where $\mathbf{o}_{K\times 1} = [o_{1,T_u(t)}, \cdots, o_{K,T_u(t)}]^T$ is the service type indicator vector of task $T_u(t)$. According to this analysis of the processing delay, we design the shared minimization goal as a function of the maximum over all the local caching variables, elaborated as follows (the time index $t$ is omitted here and later)

$$g(\mathbf{L}(\mathbf{X})) = \max_{m\in\mathcal{M}}\{\mathcal{L}_m(\mathbf{x}_m)\}\,\big(D_{T_u(t)}^{edge} - D_{T_u(t)}^{BKB}\big), \quad (15)$$

where $\mathcal{L}_m(\mathbf{x}_m) = c_{u,m}^{*}\, \mathbf{o}^T\big([\mathbf{x}_m]^+ + \mathbf{h}_m - |\mathbf{x}_m| \odot \mathbf{h}_m\big)$ is a nonlinear function of the local caching strategy of BS $m$.
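To make the decomposition concrete, the following Python sketch (variable names are ours; NumPy vectors stand in for the $K$-dimensional quantities) implements the per-BS cost term $f_m$ from (13) and the coupling function $\mathcal{L}_m$ from (15):

```python
# Per-BS pieces used by the consensus-sharing reformulation.
import numpy as np

def f_m(x, h, xi_s, c_um, C, Q_bar, cost_th, M):
    """f_m(x_m) = C * c_um * Xi^T([x]^+ ⊙ (1-h)) + (Q_bar - C*Cost^th)/M."""
    return C * c_um * xi_s @ (np.maximum(x, 0.0) * (1.0 - h)) \
           + (Q_bar - C * cost_th) / M

def L_m(x, h, o, c_um):
    """L_m(x_m) = c_um * o^T([x]^+ + h - |x| ⊙ h): probability that the
    requested service is available at BS m after applying decision x_m."""
    return c_um * o @ (np.maximum(x, 0.0) + h - np.abs(x) * h)

# Example with K = 4 services, task requesting service 0:
K = 4
x = np.array([0.5, -0.2, 0.0, 1.0])
h = np.full(K, 0.3)
o = np.eye(K)[0]
print(L_m(x, h, o, c_um=1.0))   # 0.5 + 0.3 - 0.5*0.3 = 0.65
```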

With the above transformations, we can reformulate (12) as a consensus-sharing problem by introducing a consensus variable set $\{z_m \in \mathcal{R}\}_{m\in\mathcal{M}}$ with the constraints $\mathcal{L}_m(\mathbf{x}_m) = z_m$.

$$\min_{\mathrm{X}} \sum_{m\in\mathcal{M}} f_m(\mathbf{x}_m) + V g(\mathbf{z}), \quad (16a)$$
$$s.t. \quad \mathcal{L}_m(\mathbf{x}_m) = z_m, \forall m \in \mathcal{M}, \quad (16b)$$
$$\big([\mathbf{x}_m]^+ + \mathbf{h}_m - |\mathbf{x}_m| \odot \mathbf{h}_m\big)^T \mathbf{s} \leq S_m, \quad (16c)$$
$$\big([\mathbf{x}_m]^+ + \mathbf{h}_m - |\mathbf{x}_m| \odot \mathbf{h}_m\big)^T \mathbf{f} \leq C_m, \quad (16d)$$

where $\mathbf{s}_{K\times 1} = [s_1, \cdots, s_K]^T$ is the size vector of the services and $\mathbf{f}_{K\times 1} = [f_1, \cdots, f_K]^T$ is the corresponding computing requirement vector. The augmented Lagrangian is

$$L_\rho(\mathrm{X}, \mathbf{z}, \mathbf{y}) = V g(\mathbf{z}) + \sum_{m\in\mathcal{M}}\Big(f_m(\mathbf{x}_m) + y_m\big(\mathcal{L}_m(\mathbf{x}_m) - z_m\big) + \frac{\rho}{2}\left\|\mathcal{L}_m(\mathbf{x}_m) - z_m\right\|^2\Big), \quad (17)$$

with dual variables $y_m \in \mathcal{R}, \forall m \in \mathcal{M}$.

By defining the residual $r_m^\tau = \mathcal{L}_m(\mathbf{x}_m^\tau) - z_m^\tau$, the variables are updated iteratively by solving the following scaled optimization problems

$$\mathbf{x}_m^{\tau+1} := \arg\min_{\mathbf{x}_m} f_m(\mathbf{x}_m) + \frac{\rho}{2}\left\|r_m + u_m^\tau\right\|^2, \quad (18a)$$
$$\mathbf{z}^{\tau+1} := \arg\min_{\mathbf{z}} V g(\mathbf{z}) + \frac{\rho}{2}\sum_{m\in\mathcal{M}}\left\|z_m - \mathcal{L}_m(\mathbf{x}_m^{\tau+1}) - u_m^\tau\right\|^2, \quad (18b)$$
$$u_m^{\tau+1} := u_m^\tau + \mathcal{L}_m(\mathbf{x}_m^{\tau+1}) - z_m^{\tau+1}, \quad (18c)$$

where $u_m = y_m / \rho$. The minimization in (18a) can be solved in a completely decentralized manner, i.e., each BS solves the local minimization problem (18a) under the constraints (16c), (16d) in parallel.

Finally, the Online Consensus and Sharing ADMM based Distributed algorithm (On-ConShAD), which solves problem (9), is summarized in Algorithm 1.

Algorithm 1 On-ConShAD.
1:  Initialization: $C(0) = 0$, $h_{k,m}(0) = 0$, $x_{k,m}(0) = 0$, $\forall k \in \mathcal{K}, m \in \mathcal{M}$;
2:  for $t = 1$; $t \leq T$; $t{+}{+}$ do
3:     Acquire the optimal BS clustering $\mathbf{C}^{u*}(t)$ and calculate the uplink delay;
4:     Observe the caching state $\mathbf{h}_m(t), \forall m \in \mathcal{M}$ of the last time slot, and calculate the cost queue $C(t)$ according to (10);
5:     Predict the offloading task request $T_u(t)$;
6:     Initialization: $\tau = 0$, error tolerance $\epsilon > 0$, maximum iterations $iter^{max}$, dual variable $\mathbf{u} = \mathbf{0}$;
7:     repeat
8:        Step 1: $\forall m \in \mathcal{M}$, acquire $\mathbf{x}_m(t)^{\tau+1}$ by solving (18a) under (16c), (16d) in parallel;
9:        Step 2: Gather $\mathbf{x}_m(t)^{\tau+1}$ from all BSs in the cluster, update $z_m(t)^{\tau+1}$ by solving (18b), and calculate the dual variable $\mathbf{u}$ according to (18c);
10:       Step 3: Set $\tau = \tau + 1$;
11:    until the termination criterion is satisfied, i.e., $\sum_{m\in\mathcal{M}}\left\|\mathcal{L}_m(\mathbf{x}_m^\tau) - z_m(t)^\tau\right\|^2 \leq \epsilon$ or $\tau > iter^{max}$;
12:    Calculate the time-averaged caching cost and the time-averaged task completion delay;
13:  end for
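The following self-contained Python sketch illustrates one time slot of the inner ADMM loop (steps 7-11 of Algorithm 1), i.e., the updates (18a)-(18c). All numeric values, names, and the use of a generic solver are our own illustrative choices; in particular, the resource constraints (16c)-(16d) are simplified to box bounds, and a full implementation would pass them to the solver as explicit constraints:

```python
# Hedged sketch of the per-slot consensus-sharing ADMM iteration (18a)-(18c).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
K, M, rho, V = 6, 3, 1.0, 2.0           # services, cluster size, ADMM/V weights
h = rng.uniform(0.0, 0.5, (M, K))        # current caching probabilities
xi_s = rng.uniform(0.5, 1.5, K)          # per-service prefetch costs xi_k * s_k
o = np.eye(K)[0]                         # requested service (one-hot)
C_queue, d_gap = 2.0, -1.5               # cost queue C(t); D^edge - D^BKB (< 0)

def L_m(x, m):                           # availability after decision x, cf. (15)
    return o @ (np.maximum(x, 0) + h[m] - np.abs(x) * h[m])

def f_m(x, m):                           # cost term from (13); constants dropped
    return C_queue * xi_s @ (np.maximum(x, 0) * (1.0 - h[m]))

X = np.zeros((M, K)); z = np.zeros(M); u = np.zeros(M)
for _ in range(20):                      # ADMM iterations (tau)
    for m in range(M):                   # (18a): local x-updates, parallelizable
        res = minimize(lambda x: f_m(x, m)
                       + rho / 2 * (L_m(x, m) - z[m] + u[m]) ** 2,
                       X[m], bounds=[(-1, 1)] * K)
        X[m] = res.x
    Lvals = np.array([L_m(X[m], m) for m in range(M)])
    res = minimize(lambda zz: V * zz.max() * d_gap     # (18b): shared z-update
                   + rho / 2 * ((zz - Lvals - u) ** 2).sum(), z)
    z = res.x
    u = u + Lvals - z                    # (18c): dual update
print("availability per BS:", np.round(Lvals, 3))
```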

V Performance Evaluation

Figure 2: Performance under different sizes of BS cluster. (a) Time-averaged performance. (b) Convergence of ADMM.

In this section, we evaluate the performance of On-ConShAD through simulations. We set 10 BSs and 6 types of services. Each BS is equipped with 2 antennas and endowed with a resource-limited server. Each service has a specific configuration, including size, caching overhead factor, and resource requirement. We introduce interference by placing 4 communicating users.

First, we validate the superiority of serving the user with a BS cluster. In Fig. 2(a), size 1 represents the case of serving the user with a single BS. On one hand, the decreasing circle line shows that serving the user with a BS cluster achieves better delay performance. It is worth noting that the trends of the total delay and the uplink delay are basically the same, which means the contribution of the BS cluster is mainly reflected in the uplink communication process. The joint decoding and interference elimination in the clustering mode lead to a higher uplink data rate and thus a smaller uplink communication delay. On the other hand, the caching cost of the BS cluster keeps decreasing until the size reaches 4 and increases for larger sizes. This reveals that the BSs in a larger cluster are more likely to perform redundant caching because of the available resources, which aims at ensuring a minimum processing delay.

To further explore the influence of the cluster size on our algorithm, we test the convergence of the distributed consensus-sharing ADMM. As shown in Fig. 2(b), convergence always takes only several iterations as the size changes, which reveals that the size of the BS cluster has only a slight effect on the convergence of the distributed algorithm, indicating that the algorithm is scalable and can be applied to large-scale problems.

Figure 3: Impact of the way the BS cluster is divided. (a) Time-averaged processing delay. (b) Time-averaged caching cost.

Figure 4: Time-averaged performance comparison. (a) Time-averaged total delay. (b) Time-averaged caching cost.

Second, although we take advantage of the cooperation within BS clusters, we have not discussed how the BS clusters are divided. The proposed algorithm is based on the assumption that the BS cluster is divided optimally and dynamically. Therefore, we need to quantify how much the cluster division impacts the caching performance. In Fig. 3, the circle line represents the dynamic division while the diamond line represents the fixed division; the cluster size is set to 3. The delay performance is unaffected (Fig. 3(a)). However, the caching performance degrades when we divide the cluster dynamically (Fig. 3(b)). Though the dynamic clustering achieves optimal uplink performance, it interrupts the long-term nature of the caching status: the services cached in the past are no longer useful after re-clustering, which results in redundant caching.

To further verify the effectiveness and optimality of the proposed On-ConShAD, we set the size of the BS cluster to 3 and compare it with two algorithms. Single BS: This is the baseline of our algorithm, in which the typical user is served by a single BS. At each time slot, the typical user chooses the BS offering the highest uplink data rate and offloads to it. By minimizing the weighted sum of the completion delay and the caching cost on the single BS to meet the user's requirement in each time slot, its solution is instantaneously optimal. Gibbs: This is an iteration-based algorithm which adopts a variation of the Gibbs sampling method [16]. It makes binary caching decisions (cache with probability 1 or 0) and fractional offloading decisions (the percentage of tasks computed at the edge).

Fig. 4(a) shows the time-averaged completion delay of the three algorithms, from which we can tell that On-ConShAD performs as well as the instantaneously delay-optimal single BS. The completion delay of the tasks obtained by On-ConShAD is slightly smaller, consistent with Fig. 2(a). Fig. 4(b) illustrates the effectiveness of the online strategy, where minimizing the drift of the virtual caching cost queue limits the long-term caching cost to a small value. At first, On-ConShAD and single BS have equivalent caching costs. As time grows, single BS performs frequent replacement of services to minimize the completion delay, while the cost of On-ConShAD gradually decreases through minimizing the cumulative drift of the virtual caching cost queue. The poor performance of Gibbs is due to its deterministic caching strategy and the slow convergence of Gibbs sampling: the convergence results in [16] show that it takes hundreds of iterations to converge, while ADMM takes only a few (Fig. 2(b)).

VI Conclusion

In this paper, we discussed the service offloading problem in the user-centric wireless network, where each user is served by several BSs cooperatively. We mainly studied the cooperative service caching strategy during the offloading process from a long-term perspective, and proposed an online distributed algorithm to solve the problem. Simulations show the benefit of the BS cluster and validate the effectiveness of the proposed algorithm. Nevertheless, some limitations remain for future work, e.g., the BS clustering problem should be optimized jointly with the caching problem to acquire jointly optimal offloading solutions.

Acknowledgment

This work was supported in part by the National Natural Science Foundation of China (NSFC) (Grants 61771445, 61631017, and U19B2044).

References

  • [1] N. Abbas, Y. Zhang, A. Taherkordi, and T. Skeie, "Mobile edge computing: A survey," IEEE Internet of Things Journal, vol. 5, no. 1, pp. 450–465, 2017.
  • [2] Q. Pham, F. Fang, V. N. Ha, M. J. Piran, M. Le, L. B. Le, W. Hwang, and Z. Ding, "A survey of multi-access edge computing in 5G and beyond: Fundamentals, technology integration, and state-of-the-art," IEEE Access, vol. 8, pp. 116974–117017, 2020.
  • [3] Y. Zhu, Y. Hu, and A. Schmeink, "Delay minimization offloading for interdependent tasks in energy-aware cooperative MEC networks," in 2019 IEEE Wireless Communications and Networking Conference (WCNC), April 2019, pp. 1–6.
  • [4] Y. Hu, T. Cui, X. Huang, and Q. Chen, "Task offloading based on Lyapunov optimization for MEC-assisted platooning," in 2019 11th International Conference on Wireless Communications and Signal Processing (WCSP), 2019, pp. 1–5.
  • [5] Z. Qin, S. Leng, J. Zhou, and S. Mao, "Collaborative edge computing and caching in vehicular networks," in 2020 IEEE Wireless Communications and Networking Conference (WCNC), May 2020, pp. 1–6.
  • [6] A. Samanta and Z. Chang, "Adaptive service offloading for revenue maximization in mobile edge computing with delay-constraint," IEEE Internet of Things Journal, vol. 6, no. 2, pp. 3864–3872, 2019.
  • [7] H. Lu, M. Zhang, Y. Gui, and J. Liu, "QoE-driven multi-user video transmission over SM-NOMA integrated systems," IEEE Journal on Selected Areas in Communications, vol. 37, no. 9, pp. 2102–2116, 2019.
  • [8] Y. Fu, C.-A. Jiang, Y. Qin, and L. Yin, "Secure routing and transmission scheme for space-ocean broadband wireless network," Science China Information Sciences, vol. 63, Apr. 2020.
  • [9] L. Gong and Z. Zhu, "Virtual optical network embedding (VONE) over elastic optical networks," Journal of Lightwave Technology, vol. 32, no. 3, pp. 450–460, 2014.
  • [10] P. Lu, L. Zhang, X. Liu, J. Yao, and Z. Zhu, "Highly efficient data migration and backup for big data applications in elastic optical inter-data-center networks," IEEE Network, vol. 29, no. 5, pp. 36–42, 2015.
  • [11] S. Chen, F. Qin, B. Hu, X. Li, and Z. Chen, "User-centric ultra-dense networks for 5G: Challenges, methodologies, and directions," IEEE Wireless Communications, vol. 23, no. 2, pp. 78–85, 2016.
  • [12] C. Pan, M. Elkashlan, J. Wang, J. Yuan, and L. Hanzo, "User-centric C-RAN architecture for ultra-dense 5G networks: Challenges and methodologies," IEEE Communications Magazine, vol. 56, no. 6, pp. 14–20, 2018.
  • [13] C. Zhu and W. Yu, "Stochastic modeling and analysis of user-centric network MIMO systems," IEEE Transactions on Communications, vol. 66, no. 12, pp. 6176–6189, 2018.
  • [14] D. Su and C. Yang, "User-centric downlink cooperative transmission with orthogonal beamforming based limited feedback," IEEE Transactions on Communications, vol. 63, no. 8, pp. 2996–3007, 2015.
  • [15] S. Zhang, P. He, K. Suto, P. Yang, L. Zhao, and X. Shen, "Cooperative edge caching in user-centric clustered mobile networks," IEEE Transactions on Mobile Computing, vol. 17, no. 8, pp. 1791–1805, 2018.
  • [16] J. Xu, L. Chen, and P. Zhou, "Joint service caching and task offloading for mobile edge computing in dense networks," in IEEE INFOCOM 2018 - IEEE Conference on Computer Communications, April 2018, pp. 207–215.
  • [17] H. Lu, X. Jiang, and C. W. Chen, "Distortion-aware cross-layer power allocation for video transmission over multi-user NOMA systems," IEEE Transactions on Wireless Communications, 2020.
  • [18] J. Lin, Q. Li, Y. Li, and C. Jiang, "Dynamic base station clustering and beamforming for an uplink SIMO cloud radio access network," in 2014 IEEE International Conference on Communication Problem-Solving, 2014, pp. 421–424.
  • [19] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011.