
DDPS: Dynamic Differential Pricing-based Edge Offloading System with Energy Harvesting Devices

Hai Xue, Yun Xia, Neal N. Xiong, Di Zhang, and Songwen Pei. This work was supported in part by the National Natural Science Foundation of China (NSFC) under grant 61975124, and the Joint Funds of NSFC under grant U22A2001. (Corresponding author: Songwen Pei.) Hai Xue, Yun Xia, and Songwen Pei are with the School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China (e-mail: [email protected], [email protected], [email protected]). Neal N. Xiong is with the Department of Computer Science and Mathematics, Sul Ross State University, Alpine, TX 79830 USA (e-mail: [email protected]). Di Zhang is with the School of Electrical and Information Engineering, Zhengzhou University, the Henan International Joint Laboratory of Intelligent Health Information System, the National Telemedicine Center, and the National Engineering Laboratory for Internet Medical Systems and Applications, Zhengzhou 450001, China (e-mail: [email protected]).
Abstract

Mobile edge computing (MEC) paves the way to alleviate the energy and computation burden of mobile users (MUs) by offloading tasks to the network edge. To enhance MEC server utilization by optimizing its resource allocation, a well-designed pricing strategy is indispensable. In this paper, we consider an edge offloading scenario with energy harvesting devices and propose a dynamic differential pricing system (DDPS), which determines the price per unit time according to the usage of computing resources to improve edge server utilization. Firstly, we propose an offloading decision algorithm that decides whether to offload and, if so, how much data to offload; the decision is made by balancing the harvested energy against the energy consumed. Secondly, for the offloading case, we formulate the interaction between the MUs and the server as a Stackelberg game and propose a differential pricing algorithm to determine the optimal computing resources required by MUs. Furthermore, the proposed algorithm reallocates computing resources to delay-sensitive devices when server resources remain after the initial allocation, aiming to make full use of the server's computing resources. Extensive simulations demonstrate the effectiveness of the proposed DDPS scheme.

Index Terms:
Mobile edge computing, pricing strategy, resource allocation, Stackelberg game.

I Introduction

The Internet of Things (IoT) is a transformative technology that empowers large-scale, resource-constrained devices with software programming, sensors, and network technology, bringing us closer to the realization of the Internet of Everything. According to ITU-R WP5D, the ubiquitous and intelligent IoT will be a main focus of forthcoming 6G research towards 2035. However, when mobile IoT devices are deployed in remote areas and resource-scarce environments (e.g., forests, deserts, and oceans), they often struggle to support applications demanding substantial computational resources and energy. Although mobile IoT devices are becoming increasingly powerful, their computing and battery capabilities remain insufficient due to physical size limitations, which poses a great challenge to performing computation-intensive tasks on IoT devices [1, 2, 3].

Mobile edge computing (MEC) is expected to be an effective solution to the challenges of feeble computing capability and limited battery power by offloading computational tasks from mobile IoT devices to MEC servers. MEC deploys servers with abundant computational resources at the network edge, minimizing the need for long-distance data transmission, which yields low latency and prolongs the operational longevity of mobile IoT devices. Nonetheless, IoT devices require periodic recharging or battery replacement, which incurs substantial financial costs, particularly in remote areas. Therefore, in this paper, we consider an edge offloading scenario with energy-harvesting devices that provide a sustainable power source for mobile IoT devices, which not only mitigates the need for frequent manual battery replacement but also extends the overall lifetime of these devices.

Beyond the consideration of solar power, terminal devices also have to prioritize cost minimization. When terminal devices offload tasks to edge servers, they must carefully evaluate the trade-off between offloading and local execution costs to maximize their utility. To this end, questions such as whether to offload, how much data to offload, and how much computing resource to request from the edge server become paramount. Simultaneously, it is vital to foster an incentive mechanism that stimulates edge servers to willingly process tasks for edge devices; without adequate motivation, servers are disinclined to allocate resources to the offloaded computation-intensive tasks. Our focus therefore shifts to the development of pricing strategies that incentivize server participation. That is to say, it is challenging to develop a pricing scheme that balances the interests of mobile users (MUs) and the edge server while fully utilizing the server's computing resources.

Most of the existing pricing schemes denote the cost as an optimal constant per CPU cycle [4, 5, 6], so the unit price remains the same for each unit of data length. In other words, under these schemes the payment is directly proportional to the quantity of offloaded data. That is, if multiple users (unless stated otherwise, we use the terms MUs, terminal devices, and users interchangeably) offload the same amount of data and process identical applications, they pay identical charges, irrespective of their computational demands on the server. As a result, MUs may be inclined to occupy substantial computational resources on the server. From the server's perspective, however, it is imperative to consider each user's utilization of computation, given the finite nature of server resources. Elastic pricing schemes that consider both computing resource usage and the volume of offloaded data alleviate the concern of resource over-utilization. Consequently, pricing strategies should account for computing resource usage alongside offloaded data volume.

Therefore, in this paper, we propose an effective dynamic differential pricing system with energy harvesting devices (DDPS). The proposed DDPS scheme considers each MU's usage of server computing resources when determining the unit price. In the meantime, energy-harvesting devices are also utilized to calculate the optimal data offloading volume for MUs. For the interaction between the terminal devices and the server, we employ the Stackelberg game to establish a cooperative decision-making process, which involves determining the computation resources required by the terminal devices and devising a corresponding pricing strategy for the server.

The contributions of this paper are summarized as follows.

  1) We first propose an efficient task offloading decision algorithm, which incorporates energy harvesting capability into the sensor devices and determines the volume of offloaded data based on the amount of harvested energy.

  2) We propose a dynamic differential pricing algorithm that takes each MU's usage of server computing resources into account. Under the proposed pricing algorithm, the unit price is not fixed at a specific value but is a variable function of the MU's CPU usage. Specifically, an MU pays in proportion to its usage of server computing resources as well as the amount of offloaded data.

  3) We establish a single-server multi-user Stackelberg game to determine the optimal computing resources required by MUs and the optimal server pricing. In addition, we propose a redistribution strategy for the situation where the server has surplus resources, so as to maximize the server's utility.

  4) Extensive simulations are conducted to verify the effectiveness of the proposed DDPS scheme; the results demonstrate that our scheme enhances the server's utility and the MUs' ratio of service, and reduces the average delay of MUs.

The rest of this paper is organized as follows. In Section II, we present the related work. In Section III, the proposed system model is illustrated, including the model of energy consumption and delay. Section IV exhibits the problem formulation and solutions. In Section V, simulation results are presented and discussed. The conclusion is given in Section VI.

II Related work

In the realm of MEC systems, extensive research has been conducted on resource allocation and pricing strategies to incentivize edge servers to serve users. We divide this work into three categories from different perspectives as follows.

II-A Game pricing

First of all, a common method is applying the Stackelberg game to address challenges related to resource provisioning and pricing between the MUs and the server. Tao et al. [7] utilized a Stackelberg game framework to establish the relationship between MEC server resource pricing and offloading data volume, employing a differential evolution algorithm to seek the optimal pricing strategies. Chen et al. [8] decomposed the multi-resource allocation and pricing problem into sub-problems, constructing Stackelberg games for each sub-problem and proposing an iterative algorithm to find equilibrium prices. Li et al. [9] introduced a novel multi-leader single-follower Stackelberg game, where each leader sets an optimal price based on the computation resources required by followers. Qin et al. [10] modeled the interaction between MEC servers and vehicles as a Stackelberg game, presenting a dynamic iterative algorithm that finds the Nash equilibrium for pricing determination. Wang et al. [11] framed the interaction between UAV-MEC servers and MUs as a Stackelberg game, developing an arithmetic descent-based MRIG algorithm for computing resource pricing. Mitsis et al. [12] modeled user data offloading decisions as a non-cooperative game and employed semi-autonomous game methods or fully autonomous reinforcement learning to obtain the optimal computing service price. Li et al. [13] formulated the resource management and pricing problem between the MEC server and IoT devices as a bidirectional auction game, introducing the EWA algorithm to incorporate artificial intelligence for adaptive learning of the optimal price.

II-B Auction pricing

A large body of literature has used auction theory to study resource pricing in the context of MEC. Hai et al. [14] addressed the resource allocation challenge between MUs and microclouds, proposing a suitable auction mechanism to determine the price charged by microclouds to MUs. Shen et al. [15] delved into edge cloud pricing issues, introducing an FPTAS auction mechanism to achieve socially optimal welfare. Ng et al. [16] presented a full-payment auction mechanism to incentivize edge devices to actively participate in encoding computational tasks; in this mechanism, the bid from each edge device indicates its CPU capability, influencing the allocation of computational tasks. Wang et al. [17] first established the relationship between the resources offered by edge clouds and the charges to MUs in a non-competitive environment; subsequently, for a competitive environment, the authors designed an online PMMRA auction mechanism, effectively determining the price paid by MUs for the resources provided by MEC servers. Sun et al. [18] proposed two auction mechanisms, DAMB and BFDA, where MUs declare bids when requesting multi-task services and edge servers collaboratively provide services to MUs. Wu et al. [19] designed an auction mechanism for a MEC system comprising multiple MUs and a service provider, presenting a precise algorithm to maximize social welfare and a perturbation-based randomized allocation algorithm to achieve an approximately $(1-\alpha)$-optimal social welfare, demonstrating the effectiveness of their auction mechanism. Ma et al. [20] introduced a TCDA auction mechanism, providing distinct pricing strategies based on critical values and VCG mechanisms for MUs and edge servers. Wang et al. [21] devised an auction pricing strategy where the highest bidder in each round performs task offloading, confirming the efficacy of this strategy. Su et al. [22] proposed a TCA auction mechanism and verified its effectiveness in addressing resource allocation and pricing challenges.

Figure 1: System model.

II-C Dynamic Pricing Based on Physical Layer Parameters

While existing pricing works often rely on abstract concave utility functions, some literature explores dynamic differential pricing based on MUs' physical layer parameters, contributing to a more nuanced understanding of resource pricing in MEC systems. Han et al. [23] utilized idle computation resources from parked vehicles in MEC systems and dynamically adjusted prices based on the current system state to minimize average costs. Chang et al. [24] employed binary search to find the optimal offloading delay that guarantees the processing latency, dynamically adjusting prices within the latency range. Liu et al. [25] proposed unified and differential pricing strategies based on the extent of the edge cloud's awareness of network information. Seo et al. [1], Liang et al. [26], and Kim et al. [27] determined pricing dynamically based on MU utilization of server computing resources, with different approaches such as linear, quadratic, and profit-maximization formulations.

Although existing pricing schemes have achieved excellent results, they still suffer from several issues. Most of these schemes [4, 5, 6, 25, 27] define cost as a fixed optimal value per CPU cycle, resulting in a consistent unit price for each unit of data length. That is, the payment becomes directly proportional to the data volume, disregarding MUs' computational demands on the server when multiple users offload and process equal amounts of data and applications; this may result in excessive utilization of the server's computing resources. Some papers [28, 29, 30] consider neither the quantity of offloaded data nor the computing resource usage, instead relying on an abstract concave utility function to uniformly price all MUs seeking server access. In other words, regardless of their specific requirements for offloaded data or computing resources, all MUs are charged equally to maximize total social welfare; this approach reduces server utility and user satisfaction due to its inherent unfairness. From the perspective of a server with limited computing resources, it is crucial to consider individual users' utilization of these resources. Elastic pricing schemes help alleviate resource over-utilization by dynamically adjusting prices based on demand fluctuations.

III System model

As depicted in Fig. 1, a single server and multiple MUs are considered in this paper, and we assume that all tasks follow the FCFS rule in the queue. MUs are charged for utilizing server computing resources, as long as MEC servers are available to handle the tasks in the queue. The amount of CPU resources allocated to an MU is determined by its maximum tolerable delay. Upon arrival, a task first enters the first-level queue (i.e., Class 1). If the server does not have enough computation resources to execute the current task under the delay constraint, the task is forwarded to the second-level queue (i.e., Class 2) and waits for service, so that Class 1 generates a game between the subsequent task and the server [28]. At the beginning of the next time slot, the server gives priority to tasks in Class 2 and then returns to Class 1. For a task in Class 2, if the computing resources are still insufficient, it continues to wait until it exceeds the maximum delay and is dequeued. For convenience, Table I summarizes the important notations used in this paper.

TABLE I: SUMMARY OF PRIMARY NOTATIONS
Symbol  Description
$l_m$  The maximum amount of data the user can offload
$l_i$  The amount of data generated by the user
$q_o$  The optimal data offloading amount of the user
$q_i$  The amount of data offloaded by the $i$th user
$F_t$  The remaining computing resources of the server
$F_i$  The computing resources requested by the $i$th user from the server
$F_{loc}^{i}$  The local computing capability of MU $i$
$h$  The number of cycles required to process one bit
$A$  The area of the solar energy harvesting device
$H$  The average amount of solar radiation
$\eta$  The efficiency of the solar device
$k_a$  The correction factor
$E_h$  The energy harvested by a solar installation
$E_u$  The energy consumed by uploading a task
$E_d$  The energy consumed by downloading a task
$k$  The effective switched capacitance
$\mu$  The service rate of the queue
$\lambda$  The task arrival rate
$t_{req}^{i}$  The delay requirement to be satisfied by the $i$th MU
$X$  The pointer to the Class 1 queue
$Y$  The pointer to the Class 2 queue

III-A MU energy consumption model

We assume that the total amount of data per task is $l_i$ bits, the amount of offloaded data is $q_i$ bits, and the server takes $h$ cycles to process one bit of data. Therefore, the energy consumption of local execution is expressed by the following formula [31].

$E_{loc}^{i} = k\,(l_i - q_i)\,h\,F_{loc}^{2}.$  (1)

where $k$ denotes the effective switched capacitance based on the chip architecture, and $F_{loc}$ denotes the CPU capacity of the MU. We also assume that the achievable uplink and downlink transmission rates from the user to the server are $R_u$ and $R_d$, respectively. Since the data size after execution differs from the original size when the amount of task $q_i$ is offloaded to the server, we denote the ratio as $r$ [32]; the downlink data volume is then $q_i r$. To sum up, the uplink transmission time $t_{up}$ and the downlink transmission time $t_{down}$ are expressed as follows.

$t_{up} = \frac{q_i}{R_u}.$  (2)
$t_{down} = \frac{q_i r}{R_d}.$  (3)

Assume that the upload and download transmission powers are $P_u$ and $P_d$, respectively. The upload transmission energy consumption $E_u$ and the download transmission energy consumption $E_d$ are accordingly expressed as follows.

$E_u = P_u t_{up} = P_u \frac{q_i}{R_u}.$  (4)
$E_d = P_d t_{down} = P_d \frac{q_i r}{R_d}.$  (5)

Therefore, the total energy consumption $E_{tot}$ of MU $i$ per time slot $t$ is obtained as follows.

$E_{tot}^{i}(t) = E_{u}^{i}(t) + E_{loc}^{i}(t) + E_{d}^{i}(t).$  (6)

In the meantime, the battery alone cannot supply power to the device for long, since we assume that battery replacement is difficult for all MUs in remote areas. However, energy harvesting devices are deployed as aforementioned, and the harvested energy $E_h$ is defined by the following equation.

$E_{h}^{i}(t) = A H \eta k_a.$  (7)

where $A$ is the area of the solar panel, $H$ is the average amount of solar radiation, $k_a \in [0, 1]$ is the correction factor, and $\eta \in (0, 1)$ is the efficiency of the solar device. The correction factor is introduced to represent the actual energy absorbed by the harvesting device under objective conditions such as weather and shading. To sum up, the energy surplus $E_r$ of the device per time slot $t$ is expressed by the following formulas.

$E_{r}^{i}(t) = E_{b}^{i}(t) + E_{h}^{i}(t) - E_{tot}^{i}(t).$  (8)
s.t. $E_{r}^{i}(t) \geq 0,$  (8a)
$E_{r}^{i}(t) \leq E_b.$  (8b)

where $E_b$ is the battery capacity of MU $i$.
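To make the energy model concrete, the following Python sketch evaluates Eqs. (1)-(8) for a single MU in one time slot. It is a minimal sketch: all numeric parameter values are illustrative assumptions, not values taken from the paper.

# Minimal sketch of the MU energy model, Eqs. (1)-(8).
# All numeric values below are illustrative assumptions.

def local_energy(k, l_i, q_i, h, f_loc):
    """Eq. (1): energy to process the (l_i - q_i) bits kept locally."""
    return k * (l_i - q_i) * h * f_loc ** 2

def tx_energy(p_u, p_d, q_i, r, r_u, r_d):
    """Eqs. (4)-(5): uplink energy P_u*q_i/R_u plus downlink energy P_d*q_i*r/R_d."""
    return p_u * q_i / r_u + p_d * q_i * r / r_d

def energy_surplus(e_b, a, h_rad, eta, k_a, e_tot, e_cap):
    """Eqs. (7)-(8): battery + harvested solar energy - consumption, clipped to [0, E_b]."""
    e_h = a * h_rad * eta * k_a                  # Eq. (7)
    return min(max(e_b + e_h - e_tot, 0.0), e_cap)

# Example with assumed parameters.
k, h_cycles = 1e-27, 1000                        # switched capacitance, cycles per bit
l_i, q_i, r = 4e6, 3e6, 0.2                      # task bits, offloaded bits, output ratio
f_loc = 0.5e9                                    # local CPU capacity (cycles/s)
e_loc = local_energy(k, l_i, q_i, h_cycles, f_loc)
e_tx = tx_energy(0.1, 1.0, q_i, r, 1e6, 1e6)     # P_u = 0.1 W, P_d = 1 W as in Table II
e_tot = e_loc + e_tx                             # Eq. (6)
e_r = energy_surplus(0.0, 0.1, 500.0, 0.2, 0.8, e_tot, 10.0)
print(f"E_loc={e_loc:.3f} J, E_tx={e_tx:.3f} J, E_r={e_r:.3f} J")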

III-B Queue model

Assume that task arrivals follow a Poisson distribution, whose probability function is expressed as the following formula.

$P(N=n) = \frac{e^{-\lambda}}{n!}\lambda^{n}.$  (9)

where $\lambda$ is the task arrival rate and $N$ is the number of tasks. The average waiting delay $t_w$ is calculated as follows.

$t_w = \frac{1/\mu}{1-\lambda/\mu} - \frac{1}{\mu} = \frac{\lambda/\mu^{2}}{1-\lambda/\mu}.$  (10)

where $\mu$ is the service rate of the queue, i.e., the number of tasks in the queue processed per unit time. For the proposed scheme, it is assumed that each task is completed within the maximum delay, so the average completion time $t_{ave}$ is approximately the mean of the maximum delays of all MUs. $\mu$ and $t_{ave}$ are derived as follows.

$\mu = \frac{\max N}{t_{ave}}.$  (11)
s.t. $F_t \leq \sum_{i=1}^{N} F_i.$  (11a)
$t_{ave} = \frac{\sum_{i=1}^{\max N} t_{req}^{i}}{\max N}.$  (12)

where $t_{req}^{i}$ is the delay requirement to be satisfied by the $i$th MU, $F_i$ is the server CPU capacity utilized by the $i$th MU, and $F_t$ denotes the available CPU capacity of the server. $F_i$ and $F_t$ are illustrated in detail in Subsection III-C.
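The queue quantities above are straightforward to compute; the sketch below evaluates Eqs. (9)-(12) in Python, with the per-MU delay requirements chosen as illustrative assumptions.

import math

def poisson_pmf(n, lam):
    """Eq. (9): probability of n task arrivals at rate lambda."""
    return math.exp(-lam) * lam ** n / math.factorial(n)

def avg_wait(lam, mu):
    """Eq. (10): average waiting delay (excluding service) of the queue."""
    assert lam < mu, "queue is unstable when lambda >= mu"
    return (lam / mu ** 2) / (1 - lam / mu)

def service_rate(t_req):
    """Eqs. (11)-(12): mu = max N / t_ave, with t_ave the mean delay requirement."""
    t_ave = sum(t_req) / len(t_req)
    return len(t_req) / t_ave

t_req = [0.5, 0.4, 0.6]               # assumed per-MU delay requirements (s)
mu = service_rate(t_req)
print(poisson_pmf(2, 0.3), avg_wait(0.3, mu))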

III-C Pricing model for the server

Denote $F_i$ as the CPU capacity utilized by the $i$th MU; the processing time $t_p^i$ at the server is then given as follows.

$t_{p}^{i} = \frac{h q_i}{F_i}.$  (13)

Here, a price bidding function is proposed that considers CPU utilization when setting the price per unit time. The defining principle is that the function must be increasing [1]: the bid should become more expensive as the user consumes more CPU capacity, and the server should set a higher price as its available resources shrink, so as to ease the pressure on the server. Let $F_t$ denote the available server CPU capacity; the unit price is defined as the product of the pressure and the resources consumed, as follows.

$w_i = \frac{F_i}{F_t} f(F_i).$  (14)

where $w_i$ is the unit price. Finally, the total payment is the product of the price per unit time and the processing time, which is calculated as follows.

$W_i = w_i t_{p}^{i} = \frac{h q_i}{F_t} f(F_i).$  (15)

where $W_i$ is the payment of the $i$th MU. In addition, to provide a discount when the MU uses more server resources, we adopt the logarithmic function as the price bidding function [8, 1] (the $\lg(\cdot)$ function used in our pricing is the base-10 logarithm, since computational resources are usually represented in scientific notation, which is convenient for calculation). Then, the payment $W_i$ is expressed as

$W_i = \frac{h q_i}{F_t} \lg(F_i + d).$  (16)

Here, $d$ should not be smaller than 1, so that the payoff is always positive. It is important that neither $W_i$ nor $w_i$ becomes too large; otherwise, the user pays too much and abandons the task processing, which increases the packet loss rate and, correspondingly, the penalty function. The penalty function $P$ is defined hereinafter.

Lemma 1

The payment $W_i$ is an increasing function of the offloaded data size $q_i$ and of the server CPU capacity $F_i$ utilized by the MU, respectively.

Proof 1

$h$ is the number of CPU cycles required for one bit of data, and $F_t$ is the total CPU capacity of the server; both are non-negative by definition. The partial derivatives of the payment with respect to $q_i$ and $F_i$ are as follows:

$\frac{\partial W_i}{\partial q_i} = \frac{h}{F_t} \lg(F_i + d) \geq 0.$  (17)
$\frac{\partial W_i}{\partial F_i} = \frac{h q_i}{F_t} \cdot \frac{1}{(F_i + d)\ln 10} \geq 0.$  (18)

Since the first partial derivatives with respect to both $q_i$ and $F_i$ are non-negative, $W_i$ is an increasing function.
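As a sanity check on Lemma 1, the following Python sketch evaluates the unit price of Eq. (14), the payment of Eq. (16), and the penalty of Eq. (24) defined later; all numeric values, including the choice gamma = 0.5, are assumptions for illustration.

import math

def unit_price(f_i, f_t, d=1.0):
    """Eq. (14) with the log bidding function: w_i = (F_i / F_t) * lg(F_i + d)."""
    return (f_i / f_t) * math.log10(f_i + d)

def payment(h, q_i, f_i, f_t, d=1.0):
    """Eq. (16): W_i = (h * q_i / F_t) * lg(F_i + d)."""
    return h * q_i / f_t * math.log10(f_i + d)

def penalty(dropped_payments, gamma=0.5):
    """Eq. (24): P = gamma * sum of the payments of dropped tasks."""
    return gamma * sum(dropped_payments)

f_t = 6e9                                  # server capacity as in Table II
print(payment(1000, 3e6, 1e9, f_t))        # requesting more resources ...
print(payment(1000, 3e6, 2e9, f_t))        # ... yields a higher payment (Lemma 1)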

Figure 2: The procedure of the Stackelberg game.

IV The proposed DDPS

In this section, we model the game between the server and the MUs as a Stackelberg game with a single leader and multiple followers, where the server acts as the leader and the MUs act as followers. Fig. 2 depicts the procedure of the established Stackelberg game between the MUs and the MEC server. When an MU intends to offload a task, it first provides task-related information, including its CPU capacity ($F_{loc}$), the required server CPU capacity ($F_i$), the minimum latency requirement ($t_{req}$), and the data size to be processed ($l_i$). The server then evaluates this information to decide whether it can accommodate the task. If it can, the server informs the user of the remaining server resources and quotes an appropriate service price. According to the game rule we define, aiming to maximize the server's utility, when all MUs have entered the server queue within a time slot and server resources remain, the surplus computing resources are allocated in proportion to each MU's share of the total offloaded data, and a correspondingly higher price is quoted to increase the users' average expenditure.

Subsequently, a cost function for each user and a utility function for the server are formulated. Each user selfishly minimizes its cost function, while the server selfishly maximizes its utility function.

IV-A The utility function of MU and the optimal policy

The utility function of an MU is obtained by subtracting the payment to the edge server from the energy-saving profit of offloading tasks. The utility function $U_{user}^{i}$ of the MU is thus expressed as follows.

$U_{user}^{i} = \max E_{r}^{i} + \min W_i.$  (19)

Subsequently, the execution latency is modeled under a partial offloading model [33]: the MU offloads part of the data, so $l_i$ is processed both on the server and locally. The execution delay is therefore the larger of the offloading time $t_{off}$ and the local processing time $t_{loc}$, expressed as follows.

$t_e = \max(t_{loc}, t_{off}).$  (20)

The time $t_{loc}$ required to execute the locally retained data is given as follows.

$t_{loc} = \frac{(l_i - q_i) h}{F_{loc}}.$  (21)

The time consumed for processing the offloaded data is the sum of the waiting delay $t_w$, the uplink data transmission time $t_{up}$, the processing time on the server $t_p$, and the downlink data transmission time $t_{down}$.

$t_{off} = t_{up} + t_p + t_{down} + t_w.$  (22)

Since the downlink data volume is less than the uplink data volume $q_i$, the time consumed for offloading is expressed as follows.

$t_{off} = \frac{q_i}{R_u} + \frac{r q_i}{R_d} + t_p + t_w = \left(\frac{1}{R_u} + \frac{r}{R_d} + \frac{h}{F_i}\right) q_i + \frac{\lambda/\mu^{2}}{1-\lambda/\mu}.$  (23)

When the offloading delay exceeds the delay tolerance of the task, the task is dropped, and the packet-loss penalty function $P$ is given by the following equation [23].

$P = \gamma \sum_{i=0}^{M} W_i.$  (24)

where $M$ is the number of dropped packets and $\gamma \in (0, 1)$ is a weight coefficient. Note that since discarded tasks do not play the game with the server, their optimal $F_i$ cannot be determined; here, we compute $F_i$ from the maximum delay tolerance and use it directly for pricing.

Input: t_req
Output: X
while X != null do
    Enter the X.l;
    Get E_h according to Eq. (7);
    According to Lemma 2, Lemma 3, and Lemma 4, get l_c, q_o^opt, l_m;
    if 0 < X.l < l_c then
        compute locally;
    else
        if l_c <= X.l < q_o^opt then
            offload partially;
            calculate X.F according to t_req;
            calculate X.t according to Eq. (20);
        else
            if q_o^opt <= X.l <= l_m then
                calculate X.F according to t_req;
                calculate X.t according to Eq. (23);
            else
                drop;
            end if
        end if
    end if
    X = X.next;
end while
Algorithm 1: Offloading decision. (In the algorithm, for the variables listed in Table I we omit the subscripts and use pointers to represent the i-th user.)

Aiming to obtain the optimal strategy for the MU, a budgeted energy offloading strategy is proposed based on the pricing strategy of the server. $U_{user}^{i}$ consists of two objectives: maximizing the residual energy $E_r^{i}$ and minimizing the expenditure $W_i$. For the former, according to the amount of harvested energy $E_h^{i}$, the task is executed locally or offloaded, and the optimal amount of offloading data $q_i^{opt}$ is obtained; see Lemma 2, Lemma 3, and Lemma 4. As a consequence, Eq. (19) is converted into the following optimization problem.

Problem 1

Optimal Offloading Decision

$U_{user}^{i} = \min W_i = \min \frac{h q_i}{F_t} \lg(F_i + d).$  (25)
s.t. $0 \leq q_i \leq l,$  (25a)
$0 \leq F_i \leq F_t,$  (25b)
$0 \leq t_e \leq t_{req}^{i}.$  (25c)

According to Eq. (25), the objective is an increasing function of $q_i$, which implies offloading as little data as possible. Likewise, by Lemma 1, $W_i$ increases with $F_i$, which indicates selecting the minimum computing resources. However, requesting fewer computation resources leads to a longer computing time, and the task must still be completed within the maximum delay tolerance. MUs therefore prefer to offload the smallest amount of data and request the least computing resources. However, since each MU tends to use fewer server resources, tasks take longer to complete, which slows task turnover and reduces the server's profit; hence the server does not allow every MU to obtain the minimum computing resources, so as to maximize its own profit.

To solve the optimal offloading decision problem (i.e., Problem 1), we propose Algorithm 1 to determine whether a user is eligible to offload. According to the harvested energy, and considering its maximum task delay and cost factors, the MU decides whether to execute locally or to partially offload; the time complexity is $O(n)$.

Lemma 2

We define $l_c$ as the critical amount of data that determines whether to offload.

Proof 2

When the harvested energy is just enough to process all the data locally, we define the data volume of the task as $l_c$. At this point, the following formulas are obtained.

$E_h = E_{loc},$  (26)
$A H \eta k_a = k\, l_c h\, F_{loc}^{2},$  (27)
$l_c = \frac{A H \eta k_a}{k h F_{loc}^{2}}.$  (28)

In the meantime, we set $y$ as a binary variable: when $y$ equals 1, the task is offloaded; otherwise, it is processed locally. Then, we obtain the following expression.

$y = \begin{cases} 1, & l_i \geq l_c; \\ 0, & l_i < l_c. \end{cases}$  (29)
Lemma 3

When all the energy collected by the MU is used to offload data, the amount of data offloaded by the MU reaches its maximum $l_m$.

Proof 3

The maximum amount of data is processed when all the energy is used for offloading [34, 35, 36]. At this point, we obtain the following expressions.

$E_h = E_u + E_d,$  (30)
$A H \eta k_a = P_u t_{up} + P_d t_{down},$  (31)
$A H \eta k_a = P_u \frac{l_m}{R_u} + P_d \frac{r l_m}{R_d},$  (32)
$l_m = \frac{A H \eta k_a}{B}.$  (33)

where $B = \frac{P_u}{R_u} + \frac{r P_d}{R_d}$.

Lemma 4

We define the balanced data $q_i^{opt}$ as the optimal amount of offloaded data at which the harvested energy is just enough to meet the energy expenditure in the offloading case.

Proof 4

When $l \geq l_c$, the energy consumed by the MU to offload the task includes the energy consumed by executing the task locally, $E_{loc}$, the energy consumed by uploading the task, $E_u$, and the energy consumed by downloading the result, $E_d$. Combining Eq. (1), Eq. (4), Eq. (5), and Eq. (7), we obtain the following.

$E_h \geq E_{loc} + E_u + E_d,$  (34)
$A H \eta k_a \geq k\,(l - y q_i^{opt})\,h\,F_{loc}^{2} + y (P_u t_{up} + P_d t_{down}),$  (35)
$l \leq y q_o + \frac{A H \eta k_a - y (P_u t_{up} + P_d t_{down})}{k h F_{loc}^{2}},$  (36)
$l \leq y q_o + \frac{A H \eta k_a - y q_i^{opt} \left(\frac{P_u}{R_u} + \frac{P_d r}{R_d}\right)}{k h F_{loc}^{2}}.$  (37)
s.t. $l - l_c \leq q_o^{opt} \leq l_m.$  (37a)
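To make the decision rule of Algorithm 1 concrete, the following Python sketch computes the thresholds of Lemmas 2-4 and classifies a task accordingly. It is a minimal sketch: the closed-form solver for q_o^opt simply rearranges Eq. (35) for the y = 1 case, and every numeric value is an illustrative assumption.

def l_critical(a, h_rad, eta, k_a, k, h, f_loc):
    """Lemma 2, Eq. (28): data volume the harvested energy can process locally."""
    return a * h_rad * eta * k_a / (k * h * f_loc ** 2)

def l_max(a, h_rad, eta, k_a, p_u, r_u, p_d, r_d, r):
    """Lemma 3, Eq. (33): maximum offloadable data if all energy goes to transmission."""
    b = p_u / r_u + r * p_d / r_d
    return a * h_rad * eta * k_a / b

def q_opt(l, k, h, f_loc, p_u, r_u, p_d, r_d, r, a, h_rad, eta, k_a):
    """Lemma 4: offload amount at which harvesting exactly covers consumption (y = 1).
    Solves A*H*eta*k_a = k*(l - q)*h*F_loc^2 + q*B for q."""
    b = p_u / r_u + r * p_d / r_d
    e_h = a * h_rad * eta * k_a
    return (e_h - k * l * h * f_loc ** 2) / (b - k * h * f_loc ** 2)

def decide(l, l_c, q_o, l_m):
    """Decision branches of Algorithm 1 (simplified)."""
    if 0 < l < l_c:
        return "compute locally"
    if l_c <= l < q_o:
        return "offload partially (delay via Eq. (20))"
    if q_o <= l <= l_m:
        return "offload partially (delay via Eq. (23))"
    return "drop"

# Example with assumed parameters.
env = dict(a=0.1, h_rad=500.0, eta=0.2, k_a=0.8)
k, h, f_loc = 1e-27, 1000, 1e9
lc = l_critical(k=k, h=h, f_loc=f_loc, **env)
lm = l_max(p_u=0.1, r_u=1e6, p_d=1.0, r_d=1e6, r=0.2, **env)
l = 1.6e7
qo = q_opt(l, k, h, f_loc, 0.1, 1e6, 1.0, 1e6, 0.2, **env)
print(lc, qo, lm, decide(l, lc, qo, lm))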

IV-B The optimal price of the edge server

The utility of the edge server, $U_{server}$, equals the payments of the users minus the penalty function $P$ for discarded tasks, which is expressed by the following equations.

$U_{server} = \sum_{i=1}^{M} W_i - P = \frac{h}{F_t} \lg\left(\prod_{i=1}^{M} (F_i + d)^{q_i}\right) - \gamma \sum_{i=0}^{M} W_i.$  (38)
s.t. $\sum_{i=1}^{M} F_i \leq F_t.$  (38a)

An appropriate bidding function should be chosen by the server to guarantee that its utility is non-negative. Here, the bidding function is taken as the logarithmic function $f(x) = \lg(F_i + d)$ [8, 1], whose input is regarded as the MU's usage rate at the server. Since the bid function must be non-negative, and should be 0 when $F_i = 0$ (i.e., when the MU is not using server resources), we set $d = 1$; $F_i$ is the optimal offloading decision of the MU. When $F_i$ is large, the profit per service period is correspondingly large, but according to the game rule, MUs are unwilling to request a large $F_i$; moreover, when $F_i$ is large, fewer users can be served, so the penalty function increases and the server utility falls. Therefore, within a range of $F_i$ acceptable to the users, maximizing the utilization of server resources simultaneously maximizes server profit. In this case, the optimization problem of maximizing $U_{server}$ based on the optimal policy of the MUs is expressed as follows.

Problem 2

Decision on price bidding function

$U_{server} = \max\left(\sum_{i=1}^{M} W_i - P\right).$  (39)
s.t. $\sum_{i=1}^{M} F_i = F_t.$  (39a)

To fully utilize the server resources and improve the server's revenue, we add the restriction that, when idle resources exist, each MU must increase its $F_i$ beyond the original allocation so that the server resources are fully used. Each MU takes up the extra server resources in proportion to its share of the offloaded data. As MUs occupy more server resources, they pay more, which improves the server's revenue. The justification for this decision is given in Appendix A.

Input: R_u, R_d, F_loc, X, Y
Output: U_server, U_user
Check the Y;
if Y == null then
    Check the X;
    if (X == null and X_s == null) or F_t <= ε then
        the server does not provide service;
    end if
    if X == null and X_s != null then
        if F_t > 0 then
            for X.u; X != null; X = X.next do
                l_tot = l_tot + X.l;
            end for
            X_s.F = X_s.F + F_t * (X_s.l / l_tot);
            calculate the actual time t_fact according to Eq. (13);
        else
            calculate U_server according to Eq. (38);
            calculate U_user according to Eq. (25);
        end if
    else
        if F_t > X.F then
            X.u is served;
            F_t = F_t - X.F;
        else
            if t_req > X.t then
                Y = X;
                Y = Y.next;
            else
                X.u is dropped;
                calculate P according to Eq. (24);
            end if
            X = X.next;
        end if
    end if
else
    while F_t >= Y.F do
        Y.u is served;
        F_t = F_t - Y.F;
        Y = Y.next;
    end while
end if
Algorithm 2: Differential pricing. (Here ε is a critical value: when F_t <= ε, the server stops admitting the next MU and only serves the MUs already at the server. X_s denotes the array of MUs to be served.)

The joint optimization of Problem 1 and Problem 2 is solved by the proposed Algorithm 2, which determines whether an offloading MU is eligible to be served by the server and calculates the utilities of both the server and the users. At the beginning of each time slot, the server first iterates over the tasks in Class 2. If the server can meet the computing resource requirement of a Class 2 task, it processes the task and gives an optimal initial payment according to the amount of offloaded data and the required computing resources; otherwise, the task continues to wait. After iterating over the tasks in Class 2, the server iterates over the tasks in Class 1. If the server cannot meet the computing resource requirement of a Class 1 task, the task is forwarded to Class 2 to wait. Since the server iterates over Class 1 and Class 2, this process takes $O(2n)$ time. For the MUs already at the server, if the server has any remaining computing resources, we adopt a redistribution strategy: the server iterates over the array $X_s$ (the array of MUs to be served), reallocates computing resources, and recalculates the execution delay, with time complexity $O(n)$. Therefore, the overall time complexity is $O(3n)$.
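A minimal Python sketch of the surplus-redistribution step in Algorithm 2 follows; the record type ServedMU and all numeric values are hypothetical names and assumptions used only for illustration.

from dataclasses import dataclass

@dataclass
class ServedMU:                  # hypothetical record for an admitted MU
    l: float                     # offloaded data (bits)
    f: float                     # initially allocated server capacity (cycles/slot)

def redistribute(served, f_surplus, h):
    """Share surplus capacity in proportion to each MU's offloaded data,
    then recompute the actual processing time via Eq. (13)."""
    l_tot = sum(mu.l for mu in served)
    times = []
    for mu in served:
        mu.f += f_surplus * (mu.l / l_tot)   # proportional share of the surplus
        times.append(h * mu.l / mu.f)        # Eq. (13): t_p = h * q / F
    return times

served = [ServedMU(1e6, 1e9), ServedMU(3e6, 2e9)]
print(redistribute(served, 1e9, 1000))       # delays shrink as F_i grows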

V Performance evaluation

To validate the effectiveness of the proposed DDPS scheme, we compare it with four existing schemes: 1) uniform pricing, where the server picks an optimal price from a set of prices and charges all MUs that price [25]; 2) differentiated pricing, where the unit price set by the server is inversely proportional to the user's local computing capability [25]; 3) linear pricing, where the unit price set by the server is linear in the computing resources requested by the user [1]; and 4) nonlinear pricing, where the unit price set by the server is a quadratic function of the computing resources requested by the MU [26].

  • Uniform pricing [25]: The edge cloud uniformly sets and broadcasts a price $\mu$ to all users, denoted as $\mu = \mu_1 = \dots = \mu_K$. Each user determines its optimal offloading strategy by solving the corresponding optimization problem for the given uniform price $\mu$. Meanwhile, the edge cloud determines its optimal price $\mu^{*}$ based on each MU's offloading decision. Subsequently, the edge cloud sequentially announces the prices $\{1/F_{loc}^{k}\}_{k=1}^{K}$ to MUs in descending order until the computing power constraint is satisfied, thereby concluding the price negotiation process.

  • Differentiated pricing [25]: The equation $\mu^{*}_{k} = 1/F_{loc}^{k}$ implies that the optimal price for user $k$ is the reciprocal of its local computing capability $F_{loc}^{k}$. User $k$ offloads $m_k$ bits of data if its $F_{loc}^{k}$ is less than or equal to the threshold $1/\mu^{*}_{k}$; otherwise, it opts for local computation. Consequently, the optimal price $\mu^{*}_{k}$ for user $k$ depends on its $F_{loc}^{k}$, with a higher price assigned when $F_{loc}^{k}$ is lower.

  • Linear pricing [1]: The price function is defined as a linear function $f(x) = ax + b$, where $x$ represents the proportion of server computing resources utilized by the user. Determining the values of $a$ and $b$ constitutes the server's decision. Through such differentiated pricing, the server can set an appropriate unit price based on the user's utilization of computing resources, thereby maximizing its utility.

  • Nonlinear pricing [26]: The price function is defined as $P(x) = ax^{2} + bx$, where $a$ and $b$ are non-negative parameters and $x$ represents the computation capability used by the MU. The authors employ this quadratic function to approximate the price function $P(\cdot)$ while satisfying three properties: 1) when no computing service is provided, $P(0)$ equals zero; 2) $P(\cdot)$ is monotonic; 3) $P(\cdot)$ is convex.

In summary, uniform pricing and differentiated pricing depend solely on the local computing power, whereas linear pricing and nonlinear pricing are contingent on the server computing capability used by the MUs.
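For reference, the sketch below collects the unit-price rules of the four baselines and of DDPS in one place, under the assumption d = 1; the coefficients a and b are placeholders, since the compared papers tune them as decision variables.

import math

def uniform_price(mu_star):
    """Uniform pricing [25]: one broadcast price for every MU."""
    return mu_star

def differentiated_price(f_loc):
    """Differentiated pricing [25]: reciprocal of the MU's local CPU capability."""
    return 1.0 / f_loc

def linear_price(x, a=1.0, b=0.1):
    """Linear pricing [1]: f(x) = a*x + b, x = share of server resources used."""
    return a * x + b

def nonlinear_price(x, a=1.0, b=0.1):
    """Nonlinear pricing [26]: P(x) = a*x^2 + b*x, convex with P(0) = 0."""
    return a * x ** 2 + b * x

def ddps_price(f_i, f_t, d=1.0):
    """DDPS, Eq. (14): w_i = (F_i / F_t) * lg(F_i + d)."""
    return (f_i / f_t) * math.log10(f_i + d)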

Figure 3: Effect of edge server computing capacity on average latency.
Figure 4: Effect of edge server computing capacity on $U_{server}$.

V-A Simulation Setup

For the simulation setup, we assume that the $\lambda$ values of the MUs are 0.1, 0.2, and 0.3 [1]. To specify the services provided by MEC servers, we consider face recognition applications that can be executed in the edge cloud. For the delay parameters, we rely on existing works for representative values. Specifically, the transmission delay has a very large variance, with a mean assumed to be 100 to 300 ms [37]; the execution delay is set to 500 ms [38]. The local CPU frequency $F_{loc}$ of each MU $i$ is uniformly selected from the set {0.1, 0.2, ..., 1} GHz, and the data size of MU $i$ is uniformly distributed with $l_i \in [100, 500]$ KB. Unless otherwise noted, the remaining parameters are defined in Table II [25].

TABLE II: DEFAULT PARAMETER SETTINGS
Parameter  $P_u$  $P_d$  $F_t$  $r$  $\lambda$
Value  0.1 W  1 W  $6\times10^{9}$ cycles/slot  0.2  0.3

V-B Impact of edge server computation capacity on average execution latency

Fig. 3 exhibits the impact of the edge server computation capacity on average execution latency. The computation capacity is progressively augmented in increments of $1\times10^{9}$ cycles/slot, up to a maximum of $6\times10^{9}$ cycles/slot.

As shown in Fig. 3, as the computational capacity of the edge server grows, the average latency diminishes across all pricing schemes. When the computing capacity is low, DDPS exhibits a relatively high average delay compared with the other schemes except uniform pricing. The reason is that, according to Algorithm 1, DDPS selects the smallest $F_i$ that satisfies the maximum delay to reach the optimal offloading decision, which leads to a long average delay. As server computing resources increase, Algorithm 2 first allocates computing resources to the MUs and, if any computing resources remain, reallocates them to delay-sensitive MUs, which accelerates task execution.

In addition, from the pricing perspective, when the server has few computing resources, each MU obtains few resources. Compared with the linear and nonlinear pricing schemes, the $\lg(\cdot)$ pricing of DDPS is then higher, which also suppresses MUs from requesting more computing resources. Similarly, as computing resources increase, the $\lg(\cdot)$ pricing of DDPS becomes relatively lower, which encourages MUs to request more computing resources and accordingly reduce their latency. Under the uniform pricing scheme, the price is the same no matter how many computing resources an MU uses, which is unfair to MUs that need only a few resources; MUs then execute more tasks locally, resulting in the highest latency, which corroborates the original motivation for MEC.

V-C Impact of edge server computation capacity on $U_{server}$

Fig. 4 exhibits the impact of the edge server computation capacity on $U_{server}$. The computation capacity is varied as in Fig. 3.

As shown in Fig. 4, as server computing resources increase, the revenue of all pricing schemes increases, and the DDPS scheme attains the highest revenue. The utility function of the edge server ($U_{server}$) is used as a proxy for revenue to verify the effectiveness of the proposed scheme, and we run the simulations for 1 minute. The reasons are twofold. On the one hand, DDPS lets each MU select the least server computing resources under its maximum latency, which means the server can serve more MUs under the same conditions. On the other hand, we implement a resource redistribution strategy so that the server resources are fully utilized. Existing pricing schemes (e.g., uniform pricing and differentiated pricing [25]) mainly focus on optimizing the cost per CPU cycle; consequently, when the offloaded data require the same number of cycles, each MU strives to occupy more server computing resources to reduce execution time, which leads to sub-optimal utilization of server computing resources. Therefore, the proposed DDPS scheme gains more revenue than the other existing schemes.

Figure 5: Effect of different $\lambda$ on average latency.

V-D Impact of $\lambda$ on average execution latency

Fig. 5 exhibits the impact of $\lambda$ on the average execution latency, with $\lambda$ set to 0.1, 0.2, and 0.3. As observed from Fig. 5, an uptick in $\lambda$ leads to a concomitant increase in the average latency across all pricing schemes, and DDPS outperforms the other schemes in terms of average execution latency. This is because the redistribution strategy of DDPS makes full use of server resources; that is, each MU can obtain more computing resources to complete task execution quickly. In addition, compared with linear and nonlinear pricing, our pricing scheme's lower price is also conducive to MU offloading, thereby reducing the delay. Its superiority over the uniform pricing scheme is proved in Appendix B.

V-E Impact of $\lambda$ on $U_{server}$

Fig. 6 exhibits the impact of $\lambda$ on $U_{server}$, with $\lambda$ set as in Fig. 5. The results presented in Fig. 6 show that DDPS stably generates the highest revenue as $\lambda$ increases. Compared with the existing schemes, besides the higher income brought by the redistribution strategy, DDPS also attracts more MUs through prices lower than those of linear and nonlinear pricing. In addition, DDPS initially allocates the least amount of computing resources to each MU, which also allows more MUs to be served. Uniform pricing yields low revenue due to its lack of pricing fairness and the absence of a revenue-maximization strategy.

Figure 6: Effect of different $\lambda$ on $U_{server}$.

V-F Impact of $\lambda$ on ratio of service

Fig. 7 illustrates the average ratio of service (RoS) of the MUs. We define the RoS per MU as a binary value: 1 if the MU's execution delay requirement is satisfied, and 0 otherwise [1]. The figure shows that as $\lambda$ increases, the RoS of the proposed scheme remains relatively stable, whereas the others exhibit a declining trend. This is because Algorithm 1 of the DDPS scheme allocates the least computing resources under the premise of each MU's maximum delay, avoiding the situation where some MUs cannot be served. Therefore, compared with the strategy in which MUs freely occupy server resources in other schemes (i.e., uniform pricing and differentiated pricing), DDPS serves more MUs. Notably, the reduction is particularly large for the uniform and differentiated pricing schemes as $\lambda$ increases: for both schemes, the rapid decrease is attributed to the fact that as the number of MUs increases, they are more likely to compete for more computing resources, which results in serving fewer MUs. For the linear and nonlinear pricing schemes, however, the price constraints prevent MUs from excessive competition for computing resources, so their RoS is relatively stable.

In addition, the reason the RoS value is not 1 is that it includes non-offloading cases: some MUs cannot satisfy their offloading requirements due to situations such as insufficient server resources or $q_i > l_m$.

Figure 7: Effect of different $\lambda$ on RoS.

VI Conclusion and future work

In this paper, we propose the DDPS pricing scheme as part of a task offloading scenario designed for sensor applications in remote areas. DDPS differentiates MUs based on their usage of server computing resources and the amount of offloaded data, leading to differential unit prices. Under the proposed DDPS scheme, the optimal amount of data to offload is first determined. After that, the Stackelberg game between the MUs and the server is established considering the computing capacity requirements of the MUs; the goal of the game is to fully utilize the computing resources of the server and maximize the server's revenue. Extensive simulation results demonstrate that the DDPS scheme effectively improves the server revenue and the MUs' RoS, and performs well in terms of execution delay. Through the proposed DDPS scheme, MEC service providers can therefore reduce instances where MUs overuse server resources and serve more MUs with limited resources, ultimately improving operational efficiency. In future work, we will investigate an efficient resource reallocation-based pricing scheme with budget constraints to enhance task processing efficiency.

Appendix A

Our pricing function is the $\lg(\cdot)$ function. Assume that the same MU increases its usage of computing resources from $10^{n}$ to $10^{m}$; the bid then grows from $n$ to $m$, i.e., by a factor of $m/n$, whereas the computing resources grow by $10^{m}/10^{n} = 10^{m-n}$ times. In other words, the MU's usage of server computing resources increases significantly while the unit price changes only slightly.
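A quick numeric illustration of this scaling, assuming d = 1 in Eq. (16):

import math

# Resource usage grows 1000x (10^6 -> 10^9), but the lg-based bid
# only grows from about 6 to about 9, i.e., by a factor of 1.5.
for f in (1e6, 1e9):
    print(f, math.log10(f + 1))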

Appendix B

The uniform pricing scheme exhibits higher latency than the proposed scheme. In the uniform pricing model, MUs are allocated computation resources corresponding to the amount of data they offload, which implies that the average execution latency is a fixed value. We assume that $\mathcal{N}$ MUs offload the same amount of data, denoted $\mathcal{L}$, so the $\mathcal{N}$ MUs share an equal allocation of computation resources $\mathcal{F}$. The average computation latency is given by the following expression.

$\overline{t_1} = \frac{h \mathcal{L}}{\mathcal{F}/\mathcal{N}}.$  (40)

In the FCFS model we propose, although a time slot can serve multiple MUs, we can consider $\mathcal{K}$ MUs as one entirety, forming a new FCFS queue in which each subsequent entirety also contains $\mathcal{K}$ MUs. The computation latency of the first entirety is therefore $\frac{h\mathcal{L}}{\mathcal{F}/\mathcal{K}}$ (here we assume that $\mathcal{N}/\mathcal{K}$ is an integer; if it is not, each MU in the last entirety receives a larger $F_i$, resulting in lower computation latency, so we only discuss the integer case), the latency of the second entirety is $\frac{2h\mathcal{L}}{\mathcal{F}/\mathcal{K}}$, and so on; the latency of the $i$th entirety is $\frac{ih\mathcal{L}}{\mathcal{F}/\mathcal{K}}$. Consequently, the average computation latency is as follows.

$\overline{t_2} = \left(\sum_{i=1}^{\mathcal{N}/\mathcal{K}} i\,\frac{h\mathcal{L}}{\mathcal{F}/\mathcal{K}}\right) \Big/ (\mathcal{N}/\mathcal{K}) = \frac{h(\mathcal{K}+\mathcal{N})\mathcal{L}}{2\mathcal{F}}.$  (41)

If and only if $\mathcal{N} = \mathcal{K}$ does $\overline{t_1} = \overline{t_2}$ hold; otherwise, $\overline{t_1} > \overline{t_2}$, i.e., the average execution delay of the proposed scheme is less than that of uniform pricing. This completes the proof.
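A short numeric check of Eqs. (40) and (41) under assumed values:

def t1_bar(h, l, f, n):
    """Eq. (40): uniform pricing, all N MUs share F equally."""
    return h * l / (f / n)

def t2_bar(h, l, f, n, k):
    """Eq. (41): FCFS service in groups of K MUs, averaged over N/K groups."""
    return h * (k + n) * l / (2 * f)

h, l, f = 1000, 1e6, 6e9
print(t1_bar(h, l, f, 8), t2_bar(h, l, f, 8, 2))   # t1 > t2 whenever K < N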

References

  • [1] H. Seo, H. Oh, J. K. Choi, and S. Park, “Differential Pricing-Based Task Offloading for Delay-Sensitive IoT Applications in Mobile Edge Computing System,” IEEE Internet Things J., vol. 9, no. 19, pp. 19116-19131, Oct. 2022.
  • [2] Z. Chang, L. Liu, X. Guo, and Q. Sheng, “Dynamic Resource Allocation and Computation Offloading for IoT Fog Computing System,” IEEE Trans. Ind. Inf., vol. 17, no. 5, pp. 3348-3357, May 2021.
  • [3] Y. Chen, Z. Chang, G. Min, S. Mao, and T. Hämäläinen, “Joint Optimization of Sensing and Computation for Status Update in Mobile Edge Computing Systems,” IEEE Trans. Wireless Commun., vol. 22, no. 11, pp. 8230-8243, Nov. 2023.
  • [4] X. Chen, W. Li, S. Lu, Z. Zhou, and X. Fu, “Efficient resource allocation for on-demand mobile-edge cloud computing,” IEEE Trans. Veh. Technol., vol. 67, no. 9, pp. 8769–8780, Sep. 2018.
  • [5] J. Yan, S. Bi, L. Duan, and Y.-J. A. Zhang, “Pricing-driven service caching and task offloading in mobile edge computing,” IEEE Trans. Wireless Commun., vol. 20, no. 7, pp. 4495–4512, Jul. 2021.
  • [6] C. Yi, J. Cai, and Z. Su, “A multi-user mobile computation offloading and transmission scheduling mechanism for delay-sensitive applications,” IEEE Trans. Mobile Comput., vol. 19, no. 1, pp. 29–43, Jan. 2020.
  • [7] M. Tao, K. Ota, M. Dong, and H. Yuan, “Stackelberg Game-Based Pricing and Offloading in Mobile Edge Computing,” IEEE Wireless Commun. Lett., vol. 11, no. 5, pp. 883-887, May 2022.
  • [8] Y. Chen, Z. Li, B. Yang, K. Nai, and K. Li, “A Stackelberg game approach to multiple resources allocation and pricing in mobile edge computing,” Future Gener Comput Syst, vol. 108, pp. 273-287, 2020.
  • [9] Y. Li, L. Li, Y. Xia, D. Zhang, and Y. Wang, “Multi-Leader Single-Follower Stackelberg Game Task Offloading and Resource Allocation Based on Selection Optimization in Internet of Vehicles,” IEEE Access, vol. 11, pp. 64430-64441, 2023.
  • [10] W. Qin, C. Zhang, H. Yao, T. Mai, S. Huang, D. Guo, and R. Gao, “Stackelberg Game-Based Offloading Strategy for Digital Twin in Internet of Vehicles,” in Proc. Int. Wirel. Commun. Mob. Comput. (IWCMC), 2023, pp. 1365-1370.
  • [11] M. Wang, L. Zhang, P. Gao, X. Yang, K. Wang, and K. Yang, “Stackelberg-Game-Based Intelligent Offloading Incentive Mechanism for a Multi-UAV-Assisted Mobile-Edge Computing System,” IEEE Internet Things J., vol. 10, no. 17, pp. 15679-15689, Sept. 2023.
  • [12] G. Mitsis, E. E. Tsiropoulou, and S. Papavassiliou, “Price and Risk Awareness for Data Offloading Decision-Making in Edge Computing Systems,” IEEE Syst. J., vol. 16, no. 4, pp. 6546-6557, Dec. 2022.
  • [13] Q. Li, H. Yao, T. Mai, C. Jiang, and Y. Zhang, “Reinforcement-Learning- and Belief-Learning-Based Double Auction Mechanism for Edge Computing Resource Allocation,” IEEE Internet Things J., vol. 7, no. 7, pp. 5976-5985, Jul. 2020.
  • [14] T. H. Hai and P. Nguyen, “A Pricing Model for Sharing Cloudlets in Mobile Cloud Computing,” in Proc. Int. Conf. Adv. Comput. Appl. (ACOMP), 2017, pp. 149-153.
  • [15] Z. Shen, J. Zhang, and H. Tan, “A Truthful FPTAS Auction for the Edge-Cloud Pricing Problem,” in Proc. 6th Int. Conf. Big Data Comput. Commun. (BIGCOM), China, 2020, pp. 140-144.
  • [16] J. S. Ng, W. Y. B. Lim, S. Garg, Z. Xiong, D. Niyato, M. Guizani, and C. Leung, “Collaborative Coded Computation Offloading: An All-pay Auction Approach,” in Proc. IEEE Int. Conf. Commun., 2021, pp. 1-6.
  • [17] Q. Wang, S. Guo, J. Liu, C. Pan, and L. Yang, “Profit Maximization Incentive Mechanism for Resource Providers in Mobile Edge Computing,” IEEE Trans. Serv. Comput., vol. 15, no. 1, pp. 138-149, Jan.-Feb. 2022.
  • [18] W. Sun, J. Liu, Y. Yue, and P. Wang, “Joint Resource Allocation and Incentive Design for Blockchain-Based Mobile Edge Computing,” IEEE Trans. Wireless Commun., vol. 19, no. 9, pp. 6050-6064, Sept. 2020.
  • [19] B. Wu, X. Chen, Y. Chen, and Y. Lu, “A Truthful Auction Mechanism for Resource Allocation in Mobile Edge Computing,” in Proc. IEEE 22nd Int. Symp. World Wirel., Mob. Multimed. Networks (WoWMoM), 2021, pp. 21-30.
  • [20] L. Ma, X. Wang, X. Wang, L. Wang, Y. Shi, and M. Huang, “TCDA: Truthful Combinatorial Double Auctions for Mobile Edge Computing in Industrial Internet of Things,” IEEE Trans. Mobile Comput., vol. 21, no. 11, pp. 4125-4138, Nov. 2022.
  • [21] R. Wang, C. Zang, P. He, Y. Cui, and D. Wu, “Auction Pricing-Based Task Offloading Strategy for Cooperative Edge Computing,” in Proc. IEEE Glob. Commun. Conf. (GLOBECOM), 2021, pp. 1-6.
  • [22] Y. Su, W. Fan, Y. Liu, and F. Wu, “A Truthful Combinatorial Auction Mechanism Towards Mobile Edge Computing in Industrial Internet of Things,” IEEE Trans. Cloud Comput., vol. 11, no. 2, pp. 1678-1691, Apr.-Jun. 2023.
  • [23] D. Han, W. Chen, and Y. Fang, “A Dynamic Pricing Strategy for Vehicle Assisted Mobile Edge Computing Systems,” IEEE Wireless Commun. Lett., vol. 8, no. 2, pp. 420-423, Apr. 2019.
  • [24] Z. Chang, C. Wang, and H. Wei, “Flat-Rate Pricing and Truthful Offloading Mechanism in Multi-Layer Edge Computing,” IEEE Trans. Wireless Commun., vol. 20, no. 9, pp. 6107-6121, Sept. 2021.
  • [25] M. Liu and Y. Liu, “Price-Based Distributed Offloading for Mobile-Edge Computing With Computation Capacity Constraints,” IEEE Wireless Commun. Lett., vol. 7, no. 3, pp. 420-423, Jun. 2018.
  • [26] B. Liang, R. Fan, H. Hu, Y. Zhang, N. Zhang, and A. Anpalagan, “Nonlinear Pricing Based Distributed Offloading in Multi-User Mobile Edge Computing,” IEEE Trans. Veh. Technol., vol. 70, no. 1, pp. 1077-1082, Jan. 2021.
  • [27] S. Kim, S. Park, M. Chen, and C. Youn, “An Optimal Pricing Scheme for the Energy-Efficient Mobile Edge Computation Offloading With OFDMA,” IEEE Commun. Lett., vol. 22, no. 9, pp. 1922-1925, Sept. 2018.
  • [28] L. Li, M. Siew, Z. Chen, and T. Q. S. Quek, “Optimal Pricing for Job Offloading in the MEC System With Two Priority Classes,” IEEE Trans. Veh. Technol., vol. 70, no. 8, pp. 8080-8091, Aug. 2021.
  • [29] Q. Yao and L. Tang, “An Approximation Algorithm for Pricing and Request Matching in Mobile ad hoc MEC System,” in Proc. Comput., Commun. and IoT Applications (ComComAp), 2019, pp. 432-437.
  • [30] M. Wang, L. Zhang, P. Gao, X. Yang, K. Wang, and K. Yang, “Stackelberg-Game-Based Intelligent Offloading Incentive Mechanism for a Multi-UAV-Assisted Mobile-Edge Computing System,” IEEE Internet Things J., vol. 10, no. 17, pp. 15679-15689, Sept. 2023.
  • [31] R. Wang, C. Zang, P. He, Y. Cui, and D. Wu, “Auction Pricing-Based Task Offloading Strategy for Cooperative Edge Computing,” in Proc. IEEE Glob. Commun. Conf. (GLOBECOM), Dec. 2021, pp. 1-6.
  • [32] Y. Wang, M. Sheng, X. Wang, L. Wang, and J. Li, “Mobile-edge computing: Partial computation offloading using dynamic voltage scaling,” IEEE Trans. Commun., vol. 64, no. 10, pp. 4268–4282, Oct. 2016.
  • [33] L. Dong, S. Han, Y. Gao, and Z. Tan, “A Game-Theoretical Approach for Resource Allocation in Mobile Edge Computing,” in Proc. IEEE 20th Int. Conf. Commun. Technol. (ICCT), 2020, pp. 436-440.
  • [34] W. Zhang, G. Zhang, and S. Mao, “Joint Parallel Offloading and Load Balancing for Cooperative-MEC Systems With Delay Constraints,” IEEE Trans. Veh. Technol., vol. 71, no. 4, pp. 4249-4263, Apr. 2022.
  • [35] J. Chen, Z. Chang, X. Guo, R. Li, Z. Han, and T. Hämäläinen, “Resource Allocation and Computation Offloading for Multi-Access Edge Computing With Fronthaul and Backhaul Constraints,” IEEE Trans. Veh. Technol., vol. 70, no. 8, pp. 8037-8049, Aug. 2021.
  • [36] M. Guo, W. Wang, X. Huang, Y. Chen, L. Zhang, and L. Chen, “Lyapunov Based Partial Computation Offloading for Multiple Mobile Devices Enabled by Harvested Energy in MEC,” IEEE Internet Things J., vol. 9, no. 11, pp. 9025-9035, Jun. 2022.
  • [37] M. DeVirgilio, W. D. Pan, L. L. Joiner, and D. Wu, “Internet delay statistics: Measuring Internet feel using a dichotomous Hurst parameter,” in Proc. IEEE SoutheastCon, 2012, pp. 1-6.
  • [38] N. Powers, A. Alling, K. Osolinsky, T. Soyata, M. Zhu, H. Wang, H. Ba, W. Heinzelman, J. Shi, and M. Kwon, “The Cloudlet Accelerator: Bringing Mobile-Cloud Face Recognition into Real-Time,” in Proc. IEEE Globecom Workshops (GC Wkshps), 2015, pp. 1-7.
Hai Xue (Member, IEEE) received the B.S. degree from Konkuk University, Seoul, South Korea, in 2014, the M.S. degree from Hanyang University, Seoul, South Korea, in 2016, and the Ph.D. degree from Sungkyunkwan University, Suwon, South Korea, in 2020. He is currently an assistant professor at the School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology (USST), Shanghai, China. Prior to joining USST, he was a research professor with the Mobile Network and Communications Laboratory at Korea University, Seoul, South Korea, from 2020 to 2021. His research interests include edge computing/intelligence, SDN/NFV, machine learning, and swarm intelligence.
Yun Xia received the B.S. degree from Shanghai Ocean University, Shanghai, China, in 2022. He is currently working toward the master’s degree in computer science and technology at the University of Shanghai for Science and Technology, Shanghai, China. His research focuses on mobile edge computing and pricing strategies.
Neal N. Xiong (S’05, M’08, SM’12) is currently a Professor and the Computer Science Program Chair in the Department of Computer Science and Mathematics, Sul Ross State University, Alpine, TX 79830, USA. He received Ph.D. degrees from Wuhan University (2007, in sensor system engineering) and the Japan Advanced Institute of Science and Technology (2008, in dependable communication networks). Before joining Sul Ross State University, he worked for about 15 years at Georgia State University, Northeastern State University, and Colorado Technical University (about five years as a full professor). His research interests include cloud computing, security and dependability, parallel and distributed computing, networks, and optimization theory. Dr. Xiong has published over 250 IEEE journal papers and over 100 international conference papers. Some of his works were published in IEEE JSAC, IEEE and ACM transactions, the ACM SIGCOMM workshop, IEEE INFOCOM, ICDCS, and IPDPS. He has been a General Chair, Program Chair, Publicity Chair, Program Committee member, and Organizing Committee member of over 100 international conferences, and a reviewer for about 100 international journals, including IEEE JSAC, IEEE SMC (Part A/B/C), IEEE Transactions on Communications, IEEE Transactions on Mobile Computing, and IEEE Transactions on Parallel and Distributed Systems. He serves as Editor-in-Chief, Associate Editor, or editorial board member for over 10 international journals (including Associate Editor for IEEE Transactions on Systems, Man & Cybernetics: Systems, Associate Editor for IEEE Transactions on Network Science and Engineering, Associate Editor for Information Sciences, Editor-in-Chief for the Journal of Internet Technology (JIT), and Editor-in-Chief for the Journal of Parallel & Cloud Computing (PCC)), and as a guest editor for over 10 international journals, including Sensors Journal, WINET, and MONET. He received the Best Paper Award at the 10th IEEE International Conference on High Performance Computing and Communications (HPCC-08) and the Best Student Paper Award at the 28th North American Fuzzy Information Processing Society Annual Conference (NAFIPS 2009).
Di Zhang (S’13, M’17, SM’19) is currently an Associate Professor at Zhengzhou University. He received his Ph.D. degree from Waseda University. He was a Visiting Scholar at Korea University, a Senior Researcher at Seoul National University, and a Visiting Student at National Chung-Hsing University. He serves as an Area Editor of KSII Transactions on Internet and Information Systems and has served as a Guest Editor of IEEE Wireless Communications and IEEE Network. He received the First Prize Award for Science and Technology Progresses of Henan Province in 2023, the First Prize Award for Science and Technology Achievements from the Henan Department of Education in 2023, and the ITU Young Author Recognition in 2019. His research interests include wireless communications and networking, especially short-packet communications and their applications.
Songwen Pei (Senior Member, IEEE) received the B.S. degree from the School of Computer, National University of Defense Technology, Changsha, China, in 2003, and the Ph.D. degree from the School of Computer Science and Technology, Fudan University, Shanghai, China, in 2009. He is currently a full professor and the leading director of the CAPAL laboratory within the Computer Science and Engineering Department at the University of Shanghai for Science and Technology, and he has worked as a Guest Researcher at the Institute of Computing Technology, Chinese Academy of Sciences, since 2018. He was a Research Fellow at the University of California, Irvine, from 2013 to 2015, and a senior visiting scholar at Queensland University of Technology in 2017. His research interests include intelligent computation, deep learning, heterogeneous multi-core processors, cloud computing, big data, and fault-tolerant computation. He is a distinguished member of the CCF, an associate editor of FGCS and IEEE Access, and the chair of CCF YOCSEF Shanghai (2022-2023).