On the Regret of Online Edge Service Hosting
Abstract
We consider the problem of service hosting, where a service provider can dynamically rent edge resources via short-term contracts to ensure better quality of service to its customers. The service can also be partially hosted at the edge, in which case customers’ requests can be partially served at the edge. The total cost incurred by the system is modeled as a combination of the rent cost, the service cost incurred due to latency in serving customers, and the fetch cost incurred as a result of the bandwidth used to fetch the code/databases of the service from the cloud servers to host the service at the edge. In this paper, we compare multiple hosting policies using regret as a metric, defined as the difference between the cost incurred by the policy and that of the optimal policy over a given time horizon. In particular, we consider the Retro Renting (RR) and Follow The Perturbed Leader (FTPL) policies proposed in the literature and provide performance guarantees on the regret of these policies. We show that under i.i.d. stochastic arrivals, the RR policy has linear regret while the FTPL policy has constant regret. Next, we propose a variant of FTPL, namely Wait then FTPL (W-FTPL), which also has constant regret while demonstrating much better dependence on the fetch cost. We also show that under adversarial arrivals, the RR policy has linear regret while both FTPL and W-FTPL have regret that is order-optimal.
organization=Indian Institute of Technology Bombay, addressline=Powai, postcode=400076, city=Mumbai, country=India
1 Introduction
Software as a Service (SaaS) instances like online navigation platforms, Video-on-Demand services, etc., have stringent latency constraints in order to provide a good quality of experience to their customers. While most SaaSs use cloud resources, low latency necessitates the use of storage/computational resources at the edge, i.e., close to the end-user. A service is said to be hosted at the edge if the code and databases needed to serve user queries are stored on the edge servers and requests can be served at the edge. In this work, the service can also be partially hosted at the edge, in which case, customers’ requests can be partially served at the edge [1, 2]. For instance, consider a SaaS application which provides access to news articles on demand. A typical news article includes some text, some images and possibly some embedded videos. One way to partially host such a service is to store the text corresponding to the news articles at the edge and store the images/videos in the cloud. Each user request will then be served partially by the edge.
We consider the setting where third-party resources can be rented via short-term contracts to host the service, and the edge hosting status of the SaaS can be dynamically changed over time. If the service is not hosted at the edge, it can be fetched from the cloud servers by incurring a fetch cost. The performance of a hosting policy is a function of the rent cost, the fetch cost, and the quality of experience of the users. We refer to the algorithmic challenge of determining what fraction of the service to host at the edge over time as the service hosting problem. Note that we study the service hosting problem from the perspective of a specific SaaS provider which can rent third-party edge resources to improve their customers’ quality of experience by reducing the latency of their service.
Novel online service hosting policies with provable performance guarantees have been proposed in [3, 4]. The metric of interest in [3, 4] is the competitive ratio, defined as the ratio of the cost incurred by an online policy to the cost incurred by an optimal offline policy for the same request arrival sequence. Since the competitive ratio is multiplicative by definition, the absolute value of the difference in the cost incurred by an online policy and the optimal policy can be large even though the competitive ratio of the online policy is close to one. This motivates studying the performance of candidate policies in terms of regret, defined as the difference in the cost incurred by the policy and the optimal policy. Regret is a widely used metric in online learning [5], including recently for the caching problem [6, 7], which is closely related to the service hosting problem. In this work, one of our goals is to design hosting policies with provable guarantees in terms of the regret. Another key dimension in the performance guarantees of online policies is the assumption made on the request arrival process. Commonly studied settings include the stochastic setting and the adversarial setting, and we provide performance guarantees for both settings.
1.1 Our Contributions
We propose a variant of the FTPL policy called Wait then Follow the Perturbed Leader (W-FTPL). W-FTPL is a randomized policy that takes into account the fetch cost in its decision-making. More specifically, W-FTPL does not fetch the service for an initial wait period which depends on the request arrivals and is an increasing function of the fetch cost. Following the wait period, W-FTPL mimics the FTPL policy.
For i.i.d. stochastic request arrivals and the regret metric, we show that RR is sub-optimal while FTPL and W-FTPL are both order-optimal with respect to the time horizon. While the regret of FTPL can increase linearly with the fetch cost, the regret of W-FTPL increases at most polylogarithmically. The improved performance of W-FTPL over FTPL is a consequence of the fact that W-FTPL avoids most of the fetches made by FTPL in the initial time-slots and by the end of the wait period, its estimate of the arrival rate is accurate enough to avoid multiple fetches. Further, we characterize the competitive ratio of FTPL and W-FTPL for the setting where a finite number of partial hosting levels are allowed. We compare these results with the competitive ratio of RR when a single partial hosting level is allowed [2] to conclude that FTPL and W-FTPL outperform RR with respect to the competitive ratio.
For the adversarial setting, we first characterize a fundamental lower bound on the regret of any online hosting policy. In terms of regret, we then show that RR is strictly suboptimal, while FTPL and W-FTPL have order-optimal performance with respect to time. We show that the competitive ratios of FTPL and W-FTPL are upper bounded by a constant (with respect to time) for any finite number of partial hosting levels. This is a more general setting than the one considered in [2], where the competitive ratio of a variant of RR is characterized when only one partial hosting level is permitted.
1.2 Related work
Several emerging applications such as Augmented / Virtual Reality (AR/VR), online gaming, the Internet of Things (IoT), and autonomous driving have very stringent latency requirements. In addition, many of these applications are also compute-and-storage heavy. This necessitates the migration of some of the storage and computation requirements of these services from remote cloud servers to the edge. To enable such offloading, several edge computing architectures have been proposed in the literature and their performance has been analyzed extensively, see for example [8, 9, 10] and references therein.
The service hosting problem has received a lot of attention recently, and various settings have been studied. One approach has been to pose the design problem of where to host which service as a one-shot large-scale optimization problem, see for example [11, 12, 13]. Our work differs from this line of work in that we consider an online setting where we adaptively decide when (and what fraction of) service to host at any time, depending on the varying number of requests seen thus far.
There have been recent works [4, 14] which consider the online service hosting problem and design adaptive schemes whose performance is characterized in terms of the competitive ratio with respect to an oracle that knows the entire request sequence a priori. Our work differs from these in a couple of ways. First, the above-mentioned works study the problem from the perspective of an edge resource provider who has to decide which amongst a collection of services to host at each time on a storage-constrained edge server. On the other hand, similar to some other works [15, 3], we study the problem from the perspective of a service provider which needs to decide when to host its application at the edge so as to minimize the overall cost. Second, most of the works mentioned above do not consider partial hosting of the service, which is allowed in our framework. While [2] does study partial hosting and proposes a Retro-Renting (RR) policy that is shown to have a constant competitive ratio, it only allows one level of partial hosting of the service. In contrast, we consider a much more general setting where multiple partial service hosting levels are allowed, and we show that both FTPL and its variant W-FTPL achieve a constant competitive ratio.
While the works mentioned above study competitive ratio as a performance metric, another popular measure of performance for online algorithms is the regret which considers the difference in the cost incurred by an online policy and the optimal (static) policy. In particular, for the closely related caching problem, several policies inspired by the recent advances in online convex optimization have been proposed, including Online Gradient Ascent [6], Online Mirror Descent [16], and Follow the Perturbed Leader (FTPL) [17, 7]. In particular, FTPL has been shown to have order-optimal regret for the caching problem in the adversarial setting [17, 7]. In this work, we study regret for the RR and FTPL policies for the service hosting problem under both the stochastic and adversarial settings. Since RR is a deterministic policy, its regret performance in the adversarial setting is poor. While FTPL does achieve order-optimal regret (with respect to time), one limitation is that it makes hosting decisions agnostic of the fetch cost. As a result, in some cases, FTPL is prone to fetching and evicting the service multiple times in the initial time-slots when its estimate of the request arrival rate is noisy, thus leading to poor performance. This motivated our design and regret analysis of W-FTPL, a variant of FTPL which takes into account the fetch cost. One recent work which also considers regret as a performance metric is [18], which studied the problem of joint online service hosting and routing and proposed a two time-scale algorithm based on online gradient descent with provably order-optimal regret under adversarial requests.
2 System Setup
We consider a system consisting of a back-end server and an edge-server to serve customers of a service. The back-end server always hosts the service and can serve any requests that are routed to it. In this work, we allow the service to be partially hosted at the edge-server. When the service is partially hosted at the edge, requests are partially served at the edge and partially at the back-end server. Further, the fraction of service hosted at the edge can be changed over time. If the service is not hosted or partially hosted at the edge, parts of the service can be fetched from the back-end server to host at the edge.
Sequence of events in each time-slot: We consider a time-slotted system. In each time-slot, we first make the service hosting decision for that time-slot. Following this, requests may arrive and are served at the edge and/or the back-end server.
Request arrivals: We consider two types of arrival processes in this paper.
- Stochastic arrivals: In this case, request arrivals are i.i.d. across time-slots with mean .
- Adversarial arrivals: In this case, the request arrival sequence is generated by an oblivious adversary, i.e., the entire request sequence is assumed to be fixed a priori.
Let denote the number of request arrivals in time-slot . We consider bounded requests, specifically, , and let denote the cumulative requests observed by the end of time-slot . In both cases, we assume that there are at most arrivals per slot.
Costs: We model three types of costs.
- Rent cost (): The system incurs a cost of units per time-slot to host the entire service at the edge. For hosting fraction of service, the system incurs a cost of units per time-slot.
- Service cost (): This is the cost incurred to use the back-end server to serve (parts of) requests. If fraction of service is hosted at the edge, the system incurs a service cost of units per request, where is a decreasing function of . We fix , i.e., the service cost is one unit when the service is not hosted at the edge.
- Fetch cost (): The system incurs a cost of units for each fetch of the entire service from the back-end server to host on the edge-server. For fetching fraction of service, the system incurs a cost of units.
Let denote the edge hosting status of the service in time slot under policy . We consider the setting where and implies that fraction of the service is hosted at the edge-server in time-slot . Note that and in particular we consider , and for . It follows that implies that the entire service is hosted at the edge in time-slot , and implies that the service is not hosted at the edge in time-slot .
We make some assumptions about the various system parameters.
Assumption 1.
.
This assumption is motivated by the fact that if , it is strictly sub-optimal to host the entire service at the edge, irrespective of the number of arrivals in a time-slot.
Assumption 2.
For a fraction of service , , .
In [1, 2], it is shown that the benefits of partial hosting are limited to the setting where . Motivated by this, we consider the case where for all candidate values of .
Assumption 3.
For any fraction of service , ,
Assumption 4.
For stochastic arrivals, and for adversarial arrivals, .
This assumption eliminates degenerate cases where there are no request arrivals in the time-horizon of interest.
The total cost incurred in time-slot by policy , denoted by , is the sum of the rent, service, and fetch costs. It follows that,
(1)
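For concreteness, a plausible way to write this per-slot cost, using placeholder symbols (c for the full rent cost, s(·) for the per-request service cost with s(0)=1, M for the full fetch cost, ρ_t for the fraction hosted in slot t, and x_t for the number of requests in slot t), is the following sketch; the paper's own notation may differ.

```latex
C_t^{\pi} \;=\; \underbrace{c\,\rho^{\pi}_t}_{\text{rent}}
\;+\; \underbrace{s\!\left(\rho^{\pi}_t\right) x_t}_{\text{service}}
\;+\; \underbrace{M\left(\rho^{\pi}_t-\rho^{\pi}_{t-1}\right)^{+}}_{\text{fetch}},
\qquad (a)^{+}=\max\{a,0\}.
```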
Let denote the request arrival sequence and denote the cumulative cost incurred by policy in time-slots 1 to . It follows that,
Performance metrics: We consider two performance metrics, namely regret and competitive ratio.
For i.i.d. stochastic arrivals, the regret of a policy , denoted by , is defined as the difference in the total expected cost incurred by the policy and the oracle static hosting policy. The oracle static hosting policy makes a hosting decision at using the knowledge of the statistics of the request arrival process but not the entire sample-path. For ease of notation, we define , . It follows that the expected cost incurred by the optimal static hosting policy in time-slots 1 to is , and therefore,
(2)
where represents the expectation over the randomization in policy and the request arrival sequences .
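In the same placeholder notation, one way to express the stochastic regret in (2) is the sketch below, where μ denotes the mean number of arrivals per slot and the minimum is over static policies that host a fixed fraction throughout; the paper's exact expression may in addition account for the static policy's one-time fetch cost.

```latex
\mathcal{R}^{\pi}_s(T) \;=\;
\mathbb{E}\!\left[\sum_{t=1}^{T} C_t^{\pi}\right]
\;-\; T \min_{\rho \in \mathcal{P}} \big( c\,\rho + s(\rho)\,\mu \big).
```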
For i.i.d. stochastic arrivals, the competitive ratio of a policy , denoted by , is defined as the ratio of the expected total cost incurred by the policy and the expected total cost incurred by the optimal static hosting policy, as discussed above. It follows that,
For adversarial arrivals, the regret of a policy , denoted by , is defined as the worst case difference in the total expected cost incurred by the policy and the optimal static hosting policy. The expectation is taken over the randomization in policy . It follows that for any request sequence , the total cost incurred by the optimal static hosting policy in time-slots 1 to is , and therefore,
We have the following relation between the regret induced by a policy under i.i.d. stochastic arrivals and adversarial arrivals.
Lemma 1.
For any online policy and stochastic arrivals we have, .
The proof can be found in Section 7.
For adversarial arrivals, the competitive ratio of a policy , denoted by , is defined as the worst case ratio of the expected total cost incurred by the policy and the cost incurred by the offline optimal policy. The optimal offline policy can change the hosting status over time using the knowledge of the entire request process a priori. Let the cost incurred by the optimal offline hosting policy in time-slots 1 to be denoted by . It follows that,
Goal: The goal is to design online hosting policies with provable performance guarantees with respect to regret and competitive ratio.
Notations: We define some notations which will be used in the rest of this paper. Let , , and , where is the collection of all possible one hot vectors in and thus . The position of 1 in the one hot vector represents the level of service hosted at the edge in slot , that is . The total cost in slot as given in (1) can be rewritten using the above notations as follows,
where .
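As an illustration of this vectorized form, under the same placeholder notation the per-slot cost can be written as a linear function of the one-hot hosting vector plus a fetch term; this is a sketch, and the paper's exact vector notation may differ.

```latex
C_t^{\pi} \;=\; \big\langle \theta_t,\, y^{\pi}_t \big\rangle
\;+\; M \left( \rho^{\pi}_t - \rho^{\pi}_{t-1} \right)^{+},
\qquad
\theta_t \;=\; \big( c\,\rho + s(\rho)\, x_t \big)_{\rho \in \mathcal{P}},
```

where the one-hot vector indicates the hosting level chosen in slot t.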
For ease of notation, for stochastic arrivals we define , , , , , , .
We summarize important notations used in this paper in Table 1.
Symbol | Description
--- | ---
 | Rent cost for hosting complete service in a slot
 | Fetch cost
 | Number of hosting levels allowed
 | Time index
 | Fraction of service hosted in time slot under policy
 | Maximum number of arrivals in a slot
 | Fraction of service
 | Fraction of requests forwarded if fraction is hosted
 | Number of requests in time slot
 | Cumulative number of requests up to time slot
 | Time horizon
 | Cost incurred till under policy for request sequence
 | Regret under stochastic arrival setting
 | Competitive ratio under stochastic arrival setting
 | Regret under adversarial arrival setting
 | Competitive ratio under adversarial arrival setting
 | Learning rate
 | Hyperparameter in FTPL, W-FTPL policies
 , | Hyperparameters in W-FTPL policy
 | IID Gaussian vector
 | Collection of one-hot vectors in
 | Waiting time for W-FTPL policy
3 Policies
Our Policy: Our policy, called Wait then Follow the Perturbed Leader (W-FTPL), is a variant of the popular FTPL policy. A formal definition of FTPL is given in Algorithm 1. FTPL is a randomized policy and is known to perform well for the traditional caching problem, in fact achieving order-optimal regret for adversarial arrivals [7]. Under FTPL, in any time slot, for each candidate hosting fraction, the policy considers the cumulative cost that would have been incurred had the system used that hosting fraction over the entire duration so far, and then perturbs this cost by an appropriately scaled sample of a standard Gaussian random variable. FTPL then hosts the fraction of service for which the perturbed cost is minimum in that slot.
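To make the decision rule concrete, the following is a minimal Python sketch of an FTPL-style hosting rule. The function name, the square-root learning-rate schedule, and the default parameters are assumptions made for illustration and are not taken from the paper's Algorithm 1.

```python
import numpy as np

def ftpl_decisions(requests, levels, service_cost, rent_cost=1.0, eta0=1.0, rng=None):
    """Pick, in each slot, the hosting level whose perturbed cumulative
    (rent + service) cost over the past slots is smallest.  The fetch cost is
    deliberately ignored in the decision, as in FTPL."""
    rng = np.random.default_rng() if rng is None else rng
    gamma = rng.standard_normal(len(levels))   # one Gaussian perturbation per hosting level
    cum_cost = np.zeros(len(levels))           # cumulative cost of each static hosting level so far
    hosted = []
    for t, x in enumerate(requests, start=1):
        eta_t = eta0 * np.sqrt(t)              # assumed learning-rate schedule growing as sqrt(t)
        hosted.append(levels[int(np.argmin(cum_cost + eta_t * gamma))])
        # account for slot t: rent is proportional to the hosted fraction,
        # service cost is per request and decreasing in the hosted fraction
        cum_cost += np.array([rent_cost * rho + service_cost(rho) * x for rho in levels])
    return hosted

# example usage with two hosting levels (host nothing or the full service)
# decisions = ftpl_decisions(requests=[0, 3, 5, 2], levels=[0.0, 1.0],
#                            service_cost=lambda rho: 1.0 - rho, rent_cost=0.5)
```

Sampling a single Gaussian perturbation per hosting level once and scaling it by a growing learning rate is a common FTPL construction in the caching literature [7].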
We propose a policy called Wait then Follow The Perturbed Leader (W-FTPL). The key idea behind the W-FTPL policy is to not host the service for an initial wait period. This reduces the number of fetches made initially, when the estimate of the arrivals is noisy. The duration of this wait period is not fixed a priori and is a function of the arrival pattern seen till that time. Following the wait period, W-FTPL mimics the FTPL policy. Refer to Algorithm 2 for a formal definition of W-FTPL. Let be the time slot after which W-FTPL starts acting as FTPL. Formally, we define , where , are constants.
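Below is a minimal sketch of how W-FTPL wraps the FTPL rule with an initial wait phase. The specific wait test used here (stop waiting once the cumulative number of requests exceeds a threshold growing logarithmically in the fetch cost) is a hypothetical stand-in, since the paper defines the wait period through its own condition on the observed arrivals and the constants mentioned above.

```python
import numpy as np

def wftpl_decisions(requests, levels, service_cost, fetch_cost,
                    rent_cost=1.0, eta0=1.0, beta=1.0, rng=None):
    """Host nothing during an initial wait phase, then behave exactly like
    FTPL on the full history.  The wait condition below is a placeholder for
    the paper's rule."""
    rng = np.random.default_rng() if rng is None else rng
    gamma = rng.standard_normal(len(levels))
    cum_cost = np.zeros(len(levels))
    cum_requests, waiting, hosted = 0, True, []
    for t, x in enumerate(requests, start=1):
        if waiting and cum_requests >= beta * np.log(max(fetch_cost, np.e)):
            waiting = False                    # wait period over: switch to FTPL
        if waiting:
            choice = 0                         # levels[0] is assumed to be 0 (host nothing)
        else:
            eta_t = eta0 * np.sqrt(t)
            choice = int(np.argmin(cum_cost + eta_t * gamma))
        hosted.append(levels[choice])
        cum_requests += x
        cum_cost += np.array([rent_cost * rho + service_cost(rho) * x for rho in levels])
    return hosted
```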
Retro-Renting (RR) [3]: The RR policy is a deterministic hosting policy, and it either hosts the complete service or hosts nothing in a slot. The key idea behind this policy is to use recent arrival patterns to make hosting decisions. We refer the reader to Algorithm 1 in [3] for a formal definition of the RR policy. The performance of RR with respect to the competitive ratio was analyzed in [3].
The variant of RR proposed in [2] allows one partial level of hosting the service. Its formal definition, borrowed from [2, Algorithm 1], is given in Algorithm 3. Similar to RR, the key idea is to check whether the current hosting status is optimal in hindsight, using recent arrival patterns to make hosting decisions: the policy computes the optimal offline cost for the recent request arrivals and chooses the fraction of service with the minimum cost. For , this variant behaves as RR [2]; therefore, we refer to it simply as RR in this paper.
4 Main Results and Discussion
In this section, we state and discuss our key results.
4.1 Regret Analysis
Our first result characterizes the regret performance of the policies discussed in Section 3 and the fundamental limit on the performance of any online policy for adversarial arrivals.
Theorem 1 (Adversarial Arrivals).
Let the arrivals be generated by an oblivious adversary under the constraint that at most requests arrive in each time-slot. Then,
- (a) for any online policy .
- (b)
- (c) For , ,
- (d) For , ,
- (e) For , , , ,
The key take-away from Theorem 1 is that RR has linear regret, while FTPL and W-FTPL have order-optimal regret with respect to the time horizon under adversarial arrivals. The proof of Theorem 1 is in Section 6.
Our second result characterizes the regret performance of the policies discussed in Section 3 for i.i.d. stochastic arrivals.
Theorem 2 (Stochastic Arrivals).
Let the arrivals in each time-slot be i.i.d. stochastic with mean and . Then we have
- (a)
- (b) For , ,
where .
- (c) For , , , , and large enough we have,
where .
Corollary 1 (Stochastic Arrivals).
Let the arrivals in each time-slot be i.i.d. stochastic with mean and . For the W-FTPL policy, if , , then the regret scales at most logarithmically with .
Remark 1.
While the result in 2 is stated for , it can be shown that for and chosen suitably large, the regret of W-FTPL scales logarithmically with .
The key take-aways from Theorem 2 are that RR has linear regret with respect to time, while FTPL and W-FTPL have regret that does not scale with time. Also, the upper bound on the regret of W-FTPL scales as for , whereas for FTPL, the upper bound on the regret scales linearly with . W-FTPL thus outperforms FTPL with respect to the dependence of regret on the fetch cost . We validate these results via simulations in Section 5.
4.2 Competitive Ratio Analysis
Our next result characterizes the competitive ratio of the policies discussed in Section 3. The competitive ratio of RR is characterized in [3, 2], and it is shown that RR has a constant competitive ratio and the competitive ratio decreases with an increase in fetch cost . We present the results for FTPL and W-FTPL policies below.
Theorem 3 (Adversarial Arrivals).
Let the arrivals be generated by an oblivious adversary under the constraint that at most requests arrive in each time-slot. Then we have
- (a) For , ,
- (b) For , , , ,
The key take-away from Theorem 3 is that the competitive ratios of both FTPL and W-FTPL are bounded by a constant. The competitive ratio of FTPL and W-FTPL deteriorates with , while the competitive ratio of RR improves with when only one partial hosting level is permitted [2]. Note that the competitive ratio of FTPL and W-FTPL is bounded by a constant for the setting where any finite number of partial hosting levels are allowed. Compared to this, for RR in [2], the competitive ratio result holds only for one partial hosting level.
Theorem 4 (Stochastic Arrivals).
Let the arrivals in each time-slot be i.i.d. stochastic with mean . Then we have
- (a) For large enough, .
- (b) For , .
- (c) For , , , , .
The key take-away from Theorem 4 is that for large enough, FTPL and W-FTPL outperform RR in terms of competitive ratio for stochastic arrivals.
In Section 5 we supplement the results stated in this section through simulations.
5 Simulations
In this section, we compare RR, FTPL and W-FTPL policies via simulations using synthetic request sequences, as well as real trace data from [24]. In all our simulations, we consider for FTPL and W-FTPL, and for W-FTPL.
5.1 Synthetic arrivals
In Figure 1, we plot the regret of different policies as a function of the time horizon . The parameters considered are , , . The request arrival sequence considered is similar to the one used in the proof of Theorem 3(b), i.e., we divide time into frames, and each frame starts with slots of requests each, followed by slots of zero requests. We consider 100 such frames for this experiment. The regret is averaged over 50 experiments. We observe that RR has linear regret in this case, which agrees with Theorem 1.
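A minimal generator for this kind of frame-structured sequence is sketched below; the frame lengths and per-slot request counts are placeholders, since the exact values used in Figure 1 are experiment parameters not reproduced here.

```python
import numpy as np

def frame_arrivals(num_frames=100, busy_slots=10, idle_slots=10, peak=5):
    """One frame = `busy_slots` slots with `peak` requests each, followed by
    `idle_slots` slots with no requests; the frame repeats `num_frames` times."""
    frame = [peak] * busy_slots + [0] * idle_slots
    return np.array(frame * num_frames)
```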

5.2 Stochastic arrivals
For the stochastic case, we consider arrivals in each slot to be Bernoulli with parameter . We restrict attention to the case where there is only one partial level of hosting, i.e., . We consider and . In Figures 2 and 3, we consider , . All the results plotted are averaged over 50 independent experiments. In Figure 2, we fix and compare the regret performance of the policies as a function of time. We observe that W-FTPL performs better than the other policies, and RR has linear regret, which agrees with Theorem 2. In Figure 3, we compare the performance of the policies with respect to for . We observe that W-FTPL performs better than the other policies, and the main difference between the costs of the policies is due to the fetch cost. Also note that W-FTPL has better dependence on as compared to FTPL, which agrees with Theorem 2. From Theorem 2, we observe that for large values of , the regret of FTPL is proportional to whereas that of W-FTPL is proportional to . Therefore, in Figure 4, we compare the performance of the policies by varying the rent cost for , , and . Again, we note that W-FTPL outperforms RR and FTPL. As gets closer to , the FTPL policy makes multiple fetches, and as a result, the regret increases when is close to .


5.3 Trace driven arrivals
For trace-driven simulations, we use the data collected by the Grid Workloads Archive [24], which consists of requests arriving at computational servers. Among the traces mentioned in [24], we use the DAS-2 traces, which were provided by the Advanced School for Computing and Imaging (ASCI). We consider a slot duration of 1 hour and plot a snapshot of the trace in Figure 5. We consider for all the experiments in this subsection. All the results plotted for trace-driven experiments are averaged over 50 independent experiments.
We consider partial service hosting with one partial level for our simulations. This dataset provides information on the number of processors required to serve each request, and this number lies between 1 and 128 for all requests. For this experiment, we consider hosting the entire service as equivalent to having 128 processors at the edge and hosting the partial service as having 16 processors at the edge. It follows that the hosting status . To compute , we use the dataset to compute the fraction of requests that require more than 16 processors for service and obtain that .
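A sketch of this computation is given below; the array of per-request processor counts is assumed to have been parsed from the DAS-2 trace, and the function name is ours.

```python
import numpy as np

def forwarded_fraction(processors_per_request, edge_processors=16):
    """Fraction of requests that need more processors than the partial edge
    deployment provides, i.e., the fraction forwarded to the back-end."""
    p = np.asarray(processors_per_request)
    return float(np.mean(p > edge_processors))
```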




In Figure 6, we consider , , and compare the regret performance of the policies as a function of time. We observe that RR performs better than the other policies for most values of the time horizon. This is because RR attempts to mimic the optimal offline policy, which is not a static policy for this arrival sequence. In contrast, FTPL and W-FTPL attempt to mimic the optimal static policy and therefore perform worse than RR.
In Figure 7, we compare the performance of the different policies with respect to and fix , . We observe that RR performs better than the other policies for some values of for the same reason mentioned above. In Figure 8, we compare the performance of the different policies by varying the rent cost and fixing , . Again, we note that RR outperforms W-FTPL and FTPL in some cases.
Our simulation results thus indicate that FTPL and W-FTPL outperform RR for i.i.d. stochastic arrivals. In the case where the arrival process is such that the offline optimal policy is not static, RR can outperform FTPL and W-FTPL. Additionally, RR can have linear regret with respect to time, while FTPL and W-FTPL have sub-linear regret with respect to time for all arrival processes considered.
6 Proofs of Theorems
6.1 Proof of Theorem 1(a)
Proof of Theorem 1(a).
Recall that regret of any policy is given by
Note that where the expectation is w.r.t some specified distribution over request sequences. We lower bound to get a lower bound on .
Let , , and and i.i.d. over time; therefore . From Assumption 2 we have for all and therefore and is a valid Bernoulli random variable.
where follows from definition of , follows because of for , follows from Lemma 15 in Section 7, and follows from Lemma 16 in Section 7. ∎
6.2 Proof of Theorem 1(b)
Note that the RR algorithm in [3] can have only one intermediate level of partial hosting, i.e., . So throughout this subsection we consider . We use the following lemmas to prove Theorem 1(b).
Lemma 2.
RR does not fetch any non-zero fraction of the service for at least slots after evicting the complete service, where
.
Proof.
Without loss of generality, we take to be the slot in which RR last evicted the service. If the RR policy fetches fraction of service in slot , then .
We now prove the lemma by contradiction. Let us assume that RR fetches fraction of the service in time slot and . Then, by using the definition of , we get and , which is a contradiction. ∎
Lemma 3.
Once RR fetches any non-zero fraction of the service, it hosts (some positive fraction of the service) for at least slots before it evicts the service completely.
Proof.
Without loss of generality, we take to be the slot in which RR last fetched the service, and say it hosted fraction of the service. For the RR policy to evict the service in slot , we need , or equivalently .
We now prove the lemma by contradiction. Let us assume that RR evicts the service in time slot and . Then, from the assumption above, we must have , which is a contradiction. ∎
Proof of Theorem 1(b).
We split the entire time duration into frames of size and consider the arrival sequence in each frame as for and for , where , , and .
From Lemma 2 we know that RR does not fetch any fraction of service till time . At RR fetches some positive fraction of the service because holds by definition of .
From Lemma 3 we know that RR does not evict the service for . At RR evicts the service because for , holds by definition of .
So amongst the first time-slots, the RR policy does not host for the first slots and then hosts some positive fraction of the service for the following slots. Let be the fraction of service to be hosted by the optimal static policy. If the optimal static policy is to host fraction of the service, then we have a regret of at least for the initial frame and from the second frame onwards because of forwarding, which is a constant and does not depend on . If the optimal static policy is not to host the service, then we have a regret of till for some , which is a constant and does not depend on . So in either case, a regret larger than some positive constant (say ) is incurred in a frame of size . Since we split the entire time duration into frames of size and the request sequence is repeated in each frame, we get linear regret.
∎
6.3 Proof of Theorem 1(c)
The optimal static policy only fetches the optimal fraction of service to be hosted once. Therefore,
(3)
Since the FTPL policy does not consider the fetch cost while making decisions, we can decouple the fetch cost and the non-fetch cost incurred by the FTPL policy. Therefore, we can also decouple the expected regret incurred by FTPL into the expected regret without fetch cost () and the fetch cost incurred by FTPL to bound (3). We first bound the expected regret of the FTPL policy without the fetch cost and then add its expected fetch cost to get the final regret bound.
Lemma 4.
The regret of the FTPL policy with non-decreasing learning rate and is given by
Proof.
The proof of this lemma follows along the same lines as Theorem 1 of [25] and Proposition 4.1 of [17]. Recall that represents the fraction of service hosted in slot by policy and is a one-hot vector where the position of the one represents the level of service hosted at the edge. In slot , FTPL hosts fraction of service , where . For ease of notation, we suppress the policy superscript and denote as and as in the rest of the discussion. Define a time-varying potential function
(4)
and observe that the gradient of at is . So . Consequently
where is the line segment joining and which follows from Taylor’s expansion of .
(5)
By Jensen’s inequality
where follows because ’s are standard Gaussian random variables. Recall that is the loss incurred by the offline static policy. Therefore by rearranging (5) and using Jensen’s inequality we get
(6)
We bound each term in the RHS separately to get the final result.
The first term is
since Gaussian random variables are symmetric. By using the result of Lemma 9 in [25], we get
(7)
Now we bound the expected fetch cost incurred by FTPL policy.
Lemma 5.
The expected fetch cost under the FTPL policy up to time , denoted by , with learning rate is bounded as follows
Proof.
The proof of this lemma follows a similar line of argument as in [17] for the case with and cache size 1. The fetch cost is incurred only when the service is fetched. The overall cost of fetching is . So, the expected fetch cost is given by
Let be the event that , . occurs only if , . Let , , note that , .
(10)
Here follows from the fact that , where , follows because and and, follows because . Therefore,
∎
6.4 Proof of Theorem 1(d)
Proof of Theorem 1(d).
We characterize the fetch cost under FTPL to analyze the impact of on the total cost incurred under FTPL. A fetch cost is incurred if , . Consider a request sequence and for all . The optimal static policy is to not host any fraction of the service, i.e., to forward all the requests to the back-end server.
The probability of hosting , fraction of service in the first slot is . Hosting any non-zero fraction of the service will incur at least fetch cost. Therefore, the expected fetch cost of the FTPL policy will be at least , which is a lower bound on the expected cost of the FTPL policy for this request sequence. For FTPL, we can decouple the fetch cost and the non-fetch cost, and FTPL does not consider while making decisions. Therefore, is also a lower bound on the regret of FTPL. This completes the proof.
∎
6.5 Proof of Theorem 1(e)
Proof of Theorem 1(e).
Recall that . For arrival sequences with requests arriving in time-slots 1 to , we have two possible cases, namely, and . We consider each case to bound the regret of W-FTPL policy.
Case I (): Let be a request sequence chosen by adversary such that , then
follows because for for W-FTPL.
Case II (): Let be a request sequence chosen by adversary such that , then by using case I we can bound
Thus, combining both cases, we get an upper bound on the regret in the wait phase as
(11)
Note that the W-FTPL policy follows the FTPL policy after its waiting time, i.e., from time . Therefore, W-FTPL and FTPL take the same decisions and incur the same cost from time to . Thus, the regret in a slot is also the same for W-FTPL and FTPL from time to . If , then only the wait phase is present and the regret in the FTPL phase is taken to be zero. We denote the regret from time to as .
(12)
∎
6.6 Proof of Theorem 2(a)
Proof of Theorem 2(a).
From Lemma 3, we know that once RR fetches any non-zero fraction of the service, it hosts (some positive fraction of the service) for at least slots before it evicts it completely. From Lemma 2, we know that once RR evicts the complete service from the edge, it does not fetch any non-zero fraction of the service for at least slots. We divide the entire time into frames of size slots. Let us define an event as the frame that starts with slots with arrivals in each slot, followed by slots of zero arrivals. Let the probability of event occurring be (independent of ). Conditioned on event , if the optimal static policy is to host a non-zero fraction of the service, then we have a regret of at least in a frame, where . If the optimal static policy is not to host any fraction of the service, then we have a regret of in a frame, where . Therefore, conditioned on event , RR always has a finite non-zero regret, say , which is independent of . Therefore,
So even in the stochastic case, RR incurs linear regret. ∎
6.7 Proof of Theorem 2(b)
Lemma 6.
The probability of hosting a sub-optimal fraction of the service under the FTPL policy in time slot is bounded as follows
Proof.
FTPL hosts fraction of service if in time slot , for all . Let .
There are two possibilities: either or . We bound the probability by considering each case separately.
Case 1 (): Since is a decreasing function . Let be the event that .
where is obtained by using the fact that complementary CDF of standard Gaussian for and Hoeffding’s inequality [27].
Case 2 (): Let be an event .
where is obtained by using the fact that for and Hoeffding’s inequality [27].
Therefore by combining both cases we get
(13) |
∎
Lemma 7.
For ,
where .
Proof.
Let where . From (2) we have
where is obtained by using the adversarial regret bound when . Now we consider each term on the RHS separately and bound them.
By using Lemma 6 we have
Here is obtained using the fact that is a decreasing function over , and for , and . Therefore,
Here follows because for we have , and for . ∎
Lemma 8.
Fetch cost under FTPL policy with learning rate is bounded as follows
Proof.
The fetch cost is incurred when we fetch extra fraction of service. Similar to the proof of Lemma 5 we get
Let be the event that , . occurs if , .
We consider two cases and and bound the probability.
Case 1 (): Let be the event that ,
where is obtained by using the fact that for and Hoeffding’s inequality [27].
Case 2 (): Let be the event that ,
where is obtained by using the fact that for and Hoeffding’s inequality [27].
From combining both cases we get
(14)
Therefore,
∎
6.8 Proof of Theorem 2(c)
Lemma 9.
Under the W-FTPL policy, if and only if there exist such that the following condition holds:
or
where , .
Proof.
Let , . if and only if there exist such that
or
This completes the proof. ∎
Lemma 10.
Under the W-FTPL policy, with probability at least .
Proof.
Let be the event that
where . By using Hoeffding’s inequality [27] we get
Let , where . Using the union bound, we get:
Let . We prove the lemma by contradiction. Let us consider
and holds, then by Lemma 9, such that
or
holds. There are two possibilities , we consider them separately to prove the final result.
Case 1 (): It follows that
which is a contradiction. Alternatively,
which is also a contradiction.
Case 2 (): It follows that:
which is a contradiction. Alternatively,
which is also a contradiction.
Therefore , if occurs, which happens with probability at least . ∎
Lemma 11.
Under the W-FTPL policy, for .
Proof.
For , we have . For , we have
and
Let us consider the following cases
Case 1 (, ):
where is obtained by using Hoeffding’s inequality [27].
Case 2 (, ):
where is obtained by using Hoeffding’s inequality [27].
Case 3 (, ):
where is obtained by using Hoeffding’s inequality [27].
Case 4 (, ):
where is obtained by using Hoeffding’s inequality [27]. The result follows by combining these cases. ∎
Lemma 12.
Under the W-FTPL policy
Proof.
6.9 Proof of Theorem 3(a)
Lemma 13.
The cost of the offline optimal policy is lower bounded as follows
Proof.
The cost incurred by the optimal offline policy with is lower bounded by the cost incurred by the optimal offline policy when . For , there is an additional fetching cost. Therefore,
∎
Lemma 14.
The expected cost of FTPL policy for any request sequence for , , can be upper bounded as follows
Proof.
FTPL does not host any fraction of the service for time if , for all . The probability of hosting any non-zero fraction of the service () in time slot is given as
Let , if then . Therefore for , for all and we get
(15)
Therefore, the expected cost under FTPL can be bounded as
Here follows from (15). If , then the bound will only have the first term and even in that case the above bound holds. Therefore the above bound holds for any . ∎
6.10 Proof of Theorem 3(b)
6.11 Proof of Theorem 4(a)
6.12 Proof of Theorem 4(b)
6.13 Proof of Theorem 4(c)
References
- [1] R. S. Prakash, N. Karamchandani, V. Kavitha, S. Moharir, Partial service caching at the edge, in: 2020 18th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOPT), IEEE, 2020, pp. 1–8.
- [2] V. S. C. L. Narayana, M. Agarwala, R. S. Prakash, N. Karamchandani, S. Moharir, Online partial service hosting at the edge, CoRR abs/2103.00555 (2021). arXiv:2103.00555. URL https://arxiv.org/abs/2103.00555
- [3] V. C. L. Narayana, S. Moharir, N. Karamchandani, On renting edge resources for service hosting, ACM Transactions on Modeling and Performance Evaluation of Computing Systems 6 (2) (2021) 1–30.
- [4] T. Zhao, I.-H. Hou, S. Wang, K. Chan, Red/led: An asymptotically optimal and scalable online algorithm for service caching at the edge, IEEE Journal on Selected Areas in Communications 36 (8) (2018) 1857–1870.
- [5] T. L. Lai, H. Robbins, Asymptotically efficient adaptive allocation rules, Advances in applied mathematics 6 (1) (1985) 4–22.
- [6] G. S. Paschos, A. Destounis, L. Vigneri, G. Iosifidis, Learning to cache with no regrets, in: IEEE INFOCOM 2019-IEEE Conference on Computer Communications, IEEE, 2019, pp. 235–243.
- [7] R. Bhattacharjee, S. Banerjee, A. Sinha, Fundamental limits on the regret of online network-caching, Proceedings of the ACM on Measurement and Analysis of Computing Systems 4 (2) (2020) 1–31.
- [8] C. Puliafito, E. Mingozzi, F. Longo, A. Puliafito, O. Rana, Fog computing for the internet of things: A survey, ACM Trans. Internet Technol. 19 (2) (2019) 18:1–18:41. doi:10.1145/3301443. URL http://doi.acm.org/10.1145/3301443
- [9] Y. Mao, C. You, J. Zhang, K. Huang, K. B. Letaief, A survey on mobile edge computing: The communication perspective, IEEE Communications Surveys & Tutorials 19 (4) (2017) 2322–2358.
- [10] P. Mach, Z. Becvar, Mobile edge computing: A survey on architecture and computation offloading, IEEE Communications Surveys & Tutorials 19 (3) (2017) 1628–1656.
- [11] S. Pasteris, S. Wang, M. Herbster, T. He, Service placement with provable guarantees in heterogeneous edge computing systems, in: IEEE INFOCOM 2019-IEEE Conference on Computer Communications, IEEE, 2019, pp. 514–522.
- [12] S. Bi, L. Huang, Y. A. Zhang, Joint optimization of service caching placement and computation offloading in mobile edge computing systems, IEEE Transactions on Wireless Communications 19 (7) (2020) 4947–4963. doi:10.1109/TWC.2020.2988386.
- [13] O. Ascigil, A. Tasiopoulos, T. K. Phan, V. Sourlas, I. Psaras, G. Pavlou, Resource provisioning and allocation in function-as-a-service edge-clouds, IEEE Transactions on Services Computing (2021) 1–1. doi:10.1109/TSC.2021.3052139.
- [14] H. Tan, S. H.-C. Jiang, Z. Han, M. Li, Asymptotically optimal online caching on multiple caches with relaying and bypassing, IEEE/ACM Transactions on Networking (2021).
- [15] X. Xia, F. Chen, Q. He, J. Grundy, M. Abdelrazek, H. Jin, Online collaborative data caching in edge computing, IEEE Transactions on Parallel and Distributed Systems 32 (2) (2020) 281–294.
- [16] T. S. Salem, G. Neglia, S. Ioannidis, No-regret caching via online mirror descent, in: ICC 2021-IEEE International Conference on Communications, IEEE, 2021, pp. 1–6.
- [17] S. Mukhopadhyay, A. Sinha, Online caching with optimal switching regret, in: 2021 IEEE International Symposium on Information Theory (ISIT), IEEE, 2021, pp. 1546–1551.
- [18] S. Fan, I.-H. Hou, L. B. Van Sy Mai, Online service caching and routing at the edge with unknown arrivals (2022).
- [19] S. Wang, R. Urgaonkar, M. Zafer, T. He, K. Chan, K. K. Leung, Dynamic service migration in mobile edge-clouds, in: 2015 IFIP Networking Conference (IFIP Networking), IEEE, 2015, pp. 1–9.
- [20] G. Xiong, R. Singh, J. Li, Learning augmented index policy for optimal service placement at the network edge, arXiv preprint arXiv:2101.03641.
- [21] D. Zeng, L. Gu, S. Pan, J. Cai, S. Guo, Resource management at the network edge: A deep reinforcement learning approach, IEEE Network 33 (3) (2019) 26–33.
- [22] L. Chen, J. Xu, Budget-constrained edge service provisioning with demand estimation via bandit learning, arXiv preprint arXiv:1903.09080 (2019).
- [23] W. He, D. He, Y. Huang, Y. Zhang, Y. Xu, G. Yun-feng, W. Zhang, Bandit learning-based service placement and resource allocation for mobile edge computing, in: 2020 IEEE 31st Annual International Symposium on Personal, Indoor and Mobile Radio Communications, IEEE, pp. 1–6.
- [24] A. Iosup, H. Li, M. Jan, S. Anoep, C. Dumitrescu, L. Wolters, D. H. Epema, The grid workloads archive, Future Generation Computer Systems 24 (7) (2008) 672–686.
- [25] A. Cohen, T. Hazan, Following the perturbed leader for online structured learning, in: International Conference on Machine Learning, PMLR, 2015, pp. 1034–1042.
- [26] J. Abernethy, C. Lee, A. Sinha, A. Tewari, Online linear optimization via smoothing, in: Conference on Learning Theory, PMLR, 2014, pp. 807–823.
- [27] W. Hoeffding, Probability inequalities for sums of bounded random variables, in: The collected works of Wassily Hoeffding, Springer, 1994, pp. 409–426.
- [28] D. Berend, A. Kontorovich, A sharp estimate of the binomial mean absolute deviation with applications, Statistics & Probability Letters 83 (4) (2013) 1254–1259.
- [29] H. Robbins, A remark on stirling’s formula, The American mathematical monthly 62 (1) (1955) 26–29.
7 Appendix
Proof of Lemma 1.
For i.i.d. stochastic arrivals, we have
where is obtained by Jensen’s inequality. ∎
Lemma 15.
If is a random variable and , then
Proof.
∎
Lemma 16.
If then