Cache Allocations for Consecutive Requests of Categorized Contents: Service Provider’s Perspective
Abstract
In wireless caching networks, a user generally accesses a service with a concrete purpose of consuming contents in a certain preferred category and requests multiple contents in sequence. While most existing research on wireless caching and delivery has focused only on one-shot requests, the popularity distribution of consecutively requested contents differs markedly from that of one-shot requests and has not been considered. Moreover, especially from the perspective of the service provider, it is advantageous for users to consume as many contents as possible. Thus, this paper proposes two cache allocation policies for categorized contents and consecutive user demands, which maximize 1) the cache hit rate and 2) the expected number of consecutively consumed contents, respectively. Numerical results show how categorized contents and consecutive content requests affect the cache allocation.
I Introduction
In multimedia services, e.g., on-demand streaming services, a relatively small number of popular contents generally occupies a large portion of the massive global data traffic, and most user demands are overlapped and repeated [1]. To deal with this issue, wireless caching technologies have been studied, wherein the base station (BS) or the server pushes popular contents during off-peak hours to cache-enabled nodes so that these nodes can deliver contents directly to nearby mobile users during peak hours [2]. In practice, caching nodes (i.e., caching helpers and/or cache-enabled devices) have finite storage sizes, which gives rise to the content placement problem of determining which contents should be stored in each caching node [3].
The goal of the content placement problem is to find optimal caching policies according to the popularity distribution of contents and the network topology. In stochastic wireless caching networks, there exist research efforts on probabilistic content placement introduced in [4, 5]. Many probabilistic caching methods have been proposed for various systems, e.g., device-to-device (D2D) networks [6], -tier hierarchical networks [7], and multi-quality dynamic streaming [8]; probabilistic coded caching was also recently proposed in [9].
Previous research optimized content placement for users requesting only one content, i.e., a one-shot request, under the assumption that all content requests are independent. In multimedia services, however, the user typically accesses a service platform with the purpose of consuming a specific category of content, and generally consumes more than one content. In this case, the sequence of consecutively consumed contents is highly correlated. For example, in video services such as YouTube, a list of related videos is recommended to the user after the first content is consumed [10]. In particular, the view count of a given video varies on almost the same scale as the average view count of the top referrer videos in the related list [11]; therefore, a user is highly likely to request one of the videos in the related category. Accordingly, this paper considers the scenario in which a user consecutively requests multiple contents that are likely to be in its preferred category. In this context, this paper proposes two cache allocation policies for categorized contents and consecutive user demands, which maximize the cache hit rate and the expected number of consecutive content consumptions, respectively.
The previous work in [12] proposed a caching policy for consecutive user demands under the assumption that the number of content requests is fixed. This assumption does not allow the service provider to maximize the user’s content consumption; however, from the perspective of the service provider, it is important to satisfy as many of the user’s requests as possible. In this paper, each user determines whether to continue consuming contents depending on the cache states in its vicinity, and the service provider aims at making users stay in the service longer.
The main contributions are as follows:
• Different from most existing results on wireless caching, in which only one-shot requests are considered, consecutive requests of categorized contents are considered here. In practice, the sequence of consecutively consumed contents is highly correlated, and an advanced caching scheme is required.
• Based on a real data set, the recent work in [13] modeled the category-based conditional content popularity distribution. This paper uses this measurement-based popularity model to obtain the proposed cache allocation rules for consecutive requests of categorized contents.
• This paper proposes two cache allocation schemes, which maximize the cache hit rate and the expected number of consecutive content requests, respectively. An iterative algorithm is presented to find the optimal cache allocation rules, and its convergence is proved.
• Numerical results show how 1) the popularity concentration in the preferred category and 2) different numbers of contents in different categories influence the cache allocation rule.
The rest of the paper is organized as follows. The system model is described in Section II. The cache allocation rules maximizing the cache hit probability and the expected number of consecutive content consumptions are proposed in Sections III and IV, respectively. Numerical results are shown in Section V, and Section VI concludes the paper.
II System Model
II-A Wireless Caching Network
This paper assumes that caching nodes are randomly distributed according to a general spatial distribution , and that the server, which has a content library, pushes some popular contents to each caching node during off-peak hours. Suppose that the library consists of contents and that all contents have a normalized unit size. Let all contents be grouped into categories, where contents are in category denoted by , for all , satisfying . Also, denote the content index set of by .
The caching nodes have a finite storage size , which means that only contents can be cached in each node. Since in practice, caching nodes store only a part of the contents in . A user requests contents from caching nodes in its vicinity. If the user finds at least one caching node that stores the desired content, this case is called a cache hit. When the user requests multiple contents, we define the cache hit as the case where all of the requested contents can be found in nearby caching nodes. When no caching node has some of the requested contents, the server can deliver them via a cellular link. However, this paper assumes that the transmission quality of the cellular link is insufficient due to delay and/or congestion, which leads to unacceptable video quality; hence, direct transmission from the server is not considered in the following.
Let the storage size be divided into fractions of unequal sizes denoted by for all , where contents in are stored within . These fractions are called cache allocations for categories and satisfy and . Given all of , how to store individual contents within each category becomes a classical content placement problem, and we consider the probabilistic caching policy of [4, 5] for individual contents.
Description | Symbol |
---|---|
Number of categories | |
Number of contents in category | |
Index of the preferred category | |
Cache size | |
Cache allocation for category | |
Global popularity of category | |
Popularity of the preferred category | |
Rank of the category from which contents are requested | |
Number of consecutive content requests | |
Number of requested contents in the -th ranked category | |
Category index of the -th ranked category | |
Probability of not requesting the next content | |
II-B Content Popularity Model
This paper focuses on the scenario in which the user requests multiple contents consecutively, different from most existing caching policies, which consider only one-shot requests. A representative example is a video streaming service: a user may access the service platform with the concrete purpose of watching sports highlight clips. In this case, we can postulate that sports is the user’s preferred category; therefore, the probability of requesting sports videos in sequence is very high. In contrast, the probability of requesting contents in other categories, e.g., movie trailers, is small although not zero.
Accordingly, the content request can be modeled by the following steps. First, the user randomly picks one category in . Each category has a global category popularity, which follows the Zipf distribution [4]: , where denotes the popularity distribution skewness. The selected category then becomes the first-ranked category among all categories; note that the global category popularity is used only for choosing the first rank. The other categories can take any rank except the first, but this paper models all categories that are not preferred by this particular user as statistically equivalent; in other words, the relative ranking among them does not matter. After determining the preferred category, the user chooses one of the categories from which to request a content depending on their ranks. Again, the category rank distribution given the preferred category is assumed to follow the Zipf distribution, i.e., , which represents the popularity of the -th ranked category, where is the Zipf factor. We denote the popularity of the preferred category by . Note that is the probability of staying within the given preferred category, not the general probability of picking the first-ranked category as in [13], which is a different quantity. We also consider the situation in which the user stops requesting contents by itself with a small probability . Therefore, the probability of requesting any content in the -th ranked category after consuming the first content becomes .
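To make this request model concrete, the following minimal Python sketch simulates the category side of one user session: the preferred category is drawn from the global Zipf law, every request then picks a category rank from the rank Zipf law, and the session ends voluntarily with the stopping probability. All numeric values (number of categories, both Zipf exponents, and the stopping probability) are illustrative assumptions rather than the paper's parameters; how a specific content is picked within the chosen category is sketched after Eq. (1) below.

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative (assumed) parameter values
K = 5           # number of categories
gamma_g = 0.8   # skewness of the global category popularity (Zipf)
gamma_r = 1.5   # skewness of the category-rank distribution (Zipf)
p_end = 0.1     # probability of not requesting the next content

def zipf_pmf(n, gamma):
    """Zipf probabilities p_r proportional to r^(-gamma) over ranks 1..n."""
    w = np.arange(1, n + 1, dtype=float) ** (-gamma)
    return w / w.sum()

def simulate_session():
    """Sequence of category indices requested in one session."""
    # the preferred (first-ranked) category is drawn from the global Zipf law
    preferred = rng.choice(K, p=zipf_pmf(K, gamma_g))
    # the remaining categories are statistically equivalent, so their ranks
    # are assigned at random
    others = rng.permutation([c for c in range(K) if c != preferred])
    rank_to_cat = np.concatenate(([preferred], others))
    rank_pop = zipf_pmf(K, gamma_r)   # popularity of the r-th ranked category

    categories = []
    while True:
        r = rng.choice(K, p=rank_pop)        # pick a category by its rank
        categories.append(int(rank_to_cat[r]))
        if rng.random() < p_end:             # stop voluntarily
            break
    return categories

for _ in range(3):
    print(simulate_session())
```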
After choosing the category rank, the user requests the specific content in the category having the chosen rank. According to [13], the category-based conditional popularity distribution of contents in follows the Mandelbrot-Zipf (M-Zipf) distribution, i.e.,
$p^{(c)}_i = \dfrac{(i+q_c)^{-\gamma_c}}{\sum_{j=1}^{N_c} (j+q_c)^{-\gamma_c}}$   (1)
which represents the popularity of the -th content in for . Here, and are the Zipf factor and the plateau factor of , respectively. If is sufficiently large, , and the popularity of contents in the preferred category is much larger than that of any content in for all . Fig. 1 shows the popularity distribution of 100 contents given the preferred category, grouped into 5 categories of 20 contents each. This figure is obtained by multiplying the rank probability and the category-based individual content popularity, when , and . Among them, the contents whose indices are from 1 to 20 belong to the first-ranked category, and their popularity is much larger than that of the others. Therefore, if is sufficiently large, we approximate the popularity of contents outside the preferred category as a uniform distribution, i.e., for all and , irrespective of the ranks of those categories. Given , the popularity of contents in is still . Thus, considering the two exclusive sets of the preferred category and all other contents is reasonable.
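As a complement, the sketch below evaluates the within-category M-Zipf popularity of Eq. (1) and the uniform approximation used for contents outside the preferred category. The Zipf factor, plateau factor, and preferred-category popularity are assumed values chosen only to mirror the illustrative 5-categories-of-20-contents setting described for Fig. 1.

```python
import numpy as np

def mzipf_pmf(n_c, gamma_c, q_c):
    """Mandelbrot-Zipf popularity of the i-th most popular content of a
    category: p_i proportional to (i + q_c)^(-gamma_c), i = 1..n_c."""
    w = (np.arange(1, n_c + 1, dtype=float) + q_c) ** (-gamma_c)
    return w / w.sum()

# illustrative (assumed) parameters: 5 categories of 20 contents each
K, n_c = 5, 20
gamma_c, q_c = 1.0, 5.0     # Zipf factor and plateau factor (assumed)
p_pref = 0.8                # popularity of the preferred category (assumed)

# popularity of contents inside the preferred category
pop_in = p_pref * mzipf_pmf(n_c, gamma_c, q_c)

# approximation from the text: contents outside the preferred category are
# roughly uniformly popular, irrespective of the ranks of their categories
pop_out = np.full((K - 1) * n_c, (1.0 - p_pref) / ((K - 1) * n_c))

print(pop_in[:3].round(4), pop_out[0].round(4))
```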
III Maximization of Cache Hit Probability
This section derives the cache hit probability and proposes a cache allocation rule that maximizes the cache hit probability.
III-A Cache hit probability
Suppose that the user requests contents in sequence. Among the contents, let contents belong to the -th ranked category, satisfying . Then, when the preferred category is given, the cache hit probability given content requests, i.e., the probability that all of the requested contents can be delivered from nearby caching nodes, can be expressed as
(2)
where is the index of the -th ranked category and
(3)
is the cache hit probability of a content request within given when is the preferred category. In Eq. (3), is the probability that there are caching nodes storing the requested content in the vicinity of the user. Also, is the caching probability of the -th content in given . However, the computations required to scan all combinations of values grow exponentially as increases.
When is large, is simplified by using approximations of , and into
(4)
where
(5)
which is the cache hit probability of a content request outside . Each term in Eq. (4) is the probability that, among the requested contents, are in and are outside , and that all of the contents can be found at caching nodes in the vicinity of the user. For simplicity, we use the notation in the following sections.
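Since the exact expression of Eq. (4) is not reproduced here, the following sketch only illustrates one plausible reading of its structure: the number of requests falling in the preferred category among consecutive requests is binomial with the stay probability, each in-category request is served with the in-category hit probability, and each out-of-category request with the out-of-category one. The numbers are purely illustrative.

```python
from math import comb

def hit_prob_n(n, p_stay, h_in, h_out):
    """Probability that all n consecutively requested contents are found at
    nearby caching nodes, assuming each request falls in the preferred
    category independently with probability p_stay (one plausible reading of
    the binomial structure described for Eq. (4))."""
    total = 0.0
    for k in range(n + 1):        # k requests inside the preferred category
        total += (comb(n, k) * p_stay**k * (1 - p_stay)**(n - k)
                  * h_in**k * h_out**(n - k))
    return total

# sanity check: the sum collapses to (p_stay*h_in + (1-p_stay)*h_out)**n
n, p_stay, h_in, h_out = 3, 0.8, 0.9, 0.3
print(hit_prob_n(n, p_stay, h_in, h_out))
print((p_stay * h_in + (1 - p_stay) * h_out) ** n)
```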
The expected cache hit probability can be finally derived as
(6)
where is the probability that the user requests contents in sequence, which is given by
(7)
Therefore, the cache hit probability can be rearranged as
(8)
(9) |
[Fig. 1: Popularity distribution of 100 contents given the preferred category, grouped into 5 categories of 20 contents each.]
In Eq. (9), any caching policy can be utilized within each category given the preferred and , and and are determined depending on the caching policy. Then, we suppose that the caching policy that maximizes the cache hit rate [4, 5] is used for caching individual contents within every category. Denote the maximum cache hit rates in and outside by and , respectively. Then, the cache hit probability of content requests becomes
(10)
(11) |
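The per-category hit rates used above rely on the classical probabilistic placement of [4, 5]. For reference, the sketch below solves that single-category building block under the standard PPP vicinity model, i.e., maximizing the hit probability of one request given a per-category cache budget. The mean number of nearby nodes and the example popularity values are assumptions for illustration, and the paper's exact in- and out-of-category hit-rate expressions are not reproduced.

```python
import numpy as np

def optimal_caching_probs(pop, cache_size, m, tol=1e-9):
    """Per-content caching probabilities maximizing
        sum_i pop[i] * (1 - exp(-m * b_i))
    subject to sum_i b_i = cache_size and 0 <= b_i <= 1, found by bisection
    on the Lagrange multiplier as in [4, 5]. m denotes the mean number of
    caching nodes in the user's vicinity (lambda * pi * R^2 for a PPP)."""
    pop = np.asarray(pop, dtype=float)

    def alloc(nu):
        # stationarity: pop_i * m * exp(-m * b_i) = nu  ->  solve for b_i
        b = np.log(np.maximum(pop * m / nu, 1e-300)) / m
        return np.clip(b, 0.0, 1.0)

    lo, hi = 1e-12, float(pop.max()) * m    # bracket for the multiplier nu
    while hi - lo > tol:
        nu = 0.5 * (lo + hi)
        if alloc(nu).sum() > cache_size:
            lo = nu                          # allocation too large -> larger nu
        else:
            hi = nu
    b = alloc(0.5 * (lo + hi))
    hit = float(np.sum(pop * (1.0 - np.exp(-m * b))))
    return b, hit

# illustrative numbers: 20 contents, cache of size 4, ~3 nearby nodes on average
pop = (np.arange(1, 21) + 5.0) ** (-1.0)
pop /= pop.sum()
b, hit = optimal_caching_probs(pop, cache_size=4, m=3.0)
print(b.round(2), round(hit, 3))
```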
III-B Problem Formulation
The optimal cache allocations of can be obtained by maximizing (11) as follows:
(12)
(13)
(14) |
The constraint (13) is for the storage size of each caching node and the constraint (14) is for the cache allocation for each category. The following key lemmas are used to solve the above problem of (12)–(14).
Lemma 1.
is increasing with .
Proof.
In this proof, we simply use the notation of . Let and . Then, , where can be the cache hit probability within of any caching policy satisfying . Let . Then, since and is generally much closer to zero than one when the library size of is large,
(15)
(16) |
is obtained by using the first-order Taylor approximation, i.e., .
Since the storage size is fixed, the cache size allocated to all categories except for is . With small , there exists some category for such that . Then, let and , , , , where for all and . In this case, the cache allocations for the other categories can remain unchanged, i.e., and for all . Then, similarly to before,
(17)
(18) |
Since and , and the above lemma is finally proved. ∎
Lemma 2.
The optimum vector satisfies .
Proof.
Assume that ; then, such that and for a certain . Let . According to Lemma 1, is increasing with for all . Thus, , which obviously leads to a contradiction. ∎
According to Lemma 2, the inequality constraint (13) can be converted into an equality constraint. The problem of (12)–(14) has optimization parameters, and the subproblem for finding the optimal and is formulated as follows:
(19)
(20)
(21) |
where . Since are fixed, also becomes a constant .
A multivariable function can be optimized by iteratively optimizing subsets of its variables if convergence is guaranteed. To find , the subproblem of (19)–(21) can be applied iteratively for all combinations of and , for and . In each iteration, we find the maximum of the two-variable problem of (19)–(21), which generates a sequence of updated values of . Since this sequence is non-decreasing and the cache hit probability has a trivial upper bound of 1, i.e., , the convergence of the iterative algorithm is guaranteed.
However, since and are obtained by using the bisection method [4, 5], the objective function of is not available in closed form, and the problem of (19)–(21) must be handled numerically. Therefore, we consider integer values for the cache allocations , and a greedy algorithm can solve the problem when is not very large. If caching of content partitions is not considered, i.e., only caching of whole contents is allowed, the assumption that is an integer for all is reasonable. The details of the iterative algorithm that solves the problem of (12)–(14) are described in Algorithm 1; a sketch of its pairwise structure is given below.
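Since the listing of Algorithm 1 is not reproduced here, the sketch below only illustrates the pairwise iterative structure described above: two categories are selected, all other allocations are fixed, and their combined integer budget is exhaustively re-split. The toy objective merely stands in for the hit-rate objective of this section (or the consumption objective of Section IV).

```python
from itertools import combinations

def iterative_pair_allocation(objective, K, cache_size, max_rounds=50):
    """Sketch of the pairwise iterative structure described for Algorithm 1:
    repeatedly pick two categories, keep the others fixed, and exhaustively
    re-split their combined (integer) cache budget to maximize `objective`.
    `objective(alloc)` can be the expected cache hit probability of Sec. III
    or the expected number of consumed contents of Sec. IV."""
    # start from an even integer split using the whole cache (cf. Lemma 2)
    alloc = [cache_size // K] * K
    for k in range(cache_size - sum(alloc)):
        alloc[k] += 1

    best = objective(alloc)
    for _ in range(max_rounds):
        improved = False
        for i, j in combinations(range(K), 2):
            budget = alloc[i] + alloc[j]
            for ci in range(budget + 1):       # all splits of the pair's budget
                cand = list(alloc)
                cand[i], cand[j] = ci, budget - ci
                val = objective(cand)
                if val > best + 1e-12:
                    best, alloc, improved = val, cand, True
        if not improved:          # the objective sequence is non-decreasing and
            break                 # bounded, so the iteration terminates
    return alloc, best

# toy concave per-category utility; weights stand in for category popularity
weights = [0.5, 0.25, 0.15, 0.07, 0.03]
toy_objective = lambda a: sum(w * (1 - 0.5 ** x) for w, x in zip(weights, a))
print(iterative_pair_allocation(toy_objective, K=5, cache_size=10))
```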
IV Maximization of Expected Number of Consecutive Content Requests
From the service provider’s perspective, it is advantageous for the user to consume as many contents as possible. As explained in Section II, the user does not request the next content with probability . In addition, we assume that the user stops consuming contents when no caching node in its vicinity stores the desired content, even though the user has requested it.
The probability of stopping content consumption is given by
(22)
In (22), the first term is the probability of not requesting the next content, and the second and third terms are the probabilities that no caching node stores the requested content when the content belongs to and does not belong to , respectively. Then, the expected number of consecutive content consumptions is computed as
(23)
(24) |
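As an illustration, the following sketch evaluates the expected number of consumed contents under one plausible reading of Eqs. (22)–(24): after each consumed content the session ends independently with the stopping probability of Eq. (22), so the number of consumed contents is geometric. The specific form of the stopping probability and all numeric values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def stop_probability(p_end, p_stay, h_in, h_out):
    """One plausible reading of Eq. (22): the session ends either because the
    user voluntarily stops, or because the next requested content (inside or
    outside the preferred category) is not cached at any nearby node."""
    return p_end + (1 - p_end) * (p_stay * (1 - h_in)
                                  + (1 - p_stay) * (1 - h_out))

def expected_consumptions(p_end, p_stay, h_in, h_out):
    """If the session ends after each consumed content independently with
    probability q, the number of consumed contents is geometric with mean
    1/q (assuming the first requested content is served)."""
    return 1.0 / stop_probability(p_end, p_stay, h_in, h_out)

# illustrative numbers and a quick Monte Carlo cross-check
q = stop_probability(0.1, 0.8, 0.9, 0.3)
print(expected_consumptions(0.1, 0.8, 0.9, 0.3))
print(rng.geometric(q, size=200_000).mean())
```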
Then, the optimization problem of maximizing the expected number of consecutive content consumptions is as follows:
(25)
(26)
(27) |
Similarly to Lemmas 1 and 2, can be proved to be increasing with , and the inequality constraint (26) can be converted into an equality constraint, i.e., .
Again, the multivariable function can be maximized by iteratively optimizing the following two-variable subproblem:
(28)
(29)
(30) |
The sequence of updated objective values in (28) is non-decreasing and bounded because . Thus, the algorithm that solves the problem of (25)–(27) by iteratively optimizing the two-variable problem of (28)–(30) for all combinations of and is guaranteed to converge. The whole algorithm is the same as Algorithm 1 except that should be changed into in lines 2, 9, and 10.
V Numerical Results
In the subsequent simulations, contents and categories are considered. The global category popularity follows for . Caching nodes are distributed according to a Poisson point process (PPP) with intensity , and only caching nodes within distance of the user are considered; a sketch of this node model follows the case list below. In addition, , , , and are used for all . We consider three different category structures as follows:
• Case A:
• Case B:
• Case C: .
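For reference, the following sketch reproduces the caching-node vicinity model used in the simulations, i.e., a PPP of caching nodes of which only those within a given radius of the user are useful. The intensity, radius, and caching probability are assumed placeholder values, since the actual simulation parameters are not shown above.

```python
import numpy as np

rng = np.random.default_rng(2)

# assumed illustrative values for the (elided) simulation parameters
intensity = 1e-3     # caching-node density of the PPP (nodes per unit area)
radius = 30.0        # only nodes closer than this to the user are considered

mean_nearby = intensity * np.pi * radius ** 2   # mean number of useful nodes
print("mean nearby nodes:", round(mean_nearby, 2))

# number of nearby caching nodes seen by the user in a few independent drops
print(rng.poisson(mean_nearby, size=5))

# with caching probability b for a content, independent thinning of the PPP
# gives a hit probability of 1 - exp(-mean_nearby * b) for that content
b = 0.4
print("hit prob. for b = 0.4:", round(1 - np.exp(-mean_nearby * b), 3))
```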
[Fig. 2: Cache allocations for each category in Case A as the skew factor varies.]
[Fig. 3: Cache allocations for each category in Cases A, B, and C.]
In Figs. 2 and 3, plots of for every category are shown with and . Case A is considered in Fig. 2. As the skew factor grows, the probability of requesting a content in the preferred category becomes much larger than that of requesting a content in the other categories. Therefore, as increases, more cache size is allocated to categories with relatively large global category popularity in Fig. 2.
In Fig. 3, all plots are obtained with . Since all categories in Case A have the same number of contents, the cache allocations in Case A depend only on the global category popularity. In Case B, the category having a larger global popularity consists of more contents; therefore, more cache size is allocated to it, i.e., in Case B becomes larger than that in Case A. Interestingly, in Case B is also larger than that in Case A. The reason is that is the smallest in Case B, i.e., the individual content popularity within is the largest among all categories. Thus, even though is smaller than the other values, caching multiple contents of is favorable for consecutive content requests. On the other hand, in Case C, is smaller than , and because and should be smaller than . This does not mean that the importance of caching contents in decreases. Rather, it becomes more important because the portion of contents to be stored in caching nodes, i.e., , is larger than in the other cases. By saving cache size for , a larger cache size can be allocated to other categories with low global popularities compared to Case A. Thus, Figs. 2 and 3 show that both the skew factor and the number of contents in each category have a strong impact on the proposed cache allocation rule.
Fig. 4 shows plots of the cache hit probabilities obtained from the problem of (12)–(14) versus . In Fig. 5, the expected numbers of consecutive content consumptions obtained from the problem of (25)–(27) are shown. We compare the proposed scheme with the conventional caching method optimized for one-shot content requests based on the popularity of individual contents in [4, 5]; this comparison scheme is labeled ‘L1’ in the figures. In both figures, the proposed scheme outperforms ‘L1’ for different values of and for each category. As grows, i.e., as the number of caching nodes in the vicinity of the user grows, the performance improvement of the proposed scheme decreases, because the user becomes more likely to find caching nodes that can deliver multiple requested contents even with ‘L1’. The performance gain of the proposed scheme over ‘L1’ is guaranteed when is large. Especially in Fig. 5, when , dominates the term in (22) representing the probability of stopping content consumption; therefore, the advantage of the proposed scheme is not remarkable. As becomes smaller, however, the proposed algorithm is more advantageous for consecutive content consumption than ‘L1’. Thus, by using the proposed scheme, the service provider can create the opportunity for users to consume more contents and stay in the service longer.
[Fig. 4: Cache hit probabilities obtained from the problem of (12)–(14) for the proposed scheme and ‘L1’.]
[Fig. 5: Expected numbers of consecutive content consumptions obtained from the problem of (25)–(27) for the proposed scheme and ‘L1’.]
VI Concluding Remarks
This paper proposes two optimal cache allocation rules for users that consecutively request a random number of contents. The key characteristic that users are likely to consecutively consume contents that are highly related to each other is captured in the proposed scheme by maximizing the cache hit probability for multiple content requests from the same category. Another cache allocation rule, which maximizes the expected number of consecutive content consumptions, is also proposed, as it relates to the benefits for the service provider. The impacts of categorized contents and consecutive content requests on the cache allocation rule are shown by numerical results.
Acknowledgment
This work was supported by NSF under projects NSF CCF-1423140 and NSF CNS-1816699, and Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No.2018-0-00170, Virtual Presence in Moving Objects through 5G).
References
- [1] X. Cheng, J. Liu, and C. Dale, “Understanding the Characteristics of Internet Short Video Sharing: A YouTube-based Measurement Study,” IEEE Trans. on Multimedia, vol. 15, no. 5, pp. 1184–1194, August 2013.
- [2] N. Golrezaei, K. Shanmugam, A. G. Dimakis, A. F. Molisch, and G. Caire, “FemtoCaching: Wireless Video Content Delivery through Distributed Caching Helpers,” in Proc. IEEE INFOCOM, Orlando, FL, USA, 2012.
- [3] K. Shanmugam, N. Golrezaei, A. G. Dimakis, A. F. Molisch, and G. Caire, “FemtoCaching: Wireless Content Delivery Through Distributed Caching Helpers,” IEEE Trans. on Inf. Theory, vol. 59, no. 12, pp. 8402–8413, December 2013.
- [4] B. Blaszczyszyn and A. Giovanidis, “Optimal Geographic Caching in Cellular Networks,” in Proc. IEEE Int’l Conf. on Communications (ICC), London, UK, 2015.
- [5] Z. Chen, N. Pappas, and M. Kountouris, “Probabilistic Caching in Wireless D2D Networks: Cache Hit Optimal Versus Throughput Optimal,” IEEE Commun. Letters, vol. 21, no. 3, pp. 584–587, March 2017.
- [6] M. Ji, G. Caire, and A. F. Molisch, “Wireless Device-to-Device Caching Networks: Basic Principles and System Performance,” IEEE J. Sel. Areas in Commun., vol. 34, no. 1, pp. 176–189, Jan. 2016.
- [7] K. Li, C. Yang, Z. Chen, and M. Tao, “Optimization and Analysis of Probabilistic Caching in -Tier Heterogeneous Networks,” IEEE Trans. Wireless Commun., vol. 17, no. 2, pp. 1283–1297, Feb. 2018.
- [8] M. Choi, J. Kim, and J. Moon, “Wireless Video Caching and Dynamic Streaming Under Differentiated Quality Requirements,” IEEE J. Sel. Areas in Commun., vol. 36, no. 6, pp. 1245–1257, June 2018.
- [9] D. Ko, B. Hong, and W. Choi, “Probabilistic Caching Based on Maximum Distance Separable Code in a User-Centric Clustered Cache-Aided Wireless Network,” IEEE Trans. Wireless Commun., vol. 18, no. 3, pp. 1792–1804, March 2019.
- [10] M. Cha, H. Kwak, P. Rodriguez, Y. Ahn, and S. Moon, “Analyzing the Video Popularity Characteristics of Large-Scale User Generated Content Systems,” IEEE/ACM Trans. Network., vol. 17, no. 5, pp. 1357–1370, Oct. 2009.
- [11] R. Zhou, S. Khemmarat, and L. Gao, “The Impact of YouTube Recommendation System on Video Views,” in Proc. ACM IMC, New York, NY, USA, 2010, pp. 404–410.
- [12] M. Choi, D. Kim, D.-J. Han, J. Kim, and J. Moon, “Probabilistic Caching Policy for Categorized Contents and Consecutive User Demands,” in Proc. IEEE Int’l Conf. on Communications (ICC), May 2019.
- [13] M. Lee, A. F. Molisch, N. Sastry and A. Raman, “Individual Preference Probability Modeling and Parameterization for Video Content in Wireless Caching Networks,” IEEE/ACM Trans. Network., 2019.