RPAF: A Reinforcement Prediction-Allocation Framework for Cache Allocation in Large-Scale Recommender Systems
Abstract.
Modern recommender systems are built upon computation-intensive infrastructure, and it is challenging to perform real-time computation for each request, especially in peak periods, due to the limited computational resources. Recommending by user-wise result caches is widely used when the system cannot afford a real-time recommendation. However, it is challenging to allocate real-time and cached recommendations so as to maximize the users’ overall engagement. This paper identifies two key challenges of cache allocation, namely the value-strategy dependency and the streaming allocation, and proposes a reinforcement prediction-allocation framework (RPAF) to address them. RPAF is a reinforcement-learning-based two-stage framework containing prediction and allocation stages. The prediction stage estimates the values of the cache choices considering the value-strategy dependency, and the allocation stage determines the cache choice for each individual request while satisfying the global budget constraint. We show that the training of RPAF is challenged by the globality and strictness of the budget constraint, and a relaxed local allocator (RLA) is proposed to address this issue. Moreover, a PoolRank algorithm is used in the allocation stage to deal with the streaming allocation problem. Experiments show that RPAF significantly improves users’ engagement under computational budget constraints.
1. Introduction
Modern recommender systems are built upon computation-intensive infrastructure (Jiang et al., 2020; Liu et al., 2017; Johnson et al., 2019; Wang et al., 2020). Millions or even billions of users visit the system, leading to huge traffic (Jiang et al., 2020; Zhou et al., 2023). Moreover, a recommender system’s traffic usually differs greatly between peak and off-peak periods, and the computational burden during peak periods is several times that of off-peak periods (Chen et al., 2024), which poses a significant challenge on the computational capacity in peak periods.
The result cache (Jiang et al., 2020; Chen et al., 2024) is widely used in large-scale recommender systems to address this issue. As shown in Figure 1, when the system receives a user’s request, it first provides a real-time recommendation, returning more items than required; the surplus items are put into a result cache. When the user’s next request comes, the system will perform a real-time recommendation again if the traffic does not exceed the system’s affordability. Otherwise, it will recommend items directly from the result cache. In peak periods, the number of cached recommendations is comparable to that of real-time recommendations, and the cache largely mitigates the recommender system’s computational burden (Chen et al., 2024; Zhou et al., 2023; Jiang et al., 2020).


The cached results are typically suboptimal compared to real-time recommendations due to the lack of real-time computation, and allocating the real-time and cached recommendations for each user request is critical to maximizing the overall user experiences under computational budget constraints. Research on computational resource allocation problems (Jiang et al., 2020; Yang et al., 2021) models the allocation problem as a constrained optimization problem. However, the cache allocation problem cannot be regarded as a special case of the computational resource allocation problem due to the following challenges:
• Value-Strategy Dependency. Existing computational resource allocation methods assume that the requests of different time periods are independent and that the values of the computational resources are independent of the allocation strategy. However, such assumptions do not hold in the cache allocation problem. On the one hand, the size of the result cache is finite. Figure 2 shows the users’ average watch time when continuously receiving recommendations from the cache in a short video platform. If the system continuously provides cached recommendations to the same user, the result cache will soon be exhausted, and the user experience will quickly deteriorate. On the other hand, since the system’s choice of whether to use the cache will affect not only the users’ feedback on the current request but also the users’ future actions, the values of the current cache choices also depend on the future cache allocation strategies.
• Streaming Allocation. Existing computational resource allocation approaches solve the allocation problem for a batch of requests in each time period. However, requests in online recommender systems arrive in a streaming manner, and the system needs to determine the cache choice when each individual request arrives while satisfying the global computational budget constraint.
To this end, we introduce a Reinforcement Prediction-Allocation Framework (RPAF). RPAF is a two-stage method with a prediction stage and an allocation stage. The prediction stage uses reinforcement learning to estimate the values under different cache choices considering value-strategy dependency, and the allocation stage uses the estimated values to perform the streaming allocation.
A key challenge in RPAF is dealing with the budget constraint, which is global and strict, within the reinforcement learning model of the prediction stage. To address the globality issue, we introduce a relaxed local allocator (RLA). RLA is a policy function that can be optimized locally and continuously for each request. With RLA, the constrained reinforcement learning problem in the prediction stage is transformed into a computationally tractable form, and the prediction stage provides an estimation of different cache choices considering the global budget constraint. To address the strictness issue, we propose the PoolRank algorithm, which produces the allocation strategy in a streaming manner and satisfies the budget constraint strictly at each time step.
Our contributions are summarized as follows:
• We propose RPAF to perform the cache allocation with value-strategy dependency and streaming allocation.
• We introduce RLA to estimate the value of the cache choices considering the impacts of the allocation strategy.
• We provide a PoolRank algorithm to allocate the real-time and cached recommendations in a streaming manner.
• We validate the effectiveness of RPAF through offline experiments and online A/B tests. RPAF has been deployed online in real-world applications, bringing a notable improvement.
2. Related Work
2.1. Computation Resource Allocation in Recommender Systems
Recommender systems have been widely researched in recent years, but most studies focus on improving business revenue or user experience under sufficient computational resources (Zhu et al., 2018; Pi et al., 2020; Wang et al., 2020). Some studies focus on the computational resource allocation problem in recommender systems, such as DCAF (Jiang et al., 2020), CRAS (Yang et al., 2021), RL-MPCA (Zhou et al., 2023). DCAF and CRAS consider the computational resource allocation problem as a constrained optimization problem solved by linear programming. RL-MPCA discusses the computational resource allocation problem in multi-phase recommender systems. However, existing methods do not consider the value-strategy dependency in the cache allocation problem studied in this paper.
2.2. RL in Recommender Systems
Reinforcement learning (RL) in recommender systems has gained significant attention due to its ability to optimize users’ long-term satisfaction (Cai et al., 2023; Afsar et al., 2022; Chen et al., 2024). Value-based approaches estimate the user’s satisfaction with recommended items from the available candidate set and then select the one predicted to yield the highest satisfaction (Chen et al., 2018; Zhao et al., 2018). Policy-based methods directly learn the recommendation policy and optimize it to increase user satisfaction (Chen et al., 2019; Ma et al., 2020; Chen et al., 2024). RL-based models can capture the temporal dependency of the cache allocation problem, but it is challenging to incorporate the computational budget constraints into RL models. RL-MPCA (Zhou et al., 2023) uses a Deep Q-Network (DQN) with a dual variable to model the constraints, but we will show in this paper that DQN-like value-based methods face challenges when modeling the cache allocation problem with a global and strict budget constraint. In contrast, we propose RPAF with RLA to tackle the abovementioned challenges.
2.3. CMDP
A common approach to solving constrained Markov Decision Processes (CMDPs) is the Lagrangian relaxation method (Tessler et al., 2018; Chow et al., 2018; Dalal et al., 2018; Garcıa and Fernández, 2015; Liu et al., 2021). The constrained optimization problem is reduced to an unconstrained one by augmenting the objective with a sum of the constraint functions weighted by their corresponding Lagrangian multipliers. Then, the Lagrangian multipliers are updated in the dual problem to satisfy the constraints. However, existing methods only consider local constraints under each request and therefore cannot be directly applied to the cache allocation problem, where the budget constraints are global, i.e., defined over all requests at that moment. Moreover, existing methods satisfy the constraints only in expectation, while the budget constraint in our scenario is strict. In contrast, RPAF considers both the globality and the strictness of the budget constraints.


3. Cache Allocation Problem
This section first provides a simplified cache allocation problem with pre-defined value estimation. Then, we discuss the challenges arising from the value-strategy dependency and provide a CMDP modeling of the real cache allocation problem, considering the dependency between the value estimation and the allocation strategy.
3.1. Simplified Cache Allocation Problem
We first consider a simplified cache allocation problem, in which the values of different cache choices are pre-defined. Such a cache allocation problem is a constrained optimization problem, as shown in Figure 3. We denote $\mathcal{U}$ as the set of users. At each time period $t$, some users in $\mathcal{U}$ open the app and send requests to the recommender system. We use $x_{u,t}=1$ to denote that User $u$ sends a request at Time $t$, and $x_{u,t}=0$ to denote that User $u$ is inactive at Time $t$. When $x_{u,t}=1$, the system needs to recommend items to User $u$. There are two choices for the system to provide such a recommendation, represented by the cache action $a_{u,t}\in\{0,1\}$:
• Real-time recommendation. When $a_{u,t}=1$, the system will recommend by real-time computation. Specifically, there will be multiple stages, e.g., retrieval and ranking. In each stage, deep models are used to predict the user’s possible feedback (watch time, follow, like, etc.) and return a list of candidate items, of which the top items are recommended to the user and the rest are put into a result cache for future use.
• Recommendation by the cache. When $a_{u,t}=0$, the system will provide recommendations by the cache. In such cases, the recommender system obtains items directly from the user’s result cache and then updates the result cache. (A minimal sketch of such a result cache is given after this list.)
Real-time recommendations usually perform better than cached recommendations since they can utilize users’ most recent feedback through deep models (Liu et al., 2018). However, the computational burden of real-time recommendation is much larger, and the total number of requests processed by real-time recommendation must not exceed a given budget $C_t$ in each time period $t$. Under the above discussion, the cache allocation problem can be written as:
CacheAlloc-Simplified:
(1) $\max_{\{a_{u,t}\}} \sum_{t}\sum_{u\in\mathcal{U}} x_{u,t}\, v_{u,t}(a_{u,t})$
(2) s.t. $\sum_{u\in\mathcal{U}} x_{u,t}\, a_{u,t} \le C_t, \quad \forall t$
(3) $a_{u,t} \in \{0,1\}, \quad \forall u, t$
where $v_{u,t}(a_{u,t})$ means the value of User $u$ at Time $t$ when the system chooses the cache action $a_{u,t}$, and $C_t$ is the maximum number of requests that can be processed by real-time recommendation in each time period. The value $v_{u,t}$ can be any user feedback according to the concrete scenario, such as the usage time in news/short-video recommendation and the revenue in advertising systems.
Suppose we have obtained the values $v_{u,t}(0)$ and $v_{u,t}(1)$ for each request; then the cache allocation problem in Eqs. (1)-(3) can be easily solved:
Proposition 3.1.
Given $v_{u,t}(a)$ for each $u$ and each $t$, the solution to the CacheAlloc-Simplified is:
(4) $a^*_{u,t} = \mathrm{Top}_{C_t}\big(v_{u,t}(1) - v_{u,t}(0)\big)$
where $\mathrm{Top}_{C_t}(\cdot)$ means that $a^*_{u,t}=1$ if $u$ is among the top $C_t$ users with $x_{u,t}=1$ ranked by the given scores, and $a^*_{u,t}=0$ otherwise.
Proof.
See Appendix A.∎
Proposition 3.1 means that the optimal allocation assigns real-time recommendations to the top $C_t$ requests regarding the value difference between real-time and cached recommendations. Such results are also discussed in (Jiang et al., 2020).
Corollary 3.2.
The optimal solution to the CacheAlloc-Simplified satisfies $\sum_{u} x_{u,t}\, a^*_{u,t} = C_t$ for each time period $t$ with at least $C_t$ requests, which means Eq. (2) reaches equality.
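The following sketch illustrates Proposition 3.1 for a single time period under the simplifying assumption that all values $v_{u,t}(0)$ and $v_{u,t}(1)$ are known up front; the function and variable names are illustrative.

```python
import numpy as np

def allocate_simplified(v_real_time, v_cached, budget):
    """Solve CacheAlloc-Simplified for one time period.

    v_real_time[i] and v_cached[i] are the pre-defined values v_{u,t}(1) and v_{u,t}(0)
    of the i-th active request; budget is C_t, the number of real-time slots.
    Returns a 0/1 action vector (1 = real-time recommendation).
    """
    gain = np.asarray(v_real_time) - np.asarray(v_cached)  # v_{u,t}(1) - v_{u,t}(0)
    actions = np.zeros(len(gain), dtype=int)
    actions[np.argsort(-gain)[:budget]] = 1                # top-C_t value differences
    return actions

# Five requests, budget of two real-time recommendations:
print(allocate_simplified([5.0, 3.2, 4.1, 2.0, 6.3], [4.0, 3.0, 1.0, 1.9, 2.3], budget=2))
# gains are [1.0, 0.2, 3.1, 0.1, 4.0], so requests 2 and 4 get real-time computation
```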
3.2. Real Cache Allocation Problem
Although the solution to CacheAlloc-Simplified is simple according to Proposition 3.1, it assumes that the value $v_{u,t}(a_{u,t})$ under each cache choice is pre-defined before we solve the cache allocation problem. Such an assumption is used in existing computational resource allocation methods (Jiang et al., 2020; Lu et al., 2023; Zhou et al., 2023), but it does not hold in the cache allocation problem. Moreover, CacheAlloc-Simplified assumes the cache choices can be determined after the information of all requests is obtained at each time step, which contradicts the streaming manner of online requests. Now, we discuss the two challenges and provide the modeling of the real cache allocation problem.
3.2.1. Value-Strategy Dependency
The value-strategy dependency arises from the users’ consecutive interactions with the recommender system against the finiteness of the cache. When a user opens the app, he/she interacts with the recommender system through multiple requests, and the system responds to each request. Based on the user’s experience with the recommended items, the user decides whether to continue using the app or to leave. Because the cache is finite, continuously providing cached recommendations to the same user’s multiple requests will soon exhaust the result cache and deteriorate the user’s experience, as shown in Figure 2. Therefore, the current cache choice $a_{u,t}$ will affect the future values $v_{u,t'}$ and the future cache choices $a_{u,t'}$ ($t'>t$), which means the value $v_{u,t}$ should contain not only the user’s current feedback but also the user’s future feedback.
Now, we discuss the real cache allocation problem considering the value-strategy dependency. We use RL to model the real cache allocation problem, as shown in Figure 4. Specifically, we model the interaction between users and the recommender system as a Constrained Markov Decision Process (CMDP) (Sutton and Barto, 2018) $(\mathcal{S}, \mathcal{A}, P, R, \mathcal{C}, s_0, \gamma)$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space (in this paper $\mathcal{A}=\{0,1\}$), $P$ is the transition function, $R$ is the reward function, $\mathcal{C}$ is the set of constraints, $s_0$ is the initial state, and $\gamma$ is the discounting factor. When User $u$ opens the app, a session begins, which consists of multiple requests until the user leaves the app. At Step $t$, the recommender system obtains a user state $s_{u,t}$. According to the user state $s_{u,t}$, the system generates an action $a_{u,t}$ and performs the real-time or cached recommendation according to the action. After watching the recommended items, the user provides feedback $r_{u,t}$. Then, the user transfers to the next state $s_{u,t+1}$ and determines whether to send the next request or leave. The constraint set $\mathcal{C}$ contains the computational budget constraints shown in Eq. (2).
The CMDP model considers the impacts of the actions on future user states and rewards. Under the abovementioned discussions, the value to maximize can be defined as the expected cumulative discounted reward:
(5) $v_{u,t}(a_{u,t}) = \mathbb{E}\Big[\sum_{t'\ge t}\gamma^{t'-t}\, r_{u,t'} \,\Big|\, s_{u,t}, a_{u,t}\Big]$
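As a concrete reading of Eq. (5), the short sketch below computes the discounted return of one user session from its per-request rewards; the reward values and discount factor are purely illustrative.

```python
def discounted_return(rewards, gamma=0.9):
    """Return sum over t' >= 0 of gamma^t' * r_{t'} for one session."""
    g = 0.0
    for r in reversed(rewards):   # accumulate from the last request backwards
        g = r + gamma * g
    return g

# A session of four requests with watch-time rewards (seconds):
print(discounted_return([30.0, 25.0, 10.0, 5.0]))  # 30 + 0.9*25 + 0.81*10 + 0.729*5
```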
3.2.2. Streaming Allocation
In the online scenario, the user requests arrive in a streaming manner, and the system needs to make decisions as soon as possible. Therefore, it is impossible to obtain all the requests of the time period $t$ before solving the optimization problem. In other words, the cache choice $a_{u,t}$ should be determined locally by the user state $s_{u,t}$:
(6) $a_{u,t} = \pi_\theta(s_{u,t})$
where $\theta$ is the parameter of the policy.
3.2.3. Real Cache Allocation Problem
Considering the value-strategy dependency and the streaming allocation problem, the real cache allocation problem can be written as:
CacheAlloc-Real:
(7) $\max_\theta \sum_{t}\sum_{u\in\mathcal{U}} x_{u,t}\, v_{u,t}\big(\pi_\theta(s_{u,t})\big)$
s.t. $\sum_{u\in\mathcal{U}} x_{u,t}\, \pi_\theta(s_{u,t}) = C_t, \quad \forall t$
where we have replaced the budget constraint with an equality constraint according to Corollary 3.2.
The CacheAlloc-Real considers the value-strategy dependency by the CMDP modeling. However, solving this problem is still challenging because of the budget constraint and the demand for streaming allocation. The next section proposes the RPAF approach to solve the problem.
4. Methodology

This section proposes RPAF to solve the cache allocation problem while considering the impacts of the value-strategy dependency and streaming allocation. We first provide the overall framework in Sec. 4.1, which contains an RL-based prediction stage for value estimation under value-strategy dependency and an allocation stage to apply the streaming cache allocation. RPAF tackles the value-strategy dependency by the constrained RL modeling, and Sec. 4.2 proposes a relaxed local allocator (RLA) to train the constrained RL. Then, Sec. 4.3 discusses the allocation stage and proposes a PoolRank algorithm to tackle the streaming allocation problem.
4.1. Overall Framework
RPAF, as shown in Figure 5, contains two stages, i.e., a prediction stage to estimate the value under different cache choices considering the value-strategy dependency and an allocation stage to solve the streaming cache allocation problem under the estimated value.
The prediction stage uses the actor-critic structure. Specifically, we let the cache choice $a_{u,t}$ be determined by an actor function $\pi_\theta(s_{u,t})$ parameterized by $\theta$, and define the critic function $Q_\phi(s_{u,t}, a_{u,t})$, parameterized by $\phi$, to estimate the long-term reward in Eq. (7) under the action $a_{u,t}$. Then, the objective of critic learning is:
(8) $\min_\phi \sum_{u,t} x_{u,t}\big(r_{u,t} + \gamma\, Q_{\phi}\big(s_{u,t+1}, \pi_\theta(s_{u,t+1})\big) - Q_\phi(s_{u,t}, a_{u,t})\big)^2$
The objective of actor learning is:
(9) $\max_\theta \sum_{t}\sum_{u} x_{u,t}\, Q_\phi\big(s_{u,t}, \pi_\theta(s_{u,t})\big)$
(10) s.t. $\sum_{u} x_{u,t}\, \pi_\theta(s_{u,t}) = C_t, \quad \forall t$
After the prediction stage, we obtain the estimated critic function $Q_\phi$. Then, in the allocation stage, we solve the following constrained optimization problem:
(11) $\max_{\{a_{u,t}\}} \sum_{u} x_{u,t}\, Q_\phi(s_{u,t}, a_{u,t})$
s.t. $\sum_{u} x_{u,t}\, a_{u,t} = C_t, \quad a_{u,t}\in\{0,1\}$
Remark: Why not DQN? It may seem that DQN is more suitable for this discrete decision problem, and there exists research on applying DQN to the computational resource allocation problem (Zhou et al., 2023). The loss function of DQN is
(12) $\min_\phi \mathbb{E}\Big[\big(r_{u,t} + \gamma \max_{a'} Q_\phi(s_{u,t+1}, a') - Q_\phi(s_{u,t}, a_{u,t})\big)^2\Big]$
However, it is challenging to incorporate the global budget constraint, i.e., Eq. (2), into this loss function. Specifically, because real-time recommendations are more likely to perform better than cached recommendations, we usually have $Q_\phi(s, 1) > Q_\phi(s, 0)$, and Eq. (12) degrades to the following form:
(13) $\min_\phi \mathbb{E}\Big[\big(r_{u,t} + \gamma\, Q_\phi(s_{u,t+1}, 1) - Q_\phi(s_{u,t}, a_{u,t})\big)^2\Big]$
which means $Q_\phi$ is trained towards the value of a policy that always chooses real-time recommendation in future time periods. This result is clearly inconsistent with the budget constraint. Recently, there has been some research on constrained DQN and weakly coupled DQN (Kalweit et al., 2020; El Shar and Jiang, 2024; Zhou et al., 2023), which uses the Lagrangian technique to modify the $Q$-function. However, we find that the Lagrangian multiplier is difficult to obtain in online streaming settings (also see the experiments in Sec. 5). Therefore, we do not use DQN in this paper.
The key technique in RPAF is to separate the value estimation and the cache allocation while considering their dependency. The next problem is how to solve RPAF. If the budget constraint in Eq. (10) did not exist, the problem would degenerate into a typical actor-critic structure, for which there is a large body of research on effective algorithms, e.g., DDPG (Lillicrap et al., 2015), TD3 (Fujimoto et al., 2018), and SAC (Haarnoja et al., 2018). However, the budget constraint brings significant challenges to solving the problem:
• Globality. The budget constraint is over all the users at Time $t$, and the system would need to determine all actions jointly according to all the user states, which is computationally unaffordable.
• Strictness. The budget constraint should be satisfied strictly. However, typical constrained RL algorithms only satisfy constraints in expectation, which is unsatisfactory.
We use the relaxed local allocator (RLA) and the PoolRank technique to address these two issues, respectively.
4.2. Prediction Stage with RLA
The motivation of RLA is to transform the global and strict constraint in Eq. (10) into a local and relaxed one so that the problem can be solved by sample-wise gradient descent. We first relax the binary cache action $a_{u,t}\in\{0,1\}$ to a continuous one $\tilde{a}_{u,t}\in[0,1]$, where $\tilde{a}_{u,t}$ can be regarded as the probability of $a_{u,t}=1$. We define RLA as $\tilde{a}_{u,t}=\pi_\theta(s_{u,t})$, i.e., the allocator that determines the relaxed cache action $\tilde{a}_{u,t}$. We reuse the critic $Q_\phi(s_{u,t}, \tilde{a}_{u,t})$ as the estimated long-term reward under the relaxed action $\tilde{a}_{u,t}$. By the probability interpretation of $\tilde{a}_{u,t}$, we have
(14) $Q_\phi(s_{u,t}, \tilde{a}_{u,t}) = \tilde{a}_{u,t}\, Q_\phi(s_{u,t}, 1) + (1-\tilde{a}_{u,t})\, Q_\phi(s_{u,t}, 0)$
By replacing the binary action with RLA, we can rewrite the actor learning in Eqs. (9)-(10) as:
(15) $\max_\theta \sum_{t}\sum_{u} x_{u,t}\, Q_\phi\big(s_{u,t}, \pi_\theta(s_{u,t})\big)$
s.t. $\frac{1}{N_t}\sum_{u} x_{u,t}\, \pi_\theta(s_{u,t}) = \rho_t, \quad \forall t$
Here, we rewrite the constraint by dividing both sides of the equality by the number of requests $N_t=\sum_u x_{u,t}$, so that $\rho_t = C_t / N_t$ is the ratio of real-time recommendations at Time $t$.
With the proposed RLA $\pi_\theta$, we are ready to derive the training process. We consider a penalty function $P(\tilde{a}, \rho)$ satisfying:
• $P(\tilde{a}, \rho)$ reaches its minimum at $\tilde{a}=\rho$.
• $P(\tilde{a}, \rho)$ is convex with regard to $\tilde{a}$.
Typically, $P$ can be chosen as:
• the mean square error (MSE) function: $P(\tilde{a}, \rho) = (\tilde{a}-\rho)^2$;
• the Kullback–Leibler (KL) divergence: $P(\tilde{a}, \rho) = \rho\log\frac{\rho}{\tilde{a}} + (1-\rho)\log\frac{1-\rho}{1-\tilde{a}}$.
(A small numerical sketch of these penalty functions is given after this list.)
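A minimal numerical sketch of the two penalty choices is given below, assuming scalar inputs $\tilde{a}\in(0,1)$ and a target ratio $\rho$; the clipping epsilon is an implementation detail added for numerical stability. The last lines also check the convexity property exploited later in Eq. (17).

```python
import math

def mse_penalty(a_relaxed, rho):
    # Convex in a_relaxed, minimized (value 0) at a_relaxed == rho.
    return (a_relaxed - rho) ** 2

def kl_penalty(a_relaxed, rho, eps=1e-6):
    # KL divergence between Bernoulli(rho) and Bernoulli(a_relaxed);
    # also convex in a_relaxed and minimized (value 0) at a_relaxed == rho.
    a = min(max(a_relaxed, eps), 1.0 - eps)
    return rho * math.log(rho / a) + (1.0 - rho) * math.log((1.0 - rho) / (1.0 - a))

print(mse_penalty(0.7, 0.5), kl_penalty(0.7, 0.5))

# Convexity (Jensen) check used in Eq. (17): P(mean of a, rho) <= mean of P(a, rho).
a_vals, rho = [0.2, 0.9, 0.7], 0.6
print(mse_penalty(sum(a_vals) / 3, rho) <= sum(mse_penalty(a, rho) for a in a_vals) / 3)  # True
```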
By using the penalty function to replace the constraint, we combine the constraint into the objective of the actor learning:
(16) $L(\theta) = -\sum_{t}\sum_{u} x_{u,t}\, Q_\phi\big(s_{u,t}, \pi_\theta(s_{u,t})\big) + \lambda \sum_{t} P\Big(\tfrac{1}{N_t}\sum_{u} x_{u,t}\, \pi_\theta(s_{u,t}),\ \rho_t\Big)$
where $\lambda>0$ is the penalty weight.
By minimizing $L(\theta)$, we obtain the policy $\pi_\theta$ that maximizes the critic function under the budget constraints. However, the penalty term in Eq. (16) still contains a global sum over all users, which is computationally unaffordable. To tackle this challenge, we consider an upper bound of $L(\theta)$. According to the convexity of $P$ with regard to $\tilde{a}$, we have
(17) $P\Big(\tfrac{1}{N_t}\sum_{u} x_{u,t}\, \pi_\theta(s_{u,t}),\ \rho_t\Big) \le \tfrac{1}{N_t}\sum_{u} x_{u,t}\, P\big(\pi_\theta(s_{u,t}), \rho_t\big)$
Then we obtain an upper bound $\bar{L}(\theta)$ satisfying $\bar{L}(\theta)\ge L(\theta)$:
(18) $\bar{L}(\theta) = -\sum_{t}\sum_{u} x_{u,t}\, Q_\phi\big(s_{u,t}, \pi_\theta(s_{u,t})\big) + \lambda \sum_{t} \tfrac{1}{N_t}\sum_{u} x_{u,t}\, P\big(\pi_\theta(s_{u,t}), \rho_t\big)$
By minimizing the upper bound $\bar{L}(\theta)$, the original loss $L(\theta)$ is also bounded from above. Note that the terms of different users $u$ in Eq. (18) are independent. In the training process, for each sample of a user $u$ at the time period $t$ satisfying $x_{u,t}=1$, the actor loss can be written in the local form:
(19) $\ell_{\mathrm{actor}}(\theta) = -Q_\phi\big(s_{u,t}, \pi_\theta(s_{u,t})\big) + \lambda\, P\big(\pi_\theta(s_{u,t}), \rho_t\big)$
and the critic loss of each sample is:
(20) $\ell_{\mathrm{critic}}(\phi) = \big(r_{u,t} + \gamma\, Q_\phi\big(s_{u,t+1}, \pi_\theta(s_{u,t+1})\big) - Q_\phi(s_{u,t}, a_{u,t})\big)^2$
We provide the training algorithm with RLA in Algorithm 1.
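To make Algorithm 1 concrete, the following PyTorch-style sketch shows one possible form of the sample-wise updates in Eqs. (19) and (20) with an MSE penalty. The network sizes, the penalty weight lambda, and the use of PyTorch (the reported training platform is TensorFlow) are assumptions of this sketch rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

STATE_DIM, HIDDEN = 32, 64  # illustrative dimensions

actor = nn.Sequential(nn.Linear(STATE_DIM, HIDDEN), nn.ReLU(),
                      nn.Linear(HIDDEN, 1), nn.Sigmoid())        # RLA: s -> a_tilde in [0, 1]
critic = nn.Sequential(nn.Linear(STATE_DIM + 1, HIDDEN), nn.ReLU(),
                       nn.Linear(HIDDEN, 1))                      # Q(s, a)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=2e-4)

def q_value(state, action):
    return critic(torch.cat([state, action], dim=-1))

def train_step(state, action, reward, next_state, rho, gamma=0.9, lam=1.0):
    """One mini-batch update; rho is the target real-time ratio of the sample's time period."""
    # Critic loss, Eq. (20): TD error towards r + gamma * Q(s', pi(s')).
    with torch.no_grad():
        target = reward + gamma * q_value(next_state, actor(next_state))
    critic_loss = nn.functional.mse_loss(q_value(state, action), target)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor loss, Eq. (19): -Q(s, pi(s)) + lambda * P(pi(s), rho), with an MSE penalty P.
    a_tilde = actor(state)
    actor_loss = (-q_value(state, a_tilde) + lam * (a_tilde - rho) ** 2).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()
    return critic_loss.item(), actor_loss.item()

# Illustrative batch of 8 transitions with target real-time ratio rho = 0.6.
b = 8
train_step(torch.randn(b, STATE_DIM), torch.rand(b, 1), 30 * torch.rand(b, 1),
           torch.randn(b, STATE_DIM), rho=0.6)
```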
4.3. Streaming Allocation with PoolRank
After the prediction stage, we have obtained the critic $Q_\phi$. Because the binary action $a_{u,t}\in\{0,1\}$ is a special case of the relaxed action $\tilde{a}_{u,t}$, $Q_\phi$ can also be used in the allocation stage in Eq. (11). Now we discuss the solution to the streaming allocation problem. Existing approaches (Jiang et al., 2020; Zhou et al., 2023) usually use the Lagrangian technique to make the computational resource allocation algorithm work in a streaming fashion, but the convergence of the Lagrangian multiplier is a significant challenge. Luckily, the structure of the cache allocation problem makes it possible to find more effective approaches. Here, we propose the PoolRank algorithm to perform the streaming allocation in the allocation stage of RPAF.
We first show the relationship between the final allocation strategy $a^*_{u,t}$ and the RLA output $\pi_\theta(s_{u,t})$:
Proposition 4.1.
The solution to the allocation problem in Eq. (11) is
(21) $a^*_{u,t} = \mathrm{Top}_{C_t}\big(\pi_\theta(s_{u,t})\big)$
i.e., the real-time recommendations are allocated to the $C_t$ requests with the largest RLA outputs.
Proof.
See Appendix B.∎
Proposition 4.1 shows the relationship between the relaxed allocator and the final allocation result $a^*_{u,t}$: if a request is more desirable for real-time recommendation, RLA will also return a larger value. To illustrate the PoolRank algorithm, we rewrite Eq. (21) as
(22) $a^*_{u,t} = \begin{cases} 1, & \mathrm{Rank}\big(\pi_\theta(s_{u,t}),\ \mathcal{P}_t\big) \le C_t \\ 0, & \text{otherwise} \end{cases}$
where $\mathcal{P}_t$ is the set of all RLA outputs at the time period $t$, and the Rank operator computes the rank of $\pi_\theta(s_{u,t})$ in $\mathcal{P}_t$ ordered from the largest to the smallest. Now, we modify Eq. (22) to a streaming fashion. Specifically, we introduce a detailed time $\tau$, where the time period $t$ is regarded as the interval $[\tau_t, \tau_{t+1})$. Assuming the request comes at the detailed time $\tau\in[\tau_t, \tau_{t+1})$, the rank function in Eq. (22) should be modified to
(23) $a^*_{u,\tau} = \begin{cases} 1, & \mathrm{Rank}\big(\pi_\theta(s_{u,\tau}),\ \mathcal{P}_{[\tau_t, \tau_{t+1})}\big) \le C_t \\ 0, & \text{otherwise} \end{cases}$
where the time of each request is replaced by the detailed time $\tau$, while $\mathcal{P}_{[\tau_t, \tau_{t+1})}$ is still the set of the RLA values of all the requests arriving from $\tau_t$ to $\tau_{t+1}$.
Here, the challenge of streaming allocation is evident: at the detailed time $\tau<\tau_{t+1}$ we cannot obtain $\mathcal{P}_{[\tau_t, \tau_{t+1})}$, because it contains the requests coming after $\tau$. To tackle this challenge, we can approximate $\mathcal{P}_{[\tau_t, \tau_{t+1})}$ by $\mathcal{P}_{[\tau-\Delta, \tau)}$, the RLA values of the requests in the most recent window of length $\Delta=\tau_{t+1}-\tau_t$, because the real-time request volume of this window will not differ much from that of the time period $[\tau_t, \tau_{t+1})$.
We call $\mathcal{P}_{[\tau-\Delta, \tau)}$ the request pool, which contains the RLA values of a set of recent requests. Then, the PoolRank formulation is
PoolRank:
(24) $a^*_{u,\tau} = \begin{cases} 1, & \mathrm{Rank}\big(\pi_\theta(s_{u,\tau}),\ \mathcal{P}_{[\tau-\Delta, \tau)}\big) \le C_t \\ 0, & \text{otherwise} \end{cases}$
The online PoolRank algorithm is shown in Algorithm 2. Besides the main formulation in Eq. (24), we also apply the following techniques to accelerate the computation:
• Bucketization: the rank operator in Eq. (24) can be computed in constant time by bucketization. We first bucketize the RLA values in $[0,1]$ into a fixed number of buckets. Then, a prefix array stores, for each bucket, the number of requests in the pool whose values fall into larger buckets. The rank of a request can then be obtained by a simple lookup, i.e., Line 6 in Algorithm 2. (A simplified sketch of this pooled ranking is given after this list.)
• Dual-Buffer Mechanism: note that the prefix array can be computed asynchronously. We use one buffer of the prefix array in the online process and switch to the other buffer once a newly computed prefix array is available. This mechanism avoids updating the prefix array online, which significantly accelerates the PoolRank process.
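The following Python sketch illustrates the PoolRank idea of Eq. (24) with a bucketized request pool. The bucket count, pool window, and class interface are assumptions made for illustration; they are not the exact production algorithm, which additionally uses the asynchronous dual-buffer refresh described above.

```python
import numpy as np
from collections import deque

class PoolRank:
    """Streaming cache allocation: choose real-time iff the RLA score ranks within the
    top C_t of the recent request pool."""

    def __init__(self, budget, pool_size=10000, n_buckets=1000):
        self.budget = budget                    # C_t: real-time slots per time period
        self.n_buckets = n_buckets
        self.pool = deque(maxlen=pool_size)     # recent RLA outputs (the request pool)
        self.prefix = np.zeros(n_buckets + 1)   # prefix[b]: #requests in buckets >= b

    def _bucket(self, score):
        return min(int(score * self.n_buckets), self.n_buckets - 1)

    def refresh(self):
        # Recompute the prefix array from the pool (done asynchronously in practice).
        counts = np.zeros(self.n_buckets + 1)
        for s in self.pool:
            counts[self._bucket(s)] += 1
        self.prefix = np.cumsum(counts[::-1])[::-1]   # suffix sums give "larger than" counts

    def decide(self, rla_score):
        # Approximate rank of this request among the recent pool; 1 = real-time.
        rank = self.prefix[self._bucket(rla_score)]
        self.pool.append(rla_score)
        return 1 if rank < self.budget else 0

# Usage: incoming requests are decided one by one from their RLA outputs.
allocator = PoolRank(budget=4500)
allocator.pool.extend(np.random.rand(8000))
allocator.refresh()
print([allocator.decide(s) for s in [0.95, 0.40, 0.75, 0.10]])
```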
As a summary of this section, we provide RPAF to solve the cache allocation problem. The prediction stage utilizes RLA to learn the critic and policy function, as shown in Algorithm 1, while the allocation stage utilizes PoolRank to perform a streaming allocation, as shown in Algorithm 2. The next section will validate the effectiveness of RPAF by offline and online experiments.
5. Experiments
Extensive offline and online experiments are conducted to address the following research questions (RQs):
• RQ1: How does RPAF perform compared to other state-of-the-art methods?
• RQ2: Does RLA effectively estimate the value of cache choices while considering the impact of allocation?
• RQ3: Can the PoolRank algorithm better allocate real-time and cached recommendations in a streaming manner?
• RQ4: Can RPAF improve the performance of real-world recommender systems with a significant amount of cached recommendations?
5.1. Offline Experiment Settings
5.1.1. Dataset and Metric
We select the KuaiRand (Gao et al., 2022) dataset as the offline experimental dataset. KuaiRand is a public dataset from Kuaishou, comprising 27,285 users and 32,038,725 items. It provides contextual features of users and items and various user feedback signals such as watch time, likes, follows, and more. We regard watch time as the user feedback signal, i.e., the reward $r_{u,t}$ represents the watch time of User $u$ at Time $t$. The performance metric is the average WatchTime, i.e., the accumulated watch time per user.
5.1.2. Simulator of Feedback
The KuaiRand dataset provides user feedback on given items, but it does not consider the impact of the cache. To simulate the user’s behavior under different cache choices, we construct a simulator to mimic the interaction between user requests and the recommender system. Upon receiving a user request, the recommender system determines the cache choice $a_{u,t}$. If $a_{u,t}=1$, the system selects the real-time recommendation, including matching and ranking. The real-time recommendation returns a list of items for each request, with the top items being recommended to the user while the remaining items are put into the user’s result cache. When $a_{u,t}=0$, the system chooses the cached recommendation, which returns the top items from the result cache and subsequently updates the result cache by removing these items. If the result cache is empty, the system must select the real-time recommendation. If the computational resource of the current time period has been exhausted, the system must choose the cached recommendation or fail to recommend. When receiving real-time recommendations, the simulator employs a pre-trained model to simulate user feedback. When receiving cached recommendations, the simulator discounts the original predicted feedback by the factor shown in Figure 2. The simulator is used to evaluate the performance of different cache allocation methods.
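A highly simplified sketch of such a simulation loop is shown below; the feedback model, the discount factor, and the number of items per page are placeholders, not the actual simulator used in the experiments.

```python
import random

def simulate_request(user_cache, action, budget_left, k=6,
                     predict_feedback=lambda items: 30.0 * random.random()):
    """Simulate one request and return (watch_time, budget_left).

    action: 1 = real-time, 0 = cached. Forced choices mirror Sec. 5.1.2:
    an empty cache forces real-time, an exhausted budget forces cached.
    """
    if not user_cache and budget_left <= 0:
        return 0.0, budget_left             # no cache and no budget: fail to recommend
    if not user_cache:
        action = 1                          # empty cache: must use real-time
    if budget_left <= 0:
        action = 0                          # budget exhausted: must use the cache
    if action == 1:
        ranked = [f"item_{random.randrange(10_000)}" for _ in range(5 * k)]
        shown, surplus = ranked[:k], ranked[k:]
        user_cache.extend(surplus)          # surplus items refill the result cache
        return predict_feedback(shown), budget_left - 1
    shown = [user_cache.pop(0) for _ in range(min(k, len(user_cache)))]
    cache_discount = 0.8                    # illustrative stand-in for the Figure 2 curve
    return cache_discount * predict_feedback(shown), budget_left
```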
5.1.3. Details
The KuaiRand dataset is an hourly dataset, with the number of requests per hour ranging from 1,000 to 8,000. We set the maximum number of real-time recommendations per hour to 4,500; hence, the peak ratio of cached recommendations is 43.7%. In the RL problem of the prediction stage, the state contains the user profile, browsing records, request context, and the current remaining content in the result cache. The action value indicates the probability that the request should opt for real-time recommendation instead of cached recommendation at the current time step. The reward is the user’s watch time on the recommended items. We keep the same model backbone (i.e., the model architecture of the actors), a five-layer multi-layer perceptron, for all the baselines and the proposed method for a fair comparison. For each experiment, 20 trials are conducted to calculate the mean and standard deviation. The other hyper-parameters are presented in Table 3 in Appendix C.
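As a rough sketch of the actor backbone described above (a five-layer MLP producing the probability of real-time recommendation), one possible PyTorch definition is shown below; the layer widths are assumptions, since the paper only specifies the depth and the action bound.

```python
import torch.nn as nn

def build_actor(state_dim=32, hidden=(256, 128, 64, 32)):
    """Five-layer MLP actor: state features -> probability of real-time recommendation."""
    dims = (state_dim,) + hidden
    layers = []
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        layers += [nn.Linear(d_in, d_out), nn.ReLU()]
    layers += [nn.Linear(dims[-1], 1), nn.Sigmoid()]  # output in [0, 1] (action upper bound 1.0)
    return nn.Sequential(*layers)
```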
Table 1. Offline performance comparison on KuaiRand (WatchTime in seconds; standard deviations in parentheses).
Methods | WatchTime (s)
---|---
Greedy | 1750
CRAS | 1821 (21.8)
DCAF | 1827 (41.2)
RL-MPCA | 1899 (34.9)
RPAF-DDPG-w/o RLA | 1911 (21.3)
RPAF-TD3-w/o RLA | 1969 (22.7)
RPAF-DDPG-KL | 2057 (27.8)
RPAF-DDPG-MSE | 2049 (31.2)
RPAF-TD3-KL | 2128 (19.3)
RPAF-TD3-MSE | 2146 (24.2)
All Real-Time (Ideal) | 2347

5.1.4. Compared Methods
The following methods are compared:
• Greedy: The greedy approach prioritizes real-time recommendations over cached recommendations as long as the current system traffic load allows. Since requests come in a streaming manner, this method always meets the budget constraints.
• DCAF: DCAF (Jiang et al., 2020) formulates the computational resource allocation problem as a constrained optimization problem and subsequently solves it with linear programming algorithms.
• CRAS: CRAS (Yang et al., 2021) also provides a constrained optimization model and solves it by a multi-step feedback control method.
• RL-MPCA (Zhou et al., 2023): RL-MPCA formulates the computational resource allocation problem as a Weakly Coupled MDP problem and solves it with DQN.
• RPAF: The proposed RPAF method. Several design choices are examined in the ablation study of RPAF:
– The RL backbone of the actor-critic structure to learn Eqs. (19)-(20): we choose two backbones, i.e., DDPG (Lillicrap et al., 2015) and TD3 (Fujimoto et al., 2018). Note that the choice of the RL backbone is orthogonal to our work, and we have merely chosen two representative methods as the backbone.
– The penalty function in RLA: we choose three settings, namely the KL divergence (denoted by KL), the MSE penalty (denoted by MSE), and no penalty function (denoted by w/o RLA).
– In the rest of this section, we use RPAF-TD3-MSE to denote the RPAF method with the TD3 backbone and the MSE penalty function; the other notations are similar.


5.2. Performance Comparison (RQ1)
We compare several methods as demonstrated in Table 1. To obtain an upper bound of the performance, an ideal strategy, All Real-Time, is included; this strategy chooses real-time recommendations regardless of the computational burden. As shown in Table 1, all dynamic allocation methods outperform the greedy approach. RPAF outperforms existing approaches, demonstrating the improvement achieved by considering the value-strategy dependency in cache allocation problems. Moreover, the comparison between RPAF with and without RLA demonstrates that RLA provides better performance by estimating the value of cache choices while considering the constraint. A detailed discussion of RLA can be found in Section 5.3. Finally, RPAF with TD3 outperforms RPAF with DDPG, indicating that the RL backbone also impacts performance. Given that RPAF is independent of the RL backbone, any new studies on the actor-critic structure can be integrated into RPAF.
Additionally, Figure 6 illustrates the performance comparison across different time periods within a single day. During off-peak periods, for instance, from 0 AM to 6 AM, the total number of requests does not exceed the computational budget. Consequently, all the methods return real-time recommendations, resulting in the same reward. During peak periods, e.g., from 7 PM to 10 PM, cached recommendations become essential, and the performance of each method varies, of which RPAF achieves the best results.
5.3. Impacts of RLA (RQ2)
This subsection validates the impacts of RLA. We regard RPAF without RLA as RPAF with no penalty function $P$. Figure 7 shows the convergence curve of the average output of the allocator $\pi_\theta$ with and without the penalty function. Without the penalty function, the policy is shown to collapse to $\pi_\theta\equiv 1$, which means that all requests will be processed through real-time recommendation and thus violates the budget constraint. In contrast, RLA with localized penalty functions converges to a reasonable ratio of real-time recommendations. As shown in Table 1 and Figure 8, RPAF with RLA significantly outperforms RPAF without RLA, while the two chosen penalty functions (KL and MSE) yield similar performance, indicating that RPAF does not largely depend on the choice of penalty function in RLA.
5.4. Impacts of PoolRank (RQ3)
Figure 9 shows the number of real-time recommendations per hour under different methods. Requests come in a streaming manner, and the system needs to determine the cache choice immediately upon receipt of a request, while ensuring that the total number of real-time recommendations per hour does not exceed the budget. If the system chooses a real-time recommendation when no budget remains, the request will still be downgraded to a cached recommendation. We also consider RPAF without PoolRank, which determines cache choices solely according to the allocator output $\pi_\theta(s_{u,t})$. It is shown that DCAF, RL-MPCA, and RPAF without PoolRank cannot fully utilize the available real-time traffic, while the introduction of PoolRank enables the full exploitation of computational resources. This phenomenon shows the difference between batch allocation and streaming allocation: in streaming allocation, it is impossible to determine the cache choice according to all the requests in a time period, and existing methods may underestimate the real-time computation resources available in the future. PoolRank, however, achieves better performance due to its utilization of the most recent information on computational resources.

5.5. Online Experiments (RQ4)

We test the proposed RPAF method in Kwai, an online short video application with over 100 million users. The RPAF structure in the online system is shown in Figure 10. The model is trained continuously online: when a user’s session ends, the session’s data is immediately fed into the model for training, and the updated model is instantly deployed to the online service. We deploy RPAF with the TD3 backbone and the MSE penalty function. There are 40% cached recommendations in the peak period. In this scenario, users’ experience is highly consistent with their daily usage time; thus, we regard the watch time per request as the immediate reward $r_{u,t}$. We sequentially conduct a series of experiments related to DCAF, RPAF (without PoolRank), and RPAF (with PoolRank). Each method is tested for 7 consecutive days, with the confidence intervals shown in Table 2.
As shown in Table 2, RPAF obtains a 1.13% increase in daily watch time compared to the baseline and a 0.83% increase compared to DCAF. We emphasize that in our system, a 0.1% improvement holds statistical significance. Furthermore, a 150-day online experiment was conducted during which the experimental group employed DCAF, RPAF (without PoolRank) and RPAF (with PoolRank) in sequence, while the base group used the greedy strategy. Figure 11 plots the comparison results of RPAF with the baseline, where the x-axis represents the number of days since deployment, and the y-axis indicates the percentage difference in the number of daily active users between the experimental group and the baseline group. Figure 11 shows that the users in the experimental group are more likely to stay active on the platform, indicating an improvement in users’ long-term values.
Table 2. Online A/B test results (relative improvement in Daily WatchTime; confidence intervals in brackets).
Methods | Daily WatchTime
---|---
baseline (greedy) | -
DCAF | +0.31% [-0.12%, 0.12%]
RPAF (without PoolRank) | +0.91% [-0.13%, 0.13%]
RPAF (with PoolRank) | +1.13% [-0.11%, 0.11%]

6. Conclusion
This paper proposes RPAF, a cache allocation method that considers the value-strategy dependency and streaming allocation problem. RPAF is an RL-based two-stage framework containing prediction and allocation stages. The RLA is proposed to tackle the globality and strictness of budget constraints in the training of RPAF, and the PoolRank algorithm allocates the real-time and cached recommendations in a streaming manner. RPAF has proven its effectiveness in offline experiments and online A/B tests and has been deployed online in real-world applications, bringing a notable improvement.
References
- Afsar et al. (2022) M Mehdi Afsar, Trafford Crump, and Behrouz Far. 2022. Reinforcement learning based recommender systems: A survey. Comput. Surveys 55, 7 (2022), 1–38.
- Cai et al. (2023) Qingpeng Cai, Zhenghai Xue, Chi Zhang, Wanqi Xue, Shuchang Liu, Ruohan Zhan, Xueliang Wang, Tianyou Zuo, Wentao Xie, Dong Zheng, et al. 2023. Two-stage constrained actor-critic for short video recommendation. In Proceedings of the ACM Web Conference 2023. 865–875.
- Chen et al. (2019) Haokun Chen, Xinyi Dai, Han Cai, Weinan Zhang, Xuejian Wang, Ruiming Tang, Yuzhou Zhang, and Yong Yu. 2019. Large-scale interactive recommendation with tree-structured policy gradient. In Proceedings of the AAAI conference on artificial intelligence, Vol. 33. 3312–3320.
- Chen et al. (2018) Shi-Yong Chen, Yang Yu, Qing Da, Jun Tan, Hai-Kuan Huang, and Hai-Hong Tang. 2018. Stabilizing reinforcement learning in dynamic environment with application to online recommendation. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 1187–1196.
- Chen et al. (2024) Xiaoshuang Chen, Gengrui Zhang, Yao Wang, Yulin Wu, Kaiqiao Zhan, and Ben Wang. 2024. Cache-Aware Reinforcement Learning in Large-Scale Recommender Systems. arXiv preprint arXiv:2401.06470 (2024).
- Chow et al. (2018) Yinlam Chow, Mohammad Ghavamzadeh, Lucas Janson, and Marco Pavone. 2018. Risk-constrained reinforcement learning with percentile risk criteria. Journal of Machine Learning Research 18, 167 (2018), 1–51.
- Dalal et al. (2018) Gal Dalal, Krishnamurthy Dvijotham, Matej Vecerik, Todd Hester, Cosmin Paduraru, and Yuval Tassa. 2018. Safe exploration in continuous action spaces. arXiv preprint arXiv:1801.08757 (2018).
- El Shar and Jiang (2024) Ibrahim El Shar and Daniel Jiang. 2024. Weakly Coupled Deep Q-Networks. Advances in Neural Information Processing Systems 36 (2024).
- Fujimoto et al. (2018) Scott Fujimoto, Herke Hoof, and David Meger. 2018. Addressing function approximation error in actor-critic methods. In International conference on machine learning. PMLR, 1587–1596.
- Gao et al. (2022) Chongming Gao, Shijun Li, Wenqiang Lei, Jiawei Chen, Biao Li, Peng Jiang, Xiangnan He, Jiaxin Mao, and Tat-Seng Chua. 2022. KuaiRec: A fully-observed dataset and insights for evaluating recommender systems. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management. 540–550.
- Garcıa and Fernández (2015) Javier Garcıa and Fernando Fernández. 2015. A comprehensive survey on safe reinforcement learning. Journal of Machine Learning Research 16, 1 (2015), 1437–1480.
- Haarnoja et al. (2018) Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. 2018. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International conference on machine learning. PMLR, 1861–1870.
- Jiang et al. (2020) Biye Jiang, Pengye Zhang, Rihan Chen, Xinchen Luo, Yin Yang, Guan Wang, Guorui Zhou, Xiaoqiang Zhu, and Kun Gai. 2020. Dcaf: a dynamic computation allocation framework for online serving system. arXiv preprint arXiv:2006.09684 (2020).
- Johnson et al. (2019) Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with gpus. IEEE Transactions on Big Data 7, 3 (2019), 535–547.
- Kalweit et al. (2020) Gabriel Kalweit, Maria Huegle, Moritz Werling, and Joschka Boedecker. 2020. Deep constrained q-learning. arXiv preprint arXiv:2003.09398 (2020).
- Lillicrap et al. (2015) Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. 2015. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971 (2015).
- Liu et al. (2018) Qiao Liu, Yifu Zeng, Refuoe Mokhosi, and Haibin Zhang. 2018. STAMP: short-term attention/memory priority model for session-based recommendation. In Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining. 1831–1839.
- Liu et al. (2017) Shichen Liu, Fei Xiao, Wenwu Ou, and Luo Si. 2017. Cascade ranking for operational e-commerce search. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 1557–1565.
- Liu et al. (2021) Yongshuai Liu, Avishai Halev, and Xin Liu. 2021. Policy learning with constraints in model-free reinforcement learning: A survey. In The 30th international joint conference on artificial intelligence (ijcai).
- Lu et al. (2023) Xingyu Lu, Zhining Liu, Yanchu Guan, Hongxuan Zhang, Chenyi Zhuang, Wenqi Ma, Yize Tan, Jinjie Gu, and Guannan Zhang. 2023. GreenFlow: a computation allocation framework for building environmentally sound recommendation system. arXiv preprint arXiv:2312.16176 (2023).
- Ma et al. (2020) Jiaqi Ma, Zhe Zhao, Xinyang Yi, Ji Yang, Minmin Chen, Jiaxi Tang, Lichan Hong, and Ed H Chi. 2020. Off-policy learning in two-stage recommender systems. In Proceedings of The Web Conference 2020. 463–473.
- Pi et al. (2020) Qi Pi, Guorui Zhou, Yujing Zhang, Zhe Wang, Lejian Ren, Ying Fan, Xiaoqiang Zhu, and Kun Gai. 2020. Search-based user interest modeling with lifelong sequential behavior data for click-through rate prediction. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management. 2685–2692.
- Sutton and Barto (2018) Richard S Sutton and Andrew G Barto. 2018. Reinforcement learning: An introduction. MIT press.
- Tessler et al. (2018) Chen Tessler, Daniel J Mankowitz, and Shie Mannor. 2018. Reward constrained policy optimization. arXiv preprint arXiv:1805.11074 (2018).
- Wang et al. (2020) Zhe Wang, Liqin Zhao, Biye Jiang, Guorui Zhou, Xiaoqiang Zhu, and Kun Gai. 2020. Cold: Towards the next generation of pre-ranking system. arXiv preprint arXiv:2007.16122 (2020).
- Yang et al. (2021) Xun Yang, Yunli Wang, Cheng Chen, Qing Tan, Chuan Yu, Jian Xu, and Xiaoqiang Zhu. 2021. Computation resource allocation solution in recommender systems. arXiv preprint arXiv:2103.02259 (2021).
- Zhao et al. (2018) Xiangyu Zhao, Liang Zhang, Zhuoye Ding, Long Xia, Jiliang Tang, and Dawei Yin. 2018. Recommendations with negative feedback via pairwise deep reinforcement learning. In Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining. 1040–1048.
- Zhou et al. (2023) Jiahong Zhou, Shunhui Mao, Guoliang Yang, Bo Tang, Qianlong Xie, Lebin Lin, Xingxing Wang, and Dong Wang. 2023. Rl-mpca: A reinforcement learning based multi-phase computation allocation approach for recommender systems. In Proceedings of the ACM Web Conference 2023. 3214–3224.
- Zhu et al. (2018) Han Zhu, Xiang Li, Pengye Zhang, Guozheng Li, Jie He, Han Li, and Kun Gai. 2018. Learning tree-based deep model for recommender systems. In Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining. 1079–1088.
Appendix A Proof of Proposition 3.1
We prove this by contradiction. Assume the optimal solution $\{a^*_{u,t}\}$ contains two users $u_1$ and $u_2$ with $x_{u_1,t}=x_{u_2,t}=1$, satisfying
(25) $a^*_{u_1,t}=0, \quad a^*_{u_2,t}=1$
(26) $v_{u_1,t}(1) - v_{u_1,t}(0) > v_{u_2,t}(1) - v_{u_2,t}(0)$
Then, we consider a new solution with $a'_{u_1,t}=1$, $a'_{u_2,t}=0$, and the other cache choices unchanged. According to Eq. (26), we have
(27) $\big[v_{u_1,t}(1) + v_{u_2,t}(0)\big] - \big[v_{u_1,t}(0) + v_{u_2,t}(1)\big] = \big[v_{u_1,t}(1) - v_{u_1,t}(0)\big] - \big[v_{u_2,t}(1) - v_{u_2,t}(0)\big] > 0$
which means the new solution achieves a larger objective value, leading to a contradiction.
Appendix B Proof of Proposition 4.1
According to Prop. 3.1, the solution to Eq. (11) is
(28) $a^*_{u,t} = \mathrm{Top}_{C_t}\big(Q_\phi(s_{u,t},1) - Q_\phi(s_{u,t},0)\big)$
Then, it suffices to prove that the optimal RLA output $\pi_\theta(s_{u,t})$ is monotonically increasing with regard to $Q_\phi(s_{u,t},1) - Q_\phi(s_{u,t},0)$. Actually, according to the actor loss in Eq. (19), the optimal relaxed action $\tilde{a}^*_{u,t}=\pi_\theta(s_{u,t})$ satisfies the first-order condition, i.e.,
(29) $-\frac{\partial Q_\phi(s_{u,t}, \tilde{a})}{\partial \tilde{a}}\Big|_{\tilde{a}=\tilde{a}^*_{u,t}} + \lambda\, \frac{\partial P(\tilde{a}, \rho_t)}{\partial \tilde{a}}\Big|_{\tilde{a}=\tilde{a}^*_{u,t}} = 0$
Moreover, according to the definition of the critic function with regard to the RLA in Eq. (14), we have
(30) $\frac{\partial Q_\phi(s_{u,t}, \tilde{a})}{\partial \tilde{a}} = Q_\phi(s_{u,t},1) - Q_\phi(s_{u,t},0)$
Therefore we have
(31) $\lambda\, \frac{\partial P(\tilde{a}, \rho_t)}{\partial \tilde{a}}\Big|_{\tilde{a}=\tilde{a}^*_{u,t}} = Q_\phi(s_{u,t},1) - Q_\phi(s_{u,t},0)$
and the optimal solution in Eq. (29) becomes:
(32) $\tilde{a}^*_{u,t} = \Big(\frac{\partial P(\cdot, \rho_t)}{\partial \tilde{a}}\Big)^{-1}\Big(\frac{Q_\phi(s_{u,t},1) - Q_\phi(s_{u,t},0)}{\lambda}\Big)$
Moreover, since the penalty function $P$ is convex and $\lambda>0$, $\partial P(\tilde{a}, \rho_t)/\partial \tilde{a}$ is monotonically increasing with regard to $\tilde{a}$, and so is its inverse. Therefore, the larger $Q_\phi(s_{u,t},1) - Q_\phi(s_{u,t},0)$ is, the larger the solution $\tilde{a}^*_{u,t}$ is, which finishes the proof.
Appendix C Hyper Parameters
Hyper-parameter | Value
---|---
Optimizer | Adam
Actor Learning Rate | 0.0001
Critic Learning Rate | 0.0002
Number of Agents | 2
Action Dimensions of Actor | 1
Discount Factor | 0.9
Action Upper Bound | 1.0
Replay Buffer Size |
Train Batch Size | 1024
Fine-Tuning | True
Normalized Observations | True
Training Platform | TensorFlow