On Modeling Long-Term User Engagement from Stochastic Feedback
Abstract.
An ultimate goal of recommender systems (RS) is to improve user engagement. Reinforcement learning (RL) is a promising paradigm for this goal, as it directly optimizes the overall performance of sequential recommendation. However, many existing RL-based approaches incur huge computational overhead, because they require storing not only the recommended items but also all other candidate items. This paper proposes an efficient alternative that does not require the candidate items. The idea is to model the correlation between user engagement and items directly from data. Moreover, the proposed approach considers randomness in user feedback and termination behavior, which is ubiquitous in RS but rarely discussed in RL-based prior work. With online A/B experiments on a real-world RS, we confirm the efficacy of the proposed approach and the importance of modeling the two types of randomness.
1. Introduction
Recommender Systems (RS) are software systems that help users discover engaging items. In particular, a sequential RS allows users to consume new items without ceasing, usually by scrolling down the interface of the RS. The main goal of a sequential RS is to increase long-term user engagement. In this regard, myopic decision-making, such as ranking items solely by predicted click-through rate (pCTR), can be suboptimal. For instance, news can have a higher pCTR than drama bloopers, yet drama bloopers prompt more consumption of videos about the drama and its cast.
Reinforcement learning (RL) has emerged as a promising research direction for optimizing long-term user engagement (10.1145/3289600.3290999, 2, 4, 12, 14). Using RL, sequential recommendation is modeled as interaction between an agent and an environment. At each step, the agent receives a state and selects an action. Usually, states are user profiles and their historical activities, and actions are items available for recommendation. After selecting an action the agent receives a scalar reward. The goal of RL is to learn a policy for action selection with which the agent maximizes the sum of rewards received while interacting with the environment. Thus, RL is an appealing paradigm if one uses item utility (e.g., clicks) as rewards, as it naturally optimizes the long-term utility of a sequential RS.
However, training RL models for industrial RS can be challenging. To understand the issue, consider a minimalist example in which an agent selects one item from a set of candidate items for a user. To train a model for pCTR, one needs to record information about the user, the selected item, and the feedback for that item. However, to train an RL model such as the well-known deep Q-network (DQN) (mnih-dqn-2015, 6), one needs to additionally record the candidate item set. This induces huge overhead for industrial applications, as the candidate set usually contains several hundred items. While the overhead can be reduced by learning RL models on item embedding vectors, it is reported that the performance is much worse than learning RL models on items directly (arXiv.2110.11073, 8).
This paper proposes an efficient alternative for optimizing long-term user engagement. The key observation behind the proposed approach is that, in industrial settings, RS are periodically trained on logs generated by some other RS, referred to hereafter as the behavior policy. For applications serving millions of users the behavior policy is usually highly optimized, so by mining the relation between user engagement and the items selected by this policy we can obtain a model for user engagement. Notably, the proposed approach does not require the candidate item set during learning, so it can be integrated into any existing myopic RS effortlessly and without introducing huge overhead.
In addition to the efficiency issue, the proposed approach also considers stochasticity in rewards and in the termination of interactions. RL algorithms such as DQN usually assume that (a) rewards are deterministic given states and actions and (b) interactions last for infinitely many steps. Neither assumption is commonly satisfied in RS. Real-world RS need to serve users with diverse interests, but they only have limited information about users. Consider two different users who have the same feature values. As they have different interests, the feedback they provide for the same item will differ. From the perspective of the RS, this means that the rewards for the same state and action tend to be random. For the same reason, it is unlikely that the two users will interact with the RS for the same number of steps. Unfortunately, neither type of stochasticity has been addressed so far. To model the former, this paper employs the recent distributional RL framework (Bellemare2017, 1). For the latter, this paper extends the distributional RL framework to the random-termination setting and learns a model to predict whether users terminate the interaction.
We evaluated the proposed approach on a real-world industrial RS for short videos that serves millions of users. During a week-long online A/B test, deploying the proposed approach improved the average number of videos viewed by a user by 2.72% and the average duration of video views by 1.59%. Moreover, the experiment also confirmed that modeling stochasticity is essential for effectively modeling long-term user engagement.
The contribution of this paper can be summarized as follows.
•
This paper proposes a simple and efficient approach for modeling long-term user engagement in RS.
•
This paper proposes to consider stochasticity in user feedback and termination when modeling user engagement.
•
This paper evaluates the proposed method on an industrial RS and confirms its efficacy.
The rest of this paper is organized as follows. Section 2 briefly reviews relevant literature, and Section 3 provides technical background. Section 4 describes the proposed approach. Section 5 presents experiments on a real-world RS, and Section 6 concludes the paper.
2. Related Work
Existing attempts at RL-based recommendation can be classified into policy-gradient methods (10.1145/3289600.3290999, 2) and variants of the DQN algorithm (10.1145/3240323.3240374, 13, 14). This paper addresses the efficiency issue and can be considered a complementary approach for resource-constrained scenarios.
In the RS literature, significant effort has been devoted to neural architectures. The proposed approach leverages the DLRM architecture (DLRM19, 7), though it can be combined with other architectures as well. Meanwhile, to model the randomness in rewards, the proposed approach leverages the recent distributional RL framework (Bellemare2017, 1, 3). As for random termination, it has been considered in model-based RL (pmlr-v70-white17a, 9) and modeled with evolutionary algorithms (yoshida2013reinforcement, 11) or meta-gradient descent (NEURIPS2018_2715518c, 10). Our proposed method differs from these approaches in that it learns the discount factor from users' leaving behavior.
3. Preliminaries
Modeling Sequential Recommendation with RL
Sequential interaction between a user and a recommender agent is modeled as a discounted infinite-horizon Markov Decision Process (MDP) $(\mathcal{S}, \mathcal{A}, R, P, \gamma)$. States $s \in \mathcal{S}$ are information about users, which includes features such as interest tags and lists of items consumed in the past few minutes, hours or days. Actions $a \in \mathcal{A}$ are candidate items. In distributional RL, rewards $R(s, a)$ are considered random variables. This paper uses the clicking signal as the reward, so a realization $r = 1$ means the corresponding item is clicked, and $r = 0$ otherwise. The transition dynamics $P(s' \mid s, a)$ governs state transitions, and the discount factor $\gamma \in [0, 1)$ is used for defining value distributions.
Action Value Distribution
We are interested in modeling the sum of rewards obtained during agent-environment interaction, which characterizes long-term user engagement. As we assume rewards to be random variables, the state-action value $Z^\pi(s, a)$, the discounted sum of rewards obtained after selecting action $a$ at state $s$ and following policy $\pi$ afterwards, is also a random variable. Bellemare et al. showed that $Z^\pi$ is the fixed point of the distributional Bellman operator $\mathcal{T}^\pi$, defined as (Bellemare2017, 1):
$$\mathcal{T}^\pi Z(s, a) \overset{D}{=} R(s, a) + \gamma Z(s', a') \qquad (1)$$
where $s' \sim P(\cdot \mid s, a)$ and $a' \sim \pi(\cdot \mid s')$. $U \overset{D}{=} V$ means that the random variable $U$ has the same distribution as the random variable $V$.
Dabney et al. proposed to parameterize $Z^\pi$ with the so-called quantile distributions (Dabney2018, 3). The idea is to learn a parametric function $\theta$ that estimates $N$ evenly spaced quantiles of $Z^\pi(s, a)$. Given transitions $(s, a, r, s')$ generated by $\pi$, they proposed to learn $\theta$ by minimizing the quantile Huber loss:
$$\mathcal{L}(\theta) = \sum_{i=1}^{N} \frac{1}{N} \sum_{j=1}^{N} \rho^{\kappa}_{\hat{\tau}_i}\!\left(\delta_{ij}\right), \qquad \rho^{\kappa}_{\tau}(u) = \left|\tau - \mathbb{1}\{u < 0\}\right| \frac{L_{\kappa}(u)}{\kappa} \qquad (2)$$
where $\theta_i(s, a)$ is the $i$th quantile of the value distribution, and $\hat{\tau}_i = \frac{2i-1}{2N}$ for $i = 1, \dots, N$ are quantile mid-points. For simplicity, $\delta_{ij} = r + \gamma\, \theta^-_j(s', a') - \theta_i(s, a)$. $L_{\kappa}$ is the Huber loss and $\mathbb{1}\{\cdot\}$ is the indicator function. $\kappa$ is a hyper-parameter, and in practice we use $\kappa = 1$. The target network $\theta^-$ has the same neural structure as $\theta$, and its parameters are periodically copied from $\theta$.
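To make Equation 2 concrete, the following NumPy sketch computes the quantile Huber loss for a single transition, assuming the $N$ predicted quantiles and the $N$ target quantiles ($r$ plus the discounted target-network quantiles) are already available; the function names and the exact reduction over quantile pairs are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def huber(u, kappa=1.0):
    """Huber loss L_kappa: quadratic within [-kappa, kappa], linear outside."""
    abs_u = np.abs(u)
    return np.where(abs_u <= kappa, 0.5 * u ** 2, kappa * (abs_u - 0.5 * kappa))

def quantile_huber_loss(pred_quantiles, target_quantiles, kappa=1.0):
    """Quantile Huber loss of Equation 2 for one transition.

    pred_quantiles:   theta_i(s, a), shape (N,)
    target_quantiles: r + gamma * theta'_j(s', a'), shape (N,)
    """
    n = len(pred_quantiles)
    tau_hat = (2 * np.arange(n) + 1) / (2.0 * n)                   # quantile mid-points
    delta = target_quantiles[None, :] - pred_quantiles[:, None]    # pairwise TD errors, (N, N)
    weight = np.abs(tau_hat[:, None] - (delta < 0).astype(float))  # |tau_i - 1{delta_ij < 0}|
    # Average over target quantiles j, sum over predicted quantiles i.
    return np.sum(np.mean(weight * huber(delta, kappa) / kappa, axis=1))
```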
4. Proposed Method
4.1. The Learning Problem
We assume an agent learns from offline data $\mathcal{D}$ generated by a behavior policy $\mu$. $\mathcal{D}$ consists of logs of sequential interactions between users and the agent. We represent the information about users as states and the recommended items as actions. The reward for a decision equals one if the corresponding item is clicked, and zero otherwise. Finally, we organize $\mathcal{D}$ as transitions, each written as $(s, a, r, s')$, where $s$ and $s'$ are two consecutive states, $a$ is the action chosen at $s$, and $r$ is the reward for this decision.
Our goal is to learn a function $f(s, a)$ that predicts the sum of rewards obtained after selecting $a$ at state $s$ and following $\mu$ afterwards, which characterizes the long-term user engagement of selecting $a$. With $f$, we can rank items according to their long-term user engagement.
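For concreteness, the transitions described above could be stored as simple records like the following Python sketch; all field and function names here are illustrative assumptions rather than the production schema.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Transition:
    """One logged decision of the behavior policy (Section 4.1)."""
    state: Dict[str, float]       # user features and recent activity at step t
    action: int                   # ID of the recommended item
    reward: float                 # 1.0 if the item was clicked, 0.0 otherwise
    next_state: Dict[str, float]  # user features at step t + 1
    terminated: bool              # whether the user left afterwards (used in Section 4.3)

def build_transitions(logs: List[dict]) -> List[Transition]:
    """Convert raw interaction logs into (s, a, r, s') transitions.
    Note that no candidate item set is stored, only the item actually shown."""
    return [
        Transition(
            state=log["state"],
            action=log["item_id"],
            reward=1.0 if log["clicked"] else 0.0,
            next_state=log["next_state"],
            terminated=log["left_session"],
        )
        for log in logs
    ]
```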
4.2. The Proposed Approach
The proposed method is based on the following observation. Real-world RS are often highly optimized using both learning-based and rule-based methods, and they are regularly updated to adapt to new users and new items. As a result, users tend to interact with them sequentially, so the data generated by them contain information about items' effect on user engagement. Thus, we propose to mine such information from the data.
Meanwhile, as discussed in Section 1, user feedback is stochastic by nature. Consequently, when modeling user engagement as cumulative user feedback, we have to appropriately address such randomness. The idea of the proposed approach is to estimate $Z^\mu$, the value distribution of the behavior policy $\mu$, from the data $\mathcal{D}$. This can be achieved with Equation 2. With $Z^\mu$, we can use its expectation as a scoring function for long-term user engagement, i.e. $f(s, a) = \mathbb{E}\left[Z^\mu(s, a)\right]$.
We now discuss how to handle random termination. While the discount factor $\gamma$ in an MDP is used to define value functions and value distributions, it can also be interpreted as the probability that the interaction continues after a state-action pair (Littman1994, 5). Thus, $1 - \gamma$ is precisely the probability that the interaction terminates. Our idea is to estimate this probability from data and learn $Z^\mu$ based on the estimate. In consequence, $Z^\mu$ will be aware of the correlation between decisions and the termination of interactions. This leads to a new distributional Bellman operator $\tilde{\mathcal{T}}^\mu$:
$$\tilde{\mathcal{T}}^\mu Z(s, a) \overset{D}{=} R(s, a) + \gamma_c \left(1 - p(s, a)\right) Z(s', a') \qquad (3)$$
where $s' \sim P(\cdot \mid s, a)$ and $a' \sim \mu(\cdot \mid s')$. $p(s, a)$ is the probability that the interaction terminates after $(s, a)$, and $\gamma_c$ is a hyper-parameter. In this operator, the value distribution of the next state is weighted by $\gamma_c (1 - p(s, a))$, which characterizes the probability that the interaction continues after $(s, a)$. Thus, with this operator the learned $Z^\mu$ will be able to model random termination.
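As a minimal sketch of how the operator in Equation 3 could be applied to data, the snippet below forms target quantiles by scaling the next-state quantiles with the estimated continue probability; the function name, the example numbers, and the value of gamma_c are assumptions.

```python
import numpy as np

def termination_weighted_target(reward, next_quantiles, p_terminate, gamma_c=0.95):
    """Target quantiles under the modified operator (Equation 3).

    reward:         click reward r observed for (s, a)
    next_quantiles: N quantiles of Z(s', a') under the behavior policy
    p_terminate:    estimated probability that the interaction ends after (s, a)
    gamma_c:        hyper-parameter bounding the effective discount
    """
    # Future value is weighted by the probability that the interaction continues,
    # so decisions that tend to make users leave receive smaller long-term credit.
    return reward + gamma_c * (1.0 - p_terminate) * next_quantiles

# Example: a click followed by a 30% chance of leaving shrinks the future term.
targets = termination_weighted_target(1.0, np.array([0.2, 0.5, 0.9]), p_terminate=0.3)
```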
One potential issue with $\tilde{\mathcal{T}}^\mu$ is its convergence. In what follows we show that $\tilde{\mathcal{T}}^\mu$ is a contraction in the tabular case, based on the analysis for $\mathcal{T}^\pi$ provided by (Bellemare2017, 1). To begin with, let $F_U$ and $F_V$ be the cumulative distribution functions (CDF) of random variables $U$ and $V$. The $p$-Wasserstein metric between $U$ and $V$ can be written as $d_p(U, V) = \left( \int_0^1 \left| F_U^{-1}(\omega) - F_V^{-1}(\omega) \right|^p d\omega \right)^{1/p}$ (to simplify notation, random variables and their CDFs are conflated whenever possible). For two action-value distributions $Z_1$ and $Z_2$, define $\bar{d}_p(Z_1, Z_2) = \sup_{s, a} d_p(Z_1(s, a), Z_2(s, a))$. Our analysis is based on the following two properties of $d_p$ (Bellemare2017, 1). Suppose $A$ is a random variable independent of $U$ and $V$ and $c$ is a constant; then:
$$d_p(A + U, A + V) \le d_p(U, V), \qquad d_p(cU, cV) \le |c|\, d_p(U, V) \qquad (4)$$
Suppose we have samples $(s, a, r, s')$ generated by policy $\mu$. Then for two action-value distributions $Z_1$ and $Z_2$, and for any state $s$ and action $a$, we can bound the Wasserstein metric after applying $\tilde{\mathcal{T}}^\mu$ as:
$$d_p\!\left(\tilde{\mathcal{T}}^\mu Z_1(s, a),\, \tilde{\mathcal{T}}^\mu Z_2(s, a)\right) \le \gamma_c \left(1 - p(s, a)\right) d_p\!\left(Z_1(s', a'), Z_2(s', a')\right) \le \gamma_c\, \bar{d}_p(Z_1, Z_2) \qquad (5)$$
Hence $\bar{d}_p\!\left(\tilde{\mathcal{T}}^\mu Z_1, \tilde{\mathcal{T}}^\mu Z_2\right) \le \gamma_c\, \bar{d}_p(Z_1, Z_2)$, which means $\tilde{\mathcal{T}}^\mu$ is a $\gamma_c$-contraction in $\bar{d}_p$.
4.3. Practical Algorithm
For a practical algorithm, we estimate $Z^\mu$ and the termination probability simultaneously using the data $\mathcal{D}$. The objective function to be minimized is as follows:
$$\mathcal{L}(\theta, \phi) = \sum_{i=1}^{N} \frac{1}{N} \sum_{j=1}^{N} \rho^{\kappa}_{\hat{\tau}_i}\!\left( r + \gamma_c \left(1 - p_\phi(s, a)\right) \theta^-_j(s', a') - \theta_i(s, a) \right) - \Big[ e \log p_\phi(s, a) + (1 - e) \log\!\left(1 - p_\phi(s, a)\right) \Big] \qquad (6)$$
where $e = 1$ if the interaction terminates at $s'$ and $e = 0$ otherwise. $Z_\theta$ and $p_\phi$ can be parameterized as a multi-objective network, in which they share feature embeddings but have different feedforward networks. Algorithm 1 provides pseudo-code for this algorithm.
Algorithm 1. Inputs: the offline data $\mathcal{D}$ and the size of mini-batches.
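Since the pseudo-code itself is compact, the following PyTorch-style sketch illustrates one mini-batch update of Algorithm 1 under our reading of Equation 6: the model is assumed to return both the quantiles of $Z$ and the termination probability from shared embeddings, and the helper and variable names (train_step, quantile_huber_loss, and so on) are illustrative rather than the actual implementation.

```python
import torch
import torch.nn.functional as F

def quantile_huber_loss(pred, target, kappa=1.0):
    """Batched quantile Huber loss (Equation 2). pred, target: (B, N)."""
    n = pred.shape[1]
    tau_hat = (torch.arange(n, dtype=pred.dtype, device=pred.device) + 0.5) / n
    delta = target.unsqueeze(1) - pred.unsqueeze(2)            # (B, N_pred, N_target)
    huber = torch.where(delta.abs() <= kappa,
                        0.5 * delta ** 2,
                        kappa * (delta.abs() - 0.5 * kappa))
    weight = (tau_hat.view(1, -1, 1) - (delta < 0).float()).abs()
    return (weight * huber / kappa).mean(dim=2).sum(dim=1).mean()

def train_step(model, target_model, batch, optimizer, gamma_c=0.95):
    """One mini-batch update that fits Z_theta and p_phi jointly (Equation 6)."""
    s, a, r, s_next, a_next, e = batch        # e = 1 if the session ended after (s, a)

    pred_quantiles, p_term = model(s, a)      # shared embeddings, two output heads
    with torch.no_grad():
        next_quantiles, _ = target_model(s_next, a_next)
        # Weight the future term by the estimated continue probability (Equation 3).
        target = r.unsqueeze(1) + gamma_c * (1.0 - p_term).unsqueeze(1) * next_quantiles

    loss = quantile_huber_loss(pred_quantiles, target) \
           + F.binary_cross_entropy(p_term, e.float())   # learn p_phi from leave labels

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```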
5. Experiments
This section evaluates the proposed method on an industrial RS that generates a feed of "videos you might also like". In particular, we investigate the following two questions.
(1)
Can we improve the user engagement of a real-world RS by modeling $Z^\mu$?
(2)
Is modeling randomness in rewards and termination beneficial in practice?
5.1. Methodology
We followed the standard protocol for A/B testing. Users were randomly hashed into a control group and test groups. Before the evaluation period, we confirmed via statistical testing that there was no significant difference between the control group and the test groups. Each group had at least 630,000 users.
As our goal is to improve the RS by modeling long-term user engagement, we evaluated the performance of the RS after deploying the algorithms to the system. After deploying an estimator for user engagement, the new ranking function combines the current ranking score of the RS with the estimated engagement, using a weight tuned in preliminary experiments.
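As an illustration of this deployment step, the sketch below blends the production score with the expected engagement (the mean of the learned quantiles) for each candidate and re-ranks accordingly; the additive blend and the weight alpha are assumptions for the example, since the exact combination used online is tuned separately.

```python
from typing import List, Sequence

def rerank(items: List[int],
           base_scores: Sequence[float],
           engagement_quantiles: Sequence[Sequence[float]],
           alpha: float = 0.1) -> List[int]:
    """Re-rank candidates by a blend of the production score and E[Z(s, a)],
    where E[Z(s, a)] is approximated by the mean of the learned quantiles."""
    scored = []
    for item, base, quantiles in zip(items, base_scores, engagement_quantiles):
        long_term = sum(quantiles) / len(quantiles)   # expected long-term engagement
        scored.append((base + alpha * long_term, item))
    return [item for _, item in sorted(scored, key=lambda x: x[0], reverse=True)]
```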
We used metrics for video consumption as evaluation metrics for user engagement; they are listed in Table 1. We report the change in performance after deploying the algorithms to the RS. A positive change indicates that an algorithm improves upon the current online policy.
Metric | Meaning
---|---
video views (VV) | #clicks / #users
impressions (IMP) | #impressions / #users
duration (DUR) | total play duration / #users
CTR | #clicks / #impressions
CVR | total play duration / total length of clicked videos
5.2. Algorithms
control group
A highly optimized RS for short videos used in an industrial application.
Proposed
The proposed approach, which considers randomness in rewards and termination for modeling user engagement.
Random-Reward
A variant of the proposed approach that considers randomness in rewards but not randomness in termination. We refer to this method as RR.
Full-Deterministic
A variant of the proposed approach that directly estimates the expected sum of rewards without considering randomness in rewards or termination. We refer to this method as FD.
5.3. Implementation Details
We used the DLRM structure in the experiments. There were 40 features in total, including user ID, item ID, and item keywords. These discrete features were mapped to embedding vectors. Parameters of the target module were copied from the policy module every 100 gradient steps. The algorithms were trained daily on a single P40 GPU. We used Adam as the optimizer for all three algorithms with the same learning rate. When modeling randomness in rewards, the number of quantiles was set to 200.
Metrics | FD | RR | Proposed
---|---|---|---
VV | 1.19% (0.11) | 1.58% (0.031) | 2.72% ()
IMP | 0.140% (0.86) | -0.265% (0.74) | 0.502% (0.53)
DUR | 0.263% (0.73) | 1.24% (0.10) | 1.59% ()
CTR | 1.05% (0.93) | 1.85% () | 2.21% ()
CVR | 0.410% (0.14) | 1.22% () | 1.07% ()
5.4. Results
Table 2 shows the results of our online evaluation. FD failed to provide any significant improvement over the control group. RR was able to improve VV, CTR and CVR significantly, though its improvement in DUR was not significant. These results imply that users in the RR group were likely to watch slightly more videos than those in the control group, and they show that modeling randomness in rewards is indeed helpful for characterizing items' effect on user engagement. Meanwhile, the proposed method significantly improved VV, DUR, CTR and CVR, showing that randomness in termination is yet another key factor for modeling user engagement. Together, the results in Table 2 confirm the efficacy of the proposed method as an efficient alternative for modeling long-term user engagement in RS.
6. Conclusion
This paper investigates how to model long-term user engagement for RS efficiently. The proposed approach relies on the observation that the behavior policy (i.e., the RS in the production environment) in industrial applications is often highly optimized. So instead of learning an optimal policy from scratch, the proposed model tries to capture the correlation between user engagement and the recommended items in offline data. Although not guaranteed to be optimal, it enjoys computational efficiency and is confirmed to be effective on a real-world system. Moreover, this paper proposes to model randomness in rewards and termination and confirms the importance of these two factors for modeling user engagement.
References
- (1) Marc G. Bellemare, Will Dabney and Rémi Munos “A Distributional Perspective on Reinforcement Learning” In Proceedings of the Thirty-fourth International Conference on Machine Learning Sydney, Australia: PMLR, 2017, pp. 449–458
- (2) Minmin Chen et al. “Top-K Off-Policy Correction for a REINFORCE Recommender System” In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining Melbourne, VIC, Australia: Association for Computing Machinery, 2019, pp. 456–464
- (3) Will Dabney, Mark Rowland, Marc G. Bellemare and Rémi Munos “Distributional Reinforcement Learning With Quantile Regression” In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, 2018, pp. 2892–2901
- (4) Eugene Ie et al. “SlateQ: A Tractable Decomposition for Reinforcement Learning with Recommendation Sets” In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence International Joint Conferences on Artificial Intelligence Organization, 2019, pp. 2592–2599
- (5) Michael L. Littman “Markov Games As a Framework for Multi-agent Reinforcement Learning” In Proceedings of the Eleventh International Conference on International Conference on Machine Learning, 1994, pp. 157–163
- (6) Volodymyr Mnih et al. “Human-level control through deep reinforcement learning” In Nature 518.7540, 2015, pp. 529–533
- (7) Maxim Naumov et al. “Deep Learning Recommendation Model for Personalization and Recommendation Systems”, 2019 arXiv:1906.00091 [cs.IR]
- (8) Kai Wang et al. “RL4RS: A Real-World Benchmark for Reinforcement Learning based Recommender System”, 2021 arXiv:2110.11073 [cs.IR]
- (9) Martha White “Unifying Task Specification in Reinforcement Learning” In Proceedings of the Thirty-Fourth International Conference on Machine Learning Sydney, Australia: PMLR, 2017, pp. 3742–3750
- (10) Zhongwen Xu, Hado P Hasselt and David Silver “Meta-Gradient Reinforcement Learning” In Advances in Neural Information Processing Systems Curran Associates, Inc., 2018
- (11) Naoto Yoshida, Eiji Uchibe and Kenji Doya “Reinforcement learning with state-dependent discount factor” In 2013 IEEE third joint international conference on development and learning and epigenetic robotics, 2013, pp. 1–6 IEEE
- (12) Xiangyu Zhao et al. “Deep Reinforcement Learning for Online Advertising in Recommender Systems” In arXiv preprint arXiv:1909.03602, 2019
- (13) Xiangyu Zhao et al. “Deep Reinforcement Learning for Page-Wise Recommendations” In Proceedings of the 12th ACM Conference on Recommender Systems Vancouver, BC, Canada: Association for Computing Machinery, 2018, pp. 95–103
- (14) Guanjie Zheng et al. “DRN: A Deep Reinforcement Learning Framework for News Recommendation” In Proceedings of the 2018 World Wide Web Conference Lyon, France: International World Wide Web Conferences Steering Committee, 2018, pp. 167–176