Continuous-Time User Preference Modelling
for Temporal Sets Prediction
Abstract
Given a sequence of sets, where each set has a timestamp and contains an arbitrary number of elements, temporal sets prediction aims to predict the elements in the subsequent set. Previous studies for temporal sets prediction mainly focus on the modelling of elements and implicitly represent each user's preference based on his/her interacted elements. However, user preferences are often continuously evolving and the evolutionary trend cannot be fully captured with the indirect learning paradigm of user preferences. To this end, we propose a continuous-time user preference modelling framework for temporal sets prediction, which explicitly models the evolving preference of each user by maintaining a memory bank to store the states of all the users and elements. Specifically, we first construct a universal sequence by arranging all the user-set interactions in a non-descending temporal order, and then chronologically learn from each user-set interaction. For each interaction, we continuously update the memories of the related user and elements based on their currently encoded messages and past memories. Moreover, we present a personalized user behavior learning module to discover user-specific characteristics based on each user's historical sequence, which aggregates the previously interacted elements from dual perspectives according to the user and elements. Finally, we develop a set-batch algorithm to improve the model efficiency, which can create time-consistent batches in advance and achieve 3.5× and 3.0× speedups in the training and evaluation process on average. Experiments on four real-world datasets demonstrate the superiority of our approach over state-of-the-art methods under both transductive and inductive settings. The good interpretability of our method is also shown.
Index Terms:
Temporal sets prediction, continuous-time representation learning, user modelling
1 Introduction
Temporal sets can be formalized as a sequence of sets, where each set carries a timestamp and includes an arbitrary number of elements [1]. In many practical scenarios, the sequential behaviors of users can be treated as temporal sets, for example, customers' purchases of baskets in online shopping [2, 3], patients' prescriptions in intelligent treatments [4, 5], and tourists' visits to points-of-interest in transportation systems [6, 7]. Accurately predicting which elements will appear in the next-period set can help people make better decisions ahead of time [8, 9, 10, 11].

Recently, some researchers have attempted to tackle the problem of temporal sets prediction. One line of research simplified the temporal sets prediction problem by converting it into the conventional prediction of temporal events [12, 6] through a set embedding process [3, 8, 13]. Specifically, [3, 8] and [13] first derived the representations of sets via pooling operations or matrix factorization, and then learned the temporal dependencies in user behaviors by Recurrent Neural Networks (RNNs) [14, 15] and the attention mechanism [16]. Another line of research focused on the learning of elements by designing customized components for temporal sets prediction [9, 10, 11]. In particular, [9] presented a dual sequential network to learn the multi-level (i.e., set-level and element-level) representation of each user's sequence based on the Transformer architecture [17]. [10] first performed graph convolutions on the co-occurrence graphs of elements in the set level to discover element relationships, and then learned the temporal dependencies and shared patterns of elements by the attention mechanism and the gating mechanism. [11] first connected the sequences of different users by constructing a temporal graph and then captured high-order user-element interactions via the element-guided message aggregation mechanism, which is aware of the temporal information.
Although previous methods are insightful for temporal sets prediction, they mainly focus on the modelling of elements and implicitly capture each user’s preference according to the interacted elements. However, user preferences are often continuously evolving in real-world scenarios [18, 19, 20, 21], and thus the evolutionary trend cannot be fully captured by the implicit learning paradigm of user preferences. This motivates us to design a new framework to explicitly model the evolving user preferences, see the bottom of Fig. 1. Moreover, when making the prediction for each user, it is also essential to discover the personalized behaviors based on the historical sequence for better reflecting the user-specific characteristics, see the top of Fig. 1.
In this paper, we propose a Continuous-Time user preference modelling framework for Temporal Sets Prediction, namely CTTSP. Our main idea is to explicitly capture the evolving user preferences in a continuous manner by storing the states of all the users and elements in a memory bank. Given the sequences of all the users, we first sort all the user-set interactions in a non-descending temporal order to construct a universal sequence, and then chronologically learn from each user-set interaction. In particular, for each user-set interaction, our approach first learns the relationships of the related user and elements via message encoders, and then updates their memories based on the encoded messages and previous memories via memory updaters. To exploit each user's unique characteristics, we also devise a personalized user behavior learning component, which adaptively aggregates the historical sequence with the guidance of the user and elements from dual perspectives. Finally, a set-batch algorithm is developed to make our method more efficient, which can create time-consistent batches in advance and achieve average speedups of 3.5× and 3.0× in training and evaluation, respectively. Experiments on four benchmarks not only show that our approach significantly outperforms existing methods under both transductive and inductive settings, but also reveal the good interpretability of our method. Our key contributions include:
• We present a continuous-time user preference modelling framework to explicitly capture the evolving user preferences by maintaining a memory bank to store the states of all the users and elements. When a user-set interaction is observed, we continuously update the related user's and elements' memories using the encoded messages and past memories.
• We learn the personalized behaviors of each user from the historical sequence by adaptively aggregating the interacted elements based on the user and elements.
• A set-batch algorithm is developed to improve the model efficiency, which can create time-consistent batches and accelerate the model speed in the training and evaluation process.
The key difference between previous implicit methods and our explicit approach is whether the model can memorize users' historical states and track the evolving user preferences across time. Previous methods do not design customized modules to maintain the states of users, and they implicitly represent user preferences based on the interacted elements. For each user-set interaction, they need to recompute the user representation from scratch since they lack the ability to memorize users' historical states. Instead, our approach uses a memory bank to explicitly store the states of users and elements, which can be updated to the latest state based on the currently encoded messages and the memorized state of the user. The explicit learning paradigm helps our approach capture the continuously evolving user preferences, which cannot be achieved by previous methods.
We would like to point out that there is another understanding that a continuous-time model should be able to derive user preferences at any given time, which is the objective of methods like Neural Ordinary/Partial Differential Equations [22, 23]. Although insightful, these methods tend to be computationally expensive, especially when using black-box differential equation solvers. Their performance is also sensitive to the choice of numerical solvers. Furthermore, our approach learns from the universal sequence constructed by all the user-set interactions, which differs from the input data format of the above methods. Hence, we conclude that it is challenging to design such a continuous-time method for temporal sets prediction, which could be left as an interesting problem for future research.
2 Related work
2.1 Temporal Sets Prediction
Over the past decades, time series [24, 25] and temporal events [12, 6] have been widely studied in the temporal data mining community. However, as a more complicated type of temporal data, temporal sets have not been fully investigated yet. Formally, temporal sets could be formalized as a sequence of sets, where each set contains an arbitrary number of elements and is associated with a timestamp [1]. In real-world scenarios, temporal sets are ubiquitous and accurate prediction of temporal sets can help people make better decisions in advance [8, 9, 10, 11].
Indeed, the prediction of temporal sets is much more challenging than the predictive modelling of time series and temporal events. The existing methods for temporal sets prediction could be divided into the following two categories. The first category of methods adopted a set embedding phase to denote each set as a vectorized representation, and then fed the sequence of set representations into sequence models to learn the temporal dependencies in user behaviors. For instance, [3] adopted pooling operations to get the fixed-length representation of each basket and leveraged RNNs to learn the dynamic patterns in each customer’s purchasing behaviors. [8] and [13] first obtained each set’s representation by the average pooling or matrix factorization, and then employed RNNs with the attention mechanism to capture the temporal dependencies. The repeated elements in each user’s sequence are also considered. Methods belonging to the second category designed specialized modules to investigate the elements in sets for further improvements. [9] proposed a dual sequential network with the co-transformer architecture to compute both set-level and element-level representations of each user’s sequence. [10] first learned the relationships of elements within each set based on the constructed set-level co-occurrence graphs, and then extended the attention mechanism and the gating mechanism to mine the temporal dependencies and shared patterns among elements. [11] pointed out the necessity of leveraging the collaborative signals for temporal sets prediction and learned element-specific representations for each user on the constructed temporal graph with the usage of temporal information.
Existing methods for temporal sets prediction implicitly capture the user preference according to the interacted elements and thus fail to capture the continuous evolutionary trend of user preference. To cope with this issue, we present a new framework that explicitly models the evolving dynamics in user preference in a continuous-time manner.
2.2 User Modelling
Based on the massive collected data of users, a great number of efforts have been made on user modelling in recent years. Some methods utilized users’ sequences to exploit their dynamics and formalized the task as a sequence learning problem. For example, [2] integrated the Markov chain and matrix factorization to learn the evolving preferences of users. [26] proposed GRU4Rec based on RNNs for session-based recommendation. [18] designed the memory-augmented neural network to capture user sequential behaviors for sequential recommendation. [19] incorporated the knowledge base information into RNN-based memory networks to enhance the model interpretability. [21] proposed a lifelong sequential modelling framework to learn the multi-scale evolving patterns in user behaviors via a hierarchical and periodical updating mechanism. There are also some attempts for user modelling in a non-chronological manner. For click-through rate prediction, [27] computed user representations that are specific to the ads by a local activation unit. [28] and [29] extended the Graph Neural Networks (GNNs) [30, 31, 32] to recommender systems by learning from the user-item interaction graph. They performed graph convolutions to generate embeddings of both users and items, which utilize the graph structure and node feature information simultaneously. [33] removed the feature transformation and nonlinear activation in graph convolutions, and linearly propagated information on the user-item interaction graph.
In this paper, we design a continuous-time user preference modelling framework for temporal sets prediction.
2.3 Continuous-Time Dynamic Graph Learning
A continuous-time dynamic graph is defined as a chronological sequence of events, where each event can correspond to the modifications of an object or the interactions of multiple objects [34]. Representation learning on continuous-time dynamic graphs aims to provide meaningful representations for nodes in the graph with the consideration of graph dynamics. [35] first constructed temporal dependency interaction graphs which are induced from the sequence of interactions, and then captured both global and local graph information with a dynamic message passing neural network. [36] first considered the temporal dependencies via a time encoding function and then aggregated the neighbor information using the temporal graph attention layer to compute time-aware node representations. [37] designed a general framework for continuous-time dynamic graphs by learning evolving node representations across time. To learn from temporal user-item interaction graphs, [20] employed two RNNs to update the embedding of a user and an item and introduced a projection operator to learn the embedding trajectories of users and items. [38] defined a continuous-time bipartite graph of users and items, and designed a temporal collaborative transformer to consider the temporal dynamics inside sequential patterns and mine the underlying collaborative signals. [39] introduced a unified library for facilitating the development of the continuous-time dynamic graph learning field.
However, the above methods are not specifically designed for predicting temporal sets. They fail to capture the relationships between elements within the same set as well as the correlations between the user and the interacted set. In this paper, we propose a customized continuous-time learning framework for temporal sets prediction.
3 Problem Formalization
Let $\mathcal{U}$ and $\mathcal{E}$ denote the collections of users and elements. We use $\mathcal{I} = \{(u_1, S_1, t_1), (u_2, S_2, t_2), \ldots\}$ to represent the observed user-set interactions in chronological order, where $(u_i, S_i, t_i)$ is an interaction event, $u_i \in \mathcal{U}$, $S_i \subseteq \mathcal{E}$, and $t_i \le t_{i+1}$. We use $t_u$ to represent the timestamp of user $u$'s last interacted set. The task of temporal sets prediction aims to predict the elements that will appear in the subsequent interaction set $\hat{S}_u$ of each user $u$ at the next-period timestamp $t_u^+ > t_u$. Our learning paradigm for user $u$ could be represented by

$\hat{S}_u = f\big(\{(u_i, S_i, t_i) \in \mathcal{I} \mid t_i \le t_u\}\big).$

It is worth noticing that our solution integrates the historical behaviors of all the users until timestamp $t_u$ when making predictions for user $u$, who can benefit from exploiting the collaborative signals.
In this paper, compared with the existing methods that implicitly represent each user’s preference based on his/her interacted elements, we aim to design a new framework to explicitly capture the evolving user preference by learning the continuous-time representations of users and elements.
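To make the input format concrete, the following sketch (plain Python with hypothetical toy data; variable names are ours, not the paper's) shows how per-user sequences of timestamped sets can be represented and merged into the single chronologically ordered sequence that our framework learns from:

```python
from typing import List, Set, Tuple

# One interaction event: (user id, set of element ids, timestamp).
Interaction = Tuple[int, Set[int], float]

# Hypothetical per-user sequences of (set, timestamp) pairs.
user_sequences = {
    0: [({3, 7}, 1.0), ({7, 9}, 4.0)],
    1: [({2}, 2.0), ({2, 3, 5}, 3.0)],
}

# Merge all user-set interactions into one universal sequence,
# sorted by timestamp in non-descending order.
universal_sequence: List[Interaction] = sorted(
    ((u, s, t) for u, seq in user_sequences.items() for s, t in seq),
    key=lambda event: event[2],
)

for user, elements, timestamp in universal_sequence:
    print(f"t={timestamp}: user {user} interacted with set {sorted(elements)}")
```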
4 Methodology
This section shows the framework of our CTTSP and presents each component one by one.

As shown in Fig. 2, our approach predicts the next-period set for each user in a continuous manner. To explicitly capture the time-evolving user preferences, we maintain a memory bank for storing the states of all the users and elements. We first construct a universal sequence by arranging all the user-set interactions in a non-descending temporal order, and then learn from each user-set interaction chronologically. For each user-set interaction, we learn the relationships between the user and elements by message encoders, and then update their memories to be the latest by memory updaters using the encoded messages and past memories. We also devise a personalized user behavior learning module to exploit each user’s individual characteristics from dual perspectives, which adaptively aggregates elements in the historical sequence with the guidance of the user and elements. A prediction layer is designed to compute the appearing probabilities of elements in the next-period set by integrating the continuous-time and personalized information. Moreover, we present a set-batch algorithm to improve the model efficiency by creating time-consistent batches in advance with the constraint of temporal orders.
4.1 Continuous-Time User Preference Modelling
We first construct a universal sequence by sorting all the user-set interactions in a non-descending order based on the timestamp of each set (as shown on the left of Fig. 2), and then learn from user-set interactions chronologically for encoding the collaborative signals among different users. To be specific, we maintain a memory bank to explicitly store the states of all the users and elements for tracking the evolving preferences. We initialize the memories of all the users and elements as zero vectors, and update the memories of the related user and elements involved in each user-set interaction according to the following message encoders and memory updaters.
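As a minimal illustration (hypothetical names and sizes; not the authors' exact implementation), the memory bank can be realized as non-trainable tensors that are read and overwritten as interactions arrive:

```python
import torch

num_users, num_elements, hidden_dim = 100, 500, 64

# Memory bank: non-trainable states, initialized as zero vectors.
# They are modified by the (trainable) message encoders and memory
# updaters as user-set interactions are processed chronologically.
user_memory = torch.zeros(num_users, hidden_dim)
element_memory = torch.zeros(num_elements, hidden_dim)

# Timestamps of the last update, used to look up the memories
# right before the current interaction.
user_last_update = torch.zeros(num_users)
element_last_update = torch.zeros(num_elements)
```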
4.1.1 Message Encoders
In a user-set interaction, the involved user and elements are often associated with each other, including the mutual correlations between the user and elements, and the co-occurring relations of elements. Therefore, we design two customized message encoders to learn the above user-element and element-element relationships.
User Message Encoder. Mathematically, we use $\boldsymbol{m}_{u}(t) \in \mathbb{R}^{d}$ and $\boldsymbol{m}_{e}(t) \in \mathbb{R}^{d}$ to denote the memories of user $u$ and element $e$ at timestamp $t$, where $d$ is the hidden dimension. When a user-set interaction $(u_i, S_i, t_i)$ is observed (for better readability, we assume that $u_i$ corresponds to the $i$-th user), the correlation weights of the user $u_i$ and the elements $e \in S_i$ are calculated by the following attention mechanism,

$\alpha_{u_i,e} = \dfrac{\exp\big((\boldsymbol{W}_Q^U \boldsymbol{m}_{u_i}(t_{u_i}^-))^\top \boldsymbol{W}_K^U \boldsymbol{m}_{e}(t_{e}^-)\big)}{\sum_{e' \in S_i} \exp\big((\boldsymbol{W}_Q^U \boldsymbol{m}_{u_i}(t_{u_i}^-))^\top \boldsymbol{W}_K^U \boldsymbol{m}_{e'}(t_{e'}^-)\big)}, \quad (1)$

where $\boldsymbol{W}_Q^U, \boldsymbol{W}_K^U \in \mathbb{R}^{d \times d}$ produce the query and key vectors of users. $t_{u_i}^-$ and $t_{e}^-$ are the last time that user $u_i$ interacted with a set and the last time that element $e$ was involved in a set right before timestamp $t_i$, respectively. Then, we aggregate the elements in $S_i$ for user $u_i$ using the learned weights by

$\boldsymbol{h}_{u_i}(t_i) = \sum_{e \in S_i} \alpha_{u_i,e}\, \boldsymbol{W}_V^U \boldsymbol{m}_{e}(t_{e}^-), \quad (2)$

where $\boldsymbol{W}_V^U \in \mathbb{R}^{d \times d}$ produces the value vectors of users. $\boldsymbol{h}_{u_i}(t_i)$ adaptively selects the information of all the elements in $S_i$, and thus captures the user-element relationships. Finally, the message of user $u_i$ at timestamp $t_i$ is obtained by concatenating $\boldsymbol{m}_{u_i}(t_{u_i}^-)$ and $\boldsymbol{h}_{u_i}(t_i)$, that is,

$\boldsymbol{msg}_{u_i}(t_i) = \boldsymbol{m}_{u_i}(t_{u_i}^-) \,\Vert\, \boldsymbol{h}_{u_i}(t_i). \quad (3)$
Element Message Encoder. The calculation process of the element message encoder is analogous to that of the user message encoder, and the main difference is that we need to additionally design trainable parameters specific to elements, including the element-specific query, key, and value projections. Therefore, we do not elaborate on it in detail. Based on the element message encoder, we could obtain $\boldsymbol{msg}_{e}(t_i)$, which is the message of element $e \in S_i$ at timestamp $t_i$.
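A minimal sketch of the user message encoder is given below (PyTorch; the projection names W_q, W_k, W_v and the softmax attention form are assumptions that follow the description above, not the authors' released code):

```python
import torch
import torch.nn as nn


class UserMessageEncoder(nn.Module):
    """Attention over the elements of one interacted set, then concatenation
    with the user's previous memory (a sketch of Eqs. (1)-(3))."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.W_q = nn.Linear(hidden_dim, hidden_dim, bias=False)  # query of the user
        self.W_k = nn.Linear(hidden_dim, hidden_dim, bias=False)  # keys of the elements
        self.W_v = nn.Linear(hidden_dim, hidden_dim, bias=False)  # values of the elements

    def forward(self, user_mem: torch.Tensor, element_mems: torch.Tensor) -> torch.Tensor:
        # user_mem: (d,), memory of the user right before the interaction.
        # element_mems: (n, d), memories of the n elements in the interacted set.
        query = self.W_q(user_mem)                        # (d,)
        keys = self.W_k(element_mems)                     # (n, d)
        alpha = torch.softmax(keys @ query, dim=0)        # (n,), attention weights, Eq. (1)
        aggregated = alpha @ self.W_v(element_mems)       # (d,), weighted aggregation, Eq. (2)
        return torch.cat([user_mem, aggregated], dim=-1)  # (2d,), concatenated message, Eq. (3)


encoder = UserMessageEncoder(hidden_dim=64)
message = encoder(torch.zeros(64), torch.randn(3, 64))
print(message.shape)  # torch.Size([128])
```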
4.1.2 Memory Updaters
Our approach maintains a memory bank to store the memories of both users and elements, where each memory aims to represent the historical states of the corresponding user or element in a compact format. Note that the memories are not trainable, but they can be modified according to the trainable parameters in the following memory updaters. Each memory will be continuously updated when a new user-set interaction contains the user or the element.
User Memory Updater. For the user-set interaction $(u_i, S_i, t_i)$, we keep the memory of user $u_i$ up to date by a gated updating mechanism that considers both the encoded message and the previous memory as follows,

$\boldsymbol{m}_{u_i}(t_i) = \boldsymbol{z}_{u_i} \odot \big(\boldsymbol{W}_1^U\, \boldsymbol{msg}_{u_i}(t_i)\big) + (\boldsymbol{1} - \boldsymbol{z}_{u_i}) \odot \big(\boldsymbol{W}_2^U\, \boldsymbol{m}_{u_i}(t_{u_i}^-)\big), \quad (4)$

where $\boldsymbol{W}_1^U \in \mathbb{R}^{d \times 2d}$ and $\boldsymbol{W}_2^U \in \mathbb{R}^{d \times d}$ are the transformation matrices to align the dimensions of each user's message and memory. $\odot$ denotes the Hadamard product. $\boldsymbol{z}_{u_i}$ controls the importance of the user message $\boldsymbol{msg}_{u_i}(t_i)$, which is computed by

$\boldsymbol{z}_{u_i} = \sigma\big(\boldsymbol{W}_3^U\, \boldsymbol{msg}_{u_i}(t_i) + \boldsymbol{W}_4^U\, \boldsymbol{m}_{u_i}(t_{u_i}^-) + \boldsymbol{b}^U\big), \quad (5)$

where $\boldsymbol{W}_3^U \in \mathbb{R}^{d \times 2d}$, $\boldsymbol{W}_4^U \in \mathbb{R}^{d \times d}$, and $\boldsymbol{b}^U \in \mathbb{R}^{d}$ are user-specific trainable parameters to calculate the importance of each dimension of the message and memory, and $\sigma(\cdot)$ is the sigmoid function.
Element Memory Updater. The computations of the element memory updater are similar to those of the user memory updater. One difference is that we need to design element-specific trainable parameters, including the element counterparts of the transformation matrices and gate parameters above. Hence, we do not elaborate on them due to space limitations. Based on the element memory updater, we could obtain the memory of element $e \in S_i$ at timestamp $t_i$, which is denoted by $\boldsymbol{m}_{e}(t_i)$.
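The gated memory updater can be sketched as follows (with assumed parameter names and the gate form implied by the description; the memories themselves stay outside the module and are simply overwritten with the returned tensor):

```python
import torch
import torch.nn as nn


class GatedMemoryUpdater(nn.Module):
    """Gated fusion of an encoded message and the previous memory
    (a sketch of Eqs. (4)-(5)); fewer parameters than a full GRU cell."""

    def __init__(self, message_dim: int, hidden_dim: int):
        super().__init__()
        self.msg_to_mem = nn.Linear(message_dim, hidden_dim, bias=False)
        self.mem_to_mem = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.gate_msg = nn.Linear(message_dim, hidden_dim, bias=False)
        self.gate_mem = nn.Linear(hidden_dim, hidden_dim, bias=True)

    def forward(self, message: torch.Tensor, prev_memory: torch.Tensor) -> torch.Tensor:
        # Gate deciding, per dimension, how much of the new message to keep, Eq. (5).
        z = torch.sigmoid(self.gate_msg(message) + self.gate_mem(prev_memory))
        # Gated combination of the aligned message and previous memory, Eq. (4).
        return z * self.msg_to_mem(message) + (1 - z) * self.mem_to_mem(prev_memory)


updater = GatedMemoryUpdater(message_dim=128, hidden_dim=64)
new_memory = updater(torch.randn(128), torch.zeros(64))
print(new_memory.shape)  # torch.Size([64])
```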
Note that our method learns from the universal sequence which is composed of all the user-set interactions, and the memories of users and elements are continuously updated for each user-set interaction. This provides our approach with the ability to exploit the underlying collaborative signals across different users. Take the first two user-set interactions $(u_1, S_1, t_1)$ and $(u_2, S_2, t_2)$ in Fig. 2 as an example: after dealing with $(u_1, S_1, t_1)$, the memory of an element shared by $S_1$ and $S_2$ will contain the information of user $u_1$ and the other elements in $S_1$. Therefore, when learning on $(u_2, S_2, t_2)$, user $u_2$ is able to additionally access the cross-user information of user $u_1$ and the elements in $S_1$ via the memory of the shared element, and thus the latent collaborative signals could be learned.
4.2 Personalized User Behavior Learning
Apart from learning from the universal sequence, we also explore the personalized behaviors of each user via the previously interacted elements from his/her own historical sequence. In particular, for user $u_i$ involved in the interaction $(u_i, S_i, t_i)$, we use $L_{u_i}(t_i)$ to denote the list of elements that user $u_i$ has interacted with up to timestamp $t_i$. It is worth noticing that $L_{u_i}(t_i)$ could contain the same element multiple times, and elements that appear more times are more preferred by user $u_i$. Then, we adaptively aggregate the elements in $L_{u_i}(t_i)$ from dual perspectives, i.e., the user perspective and the element perspective.
4.2.1 User Perspective
Formally, we use $\boldsymbol{p}_{u} \in \mathbb{R}^{d}$ and $\boldsymbol{p}_{e} \in \mathbb{R}^{d}$ to denote the static embeddings of user $u_i$ and element $e$ (note that $\boldsymbol{p}_{u}$ is shared across users while $\boldsymbol{p}_{e}$ is element-specific, which can reduce the model complexity and provide our model with the inductive ability, as validated in the experiments). The static embeddings are randomly initialized from a normal distribution with mean 0 and variance 1 and are optimized in the model training process. Then, we use $\boldsymbol{p}_{u}$ as a query vector to compute the correlation of each element $e_j \in L_{u_i}(t_i)$ by

$\beta_{j}^{U} = \dfrac{\exp\big(\mathrm{ReLU}(\boldsymbol{p}_{u}^\top \boldsymbol{p}_{e_j})\big)}{\sum_{e_k \in L_{u_i}(t_i)} \exp\big(\mathrm{ReLU}(\boldsymbol{p}_{u}^\top \boldsymbol{p}_{e_k})\big)}, \quad (6)$

where $\boldsymbol{p}_{u}^\top \boldsymbol{p}_{e_j}$ denotes the dot product for calculating the similarity of $\boldsymbol{p}_{u}$ and $\boldsymbol{p}_{e_j}$, and $\mathrm{ReLU}(\cdot)$ is the activation function. $\beta_{j}^{U}$ represents the importance of element $e_j$ to user $u_i$. Through the above computation, elements that are more similar to user $u_i$ will be assigned higher weights and contribute more in the subsequent aggregation process.
4.2.2 Element Perspective
We also use the static embedding $\boldsymbol{p}_{e}$ of each candidate element $e$ as a query vector to calculate the correlation of every element $e_j \in L_{u_i}(t_i)$ via

$\beta_{j,e}^{E} = \dfrac{\exp\big(\mathrm{ReLU}(\boldsymbol{p}_{e}^\top \boldsymbol{p}_{e_j})\big)}{\sum_{e_k \in L_{u_i}(t_i)} \exp\big(\mathrm{ReLU}(\boldsymbol{p}_{e}^\top \boldsymbol{p}_{e_k})\big)}. \quad (7)$

$\beta_{j,e}^{E}$ stands for the relevance of element $e_j$ to element $e$, and elements that are more similar to element $e$ become more important.
4.2.3 Dual Perspectives Aggregation
After obtaining the relationships of each element $e_j \in L_{u_i}(t_i)$ to user $u_i$ and element $e$, we then aggregate from the dual perspectives via

$\boldsymbol{g}_{u_i,e}(t_i) = \sum_{e_j \in L_{u_i}(t_i)} \big(\lambda\, \beta_{j}^{U} + (1-\lambda)\, \beta_{j,e}^{E}\big)\, \boldsymbol{p}_{e_j}, \quad (8)$

where $\lambda \in [0, 1]$ is a hyperparameter to control the importance of the user perspective. $\boldsymbol{g}_{u_i,e}(t_i)$ is the aggregation of user $u_i$'s interacted elements until timestamp $t_i$, which is guided by the embeddings of user $u_i$ and element $e$.
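The dual-perspective aggregation can be sketched as below (the static embeddings, the ReLU-then-softmax weighting, and the mixing weight lam are assumptions that follow the description of Eqs. (6)-(8)):

```python
import torch


def dual_perspective_aggregation(
    user_emb: torch.Tensor,       # (d,)   shared static user embedding
    candidate_emb: torch.Tensor,  # (d,)   static embedding of the candidate element e
    history_embs: torch.Tensor,   # (n, d) static embeddings of previously interacted elements
    lam: float = 0.5,             # importance of the user perspective
) -> torch.Tensor:
    # User perspective: similarity of each historical element to the user, Eq. (6).
    beta_user = torch.softmax(torch.relu(history_embs @ user_emb), dim=0)       # (n,)
    # Element perspective: similarity of each historical element to e, Eq. (7).
    beta_elem = torch.softmax(torch.relu(history_embs @ candidate_emb), dim=0)  # (n,)
    # Dual-perspective weights and aggregation, Eq. (8).
    weights = lam * beta_user + (1 - lam) * beta_elem                           # (n,)
    return weights @ history_embs                                               # (d,)


g = dual_perspective_aggregation(torch.randn(64), torch.randn(64), torch.randn(5, 64))
print(g.shape)  # torch.Size([64])
```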
4.3 The Prediction Layer
To provide the appearing probabilities of all the elements in the subsequent set of each user, we design a prediction layer by integrating continuous-time and personalized information. Specifically, the prediction layer first computes the continuous-time and personalized probabilities and then fuses them to make the final predictions.
4.3.1 Continuous-Time Probability Calculation
Through the modelling of continuous-time user preferences, for user-set interaction $(u_i, S_i, t_i)$, we can obtain the updated memories of the related user $u_i$ and each element $e \in S_i$, i.e., $\boldsymbol{m}_{u_i}(t_i)$ and $\boldsymbol{m}_{e}(t_i)$. We can also get the elements that user $u_i$ has previously interacted with but that do not appear in $S_i$, which belong to $\mathrm{set}(L_{u_i}(t_i)) \setminus S_i$, where $\mathrm{set}(\cdot)$ is a function to derive the unique items in the input. We denote the memory of such an element $e'$ before timestamp $t_i$ as $\boldsymbol{m}_{e'}(t_{e'}^-)$. Then, the continuous-time probabilities of elements in the above two parts are calculated as follows.

For element $e \in S_i$, we compute the continuous-time probability that it will appear in the next-period set by the dot product of $\boldsymbol{m}_{u_i}(t_i)$ and $\boldsymbol{m}_{e}(t_i)$ as follows,

$\hat{y}_{u_i,e}^{C}(t_i) = \boldsymbol{m}_{u_i}(t_i)^\top \boldsymbol{m}_{e}(t_i). \quad (9)$

For element $e' \in \mathrm{set}(L_{u_i}(t_i)) \setminus S_i$, we should notice that its memory is not trainable, and directly using it to calculate the dot product with $\boldsymbol{m}_{u_i}(t_i)$ would leave it without access to the gradient. To tackle this issue, we first use a single-layer fully connected network (denoted as $\mathrm{FC}(\cdot)$) to project $\boldsymbol{m}_{e'}(t_{e'}^-)$, and then calculate the dot product of $\boldsymbol{m}_{u_i}(t_i)$ and the projected memory as follows,

$\hat{y}_{u_i,e'}^{C}(t_i) = \boldsymbol{m}_{u_i}(t_i)^\top \mathrm{FC}\big(\boldsymbol{m}_{e'}(t_{e'}^-)\big). \quad (10)$
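As a rough sketch (with the single-layer projection fc assumed from the description), the continuous-time scores of Eqs. (9)-(10) are plain dot products between the user memory and element memories:

```python
import torch
import torch.nn as nn

hidden_dim = 64
fc = nn.Linear(hidden_dim, hidden_dim)  # projection for stale, non-trainable memories

user_mem = torch.randn(hidden_dim)               # m_u(t), just updated for this interaction
current_elem_mems = torch.randn(3, hidden_dim)   # memories of elements in the current set
stale_elem_mems = torch.randn(4, hidden_dim)     # memories of previously seen elements not in the set

score_current = current_elem_mems @ user_mem     # Eq. (9), one score per element in S_i
score_stale = fc(stale_elem_mems) @ user_mem     # Eq. (10), projected so gradients can flow
print(score_current.shape, score_stale.shape)    # torch.Size([3]) torch.Size([4])
```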
4.3.2 Personalized Probability Calculation
According to the personalized user behavior learning component, up to timestamp $t_i$, we can get $\boldsymbol{g}_{u_i,e}(t_i)$ for each element $e$ via Equation (8), which stands for the aggregation of user $u_i$'s historically interacted elements until timestamp $t_i$ with the guidance of user $u_i$ and element $e$. The personalized probability that element $e$ will appear in the next-period set is calculated via

$\hat{y}_{u_i,e}^{P}(t_i) = \boldsymbol{p}_{e}^\top \boldsymbol{W}_P\, \boldsymbol{g}_{u_i,e}(t_i), \quad (11)$

where $\boldsymbol{W}_P \in \mathbb{R}^{d \times d}$ stands for a transformation matrix.
4.3.3 Probabilities Fusion
We predict the next-period set by fusing both the continuous-time and personalized probabilities. Specifically, we utilize a hyperparameter $\gamma \in [0, 1]$ to control the importance of the continuous-time probability by

$\hat{y}_{u_i,e}(t_i) = \gamma \cdot o_{u_i,e}(t_i) \cdot \hat{y}_{u_i,e}^{C}(t_i) + \big(1 - \gamma \cdot o_{u_i,e}(t_i)\big) \cdot \hat{y}_{u_i,e}^{P}(t_i), \quad (12)$

where $\hat{y}_{u_i,e}(t_i)$ stands for the probability that element $e$ will appear in the next-period set of user $u_i$ after timestamp $t_i$. $\boldsymbol{o}_{u_i}(t_i)$ is an indicator vector composed of 0s and 1s, where an entry $o_{u_i,e}(t_i)$ of 1 means the corresponding element $e$ has interacted with user $u_i$ up to timestamp $t_i$; otherwise, the entry is set to 0. We can observe that for an element $e$ that has interacted with user $u_i$ up to timestamp $t_i$, the appearing probability comes from both the continuous-time probability $\hat{y}_{u_i,e}^{C}(t_i)$ and the personalized probability $\hat{y}_{u_i,e}^{P}(t_i)$. For an element $e$ that has not interacted with user $u_i$ up to timestamp $t_i$, the appearing probability only depends on the personalized probability $\hat{y}_{u_i,e}^{P}(t_i)$.
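The fusion step can be sketched as follows (vectorized over the whole element vocabulary; gamma and the indicator construction follow the description of Eq. (12), and the score values are hypothetical):

```python
import torch

gamma = 0.7  # importance of the continuous-time probability

# Hypothetical scores over the full element vocabulary.
continuous_score = torch.tensor([0.9, 0.1, 0.0, 0.4, 0.0, 0.0])
personalized_score = torch.tensor([0.2, 0.3, 0.5, 0.1, 0.6, 0.05])

# Indicator: 1 for elements the user has interacted with up to now, else 0.
interacted = torch.tensor([1.0, 1.0, 0.0, 1.0, 0.0, 0.0])

# Eq. (12): interacted elements mix both scores; new elements rely on the
# personalized score only.
final_score = gamma * interacted * continuous_score + (1 - gamma * interacted) * personalized_score
print(final_score)
```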
4.4 Set-batch Algorithm
Our approach constructs a universal sequence in the non-decreasing temporal order and processes each user-set interaction one after another, which makes it time-consuming when handling datasets with long time spans. Classical RNN models assume that different users are independent of each other, so these methods can split the sequences of users into multiple batches and compute on each batch in parallel [26, 40]. However, our approach merges the sequences of users into a universal sequence and thus the users are correlated with each other, making the above technique infeasible. Though [20] proposed an algorithm to accelerate the model efficiency on temporal interaction networks, it can only handle a pairwise user-element interaction and thus fails to handle the user-set interactions since each set contains an arbitrary number of elements.
To this end, we develop a set-batch algorithm to create time-consistent batches in advance and improve our model efficiency. The core of the set-batch algorithm is to satisfy the constraint of temporal orders, such that the memory of any user or any element is updated chronologically. Formally, for any two interactions $(u_i, S_i, t_i)$ and $(u_j, S_j, t_j)$ with $t_i < t_j$: if $u_i = u_j$, then $b\big((u_i, S_i, t_i)\big) < b\big((u_j, S_j, t_j)\big)$. Moreover, if $S_i \cap S_j \neq \emptyset$, then $b\big((u_i, S_i, t_i)\big) < b\big((u_j, S_j, t_j)\big)$. Here, $b(\cdot)$ is a function that returns the index of the batch that contains a given interaction.
We show the set-batch algorithm in Algorithm 1. In particular, the set-batch algorithm takes the universal non-decreasing sequence $\mathcal{I}$ as the input and provides the batched data $\mathcal{B} = [B_1, \ldots, B_M]$ as the output (for better understanding, we prescribe that the indices of a list start with 1 rather than 0), where $M$ denotes the number of batches. Set-batch first initializes the batched data and two index tables in line 1 and line 2, where the two tables record the largest index of the batch that contains each user and each element, respectively. Then, set-batch chronologically processes each user-set interaction in line 3. It obtains the index of the batch that the interaction should be inserted into from line 4 to line 18 and appends the interaction to the corresponding position from line 19 to line 22. Finally, the indices of the related user and elements in the two tables are updated from line 23 to line 26. An example of the set-batch algorithm is shown on the left of Fig. 2.
The complexity of our set-batch algorithm is $O(|\mathcal{I}|)$, which is linear in the number of user-set interactions in the sequence $\mathcal{I}$. We do not need to manually predetermine $M$ because set-batch can automatically provide $M$ according to different datasets. The value of $M$ ranges from 1 to $|\mathcal{I}|$. The extreme cases are: 1) all the user-set interactions have unique users and elements, and then $M$ is equal to 1; 2) all the user-set interactions involve the same user or the same element, and then $M$ is equal to $|\mathcal{I}|$. It can be verified that the set-batch algorithm satisfies the constraint of temporal orders because it guarantees: 1) each user and each element appears at most once in each batch; and 2) if batches $B_p$ and $B_q$ contain interactions $(u_i, S_i, t_i)$ and $(u_j, S_j, t_j)$ with $t_i < t_j$, and the two interactions share the same user or the same element, then $p < q$.
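The paper's Algorithm 1 is not reproduced here; the following is a compact re-implementation of the batching rule described above (a sketch, not the authors' pseudocode), where each interaction goes into the earliest batch that keeps every user and element in chronological order:

```python
from typing import List, Set, Tuple

Interaction = Tuple[int, Set[int], float]


def set_batch(universal_sequence: List[Interaction]) -> List[List[Interaction]]:
    """Group chronologically sorted user-set interactions into batches such that
    any two interactions sharing a user or an element land in increasing batches."""
    batches: List[List[Interaction]] = []
    last_user_batch = {}     # user id -> largest batch index containing the user
    last_element_batch = {}  # element id -> largest batch index containing the element

    for user, elements, timestamp in universal_sequence:
        # The interaction must come strictly after every earlier batch that
        # touches the same user or any of the same elements.
        candidates = [last_user_batch.get(user, -1)]
        candidates += [last_element_batch.get(e, -1) for e in elements]
        batch_idx = max(candidates) + 1

        if batch_idx == len(batches):
            batches.append([])
        batches[batch_idx].append((user, elements, timestamp))

        last_user_batch[user] = batch_idx
        for e in elements:
            last_element_batch[e] = batch_idx
    return batches


# Interactions 1 and 2 share no user/element, so they can be batched together;
# interaction 3 reuses element 2 and must go into a later batch.
seq = [(0, {1, 2}, 1.0), (1, {3}, 2.0), (2, {2, 4}, 3.0)]
print([len(b) for b in set_batch(seq)])  # [2, 1]
```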
4.5 Model Training Process
To support the calculation on the batched data $\mathcal{B}$, we first perform the padding operation in each batch and then use the masking technique in [17] by setting all the padded values to $-\infty$ before the softmax operations. We treat the task of temporal sets prediction as a multi-label classification problem, where an element corresponds to a label. The ground truth of user $u_i$'s next-period set after timestamp $t_i$ is denoted by $\boldsymbol{y}_{u_i}(t_i) \in \{0, 1\}^{|\mathcal{E}|}$, where an entry of 1 indicates that the corresponding element appears in the next-period set of user $u_i$. The multi-label classification problem is converted into multiple binary classification problems, which are optimized by minimizing the cross-entropy loss,
$\mathcal{L} = -\frac{1}{|\mathcal{E}|} \sum_{(u_i, t_i)} \sum_{e \in \mathcal{E}} \Big[ y_{u_i,e}(t_i) \log \hat{y}_{u_i,e}(t_i) + \big(1 - y_{u_i,e}(t_i)\big) \log \big(1 - \hat{y}_{u_i,e}(t_i)\big) \Big], \quad (13)$

where $y_{u_i,e}(t_i)$ and $\hat{y}_{u_i,e}(t_i)$ denote the ground truth and the predicted probability of element $e$ appearing in the next-period set of user $u_i$ after timestamp $t_i$. We have also tried other widely used loss functions in recommender systems, such as Sampled Softmax [41] and Personalized Ranking with Importance Sampling [42], but they did not show better performance. In the Appendix, Algorithm 2 shows the training process of CTTSP.
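For the multi-label objective of Eq. (13), a minimal sketch using PyTorch's numerically stable binary cross-entropy on raw scores is given below (passing logits rather than probabilities is our assumption; the paper only states that the per-element cross-entropy is minimized):

```python
import torch
import torch.nn.functional as F

# Raw predicted scores over the element vocabulary for one user (e.g., from Eq. (12)).
scores = torch.tensor([2.1, -0.3, 0.7, -1.5, 0.2])
# Multi-hot ground truth: 1 if the element appears in the next-period set.
target = torch.tensor([1.0, 0.0, 1.0, 0.0, 0.0])

# One binary classification problem per element, averaged over the vocabulary.
loss = F.binary_cross_entropy_with_logits(scores, target)
print(loss.item())
```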
5 Experiments
5.1 Descriptions of Datasets
We conduct experiments on four benchmarks:
• JingDong (https://jdata.jd.com/html/detail.html?id=8) contains the actions of users about purchasing, browsing, commenting, following and adding products to shopping carts. We select the purchasing actions in March 2018 and treat products bought on the same day by each user as a set.
• Dunnhumby-Carbo (DC) (https://www.dunnhumby.com/careers/engineering/sourcefiles) records the transactions of households at a retailer over two years. We select transactions in the first 60 days to conduct experiments and treat products purchased on the same day by each household as a set.
• TaFeng (https://www.kaggle.com/chiranjivdas09/ta-feng-grocery-dataset) includes the shopping transactions at a Chinese grocery store from November 2000 to February 2001. Products bought on the same day by each customer are treated as a set.
• TaoBao (https://tianchi.aliyun.com/dataset/dataDetail?dataId=649) records the online user behaviors about purchasing, clicking, marking products as favorites, and adding products to shopping carts. We select the purchasing behaviors and treat categories of products purchased on the same day by each user as a set. This dataset is recorded from November 24, 2017 to December 3, 2017.
We strictly follow [11] to preprocess the datasets. We select the frequent elements that cover 80% of the records in each dataset. We drop the sequences with lengths less than 4 and crop the sequences with lengths more than 20. Note that the TMS dataset in [11] is not used because the absolute temporal information is unavailable, but we additionally use the TaFeng dataset for experiments. We compare our method with the existing methods under both transductive and inductive settings. For the transductive setting, we follow [11] to use the last set, the second last set, and the remaining sets of each user for testing, validation, and training, respectively. For the inductive setting, we follow [10] to randomly divide each dataset into the training, validation, and testing sets with the ratio of 70%, 10%, and 20% across users. We provide statistics of the datasets in the Appendix.
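A sketch of this preprocessing could look as follows (hypothetical data layout; the 80%-coverage and length thresholds come from the description above, while cropping to the most recent 20 sets is an assumption):

```python
from collections import Counter

# Hypothetical raw data: user id -> list of (set of element ids, timestamp).
raw = {
    0: [({1, 2}, 1.0), ({2, 3}, 2.0), ({1}, 3.0), ({4}, 4.0), ({2}, 5.0)],
    1: [({2}, 1.5), ({3, 5}, 2.5), ({2, 3}, 3.5), ({5}, 4.5)],
}

# 1) Keep the most frequent elements that cover 80% of all records.
counts = Counter(e for seq in raw.values() for s, _ in seq for e in s)
total = sum(counts.values())
kept, covered = set(), 0
for element, freq in counts.most_common():
    if covered / total >= 0.8:
        break
    kept.add(element)
    covered += freq

# 2) Filter elements, drop sequences shorter than 4 sets, crop to the last 20 sets.
processed = {}
for user, seq in raw.items():
    filtered = [(s & kept, t) for s, t in seq if s & kept]
    if len(filtered) >= 4:
        processed[user] = filtered[-20:]

print(kept, {u: len(seq) for u, seq in processed.items()})
```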
5.2 Compared Baselines
Our approach is compared with the following baselines:
• TOP uses the most frequent elements in all the users' sequences as the predictions for any user.
• PTOP predicts the most frequent elements in the individual sequence for each user.
• FPMC utilizes the matrix factorization and Markov chain to predict the next-period basket, which can capture the element transitions as well as long-term user preferences [2].
• DREAM first uses pooling operations to embed baskets into vectors, and then feeds the sequence of basket embeddings into an RNN to predict the next-period basket [3].
• TGN is a general dynamic graph learning framework, which computes evolving representations for nodes in the graph [37]. To adapt TGN for the temporal sets prediction task, we treat each user-set interaction as multiple user-element interactions and separately learn from the user-element interactions.
• RUM is a memory-augmented neural network for sequential recommendation, aiming to exploit sequential behaviors of users [18]. We first use the pooling operation to embed each set and then employ RUM to handle the sequences of set embeddings.
• HPMN introduces a hierarchical framework with an updating mechanism for multiple periods, which can capture the multi-scale sequential patterns in the lifelong behavior sequences of users [21]. We incorporate the pooling operations into HPMN to apply it for predicting temporal sets.
• DIN is designed for click-through rate prediction, which learns the representation of user interests with respect to each ad by the attention mechanism [27]. We adapt DIN by splitting each set into separate elements and learning from the sequences of elements.
• Sets2Sets first generates the representations of sets via the average pooling and then captures temporal dependencies with an encoder-decoder framework for predicting multi-period sets. It also considers the repeated patterns in user behaviors [8].
• DSNTSP designs a dual transformer-based architecture to learn both element-level and set-level representations for each user's sequence. A co-transformer module is further proposed to capture the multiple temporal dependencies of elements and sets [9].
• DNNTSP constructs set-level co-occurrence graphs to learn the element relationships and uses the attention mechanism to capture temporal dependencies. The gating mechanism is also employed to discover the shared patterns among elements [10].
• ETGNN first connects the sequences of different users via a temporal graph and then learns from the graph according to an element-guided message aggregation mechanism and a temporal information utilization component [11].
Datasets | Methods | K=10 | K=20 | K=30 | K=40
Recall | NDCG | PHR | Recall | NDCG | PHR | Recall | NDCG | PHR | Recall | NDCG | PHR | ||
JingDong | TOP | 0.1531 | 0.0988 | 0.1574 | 0.1826 | 0.1076 | 0.1926 | 0.2115 | 0.1143 | 0.2207 | 0.2395 | 0.1198 | 0.2484 |
PTOP | 0.2709 | 0.2264 | 0.2905 | 0.2742 | 0.2276 | 0.2935 | 0.2757 | 0.2279 | 0.2954 | 0.2762 | 0.2280 | 0.2964 | |
FPMC | 0.2704 | 0.2109 | 0.2880 | 0.2973 | 0.2182 | 0.3134 | 0.3082 | 0.2207 | 0.3245 | 0.3178 | 0.2226 | 0.3346 | |
DREAM | 0.2888 | 0.2198 | 0.3033 | 0.3373 | 0.2329 | 0.3513 | 0.3637 | 0.2388 | 0.3787 | 0.3757 | 0.2413 | 0.3918 | |
TGN | 0.2359 | 0.1682 | 0.2491 | 0.2817 | 0.1807 | 0.2977 | 0.3090 | 0.1870 | 0.3262 | 0.3280 | 0.1909 | 0.3461 | |
RUM | 0.2921 | 0.2229 | 0.3105 | 0.3269 | 0.2326 | 0.3448 | 0.3429 | 0.2363 | 0.3608 | 0.3560 | 0.2390 | 0.3735 | |
HPMN | 0.2582 | 0.2131 | 0.2762 | 0.2754 | 0.2179 | 0.2948 | 0.2824 | 0.2196 | 0.3026 | 0.2913 | 0.2214 | 0.3115 | |
DIN | 0.3024 | 0.2503 | 0.3213 | 0.3176 | 0.2545 | 0.3379 | 0.3262 | 0.2565 | 0.3461 | 0.3328 | 0.2580 | 0.3519 | |
Sets2Sets | 0.3209 | 0.2497 | 0.3418 | 0.3474 | 0.2571 | 0.3696 | 0.3623 | 0.2604 | 0.3843 | 0.3735 | 0.2627 | 0.3960 | |
DSNTSP | 0.3464 | 0.2734 | 0.3670 | 0.3750 | 0.2820 | 0.3947 | 0.3883 | 0.2852 | 0.4078 | 0.3963 | 0.2869 | 0.4150 | |
DNNTSP | 0.3224 | 0.2458 | 0.3470 | 0.3568 | 0.2551 | 0.3813 | 0.3747 | 0.2594 | 0.3986 | 0.3843 | 0.2613 | 0.4074 | |
ETGNN | 0.3658 | 0.2724 | 0.3885 | 0.4217 | 0.2878 | 0.4460 | 0.4558 | 0.2956 | 0.4780 | 0.4752 | 0.2997 | 0.4959 | |
CTTSP | 0.4078⋆ | 0.2946⋆ | 0.4311⋆ | 0.4507⋆ | 0.3072⋆ | 0.4724⋆ | 0.4704⋆ | 0.3118⋆ | 0.4911⋆ | 0.4895⋆ | 0.3155⋆ | 0.5105⋆ | |
Improv. | 11.48% | 7.75% | 10.97% | 6.88% | 6.74% | 5.92% | 3.20% | 5.48% | 2.74% | 3.01% | 5.27% | 2.94% | |
DC | TOP | 0.1606 | 0.0839 | 0.2326 | 0.2521 | 0.1093 | 0.3430 | 0.3279 | 0.1269 | 0.4251 | 0.3872 | 0.1397 | 0.4906 |
PTOP | 0.4080 | 0.3161 | 0.5039 | 0.4383 | 0.3246 | 0.5389 | 0.4636 | 0.3306 | 0.5663 | 0.4982 | 0.3381 | 0.5980 | |
FPMC | 0.2462 | 0.1991 | 0.3274 | 0.3175 | 0.2191 | 0.4128 | 0.3771 | 0.2332 | 0.4805 | 0.4323 | 0.2451 | 0.5386 | |
DREAM | 0.3159 | 0.2266 | 0.4091 | 0.4102 | 0.2532 | 0.5089 | 0.4813 | 0.2701 | 0.5821 | 0.5427 | 0.2833 | 0.6428 | |
TGN | 0.3230 | 0.2359 | 0.4141 | 0.4232 | 0.2639 | 0.5267 | 0.4914 | 0.2798 | 0.5941 | 0.5445 | 0.2913 | 0.6453 | |
RUM | 0.3281 | 0.2551 | 0.4184 | 0.3955 | 0.2740 | 0.4926 | 0.4486 | 0.2866 | 0.5505 | 0.4967 | 0.2970 | 0.6008 | |
HPMN | 0.2560 | 0.1962 | 0.3403 | 0.3373 | 0.2188 | 0.4357 | 0.4005 | 0.2338 | 0.5044 | 0.4499 | 0.2444 | 0.5575 | |
DIN | 0.3747 | 0.2761 | 0.4752 | 0.4617 | 0.3004 | 0.5673 | 0.5166 | 0.3135 | 0.6222 | 0.5687 | 0.3247 | 0.6717 | |
Sets2Sets | 0.4417 | 0.3169 | 0.5383 | 0.5031 | 0.3342 | 0.6004 | 0.5533 | 0.3459 | 0.6514 | 0.5936 | 0.3546 | 0.6913 | |
DSNTSP | 0.4399 | 0.3201 | 0.5386 | 0.5112 | 0.3303 | 0.6105 | 0.5615 | 0.3522 | 0.6608 | 0.6031 | 0.3612 | 0.7004 | |
DNNTSP | 0.4461 | 0.3176 | 0.5442 | 0.5168 | 0.3374 | 0.6170 | 0.5634 | 0.3483 | 0.6626 | 0.6067 | 0.3575 | 0.7033 | |
ETGNN | 0.4593 | 0.3321 | 0.5582 | 0.5477 | 0.3567 | 0.6454 | 0.6070 | 0.3708 | 0.7009 | 0.6580 | 0.3818 | 0.7468 | |
CTTSP | 0.4672⋆ | 0.3337⋆ | 0.5669⋆ | 0.5576⋆ | 0.3592⋆ | 0.6546⋆ | 0.6209⋆ | 0.3740⋆ | 0.7128⋆ | 0.6691⋆ | 0.3844⋆ | 0.7545⋆ | |
Improv. | 1.72% | 0.48% | 1.56% | 1.81% | 0.70% | 1.43% | 2.29% | 0.86% | 1.70% | 1.69% | 0.68% | 1.03% | |
TaFeng | TOP | 0.0896 | 0.0965 | 0.2644 | 0.1207 | 0.1050 | 0.3463 | 0.1469 | 0.1134 | 0.4105 | 0.1586 | 0.1171 | 0.4391 |
PTOP | 0.1282 | 0.1158 | 0.3770 | 0.1721 | 0.1293 | 0.4685 | 0.1959 | 0.1364 | 0.5110 | 0.2105 | 0.1413 | 0.5336 | |
FPMC | 0.0675 | 0.0553 | 0.2036 | 0.0932 | 0.0633 | 0.2766 | 0.1121 | 0.0692 | 0.3266 | 0.1274 | 0.0738 | 0.3644 | |
DREAM | 0.0977 | 0.0928 | 0.2760 | 0.1214 | 0.1001 | 0.3504 | 0.1404 | 0.1063 | 0.4011 | 0.1585 | 0.1119 | 0.4462 | |
TGN | 0.0987 | 0.0909 | 0.2989 | 0.1251 | 0.0985 | 0.3669 | 0.1414 | 0.1037 | 0.4061 | 0.1558 | 0.1078 | 0.4382 | |
RUM | 0.1115 | 0.0930 | 0.3317 | 0.1473 | 0.1037 | 0.4121 | 0.1699 | 0.1107 | 0.4564 | 0.1855 | 0.1154 | 0.4895 | |
HPMN | 0.0932 | 0.0800 | 0.2711 | 0.1188 | 0.0878 | 0.3454 | 0.1389 | 0.0942 | 0.4000 | 0.1561 | 0.0994 | 0.4383 | |
DIN | 0.0997 | 0.0826 | 0.3005 | 0.1276 | 0.0907 | 0.3751 | 0.1456 | 0.0963 | 0.4204 | 0.1610 | 0.1009 | 0.4551 | |
Sets2Sets | 0.1555 | 0.1309 | 0.4327 | 0.2149 | 0.1499 | 0.5466 | 0.2486 | 0.1609 | 0.6014 | 0.2721 | 0.1683 | 0.6367 | |
DSNTSP | 0.1404 | 0.1153 | 0.4043 | 0.1857 | 0.1291 | 0.4991 | 0.2126 | 0.1376 | 0.5488 | 0.2329 | 0.1437 | 0.5843 | |
DNNTSP | 0.1475 | 0.1160 | 0.4127 | 0.2056 | 0.1350 | 0.5311 | 0.2359 | 0.1452 | 0.5864 | 0.2579 | 0.1522 | 0.6198 | |
ETGNN | 0.1450 | 0.1123 | 0.4151 | 0.2040 | 0.1319 | 0.5327 | 0.2395 | 0.1434 | 0.5916 | 0.2627 | 0.1505 | 0.6235 | |
CTTSP | 0.1618⋆ | 0.1353⋆ | 0.4428⋆ | 0.2249⋆ | 0.1563⋆ | 0.5582⋆ | 0.2603⋆ | 0.1682⋆ | 0.6140⋆ | 0.2883⋆ | 0.1769⋆ | 0.6506⋆ | |
Improv. | 4.05% | 3.36% | 2.33% | 4.65% | 4.27% | 2.12% | 4.71% | 4.54% | 2.10% | 5.95% | 5.11% | 2.18% | |
TaoBao | TOP | 0.1572 | 0.0835 | 0.1987 | 0.2457 | 0.1074 | 0.2964 | 0.3091 | 0.1220 | 0.3637 | 0.3609 | 0.1328 | 0.4208 |
PTOP | 0.1794 | 0.1240 | 0.2187 | 0.1909 | 0.1272 | 0.2328 | 0.1984 | 0.1289 | 0.2424 | 0.2061 | 0.1305 | 0.2517 | |
FPMC | 0.1675 | 0.0959 | 0.2088 | 0.2548 | 0.1196 | 0.3082 | 0.3189 | 0.1343 | 0.3778 | 0.3659 | 0.1440 | 0.4273 | |
DREAM | 0.1665 | 0.0932 | 0.2069 | 0.2566 | 0.1177 | 0.3079 | 0.3185 | 0.1319 | 0.3752 | 0.3663 | 0.1419 | 0.4262 | |
TGN | 0.1665 | 0.0888 | 0.2082 | 0.2560 | 0.1129 | 0.3088 | 0.3226 | 0.1283 | 0.3799 | 0.3727 | 0.1387 | 0.4329 | |
RUM | 0.1671 | 0.1077 | 0.2072 | 0.2301 | 0.1248 | 0.2797 | 0.2734 | 0.1347 | 0.3280 | 0.3096 | 0.1423 | 0.3678 | |
HPMN | 0.1848 | 0.1144 | 0.2281 | 0.2692 | 0.1374 | 0.3235 | 0.3310 | 0.1516 | 0.3900 | 0.3775 | 0.1612 | 0.4394 | |
DIN | 0.2188 | 0.1317 | 0.2671 | 0.3056 | 0.1553 | 0.3623 | 0.3646 | 0.1690 | 0.4255 | 0.4088 | 0.1782 | 0.4716 | |
Sets2Sets | 0.2413 | 0.1488 | 0.2911 | 0.3228 | 0.1710 | 0.3821 | 0.3838 | 0.1850 | 0.4465 | 0.4315 | 0.1950 | 0.4954 | |
DSNTSP | 0.2363 | 0.1431 | 0.2867 | 0.3296 | 0.1685 | 0.3885 | 0.3932 | 0.1832 | 0.4557 | 0.4414 | 0.1932 | 0.5050 | |
DNNTSP | 0.2511 | 0.1535 | 0.3028 | 0.3369 | 0.1769 | 0.3972 | 0.3925 | 0.1898 | 0.4535 | 0.4384 | 0.1994 | 0.5024 | |
ETGNN | 0.2589 | 0.1542 | 0.3103 | 0.3525 | 0.1798 | 0.4134 | 0.4124 | 0.1937 | 0.4760 | 0.4596 | 0.2036 | 0.5239 | |
CTTSP | 0.2610⋆ | 0.1595⋆ | 0.3134⋆ | 0.3558⋆ | 0.1855⋆ | 0.4167⋆ | 0.4170⋆ | 0.1996⋆ | 0.4801⋆ | 0.4627⋆ | 0.2092⋆ | 0.5264⋆ | |
Improv. | 0.81% | 3.44% | 1.00% | 0.94% | 3.17% | 0.80% | 1.12% | 3.05% | 0.86% | 0.67% | 2.75% | 0.48% |
5.3 Experimental Settings
To evaluate the model performance, we follow [10, 11] to rank the top-$K$ elements using the predicted probabilities and set $K$ to 10, 20, 30, and 40. We adopt Recall, Normalized Discounted Cumulative Gain (NDCG), and Personal Hit Ratio (PHR) as the evaluation metrics. The Adam optimizer [43] is leveraged to optimize the models. We employ the cosine annealing learning rate scheduler [44] to change the learning rate. Dropout [45] is adopted to prevent models from over-fitting. We train the methods for a fixed 2000 epochs and use an early stopping strategy with a patience of 100. The model with the best performance (more specifically, the highest average NDCG) on the validation set is used for testing. To eliminate the deviations, we run the methods ten times with seeds from 0 to 9 on all the datasets and report the average performance. For the proposed CTTSP, we set the learning rate to 0.001 on all the datasets. We apply the grid search to find the best settings of CTTSP. Specifically, the dropout rate and hidden dimension are searched in [0.0, 0.05, 0.1, 0.15, 0.2] and [32, 64, 128], respectively. The hyperparameters $\gamma$ and $\lambda$ are both searched in [0.0, 0.05, 0.1, 0.3, 0.5, 0.7, 0.9, 0.95, 1.0]. The settings of our model on different datasets are shown in the Appendix.
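For clarity, a sketch of the three metrics for a single user is given below (these are the standard definitions of Recall@K, NDCG@K, and PHR; the exact formulas used in the paper follow [10, 11] and may differ in details):

```python
import math


def metrics_at_k(ranked_elements, ground_truth, k):
    """Recall@K, NDCG@K, and PHR@K for one user.

    ranked_elements: elements sorted by predicted probability (descending).
    ground_truth: set of elements in the user's true next-period set.
    """
    top_k = ranked_elements[:k]
    hits = [1.0 if e in ground_truth else 0.0 for e in top_k]

    recall = sum(hits) / len(ground_truth)
    dcg = sum(h / math.log2(i + 2) for i, h in enumerate(hits))
    ideal_hits = min(len(ground_truth), k)
    idcg = sum(1.0 / math.log2(i + 2) for i in range(ideal_hits))
    ndcg = dcg / idcg if idcg > 0 else 0.0
    phr = 1.0 if sum(hits) > 0 else 0.0  # did we hit at least one true element?
    return recall, ndcg, phr


print(metrics_at_k([5, 2, 9, 1], {2, 7}, k=3))  # (0.5, ~0.39, 1.0)
```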
Note that the memories of users and elements are continuously updated during the training, validation, and testing process, but the model parameters are not optimized in the validation and testing process to avoid the data leakage issue. We train our model in a mini-batch manner, where the batches are created by our set-batch algorithm in advance. All the experiments are conducted on an Ubuntu machine equipped with one Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz, which has 16 physical cores. The GPU device is NVIDIA Tesla T4 with 15 GB memory. Our approach is implemented with PyTorch [46]. Codes and datasets are available at https://github.com/yule-BUAA/CTTSP.
5.4 Performance under Transductive Setting
The comparisons of our approach with baselines under the transductive setting are shown in Table I. From Table I, several conclusions can be summarized as follows.
Firstly, PTOP usually shows better performance than TOP because PTOP recommends personalized elements for each user based on his/her individual sequence, while TOP just identically predicts the most frequent elements for all the users which leads to unsatisfactory results. This reveals the importance of learning each user’s personal preference for temporal sets prediction.
Secondly, DREAM is often superior to FPMC because FPMC only utilizes the Markov chain to capture adjacent behaviors while DREAM employs the RNNs to capture temporal dependencies in the whole sequence of each user. This indicates that it is necessary to comprehensively mine the temporal information in users’ historical behaviors.
Thirdly, TGN, RUM, HPMN, and DIN often achieve competitive or better results than the above baselines, indicating some designs for memory networks or recommender systems can also facilitate the prediction of temporal sets. RUM and HPMN perform better than TGN in most cases as they design specialized memory updating mechanisms (i.e., item-level and feature-level versions in RUM, hierarchical and periodical mechanisms in HPMN). DIN learns element-specific representations for each user, which can effectively enhance the expressive ability of the model. However, all of them fail to handle the unique properties of sets and there is still space for improvement.
Fourthly, Sets2Sets, DSNTSP, DNNTSP, and ETGNN, which are customized approaches for the temporal sets prediction problem, usually yield better performance than other baselines. In particular, Sets2Sets additionally explores repeated patterns in user behaviors. DSNTSP learns both the element-level and set-level representations for the sequence of each user. DNNTSP studies the correlations of elements within each set and captures the temporal dependencies of elements across different sets. ETGNN is the best baseline in most cases because it not only learns element-specific representations that are aware of the temporal information for each user but also captures the collaborative signals in high-order user-element interactions.
Finally, CTTSP significantly outperforms the existing methods over all the metrics on all the datasets. The superiority of our approach lies in 1) the continuous-time learning framework to explicitly model the evolving user preferences and exploit the latent collaborative signals among different users; 2) the learning of personalized user behaviors, which can aggregate the previously interacted elements of each user according to the user and elements.
5.5 Performance under Inductive Setting
CTTSP is endowed with the inductive ability because 1) the memory of each user in the memory bank is identically initialized as a zero vector at first. When a new user comes, his/her memory can be updated according to the learned model parameters; 2) the static embedding is shared across the users in Section 4.2.1, which is not influenced by new users. Therefore, we evaluate the model performance under the inductive setting, which predicts the next-period set for users that are not contained in the training set. Note that FPMC, RUM, and ETGNN could only be evaluated under the transductive setting because they contain user-specific trainable parameters and are inherently not inductive. We report the performance of different methods in Fig. 3.

From Fig. 3, we find that the results of baselines vary across different datasets, but our CTTSP consistently achieves better performance than the baselines in most cases. The superiority of our approach under the inductive setting may be due to the learning of memories of users and elements, which can help our model discover the correlations across different users, and thus increase the reasoning ability for new users. This advantage indicates the potential of CTTSP in tackling cold-start problems [47]. We also observe that Sets2Sets usually performs better than the other baselines and it even achieves higher NDCG than our approach on TaFeng. We attribute this phenomenon to the repeated user behavior learning module in Sets2Sets, which is similar to the PTOP baseline but with trainable importance weights. PTOP achieves relatively good NDCG on TaFeng, and its trainable counterpart in Sets2Sets may contribute more and lead to higher NDCG.
5.6 Effects of Continuous-Time and Personalized Information of Users
We further investigate the importance of the continuous-time and personalized information of users. In particular, we vary the continuous-time probability importance $\gamma$ in [0.0, 0.05, 0.1, 0.3, 0.5, 0.7, 0.9, 0.95, 1.0] while fixing the user perspective importance $\lambda$ at its best configuration. Due to space limitations, we report the results on all the datasets in Fig. 4, where $K$ is set to 10. Similar trends could be observed when $K$ is set to 20, 30, and 40.

From Fig. 4, we can observe that the best settings of $\gamma$ change over different datasets and reflect the datasets' unique properties. Two representative datasets are JingDong and DC, where the continuous-time user preferences are more obvious on JingDong, while DC is more likely to be affected by personalized user behaviors. TaFeng and TaoBao tend to show a balance of both the continuous-time and personalized information of users. These findings suggest that it is necessary to select an appropriate $\gamma$ according to the property of each dataset.
5.7 Effects of User and Element Perspectives
We also study the impacts of the user and element perspectives on different datasets. Specifically, we first fix the continuous-time probability importance $\gamma$ at its best configuration and then change the value of the user perspective importance $\lambda$ in [0.0, 0.05, 0.1, 0.3, 0.5, 0.7, 0.9, 0.95, 1.0]. Experimental results on all the datasets are depicted in Fig. 5, where $K$ is set to 10.
From Fig. 5, we conclude that different datasets depict varied importance of the user perspective and element perspective. Specifically, JingDong and TaoBao are more likely to be influenced by the user perspective, while TaFeng pays more attention to the element perspective. DC keeps a balance between the user and element perspectives. Therefore, choosing a suitable $\lambda$ on different datasets is also essential for satisfactory prediction performance.

5.8 Ablation Study
We further conduct an ablation study to validate the effectiveness of the Attention mechanism in the User Message Encoder (AUME), the Attention mechanism in the Element Message Encoder (AEME), and the Gated Updating mechanism in the Memory Updaters (GUMU). Specifically, we use the average pooling operation over the related elements in each user-set interaction when computing the user's message and denote the model as CTTSP w/o AUME. This means that CTTSP w/o AUME does not distinguish the importance of different elements to the user. CTTSP w/o AEME is implemented by encoding the message of each element according to the information of the interacted user and the element itself, while the information of other elements in the same user-set interaction is ignored. We employ Gated Recurrent Units (GRU) [15] to replace the GUMU component for both users and elements, and denote the model as CTTSP w/o GUMU. Experimental results of the variants on JingDong and TaoBao are shown in Fig. 6, where $K$ is set to 10.

From Fig. 6, we can observe that CTTSP achieves the best performance when it uses all the three components, and removing any component would lead to worse results. In particular, the AUME component encodes the user’s message by learning the importance of different elements. The AEME component captures the intra-set relationships of elements when encoding the message of each element. The GUMU component adaptively selects the encoded message and historical memories according to the gated updating mechanism with fewer parameters than GRU, which may reduce the training difficulty [48].
5.9 Efficiency of the Set-batch Algorithm
We validate the efficiency of the proposed set-batch algorithm by comparing the running time of our CTTSP with CTTSP w/o set-batch. It is worth noticing that the set-batch algorithm can create time-consistent batches during the preprocessing phase in advance, and thus does not introduce extra cost in the training and evaluation process. Moreover, there is no need to predetermine the number of batches as it is automatically provided by the set-batch algorithm for different datasets. In the experiments, the set-batch algorithm creates 1,349, 2,079, 12,652, and 9,266 batches on JingDong, DC, TaFeng, and TaoBao, respectively. This can significantly improve the model efficiency since the model has to chronologically deal with 15,195, 42,905, 73,302, and 225,989 user-set interactions (see Table III) without the set-batch algorithm. We report the running time of each epoch in the training and evaluation process in Table II.
Process | Methods | JingDong | DC | TaFeng | TaoBao |
Training | CTTSP | 47s | 109s | 400s | 426s |
CTTSP w/o set-batch | 143s | 399s | 1007s | 2050s | |
Speedups | 3.04 | 3.66 | 2.52 | 4.81 | |
Average | 3.51 | ||||
Evaluation | CTTSP | 18s | 45s | 183s | 235s |
CTTSP w/o set-batch | 54s | 152s | 331s | 933s | |
Speedups | 3.00 | 3.38 | 1.81 | 3.97 | |
Average | 3.04 |
From Table II, we could observe that the proposed set-batch algorithm enables our CTTSP to be efficiently executed in a mini-batch manner and achieves 3.5× speedups in training and 3.0× speedups in evaluation on average. We also compare the convergence of CTTSP and CTTSP w/o set-batch. The results show that the set-batch algorithm leads to faster and more stable convergence since it enables CTTSP to be trained in a mini-batch manner rather than processing interactions one by one, which can reduce the variance of the parameter updates [49].
We further report the evaluation time of different methods on JingDong and DC in Fig. 7, where the x-axis is scaled by a logarithmic function with base 2. The results on all the datasets are shown in the Appendix. The model complexity of CTTSP comes from the Continuous-Time User Preference Modelling and Personalized User Behavior Learning components, which is relatively higher than the baselines. However, we also observe that CTTSP obtains the best performance with acceptable increments in computational complexity. Therefore, we conclude that CTTSP can achieve a good trade-off between efficiency and effectiveness.

5.10 Model Interpretability
We show the interpretability of our approach by conducting experiments on the JingDong dataset.
5.10.1 Element Representation Visualization
We first choose four categories that have the top four numbers of elements, and set the number of elements in each category to 100, leading to 400 elements in total. Then, we visualize the representations of the selected elements based on t-SNE [50]. The results are shown in Fig. 8(a).

From Fig. 8(a), we could find that elements in the same category are closely gathered by our approach, and the boundaries between elements in different categories are relatively clear. This observation demonstrates the effectiveness of our method in capturing implicit correlations of elements. Note that our model does not explicitly access the category information of elements, but it still learns such information well. We also note that a few elements in category 7 and category 69 are mixed with each other, which may indicate that their underlying relationships cannot be sufficiently represented by the category information only.
5.10.2 Continuous-Time User Preference Visualization
To confirm that our approach can track continuous-time user preferences by learning evolving memories of users, we first obtain all the memories of a sampled user, and then use t-SNE [50] to project the memories into a two-dimensional space in Fig. 8(b). Each point represents the projected memory of the user after he/she interacts with a set at a specific time, and different points can reflect the time-evolving user preferences. The point shapes are determined by the types of elements in the interacted set. We chronologically connect the points with arrows based on the interaction time, aiming to show the changes in user preferences along the time dimension. The involved elements and their categories are also shown near the arrows.
From Fig. 8(b), we observe that the user's preference is evolving over time with different categories of the interacted elements. At first, the user displays a preference for an element in category 73. Then, the user interacts with several elements in category 24, and the user's preference gradually transits to the bottom-right corner of the latent space. Finally, the user's preference shifts to the top-right positions of the latent space because he/she shows interest in several elements in category 47. These observations indicate that our approach can well capture the user's evolutionary preference by learning the continuously evolving memories.
5.10.3 Collaborative Signal Exploration
We present an intuitive case to show the ability of our CTTSP to explore the collaborative signals latent across different users. In particular, we choose the sequences of three users, and our task is to predict the next-period set of one of them (the target user), which contains a new element that the target user has never interacted with.

The sequences of the three users are presented in Fig. 9. We find that the other two users show behaviors similar to those of the target user, and we mark the element transaction patterns (i.e., underlying collaborative signals) with different colors. Leveraging these collaborative signals is helpful when predicting the next-period set for the target user. By chronologically learning from the constructed universal sequence that consists of all the users’ historical behaviors, the proposed CTTSP first discovers the collaborative signals latent in the other two users, and then successfully ranks the new element within its top-5 predictions for the target user, while the baselines fail to do so. This observation demonstrates the superiority of CTTSP in exploring collaborative signals to improve prediction performance.
6 Conclusion
In this paper, we studied the temporal sets prediction problem and pointed out the limitations of existing methods, which follow an implicit learning paradigm of user preferences. We proposed a continuous-time learning framework that explicitly captures the evolving user preferences by maintaining a memory bank to store the states of all the users and elements. We first constructed a universal sequence containing all the user-set interactions in non-descending temporal order, and then chronologically learned from each interaction. For each interaction, our approach updated the memories of the related user and elements via message encoders and memory updaters. A personalized user behavior learning module was also devised to capture each user’s individual properties by adaptively aggregating the historical sequence. Moreover, a set-batch algorithm was developed to improve the model efficiency. Extensive experiments on four benchmarks showed that our approach significantly outperformed the existing baselines and exhibited good interpretability.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (No. 62272023 and 51991395) and the Fundamental Research Funds for the Central Universities (No. YWF-23-L-1203).
References
- [1] A. R. Benson, R. Kumar, and A. Tomkins, “Sequences of sets,” in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 2018, pp. 1148–1157.
- [2] S. Rendle, C. Freudenthaler, and L. Schmidt-Thieme, “Factorizing personalized markov chains for next-basket recommendation,” in Proceedings of the 19th International Conference on World Wide Web. ACM, 2010, pp. 811–820.
- [3] F. Yu, Q. Liu, S. Wu, L. Wang, and T. Tan, “A dynamic recurrent model for next basket recommendation,” in Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval. ACM, 2016, pp. 729–732.
- [4] Y. Zhu, H. Li, Y. Liao, B. Wang, Z. Guan, H. Liu, and D. Cai, “What to do next: Modeling user behaviors by time-lstm,” in Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence. ijcai.org, 2017, pp. 3602–3608.
- [5] B. Jin, H. Yang, L. Sun, C. Liu, Y. Qu, and J. Tong, “A treatment engine by predicting next-period prescriptions,” in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 2018, pp. 1608–1616.
- [6] P. Zhao, H. Zhu, Y. Liu, J. Xu, Z. Li, F. Zhuang, V. S. Sheng, and X. Zhou, “Where to go next: A spatio-temporal gated network for next POI recommendation,” in The Thirty-Third AAAI Conference on Artificial Intelligence. AAAI Press, 2019, pp. 5877–5884.
- [7] L. Zhang, Z. Sun, J. Zhang, Y. Lei, C. Li, Z. Wu, H. Kloeden, and F. Klanner, “An interactive multi-task learning framework for next POI recommendation with uncertain check-ins,” in Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence. ijcai.org, 2020, pp. 3551–3557.
- [8] H. Hu and X. He, “Sets2sets: Learning from sequential sets with neural networks,” in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 2019, pp. 1491–1499.
- [9] L. Sun, Y. Bai, B. Du, C. Liu, H. Xiong, and W. Lv, “Dual sequential network for temporal sets prediction,” in Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval. ACM, 2020, pp. 1439–1448.
- [10] L. Yu, L. Sun, B. Du, C. Liu, H. Xiong, and W. Lv, “Predicting temporal sets with deep neural networks,” in KDD ’20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. ACM, 2020, pp. 1083–1091.
- [11] L. Yu, G. Wu, L. Sun, B. Du, and W. Lv, “Element-guided temporal graph representation learning for temporal sets prediction,” in The ACM Web Conference 2022. ACM, 2022, pp. 1902–1913.
- [12] L. Li, H. Jing, H. Tong, J. Yang, Q. He, and B. Chen, “NEMO: next career move prediction with contextual embedding,” in Proceedings of the 26th International Conference on World Wide Web Companion. ACM, 2017, pp. 505–513.
- [13] N. Shi, L. Yu, L. Sun, L. Wang, C. Lin, and R. Zhang, “Deep heterogeneous network for temporal set prediction,” Knowl. Based Syst., vol. 223, p. 107039, 2021.
- [14] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Comput., vol. 9, no. 8, pp. 1735–1780, 1997.
- [15] K. Cho, B. van Merrienboer, Ç. Gülçehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, “Learning phrase representations using RNN encoder-decoder for statistical machine translation,” in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. ACL, 2014, pp. 1724–1734.
- [16] D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by jointly learning to align and translate,” in 3rd International Conference on Learning Representations, 2015.
- [17] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances in Neural Information Processing Systems, 2017, pp. 5998–6008.
- [18] X. Chen, H. Xu, Y. Zhang, J. Tang, Y. Cao, Z. Qin, and H. Zha, “Sequential recommendation with user memory networks,” in Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining. ACM, 2018, pp. 108–116.
- [19] J. Huang, W. X. Zhao, H. Dou, J. Wen, and E. Y. Chang, “Improving sequential recommendation with knowledge-enhanced memory networks,” in The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval. ACM, 2018, pp. 505–514.
- [20] S. Kumar, X. Zhang, and J. Leskovec, “Predicting dynamic embedding trajectory in temporal interaction networks,” in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 2019, pp. 1269–1278.
- [21] K. Ren, J. Qin, Y. Fang, W. Zhang, L. Zheng, W. Bian, G. Zhou, J. Xu, Y. Yu, X. Zhu, and K. Gai, “Lifelong sequential modeling with personalized memorization for user response prediction,” in Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2019, pp. 565–574.
- [22] T. Q. Chen, Y. Rubanova, J. Bettencourt, and D. Duvenaud, “Neural ordinary differential equations,” in Advances in Neural Information Processing Systems, 2018, pp. 6572–6583.
- [23] Z. Li, N. B. Kovachki, K. Azizzadenesheli, B. Liu, K. Bhattacharya, A. M. Stuart, and A. Anandkumar, “Fourier neural operator for parametric partial differential equations,” in 9th International Conference on Learning Representations. OpenReview.net, 2021.
- [24] P. J. Brockwell and R. A. Davis, Introduction to time series and forecasting. Springer, 2016.
- [25] H. Zhou, S. Zhang, J. Peng, S. Zhang, J. Li, H. Xiong, and W. Zhang, “Informer: Beyond efficient transformer for long sequence time-series forecasting,” in Thirty-Fifth AAAI Conference on Artificial Intelligence. AAAI Press, 2021, pp. 11106–11115.
- [26] B. Hidasi, A. Karatzoglou, L. Baltrunas, and D. Tikk, “Session-based recommendations with recurrent neural networks,” in 4th International Conference on Learning Representations, 2016.
- [27] G. Zhou, X. Zhu, C. Song, Y. Fan, H. Zhu, X. Ma, Y. Yan, J. Jin, H. Li, and K. Gai, “Deep interest network for click-through rate prediction,” in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 2018, pp. 1059–1068.
- [28] R. Ying, R. He, K. Chen, P. Eksombatchai, W. L. Hamilton, and J. Leskovec, “Graph convolutional neural networks for web-scale recommender systems,” in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 2018, pp. 974–983.
- [29] X. Wang, X. He, M. Wang, F. Feng, and T. Chua, “Neural graph collaborative filtering,” in Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2019, pp. 165–174.
- [30] T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” in 5th International Conference on Learning Representations, 2017.
- [31] W. L. Hamilton, Z. Ying, and J. Leskovec, “Inductive representation learning on large graphs,” in Advances in Neural Information Processing Systems 30, 2017, pp. 1024–1034.
- [32] P. Velickovic, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio, “Graph attention networks,” in 6th International Conference on Learning Representations. OpenReview.net, 2018.
- [33] X. He, K. Deng, X. Wang, Y. Li, Y. Zhang, and M. Wang, “Lightgcn: Simplifying and powering graph convolution network for recommendation,” in Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval. ACM, 2020, pp. 639–648.
- [34] S. M. Kazemi, R. Goel, K. Jain, I. Kobyzev, A. Sethi, P. Forsyth, and P. Poupart, “Representation learning for dynamic graphs: A survey,” J. Mach. Learn. Res., vol. 21, pp. 70:1–70:73, 2020.
- [35] X. Chang, X. Liu, J. Wen, S. Li, Y. Fang, L. Song, and Y. Qi, “Continuous-time dynamic graph learning via neural interaction processes,” in The 29th ACM International Conference on Information and Knowledge Management. ACM, 2020, pp. 145–154.
- [36] D. Xu, C. Ruan, E. Körpeoglu, S. Kumar, and K. Achan, “Inductive representation learning on temporal graphs,” in 8th International Conference on Learning Representations. OpenReview.net, 2020.
- [37] E. Rossi, B. Chamberlain, F. Frasca, D. Eynard, F. Monti, and M. M. Bronstein, “Temporal graph networks for deep learning on dynamic graphs,” CoRR, vol. abs/2006.10637, 2020.
- [38] Z. Fan, Z. Liu, J. Zhang, Y. Xiong, L. Zheng, and P. S. Yu, “Continuous-time sequential recommendation with temporal graph collaborative transformer,” in The 30th ACM International Conference on Information and Knowledge Management. ACM, 2021, pp. 433–442.
- [39] L. Yu, L. Sun, B. Du, and W. Lv, “Towards better dynamic graph learning: New architecture and unified library,” arXiv preprint arXiv:2303.13047, 2023.
- [40] I. M. Baytas, C. Xiao, X. Zhang, F. Wang, A. K. Jain, and J. Zhou, “Patient subtyping via time-aware LSTM networks,” in Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2017, pp. 65–74.
- [41] J. Wu, X. Wang, X. Gao, J. Chen, H. Fu, T. Qiu, and X. He, “On the effectiveness of sampled softmax loss for item recommendation,” CoRR, vol. abs/2201.02327, 2022.
- [42] D. Lian, Q. Liu, and E. Chen, “Personalized ranking with importance sampling,” in The Web Conference 2020. ACM / IW3C2, 2020, pp. 1093–1103.
- [43] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in 3rd International Conference on Learning Representations, 2015.
- [44] I. Loshchilov and F. Hutter, “SGDR: stochastic gradient descent with warm restarts,” in 5th International Conference on Learning Representations, 2017.
- [45] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” J. Mach. Learn. Res., vol. 15, no. 1, pp. 1929–1958, 2014.
- [46] A. Paszke, S. Gross, F. Massa et al., “Pytorch: An imperative style, high-performance deep learning library,” in Advances in Neural Information Processing Systems, 2019, pp. 8024–8035.
- [47] H. Lee, J. Im, S. Jang, H. Cho, and S. Chung, “Melu: Meta-learned user preference estimator for cold-start recommendation,” in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 2019, pp. 1073–1082.
- [48] R. Pascanu, T. Mikolov, and Y. Bengio, “On the difficulty of training recurrent neural networks,” in Proceedings of the 30th International Conference on Machine Learning, vol. 28. JMLR.org, 2013, pp. 1310–1318.
- [49] S. Ruder, “An overview of gradient descent optimization algorithms,” CoRR, vol. abs/1609.04747, 2016.
- [50] L. van der Maaten and G. Hinton, “Visualizing data using t-SNE,” J. Mach. Learn. Res., vol. 9, pp. 2579–2605, 2008.
Le Yu received the B.S. degree from the School of Computer Science and Engineering, Beihang University, Beijing, China, in 2019. He is currently a fifth-year computer science Ph.D. student in the School of Computer Science and Engineering at Beihang University. His research interests include temporal data mining, user behavior modeling, and graph neural networks.

Zihang Liu received the B.S. degree in 2021 and is currently a second-year M.S. student in computer science and engineering at Beihang University. His research interests include time-series data mining, recommender systems, and graph neural networks.

Leilei Sun received the B.S., M.S., and Ph.D. degrees from Dalian University of Technology in 2009, 2012, and 2017, respectively. He is currently an associate professor with the State Key Laboratory of Software Development Environment (SKLSDE), School of Computer Science, Beihang University. He was a postdoctoral research fellow at Tsinghua University from 2017 to 2019. His research interests include machine learning and data mining.

Bowen Du received the Ph.D. degree in computer science and engineering from Beihang University, Beijing, China, in 2013. He is currently a professor with the State Key Laboratory of Software Development Environment, Beihang University. His research interests include smart city technology, multi-source data fusion, and traffic data mining.

Chuanren Liu received the B.S. degree from the University of Science and Technology of China (USTC), the M.S. degree from the Beijing University of Aeronautics and Astronautics (BUAA), and the Ph.D. degree from Rutgers, the State University of New Jersey. He is currently an associate professor with the Business Analytics and Statistics Department at the University of Tennessee, Knoxville, USA. His research interests include data mining and machine learning, and their applications in business analytics.

Weifeng Lv received the B.S. degree in computer science and engineering from Shandong University, Jinan, China, in 1992, and the Ph.D. degree in computer science and engineering from Beihang University, Beijing, China, in 1998. He is currently a professor with the State Key Laboratory of Software Development Environment, Beihang University, Beijing, China. His research interests include smart city technology and mass data processing.
In this appendix, we provide additional details of the experiments.
Statistics of Datasets
Statistics of the datasets are shown in Table III, where #E/S denotes the average number of elements per set, and #S/U denotes the average number of sets per user; a sketch of how these statistics can be computed from the raw interaction logs is given after the table.
| Datasets | #S | #U | #E | #E/S | #S/U | Time span |
|---|---|---|---|---|---|---|
| JingDong | 15,195 | 3,063 | 3,551 | 1.26 | 4.96 | Mar. 1, 2018 – Mar. 31, 2018 |
| DC | 42,905 | 9,010 | 217 | 1.52 | 4.76 | the first 60 days |
| TaFeng | 73,302 | 9,841 | 4,935 | 5.41 | 7.45 | Nov. 1, 2000 – Feb. 28, 2001 |
| TaoBao | 225,989 | 49,393 | 689 | 1.45 | 4.58 | Nov. 24, 2017 – Dec. 3, 2017 |
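The following sketch shows how the derived statistics (#E/S and #S/U) can be computed from an interaction log with one row per user-set interaction; the column names `user`, `elements`, and `timestamp` are illustrative assumptions about the data layout.

```python
import pandas as pd

def dataset_statistics(interactions: pd.DataFrame) -> dict:
    """Compute the statistics of Table III from a log with columns
    ['user', 'elements', 'timestamp'], where 'elements' holds the list of
    elements in each set (column names are assumptions)."""
    num_sets = len(interactions)                                    # #S
    num_users = interactions['user'].nunique()                      # #U
    num_elements = interactions['elements'].explode().nunique()     # #E
    avg_elements_per_set = interactions['elements'].str.len().mean()   # #E/S
    avg_sets_per_user = interactions.groupby('user').size().mean()     # #S/U
    return {'#S': num_sets, '#U': num_users, '#E': num_elements,
            '#E/S': avg_elements_per_set, '#S/U': avg_sets_per_user}
```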
Settings of Hyperparameters
Table IV shows the hyperparameter settings of our approach.
| Settings | JingDong | DC | TaFeng | TaoBao |
|---|---|---|---|---|
| Learning rate | 0.001 | 0.001 | 0.001 | 0.001 |
| Dropout rate | 0.2 | 0.2 | 0.15 | 0.05 |
| Hidden dimension | 64 | 64 | 64 | 32 |
| Hyperparameter | 0.9 | 0.5 | 0.05 | 0.9 |
| Hyperparameter | 0.9 | 0.0 | 0.7 | 0.7 |
Model Efficiency Comparison
Fig. 10 shows the evaluation time of different methods on all the datasets, where the x-axis is scaled by a logarithmic function with base 2.

Model Training Algorithm
The training process of CTTSP is shown in Algorithm 2.
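For completeness, the sketch below outlines how one training epoch could iterate over the pre-computed time-consistent batches in chronological order. The interfaces of `model`, `loss_fn`, and the assumed `reset_memory` call are placeholders; the authoritative procedure is the one given in Algorithm 2.

```python
import torch

def train_one_epoch(model, batches, optimizer, loss_fn):
    """Illustrative sketch of one training epoch over pre-computed
    time-consistent batches (see the earlier set-batch sketch); model,
    loss and memory interfaces are assumptions, not the exact Algorithm 2."""
    model.train()
    model.reset_memory()          # assumed: re-initialize the memory bank per epoch
    for batch in batches:         # batches follow non-descending temporal order
        users, element_sets, timestamps = zip(*batch)
        optimizer.zero_grad()
        # score candidate elements for each user from the current memories and
        # the personalized behavior learning module; the forward pass is assumed
        # to also update the memories with the newly observed interactions
        scores = model(users, element_sets, timestamps)
        # the elements of the current sets serve as the prediction targets
        loss = loss_fn(scores, element_sets)
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)  # optional clipping
        optimizer.step()
```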