Time-based Sequence Model for Personalization and Recommendation Systems
Abstract
In this paper we develop a novel recommendation model that explicitly incorporates time information. The model relies on an embedding layer and a time series layer (TSL) attention-like mechanism with inner products in different vector spaces, which can be thought of as a modification of multi-head attention. This mechanism allows the model to efficiently treat sequences of user behavior of different lengths. We study the properties of our state-of-the-art model on a statistically designed dataset. Also, we show that it outperforms more complex models that use longer sequence lengths on the Taobao User Behavior dataset.
1 Background
Recommendation systems play an important role in many e-commerce applications as well as search and ranking services [6, 15, 21, 26, 30, 31, 41, 48]. There are two main strategies for performing recommendations: content filtering and collaborative filtering. In content filtering the user creates a profile based on their interests, while human experts create a profile for the product. An algorithm matches the two profiles and recommends the closest matches to the user. For example, this approach is taken by the Pandora Music Genome Project [29].
In collaborative filtering, the recommendations are based only on the user's past behavior, from which future behavior is derived. The advantage of this approach is that it requires no external information and is not domain specific. The challenge is that in the beginning very few user-item interactions are available. For instance, this cold start problem is addressed by Netflix by asking the user for a few favorite movies when they create their profile for the first time [27].
Further, collaborative filtering schemes are often split into neighborhood and latent factor methods. The neighborhood methods either find clusters of items (item-oriented) or identify groups of users that share some items (user-oriented) and then recommend new items to a particular user based on them [28]. In contrast, latent factor methods characterize users and items by implicit factors that are determined by the algorithm/model. The factors can often be thought of as elements of a vector in $\mathbb{R}^k$ [7, 17, 33]. We show a simple schematic drawing of these methods in Fig. 1, where circles and rectangles denote users and items, respectively.



The most common example of latent factor methods is matrix factorization [19]. It attempts to solve the matrix completion problem, which in its simplest form can be formulated as follows. Let $r_{ij}$ be the rating of the $j$-th item by the $i$-th user. We would like to find an approximation to the partially filled-in matrix $R = [r_{ij}]$ of user-item interactions as

$$R \approx W^T V \qquad (1)$$

where $W$ and $V$ can be thought of as embedding tables. Note that the individual vectors $\mathbf{w}_i$ and $\mathbf{v}_j$ can be interpreted as representations of the $i$-th user and $j$-th item, respectively.
This problem can be solved by finding
$$\min_{W, V} \sum_{(i,j) \in S} \left( r_{ij} - \mathbf{w}_i^T \mathbf{v}_j \right)^2 \qquad (2)$$
where $S$ is the set of known ratings.
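As an illustration, a minimal sketch of how the objective (2) could be optimized is shown below, assuming PyTorch; the embedding dimension, learning rate and the synthetic ratings are placeholders rather than a reference implementation.

```python
# Minimal matrix factorization sketch for objective (2); all sizes and data are illustrative.
import torch

num_users, num_items, dim = 100, 50, 8
W = torch.randn(num_users, dim, requires_grad=True)   # user embedding table
V = torch.randn(num_items, dim, requires_grad=True)   # item embedding table

# A stand-in for the set S of known (user, item, rating) triples.
users = torch.randint(0, num_users, (1000,))
items = torch.randint(0, num_items, (1000,))
ratings = torch.rand(1000) * 5.0

opt = torch.optim.SGD([W, V], lr=0.05)
for _ in range(100):
    pred = (W[users] * V[items]).sum(dim=1)            # r_ij ~ w_i^T v_j
    loss = ((ratings - pred) ** 2).mean()              # objective (2), averaged for stability
    opt.zero_grad(); loss.backward(); opt.step()
```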
Notice that this formulation can readily be generalized by exchanging the dot product in (2) for a more complex interaction function $F$, with input indices $i$, $j$ and parameters $\theta$ [14]. Moreover, we can often provide a richer context with continuous values $\mathbf{x}$ as well as discrete sets of indices $\{i\}$ for the user and $\{j\}$ for the item features, resulting in
$$r = F(\mathbf{x}, \{i\}, \{j\}; \theta) \qquad (3)$$
Typically these indices are used for embedding table lookups [24], with the results propagated further. In effect, many of the state-of-the-art recommendation models that treat time implicitly adopt this formulation and focus on the design of the function $F$ that defines a particular model [5, 13, 20, 22, 25, 34, 42].
Recommendation models can also explicitly incorporate a timing component. In this scenario we often have a set of events $e_t$ for times $t = 1, \ldots, n$ and we are interested in predicting the next event at time $t = n + 1$. Therefore, the dataset used to train such models is often organized into time series corresponding to the history of user activity, rather than simply a collection of user and item features.
The models that implicitly incorporate the timing component can often produce a representation of an event as an embedding vector, therefore making it possible to generate a sequence of such embeddings $\mathbf{z}_1, \ldots, \mathbf{z}_n$. This sequence can be generated independently or one item at a time, using recurrent neural networks (RNNs) or some other mechanism represented by a function $g$ in
$$\mathbf{z}_t = g(\mathbf{x}_t, \{i\}_t, \{j\}_t; \mathbf{z}_{t-1}) \qquad (4)$$
for $t = 1, \ldots, n$ and $\mathbf{z}_0$ being some initial state. Then, the prediction of the new rating relies on its context as well as its relevance relative to the earlier events, which may be represented through an attention mechanism [39], in
$$\mathbf{z}_{n+1} = F(\mathbf{x}, \{i\}, \{j\}) \qquad (5)$$
$$\mathbf{c} = \operatorname{attn}(\mathbf{z}_1, \ldots, \mathbf{z}_n, \mathbf{z}_{n+1}) \qquad (6)$$
$$r = G(\mathbf{z}_{n+1}, \mathbf{c}) \qquad (7)$$
The implementation details specifying the interaction of the function $g$ that constructs the embeddings of prior events, the attention mechanism attn, and the function $G$ that computes the final rating define individual models [31, 46, 47]. (Note that for a sequence of length $n = 0$ and no context $\mathbf{c}$ from the attention mechanism, the model (5)-(7) reduces to the one without an explicit timing component in (3), so that $r = G(F(\mathbf{x}, \{i\}, \{j\}))$.)
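To make the structure of (4)-(7) concrete, the sketch below wires together generic stand-ins in PyTorch; none of these particular choices (a GRU cell for $g$, a linear layer for $F$, dot-product attention, a small MLP for $G$) are prescribed by the formulation, they only illustrate how the pieces interact.

```python
# Schematic sketch of (4)-(7) with generic stand-ins; shapes and modules are illustrative only.
import torch
import torch.nn as nn

m, n = 16, 20
g = nn.GRUCell(3 * m, m)                               # eq. (4): builds z_t from event features and z_{t-1}
F = nn.Linear(3 * m, m)                                # eq. (5): builds z_{n+1} from the candidate's features
G = nn.Sequential(nn.Linear(2 * m, 1), nn.Sigmoid())   # eq. (7): final rating from (z_{n+1}, c)

def attn(z_seq, z_last):                               # eq. (6): simple dot-product attention
    w = torch.softmax(z_seq @ z_last, dim=0)
    return (w.unsqueeze(1) * z_seq).sum(dim=0)

events = torch.randn(n, 3 * m)                         # embedded features of n past events
h = torch.zeros(1, m)
zs = []
for e in events:                                       # generate z_1, ..., z_n one item at a time
    h = g(e.unsqueeze(0), h)
    zs.append(h.squeeze(0))
z_seq = torch.stack(zs)
z_last = F(torch.randn(3 * m))                         # embedding of the candidate event
c = attn(z_seq, z_last)
r = G(torch.cat([z_last, c]))                          # predicted rating / click probability
```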
2 Time-based Sequence Model
In this section we introduce the time-based sequence model (TBSM) for user behavior data as well as its different variations which serve to demonstrate several plausible approaches to this problem. In subsequent discussion we define a datapoint as a sequence of events corresponding to a particular user.
2.1 Embedding Layer
As a first step we pass each datapoint, representing a sequence of events for a given user, through an embedding layer that generates an embedding vector for each event. The use of an embedding layer is a standard step in deep learning solutions to click-through rate (CTR) problems [3, 13, 46].
We choose to use the deep learning recommendation model (DLRM) [25] to express this embedding layer. (Note that we stop DLRM one layer prior to the final output, where we have an embedding representation, before it is converted into a probability of a click by the last layer of the model.) It accepts input continuous features $\mathbf{x}$ and a set of discrete features, and generates an output embedding $\mathbf{z}$. In particular, in our experiments we will be interested in three one-hot categorical features corresponding to the user, item and category, as well as a single continuous feature $\mathbf{x}$ corresponding to time, as shown in Fig. 2. Notice that in DLRM the categorical features get embedded into $\mathbb{R}^m$ using $\mathbf{e} = W^T \mathbf{a}$, where $W$ is a trainable weight parameter (embedding table) and $\mathbf{a}$ is the one-hot input vector. On the other hand, continuous features are passed through the bottom multilayer perceptron (MLP), whose final dimension is the same as the embedding dimension of all categorical features. In the following step we take all pairwise dot products between the dense and categorical feature embeddings. The final step of DLRM takes these dot products combined with the embedded dense feature and passes them through the top MLP, resulting in a vector $\mathbf{z} \in \mathbb{R}^m$.
As a result of applying DLRM to each position in the input sequence, the output can be thought of as a time series in $\mathbb{R}^m$. We use the notation $\mathbf{z}_t$ for the DLRM output of the item at position $t$, and $\mathbf{z}_{n+1}$ for the output of the last item.
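A compact sketch of such a DLRM-style embedding layer is given below, assuming PyTorch; the class name, the embedding dimension and the single-layer MLPs are assumptions made for illustration, and the reference DLRM code should be consulted for the exact architecture.

```python
# Sketch of a DLRM-style embedding layer: categorical lookups, a bottom MLP for the dense
# feature (time), pairwise dot products, and a top MLP producing the event embedding z.
import torch
import torch.nn as nn

class EventEmbedding(nn.Module):
    def __init__(self, n_users, n_items, n_cats, m=16):
        super().__init__()
        self.emb = nn.ModuleList([nn.Embedding(n, m) for n in (n_users, n_items, n_cats)])
        self.bot_mlp = nn.Sequential(nn.Linear(1, m), nn.ReLU())           # time -> R^m
        num_pairs = 4 * 3 // 2                                             # pairwise dots of 4 vectors
        self.top_mlp = nn.Sequential(nn.Linear(num_pairs + m, m), nn.ReLU())

    def forward(self, user, item, cat, time):
        d = self.bot_mlp(time.unsqueeze(-1))                               # (B, m)
        feats = [e(x) for e, x in zip(self.emb, (user, item, cat))] + [d]
        X = torch.stack(feats, dim=1)                                      # (B, 4, m)
        dots = X @ X.transpose(1, 2)                                       # all pairwise dot products
        iu = torch.triu_indices(4, 4, offset=1)
        return self.top_mlp(torch.cat([dots[:, iu[0], iu[1]], d], dim=1))  # z in R^m

# Example: embed a batch of two (user, item, category, time) events.
emb = EventEmbedding(n_users=1000, n_items=4000, n_cats=100)
z = emb(torch.randint(0, 1000, (2,)), torch.randint(0, 4000, (2,)),
        torch.randint(0, 100, (2,)), torch.rand(2))
```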
2.2 Time Series Layer
The time series layer (TSL) is designed specifically for time series processing. The high-level purpose of this layer is to relate the time series vectors to the last item. In this layer we compute the similarities between the sequence of vectors $\mathbf{z}_1, \ldots, \mathbf{z}_n$ and the last item $\mathbf{z}_{n+1}$, pass them through an MLP to create coefficients $\mathbf{a}$ and produce a context vector $\mathbf{c}$, as shown in Fig. 3. We will use several such layers when composing the full model in the next section. Also, we point out that this scheme resembles an attention mechanism, with a few small but important modifications that will be discussed next.
The first modification is that the vectors are normalized by projecting them onto the unit sphere $S^{m-1}$:
$$\hat{\mathbf{z}}_t = \frac{\mathbf{z}_t}{\|\mathbf{z}_t\|_2} \qquad (8)$$
Also, we replace the dot product by one or more positive definite inner products
$$\langle \mathbf{u}, \mathbf{v} \rangle_A = (A\mathbf{u})^T (A\mathbf{v}) = \mathbf{u}^T A^T A \mathbf{v} \qquad (9)$$

for some nonsingular matrix $A$. The inner product achieves two inter-dependent goals: (i) capturing the dependence of the label on the coupling between time series in different coordinates of the embedded space, and (ii) learning different similarity measures in the embedded space.
Note that in combination with the spherical projection the dot product becomes cosine similarity, a measure widely used in natural language processing (NLP) [36]. We may also view the inner product between projected vectors as a more general similarity measure compared to cosine similarity. We remark that the connection with NLP can be elucidated if we think of the analogy of the item vocabulary to a word vocabulary, with the probability of the last item corresponding to the probability of the next word under a specific language model. The difference with the NLP case is the size of the vocabulary, which in our case can easily exceed tens of millions of items.
Let us now point out how our attention-like mechanism is different from multi-head attention [39]. Recall that in multi-head attention the similarity between key $\mathbf{k}$ and query $\mathbf{q}$ is constructed via
$$s(\mathbf{q}, \mathbf{k}) = (W_Q \mathbf{q})^T (W_K \mathbf{k}) \qquad (10)$$
where $W_Q$ and $W_K$ are matrices whose dimensions are chosen so that their outputs have the same dimensionality. We remark that in our case we do not need to map a pair of vectors into a common space since they are already in the output space of the DLRM embedding layer. Therefore, there are three major differences between our approach and multi-head attention. First, there are fewer trainable parameters since our mechanism involves only one matrix $A$. Second, the weighting has a clear geometric interpretation since cosine similarity is directly related to a distance function on the unit sphere
$$d(\hat{\mathbf{u}}, \hat{\mathbf{v}}) = \arccos\left( \hat{\mathbf{u}}^T \hat{\mathbf{v}} \right) \qquad (11)$$
Finally, when using multiple TSLs, we pair them with individual MLPs, while in multi-head attention a single shared MLP is used for all heads. While these differences seem small, we will show in our experiments that they result in a major improvement in the statistical performance of the model.
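The following is a minimal sketch of one TSL, assuming PyTorch; the hidden sizes of the three-layer MLP, the identity initialization of $A$, and the use of the unprojected vectors in the final linear combination are assumptions rather than details taken from the reference code.

```python
# Sketch of one time series layer (TSL): similarities between the sequence and the last
# item are passed through an MLP to get coefficients a; the context is their combination.
import torch
import torch.nn as nn

class TSL(nn.Module):
    def __init__(self, m, n):
        super().__init__()
        # Learnable inner product matrix of eq. (9); with A = I this starts as cosine similarity.
        self.A = nn.Parameter(torch.eye(m))
        self.mlp = nn.Sequential(nn.Linear(n, n), nn.ReLU(),
                                 nn.Linear(n, n), nn.ReLU(),
                                 nn.Linear(n, n))                  # three layers of size n (assumed)

    def forward(self, z_seq, z_last):                              # z_seq: (n, m), z_last: (m,)
        zs = nn.functional.normalize(z_seq, dim=-1)                # eq. (8): projection onto unit sphere
        zl = nn.functional.normalize(z_last, dim=-1)
        s = (zs @ self.A.T) @ (self.A @ zl)                        # eq. (9): (A u)^T (A v) per time step
        a = self.mlp(s)                                            # coefficients a, shape (n,)
        return (a.unsqueeze(-1) * z_seq).sum(dim=0)                # context c = sum_t a_t z_t
```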
2.3 TBSM: Composing All Layers Together
Let us now compose all layers together into a single model, that will be referred to as time-based sequence model (TBSM).
Notice that the output $\mathbf{z}_{n+1}$ of the embedding layer, corresponding to (5), and the output $\mathbf{c}$ of the TSL, corresponding to (6), can be concatenated and supplied as an input to an MLP which produces the probability of a click $p$.
Further, we may choose to use multiple TSLs, with different similarity measures between vectors, or even with different sequence lengths for each of them. In this case, we obtain concatenated outputs and pass each of them through an independent MLP, resulting in a distinct probability $p_i$ for each TSL. Once again note that this approach resembles the use of multiple heads in an attention mechanism [18, 35, 40, 43], but aside from the use of the spherical projection in (8) and well-defined inner products in (9), the output uses individual rather than shared MLPs.
Finally, note that we can interpret each $p_i$ as a probability obtained by using the corresponding similarity measure. We can also think of them as being produced by an ensemble of models [1, 11, 38]. In this scenario, the last MLP determines their contribution to the final probability of a click $p$. The comprehensive design of TBSM showing all of these components together is outlined in Fig. 4.
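A sketch of this composition, reusing the TSL class from the previous sketch, is shown below; the number of TSLs and the MLP sizes are illustrative rather than the configuration used in the experiments.

```python
# Sketch of TBSM composition: k TSLs, individual MLPs producing p_i, and a final MLP
# combining the ensemble into the click probability p. Reuses the TSL class sketched above.
import torch
import torch.nn as nn

class TBSMHead(nn.Module):
    def __init__(self, m, n, k=4):
        super().__init__()
        self.tsls = nn.ModuleList([TSL(m, n) for _ in range(k)])
        self.mlps = nn.ModuleList([nn.Sequential(nn.Linear(2 * m, m), nn.ReLU(),
                                                 nn.Linear(m, 1), nn.Sigmoid())
                                   for _ in range(k)])              # individual MLPs -> p_i
        self.final = nn.Sequential(nn.Linear(k, 1), nn.Sigmoid())   # ensemble of p_i -> p

    def forward(self, z_seq, z_last):
        ps = [mlp(torch.cat([z_last, tsl(z_seq, z_last)]))          # p_i from [z_{n+1}, c_i]
              for tsl, mlp in zip(self.tsls, self.mlps)]
        return self.final(torch.cat(ps))                            # final click probability
```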
3 Datasets
In this section we describe in detail the two datasets used in the experiments presented in this paper. We focus our attention on the Taobao User Behavior dataset [48] (the data can be found at https://tianchi.aliyun.com/dataset/dataDetail?dataId=649&userId=1), while we also design and use a custom synthetic dataset to explain the reasoning behind some of the architectural choices made in the model.
3.1 Synthetic Dataset
In the synthetic dataset we let a datapoint be a time series, with each position in the series being a randomly generated vector in $\mathbb{R}^m$. This has a similar structure to the Taobao train dataset after it passes through the model's embedding layer, except that the binary label is made to explicitly depend on the series data as follows.
Let us denote the first $n$ vectors in the time series by $\mathbf{z}_1, \ldots, \mathbf{z}_n$ and the last vector by $\mathbf{z}_{n+1}$. Also, let $\mathbf{w}$ denote a sum of (zero or more) vectors, each obtained by a random permutation of the coordinates of $\mathbf{z}_{n+1}$. Then, define the corresponding click/non-click label $y$ and function $f$ to be
$$y = \begin{cases} 1 & \text{if } f(\mathbf{z}_1, \ldots, \mathbf{z}_{n+1}) > \tau \\ 0 & \text{otherwise} \end{cases} \qquad (12)$$
$$f(\mathbf{z}_1, \ldots, \mathbf{z}_{n+1}) = \sum_{t=1}^{n} \langle \mathbf{z}_t, \mathbf{z}_{n+1} \rangle + \sum_{t=1}^{n} \langle \mathbf{z}_t, \mathbf{w} \rangle \qquad (13)$$
where $\langle \cdot, \cdot \rangle$ denotes an inner product and $\tau$ is a fixed threshold. Note that the first term is a function of the inner products between the time series elements and $\mathbf{z}_{n+1}$, while the second term contributes "mixed" products of coordinates (e.g. products of different coordinates of $\mathbf{z}_t$ and $\mathbf{z}_{n+1}$). We can control the "complexity" of the dataset because we have a summand in $\mathbf{w}$ for each permutation of the vector $\mathbf{z}_{n+1}$. If $\mathbf{w} = \mathbf{0}$, i.e. there are no mixed terms, then our time series in $\mathbb{R}^m$ can be thought of as $m$ independent time series in $\mathbb{R}$ that are completely decoupled from each other. The more summands we add into $\mathbf{w}$, the stronger the coupling between time series in different coordinates becomes. We will use this interplay to test different components of the model.
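A sketch of such a generator is shown below, assuming NumPy; since the exact binarization rule is not spelled out here, the label is obtained by thresholding $f$ at the median over the generated set, which is only one way to keep the two classes roughly balanced.

```python
# Sketch of the synthetic data generator of Section 3.1; the median threshold is an assumption.
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 20, 16, 2                        # series length, dimension, number of permuted summands

def make_point():
    Z = rng.standard_normal((n + 1, m))    # z_1, ..., z_n and the last vector z_{n+1}
    z_last = Z[-1]
    w = sum(rng.permutation(z_last) for _ in range(q)) if q > 0 else np.zeros(m)
    f = (Z[:-1] @ z_last).sum() + (Z[:-1] @ w).sum()     # eq. (13)
    return Z, f

points = [make_point() for _ in range(10000)]
tau = np.median([f for _, f in points])                  # assumed threshold for eq. (12)
data = [(Z, int(f > tau)) for Z, f in points]            # (time series, binary label y)
```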
3.2 Taobao User Behavior Dataset
The raw Taobao User Behavior dataset has a total of about 4M items, 10K categories and 1M users. It is organized into a set of time series parameterized by a user. Each entry in the time series for a given user is a triple (item, category, time), indicating that the user interacted with the item belonging to the category at the given time (for example, "soccer ball" and "sports" could be a valid item-category pair).
The processed Taobao User Behavior dataset is obtained from the raw dataset and organized into a set of datapoints, split across the taobao_train.txt and taobao_test.txt files. We note that the train dataset contains about 9M points, while the test dataset contains a bit more than 296K points. A datapoint in the processed dataset corresponds to a specific user and consists of
- (i) user id
- (ii) a pair of item and category id
- (iii) a randomly generated (50/50 chance) binary label
- (iv) 200 (item, category) pairs which encode the user's true behavior
- (v) 200 (item, category) pairs randomly taken from the full dataset of 4M items, but that are different from the user's true behavior
Notice that for a given user the sequence (iv) represents positive samples, i.e. clicks, while sequence (v) can be used to generate negative samples, i.e. non-clicks. Further, for users who have interacted with fewer than 200 items, the true behavior sequence is padded with zeros in front. In case a user has a longer than 200-item history, it is truncated at 200. The last point of the true behavior sequence is special: if the label is 1, the pair in (ii) is taken from the true user history; otherwise, if the label is 0, the true point is replaced with a randomly generated fake one taken from sequence (v).
For instance, a sample datapoint (one line in text file) is shown below
7 123 50 1 0,45,12,...123 0,17,89,...50 98,112,75,... 43,765,14,...
In this line the user id is 7, the pair of item and category id of the last item is (123, 50), and the label is 1. The sequence starting with 0,45 contains the 200 item ids from the true history. The sequence starting with 0,17 contains the 200 category ids corresponding to the item ids in the first sequence. Note that this user only had 199 items in the true history sequence, therefore the first entry is padded with 0. Finally, the last two sequences are the item and category ids randomly generated for this user.
For a fixed user behavior sequence length $n$, the train and test datasets are constructed using the following scheme. For the train dataset, positive datapoints (with label 1) are constructed by taking contiguous subsequences of length $n + 1$ from the 200-item long full sequence, starting at a random position. The negative datapoints (with label 0) are constructed by replacing the item id following the chosen subsequence of length $n$ with an item chosen from the randomly generated 200-item sequence for this user. For the test dataset, we always take the last $n + 1$ items for each user with the provided label, so no replacements are made in this case.
Finally, we append the user id and a time value, taking equally-spaced values across the subsequence, to each pair in the length-$(n+1)$ subsequence, resulting in a final datapoint consisting of (user, item, category, time) tuples. Note that we think of each datapoint as a known time series of length $n$ together with the last item whose probability we are trying to predict.
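The sketch below assembles one training datapoint along these lines; the helper name, the assumption that time values are equally spaced in [0, 1], and the exact handling of indices are illustrative and may differ from the reference preprocessing.

```python
# Sketch of assembling a training datapoint from one processed Taobao line; see the text
# above for the field layout. Index handling and time normalization are assumptions.
import random

def make_train_point(user, true_items, true_cats, rand_items, rand_cats, n):
    start = random.randint(0, len(true_items) - n - 1)       # random contiguous subsequence
    hist = list(zip(true_items[start:start + n], true_cats[start:start + n]))
    label = random.randint(0, 1)
    if label == 1:                                            # positive: the next true item
        target = (true_items[start + n], true_cats[start + n])
    else:                                                     # negative: random replacement
        j = random.randrange(len(rand_items))
        target = (rand_items[j], rand_cats[j])
    seq = hist + [target]
    times = [t / n for t in range(n + 1)]                     # equally spaced times (assumed [0, 1])
    return [(user, it, ct, tm) for (it, ct), tm in zip(seq, times)], label
```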
4 Experiments
We perform all of our experiments on the following architecture of TBSM (the TBSM code can be found at https://github.com/facebookresearch/tbsm). In the embedding layer implemented through DLRM we let the embedding dimension be $m$, the bottom MLP have a single layer of size $d \times m$, where $d$ is the number of dense features, and the top MLP have a single layer of size $(p + m) \times m$, where $p$ is the number of (pair-wise) interactions between features and $s$ is the number of sparse features. Notice that for the processed Taobao User Behavior dataset we have a single continuous feature (time) and three discrete features corresponding to user, item and category, therefore $d = 1$ and $s = 3$, respectively. We remark that reasonable choices for the embedding dimension and MLP layer sizes do not have a measurable effect on performance, and other values provide similar results.
Inside the TSL we let the MLP have three layers, with input and output dimension $n$, where $n$ is the length of the time series. Note that the input and output dimensions of this layer must coincide with $n$ because there are as many input values as there are events $\mathbf{z}_1, \ldots, \mathbf{z}_n$, and the output values provide the coefficients in the linear combination $\mathbf{c} = \sum_{t=1}^{n} a_t \mathbf{z}_t$, where $\mathbf{a}$ is the vector resulting from the MLP. In our experiments, we will show that in our model it is sufficient to choose a relatively short $n$ to achieve higher statistical performance than existing more complex models with a larger sequence length that is supposed to be beneficial to them.
Further, the MLP above the TSL has input dimension $2m$ since its input is a pair of $m$-dimensional vectors $[\mathbf{z}_{n+1}, \mathbf{c}]$. When multiple TSLs are used, each with corresponding MLPs, these dimensions are replicated across them.
Since our label is binary, it is natural to choose binary cross entropy
$$L = -\left[ y \log p + (1 - y) \log(1 - p) \right] \qquad (14)$$
as the model's loss function, where $y$ is the target and $p$ is the predicted probability. We also track accuracy as well as the AUC metric [9] throughout training. The model is trained for one epoch using the Adagrad optimizer [8]. (We have experimented with SGD, but the results were considerably worse when compared to Adagrad and therefore we do not report them here.) We always report the average values over 10 runs to avoid different random initialization effects.
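A minimal sketch of this training setup, assuming PyTorch and scikit-learn, is given below; the model, data loader, and learning rate are placeholders rather than the exact configuration used in the experiments.

```python
# Sketch of one training epoch: binary cross entropy (14), Adagrad, and AUC/accuracy tracking.
import torch
from sklearn.metrics import roc_auc_score

def train_one_epoch(model, loader, lr=0.01):
    opt = torch.optim.Adagrad(model.parameters(), lr=lr)
    bce = torch.nn.BCELoss()
    ys, ps = [], []
    for z_seq, z_last, y in loader:                  # batches of embedded sequences and labels
        p = model(z_seq, z_last).squeeze(-1)
        loss = bce(p, y.float())                     # eq. (14)
        opt.zero_grad(); loss.backward(); opt.step()
        ys += y.tolist(); ps += p.detach().tolist()
    acc = sum(int(p > 0.5) == int(t) for p, t in zip(ps, ys)) / len(ys)
    return roc_auc_score(ys, ps), acc
```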
4.1 Synthetic Dataset
In the synthetic dataset the binary label encodes a varying degree of coupling between coordinates in a time series vector. Let $q$ be the number of summands in $\mathbf{w}$ in the second term in (13), which reflects this coupling. Let us consider models where the TSL uses similarity measured by a standard single dot product or by multiple inner products defined in (8) and (9), as summarized in Tab. 1.
Table 1: Model performance on the synthetic dataset; the first results column corresponds to no coupling ($q = 0$) and the coupling $q$ increases from left to right across the remaining columns.

| Model | $q = 0$ | increasing $q \rightarrow$ | | |
|---|---|---|---|---|
| 1-dot | 0.99 | 0.68 | 0.63 | 0.66 |
| 1-inner | 0.82 | 0.60 | 0.58 | 0.67 |
| 4-inner | 0.94 | 0.77 | 0.70 | 0.78 |
| 8-inner | 0.98 | 0.80 | 0.79 | 0.80 |

Notice that when there is no coupling between time series in different coordinates ($q = 0$) the model using the dot product in the TSL has the best performance among the tested models. However, when the strength of the coupling between coordinates increases (higher $q$), the performance of all models naturally deteriorates, but the models with multiple inner products start to outperform the one with a single dot product, as shown in Fig. 5. Notice also that in this experiment the 8-head attention model does no better than a random model, while TSL significantly outperforms it.
4.2 Taobao User Behavior Dataset
Let us now study the behavior of different variations of TBSM. We consider different similarity measures, different numbers and lengths of sequences, as well as attention- and LSTM-based mechanisms for processing time series data.
First, we experiment with different similarity measures in the TSL. We let DotSim refer to the dot product in Cartesian coordinates, while GenSim refers to the inner product on the unit sphere computed using (8) and (9). Also, we test the performance of an indefinite inner product, referred to as IndSim, where we replace the inner product in (9), defined through the symmetric positive definite matrix $A^T A$, by an inner product through a general nonsingular matrix, thereby allowing the similarity to take negative values. Lastly, we let CosSim refer to the model where after the unit sphere projection we apply the dot product. Note that since the dot product is a special case of an inner product, all of these variants of defining a similarity measure between two vectors can be succinctly described as optionally doing the unit sphere projection followed by a particular type of inner product.

In order to compare the GenSim, DotSim and CosSim similarity measures within TBSM, we record the corresponding means, standard deviations and ranges of their AUC scores. Assuming that AUC is normally distributed with the given means and standard deviations, we estimate the chance that GenSim performs better than DotSim, as shown in Fig. 6. Finally, the loss and accuracy obtained during training of the best GenSim model are shown in Fig. 7. Notice that the convergence shows the standard training L-shape curve, as expected.
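This comparison can be sketched as follows, assuming SciPy: if the AUCs of two variants are modeled as independent normal random variables, the chance that one exceeds the other is $\Phi((\mu_1 - \mu_2)/\sqrt{\sigma_1^2 + \sigma_2^2})$. The means below are the Tab. 2 averages, while the standard deviations are placeholders since the measured values are not reproduced here.

```python
# Probability that one normally distributed AUC exceeds another (placeholder deviations).
from math import sqrt
from scipy.stats import norm

mu_gen, sd_gen = 0.9319, 0.001   # GenSim: mean from Tab. 2, standard deviation assumed
mu_dot, sd_dot = 0.9261, 0.001   # DotSim: mean from Tab. 2, standard deviation assumed

p_better = norm.cdf((mu_gen - mu_dot) / sqrt(sd_gen ** 2 + sd_dot ** 2))
print(f"P(GenSim AUC > DotSim AUC) ~ {p_better:.3f}")
```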


Then, we experiment with a different number and setup of TSLs. We let TSL(k-inner) designate a model with $k$ TSLs, each with its own inner product similarity measure on the unit sphere (after projection). We point out that GenSim is a special case of the TSL(k-inner) model with $k = 1$. Moreover, we also test the use of subsequences of different lengths. The idea of the TSL(k-seq) approach is to directly amplify the influence of the most recent items in the datapoint's time series. To achieve this effect we compute the context vector not only for the full series but also for several of its most recent subsequences of decreasing length. In the case of TSL(4-seq), the output of the TSL consists of four pairs $(\mathbf{z}_{n+1}, \mathbf{c}_i)$, where $\mathbf{c}_i$ is the context vector for the corresponding subsequence; a sketch is given below. This method can also be viewed as an example of an ensemble technique, where instead of having different models we provide the same model with different data.
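The sketch below illustrates the TSL(4-seq) idea, reusing the TSL class sketched in Section 2.2; the particular subsequence lengths (the full series plus its most recent $n/2$, $n/4$ and $n/8$ items) are an assumption made only for illustration.

```python
# Sketch of TSL(4-seq): one TSL per subsequence, producing four (z_last, c_i) pairs.
import torch
import torch.nn as nn

class TSL4Seq(nn.Module):
    def __init__(self, m, n):
        super().__init__()
        self.lengths = [n, n // 2, n // 4, n // 8]                 # assumed subsequence lengths
        self.tsls = nn.ModuleList([TSL(m, L) for L in self.lengths])

    def forward(self, z_seq, z_last):
        # The shorter subsequences keep only the most recent items, amplifying their influence.
        return [(z_last, tsl(z_seq[-L:], z_last))
                for L, tsl in zip(self.lengths, self.tsls)]
```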
Table 2: AUC (average) achieved on the Taobao User Behavior dataset by the TBSM variants and the reference models.

| Model | Time Series Processing | AUC (average) |
|---|---|---|
| TBSM | GenSim | 0.9319 |
| TBSM | CosSim | 0.9313 |
| TBSM | DotSim | 0.9261 |
| TBSM | IndSim | 0.9246 |
| TBSM | TSL(4-seq) | 0.9279 |
| TBSM | TSL(8-seq) | 0.9218 |
| TBSM | TSL(4-inner) | 0.9273 |
| TBSM | TSL(8-inner) | 0.9327 |
| TBSM | LSTM(5-stack) | 0.8404 |
| TBSM | MHA(8-heads) | 0.8833 |
| MIMN | default | 0.9179 |
| DIEN | default | 0.9081 |
Third, we compare the performance of TBSM with the proposed TSL against two standard mechanisms for processing time series data. The first mechanism replaces the TSL with standard multi-head attention (MHA) with 8 heads. Note that the output of this layer is one pair $(\mathbf{z}_{n+1}, \mathbf{c})$, where $\mathbf{c}$ is the usual output of the 8-head attention mechanism. Unlike the model with GenSim, here we map the time series vectors into a common space determined by the Key and Query matrices (as opposed to keeping them in the DLRM output space $\mathbb{R}^m$), and the weight normalization is done at the end using a softmax function, ensuring that the sum of the weights across the time series is 1. Note that this normalization is across the time series, while in our main model the normalization (projection onto the unit sphere) is done for each time series vector individually.
The last mechanism replaces the TSL with a recurrent neural network based on LSTM cells [16]. These cells are sized to input and output $m$-dimensional vectors and are stacked into 5 vertical layers. In this case the output is a pair $(\mathbf{z}_{n+1}, \mathbf{c})$, where $\mathbf{c}$ is the final hidden state. We remark that, as compared to the default approach, the LSTM model is not invariant under permutations of the time series points.
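Both baselines map directly onto PyTorch built-ins, as sketched below; the dimensions and the way the final context is read out are illustrative.

```python
# Sketches of the two baseline time series processors used for comparison.
import torch
import torch.nn as nn

m, n = 16, 20
z_seq = torch.randn(1, n, m)           # a batch of one embedded sequence
z_last = torch.randn(1, 1, m)          # embedding of the last item

# (a) 8-head attention: softmax-normalized weights across the time series.
mha = nn.MultiheadAttention(embed_dim=m, num_heads=8, batch_first=True)
c_mha, _ = mha(query=z_last, key=z_seq, value=z_seq)      # context, shape (1, 1, m)

# (b) 5-layer stacked LSTM: the context is the final hidden state of the top layer.
lstm = nn.LSTM(input_size=m, hidden_size=m, num_layers=5, batch_first=True)
out, (h, _) = lstm(z_seq)
c_lstm = h[-1]                                            # context, shape (1, m)
```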
The performance achieved by all of these variations is summarized in Tab. 2. Notice that it is split into four parts following our experimental setup. The first part contains a class of smaller models, consisting of just one TSL. The best result in this class is achieved by GenSim. The second class consists of larger models, the ones with more than one TSL. The overall best performing model, TSL(8-inner), is in this class. The third class contains the MHA- and LSTM-based models. Finally, we also add two well-known reference models, MIMN [31] and DIEN [46], for comparison. Note that in our experiments TBSM significantly outperforms both of them.
We remark that although the performance of all models was calculated over the same test dataset, the train dataset for the models presented in this paper is different from the one used for MIMN and DIEN. Our training data consists of datapoints which are random contiguous subsequences of length $n + 1$ extracted from each user behavior history, while the dataset which was used to train the MIMN and DIEN models consisted of datapoints with the full-length history sequence for each user. Therefore, we may conclude that in our experiments TBSM achieves higher statistical performance than existing more complex models, all while using shorter sequence lengths.
5 Related Work
Notice that click-through rate (CTR) prediction can be seen as a special case of the matrix completion problem, where we predict an event probability and the rating represents clicks and non-clicks. The data corresponding to the CTR problem may be broadly classified into two types based on whether each sample contains only user and item features or a time series corresponding to the history of user activity. In this section we summarize modern approaches to solving the CTR prediction problem for both scenarios.
The recommendation systems field received significant attention after the development of matrix factorization [19]. The renewed focus on deep learning suggested the use of MLPs, rather than dot products, as the function giving the final rating between user and item embeddings [14]. Then, the factorization machine method was developed in [32] and further explored in [13, 25, 32], while the autoencoder approach in [20] provided another example of using neural networks for the continuous rating prediction problem. Higher-degree polynomial approximations of feature interactions were explored in [42], while self-attention was adopted in [37]. In [45] different sparse feature embeddings are built depending on the subsequent operations performed on these embeddings, resulting in more than one embedding per feature. Lastly, [23] utilized convolutional neural networks combined with MLPs to learn new useful features.
Alongside these developments, the time component was introduced explicitly into some of these models. In [44] the time-dependent user-movie rating is decomposed into a sum of static and dynamic components, where the first one is learned via a factorization algorithm while the dynamic component is provided by an LSTM [16]. The authors in [2] incorporate contextual information, such as the time between successive user-video interactions, into their RNN model. In [4] Memory-Augmented Neural Networks are used to learn the dynamics of the factorization vectors used in matrix factorization. On the other hand, DIN [47] enhanced the basic embeddings and MLPs with a soft attention mechanism (interest) between the current item and the user history. In DSIN [10] this idea is further enhanced by splitting the user history into local sessions, while in DIEN [46] temporal dynamics are added to the concept of interest by using a variant of an RNN. Finally, the most recent MIMN model [31] learns from long history sequences using Neural Turing Machines for managing the memory network [12]. The TBSM model proposed in this paper incorporates several of these developments, but remains relatively simple, while retaining high statistical performance.
6 Conclusion
We have proposed the time-based sequence model (TBSM), which incorporates an embedding layer enhanced with the TSL attention-like mechanism for handling time series data. In contrast to standard approaches, our approach to attention is geometric in the sense that the attention weights come from a similarity measure closely related to the spherical distance function.
We have shown on a synthetic dataset that as the relationship between time series components gets stronger, adding different inner products helps the model achieve a higher statistical score. Moreover, we have shown that taking a relatively short time period is sufficient for achieving good statistical performance on the Taobao User Behavior dataset, as measured by the AUC metric. In our experiments TBSM has outperformed the scores attained by existing more complex models, all while using a shorter sequence length.
Finally, we point out that there is a significant opportunity for exploring parallelism in TBSM, because the embedding layer processes each location in the time series separately and multiple TSLs are also independent of each other. We leave this exploration as future work.
Acknowledgments
The authors would like to thank Mikhail Smelyanskiy for supporting this work.
References
- [1] Sarkhan Badirli et al. “Gradient Boosting Neural Networks: GrowNet” In CoRR, arXiv:2002.07971, 2020
- [2] Alex Beutel et al. “Latent Cross: Making Use of Context in Recurrent Recommender Systems” In Proc. 10th ACM Int. Conf. Web Search and Data Mining, 2018, pp. 46–54
- [3] Wenqiang Chen, Lizhang Zhan, Yuanlong Ci and Chen Lin “FLEN: Leveraging Field for Scalable CTR Prediction” In CoRR, arXiv:1911.04690, 2019
- [4] Xu Chen et al. “Sequential Recommendation with User Memory Networks” In Proc. 11th ACM Int. Conf. Web Search and Data Mining, 2018, pp. 108–116
- [5] Heng-Tze Cheng et al. “Wide & deep learning for recommender systems” In Proc. 1st Workshop Deep Learning Recommender Systems, 2016, pp. 7–10
- [6] Paul Covington, Jay Adams and Emre Sargin “Deep Neural Networks for YouTube Recommendations” In Proc. 10th ACM Conf. Recommender Systems, 2016
- [7] Anupam Datta, Sophia Kovaleva, Piotr Mardziel and Shayak Sen “Latent Factor Interpretations for Collaborative Filtering” In CoRR, arXiv:1711.10816, 2017
- [8] John Duchi, Elad Hazan and Yoram Singer “Adaptive Subgradient Methods for Online Learning and Stochastic Optimization” In Journal of Machine Learning Research 12, 2011, pp. 2121–2159
- [9] Tom Fawcett "An introduction to ROC analysis" In Pattern Recognition Letters 27, 2006, pp. 861–874
- [10] Yufei Feng et al. “Deep Session Interest Network for Click-Through Rate Prediction” In CoRR, arXiv:1905.06482, 2019
- [11] Yoav Freund and Robert E. Schapire “Experiments with a New Boosting Algorithm” In Proc. 13th Int. Conf. Machine Learning, 1996, pp. 148–156
- [12] Alex Graves, Greg Wayne and Ivo Danihelka “Neural Turing Machines” In CoRR, arXiv:1410.5401, 2014
- [13] Huifeng Guo et al. “DeepFM: a factorization-machine based neural network for CTR prediction” In CoRR, arXiv:1703.04247, 2017
- [14] Xiangnan He et al. “Neural collaborative filtering” In Proc. 26th Int. Conf. World Wide Web, 2017, pp. 173–182
- [15] Xinran He et al. “Practical lessons from predicting clicks on ads at Facebook” In Proc. 8th Int. Workshop Data Mining for Online Ad., 2014, pp. 1–9
- [16] Sepp Hochreiter and Jürgen Schmidhuber “Long Short-Term Memory” In Neural Comput. 9 Cambridge, MA, USA: MIT Press, 1997, pp. 1735–1780
- [17] Aleksandar Ilic and Maja Kabiljo “Recommending items to more than a billion people” URL: https://engineering.fb.com/core-data/recommending-items-to-more-than-a-billion-people/
- [18] J. Alammar “The Illustrated Transformer” URL: http://jalammar.github.io/illustrated-transformer/
- [19] Yehuda Koren, Robert Bell and Chris Volinsky “Matrix factorization techniques for recommender systems” In Computer IEEE, 2009, pp. 30–37
- [20] Oleksii Kuchaiev and Boris Ginsburg “Training deep autoencoders for collaborative filtering” In CoRR, arXiv:1708.01715, 2017
- [21] Thom Lake et al. “Large-scale Collaborative Filtering with Product Embeddings” In CoRR, arXiv:1901.04321, 2019
- [22] Jianxun Lian et al. “xDeepFM: Combining explicit and implicit feature interactions for recommender systems” In Proc. 24th ACM SIGKDD Int. Conf. Knowledge Discovery and Data Mining, 2018, pp. 1754–1763 ACM
- [23] Bin Liu et al. “Feature Generation by Convolutional Neural Network for Click-Through Rate Prediction” In The World Wide Web Conf. ACM Press, 2019
- [24] Maxim Naumov “On the Dimensionality of Embeddings for Sparse Features and Data” In CoRR, arXiv:1901.02103, 2019
- [25] Maxim Naumov et al. “Deep Learning Recommendation Model for Personalization and Recommendation Systems” In CoRR, arXiv:1906.00091, 2019
- [26] Maxim Naumov et al. “Deep Learning Training in Facebook Data Centers: Design of Scale-up and Scale-out Systems” In CoRR, arXiv:2003.09518, 2020
- [27] Netflix URL: https://www.netflix.com
- [28] Xia Ning, Christian Desrosiers and George Karypis “A comprehensive survey of neighborhood-based recommendation methods” In Recommender Systems Handbook, 2015
- [29] Pandora “Music Genome Project” URL: https://www.pandora.com/about/mgp
- [30] Jongsoo Park et al. “Deep Learning Inference in Facebook Data Centers: Characterization, Performance Optimizations and Hardware Implications” In CoRR, arXiv:1811.09886, 2018
- [31] Qi Pi et al. “Practice on Long Sequential User Behavior Modeling for Click-Through Rate Prediction” In CoRR, arXiv:1905.09248, 2019
- [32] Steffen Rendle “Factorization Machines” In Proc. 2010 IEEE Int. Conf. Data Mining, 2010, pp. 995–1000
- [33] Marco Rossetti, Fabio Stella and Markus Zanker “Towards Explaining Latent Factors with Topic Models in Collaborative Recommender Systems” In Proc. 24th Int. Workshop Database and Expert Systems Appl., 2013, pp. 162–167
- [34] Suvash Sedhain, Aditya Krishna Menon, Scott Sanner and Lexing Xie “Autorec: Autoencoders meet collaborative filtering” In Proc. 24th Int. Conf. World Wide Web, 2015, pp. 111–112
- [35] Noam Shazeer “Fast Transformer Decoding: One Write-Head is All You Need” In CoRR, arXiv:1911.02150, 2019
- [36] Pinky Sitikhu, Kritish Pahi, Pujan Thapa and Subarna Shakya “A Comparison of Semantic Similarity Methods for Maximum Human Interpretability” In CoRR, arXiv:1910.09129, 2019
- [37] Weiping Song et al. "AutoInt: Automatic Feature Interaction Learning via Self-Attentive Neural Networks" In Proc. 28th ACM Int. Conf. Information and Knowledge Management ACM, 2019
- [38] Sean Tao “Deep Neural Network Ensembles” In arXiv:1904.05488, 2019
- [39] Ashish Vaswani et al. “Attention is All you Need” In Advances in Neural Information Processing Systems 30, 2017, pp. 5998–6008
- [40] Elena Voita et al. “Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned” In CoRR, arXiv:1905.09418, 2019
- [41] Fan Wang et al. “Sequential Evaluation and Generation Framework for Combinatorial Recommender System” In CoRR, arXiv:1902.00245, 2019
- [42] Ruoxi Wang, Bin Fu, Gang Fu and Mingliang Wang “Deep & cross network for ad click predictions” In Proc. AdKDD, 2017, pp. 12
- [43] L. Weng “Lil’log” URL: https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html
- [44] Chao-Yuan Wu et al. “Recurrent Recommender Networks” In Proc. 10th ACM Int. Conf. Web Search and Data Mining, 2017, pp. 495–503
- [45] Yi Yang, Baile Xu, Furao Shen and Jian Zhao “Operation-aware Neural Networks for User Response Prediction” In CoRR, arXiv:1904.12579, 2019
- [46] Guorui Zhou et al. “Deep Interest Evolution Network for Click-Through Rate Prediction” In CoRR, arXiv:1809.03672, 2018
- [47] Guorui Zhou et al. “Deep interest network for click-through rate prediction” In Proc. 24th ACM SIGKDD Int. Conf. Knowledge Discovery and Data Mining, 2018, pp. 1059–1068 ACM
- [48] Han Zhu et al. “Joint Optimization of Tree-based Index and Deep Model for Recommender Systems” In CoRR, arXiv:1902.07565, 2019