Consistent Collaborative Filtering via Tensor Decomposition
Abstract
Collaborative filtering is the de facto standard for analyzing users’ activities and building recommendation systems for items. In this work we develop Sliced Anti-symmetric Decomposition (SAD), a new model for collaborative filtering based on implicit feedback. In contrast to traditional techniques, where latent representations of users (user vectors) and items (item vectors) are estimated, SAD introduces one additional latent vector for each item, based on a novel three-way tensor view of user-item interactions. This new vector extends user-item preferences calculated by standard dot products to general inner products, producing interactions between items when evaluating their relative preferences. SAD reduces to state-of-the-art (SOTA) collaborative filtering models when the vector collapses to $\mathbf{1}$, while in this paper we allow its value to be estimated from data. Allowing the values of the new item vector to differ from $1$ has profound implications. It suggests users may have nonlinear mental models when evaluating items, allowing the existence of cycles in pairwise comparisons. We demonstrate the efficiency of SAD in both simulated and real world datasets containing over a million user-item interactions. By comparing with seven SOTA collaborative filtering models for implicit feedback, SAD produces the most consistent personalized preferences, while maintaining a top level of accuracy in personalized recommendations. We release the model and inference algorithms in a Python library https://github.com/apple/ml-sad.
1 Introduction
Understanding preferences based on users’ historical activities is key for personalized recommendation. This is particularly challenging when explicit ratings on many items are not available. In this scenario, historical activities are typically viewed as binary, representing whether a user has interacted with an item or not. Users’ preferences must be inferred from such implicit feedback with additional assumptions based on this binary data.
One common assumption is to view non-interacted items as negatives, meaning users are not interested in them; items that have been interacted with are often assumed to be preferred ones (Hu et al., 2008; Pan et al., 2008). In reality, however, such an assumption is rarely met. For example, the lack of interaction between a user and an item might simply be the result of a lack of exposure. It is therefore more natural to assume that non-interacted items are a combination of the ones that users do not like and the ones that users are not aware of (Rendle et al., 2009).
With this assumption, Rendle et al. (2009) proposed to infer partial orders between items. In particular, they assumed that items which users have interacted with are more preferable than the non-interacted ones. With this assumption in mind, Bayesian Personalized Ranking (BPR) was developed to perform personalized recommendations. In BPR, the observed data are transformed into a three-way binary tensor $Y$ with the first dimension representing users. The other two dimensions represent items, encoding personalized preferences between item pairs (Figure 1(a)). Mathematically, this means that any first order slice of $Y$ at the $u$-th user, $Y_u$, is a pairwise comparison matrix (PCM) between items. The $(i,j)$-th entry, $y_{uij}$, when observed, is binary, indicating whether the $u$-th user prefers the $i$-th item over the $j$-th one ($y_{uij} = 1$), or the other way around ($y_{uij} = 0$). The tensor $Y$ is only partially observed, with missing entries where there is no prior knowledge to infer any preference for a particular user. The recommendation problem becomes finding a parsimonious parameterization of the generative model for the observed entries in $Y$ and estimating the model parameters which best explain the observed data.
(Figure 1: (a) the three-way binary tensor encoding pairwise item preferences for each user; (b) the Bernoulli model of observed comparisons.)
The model used in BPR assumes that, among the observed entries in $Y$, the probability that the $u$-th user prefers the $i$-th item over the $j$-th item can be modeled as a Bernoulli distribution (Hu et al., 2008),

$p(y_{uij} = 1 \mid \Theta) = \frac{1}{1 + e^{-x_{uij}}},$  (1)

where $\Theta$ is the collection of unknown parameters of the Bernoulli distributions (Figure 1(b)). In fact, $x_{uij}$ is the natural parameter of a Bernoulli distribution. Rendle et al. (2009) decompose the natural parameter by

$x_{uij} = s_{ui} - s_{uj}.$  (2)
In the equation, $s_{ui}$ ($s_{uj}$) can be interpreted as the strength of preference on the $i$-th ($j$-th) item for the $u$-th user. In other words, users’ strengths of preference on different items are represented by scalar values denoted as $s_{ui}$. The relative preferences between items are therefore characterized by the differences between their preference strengths for a particular user. With this decomposition, $x_{uij}$ becomes the $u$-th user’s relative preference on the $i$-th item over the $j$-th item.
By further letting $s_{ui}$ be represented as a dot product between a $k$ dimensional user vector $\xi_u$ and an item vector $\eta_i$,

$s_{ui} = \xi_u^\top \eta_i,$  (3)

Rendle et al. (2009) reveal the connection between their proposed BPR model and traditional collaborative filtering models such as matrix factorization.
The factorization in Equations 2 and 3 provides a parsimonious representation of the original Equation 1. Without the factorization, one could model each $x_{uij}$ independently. This model requires $nm(m-1)/2$ free parameters, and a maximum likelihood fit would result in $x_{uij} \to +\infty$ for $y_{uij} = 1$, and $x_{uij} \to -\infty$ for $y_{uij} = 0$. The rest of the $x_{uij}$ with missing $y_{uij}$ are left unknown. The parsimonious representation, in contrast, reduces the number of parameters to $(n + m)k$. The value of $x_{uij}$ now becomes coupled with the other entries of $X$, and the observed entries in $Y$ jointly impact the parameter estimates. This highlights a fundamental assumption behind collaborative filtering: information can be shared between items that exhibit similar interaction patterns across users, and one user can borrow information from another user who interacts with a similar set of items.
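To make the counting concrete, the following sketch (in Python, with hypothetical sizes $n$, $m$, and $k$ not taken from the paper) contrasts the two parameter counts and computes a single relative preference $x_{uij}$ under Equations 2 and 3:

import numpy as np

# Hypothetical sizes; not the ones used in the paper.
n, m, k = 100, 50, 5
rng = np.random.default_rng(0)

xi = rng.normal(size=(k, n))    # user matrix, one column per user
eta = rng.normal(size=(k, m))   # item matrix, one column per item

# Unfactorized model: one free natural parameter x_uij per user and item pair.
print(n * m * (m - 1) // 2)     # 122500 free parameters

# Factorized (BPR) model: k parameters per user and per item.
print((n + m) * k)              # 750 free parameters

# Equations 2 and 3: x_uij = <xi_u, eta_i> - <xi_u, eta_j>.
u, i, j = 3, 10, 20
x_uij = xi[:, u] @ (eta[:, i] - eta[:, j])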
Equation 3 has a direct link to the Bradley-Terry model often studied when analyzing a PCM for decision making (Hunter, 2004; Weng & Lin, 2011). The model dates back at least to 1929 (Zermelo, 1929). One property of the model is its transitivity: the relative preference $x_{uil}$ can be expressed as the sum of the relative preferences $x_{uij}$ and $x_{ujl}$, for any user $u$ and items $i$, $j$, and $l$. However, in reality this transitivity property is frequently violated; only a small fraction of real world PCMs satisfy complete transitivity (Mazurek & Perzina, 2017). The violation of the property is more conceivable when users are exposed to various types of items. For example, in an online streaming platform, one favorable movie could become less intriguing after a subscriber watches one of a different style or genre.
In this paper, we extend the original BPR model, one of the most fundamental models in recommendation systems, to allow non-transitive user preferences. In particular, we extend Equation 2 to a more general form by proposing a new tensor decomposition. We denote our new model SAD (Sliced Anti-symmetric Decomposition). The new tensor decomposition introduces a second set of non-negative item vectors $\tau_i$, one for every item. Different from the first vector $\eta_i$, the new vector contributes negatively when calculating relative preferences, producing counter-effects to the original strength of users’ preferences; see Section 4 for more details. Mathematically, the new vector extends the original dot product in Equation 3 to an inner product. The original BPR model becomes a special case of SAD when the values in $\tau_i$ are all set to $1$, and the inner product reduces to a standard dot product. When $\tau_i$ contains entries that are not $1$, the transitivity property no longer necessarily holds. While assigning an $\ell_2$ regularization to the entries of $\tau_i$ to encourage values of $1$, reflecting prior beliefs, SAD is able to infer their unknown values from real world data. We derive an efficient group coordinate descent algorithm for parameter estimation of SAD. Our algorithm results in a simple stochastic gradient descent (SGD) procedure producing fast and accurate parameter estimates. Through a simulation study we first demonstrate the expressiveness of SAD and the efficiency of the SGD algorithm. We then compare SAD with seven alternative SOTA recommendation models on three publicly available datasets with distinct characteristics. Results confirm that our new model is able to exploit information and relations between items not previously considered, and provides more consistent and accurate results, as we demonstrate in this paper.
2 Related Works
Inferring priority via pairwise comparison. The Bradley-Terry model (Hunter, 2004; Weng & Lin, 2011; Zermelo, 1929) has been heavily used along this line of research. In the Bradley-Terry model, the probability of the $i$-th unit (an individual, a team, or an item) being more preferable than the $j$-th unit (denoted as $i \succ j$) is modeled by

$p(i \succ j) = \frac{\gamma_i}{\gamma_i + \gamma_j},$  (4)

where $\gamma_i > 0$ represents the strength, or degree, of preference of the $i$-th unit. The goal is to estimate $\gamma_i$ for all units based on pairwise comparisons. The link to Equation 1 becomes clear once we omit the user index and set $\gamma_i = e^{s_i}$. In fact, the original BPR model can be viewed as an extension of the Bradley-Terry model that allows personalized parameters, with the strengths of preference assumed to be dot products of user and item vectors as in Equation 3.
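This correspondence can be checked numerically; the sketch below uses hypothetical strengths and sets $\gamma = e^{s}$:

import numpy as np

s_i, s_j = 0.8, -0.3                        # hypothetical strengths on the log scale
gamma_i, gamma_j = np.exp(s_i), np.exp(s_j)

p_bradley_terry = gamma_i / (gamma_i + gamma_j)   # Equation 4
p_logistic = 1.0 / (1.0 + np.exp(-(s_i - s_j)))   # Equation 1, user index omitted
assert np.isclose(p_bradley_terry, p_logistic)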
Various algorithms have been developed for parameter estimation in this model. For example, Hunter (2004) developed a class of algorithms named minorization-maximization (MM) for parameter estimation. In MM, a minorizing function is maximized to find the next parameter update at every iteration; refer to the work by Hunter & Lange (2004) for more details on MM. Weng & Lin (2011) proposed a Bayesian approximation method to estimate teams’ priorities from the outcomes of games between teams. Most recently, Wang et al. (2021) developed a bipartite graph iterative method to infer priorities from large and sparse pairwise comparison matrices. They applied the algorithm to the Movie-Lens dataset to rank movies based on ratings aggregated from multiple users. Our paper differs from the aforementioned models in that we model user-specific item preferences in a personalized setting.
Tensor decompositions for recommendation. Compared with traditional collaborative filtering methods based on matrix factorization, tensor decompositions received relatively little attention in this field until recently. The BPR model can be viewed as one of the first attempts to approach recommendation problems using tensor analysis. As discussed in Section 1, by assuming that interacted items are more preferable than non-interacted ones, user-item implicit feedback is represented as a three-way binary tensor (Rendle et al., 2009). In later work, the authors developed tensor decomposition models for personalized tag recommendation (Rendle & Schmidt-Thieme, 2010), and discussed the relationship between their approach and traditional tensor decompositions such as the Tucker and PARAFAC (parallel factors) decompositions (Kiers, 2000; Tucker, 1966).
Recently, tensor decomposition methods have been used to build recommendation systems using information from multiple sources. Wermser et al. (2011) developed a context aware tensor decomposition approach using information from multiple sources, including time, location, and sequential information. Hidasi & Tikk (2012) considered implicit feedback and incorporated contextual information using tensor decomposition; they developed an algorithm that scales linearly with the number of non-zero entries in a tensor. A comprehensive review of applications of tensor methods in recommendation can be found in Frolov & Oseledets (2017) and references therein. Different from approaches leveraging multiple sources of information, the SAD model developed in this paper considers the basic scenario where only implicit feedback is available, the same scenario considered in the BPR model (Rendle et al., 2009). Our novelty lies in proposing a more general form of tensor decomposition for modeling implicit feedback.
Deep learning in recommendation models. Deep learning has attracted significant attention in recent years, and the recommendation domain is no exception. Traditional approaches such as collaborative filtering and factorization machines (FM) have been extended to incorporate neural network components (Chen et al., 2017; He et al., 2017; Xiao et al., 2017). In particular, He et al. (2017) replaced the dot product that has been widely used in traditional collaborative filtering with a neural network containing multilayer perceptrons (MLP) and embedding layers. Chen et al. (2017) and Xiao et al. (2017) introduced attention mechanisms (Vaswani et al., 2017) to collaborative filtering and FM (Rendle, 2010) respectively. Most recently, Rendle et al. (2020) revisited the comparison between traditional matrix factorization and neural collaborative filtering and concluded that matrix factorization models can be as powerful as their neural counterparts with properly selected hyperparameters. Despite the controversy, various types of deep learning models, including convolutional networks, recurrent networks, variational autoencoders (VAEs), attention models, and combinations thereof, have been successfully applied in recommendation systems. The work by Zhang et al. (2019) provides an excellent review of this topic. This line of research does not have a direct link to the SAD model considered in the current work; we provide a brief review because our model could be further extended using the latest advances in the area.
3 Notation
We use $n$ to denote the total number of users in a dataset and $m$ to denote the total number of items. Users are indexed by $u$. Items are indexed by both $i$ and $j$. We use $k$ to denote the number of latent factors, and use $h$ to index a factor. Capital letters are used to denote a matrix or a tensor, and lowercase letters to denote a scalar or a vector. For example, the three-way tensor of observations is denoted as $Y$ and its $(u, i, j)$-th entry is denoted as $y_{uij}$. Similarly, the user latent matrix is denoted as $\Xi$, and its $u$-th column is denoted as $\xi_u$ to represent the user vector for the $u$-th user. We use $\xi_{h\cdot}$ to denote the $h$-th row (the $h$-th factor) of $\Xi$ viewed as a column vector.
4 Tensor Sliced Anti-symmetric Decomposition
We start with the original BPR model. The relative preferences defined in Equation 2 form a three-way tensor $X \in \mathbb{R}^{n \times m \times m}$. The BPR model (Rendle et al., 2009) can be viewed as one parsimonious representation of the tensor $X$. Let $\xi_u$ and $\eta_i$ denote the user and item vectors respectively, and let $\xi_{hu}$ ($\eta_{hi}$) indicate the $h$-th entry in $\xi_u$ ($\eta_i$). Equations 1, 2, and 3 can be re-written as

$p(y_{uij} = 1 \mid \Theta) = \frac{1}{1 + e^{-x_{uij}}}, \qquad X_u = \sum_{h=1}^{k} \xi_{hu}\left(\eta_{h\cdot}\mathbf{1}^\top - \mathbf{1}\eta_{h\cdot}^\top\right),$

where $X_u$ is the first order slice of $X$ at the $u$-th user, $\eta_{h\cdot}$ is the $h$-th row of the item matrix $H$ viewed as a column vector, $\mathbf{1}$ is used to denote a vector of all $1$’s, and $\eta_{h\cdot}\mathbf{1}^\top$ is an outer product.
4.1 Anti-symmetry of $X$
As discussed in Section 2, $X$ represents a parsimonious representation of the parameters of a PCM. We formalize the property of $X$ as follows:
Property 4.1.
For every user $u$, the first order slice $X_u$ of $X$ is an anti-symmetric matrix with $x_{uij} = -x_{uji}$.
This can be shown easily by letting $p(y_{uij} = 1) = 1 - p(y_{uji} = 1)$ and noting that the relative preference $x_{uij}$ is the natural parameter of the corresponding Bernoulli distribution, with $x_{uij} = \log\frac{p(y_{uij}=1)}{1 - p(y_{uij}=1)}$. Note that $x_{uij}$ is also known as the log-odds, or logit.
The decomposition introduced in BPR can be further written as

$X_u = \sum_{h=1}^{k} \xi_{hu}\left(\eta_{h\cdot}\mathbf{1}^\top - \left(\eta_{h\cdot}\mathbf{1}^\top\right)^\top\right).$  (5)

Note that anti-symmetry is well respected in the equation. Intuitively, the decomposition suggests that for the $u$-th user, her item preference matrix can be decomposed as a weighted sum of $k$ anti-symmetric components, each of which is the difference of a rank one square matrix and its transpose.
4.2 Generalization of BPR
By replacing $\mathbf{1}$ with an arbitrary vector $\tau_{h\cdot}$, the new square matrix $\eta_{h\cdot}\tau_{h\cdot}^\top - \left(\eta_{h\cdot}\tau_{h\cdot}^\top\right)^\top$ is still anti-symmetric, and Property 4.1 still holds for the resulting $X_u$. With this observation, we generalize Equation 5 by proposing a new parsimonious representation of $X$,

$X_u = \sum_{h=1}^{k} \xi_{hu}\left(\eta_{h\cdot}\tau_{h\cdot}^\top - \tau_{h\cdot}\eta_{h\cdot}^\top\right).$  (6)

In this work we require the entries of $\tau_{h\cdot}$ to be non-negative. The rationale will become clear in Section 4.3. Furthermore, by letting $\Xi \in \mathbb{R}^{k \times n}$, $H \in \mathbb{R}^{k \times m}$, and $T \in \mathbb{R}_+^{k \times m}$ (we use $\mathbb{R}_+$ to denote the set of non-negative real numbers), we introduce the proposed Sliced Anti-symmetric Decomposition (SAD).
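As an illustration, the sketch below builds a slice $X_u$ from Equation 6 with randomly generated, hypothetical factor matrices and checks that Property 4.1 holds:

import numpy as np

rng = np.random.default_rng(1)
n, m, k = 4, 6, 3                             # hypothetical sizes

xi = rng.normal(size=(k, n))                  # user matrix Xi
eta = rng.normal(size=(k, m))                 # left item matrix H
tau = np.abs(rng.normal(size=(k, m)))         # right item matrix T, non-negative

def sad_slice(u):
    """Equation 6: X_u = sum_h xi_hu (eta_h tau_h^T - tau_h eta_h^T)."""
    X_u = np.zeros((m, m))
    for h in range(k):
        outer = np.outer(eta[h], tau[h])      # rank one square matrix
        X_u += xi[h, u] * (outer - outer.T)   # anti-symmetric component
    return X_u

X_0 = sad_slice(0)
assert np.allclose(X_0, -X_0.T)               # anti-symmetry (Property 4.1)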
Definition 4.1.
We define the Sliced Anti-symmetric Decomposition (SAD) of $X$ to be the triplet of matrices $(\Xi, H, T)$ satisfying Equation 6 above for every user index $u$. We denote this by

$X = \mathrm{SAD}(\Xi, H, T).$  (7)
4.3 Interpretation of SAD
The term $\xi_{hu}\eta_{hi}$ can be interpreted as the strength of preference of the $u$-th user on the $i$-th item from the $h$-th factor. The overall strength of preference, $s_{ui} = \sum_{h=1}^{k} \xi_{hu}\eta_{hi}$, is the sum of contributions from the individual factors. Accordingly, the relative preference $x_{uij}$ over the $(i,j)$-th item pair for the $u$-th user can be viewed as the difference between the preference strengths of the $i$-th and the $j$-th items for the $u$-th user.

This interpretation has a direct link to the Bradley-Terry model (Equation 4) as previously mentioned, in which the strength of the $i$-th item is described by a positive number $\gamma_i$. Here the strength of preference is user specific and is represented by a real number $s_{ui}$.
SAD extends the original Equations 2 and 3 by introducing a new non-negative vector $\tau_i$ for every item (a column of $T$ in Equation 7). We can re-write Equation 6 as follows for every item pair $i$ and $j$:

$x_{uij} = \langle \xi_u, \eta_i \rangle_{\tau_j} - \langle \xi_u, \eta_j \rangle_{\tau_i}.$  (8)

$\langle \cdot, \cdot \rangle_{\tau}$ in Equation 8 denotes the inner product with a diagonal weight matrix having $\tau$ on the diagonal, i.e., $\langle a, b \rangle_{\tau} = a^\top \mathrm{diag}(\tau)\, b$. To be a proper inner product, we require $\tau$ to be non-negative, resulting in a positive semi-definite weight matrix $\mathrm{diag}(\tau)$.
The first term on the right hand side of Equation 8, describing the preference strength of the $i$-th item, now becomes dependent on $\tau_j$, the right item vector of its $j$-th rival. When $\tau_{hj}$ is bigger than $1$, it increases the effect of $\xi_{hu}\eta_{hi}$. Similarly, the second term on the right hand side suggests that when $\tau_{hi}$ is bigger than $1$, it strengthens the effect of the $j$-th item. The opposite happens when either $\tau_{hj}$ or $\tau_{hi}$ is smaller than $1$. Therefore, while respecting anti-symmetry, the new non-negative item vector can be viewed as a counter-effect acting upon the strength of relative preferences, penalizing the strength when its entries are greater than $1$, while reinforcing it when they are smaller than $1$. In real world applications, a user’s preference indeed may be influenced by different items. For example, during online shopping, one favorable dress may become less intriguing after a customer sees a different one with a style or color that matches her needs. In an online streaming platform, one favorable movie could become less interesting after a subscriber watches another one with a different style or genre. SAD allows us to capture these item-item interactions by introducing the new set of vectors $\tau_i$.
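The entrywise equivalence between the weighted inner product form of Equation 8 and the slice form of Equation 6 can be verified directly; a small sketch with hypothetical values:

import numpy as np

rng = np.random.default_rng(2)
k = 3
xi_u = rng.normal(size=k)                                 # hypothetical user vector
eta_i, eta_j = rng.normal(size=k), rng.normal(size=k)     # left item vectors
tau_i, tau_j = np.abs(rng.normal(size=k)), np.abs(rng.normal(size=k))  # right item vectors

# <a, b>_tau = a^T diag(tau) b, a weighted inner product.
def wdot(a, b, tau):
    return a @ (tau * b)

x_eq8 = wdot(xi_u, eta_i, tau_j) - wdot(xi_u, eta_j, tau_i)   # Equation 8
x_eq6 = np.sum(xi_u * (eta_i * tau_j - tau_i * eta_j))        # Equation 6, entrywise
assert np.isclose(x_eq8, x_eq6)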
To summarize, we interpret the three factor matrices in SAD as follows:

• $\Xi \in \mathbb{R}^{k \times n}$ represents the user matrix. Each user is represented by a user vector $\xi_u$.

• $H \in \mathbb{R}^{k \times m}$ represents the left item matrix, which is composed of left item vectors denoted as $\eta_i$. It contributes to the strength of preference on the $i$-th item via an inner product with the user vector $\xi_u$.

• $T \in \mathbb{R}_+^{k \times m}$ represents the right item matrix, which contains non-negative right item vectors denoted as $\tau_i$. This set of vectors defines the weight matrices of the inner products between $\xi_u$ and $\eta_i$. It produces counter-effects to the original preference strengths, with values bigger than $1$ adding additional strength to rival items in pairwise comparisons, and values smaller than $1$ producing the opposite effect. When $T = \mathbf{1}$, the model reduces to the original BPR model.
In SAD we estimate the value of $T$ from data. As discussed in Section 1, we encourage the entries of $T$ to be $1$ unless there is strong evidence from the data suggesting otherwise. This is achieved by adding an $\ell_2$ regularization centered around $1$ to the entries of $T$ independently.
The regularization has another side effect. In Equation 8, multiplying $H$ by any constant $c$ and $T$ by $1/c$ results in the same objective function, causing $H$ and $T$ to be unidentifiable. The additional regularization around $1$ mitigates this identifiability problem by discouraging any constant multiplication that moves $T$ away from $1$, making the joint objective function identifiable between $H$ and $T$.
4.4 The transitivity problem
In social science involving decision making, PCMs have been investigated extensively (Saaty & Vargas, 2013; Wang et al., 2021). It is usually assumed that a PCM satisfies the transitivity property, resulting in the following observation introduced in Section 1: the relative preference of the $(i,l)$-th item pair can be derived from the sum of the relative preferences of the $(i,j)$-th and $(j,l)$-th item pairs, $x_{uil} = x_{uij} + x_{ujl}$, for any $u$, $i$, $j$, and $l$. The original BPR meets this property nicely. After introducing $T$ in SAD, this property no longer necessarily holds. One can show that $\tau_{hi} = \tau_{hj} = \tau_{hl}$ for every factor $h$ is a sufficient condition for transitivity on a ternary of items $(i, j, l)$ in SAD. In our model, we allow violations of this property, making the proposed model more realistic given the fact that complete transitivity is met in only a small fraction of real world applications (Mazurek & Perzina, 2017).
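A concrete violation is easy to construct by hand. In the hypothetical example below, with $k = 2$ and a non-negative $T$, the implied preferences of a single user form a cycle: item 1 is preferred over item 2, item 2 over item 3, yet item 3 over item 1:

import numpy as np

xi_u = np.array([1.0, 1.0])            # user vector, k = 2
eta = np.array([[0.0, -1.0, 1.0],      # left item matrix H (k x m), m = 3
                [0.0,  1.0, 0.0]])
tau = np.array([[1.0, 0.0, 0.0],       # right item matrix T (k x m), non-negative
                [0.0, 0.0, 1.0]])

def x(i, j):
    """Relative preference x_uij from Equation 6."""
    return float(np.sum(xi_u * (eta[:, i] * tau[:, j] - tau[:, i] * eta[:, j])))

print(x(0, 1), x(1, 2), x(0, 2))       # 1.0, 1.0, -1.0: a preference cycle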
4.5 Inference algorithms
To estimate model parameters, we maximize the log likelihood function directly. The log likelihood given the observed entries in $Y$ can be written as

$\ell(\Xi, H, T) = \sum_{u,\, i<j} \mathbb{1}(y_{uij} = 1)\log\sigma(x_{uij}) + \mathbb{1}(y_{uij} = 0)\log\left(1 - \sigma(x_{uij})\right),$  (9)

where $\mathbb{1}(\cdot)$ is the indicator function, $\sigma(\cdot)$ is the logistic function from Equation 1, and the sum is taken with respect to the non-missing entries in $Y$ with $i < j$. Here we require $i < j$ to prevent double counting.
We take the derivatives with respect to the columns of $\Xi$, $H$, and $T$, resulting in the following gradients

$\nabla_{\xi_u}\ell = e_{uij}\left(\eta_i \circ \tau_j - \tau_i \circ \eta_j\right),$
$\nabla_{\eta_i}\ell = e_{uij}\left(\xi_u \circ \tau_j\right), \qquad \nabla_{\eta_j}\ell = -e_{uij}\left(\xi_u \circ \tau_i\right),$  (10)
$\nabla_{\tau_i}\ell = -e_{uij}\left(\xi_u \circ \eta_j\right), \qquad \nabla_{\tau_j}\ell = e_{uij}\left(\xi_u \circ \eta_i\right),$

from the $(u, i, j)$-th observation. Here $e_{uij} = \mathbb{1}(y_{uij} = 1) - \sigma(x_{uij})$ and $\circ$ is the element-wise product (Hadamard product).
Equation 10 allows us to create a stochastic gradient descent (SGD) algorithm (Algorithm 1) to optimize the negative of the log likelihood. During optimization, we add an $\ell_2$ penalty centered at $1$ to the entries of $T$ independently to encourage their values to be $1$. In addition, we add independent $\ell_2$ penalties to both $\Xi$ and $H$ for further regularization.
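A minimal sketch of one such update is shown below. The hyperparameter names (lr, lam, lam_tau) are placeholders rather than the paper’s notation, and clipping $\tau$ at $0$ is one simple way to keep it non-negative; the exact treatment is specified in Algorithm 1.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgd_step(xi, eta, tau, u, i, j, y, lr=0.01, lam=0.01, lam_tau=0.1):
    """One stochastic update from observation y_uij, following Equation 10.
    lr, lam, and lam_tau are placeholder hyperparameters."""
    xi_u, eta_i, eta_j = xi[:, u].copy(), eta[:, i].copy(), eta[:, j].copy()
    tau_i, tau_j = tau[:, i].copy(), tau[:, j].copy()

    x_uij = xi_u @ (eta_i * tau_j - tau_i * eta_j)
    e = y - sigmoid(x_uij)   # e_uij = 1(y_uij = 1) - sigma(x_uij)

    # Gradient ascent on the penalized log likelihood; xi and eta shrink
    # toward 0 while tau shrinks toward 1 (the l2 penalty centered at 1).
    xi[:, u] += lr * (e * (eta_i * tau_j - tau_i * eta_j) - lam * xi_u)
    eta[:, i] += lr * (e * (xi_u * tau_j) - lam * eta_i)
    eta[:, j] += lr * (-e * (xi_u * tau_i) - lam * eta_j)
    tau[:, i] = np.maximum(0.0, tau_i + lr * (-e * (xi_u * eta_j) - lam_tau * (tau_i - 1.0)))
    tau[:, j] = np.maximum(0.0, tau_j + lr * (e * (xi_u * eta_i) - lam_tau * (tau_j - 1.0)))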
We also develop an efficient Gibbs sampling algorithm for full posterior inference under a Probit model setup. By drawing parameter samples from posterior distributions, the Gibbs sampling algorithm has the advantage of producing accurate uncertainty estimates of the unknown parameters under Bayesian inference. We replace the logistic function in Equation 1 with $\Phi(x_{uij})$, where $\Phi$ is the cumulative distribution function (CDF) of the standard Gaussian distribution centered at $0$. By assigning spherical Gaussian priors to $\Xi$, $H$, and $T$, full conditional distributions can be derived. More details can be found in the Appendix.
5 Simulation Study
We first evaluate the performance of SAD and the SGD algorithm on simulated data, with the goal of examining the behavior of our algorithm when the true parameters are known. We fix the number of users $n$, the number of items $m$, and the number of factors $k$, determining the sizes of $\Xi$, $H$, and $T$. We consider two scenarios in the simulation. In the first scenario (Sim1) we set $T$ to $\mathbf{1}$, effectively reducing SAD to the generative model of BPR (Rendle et al., 2009). In the second scenario (Sim2), we set a small proportion of the entries of $T$ to one of two values different from $1$, one below and one above; the remaining entries are set to $1$. The values of the user matrix $\Xi$ and left item matrix $H$ are drawn uniformly from a fixed interval. We calculate the preference tensor $X$ with Equation 8 and draw an observation tensor $Y$ from the corresponding Bernoulli distributions.
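A sketch of this generative process is given below; the sizes, the sampling interval, and the perturbed values of $T$ are hypothetical stand-ins for the settings used in the simulations:

import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 50, 3                        # hypothetical sizes

xi = rng.uniform(-1.0, 1.0, size=(k, n))    # user matrix
eta = rng.uniform(-1.0, 1.0, size=(k, m))   # left item matrix

# Sim2-style right item matrix: mostly 1, with a small perturbed fraction.
tau = np.ones((k, m))
mask = rng.random((k, m)) < 0.1
tau[mask] = rng.choice([0.5, 1.5], size=int(mask.sum()))  # assumed perturbation values

# Preference tensor X (Equation 6) and Bernoulli observations Y.
X = np.einsum('hu,hi,hj->uij', xi, eta, tau) - np.einsum('hu,hi,hj->uij', xi, tau, eta)
Y = (rng.random((n, m, m)) < 1.0 / (1.0 + np.exp(-X))).astype(int)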
We first examine the performance of SAD with complete observations in Sim2 to validate that our method generates accurate parameter estimates. We run the SGD algorithm with a fixed learning rate, with weights assigned to the $\ell_2$ regularization on $T$ (centered at $1$) and to the $\ell_2$ regularization on $\Xi$ and $H$. Initial values of parameters are randomly drawn from a standard Gaussian distribution. The number of latent factors $k$ is set to the true value; in reality, when $k$ is unknown, cross validation can be used to select the best value of $k$. After 20 epochs, SAD is able to recover the sparse structure of the true $T$ up to a permutation of factors (Figure 2). The user matrix $\Xi$ and left item matrix $H$ converge to the true parameter values as well (Figure 3).
Next we examine the performance of SAD in both Sim1 and Sim2 under scenarios with missing data. To be more specific, we randomly mark a fraction of the entries of $Y$ as missing to mimic missingness at random. Note that in the real world, observations could have more complex missingness structures. As a comparison, we run BPR in the same settings.
(Figures 2 and 3: recovery of $T$, $\Xi$, and $H$ in Sim2 with complete observations. Figure 4: (a)-(b) convergence and estimated sparsity of $T$ in Sim1 and Sim2; (c)-(d) MSE trajectories of SAD and BPR under different levels of missingness.)
The convergence of SAD in Sim1 is shown in Figure 4(a), together with the estimated sparsity of $T$. Here the sparsity of $T$ is defined as the percentage of entries of $T$ equal to $1$. When the percentage of missingness is at small or medium levels, SAD is able to converge to a sparsity close to $100\%$, suggesting the effectiveness of the $\ell_2$ regularization. It becomes more challenging when the percentage of missingness grows large. Figure 4(c) shows the trajectories of the mean squared error (MSE) between the estimated and true $X$ under different missingness percentages for both SAD and BPR. Both SAD and BPR are able to converge to a low MSE with small/medium percentages of missingness. Similarly, with a high percentage of missingness, the performance of both models begins to deteriorate. We conclude that SAD performs on par with BPR when data are simulated from the generative model of BPR.
Results for Sim2 are shown in Figure 4(b). Note that the true sparsity of $T$ in Sim2 is below $100\%$. SAD is able to generate an accurate estimate of the sparsity under small/medium percentages of missingness. When evaluating both models using MSE, SAD achieves a much lower value due to its correct specification of the generative model (Figure 4(d)).
6 Applications for Real Data
We select three real world datasets to evaluate SAD and compare it against SOTA recommendation models. The datasets selected contain explicit integer valued ratings. Nonetheless, we mask their values and view them as binary. The explicit ratings are used as a means to evaluate models’ consistency in pairwise comparisons after model fitting (details below). The first dataset is from the Netflix Prize (Bennett et al., 2007). The original dataset contains ratings of 17,770 movies from 480,189 unique users, with a total number of ratings over 100 million. We randomly select a subset of users as our first dataset; the resulting dataset contains the movies these users rated, together with their ratings. For the second dataset, we choose the Movie-Lens 1M dataset (Harper & Konstan, 2015). It contains over one million ratings from about 6,000 users on close to 4,000 movies. As a third dataset, we consider the reviews of recipes from Food-Com (Majumder et al., 2019). The complete dataset contains recipe reviews from a large population of users. We select the top users with the most activity, and filter out recipes receiving too few reviews; users with zero activity after filtering recipes are further removed.
The three datasets have distinct characteristics. Among the three, the Food-Com review dataset has the fewest user-item interactions, even with the most active users and most popular recipes selected; even the maximum number of items reviewed by a single user is small. It also has the largest number of users. The Netflix dataset is the most skewed, with the number of items a user has interacted with varying over a very wide range. The Movie-Lens dataset contains the largest number of user-item interactions, and is the most uniformly distributed. Details of the three datasets can be found in Table A in the Appendix.
We choose seven SOTA recommendation models to compare with SAD; their details are listed in Table C. For each model considered, we perform a grid search to determine hyperparameters, choosing configurations based on their goodness of fit measured by log likelihood. We evaluate the models using a comprehensive leave one interaction out (LOO) evaluation (Bayer et al., 2017; He et al., 2017; 2016), in which we randomly hold out one user-item interaction from the training set for every user; users who have only one interaction are skipped. We create multiple such LOO sets for each dataset considered. The dimension of the latent space during evaluation is set to a common value for all models and datasets. The choice of the latent dimension could be further optimized using methods such as cross validation; in this work we use the same number across the compared SOTA models so that they are evaluated on common ground.
We consider two aspects of model performance during evaluation: consistency and recommendation. Consistency is defined as whether a model’s prediction matches a user’s actual pairwise preference. During evaluation, the hold out item $i$ and the other items $j$ in the interacted set of user $u$ are arranged such that $r_{ui} > r_{uj}$ based on the user’s actual ratings $r$, producing a set $\mathcal{P}$ of ordered triplets $(u, i, j)$. Denoting a model’s predicted preference by $\hat{x}_{uij}$, we report the mean of the predicted preferences

$\text{mean} = \frac{1}{|\mathcal{P}|} \sum_{(u,i,j) \in \mathcal{P}} \hat{x}_{uij},$  (11)

the percentage of consistent predictions

$\text{match} = \frac{100}{|\mathcal{P}|} \sum_{(u,i,j) \in \mathcal{P}} \mathbb{1}\left(\hat{x}_{uij} > 0\right),$  (12)

and the median of the per user percentage of consistency in Table 6. SAD has the most consistent results among all eight recommendation models except in one scenario, in which our model is second best.
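For reference, both metrics take only a few lines to compute; model_x below is an assumed callable returning a model’s predicted preference $\hat{x}_{uij}$:

import numpy as np

def consistency(model_x, pairs):
    """pairs: iterable of (u, i, j) triplets with r_ui > r_uj by actual ratings.
    Returns the mean predicted preference (Equation 11) and the percentage
    of consistent predictions (Equation 12)."""
    preds = np.array([model_x(u, i, j) for (u, i, j) in pairs])
    return preds.mean(), 100.0 * (preds > 0).mean()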
Dataset | Model | Consistency ||| Recommendation ||
| | mean | match (%) | per user (%) | M1 (%) | M2 (%)
Netflix | SAD |||||
| BPR |||||
| SVD |||||
| MF |||||
| PMF |||||
| FM |||||
| NCF |||||
| β-VAE |||||
Movie-Lens | SAD |||||
| BPR |||||
| SVD |||||
| MF |||||
| PMF |||||
| FM |||||
| NCF |||||
| β-VAE |||||
Food-Com | SAD |||||
| BPR |||||
| SVD |||||
| MF |||||
| PMF |||||
| FM |||||
| NCF |||||
| β-VAE |||||
To evaluate the models’ performance in recommendation, we create for each user an item set containing randomly sampled non-interacted items. We combine the hold out item with these items to form a test set, and examine whether models are able to rank the hold out item high in the test set (Hit Ratio (He et al., 2017)). SAD faces some unconventional challenges (hence opportunities) in producing a ranking when violations of transitivity exist. Consider three items $i$, $j$, and $l$. With transitivity, if user $u$ has $x_{uij} > 0$ and $x_{ujl} > 0$, then $x_{uil} > 0$ must hold. However, SAD can result in a scenario in which $x_{uil} < 0$, forming a preference cycle among the three items, in which case no ranking can be inferred. We propose two methods using pairwise comparisons for SAD in the evaluation, as sketched after this paragraph. In the first method (M1), the number of non-interacted items in the test set that are more preferable than the hold out item is used as its rank. In the second method (M2), we use the ratings of interacted items kept in the training set and calculate the proportion of interacted items that are less preferable than a test item; this proportion is used as a score to rank items in the test set. We calculate the percentage of hold out items that are ranked near the top among all hold out items. SAD is among the top three models in ranking the hold out items high (Table 6). We argue that since our model’s predictions match users’ actual ratings better, it is able to bring additional information by introducing diversity of items into recommendations, while respecting users’ potential preferences. In the Appendix, we illustrate examples in which SAD produces model predictions consistent with true ratings while other SOTA models fail.
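The two ranking methods can be sketched as follows, again with model_x as an assumed callable returning predicted preferences:

import numpy as np

def rank_m1(model_x, u, holdout, sampled_items):
    """M1: rank the hold out item by the number of sampled non-interacted
    items the model prefers over it (lower is better)."""
    return sum(1 for i in sampled_items if model_x(u, i, holdout) > 0)

def rank_m2(model_x, u, test_items, train_items):
    """M2: score each test item by the proportion of interacted (training)
    items it is preferred over; rank the test set by descending score."""
    score = {i: np.mean([model_x(u, i, j) > 0 for j in train_items])
             for i in test_items}
    return sorted(test_items, key=lambda i: score[i], reverse=True)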
7 Discussions
We proposed a new tensor decomposition model for collaborative filtering with implicit feedback. In contrast to traditional models, we introduced a new set of non-negative latent vectors for items. While respecting the anti-symmetry of the parameters, the new vectors generalized the standard dot products for calculating user-item preferences to general inner products, allowing items to interact when evaluating their relative preferences. When such vectors were all set to $\mathbf{1}$, our model reduced to standard collaborative filtering models. We allowed their values to be different from $1$, enabling violations of the commonly assumed transitivity property. The proposed model generated accurate recommendations across the multiple real world datasets examined.
The existence of transitivity violations has profound implications for real world applications. The items that violate the property form cycles in the directed graph implied by the pairwise preferences of a user. For example, consider three items $i$, $j$, and $l$. Transitivity implies that if user $u$ has $x_{uij} > 0$ and $x_{ujl} > 0$, then $x_{uil} > 0$ must hold. A violation of transitivity means $x_{uil} < 0$, forming a preference cycle among the three items. When SAD detects such violations, even under a strong prior ($\ell_2$ regularization) suggesting otherwise, it indicates that users themselves may not have a linear mental model for ranking items. When making recommendations, the items forming a cycle can be selected as a group, and users who are not certain which one in the group is the most relevant can be exposed to the entire group, increasing the possibility of producing a hit.
When evaluating consistency in Section 6, we used users’ actual ratings to determine pairwise preferences between items. Many items in the datasets, however, had tied ratings due to the limited set of values a rating can take. We chose the unevenly rated items, with their partial orders as ground truth during evaluation. The partial orders are a proxy for users’ true preferences, and using them is the best we can do to evaluate whether our model is able to produce consistent predictions. For evenly rated items, there is no ground truth available to evaluate performance.
This topic touches on a fundamental aspect of our existing rating systems - using an integer valued score with limited range as the sole reflection of a user’s preference. Traditional collaborative filtering models conform to this system, by assuming there is a linear ranking among items, dictated by a real number representing preference strength. However, reality can hold evidence that violates this assumption. For example, Mazurek & Perzina (2017) demonstrated that only a small proportion of real world pairwise comparisons satisfy complete transitivity. Li et al. (2013) showed the ubiquity of a user providing very different rating scores on closely correlated items, producing self-contradictions. Beyond noisiness in user ratings, the observed self-contradictions are a manifestation of the collision between a user’s mental model and the rating system. SAD innovates by providing a novel methodology to mathematically parameterize this mental model.
There can be parameterizations other than Equation 8 that respect the anti-symmetric property of $X$ while allowing violations of the transitivity property. For example, instead of introducing the new vector $\tau_i$ for every item, one can model $x_{uij}$ as

$x_{uij} = \sum_{h=1}^{k} \xi_{hu}\,(\eta_{hi} - \eta_{hj})\,\tau_{hij},$

with $\tau_{hij} \geq 0$. The matrix $T_h$ with entries $\tau_{hij}$ in this parameterization can be viewed as an item interaction matrix for the $h$-th factor. Additional structure can be assigned to $T_h$ to enable parsimony, such as letting $T_h$ be symmetric (in order to make $X_u$ anti-symmetric) and modeling it as a low rank matrix. In our current work we focused on an efficient model with a simple interpretation. We delegate the exploration of alternative parameterizations to future work.
In this paper, we restrict our scope to collaborative filtering with implicit feedback. SAD can be applied to datasets beyond this scope. For instance, datasets with explicit ratings contain partial orders that can be leveraged directly during model fitting, instead of being used to evaluate model consistency in a post-hoc manner as in the current work. Such datasets include the ones considered in Section 6, and numerous others such as the Yandex challenge and KDD Cup 2012. Other datasets that contain actual pairwise comparisons, such as the one created by Pavlichenko & Ustalov (2021), are a natural fit for SAD as well. We expect the power of SAD can be further enhanced by integrating neural network components.
References
- Bayer et al. (2017) Immanuel Bayer, Xiangnan He, Bhargav Kanagal, and Steffen Rendle. A generic coordinate descent framework for learning from implicit feedback. In Proceedings of the 26th International Conference on World Wide Web, pp. 1341–1350, 2017.
- Bennett et al. (2007) James Bennett, Stan Lanning, et al. The Netflix prize. In Proceedings of KDD Cup and Workshop, volume 2007, pp. 35. New York, NY, USA., 2007.
- Chen et al. (2017) Jingyuan Chen, Hanwang Zhang, Xiangnan He, Liqiang Nie, Wei Liu, and Tat-Seng Chua. Attentive collaborative filtering: Multimedia recommendation with item-and component-level attention. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 335–344, 2017.
- Frolov & Oseledets (2017) Evgeny Frolov and Ivan Oseledets. Tensor methods and recommender systems. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 7(3):e1201, 2017.
- Graham et al. (2019) Scott Graham, Jun-Ki Min, and Tao Wu. Microsoft recommenders: Tools to accelerate developing recommender systems. In Proceedings of the 13th ACM Conference on Recommender Systems, pp. 542–543, 2019.
- Harper & Konstan (2015) F Maxwell Harper and Joseph A Konstan. The Movielens datasets: History and context. ACM Transactions on Interactive Intelligent Systems (TIIS), 5(4):1–19, 2015.
- He et al. (2016) Xiangnan He, Hanwang Zhang, Min-Yen Kan, and Tat-Seng Chua. Fast matrix factorization for online recommendation with implicit feedback. In Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval, pp. 549–558, 2016.
- He et al. (2017) Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. Neural collaborative filtering. In Proceedings of the 26th International Conference on World Wide Web, pp. 173–182, 2017.
- Hidasi & Tikk (2012) Balázs Hidasi and Domonkos Tikk. Fast ALS-based tensor factorization for context-aware recommendation from implicit feedback. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 67–82. Springer, 2012.
- Hu et al. (2008) Yifan Hu, Yehuda Koren, and Chris Volinsky. Collaborative filtering for implicit feedback datasets. In 2008 Eighth IEEE International Conference on Data Mining, pp. 263–272. IEEE, 2008.
- Hug (2020) Nicolas Hug. Surprise: A Python library for recommender systems. Journal of Open Source Software, 5(52):2174, 2020. doi: 10.21105/joss.02174. URL https://doi.org/10.21105/joss.02174.
- Hunter (2004) David R Hunter. MM algorithms for generalized Bradley-Terry models. The Annals of Statistics, 32(1):384–406, 2004.
- Hunter & Lange (2004) David R Hunter and Kenneth Lange. A tutorial on MM algorithms. The American Statistician, 58(1):30–37, 2004.
- Kiers (2000) Henk AL Kiers. Towards a standardized notation and terminology in multiway analysis. Journal of Chemometrics: A Journal of the Chemometrics Society, 14(3):105–122, 2000.
- Li et al. (2013) Bin Li, Ling Chen, Xingquan Zhu, and Chengqi Zhang. Noisy but non-malicious user detection in social recommender systems. World Wide Web, 16(5):677–699, November 2013. ISSN 1573-1413. doi: 10.1007/s11280-012-0161-9. URL https://doi.org/10.1007/s11280-012-0161-9.
- Liang et al. (2018) Dawen Liang, Rahul G Krishnan, Matthew D Hoffman, and Tony Jebara. Variational autoencoders for collaborative filtering. In Proceedings of the 2018 World Wide Web Conference, pp. 689–698, 2018.
- Majumder et al. (2019) Bodhisattwa Prasad Majumder, Shuyang Li, Jianmo Ni, and Julian McAuley. Generating personalized recipes from historical user preferences. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 5976–5982, 2019.
- Mazurek & Perzina (2017) Jiří Mazurek and Radomír Perzina. On the inconsistency of pairwise comparisons: An experimental study. Scientific Papers of the University of Pardubice. Series D, Faculty of Economics and Administration, 2017.
- Mnih & Salakhutdinov (2008) Andriy Mnih and Russ R Salakhutdinov. Probabilistic matrix factorization. In Advances in Neural Information Processing Systems, pp. 1257–1264, 2008.
- Pan et al. (2008) Rong Pan, Yunhong Zhou, Bin Cao, Nathan N Liu, Rajan Lukose, Martin Scholz, and Qiang Yang. One-class collaborative filtering. In 2008 Eighth IEEE International Conference on Data Mining, pp. 502–511. IEEE, 2008.
- Pavlichenko & Ustalov (2021) Nikita Pavlichenko and Dmitry Ustalov. IMDB-WIKI-SbS: An evaluation dataset for crowdsourced pairwise comparisons. arXiv preprint arXiv:2110.14990, 2021.
- Rendle (2010) Steffen Rendle. Factorization machines. In 2010 IEEE International Conference on Data Mining, pp. 995–1000, 2010. doi: 10.1109/ICDM.2010.127.
- Rendle & Schmidt-Thieme (2010) Steffen Rendle and Lars Schmidt-Thieme. Pairwise interaction tensor factorization for personalized tag recommendation. In Proceedings of the Third ACM International Conference on Web Search and Data Mining, pp. 81–90, 2010.
- Rendle et al. (2009) Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. BPR: Bayesian personalized ranking from implicit feedback. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pp. 452–461, 2009.
- Rendle et al. (2020) Steffen Rendle, Walid Krichene, Li Zhang, and John Anderson. Neural collaborative filtering vs. matrix factorization revisited. In Fourteenth ACM Conference on Recommender Systems, pp. 240–248, 2020.
- Saaty & Vargas (2013) Thomas L Saaty and Luis G Vargas. The analytic network process. In Decision Making with the Analytic Network Process, pp. 1–40. Springer, 2013.
- Salah et al. (2020) Aghiles Salah, Quoc-Tuan Truong, and Hady W Lauw. Cornac: A comparative framework for multimodal recommender systems. the Journal of Machine Learning Research, 21:1–5, 2020.
- Tucker (1966) Ledyard R Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 31(3):279–311, 1966.
- Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008, 2017.
- Wang et al. (2021) Haomin Wang, Gang Kou, and Yi Peng. An iterative algorithm to derive priority from large-scale sparse pairwise comparison matrix. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2021.
- Weng & Lin (2011) Ruby C Weng and Chih-Jen Lin. A Bayesian approximation method for online ranking. Journal of Machine Learning Research, 12(1), 2011.
- Wermser et al. (2011) Hendrik Wermser, Achim Rettinger, and Volker Tresp. Modeling and learning context-aware recommendation scenarios using tensor decomposition. In 2011 International Conference on Advances in Social Networks Analysis and Mining, pp. 137–144. IEEE, 2011.
- Xiao et al. (2017) Jun Xiao, Hao Ye, Xiangnan He, Hanwang Zhang, Fei Wu, and Tat-Seng Chua. Attentional factorization machines: Learning the weight of feature interactions via attention networks. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, IJCAI’17, pp. 3119–3125, Melbourne, Australia, 2017. AAAI Press. ISBN 978-0-9992411-0-3.
- Zermelo (1929) Ernst Zermelo. Die berechnung der turnier-ergebnisse als ein maximumproblem der wahrscheinlichkeitsrechnung. Mathematische Zeitschrift, 29(1):436–460, 1929.
- Zhang et al. (2019) Shuai Zhang, Lina Yao, Aixin Sun, and Yi Tay. Deep learning based recommender system: A survey and new perspectives. ACM Computing Surveys, 52(1):1–38, 2019.
Appendix A Properties of Real World Datasets
Dataset | #Users | #Items | #Ratings | Sparsity | Quantiles of #Ratings/User |
Netflix | |||||
Movie-Lens | |||||
Food-Com |
Appendix B Gibbs Sampler for Posterior Inference of SAD
We derive an efficient Gibbs sampling algorithm as a complement to the SGD algorithm in the main paper. The Gibbs sampling algorithm has the advantage of producing accurate uncertainty estimation of unknowns under Bayesian inference by drawing parameter samples from the posterior distribution. The algorithm is an application of Bayesian Probit regression to the current setting. Specifically, we replace the original logistic parameterization in Equation 1 with the following equation:
$p(y_{uij} = 1 \mid \Theta) = \Phi(x_{uij}),$  (13)

where $\Phi$ is the CDF of a Gaussian distribution with mean $0$ and variance $1$. By augmenting the model with a hidden tensor $Z$, where $z_{uij} \sim \mathcal{N}(x_{uij}, 1)$ and $y_{uij} = \mathbb{1}(z_{uij} > 0)$, the Probit model is equivalent to

$p(y_{uij} = 1 \mid \Theta) = p(z_{uij} > 0 \mid \Theta) = \Phi(x_{uij}).$

With this new model, an efficient Gibbs sampling algorithm can be derived. As a toy example, we assign spherical Gaussian priors to the rows of $\Xi$, $H$, and $T$ independently. With the likelihood defined in Equation 13, the following conditional posterior distributions can be derived.
Posterior of $z_{uij}$:

$z_{uij} \mid \Xi, H, T, y_{uij} \sim \mathcal{N}_+(x_{uij}, 1)$ if $y_{uij} = 1$, and $z_{uij} \mid \Xi, H, T, y_{uij} \sim \mathcal{N}_-(x_{uij}, 1)$ if $y_{uij} = 0$,

where $\mathcal{N}_+$ and $\mathcal{N}_-$ are truncated Gaussian distributions on the positive and negative quadrants respectively.
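Sampling from these truncated Gaussians is routine with standard libraries; a minimal sketch of the $z$ step using scipy’s truncnorm (which takes standardized bounds):

import numpy as np
from scipy.stats import truncnorm

def sample_z(x_uij, y_uij, rng):
    """Draw z_uij from a unit-variance Gaussian centered at x_uij,
    truncated to (0, inf) when y_uij = 1 and to (-inf, 0) when y_uij = 0."""
    if y_uij == 1:
        a, b = -x_uij, np.inf    # bounds standardized by (bound - loc) / scale
    else:
        a, b = -np.inf, -x_uij
    return truncnorm.rvs(a, b, loc=x_uij, scale=1.0, random_state=rng)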
Posterior of $\xi_u$ with $w_{ij} = \eta_i \circ \tau_j - \tau_i \circ \eta_j$:

$\xi_u \mid Z, H, T \sim \mathcal{N}(\mu_u, \Sigma_u),$

where $\mu_u = \Sigma_u \sum_{(i,j) \in \mathcal{O}_u} z_{uij}\, w_{ij}$,

and $\Sigma_u = \left(I + \sum_{(i,j) \in \mathcal{O}_u} w_{ij} w_{ij}^\top\right)^{-1}$, with $\mathcal{O}_u$ denoting the set of observed item pairs of user $u$.
Posterior of $\eta_i$ with $a_{uj} = \xi_u \circ \tau_j$ and $b_{uij} = z_{uij} + \left(\xi_u \circ \eta_j\right)^\top \tau_i$:

$\eta_i \mid Z, \Xi, T, \{\eta_j\}_{j \neq i} \sim \mathcal{N}(\mu_i, \Sigma_i),$

where $\mu_i = \Sigma_i \sum_{(u,j)} b_{uij}\, a_{uj}$, and $\Sigma_i = \left(I + \sum_{(u,j)} a_{uj} a_{uj}^\top\right)^{-1}$, the sums being taken over observed comparisons involving item $i$.

In the above notation, we take advantage of the anti-symmetric property of $X_u$ and set $z_{uij} = -z_{uji}$ when $i > j$.
Posterior of $\tau_i$ with $c_{uj} = -\,\xi_u \circ \eta_j$ and $d_{uij} = z_{uij} - \left(\xi_u \circ \tau_j\right)^\top \eta_i$:

$\tau_i \mid Z, \Xi, H, \{\tau_j\}_{j \neq i} \sim \mathcal{N}(\mu_i', \Sigma_i'),$

where $\mu_i'$ and $\Sigma_i'$ take the same form as for $\eta_i$, with $a_{uj}$ and $b_{uij}$ replaced by $c_{uj}$ and $d_{uij}$, the Gaussian being restricted to the non-negative orthant to respect the constraint on $T$.

Similar to $\eta_i$, we take advantage of the anti-symmetric property of $X_u$ and set $z_{uij} = -z_{uji}$ when $i > j$.
Appendix C Methods Considered in the Real World Application
Model | Parameter | Values |
SAD | Implementation | Supplementary Code |
Learning Rate | ||
# Epochs | ||
$\ell_2$ Reg ($\Xi$, $H$) | |
$\ell_2$ Reg ($T$, centered at 1) | |
BPR (Rendle et al., 2009) | Implementation | Supplementary Code |
Learning Rate | ||
# Epochs | ||
Reg | ||
SVD | Implementation | Surprise (package) (Hug, 2020) |
Learning Rate | ||
# Epochs | ||
Regularization | ||
Matrix Factorization (MF) | Implementation | Cornac (package) (Salah et al., 2020) |
Learning Rate | ||
# Epochs | ||
Reg | ||
Probabilistic Matrix Factorization (PMF) (Mnih & Salakhutdinov, 2008) | Implementation | Cornac (package) (Salah et al., 2020)
Learning Rate | ||
# Epochs | ||
Reg | ||
Factorization Machine (FM) (Rendle, 2010) | Implementation | RankFM (package) |
Learning Rate | ||
# Epochs | ||
Reg | ||
Neural Collaborative Filtering (NCF) (He et al., 2017) | Implementation | MSFT recommenders (package) (Graham et al., 2019)
Learning Rate | ||
# Epochs | ||
Batch size | ||
Network | Three-layer MLP |
Variational AutoEncoder (β-VAE) (Liang et al., 2018) | Implementation | MSFT recommenders (package) (Graham et al., 2019)
β parameter | |
# Epochs | ||
Batch size |
Appendix D Real World Examples
During LOO evaluation, true preferences between a hold out item and the rest of a user’s interacted items in the training set are determined based on the user’s actual ratings. Cases in which our model’s predictions are consistent with the true preferences while those of alternative models are not are shown in Table 4. We select such cases from each of the three real world datasets. Model predictions together with item descriptions are shown in the table.
Dataset | $u$-th user | $i$-th item (rating) | $j$-th item (rating) | SAD | BPR
Netflix | ’1381599’ | Last of the Dogmen () | Look at Me () | ||
’1243460’ | Joe Kidd () | Scenes of the Crime () | |||
’581011’ | The Professional () | The Bourne Identity () | |||
’632823’ | Lara Croft: Tomb Raider: The Cradle of Life () | The Cookout () | |||
’429299’ | The Worst Witch () | A Chorus Line () | |||
’127356’ | Trading Spaces: Great Kitchen Designs and More! () | White Oleander () | |||
’2264661’ | Free Tibet () | Native American Medicine () | |||
’581011’ | The Dream Catcher () | The Bourne Identity () | |||
’1368371’ | Zombie Holocaust () | Gothika () | |||
’1243460’ | Moog () | Scenes of the Crime () | |||
Movie-Lens | ’1318’ | High Fidelity () | Jimmy Hollywood () | ||
’1250’ | American Beauty () | eXistenZ () | | |
’4166’ | My Fair Lady () | Problem Child 2 () | |||
’153’ | American Beauty () | Spice World () | |||
’2160’ | Harold and Maude () | The Brady Bunch Movie () | |||
’4692’ | Blade Runner () | The Newton Boys () | |||
’2385’ | Braveheart () | Voyage to the Bottom of the Sea () | |||
’4756’ | Galaxy Quest () | Felicia’s Journey () | |||
’4439’ | The Maltese Falcon () | Entrapment () | |||
’3021’ | L.A. Confidential () | Fatal Attraction () | |||
Food-Com | ’148323’ | Best Ever Banana Cake \w Cream Cheese Frosting () | Crock Pot Garlic Brown Sugar Chicken () | ||
’424008’ | Glazed Cinnamon Rolls, Bread Machine () | Japanese Mum’s Chicken () | |||
’428423’ | Crock Pot Stifado () | Best Ever and Most Versatile Muffins () | |||
’733257’ | Banana Banana Bread () | Low Fat Oatmeal Muffins () | |||
’340980’ | Funky Chicken \w Sesame Noodles () | Amanda’s Thai Peanut () | |||
’573772’ | Delicious Chicken Pot Pie () | Amish Oven Fried Chicken () | |||
’1477540’ | Cinnabon Cinnamon Rolls by Todd Wilbur () | Amanda’s Cheese Pound Cake () | |||
’268644’ | Baked Tilapia \w Lots of Spice () | Southern Fried Salmon Patties () | |||
’762742’ | Easy Peezy Pizza Dough & Bread Machine Pizza Dough () | Fresh Orange Muffins () | | |
’212558’ | Steak or Chicken Fajitas () | Thai Style Ground Beef () |