
Consistent Collaborative Filtering via Tensor Decomposition

Shiwen Zhao [email protected]
Apple
Charles G Crissman [email protected]
Work done while at Apple
Guillermo R Sapiro [email protected]
Apple
Abstract

Collaborative filtering is the de facto standard for analyzing users' activities and building recommendation systems for items. In this work we develop Sliced Anti-symmetric Decomposition (SAD), a new model for collaborative filtering based on implicit feedback. In contrast to traditional techniques where a latent representation of users (user vectors) and items (item vectors) are estimated, SAD introduces one additional latent vector to each item, using a novel three-way tensor view of user-item interactions. This new vector extends user-item preferences calculated by standard dot products to general inner products, producing interactions between items when evaluating their relative preferences. SAD reduces to state-of-the-art (SOTA) collaborative filtering models when the vector collapses to $1$, while in this paper we allow its value to be estimated from data. Allowing the values of the new item vector to differ from $1$ has profound implications. It suggests users may have nonlinear mental models when evaluating items, allowing the existence of cycles in pairwise comparisons. We demonstrate the efficiency of SAD in both simulated and real world datasets containing over 1M user-item interactions. By comparing with seven SOTA collaborative filtering models with implicit feedback, SAD produces the most consistent personalized preferences while maintaining top-level accuracy in personalized recommendations. We release the model and inference algorithms in a Python library https://github.com/apple/ml-sad.

1 Introduction

Understanding preferences based on users’ historical activities is key for personalized recommendation. This is particularly challenging when explicit ratings on many items are not available. In this scenario, historical activities are typically viewed as binary, representing whether a user has interacted with an item or not. Users’ preferences must be inferred from such implicit feedback with additional assumptions based on this binary data.

One common assumption is to view non-interacted items as negatives, meaning users are not interested in them; items that have been interacted with are often assumed to be preferred (Hu et al., 2008; Pan et al., 2008). In reality, however, such an assumption is rarely met. For example, lack of interaction between a user and an item might simply be the result of lack of exposure. It is therefore more natural to assume that non-interacted items are a combination of the ones that users do not like and the ones that users are not aware of (Rendle et al., 2009).

With this assumption, Rendle et al. (2009) proposed to give partial orders between items. Particularly, they assumed that items which users have interacted with are more preferable than the non-interacted ones. With this assumption in mind, Bayesian Personalized Ranking (BPR) was developed to perform personalized recommendations. In BPR, the observed data are transformed into a three-way binary tensor $D$ with the first dimension representing users. The other two dimensions represent items, encoding personalized preferences between item pairs (Figure 1(a)). Mathematically, this means that any first order slice of $D$ at the $u$-th user, $D_{u::}$, is represented as a pairwise comparison matrix (PCM) between items. The $(i,j)$-th entry when observed, $d_{uij}\in\{-1,1\}$, is binary, indicating whether the $u$-th user prefers ($d_{uij}=1$) the $i$-th item over the $j$-th one, or the other way around ($d_{uij}=-1$). The tensor $D$ is only partially observed, with missing entries where there is no prior knowledge to infer any preference for a particular user. The recommendation problem becomes finding a parsimonious parameterization of the generative model for observed entries in $D$ and estimating model parameters which best explain the observed data.

Figure 1: Diagrams provide visualizations of both the transformed observation $D$ and the parameters $X$ underlying the observation (Rendle et al., 2009). (a) Data transformation to form the three-way binary tensor $D$. (b) Tensor parameter $X$.

The model used in BPR assumes that among the observed entries in $D$, the probability that the $u$-th user prefers the $i$-th item over the $j$-th item can be modeled as a Bernoulli distribution (Hu et al., 2008),

p(d_{uij}=1|X)=\frac{1}{1+\exp{(-x_{uij})}}, \qquad (1)

where $X=\{x_{uij}\}\in\mathbb{R}^{n\times m\times m}$ is the collection of unknown parameters of the Bernoulli distributions (Figure 1(b)). In fact, $x_{uij}$ is the natural parameter of a Bernoulli distribution. Rendle et al. (2009) decompose the natural parameter by

x_{uij}:=x_{ui}-x_{uj}\in\mathbb{R}. \qquad (2)

In the equation, $x_{ui}$ ($x_{uj}$) can be interpreted as the strength of preference on the $i$-th ($j$-th) item for the $u$-th user. In other words, users' strengths of preference on different items are represented by scalar values denoted as $x_{ui}$. The relative preferences between items are therefore characterized by the differences between their preference strengths for a particular user. With this decomposition, $x_{uij}$ becomes the $u$-th user's relative preference on the $i$-th item over the $j$-th item.

By further letting $x_{ui}$ be represented as a dot product between a $k$ dimensional user vector $\xi_{u}\in\mathbb{R}^{k}$ and an item vector $\eta_{i}\in\mathbb{R}^{k}$,

x_{ui}=\langle\xi_{u},\eta_{i}\rangle, \qquad (3)

Rendle et al. (2009) reveal the connection between their proposed BPR model and traditional collaborative filtering models such as matrix factorization.

The factorization in Equations 2 and 3 provides a parsimonious representation of the original Equation 1. Without the factorization, one could model each $x_{uij}$ independently. Such a model requires $n\times m\times m$ free parameters, and model fitting would result in $x_{uij}=\infty$ for $d_{uij}=1$ and $x_{uij}=-\infty$ for $d_{uij}=-1$. The rest of the $x_{uij}$ with missing $d_{uij}$ would be left unknown. The parsimonious representation, in contrast, reduces the number of parameters to $(n+m)\times k$. The value of $x_{uij}$ now becomes coupled with $x_{uit}$ for $t\neq j$, and entries in $D$ jointly impact the parameter estimate of $x_{uij}$. This highlights a fundamental assumption behind collaborative filtering: information can be shared across items that exhibit similar interaction patterns, and one user can learn from another user who interacts with a similar set of items.
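
To make the factorized parameterization concrete, the following is a minimal numpy sketch of Equations 1-3; the variable names and random initialization are ours for illustration and are not taken from the released library.

import numpy as np

rng = np.random.default_rng(0)
n, m, k = 4, 6, 3                      # users, items, latent factors
Xi = rng.normal(size=(k, n))           # user matrix, column u is xi_u
H = rng.normal(size=(k, m))            # item matrix, column i is eta_i

def bpr_preference(u, i, j):
    # x_uij = <xi_u, eta_i> - <xi_u, eta_j>  (Equations 2 and 3)
    x_ui = Xi[:, u] @ H[:, i]
    x_uj = Xi[:, u] @ H[:, j]
    return x_ui - x_uj

# Probability that user u prefers item i over item j (Equation 1).
p = 1.0 / (1.0 + np.exp(-bpr_preference(0, 1, 2)))
# The factorization needs (n + m) * k parameters instead of n * m * m.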

Equation 3 has a direct link to the Bradley-Terry model often studied when analyzing a PCM for decision making (Hunter, 2004; Weng & Lin, 2011). The model dates back at least to 1929 (Zermelo, 1929). One property of the model is its transitivity: the relative preference $x_{uij}$ can be expressed as the sum of the relative preferences $x_{uit}$ and $x_{utj}$, for any user and any $t\neq i$, $t\neq j$. In reality, however, this transitivity property is less frequently met. Only around $3\%$ of real world PCMs satisfy complete transitivity (Mazurek & Perzina, 2017). The violation of the property is more conceivable when users are exposed to various types of items. For example, in an online streaming platform, one favorable movie could become less intriguing after a subscriber watches a movie of a different style/genre.

In this paper, we extend the original BPR model (one of the most fundamental models in recommendation systems) to allow non-transitive user preferences. In particular, we extend Equation 2 to a more general form by proposing a new tensor decomposition. We denote our new model SAD (Sliced Anti-symmetric Decomposition). The new tensor decomposition introduces a second set of non-negative item vectors $\tau_{i}$, one for every item. Different from the first vector $\eta_{i}$, the new vector contributes negatively when calculating relative preferences, producing counter-effects to the original strength of users' preferences; see Section 4 for more details. Mathematically, the new vector extends the original dot product in Equation 3 to an inner product. The original BPR model becomes a special case of SAD when the values in $\tau_{i}$ are all set to $1$, and the inner product reduces to a standard dot product. When $\tau_{i}$ contains entries that are not $1$, the transitivity property no longer necessarily holds. While assigning an $l_{1}$ regularization to the entries in $\tau_{i}$ to encourage values of $1$, reflecting our prior beliefs, SAD is able to infer their unknown values from real world data. We derive an efficient group coordinate descent algorithm for parameter estimation of SAD. Our algorithm results in a simple stochastic gradient descent (SGD) procedure producing fast and accurate parameter estimates. Through a simulation study we first demonstrate the expressiveness of SAD and the efficiency of the SGD algorithm. We then compare SAD to seven alternative SOTA recommendation models on three publicly available datasets with distinct characteristics. Results confirm that our new model exploits information and relations between items not previously considered, and provides more consistent and accurate results as we demonstrate in this paper.

2 Related Works

Inferring priority via pairwise comparison. The Bradley-Terry model (Hunter, 2004; Weng & Lin, 2011; Zermelo, 1929) has been heavily used along this line of research. In the Bradley-Terry model, the probability of the $i$-th unit (an individual, a team, or an item) being more preferable than the $j$-th unit (denoted as $i\succ j$) is modeled by

p(i\succ j)=\lambda_{i}/(\lambda_{i}+\lambda_{j}), \qquad (4)

where $\lambda_{i}$ represents the strength, or degree, of preference for the $i$-th unit. The goal is to estimate $\lambda_{i}$ for all units based on pairwise comparisons. The link to Equation 1 becomes clear once we omit the user index and set $x_{i}=\log(\lambda_{i})$. In fact, the original BPR model can be viewed as an extension of the Bradley-Terry model that allows personalized parameters, with the strengths of preference assumed to be dot products of user and item vectors as in Equation 3.
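
The equivalence between the Bradley-Terry form (Equation 4) and the logistic form (Equation 1) under $x_{i}=\log(\lambda_{i})$ can be checked numerically; the strengths below are illustrative values of our own choosing.

import numpy as np

lam = np.array([2.0, 1.0, 0.5])   # Bradley-Terry strengths (illustrative)

def p_bt(i, j):
    # p(i > j) = lambda_i / (lambda_i + lambda_j)  (Equation 4)
    return lam[i] / (lam[i] + lam[j])

def p_logit(i, j):
    # Same probability via the logistic form of Equation 1 with x_i = log(lambda_i).
    x = np.log(lam)
    return 1.0 / (1.0 + np.exp(-(x[i] - x[j])))

assert np.isclose(p_bt(0, 1), p_logit(0, 1))  # the two forms agree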

Various algorithms have been developed for parameter estimation of this model. For example, Hunter (2004) developed a class of algorithms named minorization-maximization (MM) for parameter estimation. In MM, a minorizing function $Q$ is maximized to find the next parameter update at every iteration. Refer to the work by Hunter & Lange (2004) for more details on MM. Weng & Lin (2011) proposed a Bayesian approximation method to estimate teams' priorities from the outcomes of games between teams. Most recently, Wang et al. (2021) developed a bipartite graph iterative method to infer priorities from large and sparse pairwise comparison matrices. They applied the algorithm to the Movie-Lens dataset to rank movies based on their ratings aggregated from multiple users. Our paper differs from the aforementioned models in that we model user-specific item preferences in a personalized setting.

Tensor decompositions for recommendation. Compared to traditional collaborative filtering methods using matrix factorizations, tensor decompositions received less attention in this field until recently. The BPR model can be viewed as one of the first attempts to approach the recommendation problem using tensor analysis. As discussed in Section 1, by making the assumption that interacted items are more preferable compared to non-interacted ones, user-item implicit feedback is represented as a three-way binary tensor (Rendle et al., 2009). In later work, the authors developed tensor decomposition models for personalized tag recommendation (Rendle & Schmidt-Thieme, 2010) and discussed the relationship between their approach and traditional tensor decompositions such as Tucker and PARAFAC (parallel factors) decompositions (Kiers, 2000; Tucker, 1966).

Recently, tensor decomposition methods have been used to build recommendation systems using information from multiple sources. Wermser et al. (2011) developed a context aware tensor decomposition approach by using information from multiple sources, including time, location, and sequential information. Hidasi & Tikk (2012) considered implicit feedback and incorporated contextual information using tensor decomposition. They developed an algorithm that scales linearly with the number of non-zero entries in a tensor. A comprehensive review of applications of tensor methods in recommendation can be found in Frolov & Oseledets (2017) and references therein. Different from leveraging multiple sources of information, the SAD model developed in this paper considers the basic scenario where only implicit feedback is available, the same scenario considered by the BPR model (Rendle et al., 2009). Our novelty lies in the fact that we propose a more general form of tensor decomposition for modeling implicit feedback.

Deep learning in recommendation models. Deep learning has attracted significant attention in recent years, and the recommendation domain is no exception. Traditional approaches such as collaborative filtering and factorization machines (FM) have been extended to incorporate neural network components (Chen et al., 2017; He et al., 2017; Xiao et al., 2017). In particular, He et al. (2017) replaced the dot product that has been widely used in traditional collaborative filtering with a neural network containing Multilayer Perceptrons (MLP) and embedding layers. Chen et al. (2017) and Xiao et al. (2017) introduced attention mechanisms (Vaswani et al., 2017) to collaborative filtering and FM (Rendle, 2010), respectively. Most recently, Rendle et al. (2020) revisited the comparison of traditional matrix factorization and neural collaborative filtering and concluded that matrix factorization models can be as powerful as their neural counterparts with properly selected hyperparameters. Despite the controversy, various types of deep learning models including convolutional networks, recurrent networks, variational auto-encoders (VAEs), attention models, and combinations thereof have been successfully applied in recommendation systems. The work by Zhang et al. (2019) provides an excellent review of this topic. This line of research does not have a direct link to the SAD model considered in the current work; we nonetheless provide a brief review since our model could be further extended to use the latest advances in the area.

3 Notation

We use $n$ to denote the total number of users in a dataset and $m$ to denote the total number of items. Users are indexed by $u\in[1,\cdots,n]$. Items are indexed by both $i$ and $j\in[1,\cdots,m]$. We use $k$ to denote the number of latent factors, and use $h\in[1,\cdots,k]$ to index a factor. Capital letters are used to denote a matrix or a tensor, and lowercase letters to denote a scalar or a vector. For example, the three-way tensor of observations is denoted as $D\in\mathbb{R}^{n\times m\times m}$ and its $(u,i,j)$-th entry is denoted as $d_{uij}$. Similarly, the user latent matrix is denoted as $\Xi\in\mathbb{R}^{k\times n}$, and its $u$-th column is denoted as $\xi_{u}$ to represent the user vector for the $u$-th user. We use $\xi^{h}$ to denote the $h$-th row (the $h$-th factor) of $\Xi$ viewed as a column vector.

4 Tensor Sliced Anti-symmetric Decomposition

We start with the original BPR model. The relative preferences $x_{uij}$ defined in Equation 2 form a three-way tensor $X\in\mathbb{R}^{n\times m\times m}$. The BPR model (Rendle et al., 2009) can be viewed as one parsimonious representation of the tensor $X$. Let $\xi_{u}\in\mathbb{R}^{k}$ and $\eta_{i}\in\mathbb{R}^{k}$ denote the user and item vectors respectively, and let $\xi_{hu}$ ($\eta_{hi}$) indicate the $h$-th entry in $\xi_{u}$ ($\eta_{i}$). Equations 1, 2, and 3 can be re-written as

X_{u::}=\sum_{h=1}^{k}\xi_{hu}(\widetilde{H}_{h}-(\widetilde{H}_{h})^{\top}),

where $X_{u::}$ is the first order slice of $X$ at the $u$-th user,

\widetilde{H}_{h}=\eta^{h}\circ\mathbb{1},

with $\eta^{h}\in\mathbb{R}^{m}$ being the $h$-th row of the item matrix $H\in\mathbb{R}^{k\times m}$. $\mathbb{1}\in\mathbb{R}^{m}$ is used to denote a vector of all $1$'s and $\circ$ the outer product.

4.1 Anti-symmetry of $X_{u::}$

As discussed in Section 2, $X_{u::}$ represents a parsimonious representation of the parameters of a PCM. We formalize the property of $X$ as follows:

Property 4.1.

For every user $u$, the first order slice of $X$ at $u$, $X_{u::}$, is anti-symmetric: $X_{u::}=-X^{\top}_{u::}$.

This can be shown easily by letting $p_{uij}=p(d_{uij}=1)=p(i\succ_{u}j)$ and noting that the relative preference $x_{uij}$ is the natural parameter of the corresponding Bernoulli distribution with $x_{uij}=\log(p_{uij}/(1-p_{uij}))$. Note that $x_{uij}$ is also known as the log-odds or logit.

The decomposition introduced in BPR can be further written as

X_{u::}=\sum_{h=1}^{k}\xi_{hu}(\eta^{h}\circ\mathbb{1}-\mathbb{1}\circ\eta^{h}). \qquad (5)

Note that anti-symmetry is respected in the equation. Intuitively, the decomposition suggests that for the $u$-th user, her item preference matrix $X_{u::}$ can be decomposed as a weighted sum of $k$ anti-symmetric components, each of which is the difference between a rank one square matrix and its transpose.

4.2 Generalization of BPR

By replacing $\mathbb{1}$ with an arbitrary vector $\tau^{h}\in\mathbb{R}^{m}$, the new square matrix $\eta^{h}\circ\tau^{h}-\tau^{h}\circ\eta^{h}$ is still anti-symmetric, and Property 4.1 still holds for the resulting $X_{u::}$. With this observation, we generalize Equation 5 by proposing a new parsimonious representation of $X_{u::}$,

X_{u::}=\sum_{h=1}^{k}\xi_{hu}(\eta^{h}\circ\tau^{h}-\tau^{h}\circ\eta^{h}). \qquad (6)

In this work we require the entries in $\tau^{h}$ to be non-negative; the rationale will become clear in Section 4.3. Furthermore, by letting $\Xi:=(\xi_{1},\xi_{2},\cdots,\xi_{n})\in\mathbb{R}^{k\times n}$, $H:=(\eta^{1},\eta^{2},\cdots,\eta^{k})^{\top}\in\mathbb{R}^{k\times m}$, and $T:=(\tau^{1},\tau^{2},\cdots,\tau^{k})^{\top}\in\mathbb{R}^{k\times m}_{+}$ (we use $\mathbb{R}_{+}$ to denote the set of non-negative real numbers), we introduce the proposed Sliced Anti-symmetric Decomposition (SAD).

Definition 4.1.

We define the Sliced Anti-symmetric Decomposition (SAD) of $X$ to be the matrices $\Xi,H,T$ satisfying Equation 6 above for every user index $u$. We denote this by

X\overset{\textrm{SAD}}{:=}\{\Xi,H,T\}. \qquad (7)
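
As a sanity check on the definition, the following is a minimal numpy sketch that assembles one slice $X_{u::}$ from $\Xi$, $H$, $T$ via Equation 6 and verifies Property 4.1. The variable names and the gamma initialization of $T$ (to obtain non-negative entries) are our own illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
n, m, k = 4, 6, 3
Xi = rng.normal(size=(k, n))           # user matrix
H = rng.normal(size=(k, m))            # left item matrix
T = rng.gamma(2.0, 1.0, size=(k, m))   # right item matrix, non-negative

def sad_slice(u):
    # X_u:: = sum_h xi_hu (eta^h o tau^h - tau^h o eta^h)  (Equation 6)
    X_u = np.zeros((m, m))
    for h in range(k):
        outer = np.outer(H[h], T[h])   # eta^h o tau^h
        X_u += Xi[h, u] * (outer - outer.T)
    return X_u

X_u = sad_slice(0)
assert np.allclose(X_u, -X_u.T)        # anti-symmetry (Property 4.1)
# Entry (i, j) equals Equation 8: sum_h xi_hu (eta_hi tau_hj - eta_hj tau_hi).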

4.3 Interpretation of SAD

To understand the interpretation of SAD, we start by re-writing equations 2 and 3 in BPR as

x_{uij}=\langle\xi_{u},\eta_{i}\rangle-\langle\xi_{u},\eta_{j}\rangle=\sum_{h=1}^{k}(\xi_{hu}\eta_{hi}-\xi_{hu}\eta_{hj}).

The term $\xi_{hu}\eta_{hi}$ can be interpreted as the strength of preference of the $u$-th user on the $i$-th item from the $h$-th factor. The overall strength of preference, $x_{ui}$, is the sum of contributions from the $k$ individual factors. Accordingly, the relative preference over the $(i,j)$-th item pair for the $u$-th user can be viewed as the difference between the preference strengths of the $i$-th and the $j$-th items for the $u$-th user.

This interpretation has a direct link to the Bradley-Terry model (4) as previously mentioned, in which the strength of the $i$-th item is described by a positive number $\lambda_{i}$. Here the strength of preference is user specific and is represented by a real number $x_{ui}=\log{\lambda_{ui}}$.

SAD extends the original Equations 2 and 3 by introducing a new non-negative vector $\tau_{i}$ for every item (a column in $T$ in Equation 7). We can re-write Equation 6 as follows for every item pair $i$ and $j$:

x_{uij}=\langle\xi_{u},\eta_{i}\rangle_{\textrm{diag}(\tau_{j})}-\langle\xi_{u},\eta_{j}\rangle_{\textrm{diag}(\tau_{i})}=\sum_{h=1}^{k}(\xi_{hu}\eta_{hi}\tau_{hj}-\xi_{hu}\eta_{hj}\tau_{hi}). \qquad (8)

Here $\langle\cdot,\cdot\rangle_{\textrm{diag}(\tau_{i})}$ in Equation 8 denotes the inner product with a diagonal weight matrix having $\tau_{i}$ on the diagonal. To be a proper inner product, we require $\tau_{i}$ to be non-negative, resulting in a positive semi-definite matrix $\textrm{diag}(\tau_{i})$.

The first term on the right hand side of Equation 8, describing the preference strength of the $i$-th item, now depends on $\tau_{hj}$, the $h$-th entry in $\tau_{j}$ of the $j$-th rival. When $\tau_{hj}$ is bigger than $1$, it increases the effect of $\xi_{hu}\eta_{hi}$. Similarly, the second term on the right hand side suggests that when $\tau_{hi}$ is bigger than $1$, it strengthens the effect of the $j$-th item. The opposite happens when either $\tau_{hj}$ or $\tau_{hi}$ is smaller than $1$. Therefore, while respecting anti-symmetry, the new non-negative item vector $\tau_{i}$ can be viewed as a counter-effect acting upon the strength of relative preferences, penalizing the $i$-th item's relative strength when its entries are greater than $1$ and reinforcing it when they are smaller than $1$. In real world applications, a user's preference indeed may be influenced by other items. For example, during online shopping, one favorable dress may become less intriguing after a customer sees a different one with a style/color that better matches her needs. In an online streaming platform, one favorable movie could become less interesting after a subscriber watches another one with a different style/genre. SAD allows us to capture these item-item interactions by introducing the new set of vectors $\tau_{i}$.

To summarize, we interpret the three factor matrices in SAD as follows:

  • $\Xi$ represents the user matrix. Each user is represented by a user vector $\xi_{u}\in\mathbb{R}^{k}$.

  • $H$ represents the left item matrix, which is composed of left item vectors denoted as $\eta_{i}\in\mathbb{R}^{k}$. It contributes to the strength of preference on the $i$-th item via an inner product with the user vector $\xi_{u}$.

  • $T$ represents the right item matrix, which contains non-negative right item vectors denoted as $\tau_{i}\in\mathbb{R}^{k}_{+}$. This set of vectors defines the weight matrices of the inner products between $\eta_{i}$ and $\xi_{u}$. It produces counter-effects to the original preference strengths, with values bigger than $1$ adding additional strength to rival items in pairwise comparisons, and values smaller than $1$ producing the opposite effect. When all entries of $T$ equal $1$, the model reduces to the original BPR model.

In SAD we estimate the value of $T$ from data. As discussed in Section 1, we encourage the values of the entries in $T$ to be $1$ unless there is strong evidence from the data suggesting otherwise. This is achieved by adding an $l_{1}$ regularization centered around $1$ to the entries in $T$ independently.

The $l_{1}$ regularization serves a second purpose. In Equation 8, multiplying $\eta_{hi}$ by any constant $c$ and $\tau_{hj}$ by $1/c$ results in the same objective function, making $H$ and $T$ unidentifiable. The additional $l_{1}$ regularization around $1$ mitigates this identifiability problem by discouraging any constant multiplication that moves $\tau_{hj}$ away from $1$, making the joint objective function identifiable between $H$ and $T$.

4.4 The transitivity problem

In social science involving decision making, PCMs have been investigated extensively (Saaty & Vargas, 2013; Wang et al., 2021). It is usually assumed that a PCM satisfies the transitivity property, resulting in the following observation introduced in Section 1: the relative preference of the $(i,j)$-th item pair can be derived from the sum of the relative preferences of the $(i,t)$-th and $(t,j)$-th item pairs, with $t\neq i$ and $t\neq j$, i.e., $x_{uij}=x_{uit}+x_{utj}$. The original BPR meets this property exactly. After introducing $T$ in SAD, this property no longer necessarily holds. One can show that $\tau_{i}=\tau_{j}=\tau_{t}$ for the triple $(i,j,t)$ is a sufficient condition for transitivity in SAD. In our model, we allow the violation of this property, making the proposed model more realistic given the fact that complete transitivity is met in only $3\%$ of real world applications (Mazurek & Perzina, 2017). A small worked check of how unequal $\tau$ values break transitivity is sketched below.
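
The sketch below uses one latent factor, a single user, and illustrative numbers of our own choosing to show that Equation 8 satisfies $x_{uij}=x_{uit}+x_{utj}$ when the $\tau$ vectors coincide, and can violate it otherwise.

import numpy as np

# One latent factor (k = 1), one user with xi_u = 1, three items indexed 0, 1, 2.
eta = np.array([1.0, 0.5, 0.2])   # left item values (illustrative)
tau = np.array([1.0, 1.0, 1.0])   # right item values

def x(a, b):
    # Relative preference from Equation 8 with a single factor and xi_u = 1.
    return eta[a] * tau[b] - eta[b] * tau[a]

# With tau all equal to 1 the BPR case is recovered and transitivity holds:
assert np.isclose(x(0, 2), x(0, 1) + x(1, 2))

# Making tau unequal breaks the identity, so x_uij = x_uit + x_utj need not hold:
tau = np.array([1.0, 3.0, 0.2])
print(x(0, 2), x(0, 1) + x(1, 2))   # the two sides now differ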

4.5 Inference algorithms

To estimate model parameters, we maximize the log likelihood function directly. The log likelihood given the observed entries in $D$ can be written as

\log p(D|\Theta)=\sum_{(u,i,j)}-\mathbf{1}(d_{uij}=-1)\,x_{uij}-\log{(1+\exp{(-x_{uij})})}, \qquad (9)

where $\mathbf{1}(\cdot)$ is the indicator function, and the sum is taken over the non-missing entries in $D$ with $i<j$. We require $i<j$ to prevent double counting.

We take derivatives with respect to the columns of $\Xi$, $H$, and $T$, resulting in the following gradients

\frac{\partial\log p(D|\Theta)}{\partial\xi_{u}} =w_{uij}(\eta_{i}\odot\tau_{j}-\eta_{j}\odot\tau_{i}), \qquad (10)
\frac{\partial\log p(D|\Theta)}{\partial\eta_{i}} =w_{uij}\,\xi_{u}\odot\tau_{j},
\frac{\partial\log p(D|\Theta)}{\partial\eta_{j}} =-w_{uij}\,\xi_{u}\odot\tau_{i},
\frac{\partial\log p(D|\Theta)}{\partial\tau_{i}} =-w_{uij}\,\xi_{u}\odot\eta_{j},
\frac{\partial\log p(D|\Theta)}{\partial\tau_{j}} =w_{uij}\,\xi_{u}\odot\eta_{i},

for the $(u,i,j)$-th observation. Here $w_{uij}=-\mathbf{1}(d_{uij}=-1)+\exp{(-x_{uij})}/(1+\exp{(-x_{uij})})$ and $\odot$ is the element-wise (Hadamard) product.

Equation 10 allows us to derive a stochastic gradient descent (SGD) algorithm (Algorithm 1) to optimize the negative log likelihood. During optimization, we add an $l_{1}$ penalty with weight $w_{1}$ to the entries in $T$ independently to encourage their values to be $1$. In addition, we add independent $l_{2}$ penalties with weight $w_{2}$ to both $\Xi$ and $H$ for further regularization.

Algorithm 1 SGD for parameter estimation of SAD
Input: $n$, $m$, $k$, $D\in\mathbb{R}^{n\times m\times m}$, $\rho$, $w_{1}$, $w_{2}$ \triangleright $\rho$: learning rate; $w_{1}$, $w_{2}$: weights for the $l_{1}$ and $l_{2}$ penalties
Initialize $\Xi\in\mathbb{R}^{k\times n}$, $H\in\mathbb{R}^{k\times m}$, $T\in\mathbb{R}^{k\times m}_{+}$
$X=\{\Xi,H,T\}$ \triangleright Equation 7
while convergence not met do
     for $u=1\cdots n$ do
          for every item $i$ in the interacted set do
               Randomly select item $j$ from the non-interacted item set
               Calculate $\mathrm{d}\xi_{u}$, $\mathrm{d}\eta_{i}$, $\mathrm{d}\eta_{j}$, $\mathrm{d}\tau_{i}$, $\mathrm{d}\tau_{j}$ \triangleright Equation 10
               $\xi_{u}\leftarrow\xi_{u}+\rho\cdot(\mathrm{d}\xi_{u}-2w_{2}\xi_{u})$
               $\eta_{i}\leftarrow\eta_{i}+\rho\cdot(\mathrm{d}\eta_{i}-2w_{2}\eta_{i})$
               $\eta_{j}\leftarrow\eta_{j}+\rho\cdot(\mathrm{d}\eta_{j}-2w_{2}\eta_{j})$
               $\tau_{i}\leftarrow\tau_{i}+\rho\cdot(\mathrm{d}\tau_{i}-w_{1}\mathbf{1}[\tau_{i}>1]+w_{1}\mathbf{1}[\tau_{i}<1])$ \triangleright $\mathbf{1}[\cdot]$: entry-wise indicator applied to $\tau_{i}\in\mathbb{R}^{k}$
               $\tau_{j}\leftarrow\tau_{j}+\rho\cdot(\mathrm{d}\tau_{j}-w_{1}\mathbf{1}[\tau_{j}>1]+w_{1}\mathbf{1}[\tau_{j}<1])$
          end for
     end for
end while
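
The following is a minimal numpy sketch of one update in Algorithm 1, assuming $d_{uij}=1$ for the sampled pair (item $i$ interacted, item $j$ not), so that the gradient weight reduces to the sigmoid term. The function name sgd_step is hypothetical, and the final non-negativity projection on $T$ is our own addition rather than a step spelled out in the algorithm.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgd_step(Xi, H, T, u, i, j, rho=0.05, w1=0.01, w2=0.005):
    # One stochastic update for a (u, i, j) triple with d_uij = 1.
    x_uij = np.sum(Xi[:, u] * (H[:, i] * T[:, j] - H[:, j] * T[:, i]))  # Equation 8
    w = sigmoid(-x_uij)                                                 # gradient weight
    d_xi = w * (H[:, i] * T[:, j] - H[:, j] * T[:, i])                  # Equation 10
    d_eta_i = w * Xi[:, u] * T[:, j]
    d_eta_j = -w * Xi[:, u] * T[:, i]
    d_tau_i = -w * Xi[:, u] * H[:, j]
    d_tau_j = w * Xi[:, u] * H[:, i]

    Xi[:, u] += rho * (d_xi - 2 * w2 * Xi[:, u])       # l2 penalty on user vectors
    H[:, i] += rho * (d_eta_i - 2 * w2 * H[:, i])      # l2 penalty on left item vectors
    H[:, j] += rho * (d_eta_j - 2 * w2 * H[:, j])
    # l1 regularization pulling tau toward 1 (subgradient of |tau - 1|).
    T[:, i] += rho * (d_tau_i - w1 * np.sign(T[:, i] - 1))
    T[:, j] += rho * (d_tau_j - w1 * np.sign(T[:, j] - 1))
    T[:, i] = np.maximum(T[:, i], 0.0)                 # keep tau non-negative (our addition)
    T[:, j] = np.maximum(T[:, j], 0.0)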

We also develop an efficient Gibbs sampling algorithm for full posterior inference under a Probit model setup. By drawing parameter samples from posterior distributions, the Gibbs sampling algorithm has the advantage of producing accurate uncertainty estimates of the unknown parameters under Bayesian inference. We replace the logistic function in Equation 1 with $p(d_{uij}=1|\Theta)=\Phi(x_{uij})$, where $\Phi(\cdot)$ is the cumulative distribution function (CDF) of the standard Gaussian distribution. By assigning spherical Gaussian priors to $\Xi$, $H$, and $T$, full conditional distributions can be derived. More details can be found in the Appendix.

5 Simulation Study

We first evaluate the performance of SAD and the SGD algorithm on simulated data, with the goal of examining the behavior of our algorithm when the true parameters are known. We choose $n=20$ users, $m=50$ items, and $k=5$, resulting in $\Xi\in\mathbb{R}^{5\times 20}$, $H\in\mathbb{R}^{5\times 50}$, and $T\in\mathbb{R}^{5\times 50}_{+}$. We consider two scenarios in the simulation. In the first simulation (Sim1) we set all entries of $T$ to $1$, effectively reducing SAD to the generative model of BPR (Rendle et al., 2009). In the second scenario (Sim2), we set a small proportion of the entries of $T$ to either $0.01$ or $5$; the other entries are set to $1$. For the user matrix $\Xi$ and the left item matrix $H$, values are drawn uniformly from the interval $[-2,2]$. We calculate the preference tensor $X$ with Equation 8 and draw an observation tensor $D$ from the corresponding Bernoulli distributions, as sketched below.
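
A short numpy sketch of this data generation follows; the seed and the einsum-based construction of $X$ are our own choices for illustration.

import numpy as np

rng = np.random.default_rng(1)
n, m, k = 20, 50, 5
Xi = rng.uniform(-2, 2, size=(k, n))      # user matrix
H = rng.uniform(-2, 2, size=(k, m))       # left item matrix
T = np.ones((k, m))                       # Sim1; Sim2 would set a few entries to 0.01 or 5

# Preference tensor X from Equation 8 and binary observations D from Equation 1.
X = np.einsum('ku,ki,kj->uij', Xi, H, T) - np.einsum('ku,kj,ki->uij', Xi, H, T)
P = 1.0 / (1.0 + np.exp(-X))              # Bernoulli probabilities
D = np.where(rng.uniform(size=P.shape) < P, 1, -1)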

We first examine the performance of SAD with complete observations in Sim2 to validate whether our method is able to generate accurate parameter estimates. We run the SGD algorithm with a learning rate of $0.05$. The weight of the $l_{1}$ regularization assigned to $T$ is set to $0.01$, and the weight of the $l_{2}$ regularization is set to $0.005$. Initial parameter values are randomly drawn from a standard Gaussian distribution. The number of latent factors $k$ is set to the true value. In reality, when $k$ is unknown, cross validation can be used to select it. After 20 epochs, $\hat{T}$ recovers the sparse structure of the true $T$ up to a permutation of factors (Figure 2). The user matrix and left item matrix converge to the true parameter values as well (Figure 3).

Next we examine the performance of SAD in both Sim1 and Sim2 under scenarios with missing data. Specifically, we randomly mark $x\%$ of the entries of $D$ as missing to mimic missingness at random. Note that in the real world, observations could have more complex missingness structures. As a comparison, we run BPR in the same settings.

Figure 2: Comparison of $\hat{T}$ with the ground truth. Factors are re-ordered in $\hat{T}$ to match the true $T$.
Figure 3: Comparison of $\hat{\Xi}$, $\hat{H}$ with their ground truth. Factors are subject to re-ordering and sign flips.
Figure 4: Convergence in the simulation study. Panels: (a) likelihood and sparsity (Sim1); (b) likelihood and sparsity (Sim2); (c) convergence to the true parameters (Sim1); (d) convergence to the true parameters (Sim2). Top rows in 4(a) and 4(b) show changes of the log likelihood (Equation 9) during SGD optimization. $\log(p_{\Theta_{0}})$ is the log likelihood at the 0-th iteration. The bottom row of 4(b) shows the true sparsity ($86\%$) as a black line. 4(c) and 4(d) show the Frobenius distance between the estimated parameter $\hat{X}$ and the true parameter $X_{true}$ during SGD optimization. Note that SAD (solid line) achieves a much lower distance compared with BPR (dashed line) in 4(d) under a wide range of missingness percentages.

The convergence of SAD in Sim1 is shown in Figure 4(a), together with the estimated sparsity of $T$. Here the sparsity of $T$ is defined as the percentage of entries in $T$ with $|\tau_{hj}-1|<0.05$. When the percentage of missingness is at a small or medium level, SAD converges to a sparsity close to $1$, suggesting the effectiveness of the $l_{1}$ regularization. The problem becomes more challenging when the percentage of missingness surpasses $70\%$. Figure 4(c) shows the trajectories of the mean squared error (MSE) between $\hat{X}$ and the true $X$ under different missingness percentages for both SAD and BPR. Both models converge to a low MSE with small/medium percentages of missingness. Similarly, with a high percentage of missingness, the performance of both models begins to deteriorate. We conclude that SAD performs on par with BPR when data are simulated from the generative model of BPR.

Results for Sim2 are shown in Figure 4(b). Note that the true sparsity of $T$ is $86\%$. SAD is able to generate an accurate estimation of the sparsity under small/medium percentages of missingness. When evaluating both models using MSE, SAD is able to achieve a much lower value due to its correct specification of the generative model (Figure 4(d)).

6 Applications for Real Data

We select three real world datasets to evaluate SAD and compare against SOTA recommendation models. The selected datasets contain explicit integer valued ratings. Nonetheless, we mask their values and view them as binary. The explicit ratings are used only as a means to evaluate models' consistency in pairwise comparison after model fitting (details below). The first dataset is from the Netflix Prize (Bennett et al., 2007). The original dataset contains ratings of 8,921 movies from 478,533 unique users, with a total number of ratings reaching over 50M. We randomly select 10,000 users as our first dataset. The resulting dataset contains 8,693 movies with over 1M ratings from the 10,000 users. For the second dataset, we choose the Movie-Lens 1M dataset (Harper & Konstan, 2015). It contains over 1M ratings from 6,040 users on 3,706 movies. As a third dataset, we consider the reviews of recipes from Food-Com (Majumder et al., 2019). The complete dataset has 1.1M reviews from 227K users on 231K recipes. We select the top 20K users with the most activity, and filter out recipes receiving fewer than 50 reviews. The resulting dataset has 145K reviews from 17K users (users with zero activity after filtering recipes are removed) on 1.4K recipes.

The three datasets have distinct characteristics. Among the three, the Food-Com review dataset has the fewest user-item interactions, even after the most active users and most popular recipes are selected. The maximum number of items reviewed by a single user is 878. It also has the largest number of users. The Netflix dataset is the most skewed, with the number of items interacted with by a user ranging from as low as 1 to over 8K. The Movie-Lens dataset contains the largest number of user-item interactions and is the most uniformly distributed. Details of the three datasets can be found in Table 2 in the Appendix.

We choose seven SOTA recommendation models to compare with SAD. Their details are listed in Table C in the Appendix. For each of the models considered, we perform a grid search to determine hyperparameters. Models are chosen based on their goodness-of-fit measured by log likelihood. We evaluate the models using a comprehensive leave one interaction out (LOO) evaluation (Bayer et al., 2017; He et al., 2017; 2016), in which we randomly hold out one user-item interaction from the training set for every user; users who have only one interaction are skipped (a small sketch of the split follows this paragraph). We create 20 such LOO sets for each dataset considered. The dimension of the latent space during evaluation is set to 500 for all models and datasets. The choice of the latent dimension could be further optimized using methods such as cross validation; in this work we choose the same number across the comparing SOTA models so that they are evaluated on a common ground.
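
The sketch below shows one possible implementation of the LOO split; the interactions dictionary and function name are hypothetical.

import numpy as np

def leave_one_out(interactions, rng):
    # Hold out one interacted item per user; users with a single interaction are skipped.
    # `interactions` maps each user id to the list of item ids they interacted with.
    train, holdout = {}, {}
    for u, items in interactions.items():
        if len(items) < 2:
            train[u] = list(items)
            continue
        o = rng.choice(items)
        holdout[u] = o
        train[u] = [i for i in items if i != o]
    return train, holdout

rng = np.random.default_rng(0)
train, holdout = leave_one_out({0: [3, 7, 9], 1: [2], 2: [5, 8]}, rng)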

\frac{1}{n\times(|I_{u}|-1)}\sum_{u}\sum_{j\in I_{u},j\neq o}\big{(}\hat{x}_{uoj}\mathbf{1}(o\succ j)+\hat{x}_{ujo}\mathbf{1}(j\succ o)\big{)} \qquad (11)
\frac{1}{n\times(|I_{u}|-1)}\sum_{u}\sum_{j\in I_{u},j\neq o}\big{(}\mathbf{1}(\hat{x}_{uoj}>0)\mathbf{1}(o\succ j)+\mathbf{1}(\hat{x}_{ujo}>0)\mathbf{1}(j\succ o)\big{)} \qquad (12)

We consider two aspects of model performance during evaluation: consistency and recommendation. Consistency measures whether a model's predictions match the user's actual pairwise preferences. During evaluation, the hold out item $o$ and the other items $j$ in the interacted set $I_{u}$ of user $u$ are arranged such that $o\succ_{u}j$ according to the user's actual ratings. The mean of the predicted preferences (Equation 11), the percentage of consistent predictions (Equation 12), and the median of the per user percentage of consistency are reported in Table 1. SAD has the most consistent results among all eight recommendation models except in one scenario, in which our model is second best. A sketch of the consistency computation follows.
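
The following sketch computes the two consistency metrics under our reading of Equations 11 and 12; x_hat, holdout, train, and ratings are assumed data structures, and pairs with tied ratings are skipped, so the normalization is over the scored pairs rather than exactly $n\times(|I_{u}|-1)$.

import numpy as np

def consistency_metrics(x_hat, holdout, train, ratings):
    # x_hat[u] is an m-by-m matrix of predicted preferences for user u;
    # holdout[u] is the held out item o; train[u] lists the remaining interacted items;
    # ratings[u][i] is the user's explicit rating of item i.
    prefs, matches = [], []
    for u, o in holdout.items():
        for j in train[u]:
            if ratings[u][o] > ratings[u][j]:        # o preferred over j
                prefs.append(x_hat[u][o, j])
                matches.append(x_hat[u][o, j] > 0)
            elif ratings[u][j] > ratings[u][o]:      # j preferred over o
                prefs.append(x_hat[u][j, o])
                matches.append(x_hat[u][j, o] > 0)
            # tied ratings carry no ground truth and are skipped
    return np.mean(prefs), np.mean(matches)          # Equation 11, Equation 12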

Table 1: Model evaluations across 20 LOO datasets. When evaluating consistency, item pairs between the hold out item $o$ and other interacted items $j$ for a user are ordered based on the user's actual ratings ($o\succ_{u}j$). The mean of the predicted preferences (Equation 11), the percentage of predictions that match the actual ratings (Equation 12), and the median of the per user percentage of matches are reported. When evaluating recommendation, the percentages of random hold out items that are ranked higher than 20 (out of 100) using two different ranking methods (M1 and M2) are shown. See Table C in the Appendix for details about each of the comparing models.
Dataset Model Consistency Recommendation
 mean $x_{uij}$   match (%)   per user (%)   M1 (%)   M2 (%)
Netflix SAD $\mathbf{0.024\pm 0.012}$ $\mathbf{33.7\pm 0.5}$ $18.7\pm 0.3$ $83.3\pm 0.3$ $83.9\pm 0.3$
 BPR $-0.019\pm 0.012$ $32.6\pm 0.5$ $12.7\pm 0.3$ $81.8\pm 0.4$ $81.7\pm 0.3$
 SVD $-0.824\pm 0.032$ $10.1\pm 0.1$ $0.0\pm 0.0$ $2.0\pm 0.5$ $2.1\pm 0.3$
 MF $-0.018\pm 0.058$ $8.1\pm 0.5$ $0.0\pm 0.0$ $74.8\pm 2.6$ $45.4\pm 8.7$
 PMF $0.003\pm 0.015$ $33.1\pm 0.3$ $\mathbf{26.9\pm 0.3}$ $21.1\pm 0.3$ $20.2\pm 0.3$
 FM $-0.538\pm 0.316$ $12.9\pm 0.2$ $8.4\pm 0.1$ $\mathbf{86.4\pm 0.2}$ $81.6\pm 0.3$
 NCF $-0.025\pm 0.043$ $13.1\pm 4.4$ $5.6\pm 1.2$ $86.0\pm 0.3$ $\mathbf{85.9\pm 2.2}$
 $\beta$-VAE $-0.011\pm 0.017$ $21.1\pm 0.9$ $9.7\pm 2.1$ $35.7\pm 3.1$ $31.0\pm 5.7$
Movie-Lens SAD $\mathbf{0.120\pm 0.014}$ $\mathbf{36.4\pm 0.6}$ $\mathbf{29.9\pm 0.7}$ $82.9\pm 0.4$ $82.1\pm 0.4$
 BPR $0.083\pm 0.012$ $35.7\pm 0.6$ $25.2\pm 0.6$ $77.7\pm 0.5$ $76.9\pm 0.4$
 SVD $-0.324\pm 0.011$ $7.0\pm 0.4$ $0.0\pm 0.0$ $3.5\pm 0.1$ $3.1\pm 0.2$
 MF $0.024\pm 0.103$ $19.0\pm 0.5$ $0.0\pm 0.0$ $47.8\pm 1.0$ $27.0\pm 0.7$
 PMF $0.027\pm 0.016$ $32.7\pm 0.4$ $26.4\pm 0.4$ $27.3\pm 0.8$ $22.8\pm 0.5$
 FM $0.103\pm 0.025$ $21.7\pm 0.3$ $18.1\pm 0.3$ $78.8\pm 0.3$ $76.3\pm 0.4$
 NCF $-0.241\pm 0.346$ $24.0\pm 1.9$ $14.5\pm 2.0$ $\mathbf{90.4\pm 1.5}$ $\mathbf{90.1\pm 2.1}$
 $\beta$-VAE $-0.120\pm 0.301$ $13.2\pm 2.1$ $10.9\pm 2.4$ $71.0\pm 0.3$ $69.9\pm 0.7$
Food-Com SAD $\mathbf{-0.329\pm 0.002}$ $\mathbf{14.9\pm 0.5}$ $0.0\pm 0.0$ $24.7\pm 0.2$ $23.9\pm 0.2$
 BPR $-1.276\pm 0.009$ $5.9\pm 0.5$ $0.0\pm 0.0$ $23.2\pm 0.3$ $21.3\pm 0.3$
 SVD $-5.152\pm 0.039$ $1.1\pm 0.1$ $0.0\pm 0.0$ $0.0\pm 0.0$ $0.0\pm 0.0$
 MF $-1.956\pm 0.161$ $6.4\pm 2.0$ $0.0\pm 0.0$ $27.8\pm 5.5$ $24.6\pm 6.1$
 PMF $-0.434\pm 0.031$ $4.1\pm 3.5$ $0.0\pm 0.0$ $20.2\pm 0.6$ $21.0\pm 1.1$
 FM $-5.324\pm 0.247$ $9.3\pm 1.7$ $0.0\pm 0.0$ $35.9\pm 0.2$ $35.1\pm 0.3$
 NCF $-11.362\pm 0.852$ $8.9\pm 0.7$ $0.0\pm 0.0$ $\mathbf{38.2\pm 4.8}$ $\mathbf{37.1\pm 5.5}$
 $\beta$-VAE $-3.127\pm 0.426$ $10.2\pm 1.1$ $0.0\pm 0.0$ $16.3\pm 3.1$ $16.1\pm 3.7$

To evaluate models' performance in recommendation, we create for each user an item set containing 100 randomly sampled non-interacted items. We combine the hold out item with these 100 items to form a test set, and examine whether models are able to rank the hold out item high in the test set (Hit Ratio (He et al., 2017)). SAD faces some unconventional challenges (hence opportunities) in producing a ranking when violations of transitivity exist. Consider three items $i$, $j$ and $t$. With transitivity, if a user has $i\succ j$ and $j\succ t$, then $i\succ t$ must hold. However, SAD can produce a scenario in which $t\succ i$, forming a preference cycle among the three items, in which case no ranking can be inferred. We propose two methods using pairwise comparisons for SAD in the evaluation, sketched after this paragraph. In the first method (M1), the number of non-interacted items in the test set that are more preferable than the hold out item is used as its rank. In the second method (M2), we use the interacted items kept in the training set and calculate, for each test item, the proportion of interacted items that are less preferable than it; this proportion is used as a score to rank items in the test set. We report the percentage of hold out items that are ranked in the top 20. SAD is among the top three models in ranking the hold out items in the top 20 (Table 1). We argue that since our model's predictions match better with users' actual ratings, it brings additional information by introducing diversity of items into the recommendation, while respecting users' potential preferences. In the Appendix, we illustrate examples in which SAD produces predictions consistent with the true ratings while other SOTA models fail.
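
The sketch below implements the two ranking rules under one plausible reading of the description; x_hat_u is an assumed m-by-m matrix of predicted preferences for a single user, and both function names are hypothetical.

import numpy as np

def rank_m1(x_hat_u, o, test_items):
    # M1: rank of the hold out item o = number of sampled non-interacted items
    # in the test set predicted to be preferred over o.
    return sum(x_hat_u[j, o] > 0 for j in test_items)

def score_m2(x_hat_u, item, train_items):
    # M2: proportion of the user's interacted (training) items predicted to be
    # less preferable than `item`; test items are then ranked by this score.
    return np.mean([x_hat_u[item, j] > 0 for j in train_items])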

7 Discussions

We proposed a new tensor decomposition model for collaborative filtering with implicit feedback. In contrast to traditional models, we introduced a new set of non-negative latent vectors for items. While respecting the anti-symmetry of the parameters, the new vectors generalized the standard dot products for calculating user-item preferences to general inner products, allowing items to interact when evaluating their relative preferences. When such vectors were all set to $1$'s, our model reduced to standard collaborative filtering models. We allowed their values to be different from $1$, enabling the violation of the known transitivity property. The proposed model generated accurate recommendations across the multiple real world datasets examined.

The existence of violations of transitivity has profound implications in real world applications. The items that violate the property form cycles in the directed graph implied by the pairwise preferences of a user. For example, consider three items $i$, $j$ and $t$. Transitivity implies that if user $u$ has $i\succ j$ and $j\succ t$, then $i\succ t$ must hold. A violation of transitivity instead yields $t\succ i$, forming a preference cycle among the three items. When SAD detects such violations, even under a strong prior (the $l_{1}$ regularization) suggesting otherwise, it indicates that users themselves may not have a linear mental model when ranking items. When making recommendations, the items forming a cycle can be selected as a group, and users who are not certain which one in the group is the most relevant can be exposed to the entire group, increasing the possibility of producing a hit.

When evaluating consistency in Section 6, we used users' actual ratings to determine pairwise preferences between items. Many items in the datasets, however, had tied ratings due to the limited set of values those ratings can take. We chose the unequally rated item pairs, for which partial orders exist, as ground truth during evaluation. The partial orders are a proxy for users' true preferences, and using them is the best we can do to evaluate whether our model produces consistent predictions. For equally rated items, there is no ground truth available to allow us to evaluate the performance.

This topic touches a fundamental aspect of our existing rating systems: using an integer valued score with limited range as the sole reflection of a user's preference. Traditional collaborative filtering models conform to this system by assuming there is a linear ranking among items, and that the ranking is dictated by a real number representing the preference strength. However, reality can hold evidence that violates this assumption. For example, Mazurek & Perzina (2017) demonstrated that only a small proportion of real world pairwise comparisons satisfy complete transitivity. Li et al. (2013) showed the ubiquity of a user providing very different rating scores on closely correlated items, producing self-contradictions. Beyond noisiness in user ratings, the observed self-contradictions are a manifestation of the collision between a user's mental model and the rating system. SAD innovates by providing a novel methodology to mathematically parameterize this mental model.

There can be parameterizations other than Equation 8 that respect the anti-symmetry of $X_{u::}$ while still allowing violations of the transitivity property. For example, instead of introducing the new vector $\tau_{i}\in\mathbb{R}^{k}_{+}$ for every item, one can model $x_{uij}$ as

x_{uij}=\sum_{h=1}^{k}\xi_{hu}(\eta_{hi}-\eta_{hj})\,\tau_{hij},

with $T_{h::}=\{\tau_{hij}\}_{1\leq i,j\leq m}\in\mathbb{R}^{m\times m}$. In this parameterization, $T_{h::}$ can be viewed as an item interaction matrix for the $h$-th factor. Additional structure can be imposed on $T_{h::}$ to enable parsimony, such as letting $T_{h::}$ be symmetric (in order to make $X_{u::}$ anti-symmetric) and modeling it as a low rank matrix with $\tau_{hij}=\langle\alpha_{hi},\beta_{hj}\rangle$ and $\alpha_{hi},\beta_{hj}\in\mathbb{R}^{\kappa}$. In the current work we focused on an efficient model with a simple interpretation, and we leave the exploration of alternative parameterizations to future work.

In this paper, we restricted our scope to collaborative filtering with implicit feedback. SAD can be applied to datasets beyond this scope. For instance, datasets with explicit ratings contain partial orders that can be leveraged directly during model fitting, instead of being used to evaluate model consistency in a post-hoc manner as in the current work. Such datasets include the ones considered in Section 6, and numerous others such as the Yandex challenge and KDD Cup 2012. Other datasets that contain actual pairwise comparisons, such as the one created by Pavlichenko & Ustalov (2021), are a natural fit for SAD as well. We expect the power of SAD can be further enhanced by integrating neural network components.

References

  • Bayer et al. (2017) Immanuel Bayer, Xiangnan He, Bhargav Kanagal, and Steffen Rendle. A generic coordinate descent framework for learning from implicit feedback. In Proceedings of the 26th International Conference on World Wide Web, pp.  1341–1350, 2017.
  • Bennett et al. (2007) James Bennett, Stan Lanning, et al. The Netflix prize. In Proceedings of KDD Cup and Workshop, volume 2007, pp.  35. New York, NY, USA., 2007.
  • Chen et al. (2017) Jingyuan Chen, Hanwang Zhang, Xiangnan He, Liqiang Nie, Wei Liu, and Tat-Seng Chua. Attentive collaborative filtering: Multimedia recommendation with item-and component-level attention. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp.  335–344, 2017.
  • Frolov & Oseledets (2017) Evgeny Frolov and Ivan Oseledets. Tensor methods and recommender systems. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 7(3):e1201, 2017.
  • Graham et al. (2019) Scott Graham, Jun-Ki Min, and Tao Wu. Microsoft recommenders: Tools to accelerate developing recommender systems. In Proceedings of the 13th ACM Conference on Recommender Systems, pp.  542–543, 2019.
  • Harper & Konstan (2015) F Maxwell Harper and Joseph A Konstan. The Movielens datasets: History and context. ACM Transactions on Interactive Intelligent Systems (TIIS), 5(4):1–19, 2015.
  • He et al. (2016) Xiangnan He, Hanwang Zhang, Min-Yen Kan, and Tat-Seng Chua. Fast matrix factorization for online recommendation with implicit feedback. In Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval, pp.  549–558, 2016.
  • He et al. (2017) Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. Neural collaborative filtering. In Proceedings of the 26th International Conference on World Wide Web, pp.  173–182, 2017.
  • Hidasi & Tikk (2012) Balázs Hidasi and Domonkos Tikk. Fast ALS-based tensor factorization for context-aware recommendation from implicit feedback. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp.  67–82. Springer, 2012.
  • Hu et al. (2008) Yifan Hu, Yehuda Koren, and Chris Volinsky. Collaborative filtering for implicit feedback datasets. In 2008 Eighth IEEE International Conference on Data Mining, pp.  263–272. IEEE, 2008.
  • Hug (2020) Nicolas Hug. Surprise: A Python library for recommender systems. Journal of Open Source Software, 5(52):2174, 2020. doi: 10.21105/joss.02174. URL https://doi.org/10.21105/joss.02174.
  • Hunter (2004) David R Hunter. MM algorithms for generalized Bradley-Terry models. The Annals of Statistics, 32(1):384–406, 2004.
  • Hunter & Lange (2004) David R Hunter and Kenneth Lange. A tutorial on MM algorithms. The American Statistician, 58(1):30–37, 2004.
  • Kiers (2000) Henk AL Kiers. Towards a standardized notation and terminology in multiway analysis. Journal of Chemometrics: A Journal of the Chemometrics Society, 14(3):105–122, 2000.
  • Li et al. (2013) Bin Li, Ling Chen, Xingquan Zhu, and Chengqi Zhang. Noisy but non-malicious user detection in social recommender systems. World Wide Web, 16(5):677–699, November 2013. ISSN 1573-1413. doi: 10.1007/s11280-012-0161-9. URL https://doi.org/10.1007/s11280-012-0161-9.
  • Liang et al. (2018) Dawen Liang, Rahul G Krishnan, Matthew D Hoffman, and Tony Jebara. Variational autoencoders for collaborative filtering. In Proceedings of the 2018 World Wide Web Conference, pp.  689–698, 2018.
  • Majumder et al. (2019) Bodhisattwa Prasad Majumder, Shuyang Li, Jianmo Ni, and Julian McAuley. Generating personalized recipes from historical user preferences. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp.  5976–5982, 2019.
  • Mazurek & Perzina (2017) Jiří Mazurek and Radomír Perzina. On the inconsistency of pairwise comparisons: An experimental study. Scientific Papers of the University of Pardubice. Series D, Faculty of Economics and Administration, 2017.
  • Mnih & Salakhutdinov (2008) Andriy Mnih and Russ R Salakhutdinov. Probabilistic matrix factorization. In Advances in Neural Information Processing Systems, pp.  1257–1264, 2008.
  • Pan et al. (2008) Rong Pan, Yunhong Zhou, Bin Cao, Nathan N Liu, Rajan Lukose, Martin Scholz, and Qiang Yang. One-class collaborative filtering. In 2008 Eighth IEEE International Conference on Data Mining, pp.  502–511. IEEE, 2008.
  • Pavlichenko & Ustalov (2021) Nikita Pavlichenko and Dmitry Ustalov. IMDB-WIKI-SbS: An evaluation dataset for crowdsourced pairwise comparisons. arXiv preprint arXiv:2110.14990, 2021.
  • Rendle (2010) Steffen Rendle. Factorization machines. In 2010 IEEE International Conference on Data Mining, pp.  995–1000, 2010. doi: 10.1109/ICDM.2010.127.
  • Rendle & Schmidt-Thieme (2010) Steffen Rendle and Lars Schmidt-Thieme. Pairwise interaction tensor factorization for personalized tag recommendation. In Proceedings of the Third ACM International Conference on Web Search and Data Mining, pp.  81–90, 2010.
  • Rendle et al. (2009) Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. BPR: Bayesian personalized ranking from implicit feedback. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pp.  452–461, 2009.
  • Rendle et al. (2020) Steffen Rendle, Walid Krichene, Li Zhang, and John Anderson. Neural collaborative filtering vs. matrix factorization revisited. In Fourteenth ACM Conference on Recommender Systems, pp.  240–248, 2020.
  • Saaty & Vargas (2013) Thomas L Saaty and Luis G Vargas. The analytic network process. In Decision Making with the Analytic Network Process, pp.  1–40. Springer, 2013.
  • Salah et al. (2020) Aghiles Salah, Quoc-Tuan Truong, and Hady W Lauw. Cornac: A comparative framework for multimodal recommender systems. the Journal of Machine Learning Research, 21:1–5, 2020.
  • Tucker (1966) Ledyard R Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 31(3):279–311, 1966.
  • Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pp.  5998–6008, 2017.
  • Wang et al. (2021) Haomin Wang, Gang Kou, and Yi Peng. An iterative algorithm to derive priority from large-scale sparse pairwise comparison matrix. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2021.
  • Weng & Lin (2011) Ruby C Weng and Chih-Jen Lin. A Bayesian approximation method for online ranking. Journal of Machine Learning Research, 12(1), 2011.
  • Wermser et al. (2011) Hendrik Wermser, Achim Rettinger, and Volker Tresp. Modeling and learning context-aware recommendation scenarios using tensor decomposition. In 2011 International Conference on Advances in Social Networks Analysis and Mining, pp.  137–144. IEEE, 2011.
  • Xiao et al. (2017) Jun Xiao, Hao Ye, Xiangnan He, Hanwang Zhang, Fei Wu, and Tat-Seng Chua. Attentional factorization machines: Learning the weight of feature interactions via attention networks. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, IJCAI’17, pp.  3119–3125, Melbourne, Australia, 2017. AAAI Press. ISBN 978-0-9992411-0-3.
  • Zermelo (1929) Ernst Zermelo. Die berechnung der turnier-ergebnisse als ein maximumproblem der wahrscheinlichkeitsrechnung. Mathematische Zeitschrift, 29(1):436–460, 1929.
  • Zhang et al. (2019) Shuai Zhang, Lina Yao, Aixin Sun, and Yi Tay. Deep learning based recommender system: A survey and new perspectives. ACM Computing Surveys, 52(1):1–38, 2019.

Appendix A Properties of Real World Datasets

Table 2: Properties of the three real world datasets used in Section 6
Dataset #Users #Items #Ratings Sparsity Quantiles of #Ratings/User $(\text{min}/5\%/50\%/95\%/\text{max})$
Netflix 10,000 8,693 1,044,318 98.80% (1/6/46/400/8,237)
Movie-Lens 6,040 3,706 1,000,209 95.53% (20/23/96/556/2,314)
Food-Com 17,482 1,358 145,431 99.39% (1/1/4/30/878)

Appendix B Gibbs Sampler for Posterior Inference of SAD

We derive an efficient Gibbs sampling algorithm as a complement to the SGD algorithm in the main paper. The Gibbs sampling algorithm has the advantage of producing accurate uncertainty estimates of the unknowns under Bayesian inference by drawing parameter samples from the posterior distribution. The algorithm is an application of Bayesian Probit regression to the current setting. Specifically, we replace the original logistic parameterization in Equation 1 with the following equation:

p(d_{uij}=1|\Theta) := \Phi(x_{uij}), (13)

where \Phi(\cdot) is the CDF of the standard Gaussian distribution, so that \Phi(x_{uij}) = P(z_{uij} > 0) for z_{uij} \sim N(x_{uij}, 1). By augmenting the model with a hidden tensor Z = \{z_{uij}\}, where z_{uij} = x_{uij} + \epsilon_{uij} and \epsilon_{uij} \overset{\text{i.i.d.}}{\sim} N(0,1), the probit model is equivalent to

d_{uij} = \begin{cases}1 & z_{uij} > 0\\ -1 & z_{uij} \leq 0\end{cases}.

With this augmented model, an efficient Gibbs sampler can be derived. As a simple illustration, we assign spherical Gaussian priors independently to the rows of \Xi, H and T. Combined with the likelihood defined in Equation 13, the conditional posterior distributions take the following forms.

Posterior of z_{uij}

z_{uij}|\Xi,H,T \sim \begin{cases}N_{+}(x_{uij},1) & \text{if } d_{uij}=1\\ N_{-}(x_{uij},1) & \text{if } d_{uij}=-1\end{cases},

where N_{+}(\mu,\sigma) and N_{-}(\mu,\sigma) denote Gaussian distributions truncated to the positive and negative half-lines, respectively.
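
For illustration, the truncated draws can be implemented with scipy.stats.truncnorm. The scalar helper below is a sketch (the function name and interface are ours, not the released library's), with x_uij assumed to be precomputed from the current values of \Xi, H and T.

```python
import numpy as np
from scipy.stats import truncnorm

def sample_z(x_uij: float, d_uij: int, rng: np.random.Generator) -> float:
    """Draw z_uij from N_+(x_uij, 1) when d_uij = 1 and from
    N_-(x_uij, 1) when d_uij = -1 (unit-variance truncated Gaussians)."""
    if d_uij == 1:
        # Truncate to (0, +inf); truncnorm bounds are given in standardized units.
        a, b = -x_uij, np.inf
    else:
        # Truncate to (-inf, 0].
        a, b = -np.inf, -x_uij
    return truncnorm.rvs(a, b, loc=x_uij, scale=1.0, random_state=rng)
```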

Posterior of \xi_{u} with u=1,\cdots,n

\xi_{u}|Z,\Xi\setminus\xi_{u},H,T \sim N_{k}(\Sigma_{u}^{\xi}(\Psi_{u}^{\xi})^{\top}\bar{z}_{u}^{\xi},\Sigma_{u}^{\xi}),

where (\Sigma_{u}^{\xi})^{-1}=(\Psi_{u}^{\xi})^{\top}\Psi_{u}^{\xi}+I,

\Psi_{u}^{\xi}=\begin{pmatrix}\eta_{11}\tau_{12}-\eta_{12}\tau_{11}&\eta_{21}\tau_{22}-\eta_{22}\tau_{21}&\cdots&\eta_{k1}\tau_{k2}-\eta_{k2}\tau_{k1}\\ \eta_{11}\tau_{13}-\eta_{13}\tau_{11}&\eta_{21}\tau_{23}-\eta_{23}\tau_{21}&\cdots&\eta_{k1}\tau_{k3}-\eta_{k3}\tau_{k1}\\ \vdots&\vdots&\ddots&\vdots\\ \eta_{12}\tau_{13}-\eta_{13}\tau_{12}&\eta_{22}\tau_{23}-\eta_{23}\tau_{22}&\cdots&\eta_{k2}\tau_{k3}-\eta_{k3}\tau_{k2}\\ \vdots&\vdots&\ddots&\vdots\\ \eta_{1,m-1}\tau_{1m}-\eta_{1m}\tau_{1,m-1}&\eta_{2,m-1}\tau_{2m}-\eta_{2m}\tau_{2,m-1}&\cdots&\eta_{k,m-1}\tau_{km}-\eta_{km}\tau_{k,m-1}\end{pmatrix}\in\mathbb{R}^{m(m-1)/2\times k},

and \bar{z}_{u}^{\xi}=[z_{u12},z_{u13},\cdots,z_{u23},z_{u24},\cdots,z_{u,m-1,m}]^{\top}\in\mathbb{R}^{m(m-1)/2}.
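
To make the construction of \Psi_{u}^{\xi} and \bar{z}_{u}^{\xi} concrete, the sketch below draws \xi_{u} from the conditional above. It is not the released implementation; the array names and shapes (Eta and Tau as k x m matrices whose columns hold \eta_i and \tau_j, Z_u as the dense anti-symmetric slice for user u) are our assumptions.

```python
import numpy as np

def sample_xi_u(Z_u: np.ndarray, Eta: np.ndarray, Tau: np.ndarray,
                rng: np.random.Generator) -> np.ndarray:
    """Draw xi_u ~ N_k(Sigma Psi^T z_bar, Sigma) with Sigma^{-1} = Psi^T Psi + I."""
    k, m = Eta.shape
    rows, targets = [], []
    for i in range(m - 1):
        for j in range(i + 1, m):
            # Row of Psi_u^xi for the item pair (i, j).
            rows.append(Eta[:, i] * Tau[:, j] - Eta[:, j] * Tau[:, i])
            targets.append(Z_u[i, j])
    Psi = np.asarray(rows)                  # shape (m(m-1)/2, k)
    z_bar = np.asarray(targets)             # shape (m(m-1)/2,)
    Sigma = np.linalg.inv(Psi.T @ Psi + np.eye(k))
    mean = Sigma @ Psi.T @ z_bar
    # Draw from the multivariate Gaussian via a Cholesky factor of Sigma.
    L = np.linalg.cholesky(Sigma)
    return mean + L @ rng.standard_normal(k)
```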

Posterior of \eta_{i} with i=1,\cdots,m

\eta_{i}|Z,\Xi,H\setminus\eta_{i},T \sim N_{k}(\Sigma_{i}^{\eta}(\Psi_{i}^{\eta})^{\top}\bar{z}_{i}^{\eta},\Sigma_{i}^{\eta}),

where (\Sigma_{i}^{\eta})^{-1}=(\Psi_{i}^{\eta})^{\top}\Psi_{i}^{\eta}+I,

\Psi_{i}^{\eta}=\begin{pmatrix}\xi_{11}\tau_{11}&\xi_{21}\tau_{21}&\cdots&\xi_{k1}\tau_{k1}\\ \xi_{11}\tau_{12}&\xi_{21}\tau_{22}&\cdots&\xi_{k1}\tau_{k2}\\ \vdots&\vdots&\ddots&\vdots\\ \xi_{11}\tau_{1,i-1}&\xi_{21}\tau_{2,i-1}&\cdots&\xi_{k1}\tau_{k,i-1}\\ \xi_{11}\tau_{1,i+1}&\xi_{21}\tau_{2,i+1}&\cdots&\xi_{k1}\tau_{k,i+1}\\ \vdots&\vdots&\ddots&\vdots\\ \xi_{11}\tau_{1m}&\xi_{21}\tau_{2m}&\cdots&\xi_{k1}\tau_{km}\\ \xi_{12}\tau_{11}&\xi_{22}\tau_{21}&\cdots&\xi_{k2}\tau_{k1}\\ \vdots&\vdots&\ddots&\vdots\\ \xi_{12}\tau_{1,i-1}&\xi_{22}\tau_{2,i-1}&\cdots&\xi_{k2}\tau_{k,i-1}\\ \xi_{12}\tau_{1,i+1}&\xi_{22}\tau_{2,i+1}&\cdots&\xi_{k2}\tau_{k,i+1}\\ \vdots&\vdots&\ddots&\vdots\\ \xi_{12}\tau_{1m}&\xi_{22}\tau_{2m}&\cdots&\xi_{k2}\tau_{km}\\ \vdots&\vdots&\ddots&\vdots\\ \xi_{1n}\tau_{11}&\xi_{2n}\tau_{21}&\cdots&\xi_{kn}\tau_{k1}\\ \vdots&\vdots&\ddots&\vdots\\ \xi_{1n}\tau_{1,i-1}&\xi_{2n}\tau_{2,i-1}&\cdots&\xi_{kn}\tau_{k,i-1}\\ \xi_{1n}\tau_{1,i+1}&\xi_{2n}\tau_{2,i+1}&\cdots&\xi_{kn}\tau_{k,i+1}\\ \vdots&\vdots&\ddots&\vdots\\ \xi_{1n}\tau_{1m}&\xi_{2n}\tau_{2m}&\cdots&\xi_{kn}\tau_{km}\end{pmatrix}\in\mathbb{R}^{n(m-1)\times k},
\bar{z}_{i}^{\eta}=\begin{pmatrix}z_{1i1}+\sum_{h=1}^{k}\xi_{h1}\tau_{hi}\eta_{h1}\\ z_{1i2}+\sum_{h=1}^{k}\xi_{h1}\tau_{hi}\eta_{h2}\\ \vdots\\ z_{1,i,i-1}+\sum_{h=1}^{k}\xi_{h1}\tau_{hi}\eta_{h,i-1}\\ z_{1,i,i+1}+\sum_{h=1}^{k}\xi_{h1}\tau_{hi}\eta_{h,i+1}\\ \vdots\\ z_{1im}+\sum_{h=1}^{k}\xi_{h1}\tau_{hi}\eta_{hm}\\ z_{2i1}+\sum_{h=1}^{k}\xi_{h2}\tau_{hi}\eta_{h1}\\ \vdots\\ z_{2,i,i-1}+\sum_{h=1}^{k}\xi_{h2}\tau_{hi}\eta_{h,i-1}\\ z_{2,i,i+1}+\sum_{h=1}^{k}\xi_{h2}\tau_{hi}\eta_{h,i+1}\\ \vdots\\ z_{2im}+\sum_{h=1}^{k}\xi_{h2}\tau_{hi}\eta_{hm}\\ \vdots\\ z_{ni1}+\sum_{h=1}^{k}\xi_{hn}\tau_{hi}\eta_{h1}\\ \vdots\\ z_{n,i,i-1}+\sum_{h=1}^{k}\xi_{hn}\tau_{hi}\eta_{h,i-1}\\ z_{n,i,i+1}+\sum_{h=1}^{k}\xi_{hn}\tau_{hi}\eta_{h,i+1}\\ \vdots\\ z_{nim}+\sum_{h=1}^{k}\xi_{hn}\tau_{hi}\eta_{hm}\end{pmatrix}\in\mathbb{R}^{n(m-1)}.

In the above notation, we take advantage of the anti-symmetric property of Z_{u::} and set z_{uij}=-z_{uji} when i>j.
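
The \eta_{i} update has the same Gaussian form. A sketch under the same assumed array layout follows, with Z stored as a dense anti-symmetric tensor of shape (n, m, m) and \Xi stored as a k x n matrix whose columns are the user vectors; again, names and layout are illustrative.

```python
import numpy as np

def sample_eta_i(i: int, Z: np.ndarray, Xi: np.ndarray, Eta: np.ndarray,
                 Tau: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Draw eta_i from its Gaussian conditional given Z, Xi, the other rows of H, and T."""
    k, n = Xi.shape
    m = Eta.shape[1]
    rows, targets = [], []
    for u in range(n):
        for j in range(m):
            if j == i:
                continue
            # Row of Psi_i^eta for the (user u, item j) pair.
            rows.append(Xi[:, u] * Tau[:, j])
            # Move the term not involving eta_i to the response side.
            targets.append(Z[u, i, j] + np.sum(Xi[:, u] * Tau[:, i] * Eta[:, j]))
    Psi = np.asarray(rows)                  # shape (n(m-1), k)
    z_bar = np.asarray(targets)
    Sigma = np.linalg.inv(Psi.T @ Psi + np.eye(k))
    mean = Sigma @ Psi.T @ z_bar
    L = np.linalg.cholesky(Sigma)
    return mean + L @ rng.standard_normal(k)
```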

Posterior of \tau_{j} with j=1,\cdots,m

\tau_{j}|Z,\Xi,H,T\setminus\tau_{j} \sim N^{+}_{k}(\Sigma_{j}^{\tau}(\Psi_{j}^{\tau})^{\top}\bar{z}_{j}^{\tau},\Sigma_{j}^{\tau}),

where N^{+}_{k} denotes a k-dimensional Gaussian truncated to the positive orthant, (\Sigma_{j}^{\tau})^{-1}=(\Psi_{j}^{\tau})^{\top}\Psi_{j}^{\tau}+I,

\Psi_{j}^{\tau}=\begin{pmatrix}\xi_{11}\eta_{11}&\xi_{21}\eta_{21}&\cdots&\xi_{k1}\eta_{k1}\\ \xi_{11}\eta_{12}&\xi_{21}\eta_{22}&\cdots&\xi_{k1}\eta_{k2}\\ \vdots&\vdots&\ddots&\vdots\\ \xi_{11}\eta_{1,j-1}&\xi_{21}\eta_{2,j-1}&\cdots&\xi_{k1}\eta_{k,j-1}\\ \xi_{11}\eta_{1,j+1}&\xi_{21}\eta_{2,j+1}&\cdots&\xi_{k1}\eta_{k,j+1}\\ \vdots&\vdots&\ddots&\vdots\\ \xi_{11}\eta_{1m}&\xi_{21}\eta_{2m}&\cdots&\xi_{k1}\eta_{km}\\ \xi_{12}\eta_{11}&\xi_{22}\eta_{21}&\cdots&\xi_{k2}\eta_{k1}\\ \vdots&\vdots&\ddots&\vdots\\ \xi_{12}\eta_{1,j-1}&\xi_{22}\eta_{2,j-1}&\cdots&\xi_{k2}\eta_{k,j-1}\\ \xi_{12}\eta_{1,j+1}&\xi_{22}\eta_{2,j+1}&\cdots&\xi_{k2}\eta_{k,j+1}\\ \vdots&\vdots&\ddots&\vdots\\ \xi_{12}\eta_{1m}&\xi_{22}\eta_{2m}&\cdots&\xi_{k2}\eta_{km}\\ \vdots&\vdots&\ddots&\vdots\\ \xi_{1n}\eta_{11}&\xi_{2n}\eta_{21}&\cdots&\xi_{kn}\eta_{k1}\\ \vdots&\vdots&\ddots&\vdots\\ \xi_{1n}\eta_{1,j-1}&\xi_{2n}\eta_{2,j-1}&\cdots&\xi_{kn}\eta_{k,j-1}\\ \xi_{1n}\eta_{1,j+1}&\xi_{2n}\eta_{2,j+1}&\cdots&\xi_{kn}\eta_{k,j+1}\\ \vdots&\vdots&\ddots&\vdots\\ \xi_{1n}\eta_{1m}&\xi_{2n}\eta_{2m}&\cdots&\xi_{kn}\eta_{km}\end{pmatrix}\in\mathbb{R}^{n(m-1)\times k},
\bar{z}_{j}^{\tau}=\begin{pmatrix}-z_{1j1}+\sum_{h=1}^{k}\xi_{h1}\tau_{h1}\eta_{hj}\\ -z_{1j2}+\sum_{h=1}^{k}\xi_{h1}\tau_{h2}\eta_{hj}\\ \vdots\\ -z_{1j,j-1}+\sum_{h=1}^{k}\xi_{h1}\tau_{h,j-1}\eta_{hj}\\ -z_{1j,j+1}+\sum_{h=1}^{k}\xi_{h1}\tau_{h,j+1}\eta_{hj}\\ \vdots\\ -z_{1jm}+\sum_{h=1}^{k}\xi_{h1}\tau_{hm}\eta_{hj}\\ -z_{2j1}+\sum_{h=1}^{k}\xi_{h2}\tau_{h1}\eta_{hj}\\ \vdots\\ -z_{2j,j-1}+\sum_{h=1}^{k}\xi_{h2}\tau_{h,j-1}\eta_{hj}\\ -z_{2j,j+1}+\sum_{h=1}^{k}\xi_{h2}\tau_{h,j+1}\eta_{hj}\\ \vdots\\ -z_{2jm}+\sum_{h=1}^{k}\xi_{h2}\tau_{hm}\eta_{hj}\\ \vdots\\ -z_{n,j,j-1}+\sum_{h=1}^{k}\xi_{hn}\tau_{h,j-1}\eta_{hj}\\ -z_{n,j,j+1}+\sum_{h=1}^{k}\xi_{hn}\tau_{h,j+1}\eta_{hj}\\ \vdots\\ -z_{njm}+\sum_{h=1}^{k}\xi_{hn}\tau_{hm}\eta_{hj}\end{pmatrix}\in\mathbb{R}^{n(m-1)}.

Similar to \bar{z}_{i}^{\eta}, we take advantage of the anti-symmetric property of Z_{u::} and set z_{uij}=-z_{uji} when i>j.
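
The \tau_{j} update differs only in the positive-orthant truncation of N^{+}_{k}. One simple way to realize that draw, which is our choice and not prescribed by the derivation above, is a short coordinate-wise Gibbs scan inside the update. The sketch below uses the same assumed array layout as the previous snippets.

```python
import numpy as np
from scipy.stats import truncnorm

def sample_tau_j(j: int, Z: np.ndarray, Xi: np.ndarray, Eta: np.ndarray,
                 Tau: np.ndarray, rng: np.random.Generator, n_inner: int = 20) -> np.ndarray:
    """Draw tau_j from its positive-orthant-truncated Gaussian conditional,
    approximated by a short coordinate-wise Gibbs scan."""
    k, n = Xi.shape
    m = Eta.shape[1]
    rows, targets = [], []
    for u in range(n):
        for i in range(m):
            if i == j:
                continue
            rows.append(Xi[:, u] * Eta[:, i])
            targets.append(-Z[u, j, i] + np.sum(Xi[:, u] * Tau[:, i] * Eta[:, j]))
    Psi = np.asarray(rows)
    z_bar = np.asarray(targets)
    Lam = Psi.T @ Psi + np.eye(k)   # precision of the untruncated conditional
    b = Psi.T @ z_bar               # Lam @ conditional mean
    tau = Tau[:, j].copy()          # start the scan from the current value
    for _ in range(n_inner):
        for h in range(k):
            cond_var = 1.0 / Lam[h, h]
            cond_mean = (b[h] - Lam[h, :] @ tau + Lam[h, h] * tau[h]) * cond_var
            # Univariate Gaussian truncated to (0, +inf), bounds in standardized units.
            a = (0.0 - cond_mean) / np.sqrt(cond_var)
            tau[h] = truncnorm.rvs(a, np.inf, loc=cond_mean,
                                   scale=np.sqrt(cond_var), random_state=rng)
    return tau
```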

Appendix C Methods Considered in the Real World Application

Table 3: Implementations and hyperparameter grids for the models applied to the real world datasets.
SAD
  Implementation:  Supplementary Code
  Learning Rate:   [0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1]
  # Epochs:        [2, 5, 10, 20, 50]
  l2 Reg:          [0.05, 0.01, 0.005, 0.001]
  l1 Reg:          0.01
BPR (Rendle et al., 2009)
  Implementation:  Supplementary Code
  Learning Rate:   [0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1]
  # Epochs:        [2, 5, 10, 20, 50]
  l2 Reg:          [0.05, 0.01, 0.005, 0.001]
SVD
  Implementation:  Surprise (package) (Hug, 2020)
  Learning Rate:   [0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1]
  # Epochs:        [2, 5, 10, 20, 50]
  Regularization:  [0.05, 0.01, 0.005, 0.001]
Matrix Factorization (MF)
  Implementation:  Cornac (package) (Salah et al., 2020)
  Learning Rate:   [0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1]
  # Epochs:        [2, 5, 10, 20, 50]
  λ Reg:           [0.05, 0.01, 0.005, 0.001]
Probabilistic Matrix Factorization (PMF) (Mnih & Salakhutdinov, 2008)
  Implementation:  Cornac (package) (Salah et al., 2020)
  Learning Rate:   [0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1]
  # Epochs:        [2, 5, 10, 20, 50]
  λ Reg:           [0.05, 0.01, 0.005, 0.001]
Factorization Machine (FM) (Rendle, 2010)
  Implementation:  RankFM (package)
  Learning Rate:   [0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1]
  # Epochs:        [2, 5, 10, 20, 50]
  l2 Reg:          [0.05, 0.01, 0.005, 0.001]
Neural Collaborative Filtering (NCF) (He et al., 2017)
  Implementation:  MSFT recommenders (package) (Graham et al., 2019)
  Learning Rate:   [0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1]
  # Epochs:        [2, 5, 10, 20, 50]
  Batch size:      [128, 256, 512, 1024]
  Network:         Three-layer MLP with sizes [128, 64, 32]
Variational AutoEncoder (β-VAE) (Liang et al., 2018)
  Implementation:  MSFT recommenders (package) (Graham et al., 2019)
  β parameter:     [0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1]
  # Epochs:        [2, 5, 10, 20, 50]
  Batch size:      [128, 256, 512, 1024]
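
Assuming the value lists above are swept exhaustively, the following sketch illustrates one way to enumerate the configurations for a single model; train_and_eval is a hypothetical callback that fits a model with the given configuration and returns a validation score.

```python
from itertools import product

# Hyperparameter grid for SAD, with values copied from Table 3.
sad_grid = {
    "learning_rate": [0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1],
    "n_epochs": [2, 5, 10, 20, 50],
    "l2_reg": [0.05, 0.01, 0.005, 0.001],
    "l1_reg": [0.01],
}

def grid_search(train_and_eval, grid):
    """Evaluate every configuration in the grid and return the best one."""
    best_score, best_cfg = float("-inf"), None
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = train_and_eval(**cfg)  # hypothetical training/evaluation routine
        if score > best_score:
            best_score, best_cfg = score, cfg
    return best_cfg, best_score
```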

Appendix D Real World Examples

During leave-one-out (LOO) evaluation, true preferences between a held-out item and the remaining items a user interacted with in the training set are determined from the user's actual ratings. Table 4 shows cases in which our model's predictions agree with these true preferences while an alternative model's do not. We select 10 such cases from each of the three real world datasets and report the model predictions together with item descriptions.
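
The consistency check used to select the rows of Table 4 can be summarized by a small helper like the following. This is a sketch; the handling of equal ratings is our assumption, and since p_uij is a monotone function of x_uij the two thresholds encode the same decision.

```python
def is_consistent(x_uij: float, p_uij: float, rating_i: float, rating_j: float) -> bool:
    """Check whether a model's prediction for the pair (i, j) agrees with the
    preference implied by the user's actual ratings (higher rating preferred)."""
    if rating_i == rating_j:
        return True  # equal ratings imply no preference (illustrative choice)
    prefers_i = rating_i > rating_j
    predicts_i = x_uij > 0 and p_uij > 0.5
    return prefers_i == predicts_i
```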

Table 4: Examples where SAD produces a consistent prediction (x_{uij} > 0 and p_{uij} > 0.5) while BPR fails (x_{uij} < 0 and p_{uij} < 0.5). The last two columns report x_{uij} ∣ p_{uij} for SAD and BPR, respectively.
Dataset | u-th user | i-th item (rating) | j-th item (rating) | SAD | BPR
Netflix | ’1381599’ | Last of the Dogmen (5) | Look at Me (2) | 0.35 ∣ 0.59 | -0.22 ∣ 0.44
Netflix | ’1243460’ | Joe Kidd (5) | Scenes of the Crime (1) | 0.80 ∣ 0.69 | -0.27 ∣ 0.43
Netflix | ’581011’ | The Professional (5) | The Bourne Identity (2) | 0.80 ∣ 0.69 | -0.32 ∣ 0.42
Netflix | ’632823’ | Lara Croft: Tomb Raider: The Cradle of Life (4) | The Cookout (2) | 0.27 ∣ 0.57 | -1.36 ∣ 0.20
Netflix | ’429299’ | The Worst Witch (4) | A Chorus Line (1) | 0.85 ∣ 0.70 | -0.76 ∣ 0.32
Netflix | ’127356’ | Trading Spaces: Great Kitchen Designs and More! (4) | White Oleander (2) | 0.30 ∣ 0.57 | -1.24 ∣ 0.22
Netflix | ’2264661’ | Free Tibet (5) | Native American Medicine (3) | 1.25 ∣ 0.78 | -0.05 ∣ 0.49
Netflix | ’581011’ | The Dream Catcher (4) | The Bourne Identity (2) | 0.90 ∣ 0.71 | -0.37 ∣ 0.41
Netflix | ’1368371’ | Zombie Holocaust (4) | Gothika (1) | 0.71 ∣ 0.67 | -0.31 ∣ 0.42
Netflix | ’1243460’ | Moog (3) | Scenes of the Crime (1) | 0.98 ∣ 0.73 | -1.34 ∣ 0.21
Movie-Lens | ’1318’ | High Fidelity (5) | Jimmy Hollywood (3) | 0.85 ∣ 0.70 | -1.26 ∣ 0.22
Movie-Lens | ’1250’ | American Beauty (5) | eXistenZ (3) | 1.44 ∣ 0.81 | -0.14 ∣ 0.47
Movie-Lens | ’4166’ | My Fair Lady (5) | Problem Child 2 (1) | 0.51 ∣ 0.63 | -0.78 ∣ 0.31
Movie-Lens | ’153’ | American Beauty (4) | Spice World (1) | 1.07 ∣ 0.75 | -0.08 ∣ 0.48
Movie-Lens | ’2160’ | Harold and Maude (4) | The Brady Bunch Movie (1) | 0.34 ∣ 0.58 | -0.73 ∣ 0.32
Movie-Lens | ’4692’ | Blade Runner (5) | The Newton Boys (3) | 0.65 ∣ 0.66 | -0.38 ∣ 0.41
Movie-Lens | ’2385’ | Braveheart (4) | Voyage to the Bottom of the Sea (3) | 0.98 ∣ 0.73 | -0.56 ∣ 0.36
Movie-Lens | ’4756’ | Galaxy Quest (3) | Felicia’s Journey (2) | 1.01 ∣ 0.73 | -0.21 ∣ 0.45
Movie-Lens | ’4439’ | The Maltese Falcon (4) | Entrapment (3) | 0.45 ∣ 0.61 | -0.57 ∣ 0.36
Movie-Lens | ’3021’ | L.A. Confidential (5) | Fatal Attraction (3) | 0.64 ∣ 0.66 | -0.37 ∣ 0.41
Food-Com | ’148323’ | Best Ever Banana Cake w/ Cream Cheese Frosting (5) | Crock Pot Garlic Brown Sugar Chicken (0) | 0.16 ∣ 0.54 | -3.51 ∣ 0.03
Food-Com | ’424008’ | Glazed Cinnamon Rolls, Bread Machine (5) | Japanese Mum’s Chicken (0) | 0.31 ∣ 0.57 | -0.38 ∣ 0.41
Food-Com | ’428423’ | Crock Pot Stifado (5) | Best Ever and Most Versatile Muffins (3) | 0.30 ∣ 0.57 | -2.99 ∣ 0.05
Food-Com | ’733257’ | Banana Banana Bread (2) | Low Fat Oatmeal Muffins (0) | 0.32 ∣ 0.58 | -2.31 ∣ 0.09
Food-Com | ’340980’ | Funky Chicken w/ Sesame Noodles (3) | Amanda’s Thai Peanut (1) | 0.56 ∣ 0.64 | -0.77 ∣ 0.32
Food-Com | ’573772’ | Delicious Chicken Pot Pie (5) | Amish Oven Fried Chicken (1) | 1.28 ∣ 0.78 | -1.21 ∣ 0.23
Food-Com | ’1477540’ | Cinnabon Cinnamon Rolls by Todd Wilbur (4) | Amanda’s Cheese Pound Cake (0) | 0.81 ∣ 0.69 | -2.59 ∣ 0.07
Food-Com | ’268644’ | Baked Tilapia w/ Lots of Spice (5) | Southern Fried Salmon Patties (2) | 0.38 ∣ 0.59 | -3.35 ∣ 0.03
Food-Com | ’762742’ | Easy Peezy Pizza Dough & Bread Machine Pizza Doug (5) | Fresh Orange Muffins (1) | 0.98 ∣ 0.73 | -2.31 ∣ 0.09
Food-Com | ’212558’ | Steak or Chicken Fajitas (5) | Thai Style Ground Beef (3) | 1.86 ∣ 0.87 | -0.11 ∣ 0.47