
A Knowledge-Enhanced Recommendation Model
with Attribute-Level Co-Attention

Deqing Yang, Zengchun Song, Lvxin Xue (yangdeqing, zcsong19, [email protected]), School of Data Science, Fudan University, No. 220 Handan Rd., Shanghai 200433, China; and Yanghua Xiao ([email protected]), School of Computer Science, Fudan University, No. 220 Handan Rd., Shanghai 200433, China
(2020)
Abstract.

Deep neural networks (DNNs) have been widely employed in recommender systems, including attention mechanisms incorporated for performance improvement. However, most existing attention-based models only apply item-level attention on the user side, restricting further enhancement of recommendation performance. In this paper, we propose a knowledge-enhanced recommendation model, ACAM, which incorporates item attributes distilled from knowledge graphs (KGs) as side information and is built with an attribute-level co-attention mechanism to achieve performance gains. Specifically, each user and item in ACAM is first represented by a set of attribute embeddings. Then, user representations and item representations are augmented simultaneously by a co-attention module that captures the correlations between different attributes. Our extensive experiments over two realistic datasets show that the user and item representations augmented by attribute-level co-attention account for ACAM's superiority over state-of-the-art deep models.

recommender system, attribute-level, co-attention, knowledge graph
* Corresponding author.
journalyear: 2020; copyright: acmcopyright; conference: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, July 25–30, 2020, Virtual Event, China; booktitle: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '20), July 25–30, 2020, Virtual Event, China; price: 15.00; doi: 10.1145/3397271.3401313; isbn: 978-1-4503-8016-4/20/07

ACM Reference Format:
Deqing Yang, Zengchun Song, Lvxin Xue and Yanghua Xiao. 2020. A Knowledge-Enhanced Recommendation Model with Attribute-Level Co-Attention. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR’20), July 25–30, 2020, Virtual Event, China. ACM, New York, NY, USA, 4 pages.
https://doi.org/10.1145/3397271.3401313

1. Introduction

Encouraged by the success of deep neural networks (DNNs) in computer vision, natural language processing (NLP), etc., many researchers have introduced DNNs into recommender systems. In these deep recommendation models, attention mechanisms have been employed broadly for recommendation performance gains (He and et al., 2018; Zhou and et al., 2018; Kang and McAuley, 2018; Shen and et al., 2019). Although these attention-based models have proven effective, the following problems restrict further enhancement of recommendation performance. First, some of them (He and et al., 2018; Kang and McAuley, 2018) only employ coarse item-level attention, i.e., each item is directly represented by a single embedding from which user representations are generated. Such coarse-grained embeddings cannot represent users and items thoroughly. Second, although some models (Zhou and et al., 2018; Shen and et al., 2019) incorporate item features (attributes), also known as item knowledge, to improve the expressiveness of user/item representations, they only apply the attention mechanism on the user side.

Given these problems, we propose a novel deep recommendation model with attribute-level co-attention, namely ACAM (Attribute-level Co-Attention Model). ACAM demonstrates superior performance due to the following merits. First, its item and user representations are generated from a set of attribute embeddings rather than a single embedding, where the attributes are distilled from open knowledge graphs (KGs) as side information. Second, the co-attention module in ACAM captures the correlations between different attribute embeddings to augment user/item representations. Correlations between different item attributes often indicate latent relationships between items. For example, among movie attributes, the actor Stallone is more correlated with the genre action film, and the actress GONG Li is more correlated with the director ZHANG Yimou. Therefore, the latent relationships between target users and candidate items are uncovered more precisely by user/item representations augmented according to attribute correlations, resulting in enhanced recommendation performance. Furthermore, we add a knowledge graph embedding (KGE) objective to the loss function to learn better attribute embeddings.

In summary, we have the following contributions in this paper:

1. We propose an attribute-level co-attention mechanism for deep recommendation models, which captures the correlations between different user/item attributes precisely and then augments user and item representations simultaneously, which is helpful for recommendation performance.

2. Our extensive experiments demonstrate our model's superiority over several state-of-the-art deep models, including previous attention-based recommendation models, which clearly justifies the effectiveness of incorporating attribute embeddings and employing attribute-level co-attention to co-augment user representations and item representations.

2. Model Description

The task addressed by ACAM is top-n recommendation with implicit feedback (He et al., 2017; He and et al., 2018). To generate the top-n recommendation list, the target user $u$ is coupled with each candidate item $v$ and input into the model to compute $\hat{y}_{uv}$, which quantifies the probability that $u$ likes $v$, i.e., that $u$ gives positive feedback to $v$.

Figure 1. The proposed model’s framework.

As shown in Fig. 1, ACAM can be divided into three layers, i.e., the embedding layer, the co-attention layer and the prediction layer. In the embedding layer, both $u$ and $v$ are represented by a representation matrix on the attribute level, rather than by a single vector (embedding) as in previous models (He et al., 2017; He and et al., 2018; Kang and McAuley, 2018). Then, $u$'s and $v$'s representations are co-augmented based on the correlations (attentions) between different attributes captured by an attribute-level co-attention module. In the final prediction layer, a multi-layer perceptron (MLP) is fed with $u$'s and $v$'s representations to compute $\hat{y}_{uv}$.

2.1. Embedding Layer

2.1.1. Generating Item Representation

In general, most items and their attributes (e.g., the actors and directors of movies) in an open domain can be fetched from large-scale KGs, which consist of knowledge triplets of the form $\langle h,r,t\rangle$. The head entity ($h$) is an item, the relation ($r$) corresponds to an attribute, and the tail entity ($t$) is an attribute value. For example, the triplet $\langle$Rocky, starred, Stallone$\rangle$ states that Stallone is an actor of the movie Rocky. Shared attribute values indicate the latent relationships between different items (Zhou and et al., 2018; Shen and et al., 2019). Therefore, $M$ significant attributes (values) $\{a_1,a_2,\dots,a_M\}$ are first selected to represent an item in ACAM. The embedding of $a_i$ is denoted as $\boldsymbol{a}_i\in\mathbb{R}^d$ ($1\leq i\leq M$). We further use an item embedding (head entity embedding), denoted as $\boldsymbol{a}_0$, to supplement the representation of an item. For convenience, $\boldsymbol{a}_0$ is also regarded as an attribute embedding in what follows. Thus, an item's representation is enriched into a matrix of $(M+1)\times d$, as shown in Fig. 1. Note that an attribute may have multiple values corresponding to different tail entities in the KG. For example, a movie generally involves many actors, and each actor corresponds to an entity in the KG with a unique embedding. Thus, for an attribute with multiple values, we average all of its value embeddings (tail entity embeddings) as its attribute embedding.
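As an illustrative sketch (in NumPy; the entity ids, the dictionary-based embedding lookup, and the zero-padding of missing attributes are our assumptions, not details given in the paper), an item's $(M+1)\times d$ representation matrix can be assembled from KG triplets as follows:

```python
import numpy as np

def item_representation(item_id, triplets, attributes, entity_emb, d):
    """Build an item's (M+1) x d representation matrix.

    item_id:    head entity id of the item
    triplets:   list of (head, relation, tail) KG triplets
    attributes: the M selected attribute names (relations), in order
    entity_emb: dict mapping entity id -> d-dim numpy vector
    """
    rows = [entity_emb[item_id]]  # a_0: the item (head entity) embedding
    for attr in attributes:
        # gather the tail-entity embeddings of every value of this attribute
        values = [entity_emb[t] for (h, r, t) in triplets
                  if h == item_id and r == attr]
        if values:
            rows.append(np.mean(values, axis=0))  # average multi-valued attributes
        else:
            rows.append(np.zeros(d))              # assumed padding for a missing attribute
    return np.stack(rows)                         # shape (M+1, d)
```

The averaging step implements the paper's rule for multi-valued attributes: e.g., a movie with two actors gets the mean of the two actor-entity embeddings as its actor embedding.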

2.1.2. Generating User Representation

A user in ACAM is represented by the recent $L$ items he/she has interacted with, denoted as $\mathcal{L}=\{v_1,v_2,\dots,v_L\}$. Like the candidate item's representation, each interacted item $v_j$ ($1\leq j\leq L$) in $\mathcal{L}$ is also represented by the union of its $M+1$ attribute embeddings. Therefore, the target user $u$'s representation is enriched to a tensor of $(M+1)\times L\times d$, denoted as $\boldsymbol{\tilde{E}}_u$. For users with fewer than $L$ historical interactions, we fill their representations with paddings.

Next, we need to extract the features of $\boldsymbol{\tilde{E}}_u$ and compress it into a matrix, both to reduce the number of trainable parameters and to feed the co-attention layer (module) conveniently. To this end, we aggregate the $i$-th attribute embeddings of $u$'s historical items into $u$'s $i$-th attribute embedding through an attention network. Specifically, we adopt a weighted sum pooling as

(1) $\boldsymbol{a}^{u}_{i}=\sum_{j=1}^{L}FFN(\boldsymbol{a}^{j}_{i}\oplus\boldsymbol{a}^{v}_{i})\,\boldsymbol{a}^{j}_{i}$

where $\boldsymbol{a}^{u}_{i}$ is regarded as $u$'s $i$-th ($0\leq i\leq M$) attribute embedding, and $FFN(\boldsymbol{a}^{j}_{i}\oplus\boldsymbol{a}^{v}_{i})$ is the output of a feed-forward network fed with the concatenation of the historical item's attribute embedding $\boldsymbol{a}^{j}_{i}$ and the candidate item's attribute embedding $\boldsymbol{a}^{v}_{i}$. Thus, $u$'s representation consists of the $M+1$ attribute embeddings computed by Eq. 1. Such user representations vary with different candidate items, which has been proven more helpful for recommendation performance than fixed user representations (Shen and et al., 2019; He and et al., 2018).
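Eq. 1 can be sketched as follows (NumPy; since the paper does not specify the FFN's architecture, we assume a single tanh hidden layer with a scalar output, which is only an illustrative choice):

```python
import numpy as np

def user_attribute_embedding(hist_attr, cand_attr, W, b, w_out):
    """Eq. 1: aggregate the i-th attribute embeddings of the L historical items.

    hist_attr: (L, d) matrix, rows are a_i^j for j = 1..L
    cand_attr: (d,) vector, the candidate item's i-th attribute embedding a_i^v
    W, b, w_out: parameters of the scoring FFN (assumed: one tanh hidden layer)
    """
    L = hist_attr.shape[0]
    # concatenate each historical attribute embedding with the candidate's
    x = np.concatenate([hist_attr, np.tile(cand_attr, (L, 1))], axis=1)  # (L, 2d)
    weights = np.tanh(x @ W + b) @ w_out  # one scalar weight per historical item
    return weights @ hist_attr            # weighted sum pooling -> (d,)
```

Because `cand_attr` enters the weight computation, the same user yields different aggregated embeddings for different candidate items, matching the candidate-aware behavior described above.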

2.2. Co-attention Layer

In this layer, both $u$'s and $v$'s attribute-based representations are simultaneously augmented by a co-attention module with a symmetrical neural architecture, as shown in Fig. 1.

Specifically, each of $u$'s attribute embeddings $\boldsymbol{a}_{i}^{u}$ is adjusted as the weighted sum of all of $u$'s attribute embeddings $\boldsymbol{a}_{j}^{u}$, where the weight (attention) of $\boldsymbol{a}_{j}^{u}$ is computed from the correlation between $\boldsymbol{a}_{j}^{u}$ and $\boldsymbol{a}_{i}^{v}$. To adjust $v$'s attribute embedding $\boldsymbol{a}_{i}^{v}$, we apply the symmetric operation. Such adjustment brings the embeddings of two correlated attributes closer to each other. As a result, the user/item representations generated from the adjusted attribute embeddings induce more precise recommendations. For example, a movie starring Stallone becomes easier to recommend to a user who has watched many action films.

Formally, we first denote $u$'s and $v$'s representations respectively as

$\boldsymbol{E}_{a}^{u}=[\boldsymbol{a}_{0}^{u},\boldsymbol{a}_{1}^{u},\dots,\boldsymbol{a}_{M}^{u}],\quad\boldsymbol{E}_{a}^{v}=[\boldsymbol{a}_{0}^{v},\boldsymbol{a}_{1}^{v},\dots,\boldsymbol{a}_{M}^{v}]$

Then, we apply nonlinear transformations to obtain the key matrices $\boldsymbol{K}^{u},\boldsymbol{K}^{v}\in\mathbb{R}^{(M+1)\times d_{K}}$ and value matrices $\boldsymbol{V}^{u},\boldsymbol{V}^{v}\in\mathbb{R}^{(M+1)\times d_{V}}$ from $\boldsymbol{E}_{a}^{u}$ and $\boldsymbol{E}_{a}^{v}$ as

(2) $\boldsymbol{K}^{u}=\tanh(\boldsymbol{E}_{a}^{u\top}\boldsymbol{W}^{u}_{K}+\boldsymbol{b}^{u}_{K}),\quad\boldsymbol{V}^{u}=\tanh(\boldsymbol{E}_{a}^{u\top}\boldsymbol{W}^{u}_{V}+\boldsymbol{b}^{u}_{V})$
$\phantom{(2)}$ $\boldsymbol{K}^{v}=\tanh(\boldsymbol{E}_{a}^{v\top}\boldsymbol{W}^{v}_{K}+\boldsymbol{b}^{v}_{K}),\quad\boldsymbol{V}^{v}=\tanh(\boldsymbol{E}_{a}^{v\top}\boldsymbol{W}^{v}_{V}+\boldsymbol{b}^{v}_{V})$

where $\boldsymbol{W}^{u}_{K},\boldsymbol{W}^{v}_{K}\in\mathbb{R}^{d\times d_{K}}$ and $\boldsymbol{W}^{u}_{V},\boldsymbol{W}^{v}_{V}\in\mathbb{R}^{d\times d_{V}}$ are transformation weight matrices, and $\boldsymbol{b}^{u}_{K},\boldsymbol{b}^{v}_{K}\in\mathbb{R}^{d_{K}}$ and $\boldsymbol{b}^{u}_{V},\boldsymbol{b}^{v}_{V}\in\mathbb{R}^{d_{V}}$ are transformation bias vectors. Next, we obtain a co-attention map $\boldsymbol{S}=\boldsymbol{K}^{u}\boldsymbol{K}^{v\top}$, a square matrix of $(M+1)\times(M+1)$ in which each entry $\boldsymbol{S}_{ij}$ quantifies the affinity between $u$'s $i$-th attribute and $v$'s $j$-th attribute, i.e., the correlation between $\boldsymbol{a}_{i}^{u}$ and $\boldsymbol{a}_{j}^{v}$. Accordingly, $\boldsymbol{S}$ stores the attribute-level attentions.

Based on $\boldsymbol{V}^{u},\boldsymbol{V}^{v}$ along with $\boldsymbol{S}$, $u$'s and $v$'s representations are revised as

(3) $\boldsymbol{U}=\text{softmax}_{col}(\boldsymbol{S})^{\top}\boldsymbol{V}^{u},\quad\boldsymbol{V}=\text{softmax}_{row}(\boldsymbol{S})\boldsymbol{V}^{v}$

where $\boldsymbol{U},\boldsymbol{V}\in\mathbb{R}^{(M+1)\times d_{V}}$, each row of which represents an adjusted attribute embedding, and $\text{softmax}_{col}(\cdot)$ and $\text{softmax}_{row}(\cdot)$ denote the column-wise and row-wise softmax, respectively.
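The co-attention layer (Eqs. 2 and 3) can be sketched as follows (NumPy; for brevity the user and item sides share projection weights here, a simplification of our own, whereas the paper keeps separate $\boldsymbol{W}^{u}$ and $\boldsymbol{W}^{v}$; the attribute matrices are stored as $(M+1)\times d$ rather than transposed):

```python
import numpy as np

def softmax(x, axis):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(Eu, Ev, Wk, bk, Wv, bv):
    """Eu, Ev: (M+1, d) attribute matrices of user u and item v."""
    Ku, Kv = np.tanh(Eu @ Wk + bk), np.tanh(Ev @ Wk + bk)  # keys,   (M+1, d_K)
    Vu, Vv = np.tanh(Eu @ Wv + bv), np.tanh(Ev @ Wv + bv)  # values, (M+1, d_V)
    S = Ku @ Kv.T                   # co-attention map, (M+1) x (M+1)
    U = softmax(S, axis=0).T @ Vu   # Eq. 3: column-wise softmax over S
    V = softmax(S, axis=1) @ Vv     # Eq. 3: row-wise softmax over S
    return U, V                     # each (M+1, d_V)
```

Each row of `U` (resp. `V`) is a weighted mixture of the user-side (resp. item-side) value vectors, with weights read from the affinity map `S`.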

To reduce the number of trainable parameters in ACAM, we set $\boldsymbol{K}^{u}=\boldsymbol{V}^{u}$, $\boldsymbol{K}^{v}=\boldsymbol{V}^{v}$ and $d=d_{K}=d_{V}$ in our experiments. We verified that this reduction does not affect the final recommendation performance.

The last operation in this layer is to apply column-wise sum pooling (our experiments showed sum pooling to be slightly better than max/min/avg pooling) to compress the matrices $\boldsymbol{U},\boldsymbol{V}$ into the final representations of $u$ and $v$ as

(4) $\boldsymbol{r}_{u}=\text{sum}_{col}(\boldsymbol{U}),\quad\boldsymbol{r}_{v}=\text{sum}_{col}(\boldsymbol{V})$

In ACAM's prediction layer, the final score $\hat{y}_{uv}$ is computed by a three-layer MLP fed with the concatenation of $\boldsymbol{r}_{u}$, $\boldsymbol{r}_{v}$, $(\text{avg}_{row}(\boldsymbol{E}_{a}^{u}))^{\top}$ and $(\text{avg}_{row}(\boldsymbol{E}_{a}^{v}))^{\top}$, where $\text{avg}_{row}(\cdot)$ is the row-wise average.
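The pooling of Eq. 4 and the prediction MLP can be sketched as follows (NumPy; the paper only states that the MLP has three layers, so the hidden sizes, ReLU activations and sigmoid output here are our assumptions):

```python
import numpy as np

def predict(U, V, Eu, Ev, layers):
    """Compute y_hat_{uv} from the co-attended matrices U, V ((M+1, d) each)
    and the raw attribute matrices Eu, Ev ((M+1, d) each)."""
    r_u = U.sum(axis=0)  # Eq. 4: column-wise sum pooling -> (d,)
    r_v = V.sum(axis=0)
    # concatenate pooled representations with the averaged raw attributes
    x = np.concatenate([r_u, r_v, Eu.mean(axis=0), Ev.mean(axis=0)])
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)       # assumed ReLU for hidden layers
    return 1.0 / (1.0 + np.exp(-x))      # sigmoid -> probability that u likes v
```

Feeding both the co-attended and the averaged raw representations lets the MLP see the attribute embeddings before and after adjustment.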

2.3. Model Training

As introduced above, the item embedding $\boldsymbol{a}_{0}$ and the attribute embeddings $\boldsymbol{a}_{i}$ ($1\leq i\leq M$) are the basis of computing $\hat{y}_{uv}$. Besides the cross-entropy loss as in (He et al., 2017; He and et al., 2018), we further use a KGE objective to learn $\boldsymbol{a}_{0}$ and $\boldsymbol{a}_{i}$ better, since an item and an attribute value correspond to a head entity and a tail entity in the KG, respectively. Specifically, we adopt the objective of the TransH (Wang and et al., 2014) model, since it handles many-to-many relations effectively. We therefore minimize the following objective function to learn ACAM's parameters:

(5) $\mathcal{O}=-\sum_{(u,v)\in\mathcal{Y}}\big[y_{uv}\log\hat{y}_{uv}+(1-y_{uv})\log(1-\hat{y}_{uv})\big]+\lambda_{1}\sum_{\langle h,r,t\rangle\in\mathcal{K}}\|(\boldsymbol{h}-\boldsymbol{w}_{r}^{\top}\boldsymbol{h}\boldsymbol{w}_{r})+\boldsymbol{d}_{r}-(\boldsymbol{t}-\boldsymbol{w}_{r}^{\top}\boldsymbol{t}\boldsymbol{w}_{r})\|_{2}^{2}+\lambda_{2}\|\Theta\|^{2}$

where $\mathcal{Y}$ is the union of observed user-item interactions and sampled negative feedback, and $\mathcal{K}$ is the set of observed triplets in the KG. The head entity embedding $\boldsymbol{h}$ and the tail entity embedding $\boldsymbol{t}$ in the second term are used as $\boldsymbol{a}_{0}$ and $\boldsymbol{a}_{i}$, respectively, and $\boldsymbol{w}_{r}$ and $\boldsymbol{d}_{r}$ are the hyperplane normal vector and the translation vector in TransH.
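The joint objective of Eq. 5 can be sketched as follows (NumPy; the numerical clipping epsilon is our addition, and `theta_sq_norm` stands for the $\|\Theta\|^2$ regularizer computed elsewhere):

```python
import numpy as np

def transh_term(h, t, w_r, d_r):
    """TransH score for one triplet: project h and t onto relation r's
    hyperplane (w_r assumed unit-norm), then translate by d_r."""
    h_p = h - (w_r @ h) * w_r
    t_p = t - (w_r @ t) * w_r
    return np.sum((h_p + d_r - t_p) ** 2)

def joint_loss(y, y_hat, kge_triplet_params, lam1, lam2, theta_sq_norm):
    """Eq. 5: cross-entropy + lambda_1 * TransH objective + lambda_2 * L2."""
    eps = 1e-12  # numerical guard against log(0) (our addition)
    bce = -np.sum(y * np.log(y_hat + eps) + (1 - y) * np.log(1 - y_hat + eps))
    kge = sum(transh_term(*p) for p in kge_triplet_params)
    return bce + lam1 * kge + lam2 * theta_sq_norm
```

A triplet whose projected head, after translation, coincides with its projected tail contributes zero to the KGE term, which is the behavior TransH optimizes toward.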

3. Model Evaluation

3.1. Experiment Settings

3.1.1. Dataset Description

We conducted our experiments on two realistic datasets, i.e., Douban movies and NetEase songs (Douban: https://movie.douban.com; NetEase: https://music.163.com). The statistics of the two datasets are listed in Table 1. We fetched Douban movies' attribute values from CN-DBpedia (Xu et al., 2017), a large-scale Chinese KG. In our experiments, we selected four significant attributes ($M=4$) for each dataset, i.e., actor, director, writer and genre for Douban movies, and singer, album, composer and lyricist for NetEase songs. To facilitate reproduction of our results, we have published our datasets and ACAM's source code at https://github.com/DeqingYang/ACAM-model.

Table 1. Statistics of the two experiment datasets.
dataset user number item number interaction number
Douban 4,965 41,785 958,425
NetEase 115,995 19,981 2,399,638

For each user, we took his/her 10 most recent interactions as the positive samples of the test set, and the remaining interactions as the positive samples of the training set. We also used negative sampling (He et al., 2017) to collect negative samples for each user. Note that when sampling negatives at random, we biased the selection toward popular items (those with high rating scores or many reviews), to avoid cases where a user did not rate/review an item merely because he/she was unaware of it. In both training and prediction, each positive sample was paired with 4 negative samples, the general setting in previous models (He et al., 2017; He and et al., 2018).
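The popularity-biased negative sampling described above can be sketched as follows (NumPy; using raw review counts as popularity weights is our illustrative assumption, since the paper does not specify the exact weighting):

```python
import numpy as np

def sample_negatives(all_items, interacted, popularity, k, rng):
    """Draw k negative items for one user, biased toward popular items so
    that 'unseen because unaware' items are less likely to be selected."""
    candidates = np.array([i for i in all_items if i not in interacted])
    w = np.array([popularity[i] for i in candidates], dtype=float)
    p = w / w.sum()  # popularity-proportional sampling probabilities
    return rng.choice(candidates, size=k, replace=False, p=p)
```

With 4 negatives per positive, this routine would be called once per positive sample with `k=4` during both training and prediction.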

3.1.2. Compared Models

NCF (He et al., 2017): This is a DNN-based recommendation model consisting of a GMF (generalized matrix factorization) layer and an MLP, where each user and item is represented only by a single embedding.

NAIS (He and et al., 2018): This is an attention-based recommendation model in which only user representations are refined by attention mechanism, and each item is represented by a single embedding.

AFM (Xiao and et al., 2017): It is a neural version of FM which adds an attention-based pooling layer after the pairwise feature interaction layer.

FDSA (Zhang and et al., 2019): This sequential recommendation model also incorporates feature-level representations but uses self-attention mechanism to only refine user representations rather than item representations.

RippleNet (et al., 2018): It is a representative KG-based recommendation model, which was compared with ACAM to highlight ACAM’s strengths in knowledge (attribute) exploitation.

DIN (Zhou and et al., 2018): It also imports various features to enrich user/item representations. Furthermore, it uses the attention mechanism similar to NAIS to adjust user representations only.

In the following experiment results, we adopt three popular metrics for evaluating top-n recommendation or ranking, i.e., HR@n (Hit Ratio), nDCG@n (Normalized Discounted Cumulative Gain) and RR (Reciprocal Rank). To avoid statistical bias, all models' performance is reported as the average score of 3 runs. All baselines' hyper-parameters were set to the optimal values in their original papers.
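For reference, the three metrics can be computed per test case as follows (a sketch with a single relevant item per ranked list, our simplification of the evaluation protocol; `ranked` holds item ids ordered by predicted score):

```python
import numpy as np

def metrics_at_n(ranked, pos, n):
    """HR@n, nDCG@n and RR for one ranked list with one relevant item `pos`."""
    hr = 1.0 if pos in ranked[:n] else 0.0
    ndcg = 0.0
    if pos in ranked[:n]:
        rank = ranked[:n].index(pos)      # 0-based position of the hit
        ndcg = 1.0 / np.log2(rank + 2)    # DCG of a single relevant hit
    rr = 1.0 / (ranked.index(pos) + 1) if pos in ranked else 0.0
    return hr, ndcg, rr
```

Averaging these values over all test cases yields the per-model scores reported in Table 2.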

Table 2. All models’ top3/5/10 performance for the two recommendation tasks.
$L$ Model Douban movie NetEase song
HR@3 nDCG@3 HR@5 nDCG@5 HR@10 nDCG@10 RR HR@3 nDCG@3 HR@5 nDCG@5 HR@10 nDCG@10 RR
3 NCF 0.8417 0.8483 0.7998 0.8113 0.7074 0.7492 0.9279 0.7903 0.7984 0.7689 0.7723 0.6893 0.7193 0.8952
NAIS 0.8443 0.8531 0.8112 0.8253 0.6768 0.7291 0.9336 0.7998 0.8027 0.7772 0.7824 0.6747 0.7140 0.8963
AFM 0.8399 0.8455 0.8080 0.8220 0.7091 0.7499 0.9228 0.8304 0.8358 0.8112 0.8104 0.7405 0.7708 0.9194
FDSA 0.8625 0.8690 0.8257 0.8437 0.7208 0.7598 0.9394 0.8101 0.8119 0.7949 0.8011 0.7339 0.7574 0.8994
RippleNet 0.7966 0.8012 0.7694 0.7743 0.6588 0.7009 0.8957 0.8025 0.8042 0.7848 0.7901 0.7185 0.7437 0.8951
DIN 0.8322 0.8407 0.7982 0.8023 0.6453 0.7028 0.9234 0.7892 0.7936 0.7658 0.7704 0.6723 0.6950 0.8949
ACAM 0.8680 0.8737 0.8324 0.8477 0.7137 0.7613 0.9495 0.8541 0.8576 0.8267 0.8377 0.7379 0.7733 0.9301
10 NCF 0.8335 0.8105 0.8011 0.8165 0.7003 0.7449 0.8491 0.7929 0.7875 0.7654 0.7785 0.6980 0.7177 0.8694
NAIS 0.8595 0.8694 0.8313 0.8440 0.6875 0.7417 0.9449 0.8026 0.8051 0.7823 0.7844 0.6774 0.7165 0.8971
AFM 0.8246 0.8306 0.8041 0.8105 0.7015 0.7404 0.9163 0.8289 0.8343 0.8073 0.8090 0.7410 0.7692 0.9186
FDSA 0.8588 0.8644 0.8292 0.8425 0.7203 0.7629 0.9353 0.8325 0.8278 0.8184 0.8257 0.7516 0.7725 0.9178
RippleNet 0.8173 0.8224 0.7964 0.8002 0.6726 0.7172 0.9104 0.7894 0.7921 0.7670 0.7704 0.7116 0.7359 0.8904
DIN 0.8325 0.8335 0.8002 0.8047 0.6579 0.7098 0.9105 0.7865 0.7906 0.7723 0.7796 0.6812 0.7065 0.8994
ACAM 0.8682 0.8739 0.8325 0.8478 0.7139 0.7634 0.9504 0.8615 0.8642 0.8305 0.8423 0.7498 0.7802 0.9317

3.2. Evaluation Results

3.2.1. Hyper-parameter Sensitivity

We first list the settings of the important hyper-parameters of our experiments in Table 3; the results of hyper-parameter tuning are not displayed due to space limitations. ACAM and the other models incorporating attributes improve their performance slightly when more significant attributes are fed, but ACAM retains its superiority. In Table 2, we only display the results of $L=3$ and $L=10$ for the two recommendation tasks. We did not consider the scenario of $L=1$ because a user's preference cannot be inferred precisely from only one historical item. Given that many users' preferences vary over time, using too many historical items to represent a user would introduce noise, so we also neglected the cases of $L>10$. We further find that almost all models improve only slightly when $L$ increases from 3 to 10, implying that 3 recent historical items are adequate for many models to represent a user precisely. In addition, ACAM achieves its best performance when $\lambda_1$ is small, implying that too large a weight on the KGE objective biases the synthetic objective of Eq. 5.

Table 3. Hyper-parameter settings.
dataset $d/d_K/d_V$ $M$ $L$ $\lambda_1$ $\lambda_2$
Douban 512 4 3,10 0.1 0.001
NetEase 512 4 3,10 0.05 0.001

3.2.2. Recommendation Performance Comparison

Table 2 displays all compared models' top-3/5/10 recommendation performance on the two datasets. ACAM outperforms NCF, which justifies the effectiveness of incorporating attribute embeddings. ACAM's superiority over NAIS shows that fine-grained user/item representations based on attribute embeddings improve the effectiveness of attention-based models; it also justifies the rationale of augmenting item and user representations simultaneously with a co-attention mechanism. Although AFM also captures the interactions (correlations) between different features (attributes) through attention, it does not perform as well as ACAM, showing that ACAM's co-attention mechanism captures the correlations between different attributes better than AFM's attention in terms of recommendation performance. Although FDSA and DIN also incorporate feature embeddings to enrich user/item representations, they do not perform as well as ACAM, implying that co-refining user and item representations with co-attention is more effective than refining only user representations with attention (DIN) or self-attention (FDSA). Although RippleNet is a state-of-the-art KG-based recommendation model, it is also inferior to ACAM, showing that ACAM's co-attention mechanism exploits item knowledge (attributes) more effectively.

4. Related Work and Conclusion

NCF (He et al., 2017) is a pioneering work employing DNNs in recommender systems, which fuses a GMF and an MLP to learn user/item representations. Inspired by attention's success in computer vision and NLP, many researchers have employed various attention mechanisms in recommendation models to capture diverse user preferences more precisely. For example, AFM (Xiao and et al., 2017) adds an attention-based pooling layer after the pairwise feature interaction layer to achieve content-aware recommendation. Like our model, DIN (Zhou and et al., 2018) also imports user/item features and introduces a local activation unit to learn user representations adaptively w.r.t. different candidate items. Similarly, NAIS (He and et al., 2018) assigns different attentions to each historical item of a user to generate adaptive user representations that bring better recommendation results. FDSA (Zhang and et al., 2019) applies a self-attention mechanism to refine user representations generated from item features.

We propose a novel recommendation model ACAM in this paper, which first represents users and items with fine-grained attribute embeddings, and then augments user representations and item representations simultaneously by an attribute-level co-attention module. Such augmented representations are proven beneficial to performance gains through our extensive experiments.

References

  • et al. (2018) Hongwei Wang et al. 2018. RippleNet: Propagating User Preferences on the Knowledge Graph for Recommender Systems. In Proc. of CIKM.
  • He and et al. (2018) Xiangnan He and Zhankui He et al. 2018. NAIS: Neural Attentive Item Similarity Model for Recommendation. IEEE TKDE (2018), 1–1.
  • He et al. (2017) Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat Seng Chua. 2017. Neural Collaborative Filtering. In Proc. of WWW.
  • Kang and McAuley (2018) Wang-Cheng Kang and Julian McAuley. 2018. Self-Attentive Sequential Recommendation. (2018).
  • Shen and et al. (2019) Chenlu Shen and Deqing Yang et al. 2019. A Deep Recommendation Model Incorporating Adaptive Knowledge-based Representations. In Proc. of DASFAA.
  • Wang and et al. (2014) Zhen Wang and Jianwen Zhang et al. 2014. Knowledge graph embedding by translating on hyperplanes. In Proc. of AAAI.
  • Xiao and et al. (2017) Jun Xiao and Hao Ye et al. 2017. Attentional Factorization Machines: Learning the Weight of Feature Interactions via Attention Networks. In Proc. of IJCAI.
  • Xu et al. (2017) B. Xu, Y. Xu, J. Liang, C. Xie, B. Liang, W. Cui, and Y. Xiao. 2017. CN-DBpedia: A Never-Ending Chinese Knowledge Extraction System. In Proc. of ICIEA.
  • Zhang and et al. (2019) Tingting Zhang and Pengpeng Zhao et al. 2019. Feature-level Deeper Self-Attention Network for Sequential Recommendation. In Proc. of IJCAI.
  • Zhou and et al. (2018) Guorui Zhou and Chengru Song et al. 2018. Deep Interest Network for Click-Through Rate Prediction. In Proc. of KDD.