Context-Enhanced Entity and Relation Embedding for
Knowledge Graph Completion (Student Abstract)
Abstract
Most research on knowledge graph completion learns representations of entities and relations to predict missing links in incomplete knowledge graphs. However, these methods fail to take full advantage of the contextual information of both entities and relations. Here, we extract the contexts of entities and relations from the triplets they compose. We propose a model named AggrE, which conducts efficient multi-hop aggregations on entity context and relation context respectively, and learns context-enhanced entity and relation embeddings for knowledge graph completion. Experimental results show that AggrE is competitive with existing models.
Introduction
Knowledge graphs (KGs) store a wealth of real-world knowledge in structured graphs, which consist of collections of triplets $(h, r, t)$, each representing that a head entity $h$ is related to a tail entity $t$ through a relation type $r$. KGs play a significant role in AI applications such as recommendation systems, question answering, and information retrieval. However, the coverage of today's knowledge graphs is still far from complete and comprehensive, so researchers have proposed a number of knowledge graph completion (KGC) methods to predict the missing links/relations in an incomplete knowledge graph.
Most state-of-the-art methods for KGC are based on knowledge graph embeddings, which assign an embedding vector to each entity and relation in a continuous embedding space and train the embeddings on the existing triplets, i.e., so that observed triplets score higher than random ones. However, most of them fail to utilize the context/neighborhood of entities and relations, which may contain rich and valuable information for KGC.
Recently, several studies have demonstrated the significance of the contextual information of entities and relations in KGC. For example, A2N (Bansal et al. 2019) and RGHAT (Zhang et al. 2020) propose attention-based methods that leverage contextual information for link prediction by attending to neighbor entities, leading to more accurate KGC. PathCon (Wang, Ren, and Leskovec 2020) considers both the relational context of the head/tail entity and the relational paths between head and tail entities in one model, and finds that they are critical to the task of relation prediction. However, these works utilize either entity context or relation context alone, which may lead to information loss.
In this paper, we aim to take full advantage of both entity context and relation context to enhance the KGC task. Specifically, unlike the neighborhood definition in traditional KG topology, for each element of each triplet we extract the pair composed of the other two elements as one neighbor in its context. We then propose an efficient model, named AggrE, which alternately aggregates multi-hop information from entity context and relation context into entities and relations, and learns context-enhanced entity and relation embeddings. Finally, we use the learned embeddings to predict the missing relation between a given pair of entities to complete knowledge graphs.
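As an illustration of this neighbor definition, the following Python sketch extracts both kinds of context from a list of triplets. The function name and data layout are our own for illustration, not the paper's implementation:

```python
from collections import defaultdict

def extract_contexts(triplets):
    """For each triplet (h, r, t): (r, t) joins h's entity context,
    (r, h) joins t's entity context, and (h, t) joins r's relation
    context -- i.e. each element's neighbors are the pairs formed
    by the other two elements of the triplet."""
    entity_ctx = defaultdict(set)    # entity   -> {(relation, neighbor), ...}
    relation_ctx = defaultdict(set)  # relation -> {(head, tail), ...}
    for h, r, t in triplets:
        entity_ctx[h].add((r, t))
        entity_ctx[t].add((r, h))
        relation_ctx[r].add((h, t))
    return dict(entity_ctx), dict(relation_ctx)
```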

The Proposed Model
We are given a knowledge graph $G = (E, R, T)$, where $E$ and $R$ are the entity set and relation set respectively, and $T$ is the set of observed triplets. Firstly, we extract contexts of entities and relations from the existing triplets, as shown in Figure 1. For an entity $e \in E$, we define the entity context of $e$ as $C(e) = \{(r, t) \mid (e, r, t) \in T \ \text{or} \ (t, r, e) \in T\}$, which is the set of neighbor entities of $e$ with their corresponding relations. For a relation $r \in R$, we define the relation context of $r$ as $C(r) = \{(h, t) \mid (h, r, t) \in T\}$, which is the set of endpoint pairs of $r$. Denote $e^{(0)}$ and $r^{(0)}$ as the randomly initialized embeddings of $e$ and $r$ respectively; our intuition is to aggregate contextual information into the representations of each entity and relation to help the prediction. We define the aggregation functions as:
$$e^{(l+1)} = e^{(l)} + \sum_{(r,t) \in C(e)} \alpha^{(l)}_{(r,t)} \left( r^{(l)} \circ t^{(l)} \right) \qquad (1)$$

$$r^{(l+1)} = r^{(l)} + \sum_{(h,t) \in C(r)} \beta^{(l)}_{(h,t)} \left( h^{(l)} \circ t^{(l)} \right) \qquad (2)$$
where $l \in \{0, \dots, L-1\}$ and $L$ is the number of aggregation layers, $e^{(l)}$ and $r^{(l)}$ are the $l$-th layer's output embeddings of $e$ and $r$, and $\circ$ is the element-wise product. The coefficients are normalized by layer-specific softmax functions: $\alpha^{(l)}_{(r,t)} = \exp\!\big(f^{(l)}(e,r,t)\big) / \sum_{(r',t') \in C(e)} \exp\!\big(f^{(l)}(e,r',t')\big)$ represents how important each entity-context pair $(r,t)$ is for $e$, and $\beta^{(l)}_{(h,t)} = \exp\!\big(f^{(l)}(h,r,t)\big) / \sum_{(h',t') \in C(r)} \exp\!\big(f^{(l)}(h',r,t')\big)$ represents how important each relation-context pair $(h,t)$ is for $r$. Here $f^{(l)}(h,r,t)$ represents the score of a possible triplet $(h,r,t)$ after $l$ layers' aggregation, and we use the same score function as DistMult (Yang et al. 2014) to calculate the triplet scores:
$$f^{(l)}(h, r, t) = h^{(l)\top} M_r^{(l)} \, t^{(l)} \qquad (3)$$
where $M_r^{(l)}$ is a diagonal matrix with $r^{(l)}$ on its diagonal. After $L$ layers' aggregation, we obtain the final outputs $e^{(L)}$ and $r^{(L)}$ of each entity and relation, which contain neighbor information from their $L$-hop contexts. Then we apply a softmax loss function to the final triplet scores to compute the likelihood of predicting the correct relations:
$$\mathcal{L} = -\sum_{(h,r,t) \in T} \log \frac{\exp\!\big(f^{(L)}(h,r,t)\big)}{\sum_{r' \in R} \exp\!\big(f^{(L)}(h,r',t)\big)} \qquad (4)$$
where $R$ is the set of relations. We use a mini-batch Adam optimizer to minimize $\mathcal{L}$. The difference between our aggregation model and a Graph Neural Network (GNN) is that, instead of using complex matrix transformations, we use element-wise products to obtain neighborhood information and add it directly to the central nodes, as the embeddings themselves can be regarded as trainable transformation parameters. This makes our model more efficient and better suited to larger knowledge graphs.
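To make the update rule concrete, below is a minimal NumPy sketch of one entity-side aggregation step (Eq. 1) and the relation-prediction loss (Eq. 4); the relation-side update (Eq. 2) is symmetric. The embedding dictionaries and context layout are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def distmult_score(h, r, t):
    # f(h, r, t) = h^T diag(r) t, i.e. a sum of element-wise products.
    return float(np.sum(h * r * t))

def aggregate_entity(e_emb, r_emb, ent_ctx, e):
    """One aggregation step for entity e over its context C(e):
    each neighbor pair (r, t) contributes the message r * t,
    weighted by a softmax over the triplet scores f(e, r, t)."""
    ctx = sorted(ent_ctx[e])
    scores = np.array([distmult_score(e_emb[e], r_emb[r], e_emb[t])
                       for r, t in ctx])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()  # layer-specific softmax normalization
    message = sum(w * (r_emb[r] * e_emb[t])
                  for w, (r, t) in zip(weights, ctx))
    return e_emb[e] + message  # residual update, as in Eq. (1)

def relation_nll(e_emb, r_emb, h, t, true_r):
    """Softmax cross-entropy over all candidate relations, as in Eq. (4)."""
    rels = sorted(r_emb)
    scores = np.array([distmult_score(e_emb[h], r_emb[r], e_emb[t])
                       for r in rels])
    scores -= scores.max()  # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum())
    return -log_probs[rels.index(true_r)]
```

In a full training loop, one would apply such updates alternately to all entities and relations for L layers, then backpropagate the loss through the final-layer scores; this sketch only shows the forward computation for a single entity and triplet.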
Experiments
We conduct experiments on two widely used KG benchmarks: FB15K-237 and WN18RR. Note that the only trainable parameters in our model are the entity and relation embeddings, so for a fair comparison we choose 5 traditional baselines with a small number of parameters. We use Mean Reciprocal Rank (MRR, the mean of the reciprocals of the predicted ranks), Mean Rank (MR, the mean of the predicted ranks), and Hit@3 (the proportion of correct relations ranked in the top 3 predictions) as evaluation metrics. In the experiments, we set the embedding dimensionality to 256, the learning rate to 5e-3, the l2 penalty coefficient to 1e-7, and the batch size to 512, with a maximum of 20 epochs. We set the number of aggregation layers to 2 for WN18RR and 4 for FB15K-237, respectively.
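All three metrics can be computed directly from the rank that the model assigns to the correct relation for each test triplet. A small sketch (the function name is ours):

```python
def ranking_metrics(ranks):
    """MRR, MR, and Hit@3 from the 1-indexed ranks of the correct
    relation for each test triplet."""
    n = len(ranks)
    mrr = sum(1.0 / r for r in ranks) / n   # mean reciprocal rank
    mr = sum(ranks) / n                     # mean rank
    hit3 = sum(1 for r in ranks if r <= 3) / n  # fraction ranked in top 3
    return mrr, mr, hit3
```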
Table 1: Relation prediction results on WN18RR and FB15K-237.

|          | WN18RR |       |       | FB15K-237 |       |       |
|----------|--------|-------|-------|-----------|-------|-------|
| Model    | MRR    | MR    | Hit@3 | MRR       | MR    | Hit@3 |
| TransE   | 0.784  | 2.079 | 0.870 | 0.966     | 1.352 | 0.984 |
| DistMult | 0.847  | 2.024 | 0.891 | 0.875     | 1.927 | 0.936 |
| ComplEx  | 0.840  | 2.053 | 0.880 | 0.924     | 1.494 | 0.970 |
| SimplE   | 0.730  | 3.259 | 0.755 | 0.971     | 1.407 | 0.987 |
| RotatE   | 0.799  | 2.284 | 0.823 | 0.970     | 1.315 | 0.980 |
| AggrE    | 0.953  | 1.136 | 0.989 | 0.966     | 1.171 | 0.989 |
As shown in Table 1, AggrE significantly outperforms all the baselines on both benchmarks, which indicates its effectiveness. The improvement is particularly notable on WN18RR, where the links between entities are sparser than in FB15K-237; this may be because, without using extra parameters beyond the embeddings, AggrE is less prone to overfitting. Besides, AggrE achieves substantial improvements over DistMult on all metrics; given that the two have similar objective functions, this indicates that aggregating contextual information for entities and relations is valuable and can greatly improve prediction performance.
Conclusion
In this paper, we specify a novel definition of the context/neighborhood of entities and relations in KGs, and propose a multi-layer aggregation model that composes contextual information into embeddings for KGC. In the future, we will explore more possible aggregation functions in our model.
Acknowledgments
This research was supported by the NSFC under Grant 61836013, the National Key R&D Plan of China (2016YFB0501901), and the Beijing Nova Program of Science and Technology (Z191100001119090).
References
- Bansal et al. (2019) Bansal, T.; Juan, D.-C.; Ravi, S.; and McCallum, A. 2019. A2N: attending to neighbors for knowledge graph inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 4387–4392.
- Wang, Ren, and Leskovec (2020) Wang, H.; Ren, H.; and Leskovec, J. 2020. Entity Context and Relational Paths for Knowledge Graph Completion. arXiv preprint arXiv:2002.06757.
- Yang et al. (2014) Yang, B.; Yih, W.-t.; He, X.; Gao, J.; and Deng, L. 2014. Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575.
- Zhang et al. (2020) Zhang, Z.; Zhuang, F.; Zhu, H.; Shi, Z.-P.; Xiong, H.; and He, Q. 2020. Relational Graph Neural Network with Hierarchical Attention for Knowledge Graph Completion. In AAAI, 9612–9619.