Diversified and Adaptive Negative Sampling on Knowledge Graphs
Abstract
In knowledge graph embedding, aside from positive triplets (i.e., facts in the knowledge graph), the negative triplets used for training also have a direct influence on the model performance. In reality, since knowledge graphs are sparse and incomplete, negative triplets often lack explicit labels, and thus they are often obtained from various sampling strategies (e.g., randomly replacing an entity in a positive triplet). An ideal sampled negative triplet should be informative enough to help the model train better. However, existing methods often ignore diversity and adaptiveness in their sampling process, which harms the informativeness of negative triplets. As such, we propose a generative adversarial approach called Diversified and Adaptive Negative Sampling (DANS) on knowledge graphs. DANS is equipped with a two-way generator that generates more diverse negative triplets through two pathways, and an adaptive mechanism that produces more fine-grained examples by localizing the global generator for different entities and relations. On the one hand, the two-way generator increases the overall informativeness with more diverse negative examples; on the other hand, the adaptive mechanism increases the individual sample-wise informativeness with more fine-grained sampling. Finally, we evaluate the performance of DANS on three benchmark knowledge graphs to demonstrate its effectiveness through quantitative and qualitative experiments.
keywords:
Knowledge graphs, Graph representation learning, Graph neural networks, Negative sampling
Affiliations: (1) Singapore Management University, 81 Victoria St, Singapore 188065, Singapore; (2) Agency for Science, Technology and Research, 1 Fusionopolis Way, Singapore 138632, Singapore; (3) Beijing Normal University, 19 Xinwai Ave, Beitaipingzhuang, Haidian District, Beijing 100875, China
We design a two-way generator to produce diverse negative triplets, to increase the overall informativeness.
We employ a FiLM layer to adapt the global generator model into local models, to increase the individual informativeness of the negative triplets.
We conduct extensive experiments on three benchmark datasets. The results demonstrate the superiority of our proposed approach.
1 Introduction
Knowledge graphs have been widely used to encode facts about the real world. Typically, each fact describes a relationship between a head and a tail entity in the form of a triplet (head, relation, tail), and different entities across facts are interconnected to form a graph structure. The rich facts contained in a large-scale knowledge graph can be used to enhance numerous applications that rely on real-world knowledge, such as question answering [41, 16, 33], object detection [9, 15, 19] and recommendation [4, 12, 8, 42]. To effectively exploit the facts for these applications, a common approach is to first perform knowledge graph embedding, which converts the symbolic entities and relations to a latent vector space. The learned embedding aims to capture relevant structural and semantic information in the knowledge graph, which can then be integrated with other machine learning models.
In this paper, we focus on the problem of knowledge graph embedding. The high-level idea is that the embedding vectors of entities and relations co-occurring in the same fact should be bound by certain constraints due to their relatedness. For instance, consider a fact $(h, r, t)$ and a classic method TransE [2]. TransE maps the head entity, relation and tail entity in the fact to vectors $\mathbf{h}$, $\mathbf{r}$, $\mathbf{t}$, respectively, so that they approximately satisfy the constraint $\mathbf{h} + \mathbf{r} \approx \mathbf{t}$ by minimizing the loss $\|\mathbf{h} + \mathbf{r} - \mathbf{t}\|$. On the contrary, a non-fact $(h, r, t')$ would maximize the corresponding loss. Given this contrast, the factual triplets are known as positive triplets (or examples), whereas the non-factual triplets are called negative triplets. Although positive triplets are readily available, negative triplets are often obtained through random sampling. More recent works [43, 3, 1, 48, 30, 50] explore advanced constraints or losses [2, 36, 44] on the triplets, but the sampling strategy for negative triplets remains a crucial yet less explored problem.
Earlier negative sampling approaches resort to random sampling, e.g., by replacing the tail (or head) entity in a positive triplet with a random entity from the knowledge graph, sampled in a uniform [42] or popularity-weighted manner [20]. Although random sampling is straightforward, it is often inadequate for optimizing the informativeness of negative triplets. Informativeness refers to how much information each negative triplet contributes to model learning. Intuitively, a more informative negative triplet improves the efficiency of model training and accelerates model convergence. For instance, given the positive triplet from earlier, a negative triplet whose replacement tail is plausibly related to the head and relation is more informative than one whose tail is obviously unrelated, as the latter can be easily identified as negative and thus helps little in refining the decision boundary. Although various scoring functions [2, 36, 44, 32, 31] help to judge the informativeness of negative triplets, they do not consider the diversity and adaptiveness of the sampling process, which are the two aspects we propose to study in this work.
On one hand, diversity helps to increase the overall informativeness of all the negative triplets collectively. We observe that negative triplets can be associated with both entities and relations. For example, the tail entity of a positive triplet $(h, r, t)$ can be replaced by entities associated not only with the head entity $h$, but also with the relation $r$ (e.g., for a relation linking a country to its capital city, candidate tails include other countries with some capital city as well as capital cities of other countries). On the other hand, adaptive sampling of negative triplets would make entity- or relation-specific adjustments to sample selection, which increases the individual informativeness of each triplet in a finer-grained manner. For instance, selecting replacement tail entities for different head entities or relations using one global sampling model could be suboptimal given the variability among these entities. Instead, local models that condition on each entity would be able to adapt to such differences and make each triplet more informative.
In view of the above, we propose a Diversified and Adaptive Negative Sampling (DANS) approach for knowledge graph embedding, to improve both the overall and individual informativeness of negative triplets. Similar to previous state-of-the-art approaches such as KBGAN [3], we adopt a generative adversarial network (GAN) [38] for the generation of negative samples. However, there are two significant differences from previous GAN-based negative sampling on the knowledge graph. First, we design a two-way generator to produce diversified samples that are associated with both entities and relations w.r.t. a positive triplet, which aims to increase the overall informativeness of the samples. More specifically, the generator consists of two pathways to produce two different kinds of negative triplets associated with a given entity and entity-relation, respectively. Second, we design an adaptive mechanism to modulate the global generator model into local models to handle the differences across entities and relations, which aims to increase the individual informativeness of the samples in a finer-grained manner. In particular, we employ a Feature-wise Linear Modulation (FiLM) layer [26] that conditions the generator on a given entity or entity-relation input. In summary, we make the following contributions:
• We design a two-way generator to produce diverse negative triplets, to increase the overall informativeness.
• We employ a FiLM layer to adapt the global generator model into local models, to increase the individual informativeness of the negative triplets.
• We conduct extensive experiments on three benchmark datasets. The results demonstrate the superiority of our proposed approach.
2 Background
Negative sampling is an important issue in various machine learning tasks such as recommendation systems [28] and natural language processing [20]. In the context of knowledge graph embedding, negative triplets are often constructed by replacing the tail or head entity in a positive triplet with a randomly sampled entity [2, 40, 17]. Unfortunately, in uniform [2] or popularity-weighted sampling [20], the sampled entity could be completely unrelated to the head or the relation, and therefore be less informative.
To sample more informative negative triplets, researchers have leveraged different heuristics or learning strategies. Several structure-aware models [1, 46, 18] exploit the graph structure, generally selecting negative examples in the neighborhood of positive examples. For example, SANS [1] hypothesizes that entities that are in close proximity to each other, but do not share a direct relationship, are better candidates for negative sampling. In a similar spirit, PinSage [18] generates localized graphs via random walks to extract informative negative samples. However, these approaches have a high risk of selecting false negatives, as entities in close proximity that are not explicitly related could still form positive triplets due to the incompleteness of the observed graph.
Other approaches seek to quantify the informativeness of the negative triplets through various learning strategies, including GANs [3, 38, 11, 47], reinforcement learning [39, 49], and importance sampling [48]. These methods provide a more explicit and systematic scoring of negative triplets, which often leads to better performance. However, these approaches do not consider the diversity and adaptiveness of negative sampling, which are crucial to the overall and individual informativeness of the negative triplets, respectively.
Besides, recent studies [45, 27] show that the optimal negative sampling distribution should be positively but sub-linearly correlated to the positive sampling distribution. Although our proposed model shares a similar view by learning the underlying distribution of positive samples to produce negative samples, we take one step further to consider the diversity and adaptiveness of the negative samples in an adversarial setting. In particular, toward adaptiveness, we borrow the idea from Feature-wise Linear Modulation (FiLM) [26], which was first introduced in the area of visual question answering. Its mechanism includes a learnable feature-wise affine transformation on the hidden neurons of a neural network, conditioned on an arbitrary input. In our context, we employ a FiLM layer to adapt the global generators into local models conditioned on individual input (entity or relation).
3 Methodology
In this section, we introduce the problem formulation and some preliminaries on knowledge graph embedding, followed by our proposed approach DANS.
(Figure 1: Overall framework of DANS: (a) base embedding model, (b) adaptive two-way generator, (c) two-way discriminator, (d) model training with negative sampling.)
Before we delve into the details, we first sketch the overall framework in Figure 1. The model consists of four main parts: (a) a base embedding model which learns the embeddings for entities and relations; (b) the two-way adaptive generator which generates “fake” entity samples to construct negative examples; (c) the two-way discriminator which utilizes both adversarial and auxiliary losses to improve the quality of the produced samples; (d) model training with negative sampling, where we replace one entity in a positive triplet with a generated fake entity to form negative triplets, which are used to train the base model together with the original positive triplets.
3.1 Problem formulation and preliminaries
A knowledge graph (KG) is defined by an entity (node) set $\mathcal{E}$, a relation set $\mathcal{R}$ and a ground-truth or positive triplet (edge) set $\mathcal{T} \subseteq \mathcal{E} \times \mathcal{R} \times \mathcal{E}$. Given a triplet $(h, r, t)$ for some $h, t \in \mathcal{E}$ and $r \in \mathcal{R}$, a typical KG model aims to learn a scoring function $f(h, r, t)$ to estimate the probability that $(h, r, t)$ is a positive triplet, i.e., a fact that should appear in the ground-truth set $\mathcal{T}$.
Given the power of graph convolutional networks, in this paper we adopt a multi-layer relational graph convolutional network (RGCN) [29] as our base embedding model in Figure 1(a). The base model encodes each entity $v$ in layer $l$ into a vector $\mathbf{h}_v^{(l)}$ in a latent embedding space, by aggregating the embeddings of its neighbors from the previous layer $l-1$, as follows.
$$\mathbf{h}_v^{(l)} = \mathrm{ReLU}\Big(\sum_{r \in \mathcal{R}} \sum_{u \in \mathcal{N}_v^r} \tfrac{1}{|\mathcal{N}_v^r|} \mathbf{W}_r^{(l)} \mathbf{h}_u^{(l-1)} + \mathbf{W}_0^{(l)} \mathbf{h}_v^{(l-1)}\Big) \quad (1)$$
where $\mathcal{N}_v^r$ is the set of neighbors of entity $v$ under relation $r$, $\mathbf{W}_r^{(l)}$ is a trainable weight matrix for relation $r$, $\mathbf{W}_0^{(l)}$ is an additional trainable weight matrix to capture the self-information of each entity in layer $l$, and ReLU is the activation function. Assuming a total of $L$ layers are stacked, the embeddings in the last layer are the output embeddings, which we simply write as $\mathbf{h}_v$.
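To make the propagation rule in Eq. (1) concrete, the following is a minimal sketch of an RGCN-style layer in PyTorch (not the authors' implementation); the per-relation normalization is simplified to an overall in-degree, and all tensor names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SimpleRGCNLayer(nn.Module):
    """Sketch of one RGCN layer as in Eq. (1): per-relation neighbor
    aggregation plus a self-loop transform, followed by ReLU."""
    def __init__(self, in_dim: int, out_dim: int, num_relations: int):
        super().__init__()
        self.rel_weight = nn.Parameter(torch.empty(num_relations, in_dim, out_dim))
        self.self_weight = nn.Linear(in_dim, out_dim, bias=False)
        nn.init.xavier_uniform_(self.rel_weight)

    def forward(self, h: torch.Tensor, src: torch.Tensor,
                rel: torch.Tensor, dst: torch.Tensor) -> torch.Tensor:
        # h: (num_entities, in_dim); src/rel/dst: (num_edges,) index tensors
        out = self.self_weight(h)                             # W_0 h_v (self-information)
        msg = torch.bmm(h[src].unsqueeze(1),                  # W_r h_u, one matrix per edge
                        self.rel_weight[rel]).squeeze(1)
        agg = torch.zeros_like(out).index_add_(0, dst, msg)   # sum messages per target entity
        deg = torch.zeros(h.size(0), 1, device=h.device).index_add_(
            0, dst, torch.ones(src.size(0), 1, device=h.device)).clamp(min=1)
        return torch.relu(out + agg / deg)                    # simplified degree normalization
```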
To optimize the parameters, a set of training triplets that consists of both positive and negative triplets is used. As shown in Figure 1(d), our objective is to sample a set of high-quality negative triplets, which, together with positive triplets, will be used to minimize the following cross-entropy loss:
$$\mathcal{L}_{\mathrm{base}} = -\sum_{(h, r, t) \in \mathcal{T} \cup \mathcal{T}^-} \Big[ y_{hrt} \log f(h, r, t) + (1 - y_{hrt}) \log\big(1 - f(h, r, t)\big) \Big] \quad (2)$$
where $y_{hrt} = 1$ if $(h, r, t) \in \mathcal{T}$, and $y_{hrt} = 0$ otherwise, with $\mathcal{T}^-$ denoting the set of sampled negative triplets. We implement $f$ using three popular decoders, namely, DistMult [44], ComplEx [36] and RotatE [34]. We provide the DistMult function below, and leave the details of ComplEx and RotatE to Appendix A.
$$f(h, r, t) = \sigma\big(\mathbf{h}_h^\top \,\mathrm{diag}(\mathbf{w}_r)\, \mathbf{h}_t\big) \quad (3)$$
where $\sigma$ is the sigmoid activation, $\mathbf{h}_h$ and $\mathbf{h}_t$ are the head and tail entity embeddings from the RGCN, and $\mathrm{diag}(\mathbf{w}_r)$ is a diagonal matrix whose diagonal is $\mathbf{w}_r$, an $r$-specific trainable vector of the decoder. Therefore, the full set of training parameters of the base model is $\Theta_{\mathrm{base}} = \{\mathbf{W}_r^{(l)}, \mathbf{W}_0^{(l)}, \mathbf{w}_r : r \in \mathcal{R}, 1 \le l \le L\}$.
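As an illustration of the decoder and training objective, here is a small sketch of the DistMult score in Eq. (3) and the cross-entropy loss in Eq. (2) in PyTorch; the batched tensors below are placeholders rather than outputs of the actual model.

```python
import torch
import torch.nn.functional as F

def distmult_score(h_emb: torch.Tensor, r_vec: torch.Tensor,
                   t_emb: torch.Tensor) -> torch.Tensor:
    """Eq. (3): sigma(h^T diag(w_r) t), computed per triplet in a batch."""
    return torch.sigmoid((h_emb * r_vec * t_emb).sum(dim=-1))

def triplet_bce_loss(scores: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Eq. (2): binary cross-entropy over positive (label 1) and negative (label 0) triplets."""
    return F.binary_cross_entropy(scores, labels)

# usage sketch: in practice the embeddings would come from the RGCN encoder
h = torch.randn(4, 100); r = torch.randn(4, 100); t = torch.randn(4, 100)
labels = torch.tensor([1., 0., 1., 0.])
loss = triplet_bce_loss(distmult_score(h, r, t), labels)
```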
3.2 Adaptive two-way generator
A common way to obtain a negative triplet is to replace the tail (or head) entity in a positive triplet by a randomly sampled entity. Beyond simple random sampling, generative adversarial nets (GAN) [10] such as KBGAN [3], IGAN [38], HeGAN [11] and GNDN [47], which learn the underlying sample distributions, have been shown to be effective in negative sampling on KG or other graph structures.
Formally, given a positive triplet $(h, r, t)$, a generator $G$ aims to produce a “fake” tail entity $\tilde{t}$ to replace the real tail $t$, resulting in a negative triplet $(h, r, \tilde{t})$. More precisely, $G$ is a function that maps a noise vector $\mathbf{z}$ (typically sampled from a prior distribution) to a vector in the entity embedding space. Although we follow a similar process, distinct from existing GAN-based approaches, we propose an adaptive two-way generator, as shown in Figure 1(b). It not only diversifies the generation of fake entities, but also localizes the global generator model to adapt to fine-grained differences across entities.
Diversity. Classical GANs generate fake samples through a single pathway and assume a fixed prior distribution, which limits the diversity of fake entity generation. In the context of KGs, we can generate a fake tail entity associated with either the head entity only, or the relation as well. This improves the diversity of the resulting negative triplets and increases their overall informativeness. Hence, we propose a two-way generator that consists of two pathways, namely $G_1$ and $G_2$, to generate negative triplets associated with a given entity and entity-relation pair, respectively. Furthermore, having personalized priors for each entity or relation further enhances the diversification. Specifically, to replace the tail entity $t$ in a positive triplet $(h, r, t)$ (the same process also applies to replacing the head entity $h$), we generate fake tail entity embeddings $\tilde{\mathbf{t}}_1$ and $\tilde{\mathbf{t}}_2$ from the two pathways, as follows.
$$\tilde{\mathbf{t}}_1 = G_1(\mathbf{z}_h; \Theta_{G_1}), \qquad \mathbf{z}_h \sim \mathcal{N}(\mathbf{h}_h, \sigma^2 \mathbf{I}) \quad (4)$$
$$\tilde{\mathbf{t}}_2 = G_2(\mathbf{z}_{h,r}; \Theta_{G_2}), \qquad \mathbf{z}_{h,r} \sim \mathcal{N}(\mathbf{h}_h \odot \mathbf{w}_r, \sigma^2 \mathbf{I}) \quad (5)$$
where each pathway has its own parameters, i.e., $G_1$ is parameterized by $\Theta_{G_1}$ and $G_2$ by $\Theta_{G_2}$. The noise vector that feeds into each pathway is sampled from a personalized multivariate Gaussian distribution for each entity or entity-relation pair, $\mathcal{N}(\mathbf{h}_h, \sigma^2 \mathbf{I})$ or $\mathcal{N}(\mathbf{h}_h \odot \mathbf{w}_r, \sigma^2 \mathbf{I})$, depending on the pathway. Here $\mathcal{N}(\cdot, \cdot)$ denotes the prior Gaussian distribution for sampling the input to the generator, $\sigma$ is a hyper-parameter controlling the covariance of the multivariate Gaussian, $\mathbf{I}$ is the identity matrix, and $\odot$ stands for element-wise multiplication. Intuitively, as the prior Gaussian distributions in Eqs. (4) and (5) are centered on different embeddings, they help to diversify the samples generated by the different pathways.
Each pathway is implemented as a multi-layer perceptron (MLP). Taking $G_1$ as an example, its MLP is parameterized by $\Theta_{G_1}$, which consists of the weights and biases in each layer. Let $\mathbf{a}^{(i)}$ denote the activations of the $i$-th MLP layer, where the activations of the last MLP layer are simply the output embedding of $G_1$. The architecture of $G_2$ mirrors that of $G_1$.
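The following sketch illustrates one possible implementation of the two pathways with personalized Gaussian priors (Eqs. 4-5); the hidden size, the noise scale, and the exact way of combining the head and relation embeddings for the second pathway are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GeneratorPathway(nn.Module):
    """Sketch of one generator pathway: an MLP mapping a noise vector
    to a fake entity embedding (Eqs. 4-5)."""
    def __init__(self, dim: int, hidden: int = 200):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

def personalized_noise(center: torch.Tensor, sigma: float) -> torch.Tensor:
    """Sample z ~ N(center, sigma^2 I): a Gaussian centered on the entity
    (pathway G1) or entity-relation (pathway G2) embedding."""
    return center + sigma * torch.randn_like(center)

# usage sketch: fake tails conditioned on the head, or on the head-relation pair
dim, sigma = 100, 0.1
g1, g2 = GeneratorPathway(dim), GeneratorPathway(dim)
h_head, w_rel = torch.randn(dim), torch.randn(dim)
fake_t1 = g1(personalized_noise(h_head, sigma))            # entity pathway
fake_t2 = g2(personalized_noise(h_head * w_rel, sigma))    # entity-relation pathway
```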
Adaptiveness. While more diverse samples help increase the overall informativeness, it is also important to improve the informativeness of individual samples. On the one hand, a global generator model shared by all input entities or relations is unable to fully adapt to the fine-grained differences across entities or relations. On the other hand, training one model for each entity or relation can cause severe overfitting and incur large overheads. To address this dilemma, we still train a global generator model, but allow the global model to be modulated through a Feature-wise Linear Modulation (FiLM) layer conditioned on each input entity or relation, which essentially adapts the shared global model into local models. Thus, in addition to the global model parameters, the adaptive mechanism only needs to learn the parameters of the FiLM layer, instead of one set of model parameters for each entity or relation.
Consider the pathway $G_1$ that generates a fake tail entity for a head entity $h$. We adapt the global model to suit the head entity $h$ by modulating the activations in each hidden layer of $G_1$:
$$\tilde{\mathbf{a}}^{(i)} = \boldsymbol{\gamma}_h^{(i)} \odot \mathbf{a}^{(i)} + \boldsymbol{\beta}_h^{(i)} \quad (6)$$
where $\boldsymbol{\gamma}_h^{(i)}$ and $\boldsymbol{\beta}_h^{(i)}$ are vectors conditioned on the head entity $h$ and have the same dimension as the $i$-th layer of $G_1$. They are used to scale and shift the activations of the $i$-th layer of $G_1$. That is, the global $G_1$ is adapted into a local model conditioned on $h$. More specifically, $\boldsymbol{\gamma}_h^{(i)}$ and $\boldsymbol{\beta}_h^{(i)}$ are the output of the FiLM layer $F_1$ applied to the $i$-th layer of $G_1$, as follows.
$$\boldsymbol{\gamma}_h^{(i)} = F_{1,\gamma}^{(i)}(\mathbf{h}_h) \quad (7)$$
$$\boldsymbol{\beta}_h^{(i)} = F_{1,\beta}^{(i)}(\mathbf{h}_h) \quad (8)$$
Note that the head entity embedding $\mathbf{h}_h$ is the input to $F_1$, making the output adaptive to and conditioned on $h$. $F_1$ can be implemented as an MLP, parameterized by $\Theta_{F_1,\gamma}^{(i)}$ and $\Theta_{F_1,\beta}^{(i)}$ for the $i$-th layer of $G_1$. Similarly, the second pathway $G_2$ can be modulated by a FiLM layer $F_2$, whose input is $\mathbf{h}_h \odot \mathbf{w}_r$, to generate a fake tail entity for a head entity $h$ and relation $r$. $F_2$ is parameterized by $\Theta_{F_2,\gamma}^{(i)}$ and $\Theta_{F_2,\beta}^{(i)}$ for the $i$-th layer of $G_2$, and outputs the corresponding $\boldsymbol{\gamma}^{(i)}$ and $\boldsymbol{\beta}^{(i)}$ to scale and shift the activations in $G_2$.
To sum up, the trainable parameters of the adaptive two-way generator, $\Theta_{\mathrm{gen}}$, include the weights of the two global pathways and the FiLM layer weights for each layer in each pathway. Assuming a total of $K$ hidden layers in the global pathways, we have $\Theta_{\mathrm{gen}} = \Theta_{G_1} \cup \Theta_{G_2} \cup \{\Theta_{F_1,\gamma}^{(i)}, \Theta_{F_1,\beta}^{(i)}, \Theta_{F_2,\gamma}^{(i)}, \Theta_{F_2,\beta}^{(i)}\}_{i=1}^{K}$.
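A minimal sketch of how a FiLM layer (Eqs. 6-8) could modulate one generator pathway is given below; the single hidden layer, the linear FiLM mapping, and the squared-norm regularizer on the scaling and shifting factors are assumptions of this sketch rather than the exact design in the paper.

```python
import torch
import torch.nn as nn

class FiLMGenerator(nn.Module):
    """Sketch of a generator pathway whose hidden activations are scaled and
    shifted by FiLM parameters conditioned on the input embedding (Eqs. 6-8)."""
    def __init__(self, dim: int, hidden: int = 200):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.fc2 = nn.Linear(hidden, dim)
        self.film = nn.Linear(dim, 2 * hidden)   # outputs [gamma, beta]

    def forward(self, z: torch.Tensor, cond: torch.Tensor):
        gamma, beta = self.film(cond).chunk(2, dim=-1)   # Eqs. (7)-(8)
        a = torch.relu(self.fc1(z))
        a = gamma * a + beta                             # Eq. (6): local adaptation
        fake = self.fc2(a)
        # one possible regularizer on gamma/beta for the last term of Eq. (13)
        reg = gamma.pow(2).mean() + beta.pow(2).mean()
        return fake, reg
```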
3.3 Two-way discriminator
As in a standard GAN architecture, a discriminator is needed to help the generator produce high-quality fake entities that mimic real entities. Specifically, the discriminator and the generator compete with each other in a minimax game, in which the generator aims to fool the discriminator by producing realistic looking entities, while the discriminator aims to beat the generator by distinguishing the real and fake entities. In our case, given the two-way generator, we further equip the discriminator with the ability to distinguish the fake entities generated by the two pathways, which can further differentiate and diversify the two pathways.
Concretely, as shown in Figure 1(c), the discriminator also has two pathways: $D_1$, an adversarial pathway to distinguish fake and real entities, and $D_2$, an auxiliary pathway to distinguish the fake entities generated by $G_1$ and $G_2$. Taking the generation of tail entities as an example, given the real tail entity $\mathbf{h}_t$ in a positive triplet, as well as the fake entities $\tilde{\mathbf{t}}_1$ generated by $G_1$ and $\tilde{\mathbf{t}}_2$ generated by $G_2$, $D_1$ tries to distinguish $\mathbf{h}_t$ from $\tilde{\mathbf{t}}_1$ and $\tilde{\mathbf{t}}_2$, while $D_2$ tries to distinguish $\tilde{\mathbf{t}}_1$ from $\tilde{\mathbf{t}}_2$. In other words, each of them involves a binary classification:
$$\hat{y}_1 = \sigma\big(D_1(\mathbf{e}_x)\big) \quad (9)$$
$$\hat{y}_2 = \sigma\big(D_2(\mathbf{e}_x)\big) \quad (10)$$
where $D_1$ and $D_2$ are each implemented as a fully connected layer, and $\mathbf{e}_x$ is a shared hidden representation computed from the embedding of a real or fake entity $x$. The shared hidden representation allows both $D_1$ and $D_2$ to benefit from each other during training, as in Odena [23], since they collectively try to distinguish three different classes of samples (real entities, fake entities from $G_1$, and fake entities from $G_2$).
Note that $\hat{y}_1$ (or $\hat{y}_2$) is the predicted value of the ground-truth label $y_1$ (or $y_2$), such that $y_1 = 1$ if $x$ is a real entity, and $y_1 = 0$ otherwise. Furthermore, for a fake entity $x$, we define $y_2 = 1$ if $x$ is generated via Eq. (4), or $y_2 = 0$ if generated via Eq. (5). Subsequently, we employ a cross-entropy loss on each of the two discriminator pathways:
$$\mathcal{L}_{D_1} = -\sum_{x} \Big[ y_1 \log \hat{y}_1 + (1 - y_1) \log(1 - \hat{y}_1) \Big] \quad (11)$$
$$\mathcal{L}_{D_2} = -\sum_{x} \Big[ y_2 \log \hat{y}_2 + (1 - y_2) \log(1 - \hat{y}_2) \Big] \quad (12)$$
In summary, the set of trainable parameters of the two-way discriminator, $\Theta_{\mathrm{dis}}$, includes the shared parameters and the weights of each classifier, i.e., $\Theta_{\mathrm{dis}} = \Theta_{\mathrm{shared}} \cup \Theta_{D_1} \cup \Theta_{D_2}$.
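The two-way discriminator can be sketched as a shared encoder feeding two binary heads, as below; the hidden size and exact layer shapes are assumptions of the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoWayDiscriminator(nn.Module):
    """Sketch of the two-way discriminator: a shared encoder feeding
    D1 (real vs. fake, Eq. 9) and D2 (which pathway produced the fake, Eq. 10)."""
    def __init__(self, dim: int, hidden: int = 200):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.d1 = nn.Linear(hidden, 1)   # adversarial head
        self.d2 = nn.Linear(hidden, 1)   # auxiliary head

    def forward(self, x: torch.Tensor):
        e = self.shared(x)                                   # shared hidden representation e_x
        return torch.sigmoid(self.d1(e)), torch.sigmoid(self.d2(e))

def head_bce(pred: torch.Tensor, label: torch.Tensor) -> torch.Tensor:
    """Eqs. (11)-(12): binary cross-entropy on a discriminator head."""
    return F.binary_cross_entropy(pred, label)
```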
3.4 Adversarial training
Lastly, we train the generator, discriminator, and base embedding model jointly. On the one hand, the generator aims to fool the adversarial pathway $D_1$ of the discriminator, making it harder to distinguish real and fake entities, as below.
$$\mathcal{L}_{\mathrm{gen}} = -\sum \Big[ \log \hat{y}_1(\tilde{\mathbf{t}}_1) + \log \hat{y}_1(\tilde{\mathbf{t}}_2) \Big] + \lambda \sum_{i} \Big( \|\boldsymbol{\gamma}^{(i)}\|_2^2 + \|\boldsymbol{\beta}^{(i)}\|_2^2 \Big) \quad (13)$$
where $\tilde{\mathbf{t}}_1$ and $\tilde{\mathbf{t}}_2$ are fake tail entities from $G_1$ and $G_2$, respectively, generated to replace the real tail entity $t$ (again, we only illustrate the case where the tail entity in a positive triplet is replaced). The last term in Eq. (13) is a regularization term on the scaling and shifting factors to prevent overfitting, as in Oreshkin et al. [24], and $\lambda$ is a hyper-parameter to control the strength of regularization. On the other hand, the goal of the discriminator is to overcome the generators by distinguishing fake and real entities, as well as fake entities from different generator pathways, as follows.
$$\mathcal{L}_{\mathrm{dis}} = \mathcal{L}_{D_1} + \mathcal{L}_{D_2} \quad (14)$$
where $x$ can be either a real or fake entity in the first term, but only a fake entity in the second term.
Following the typical adversarial training scheme for negative sampling on knowledge graphs in KBGAN [3], we alternate the updates among the three parties, as follows. First, we train the generator by updating the generator parameters $\Theta_{\mathrm{gen}}$ with Eq. (13), while freezing the discriminator parameters $\Theta_{\mathrm{dis}}$ and the base model parameters $\Theta_{\mathrm{base}}$. Next, we update $\Theta_{\mathrm{dis}}$ with Eq. (14), while freezing $\Theta_{\mathrm{gen}}$ and $\Theta_{\mathrm{base}}$. Finally, we update $\Theta_{\mathrm{base}}$ by minimizing the loss on the positive and negative triplets in Eq. (2), while freezing the other two parameter sets. We repeat the three steps until all parties converge.
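The alternating scheme can be summarized by the following sketch of one training round; the loss callables, optimizers and epoch counts are supplied by the caller (5 generator epochs and 1 discriminator epoch, as in the parameter settings), and freezing the other parties is assumed to be handled by stepping only the corresponding optimizer.

```python
import torch

def train_round(gen_opt, dis_opt, base_opt,
                gen_loss_fn, dis_loss_fn, base_loss_fn,
                gen_epochs: int = 5, dis_epochs: int = 1) -> None:
    """One alternating round of DANS training (Section 3.4), as a sketch.
    Each *_loss_fn is assumed to recompute its loss on the current mini-batch."""
    for _ in range(gen_epochs):          # 1) update the generator with Eq. (13)
        gen_opt.zero_grad()
        gen_loss_fn().backward()
        gen_opt.step()
    for _ in range(dis_epochs):          # 2) update the discriminator with Eq. (14)
        dis_opt.zero_grad()
        dis_loss_fn().backward()
        dis_opt.step()
    base_opt.zero_grad()                 # 3) update the base model with Eq. (2)
    base_loss_fn().backward()
    base_opt.step()
```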
4 Experiments
We perform an empirical evaluation on three benchmark knowledge graphs. We first compare the performance of the proposed model DANS (code included in the Supplementary Materials for review) with state-of-the-art baselines. In addition, we seek to address a number of research questions (RQ) through more in-depth empirical analysis. RQ1: Does the two-way design in the generator improve model performance? RQ2: Does the adaptive FiLM layer in the generator improve model performance? RQ3: What is the impact of the number of negative triplets and the adaptive regularization, respectively? RQ4: Can we observe the diversity and adaptiveness of the generated triplets?
4.1 Experimental Design
Table 1: Summary of the datasets.

| Dataset | Entities | Relations | Train | Val | Test | Total |
|---|---|---|---|---|---|---|
| WN18RR | 40,943 | 11 | 86,835 | 3,034 | 3,134 | 93,003 |
| NELL-995 | 75,492 | 200 | 149,678 | 543 | 3,992 | 154,213 |
| UMLS | 135 | 46 | 5,216 | 652 | 661 | 6,529 |
Datasets. Three benchmark knowledge graphs are used in our experiments. (1) WN18RR [7] is a harder variant of WN18 [2], derived from WordNet and consisting of hyponym and hypernym relations between words; compared to WN18, WN18RR removes inverse relations to minimize leakage from training. (2) NELL-995 [43] is a subset of the web-based facts collected by the 995th iteration of the NELL system [5]; it contains a large pool of entity types, and only the top 200 relations are retained. (3) UMLS [7] is a specialized knowledge base containing medical entities and their semantic relationships; the entities are biomedical concepts (e.g., disease, antibiotic), and the relations capture interactions between them. Table 1 gives a summary of the datasets used.
Task and evaluation. We employ the standard knowledge graph completion task [21, 4, 13, 6]. Specifically, for each positive test triplet, we construct a list of candidate triplets that also includes negative triplets, obtained by replacing either the head or tail of the positive triplet with every other entity in the dataset. To avoid false negatives, we follow the “filtered setting” of Bordes et al. [2]. We then rank the candidate triplets based on the scoring function. For evaluation, we adopt several standard ranking metrics, including Mean Reciprocal Rank (MRR), Hit ratio at 1 (H@1) and Normalized discounted cumulative gain at 5 (NDCG@5) [35]. Details of these ranking metrics can be found in Appendix B.
Baselines. We compare with baselines in two distinct categories:
(1) Negative samplers with the same RGCN backbone [29] and decoders. In other words, they are flexible “plug-ins” that only replace the sampling strategy for a fair comparison to our method DANS. They include Rand, which replaces the head or tail entity with a uniformly sampled random entity; Pop [20]: a variant of Rand that substitutes uniform sampling with popularity-weighted sampling; Self-adv [35]: a self-adversarial negative sampling methodology; MCNS [45]: a model which derives negative samples from a distribution that is positively but sub-linearly correlated with the positive distribution.
(2) Other state-of-the-art baselines for knowledge graph embedding, which may employ a variety of backbones, heuristics and techniques that diverge from DANS, for a comprehensive comparison. They include SANS-RW [1]: a structure-aware model that selects negative samples in close proximity to positive nodes via random walks on the graph; NSCaching [48]: a model that employs importance sampling to sample more informative negative triplets; KBGAN [3]: a GAN-based model that learns to generate informative negative triplets; CAKE [22]: a framework which leverages extra information, such as entity types from factual triplets, to sample negative triplets; SMiLE [25]: a framework which employs contextual information influenced by entity types to sample negative triplets.
Parameter settings. Our model DANS and the other negative samplers (Rand, Pop, Self-adv and MCNS) employ RGCN [29] as the backbone, following JinheonBaek's PyTorch implementation. RGCN is first pre-trained for 15,000 epochs, and our base embedding model is then initialized using the pre-trained weights. We train the model for 5,000 epochs, using a learning rate of 0.001 and a mini-batch size of 1,000 for UMLS, WN18RR and NELL-995. In each mini-batch, the generator and discriminator epochs are set to 5 and 1, respectively, and their learning rates are set to 1e-3 and 1e-4, respectively. The regularization coefficient for the FiLM layer in Eq. (13) is set to 1e-4 for all three datasets, as it is the best among the candidate set {1e-2, 1e-3, 1e-4, 1e-5, 1e-6}.
Furthermore, we generate 20 negative triplets for each positive triplet, out of which the first ten are equally split between the two generator pathways, while the remaining ten are obtained via uniform random sampling to further increase diversity. In all cases, either the head or tail of a positive triplet is randomly replaced with a negative entity, but not both. We set the output embedding dimension to 100 for all methods, except SANS-RW, where it is set to the recommended 1,000 to achieve optimal performance. RGCN, RGCN-P, RGCN-Adv and RGCN-MCNS follow the same implementation and settings as the backbone of DANS.
In addition, the hyper-parameters related to negative sampling via Metropolis-Hastings in RGCN-MCNS are taken from the link prediction experiments in the original paper (Yang et al. [45]). To reduce the variance resulting from parameter initialization, the experimental results are averaged over five runs with different seeds for all methods. Furthermore, every method is standardized to use the triplet loss in Eq. (2). Other baseline settings have also been tuned according to the recommendations in the literature. Additional details can be found in Appendix C.
4.2 Results and Analysis
Table 2: Comparison with negative samplers under the same RGCN backbone, with different decoders (mean ± std over five runs).

| Sampling method | WN18RR MRR | WN18RR H@1 | WN18RR NDCG@5 | NELL-995 MRR | NELL-995 H@1 | NELL-995 NDCG@5 | UMLS MRR | UMLS H@1 | UMLS NDCG@5 |
|---|---|---|---|---|---|---|---|---|---|
| DistMult | | | | | | | | | |
| Rand | .372±.002 | .343±.003 | .369±.005 | .218±.001 | .146±.002 | .219±.002 | .696±.010 | .607±.082 | .693±.007 |
| Pop | .374±.002 | .342±.002 | .376±.006 | .216±.001 | .142±.002 | .216±.003 | .680±.009 | .589±.012 | .692±.005 |
| Self-adv | .370±.007 | .332±.010 | .373±.006 | .238±.003 | .156±.003 | .241±.003 | .717±.009 | .624±.015 | .733±.008 |
| MCNS | .376±.004 | .340±.005 | .374±.006 | .226±.002 | .144±.002 | .221±.003 | .700±.002 | .606±.008 | .717±.002 |
| DANS | .381±.006 | .352±.007 | .386±.008 | .227±.004 | .162±.007 | .220±.009 | .724±.008 | .641±.009 | .725±.008 |
| RotatE | | | | | | | | | |
| Rand | .234±.009 | .110±.003 | .260±.008 | .182±.003 | .093±.003 | .189±.003 | .817±.015 | .683±.021 | .855±.013 |
| Pop | .235±.007 | .095±.003 | .268±.007 | .181±.002 | .131±.002 | .200±.003 | .800±.005 | .673±.010 | .839±.004 |
| Self-adv | .202±.007 | .058±.010 | .235±.006 | .186±.002 | .096±.003 | .194±.002 | .809±.007 | .677±.007 | .848±.007 |
| MCNS | .242±.009 | .132±.004 | .288±.006 | .194±.003 | .122±.004 | .200±.004 | .822±.005 | .682±.006 | .884±.006 |
| DANS | .249±.002 | .154±.001 | .274±.003 | .195±.010 | .135±.011 | .208±.010 | .833±.004 | .716±.006 | .866±.005 |
| ComplEx | | | | | | | | | |
| Rand | .386±.007 | .346±.005 | .390±.006 | .245±.004 | .172±.003 | .251±.006 | .898±.008 | .822±.017 | .920±.015 |
| Pop | .389±.011 | .341±.007 | .387±.012 | .241±.005 | .179±.006 | .245±.004 | .840±.009 | .747±.009 | .865±.008 |
| Self-adv | .375±.006 | .329±.011 | .382±.013 | .250±.005 | .181±.007 | .277±.008 | .908±.009 | .844±.006 | .925±.010 |
| MCNS | .392±.008 | .343±.007 | .394±.008 | .248±.007 | .177±.004 | .264±.009 | .879±.007 | .835±.005 | .892±.011 |
| DANS | .404±.005 | .347±.004 | .392±.009 | .257±.006 | .186±.010 | .255±.008 | .920±.007 | .857±.011 | .927±.008 |
Table 3: Comparison with other state-of-the-art baselines (mean ± std over five runs).

| Model | WN18RR MRR | WN18RR H@1 | WN18RR NDCG@5 | NELL-995 MRR | NELL-995 H@1 | NELL-995 NDCG@5 | UMLS MRR | UMLS H@1 | UMLS NDCG@5 |
|---|---|---|---|---|---|---|---|---|---|
| SANS-RW | .349±.010 | .340±.013 | .334±.010 | .135±.006 | .109±.008 | .110±.008 | .510±.008 | .369±.009 | .478±.003 |
| NSCaching | .374±.002 | .337±.003 | .374±.002 | .177±.004 | .150±.003 | .140±.002 | .625±.004 | .508±.021 | .607±.004 |
| KBGAN | .172±.004 | .070±.006 | .155±.002 | .170±.002 | .077±.004 | .195±.009 | .680±.005 | .556±.023 | .654±.004 |
| CAKE | .353±.007 | .345±.005 | .351±.008 | .204±.006 | .130±.007 | .175±.012 | .441±.013 | .365±.008 | .383±.010 |
| SMiLE | .315±.006 | .291±.007 | .294±.012 | .131±.004 | .127±.005 | .105±.008 | .414±.015 | .345±.007 | .372±.013 |
| DANS | .381±.006 | .352±.007 | .386±.008 | .227±.004 | .162±.007 | .220±.009 | .724±.008 | .641±.009 | .725±.008 |
Table 2 reports the quantitative comparison against the first category of baselines, involving different negative samplers under the same backbone and decoder. Overall, our model DANS consistently leads to better performance with the DistMult, RotatE and ComplEx decoders, showing the robustness of our approach across decoders. In general, DANS performs better than Rand and its variant Pop, showing that it is important to account for the informativeness of negative triplets, which is missing in random and popularity-weighted sampling. Since Self-adv accounts for informativeness by giving more weight to higher-quality triplets, it generally outperforms Rand and Pop; it still lags behind DANS in most cases as it ignores diversity and adaptiveness. MCNS shows better performance than Rand and Pop but loses to DANS, as MCNS was originally designed for homogeneous graphs.
Next, Table 3 compares DANS with the second category of baselines. Negative sampling in SANS-RW is not relation-aware and thus performs poorly on the datasets with a greater variety of relations, namely NELL-995 and UMLS. In addition, KBGAN falls short on the two bigger datasets, WN18RR and NELL-995, as it ignores graph structure in the sampling process; moreover, its adversarial training process potentially suffers from instability and degeneracy. On the other hand, NSCaching employs a more streamlined importance sampling approach, contributing to its competitive performance despite not considering graph structure for negative sampling. As CAKE and SMiLE rely on extra side information, such as entity types, to enhance their performance, their results deteriorate because such information is not available in the standard knowledge graph completion benchmarks used in this paper.
Overall, DANS obtains favourable performance, showing the importance of diversity and adaptiveness during negative sampling. We conduct a further ablation study in the next part to examine the contribution of each aspect. Finally, we include experimental results on the FB15k-237 dataset, which show favourable performance with the ComplEx decoder, in Appendix D.
4.3 Additional research questions
(Figure 2: (a) Ablation study; (b) impact of the number of negative triplets; (c) impact of the adaptive regularization.)
In this part, we seek to investigate RQ1–RQ4 listed at the beginning of this section. All experiments in this part are conducted using the DistMult function as the decoder.
Ablation study (RQ1, RQ2). We investigate the contribution of the major design choices through an ablation study. As depicted in Figure 2(a), we compare DANS with the following variants, none of which employ the FiLM layer: (1) only the pathway $G_1$ in the generator; (2) only the pathway $G_2$ in the generator; (3) both pathways $G_1$ and $G_2$.
From the results, among the single-pathway variants ($G_1$ or $G_2$ only), there is no consistent winner and the outcome depends on the dataset. However, it is clear that using both pathways in the generator outperforms using just a single pathway. This addresses RQ1 and shows that diversifying the negative triplets with the two-way generator improves model performance by increasing the overall informativeness of the negative triplets.
Furthermore, by comparing the variant with both pathways but without FiLM against the full model DANS (both pathways with FiLM), our model obtains a significant lead in performance. This addresses RQ2 and shows the effectiveness of our adaptive design using FiLM.
Parameter sensitivity (RQ3). To answer RQ3, we perform a parameter sensitivity analysis. We first analyze how the number of negative triplets per positive triplet impacts model performance. As shown in Figure 2(b), as we increase the number of negative triplets on each dataset, we consistently observe that the MRR performance first improves and then peaks. A larger number of negative triplets allows for greater diversity, which explains the initial increase in performance. Beyond the peak, however, performance starts to plateau or even deteriorate, due to highly imbalanced training data.
Next, we investigate the impact of the adaptive regularization controlled by $\lambda$ in Figure 2(c). Generally, having such a regularization (i.e., $\lambda > 0$) avoids excessive scaling and shifting from the FiLM layer, and thus reduces overfitting to individual entities or relations. In particular, the MRR performance improves as $\lambda$ increases and is optimal for all three datasets when $\lambda$ is around 1e-4. As the optimal values of both hyperparameters are largely stable across the three datasets, our model is not sensitive to these settings and potentially requires less effort in hyperparameter tuning. We also note that performance on the UMLS dataset tends to be more sensitive to changes in both parameters. This could be because UMLS is a smaller dataset than the other two, containing only 5,216 positive triplets for training, which increases the risk of overfitting to certain settings.
Case study (RQ4). We conduct a qualitative evaluation of DANS to demonstrate the diversity and adaptiveness of negative triplets generated by DANS.
(Figure 3: Positive tail entities and negative tail entities generated by DANS vs. uniform random sampling (RNS) on WN18RR, NELL-995 and UMLS.)
(Figure 4: Positive tail entities and negative tail entities generated by DANS vs. popularity-weighted random sampling (PNS) on WN18RR, NELL-995 and UMLS.)
Note that the ablation study has demonstrated improved model performance with the two-way generator and FiLM layers, providing direct evidence of the importance of diverse pathways and FiLM in the model architecture. However, it is not immediately clear whether having two pathways and FiLM layers in the generator would indeed produce diverse and adaptive examples in the embedding space. Instead, as asked in RQ4, can we observe the diversity and adaptiveness of generated examples?
To demonstrate the notion of diversity in RQ4, we present a few case studies of how DANS produces more diverse examples than uniform random sampling (RNS) or popularity-weighted random sampling (PNS). In Figures 3 and 4, we visualize the positive and negative tail entities w.r.t. a given relation and all its head entities on each dataset. More specifically, each point represents one tail entity, which can be a positive (real) tail entity or a negative tail entity; a negative entity is either generated by one of the pathways $G_1$ or $G_2$ of the generator, or randomly sampled by RNS or PNS. The high-dimensional embedding space is projected onto a Cartesian plane using the t-SNE algorithm [37]. In Figure 3, we compare the diversity of negative entities generated by DANS with that of RNS. For this case study, we select one relation from each dataset, so that all the positive and negative tail entities for a common relation (and the same original head entities) can be contrasted in one visualization. The results show that DANS provides more diverse negative entities for model training, where those generated by $G_1$ and $G_2$ occupy different subspaces from the positive entities. In contrast, RNS lacks diversity and samples negative entities in the same subspace as the positive entities; this could even contribute to false negative triplets, as they are not well separated from the real ones. Similarly, in Figure 4, we compare the negative entities generated by $G_1$ and $G_2$ in DANS against the output of PNS, which replaces uniform sampling with popularity-weighted sampling. The results again show that DANS produces more diverse negative entities in different subspaces from the positive entities, whereas PNS samples negative entities that mostly overlap with the positive entities and are thus less diverse.
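For reference, a visualization of this kind could be produced along the following lines (a sketch assuming scikit-learn and matplotlib; the embedding arrays are random placeholders rather than actual model outputs).

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# placeholder embeddings: positive tails, and fakes from pathways G1 and G2
pos = np.random.randn(50, 100)
fake1 = np.random.randn(50, 100)
fake2 = np.random.randn(50, 100)

# project all tail-entity embeddings to 2-D with t-SNE and plot each group
xy = TSNE(n_components=2, perplexity=30).fit_transform(np.vstack([pos, fake1, fake2]))
for block, label in zip(np.split(xy, 3), ["positive", "G1", "G2"]):
    plt.scatter(block[:, 0], block[:, 1], label=label, s=10)
plt.legend()
plt.show()
```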
(Figure 5: Generated samples (a) without FiLM and (b) with FiLM on WN18RR.)
On the other hand, to demonstrate the notion of adaptiveness, we compare two different relations from each dataset to visualize the differences with and without FiLM layers. For example, Figure 5 visualizes the impact of adaptiveness for two relations in the WN18RR dataset. “Positive1” and “Positive2” denote the existing positive entities of the two relations, respectively; “Fake1” and “Fake2” denote the corresponding negative samples produced by the generators for the two relations, respectively. As shown in Figure 5(a), without FiLM layers, “Fake1” and “Fake2” both spread across the plane without any clear association with the corresponding “Positive1” and “Positive2”, and thus offer less discriminative power to improve the learning of each relation. In contrast, in Figure 5(b), after FiLM layers are added, we can clearly see that “Fake1” and “Fake2” are adapted to “Positive1” and “Positive2”, as they move closer to their respective positive samples, improving the discriminative power of learning. This demonstrates that the global generators are adapted into local models conditioned on the individual input (entity or relation) when FiLM layers are present. Similar patterns can be observed on the NELL-995 and UMLS datasets, which are presented in Appendix E.
5 Conclusion and Future Work
In this work, we introduced DANS, a negative sampling strategy for knowledge graph embedding that explicitly accounts for the informativeness of negative triplets. On one hand, we proposed a two-way generator to increase the overall informativeness by diversifying the negative triplets based on their association with not only entities but also relations. On the other hand, we adapted the global generator model into local models, which generate negative triplets in a finer-grained manner to improve their individual informativeness. Empirically, DANS outperformed state-of-the-art baselines on three benchmark knowledge graphs in both quantitative and qualitative experiments. In the future, we believe that the concepts of diversity and adaptiveness can be further extended to other graph representation learning problems, ideally with a leaner, less parameter-intensive design.
6 Funding
This research / project is supported by the Ministry of Education, Singapore, under its Academic Research Fund Tier 2 (Proposal ID: T2EP20122-0041). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the Ministry of Education, Singapore.
Appendices
Appendix A Scoring function
ComplEx: ComplEx [36] extends DistMult [44] by introducing complex-valued embeddings to better model asymmetric relations. In ComplEx, entity and relation embeddings no longer lie in a real space but in a complex space.
ComplEx maps the entities and relations to complex vectors and scores a triplet as $f(h, r, t) = \sigma\big(\mathrm{Re}(\mathbf{h}_h^\top \,\mathrm{diag}(\mathbf{w}_r)\, \bar{\mathbf{h}}_t)\big)$, where $\mathrm{Re}(\cdot)$ takes the real part, $\bar{\mathbf{h}}_t$ is the complex conjugate of the tail embedding, and $\mathrm{diag}(\mathbf{w}_r)$ is a diagonal matrix whose diagonal is $\mathbf{w}_r$, an $r$-specific trainable vector of the decoder.
RotatE: Inspired by TransE [2], RotatE [34] moves to the complex vector space and is motivated by Euler's identity. The model defines each relation as a rotation and measures the distance from the rotated source entity to the target entity, in order to account for three relation patterns: symmetry/antisymmetry, inversion and composition.
RotatE maps the entities and relations to complex vectors and scores a triplet as $f(h, r, t) = -\|\mathbf{h}_h \circ \mathbf{w}_r - \mathbf{h}_t\|$, where each element of $\mathbf{w}_r$ has unit modulus and $\circ$ denotes the Hadamard (element-wise) product.
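For illustration, the two scoring functions could be computed as in the sketch below, where each embedding stores its real and imaginary parts in the two halves of the last dimension; this layout and the L1 aggregation of element-wise moduli in RotatE are assumptions of the sketch.

```python
import torch

def complex_score(h: torch.Tensor, r: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Sketch of the ComplEx score: Re(<h, r, conj(t)>), with real and
    imaginary halves stored along the last dimension."""
    h_re, h_im = h.chunk(2, dim=-1)
    r_re, r_im = r.chunk(2, dim=-1)
    t_re, t_im = t.chunk(2, dim=-1)
    return (h_re * r_re * t_re + h_im * r_re * t_im
            + h_re * r_im * t_im - h_im * r_im * t_re).sum(dim=-1)

def rotate_score(h: torch.Tensor, phase: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Sketch of the RotatE score: -|| h o r - t ||, where the relation is a
    rotation given by phases (unit-modulus complex numbers)."""
    h_re, h_im = h.chunk(2, dim=-1)
    t_re, t_im = t.chunk(2, dim=-1)
    r_re, r_im = torch.cos(phase), torch.sin(phase)
    d_re = h_re * r_re - h_im * r_im - t_re   # real part of (h o r) - t
    d_im = h_re * r_im + h_im * r_re - t_im   # imaginary part of (h o r) - t
    return -torch.sqrt(d_re ** 2 + d_im ** 2 + 1e-9).sum(dim=-1)
```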
Appendix B Evaluation Metrics
Mean Reciprocal Ranking (MRR): For each candidate list we record the reciprocal of the ranking position of the positive triplet, and take the average over all lists. It reflects the absolute ranking of the first relevant item in the list.
Formally, $\mathrm{MRR} = \frac{1}{|Q|} \sum_{i=1}^{|Q|} \frac{1}{\mathrm{rank}_i}$, where $\mathrm{rank}_i$ is the position of the relevant result in the $i$-th query and $|Q|$ is the total number of queries.
Hit ratio at $k$ (H@$k$): We compute the fraction of candidate lists in which the positive triplet falls within the first $k$ positions.
Formally, $\mathrm{H@}k = \frac{N_k}{|Q|}$, where $N_k$ is the number of candidate lists in which the positive triplet is ranked within the top $k$ positions, and $|Q|$ is the total number of candidate lists.
Normalized discounted cumulative gain at (NDCG@): We compare the ranked list with the ideal list, where a match at a lower position would have a discounted gain. The gain is further normalized across lists, and we measure the average over all lists. It reflects the relevance at the top positions, taking position significance into account.
Formally, $\mathrm{NDCG@}k = \frac{\mathrm{DCG@}k}{\mathrm{IDCG@}k}$, where $\mathrm{DCG@}k$ measures our ranked list and $\mathrm{IDCG@}k$ measures the ideal list. With a single positive triplet per list, $\mathrm{DCG@}k = \frac{\mathbb{1}[\mathrm{rank} \le k]}{\log_2(\mathrm{rank} + 1)}$, whose numerator equals 1 when the positive triplet falls within the top $k$ positions and 0 otherwise; $\mathrm{rank}$ refers to the ranking position of the positive triplet, and $\mathrm{IDCG@}k = 1$.
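Given the rank of the positive triplet in each candidate list, the three metrics can be computed as in this small sketch (assuming one relevant item per list).

```python
import math
from typing import List, Tuple

def ranking_metrics(ranks: List[int], k: int = 5) -> Tuple[float, float, float]:
    """Compute MRR, H@k and NDCG@k from 1-indexed ranks of the positive
    triplet in each candidate list (one relevant item per list assumed)."""
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits_k = sum(1 for r in ranks if r <= k) / len(ranks)
    # with a single relevant item, IDCG@k = 1, so NDCG@k is a discounted hit indicator
    ndcg_k = sum(1.0 / math.log2(r + 1) if r <= k else 0.0 for r in ranks) / len(ranks)
    return mrr, hits_k, ndcg_k

print(ranking_metrics([1, 3, 10, 2], k=5))
```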
Appendix C Efficiency Comparison
Table 4: Runtime (in minutes) under different numbers of negative samples.

| Model | 50 | 20 | 10 | 5 |
|---|---|---|---|---|
| RGCN | 184 | 87 | 79 | 68 |
| RGCN-P | 182 | 94 | 79 | 67 |
| RGCN-Adv | 191 | 108 | 91 | 70 |
| RGCN-MCNS | 325 | 282 | 169 | 137 |
| DANS | 761 | 461 | 271 | 201 |
To compare the efficiency of different negative sampling methods, Table 4 reports the runtime of DANS and other negative sampling strategies, including random negative sampling in RGCN, popularity-biased negative sampling in RGCN-P, self-adversarial negative sampling in RGCN-Adv, and negative sampling via the positive distribution in RGCN-MCNS. We standardize the experimental setting and use the same RGCN backbone with the DistMult scoring function for a fair comparison. While DANS incurs more time than the other negative sampling methods, the increase over more advanced methods like RGCN-MCNS is a manageable constant factor. Furthermore, the growth in time is linear as more negative samples are generated.
Using 20 negative samples for reference, RGCN-Adv, which weighs the negative triplets, takes 108 minutes to complete, slightly more than the 87 and 94 minutes of RGCN and RGCN-P. RGCN-MCNS, which samples from the positive distribution, requires more time (135 minutes) to finish. Overall, DANS, being a more complex model that generally performs better than the baselines, takes approximately 3 to 5 times longer to run. As the number of negative samples increases, the runtime gap between DANS and the other variants widens, as DANS takes more computational resources to generate synthetic negative samples.
Appendix D Performance on the FB15k-237 dataset
Table 5: Performance on FB15k-237 using the ComplEx decoder.

| Model | MRR | H@1 | H@5 | H@10 | NDCG@5 |
|---|---|---|---|---|---|
| ComplEx | .211±.005 | .128±.003 | .234±.006 | .381±.006 | .128±.004 |
| SANS-RW | .195±.006 | .153±.004 | .204±.08 | .267±.008 | .119±.004 |
| NSCaching | .261±.007 | .183±.006 | .284±.007 | .417±.006 | .192±.008 |
| KBGAN | .259±.008 | .169±.0106 | .279±.007 | .413±.010 | .188±.006 |
| CAKE | .195±.007 | .132±.004 | .222±.006 | .319±.009 | .130±.003 |
| SMiLE | .179±.003 | .114±.003 | .197±.005 | .284±.007 | .112±.004 |
| DANS | .253±.007 | .185±.006 | .333±.009 | .446±.011 | .252±.007 |
The DistMult decoder we use on the other datasets tends to perform poorly on FB15k-237, as reported in Dettmers et al. [7], Ji et al. [14] and Zhou et al. [51]. Hence, on FB15k-237, we implement the baselines and DANS using the ComplEx decoder, as shown in Table 5. The results show that DANS still achieves competitive performance in comparison to the baselines. In particular, DANS achieves a notable gain of 17.3%, 6.95% and 31.3% in terms of H@5, H@10 and NDCG@5, respectively.
Appendix E Case Study for Adaptiveness
(Figure 6: Generated samples (a) without FiLM and (b) with FiLM on NELL-995.)
(Figure 7: Generated samples (a) without FiLM and (b) with FiLM on UMLS.)
We conduct additional experiments to qualitatively assess the impact of the FiLM layer on the output of the generator pathways through a case study. To demonstrate adaptiveness, we compare two different relations from each dataset to visualize the differences with and without FiLM layers. The visualization on the WN18RR dataset has already been explained in the main paper. Here, we include the visualizations on the NELL-995 and UMLS datasets in Figures 6 and 7, respectively, which display similar patterns to those explained in the main paper.
References
- Ahrabian et al. [2020] Kian Ahrabian, Aarash Feizi, Yasmin Salehi, William L Hamilton, and Avishek Joey Bose. Structure aware negative sampling in knowledge graphs. arXiv preprint arXiv:2009.11355, 2020.
- Bordes et al. [2013] Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. Advances in neural information processing systems, 26, 2013.
- Cai and Wang [2017] Liwei Cai and William Yang Wang. Kbgan: Adversarial learning for knowledge graph embeddings. arXiv preprint arXiv:1711.04071, 2017.
- Cao et al. [2019] Yixin Cao, Xiang Wang, Xiangnan He, Zikun Hu, and Tat-Seng Chua. Unifying knowledge graph learning and recommendation: Towards a better understanding of user preferences. In The world wide web conference, pages 151–161, 2019.
- Carlson et al. [2010] Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam Hruschka, and Tom Mitchell. Toward an architecture for never-ending language learning. In Proceedings of the AAAI conference on artificial intelligence, pages 1306–1313, 2010.
- Chen et al. [2020] Zhe Chen, Yuehan Wang, Bin Zhao, Jing Cheng, Xin Zhao, and Zongtao Duan. Knowledge graph completion: A review. Ieee Access, 8:192435–192456, 2020.
- Dettmers et al. [2018] Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. Convolutional 2D knowledge graph embeddings. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, pages 1811–1818, February 2018.
- Du et al. [2021] Yu Du, Sylvie Ranwez, Nicolas Sutton-Charani, and Vincent Ranwez. Is diversity optimization always suitable? toward a better understanding of diversity within recommendation approaches. Information processing & management, 58(6):102721, 2021.
- Fang et al. [2017] Yuan Fang, Kingsley Kuan, Jie Lin, Cheston Tan, and Vijay Chandrasekhar. Object detection meets knowledge graphs. In International Joint Conferences on Artificial Intelligence, 2017.
- Goodfellow et al. [2014] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in neural information processing systems, 27, 2014.
- Hu et al. [2019] Binbin Hu, Yuan Fang, and Chuan Shi. Adversarial learning on heterogeneous information networks. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, pages 120–129, 2019.
- Isufi et al. [2021] Elvin Isufi, Matteo Pocchiari, and Alan Hanjalic. Accuracy-diversity trade-off in recommender systems via graph convolutions. Information Processing & Management, 58(2):102459, 2021.
- Ji et al. [2016] Guoliang Ji, Kang Liu, Shizhu He, and Jun Zhao. Knowledge graph completion with adaptive sparse transfer matrix. In Proceedings of the AAAI conference on artificial intelligence, 2016.
- Ji et al. [2020] Kexi Ji, Bei Hui, and Guangchun Luo. Graph attention networks with local structure awareness for knowledge graph completion. IEEE Access, 8:224860–224870, 2020.
- Li et al. [2023] Jianping Li, Guozhen Tan, Xiao Ke, Huaiwei Si, and Yanfei Peng. Object detection based on knowledge graph network. Applied Intelligence, 53(12):15045–15066, 2023.
- Li et al. [2021] Sirui Li, Kok Wai Wong, Chun Che Fung, and Dengya Zhu. Improving question answering over knowledge graphs using graph summarization. In Neural Information Processing: 28th International Conference, ICONIP 2021, Sanur, Bali, Indonesia, December 8–12, 2021, Proceedings, Part IV 28, pages 489–500. Springer, 2021.
- Lin et al. [2015] Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of the AAAI conference on artificial intelligence, 2015.
- Liu et al. [2020] Hai Liu, Kairong Hu, Fu-Lee Wang, and Tianyong Hao. Aggregating neighborhood information for negative sampling for knowledge graph embedding. Neural Computing and Applications, 32:17637–17653, 2020.
- Lv et al. [2023] Wen Lv, Hongbo Shi, Shuai Tan, Bing Song, and Yang Tao. A dynamic semantic knowledge graph for zero-shot object detection. The Visual Computer, 39(10):4513–4527, 2023.
- Mikolov et al. [2013] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. Advances in neural information processing systems, 26, 2013.
- Nathani et al. [2019] Deepak Nathani, Jatin Chauhan, Charu Sharma, and Manohar Kaul. Learning attention-based embeddings for relation prediction in knowledge graphs. arXiv preprint arXiv:1906.01195, 2019.
- Niu et al. [2022] Guanglin Niu, Bo Li, Yongfei Zhang, and Shiliang Pu. Cake: A scalable commonsense-aware framework for multi-view knowledge graph completion. arXiv preprint arXiv:2202.13785, 2022.
- Odena [2016] Augustus Odena. Semi-supervised learning with generative adversarial networks. arXiv preprint arXiv:1606.01583, 2016.
- Oreshkin et al. [2018] Boris Oreshkin, Pau Rodríguez López, and Alexandre Lacoste. Tadam: Task dependent adaptive metric for improved few-shot learning. Advances in neural information processing systems, 31, 2018.
- Peng et al. [2022] Miao Peng, Ben Liu, Qianqian Xie, Wenjie Xu, Hua Wang, and Min Peng. Smile: Schema-augmented multi-level contrastive learning for knowledge graph link prediction. arXiv preprint arXiv:2210.04870, 2022.
- Perez et al. [2018] Ethan Perez, Florian Strub, Harm De Vries, Vincent Dumoulin, and Aaron Courville. Film: Visual reasoning with a general conditioning layer. In Proceedings of the AAAI conference on artificial intelligence, 2018.
- Qian [2021] Jing Qian. Understanding negative sampling in knowledge graph embedding. International Journal of Artificial Intelligence and Applications (IJAIA), 12(1), 2021.
- Rendle et al. [2012] Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. Bpr: Bayesian personalized ranking from implicit feedback. arXiv preprint arXiv:1205.2618, 2012.
- Schlichtkrull et al. [2018] Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks. In The semantic web: 15th international conference, ESWC 2018, Heraklion, Crete, Greece, June 3–7, 2018, proceedings 15, pages 593–607. Springer, 2018.
- Shaffi et al. [2022] Noushath Shaffi, Faizal Hajamohideen, Mufti Mahmud, Abdelhamid Abdesselam, Karthikeyan Subramanian, and Arwa Al Sariri. Triplet-loss based siamese convolutional neural network for 4-way classification of alzheimer’s disease. In International Conference on Brain Informatics, pages 277–287. Springer, 2022.
- Shen et al. [2022] Jianhao Shen, Chenguang Wang, Linyuan Gong, and Dawn Song. Joint language semantic and structure embedding for knowledge graph completion. arXiv preprint arXiv:2209.08721, 2022.
- Shimin et al. [2021] Shimin Di, Quanming Yao, Yongqi Zhang, and Lei Chen. Efficient relation-aware scoring function search for knowledge graph embedding. In 2021 IEEE 37th International Conference on Data Engineering (ICDE), pages 1104–1115. IEEE, 2021.
- Shin et al. [2019] Sangjin Shin, Xiongnan Jin, Jooik Jung, and Kyong-Ho Lee. Predicate constraints based question answering over knowledge graph. Information Processing & Management, 56(3):445–462, 2019.
- Sun et al. [2019] Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. Rotate: Knowledge graph embedding by relational rotation in complex space. arXiv preprint arXiv:1902.10197, 2019.
- Sun et al. [2020] Zhu Sun, Di Yu, Hui Fang, Jie Yang, Xinghua Qu, Jie Zhang, and Cong Geng. Are we evaluating rigorously? benchmarking recommendation for reproducible evaluation and fair comparison. In Proceedings of the 14th ACM Conference on Recommender Systems, pages 23–32, 2020.
- Trouillon et al. [2016] Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. Complex embeddings for simple link prediction. In International conference on machine learning, pages 2071–2080. PMLR, 2016.
- Van der Maaten and Hinton [2008] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research, 9(11), 2008.
- Wang et al. [2018] Peifeng Wang, Shuangyin Li, and Rong Pan. Incorporating gan for negative sampling in knowledge representation learning. In Proceedings of the AAAI conference on artificial intelligence, 2018.
- Wang et al. [2020] Xiang Wang, Yaokun Xu, Xiangnan He, Yixin Cao, Meng Wang, and Tat-Seng Chua. Reinforced negative sampling over knowledge graph for recommendation. In Proceedings of the web conference 2020, pages 99–109, 2020.
- Wang et al. [2014] Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the AAAI conference on artificial intelligence, 2014.
- Wu et al. [2016] Qi Wu, Peng Wang, Chunhua Shen, Anthony Dick, and Anton Van Den Hengel. Ask me anything: Free-form visual question answering based on knowledge from external sources. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4622–4630, 2016.
- Xi et al. [2021] Wu-Dong Xi, Ling Huang, Chang-Dong Wang, Yin-Yu Zheng, and Jian-Huang Lai. Deep rating and review neural network for item recommendation. IEEE Transactions on Neural Networks and Learning Systems, 33(11):6726–6736, 2021.
- Xiong et al. [2017] Wenhan Xiong, Thien Hoang, and William Yang Wang. Deeppath: A reinforcement learning method for knowledge graph reasoning. arXiv preprint arXiv:1707.06690, 2017.
- Yang et al. [2014] Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575, 2014.
- Yang et al. [2020] Zhen Yang, Ming Ding, Chang Zhou, Hongxia Yang, Jingren Zhou, and Jie Tang. Understanding negative sampling in graph representation learning. In Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining, pages 1666–1676, 2020.
- Ying et al. [2018] Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L Hamilton, and Jure Leskovec. Graph convolutional neural networks for web-scale recommender systems. In Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining, pages 974–983, 2018.
- Zeng et al. [2020] Jiehang Zeng, Lu Liu, and Xiaoqing Zheng. Learning structured embeddings of knowledge graphs with adversarial learning framework. arXiv preprint arXiv:2004.07265, 2020.
- Zhang et al. [2019] Yongqi Zhang, Quanming Yao, Yingxia Shao, and Lei Chen. Nscaching: simple and efficient negative sampling for knowledge graph embedding. In 2019 IEEE 35th International Conference on Data Engineering (ICDE), pages 614–625. IEEE, 2019.
- Zhao et al. [2022] Jianli Zhao, Hao Li, Lijun Qu, Qinzhi Zhang, Qiuxia Sun, Huan Huo, and Maoguo Gong. Dcfgan: An adversarial deep reinforcement learning framework with improved negative sampling for session-based recommender systems. Information sciences, 596:222–235, 2022.
- Zhou et al. [2021] Xiaofei Zhou, Lingfeng Niu, Qiannan Zhu, Xingquan Zhu, Ping Liu, Jianlong Tan, and Li Guo. Knowledge graph embedding by double limit scoring loss. IEEE Transactions on Knowledge and Data Engineering, 34(12):5825–5839, 2021.
- Zhou et al. [2022] Zhehui Zhou, Can Wang, Yan Feng, and Defang Chen. Jointe: Jointly utilizing 1d and 2d convolution for knowledge graph embedding. Knowledge-Based Systems, 240:108100, 2022.