Layer-diverse Negative Sampling for Graph Neural Networks
Abstract
Graph neural networks (GNNs) are a powerful solution for various structure learning applications due to their strong representation capabilities for graph data. However, traditional GNNs, relying on message-passing mechanisms that gather information exclusively from first-order neighbours (known as positive samples), can lead to issues such as over-smoothing and over-squashing. To mitigate these issues, we propose a layer-diverse negative sampling method for message-passing propagation. This method employs a sampling matrix within a determinantal point process, which transforms the candidate set into a space and selectively samples from this space to generate negative samples. To further enhance the diversity of the negative samples during each forward pass, we develop a space-squeezing method to achieve layer-wise diversity in multi-layer GNNs. Experiments on various real-world graph datasets demonstrate the effectiveness of our approach in improving the diversity of negative samples and overall learning performance. Moreover, adding negative samples dynamically changes the graph's topology, and thus has strong potential to improve the expressiveness of GNNs and reduce the risk of over-squashing.
1 Introduction
Graph neural networks (GNNs) have emerged as a formidable tool for various applications of structure learning, including drug discovery (Sun et al., 2020), recommendation systems (Yu & Qin, 2020), and traffic prediction (Lan et al., 2022), owing to their strong representation learning power. GNNs propagate the learning of nodes through a message-passing mechanism (Geerts et al., 2021) that conveys and aggregates information from neighbouring nodes, known as first-order neighbours. The message passing is based on the assumption that neighbours of a node have similar representations. The common practice of updating node representations solely with positive samples in most GNNs (Kipf & Welling, 2017; Xu et al., 2019; Brody et al., 2022) can have three limitations: 1) Over-smoothing (Chen et al., 2020; Rong et al., 2020; Zhao & Akoglu, 2020), where the node representations become less distinct as the number of layers increases; 2) Limited expressivity (Xu et al., 2019), where it becomes difficult to distinguish different graph topologies after aggregation; and 3) Over-squashing (Alon & Yahav, 2021; Topping et al., 2022; Karhadkar et al., 2022), where bottlenecks exist and limit the information passing between weakly connected subgraphs.

In addition to a node’s positive samples, there are many other non-neighbouring nodes that can provide diverse and valuable information for updating the representations. Unlike neighbouring nodes, non-neighbouring nodes typically have distinct representations compared to the given node and are referred to as negative samples (Duan et al., 2022). While it is crucial to select appropriate negative samples, only a few studies have given adequate attention to this aspect of negative sampling.
It is believed that the ideal negative samples should contain enough information about the entire graph without including a large amount of redundant information. However, a remaining issue is that all previous approaches treat the negative samples in each layer as independent. Thus, from a holistic perspective, the negative samples obtained still contain a considerable amount of redundancy. In fact, experiments show that the overlap between the node samples for different layers obtained by Duan et al. (2022) is more than 75%, as outlined in further detail in Section 4.2.
To address the issue of redundant information in negative samples, we propose an approach called layer-diverse negative sampling that utilizes the technique of space squeezing. This method is designed to obtain meaningful information with a smaller number of samples (as illustrated by Figure 1). Specifically, our method utilizes the sampling matrix in DPP to transform the candidate set into a space. The dimension of the space is the number of nodes in the candidate set, with each node represented as a vector in this space. We then apply the space squeezing technique during sampling, which eliminates the dimensions corresponding to the samples of the last layer, thus significantly reducing the probability of selecting those samples again. These negative samples are then utilized in message passing in GCN, resulting in a new model called Layer-diverse GCN, or LDGCN in short.
The effectiveness of the LDGCN model has been demonstrated through extensive experimentation on seven publicly available benchmark datasets. These experiments have shown that LDGCN consistently achieves excellent performance across all datasets. We provide a detailed discussion on why the use of layer-diverse DPP sampling for negative samples may improve learning ability by improving GNNs expressivity and reducing the risk of over-squashing. Our main contributions are twofold:
• We propose a method for layer-diverse negative sampling that utilizes space squeezing to effectively reduce redundancy in the samples found and enhance the overall performance of the model.
• We empirically demonstrate the effectiveness of the proposed method in enhancing the diversity of layer-wise negative samples and overall node representation learning performance, and we also show the great potential of negative samples in improving GNNs expressivity and reducing the risk of over-squashing.
2 Preliminaries
2.1 Determinantal Point Processes (DPP)
A determinantal point process (DPP) is a probability measure on all possible subsets of a ground set $\mathcal{Z}=\{1,2,\dots,M\}$ with a size of $M$. For every subset $Y \subseteq \mathcal{Z}$, a DPP (Hough et al., 2009) defined via a positive semidefinite matrix $\bm{L}$ is formulated as

$$P_{\bm{L}}(Y) = \frac{\det(\bm{L}_Y)}{\det(\bm{L}+\bm{I})}, \qquad (1)$$

where $\det(\cdot)$ denotes the determinant of a given matrix, $\bm{L}_Y$ is a real and symmetric matrix indexed by the elements of $Y$, and $\det(\bm{L}+\bm{I})$ is a normalisation term that is constant once the ground dataset is fixed.
DPP has an intuitive geometric interpretation. If we have a positive semidefinite matrix $\bm{L}$, there is always a matrix $\bm{B}$ that satisfies $\bm{L}=\bm{B}^{\top}\bm{B}$. Let $\bm{b}_i$ be the columns of $\bm{B}$. A determinantal operator can then be interpreted geometrically as

$$P_{\bm{L}}(Y) \propto \det(\bm{L}_Y) = \mathrm{Vol}^{2}\big(\{\bm{b}_i\}_{i \in Y}\big), \qquad (2)$$

where the right-hand side of the equation is the squared $|Y|$-dimensional volume of the parallelepiped spanned by the columns of $\bm{B}$ corresponding to the elements in $Y$. Intuitively, diverse sets are more probable because their feature vectors are more orthogonal and span larger volumes. (See Appendix A.1 for more details.)
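To make this geometric picture concrete, the following minimal NumPy sketch (the kernel and subsets below are toy assumptions, not from the paper) computes the probability of a subset from a feature matrix $\bm{B}$ and illustrates that subsets with more orthogonal feature vectors receive higher probability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature matrix B: columns are item feature vectors b_i (illustrative assumption).
B = rng.normal(size=(5, 4))          # 5-dimensional features, 4 candidate items
L = B.T @ B                          # L-ensemble kernel, L = B^T B (positive semidefinite)

def dpp_prob(L, subset):
    """P(Y) = det(L_Y) / det(L + I): probability that a DPP draws exactly `subset`."""
    L_Y = L[np.ix_(subset, subset)]
    return np.linalg.det(L_Y) / np.linalg.det(L + np.eye(L.shape[0]))

# Pairs whose feature vectors are closer to orthogonal span a larger parallelepiped,
# hence receive higher probability than near-parallel pairs.
for Y in ([0, 1], [0, 2], [1, 3]):
    print(Y, dpp_prob(L, Y))
```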
2.2 Negative Sampling for GNNs
Let $\mathcal{G}=(\mathcal{V},\mathcal{E})$ denote a graph with node features $\bm{h}_v$ for $v \in \mathcal{V}$, where $\mathcal{V}$ and $\mathcal{E}$ are the sets of nodes and edges. Let $N=|\mathcal{V}|$ denote the number of nodes. GNNs aggregate information via message-passing (Geerts et al., 2021), where each node repeatedly receives information from its first-order neighbours $\mathcal{N}(i)$ to update its representation as

$$\bm{h}_i^{(l+1)} = \sigma\Big(\sum_{j \in \mathcal{N}(i)\cup\{i\}} \tfrac{1}{\sqrt{d_i d_j}}\,\bm{W}^{(l)}\bm{h}_j^{(l)}\Big), \qquad (3)$$

where $d_i$ is the degree of node $i$. Introducing negative samples can improve the quality of the node representations and alleviate the over-smoothing problem (Chen et al., 2020; Rong et al., 2020; Duan et al., 2022). The new update to the representations is formulated as
$$\bm{h}_i^{(l+1)} = \sigma\Big(\sum_{j \in \mathcal{N}(i)\cup\{i\}} \tfrac{1}{\sqrt{d_i d_j}}\,\bm{W}^{(l)}\bm{h}_j^{(l)} \;-\; \mu \sum_{j' \in \overline{\mathcal{N}}(i)} \tfrac{1}{\sqrt{d_i d_{j'}}}\,\bm{W}^{(l)}\bm{h}_{j'}^{(l)}\Big), \qquad (4)$$

where $\overline{\mathcal{N}}(i)$ are the negative samples of node $i$, and $\mu$ is a hyper-parameter to balance the contribution of negative samples.
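As a rough illustration of Eq. (4), the sketch below implements one such propagation step with dense adjacency matrices; the function name, the dense-matrix formulation, and the use of each adjacency's own degrees for normalisation are simplifying assumptions made here for readability, not the authors' implementation.

```python
import torch

def propagate_with_negatives(H, A_pos, A_neg, W, mu):
    """One message-passing step in the spirit of Eq. (4).

    H:     [N, d]  node representations at layer l
    A_pos: [N, N]  adjacency (with self-loops) over positive / first-order neighbours
    A_neg: [N, N]  indicator matrix of the sampled negative pairs (i, j')
    W:     [d, d'] layer weight matrix
    mu:    scalar balancing the contribution of negative samples
    """
    deg_p = A_pos.sum(dim=1).clamp(min=1.0)
    deg_n = A_neg.sum(dim=1).clamp(min=1.0)
    # Symmetric degree normalisation, as in GCN.
    norm_p = A_pos / torch.sqrt(deg_p[:, None] * deg_p[None, :])
    norm_n = A_neg / torch.sqrt(deg_n[:, None] * deg_n[None, :])
    # Positive neighbours pull the representation together; negatives push it away.
    return torch.relu((norm_p @ H - mu * norm_n @ H) @ W)
```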
The current state-of-the-art approaches for selecting the negative samples used in Eq. (4) are the DPP-based methods (Duan et al., 2022; 2023). Intuitively, good negative samples for a node should have different semantics while containing as complete knowledge of the whole graph as possible. Since the sampling procedure in a DPP requires an eigendecomposition, the computational cost of drawing one sample from an $N$-node set could be $O(N^{3})$, and the total becomes an excessive $O(N^{4})$ when sampling is performed $N$ times (once per node). Thus, the large size of the candidate set obtained by exploring the whole graph for negative samples would make such an approach impractical, even for a moderately sized graph.
To reduce the computational complexity, the shortest-path-based method (Duan et al., 2023) is first used to form a smaller but more effective candidate set $\mathcal{C}_i$ for node $v_i$, which is detailed in Algorithm 1. Using this method, the computational cost is approximately $O\big((\bar{d}^{\,p})^{3}\big)$, where $p$ is the path length (normally smaller than the diameter of the graph) and $\bar{d}$ is the average degree of the graph. As an example, consider the Citeseer graph (Sen et al., 2008) with 3,327 nodes and $\bar{d}=2.74$. When using the experimental setting shown below, where $p=6$, we observe that $N=3{,}327$ is significantly larger than $\bar{d}^{\,p}\approx 423$.
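A minimal sketch of this candidate-set construction is given below, assuming the candidate set simply collects non-neighbour nodes within shortest-path distance $p$ of the given node; the exact procedure of Algorithm 1 is given in Duan et al. (2023).

```python
import networkx as nx

def candidate_set(G, node, p=6):
    """Shortest-path-based candidate set: non-neighbour nodes whose shortest-path
    distance from `node` lies in [2, p]. A simplified stand-in for Algorithm 1."""
    lengths = nx.single_source_shortest_path_length(G, node, cutoff=p)
    return [v for v, dist in lengths.items() if 2 <= dist <= p]

# Usage on a toy graph: the returned nodes exclude `node` itself and its
# first-order neighbours, keeping the set far smaller than the whole graph.
G = nx.karate_club_graph()
print(candidate_set(G, 0, p=6))
```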
After obtaining the candidate set $\mathcal{C}_i$, the subsequent step involves effectively leveraging the characteristics of the graph to devise the computation method for the $\bm{L}$ matrix. As a major fundamental technique of DPP, Quality-Diversity decomposition is used to balance the diversity against some underlying preferences for different items in $\mathcal{C}_i$ (Kulesza & Taskar, 2012). Since $\bm{L}$ can be written as $\bm{B}^{\top}\bm{B}$, each column $\bm{b}_j$ of $\bm{B}$ is further written as the product of a quality term $q_j \in \mathbb{R}^{+}$ and a vector of normalized diversity features $\bm{\phi}_j$, $\|\bm{\phi}_j\|=1$. The probability of a subset $Y$ is the square of the volume spanned by $q_j\bm{\phi}_j$ for $j \in Y$. Hence, $\bm{L}$ for the given node $v_i$ now becomes

$$L_{jk} = q_j\,\bm{\phi}_j^{\top}\bm{\phi}_k\,q_k, \qquad (5)$$

where $j$ and $k$ are two candidate negative nodes.
Utilizing the Quality-Diversity decomposition of DPP, we use the node feature representations and graph structural information to calculate $\bm{L}$ for the intended sample selection. Specifically, following Duan et al. (2023), all the nodes in a graph are first divided into communities using the Fluid Communities method (Parés et al., 2017). Then, the features of each community and each candidate set are extracted from the node representations via
(6) |
With Eq. (6), the quality terms in Eq. (5) are defined as:
(7)
where the first term represents the feature expression of the community to which the node belongs, the second term denotes the features of the candidate set, and $\odot$ denotes the point-wise product. These quality terms ensure the candidate node is not similar to the given node $v_i$. The diversity term in Eq. (5) is defined as
(8)
which ensures that there are sufficient differences between every pair of candidate nodes $j$ and $k$.
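A schematic of how such a quality-diversity kernel could be assembled is given below; the cosine-normalised diversity features and the externally supplied quality scores are illustrative stand-ins for the exact Eqs. (6)-(8), which are not reproduced here.

```python
import numpy as np

def build_L(H, cand, quality):
    """Assemble a quality-diversity L-ensemble L_jk = q_j * phi_j^T phi_k * q_k (Eq. (5)).

    H:       [N, d] node representations
    cand:    indices of the candidate negative nodes for the given node
    quality: positive quality scores q_j for the candidates; in the paper these are
             derived from community and candidate-set features (Eqs. (6)-(7)),
             here they are simply taken as an input.
    """
    feats = H[cand]
    # Normalised diversity features phi_j (cosine-style normalisation as an illustration).
    phi = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)
    S = phi @ phi.T                          # phi_j^T phi_k
    q = np.asarray(quality, dtype=float)
    return (q[:, None] * S) * q[None, :]     # L_jk = q_j * S_jk * q_k
```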
Since the computation of the $\bm{L}$ matrix is not the primary focus of this paper, additional details can be found in Duan et al. (2022; 2023). Although the above method ensures good diversity within each layer, there is still plenty of redundancy across the layers because the negative samples in each layer are treated as being independent.
3 Proposed Model
3.1 Layer-diverse Negative Sampling
Given a node $v_i$ and its candidate negative sample set $\mathcal{C}_i$ (in theory, $\mathcal{C}_i$ denotes all non-neighbour nodes of $v_i$ in the graph, but we follow Duan et al. (2022; 2023) and reduce its size via Algorithm 1), let $\mathcal{S}_i^{l-1}$ and $\mathcal{S}_i^{l}$ denote the negative samples for node $v_i$ in a multi-layer GNN collected from layers $l-1$ and $l$, respectively, as illustrated in Figure 1. Our goal is to reduce the overlap between $\mathcal{S}_i^{l-1}$ and $\mathcal{S}_i^{l}$ to cover as much negative information in the graph as possible while retaining an accurate representation of $v_i$. Note that $\mathcal{C}_i$ could be seen geometrically as a space spanned by the node representations, while $\mathcal{S}_i^{l-1}$ is just a subspace of this space spanned by the selected negative samples/representations. Inspired by this geometric interpretation of negative sampling, our idea is to squeeze this space for negative sampling at layer $l$, conditioned on the obtained $\mathcal{S}_i^{l-1}$, to reduce the probability of re-picking the samples in $\mathcal{S}_i^{l-1}$.
To be specific, an eigendecomposition is first performed on the matrix $\bm{L}_i$ from Eq. (5) of $\mathcal{C}_i$, which yields the eigenvalues $\{\lambda_1,\dots,\lambda_M\}$ and the eigenvectors $\{\bm{v}_1,\dots,\bm{v}_M\}$, where $M=|\mathcal{C}_i|$. Here, $\{\bm{v}_1,\dots,\bm{v}_M\}$ is an orthogonal basis for the space of $\mathcal{C}_i$ since $\bm{L}_i$ is a real and symmetric matrix. All the eigenvectors compose a new matrix denoted as $\bm{V}\in\mathbb{R}^{M\times M}$, where each row $\bm{V}_j$ corresponds to a node in $\mathcal{C}_i$ and records the impacts/contributions of that node on each eigenvector. The probability of picking node $v_j$ through DPP sampling is then proportional to $\|\bm{V}_j\|^{2}$. Given $\mathcal{S}_i^{l-1}$, the goal is to reduce the information of any $v_j\in\mathcal{S}_i^{l-1}$ in the space $\bm{V}$. To this end, the eigenvector/basis direction on which node $v_j$ makes the greatest impact/contribution is identified as:
$$d^{*} = \arg\max_{d\in\{1,\dots,M\}} \bm{V}_{jd}^{2}. \qquad (9)$$

The space along the $d^{*}$-direction is then squeezed by

$$\tilde{\bm{V}} = \bm{V} - \omega\,\big(\bm{V}_{:,d^{*}} \otimes \bm{e}_{d^{*}}\big), \qquad (10)$$

where $\otimes$ denotes the outer product, $\bm{e}_{d^{*}}$ is the $d^{*}$-th standard basis vector, and $\omega\in[0,1]$ is the weight of the squeezing. The outer product can be thought of as a way to "stretch" every node's vector along the $d^{*}$-direction. Since $d^{*}$ in Eq. (10) is the eigenvector/direction on which node $v_j$ has the strongest influence, this update reduces the contribution of node $v_j$ in the $d^{*}$-direction to all vectors as much as possible. It is worth noting about Eq. (10) that:
Remark 3.1.
Suppose the probability of re-picking node $v_j$ in $\mathcal{S}_i^{l-1}$ is $p$; the new probability of re-picking it in $\mathcal{S}_i^{l}$ would be reduced to roughly $(1-\omega)^{2}p$, where $\omega\in[0,1]$. It means that we can control the squeezing degree by $\omega$. See the proof given in Appendix A.2.
Remark 3.2.
For a node $v_k\in\mathcal{C}_i\setminus\mathcal{S}_i^{l-1}$ and $v_j\in\mathcal{S}_i^{l-1}$, if $\bm{V}_k$ and $\bm{V}_j$ are sufficiently similar to each other, then the probability of re-picking $v_k$ would also be reduced. (See the proof in Appendix A.2.) It means we do not just reduce the re-picking probability of $v_j$: by reducing it, we also decrease the influence of similar nodes, reducing the likelihood of them being considered.
After obtaining the layer-diverse vector matrix $\tilde{\bm{V}}$, we employ $k$-DPP for negative sampling. $k$-DPP is a generalization of the DPP for sampling a fixed number of items rather than a variable number. By setting the value of $k$, we can effectively control the number of negative samples obtained through sampling. (See Appendix A.1 for more details.) The process of layer-diverse negative sampling for node $v_i$ is outlined in Algorithm 2. The most significant difference is that the original input of $k$-DPP is the eigenvector matrix $\bm{V}$, while the input passed to Algorithm 3 here is the layer-diverse matrix $\tilde{\bm{V}}$. Our method can be used for all layers by collecting all negative samples before a given layer as the candidate set. To reduce the computational cost, two consecutive layers are used in the following experiments.
To better illustrate our method, a three-dimensional example is shown in Figure 2, where the candidate set contains three nodes and the size of $\bm{V}$ is $3\times 3$. Figure 2(a) shows that the original space is spanned by the eigenvectors $\{\bm{v}_1,\bm{v}_2,\bm{v}_3\}$. Suppose node $v_j$ has the highest impact on the eigenvector $\bm{v}_{d^{*}}$, that is, $d^{*}$ is the direction given by Eq. (9). The space along the $d^{*}$-direction then squeezes, as we can observe in Figure 2(b). The original space finally turns into the new space in Figure 2(c), where the magnitude of the $d^{*}$ component is significantly reduced and the probability of choosing the corresponding nodes (including node $v_j$) becomes smaller.
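A compact sketch of this squeezing step is given below; it follows the reconstruction of Eqs. (9)-(10) used above (argmax direction per previously sampled node, column rescaling via the outer-product update), with the function name and the choice of $\omega$ being illustrative assumptions rather than the authors' released code.

```python
import numpy as np

def squeeze_space(V, prev_samples, omega=0.9):
    """Layer-diverse space squeezing in the spirit of Eqs. (9)-(10).

    V:            [M, M] eigenvector matrix of L_i; row j holds node j's
                  contributions to each eigenvector/direction.
    prev_samples: indices (within the candidate set) of the negative samples
                  selected at the previous layer.
    omega:        squeezing weight in [0, 1]; larger means stronger suppression.
    """
    V = V.copy()
    for j in prev_samples:
        d_star = int(np.argmax(V[j] ** 2))   # Eq. (9): direction node j influences most
        V[:, d_star] *= (1.0 - omega)        # Eq. (10): squeeze all nodes along d_star
    return V

# The re-picking probability of a node is proportional to the squared norm of its row,
# so previously sampled nodes (and nodes similar to them) become much less likely.
```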
3.2 Discussion
Although only a limited number of works have investigated the use of negative samples for GNNs, the exact benefits of using such samples remain largely unexplored. To better understand the impact of negative samples, we give examples and discussions of their effects on GNN expressivity and over-squashing. Our results show that negative samples have a strong potential to improve GNN expressivity and reduce the degree of over-squashing.
3.2.1 GNN expressivity
Intuitively, adding negative samples into a graph's convolution layers temporarily changes the graph's topology. Xu et al. (2019) state that ideally powerful GNNs can distinguish between different graph structures by mapping them into different embeddings (so-called GNN expressivity). In the following, from a topological view, we demonstrate three different aggregation cases in which adding negative samples to GNNs may improve GNN expressivity.
Case 1 In a single layer, negative samples can help aggregators distinguish different structures. As shown in Figure 3(a), before adding negative samples, MAX fails to distinguish two structures because
(11) |
After adding negative samples, we have
(12) |
In Figure 3(b), before adding negative samples, MAX and MEAN both fail to distinguish two structures because for MAX, it will be
(13) |
and for MEAN, it will be
(14) |
After adding negative samples, we have
(15) |
(16) |
Negative samples help MAX and MEAN distinguish different structures in this case.
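As a toy numerical illustration of Case 1 (all values below are invented for this sketch and are not the node features of Figure 3), MAX collapses two different neighbourhoods to the same value, but subtracting distinct negative contributions separates them:

```python
import numpy as np

# Two neighbourhoods MAX cannot tell apart: same maximum element.
n1 = np.array([1.0, 3.0])
n2 = np.array([3.0, 3.0, 3.0])
print(n1.max() == n2.max())              # True: MAX maps both structures to 3.0

# Invented negative samples for each structure; subtracting their (mean) contribution
# as in Eq. (4) now yields different aggregated values, so the embeddings differ.
neg1, neg2 = np.array([2.0]), np.array([0.5])
agg1 = n1.max() - neg1.mean()
agg2 = n2.max() - neg2.mean()
print(agg1, agg2, agg1 != agg2)          # 1.0 2.5 True
```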
Case 2 Layer-diverse negative samples can help distinguish different structures in multi-layers. The outcomes of only one negative sampling process do not always help the aggregators to generate different embeddings for various structures. As Figure 4 shows, even after adding some negative samples, the MAX and MEAN aggregators for layer still cannot distinguish between the different structures. This is because we have:
(17) |
(18) |
However, if the sampling method is well-designed, the probability of distinguishing between different graph structures in the network will be higher. Benefiting from the layer-diverse method, which obtains samples different from those of the last layer, at the next layer we get different samples and have
(19) |
(20) |
In this case, the layer-diverse negative sampling will help MAX and MEAN to distinguish different structures in multi-layers.
Case 3 There exist some situations where adding negative samples could make the originally distinguishable structures indistinguishable, but the probability of such situations is low. As shown in Figure 5(a), the MAX and MEAN aggregators can distinguish two different structures because we have
(21) |
(22) |
As shown in Figure 5(b), after adding negative samples in this layer, using MEAN we have
(23) |
(24) |
We notice that MEAN cannot distinguish the two structures after adding negative samples when the samples take this particular form. However, from this example, we can see that to make the originally distinguishable structures indistinguishable, the negative samples must exactly complement the difference between the two structures from the original aggregation. Apparently, the probability of finding such complementary negative samples is significantly smaller than that of finding negative samples that make the representations of the two structures different. Furthermore, considering our layer-diverse design, the probability of finding such complementary negative samples at every layer would be exponentially reduced. As shown in Figure 5(c), the result of the MEAN operator is
(25) |
Hence, we believe our layer-diverse negative sampling is helpful in improving GNN expressivity.
3.2.2 Over-squashing

Separate from over-smoothing and GNN expressivity, over-squashing is much less well known; it was first pointed out by Alon & Yahav (2021). Over-squashing occurs when bottlenecks exist and limit the information passing between weakly connected subgraphs, which leads to the graph failing to propagate messages from distant nodes (Alon & Yahav, 2021; Topping et al., 2022), as shown in Figure 6. An effective approach to addressing over-squashing is to rewire the input graph to remove the structural bottlenecks (Karhadkar et al., 2022). However, the rewiring methods face two main challenges: 1) losing the original topological information when the graph changes and 2) suffering from over-smoothing when adding too many edges. In the following, we will show that negative samples have the potential to address these two challenges.
An alternative way to understand negative samples in Eq. (4) is to introduce new (negative) edges/relations into GNNs, which can be rewritten as
$$\bm{h}_i^{(l+1)} = \sigma\Big(\sum_{j:(i,j)\in\mathcal{E}^{+}} \tfrac{1}{\sqrt{d_i d_j}}\,\bm{W}^{(l)}\bm{h}_j^{(l)} \;-\; \mu\sum_{j':(i,j')\in\mathcal{E}^{-}} \tfrac{1}{\sqrt{d_i d_{j'}}}\,\bm{W}^{(l)}\bm{h}_{j'}^{(l)}\Big), \qquad (26)$$
where $\mathcal{E}^{+}$ and $\mathcal{E}^{-}$ denote positive and negative relations, respectively. Firstly, as stated in Karhadkar et al. (2022), the flexible $\mathcal{E}^{+}$ and $\mathcal{E}^{-}$ could help to balance over-smoothing and over-squashing. Secondly, different from the positive samples/edges added by Karhadkar et al. (2022), our negative samples/edges could further improve the ability to preserve the node representations. The reason is that the underlying assumption of GNNs is that the representation of a node should be similar to the representations of its (positive) linked nodes, so any new positive sample might very likely bring some incorrect information to a node and then damage the original node representation significantly. However, our negative samples are purposely chosen to provide negative information to a given node, so Eq. (4) would not damage the original node representation too much given the same number of newly added edges as Karhadkar et al. (2022). Hence, we believe the negative samples are useful for overcoming the over-squashing problem.
Citeseer | Cora | PubMed | CS | Computers | Photo | ogbn-arxiv | |
GCN | |||||||
GATv2 | |||||||
SAGE | |||||||
GIN- | |||||||
AERO | |||||||
RGCN | |||||||
MCGCN | |||||||
PGCN | |||||||
D2GCN | |||||||
LDGCN | 68.27±1.29 | 76.80±1.26 | 77.07±1.23 | 86.23±0.55 | 77.92±2.34 | 86.50±1.48 | 71.66±0.30 |
4 Experiments
Our experiments aimed to address these questions: (1) Can the addition of negative samples obtained using our method improve the performance of GNNs compared to baseline methods? (Section 4.1) (2) Does our method result in negative samples with reduced redundancy? (Section 4.2) (3) Does our method yield consistent results even when fewer nodes are included in the negative sampling? (Section 4.2) (4) How would our negative sampling approach perform when applied to other GNNs architectures? (Section 4.3) (5) Does incorporating these negative samples into graph convolution alleviate issues with over-smoothing and over-squashing? (Section 4.4) (6) What is the time complexity of the proposed method? (Section 4.5) (7) How do the sampling results of our LDGCN model compare to those of the D2GCN model in terms of overlap reduction and sample diversity? (Section 4.6)
4.1 Evaluation of Node Classification
Datasets. We first conducted our experiments with seven homophilous datasets for semi-supervised node classification, including the citation networks Citeseer, Cora and PubMed (Sen et al., 2008), the coauthor network CS (Shchur et al., 2018), the Amazon networks Computers and Photo (Shchur et al., 2018), and the Open Graph Benchmark dataset ogbn-arxiv (Hu et al., 2020). Then, we expanded our experiments to three heterophilous datasets: Cornell, Texas, and Wisconsin (Craven et al., 1998).
Baselines. For homophilous datasets, we compared our framework to five GNN baselines: GCN (Kipf & Welling, 2017), GATv2 (Brody et al., 2022), SAGE (Hamilton et al., 2017), GIN-$\epsilon$ (Xu et al., 2019), and AERO (Lee et al., 2023). We also compared existing GNN models with negative sampling methods. RGCN (Kim & Oh, 2021) selects negative samples in a purely random manner. MCGCN (Yang et al., 2020) selects negative samples using Monte Carlo chains. PGCN (Ying et al., 2018) uses personalised PageRank. D2GCN (Duan et al., 2022) calculates the $\bm{L}$-ensemble using node representations only and does not take into account the diversity of the samples found across layers. Once the negative samples were obtained using these methods, they were integrated into the convolution operation using Eq. (4). For heterophilous datasets, to ensure a comprehensive analysis, we tested the layer-diverse negative sampling method across multiple graph neural network architectures, both in 2-layer and 4-layer configurations. The architectures tested include GCN (LD-GCN), GATv2 (LD-GATv2), GraphSAGE (LD-SAGE), and GIN (LD-GIN).
Experimental setup. We selected 1% of the nodes for negative sampling in each network layer. The datasets were divided consistently with Kipf & Welling (2017). Further information on the experimental setup and hyperparameters can be found in Appendix A.3.
Results on homophilous datasets. The results reported are the average accuracy values of the node classification after 10 runs, shown in Figure 7 for layers 2 to 6, and the detailed values for layer 4 are presented in Table 1. The results indicate that our model outperforms the other models. It performed better than the state-of-the-art GCN variants: GraphSAGE, GATv2, GIN-$\epsilon$, and AERO. Unlike these methods, LDGCN incorporates both neighbouring nodes (positive samples) and negative samples obtained by our method into message passing. Although RGCN, MCGCN, and PGCN also incorporate negative samples into the convolution operation, they have not shown consistent performance across various datasets. D2GCN does not utilize layer-diverse sampling to reduce the chances of selecting the same nodes in consecutive layers. Consequently, the negative samples identified by this method contain less information about the overall graph, impeding graph learning.
Results on heterophilous datasets. The results of the 2-layer are shown in Table 2. On the Cornell dataset, our LD-GCN model outperformed the standard GCN by approximately 6.84%. In Texas, the LD-GATv2 model showed an improvement of 11.32% over the standard GATv2. For Wisconsin, LD-SAGE exceeded the performance of standard GraphSAGE by 5.82%. Furthermore, the 4-layer model results (Table 3) are consistent with the improved performance observed in the 2-layer models, suggesting that our layer-diverse negative sampling method contributes positively across different model depths.
Cornell | Texas | Wisconsin | |
---|---|---|---|
GCN | |||
LD-GCN | |||
GATv2 | |||
LD-GATv2 | |||
GraphSAGE | |||
LD-SAGE | |||
GIN | |||
LD-GIN |
Cornell | Texas | Wisconsin | |
---|---|---|---|
GCN | |||
LD-GCN | |||
GATv2 | |||
LD-GATv2 | |||
GraphSAGE | |||
LD-SAGE | |||
GIN | |||
LD-GIN |
Heterophilous graphs are characterized by their tendency to connect nodes with dissimilar features or labels. This contrasts starkly with the homophilous nature typically assumed in many GNN designs. This heterophily implies a diverse neighbourhood for each node, which can challenge learning algorithms that rely on the assumption that 'neighbouring nodes have similar labels or features'.
Our layer-diverse negative sampling method is well-suited for such graphs for several reasons:
• DPP-based sampling within layers: Our method uses DPP-based sampling to ensure diversity within each layer of the graph. This approach is crucial for heterophilous graphs, where it's important to capture a wide range of node characteristics within the same layer.
• Layer-diverse enhancement: We enhance diversity between layers and reduce overlap, allowing for richer information capture across the graph. This method is particularly effective in heterophilous graphs, where nodes with similar properties may not be close in the graph's topology.
• Improved node representation learning: Our approach effectively learns node representations by distinguishing between similar and dissimilar neighbors. This is key in heterophilous graphs, where traditional GNNs might struggle due to the uniformity in their aggregation and update processes.
• Structural insight: Our method offers more structural insight into the graph by allowing the GNN to learn from a wider range of node connections, thus avoiding the pitfall of homogeneity in the learning process.
We believe that these results and our analysis of the structural properties of heterophilous graphs demonstrate the applicability and advantages of our layer-diverse negative sampling method in a broader range of graph types. This strengthens the case for our approach as a versatile tool in the GNN toolkit, capable of addressing the challenges presented by both homophilous and heterophilous graphs.
METHOD | $R_{node}$ | | $R_{clus}$ ($K_c$) | | $R_{clus}$ ($5K_c$) |
---|---|---|---|---|---|---|
4-Layer | 6-Layer | 4-Layer | 6-Layer | 4-Layer | 6-Layer | |
D2GCN-1% | ||||||
LDGCN-1% | ||||||
D2GCN-10% | ||||||
LDGCN-10% |
METHOD | $R_{node}$ | | $R_{clus}$ ($K_c$) | | $R_{clus}$ ($5K_c$) |
---|---|---|---|---|---|---|
4-Layer | 6-Layer | 4-Layer | 6-Layer | 4-Layer | 6-Layer | |
D2GCN-1% | ||||||
LDGCN-1% | ||||||
D2GCN-10% | ||||||
LDGCN-10% |
METHOD | $R_{node}$ | | $R_{clus}$ ($K_c$) | | $R_{clus}$ ($5K_c$) |
---|---|---|---|---|---|---|
4-Layer | 6-Layer | 4-Layer | 6-Layer | 4-Layer | 6-Layer | |
D2GCN-1% | ||||||
LDGCN-1% | ||||||
D2GCN-10% | ||||||
LDGCN-10% |
4.2 Evaluation of Layer-diversity
This section presents a comparison of our LDGCN model with the previous D2GCN (Duan et al., 2022) to demonstrate that our approach effectively reduces the overlap rate of negative samples in terms of both nodes and clusters, and that these samples are more beneficial for graph learning.
Datasets. Three of seven previous datasets were chosen to test this claim: the citation network Cora, the coauthor network CS, and the Amazon network Computers. The average degrees of the three datasets are 3.90, 8.93 and 35.76, respectively. This difference in density facilitates the comparison of the two methods in different graph datasets.
Setup. We repeated the experiments using both 1% and 10% of the nodes selected for negative sampling. Our aim was to show that: (1) the layer-diverse projection method can identify a set of negative samples with reduced overlap between layers and less redundant information; (2) an efficient sampling method can achieve better performance with fewer central nodes.
Metric. In addition to utilizing accuracy to display the final prediction results, we developed two metrics to evaluate the overlap rate of the selected samples: the Node Overlap Rate $R_{node}$ and the Cluster Overlap Rate $R_{clus}$. $R_{node}$ assesses the average overlap of the samples in the last and current layers of the network, defined by
$$R_{node} = \frac{1}{|\mathcal{V}_s|(L-1)} \sum_{i\in\mathcal{V}_s} \sum_{l=2}^{L} \frac{\big|\mathcal{S}_i^{l-1}\cap\mathcal{S}_i^{l}\big|}{\big|\mathcal{S}_i^{l}\big|}, \qquad (27)$$
where $\mathcal{V}_s$ is the set of central nodes performing the negative sampling, and $L$ is the number of network layers.
In addition to selecting diverse nodes, it is crucial that the selected nodes come from different clusters, as shown in Figure 1. In the context of semi-supervised learning with GNNs, we assume that the true labels of all nodes are currently unknown. Therefore, we employ K-Means to partition all nodes into $K$ clusters after each layer. This ensures that the negative samples for each layer belong to different clusters and form the cluster set $\mathcal{K}_i^{l}$. Then, we define $R_{clus}$ to measure the overlap of the cluster sets between layers:
$$R_{clus} = \frac{1}{|\mathcal{V}_s|(L-1)} \sum_{i\in\mathcal{V}_s} \sum_{l=2}^{L} \frac{\big|\mathcal{K}_i^{l-1}\cap\mathcal{K}_i^{l}\big|}{\big|\mathcal{K}_i^{l}\big|}. \qquad (28)$$
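A minimal sketch of how these two rates could be computed is shown below, assuming the per-node, per-layer negative-sample index sets have been recorded; for simplicity the K-Means clustering is run on a single embedding matrix here, whereas the paper re-clusters after each layer.

```python
import numpy as np
from sklearn.cluster import KMeans

def node_overlap_rate(samples):
    """samples[i] = [set of negative-sample node ids at layer 1, layer 2, ...] for central node i."""
    rates = []
    for per_layer in samples.values():
        for prev, curr in zip(per_layer[:-1], per_layer[1:]):
            rates.append(len(prev & curr) / max(len(curr), 1))
    return float(np.mean(rates))

def cluster_overlap_rate(samples, H, K):
    """Same idea, but comparing the sets of K-Means cluster ids the samples fall into."""
    labels = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(H)
    rates = []
    for per_layer in samples.values():
        cluster_sets = [set(labels[list(s)]) for s in per_layer]
        for prev, curr in zip(cluster_sets[:-1], cluster_sets[1:]):
            rates.append(len(prev & curr) / max(len(curr), 1))
    return float(np.mean(rates))
```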
Results. Tables 4, 5, and 6 illustrate the various overlap rates for the two methods on the three datasets. To evaluate the cluster overlap rate, the number of clusters was set to: (a) the actual number of classes, denoted as $K_c$, and (b) 5 times the actual number of classes, denoted as $5K_c$. On all three datasets, we found that, in comparison to D2GCN, LDGCN not only significantly decreased the node overlap rate but also reduced the repetition rate of the clusters to which the nodes belong. An interesting observation is that when we increased the number of clusters from the actual number of classes to 5 times the actual number of classes, LDGCN saw a further significant reduction in the sample overlap rate, while D2GCN only saw a slight reduction.
One possible explanation is that within each class, nodes can be further clustered. For instance, in the citation network, articles classified as machine learning can be further divided into articles on CNNs, GNNs, etc. Our layer-diverse method is designed to find the most diverse samples possible, even when searching within the same class, such as those belonging to these more specific clusters. In contrast, D2GCN consistently selects negative samples from the same cluster. These results demonstrate that LDGCN provides more useful information about the entire graph for feature extraction than D2GCN during message passing.
We evaluated the accuracy of LDGCN and D2GCN on the Cora, Computers, and CS datasets using different negative sampling rates. As shown in Figure 8, LDGCN demonstrated superior performance on all three datasets using both 1% and 10% of nodes for sampling. Examining the 1% experiment, where fewer nodes were used for negative message passing, we found that the selected nodes were meaningful enough to aid in graph learning. LDGCN maintained consistent performance even with a reduced sampling rate, while D2GCN’s performance decreased significantly.
Method | Layer 2 | Layer 4 | Layer 6 | Layer 8 | Layer 16 | Layer 32 |
---|---|---|---|---|---|---|
GCN | ||||||
LDGCN | 80.94±0.76 | 76.80±1.26 | 67.91±0.82 | 52.80±5.96 | 30.12±1.05 | 25.25±1.08 |
GATv2 | ||||||
LDGATv2 | 79.30±0.61 | 78.46±0.85 | 67.79±2.34 | 30.93±0.00 | 25.11±9.03 | 24.28±8.30 |
SAGE | ||||||
LDSAGE | 80.26±0.45 | 76.55±0.94 | 65.39±3.49 | 23.25±0.16 | 20.89±3.74 | 20.79±4.51 |
GIN | ||||||
LDGIN | 79.21±0.56 | 69.27±1.70 | 39.34±7.51 | 31.80±3.33 | 31.79±6.98 | 30.14±7.13 |
Method | Layer 2 | Layer 4 | Layer 6 | Layer 8 | Layer 16 | Layer 32 |
---|---|---|---|---|---|---|
GCN | ||||||
LDGCN | 91.53±0.41 | 86.23±0.55 | 64.37±3.01 | 53.96±0.87 | 19.82±2.75 | 15.53±1.15 |
GATv2 | ||||||
LDGATv2 | 90.26±1.07 | 88.70±1.24 | 75.59±4.00 | 35.70±7.01 | 22.90±0.00 | |
SAGE | ||||||
LDSAGE | 91.60±0.31 | 86.33±0.62 | 64.79±1.58 | 47.54±9.10 | 13.83±3.66 | 12.80±2.76 |
GIN | ||||||
LDGIN | 90.88±0.72 | 66.10±3.36 | 22.37±4.25 | 20.61±3.56 | 8.27±3.31 |
Method | Layer 2 | Layer 4 | Layer 6 | Layer 8 | Layer 16 | Layer 32 |
---|---|---|---|---|---|---|
GCN | ||||||
LDGCN | 80.81±0.26 | 77.92±2.34 | 70.30±0.65 | 59.39±1.28 | 55.85±2.00 | 54.50±2.82 |
GATv2 | ||||||
LDGATv2 | 72.59±1.02 | 70.28±2.09 | 28.28±20.47 | 31.10±11.08 | ||
SAGE | ||||||
LDSAGE | 80.92±0.41 | 75.61±1.91 | 63.87±1.23 | 56.06±6.89 | 33.28±0.67 | 37.29±7.52 |
GIN | ||||||
LDGIN |
4.3 Evaluation of different GNNs architectures
This section presents an investigation of applying our layer-diverse sampling method to different GNN architectures on the Cora, CS and Computers datasets, which have different graph densities.
Setup. Besides GCN (Kipf & Welling, 2017), the layer-diverse sampling method was applied to GATv2 (Brody et al., 2022), SAGE (Hamilton et al., 2017) and GIN-$\epsilon$ (Xu et al., 2019), which are called LDGATv2, LDSAGE and LDGIN-$\epsilon$ respectively in the following. We repeated the experiments using 1% of the nodes selected for negative sampling. The numbers of layers of the different models are set as $\{2, 4, 6, 8, 16, 32\}$. Our aim was to show that the layer-diverse negative sampling is applicable to different GNN architectures and helps these models to relieve the over-smoothing problem, so as to achieve better results when the models become deeper.
Results. The results are shown in Tables 7, 8 and 9, which compare the performance of GNN architectures with and without the layer-diverse negative sampling method on the Cora, CS and Computers datasets. It is clear that the layer-diverse methods (LDGCN, LDGATv2, LDSAGE, LDGIN) consistently outperform their counterparts without layer-diverse sampling (GCN, GATv2, SAGE, GIN) across all layer sizes on the Cora and CS datasets. Although all models showed a trend of decreasing accuracy as the number of layers increased, the layer-diverse methods consistently performed better, implying that these methods enhance the effectiveness of GNNs in capturing informative samples for learning better graph representations. For example, LDGATv2 outperforms GATv2 on CS, with the highest performance improvement observed at Layer 6 (from 52.19 to 75.59); LDSAGE outperforms SAGE at all layer sizes on Computers, with the highest performance improvement observed at Layer 6 (from 21.59 to 63.87).
Interestingly, it was observed that GIN failed to converge for all layer settings on the Computers dataset. As a result, the layer-diverse method, which had consistently shown improvements on other GNN architectures, couldn’t enhance the performance of GIN on the Computers dataset. This outcome underlines the fact that the layer-diverse approach may not be universally applicable or beneficial for all GNN architectures and datasets and that individual characteristics of the networks and the data can play a significant role.
In summary, the layer-diverse negative sampling methods have consistently improved performance across various architectures and datasets, supporting their effectiveness in graph-based learning tasks. They have potential for further exploration in other tasks or architectures, and could be a promising direction for improving the performance of GNNs, especially those with multiple layers.
Citeseer | Cora | PubMed | Coauthor-CS | Computers | Photo | ogbn-arxiv | |
GCN | |||||||
RGCN | |||||||
MCGCN | |||||||
PGCN | |||||||
D2GCN | |||||||
LDGCN | 74.71±2.60 | 75.24±3.69 | 88.88±7.94 | 73.25±3.57 | 62.33±6.69 | 79.45±4.19 | 81.09±0.91 |
Citeseer | Cora | PubMed | Coauthor-CS | Computers | Photo | ogbn-arxiv | |
GCN | |||||||
RGCN | |||||||
MCGCN | |||||||
PGCN | |||||||
D2GCN | |||||||
LDGCN | 74.29±2.27 | 74.92±5.56 | 81.40±3.55 | 73.70±1.39 | 62.18±5.11 | 82.67±1.43 | 78.33±1.24 |
4.4 Evaluation of Over-smoothing and Over-squashing
Over-smoothing. To measure the smoothness of the graph representations, we employed the Mean Average Distance (MAD) metric (Chen et al., 2020), which was computed as $\mathrm{MAD}=\frac{\sum_i \bar{D}_i}{\sum_i \mathbb{1}[\bar{D}_i \neq 0]}$, where $\bar{D}_i = \frac{\sum_j D_{ij}}{\sum_j \mathbb{1}[D_{ij}\neq 0]}$, and $D_{ij}=1-\frac{\bm{h}_i^{\top}\bm{h}_j}{\|\bm{h}_i\|\,\|\bm{h}_j\|}$ is the cosine distance between nodes $i$ and $j$. Our comparison between LDGCN and other negative sampling methods is presented in Table 10 and Table 11. As can be seen from these results, LDGCN's MAD is higher than the other methods on all datasets. All the negative sampling methods, except for GCN, had relatively high MADs, indicating that adding negative samples to the message passing increases the distance between nodes. These results confirm our argument in Section 3.2 that incorporating negative samples into the convolution increases the upper bound of the distance between nodes.
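A short sketch of this metric, following the definition of Chen et al. (2020) over all node pairs (the optional pair mask is an added assumption for generality):

```python
import numpy as np

def mean_average_distance(H, mask=None):
    """Mean Average Distance (MAD) over pairwise cosine distances.

    H:    [N, d] node representations.
    mask: optional [N, N] 0/1 matrix restricting which node pairs are counted;
          defaults to all pairs.
    """
    Hn = H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-12)
    D = 1.0 - Hn @ Hn.T                                      # cosine distances D_ij
    if mask is not None:
        D = D * mask
    row_nonzero = np.maximum((D != 0).sum(axis=1), 1)
    row_avg = D.sum(axis=1) / row_nonzero                    # average distance per node
    nonzero_rows = row_avg != 0
    return float(row_avg[nonzero_rows].mean()) if nonzero_rows.any() else 0.0
```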
ACC | MAD | |
---|---|---|
GCN | ||
GCN+FA | ||
LDGCN |
Over-squashing. Using the Cora dataset, we created a graph with a bottleneck where only one edge linked two distinct communities. We then compared LDGCN with the basic GCN method (Kipf & Welling, 2017) and the GCN+FA method (Alon & Yahav, 2021), where the last layer of GCN was fully connected. More information on the methods used to construct the graph, as well as the experiment settings, can be found in Appendix A.3.3. Table 12 presents the results in terms of accuracy and MAD. GCN+FA added too many edges in the graph at the last layer, which resulted in over-smoothing problems, and MAD went to zero. In contrast, our method reduces the likelihood of over-squashing and improves classification accuracy without the negative side effect of over-smoothing.
4.5 Evaluation of Time Complexity
The computational complexities of Eqs. (9) and (10) are $O(M)$ and $O(M^{2})$ respectively for $\bm{V}\in\mathbb{R}^{M\times M}$. Let $\bar{d}$ be the average node degree, which is a constant for a given dataset, and $M\approx\bar{d}^{\,p}$ the size of the candidate set. The complexity of the squeezing loop in Algorithm 2 is then at most $O(M^{3})$. Since matrix decomposition is an essential step in DPP, with complexity $O(M^{3})$, if we take into account every node in the sampling process, the total one-time cost of our method would be $O(NM^{3})$. To reduce this cost, we can do negative sampling only on a fraction of the nodes. Experiments in Section 4.2 show that negative sampling on a random 1% or 10% of nodes suffices to achieve good performance in general. Taking Citeseer as an example, Table 13 shows the computational time per epoch and per run for various methods under 4 layers.
Methods | Time (s) /Epoch | Time (s) /Run |
---|---|---|
GCN | ||
SAGE | ||
GATv2 | ||
GIN | ||
AERO | ||
MCGCN | ||
PGCN | ||
RGCN | ||
D2GCN-1% | ||
LDGCN-1% | ||
D2GCN-10% | ||
LDGCN-10% |
4.6 Case Study for Different Negative Sampling Methods
We conducted a case study using the Cora dataset to provide a qualitative comparison between our proposed Layer-Diverse Graph Convolutional Network (LDGCN) and the existing Determinantal Point Process (DPP) based model (D2GCN).
We implemented a 2-layer GNN for both the D2GCN (without layer diversity) and our LDGCN (with layer diversity). We employed the respective sampling strategies for each model and collected the indices of nodes sampled in two consecutive layers. When visualizing these nodes in a 2D space, we used the original node features as shown in Figure 9. The left panel depicts the sampling results from the D2GCN model, and the right panel illustrates the sampling by our LDGCN model. Overlap nodes from two successive layers are highlighted in dark blue for clarity.
From this case study, we observe two key outcomes:
• Reduced overlap in sampling: The LDGCN model demonstrates fewer overlap nodes than the D2GCN model. This finding substantiates our claim that layer-diverse negative sampling effectively reduces the likelihood of resampling duplicate nodes across layers.
• Enhanced sample diversity: Aside from reducing overlap nodes, the LDGCN model's samples are more uniformly distributed in the 2D space. This contrasts with the D2GCN model, where samples appear more clustered and thus exhibit a higher spatial overlap. The more dispersed samples from the LDGCN model suggest that layer diversity contributes to increased sample diversity and a more distinct representation for each layer.
5 Related Work
GNNs and their variants. GNNs are typically divided into two categories: spectral-based and spatial-based. A widely used and straightforward example of the first category is the graph convolutional network (GCN) (Kipf & Welling, 2017), which utilizes first-order approximations of spectral graph convolutions. Many variations have been proposed based on GCN, such as GPRGNN (Chien et al., 2021), GNN-LF/HF (Zhu et al., 2021), and UFG (Zheng et al., 2021), which uses graph framelet transforms to define convolution. In the spatial stream, GraphSAGE (Hamilton et al., 2017) is a well-known model. It utilizes node attribute information to effectively generate representations for previously unseen data. Xu et al. (2019) provide a theoretical analysis of GNNs' representational power in capturing various graph structures and propose the Graph Isomorphism Network. Besides these two models, there are numerous spatial-based methods, such as GAT (Velickovic et al., 2018) and PPNP (Klicpera et al., 2019), to mention just a few.
Negative sampling in GNNs. All the GNNs previously mentioned are based on positive sampling. In terms of negative sampling, there are roughly two kinds of negative sampling methods for graph representation learning. The first includes methods such as random selection (Kim & Oh, 2021), Monte Carlo chain-based selection (Yang et al., 2020), and personalised PageRank-based selection (Ying et al., 2018). While these methods do find negative samples, they often have a high degree of redundancy, or small clusters are overwhelmed by large clusters. These do not meet the criteria for obtaining good negative samples as proposed in Duan et al. (2022). Duan et al. (2022) attempt to find negative samples that meet the above criteria and focus on controlling the diversity of negative samples using DPP (Kulesza & Taskar, 2012). However, the found samples were still highly redundant across layers, and it has not yet been confirmed whether these samples meet the criteria for being good negative samples.
DPP and its applications. The determinantal point process (DPP) was first introduced to the field of machine learning by Kulesza & Taskar (2012), together with the $k$-determinantal point process ($k$-DPP). The $k$-DPP is a generalization of the DPP for sampling a fixed number of items, $k$, rather than a variable number; it is defined by a positive semidefinite kernel matrix that encodes the similarity between the items in the candidate set. The $k$-DPP method for negative sampling in graph representation learning is a way of selecting negative samples by controlling their diversity using the $k$-DPP. This method is particularly effective in capturing the property of repulsion and has been successfully applied to various scenarios such as sequential labelling (Qiao et al., 2015), document summarization (Cho et al., 2019), and video summarization (Zheng & Lu, 2020).
6 Conclusion
In this paper, we presented a novel approach for negative sampling in graph representation learning, based on a layer-diverse DPP sampling method and space squeezing technique. Our method is able to significantly reduce the redundancy associated with negative sampling, resulting in improved overall classification performance. We also provided an in-depth analysis of why negative samples are beneficial for GNNs and how they help to address common issues such as over-smoothing, GNN expressivity, and over-squashing. Through extensive experiments, we confirmed that our method can effectively improve graph learning ability. Furthermore, our approach can be applied to various types of graph learning tasks, and it is expected to have a wide range of potential applications.
Acknowledgments
This work is supported by the Australian Research Council under Australian Laureate Fellowships FL190100149 and Discovery Early Career Researcher Award DE200100245.
References
- Alon & Yahav (2021) Uri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical implications. In 9th International Conference on Learning Representations, (ICLR 2021), Virtual Event, Austria, 2021.
- Brody et al. (2022) Shaked Brody, Uri Alon, and Eran Yahav. How attentive are graph attention networks? In The Tenth International Conference on Learning Representations,(ICLR 2022), Virtual Event, April 25-29, 2022.
- Chen et al. (2020) Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, and Xu Sun. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. In Proceedings of The 34th AAAI Conference on Artificial Intelligence (AAAI 2020), New York, NY, USA, February 7-12, pp. 3438–3445, 2020.
- Chien et al. (2021) Eli Chien, Jianhao Peng, Pan Li, and Olgica Milenkovic. Adaptive universal generalized pagerank graph neural network. In 9th International Conference on Learning Representations, (ICLR 2021), Virtual Event, Austria, May 3-7, 2021.
- Cho et al. (2019) Sangwoo Cho, Logan Lebanoff, Hassan Foroosh, and Fei Liu. Improving the similarity measure of determinantal point processes for extractive multi-document summarization. In Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL 2019), Florence, Italy, July 28- August 2, pp. 1027–1038, 2019.
- Craven et al. (1998) Mark Craven, Dan DiPasquo, Dayne Freitag, Andrew McCallum, Tom M. Mitchell, Kamal Nigam, and Seán Slattery. Learning to extract symbolic knowledge from the world wide web. In Proceedings of the Fifteenth National Conference on Artificial Intelligence, (AAAI 1998), Madison, Wisconsin, USA, pp. 509–516. AAAI Press / The MIT Press, 1998.
- Duan et al. (2022) Wei Duan, Junyu Xuan, Maoying Qiao, and Jie Lu. Learning from the dark: Boosting graph convolutional neural networks with diverse negative samples. In Thirty-Sixth AAAI Conference on Artificial Intelligence, (AAAI 2022), Virtual Event, February 22 - March 1, pp. 6550–6558, 2022.
- Duan et al. (2023) Wei Duan, Junyu Xuan, Maoying Qiao, and Jie Lu. Graph convolutional neural networks with diverse negative samples via decomposed determinant point processes. IEEE Transactions on Neural Networks and Learning Systems, 2023. Accepted on 30 August 2023.
- Geerts et al. (2021) Floris Geerts, Filip Mazowiecki, and Guillermo A. Pérez. Let’s agree to degree: comparing graph convolutional networks in the message-passing framework. In Proceedings of the 38th International Conference on Machine Learning (ICML 2021), Virtual Event, July 18-24, pp. 3640–3649, 2021.
- Hamilton et al. (2017) William L. Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Proceedings of the 30th International Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, December 4-9, pp. 1024–1034, 2017.
- Hough et al. (2006) J Ben Hough, Manjunath Krishnapur, Yuval Peres, and Bálint Virág. Determinantal processes and independence. Probability Surveys, 3:206–229, 2006.
- Hough et al. (2009) J. Ben Hough, Manjunath Krishnapur, Yuval Peres, and Bálint Virág. Zeros of gaussian analytic functions and determinantal point processes, volume 51 of University Lecture Series. American Mathematical Society, 2009.
- Hu et al. (2020) Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems (NIPS 2020), Virtual Event, December 6-12, 2020.
- Karhadkar et al. (2022) Kedar Karhadkar, Pradeep Kr. Banerjee, and Guido Montúfar. Fosr: First-order spectral rewiring for addressing oversquashing in gnns. 2022. doi: 10.48550/arXiv.2210.11790.
- Kim & Oh (2021) Dongkwan Kim and Alice H. Oh. How to find your friendly neighbourhood: graph attention design with self-supervision. In Proceedings of the 9th International Conference on Learning Representations (ICLR 2021), Virtual Event, Austria, May 3-7, 2021.
- Kipf & Welling (2017) Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations (ICLR 2017), Toulon, France, April 24-26, 2017.
- Klicpera et al. (2019) Johannes Klicpera, Aleksandar Bojchevski, and Stephan Günnemann. Predict then propagate: Graph neural networks meet personalized pagerank. In 7th International Conference on Learning Representations, (ICLR 2019), 2019.
- Kulesza & Taskar (2012) Alex Kulesza and Ben Taskar. Determinantal point processes for machine learning. Foundations and Trends in Machine Learning, 5(2-3):123–286, 2012.
- Lan et al. (2022) Shiyong Lan, Yitong Ma, Weikang Huang, Wenwu Wang, Hongyu Yang, and Pyang Li. DSTAGNN: dynamic spatial-temporal aware graph neural network for traffic flow forecasting. In International Conference on Machine Learning (ICML 2022), Baltimore, Maryland, USA, July 17-23, pp. 11906–11917, 2022.
- Lee et al. (2023) Soo Yong Lee, Fanchen Bu, Jaemin Yoo, and Kijung Shin. Towards deep attention in graph neural networks: Problems and remedies. In Proceedings of the 40th International Conference on Machine Learning (ICML 2023), Honolulu, Hawaii, USA, volume 202, pp. 18774–18795, 2023.
- Parés et al. (2017) Ferran Parés, Dario Garcia-Gasulla, Armand Vilalta, Jonathan Moreno, Eduard Ayguadé, Jesús Labarta, Ulises Cortés, and Toyotaro Suzumura. Fluid communities: A competitive, scalable and diverse community detection algorithm. In Complex Networks & Their Applications VI - Proceedings of Complex Networks 2017, (COMPLEX NETWORKS 2017), Lyon, France, November 29 - December 1, volume 689, pp. 229–240, 2017.
- Qiao et al. (2015) Maoying Qiao, Wei Bian, Richard Yi Da Xu, and Dacheng Tao. Diversified hidden models for sequential labeling. IEEE Transactions on Knowledge and Data Engineering, 27(11):2947–2960, 2015.
- Rong et al. (2020) Yu Rong, Wenbing Huang, Tingyang Xu, and Junzhou Huang. Dropedge: Towards deep graph convolutional networks on node classification. In 8th International Conference on Learning Representations, (ICLR 2020), Addis Ababa, Ethiopia, April 26-30, 2020.
- Sen et al. (2008) Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Gallagher, and Tina Eliassi-Rad. Collective classification in network data. AI Magazine, 29(3):93–106, 2008.
- Shchur et al. (2018) Oleksandr Shchur, Maximilian Mumme, Aleksandar Bojchevski, and Stephan Günnemann. Pitfalls of graph neural network evaluation. CoRR, abs/1811.05868, 2018.
- Sun et al. (2020) Mengying Sun, Sendong Zhao, Coryandar Gilvary, Olivier Elemento, Jiayu Zhou, and Fei Wang. Graph convolutional networks for computational drug development and discovery. Briefings Bioinform., 21(3):919–935, 2020.
- Topping et al. (2022) Jake Topping, Francesco Di Giovanni, Benjamin Paul Chamberlain, Xiaowen Dong, and Michael M. Bronstein. Understanding over-squashing and bottlenecks on graphs via curvature. In The Tenth International Conference on Learning Representations, (ICLR 2022), Virtual Event, April 25-29, 2022.
- Velickovic et al. (2018) Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In 6th International Conference on Learning Representations, (ICLR 2018), Vancouver, BC, Canada, April 30 - May 3, 2018.
- Xu et al. (2019) Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In 7th International Conference on Learning Representations (ICLR 2019), New Orleans, LA, USA, 2019.
- Yang et al. (2020) Zhen Yang, Ming Ding, Chang Zhou, Hongxia Yang, Jingren Zhou, and Jie Tang. Understanding negative sampling in graph representation learning. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD 2020), Virtual Event, CA, USA, August 23-27, pp. 1666–1676, 2020.
- Ying et al. (2018) Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L. Hamilton, and Jure Leskovec. Graph convolutional neural networks for web-scale recommender systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD 2018), London, UK, August 19-23, pp. 974–983, 2018.
- Yu & Qin (2020) Wenhui Yu and Zheng Qin. Graph convolutional network for recommendation with low-pass collaborative filters. In Proceedings of the 37th International Conference on Machine Learning (ICML 2020), Virtual Event, July 13-18, volume 119, pp. 10936–10945, 2020.
- Zhao & Akoglu (2020) Lingxiao Zhao and Leman Akoglu. Pairnorm: Tackling oversmoothing in gnns. In 8th International Conference on Learning Representations, (ICLR 2020), Addis Ababa, Ethiopia, April 26-30, 2020, 2020.
- Zheng & Lu (2020) Jiping Zheng and Ganfeng Lu. k-sdpp: Fixed-size video summarization via sequential determinantal point processes. In Christian Bessiere (ed.), Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI 2020), pp. 774–781, 2020.
- Zheng et al. (2021) Xuebin Zheng, Bingxin Zhou, Junbin Gao, Yu Guang Wang, Pietro Lió, Ming Li, and Guido Montúfar. How framelets enhance graph neural networks. In Proceedings of the 38th International Conference on Machine Learning (ICML 2021), Virtual Event, 18-24 July, volume 139, pp. 12761–12771, 2021.
- Zhu et al. (2021) Meiqi Zhu, Xiao Wang, Chuan Shi, Houye Ji, and Peng Cui. Interpreting and unifying graph neural networks with an optimization framework. In Jure Leskovec, Marko Grobelnik, Marc Najork, Jie Tang, and Leila Zia (eds.), WWW ’21: The Web Conference (WWW 2021), Virtual Event / Ljubljana, Slovenia, April 19-23, 2021, pp. 1215–1226, 2021.
Appendix A Appendix
A.1 Detail for Determinantal Point Processes (DPP)
A point process on a ground set $\mathcal{Z}$ is a probability measure over "point patterns", which are finite subsets of $\mathcal{Z}$ (Kulesza & Taskar, 2012). A determinantal point process is a probability measure on all possible subsets of the ground set $\mathcal{Z}=\{1,2,\dots,M\}$ with a size of $M$. For every subset $Y\subseteq\mathcal{Z}$, a DPP (Hough et al., 2009) defined via a positive semidefinite matrix $\bm{L}$ is formulated as

$$P_{\bm{L}}(Y) = \frac{\det(\bm{L}_Y)}{\det(\bm{L}+\bm{I})}, \qquad (29)$$

where $\det(\cdot)$ denotes the determinant of a given matrix, $\bm{L}_Y$ is a real and symmetric matrix indexed by the elements of $Y$, and $\det(\bm{L}+\bm{I})$ is a normalisation term that is constant once the ground dataset is fixed.
DPP has an intuitive geometric interpretation. If we have a positive semidefinite matrix $\bm{L}$, there is always a matrix $\bm{B}$ that satisfies $\bm{L}=\bm{B}^{\top}\bm{B}$. Let $\bm{b}_i$ be the columns of $\bm{B}$. A determinantal operator can then be interpreted geometrically as

$$P_{\bm{L}}(Y) \propto \det(\bm{L}_Y) = \mathrm{Vol}^{2}\big(\{\bm{b}_i\}_{i\in Y}\big), \qquad (30)$$

where the right-hand side of the equation is the squared $|Y|$-dimensional volume of the parallelepiped spanned by the columns of $\bm{B}$ corresponding to the elements in $Y$. Intuitively, diverse sets are more probable because their feature vectors are more orthogonal and span larger volumes.
One important variant of DPP is $k$-DPP (Kulesza & Taskar, 2012). $k$-DPP measures only $k$-sized subsets of $\mathcal{Z}$ rather than all of them including the empty subset. It is formally defined as

$$P_{\bm{L}}^{k}(Y) = \frac{\det(\bm{L}_Y)}{\sum_{|Y'|=k}\det(\bm{L}_{Y'})}, \qquad (31)$$

with the cardinality of the subset $Y$ being a fixed size $k$, i.e., $|Y|=k$. Here, we use a sampling method based on eigendecomposition (Hough et al., 2006; Kulesza & Taskar, 2012). Eq. (31) can be rewritten as:
$$P_{\bm{L}}^{k}(Y) = \frac{\det(\bm{L}_Y)}{e_k(\lambda_1,\dots,\lambda_M)}, \qquad (32)$$

where $e_k(\lambda_1,\dots,\lambda_M)$ is the elementary symmetric polynomial on the eigenvalues $\lambda_1,\dots,\lambda_M$ of $\bm{L}$, defined as:

$$e_k(\lambda_1,\dots,\lambda_M) = \sum_{\substack{J\subseteq\{1,\dots,M\}\\ |J|=k}} \;\prod_{m\in J}\lambda_m. \qquad (33)$$
Following Kulesza & Taskar (2012), Eq. (32) is decomposed into elementary parts as

$$P_{\bm{L}}^{k}(Y) = \frac{1}{e_k(\lambda_1,\dots,\lambda_M)} \sum_{|J|=k} P^{V_J}(Y) \prod_{m\in J}\lambda_m, \qquad (34)$$

where $V_J$ denotes the set $\{\bm{v}_m\}_{m\in J}$, $\bm{v}_m$ and $\lambda_m$ are the eigenvectors and eigenvalues of the $\bm{L}$-ensemble, respectively, and $P^{V_J}$ is the elementary DPP defined by $V_J$. Based on Eq. (34), to keep the paper self-contained, the complete process of sampling from a $k$-DPP is given in Algorithm 3. More details about DPP and $k$-DPP can be found in Kulesza & Taskar (2012).
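For reference, a compact NumPy sketch of this eigendecomposition-based $k$-DPP sampler is given below; it follows the published algorithm of Kulesza & Taskar (2012) rather than reproducing Algorithm 3 verbatim, and the layer-diverse variant of Section 3.1 would simply pass the squeezed matrix in place of the eigenvector matrix.

```python
import numpy as np

def elementary_symmetric(lam, k):
    """E[j, n] = e_j(lam_1, ..., lam_n), computed by the standard recursion."""
    N = len(lam)
    E = np.zeros((k + 1, N + 1))
    E[0, :] = 1.0
    for j in range(1, k + 1):
        for n in range(1, N + 1):
            E[j, n] = E[j, n - 1] + lam[n - 1] * E[j - 1, n - 1]
    return E

def sample_k_dpp(V, lam, k, rng=None):
    """Sample a size-k subset given the eigenvectors V (columns) and eigenvalues lam
    of the L-ensemble, following Kulesza & Taskar (2012)."""
    rng = np.random.default_rng() if rng is None else rng
    N = len(lam)
    E = elementary_symmetric(lam, k)
    # Phase 1: choose which eigenvectors define the elementary DPP.
    J, rem = [], k
    for n in range(N, 0, -1):
        if rem == 0:
            break
        if rng.random() < lam[n - 1] * E[rem - 1, n - 1] / E[rem, n]:
            J.append(n - 1)
            rem -= 1
    W = V[:, J]                              # N x k matrix spanning the elementary DPP
    # Phase 2: draw k items, one at a time.
    chosen = []
    for _ in range(len(J)):
        probs = (W ** 2).sum(axis=1)
        probs = probs / probs.sum()
        i = int(rng.choice(N, p=probs))      # pick item i with prob ~ squared row norm
        chosen.append(i)
        # Project the remaining columns onto the subspace orthogonal to e_i.
        j = int(np.argmax(np.abs(W[i])))
        w = W[:, j] / W[i, j]
        W = np.delete(W - np.outer(w, W[i]), j, axis=1)
        if W.shape[1] > 0:
            W, _ = np.linalg.qr(W)           # re-orthonormalise the remaining columns
    return chosen
```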
Since the sampling procedure in the DPP requires an eigendecomposition, the computational cost for Algorithm 3 could be $O(N^{3})$, where $N$ is the size of the ground set. If the sampling is performed for every node in the graph, the total will become an excessive $O(N^{4})$ for all $N$ nodes. Thus, the large size of the candidate set obtained by exploring the whole graph for negative samples would make such an approach impractical, even for a moderately sized graph.
To reduce the computational complexity, the shortest-path-based method (Duan et al., 2023) is first used to form a smaller but more effective candidate set $\mathcal{C}_i$ for node $v_i$, which is detailed in Algorithm 1. Using this method, the computational cost is approximately $O\big((\bar{d}^{\,p})^{3}\big)$, where $p$ is the path length (normally smaller than the diameter of the graph) and $\bar{d}$ is the average degree of the graph. As an example, consider the Citeseer graph (Sen et al., 2008) with 3,327 nodes and $\bar{d}=2.74$. When using the experimental setting shown below, where $p=6$, we observe that $N=3{,}327$ is significantly larger than $\bar{d}^{\,p}\approx 423$.
A.2 Remark of Space Squeeze
Remark A.1.
Suppose the probability of re-picking node $v_j$ in $\mathcal{S}_i^{l-1}$ is $p$; the new probability of re-picking it in $\mathcal{S}_i^{l}$ would be reduced to roughly $(1-\omega)^{2}p$, where $\omega\in[0,1]$. It means that we can control the squeezing degree by $\omega$.
Proof.
Remark A.2.
For a node and , if and are sufficiently similar to each other, then the probability of re-picking would also be reduced. In other words, we do not only reduce the re-picking probability of the sampled node; by reducing its re-picking probability, we also decrease the influence of nodes similar to it, making them less likely to be selected.
Proof.
For any two -length vectors and , if the following conditions are satisfied,
(37)
then we say that and are -similar to each other.
For a node and , the new representation of in the new space would be
(38)
If and are -similar to each other, we have
(39)
and, according to the conditions in (37), we have
(40)
When goes to , by the sandwich theorem, we have
(41)
That is to say, if a node is similar enough to the re-picked node, the space squeezing will also reduce the probability of selecting that node; the more similar the two nodes are, the less likely it is to be re-picked. Hence, we do not only reduce the re-picking probability of the sampled node itself, but also suppress its information, so that any node sharing similar information with it is unlikely to be selected. ∎
A.3 Experiment Details
A.3.1 Dataset statistics
| Dataset | Nodes | Edges | Classes | Features | Avg. Degree | Label Rate | Val / Test | Epochs |
|---|---|---|---|---|---|---|---|---|
| Citeseer | 3,327 | 9,104 | 6 | 3,703 | 2.74 | 3.61% | 500/1000 | 200 |
| Cora | 2,708 | 10,556 | 7 | 1,443 | 3.90 | 5.17% | 500/1000 | 200 |
| PubMed | 19,717 | 88,648 | 3 | 500 | 4.50 | 0.30% | 500/1000 | 200 |
| CoauthorCS | 18,333 | 163,788 | 15 | 6,805 | 8.93 | 1.64% | 500/1000 | 200 |
| Computers | 13,752 | 491,722 | 10 | 767 | 35.76 | 1.45% | 500/1000 | 200 |
| Photo | 7,650 | 238,162 | 8 | 745 | 31.13 | 2.09% | 500/1000 | 400 |
| Ogbn-arxiv | 169,343 | 1,166,243 | 40 | 128 | 35.76 | 53.70% | 29,799/48,603 | 200 |
| Cornell | 183 | 298 | 5 | 1,703 | 1.63 | 48% | 59/37 | 200 |
| Texas | 183 | 325 | 5 | 1,703 | 1.78 | 48% | 59/37 | 200 |
| Wisconsin | 251 | 515 | 5 | 1,703 | 2.05 | 48% | 80/51 | 200 |
The dataset splits generally follow Kipf & Welling (2017). For the first six datasets, we choose 20 nodes per class as the training set. For Ogbn-arxiv, because the graph is much larger, we choose 100 nodes per class as the training set.
The first six datasets are downloaded from PyTorch Geometric (PyG): https://pytorch-geometric.readthedocs.io/en/latest/modules/datasets.html. Ogbn-arxiv is downloaded from the Open Graph Benchmark (OGB): https://ogb.stanford.edu/docs/nodeprop/.
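As an illustration, the per-class training split described above can be constructed roughly as follows; this sketch uses the PyG Planetoid loader for Citeseer, and the random per-class selection is an assumption rather than the exact split script we used.

```python
import torch
from torch_geometric.datasets import Planetoid

# Load Citeseer from PyG; Cora and PubMed work the same way.
dataset = Planetoid(root='data/Planetoid', name='Citeseer')
data = dataset[0]

# Build a training mask with 20 nodes per class, mirroring the split described above.
train_mask = torch.zeros(data.num_nodes, dtype=torch.bool)
for c in range(dataset.num_classes):
    idx = (data.y == c).nonzero(as_tuple=False).view(-1)
    chosen = idx[torch.randperm(idx.numel())[:20]]
    train_mask[chosen] = True
```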
A.3.2 Implementation Details
The experimental task is standard node classification. We set the maximum length of the shortest path to 6 in Algorithm 1, which means that, after discarding the first-order neighbours, there are still 5 nodes at the end of each of the different shortest paths; these nodes determine the size of the candidate set for node $i$. When selecting 1% or 10% of the nodes to perform negative sampling, we choose nodes whose degree is greater than the average degree of the graph.
The negative rate is treated as a trainable parameter in all models. Each model was trained using the Adam optimiser with a learning rate of 0.02, and the number of hidden channels is set to 16 for all models. Each model was tested ten times on each dataset. The convolution layers of GCN (Kipf & Welling, 2017), SAGE (Hamilton et al., 2017), GATv2 (Brody et al., 2022) and GIN (Xu et al., 2019) are implemented with PyTorch Geometric (https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#convolutional-layers). AERO (Lee et al., 2023) is implemented using the code published on GitHub (https://github.com/syleeheal/AERO-GNN/). All experiments were conducted on an Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz and an NVIDIA A100 PCIe 80GB GPU. The software used for the experiments is Python 3.7.13, PyTorch 1.12.1, torch-geometric 2.1.0, torch-scatter 2.0.9, torch-sparse 0.6.15, torchvision 0.13.1, ogb 1.3.4, numpy 1.21.5, and CUDA 11.6.
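For reference, a condensed sketch of the shared training configuration (Adam with a learning rate of 0.02, 16 hidden channels, 200 epochs) is shown below; the plain two-layer GCN and the public Planetoid split are only stand-ins here, and the sketch omits the negative-sampling terms.

```python
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv

dataset = Planetoid(root='data/Planetoid', name='Citeseer')
data = dataset[0]

class GCN(torch.nn.Module):
    def __init__(self, in_channels, hidden_channels, out_channels):
        super().__init__()
        self.conv1 = GCNConv(in_channels, hidden_channels)
        self.conv2 = GCNConv(hidden_channels, out_channels)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

# Hidden size 16 and Adam with lr = 0.02, matching the setting described above.
model = GCN(dataset.num_features, 16, dataset.num_classes)
optimizer = torch.optim.Adam(model.parameters(), lr=0.02)

model.train()
for epoch in range(200):
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()
```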
Table 12: Statistics of the Cora-based graph with bottlenecks.

| Dataset | Nodes | Edges | Classes | Features | Avg. Degree | Label Rate | Val / Test | Epochs |
|---|---|---|---|---|---|---|---|---|
| Cora-based | 915 | 3,054 | 7 | 1,443 | 3.33 | 8% | 400/400 | 200 |
Figure 10: Visualisation of the Cora-based graph with bottlenecks.
A.3.3 Over-squashing Experiment Details
Algorithm for constructing a graph with bottlenecks. The central idea of the construction is to segment the initial graph into distinct clusters and incrementally eliminate the nodes linking these clusters until only one edge remains connecting the two communities. This edge can be considered a bottleneck in the graph. The pseudo-code for constructing this graph is shown in Algorithm 4.
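A simplified sketch of this construction is given below; the use of Kernighan-Lin bisection to form the two communities and the greedy order of deleting linking nodes are assumptions of the sketch rather than the exact Algorithm 4.

```python
import networkx as nx

def build_bottleneck_graph(G, seed=0):
    """Construct a two-community graph connected by a single bottleneck edge.

    Sketch: bisect the graph into two communities, designate one
    inter-community edge to keep, and delete the nodes that create any other
    links between the two communities.
    """
    part_a, _ = nx.algorithms.community.kernighan_lin_bisection(G, seed=seed)
    H = G.copy()

    def crossing_edges(graph):
        return [(u, v) for u, v in graph.edges() if (u in part_a) != (v in part_a)]

    cross = crossing_edges(H)
    if not cross:
        return H                       # the two parts are already disconnected
    keep_u, keep_v = cross[0]          # the edge that will act as the bottleneck

    # Remove an endpoint of every other crossing edge so only the kept edge survives.
    for u, v in cross:
        if {u, v} == {keep_u, keep_v} or not H.has_edge(u, v):
            continue
        victim = u if u not in (keep_u, keep_v) else v
        if H.has_node(victim):
            H.remove_node(victim)

    return H
```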
Detail of the Cora-based graph. It is important to consider a dataset's specific characteristics and purpose before modifying it, as modifications can affect the validity and reliability of the analysis and results obtained from it. We construct a graph with bottlenecks based on the real dataset Cora. The statistics of the resulting graph are shown in Table 12, and a visualisation is shown in Figure 10. As Cora is a citation dataset, it consists of a set of nodes representing scientific publications and edges representing citations between them. If some nodes and the corresponding edges are deleted from the dataset, this does not affect the authenticity of the dataset as long as it still represents the citation relationships among the remaining publications.
However, not all datasets are suitable for the above construction. Take the PROTEINS dataset as an example. This dataset represents proteins in a biological system, and its edges represent the interactions between them. If some nodes and the corresponding edges were deleted from the dataset, it could alter the overall structure and function of the modelled biological system, thus affecting the authenticity of the dataset.
Table 16: Highest accuracy achieved by each GNN model (GCN, GATv2, SAGE, GIN, AERO, RGCN, MCGCN, PGCN, D2GCN, and LDGCN) on Citeseer, Cora, PubMed, CS, Computers, Photo, and ogbn-arxiv.
A.4 Additional Experiment Results
A.4.1 Best Performance of Various GNN Models in Experiment 4.1
We thoroughly examined and tuned the number of layers for each model using the validation set, which gives a more nuanced picture of how layer configurations affect model performance. Table 16 reports the highest accuracy achieved by each GNN model across the datasets and indicates whether the model attained its peak performance with a 2-layer, 3-layer, or 4-layer configuration, denoted in brackets where applicable. The best-performing method for each dataset is highlighted in bold, and the second-best performance is underlined.
The results reveal that while a 2-layer setup is generally effective, certain models and datasets benefit from more layers. For instance, our LDGCN achieved its highest accuracies on the Cora and Photo datasets with 3 layers. An interesting pattern emerged from this analysis: methods that incorporate negative samples frequently achieved the second-best results. This observation suggests that adding negative samples to graph convolutional networks can significantly enhance a model's ability to learn different types of relationships, thereby improving overall performance. However, it is important to note that not all methods involving negative samples yield consistent performance across datasets. This variability highlights that the selection of negative samples is a non-trivial aspect of model design and that carefully chosen negative samples can substantially help GCNs learn from graph data.
A.4.2 Big Dataset
We conducted additional experiments using the Open Graph Benchmark's ogbn-mag dataset, which presents a more challenging environment with approximately 1.94 million nodes and over 21 million edges. The statistics of ogbn-mag are shown in Table 17.
Table 17: Statistics of the ogbn-mag dataset.

| Dataset | Nodes | Edges | Classes | Avg. Degree | Split Ratio (Train/Val/Test) | Epochs |
|---|---|---|---|---|---|---|
| ogbn-mag | 1,939,743 | 21,111,007 | 349 | 21.7 | 85/9/6 | 400 |
The eigendecomposition techniques employed in DPP sampling do introduce considerable computational complexity, which can lead to scaling challenges on very large graphs. To address this and maintain a balance between computational efficiency and the benefits of our approach, we strategically sampled a subset of 0.1% of the nodes from the ogbn-mag dataset for layer-diverse negative sampling. The results are shown in Table 18.
Despite the reduced sampling size, our LDGCN achieved higher accuracy on the ogbn-mag dataset than the baseline models. The results demonstrate that even when only a fraction of the nodes is subjected to layer-diverse negative sampling, the performance of the original GCN framework still improves. This evidences the efficacy of our proposed sampling method and suggests that it can be a valuable strategy for managing the trade-off between computational demand and performance in large-scale graph neural networks.
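As an illustration, such a small subset can be chosen as sketched below; ranking nodes by degree mirrors the degree-based selection described in Section A.3.2, but the exact rule used for ogbn-mag is an assumption here.

```python
import torch
from torch_geometric.utils import degree

def select_sampling_nodes(edge_index, num_nodes, frac=0.001):
    """Pick a small fraction of nodes to receive layer-diverse negative samples.

    Sketch: rank nodes by degree and keep the top `frac` of them.
    """
    deg = degree(edge_index[0], num_nodes=num_nodes)
    k = max(1, int(frac * num_nodes))
    return torch.topk(deg, k).indices
```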
Table 18: Node classification accuracy of GCN, GATv2, GraphSAGE, and LDGCN on ogbn-mag.
A.4.3 Graph Classification
We conducted additional experiments focusing on graph-level classification tasks, this time extending our approach to the Graph Isomorphism Network (GIN) model. We selected two well-known graph datasets, PROTEINS and MUTAG, for these experiments. We employed a 10-fold cross-validation scheme and used SUM readout pooling to aggregate node features at the graph level. Our experiments compared the performance of standard graph neural network models, namely GCN, GATv2, GraphSAGE, and GIN, with our modified version, the Layer-Diverse GIN (LD-GIN). The results of these experiments are shown in Table 19.
Table 19: Graph classification accuracy of GCN, GATv2, GraphSAGE, GIN, and LD-GIN on PROTEINS and MUTAG.
The results from these experiments are encouraging. Our LD-GIN model improved over the baseline models on both the PROTEINS and MUTAG datasets. This enhanced performance on graph-level classification tasks demonstrates the versatility of our layer-diverse negative sampling method: it not only remains effective across different graph structures but also adapts to varying task requirements, whether they involve node-level or graph-level classification. This extension to graph-level tasks broadens the applicability and relevance of our method.
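For reference, a minimal PyG sketch of the vanilla GIN graph-classification pipeline with SUM readout is shown below; it omits the layer-diverse negative sampling and the 10-fold cross-validation loop, and all hyperparameters are placeholders.

```python
import torch
import torch.nn.functional as F
from torch_geometric.datasets import TUDataset
from torch_geometric.loader import DataLoader
from torch_geometric.nn import GINConv, global_add_pool

class GIN(torch.nn.Module):
    """Two-layer GIN with SUM readout, as in the graph-level setup above."""
    def __init__(self, in_channels, hidden_channels, num_classes):
        super().__init__()
        def mlp(i, o):
            return torch.nn.Sequential(torch.nn.Linear(i, o), torch.nn.ReLU(),
                                       torch.nn.Linear(o, o))
        self.conv1 = GINConv(mlp(in_channels, hidden_channels))
        self.conv2 = GINConv(mlp(hidden_channels, hidden_channels))
        self.lin = torch.nn.Linear(hidden_channels, num_classes)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        x = global_add_pool(x, batch)          # SUM readout pooling
        return self.lin(x)

dataset = TUDataset(root='data/TUDataset', name='MUTAG')
loader = DataLoader(dataset, batch_size=32, shuffle=True)
model = GIN(dataset.num_features, 16, dataset.num_classes)
optimizer = torch.optim.Adam(model.parameters(), lr=0.02)

model.train()
for data in loader:
    optimizer.zero_grad()
    out = model(data.x, data.edge_index, data.batch)
    loss = F.cross_entropy(out, data.y)
    loss.backward()
    optimizer.step()
```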