
Neural Common Neighbor with Completion for Link Prediction

Xiyuan Wang1,2
[email protected]
Haotong Yang1,2,3
[email protected]
Muhan Zhang1∗
[email protected]
1Institute for Artificial Intelligence, Peking University.
2School of Intelligence Science and Technology, Peking University.
3Key Lab of Machine Perception (MoE)
∗Correspondence to Muhan Zhang.
Abstract

In this work, we propose a novel link prediction model and further boost it by studying graph incompleteness. First, we introduce MPNN-then-SF, an innovative architecture leveraging structural features (SF) to guide MPNN’s representation pooling, with its implementation, namely Neural Common Neighbor (NCN). NCN exhibits superior expressiveness and scalability compared with existing models, which can be classified into two categories: SF-then-MPNN, augmenting MPNN’s input with SF, and SF-and-MPNN, decoupling SF and MPNN. Second, we investigate the impact of graph incompleteness—the phenomenon that some links are unobserved in the input graph—on SF, such as common neighbors. Through dataset visualization, we observe that incompleteness reduces common neighbors and induces distribution shifts, significantly affecting model performance. To address this issue, we propose to use a link prediction model to complete the common neighbor structure. Combining this method with NCN, we propose Neural Common Neighbor with Completion (NCNC). NCN and NCNC outperform recent strong baselines by large margins, and NCNC further surpasses state-of-the-art models in standard link prediction benchmarks. Our code is available at https://github.com/GraphPKU/NeuralCommonNeighbor.

1 Introduction

Figure 1: The failure of MPNN in the link prediction task. $v_2$ and $v_3$ have equal MPNN node representations due to symmetry. However, with different pairwise relations, $(v_1,v_2)$ and $(v_1,v_3)$ should have different representations.

Link prediction is a crucial task in graph machine learning, finding applications in various domains, such as recommender systems (Zhang & Chen, 2020), knowledge graph completion (Zhu et al., 2021), and drug interaction prediction (Souri et al., 2022). Graph Neural Networks (GNNs) have gained prominence in link prediction tasks, with Graph Autoencoder (GAE) (Kipf & Welling, 2016) being a notable representative. GAE utilizes the Message Passing Neural Network (MPNN) (Gilmer et al., 2017) representations of the two individual target nodes to predict link existence. However, Zhang et al. (2021) point out a limitation of GAE: it overlooks pairwise relations between target nodes. For example, in Figure 1, GAE always produces the same prediction for the two links $(v_1,v_2)$ and $(v_1,v_3)$ despite their differing pairwise relationships, because MPNN generates identical representations for nodes $v_2,v_3$ due to graph symmetry. Nevertheless, the two links have different structural features. For example, $v_1$ and $v_2$ have a common neighbor $v_4$, while $v_1$ and $v_3$ do not have any. Therefore, various methods combine structural features (SF) and MPNN for better expressivity and have dominated the link prediction task (Zhang et al., 2021; Yun et al., 2021; Chamberlain et al., 2023).

Figure 2: Architectures for combining SF and MPNN. $A$ denotes the input graph structure. Existing works follow the (1) SF-then-MPNN or (2) SF-and-MPNN architectures. We propose a completely new architecture, (3) MPNN-then-SF.

However, models combining SF and MPNN still have much room for improvement. They can generally be grouped into two architectures: SF-then-MPNN and SF-and-MPNN, as shown in Figure 2. For example, SEAL (Zhang & Chen, 2018) adds target-link-specific hand-crafted features to the node features of the input graph of MPNN, whose output node representations are pooled to represent links. SEAL belongs to the SF-then-MPNN architecture, using SF to augment the input graph of MPNN. Though SF-then-MPNN models achieve provably high expressivity (Zhang et al., 2021), they require running MPNN on a different graph for each target link, leading to significant computational overhead. In contrast, Neo-GNN (Yun et al., 2021) and BUDDY (Chamberlain et al., 2023) decouple the structural features from MPNN, directly incorporating manually created pairwise features with the individual node representations produced by MPNN as link representations, necessitating only a single run of MPNN on the original graph. These models fall under the SF-and-MPNN category, where structural features and MPNN are independent. Such methods have limited expressivity. For example, SEAL can capture the representations of common neighbor nodes, while BUDDY can only count the number of common neighbors.

To address the drawbacks of the two architectures above, we propose the MPNN-then-SF architecture, which first applies MPNN to the original graph and then uses structural features to guide the pooling of node representations. This approach offers strong expressivity and scalability: similar to SF-then-MPNN models, MPNN-then-SF can capture the node features of common neighbors, and similar to SF-and-MPNN models, it runs MPNN only once for all target links. We introduce Neural Common Neighbor (NCN) as an instantiation of the MPNN-then-SF architecture. In experiments, NCN outperforms existing models in both scalability and performance.

Furthermore, since NCN heavily relies on the common neighbor structure, which is significantly affected by graph incompleteness, we also investigate the impact of incompleteness. Graph incompleteness is ubiquitous in link prediction tasks because the goal is to predict unobserved edges not present in the input graph. We empirically observe that incompleteness reduces the number of common neighbors and leads to a shift in the distribution of common neighbors between the training and test sets. These phenomena collectively lead to performance degradation. To mitigate this issue, we first employ NCN to complete the common neighbor structure and then apply NCN to the completed graph. In experiments, this method significantly improves the performance of NCN.

In conclusion, our contributions are as follows:

  • We introduce Neural Common Neighbor (NCN) with MPNN-then-SF architecture for link prediction, demonstrating superior performance and scalability compared to existing models.

  • We analyze the impact of graph incompleteness and propose Neural Common Neighbor with Completion (NCNC), which completes the input common neighbor structure and applies NCN to the completed graph. NCNC outperforms state-of-the-art models.

2 Preliminary

We consider an undirected graph $\mathcal{G}=(V,E,A,X)$, where $V=\{1,2,\ldots,n\}$ is a set of $n$ nodes, $E\subseteq V\times V$ is the set of edges, $X\in\mathbb{R}^{n\times F}$ is a node feature matrix whose $v$-th row $X_v$ is the feature of node $v$, and $A\in\mathbb{R}^{n\times n}$ is a symmetric adjacency matrix with $A_{uv}=1$ if $(u,v)\in E$ and $0$ otherwise. The degree of node $u$ is $d(u,A):=\sum_{v=1}^{n}A_{uv}$. Node $u$'s neighbors are the nodes connected to $u$: $N(u,A):=\{v\,|\,v\in V,A_{uv}>0\}$. For simplicity, we use $N(u)$ to denote $N(u,A)$ when $A$ is fixed. The common neighbors of $i$ and $j$ are the nodes connected to both: $N(i)\cap N(j)$.
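To make these definitions concrete, the following minimal Python sketch (our own illustration, not code from the released repository; it assumes a SciPy sparse adjacency matrix) computes degrees, neighbor sets, and common neighbors.

```python
import numpy as np
from scipy.sparse import csr_matrix

def neighbors(A: csr_matrix, u: int) -> set:
    """N(u, A): nodes v with A_{uv} > 0."""
    row = A.getrow(u)
    return set(row.indices[row.data > 0])

def degree(A: csr_matrix, u: int) -> int:
    """d(u, A): row sum of A at node u."""
    return int(A.getrow(u).sum())

def common_neighbors(A: csr_matrix, i: int, j: int) -> set:
    """N(i) ∩ N(j)."""
    return neighbors(A, i) & neighbors(A, j)

# Toy graph in the spirit of Figure 1: edges (v1, v4) and (v2, v4),
# so v4 (index 3) is a common neighbor of v1 (index 0) and v2 (index 1).
rows, cols = [0, 3, 1, 3], [3, 0, 3, 1]
A = csr_matrix((np.ones(4), (rows, cols)), shape=(4, 4))
print(common_neighbors(A, 0, 1))  # -> the common neighbor set {3}
```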

High Order Neighbor.

We define $A^{l}$ as a high-order adjacency matrix, where $A^{l}_{uv}$ is the number of walks of length $l$ between nodes $u$ and $v$. $N(u,A^{l})=\{v\,|\,v\in V,A^{l}_{uv}>0\}$ denotes the set of nodes connected to $u$ by a walk of length $l$ in graph $A$. $N_{l}(u,A)$ denotes the set of nodes whose shortest path distance to $u$ in graph $A$ is $l$. Existing works define high-order neighbors as either $N(u,A^{l})$ or $N_{l}(u,A)$. More generally, the neighborhood of $u$ can be expressed as $N_{l_1}(u,A^{l_2})$, the set of nodes with shortest path distance $l_1$ to $u$ in the high-order graph $A^{l_2}$. We use $N_{l_1}^{l_2}(u)$ to denote $N_{l_1}(u,A^{l_2})$ when $A$ is fixed. Given a target link $(i,j)$, their general neighborhood overlap is $N_{l_1}^{l_2}(i)\cap N_{l_1'}^{l_2'}(j)$, and their neighborhood difference is $N_{l_1}^{l_2}(i)-N_{l_1'}^{l_2'}(j)$.
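As a hedged illustration of the general neighborhood $N_{l_1}^{l_2}(u)$ (again our own sketch rather than the paper's code), one can materialize the high-order graph $A^{l_2}$ and then take the nodes at shortest path distance $l_1$:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path

def general_neighborhood(A: csr_matrix, u: int, l1: int, l2: int) -> set:
    """N_{l1}^{l2}(u): nodes at shortest-path distance l1 from u in the graph A^{l2}.

    For l1 = l2 = 1 this reduces to the ordinary neighborhood N(u).
    """
    # High-order graph A^{l2}; entry (u, v) counts walks of length l2.
    A_high = A ** l2 if l2 > 1 else A
    # Unweighted shortest-path distances from u in the (binarized) high-order graph.
    dist = shortest_path((A_high > 0).astype(float), unweighted=True, indices=u)
    return set(np.flatnonzero(dist == l1))

# Neighborhood overlap of a target link (i, j):
# general_neighborhood(A, i, l1, l2) & general_neighborhood(A, j, l1p, l2p)
```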

Message Passing Neural Network (MPNN).

Comprising message passing layers, MPNN (Gilmer et al., 2017) is a common GNN framework. The $k^{\text{th}}$ layer is as follows.

${\bm{h}}_{v}^{(k)}=U^{(k)}\big({\bm{h}}_{v}^{(k-1)},\text{AGG}(\{M^{(k)}({\bm{h}}_{v}^{(k-1)},{\bm{h}}_{u}^{(k-1)})\,|\,u\in N(v)\})\big),$ (1)

where ${\bm{h}}_{v}^{(k)}$ is the representation of node $v$ at the $k^{\text{th}}$ layer, $U^{(k)},M^{(k)}$ are functions like multi-layer perceptrons (MLP), and AGG denotes an aggregation function like sum or max. The initial node representation ${\bm{h}}_{v}^{(0)}$ is the node feature $X_{v}$. In each message passing layer, information is aggregated from neighbors to update the node representation. The final node representations produced by MPNN are the output of the last message passing layer, denoted as $\text{MPNN}(v,A,X)={\bm{h}}_{v}^{(K)}$.
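The following PyTorch sketch shows one possible message passing layer implementing Equation (1), with sum aggregation and MLPs for $M^{(k)}$ and $U^{(k)}$; it is a generic illustration of the framework, not the specific MPNN used in our experiments.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One layer of Equation (1): sum aggregation, MLPs as M^{(k)} and U^{(k)}."""

    def __init__(self, dim: int):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())  # M^{(k)}
        self.upd = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())  # U^{(k)}

    def forward(self, h: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # h: [n, dim] node representations; edge_index: [2, |E|] (source u, target v).
        src, dst = edge_index
        # M^{(k)}(h_v, h_u) for every edge (u, v).
        messages = self.msg(torch.cat([h[dst], h[src]], dim=-1))
        # AGG: sum incoming messages at each target node v.
        agg = torch.zeros_like(h).index_add_(0, dst, messages)
        # U^{(k)}(h_v, aggregated messages).
        return self.upd(torch.cat([h, agg], dim=-1))
```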

3 Related Work

Link Prediction Model

There are three primary categories of link prediction models. Node embedding methods (Perozzi et al., 2014; Grover & Leskovec, 2016; Tang et al., 2015) map each node to an embedding vector and combine the target nodes' embeddings to predict links. Link prediction heuristics (Liben-Nowell & Kleinberg, 2003; Barabási & Albert, 1999; Zhou et al., 2009; Adamic & Adar, 2003) use hand-crafted structural features. GNN-based models utilize Graph Neural Networks. Among these, Graph Autoencoder (GAE) (Kipf & Welling, 2016) uses the inner product of the two target nodes' MPNN representations, $\langle\text{MPNN}(i,A,X),\text{MPNN}(j,A,X)\rangle$, as link $(i,j)$'s representation. It uses MPNN only and fails to capture pairwise relations between nodes. In contrast, various GNNs combining MPNN and structural features (Zhang & Chen, 2018; Yun et al., 2021; Chamberlain et al., 2023) have achieved state-of-the-art performance. SEAL (Zhang & Chen, 2018) is representative. For a target link $(i,j)$, it first adds each node's shortest path distance to $(i,j)$ to the node features $X$ and extracts a $k$-hop subgraph, generating augmented node features $X'$ and adjacency $A'$. Then SEAL applies MPNN to the subgraph and pools the node representations within it, $\sum_u\text{MPNN}(u,A',X')$, as the target link representation. Other models employ a distinct approach to incorporate structural features. Neo-GNN (Yun et al., 2021) and BUDDY (Chamberlain et al., 2023), for instance, directly apply MPNN to the original graph and concatenate structural features, such as the count of common neighbors, with the Hadamard product of the target node MPNN representations: $\text{MPNN}(i,A,X)\odot\text{MPNN}(j,A,X)\,\|\,\text{structural features}$.

Structural Feature

Structural features for link prediction vary but are generally based on neighborhood overlap. Notable heuristics, such as Common Neighbor (CN), Resource Allocation (RA), and Adamic-Adar (AA), use first-order common neighbors to compute scores for a target link $(i,j)$:

$\text{CN}(i,j)=\sum_{u\in N(i)\cap N(j)}1,\quad \text{RA}(i,j)=\sum_{u\in N(i)\cap N(j)}\frac{1}{d(u)},\quad \text{AA}(i,j)=\sum_{u\in N(i)\cap N(j)}\frac{1}{\log d(u)}.$ (2)
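A small Python sketch of Equation (2), assuming a SciPy sparse adjacency matrix; the helper is illustrative only:

```python
import math
from scipy.sparse import csr_matrix

def heuristic_scores(A: csr_matrix, i: int, j: int):
    """CN, RA, and AA scores of Equation (2) for a target link (i, j)."""
    neigh = lambda u: set(A.getrow(u).indices)
    deg = lambda u: A.getrow(u).nnz
    common = neigh(i) & neigh(j)
    cn = len(common)
    ra = sum(1.0 / deg(u) for u in common)
    # A common neighbor is adjacent to both i and j, so deg(u) >= 2 and log deg(u) > 0.
    aa = sum(1.0 / math.log(deg(u)) for u in common)
    return cn, ra, aa
```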
Table 1: Summary of existing models using Equation (5). FO and HO denote first-order and high-order neighbors, respectively. $\cap$ and $-$ denote set intersection and difference, respectively.

|            | Model   | Neighbor | $\oplus$     | $g(x)$ | $f(x)$      |
| Heuristics | CN      | FO       | $\cap$       | $1$    | $1$         |
|            | RA      | FO       | $\cap$       | $1$    | $1/x$       |
|            | AA      | FO       | $\cap$       | $1$    | $1/\log(x)$ |
| GNN        | Neo-GNN | HO       | $\cap$       | $x$    | MLP         |
|            | BUDDY   | HO       | $\cap$ & $-$ | $1$    | $1$         |

Neo-GNN (Yun et al., 2021) and BUDDY (Chamberlain et al., 2023) extend these heuristics by utilizing higher-order neighbors. Neo-GNN computes features for the high-order neighborhood overlap of $N^{l_1}(i)$ and $N^{l_2}(j)$ as follows,

$\sum_{u\in N_{1}^{l_{1}}(i)\cap N_{1}^{l_{2}}(j)}A^{l_{1}}_{iu}A^{l_{2}}_{ju}f(d(u)),$ (3)

where $f$ is a learnable function of node degree $d(u)$. BUDDY (Chamberlain et al., 2023) further utilizes high-order neighborhood differences. It computes overlap features $\{a_{l_1,l_2}(i,j)\,|\,l_1,l_2=1,2,\ldots,k\}$ and difference features $\{b_{l}(i,j),b_{l}(j,i)\,|\,l=1,2,\ldots,k\}$ as follows:

$a_{l_1,l_2}(i,j)=\sum_{u\in N_{l_1}^{1}(i)\cap N_{l_2}^{1}(j)}1,\quad b_{l}(i,j)=\sum_{u\in N_{l}^{1}(i)-\bigcup_{l'=1}^{k}N_{l'}^{1}(j)}1.$ (4)

All these pairwise features can be summarized into the following framework.

$\sum_{u\in N_{l_1}^{l_2}(i)\oplus N_{l_1'}^{l_2'}(j)}g(A^{l_2}_{iu})\,g(A^{l_2'}_{ju})\,f(d(u)),$ (5)

where $N_{l_1}^{l_2}(i)$ and $N_{l_1'}^{l_2'}(j)$ denote general neighborhoods of $i$ and $j$, $\oplus$ is a set operator such as intersection or difference, and $f,g$ are functions of node degree and high-order adjacency weight, respectively. Details on how this framework unifies existing structural features are shown in Table 1.
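To make the roles of $\oplus$, $g$, and $f$ concrete, the sketch below instantiates Equation (5), reusing the `general_neighborhood` helper sketched in Section 2; with first-order intersection and constant $g$ and $f$ it recovers CN. It is an illustration under these assumptions, not the implementation of any of the cited models.

```python
def pairwise_feature(A, i, j, l1, l2, l1p, l2p, set_op, g, f):
    """Equation (5): sum of g(A^{l2}_{iu}) * g(A^{l2p}_{ju}) * f(d(u))
    over u in N_{l1}^{l2}(i) ⊕ N_{l1p}^{l2p}(j)."""
    Ai = A ** l2 if l2 > 1 else A       # high-order adjacency on i's side
    Aj = A ** l2p if l2p > 1 else A     # high-order adjacency on j's side
    Ni = general_neighborhood(A, i, l1, l2)
    Nj = general_neighborhood(A, j, l1p, l2p)
    deg = lambda u: A.getrow(u).nnz
    return sum(g(Ai[i, u]) * g(Aj[j, u]) * f(deg(u)) for u in set_op(Ni, Nj))

# CN as a special case (first row of Table 1): first-order intersection, g = f = 1.
# pairwise_feature(A, i, j, 1, 1, 1, 1, set.intersection, lambda x: 1, lambda x: 1)
```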

Incompleteness of Graph

The link prediction task is to forecast unobserved edges, inherently making the input graph incomplete. Nevertheless, graph incompleteness can significantly impact structural features, such as common neighbors, and the models based on them. This issue has drawn attention in some prior works. Yang et al. (2022) examined how unobserved links could distort evaluation scores, with a specific focus on metrics and benchmark design, while our research concentrates on model design. Dong et al. (2022) explored the consequences of the presence of target links, whereas our emphasis lies in understanding how incompleteness affects common-neighbor-based features. Outside the domain of link prediction, Zhao et al. (2023) and Zhao et al. (2021) add edges predicted by GAE to the input graph of GNNs. However, their primary objective is node classification, aiming to enhance edges between nodes of the same class while diminishing others. In contrast, our research addresses the distribution shifts and information loss stemming from graph incompleteness, offering unique completion methods and insights tailored for link prediction.

4 Neural Common Neighbor

Structural features (SF), such as common neighbors, are commonly employed in link prediction models. Existing approaches combine SF with Message Passing Neural Networks (MPNN) in two manners (illustrated in Figure 2): SF-then-MPNN and SF-and-MPNN. However, these approaches exhibit limitations in terms of either scalability or expressivity. To address these issues comprehensively, we introduce a novel architecture, MPNN-then-SF, which offers a unique blend of high expressivity and scalability. Subsequently, we present a concrete instantiation of this architecture, Neural Common Neighbor (NCN). All proofs for theorems in this section are in Appendix A.

4.1 New Architecture Combining MPNN and SF

Figure 3: White, green, and yellow colors represent node features $0$, $1$, and $2$, respectively. Both links $(v_1,v_2)$ and $(v_1,v_3)$ have one common neighbor, making them indistinguishable for existing SF-and-MPNN models. However, NCN can differentiate between them because the two common neighbors have different features.

In Figure 2, we categorize existing methods into two architectures:

  • SF-then-MPNN: This category includes SEAL (Zhang & Chen, 2018) and NBFNet (Zhu et al., 2021). In this approach, the input graph is first enriched with structural features and then fed into the MPNN, which allows the MPNN to leverage SF and attain provable expressivity (Zhang & Chen, 2018). However, the drawback is that the structural features change with the target link, requiring the MPNN to be re-run for each link, which results in lower scalability.

  • SF-and-MPNN: This category encompasses models like NeoGNN (Yun et al., 2021) and BUDDY (Chamberlain et al., 2023). Here, MPNN takes the original graph as input and runs only once for all target links, leading to high scalability. However, SF are directly concatenated to the final representations and thus detached from MPNN, leading to reduced expressivity.

From these two architectural paradigms, it becomes apparent that feeding the original graph to MPNN is essential for achieving high scalability. Moreover, the coupling between SF and MPNN remains a crucial factor for expressivity. Thus, we introduce a new architecture: MPNN-then-SF. This approach initially runs MPNN on the original graph and then employs structural features to guide the pooling of MPNN features, requiring only one MPNN run and enhancing expressivity. The specific representation of the target link $(i,j)$ is as follows:

$\text{Pool}(\{\text{MPNN}(u,A,X)\,|\,u\in S\}),$ (6)

where Pool is a pooling function mapping a multiset of node representations to a single representation, and $S$ is a node set related to the target link. Multiple node sets can be used in conjunction to produce concatenated representations. This flexible framework can express various models. When using the target nodes $i$ and $j$ as $S$ and the Hadamard product as Pool, it can express GAE:

$\text{MPNN}(i,A,X)\odot\text{MPNN}(j,A,X).$ (7)

Alternatively, we can choose $S$ as combinations of high-order neighbor sets of $i$ and $j$, leading to the following form (see Appendix A.1 for the detailed derivation):

$\sum_{u\in N_{l_1}^{l_2}(i)\oplus N_{l_1'}^{l_2'}(j)}g(A^{l_2}_{iu})\,g(A^{l_2'}_{ju})\,\text{MPNN}(u,A,X),$ (8)

where $g$ is a function transforming the edge weights of the high-order adjacency matrix. This framework exhibits stronger expressivity than existing SF-and-MPNN models.

Theorem 1.

The combination of Equation 7 and Equation 8 is strictly more expressive than the MPNN-only model GAE, the SF-only models CN, RA, and AA, and the SF-and-MPNN models Neo-GNN and BUDDY.

A key factor contributing to its higher expressivity is the coupling of MPNN and SF. While SF-and-MPNN typically only counts the number of common neighbors, MPNN-then-SF, similar to SF-then-MPNN, can capture node properties of these common neighbors. As shown in Figure 3, node tuples $(v_1,v_2)$ and $(v_1,v_3)$ have the same number of common neighbors. However, their common neighbors have different node features, allowing MPNN-then-SF and SF-then-MPNN to distinguish them, a capability that SF-and-MPNN lacks.

4.2 Neural Common Neighbor

We will now present an implementation of the MPNN-then-SF framework. Notably, the previous models Neo-GNN (Yun et al., 2021) and BUDDY (Chamberlain et al., 2023) both incorporate higher-order neighbors into their architectures, resulting in significant performance improvements. Surprisingly, in our experiments, we observed that the gains achieved by explicitly considering higher-order neighbors become marginal once we introduce MPNN into the framework (as discussed in Section 6.3). We speculate that this is because MPNN implicitly learns information related to higher-order neighbors. Therefore, considering scalability, we opt to use only the target nodes and their first-order common neighbors as the node set, leading to our NCN model:

$\text{NCN}(i,j,A,X)=\text{MPNN}(i,A,X)\odot\text{MPNN}(j,A,X)\,\|\sum_{u\in N(i)\cap N(j)}\text{MPNN}(u,A,X),$ (9)

where $g(A_{iu})$ and $g(A_{ju})$ are constants and therefore ignored, and $\|$ denotes concatenation. NCN has high expressivity.
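A minimal PyTorch sketch of the NCN link representation in Equation (9); `mpnn_out` is assumed to be the $n\times F$ node representation matrix produced by a single MPNN run, and the function is an illustration rather than the exact module in our released code.

```python
import torch

def ncn_representation(mpnn_out: torch.Tensor, A, i: int, j: int) -> torch.Tensor:
    """Equation (9): Hadamard product of the target representations, concatenated
    with the summed MPNN representations of the common neighbors N(i) ∩ N(j)."""
    neigh = lambda u: set(A.getrow(u).indices)       # A: scipy sparse adjacency
    common = sorted(neigh(i) & neigh(j))
    pair = mpnn_out[i] * mpnn_out[j]                 # MPNN(i) ⊙ MPNN(j)
    cn = (mpnn_out[list(common)].sum(dim=0) if common
          else torch.zeros_like(pair))               # Σ_{u ∈ N(i)∩N(j)} MPNN(u)
    return torch.cat([pair, cn], dim=-1)             # concatenation ‖
```

The concatenated vector is then fed to an MLP with a sigmoid to produce the link existence probability (see Appendix C).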

Theorem 2.

NCN is strictly more expressive than GAE, CN, RA, AA. Moreover, Neo-GNN and BUDDY are not more expressive than NCN.

To elaborate, in certain scenarios where the properties of common neighbors hold significant importance, NCN outperforms both BUDDY and Neo-GNN in expressiveness.

As our first major contribution, NCN represents a straightforward yet potent model for combining structural features and MPNNs. It operates as an implicit high-order model by aggregating first-order common neighbors, each of which implicitly learns higher-order information through MPNN. A comprehensive analysis of time complexity is in Appendix E.

Figure 4: Visualization of incompleteness on datasets. The incomplete graph contains only the edges in the training set, and the complete graph additionally contains the edges in the validation and test sets. (a) and (b) visualize the ogbl-collab dataset; (c) and (d) visualize the Cora dataset. (a) and (c) show the distributions of the number of common neighbors of training and test edges. (b) and (d) show the performance of CN on the training and test sets.

5 Neural Common Neighbor with Completion

While NCN outperforms existing models, it relies heavily on the common neighbor structure, which the incompleteness of the graph can notably influence. For instance, in cases where node pairs lack common neighbors, NCN essentially degenerates to GAE, rendering it unable to leverage structural features. Although the absence of common neighbors can suggest that a link is unlikely to exist, certain node pairs may possess common neighbors in the ground truth that remain unobserved in the input graph due to graph incompleteness. Graph incompleteness is ubiquitous in link prediction tasks, given that the objective is to predict unobserved edges. However, few studies have delved into this issue. In this section, we initially demonstrate that incompleteness can result in the loss of common neighbor information, distribution shifts between the training and test sets, and the consequent deterioration of model performance. We propose a straightforward yet effective method to tackle these challenges: common neighbor completion (CNC). CNC completes unobserved common neighbors using a link prediction model. With the introduction of CNC, we enhance NCN and introduce Neural Common Neighbor with Completion (NCNC).

5.1 Incompleteness Visualization

To illustrate the challenges posed by incompleteness, we analyze two common datasets: ogbl-collab (Hu et al., 2020) and Cora (Yang et al., 2016). We refer to the graph containing only the edges from the training set as the incomplete graph, while the one encompassing edges from the training, validation, and test sets is termed the complete graph.

Given the pivotal role of common neighbor information in our NCN model and other link prediction models, we visualize the distribution of the number of common neighbors for training/test edges in both complete and incomplete graphs separately in Figure 4(a) and (c). To assess how incompleteness impacts model performance, we present the performance of the CN model (see Section 3) in four distinct scenarios in Figure 4(b) and (d). We observe the following effects of incompleteness:

Loss of Common Neighbors.

Figure 4(c) illustrates that in the incomplete graph, there are fewer common neighbors for both training and test sets, as indicated by the comparison between the blue (green) and red (orange) lines. When comparing the incomplete and complete graphs, it becomes evident that the incomplete graph suffers from a loss of common neighbor information due to incompleteness. Additionally, more links have no common neighbors at all.

Common Neighbor Distribution Shift.

A noticeable distribution shift between the training and test sets is evident in the incomplete graph of the ogbl-collab dataset, as seen in the comparison between the blue and green lines in Figure 4(a). This shift disappears when the graph is complete (the red and orange lines), indicating that incompleteness is the cause. Such a substantial distribution shift between training and test links could pose challenges for model generalization. This distribution shift is related to the dataset split method. Ogbl-collab is split based on the timestamps of edges, and the test edges all belong to the same year. Consequently, compared to the training edges, test edges exhibit stronger correlations with other test edges, resulting in a greater loss of common neighbors when these test edges are absent from the incomplete graph. Conversely, the Cora dataset is split randomly, so training and test edges lose a similar fraction of common neighbors and do not exhibit distribution shifts (Figure 4(c)).

Performance Degradation.

The performance of CN aligns with the common neighbor distribution. In the ogbl-collab dataset, the common neighbor distribution is nearly identical for the training set in both the complete and incomplete graphs, and so is the performance (see Figure 4(b)). However, test performance on the incomplete graph decreases significantly as the test distribution changes. Similar trends are observed in the Cora dataset, where test and training scores decline on the incomplete graph whenever the common neighbor distribution changes relative to the complete graph.

Remark.

Note that while common neighbor distribution changes may not fully account for the differences between complete and incomplete graphs, they offer valuable insights into how incompleteness alters the input graph structure for other learnable models. Although CN is non-learnable and non-generalizable, its calculation for a target edge does not involve the edge itself, thereby avoiding data leakage concerns. These findings suggest that a more complete input graph could yield superior link prediction models. However, in practice, we can only work with the incomplete input graph, necessitating other mitigation methods for these issues.

5.2 Common Neighbor Completion

Motivated by the analysis above, we address graph incompleteness issues with a two-step method:

Soft Completion of Common Neighbors.

We start by softly completing the input graph with a link prediction model, such as NCN. However, instead of completing all edges in the entire graph, which can be impractical for large graphs, we focus specifically on common neighbor links. We compute the probability that a node $u$ serves as a common neighbor for a node tuple $(i,j)$ as follows:

$P_{uij}=\begin{cases}1&\text{if }u\in N(i,A)\cap N(j,A)\\ \hat{A}_{iu}&\text{if }u\in N(j,A)-N(i,A)\\ \hat{A}_{ju}&\text{if }u\in N(i,A)-N(j,A)\\ 0&\text{otherwise}\end{cases}$ (10)

where $\hat{A}_{iu}$ represents the predicted existence probability of link $(i,u)$ by the model. The idea is that $u$ is a common neighbor of $(i,j)$ iff both edges $(i,u)$ and $(j,u)$ exist. If one of these edges is unobserved, we use NCN to predict its existence probability, which we also use as the probability of $u$ being a common neighbor. In the rare case where both $(i,u)$ and $(j,u)$ are unobserved, we set the probability to $0$. This technique is called "Common Neighbor Completion" (CNC).
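A direct transcription of Equation (10) into Python; `predict_link` stands for any link predictor returning $\hat{A}_{uv}$ (NCN in practice) and is an assumed callable, not an API of the released code.

```python
def common_neighbor_prob(A, predict_link, i: int, j: int, u: int) -> float:
    """Equation (10): probability P_{uij} that u is a common neighbor of (i, j)."""
    neigh = lambda v: set(A.getrow(v).indices)
    Ni, Nj = neigh(i), neigh(j)
    if u in Ni and u in Nj:
        return 1.0                   # both edges observed
    if u in Nj:                      # (j, u) observed, (i, u) unobserved
        return predict_link(i, u)    # \hat{A}_{iu}
    if u in Ni:                      # (i, u) observed, (j, u) unobserved
        return predict_link(j, u)    # \hat{A}_{ju}
    return 0.0                       # neither edge observed
```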

Reapplication of NCN on the Completed Graph.

Following CNC, we apply the NCN model again on the graph that has been completed using the soft common neighbor weights $P_{uij}$. This final model is named Neural Common Neighbor with Completion (NCNC) and is defined as:

$\text{NCNC}(i,j,A,X)=\text{MPNN}(i,A,X)\odot\text{MPNN}(j,A,X)\,\|\sum_{u\in N(i)\cup N(j)}P_{uij}\,\text{MPNN}(u,A,X).$ (11)

Notably, MPNN still takes the original graph as input, allowing it to run only once for all target links and thus maintaining high scalability. While $P_{uij}$ can be predicted using any link prediction model, weak models may not accurately recover the unobserved common neighbor structure. Therefore, in practice, we employ NCN to complete it.
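Building on the two sketches above, the NCNC pooling in Equation (11) simply replaces NCN's hard common neighbor sum with a $P_{uij}$-weighted sum over $N(i)\cup N(j)$; this is again only an illustrative sketch under the same assumptions.

```python
import torch

def ncnc_representation(mpnn_out: torch.Tensor, A, predict_link,
                        i: int, j: int) -> torch.Tensor:
    """Equation (11): P_{uij}-weighted sum of MPNN representations over N(i) ∪ N(j)."""
    neigh = lambda v: set(A.getrow(v).indices)
    pair = mpnn_out[i] * mpnn_out[j]
    weighted = torch.zeros_like(pair)
    for u in neigh(i) | neigh(j):
        p = common_neighbor_prob(A, predict_link, i, j, u)  # sketch after Eq. (10)
        weighted = weighted + p * mpnn_out[u]
    return torch.cat([pair, weighted], dim=-1)
```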

In addition to addressing distribution shifts and common neighbor loss, NCNC also solves the problem that NCN can degenerate to GAE when node pairs lack common neighbors. With NCNC, common neighbors are always completed, and the model degenerates to GAE only when both target nodes are isolated. In such cases, where no structural features can be utilized, relying solely on the target node representations is reasonable. For a visual demonstration of CNC's effect, please refer to Appendix H, which illustrates how NCNC makes more precise predictions by completing common neighbors for node pairs with no observed common neighbors.

6 Experiment

Table 2: Results on link prediction benchmarks. The format is average score ± standard deviation. OOM means out of GPU memory.

|         | Cora       | Citeseer   | Pubmed     | Collab     | PPA        | Citation2  | DDI        |
| Metric  | HR@100     | HR@100     | HR@100     | HR@50      | HR@100     | MRR        | HR@20      |
| CN      | 33.92±0.46 | 29.79±0.90 | 23.13±0.15 | 56.44±0.00 | 27.65±0.00 | 51.47±0.00 | 17.73±0.00 |
| AA      | 39.85±1.34 | 35.19±1.33 | 27.38±0.11 | 64.35±0.00 | 32.45±0.00 | 51.89±0.00 | 18.61±0.00 |
| RA      | 41.07±0.48 | 33.56±0.17 | 27.03±0.35 | 64.00±0.00 | 49.33±0.00 | 51.98±0.00 | 27.60±0.00 |
| GCN     | 66.79±1.65 | 67.08±2.94 | 53.02±1.39 | 44.75±1.07 | 18.67±1.32 | 84.74±0.21 | 37.07±5.07 |
| SAGE    | 55.02±4.03 | 57.01±3.74 | 39.66±0.72 | 48.10±0.81 | 16.55±2.40 | 82.60±0.36 | 53.90±4.74 |
| SEAL    | 81.71±1.30 | 83.89±2.15 | 75.54±1.32 | 64.74±0.43 | 48.80±3.16 | 87.67±0.32 | 30.56±3.86 |
| NBFnet  | 71.65±2.27 | 74.07±1.75 | 58.73±1.99 | OOM        | OOM        | OOM        | 4.00±0.58  |
| Neo-GNN | 80.42±1.31 | 84.67±2.16 | 73.93±1.19 | 57.52±0.37 | 49.13±0.60 | 87.26±0.84 | 63.57±3.52 |
| BUDDY   | 88.00±0.44 | 92.93±0.27 | 74.10±0.78 | 65.94±0.58 | 49.85±0.20 | 87.56±0.11 | 78.51±1.36 |
| NCN     | 89.05±0.96 | 91.56±1.43 | 79.05±1.16 | 64.76±0.87 | 61.19±0.85 | 88.09±0.06 | 82.32±6.10 |
| NCNC    | 89.65±1.36 | 93.47±0.95 | 81.29±0.95 | 66.61±0.71 | 61.42±0.73 | 89.12±0.40 | 84.11±3.67 |
Table 3: Ablation study on link prediction benchmarks.

|          | Cora       | Citeseer   | Pubmed     | Collab     | PPA        | Citation2  | DDI        |
| Metric   | HR@100     | HR@100     | HR@100     | HR@50      | HR@100     | MRR        | HR@20      |
| CN       | 33.92±0.46 | 29.79±0.90 | 23.13±0.15 | 56.44±0.00 | 27.65±0.00 | 51.47±0.00 | 17.73±0.00 |
| GAE      | 89.01±1.32 | 91.78±0.94 | 78.81±1.64 | 36.96±0.95 | 19.49±0.75 | 79.95±0.09 | 61.53±9.59 |
| GAE+CN   | 88.61±1.31 | 91.75±0.98 | 79.04±0.83 | 64.47±0.14 | 51.83±0.58 | 87.81±0.06 | 80.71±5.56 |
| NCN2     | 88.87±1.34 | 91.36±1.02 | 80.21±0.78 | 65.43±0.46 | OOM        | OOM        | OOM        |
| NCN-diff | 89.12±1.04 | 91.96±1.23 | 80.28±0.88 | 64.08±0.40 | 57.86±1.26 | 86.68±0.16 | 17.67±8.70 |
| NCN      | 89.05±0.96 | 91.56±1.43 | 79.05±1.16 | 64.76±0.87 | 61.19±0.85 | 88.09±0.06 | 82.32±6.10 |
| NCNC     | 89.65±1.36 | 93.47±0.95 | 81.29±0.95 | 66.61±0.71 | 61.42±0.73 | 89.12±0.40 | 84.11±3.67 |

In this section, we extensively evaluate the performance of both NCN and NCNC. Detailed experimental settings are included in Appendix D.

We use seven popular real-world link prediction benchmarks. Among these, three are Planetoid citation networks: Cora, Citeseer, and Pubmed (Yang et al., 2016). Others are from Open Graph Benchmark (Hu et al., 2020): ogbl-collab, ogbl-ppa, ogbl-citation2, and ogbl-ddi. Their statistics and splits are shown in Appendix B.

6.1 Evaluation on Real-World Datasets

In our evaluation on real-world datasets, we employ a range of baseline methods, encompassing traditional heuristics like CN (Barabási & Albert, 1999), RA (Zhou et al., 2009), and AA (Adamic & Adar, 2003), as well as GAE models, such as GCN (Kipf & Welling, 2017) and SAGE (Hamilton et al., 2017). Additionally, we consider SF-then-MPNN models, including SEAL (Zhang & Chen, 2018) and NBFNet (Zhu et al., 2021), as well as SF-and-MPNN models like Neo-GNN (Yun et al., 2021) and BUDDY (Chamberlain et al., 2023). The baseline results are sourced from (Chamberlain et al., 2023). Our models consist of NCN and NCNC. Their architectures are detailed in Appendix C.

The experimental results are presented in Table 2. NCN surpasses all baselines on 5/7 datasets and exhibits an average score improvement of 5% compared to BUDDY, the most competitive baseline. Even on the remaining two datasets, NCN outperforms all baselines except BUDDY. These impressive results underscore the outstanding expressivity of our MPNN-then-SF architecture. Furthermore, NCNC enhances performance by an additional 2%, emerging as the top-performing method on all datasets. Notably, on ogbl-ppa, NCNC achieves an HR@100 score of 61.42%, surpassing the strongest baseline BUDDY by a substantial margin of over 10%. It’s worth mentioning that our models outperform node embedding techniques (Perozzi et al., 2014; Grover & Leskovec, 2016; Tang et al., 2015) and other GNNs lacking pairwise features (Wang et al., 2021; 2022) significantly (see Appendix G).

6.2 Scalability

We compare the inference time and GPU memory on ogbl-collab in Figure 5. NCN and NCNC have a similar computation overhead to GAE, as they both need to run MPNN only once.

Figure 5: Inference time and GPU memory on ogbl-collab. The process we measure includes preprocessing and predicting one batch of test links. As shown in Appendix E, the relation between time $y$ and batch size $t$ is $y=B+Ct$, where $B,C$ are model-specific constants. SEAL runs out of memory with large batch sizes and therefore only uses small ones.

In contrast, SEAL, which reruns MPNN for each target link, takes 86 times longer than NCN even with a small batch size of 2048, and the disadvantage becomes more significant with larger batch sizes. Surprisingly, BUDDY and Neo-GNN are slower than NCN. The reason is that they use pairwise features depending on high-order neighbors, which are much more time-consuming to compute than common neighbors. NCN and NCNC also achieve low GPU memory consumption. We also conduct scalability comparisons on other datasets and observe the same results (see Appendix F).

6.3 Ablation Analysis

To assess the effectiveness of the NCNC design, we conducted a comprehensive ablation analysis, as presented in Table 3.

Starting with GAE, which relies solely on node representations, we introduce GAE+CN, which incorporates Common Neighbor (CN) counts as pairwise features. Remarkably, GAE+CN outperforms GAE by 70% on the Open Graph Benchmark (OGB) datasets, illustrating the importance of structural features. Furthermore, NCN exhibits a 5.5% score increase over GAE+CN, highlighting the advantages of the MPNN-then-SF architecture over the SF-and-MPNN architecture.

We also explore variants of NCN, namely NCN-diff and NCN2. In NCN-diff, we include neighborhood difference information by summing the representations of nodes in $N(i,A)-N(j,A)$ and $N(j,A)-N(i,A)$, while NCN2 incorporates high-order neighborhood overlap using $N(i,A^{2})\cap N(j,A)$ and $N(i,A)\cap N(j,A^{2})$. Notably, NCN, NCN-diff, and NCN2 exhibit similar performance across most datasets, suggesting that first-order neighborhood overlap might be sufficient. However, NCN-diff achieves a lower score on the DDI dataset, possibly because the high node degree in DDI introduces noisy and uninformative neighborhood difference information.

7 Conclusion

In this work, we introduce Neural Common Neighbor (NCN), a scalable and robust model for link prediction that harnesses the power of learnable pairwise features. Additionally, we address the challenge of graph incompleteness by identifying and visualizing common neighbor loss and distribution shifts stemming from this issue. To mitigate these problems, we introduce the Common Neighbor Completion (CNC) technique. Combining CNC with NCN, our final model, Neural Common Neighbor with Completion (NCNC), outperforms state-of-the-art baselines across various datasets in terms of both speed and prediction performance.

8 Limitations

Though we propose the MPNN-then-SF framework, we do not exhaust its design space and only propose one implementation, NCN, and its variants NCN2 and NCN-diff in the ablation study. Moreover, while we only analyze the impact of incompleteness on common neighbor structures, graph incompleteness can also affect other structural features. Additionally, the proposed completion method has the potential to be generalized to other structural features. Our future research will explore the design space of MPNN-then-SF and the broader implications of incompleteness on various structural features.

9 Reproducibility Statement

Our code is available at https://github.com/GraphPKU/NeuralCommonNeighbor. Proofs of all theorems in the main text are in Appendix A.

Acknowledgement

This work is partially supported by the National Key R&D Program of China (2022ZD0160303), the National Natural Science Foundation of China (62276003), and Alibaba Innovative Research Program.

References

  • Adamic & Adar (2003) Lada A Adamic and Eytan Adar. Friends and neighbors on the web. Social networks, 25(3):211–230, 2003.
  • Akiba et al. (2019) Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. Optuna: A next-generation hyperparameter optimization framework. In KDD, pp.  2623–2631, 2019.
  • Barabási & Albert (1999) Albert-László Barabási and Réka Albert. Emergence of scaling in random networks. science, 286(5439):509–512, 1999.
  • Chamberlain et al. (2023) Benjamin Paul Chamberlain, Sergey Shirobokov, Emanuele Rossi, Fabrizio Frasca, Thomas Markovich, Nils Hammerla, Michael M. Bronstein, and Max Hansmire. Graph neural networks for link prediction with subgraph sketching. ICLR, 2023.
  • Dong et al. (2022) Kaiwen Dong, Yijun Tian, Zhichun Guo, Yang Yang, and Nitesh V. Chawla. Fakeedge: Alleviate dataset shift in link prediction. In LoG, volume 198, pp.  56. PMLR, 2022.
  • Fey & Lenssen (2019) Matthias Fey and Jan Eric Lenssen. Fast graph representation learning with pytorch geometric. arXiv preprint arXiv:1903.02428, 2019.
  • Gilmer et al. (2017) Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In ICML, pp.  1263–1272, 2017.
  • Grover & Leskovec (2016) Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In KDD, pp.  855–864. ACM, 2016.
  • Hamilton et al. (2017) William L Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. NeurIPS, pp.  1025–1035, 2017.
  • Hu et al. (2020) Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. In NeurIPS, 2020.
  • Kipf & Welling (2016) Thomas N. Kipf and Max Welling. Variational graph auto-encoders. CoRR, abs/1611.07308, 2016.
  • Kipf & Welling (2017) Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.
  • Liben-Nowell & Kleinberg (2003) David Liben-Nowell and Jon Kleinberg. The link prediction problem for social networks. In International conference on Information and knowledge management, pp.  556–559, 2003.
  • Paszke et al. (2019) Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. NeurIPS, 32, 2019.
  • Perozzi et al. (2014) Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: online learning of social representations. In KDD, pp.  701–710. ACM, 2014.
  • Souri et al. (2022) E. Amiri Souri, Roman Laddach, S. N. Karagiannis, Lazaros G. Papageorgiou, and Sophia Tsoka. Novel drug-target interactions via link prediction and network embedding. BMC Bioinform., 23(1):121, 2022.
  • Tang et al. (2015) Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. LINE: large-scale information network embedding. In Aldo Gangemi, Stefano Leonardi, and Alessandro Panconesi (eds.), WWW, pp.  1067–1077, 2015.
  • Velickovic et al. (2018) Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In ICLR, 2018.
  • Wang et al. (2021) Zhitao Wang, Yong Zhou, Litao Hong, Yuanhang Zou, Hanjing Su, and Shouzhi Chen. Pairwise learning for neural link prediction. CoRR, abs/2112.02936, 2021.
  • Wang et al. (2022) Zixiao Wang, Yuluo Guo, Jin Zhao, Yu Zhang, Hui Yu, Xiaofei Liao, Hai Jin, Biao Wang, and Ting Yu. GIDN: A lightweight graph inception diffusion network for high-efficient link prediction. CoRR, abs/2210.01301, 2022.
  • Xu et al. (2019) Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In ICLR, 2019.
  • Yang et al. (2022) Haotong Yang, Zhouchen Lin, and Muhan Zhang. Rethinking knowledge graph evaluation under the open-world assumption. pp.  13683–13694, 2022.
  • Yang et al. (2016) Zhilin Yang, William W. Cohen, and Ruslan Salakhutdinov. Revisiting semi-supervised learning with graph embeddings. In Proceedings of the 33nd International Conference on Machine Learning, volume 48, pp.  40–48, 2016.
  • Yun et al. (2021) Seongjun Yun, Seoyoon Kim, Junhyun Lee, Jaewoo Kang, and Hyunwoo J. Kim. Neo-gnns: Neighborhood overlap-aware graph neural networks for link prediction. In NeurIPS, pp.  13683–13694, 2021.
  • Zhang & Chen (2018) Muhan Zhang and Yixin Chen. Link prediction based on graph neural networks. NeurIPS, 31:5165–5175, 2018.
  • Zhang & Chen (2020) Muhan Zhang and Yixin Chen. Inductive matrix completion based on graph neural networks. In ICLR, 2020.
  • Zhang et al. (2021) Muhan Zhang, Pan Li, Yinglong Xia, Kai Wang, and Long Jin. Labeling trick: A theory of using graph neural networks for multi-node representation learning. NeurIPS, 34:9061–9073, 2021.
  • Zhao et al. (2023) Jianan Zhao, Qianlong Wen, Mingxuan Ju, Chuxu Zhang, and Yanfang Ye. Self-supervised graph structure refinement for graph neural networks. In WSDM, pp.  159–167, 2023.
  • Zhao et al. (2021) Tong Zhao, Yozen Liu, Leonardo Neves, Oliver J. Woodford, Meng Jiang, and Neil Shah. Data augmentation for graph neural networks. In AAAI, pp.  11015–11023, 2021.
  • Zhou et al. (2009) Tao Zhou, Linyuan Lü, and Yi-Cheng Zhang. Predicting missing links via local information. The European Physical Journal B, 71(4):623–630, 2009.
  • Zhu et al. (2021) Zhaocheng Zhu, Zuobai Zhang, Louis-Pascal A. C. Xhonneux, and Jian Tang. Neural bellman-ford networks: A general graph neural network framework for link prediction. In NeurIPS, pp.  29476–29490, 2021.

Appendix A Proof

A.1 Derivation of Equation 8

The MPNN-then-SF architecture is as follows,

$\text{Pool}(\{\!\!\{\text{MPNN}(u,A,X)\,|\,u\in S\}\!\!\})$ (12)

Let $S_{ab}$ be the following set,

$(N_{l_1}^{l_2}(i)\oplus N_{l_1'}^{l_2'}(j))\cap\{u\in V\,|\,A^{l_2}_{iu}=a\}\cap\{u\in V\,|\,A^{l_2'}_{uj}=b\}.$ (13)

Then, for each $S_{ab}$, we set the pooling function to a sum multiplied by $g(a)g(b)$, where $g$ is a function taking a high-order adjacency edge weight as input. The MPNN-then-SF architecture can then express,

$\sum_{u\in S_{ab}}g(a)g(b)\,\text{MPNN}(u,A,X).$ (14)

Simply summing the features over all $S_{ab}$ leads to,

$\sum_{u\in N_{l_1}^{l_2}(i)\oplus N_{l_1'}^{l_2'}(j)}g(A_{iu}^{l_2})\,g(A_{ju}^{l_2'})\,\text{MPNN}(u,A,X).$ (15)

A.2 Proof of Theorems 1 and 2

Here, we present the theoretical proof of MPNN-then-SF's higher expressivity. We say algorithm A is strictly more expressive than algorithm B when A can differentiate all pairs of links that B can differentiate, while there exists a pair of links that A can distinguish but B cannot. We first prove the expressivity results by simulating other models with MPNN-then-SF and NCN, and then prove the strictness by constructing an example.

Lemma 1.

Equation 7 and NCN are more expressive than Graph Autoencoder (GAE)

Proof.

Graph Autoencoder's prediction for link $(i,j)$ is $\langle\text{MPNN}(i,A,X),\text{MPNN}(j,A,X)\rangle$. Directly summing Equation 7 thus yields GAE. Equation 7 is part of NCN, so NCN can also express GAE. ∎

Lemma 2.

NCN is more expressive than CN, RA, and AA. The combination of Equation 7 and Equation 8 is more expressive than CN, RA, AA, BUDDY, and Neo-GNN.

Proof.

As MPNN can learn arbitrary functions of node degree, NCN can express Equation 2, and Equation 8 can express the general form of structural features in Equation 5. ∎

Furthermore, we construct an example in Figure 3. In that graph, $v_2$ and $v_3$ are symmetric and thus have the same MPNN representation, so GAE cannot distinguish $(v_1,v_2)$ and $(v_1,v_3)$. Moreover, $(v_1,v_2)$ and $(v_1,v_3)$ are symmetric if node features are ignored, so CN, RA, AA, Neo-GNN, and BUDDY cannot distinguish them. However, $(v_1,v_2)$ has a common neighbor with feature $2$, and $(v_1,v_3)$ has a common neighbor with feature $1$, so NCN can distinguish them.

Appendix B Dataset Statistics

The statistics of each dataset are shown in Table 4.

Table 4: Statistics of dataset.

|                | Cora  | Citeseer | Pubmed | Collab    | PPA        | DDI       | Citation2  |
| #Nodes         | 2,708 | 3,327    | 18,717 | 235,868   | 576,289    | 4,267     | 2,927,963  |
| #Edges         | 5,278 | 4,676    | 44,327 | 1,285,465 | 30,326,273 | 1,334,889 | 30,561,187 |
| Split          | random | random  | random | fixed     | fixed      | fixed     | fixed      |
| Average degree | 3.9   | 2.74     | 4.5    | 5.45      | 52.62      | 312.84    | 10.44      |

Random splits use 70%/10%/20% of the edges for the training/validation/test sets, respectively. Different from the others, the collab dataset allows using validation edges as input at test time.

Appendix C Model Architecture

This section summarizes our methods from Section 4 and Section 5.

Given an input graph $A$, a node feature matrix $X$, and target links $\{(i_1,j_1),(i_2,j_2),\ldots,(i_t,j_t)\}$, our models consist of three steps: target link removal, MPNN, and predictor. NCN and NCNC only differ in the last step. The model architecture is visualized in Figure 6.

Figure 6: Architecture of our models. (a) The overall architecture. Given node features $X$, adjacency matrix $A$, and target link $(i,j)$, models first set $A_{ij}=0$ (target link removal, TLR). Then, $\bar{A},X$ are fed to a vanilla MPNN for node representations $h$. With $(i,j)$, $h$, and $A$ as input, the predictor produces $\hat{A}_{ij}$, the probability that edge $(i,j)$ exists. (b) The NCN predictor. It uses the node representations of the target nodes $i,j$ and their common neighbors to produce edge representations. Then, it feeds the edge representation to an MLP to produce the final prediction. (c) The NCNC predictor. It first uses NCN to predict unobserved links $\hat{A}_{ik},\hat{A}_{jk}$, which are then used to complete unobserved common neighbors.

Target link removal.

We make no changes to the input graph in the validation and test sets, where the target links are unobserved. In the training set, we remove target links from $A$. Let $\bar{A}$ denote the processed graph. This method is detailed in Section 5.

MPNN.

We use MPNN to produce node representations $h$. For each node $i$,

$h_i=\text{MPNN}(i,\bar{A},X).$ (16)

For all target links, MPNN needs to run only once.

Predictor.

Predictors use the node representations and graph structure to produce link predictions. The link representation of NCN is as follows,

$z_{ij}=\Big(h_i\odot h_j\;\Big\|\sum_{u\in N(i,\bar{A})\cap N(j,\bar{A})}h_u\Big),$ (17)

where $\|$ means concatenation and $z_{ij}$ is the representation of link $(i,j)$. $z_{ij}$ is composed of two components: the two target nodes' representations $h_i\odot h_j$ and the representations of nodes within the common neighbor set. The former component is often used in link prediction models (Kipf & Welling, 2016; Yun et al., 2021; Chamberlain et al., 2023), while we propose the latter for the first time. The link representation is then used to produce the link existence probability,

$\hat{A}_{ij}=\text{sigmoid}(\text{MLP}(z_{ij})),$ (18)

where $\hat{A}_{ij}$ is the probability that link $(i,j)$ exists.

NCNC has a similar form. The only difference is that $\sum_{u\in N(i,\bar{A})\cap N(j,\bar{A})}h_u$ in Equation (17) is replaced with the following form:

$\sum_{u\in N(i,\bar{A})\cap N(j,\bar{A})}h_u+\sum_{u\in N(j,\bar{A})-N(i,\bar{A})}\hat{A}_{iu}h_u+\sum_{u\in N(i,\bar{A})-N(j,\bar{A})}\hat{A}_{ju}h_u,$ (19)

where $\hat{A}_{ab}$ is the link existence probability produced by NCNC.

Appendix D Experimental Settings

Computing infrastructure.  We leverage Pytorch Geometric (Fey & Lenssen, 2019) and Pytorch (Paszke et al., 2019) for model development. All experiments are conducted on an Nvidia 4090 GPU on a Linux server.

Baselines.  We directly use the results reported in (Chamberlain et al., 2023).

Model hyperparameter.  We use optuna (Akiba et al., 2019) to perform random searches. Hyperparameters were selected to maximize validation score. The best hyperparameters selected for each model can be found in our code.

Training process. We utilize the Adam optimizer to optimize models and set an epoch upper bound of 100. All results of our models are provided from runs with 10 random seeds.

Computation cost. The total time of each main experiment is shown in Table 5. Reproducing all main results takes 280 GPU hours.

Table 5: Total time (s) needed for one run.

|      | Cora | Citeseer | Pubmed | Collab | PPA   | Citation2 | DDI  |
| NCN  | 8    | 16       | 28     | 320    | 9375  | 7123      | 546  |
| NCNC | 15   | 27       | 54     | 730    | 77385 | 5170      | 1785 |

Appendix E Time and Space Complexity

Table 6: Scalability comparison. $h,h',h''$: the complexity of the hash functions in BUDDY, which are all $\geq d$. $F$: the dimension of node representations. When predicting $t$ target links, the time and space complexity of existing models can be expressed as $O(B+Ct)$ and $O(D+Et)$, respectively.

| Architecture | Method  | B               | C                     | D         | E           |
| MPNN only    | GAE     | $ndF+nF^2$      | $F^2$                 | $nF$      | $F$         |
| SF-and-MPNN  | Neo-GNN | $ndF+nF^2+nd^l$ | $d^l+F^2$             | $nF+nd^l$ | $d^l+F$     |
|              | BUDDY   | $ndF+nh$        | $h'+F^2$              | $nF+nh''$ | $F+h'$      |
| SF-then-MPNN | SEAL    | $0$             | $d^{l'+1}F+d^{l'}F^2$ | $0$       | $d^{l'+1}F$ |
| MPNN-then-SF | NCN     | $ndF+nF^2$      | $dF+F^2$              | $nF$      | $dF$        |
|              | NCNC    | $ndF+nF^2$      | $d^2F+dF^2$           | $nF$      | $d^2F$      |
Figure 7: Inference time and GPU memory on (a) Cora, (b) Citeseer, (c) Pubmed, (d) ogbl-ppa, (e) ogbl-ddi, and (f) ogbl-citation2. The process we measure includes preprocessing, MPNN, and predicting one batch of test links.

Let $t$ denote the number of target links, $n$ denote the number of nodes in the graph, and $d$ denote the maximum node degree. Existing models' time and space complexity can be expressed as $O(B+Ct)$ and $O(D+Et)$ respectively, where $B,C,D,E$ are independent of $t$. The values of $B,C,D,E$ for each model are summarized in Table 6. The derivation of the complexity is as follows. As NCN, GAE, and GNNs with separated structural features run MPNN on the original graph, they share a similar $ndF+nF^2$ term in $B$. Specifically, BUDDY (Chamberlain et al., 2023) uses a simplified MPNN with $ndF$ in $B$. Moreover, Neo-GNN needs to precompute the high-order graph $A^l$, which takes $O(nd^l)$ time and space. BUDDY needs to hash each node, which takes $O(nh)$ time and $O(nh')$ space. In contrast, $B$ of SEAL is $0$ as it does not run MPNN on the original graph. For each target link, the vanilla GNN (GAE) only needs to feed the feature vector to an MLP, so $C=F^2$. Besides GAE's operations, BUDDY further needs to hash the structure for structural features, whose per-edge complexity is involved but higher than $d$, and Neo-GNN computes pairwise features with $O(d^l)$ complexity, where $l$ is the number of hops Neo-GNN considers. NCN needs to compute common neighbors: $O(d)$, pool node embeddings: $O(dF)$, and feed them to an MLP: $O(F^2)$. NCNC-$1$ runs NCN for each potential common neighbor: $O(F^2+d(dF+F^2))=O(d^2F+dF^2)$. Similarly, NCNC-$K$ runs NCNC-$(K\!-\!1)$ $O(d)$ times, so its time complexity is $O(d^{K+1}F+d^KF^2)$. For each target link, SEAL segregates a subgraph of size $O(d^{l'})$ and runs MPNN on it, so $C=d^{l'}F^2+d^{l'+1}F$, where $l'$ is the number of hops of the subgraph.

Appendix F Scalability Comparison on datasets

The time and memory consumption of models on different datasets are shown in Figure 7. On these datasets, we observe results similar to those on the ogbl-collab dataset in Section 6.2: NCN achieves computation overhead similar to GAE; NCNC usually scales better than Neo-GNN; SEAL's scalability is the worst. However, on the ogbl-citation2 dataset, SEAL has the lowest GPU memory consumption with small batch sizes, because the whole graph in ogbl-citation2 is large and running MPNN on it is expensive, while SEAL only runs MPNN on small subgraphs sampled from the whole graph, leading to lower overhead.

Table 7: Results on link prediction benchmarks. The format is average score ± standard deviation. NCN+tricks means NCN trained with the tricks of PLNLP.
| Method | Collab (Hits@50) | PPA (Hits@100) | Citation2 (MRR) | DDI (Hits@20) |
|---|---|---|---|---|
| NCN | 64.76±0.87 | 61.19±0.85 | 88.64±0.14 | 82.32±6.10 |
| NCNC | 66.61±0.71 | 61.42±0.73 | 89.12±0.40 | 84.11±3.67 |
| Node2Vec | 41.36±0.69 | 27.83±2.02 | 53.47±0.12 | 21.95±1.58 |
| DeepWalk | 50.37±0.34 | 28.88±1.53 | 84.48±0.30 | 26.42±6.10 |
| LINE | 55.13±1.35 | 26.03±2.55 | 82.33±0.52 | 10.15±1.69 |
| PLNLP | 70.59±0.29 | 32.38±2.58 | 84.92±0.29 | 90.88±3.13 |
| GIDN | 70.96±0.55 | - | - | - |
| NCN+tricks | 68.04±0.42 | - | - | 90.83±2.83 |

Appendix G Comparison with Other Link Prediction Models

Node embedding methods

The main advantage of GNN methods is that they preserve permutation equivariance. In other words, they give isomorphic links (links with the same structure) the same prediction. In contrast, node embedding methods, such as Node2Vec (Grover & Leskovec, 2016), LINE (Tang et al., 2015), and DeepWalk (Perozzi et al., 2014), produce different results for isomorphic links, which can hurt generalization.
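As a toy illustration of this point, the sketch below (not from the paper) runs an untrained sum-aggregation MPNN on a small star graph whose leaf nodes are isomorphic, and contrasts it with freely initialized per-node embeddings standing in for Node2Vec/DeepWalk-style methods.

```python
# A minimal sketch illustrating permutation equivariance on a toy star graph:
# node 0 is connected to nodes 1, 2, 3, so the leaves are pairwise isomorphic.
import numpy as np

def simple_mpnn(adj, x, num_layers=2):
    """A toy sum-aggregation MPNN with self-loops and no learned weights."""
    h = x.copy()
    for _ in range(num_layers):
        h = h + adj @ h
    return h

adj = np.array([[0, 1, 1, 1],
                [1, 0, 0, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 0]], dtype=float)
x = np.ones((4, 1))                      # identical input features

h = simple_mpnn(adj, x)
print(np.allclose(h[2], h[3]))           # True: isomorphic nodes get equal MPNN embeddings

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))              # free per-node embeddings break the symmetry
print(np.allclose(z[2], z[3]))           # False
```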

We also compare our methods with representative node embedding methods on OGB datasets in Table 7. NCN and NCNC outperform node embedding methods significantly on all datasets, indicating the advantage of MPNNs augmented with pairwise features for link prediction.

Other GNNs

Instead of learning representations of pairwise relations, PLNLP (Wang et al., 2021) and GIDN (Wang et al., 2022) boost GNNs on link prediction with training tricks such as tailored loss functions and data augmentation. These tricks are orthogonal to our model design. In our experiments (Table 7), compared with PLNLP, NCN achieves an 89% performance gain on ogbl-ppa and a 20% gain on average. As GIDN only reports results on ogbl-collab, the comparison with it is incomplete. Moreover, the tricks of PLNLP can also boost our models (see NCN+tricks in Table 7).
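The relative gains quoted above can be rechecked directly from the scores in Table 7; the short sketch below does this (the simple averaging over the four datasets is our assumption).

```python
# Quick arithmetic check of NCN's relative gain over PLNLP using Table 7 scores.
ncn   = {"collab": 64.76, "ppa": 61.19, "citation2": 88.64, "ddi": 82.32}
plnlp = {"collab": 70.59, "ppa": 32.38, "citation2": 84.92, "ddi": 90.88}

gains = {k: (ncn[k] - plnlp[k]) / plnlp[k] for k in ncn}
print({k: f"{v:+.1%}" for k, v in gains.items()})            # ppa: about +89%
print(f"average: {sum(gains.values()) / len(gains):+.1%}")   # about +19% on average
```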

Figure 8: Visualization of how NCNC works. The example is from the Cora dataset, a citation graph, and the target link is (0, 1). Models should assign this link a high existence probability. However, in the observed graph, link (0, 5) is missing, so (0, 1) has no common neighbor, and NCN therefore predicts that the link is unlikely to exist. NCNC, in contrast, first completes the common neighbors (green lines) and then predicts that (0, 1) is likely to exist. Note that NCNC completes common neighbors with probabilities; we only plot completions with probability > 0.5 here, and both completions have probability around 0.95. Although the completed common neighbor 2 does not exist in the full graph, the full graph here only contains all training, validation, and test edges, so the underlying citation relations may themselves be incomplete.

Appendix H CNC Example

Figure 8 provides an example from the Cora dataset of how CNC works.
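For concreteness, here is a minimal sketch (not the authors' implementation) of the soft completion step: a node adjacent to only one endpoint of the target link is treated as a candidate common neighbor, weighted by the predicted probability of its missing edge. The `link_prob` argument is a placeholder for any link predictor, e.g. a trained NCN model.

```python
# A minimal sketch of common neighbor completion (CNC) for a target link (i, j).
from typing import Callable, Dict, Set

def soft_common_neighbors(
    adj: Dict[int, Set[int]],
    i: int,
    j: int,
    link_prob: Callable[[int, int], float],
) -> Dict[int, float]:
    """Return candidate common neighbors of (i, j) with completion weights."""
    weights = {}
    for u in adj[i] | adj[j]:
        if u in (i, j):
            continue
        if u in adj[i] and u in adj[j]:
            weights[u] = 1.0                      # observed common neighbor
        elif u in adj[i]:
            weights[u] = link_prob(u, j)          # complete the missing edge (u, j)
        else:
            weights[u] = link_prob(u, i)          # complete the missing edge (u, i)
    return weights

# Toy usage with a dummy predictor that always returns 0.95.
adj = {0: {3, 4}, 1: {4, 5}, 3: {0}, 4: {0, 1}, 5: {1}}
print(soft_common_neighbors(adj, 0, 1, lambda u, v: 0.95))
# e.g. {3: 0.95, 4: 1.0, 5: 0.95} (dict order may vary)
```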

Appendix I Ablation of MPNN

Here we provide an ablation study on the MPNN used in NCN. The results are shown in Table 8. The MPNNs include GIN (Xu et al., 2019), GraphSage (Hamilton et al., 2017), MPNN with max aggregation, GCN (Kipf & Welling, 2017), and GAT (Velickovic et al., 2018). Although the performance of NCN is sensitive to the choice of MPNN, NCN matches or outperforms Graph Autoencoder (GAE) with every MPNN backbone, and the gains are especially large on the OGB datasets.
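The sketch below (not the authors' code) shows how the backbone can be swapped in an NCN-style predictor. It assumes a readout that concatenates the Hadamard product of the two endpoint embeddings with the summed embeddings of their common neighbors before a final MLP, and uses PyTorch Geometric layers; the max-aggregation variant would correspond to something like SAGEConv(..., aggr='max').

```python
# A minimal sketch of an NCN-style predictor with a swappable MPNN backbone.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, SAGEConv, GATConv

class NCNStyle(nn.Module):
    def __init__(self, in_dim, hid_dim, backbone="GCN"):
        super().__init__()
        Conv = {"GCN": GCNConv, "SAGE": SAGEConv, "GAT": GATConv}[backbone]
        self.conv1 = Conv(in_dim, hid_dim)
        self.conv2 = Conv(hid_dim, hid_dim)
        self.mlp = nn.Sequential(nn.Linear(2 * hid_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, 1))

    def forward(self, x, edge_index, src, dst, common_neighbors):
        # common_neighbors: one LongTensor of node indices per target link
        h = torch.relu(self.conv1(x, edge_index))
        h = self.conv2(h, edge_index)
        pair = h[src] * h[dst]                                   # endpoint interaction
        cn = torch.stack([h[c].sum(0) for c in common_neighbors])  # pooled common neighbors
        return self.mlp(torch.cat([pair, cn], dim=-1)).squeeze(-1)
```

Common neighbors of each target link can be precomputed from adjacency lists, as in the CNC sketch above.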

Table 8: Ablation study on MPNN.
| Dataset | Model | GIN | GraphSage | max | GCN | GAT |
|---|---|---|---|---|---|---|
| Cora | GAE | 70.45±1.88 | 70.59±1.70 | 61.63±4.43 | 89.01±1.32 | 83.36±2.54 |
| Cora | NCN | 70.62±1.68 | 70.94±1.47 | 66.53±2.27 | 89.05±0.96 | 83.93±2.03 |
| Citeseer | GAE | 61.21±1.18 | 61.23±1.28 | 53.02±3.75 | 91.78±0.94 | 68.49±2.75 |
| Citeseer | NCN | 61.58±1.18 | 61.95±1.05 | 53.40±2.34 | 91.56±1.43 | 69.27±2.08 |
| Pubmed | GAE | 59.00±0.31 | 57.20±1.37 | 55.08±1.43 | 78.81±1.64 | 74.44±1.04 |
| Pubmed | NCN | 59.06±0.49 | 58.06±0.69 | 56.32±0.77 | 79.05±1.16 | 74.43±0.81 |
| collab | GAE | 38.94±0.81 | 28.11±0.26 | 27.08±0.61 | 36.96±0.95 | OOM |
| collab | NCN | 64.38±0.06 | 63.94±0.43 | 64.19±0.18 | 64.76±0.87 | OOM |
| ppa | GAE | 18.20±0.45 | 11.79±1.02 | 20.86±0.81 | 19.49±0.75 | OOM |
| ppa | NCN | 47.94±0.89 | 56.41±0.65 | 57.31±0.30 | 61.19±0.85 | OOM |

Appendix J Choice of Metrics

We evaluate our models under different metrics. The results are shown in Table 9. In total, NCN achieves 11 best scores (in bold), NCNC achieves 22 best scores, and our strongest baseline, BUDDY, achieves 9 best scores. Therefore, NCN and NCNC still outperform the baselines under various metrics.
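For reference, here is a minimal sketch (not the paper's evaluation code) of how these metrics can be computed. It assumes OGB-style Hits@K, where a positive link counts if it scores above the K-th highest negative score, and MRR against a shared negative set; the exact negative sampling scheme of each dataset is not reproduced.

```python
# A minimal sketch of Hits@K and MRR for link prediction scores.
import numpy as np

def hits_at_k(pos_scores, neg_scores, k):
    threshold = np.sort(neg_scores)[-k]            # K-th highest negative score
    return float((pos_scores > threshold).mean())

def mrr(pos_scores, neg_scores):
    # rank of each positive among the negatives (1 = best)
    ranks = 1 + (neg_scores[None, :] > pos_scores[:, None]).sum(1)
    return float((1.0 / ranks).mean())

rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, size=1000)              # hypothetical model scores
neg = rng.normal(0.0, 1.0, size=10000)
print(hits_at_k(pos, neg, 50), mrr(pos, neg))
```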

Table 9: Models’ performance with various metrics. BUDDY is our strongest baseline. Blanks mean unfinished experiments due to time constraints.
| Metric | Model | Cora | Citeseer | Pubmed | Collab | PPA | Citation2 | DDI |
|---|---|---|---|---|---|---|---|---|
| hit@1 | NCN | **16.24±14.18** | 29.32±18.19 | 7.03±6.10 | 4.94±2.95 | 5.91±4.11 | 83.79±0.06 | 0.24±0.11 |
| hit@1 | NCNC | 10.90±11.40 | **32.45±17.01** | **8.57±6.76** | 9.82±2.49 | **7.78±0.63** | | 0.16±0.07 |
| hit@1 | BUDDY | 11.74±5.77 | 20.87±12.22 | 2.97±2.02 | **10.71±0.64** | 2.29±1.26 | | **2.40±4.81** |
| hit@3 | NCN | 29.52±13.79 | 49.98±14.49 | **19.16±4.39** | 11.07±6.32 | 15.32±3.31 | 92.41±0.06 | 1.54±3.43 |
| hit@3 | NCNC | 25.04±11.40 | **50.49±12.01** | 17.58±6.57 | **21.07±5.46** | **16.58±0.60** | | 0.59±0.42 |
| hit@3 | BUDDY | **32.67±10.10** | 41.16±9.12 | 10.41±4.16 | 16.25±1.59 | 7.75±0.48 | | **10.84±7.55** |
| hit@10 | NCN | **55.87±4.40** | **69.68±3.05** | **34.61±5.02** | 43.51±1.84 | 25.76±3.65 | 96.50±0.06 | 40.04±19.59 |
| hit@10 | NCNC | 53.78±7.33 | 69.59±4.48 | 34.29±4.43 | 43.22±6.19 | **26.67±1.51** | | 45.64±14.12 |
| hit@10 | BUDDY | 50.98±3.46 | 67.05±2.83 | 23.92±5.01 | **53.11±0.86** | 17.41±0.06 | | **52.70±7.70** |
| hit@20 | NCN | **68.31±3.00** | 78.02±1.99 | 50.94±3.11 | 55.87±0.36 | **37.57±1.98** | 97.87±0.04 | 82.55±4.08 |
| hit@20 | NCNC | 67.10±2.96 | **79.05±2.68** | **51.42±3.81** | 57.83±3.14 | 35.00±2.22 | | **83.92±3.25** |
| hit@20 | BUDDY | 61.92±2.67 | 76.15±3.31 | 34.75±5.12 | **59.06±0.57** | 27.28±0.52 | | 78.14±4.23 |
| hit@50 | NCN | 80.85±1.12 | 86.33±1.55 | 67.77±1.91 | 64.45±0.35 | **51.54±1.48** | 99.01±0.02 | 94.17±0.36 |
| hit@50 | NCNC | **81.36±1.86** | **88.60±1.51** | **69.25±2.87** | **66.88±0.66** | 48.66±0.18 | | **94.85±0.56** |
| hit@50 | BUDDY | 76.64±2.45 | 85.46±2.17 | 55.75±3.38 | 66.09±0.48 | 39.99±0.02 | | 92.17±0.95 |
| hit@100 | NCN | **89.14±1.04** | 91.82±1.14 | 79.56±1.11 | 67.25±0.15 | 61.25±0.61 | 99.51±0.02 | 97.09±0.43 |
| hit@100 | NCNC | 89.05±1.24 | **93.13±1.13** | **81.18±1.24** | **71.96±0.14** | **62.02±0.74** | | **97.60±0.22** |
| hit@100 | BUDDY | 84.82±1.96 | 91.48±1.15 | 70.92±2.08 | 70.53±0.17 | 48.07±0.05 | | 95.38±0.65 |
| mrr | NCN | **29.20±13.59** | 43.93±12.87 | **17.44±3.40** | 13.76±2.49 | 13.48±2.83 | 88.62±0.05 | 5.48±1.23 |
| mrr | NCNC | 23.55±9.67 | **45.64±11.78** | 15.63±4.13 | 17.68±2.70 | **14.37±0.06** | | 8.61±1.37 |
| mrr | BUDDY | 27.28±4.71 | 35.77±9.59 | 10.79±2.81 | **18.97±0.50** | 7.47±0.02 | | **13.53±6.07** |