Transfer Learning of Graph Neural Networks with Ego-graph Information Maximization
Abstract
Graph neural networks (GNNs) have achieved superior performance in various applications, but training dedicated GNNs can be costly for large-scale graphs. Several recent works have started to study the pre-training of GNNs. However, none of them provide theoretical insights into the design of their frameworks, or clear requirements and guarantees towards their transferability. In this work, we establish a theoretically grounded and practically useful framework for the transfer learning of GNNs. Firstly, we propose a novel view towards the essential graph information and advocate capturing it as the goal of transferable GNN training, which motivates the design of Egi (Ego-Graph Information maximization) to analytically achieve this goal. Secondly, when node features are structure-relevant, we conduct an analysis of Egi transferability regarding the difference between the local graph Laplacians of the source and target graphs. We conduct controlled synthetic experiments to directly justify our theoretical conclusions. Comprehensive experiments on two real-world network datasets show consistent results in the analyzed setting of direct-transferring, while those on large-scale knowledge graphs show promising results in the more practical setting of transferring with fine-tuning. (Code and processed data are available at https://github.com/GentleZhu/EGI.)
1 Introduction
Graph neural networks (GNNs) have been intensively studied recently [29, 26, 39, 68], due to their established performance on various real-world tasks [15, 69, 53], as well as close connections to spectral graph theory [12, 9, 16]. While most GNN architectures are not very complicated, the training of GNNs can still be costly regarding both memory and computation resources on real-world large-scale graphs [10, 63]. Moreover, it is intriguing to transfer learned structural information across different graphs and even domains in settings like few-shot learning [56, 44, 25]. Therefore, several very recent studies have been conducted on the transferability of GNNs [21, 23, 22, 71, 31, 3, 47]. However, it is unclear in what situations the models will excel or fail, especially when the pre-training and fine-tuning tasks are different. To provide rigorous analysis and guarantees on the transferability of GNNs, we focus on the setting of direct-transferring between the source and target graphs, under an analogous setting of "domain adaptation" [7, 59, 71].
In this work, we establish a theoretically grounded framework for the transfer learning of GNNs, and leverage it to design a practically transferable GNN model. Figure 1 gives an overview of our framework. It is based on a novel view of a graph as samples from the joint distribution of its k-hop ego-graph structures and node features, which allows us to define graph information and similarity, so as to analyze GNN transferability (§3). This view motivates us to design Egi, a novel GNN training objective based on ego-graph information maximization, which is effective in capturing the graph information as we define (§3.1). Then we further specify the requirement on transferable node features and analyze the transferability of Egi that is dependent on the local graph Laplacians of source and target graphs (§3.2).
All of our theoretical conclusions have been directly validated through controlled synthetic experiments (Table 1), where we use structural-equivalent role identification in a direct-transferring setting to analyze the impacts of different model designs, node features, and source-target structure similarities on GNN transferability. In §4, we conduct real-world experiments on multiple publicly available network datasets. On the Airport and Gene graphs (§4.1), we closely follow the settings of our synthetic experiments and observe consistent but more detailed results supporting the design of Egi and the utility of our theoretical analysis. On the YAGO graphs (§4.2), we further evaluate Egi in the more generalized and practical setting of transfer learning with task-specific fine-tuning. We find our theoretical insights still indicative in such scenarios, where Egi consistently outperforms state-of-the-art GNN representation and transfer learning frameworks by significant margins.

2 Related Work
Representation learning on graphs has been studied for decades, with earlier spectral-based methods [6, 46, 52] theoretically grounded but hardly scaling up to graphs with over a thousand nodes. With the emergence of neural networks, unsupervised network embedding methods based on the Skip-gram objective [37] have reinvigorated the field [51, 14, 42, 45, 66, 62, 65]. Equipped with efficient structural sampling (random walk, neighborhood, etc.) and negative sampling schemes, these methods are easily parallelizable and scalable to graphs with thousands to millions of nodes. However, these models are essentially transductive, as they compute fully parameterized embeddings only for nodes seen during training, which cannot be transferred to unseen graphs.
More recently, researchers have introduced the family of graph neural networks (GNNs), which are capable of inductive learning and generalizing to unseen nodes given meaningful node features [29, 12, 15, 67]. Yet, most existing GNNs require task-specific labels for training in a semi-supervised fashion to achieve satisfactory performance [29, 15, 53, 64], and their usage is limited to single graphs where the downstream task is fixed. To this end, several unsupervised GNNs have been proposed, such as the auto-encoder-based ones like VGAE [28] and GNFs [35], as well as the deep-infomax-based ones like DGI [54] and InfoGraph [50]. Their potential in the transfer learning of GNNs remains unclear when the node features and link structures vary across different graphs.
Although the architectures of popular GNNs such as GCN [29] may not be very complicated compared with heavy vision and language models, training a dedicated GNN for each graph can still be cumbersome [10, 63]. Moreover, as the pre-training of neural networks has proven successful in other domains [13, 18], it is intriguing to transfer well-trained GNNs from relevant source graphs to improve the modeling of target graphs or enable few-shot learning [59, 31, 3] when labeled data are scarce. In light of this, pioneering works have studied both generative [22] and discriminative [21, 23] GNN pre-training schemes. Though Graph Contrastive Coding [43] shares the most similar view towards graph structures as us, it utilizes contrastive learning across all graphs instead of focusing on the transfer learning between any specific pairs. On the other hand, unsupervised domain adaptive GCNs [59] study the domain adaptation problem only when the source and target tasks are homogeneous.
Most previous pre-training and self-supervised GNNs lack rigorous analysis of their transferability and thus have unpredictable effectiveness. The only existing theoretical works on GNN transferability study the performance of GNNs across different permutations of a single original graph [33, 34] and the tradeoff between discriminability and transferability of GNNs [47]. We, instead, are the first to rigorously study the more practical setting of transferring GNNs across pairs of different source and target graphs.
3 Transferable Graph Neural Networks
In this paper, we design a more transferable training objective for GNNs (Egi) based on our novel view of essential graph information (§3.1). We then analyze its transferability as the gap between its abilities to model the source and target graphs, based on their local graph Laplacians (§3.2).
Based on the connection between GNNs and spectral graph theory [29], we describe the output of a GNN as a combination of its input node features, the fixed graph Laplacian, and the learnable graph filters. The goal of training a GNN is then to improve its utility by learning graph filters that are compatible with the other two components towards specific tasks.
In the graph transfer learning setting where downstream tasks are often unknown during pre-training, we argue that the general utility of a GNN should be optimized and quantified w.r.t. its ability to capture the essential graph information in terms of the joint distribution of its topology structures and node features, which motivates us to design a novel ego-graph information maximization model (Egi) (§3.1). The general transferability of a GNN is then quantified by the gap between its abilities to model the source and target graphs. Under reasonable requirements such as using structure-respecting node features as the GNN input, we analyze this gap for Egi based on the structural difference between the two graphs w.r.t. their local graph Laplacians (§3.2).
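To make this view concrete, the minimal PyTorch sketch below (our illustration, not the paper's exact encoder) shows a single GCN-style layer whose output combines the three components discussed above: the input node features, a fixed degree-normalized view of the graph structure, and learnable graph filters. The 1-hop polynomial filter choice and the layer sizes are assumptions for illustration only.

```python
# A minimal sketch of a GCN layer whose output combines fixed structure,
# input node features, and learnable graph filters (illustrative, not the exact Egi encoder).
import torch
import torch.nn as nn


class PolyFilterGCNLayer(nn.Module):
    """One GCN-style layer with a simple 1-hop polynomial filter (illustrative choice)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)  # learnable graph filter parameters

    def forward(self, adj: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # Degree-normalized adjacency with self-loops: the fixed structural component.
        a_hat = adj + torch.eye(adj.size(0))
        deg_inv = torch.diag(1.0 / a_hat.sum(dim=1))
        norm_adj = deg_inv @ a_hat
        # Output = (structure) x (features) x (learnable filters).
        return torch.relu(norm_adj @ self.weight(x))


if __name__ == "__main__":
    adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # toy 3-node path graph
    x = torch.randn(3, 8)
    layer = PolyFilterGCNLayer(8, 16)
    print(layer(adj, x).shape)  # torch.Size([3, 16])
```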
3.1 Transferable GNN via Ego-graph Information Maximization
In this work, we focus on the direct-transferring setting where a GNN is pre-trained on a source graph in an unsupervised fashion and applied on a target graph without fine-tuning. (In the experiments, we show our model to be generalizable to the more practical settings with task-specific pre-training and fine-tuning, while the study of rigorous bounds in such scenarios is left as future work.) Consider a graph whose nodes are associated with certain features and whose edges form the graph structure. Intuitively, transfer learning will be successful only if both the features and structures of the source and target graphs are similar in some ways, so that the graph filters of a GNN learned on the source graph are compatible with the features and structures of the target graph.
Graph kernels [57, 8, 30, 38] are well known for their capability of measuring similarity between pairs of graphs. Motivated by k-hop subgraph kernels [4], we introduce a novel view of a graph as samples from the joint distribution of its k-hop ego-graph structures and node features. Since a GNN essentially encodes such k-hop ego-graph samples, this view allows us to give concrete definitions of the structural information of graphs in the transfer learning setting, which facilitates the measurement of similarity (difference) among graphs. Yet, none of the existing GNN training objectives are capable of recovering such distributional signals of ego-graphs. To this end, we design Ego-Graph Information maximization (Egi), which instead reconstructs the k-hop ego-graph of each center node via mutual information maximization [20].
Definition 3.1 (K-hop ego-graph).
We call a graph a k-hop ego-graph centered at a node if it has a k-layer centroid expansion [4] such that the greatest distance between the center node and any other node in the ego-graph is k, where distance is measured by the graph distance (length of the shortest path) between two nodes.
In this paper, we use directed k-hop ego-graphs, whose direction is decided by whether they are composed of incoming or outgoing edges w.r.t. the center node, i.e., the forward and backward ego-graphs. The results apply trivially to undirected graphs, where the two coincide.
Definition 3.2 (Structural information).
Given a topological space of sub-graphs, we view a graph as samples of k-hop ego-graphs drawn i.i.d. from this space under some probability measure. The structural information of the graph is then defined to be the set of its k-hop ego-graphs together with their empirical distribution.
As shown in Figure 1, three example graphs are characterized by their sets of 1-hop ego-graphs and the corresponding empirical distributions, which allows us to quantify the structural similarity among graphs as shown in §3.2 (under such a characterization, the first two graphs in the figure are more similar to each other than to the third). In practice, the nodes in a graph are characterized not only by their k-hop ego-graph structures but also by their associated node features. Therefore, a graph should be regarded as samples drawn from the joint distribution on the product of the sub-graph space and a node feature space.
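As a concrete illustration of this view, the short sketch below (assuming an undirected NetworkX graph and k=2, both illustrative choices) extracts the k-hop ego-graph of every node; together with their empirical (uniform) distribution, these samples constitute the structural information defined above.

```python
# A small sketch of the "graph as ego-graph samples" view (Defs 3.1-3.2),
# assuming an undirected NetworkX graph; the toy graph and k are illustrative.
import networkx as nx
from collections import Counter


def khop_ego_graphs(graph: nx.Graph, k: int = 2):
    """Return the k-hop ego-graph of every node; with their empirical (uniform)
    distribution, this is the structural information of the graph."""
    return {v: nx.ego_graph(graph, v, radius=k) for v in graph.nodes()}


if __name__ == "__main__":
    g = nx.barabasi_albert_graph(100, 2, seed=0)
    egos = khop_ego_graphs(g, k=2)
    # A crude summary of the empirical distribution: how many nodes fall into
    # ego-graphs of each size (real usage would compare structures, not just sizes).
    sizes = Counter(len(e) for e in egos.values())
    print(sorted(sizes.items())[:5])
```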

Ego-Graph Information Maximization. Given a set of ego-graphs $\{(g_i, x_i)\}$ drawn from an empirical joint distribution, we aim to train a GNN encoder $\Psi$ to maximize the mutual information (MI) between the defined structural information (i.e., the k-hop ego-graph $g_i$) and the node embedding $z_i = \Psi(g_i, x_i)$. (Later in §3.2, we will discuss the equivalence between the MI computed with the structure alone and with the structure-feature pair when the node features are structure-respecting.) To maximize the MI, a discriminator $\mathcal{D}(g_i, z_i)$ is introduced to compute the probability of an edge belonging to the given ego-graph $g_i$. We use the Jensen-Shannon MI estimator [20] in the Egi objective,
$$\mathcal{L}_{\mathrm{EGI}} \;=\; -\,\mathrm{MI}^{(\mathrm{JSD})}(\mathcal{G}, \Psi) \;=\; \frac{1}{N}\sum_{i=1}^{N}\Big[\,\mathrm{sp}\big(-\mathcal{D}(g_i, z_i)\big) + \mathrm{sp}\big(\mathcal{D}(g_{i'}, z_i)\big)\,\Big], \qquad (1)$$
where $\mathrm{sp}(x)=\log(1+e^{x})$ is the softplus function, $z_i = \Psi(g_i, x_i)$ is the center node embedding, and the pair $(g_{i'}, z_i)$ is randomly drawn from the product of marginal distributions, i.e., $g_{i'}$ is an ego-graph sampled independently of $z_i$. In general, we could also randomly draw negative samples from the whole topological space, but enumerating all possible graphs leads to high computation cost.
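The following hedged sketch shows how such a softplus-based Jensen-Shannon estimator can be computed in PyTorch, with negatives drawn by permuting the sampled ego-graphs; the `discriminator` callable and the toy data are placeholders rather than the exact Egi implementation.

```python
# A hedged sketch of the objective in Eq. 1: positives come from the joint distribution
# (an ego-graph with its own center embedding), negatives from the product of marginals
# (embeddings paired with shuffled ego-graphs). `discriminator` is any callable that
# returns a scalar score per (ego-graph, embedding) pair.
import torch
import torch.nn.functional as F


def egi_jsd_loss(discriminator, ego_graphs, center_embeddings):
    """ego_graphs: list of N ego-graph objects; center_embeddings: tensor of shape [N, d]."""
    n = len(ego_graphs)
    perm = torch.randperm(n)  # pair each embedding with a randomly drawn ego-graph as the negative
    pos = torch.stack([discriminator(ego_graphs[i], center_embeddings[i]) for i in range(n)])
    neg = torch.stack([discriminator(ego_graphs[int(perm[i])], center_embeddings[i]) for i in range(n)])
    # Minimizing this loss maximizes the Jensen-Shannon lower bound on the mutual information.
    return (F.softplus(-pos) + F.softplus(neg)).mean()


if __name__ == "__main__":
    toy_disc = lambda g, z: z.sum() * float(len(g))   # placeholder scorer, only for demonstration
    graphs = [list(range(s)) for s in (3, 5, 4, 6)]   # stand-ins for sampled ego-graphs
    z = torch.randn(4, 8, requires_grad=True)
    print(egi_jsd_loss(toy_disc, graphs, z).item())
```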
In Eq. 1, the computation of $\mathcal{D}$ on an ego-graph depends on the node ordering. Following the common practice in graph generation [70], we characterize the decision process of $\mathcal{D}$ with a fixed graph ordering, i.e., the BFS ordering over the edges of the ego-graph. $\mathcal{D}$ is composed of another GNN encoder and a scoring function over the resulting edge sequence, which makes predictions on the BFS-ordered edges.
Recalling our earlier definition of the direction of a k-hop ego-graph, the center node encoder $\Psi$ receives the ego-graphs with their node features, while the neighbor node encoder in the discriminator receives the reversed ego-graphs with the same features. Both encoders are parameterized as GNNs, where the adjacency matrices (with self-loops) of the forward and reversed ego-graphs are used, respectively. The self-loops are added following the common design of GNNs, which allows the convolutional node embeddings to always incorporate the influence of the center node. The output of $\Psi$, i.e., $z_i$, is the center node embedding, while the discriminator's encoder outputs representations for the neighbor nodes in the ego-graph.
Once the node representations are computed, we describe the scoring function. For each node pair $(p, q)$ along an edge $e_{pq}$ of the ego-graph, $h_p$ is the source node representation from the discriminator's encoder and $x_q$ is the destination node feature. The scoring function is,
$$\mathcal{D}(e_{pq}) \;=\; \sigma\Big(\mathbf{U}^{\top}\cdot \tau\big(\mathbf{W}^{\top}\,[\,z_i \,\|\, x_q \,\|\, h_p\,]\big)\Big), \qquad (2)$$
where $\sigma$ and $\tau$ are the Sigmoid and ReLU activation functions, and $\mathbf{U}$ and $\mathbf{W}$ are learnable weights. Thus, the discriminator is asked to distinguish a positive pair and a negative pair for each edge in the ego-graph,
$$\mathcal{L}_{\mathrm{EGI}} \;=\; \frac{1}{N}\sum_{i=1}^{N}\;\sum_{e_{pq}\in E(g_i)}\Big[\,\mathrm{sp}\big(-\mathcal{D}(e_{pq}\mid g_i, z_i)\big) + \mathrm{sp}\big(\mathcal{D}(e_{pq}\mid g_{i'}, z_i)\big)\,\Big]. \qquad (3)$$
There are two types of edges in our consideration of node orders: type-a, the edges across different hops (from the center node), and type-b, the edges within the same hop (from the center node). The aforementioned BFS-based node ordering guarantees that Eq. 3 is sensitive to the ordering of type-a edges and invariant to the ordering of type-b edges, which is consistent with the requirement of our theoretical analysis. Since the output of a k-layer GNN only depends on the k-hop ego-graph for both encoders, Egi can be trained in parallel by sampling batches of ego-graphs. Besides, the training objective of Egi is transferable as long as the node features across the source and target graphs satisfy the conditions given in §3.2. More model details are given in Appendix §B, and the source code is in the Supplementary Materials.
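A possible PyTorch parameterization of the edge scoring function in Eq. 2 is sketched below. The two-layer structure with ReLU and Sigmoid follows the description above, while the hidden sizes and the concatenation order of the center embedding z_i, destination feature x_q, and source representation h_p are illustrative assumptions.

```python
# A sketch of an edge scorer in the spirit of Eq. 2: the score combines the center-node
# embedding z_i, the destination-node feature x_q, and the source-node hidden state h_p
# through a two-layer network with ReLU and Sigmoid activations (dimensions illustrative).
import torch
import torch.nn as nn


class EdgeScorer(nn.Module):
    def __init__(self, hid_dim: int, feat_dim: int, emb_dim: int):
        super().__init__()
        self.hidden = nn.Linear(emb_dim + feat_dim + hid_dim, hid_dim)  # plays the role of "W"
        self.out = nn.Linear(hid_dim, 1)                                # plays the role of "U"

    def forward(self, z_i: torch.Tensor, x_q: torch.Tensor, h_p: torch.Tensor) -> torch.Tensor:
        score = self.out(torch.relu(self.hidden(torch.cat([z_i, x_q, h_p], dim=-1))))
        return torch.sigmoid(score).squeeze(-1)  # probability that the edge belongs to the ego-graph


if __name__ == "__main__":
    scorer = EdgeScorer(hid_dim=32, feat_dim=16, emb_dim=32)
    print(scorer(torch.randn(5, 32), torch.randn(5, 16), torch.randn(5, 32)))  # 5 edge scores in (0, 1)
```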
Connection with existing work. To provide more insights into the Egi objective, we also present it as the dual of an ego-graph reconstruction problem. Recall our definition of the ego-graph mutual information in Eq. 1. It can be related to an ego-graph reconstruction loss as
(4)
When Egi maximizes the mutual information, it simultaneously minimizes the upper error bound of reconstructing an ego-graph. In this view, the key difference between Egi and VGAE [28] is that the latter assumes each edge in a graph to be observed independently during reconstruction, while in Egi, the edges in an ego-graph are observed jointly during GNN decoding. Moreover, existing mutual-information-based GNNs such as DGI [54] and GMI [41] explicitly measure the mutual information between node features and the GNN output. In this way, they tend to capture node features instead of graph structures, which we deem more essential in graph transfer learning as discussed in §3.2.
Use cases of the Egi framework. In this paper, we focus on the classical domain adaptation (direct-transferring) setting [7], where no target domain labels are available and transferability is measured by the performance discrepancy without fine-tuning. In this setting, the transferability of Egi is theoretically guaranteed by Theorem 3.1. In §4.1, we validate this on the Airport datasets. Beyond direct-transferring, Egi is also useful in the more generalized and practical setting of transfer learning with fine-tuning, which we introduce in §4.2 and validate on the YAGO datasets. In this setting, the transferability of Egi has not been rigorously studied yet, but is empirically shown to be promising.
Supportive observations. In the transferable-feature columns of our synthetic experimental results (Table 1), in both cases of transferring GNNs between similar graphs (F-F) and dissimilar graphs (B-F), Egi significantly outperforms all competitors when using the node degree one-hot encoding as the transferable node feature. In particular, the performance gains over the untrained GIN show the effectiveness of training and transferring, and our gains are always larger than those of the two state-of-the-art unsupervised GNNs. Such results clearly indicate the advantageous structure-preserving capability and transferability of Egi.
3.2 Transferability analysis based on local graph Laplacians
We now study the transferability of a GNN (in particular, one trained with the Egi objective) between a source graph $G_a$ and a target graph $G_b$ based on their graph similarity. We first establish the requirement on node features, under which we then focus on analyzing the transferability of Egi w.r.t. the structural information of $G_a$ and $G_b$.
Recall our view of the GNN output as a combination of its input node features, fixed graph Laplacian and learnable graph filters. The utility of a GNN is determined by the compatibility among the three. In order to fulfill such compatibility, we require the node features to be structure-respecting:
Definition 3.3 (Structure-respecting node features).
Given an ordered ego-graph centered on a node, with its node features organized hop by hop, we say the node features on the ego-graph are structure-respecting if the feature of every node in it is given by a function of the ego-graph structure. In the strict case, this function should be injective.
In its essence, Def 3.3 requires the node features to be a function of the graph structures, which is sensitive to changes in the graph structures and, in the ideal case, injective to the graph structures (i.e., mapping different graphs to different features). In this way, when the learned graph filters of a transferred GNN are compatible with the structure of the target graph, they are also compatible with its node features. As we will explain in Remark 2 of Theorem 3.1, this requirement is also essential for the analysis of Egi transferability, which eventually depends only on the structural difference between the two graphs.
In practice, commonly used node features like node degrees, PageRank scores [40], spectral embeddings [11], and many pre-computed unsupervised network embeddings [42, 51, 14] are all structure-respecting in nature. However, other commonly used node features like random vectors [68] or uniform vectors [60] are not, and are thus non-transferable. When raw node attributes are available, they are transferable as long as the concept of homophily [36] applies, which also implies Def 3.3, but we do not have a rigorous analysis of this yet.
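For instance, the default transferable feature used in our experiments, the node degree one-hot encoding, can be computed as in the following sketch (the degree cap max_degree is an illustrative choice to keep feature dimensions aligned across graphs):

```python
# A minimal example of a structure-respecting feature: degree one-hot encoding.
# The capping dimension `max_degree` is an illustrative assumption.
import networkx as nx
import numpy as np


def degree_onehot_features(graph: nx.Graph, max_degree: int = 32) -> np.ndarray:
    feats = np.zeros((graph.number_of_nodes(), max_degree + 1), dtype=np.float32)
    for idx, node in enumerate(graph.nodes()):
        feats[idx, min(graph.degree[node], max_degree)] = 1.0  # degree is a function of structure
    return feats


if __name__ == "__main__":
    g = nx.karate_club_graph()
    print(degree_onehot_features(g).shape)  # (34, 33)
```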
Supportive observations. In the non-transferable-feature columns of Table 1, where we use the same fixed vectors as non-transferable node features to contrast with the transferable-feature columns, there is almost no transferability for any compared method, as the performance of the trained GNNs is similar to or worse than that of their untrained baselines. More detailed experiments on different transferable and non-transferable features can be found in Appendix §C.1.
With our view of graphs and the requirement on node features both established, we now derive the following theorem by characterizing the performance difference of Egi on two graphs based on Eq. 1.
Theorem 3.1 (GNN transferability).
Let $G_a$ and $G_b$ be two graphs, and assume the node features are structure-relevant. Consider GCNs with k layers and a 1-hop polynomial filter. With reasonable assumptions on the local spectra of $G_a$ and $G_b$, the empirical performance difference of Egi evaluated on the two graphs satisfies
$$\big|\,\mathcal{L}_{\mathrm{EGI}}(G_a) - \mathcal{L}_{\mathrm{EGI}}(G_b)\,\big| \;\leq\; \mathcal{O}\big(\Delta_{\mathcal{D}}(G_a, G_b)\big) + C. \qquad (5)$$
On the RHS, $C$ is only dependent on the graph encoders and node features, while $\Delta_{\mathcal{D}}(G_a, G_b)$ measures the structural difference between the source and target graphs as follows,
$$\Delta_{\mathcal{D}}(G_a, G_b) \;=\; \tilde{C}\,\frac{1}{n_a n_b}\sum_{i=1}^{n_a}\sum_{j=1}^{n_b}\lambda_{\max}\big(\tilde{L}_{g_i^{a}} - \tilde{L}_{g_j^{b}}\big), \qquad (6)$$
where $n_a$ and $n_b$ are the numbers of ego-graph samples (nodes) in $G_a$ and $G_b$, $\lambda_{\max}(\cdot)$ denotes the largest eigenvalue in magnitude, and $\tilde{L}_{g}$ denotes the graph Laplacian of an ego-graph $g$ normalised by its in-degree. $\tilde{C}$ is a constant dependent on the discriminator and the node features.
Proof.
The full proof is detailed in Appendix §A. ∎
The analysis in Theorem 3.1 naturally instantiates our insight about the correspondence between structural similarity and GNN transferability. It allows us to tell how well an Egi model trained on $G_a$ can work on $G_b$ by only checking the local graph Laplacians of the two graphs, without actually training any model. In particular, we define the Egi gap as $\Delta_{\mathcal{D}}$ in Eq. 6, as the other term is the same for different methods using the same GNN encoder. It can be computed to bound the transferability of Egi regarding its loss difference on the source and target graphs.
Remark 1.
Our view of a graph as samples of k-hop ego-graphs is important, as it allows us to obtain a node-wise characterization of GNNs similar to that in [55]. It also allows us to set the depth of the ego-graphs in the analysis to be the same as the number of GNN layers (k), since the GNN embedding of each node mostly depends on its k-hop ego-graph instead of the whole graph.
Remark 2.
For Eq. 1, Def 3.3 ensures that sampling the GNN embedding at a node always corresponds to sampling an ego-graph from the joint distribution, which reduces to uniformly sampling ego-graphs under the setting of Theorem 3.1. Therefore, the requirement of Def 3.3 in the context of Theorem 3.1 guarantees that the analysis depends only on the structural information of the graph.
Supportive observations. In the structural-difference columns of Table 1, we compute the average structural difference between pairs of Forest-fire graphs (Δ_D(F,F)) and between Barabasi and Forest-fire graphs (Δ_D(B,F)), based on the RHS of Eq. 5. The results validate the topological difference between graphs generated by different random-graph models, while also verifying our view of a graph as k-hop ego-graph samples and the way we propose to characterize the structural information of graphs based on it. We further highlight in the δ(acc.) columns the accuracy difference between the GNNs transferred from Forest-fire graphs and from Barabasi graphs to Forest-fire graphs. Since Forest-fire graphs are more similar to other Forest-fire graphs than to Barabasi graphs (as verified by the Δ_D columns), we expect δ(acc.) to be positive and large, indicating more positive transfer between the more similar graphs. Indeed, the behaviors of Egi align well with this expectation, which indicates its well-understood transferability and the utility of our theoretical analysis.
Use cases of Theorem 3.1. Our Theorem 3.1 naturally allows for two practical use cases among many others: point-wise pre-judging and pair-wise pre-selection for Egi pre-training. Suppose we have a target graph that does not have sufficient training labels. In the first setting, we have a single source graph that might be useful for pre-training a GNN to be used on the target graph. The Egi gap Δ_D in Eq. 6 can then be computed between the source and target graphs to pre-judge whether such transfer learning would be successful before any actual GNN training (i.e., yes if the Egi gap is empirically small; no otherwise). In the second setting, we have two or more source graphs that might be useful for pre-training the GNN. The Egi gap can then be computed between every candidate source graph and the target graph to pre-select the best source graph (i.e., the one with the smallest Egi gap).
In practice, the computation of eigenvalues on the small ego-graphs can be rather efficient [2], and we do not need to enumerate all pairs of ego-graphs on the two compared graphs, especially if the graphs are really large (e.g., with more than a thousand nodes). Instead, we can randomly sample pairs of ego-graphs from the two graphs, update the average difference on-the-fly, and stop when it converges. Suppose we sample a number of pairs of k-hop ego-graphs to compare two large graphs; then the overall cost of computing Eq. 5 depends only on the number of sampled pairs and the average ego-graph size, where the former is often less than 1K and the latter less than 50. In Appendix §C.4, we report the approximated Δ_D values w.r.t. different sampling frequencies; they are indeed quite close to the actual values even with smaller sampling frequencies, showing the feasible efficiency of computing Δ_D through sampling.
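The following sketch illustrates this sampling-based estimation. It is a simplified illustration under stated assumptions: ego-graphs are padded with isolated nodes (as in the proof assumptions), the spectral norm of the difference of degree-normalized Laplacians stands in for the exact quantity in Eq. 6, constant factors are dropped, and the random-graph generators are only stand-ins for the Forest-fire and Barabasi graphs of Table 1.

```python
# A hedged sketch of estimating the Egi gap by sampling pairs of k-hop ego-graphs.
import networkx as nx
import numpy as np


def norm_laplacian(graph: nx.Graph, size: int) -> np.ndarray:
    """Degree-normalized Laplacian of `graph`, padded with isolated nodes up to `size`."""
    adj = np.zeros((size, size))
    index = {v: i for i, v in enumerate(graph.nodes())}
    for u, v in graph.edges():
        adj[index[u], index[v]] = adj[index[v], index[u]] = 1.0
    deg = adj.sum(axis=1)
    inv = np.divide(1.0, deg, out=np.zeros_like(deg), where=deg > 0)
    return np.eye(size) - np.diag(inv) @ adj  # L = I - D^{-1} A (padded rows reduce to identity rows)


def sampled_egi_gap(g_a: nx.Graph, g_b: nx.Graph, k: int = 2, num_pairs: int = 200, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    diffs = []
    for _ in range(num_pairs):
        ego_a = nx.ego_graph(g_a, rng.choice(list(g_a.nodes())), radius=k)
        ego_b = nx.ego_graph(g_b, rng.choice(list(g_b.nodes())), radius=k)
        size = max(len(ego_a), len(ego_b))          # pad the smaller ego-graph with isolated nodes
        diff = norm_laplacian(ego_a, size) - norm_laplacian(ego_b, size)
        diffs.append(np.linalg.norm(diff, ord=2))   # largest singular value of the difference
    return float(np.mean(diffs))


if __name__ == "__main__":
    graph_a = nx.barabasi_albert_graph(100, 2, seed=1)  # stand-in for one random-graph family
    graph_b = nx.barabasi_albert_graph(100, 4, seed=2)  # stand-in for another
    print(sampled_egi_gap(graph_a, graph_b))
```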
Limitations. Egi is designed to account for the structural difference captured by GNNs (i.e., k-hop ego-graphs). The effectiveness of Egi could be limited if the tasks on target graphs depend on different structural signals. For example, as Eq. 6 computes the average pairwise distance between the graph Laplacians of local ego-graphs, Δ_D is possibly less effective in explicitly capturing global graph properties such as the number of connected components (CCs). In some specific tasks (such as counting CCs or community detection) where such properties become the key factors, Δ_D may fail to predict the transferability of GNNs.
Table 1: Results of the controlled synthetic experiments. The first three result columns use transferable node features, the next three use non-transferable node features, and the last two report the structural difference Δ_D between the source and target graph types (F: Forest-fire, B: Barabasi).

Method | F-F | B-F | δ(acc.) | F-F | B-F | δ(acc.) | Δ_D(F,F) | Δ_D(B,F)
---|---|---|---|---|---|---|---|---
GIN (untrained) | 0.572 | 0.572 | / | 0.358 | 0.358 | / | 0.752 | 0.883
VGAE (GIN) | 0.498 | 0.432 | +0.066 | 0.240 | 0.239 | 0.001 | |
DGI (GIN) | 0.578 | 0.591 | -0.013 | 0.394 | 0.213 | +0.181 | |
Egi (GIN) | 0.710 | 0.616 | +0.094 | 0.376 | 0.346 | +0.03 | |
4 Real Data Experiments
Baselines. We compare the proposed model against existing self-supervised GNNs and pre-training GNN algorithms. To exclude the impact of different GNN encoders on transferability, we always use the same encoder architecture for all compared methods (i.e., GIN [60] for the direct-transferring experiments and GCN [29] for transferring with fine-tuning).
The self-supervised GNN baselines are GVAE [28], DGI [54], and two recent mutual-information-based methods, GMI [41] and MVC [17]. As for pre-training GNN algorithms, MaskGNN and ContextPredGNN are two node-level pre-training models proposed in [21]. Besides, Structural Pre-train [23] also conducts unsupervised node-level pre-training with structural features like node degrees and clustering coefficients.
Experimental Settings. The main hyperparameter, the number of hops k, is set to 2 in Egi as a common practice. We use Adam [27] as the optimizer with a learning rate of 0.01. We provide experimental results with varying k in Appendix §C.4. All baselines use their default parameters. Our experiments were run on an AWS g4dn.2xlarge machine with one Nvidia T4 GPU. By default, we use the node degree one-hot encoding as the transferable feature across all different graphs. As stated before, other transferable features like spectral and other pre-computed node embeddings are also applicable. We focus on the setting where the downstream tasks on target graphs are unspecified but assumed to be structure-relevant, and thus pre-train the GNNs on source graphs in an unsupervised fashion. (The downstream tasks are unspecified because we aim to study the general transferability of GNNs that is not bound to specific tasks; nevertheless, we assume the tasks to be relevant to graph structures.) In terms of evaluation, we design two realistic experimental settings: (1) direct-transferring on the more structure-relevant task of role identification without given node features, to directly evaluate the utility and transferability of Egi; (2) few-shot learning on relation prediction with task-specific node features, to evaluate the generalization ability of Egi.
4.1 Direct-transferring on role identification
First, we use role identification without node features in a direct-transferring setting as a reliable proxy to evaluate transfer learning performance regarding different pre-training objectives. A role in a network is a set of nodes with similar structural behaviors, such as clique members, hubs, and bridges [19]. Across graphs in the same domain, we assume the definition of roles to be consistent, and the task of role identification is highly structure-relevant, which can directly reflect the transferability of different methods and allows us to conduct the analysis according to Theorem 3.1. Upon convergence of pre-training each model on the source graphs, we directly apply them to the target graphs and further train a multi-layer perceptron (MLP) upon their outputs. The GNN parameters are frozen during the MLP training. We refer to this strategy as direct-transferring since there is no fine-tuning of the models after transferring to the target graphs.
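A minimal sketch of this direct-transferring protocol, with an illustrative dummy encoder standing in for a pre-trained GNN, looks as follows:

```python
# A sketch of the evaluation protocol: freeze the transferred encoder, then train
# only an MLP probe on its output embeddings. The encoder interface and all
# dimensions are illustrative assumptions.
import torch
import torch.nn as nn


def train_frozen_probe(encoder: nn.Module, encoder_inputs, labels: torch.Tensor,
                       emb_dim: int = 32, num_classes: int = 4, epochs: int = 100):
    for p in encoder.parameters():
        p.requires_grad_(False)            # freeze the pre-trained (transferred) encoder
    encoder.eval()
    with torch.no_grad():
        z = encoder(*encoder_inputs)       # node embeddings on the target graph

    probe = nn.Sequential(nn.Linear(emb_dim, 64), nn.ReLU(), nn.Linear(64, num_classes))
    opt = torch.optim.Adam(probe.parameters(), lr=0.01)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(probe(z), labels)
        loss.backward()
        opt.step()
    return probe


if __name__ == "__main__":
    dummy_encoder = nn.Linear(8, 32)                    # stands in for a pre-trained GNN
    feats, labels = torch.randn(50, 8), torch.randint(0, 4, (50,))
    probe = train_frozen_probe(dummy_encoder, (feats,), labels)
    print(probe(dummy_encoder(feats)).argmax(dim=1)[:10])
```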
We use two real-world network datasets with role-based node labels: (1) Airport [45] contains three networks from different regions: Brazil, USA, and Europe. Each node is an airport and each link is a flight between airports. The airports are assigned external labels based on their level of popularity. (2) Gene [68] contains the gene interactions regarding 50 different cancers. Each gene has a binary label indicating whether it is a transcription factor. More details about the results and datasets can be found in Appendix §C.2.
The experimental setup on the Airport dataset closely resembles that of our synthetic experiments in Table 1, but with real data and more detailed comparisons. We train all models (except for the untrained ones) on the Europe network and test them on all three networks. The results are presented in Table 2. We notice that the node degree features themselves (with an MLP) show reasonable performance on all three networks, which is not surprising since the popularity-based airport role labels are highly relevant to node degrees. The untrained GIN encoder yields a significant margin over the node features alone, as the GNN encoder incorporates structural information into the node representations. While training with DGI can further improve the performance on the source graph, Egi shows the best performance there with the structure-relevant node degree features, corroborating the claimed effectiveness of Egi in capturing the essential graph information (i.e., recovering the k-hop ego-graph distributions) as we stress in §3.
When transferring the models to the USA and Brazil networks, Egi further achieves the best performance compared with all baselines when structure-relevant features are used (64.55 and 73.15), which reflects the most significant positive transfer. Interestingly, the direct application of GVAE, DGI, and MVC, which do not capture the input k-hop ego-graph jointly, leads to rather limited and even negative transferability (through comparison against the untrained GIN encoders). The recently proposed transfer learning frameworks for GNNs like MaskGNN and Structural Pre-train are able to mitigate negative transfer to some extent, but their performance is still inferior to Egi. We believe this is because their models are prone to learning graph-specific information that is less transferable across different graphs. GMI is also known to capture both graph structure and node features, so it achieves the second-best results after Egi.
As in Table 1, we also compute the structural differences among the three networks w.r.t. the Egi gap in Eq. 6. The structural difference is 0.869 between the Europe and USA networks and 0.851 between the Europe and Brazil networks, which are quite close. Consequently, the transferability of Egi regarding its performance gain over the untrained GIN baseline (0.030 on the USA network and 0.032 on the Brazil network in Table 2) is also close between the two. Such observations again align well with our conclusion in Theorem 3.1 that the transferability of Egi is closely related to the structural differences between the source and target graphs.
Table 2: Role identification results (direct-transferring) on the Airport networks [45]; all models are pre-trained on the Europe network.

Method | Europe | USA | Brazil
---|---|---|---
features | 0.528±0.052 | 0.557±0.028 | 0.671±0.089
GIN (random-init) | 0.558±0.050 | 0.616±0.030 | 0.700±0.082
GVAE (GIN) [28] | 0.539±0.053 | 0.555±0.029 | 0.663±0.089
DGI (GIN) [54] | 0.578±0.050 | 0.549±0.028 | 0.673±0.084
Mask-GIN [21] | 0.564±0.053 | 0.608±0.027 | 0.667±0.073
ContextPred-GIN [21] | 0.527±0.048 | 0.504±0.030 | 0.621±0.078
Structural Pre-train [23] | 0.560±0.050 | 0.622±0.030 | 0.688±0.082
MVC [17] | 0.532±0.050 | 0.597±0.030 | 0.661±0.093
GMI [41] | 0.581±0.054 | 0.593±0.031 | 0.731±0.107
Egi (GIN) | 0.592±0.046∗∗ | 0.646±0.029∗∗ | 0.732±0.078
On the Gene dataset, with more graphs available, we focus on Egi to further validate the utility of Eq. 5 in Theorem 3.1, regarding the connection between the Egi gap (Eq. 6) and the performance gap (micro-F1) of Egi across graphs. Due to severe label imbalance that obscures the performance gaps, we only use the seven brain-cancer networks, which have a more consistent balance of labels. As shown in Figure 3, we train Egi on one graph and test it on the other graphs. The x-axis shows the Egi gap and the y-axis shows the improvement in micro-F1 compared with an untrained GIN. The negative correlation between the two quantities is obvious. Specifically, when the structural difference is smaller than 1, positive transfer is observed (upper-left area), as the performance of the transferred Egi is better than the untrained GIN; when the structural difference becomes large (greater than 1), negative transfer is observed. We also notice a similar graph pattern, i.e., a single dense cluster, shared between the source graph and the positively transferred target graphs.

4.2 Few-shot learning on relation prediction
Here we evaluate Egi in the more generalized and practical setting of few-shot learning on the less structure-relevant task of relation prediction, with task-specific node features and fine-tuning. The source graph contains a cleaned full dump of 579K entities from YAGO [49], and we investigate 20-shot relation prediction on a target graph with 24 relation types, which is a sub-graph of 115K entities sampled from the same dump. In post-fine-tuning, the models are pre-trained with an unsupervised loss on the source graph and fine-tuned with the task-specific loss on the target graph. In joint-fine-tuning, the same pre-trained models are jointly optimized w.r.t. the unsupervised pre-training loss and the task-specific fine-tuning loss on the target graph (a schematic sketch of the two regimes is given after Table 3). In Table 3, we observe that most of the existing models fail to transfer across pre-training and fine-tuning tasks, especially in the joint-fine-tuning setting. In particular, both Mask-GIN and ContextPred-GIN rely heavily on task-specific fine-tuning, while Egi focuses on capturing similar ego-graph structures that are transferable across graphs. The mutual-information-based method GMI also demonstrates considerable transferability, and we believe the ability to capture graph structure is the key to this transferability. As a consequence, Egi significantly outperforms all compared methods in both settings. More detailed statistics and running times are in Appendix §C.3.
Table 3: Few-shot relation prediction results on the YAGO graphs under post-fine-tuning and joint-fine-tuning.

Method | post-fine-tuning AUROC | post-fine-tuning MRR | joint-fine-tuning AUROC | joint-fine-tuning MRR
---|---|---|---|---
No pre-train | 0.687±0.002 | 0.596±0.003 | N.A. | N.A.
GVAE | 0.701±0.003 | 0.601±0.007 | 0.679±0.004 | 0.568±0.008
DGI | 0.689±0.011 | 0.586±0.025 | 0.688±0.012 | 0.537±0.023
MaskGNN | 0.713±0.009 | 0.631±0.015 | 0.712±0.005 | 0.560±0.010
ContextPredGNN | 0.692±0.030 | 0.662±0.030 | 0.705±0.011 | 0.575±0.021
GMI | 0.728±0.005 | 0.625±0.009 | 0.721±0.007 | 0.643±0.011
Structural Pre-train | OOM | OOM | OOM | OOM
MVC | OOM | OOM | OOM | OOM
Egi | 0.739±0.009∗∗ | 0.670±0.014 | 0.787±0.011∗∗ | 0.729±0.016∗∗
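For clarity, the sketch below illustrates the difference between the two regimes as a single training step; the weighting factor alpha and the loss callables are illustrative assumptions, not the exact recipe used in our experiments.

```python
# A hedged sketch of the two fine-tuning regimes compared in Table 3. In post-fine-tuning,
# only the task loss is optimized on the target graph; in joint-fine-tuning, the task loss
# and the unsupervised pre-training loss are optimized together.
import torch


def fine_tune_step(model, optimizer, task_loss_fn, unsup_loss_fn, batch,
                   joint: bool = True, alpha: float = 1.0) -> float:
    optimizer.zero_grad()
    loss = task_loss_fn(model, batch)                       # e.g., 20-shot relation prediction loss
    if joint:
        loss = loss + alpha * unsup_loss_fn(model, batch)   # keep optimizing the Egi-style objective
    loss.backward()
    optimizer.step()
    return float(loss.item())
```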
5 Conclusion
To the best of our knowledge, this is the first research effort towards establishing a theoretically grounded framework to analyze GNN transferability, which we also demonstrate to be practically useful for guiding the design and conduct of transfer learning with GNNs. For future work, it is intriguing to further strengthen the bound with relaxed assumptions, rigorously extend it to the more complicated and less restricted settings regarding node features and downstream tasks, as well as analyze and improve the proposed framework over more transfer learning scenarios and datasets. It is also important to protect the privacy of pre-training data to avoid potential negative societal impacts.
Acknowledgments and Disclosure of Funding
Research was supported in part by US DARPA KAIROS Program No. FA8750-19-2-1004, SocialSim Program No. W911NF-17-C-0099, and INCAS Program No. HR001121C0165, National Science Foundation IIS-19-56151, IIS-17-41317, and IIS 17-04532, and the Molecule Maker Lab Institute: An AI Research Institutes program supported by NSF under Award No. 2019897. Chao Zhang is supported by NSF IIS-2008334, IIS-2106961, and ONR MURI N00014-17-1-2656. We would like to thank the AWS Machine Learning Research Awards program for providing computational resources for the experiments in this paper. This work is also partially supported by the internal funding and GPU servers provided by the Computer Science Department of Emory University. Any opinions, findings, and conclusions or recommendations expressed herein are those of the authors and do not necessarily represent the views, either expressed or implied, of DARPA or the U.S. Government.
References
- [1] Réka Albert and Albert-László Barabási. Statistical mechanics of complex networks. Reviews of modern physics, 74(1):47, 2002.
- [2] Sanjeev Arora, Elad Hazan, and Satyen Kale. Fast algorithms for approximate semidefinite programming using the multiplicative weights update method. In FOCS, pages 339–348, 2005.
- [3] Jinheon Baek, Dong Bok Lee, and Sung Ju Hwang. Learning to extrapolate knowledge: Transductive few-shot out-of-graph link prediction. Advances in Neural Information Processing Systems, 33, 2020.
- [4] Lu Bai and Edwin R Hancock. Fast depth-based subgraph kernels for unattributed graphs. Pattern Recognition, 50:233–245, 2016.
- [5] Albert-László Barabási and Réka Albert. Emergence of scaling in random networks. science, 286(5439):509–512, 1999.
- [6] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In NIPS, pages 585–591, 2002.
- [7] Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. In NIPS, pages 137–144, 2007.
- [8] Karsten Borgwardt, Elisabetta Ghisu, Felipe Llinares-López, Leslie O’Bray, and Bastian Rieck. Graph kernels: State-of-the-art and future challenges. arXiv preprint arXiv:2011.03854, 2020.
- [9] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. In ICLR, 2014.
- [10] Jie Chen, Tengfei Ma, and Cao Xiao. Fastgcn: fast learning with graph convolutional networks via importance sampling. In ICLR, 2018.
- [11] Fan RK Chung and Fan Chung Graham. Spectral graph theory. Number 92. American Mathematical Soc., 1997.
- [12] Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In NIPS, pages 3844–3852, 2016.
- [13] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In ACL, pages 4171–4186, 2019.
- [14] Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In KDD, pages 855–864, 2016.
- [15] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In NIPS, pages 1024–1034, 2017.
- [16] David K Hammond, Pierre Vandergheynst, and Rémi Gribonval. Wavelets on graphs via spectral graph theory. ACHA, 30(2):129–150, 2011.
- [17] Kaveh Hassani and Amir Hosein Khasahmadi. Contrastive multi-view representation learning on graphs. In International Conference on Machine Learning, pages 4116–4126. PMLR, 2020.
- [18] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016.
- [19] Keith Henderson, Brian Gallagher, Tina Eliassi-Rad, Hanghang Tong, Sugato Basu, Leman Akoglu, Danai Koutra, Christos Faloutsos, and Lei Li. Rolx: structural role extraction & mining in large graphs. In KDD, pages 1231–1239, 2012.
- [20] R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. In ICLR, 2019.
- [21] Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, and Jure Leskovec. Strategies for pre-training graph neural networks. In ICLR, 2019.
- [22] Ziniu Hu, Yuxiao Dong, Kuansan Wang, Kai-Wei Chang, and Yizhou Sun. Gpt-gnn: Generative pre-training of graph neural networks. In KDD, pages 1857–1867, 2020.
- [23] Ziniu Hu, Changjun Fan, Ting Chen, Kai-Wei Chang, and Yizhou Sun. Pre-training graph neural networks for generic structural feature extraction. arXiv preprint arXiv:1905.13728, 2019.
- [24] Suk-Geun Hwang. Cauchy’s interlace theorem for eigenvalues of hermitian matrices. The American Mathematical Monthly, 111(2):157–159, 2004.
- [25] Xuan Kan, Hejie Cui, and Carl Yang. Zero-shot scene graph relation prediction through commonsense knowledge integration. In ECML-PKDD, 2021.
- [26] Nicolas Keriven and Gabriel Peyré. Universal invariant and equivariant graph neural networks. In NIPS, pages 7090–7099, 2019.
- [27] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
- [28] Thomas N Kipf and Max Welling. Variational graph auto-encoders. arXiv preprint arXiv:1611.07308, 2016.
- [29] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.
- [30] Nils M Kriege, Fredrik D Johansson, and Christopher Morris. A survey on graph kernels. Applied Network Science, 5(1):1–42, 2020.
- [31] Lin Lan, Pinghui Wang, Xuefeng Du, Kaikai Song, Jing Tao, and Xiaohong Guan. Node classification on graphs with few-shot novel labels via meta transformed network embedding. Advances in Neural Information Processing Systems, 33, 2020.
- [32] Jure Leskovec, Jon Kleinberg, and Christos Faloutsos. Graphs over time: densification laws, shrinking diameters and possible explanations. In Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, pages 177–187, 2005.
- [33] Ron Levie, Wei Huang, Lorenzo Bucci, Michael M Bronstein, and Gitta Kutyniok. Transferability of spectral graph convolutional neural networks. arXiv preprint arXiv:1907.12972, 2019.
- [34] Ron Levie, Elvin Isufi, and Gitta Kutyniok. On the transferability of spectral graph filters. In 2019 13th International conference on Sampling Theory and Applications (SampTA), pages 1–5. IEEE, 2019.
- [35] Jenny Liu, Aviral Kumar, Jimmy Ba, Jamie Kiros, and Kevin Swersky. Graph normalizing flows. In Advances in Neural Information Processing Systems, pages 13556–13566, 2019.
- [36] Miller McPherson, Lynn Smith-Lovin, and James M Cook. Birds of a feather: Homophily in social networks. Annual review of sociology, 27(1):415–444, 2001.
- [37] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. arXiv preprint arXiv:1310.4546, 2013.
- [38] Giannis Nikolentzos, Giannis Siglidis, and Michalis Vazirgiannis. Graph kernels: A survey. arXiv preprint arXiv:1904.12218, 2019.
- [39] Kenta Oono and Taiji Suzuki. Graph neural networks exponentially lose expressive power for node classification. In ICLR, 2020.
- [40] Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. The pagerank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab, 1999.
- [41] Zhen Peng, Wenbing Huang, Minnan Luo, Qinghua Zheng, Yu Rong, Tingyang Xu, and Junzhou Huang. Graph representation learning via graphical mutual information maximization. In WWW, pages 259–270, 2020.
- [42] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In KDD, pages 701–710, 2014.
- [43] Jiezhong Qiu, Qibin Chen, Yuxiao Dong, Jing Zhang, Hongxia Yang, Ming Ding, Kuansan Wang, and Jie Tang. Gcc: Graph contrastive coding for graph neural network pre-training. In KDD, pages 1150–1160, 2020.
- [44] Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In ICLR, 2017.
- [45] Leonardo FR Ribeiro, Pedro HP Saverese, and Daniel R Figueiredo. struc2vec: Learning node representations from structural identity. In KDD, pages 385–394, 2017.
- [46] Sam T Roweis and Lawrence K Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.
- [47] Luana Ruiz, Luiz Chamon, and Alejandro Ribeiro. Graphon neural networks and the transferability of graph neural networks. Advances in Neural Information Processing Systems, 33, 2020.
- [48] Yu Shi, Qi Zhu, Fang Guo, Chao Zhang, and Jiawei Han. Easing embedding learning by comprehensive transcription of heterogeneous information networks. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2190–2199, 2018.
- [49] Fabian M Suchanek, Gjergji Kasneci, and Gerhard Weikum. Yago: a core of semantic knowledge. In WWW, pages 697–706, 2007.
- [50] Fan-Yun Sun, Jordan Hoffman, Vikas Verma, and Jian Tang. Infograph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization. In ICLR, 2019.
- [51] Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. Line: Large-scale information network embedding. In WWW, pages 1067–1077, 2015.
- [52] Joshua B Tenenbaum, Vin De Silva, and John C Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
- [53] Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. In ICLR, 2018.
- [54] Petar Velickovic, William Fedus, William L Hamilton, Pietro Lio, Yoshua Bengio, and R Devon Hjelm. Deep graph infomax. In ICLR, 2019.
- [55] Saurabh Verma and Zhi-Li Zhang. Stability and generalization of graph convolutional neural networks. In KDD, 2019.
- [56] Oriol Vinyals, Charles Blundell, Tim Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In NIPS, pages 3630–3638, 2016.
- [57] S Vichy N Vishwanathan, Nicol N Schraudolph, Risi Kondor, and Karsten M Borgwardt. Graph kernels. Journal of Machine Learning Research, 11:1201–1242, 2010.
- [58] Boris Weisfeiler and Andrei A Lehman. A reduction of a graph to a canonical form and an algebra arising during this reduction. Nauchno-Technicheskaya Informatsia, 2(9):12–16, 1968.
- [59] Man Wu, Shirui Pan, Chuan Zhou, Xiaojun Chang, and Xingquan Zhu. Unsupervised domain adaptive graph convolutional networks. In WWW, pages 1457–1467, 2020.
- [60] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In ICLR, 2019.
- [61] Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575, 2014.
- [62] Carl Yang, Yichen Feng, Pan Li, Yu Shi, and Jiawei Han. Meta-graph based hin spectral embedding: Methods, analyses, and insights. In ICDM, 2018.
- [63] Carl Yang, Aditya Pal, Andrew Zhai, Nikil Pancha, Jiawei Han, Chuck Rosenberg, and Jure Leskovec. Multisage: Empowering graphsage with contextualized multi-embedding on web-scale multipartite networks. In KDD, 2020.
- [64] Carl Yang, Yuxin Xiao, Yu Zhang, Yizhou Sun, and Jiawei Han. Heterogeneous network representation learning: A unified framework with survey and benchmark. In TKDE, 2020.
- [65] Carl Yang, Chao Zhang, Xuewen Chen, Jieping Ye, and Jiawei Han. Did you enjoy the ride? understanding passenger experience via heterogeneous network embedding. In ICDE, 2018.
- [66] Carl Yang, Jieyu Zhang, and Jiawei Han. Co-embedding network nodes and hierarchical labels with taxonomy based generative adversarial nets. In ICDM, 2020.
- [67] Carl Yang, Jieyu Zhang, Haonan Wang, Sha Li, Myungwan Kim, Matt Walker, Yiou Xiao, and Jiawei Han. Relation learning on social networks with multi-modal graph edge variational autoencoders. In WSDM, 2020.
- [68] Carl Yang, Peiye Zhuang, Wenhan Shi, Alan Luu, and Pan Li. Conditional structure generation through graph variational generative adversarial nets. In NIPS, pages 1338–1349, 2019.
- [69] Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, Will Hamilton, and Jure Leskovec. Hierarchical graph representation learning with differentiable pooling. In NIPS, 2018.
- [70] Jiaxuan You, Rex Ying, Xiang Ren, William Hamilton, and Jure Leskovec. GraphRNN: Generating realistic graphs with deep auto-regressive models. In Proceedings of the 35th International Conference on Machine Learning, pages 5708–5717. PMLR, 2018.
- [71] Qi Zhu, Natalia Ponomareva, Jiawei Han, and Bryan Perozzi. Shift-robust gnns: Overcoming the limitations of localized graph training data. In NeurIPS, 2021.
Appendix A Theory Details
From the objective, we have assumed , , and , for . Then for a sample , we have access to the empirical distributions of the three. In the procedure of evaluating the objective, we sample uniformly.
Note that, in Eq. 2 of the main paper, we used a dimensional hidden state to denote an edge’s source node representation and as destination node features from the structure of the ego-graph and the associated source node feature with GNN. In our proof, we denote as the -th node in the -th layer of the ego-graph and let and . For simplicity, in i-th layer, we denote , where is the concatenation operation.
Finally, as we are considering a GNN with k layers, its computation only depends on the k-hop ego-graphs of the nodes, which is an important consideration when unfolding the embedding of the GNN at a centre node with Lemma A.1.
Lemma A.1.
For any , where , and is a submatrix of , where , we have
Proof.
Note that, is a principle matrix of , i.e., is obtained by removing the same set of rows and columns from . Then, by Eigenvalue Interlacing Theorem [24] and the fact that and have the same set of non-zero singular values, the matrix operator norm satisfies . ∎
A.1 Center-node view of GCN
Recall that denotes the set of nodes in the th hop of k-hop ego-graph , and denotes the feature for th node in th hop of , for any . Similarly, denotes the entire set of nodes in . In each ego-graph sample , the layer-wise propagation rules for the center node embedding in encoder and discriminator can be written into the form of GCN as followed
where is adjacency matrix of . adds the self-loop and is the degree matrix.
We focus on the center node’s embedding obtained from a -layer GCN with 1-hop polynomial filter . Inspired by the characterization of GCN from a node-wise view in [55], we similarly denote the embedding of node in the final layer of the GCN as
where the weighted link between node and ; and is the weight for the th layer sharing across nodes. Then . We may denote similarly for , and as the node feature of center node . With the assumption of GCN in the statement, it is clear that only the k-hop ego-graph centered at is needed to compute for any instead of the whole of . Precisely, -hop of subgraph corresponds to the th layer in the model.
With such observation in mind, let us denote the matrix of node embeddings of at the th layer as for ; and let denote the matrix of node features in the -hop ego-graph . In addition, denote as the principle submatrix, which includes embeddings for nodes in the to th hop of , .
We denote as the out-degree normalised graph Laplacian of . Here, the out-degree is defined with respect to the direction from leaves to centre node in . Similarly, denote as the in-degree normalised graph Laplacian of , where the direction is from centre to leaves.
WLOG, we write the th layer embedding in matrix notation of the following form
where the GCN only updates the embedding of nodes in the to th hop. We also implicitly assume the embedding of nodes in to th hop are unchanged through the update, due to the directed nature of . Hence, we obtain from the following
Similarly, we are able to write down the form of discriminator using matrix representation for GCN. The edge information at th time point for nodes in can be described as follows
A.2 Proof for Theorem 4.1
We restate Theorem 4.1 from the main paper as below.
Theorem A.2.
Let $G_a$ and $G_b$ be two graphs, and assume the node features are structure-respecting with some underlying function. Consider GCN with k layers and a 1-hop polynomial filter; the empirical performance difference of Egi then satisfies
(7)
where is dependent on , , node features, and the largest eigenvalue of and . is a constant dependent on the encoder, while is a constant dependent on the decoder. With a slight abuse of notation, we denote . Note that, in the main paper, we have , and .
Proof.
Now,
We make the following assumptions in the proof,
-
1.
Assume the size of the neighborhood for each node is bounded by , then the maximum number of node for -th layer subgraph is bounded by . WLOG, let ;
-
2.
Assume if , i.e. assume non-informative edge information and node features for non-existed nodes in the smaller neighborhood with no links;
From Assumption 2, we add isolated nodes to the smaller neighborhood such that the neighborhood sizes at each hop match. This corresponds to pad_nbhd in our code for computing the Egi gap. For the following proof, we WLOG assume .
First we consider . Recall that, is the set of nodes in layer of ,
where is the sigmoid function, is some -Lipschitz activation function and denotes the concatenation of two vectors. Then we obtain
Since , which is -Lipschitz, it gives
(8)
We provide the derivation for the unfolding of th layer GCN with the centre-node view in Lemma A.3. This will be used in the derivation of and .
Lemma A.3.
For any , we have an upper bound for the hidden representation difference between and ,
(9)
Specifically, for , we obtain the expansion for center node embedding .
Proof.
Then, for , we have
(10)
since is the principle submatrix of . Then we equivalently write the above equation as , which gives
With , we see the following is only dependent on the structure of and ,
∎
Since the graph Laplacians are normalised, we have . In addition, let
Hence, . From Lemma A.3, it is clear that we obtain the following at the final layer
(11)
since is a linear function for . Indeed, this can be generalised to polynomial functions of higher powers.
Now, consider the following term that is related with discriminator ,
Firstly, we denote as the in-degree graph Laplacian derived with the subgraph of centred at . Different from the encoder, we utilize every node’s hidden embedding in the computation. Specifically, is obtained by retrieving links in that connects to the th node in the th layer. This is a principle submatrix of the in-degree graph Laplacian of .
Just as defined in §A.1, we denote as the th layer GCN embedding for nodes in hop 0 to hop of . Note that in this case, , which is one row of , corresponding to the -th node in the neighborhood. So we may write the first term in as
where for short. In this way, we regard each node as the centre node, which allows us to unfold the convolution similarly as expanding the term. Now, for any , i.e. when , we apply Lemma A.3 similarly as for . Then,
where is the principle submatrix of and Lemma A.1 can be applied in the last inequality. In addition, and are taken to be the maximum over any . In general, for , , we have
Take a common upper bound for over , we obtain
In addition, for the other half of , we have
We can write in terms of weights and , which is dependent on the activation function , and . Hence,
Note that the derived for is the same for , since the node features, edge information and embedded features are bounded by separate terms in Eq. 8. The only difference is given by , where a different set of graph Laplacians and node features are used. Therefore,
Hence the result. ∎
Note that our view of structural information is closely related to graph kernels [4] and graph perturbation [55]. Specifically, our definition of the k-hop ego-graph is motivated by the concept of the k-layer expansion sub-graph in [4]. However, [4] used the Jensen-Shannon divergence between pairwise representations of sub-graphs to define a depth-based sub-graph kernel, while we depict a graph as samples of its ego-graphs. In this sense, our view is related to the setup in [55], which derived a uniform algorithmic stability bound of a 1-layer GNN under 1-hop structure perturbation of the graph.
In the setting of domain adaptation, [7] draws a connection between the difference in the distributions of source and target domains and the model transferability, and learns a transferable model by minimizing such distribution differences. This coincides with our approach of connecting the structure difference of two graphs in terms of k-hop subgraph distributions and the transferability of GNNs in the above theory.
Appendix B Model Details
Following the same notations used in the main paper, Egi consists of a GNN encoder and a GNN discriminator . In general, the GNN encoder and discriminator can be any existing GNN models. For each ego-graph and its node features , the GNN encoder returns the node embedding for the center node . As mentioned in Eq. 2 of the main paper, the GNN discriminator makes edge-level predictions as follows,
(12)
where and (simplified as in the main paper, same for ) is the representation of the edge between node in hop and node in hop . The prediction relies on the combination of the center node embedding , the destination node feature and the source node representation . We now describe how we calculate the source node representation in .
To obtain the source node representations , the GNN in the discriminator operates on a reversed ego-graph , while the encoder performs forward propagation on . The discriminator GNN starts from the center node and computes the hidden representations for the nodes at each hop. We denote the source node at hop as . Although is calculated as a node embedding, in the reversed ego-graph , node only has one incoming edge. Thus, we can also interpret as an edge embedding, as it combines the source node's hidden representation and the destination node's features as follows,
(13)
When , every edge originates from the center node and is the center node feature . Note that the elaborated aggregation rule is equivalent to the layer-wise propagation rules (with a different in-degree matrix for each ) of Egi given earlier in §A.1.
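For concreteness, the following is a minimal PyTorch sketch of such an edge-level discriminator in the spirit of Eq. 12 and Eq. 13; the module name, the MLP scorer and the tensor layout are illustrative assumptions rather than the exact architecture in the released code.

```python
# Minimal sketch of an edge-level discriminator in the spirit of Eq. 12-13.
# It scores an edge (p -> q) from the source node's hidden representation
# h_p, the destination node's input feature x_q and the center node
# embedding z_i.  Module/argument names are illustrative.
import torch
import torch.nn as nn


class EdgeDiscriminator(nn.Module):
    def __init__(self, hid_dim: int, feat_dim: int):
        super().__init__()
        # non-linear transform of the concatenated edge representation
        self.edge_mlp = nn.Sequential(
            nn.Linear(hid_dim + feat_dim + hid_dim, hid_dim),
            nn.ReLU(),
        )
        # scalar scoring followed by a sigmoid, as in Eq. 12
        self.score = nn.Linear(hid_dim, 1)

    def forward(self, h_p, x_q, z_i):
        # h_p: [E, hid_dim]   source-node hidden representations (Eq. 13)
        # x_q: [E, feat_dim]  destination-node input features
        # z_i: [E, hid_dim]   center-node embedding broadcast to each edge
        edge_repr = torch.cat([h_p, x_q, z_i], dim=-1)
        return torch.sigmoid(self.score(self.edge_mlp(edge_repr))).squeeze(-1)
```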
In every batch, we sample a set of ego-graphs and their node features . During the forward pass of the encoder , it aggregates from the neighbor nodes to the center node . Then, the discriminator calculates the edge embeddings in Eq. 13 from the center node to its neighbors and makes edge-level predictions (fake or true). Besides the training framework in Figure 2 of the main paper, the Egi algorithm is depicted in Algorithm 1.
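To make the batch-level procedure concrete, below is a schematic sketch of one training step, assuming a DGI-style binary objective where positive scores come from an ego-graph paired with its own center embedding and negative scores from a mismatched (shuffled) center embedding; the function names and negative-sampling scheme are our own simplifications of Algorithm 1.

```python
# Schematic sketch of one Egi training step over a batch of sampled
# ego-graphs.  `encoder(g, x)` returns the center-node embedding z_i;
# `discriminator(g, x, z)` is assumed to run its GNN over the reversed
# ego-graph (Eq. 13) and return per-edge scores (Eq. 12).  Negative pairs
# are formed by shuffling center embeddings, a DGI-style simplification.
import torch
import torch.nn.functional as F


def egi_train_step(encoder, discriminator, ego_graphs, feats, optimizer):
    optimizer.zero_grad()
    # center-node embeddings for all ego-graphs in the batch
    z = torch.stack([encoder(g, x) for g, x in zip(ego_graphs, feats)])
    z_neg = z[torch.randperm(z.size(0))]  # mismatched center embeddings

    loss = 0.0
    for i, (g, x) in enumerate(zip(ego_graphs, feats)):
        pos = discriminator(g, x, z[i])      # edges scored as "true"
        neg = discriminator(g, x, z_neg[i])  # edges scored as "fake"
        loss = loss + F.binary_cross_entropy(pos, torch.ones_like(pos)) \
                    + F.binary_cross_entropy(neg, torch.zeros_like(neg))
    loss = loss / len(ego_graphs)
    loss.backward()
    optimizer.step()
    return loss.item()
```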
We implement our method and all of the baselines using the same encoders : a 2-layer GIN [60] for the synthetic and role identification experiments, and a 2-layer GraphSAGE [15] for the relation prediction experiments. We set the hidden dimension to 32 for the synthetic and role identification experiments, and to 256 for the relation prediction fine-tuning task. We train Egi in a mini-batch fashion, since all the information needed by the encoder and discriminator is within the k-hop ego-graph and its features . Further, we conduct neighborhood sampling with at most 10 neighbors per node to speed up the parallel training. The space and time complexity of Egi is , where is the batch size, is the number of neighbors and k is the number of hops of the ego-graphs. Notice that both the encoder and discriminator propagate messages on the k-hop ego-graphs, so the extra computation cost of compared with a common GNN module is a constant multiplier over the original one. The scalability of Egi on the million-scale YAGO network is reported in Section C.3.
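A minimal sketch of the neighborhood-capped k-hop ego-graph sampling described above is given below, using networkx; the BFS-style expansion and the helper name are illustrative, not the exact sampler in the released code.

```python
# Minimal sketch of k-hop ego-graph sampling with at most `max_neighbors`
# sampled neighbors per node, as used to speed up training.  The BFS-style
# expansion and the helper name are illustrative.
import random
import networkx as nx


def sample_ego_graph(G: nx.Graph, center, k: int = 2, max_neighbors: int = 10):
    nodes = {center}
    frontier = [center]
    for _ in range(k):
        next_frontier = []
        for u in frontier:
            nbrs = list(G.neighbors(u))
            if len(nbrs) > max_neighbors:
                nbrs = random.sample(nbrs, max_neighbors)
            for v in nbrs:
                if v not in nodes:
                    nodes.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    # induced subgraph on the sampled nodes, rooted at `center`
    return G.subgraph(nodes).copy()
```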
B.1 Transfer Learning Settings
The goal of transfer learning is to train a model on one dataset or task and use it on another. In our graph learning setting, we focus on training the model on one graph and using it on another. In particular, we focus our study on the setting of unsupervised-transfering, where the model learned on the source graph is directly applied on the target graph without fine-tuning. We study this setting because it allows us to directly measure the transferability of GNNs, unaffected by the fine-tuning process on the target graph. In other words, the fine-tuning process introduces significant uncertainty into the analysis, because there is no guarantee on how different the fine-tuned GNN is from the pre-trained one. Depending on the specific tasks and label distributions on the two graphs, the fine-tuned GNN might be quite similar to the pre-trained one, or it can be significantly different. It is then very hard to analyze how much the pre-trained GNN itself is able to help. Another reason is efficiency. The fine-tuning of GNNs requires the same environment set-up and computation resources as training GNNs from scratch, although it may take less training time eventually if pre-training is effective. It is thus appealing if this whole process can be eliminated while the performance is guaranteed through unsupervised-transfering.
In our experiments, we also study the setting of transfer learning with fine-tuning, particularly on the real-world large-scale YAGO graphs. Since we aim to study the general transferability of GNNs not bound to specific tasks, we always pre-train GNNs with the unsupervised pre-training objective on the source graphs. Then we enable two types of fine-tuning. The first one is post-fine-tuning (), where the pre-trained GNNs are fine-tuned with the supervised task-specific objective on the target graphs. The second one is joint-fine-tuning (), where pre-training is the same, but fine-tuning is done w.r.t. both the pre-training objective and the task-specific objective on the target graphs in a semi-supervised learning fashion. The unsupervised pre-training objective of Egi is Algorithm 1, while those of the compared algorithms are as defined in their papers. The supervised fine-tuning objective is the same as in the DistMult paper [61] for all algorithms.
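The two fine-tuning modes differ only in the loss that is optimized on the target graph, as the following sketch illustrates; the weighting factor `lam` for joint-fine-tuning is an assumption for illustration and not a value reported in the paper.

```python
# Sketch of the two fine-tuning modes: post-fine-tuning uses only the
# supervised objective, joint-fine-tuning combines it with the unsupervised
# Egi objective.  `lam` is an illustrative weighting factor.
def fine_tune_loss(mode: str, supervised_loss, egi_loss, lam: float = 1.0):
    if mode == "post":
        return supervised_loss
    if mode == "joint":
        return supervised_loss + lam * egi_loss
    raise ValueError(f"unknown fine-tuning mode: {mode}")
```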
Appendix C Additional Experiment Details
C.1 Synthetic Experiments
Data. As mentioned in the main paper, we use two traditional graph generation models for synthetic data generation: (1) the Barabasi-Albert model [5] and (2) the Forest Fire model [32]. We generate 40 graphs of 100 nodes each with each model. We control the parameters of the two models so that they generate graphs with different ego-graph distributions. Specifically, we set the number of attached edges to 2 for the Barabasi-Albert model and set , for the Forest Fire model. In Figures 4(a) and 4(b), we show example graphs from the two families in our datasets. They have the same size but different appearances, which leads to our study on the transferability gap between (F, F) and (B, F) in Table 1 of the main paper. The accuracy of this task is defined as the percentage of nearest neighbors of the target node in the embedding space that are structurally equivalent, i.e., #correct k-nn neighbors / #ground-truth equivalent nodes.
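A sketch of how such synthetic graphs could be generated is shown below, using the Barabasi-Albert generator in networkx (with 2 attached edges, as stated above) and the Forest Fire generator in python-igraph; the Forest Fire burning parameters below are placeholders rather than the exact values used in our experiments.

```python
# Sketch of the synthetic graph generation.  networkx provides the
# Barabasi-Albert generator (2 attached edges, as stated above); the
# Forest Fire generator is available in python-igraph, whose burning
# parameters below are placeholders.
import networkx as nx
import igraph as ig


def make_ba_graph(n: int = 100, m: int = 2, seed: int = 0) -> nx.Graph:
    return nx.barabasi_albert_graph(n, m, seed=seed)


def make_forest_fire_graph(n: int = 100, fw_prob: float = 0.4,
                           bw_factor: float = 0.3) -> nx.Graph:
    g = ig.Graph.Forest_Fire(n, fw_prob=fw_prob, bw_factor=bw_factor)
    nxg = nx.Graph()
    nxg.add_nodes_from(range(n))
    nxg.add_edges_from(g.get_edgelist())  # convert igraph edges to networkx
    return nxg
```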
[Figure 4: (a) an example Barabasi-Albert graph, (b) an example Forest Fire graph, (c) an illustration of the structural-equivalence labels.]
Results. The structural equivalence labels are obtained by a 2-hop WL test [58] on the ego-graphs: if two nodes have the same 2-hop ego-graphs, they are assigned the same label. In the example of Figure 4(c), the nodes labeled with the same number (e.g., 2, 4) have isomorphic 2-hop ego-graphs. Note that this task is exactly solvable when the node features and GNN architectures are powerful enough, such as GIN [60]. In order to show the performance differences among methods, we set the length of the one-hot node degree encoding to 3 (all nodes with degrees higher than 3 have the same encoding). Here, we present the performance comparison with different lengths of degree encoding (d) in Table 4. When the capacity of the initial node features is high (d=10), the transfer learning gap between different methods and different graphs diminishes, because the structural equivalence problem can be exactly solved by neighborhood aggregations. However, when the information in the initial node features is limited, the advantage of Egi in learning and transfering the graph structural information is evident. In Table 5, we also show the performance of the different transferable and non-transferable features discussed after Definition 4.3 in the main paper, i.e., node embeddings [42] and random feature vectors. The observation is similar to Table 1 in the main paper: transferable features reflect the performance gap between similar and dissimilar graphs, while non-transferable features cannot.
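For clarity, the following sketch illustrates one way to derive the 2-hop WL-based structural-equivalence labels and the nearest-neighbor accuracy described above, using the WL graph hash in networkx and a k-NN search in scikit-learn; this is our own reading of the metric, not the exact evaluation script.

```python
# Sketch of the 2-hop WL-based structural-equivalence labels and of the
# nearest-neighbor accuracy (#correct k-nn neighbors / #ground-truth
# equivalent nodes).  `emb` rows must be aligned with `nodes`.
import networkx as nx
import numpy as np
from networkx.algorithms.graph_hashing import weisfeiler_lehman_graph_hash
from sklearn.neighbors import NearestNeighbors


def wl_labels(G: nx.Graph, k: int = 2) -> dict:
    # nodes whose k-hop ego-graphs hash identically get the same label
    hashes = {v: weisfeiler_lehman_graph_hash(nx.ego_graph(G, v, radius=k),
                                              iterations=k)
              for v in G.nodes()}
    ids = {h: i for i, h in enumerate(sorted(set(hashes.values())))}
    return {v: ids[h] for v, h in hashes.items()}


def knn_equivalence_accuracy(emb: np.ndarray, labels: dict, nodes: list) -> float:
    y = np.array([labels[v] for v in nodes])
    accs = []
    for i in range(len(nodes)):
        n_equiv = int((y == y[i]).sum()) - 1  # ground-truth equivalent nodes
        if n_equiv == 0:
            continue
        nn = NearestNeighbors(n_neighbors=n_equiv + 1).fit(emb)
        idx = nn.kneighbors(emb[i:i + 1], return_distance=False)[0][1:]
        accs.append(float((y[idx] == y[i]).mean()))
    return float(np.mean(accs))
```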
In both Tables 4 and 5 here, as well as Table 1 in the main paper, we report the structural differences among the graphs in the two sets (), calculated w.r.t. the term on the RHS of Theorem 4.1 in the main paper. This indicates that the Forest Fire graphs are structurally similar to the other Forest Fire graphs, while less similar to the Barabasi graphs, as can be verified from Figures 4(a) and 4(b). Our bound in Theorem 4.1 then tells us that GNNs (in particular, Egi) should be more transferable in the F-F case than in the B-F case. This is verified in Tables 4 and 5 when using the transferable node features of degree encoding with limited dimension (d=3) as well as DeepWalk embeddings, as Egi pre-trained on Forest Fire graphs performs significantly better on Forest Fire graphs than on Barabasi graphs (with +0.094 and +0.057 differences, respectively).
Table 4: Results on the synthetic graphs with different lengths of one-hot degree encoding (d).

Method | F-F (d=3) | B-F (d=3) | diff. (acc., d=3) | F-F (d=10) | B-F (d=10) | diff. (acc., d=10) | structural difference (F,F) | structural difference (B,F)
---|---|---|---|---|---|---|---|---
GCN (untrained) | 0.478 | 0.478 | / | 0.940 | 0.940 | / | 0.752 | 0.883
GIN (untrained) | 0.572 | 0.572 | / | 0.940 | 0.940 | / | |
VGAE (GIN) | 0.498 | 0.432 | +0.066 | 0.939 | 0.937 | 0.002 | |
DGI (GIN) | 0.578 | 0.591 | -0.013 | 0.939 | 0.941 | -0.002 | |
Egi (GIN) | 0.710 | 0.616 | +0.094 | 0.942 | 0.942 | 0 | |
Table 5: Results on the synthetic graphs with DeepWalk embeddings and random vectors as node features.

Method | F-F (DeepWalk) | B-F (DeepWalk) | diff. (acc., DeepWalk) | F-F (random) | B-F (random) | diff. (acc., random) | structural difference (F,F) | structural difference (B,F)
---|---|---|---|---|---|---|---|---
GCN (untrained) | 0.658 | 0.658 | / | 0.246 | 0.246 | / | 0.752 | 0.883
GIN (untrained) | 0.663 | 0.663 | / | 0.520 | 0.520 | / | |
GVAE (GIN) | 0.713 | 0.659 | +0.054 | 0.266 | 0.264 | 0.002 | |
DGI (GIN) | 0.640 | 0.613 | +0.027 | 0.512 | 0.576 | -0.064 | |
Egi (GIN) | 0.772 | 0.715 | +0.057 | 0.507 | 0.485 | +0.022 | |
C.2 Real-world Role Identification Experiments
Data.
Dataset | # Nodes | # Edges | # Classes |
---|---|---|---|
Europe | 399 | 5,995 | 4 |
USA | 1,190 | 13,599 | 4 |
Brazil | 131 | 1,074 | 4 |
Gene | 9,228 | 57,029 | 2 |
We report the numbers of nodes, edges and classes for both the airport and gene datasets. The numbers for the Gene dataset are aggregated over the total of 52 gene networks in the dataset. For the three airport networks, Figure 5 shows the power-law degree distributions on a log-log scale. The class labels range from 0 to 3, reflecting the level of airport activity [45]. For the Gene dataset, we matched the gene names in the TCGA dataset [68] to the list of transcription factors on Wikipedia (https://en.wikipedia.org/wiki/Transcription_factor). 75% of the genes are marked as 1 (transcription factors), and some gene graphs have extremely imbalanced class distributions, so we conduct experiments on the relatively balanced gene graphs of brain cancers (Figure 2 in the main paper). Neither dataset has organic node attributes. The role-based node labels are highly relevant to the local graph structures, but are not trivially computable, e.g., from node degrees.
[Figure 5: degree distributions of the Europe, USA and Brazil airport networks on a log-log scale.]
Results. As we can observe from Figure 5, the three airport graphs have quite different sizes and structures (e.g., regarding edge density and connectivity patterns). Thus, the absolute classification accuracy varies across graphs in both Table 2 of the main paper and Table 7 here. However, as we mention in the main paper, the structural differences we compute based on Eq. 5 in Theorem 3.1 are close for the Europe-USA and Europe-Brazil graph pairs (0.869 and 0.851), which leads to close transferability of Egi from Europe to USA and Brazil. This indicates the effectiveness of our view of the essential structural information. In Table 7, we also provide the comparison between transferable and non-transferable features on the airport dataset. As expected, Egi only yields good transferability with transferable features.
Table 7: Role identification accuracy (%) on the airport networks with transferable (node degree) and non-transferable (same feat.) node features.

Method | Europe (source): node degree | Europe (source): same feat. | USA (target): node degree | USA (target): same feat. | Brazil (target): node degree | Brazil (target): same feat.
---|---|---|---|---|---|---
features | 52.81 | 20.59 | 55.67 | 20.22 | 67.11 | 19.63
GIN (untrained) | 55.75 | 53.88 | 61.56 | 58.32 | 70.04 | 70.37
GVAE | 53.90 | 21.12 | 55.51 | 22.39 | 66.33 | 17.70
DGI | 57.75 | 22.13 | 54.90 | 21.76 | 67.93 | 18.78
MaskGNN | 56.37 | 55.53 | 60.82 | 54.64 | 66.71 | 74.54
ContextPredGNN | 52.69 | 49.95 | 50.38 | 54.75 | 62.11 | 70.66
Structural Pre-train | 56.00 | 53.83 | 62.17 | 57.49 | 68.78 | 72.41
MVC | 53.16 | 51.69 | 59.66 | 50.42 | 66.07 | 61.55
GMI | 58.12 | 46.25 | 59.28 | 47.64 | 73.07 | 62.96
Egi (GIN) | 59.15∗∗ | 54.98 | 64.55∗∗ | 57.40 | 73.15 | 70.00
Besides that, the results presented in Table 8 are the accuracy of GNNs directly trained and evaluated on each network without transfering. Therefore, only the Europe column has the same results as in Table 2 of the main paper, while the USA and Brazil columns can be regarded as an upper bound on the performance of GNNs transfered from other graphs. As we can see, Egi gives the closest results between Table 2 (main paper) and Table 8 here, demonstrating its plausible transferability. The scores are close enough to suggest the possibility of skipping fine-tuning when the source and target graphs are sufficiently similar. Also note that, although the variances are fairly large (which is also observed in other works such as [45], since the networks are small), our t-tests have shown the improvements of Egi to be significant.
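As a reference, the significance test mentioned above can be computed as in the following sketch, assuming an unpaired two-sample t-test over per-run accuracies (the exact test variant is not specified here).

```python
# Sketch of the significance test: an unpaired two-sample t-test over
# per-run accuracies of Egi and a baseline (the exact variant used in the
# paper is not specified here).
from scipy import stats


def is_significant(egi_accs, baseline_accs, alpha: float = 0.05) -> bool:
    t, p = stats.ttest_ind(egi_accs, baseline_accs, equal_var=False)
    return bool(p < alpha and t > 0)
```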
Table 8: Role identification accuracy (mean ± std) of GNNs directly trained and evaluated on each airport network [45] without transfering.

Method | Europe | USA | Brazil
---|---|---|---
node degree | 52.81% ± 5.81% | 55.67% ± 3.63% | 67.11% ± 7.58%
GCN (random-init) | 52.96% ± 4.51% | 56.18% ± 3.82% | 55.93% ± 1.38%
GIN (random-init) | 55.75% ± 5.84% | 62.77% ± 2.35% | 69.26% ± 9.08%
GVAE (GIN) | 53.90% ± 4.65% | 58.99% ± 2.44% | 55.56% ± 6.83%
DGI (GIN) | 57.75% ± 4.47% | 62.44% ± 4.46% | 68.15% ± 6.24%
Mask-GIN | 56.37% ± 5.07% | 63.78% ± 2.79% | 61.85% ± 10.74%
ContextPred-GIN | 52.69% ± 6.12% | 56.22% ± 4.05% | 58.52% ± 10.18%
Structural Pre-train | 56.00% ± 4.58% | 62.29% ± 3.51% | 71.48% ± 9.38%
MVC | 53.16% ± 4.07% | 62.81% ± 3.12% | 67.78% ± 4.79%
GMI | 58.12% ± 5.28% | 63.36% ± 2.92% | 73.70% ± 4.21%
Egi (GIN) | 59.15% ± 4.44% | 65.88% ± 3.65% | 74.07% ± 5.49%
Table 9: Statistics of the YAGO graphs and per-epoch training time of Egi.

Dataset | # Nodes | # Edges | # Relations | # Train/Test | Training time per epoch
---|---|---|---|---|---
YAGO-Source | 579,721 | 2,191,464 | / | / | 338 seconds
YAGO-Target | 115,186 | 409,952 | 24 | 480/409,472 | 134 seconds
C.3 Real-world large-scale Relation Prediction Experiments
Data. As shown in Table 9, the source graph we use to pre-train GNNs is the full graph cleaned from the YAGO dump [49], where we assume the relations among entities are unknown. The target graph we use is a subgraph uniformly sampled from the same YAGO dump (we sample the nodes and then include all edges among the sampled nodes). A similar ratio between the numbers of nodes and edges can be observed in Table 9. On the target graph, we also have access to 24 different relations [48], such as isAdvisedBy and isMarriedTo. Such relation labels are still relevant to the graph structures, but the relevance is lower compared with the structural role labels. We use the 256-dim degree encoding as node features for pre-training on the source graph, and then use the 128-dim positional embeddings generated by LINE [51] for fine-tuning on the target graph, to explicitly make the features differ across the source and target graphs.
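As an illustration, the 256-dim degree encoding used for pre-training can be constructed as in the sketch below; the capping of high degrees at the last dimension is an assumption analogous to the capped degree encoding in the synthetic setting of §C.1.

```python
# Sketch of the 256-dim one-hot degree encoding used as pre-training node
# features; degrees above the last dimension are capped, analogous to the
# capped degree encoding in the synthetic experiments.
import numpy as np
import networkx as nx


def degree_one_hot(G: nx.Graph, dim: int = 256) -> np.ndarray:
    feats = np.zeros((G.number_of_nodes(), dim), dtype=np.float32)
    for i, v in enumerate(G.nodes()):
        feats[i, min(G.degree(v), dim - 1)] = 1.0
    return feats
```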
Results. In Section B.1, we introduced two different types of fine-tuning, i.e., post-fine-tuning and joint-fine-tuning. For both types of fine-tuning, we add a feature encoder before feeding the node features into the GNNs, for two purposes. First, the fine-tuning features on the target graph usually have a different dimension from the pre-training features, such as the node degree encoding we use. Second, the semantics and distributions of the fine-tuning features can be different from those of the pre-training features. The feature encoder aims to bridge such feature differences in practice. The supervised loss used in this experiment is the same as in DistMult [61]. In particular, the bilinear score function is calculated as f(h, r, t) = e_h^T M_r e_t, where M_r is a diagonal matrix for each relation r, and e_h, e_t are the embeddings of the GNN encoder for the head and tail entities. The experiments were run on a GTX1080 GPU with 12GB memory. We report the average training time per epoch of our algorithm in the pre-training and fine-tuning stages in Table 9 as well: Egi takes about 338 seconds per epoch to optimize the ego-graph information maximization objective on YAGO-Source. The pre-training and fine-tuning take about 40 and 10 epochs to converge, respectively. As we can see, fine-tuning also takes significant time compared with pre-training, which strengthens our argument for avoiding or reducing fine-tuning through structural analysis. We implement all baselines within the same pipeline, and their running times are all on the same scale.
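The bilinear score above reduces to a three-way element-wise product when the relation matrix is diagonal, as in the following sketch; the batched tensor layout is an illustrative assumption.

```python
# Sketch of the DistMult bilinear score with a diagonal relation matrix:
# f(h, r, t) = e_h^T diag(r) e_t reduces to a three-way element-wise product.
import torch


def distmult_score(e_h: torch.Tensor, r_diag: torch.Tensor,
                   e_t: torch.Tensor) -> torch.Tensor:
    # e_h, e_t: [B, d] head/tail embeddings from the GNN encoder
    # r_diag:   [B, d] diagonal entries of the relation matrix M_r
    return (e_h * r_diag * e_t).sum(dim=-1)
```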
C.4 Parameter study
In this section, we provide additional parameter analysis of the proposed Egi model (the choice of the number of hops k) and an efficiency study of the Egi gap estimation (sampling frequencies).
Performance with different sizes of ego-graphs. In our Theorem 3.1 and the Egi algorithm (Eq. 1), the number of hops k determines the size of the ego-graphs. In principle, k may affect the transferability of Egi in two ways: (1) a larger k may make the Egi model (both the center node encoder and the neighbor node encoder ) more expressive (better precision) and the Egi gap more accurate (better predictiveness); (2) however, the GNN encoders may suffer from the over-smoothing problem and the computation may suffer from more noise. Therefore, it is hard to determine the influence of k without empirical analysis. As we can observe in the table below, when k=1 or k=3, the classification accuracy on the source graph is worse than with k=2, likely because the GNN encoder is either less powerful or over-smoothed. As a result, k=2 obtains the best transferability to both the USA and Brazil networks. When k=3, the Egi gap likely accounts for too subtle/noisy ego-graph differences and may become less effective in predicting the transferability. Therefore, we choose k=2 for the experiments in the main paper.
Method | Europe (source) acc. | USA (target) acc. | structural difference (Europe, USA) | Brazil (target) acc. | structural difference (Europe, Brazil)
---|---|---|---|---|---
Egi (k=1) | 58.25 | 60.08 | 0.385 | 60.74 | 0.335
Egi (k=2) | 59.15 | 64.55 | 0.869 | 73.15 | 0.851
Egi (k=3) | 57.63 | 64.12 | 0.912 | 72.22 | 0.909
Precision of the estimated structural difference under different sampling frequencies. In Table 11, we present the estimated structural difference versus the sampling frequency over 10 runs on the airport dataset. A theoretical study of its convergence could be an interesting future direction. As we can observe, a larger sampling frequency leads to a more accurate and robust estimation. Between Europe and USA, although 100 pairs of ego-graphs amount to only 2.1% of the total pairwise enumerations, the estimated value is already quite close.
Table 11: Estimated structural difference (mean ± std over 10 runs) under different sampling frequencies.

Sampling frequency | (Europe, USA) | (Europe, Brazil)
---|---|---
100 pairs | 0.872 ± 0.039 | 0.854 ± 0.042
1000 pairs | 0.859 ± 0.012 | 0.848 ± 0.007
All pairs | 0.869 ± 0.000 | 0.851 ± 0.000
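A schematic sketch of the sampling-based estimation studied in this subsection is given below; `ego_graph_distance` is a placeholder for the Laplacian-based ego-graph distance defined in the main paper, and the uniform node sampling is an assumption for illustration.

```python
# Schematic sketch of the sampling-based estimation of the structural
# difference: sample pairs of k-hop ego-graphs from the two graphs and
# average a pairwise distance.  `ego_graph_distance` is a placeholder for
# the Laplacian-based term defined in the main paper.
import random
import numpy as np
import networkx as nx


def estimate_structural_difference(G1: nx.Graph, G2: nx.Graph,
                                   ego_graph_distance, n_pairs: int = 1000,
                                   k: int = 2, seed: int = 0):
    rng = random.Random(seed)
    nodes1, nodes2 = list(G1.nodes()), list(G2.nodes())
    dists = []
    for _ in range(n_pairs):
        g_u = nx.ego_graph(G1, rng.choice(nodes1), radius=k)
        g_v = nx.ego_graph(G2, rng.choice(nodes2), radius=k)
        dists.append(ego_graph_distance(g_u, g_v))
    return float(np.mean(dists)), float(np.std(dists))
```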