
Cross-Network Learning with Partially Aligned Graph Convolutional Networks

Meng Jiang, Department of Computer Science and Engineering, University of Notre Dame, 384 Fitzpatrick Hall, Notre Dame, Indiana, USA, [email protected]
Abstract.

Graph neural networks have been widely used to learn representations of nodes for many downstream tasks on graph data. Existing models were designed for the nodes of a single graph and thus cannot utilize information across multiple graphs. The real world, however, does have multiple graphs whose nodes are often partially aligned. For example, knowledge graphs share a number of named entities though they may have different relation schemas; collaboration networks on publications and awarded projects share researcher nodes who serve as authors and investigators, respectively; people use multiple web services for shopping, tweeting, and rating movies, and some register the same email account across the platforms. In this paper, I propose partially aligned graph convolutional networks to learn node representations across the models. I investigate multiple methods (including model sharing, regularization, and alignment reconstruction) as well as theoretical analysis to positively transfer knowledge across the (small) set of partially aligned nodes. Extensive experiments on real-world knowledge graphs and collaboration networks show the superior performance of the proposed methods on relation classification and link prediction.

Transfer Learning, Graph Neural Network, Representation Learning

1. Introduction

Figure 1. Cross-GCN learning uses network alignment information to bridge the separated model parameters and representation spaces and transfer knowledge across the models.

Graph learning represents or encodes graph structure to perform tasks such as classification, clustering, and regression on graphs. It is important and ubiquitous, with applications ranging from knowledge graph completion to movie recommendation. It was originally studied as dimensionality reduction in adjacency matrix factorization methods (Cai et al., 2010). Network embedding methods then aimed at preserving particular structure, properties, or side information in graph or networked data (Cui et al., 2018; Dong et al., 2017). Recently, graph convolutional networks (GCNs) have been introduced to specify model parameters (e.g., a set of weight matrices) on how to aggregate information from a node’s local neighborhood (Kipf and Welling, 2017; Hamilton et al., 2017b). Unlike the network embedding methods, these parameters are shared across nodes and allow the approaches to generate node representations that do not necessarily rely on the entire graph (Veličković et al., 2017; Hamilton et al., 2017a; Wang et al., 2019c).

The sparsity problem in graph data is a major bottleneck for most graph learning methods. For example, knowledge graphs are incomplete, causing inaccuracy in downstream tasks such as fact retrieval and question answering, especially when knowledge is represented under the Open World Assumption (Schlichtkrull et al., 2018). In real-world recommender systems, users can rate only a very limited number of items, so the user-item bipartite graph is always extremely sparse (Ying et al., 2018; Fan et al., 2019). The observed data that can be used for adjacency matrix factorization or neural graph learning are radically insufficient.

Figure 2. Examples of partially aligned networks in real world for two different tasks.

Although we cannot fabricate more observations, we may borrow useful knowledge from graph data on different sources, domains, or platforms. Consider the following case: A new UberPool-like carpooling App has launched. Due to lack of usage at the beginning, machine learning on the very sparse user-user ride-sharing graph cannot be effective for ride recommendation. Now suppose that a small number of the users register the App with their social media accounts, and we have social graph data available from the social media platform. Then we ask, though the percentage of the “aligned” users is very small (10%, 5%, or even less than 1%), can we use the alignment as a bridge and transfer knowledge from the social graph to the ride-sharing graph? Since ride sharing and social relationship are somewhat related, the partially aligned users on different graphs (platforms) can share similar behavioral patterns. Thus, learning across the graphs can be beneficial. When the graph learning models are state-of-the-art, learning across graph convolutional networks can transfer knowledge via the partially aligned nodes.

Figure 1 illustrates the idea of cross-GCN learning. Originally, the three GCN models are trained separately to generate low-dimensional node representations U^{[\cdot]} with their own sets of model parameters \{W^{[\cdot]}_{(l)}\}. We seek effective methods that transfer knowledge across the GCN models, through the partial alignment information between the graphs \mathcal{G}^{[\cdot]}, into their node representations.

In this paper, I study how to alleviate the sparsity problem in graph learning by transferring knowledge across multiple GCN models, when the graphs are partially aligned.

The contributions of this work can be described in three aspects:

  1. (1)

    Cross-GCN learning methods: I design three different types of methods: (a) sharing model parameters, (b) aligning representation spaces with regularizations, and (c) simultaneously reconstructing the alignment and learning representations. I present how to generalize them for more than two GCNs and for relational graphs.

  2. (2)

    Theoretical foundations: I present theories on (a) the choice of the number of dimensions of the representation spaces to force knowledge transfer and (b) the conditions of positive transfer across GCN models, regularizations, and alignment reconstruction.

  3. (3)

    Extensive experiments: As shown in Figure 2, I perform experiments on two types of graph datasets and learning tasks: (a) link prediction on weighted graphs (e.g., collaboration social networks) and (b) relation classification on knowledge graphs. In both cases, the graphs are partially aligned (see the number of aligned nodes in red). Cross-GCN learning performs consistently better than individual GCNs on all the datasets and tasks.

The rest of this paper is organized as follows. Section 2 gives definitions of basic concepts (e.g., graph convolutional network) and the proposed goal. Section 3 presents multiple kinds of methods to achieve the goal of cross-network learning. Section 4 provides theoretical analysis on the choice of the number of dimensions and on positive transfer. Section 5 presents experimental results on knowledge graphs and collaboration networks for relation classification and link prediction, respectively. Section 6 reviews the related work, and Section 7 concludes the paper.

2. Preliminaries

2.1. Graphs and Learning Tasks

We study two types of graphs and the related tasks of learning, classifying, or predicting links between nodes.

Definition 2.1 (Weighted graph).

A weighted graph, denoted as \mathcal{G}=(\mathcal{U},\mathcal{E}\subset\mathcal{U}\times\mathcal{U}), contains a set of nodes \mathcal{U} and a set of links \mathcal{E}, with a weight mapping function w:\mathcal{E}\rightarrow\mathbb{R}.

For example, collaboration networks are weighted graphs. The weight is the frequency of collaborations between two person nodes, so it is a positive integer when the link exists. The task of link prediction is to predict whether the value of w(u_{i},u_{j}) is positive, or what the value is, given a pair of nodes u_{i},u_{j}\in\mathcal{U}.

Definition 2.2 (Relational graph).

A relational graph, denoted as \mathcal{G}_{rel}=(\mathcal{U},\mathcal{E}\subset\mathcal{U}\times\mathcal{U}\times\mathcal{R},\mathcal{R}), contains a set of nodes \mathcal{U}, a set of relational links \mathcal{E}, and a relation schema \mathcal{R}. A relational link e=(u_{i},u_{j},r)\in\mathcal{E} indicates that u_{i} and u_{j} have relation r.

Knowledge graphs are typical relational graphs, where the relational links need to be learned and completed. The task of relation classification is to predict the type of relation between two nodes (i.e., entities) u_{i},u_{j} from the relation schema \mathcal{R} so that a link e=(u_{i},u_{j},r) is likely to exist in the graph.

The fundamental problem behind both tasks is node representation learning, which learns low-dimensional vectors of nodes to preserve the graph’s information. Take link prediction as an example: formally, it is to learn a mapping function \vec{u}=f(u,\mathcal{G},\Theta)\in\mathbb{R}^{d}, where \Theta denotes the representation learning model’s parameters and d is the number of dimensions of node representations. Then the value of w(u_{i},u_{j}) is assigned as \sigma(\vec{u_{i}}^{\top}\vec{u_{j}}), where \sigma is the sigmoid function. Simple predictive functions can be specified for the other tasks.

2.2. Graph Convolutional Network

Graph convolutional network (GCN) is one of the most popular models for node representation learning (Kipf and Welling, 2017). Let us continue using link prediction as the example. We denote the number of nodes in graph \mathcal{G} by n=|\mathcal{U}|. Suppose A\in\mathbb{R}^{n\times n} is the adjacency matrix of \mathcal{G} and X\in\mathbb{R}^{n\times m} holds the nodes’ raw attribute values, where m is the number of raw attributes. A two-layer GCN generates the matrix of node representations as follows:

(1) U=\mathrm{softmax}\left(\hat{A}~\mathrm{ReLU}(\hat{A}~X~W_{(1)})~W_{(2)}\right)\in\mathbb{R}^{n\times d},

where \Theta=\{W_{(1)}\in\mathbb{R}^{m\times d},W_{(2)}\in\mathbb{R}^{d\times d}\} contains the layer-specific weight matrices. We choose two layers as most existing studies suggest. The pre-processing step includes (1) \tilde{A}=A+I_{n}, (2) \tilde{D}_{ii}=\sum_{j}\tilde{A}_{ij}, and (3) \hat{A}=\tilde{D}^{-\frac{1}{2}}~\tilde{A}~\tilde{D}^{-\frac{1}{2}}. Graph reconstruction is used to inform the GCN to learn \Theta so as to generate an effective U:

(2) g(X,A,W_{(1)},W_{(2)})=\sigma(UU^{\top})\in{[0,1]}^{n\times n}.

The loss function can be written as follows:

(3) f(W_{(1)},W_{(2)}) = \mathcal{L}\left(g(X,A,W_{(1)},W_{(2)}),~A\right)
= -\sum_{i,j\in\{1\dots n\}}A_{ij}~\mathrm{log}\left(\sigma(\vec{u_{i}}^{\top}\vec{u_{j}})\right)

where \vec{u_{i}} and \vec{u_{j}} are the i-th and j-th rows of matrix U. It is worth noting that one can easily extend our study from a two-layer GCN to an l-layer GCN (l>2).
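To make the construction concrete, the following is a minimal PyTorch sketch of the two-layer GCN in Eq.(1) together with the graph reconstruction loss of Eqs.(2)-(3). It is only an illustration under simplifying assumptions (dense tensors, a loss over all node pairs); the class and function names are mine, not from a released codebase.

```python
import torch
import torch.nn.functional as F

def normalize_adj(A):
    """Pre-processing of Section 2.2: A_hat = D~^{-1/2} (A + I) D~^{-1/2}."""
    A_tilde = A + torch.eye(A.size(0))
    d_inv_sqrt = A_tilde.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * A_tilde * d_inv_sqrt.unsqueeze(0)

class TwoLayerGCN(torch.nn.Module):
    """Eq.(1): U = softmax(A_hat ReLU(A_hat X W1) W2)."""
    def __init__(self, m, d):
        super().__init__()
        self.W1 = torch.nn.Parameter(0.01 * torch.randn(m, d))
        self.W2 = torch.nn.Parameter(0.01 * torch.randn(d, d))

    def forward(self, X, A_hat):
        H = torch.relu(A_hat @ X @ self.W1)
        return torch.softmax(A_hat @ H @ self.W2, dim=1)   # U in R^{n x d}

def reconstruction_loss(U, A):
    """Eqs.(2)-(3): -sum_{ij} A_ij log sigmoid(u_i^T u_j)."""
    logits = U @ U.T            # sigma(U U^T) is the reconstruction g(...)
    return -(A * F.logsigmoid(logits)).sum()
```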

2.3. Cross-Network Learning

One GCN can be built for one graph. What if we have multiple graphs that overlap each other and are used to perform the same type of tasks? The overlapping part, referred to as the network alignment, serves as a bridge for transferring knowledge across multiple GCNs so that they can mutually enhance each other. Without loss of generality, we focus on the case of two graphs \mathcal{G}^{[A]} and \mathcal{G}^{[B]}. We denote by A^{[A,B]}\in{\{0,1\}}^{n_{A}\times n_{B}} the adjacency matrix of the network alignment between the two graphs: A^{[A,B]}_{ij}=1 if node u^{[A]}_{i} and node u^{[B]}_{j} are aligned, where n_{A}=|\mathcal{U}^{[A]}| and n_{B}=|\mathcal{U}^{[B]}|. Usually the graphs are only partially aligned, which leads to missing alignment links and a sparse alignment matrix.
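As a small illustration of the data structure (not part of the paper's formalism), the partial alignment A^{[A,B]} can be stored as a sparse 0/1 matrix built from the observed anchor pairs; the pair list and its format below are hypothetical.

```python
import torch

def build_alignment(pairs, n_A, n_B):
    """A^[A,B] in {0,1}^{n_A x n_B}: entry (i, j) is 1 iff u_i^[A] and u_j^[B]
    are aligned. `pairs` is a hypothetical list of (i, j) anchor index tuples."""
    idx = torch.tensor(pairs, dtype=torch.long).T      # shape (2, #anchors)
    vals = torch.ones(idx.size(1))
    return torch.sparse_coo_tensor(idx, vals, (n_A, n_B))

# e.g., 3 observed anchors between a 1,000-node and a 2,000-node graph
A_AB = build_alignment([(0, 5), (42, 17), (999, 1999)], 1000, 2000)
```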

Definition 2.3 (Separated GCNs).

Traditionally, the network alignment was ignored when building GCNs. So, two separated GCNs would be built for the tasks on graphs \mathcal{G}^{[A]} and \mathcal{G}^{[B]}:

f(W^{[A]}_{(1)},W^{[A]}_{(2)})=\mathcal{L}\left(g(X^{[A]},A^{[A]},W^{[A]}_{(1)},W^{[A]}_{(2)}),~A^{[A]}\right),
(4) f(W^{[B]}_{(1)},W^{[B]}_{(2)})=\mathcal{L}\left(g(X^{[B]},A^{[B]},W^{[B]}_{(1)},W^{[B]}_{(2)}),~A^{[B]}\right),

where no model parameters (between \Theta^{[A]} and \Theta^{[B]}) are shared and no alignment information is used.

Definition 2.4 (Cross-network learning).

Given two graphs \mathcal{G}^{[A]} and \mathcal{G}^{[B]} as well as the network alignment information A^{[A,B]}, build cross-network learning models that have the properties below:

  • generate node representations U^{[A]}\in\mathbb{R}^{n_{A}\times d_{A}} and U^{[B]}\in\mathbb{R}^{n_{B}\times d_{B}} through graph convolutions, where d_{A} and d_{B} are the numbers of dimensions, respectively – d_{A}\ll m_{A}, d_{B}\ll m_{B}, and they do not have to be equal;

  • perform better than separated GCNs by sharing model parameters and/or incorporating the network alignment information.

3. Proposed Methods

In this section, we present multiple types of methods for transferring knowledge across GCNs. The first type trains two GCNs by sharing their model parameters (Sections 3.1 and 3.2). The second uses the network alignment information to regularize the output representation spaces (Section 3.3). The third uses network alignment reconstruction as an extra task for knowledge transfer (Section 3.4). Generalizations to multiple GCNs and to particular types of graphs are discussed in Sections 3.5 and 3.6.

3.1. Sharing Model Parameters across GCNs

Recall that a two-layer GCN on graph \mathcal{G}^{[A]} has parameters \Theta^{[A]}=\{W^{[A]}_{(1)}\in\mathbb{R}^{m_{A}\times d_{A}},W^{[A]}_{(2)}\in\mathbb{R}^{d_{A}\times d_{A}}\}. The same goes for graph \mathcal{G}^{[B]}.

Suppose d_{A}\neq d_{B}. None of the parameters can be directly shared.

Suppose d_{A}=d_{B}. The second-layer weight parameters can be shared: W_{(2)}=W^{[A]}_{(2)}=W^{[B]}_{(2)}. So, we can perform joint GCN training:

(5) f(W^{[A]}_{(1)},W^{[B]}_{(1)};W_{(2)})=\alpha^{[A]}\mathcal{L}\left(g(X^{[A]},A^{[A]},W^{[A]}_{(1)},W_{(2)}),~A^{[A]}\right)+\alpha^{[B]}\mathcal{L}\left(g(X^{[B]},A^{[B]},W^{[B]}_{(1)},W_{(2)}),~A^{[B]}\right),

where \alpha^{[A]} and \alpha^{[B]} are the per-graph weights of the training procedure. When we have two graphs, \alpha^{[A]}+\alpha^{[B]}=1.

Suppose m_{A}=m_{B} and d_{A}=d_{B}. The weight parameters of both layers can be shared across GCNs. We omit the equation for space – it simply replaces W^{[A]}_{(1)} and W^{[B]}_{(1)} in Eq.(5) by W_{(1)}. However, multiple graphs can hardly have the same raw attribute space. So, when d=d_{A}=d_{B} but m_{A}\neq m_{B}, we assume that linear factorized components of W^{[A]}_{(1)} and W^{[B]}_{(1)} can be shared. We denote the shared component by W_{(1)}\in\mathbb{R}^{\hat{m}\times d}, where \hat{m}=\mathrm{min}\{m_{A},m_{B}\} or a smaller value. Then the loss function of joint training is as follows:

(6) f(Q^{[A]},Q^{[B]};W_{(1)},W_{(2)})=\alpha^{[A]}\mathcal{L}\left(g(X^{[A]}Q^{[A]},A^{[A]},W_{(1)},W_{(2)}),~A^{[A]}\right)+\alpha^{[B]}\mathcal{L}\left(g(X^{[B]}Q^{[B]},A^{[B]},W_{(1)},W_{(2)}),~A^{[B]}\right),

where Q^{[A]}\in\mathbb{R}^{m_{A}\times\hat{m}} and Q^{[B]}\in\mathbb{R}^{m_{B}\times\hat{m}} are linear transformation matrices on the raw feature spaces for aligning the first-layer weight parameter matrices.

When the number of graphs is two, we can use alternatives that save one linear transformation matrix. Alternative 1 is to use Q^{[A\rightarrow B]}\in\mathbb{R}^{m_{A}\times m_{B}} to align \mathcal{G}^{[A]}’s raw attribute space to \mathcal{G}^{[B]}’s:

(7) f(Q^{[A\rightarrow B]};W_{(1)},W_{(2)})=\alpha^{[A]}\mathcal{L}\left(g(X^{[A]}Q^{[A\rightarrow B]},A^{[A]},W_{(1)},W_{(2)}),~A^{[A]}\right)+\alpha^{[B]}\mathcal{L}\left(g(X^{[B]},A^{[B]},W_{(1)},W_{(2)}),~A^{[B]}\right).

Alternative 2 is to use Q^{[B\rightarrow A]}\in\mathbb{R}^{m_{B}\times m_{A}} to align \mathcal{G}^{[B]}’s raw attribute space to \mathcal{G}^{[A]}’s.
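The parameter-sharing schemes above can be sketched as follows in PyTorch. This is a hedged illustration that reuses the TwoLayerGCN and reconstruction_loss helpers from the earlier sketch: the W_{(2)} sharing of Eq.(5) is wired up by assigning the same Parameter object to both models, and the Q^{[\cdot]} transformations of Eq.(6) are extra learnable matrices.

```python
import torch

def joint_loss_shared_W2(gcn_A, gcn_B, data_A, data_B, alpha_A=0.5):
    """Eq.(5): joint training with a shared second layer. gcn_B.W2 is assumed
    to be the same Parameter object as gcn_A.W2 (e.g., gcn_B.W2 = gcn_A.W2)."""
    X_A, A_hat_A, A_A = data_A
    X_B, A_hat_B, A_B = data_B
    loss_A = reconstruction_loss(gcn_A(X_A, A_hat_A), A_A)
    loss_B = reconstruction_loss(gcn_B(X_B, A_hat_B), A_B)
    return alpha_A * loss_A + (1.0 - alpha_A) * loss_B

class FullySharedGCNs(torch.nn.Module):
    """Eq.(6): both layers shared; per-graph Q^[i] maps raw attributes m_i -> m_hat."""
    def __init__(self, m_A, m_B, m_hat, d):
        super().__init__()
        self.Q_A = torch.nn.Parameter(0.01 * torch.randn(m_A, m_hat))
        self.Q_B = torch.nn.Parameter(0.01 * torch.randn(m_B, m_hat))
        self.gcn = TwoLayerGCN(m_hat, d)   # W_(1), W_(2) shared by both graphs

    def forward(self, X_A, A_hat_A, X_B, A_hat_B):
        U_A = self.gcn(X_A @ self.Q_A, A_hat_A)
        U_B = self.gcn(X_B @ self.Q_B, A_hat_B)
        return U_A, U_B
```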

3.2. Training Strategies across GCNs

The training strategy presented in the above section is joint training, also referred to as multi-task learning. The weights of the tasks, i.e., \alpha^{[A]} and \alpha^{[B]}, are determined prior to the training procedure. The other popular training strategy is pre-training, which defines one task at a time as the target task and all the other tasks as source tasks. Take the method of sharing W_{(2)} as an example. The loss function of joint training was given in Eq.(5). Now if learning graph \mathcal{G}^{[B]} is the target task, then learning graph \mathcal{G}^{[A]} is the source task. The pre-training procedure has two steps. Their loss functions are:

f_{1}(W^{[A]}_{(1)};W_{(2)})=\mathcal{L}\left(g(X^{[A]},A^{[A]},W^{[A]}_{(1)},W_{(2)}),~A^{[A]}\right),
(8) f_{2}(W^{[B]}_{(1)};W_{(2)})=\mathcal{L}\left(g(X^{[B]},A^{[B]},W^{[B]}_{(1)},W_{(2)}),~A^{[B]}\right).

Each step still minimizes Eq.(5): specifically, the first step uses \alpha^{[A]}=1 and \alpha^{[B]}=0, and the second step uses \alpha^{[A]}=0 and \alpha^{[B]}=1. The first step warms up the training of the shared W_{(2)} on \mathcal{G}^{[A]} so that the “fine-tuning” step on \mathcal{G}^{[B]} can find more effective model parameters than starting from random initialization. When the source and target are swapped, the two steps are swapped.
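A possible pre-training/fine-tuning loop for this two-step strategy, again reusing the earlier sketches and assuming the two models share the W_{(2)} Parameter, might look like the following; the epoch count and learning rate are illustrative, not the paper's settings.

```python
import torch

def pretrain_then_finetune(gcn_A, gcn_B, data_A, data_B, epochs=200, lr=0.01):
    """Two-step strategy of Section 3.2: warm up the shared W_(2) on the source
    graph G^[A] (alpha^[A]=1), then fine-tune on the target graph G^[B]
    (alpha^[B]=1). gcn_B is assumed to share its W2 Parameter with gcn_A."""
    X_A, A_hat_A, A_A = data_A
    X_B, A_hat_B, A_B = data_B
    # Step 1: source task only, Eq.(8) first line
    opt = torch.optim.Adam(gcn_A.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        reconstruction_loss(gcn_A(X_A, A_hat_A), A_A).backward()
        opt.step()
    # Step 2: target task, reusing the warmed-up shared W2, Eq.(8) second line
    opt = torch.optim.Adam(gcn_B.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        reconstruction_loss(gcn_B(X_B, A_hat_B), A_B).backward()
        opt.step()
```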

3.3. Aligning Representation Spaces with Regularizations

Besides sharing model parameters, another idea for bridging the gap between two GCNs is to align their output representation spaces. Given GCNs trained on two graphs \mathcal{G}^{[A]} and \mathcal{G}^{[B]}, if two nodes u^{[A]}_{i} and u^{[B]}_{j} are aligned, we assume that their representations, \vec{u}^{[A]}_{i}\in\mathbb{R}^{d_{A}} and \vec{u}^{[B]}_{j}\in\mathbb{R}^{d_{B}}, are highly correlated. For example, one named entity (person, location, or organization) in two different knowledge graphs should have very similar representations. The same goes for one researcher who appears in two collaboration networks (as an investigator of projects and an author of papers).

We add the network alignment information as two types of regularization terms in the loss function. The first type is hard regularization. It assumes that the aligned nodes have exactly the same representations in the two output spaces. This requires the two spaces to have the same number of dimensions: d_{A}=d_{B}. The term is written as:

(9) h(W^{[A]}_{(1)},W^{[A]}_{(2)},W^{[B]}_{(1)},W^{[B]}_{(2)})={\left\|U^{[A]}-A^{[A,B]}~U^{[B]}\right\|}^{2}_{\mathrm{F}}.

The entire loss function is as below:

(10) f({\Theta}^{[A]},{\Theta}^{[B]})=f(W^{[A]}_{(1)},W^{[A]}_{(2)},W^{[B]}_{(1)},W^{[B]}_{(2)})
=(1-\beta)\cdot\alpha^{[A]}\cdot\mathcal{L}\left(g(X^{[A]},A^{[A]},W^{[A]}_{(1)},W^{[A]}_{(2)}),~A^{[A]}\right)
+~(1-\beta)\cdot\alpha^{[B]}\cdot\mathcal{L}\left(g(X^{[B]},A^{[B]},W^{[B]}_{(1)},W^{[B]}_{(2)}),~A^{[B]}\right)
+~\beta\cdot h(W^{[A]}_{(1)},W^{[A]}_{(2)},W^{[B]}_{(1)},W^{[B]}_{(2)}),

where \beta is the weight of the regularization task.

The second type is soft regularization, which is designed for the more common case that d_{A}\neq d_{B}. It assumes that the aligned nodes’ representations from one GCN can be linearly transformed into those in the other GCN’s output space:

(11) h({\Theta}^{[A]},{\Theta}^{[B]},R^{[A\rightarrow B]})={\left\|U^{[A]}R^{[A\rightarrow B]}-A^{[A,B]}~U^{[B]}\right\|}^{2}_{\mathrm{F}},

where R^{[A\rightarrow B]}\in\mathbb{R}^{d_{A}\times d_{B}} is the transformation matrix. Note that R^{[A\rightarrow B]} is treated as a model parameter in the loss function.
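For illustration, the soft regularization term of Eq.(11) and the combined objective of Eq.(10) could be computed as below; U_A and U_B are the GCN outputs, A_AB is a dense version of the alignment matrix, and R is a learnable d_A x d_B parameter (all names are mine).

```python
import torch

def soft_alignment_regularizer(U_A, U_B, A_AB, R):
    """Eq.(11): || U^[A] R^[A->B] - A^[A,B] U^[B] ||_F^2.
    R is a learnable d_A x d_B torch.nn.Parameter; A_AB is dense here."""
    return ((U_A @ R - A_AB @ U_B) ** 2).sum()

def total_loss(loss_A, loss_B, reg_term, alpha_A=0.5, beta=0.5):
    """Eq.(10): combine the two reconstruction losses and the regularizer."""
    return (1 - beta) * (alpha_A * loss_A + (1 - alpha_A) * loss_B) + beta * reg_term
```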

3.4. Alignment Reconstruction

When the two graphs are partially aligned, the network alignment information is usually sparse and incomplete. To address this issue, we treat the alignment information as a bipartite graph \mathcal{G}^{[A,B]} and learn to reconstruct and complete it. Given the node representations U^{[A]}\in\mathbb{R}^{n_{A}\times d_{A}} and U^{[B]}\in\mathbb{R}^{n_{B}\times d_{B}}, can we reconstruct the adjacency matrix of the observed alignment data \mathcal{G}^{[A,B]}, i.e., I^{[A,B]}\odot A^{[A,B]}\in\mathbb{R}^{n_{A}\times n_{B}}, where I^{[A,B]} is the indicator matrix of the observations? We introduce R^{[A,B]}\in\mathbb{R}^{d_{A}\times d_{B}} and minimize the following optimization term:

(12) h={\left\|\sigma\left(U^{[A]}~R^{[A,B]}~U^{[B]\top}\right)-I^{[A,B]}\odot A^{[A,B]}\right\|}^{2}_{\mathrm{F}}.

The entire loss function optimizes three graph reconstruction tasks (\mathcal{G}^{[A]}, \mathcal{G}^{[B]}, and \mathcal{G}^{[A,B]}), obtained by replacing h in Eq.(10) with the above term. The learning procedure transfers knowledge across the three tasks to find effective representations U^{[A]} and U^{[B]}.
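A corresponding sketch of the alignment reconstruction term in Eq.(12), with I_AB the observation indicator matrix, is given below; as before, the helper name and dense-tensor assumption are mine.

```python
import torch

def alignment_reconstruction_loss(U_A, U_B, A_AB, I_AB, R):
    """Eq.(12): || sigmoid(U^[A] R^[A,B] U^[B]^T) - I^[A,B] * A^[A,B] ||_F^2,
    where I_AB is the 0/1 indicator of observed alignment entries."""
    pred = torch.sigmoid(U_A @ R @ U_B.T)
    return ((pred - I_AB * A_AB) ** 2).sum()
```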

3.5. Beyond Two: Extend to Multiple GCNs

Suppose we have K>2 GCNs. All the above methods (i.e., loss functions) can be easily extended from 2 to K GCNs. For example, the loss of joint multi-GCN training with a shared W_{(2)} can be written as below (extended from Eq.(5)):

(13) f({\{W^{[i]}_{(1)}\}}|^{K}_{i=1};W_{(2)})=\sum^{K}_{i=1}\alpha^{[i]}\mathcal{L}\left(g(X^{[i]},A^{[i]},W^{[i]}_{(1)},W_{(2)}),~A^{[i]}\right).

And the loss of joint multi-GCN training with shared W_{(1)} and W_{(2)} can be written as below (extended from Eq.(6)):

(14) f({\{Q^{[i]}\}}|^{K}_{i=1};W_{(1)},W_{(2)})=\sum^{K}_{i=1}\alpha^{[i]}\mathcal{L}\left(g(X^{[i]}Q^{[i]},A^{[i]},W_{(1)},W_{(2)}),~A^{[i]}\right).

When using the network alignment information, the soft regularization term is written as below (extended from Eq.(11)):

(15) h\left({\{{\Theta}^{[i]}\}}|^{K}_{i=1},~{\{R^{[i\rightarrow j]}\}}|_{i<j;~i,j\in\{1\dots K\}}\right)
=\sum_{i<j;~i,j\in\{1\dots K\}}\gamma^{[i\rightarrow j]}{\left\|U^{[i]}R^{[i\rightarrow j]}-A^{[i,j]}~U^{[j]}\right\|}^{2}_{\mathrm{F}},

where \gamma^{[i\rightarrow j]} is the regularization weight of aligning representation spaces between graphs \mathcal{G}^{[i]} and \mathcal{G}^{[j]}.

The optimization term for alignment reconstruction and completion can be written as below (extended from Eq.(12)):

(16) h=\sum_{i<j}\gamma^{[i\rightarrow j]}{\left\|\sigma\left(U^{[i]}~R^{[i,j]}~U^{[j]\top}\right)-I^{[i,j]}\odot A^{[i,j]}\right\|}^{2}_{\mathrm{F}}.
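For K graphs, the joint objective of Eq.(13) is simply a weighted sum of per-graph reconstruction losses; a short sketch (reusing reconstruction_loss from the earlier illustration, with the parameter sharing assumed to be wired up at model construction time) is:

```python
def multi_gcn_joint_loss(gcns, datasets, alphas):
    """Eq.(13): weighted sum of per-graph reconstruction losses over K GCNs.
    Each dataset is a (X, A_hat, A) triple for one graph."""
    total = 0.0
    for gcn, (X, A_hat, A), alpha in zip(gcns, datasets, alphas):
        total = total + alpha * reconstruction_loss(gcn(X, A_hat), A)
    return total
```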

3.6. Implementation on Relational Graphs: Relation Classification across GCNs

Schlichtkrull et al. proposed neural relational modeling that uses GCNs for relation classification on relational graphs (Schlichtkrull et al., 2018). The output node representations from two-layer GCN are given as below:

(17) U=\mathrm{softmax}\left(\sum_{r\in\mathcal{R}}\frac{1}{c_{r}}A_{:,:,r}~\mathrm{ReLU}\left(\sum_{r\in\mathcal{R}}\frac{1}{c_{r}}A_{:,:,r}~X~W_{r,(1)}\right)~W_{r,(2)}\right).

Clearly, it can be derived from Eq.(1) by making these changes:

  • The relational graph \mathcal{G}_{rel}=\{\mathcal{U},\mathcal{E},\mathcal{R}\} is denoted as a three-way tensor A\in{\{0,1\}}^{n\times n\times n_{R}}, where n=|\mathcal{U}| is the number of entity nodes and n_{R}=|\mathcal{R}| is the number of relation types;

  • The model parameters are two three-way tensors:
    \Theta=\{{W_{r,(1)}}|_{r\in\mathcal{R}}\in\mathbb{R}^{m\times d},{W_{r,(2)}}|_{r\in\mathcal{R}}\in\mathbb{R}^{d\times d}\};

  • c_{r} controls the weight of relation type r.

Relation classification can be considered as relational link prediction if we apply the prediction on all the possible entity-relation triples. The full prediction matrix is:

(18) g(X,A,W_{(1)},W_{(2)},D)=\sigma(U\otimes D\otimes U^{\top})\in{[0,1]}^{n\times n\times n_{R}},

where D\in\mathbb{R}^{d\times d\times n_{R}} is a three-way core tensor. For each relation type r, D_{:,:,r} is a d\times d diagonal matrix. The loss function is below:

(19) f(W_{(1)},W_{(2)},D)=\mathcal{L}\left(g(X,A,W_{(1)},W_{(2)},D),~A\right)
=-\sum_{i,j\in\{1\dots n\};~r\in\mathcal{R}}A_{i,j,r}~\mathrm{log}\left(\sigma(\vec{u_{i}}^{\top}~D_{:,:,r}~\vec{u_{j}})\right)

All the proposed methods for partially aligned GCNs in the previous sections can be applied here. The key differences are (1) a three-way data tensor for reconstruction and (2) the additional parameters D.
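As an illustration of the scoring function in Eqs.(18)-(19), the diagonal core tensor can be stored as an (n_R x d) matrix of per-relation diagonals and contracted with einsum; this is a hedged sketch, not the R-GCN reference implementation, and the helper names are mine.

```python
import torch
import torch.nn.functional as F

def relational_scores(U, D_diag):
    """Eq.(18) with a diagonal core tensor: D_diag has shape (n_R, d), one
    diagonal per relation type; returns scores of shape (n_R, n, n) where
    score[r, i, j] = u_i^T diag(D_diag[r]) u_j."""
    return torch.einsum('id,rd,jd->rij', U, D_diag, U)

def relational_loss(U, D_diag, A_rel):
    """Eq.(19): -sum_{i,j,r} A_{i,j,r} log sigmoid(u_i^T D_{:,:,r} u_j);
    A_rel is the relation tensor arranged with shape (n_R, n, n)."""
    return -(A_rel * F.logsigmoid(relational_scores(U, D_diag))).sum()
```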

4. Theoretical Analysis

In this section, we present theoretical results for partially aligned GCNs. First, we study the choices of the number of dimensions of node representations that may result in no knowledge transfer across GCNs. Then, we study two factors of positive transfer across GCNs. Note that many details are given in the Appendix to save space.

4.1. Choices of the Number of Dimensions dd

The task of graph reconstruction is to use the node representations U\in\mathbb{R}^{n\times d} to recover the adjacency matrix A\in\mathbb{R}^{n\times n}. With singular value decomposition, we have A\simeq\hat{U}~\Sigma~\hat{U}^{\top}, where \hat{U}\in\mathbb{R}^{n\times\mathrm{rank}(A)} and \Sigma is square diagonal of size \mathrm{rank}(A)\times\mathrm{rank}(A). So, the loss function in Eq.(3) can be approximated by the squared loss:

(20) f={\left\|~U~B-\hat{U}~\sqrt{\Sigma}~\right\|}^{2}_{\mathrm{F}},

where B\in\mathbb{R}^{d\times\mathrm{rank}(A)}. It transforms the n\times n classification tasks (for link prediction) into \mathrm{rank}(A) regression tasks.

Proposition 4.1.

Suppose we have K GCNs. In a linear setting for every model in the partially aligned GCNs, the loss function becomes:

(21) f\left({\{\Theta_{i}=W^{[i]}_{(1)}W^{[i]}_{(2)}B^{[i]}\}}|^{K}_{i=1}\right)=\sum^{K}_{i=1}{{\left\|~\hat{A}^{[i]~2}X^{[i]}\Theta_{i}-\hat{U}^{[i]}~\sqrt{{\Sigma}^{[i]}}~\right\|}^{2}_{\mathrm{F}}}.

The optimal solution for the i-th GCN is

(22) {\Theta}_{i}={\left(X^{[i]\top}\hat{A}^{[i]~4}X^{[i]}\right)}^{\dagger}X^{[i]\top}\hat{A}^{[i]~2}\hat{U}^{[i]}\sqrt{{\Sigma}^{[i]}}\in\mathbb{R}^{m_{i}\times\mathrm{rank}(A^{[i]})},

where X^{\dagger} denotes X’s pseudoinverse. Hence a capacity of \mathrm{rank}(A^{[i]}) suffices for the i-th GCN. If d\geq\sum^{K}_{i=1}\mathrm{rank}(A^{[i]}), then B^{[i]} can be an identity matrix and there is no transfer between any two GCNs. In that case, whether {\{W^{[i]}_{(1)}\}}|^{K}_{i=1} or {\{W^{[i]}_{(2)}\}}|^{K}_{i=1} are shared or not, there exists an optimum W^{[i]*}_{(1)}W^{[i]*}_{(2)}={\Theta}_{i}, for all i=1\dots K.

The illustration of this idea and the extension to nonlinear settings are discussed in the Appendix.
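The closed-form optimum in Eq.(22) can be checked numerically with a few lines of NumPy; this is a sketch under the assumption that \hat{A} is symmetric (as the normalized adjacency is), with hypothetical helper names.

```python
import numpy as np

def rank_factors(A, eps=1e-8):
    """U-hat and Sigma from the truncated SVD A ~= U-hat Sigma U-hat^T (Section 4.1)."""
    U, s, _ = np.linalg.svd(A)
    r = int((s > eps).sum())
    return U[:, :r], np.diag(s[:r])

def linear_gcn_optimum(A_hat, X, U_hat, Sigma):
    """Eq.(22): Theta_i = (X^T A-hat^4 X)^+ X^T A-hat^2 U-hat sqrt(Sigma)."""
    A2 = A_hat @ A_hat
    return np.linalg.pinv(X.T @ A2 @ A2 @ X) @ X.T @ A2 @ U_hat @ np.sqrt(Sigma)
```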

4.2. Positive Transfer across GCN Models

We apply the theory of positive transfer in multi-task learning (MTL) (Wu et al., 2019) to cross-GCN learning. If the cross-GCN training improves over training only the target GCN on one particular graph, we say the source GCNs on the other graphs transfer positively to the target GCN. (Wu et al., 2019) proposed a theorem to quantify how many data points are needed to guarantee positive transfer between two ReLU model tasks. As shown in the previous section, it is necessary to limit the capacity of the shared model parameters to enforce knowledge transfer. Here we consider d=1 and m=m_{A}=m_{B} to align with the theorem in (Wu et al., 2019). We have a theorem for positive transfer between two one-layer GCNs that share the first-layer model parameters W_{(1)}:

Theorem 4.2.

Let (\hat{A}^{[i]}X^{[i]}\in\mathbb{R}^{n_{i}\times m},~\vec{\hat{u}}^{[i]}\sqrt{\lambda^{[i]}_{1}}\in\mathbb{R}^{n_{i}}) denote the data of graph \mathcal{G}^{[i]} (i\in\{A,B\}). Suppose the output is generated by \mathrm{ReLU}(\hat{A}^{[i]}X^{[i]}~{\theta}^{[i]}), and suppose c\geq\mathrm{sin}({\theta}^{[A]},{\theta}^{[B]})/\kappa(\hat{A}^{[B]}X^{[B]}). Denote by W^{\star}_{(1)} the optimal solution of the one-layer setting of Eq.(6): f(W_{(1)})=\sum_{i\in\{A,B\}}\mathcal{L}\left(g(X^{[i]},A^{[i]},W_{(1)}),~A^{[i]}\right). With probability 1-\delta over the randomness of (\hat{A}^{[A]}X^{[A]},\vec{\hat{u}}^{[A]}\sqrt{\lambda^{[A]}_{1}}), when

(23) n_{i}\geq\mathrm{max}\left(\frac{m\log m}{c^{2}}\left(\frac{1}{c^{2}}+\log m\right),\frac{{\|\vec{\hat{u}}^{[B]}\sqrt{\lambda^{[B]}_{1}}\|}^{2}}{c^{2}}\right),

we have that the estimation error is at most:

(24) \mathrm{sin}(W^{\star}_{(1)},{\theta}^{[A]})\leq\mathrm{sin}({\theta}^{[A]},{\theta}^{[B]})+O(c/\kappa(\hat{A}^{[B]}X^{[B]})).

Notations for the above theorem: \kappa(A)=\lambda_{\mathrm{max}}(A)/\lambda_{\mathrm{min}}(A) denotes the condition number of A\in\mathbb{R}^{n\times m}, where \lambda_{\mathrm{max}}(A) denotes its largest singular value and \lambda_{\mathrm{min}}(A) denotes its \mathrm{min}\{n,m\}-th largest singular value. As MTL is effective over “similar” tasks (Xue et al., 2007), cross-GCN learning can be effective across similar GCN models. A natural definition of GCN similarity is \mathrm{cos}(\theta^{[A]},\theta^{[B]}): higher means more similar. So \mathrm{sin}(W^{\star},\theta)=\sqrt{1-\mathrm{cos}^{2}(W^{\star},\theta)} can denote the estimation error. The proof of the theorem can be found in (Wu et al., 2019).

4.3. Positive Transfer with Network Alignment

Soft regularization: We transfer knowledge across two GCNs’ representation spaces by optimizing Eq.(10), where h is specified as Eq.(11). The optimal solution is

U^{[A]\star}={\left((1-\beta)\cdot\alpha^{[A]}~I+\beta R^{[A\rightarrow B]}R^{[A\rightarrow B]\top}\right)}^{-1}\cdot
(25) {\left((1-\beta)\cdot\alpha^{[A]}~\hat{U}^{[A]}\sqrt{\Sigma^{[A]}}+\beta~A^{[A,B]}U^{[B]\star}R^{[A\rightarrow B]\top}\right)}.

So, if \beta=0, U^{[A]\star}=\hat{U}^{[A]}\sqrt{\Sigma^{[A]}} may overfit the graph data \mathcal{G}^{[A]}; if \beta\in(0,1], the information transferred from \mathcal{G}^{[B]} via the alignment A^{[A,B]}, i.e., A^{[A,B]}U^{[B]\star}R^{[A\rightarrow B]}, can have a positive impact on link prediction on \mathcal{G}^{[A]}.

Alignment reconstruction: Suppose we optimize Eq.(10), where h is given by Eq.(12). The optimal solution is

U^{[A]\star}={\left((1-\beta)\cdot\alpha^{[A]}~\hat{U}^{[A]}\sqrt{\Sigma^{[A]}}+\beta A^{[A,B]}U^{[B]\star}R^{[A,B]\top}\right)}
(26) \cdot{\left((1-\beta)\cdot\alpha^{[A]}~I+\beta R^{[A,B]}U^{[B]\star\top}U^{[B]\star}R^{[A,B]\top}\right)}^{-1}.

So, if \beta=0, U^{[A]\star}=\hat{U}^{[A]}\sqrt{\Sigma^{[A]}} may overfit the graph data \mathcal{G}^{[A]}; if \beta\in(0,1], the information transferred from \mathcal{G}^{[B]} via the alignment A^{[A,B]}, i.e., A^{[A,B]}U^{[B]\star}R^{[A,B]\top}, can have a positive impact on link prediction on \mathcal{G}^{[A]}.

Table 1. Performance of VGAE (Kipf and Welling, 2016), (Separated) GCNs (Kipf and Welling, 2017), and 13 cross-GCN learning models on link prediction in the NSFAward and DBLP academic graphs (\mathcal{G}^{[A]} and \mathcal{G}^{[B]}). The models take various options on model sharing, regularizations, and alignment reconstruction. Equations can be found in Sections 2 and 3. Higher AUC or AP means better performance. We investigated the standard deviation of all the cells: all the values are smaller than 0.0016 and thus omitted for space. Underlined numbers are target values in the pre-training strategies. Bolded numbers are the highest in their columns.

5. Experiments

In this section, I perform cross-network learning on two types of graph learning tasks. For each task, I present the datasets (statistics can be found in Figure 2), experimental settings, results, and analysis.

5.1. Link Prediction

5.1.1. Datasets

Below are two collaboration networks I use that can be considered as weighted graphs between researcher nodes.

  • NSFAward: It is publicly available in the National Science Foundation (NSF) Awards database, covering 1970 to 2019 (NSF Award Search: https://www.nsf.gov/awardsearch/download.jsp). Nodes are investigators of the awards. Two investigator nodes have a link if they participated in the same award(s), and the weight of the link is the number of their shared award(s).

  • DBLP: I expand the DBLP-4Area (Huang et al., 2016) from 5,915 authors to 14,387 authors by adding their co-authors in the graph. The weight of the link between two author nodes is the number of their co-authored papers.

Figure 3. Sensitivity analysis on M13 (sharing W_{(1)} and W_{(2)}, plus alignment reconstruction): (a) on the weight of the alignment reconstruction task \beta, (b) on the weight of learning on graph \mathcal{G}^{[A]}: \alpha^{[A]}. And (c) running time analysis in seconds per epoch.

5.1.2. Experimental settings

For each graph, I randomly split the observed links into training, validation, and test sets with an 8:1:1 ratio. Then I randomly select unobserved links as negative examples for the validation and test sets. Positives and negatives are balanced. I use Area Under the ROC Curve (AUC) and Average Precision (AP) as evaluation metrics. I perform the evaluation 10 times and calculate the mean value and standard deviation. I also report the “overall” performance by calculating the harmonic mean of the AUC/AP on the two datasets. All models are developed in PyTorch and trained on one NVIDIA GeForce RTX 2080 Ti graphics card. I set d_{A}=d_{B}=64. I use grid search on the hyperparameters in \{0.1,0.2,\dots,0.9\} and find that the best combination is \alpha^{[A]}=\alpha^{[B]}=\beta=0.5.
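The evaluation protocol described above can be sketched as follows; scoring each pair with \sigma(\vec{u_{i}}^{\top}\vec{u_{j}}) follows Section 2.1, while the helper name and the pair-list format are hypothetical.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

def evaluate_link_prediction(U, pos_pairs, neg_pairs):
    """AUC/AP on a balanced set of held-out positive links and sampled
    negative pairs; each pair (i, j) is scored with sigmoid(u_i^T u_j)."""
    def scores(pairs):
        return [1.0 / (1.0 + np.exp(-U[i] @ U[j])) for i, j in pairs]
    y_true = [1] * len(pos_pairs) + [0] * len(neg_pairs)
    y_pred = scores(pos_pairs) + scores(neg_pairs)
    return roc_auc_score(y_true, y_pred), average_precision_score(y_true, y_pred)
```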

5.1.3. Results

Table 1 presents the performance of the 13 cross-GCN learning models I implemented. Models M3–M8 share W_{(2)} across the NSFAward and DBLP academic graphs. Models M9–M13 share both W_{(1)} and W_{(2)}. M6 uses hard regularization; M1, M7, and M12 use soft regularization; and M2, M8, and M13 use alignment reconstruction. I have the following observations:

O1. Cross-network models perform better than the baselines, and the best is M13: M13 shares W_{(1)} and W_{(2)} between the two GCNs and uses alignment reconstruction to bridge the two representation spaces. In terms of the overall AUC on the two test sets, M13 achieves 0.865, which is significantly higher than VGAE’s 0.782 (Kipf and Welling, 2016) and (Separated) GCNs’ 0.803 (Kipf and Welling, 2017). The AP is improved from VGAE’s 0.773 and GCN’s 0.793 to 0.870. Clearly, knowledge is successfully transferred across the GCN models on the two academic graphs. Performance on the test sets is worse than but very close to that on the validation sets, which indicates no or very little overfitting in the GCN models. The baseline models need information beyond the corresponding graph data for effective learning and decision-making on the test data.

O2. Sharing weight matrices is beneficial, and sharing those of both layers is even better: Suppose we do not use the alignment information at all; then the only possible bridge across two GCNs is the weight matrices W_{(1)} and W_{(2)}. M3–M5 share W_{(2)}. Compared with separated GCNs, M5 improves the overall AUC on the test sets from 0.803 to 0.836. M9–M11 share both W_{(1)} and W_{(2)}. Compared with M5, M11 improves the AUC from 0.836 to 0.847. When soft regularization is adopted, we can compare M7 and M12 with M1; when alignment reconstruction is adopted, we can compare M8 and M13 with M2. The conclusions are the same.

Figure 4. Empirical analysis on the choice of d.

O3. If W_{(2)} is shared, joint training is slightly better than pre-training: M3 pre-trains the model on \mathcal{G}^{[A]} and then fine-tunes on \mathcal{G}^{[B]}; M4 switches the roles of the graphs. So we focus on M3’s performance on \mathcal{G}^{[B]} and M4’s on \mathcal{G}^{[A]}. Compared with separated GCNs, M3 improves the AUC on the test set from 0.840 to 0.854; M4 improves it from 0.769 to 0.806. However, with a joint training strategy, M5 achieves an AUC on \mathcal{G}^{[B]} of 0.860 over M3’s 0.854 and an AUC on \mathcal{G}^{[A]} of 0.812 over M4’s 0.806.

O4. If W_{(1)} is shared, learning linear transformation matrices on both graphs is slightly better than learning a matrix on only one of them: M11 learns Q^{[A]}\in\mathbb{R}^{m_{A}\times\hat{m}} and Q^{[B]}\in\mathbb{R}^{m_{B}\times\hat{m}}. M9 learns Q^{[A\rightarrow B]} only, and M10 learns Q^{[B\rightarrow A]} only. In terms of the overall AUC on the two test sets, compared with M5 that shares W_{(2)} only, M9 improves the AUC from M5’s 0.836 to 0.840, M10 achieves an AUC of 0.845, and M11 achieves an AUC of 0.847.

O5. Regularizations, hard or soft, are beneficial, and alignment reconstruction is better: (a) M6 adds hard regularization to M5. M6 improves the overall AUC on the two test sets from M5’s 0.836 to 0.842. (b) M1, M7, and M12 add soft regularization to separated GCNs, M5, and M11, respectively. M1 improves the AUC from 0.803 to 0.805, M7 improves it from 0.836 to 0.849, and M12 improves it from 0.847 to 0.863. With more model parameters (e.g., weight matrices) shared, the improvements from regularization become bigger. (c) M8 and M13 add alignment reconstruction to M5 and M11, respectively. M8 improves the AUC from 0.836 to 0.856. M13 improves it from 0.847 to 0.865. The AUC scores are improved quite significantly by alignment reconstruction, slightly more than by regularizations.

5.1.4. Hyperparameter sensitivity

In Figures 3(a) and (b) we investigate the effect of the hyperparameters \beta and \alpha^{[A]} on the performance of model M13. When \beta=0, M13 is equivalent to M11. As \beta increases, the performance becomes better by learning to reconstruct the network alignment. It is best when \beta\in[0.4,0.6]. A \beta that is too big will overfit the model to the alignment reconstruction, so the AUC/AP decreases quickly as \beta goes from 0.6 to 1. Given \beta=0.5, an \alpha^{[A]} that is too big or too small will overfit one of the two graphs, \mathcal{G}^{[A]} or \mathcal{G}^{[B]}. When \alpha^{[A]} is around 0.5, the performance is best. So, all the optimization terms are important.

5.1.5. Time complexity

Figure 3(c) presents the time cost per epoch of separated GCNs, M1, M2, M5, and M11–M13. The models that share both W_{(1)} and W_{(2)} (M11), share W_{(2)} only (M5), and share neither (Separated) do not show a significant difference in time cost. Adopting soft regularization (M1 vs. Separated, M12 vs. M11) increases the time cost slightly. Adopting alignment reconstruction (M2 vs. M1, M13 vs. M12) takes slightly more time. The time complexity remains at the same level across the proposed models.

5.1.6. On the choice of dd

Figure 4 presents the overall AUC on the test sets of Separated GCNs, M5 (sharing W_{(2)}), and M11 (sharing W_{(1)} and W_{(2)}), when the number of dimensions d is in \{2,2^{2},\dots,2^{8}\}. As d becomes bigger, the improvements from model parameter sharing become smaller. As presented in the theoretical analysis, a small d forces knowledge transfer across the GCN models; when d is too big, there might be no transfer. We choose d=64 as the default number of dimensions of the latent spaces.

5.2. Relation Classification

5.2.1. Datasets

Below are four knowledge graphs I use that can be considered as relational graphs between entity nodes.

  • FB15K: It is a subset of the relational database Freebase (Schlichtkrull et al., 2018).

  • WN18: It is a subset of WordNet containing lexical relations.

  • YAGO: The open source knowledge base developed by MPI.

  • DBpedia500: A subset of structured content in the Wikipedia.

I will investigate whether incorporating partially-aligned graphs YAGO and DBPedia500 can improve the performance of relation classification on FB15K and WN18, respectively. Entities are aligned if they have the same surface name. Resolving the inaccuracy of entity alignment is important but out of this work’s scope.

Table 2. Performance of existing methods (e.g., R-GCN (Schlichtkrull et al., 2018)) and cross-R-GCN learning models on relation classification.

5.2.2. Experimental settings

Both FB15K and WN18 have standard splits for training, validation, and test. I provide results using two common evaluation metrics: mean reciprocal rank (MRR) and Hits at k (Hits@k). I report both raw and filtered MRR (with filtered MRR typically considered more reliable) and filtered Hits@1 and Hits@3, following TransE (Bordes et al., 2013). The other baseline methods include:

  • HolE (Nickel et al., 2016): it replaces the vector-matrix product with circular correlation to compute knowledge graph embeddings.

  • DistMult (Yang et al., 2014): a neural embedding approach that learns representations of entities and relations.

  • ComplEx (Trouillon et al., 2016): it models asymmetric relations by generalizing DistMult to the complex domain.

  • R-GCN (Schlichtkrull et al., 2018): it is the first relational graph convolutional network and is considered the state of the art. I use it as the basic GCN model for cross-GCN learning.

  • CrossE (Zhang et al., 2019a): its “interaction embeddings” are learned from both knowledge graph connections and link explanations.

I use the Adam optimizer with a learning rate of 0.01. Implementation details can be found in the Appendix.

5.2.3. Results

Table 2 presents the results of the baselines and six cross-GCN learning models (M1, M2, M5, M11–M13). Model M5 shares W_{(2)} across the two knowledge graphs. M11–M13 share both W_{(1)} and W_{(2)}. M1 and M12 use soft regularization; M2 and M13 use alignment reconstruction. Our main observation is that the cross-network models perform better than the baselines and the best model is M13. With the large-scale YAGO, M13 improves the filtered MRR from 0.696 to 0.742 and the filtered Hits@1 from 0.601 to 0.641 for relation classification on FB15K, compared to a sole R-GCN model. With DBpedia500, which is bigger than YAGO, M13 improves the filtered MRR from 0.819 to 0.843 and the filtered Hits@1 from 0.697 to 0.743 on WN18, compared to R-GCN. Both M12 and M13, which share both layers’ parameters and use alignment information, achieve higher performance than the best baseline CrossE on all the metrics.

6. Related Work

6.1. Graph Neural Networks

Graph neural network (GNN) is a type of neural network that operates on graph structure to capture the dependencies in graphs via message passing between the nodes (Zhang et al., 2020; Wu et al., 2020). Kipf and Welling proposed graph convolutional networks that use a spectral graph convolution to capture the spatial dependency (Kipf and Welling, 2017). Veličković et al. considered the interaction between nodes in the graph convolution, measured as graph attention weights (Veličković et al., 2017). Hamilton et al. implemented the graph convolution as an efficient aggregation and sampling algorithm for inductive node representation learning (Hamilton et al., 2017a).

Recently, GNN techniques have been applied for learning representations on relational graphs, specifically knowledge graphs (Sadeghian et al., 2019; Zhang et al., 2019c; Ren and Leskovec, 2020). Relational GNNs perform significantly better than conventional knowledge graph embeddings (Schlichtkrull et al., 2018). Most techniques suffer from the issue of data sparsity and our proposed methods address it with knowledge transfer across networks.

In this paper, we discussed weighted graph, relational graph, and bipartite graphs; however, heterogeneous graphs are ubiquitous and have unique essential challenges for modeling and learning (Wang et al., 2020; Luo et al., 2020; Liu et al., 2020; Wang et al., 2019a; Wang et al., 2019b). We have been seeing heterogeneous GNNs (Dong et al., 2017; Liu et al., 2018; Zhang et al., 2019b; Fu et al., 2020) and heterogeneous attention networks for graphs (Wang et al., 2019c; Yang et al., 2020; Hu et al., 2020a; Yao et al., 2020; Hu et al., 2020b). Extending our proposed cross-network learning to heterogeneous graphs and enabling inductive learning in the framework are very important. These two points are out of the scope of our work but worth being considered as future directions.

6.2. Network Alignment and Modeling

The alignment of nodes between two or more than two graphs has been created and/or utilized in at least three categories of methods (Jiang et al., 2012, 2015; Vijayan and Milenković, 2017; Chen et al., 2017; Qin et al., 2020; Gu and Milenković, 2020). First, Kong et al. inferred anchor links between two social networks by matching the subgraphs surrounding the linked nodes (Kong et al., 2013). Emmert et al. wrote a comprehensive survey on the algorithms of graph matching (Emmert-Streib et al., 2016). Second, Zhang et al. used graph pattern mining algorithms for node alignment on incomplete networks (Zhang et al., 2017) and multi-level network (Zhang et al., 2019d). Nassar et al. leveraged the multimodal information for network alignment (Nassar and Gleich, 2017). The third line of work learned node representations and trained the supervised models with node pairing labels (if available) (Du et al., 2019; Ye et al., 2019; Nassar et al., 2018). A matrix factorization-based model has demonstrated that cross-network or cross-platform modeling can alleviate the data sparsity on a single platform (Jiang et al., 2016).

7. Conclusions

In this paper, I proposed partially aligned GCNs that jointly learn node representations across graphs. I provided multiple methods as well as theoretical analysis to positively transfer knowledge across the set of partially aligned nodes. Extensive experiments on real-world knowledge graphs and collaboration networks show the superior performance of the proposed models on relation classification and link prediction.

Acknowledgment

This research was supported by National Science Foundation award IIS-1849816.

References

  • Bordes et al. (2013) Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. Advances in neural information processing systems 26 (2013), 2787–2795.
  • Cai et al. (2010) Deng Cai, Xiaofei He, Jiawei Han, and Thomas S Huang. 2010. Graph regularized nonnegative matrix factorization for data representation. IEEE transactions on pattern analysis and machine intelligence 33, 8 (2010), 1548–1560.
  • Chen et al. (2017) Zheng Chen, Xinli Yu, Bo Song, Jianliang Gao, Xiaohua Hu, and Wei-Shih Yang. 2017. Community-based network alignment for large attributed network. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. 587–596.
  • Cui et al. (2018) Peng Cui, Xiao Wang, Jian Pei, and Wenwu Zhu. 2018. A survey on network embedding. IEEE Transactions on Knowledge and Data Engineering 31, 5 (2018), 833–852.
  • Dong et al. (2017) Yuxiao Dong, Nitesh V Chawla, and Ananthram Swami. 2017. metapath2vec: Scalable representation learning for heterogeneous networks. In Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining. 135–144.
  • Du et al. (2019) Xingbo Du, Junchi Yan, and Hongyuan Zha. 2019. Joint Link Prediction and Network Alignment via Cross-graph Embedding.. In IJCAI. 2251–2257.
  • Emmert-Streib et al. (2016) Frank Emmert-Streib, Matthias Dehmer, and Yongtang Shi. 2016. Fifty years of graph matching, network alignment and network comparison. Information sciences 346 (2016), 180–197.
  • Fan et al. (2019) Wenqi Fan, Yao Ma, Qing Li, Yuan He, Eric Zhao, Jiliang Tang, and Dawei Yin. 2019. Graph neural networks for social recommendation. In The World Wide Web Conference. 417–426.
  • Fu et al. (2020) Xinyu Fu, Jiani Zhang, Ziqiao Meng, and Irwin King. 2020. MAGNN: Metapath Aggregated Graph Neural Network for Heterogeneous Graph Embedding. In Proceedings of The Web Conference 2020. 2331–2341.
  • Gu and Milenković (2020) Shawn Gu and Tijana Milenković. 2020. Data-driven network alignment. PloS one 15, 7 (2020), e0234978.
  • Hamilton et al. (2017a) Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017a. Inductive representation learning on large graphs. In Advances in neural information processing systems. 1024–1034.
  • Hamilton et al. (2017b) William L Hamilton, Rex Ying, and Jure Leskovec. 2017b. Representation learning on graphs: Methods and applications. arXiv preprint arXiv:1709.05584 (2017).
  • Hu et al. (2020b) Ziniu Hu, Yuxiao Dong, Kuansan Wang, Kai-Wei Chang, and Yizhou Sun. 2020b. Gpt-gnn: Generative pre-training of graph neural networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 1857–1867.
  • Hu et al. (2020a) Ziniu Hu, Yuxiao Dong, Kuansan Wang, and Yizhou Sun. 2020a. Heterogeneous graph transformer. In Proceedings of The Web Conference 2020. 2704–2710.
  • Huang et al. (2016) Zhipeng Huang, Yudian Zheng, Reynold Cheng, Yizhou Sun, Nikos Mamoulis, and Xiang Li. 2016. Meta structure: Computing relevance in large heterogeneous information networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 1595–1604.
  • Jiang et al. (2015) Meng Jiang, Peng Cui, Xumin Chen, Fei Wang, Wenwu Zhu, and Shiqiang Yang. 2015. Social recommendation with cross-domain transferable knowledge. IEEE transactions on knowledge and data engineering 27, 11 (2015), 3084–3097.
  • Jiang et al. (2012) Meng Jiang, Peng Cui, Fei Wang, Qiang Yang, Wenwu Zhu, and Shiqiang Yang. 2012. Social recommendation across multiple relational domains. In Proceedings of the 21st ACM international conference on Information and knowledge management. 1422–1431.
  • Jiang et al. (2016) Meng Jiang, Peng Cui, Nicholas J Yuan, Xing Xie, and Shiqiang Yang. 2016. Little Is Much: Bridging Cross-Platform Behaviors through Overlapped Crowds. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI). 13–19.
  • Kipf and Welling (2016) Thomas N Kipf and Max Welling. 2016. Variational graph auto-encoders. arXiv:1611.07308 (2016).
  • Kipf and Welling (2017) Thomas N Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations.
  • Kong et al. (2013) Xiangnan Kong, Jiawei Zhang, and Philip S Yu. 2013. Inferring anchor links across multiple heterogeneous social networks. In Proceedings of the 22nd ACM international conference on Information & Knowledge Management. 179–188.
  • Liu et al. (2020) Siwei Liu, Iadh Ounis, Craig Macdonald, and Zaiqiao Meng. 2020. A Heterogeneous Graph Neural Model for Cold-Start Recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. 2029–2032.
  • Liu et al. (2018) Ziqi Liu, Chaochao Chen, Xinxing Yang, Jun Zhou, Xiaolong Li, and Le Song. 2018. Heterogeneous graph neural networks for malicious account detection. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management. 2077–2085.
  • Luo et al. (2020) Wenjuan Luo, Han Zhang, Xiaodi Yang, Lin Bo, Xiaoqing Yang, Zang Li, Xiaohu Qie, and Jieping Ye. 2020. Dynamic Heterogeneous Graph Neural Network for Real-time Event Prediction. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 3213–3223.
  • Nassar and Gleich (2017) Huda Nassar and David F Gleich. 2017. Multimodal network alignment. In Proceedings of the 2017 SIAM International Conference on Data Mining. SIAM, 615–623.
  • Nassar et al. (2018) Huda Nassar, Nate Veldt, Shahin Mohammadi, Ananth Grama, and David F Gleich. 2018. Low rank spectral network alignment. In Proceedings of the 2018 World Wide Web Conference. 619–628.
  • Nickel et al. (2016) Maximilian Nickel, Lorenzo Rosasco, and Tomaso Poggio. 2016. Holographic embeddings of knowledge graphs. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 30.
  • Qin et al. (2020) Kyle K Qin, Flora D Salim, Yongli Ren, Wei Shao, Mark Heimann, and Danai Koutra. 2020. G-CREWE: Graph CompREssion With Embedding for Network Alignment. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management. 1255–1264.
  • Ren and Leskovec (2020) Hongyu Ren and Jure Leskovec. 2020. Beta Embeddings for Multi-Hop Logical Reasoning in Knowledge Graphs. Advances in Neural Information Processing Systems 33 (2020).
  • Sadeghian et al. (2019) Ali Sadeghian, Mohammadreza Armandpour, Patrick Ding, and Daisy Zhe Wang. 2019. Drum: End-to-end differentiable rule mining on knowledge graphs. In Advances in Neural Information Processing Systems. 15347–15357.
  • Schlichtkrull et al. (2018) Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In European Semantic Web Conference. Springer, 593–607.
  • Trouillon et al. (2016) Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. International Conference on Machine Learning (ICML).
  • Veličković et al. (2017) Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017. Graph attention networks. In International Conference on Learning Representations.
  • Vijayan and Milenković (2017) Vipin Vijayan and Tijana Milenković. 2017. Multiple network alignment via multiMAGNA++. IEEE/ACM transactions on computational biology and bioinformatics 15, 5 (2017), 1669–1682.
  • Wang et al. (2020) Danqing Wang, Pengfei Liu, Yining Zheng, Xipeng Qiu, and Xuanjing Huang. 2020. Heterogeneous Graph Neural Networks for Extractive Document Summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.
  • Wang et al. (2019a) Shen Wang, Zhengzhang Chen, Ding Li, Zhichun Li, Lu-An Tang, Jingchao Ni, Junghwan Rhee, Haifeng Chen, and Philip S Yu. 2019a. Attentional heterogeneous graph neural network: Application to program reidentification. In Proceedings of the 2019 SIAM International Conference on Data Mining. SIAM, 693–701.
  • Wang et al. (2019b) Xiang Wang, Xiangnan He, Yixin Cao, Meng Liu, and Tat-Seng Chua. 2019b. Kgat: Knowledge graph attention network for recommendation. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 950–958.
  • Wang et al. (2019c) Xiao Wang, Houye Ji, Chuan Shi, Bai Wang, Yanfang Ye, Peng Cui, and Philip S Yu. 2019c. Heterogeneous graph attention network. In The World Wide Web Conference. 2022–2032.
  • Wu et al. (2019) Sen Wu, Hongyang R Zhang, and Christopher Ré. 2019. Understanding and Improving Information Transfer in Multi-Task Learning. In International Conference on Learning Representations.
  • Wu et al. (2020) Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and S Yu Philip. 2020. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems (2020).
  • Xue et al. (2007) Ya Xue, Xuejun Liao, Lawrence Carin, and Balaji Krishnapuram. 2007. Multi-task learning for classification with dirichlet process priors. Journal of Machine Learning Research 8, Jan (2007), 35–63.
  • Yang et al. (2014) Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575 (2014).
  • Yang et al. (2020) Xu Yang, Cheng Deng, Tongliang Liu, and Dacheng Tao. 2020. Heterogeneous Graph Attention Network for Unsupervised Multiple-Target Domain Adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence (2020).
  • Yao et al. (2020) Shaowei Yao, Tianming Wang, and Xiaojun Wan. 2020. Heterogeneous Graph Transformer for Graph-to-Sequence Learning. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 7145–7154.
  • Ye et al. (2019) Rui Ye, Xin Li, Yujie Fang, Hongyu Zang, and Mingzhong Wang. 2019. A Vectorized Relational Graph Convolutional Network for Multi-Relational Network Alignment.. In IJCAI. 4135–4141.
  • Ying et al. (2018) Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L Hamilton, and Jure Leskovec. 2018. Graph convolutional neural networks for web-scale recommender systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 974–983.
  • Zhang et al. (2019b) Chuxu Zhang, Dongjin Song, Chao Huang, Ananthram Swami, and Nitesh V Chawla. 2019b. Heterogeneous graph neural network. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 793–803.
  • Zhang et al. (2019c) Shuai Zhang, Yi Tay, Lina Yao, and Qi Liu. 2019c. Quaternion knowledge graph embeddings. In Advances in Neural Information Processing Systems. 2735–2745.
  • Zhang et al. (2019d) Si Zhang, Hanghang Tong, Ross Maciejewski, and Tina Eliassi-Rad. 2019d. Multilevel network alignment. In The World Wide Web Conference. 2344–2354.
  • Zhang et al. (2017) Si Zhang, Hanghang Tong, Jie Tang, Jiejun Xu, and Wei Fan. 2017. ineat: Incomplete network alignment. In 2017 IEEE International Conference on Data Mining (ICDM). IEEE, 1189–1194.
  • Zhang et al. (2019a) Wen Zhang, Bibek Paudel, Wei Zhang, Abraham Bernstein, and Huajun Chen. 2019a. Interaction embeddings for prediction and explanation in knowledge graphs. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining. 96–104.
  • Zhang et al. (2020) Ziwei Zhang, Peng Cui, and Wenwu Zhu. 2020. Deep learning on graphs: A survey. IEEE Transactions on Knowledge and Data Engineering (2020).

Appendix A Additional Information for Theoretical Analysis

A.1. Choices of the Number of Dimensions dd

The total capacity (i.e., the sum of the ranks of the adjacency matrices) can be implemented as \sum^{K}_{i=1}\mathrm{argmax}_{p}~\lambda_{p}(A^{[i]})\geq\epsilon, i.e., the sum over graphs of the largest position p whose singular value is no smaller than \epsilon. To illustrate the idea, suppose \{W^{[i]}_{(2)}\} are shared across GCNs, which means W_{(2)}=W^{[1]}_{(2)}=\dots=W^{[K]}_{(2)}. As long as the shared W^{*}_{(2)} contains {{\Theta}_{i}}|^{K}_{i=1} in its column span, there exists W^{[i]*}_{(1)} such that W^{[i]*}_{(1)}W^{*}_{(2)}={\Theta}_{i}, which is optimal for Eq.(21) with minimum error. But this means no transfer across any GCNs. This can hurt generalization if a GCN has limited data \mathcal{G}^{[i]}, in which case its single-GCN solution overfits the training data, whereas the cross-network solution can leverage the other graphs’ data to improve generalization.

Extension to the ReLU/softmax setting. If the capacity of the shared model parameters (i.e., the number of dimensions d) is larger than the total capacity, then we can share both {\{W^{[i]}_{(1)}\}}|^{K}_{i=1} and {\{W^{[i]}_{(2)}\}}|^{K}_{i=1}. This remains an optimal solution to the joint training problem in the ReLU/softmax setting. Furthermore, there is no transfer between any GCNs through the shared model parameters.

A.2. Positive Transfer with Network Alignment

Soft regularization: The loss function can be re-written as:

(27) f = (1-\beta)\cdot\alpha^{[A]}~{\|U^{[A]}-\hat{U}^{[A]}\sqrt{\Sigma^{[A]}}\|}^{2}_{\mathrm{F}}
+~(1-\beta)\cdot\alpha^{[B]}~{\|U^{[B]}-\hat{U}^{[B]}\sqrt{\Sigma^{[B]}}\|}^{2}_{\mathrm{F}}
+~\beta~{\|U^{[A]}R^{[A\rightarrow B]}-A^{[A,B]}U^{[B]}\|}^{2}_{\mathrm{F}}.

The gradient is

\frac{\mathrm{d}~f}{\mathrm{d}~U^{[A]}}=2~(1-\beta)\cdot\alpha^{[A]}~(U^{[A]}-\hat{U}^{[A]}\sqrt{\Sigma^{[A]}})
(28) +2~\beta~(U^{[A]}R^{[A\rightarrow B]}-A^{[A,B]}U^{[B]})R^{[A\rightarrow B]~\top}.

Alignment reconstruction: The loss function is:

(29) f = (1-\beta)\cdot\alpha^{[A]}~{\|U^{[A]}-\hat{U}^{[A]}\sqrt{\Sigma^{[A]}}\|}^{2}_{\mathrm{F}}
+~(1-\beta)\cdot\alpha^{[B]}~{\|U^{[B]}-\hat{U}^{[B]}\sqrt{\Sigma^{[B]}}\|}^{2}_{\mathrm{F}}
+~\beta~{\|U^{[A]}R^{[A,B]}U^{[B]\top}-A^{[A,B]}\|}^{2}_{\mathrm{F}}.

The gradient is

\frac{\mathrm{d}~f}{\mathrm{d}~U^{[A]}}=2~(1-\beta)\cdot\alpha^{[A]}~(U^{[A]}-\hat{U}^{[A]}\sqrt{\Sigma^{[A]}})
(30) +2~\beta~(U^{[A]}R^{[A,B]}U^{[B]\top}-A^{[A,B]})U^{[B]}R^{[A,B]~\top}.