
Graph Inference Learning for Semi-supervised Classification

Chunyan Xu, Zhen Cui, Xiaobin Hong, Tong Zhang, and Jian Yang
School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
{cyx,zhen.cui,xbhong,tong.zhang,csjyang}@njust.edu.cn
Wei Liu
Tencent AI Lab, China
[email protected]
Corresponding author: Zhen Cui.
Abstract

In this work, we address semi-supervised classification of graph data, where the categories of unlabeled nodes are inferred from labeled nodes and the graph structure. Recent works often solve this problem via advanced graph convolution in a conventionally supervised manner, but the performance can degrade significantly when labeled data are scarce. To this end, we propose a Graph Inference Learning (GIL) framework that boosts the performance of semi-supervised node classification by learning to infer node labels over the graph topology. To bridge two nodes, we formally define a structure relation that encapsulates node attributes, between-node paths, and local topological structures, so that the inference can be conveniently deduced from one node to another. To learn the inference process, we further introduce meta-optimization on structure relations from training nodes to validation nodes, such that the learnt graph inference capability better adapts to testing nodes. Comprehensive evaluations on four benchmark datasets (Cora, Citeseer, Pubmed, and NELL) demonstrate the superiority of our proposed GIL over state-of-the-art methods on the semi-supervised node classification task.

1 Introduction

A graph, which comprises a set of vertices/nodes together with connecting edges, is a formal structural representation of non-regular data. Owing to its strong representational ability, it accommodates many potential applications, e.g., social networks (Orsini et al., 2017), world wide web data (Page et al., 1999), knowledge graphs (Xu et al., 2017), and protein-interaction networks (Borgwardt et al., 2007). Among these, semi-supervised node classification on graphs is one of the most interesting and popular topics. Given a graph in which some nodes are labeled, the aim of semi-supervised classification is to infer the categories of the remaining unlabeled nodes by using various priors of the graph.

While numerous previous works (Brandes et al., 2008; Zhou et al., 2004; Zhu et al., 2003; Yang et al., 2016; Zhao et al., 2019) have been devoted to semi-supervised node classification based on explicit graph Laplacian regularization, it is hard to efficiently boost label-prediction performance because of the strict assumption that connected nodes tend to share the same label. With the progress of deep learning on grid-shaped images/videos (He et al., 2016), a number of graph convolutional neural network (CNN) based methods, including spectral (Kipf & Welling, 2017) and spatial methods (Niepert et al., 2016; Pan et al., 2018; Yu et al., 2018), have been proposed to learn local convolution filters on graphs in order to extract more discriminative node representations. Although graph CNN based methods achieve considerable graph-embedding capability by optimizing filters, they are confined to a conventional semi-supervised framework and lack an efficient inference mechanism on graphs. Especially in the few-shot case, where only a small number of training nodes are labeled, such methods suffer a drastic performance drop. For example, the Pubmed graph dataset (Sen et al., 2008) consists of 19,717 nodes and 44,338 edges, but only 0.3% of its nodes are labeled for the semi-supervised node classification task. The aforementioned works usually boil down to a general classification task, where the model is learnt on a training set and selected by checking a validation set. However, they do not put great effort into learning how to infer from one node to another on a topological graph, especially in the few-shot regime.

Figure 1: The illustration of our proposed GIL framework. For the problem of graph node labeling, the category information of unlabeled nodes depends on the similarity computation between a query node (e.g., $v_j$) and the labeled reference nodes (e.g., $v_i$). We consider the similarity from three aspects: node attributes, the consistency of local topological structures (i.e., the circle with the dashed line), and the between-node path reachability (i.e., the red wavy line from $v_i$ to $v_j$). Specifically, the local structures as well as node attributes are encoded as high-level features with graph convolution, while the between-node path reachability is abstracted as reachable probabilities of random walks. To better generalize the inference to test nodes, we introduce a meta-learning strategy to optimize the learning of structure relations from training nodes to validation nodes.

In this paper, we propose a graph inference learning (GIL) framework that teaches the model itself to adaptively infer from reference labeled nodes to query unlabeled nodes, and thereby boosts the performance of semi-supervised node classification when only a few labeled samples are available. Given an input graph, GIL attempts to infer the labels of unlabeled nodes from the observed nodes by building between-node relations. The between-node relations are structured as the integration of node attributes, connection paths, and graph topological structures. In other words, the similarity between two nodes is determined from three aspects: the consistency of node attributes, the consistency of local topological structures, and the between-node path reachability, as shown in Fig. 1. The local structures anchored around each node, as well as the attributes of the nodes therein, are jointly encoded with graph convolution (Defferrard et al., 2016) for the sake of high-level feature extraction. For the between-node path reachability, we adopt the random walk algorithm to obtain the characteristics of reaching a query unlabeled node $v_j$ from a labeled reference node $v_i$ in a given graph. Based on the computed node representations and between-node reachability, the structure relations can be obtained by computing the similarity scores/relationships from reference nodes to unlabeled nodes in a graph. Inspired by the recent meta-learning strategy (Finn et al., 2017), we learn to infer the structure relations from a training set to a validation set, which benefits the generalization capability of the learned model. In other words, our proposed GIL attempts to learn transferable knowledge underlying the structure relations from training samples to validation samples, such that the learned structure relations can be better self-adapted to the testing stage.

We summarize the main contributions of this work in the following three aspects:

  • We propose a novel graph inference learning framework by building structure relations to infer unknown node labels from those labeled nodes in an end-to-end way. The structure relations are well defined by jointly considering node attributes, between-node paths, and graph topological structures.

  • To make the inference model generalize better to test nodes, we introduce a meta-learning procedure to optimize structure relations, which, to the best of our knowledge, is the first such attempt for graph node classification.

  • Comprehensive evaluations on three citation network datasets (Cora, Citeseer, and Pubmed) and one knowledge graph dataset (NELL) demonstrate the superiority of our proposed GIL compared with other state-of-the-art methods on the semi-supervised classification task.

2 Related Work

Graph CNNs: With the rapid development of deep learning methods, various graph convolutional neural networks (Kashima et al., 2003; Morris et al., 2017; Shervashidze et al., 2009; Yanardag & Vishwanathan, 2015; Jiang et al., 2019; Zhang et al., 2020) have been exploited to analyze irregular graph-structured data. To better extend general convolutional neural networks to graph domains, two broad strategies have been proposed: spectral and spatial convolution methods. Specifically, spectral filtering methods (Henaff et al., 2015; Kipf & Welling, 2017) develop convolution-like operators in the spectral domain and apply spectral filters by decomposing the graph Laplacian. Unfortunately, spectral-based approaches often incur a high computational complexity due to the eigenvalue decomposition, especially for graphs with a large number of nodes. To alleviate this computational burden, local spectral filtering methods (Defferrard et al., 2016) were proposed that parameterize the frequency responses as a Chebyshev polynomial approximation. Another type of graph CNN, namely spatial methods (Li et al., 2016; Niepert et al., 2016), performs the filtering operation directly on the spatial structures of adjacent vertices. Various approaches can be employed to aggregate or sort neighboring vertices, such as diffusion CNNs (Atwood & Towsley, 2016), GraphSAGE (Hamilton et al., 2017), PSCN (Niepert et al., 2016), and NgramCNN (Luo et al., 2017). From the perspective of data distribution, the Gaussian-induced convolution model (Jiang et al., 2019) was recently proposed to disentangle the aggregation process by encoding adjacent regions with a Gaussian mixture model.

Semi-supervised node classification: Among various graph-related applications, semi-supervised node classification has gained increasing attention recently, and various approaches have been proposed to deal with this problem, including explicit graph Laplacian regularization and graph-embedding approaches. Classic algorithms with graph Laplacian regularization include the label propagation method using Gaussian random fields (Zhu et al., 2003), the regularization framework relying on local/global consistency (Zhou et al., 2004), and the random walk-based sampling algorithm for acquiring context information (Yang et al., 2016). To further address scalable semi-supervised learning (Liu et al., 2012), the Anchor Graph regularization approach (Liu et al., 2010) was proposed to scale linearly with the number of graph nodes and was then applied to massive-scale graph datasets. Several graph convolutional network methods (Abu-El-Haija et al., 2018; Du et al., 2017; Thekumparampil et al., 2018; Velickovic et al., 2018; Zhuang & Ma, 2018) were subsequently developed to obtain discriminative representations of input graphs. For example, Kipf et al. (Kipf & Welling, 2017) proposed a scalable graph CNN model, which scales linearly in the number of graph edges and learns graph representations by encoding both local graph structures and node attributes. Graph attention networks (GAT) (Velickovic et al., 2018) compute hidden representations of each node by attending to its neighbors with a self-attention strategy. By jointly considering local- and global-consistency information, dual graph convolutional networks (Zhuang & Ma, 2018) were presented to deal with semi-supervised node classification. The critical difference between our proposed GIL and previous semi-supervised node classification methods is that we adopt a graph inference strategy by defining structure relations on graphs and then leverage a meta-optimization mechanism to learn an inference model, which, to the best of our knowledge, is the first such attempt, whereas existing graph CNNs treat semi-supervised node classification as a general classification task.

3 The Proposed Model

3.1 Problem Definition

Formally, we denote an undirected/directed graph as $\mathcal{G}=\{\mathcal{V},\mathcal{E},\mathcal{X},\mathcal{Y}\}$, where $\mathcal{V}=\{v_{i}\}_{i=1}^{n}$ is the finite set of $n$ (or $|\mathcal{V}|$) vertices, $\mathcal{E}\in\mathbb{R}^{n\times n}$ defines the adjacency relationships (i.e., edges) between vertices and represents the topology of $\mathcal{G}$, $\mathcal{X}\in\mathbb{R}^{n\times d}$ records the explicit/implicit attributes/signals of vertices, and $\mathcal{Y}\in\mathbb{R}^{n}$ contains the vertex labels from $C$ classes. The edge $\mathcal{E}_{ij}=\mathcal{E}(v_{i},v_{j})=0$ if and only if vertices $v_{i},v_{j}$ are not connected, otherwise $\mathcal{E}_{ij}\neq 0$. The attribute matrix $\mathcal{X}$ is attached to the vertex set $\mathcal{V}$, whose $i$-th row $\mathcal{X}_{v_{i}}$ (or $\mathcal{X}_{i\cdot}$) represents the attribute of the $i$-th vertex $v_{i}$; that is, each $v_{i}\in\mathcal{V}$ carries a vector of $d$-dimensional signals. Associated with each node $v_{i}\in\mathcal{V}$ is a discrete label $y_{i}\in\{1,2,\cdots,C\}$.

We consider the task of semi-supervised node classification over graph data, where only a small number of vertices are labeled for model learning, i.e., $|\mathcal{V}_{Label}|\ll|\mathcal{V}|$. Generally, we have three node sets: a training set $\mathcal{V}_{tr}$, a validation set $\mathcal{V}_{val}$, and a testing set $\mathcal{V}_{te}$. In the standard protocol of prior literature (Yang et al., 2016), the three node sets share the same label space. We follow but do not restrict our method to this protocol. Given the training and validation node sets, the aim is to predict the labels of testing nodes by using node attributes as well as edge connections. A sophisticated machine learning technique used in most existing methods (Kipf & Welling, 2017; Zhou et al., 2004) is to choose the optimal classifier (trained on the training set) after checking the performance on the validation set. However, these methods essentially ignore how to extract transferable knowledge from the labeled nodes to the unlabeled ones, even though the graph structure itself implies node connectivity/reachability. Moreover, due to the scarcity of labeled samples, the performance of such a classifier is usually unsatisfactory. To address these issues, we introduce a meta-learning mechanism (Finn et al., 2017; Ravi & Larochelle, 2017; Sung et al., 2017) to learn to infer node labels on graphs. Specifically, the graph structure, between-node path reachability, and node attributes are jointly modeled in the learning process. Our aim is to learn to infer from labeled nodes to unlabeled nodes, so that the learner performs better on the validation set and thus classifies the testing set more accurately.

3.2 Structure Relation

For convenient inference, we build a structure relation between two nodes on the topological graph. The labeled vertices (in the training set) are viewed as reference nodes, and their information can be propagated to the unlabeled vertices to improve label prediction accuracy. Formally, given a reference node $v_{i}\in\mathcal{V}_{Label}$, we define the score of a query node $v_{j}$ being similar to $v_{i}$ as

$$s_{i\rightarrow j}=f_{r}\big(f_{e}(\mathcal{G}_{v_{i}}),\,f_{e}(\mathcal{G}_{v_{j}}),\,f_{\mathcal{P}}(v_{i},v_{j},\mathcal{E})\big), \qquad (1)$$

where $\mathcal{G}_{v_{i}}$ and $\mathcal{G}_{v_{j}}$ may be understood as the centralized subgraphs around $v_{i}$ and $v_{j}$, respectively. $f_{e},f_{r},f_{\mathcal{P}}$ are three abstract functions that we explain as follows:

  • Node representation $f_{e}(\mathcal{G}_{v_{i}})\longrightarrow\mathbb{R}^{d_{v}}$ encodes the local representation of the centralized subgraph $\mathcal{G}_{v_{i}}$ around node $v_{i}$, and may thus be understood as a local filter function on graphs. This function should not only take the signals of the nodes therein as input, but also consider the local topological structure of the subgraph for more accurate similarity computation. To this end, we perform spectral graph convolution on subgraphs to learn discriminative node features, analogous to pixel-level feature extraction from convolution maps of gridded images. The details of the feature extraction $f_{e}$ are described in Section 4.

  • Path reachability $f_{\mathcal{P}}(v_{i},v_{j},\mathcal{E})\longrightarrow\mathbb{R}^{d_{p}}$ represents the characteristics of path reachability from $v_{i}$ to $v_{j}$. As there usually exist multiple traversal paths between two nodes, we choose this function to be the reachable probabilities of walks of different lengths from $v_{i}$ to $v_{j}$. More details are given in Section 4.

  • Structure relation $f_{r}(\mathbb{R}^{d_{v}},\mathbb{R}^{d_{v}},\mathbb{R}^{d_{p}})\longrightarrow\mathbb{R}$ is a relational function computing the score of $v_{j}$ being similar to $v_{i}$. This function is not exchangeable for different orderings of the two nodes, due to the asymmetric reachable relationship $f_{\mathcal{P}}$. If necessary, it can easily be made symmetric, e.g., by summing over the two traversal directions. The score function depends on three inputs: the local representations $f_{e}(\mathcal{G}_{v_{i}})$ and $f_{e}(\mathcal{G}_{v_{j}})$ extracted from the respective subgraphs, and the path reachability from $v_{i}$ to $v_{j}$.

In semi-supervised node classification, we take the training node set $\mathcal{V}_{tr}$ as the reference samples and the validation set $\mathcal{V}_{val}$ as the query samples during the training stage. Given a query node $v_{j}\in\mathcal{V}_{val}$, we can derive the class similarity score of $v_{j}$ w.r.t. the $c$-th ($c=1,\cdots,C$) category by weighting the reference samples $\mathcal{C}_{c}=\{v_{k}\,|\,y_{v_{k}}=c\}$. Formally, we can further revise Eqn. (1) and define the class-to-node relationship function as

$$s_{\mathcal{C}_{c}\rightarrow j}=\phi_{r}\Big(F_{\mathcal{C}_{c}\rightarrow v_{j}}\sum_{v_{i}\in\mathcal{C}_{c}}w_{i\rightarrow j}\cdot f_{e}(\mathcal{G}_{v_{i}}),\; f_{e}(\mathcal{G}_{v_{j}})\Big), \qquad (2)$$
$$\text{s.t.}\quad w_{i\rightarrow j}=\phi_{w}\big(f_{\mathcal{P}}(v_{i},v_{j},\mathcal{E})\big), \qquad (3)$$

where the function $\phi_{w}$ maps a reachable vector $f_{\mathcal{P}}(v_{i},v_{j},\mathcal{E})$ to a weight value, and the function $\phi_{r}$ computes the similarity score between $v_{j}$ and the $c$-th class nodes. The normalization factor $F_{\mathcal{C}_{c}\rightarrow v_{j}}$ of the $c$-th category w.r.t. $v_{j}$ is defined as

$$F_{\mathcal{C}_{c}\rightarrow v_{j}}=\frac{1}{\sum_{v_{i}\in\mathcal{C}_{c}}w_{i\rightarrow j}}. \qquad (4)$$

For the relation function $\phi_{r}$ and the weight function $\phi_{w}$, we may choose subnetworks to instantiate them in practice. The detailed implementation of our model can be found in Section 4.

3.3 Inference Learning

According to the class-to-node relationship function in Eqn. (2), given a query node $v_{j}$, we can obtain a score vector $\mathbf{s}_{\mathcal{C}\rightarrow j}=[s_{\mathcal{C}_{1}\rightarrow j},\cdots,s_{\mathcal{C}_{C}\rightarrow j}]^{\intercal}\in\mathbb{R}^{C}$ after computing the relations to all classes. The category with the maximum score is taken as the estimated label. Thus, we define the loss function based on cross entropy as follows:

$$\mathcal{L}=-\sum_{v_{j}}\sum_{c=1}^{C}y_{j,c}\log\hat{y}_{\mathcal{C}_{c}\rightarrow j}, \qquad (5)$$

where $y_{j,c}$ is a binary indicator (i.e., 0 or 1) of class label $c$ for node $v_{j}$, and the softmax operation is imposed on $s_{\mathcal{C}_{c}\rightarrow j}$, i.e., $\hat{y}_{\mathcal{C}_{c}\rightarrow j}=\exp(s_{\mathcal{C}_{c}\rightarrow j})/\sum_{k=1}^{C}\exp(s_{\mathcal{C}_{k}\rightarrow j})$. Other error functions may be chosen as the loss function, e.g., the mean squared error. In the regime of general classification, the cross-entropy loss is a standard choice that performs well.
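As a quick numerical illustration of Eqn. (5), the short NumPy sketch below applies the softmax to the class-to-node scores and accumulates the cross entropy over query nodes; the array shapes and the function name are our own choices for illustration.

```python
import numpy as np

def gil_cross_entropy(scores, labels):
    """scores: (m, C) matrix whose rows are [s_{C_1->j}, ..., s_{C_C->j}] for m
    query nodes; labels: (m,) integer class indices. Implements Eqn. (5)."""
    scores = scores - scores.max(axis=1, keepdims=True)                 # numerical stability
    y_hat = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax, \hat{y}
    return -np.log(y_hat[np.arange(len(labels)), labels]).sum()         # sum over query nodes
```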

Given a training set $\mathcal{V}_{tr}$, we expect the best performance to be obtained on the validation set $\mathcal{V}_{val}$ after optimizing the model on $\mathcal{V}_{tr}$. Given a trained/pretrained model $\Theta=\{f_{e},\phi_{w},\phi_{r}\}$, we iteratively perform gradient updates on the training set $\mathcal{V}_{tr}$ to obtain the new model; formally,

$$\Theta^{\prime}=\Theta-\alpha\nabla_{\Theta}\mathcal{L}_{tr}(\Theta), \qquad (6)$$

where $\alpha$ is the update rate. Note that, in the computation of class scores, since the reference node and the query node can both come from the training set $\mathcal{V}_{tr}$, we set the weight $w_{i\rightarrow j}=0$ if $i=j$ in Eqn. (3). After several iterations of gradient descent on $\mathcal{V}_{tr}$, we expect better performance on the validation set $\mathcal{V}_{val}$, i.e., $\min_{\Theta}\mathcal{L}_{val}(\Theta^{\prime})$. Thus, we perform the gradient update as follows:

$$\Theta=\Theta-\beta\nabla_{\Theta}\mathcal{L}_{val}(\Theta^{\prime}), \qquad (7)$$

where $\beta$ is the learning rate of meta optimization (Finn et al., 2017).

During the training process, we may sample batches from the training and validation nodes instead of taking them all at once. In the testing stage, we may take all training nodes and update the model according to Eqn. (6), as in the training process. The updated model is used as the final model and is then applied through Eqn. (2) to infer the class labels of query nodes.
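To make the two-stage optimization of Eqns. (6)-(7) concrete, the following is a minimal PyTorch-style sketch of one meta step. It assumes a model object exposing a functional forward pass `forward_with(graph, query, reference, weights)` that returns class scores under a given set of weights; this interface, the single inner step, and the optimizer-free outer update are our assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def meta_step(gil, graph, train_nodes, val_nodes, y, alpha=0.001, beta=0.001, inner_steps=1):
    """One meta-optimization step: adapt on the training nodes (Eqn. 6),
    then update the initial parameters with the validation loss (Eqn. 7)."""
    params = list(gil.parameters())
    fast_weights = [p.clone() for p in params]          # Theta' starts from Theta

    # Inner loop: gradient descent on the training loss (Eqn. 6).
    for _ in range(inner_steps):
        scores = gil.forward_with(graph, query=train_nodes,
                                  reference=train_nodes, weights=fast_weights)
        loss_tr = F.cross_entropy(scores, y[train_nodes])
        grads = torch.autograd.grad(loss_tr, fast_weights, create_graph=True)
        fast_weights = [w - alpha * g for w, g in zip(fast_weights, grads)]

    # Outer loop: evaluate the adapted model Theta' on validation nodes and
    # back-propagate the validation loss to the original parameters Theta (Eqn. 7).
    scores_val = gil.forward_with(graph, query=val_nodes,
                                  reference=train_nodes, weights=fast_weights)
    loss_val = F.cross_entropy(scores_val, y[val_nodes])
    meta_grads = torch.autograd.grad(loss_val, params)
    with torch.no_grad():
        for p, g in zip(params, meta_grads):
            p -= beta * g
    return loss_val.item()
```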

4 Modules

In this section, we instantiate all modules (i.e., functions) of the aforementioned structure relation; the implementation details are given below.

Node Representation $f_{e}(\mathcal{G}_{v_{i}})$: The local representation at vertex $v_{i}$ is extracted by performing the graph convolution operation on the subgraph $\mathcal{G}_{v_{i}}$. Similar to gridded images/videos, on which local convolution kernels are defined as multiple lattices with various receptive fields, spectral graph convolution is used to encode the local representations of an input graph in our work.

Given a graph sample $\mathcal{G}=\{\mathcal{V},\mathcal{E},\mathcal{X}\}$, the normalized graph Laplacian matrix is $\mathbf{L}=\mathbf{I}_{n}-\mathcal{D}^{-1/2}\mathcal{E}\mathcal{D}^{-1/2}=\mathbf{U}\Lambda\mathbf{U}^{T}$, where $\Lambda$ is the diagonal matrix of its eigenvalues. The spectral graph convolution can be defined as the multiplication of the signal $\mathcal{X}$ with a filter $g_{\theta}(\Lambda)=\operatorname{diag}(\theta)$ parameterized by $\theta$ in the Fourier domain: $\text{conv}(\mathcal{X})=g_{\theta}(\mathbf{L})*\mathcal{X}=\mathbf{U}g_{\theta}(\Lambda)\mathbf{U}^{T}\mathcal{X}$, where the parameter $\theta\in\mathbb{R}^{n}$ is a vector of Fourier coefficients. To reduce the computational complexity and capture local information, we use an approximate local filter based on Chebyshev polynomials (Defferrard et al., 2016), $g_{\theta}(\Lambda)=\sum_{k=0}^{K-1}\theta_{k}T_{k}(\hat{\Lambda})$, where the parameter $\theta\in\mathbb{R}^{K}$ is a vector of Chebyshev coefficients and $T_{k}(\hat{\Lambda})\in\mathbb{R}^{n\times n}$ is the Chebyshev polynomial of order $k$ evaluated at $\hat{\Lambda}=2\Lambda/\lambda_{max}-\mathbf{I}_{n}$, a diagonal matrix of scaled eigenvalues. The graph filtering operation can then be expressed as $g_{\theta}(\Lambda)*\mathcal{X}=\sum_{k=0}^{K-1}\theta_{k}T_{k}(\hat{\mathbf{L}})\mathcal{X}$, where $T_{k}(\hat{\mathbf{L}})\in\mathbb{R}^{n\times n}$ is the Chebyshev polynomial of order $k$ evaluated at the scaled Laplacian $\hat{\mathbf{L}}=2\mathbf{L}/\lambda_{max}-\mathbf{I}_{n}$. Further, we construct multi-scale receptive fields for each vertex based on the Laplacian matrix $\mathbf{L}$, where each receptive field records the hopping neighborhood relationships around the reference vertex $v_{i}$ and forms a local centralized subgraph.
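As a minimal illustration of this Chebyshev-approximated filtering, the NumPy sketch below evaluates $\sum_{k=0}^{K-1}\theta_{k}T_{k}(\hat{\mathbf{L}})\mathcal{X}$ with the standard recurrence $T_{k}(x)=2xT_{k-1}(x)-T_{k-2}(x)$; the dense-matrix layout and the default $\lambda_{max}=2$ are simplifying assumptions for readability, not details taken from the paper.

```python
import numpy as np

def chebyshev_filter(X, E, theta, lam_max=2.0):
    """Apply g_theta(L) * X = sum_k theta_k T_k(L_hat) X with scalar Chebyshev
    coefficients theta (length K). E: (n, n) adjacency, X: (n, d) node signals."""
    n = E.shape[0]
    deg = E.sum(axis=1).astype(float)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    L = np.eye(n) - d_inv_sqrt[:, None] * E * d_inv_sqrt[None, :]  # normalized Laplacian
    L_hat = 2.0 * L / lam_max - np.eye(n)                          # scaled Laplacian

    Tx_prev, Tx = X, L_hat @ X          # T_0(L_hat)X = X,  T_1(L_hat)X = L_hat X
    out = theta[0] * Tx_prev
    if len(theta) > 1:
        out = out + theta[1] * Tx
    for k in range(2, len(theta)):
        Tx_next = 2.0 * (L_hat @ Tx) - Tx_prev   # Chebyshev recurrence
        out = out + theta[k] * Tx_next
        Tx_prev, Tx = Tx, Tx_next
    return out
```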

Path Reachability $f_{\mathcal{P}}(v_{i},v_{j},\mathcal{E})$: Here we compute the probabilities of paths from vertex $i$ to vertex $j$ by employing random walks on the graph, i.e., traversals from $v_{i}$ to $v_{j}$ according to the probability matrix $\mathbf{P}$. For the input graph $\mathcal{G}$ with $n$ vertices, the random-walk transition matrix can be defined as $\mathbf{P}=\mathcal{D}^{-1}\mathcal{E}$, where $\mathcal{D}\in\mathbb{R}^{n\times n}$ is the diagonal degree matrix with $\mathcal{D}_{ii}=\sum_{j}\mathcal{E}_{ij}$. That is, each element $P_{ij}$ is the probability of going from vertex $i$ to vertex $j$ in one step.

The sequence of nodes visited when walking from vertex $i$ to vertex $j$ is a random walk on the graph, which can be modeled as a classical Markov chain over the set of graph vertices. In this formulation, $P^{t}_{ij}$ is the probability of getting from vertex $v_{i}$ to vertex $v_{j}$ in $t$ steps. This fact is easily seen by considering a $t$-step path from $v_{i}$ to $v_{j}$ as first taking a single step to some vertex $h$, and then taking $t-1$ steps to $v_{j}$. The $t$-step transition probability $P^{t}$ can thus be formulated recursively as

$$P_{ij}^{t}=\begin{cases}P_{ij} & \text{if } t=1,\\ \sum_{h}P_{ih}P_{hj}^{t-1} & \text{if } t>1,\end{cases} \qquad (8)$$

where each matrix entry $P_{ij}^{t}$ denotes the probability of starting at vertex $i$ and ending at vertex $j$ in $t$ steps. Finally, the node reachability from $v_{i}$ to $v_{j}$ can be written as a $d_{p}$-dimensional vector:

$$f_{\mathcal{P}}(v_{i},v_{j},\mathcal{E})=[P_{ij},P_{ij}^{2},\dots,P_{ij}^{d_{p}}], \qquad (9)$$

where $d_{p}$ refers to the step length of the longest path from $v_{i}$ to $v_{j}$.
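A direct (if memory-hungry) realization of Eqns. (8)-(9) is to build the row-normalized transition matrix $\mathbf{P}=\mathcal{D}^{-1}\mathcal{E}$, take successive matrix powers, and read off the $(i,j)$ entries. The dense NumPy form below is a sketch of this computation under those assumptions, not an optimized implementation.

```python
import numpy as np

def path_reachability(E, i, j, d_p):
    """f_P(v_i, v_j, E): probabilities of walking from v_i to v_j in 1..d_p steps
    of a random walk with transition matrix P = D^{-1} E (Eqns. 8-9)."""
    deg = E.sum(axis=1).astype(float)
    P = np.zeros_like(E, dtype=float)
    nz = deg > 0
    P[nz] = E[nz] / deg[nz, None]          # row-normalize: P_ij = E_ij / D_ii
    probs, P_t = [], np.eye(E.shape[0])
    for _ in range(d_p):
        P_t = P_t @ P                      # P^t holds the t-step transition probabilities
        probs.append(P_t[i, j])
    return np.array(probs)                 # [P_ij, P_ij^2, ..., P_ij^{d_p}]
```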

Class-to-Node Relationship $s_{\mathcal{C}_{c}\rightarrow j}$: To define the node relationship $s_{i\rightarrow j}$ from $v_{i}$ to $v_{j}$, we simultaneously consider the path reachability $f_{\mathcal{P}}(v_{i},v_{j},\mathcal{E})$ and the local representations $f_{e}(\mathcal{G}_{v_{i}})$ and $f_{e}(\mathcal{G}_{v_{j}})$ of nodes $v_{i},v_{j}$. The function $\phi_{w}(f_{\mathcal{P}}(v_{i},v_{j},\mathcal{E}))$ in Eqn. (3), which maps the reachable vector $f_{\mathcal{P}}(v_{i},v_{j},\mathcal{E})\in\mathbb{R}^{d_{p}}$ to a weight value, is implemented with two 16-dimensional fully connected layers in our experiments. The computed value $w_{i\rightarrow j}$ is further used to weight the local features $f_{e}(\mathcal{G}_{v_{i}})\in\mathbb{R}^{d_{v}}$ at node $v_{i}$. To obtain the similarity score between $v_{j}$ and the $c$-th class nodes $\mathcal{C}_{c}$ in Eqn. (2), we concatenate two input features, one being the weighted reference features aggregated as in Eqn. (2) and the other the local features of vertex $v_{j}$. One fully connected layer (w.r.t. $\phi_{r}$) with $C$ dimensions is finally adopted to obtain the relation regression score.
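Below is a hedged PyTorch sketch of how $\phi_{w}$ and $\phi_{r}$ could be instantiated as described: two 16-dimensional fully connected layers for $\phi_{w}$ and a single $C$-dimensional fully connected layer on the concatenated features for $\phi_{r}$. The final scalar projection and sigmoid squashing in $\phi_{w}$, the ReLU activations, and the reading of the $C$-dimensional output as the relation regression score are our assumptions where the text leaves details unspecified.

```python
import torch
import torch.nn as nn

class ClassToNodeRelation(nn.Module):
    """Sketch of phi_w (Eqn. 3) and phi_r (Eqn. 2) from Section 4."""
    def __init__(self, d_p, d_v, num_classes):
        super().__init__()
        # phi_w: reachability vector -> scalar weight (two 16-d FC layers per the
        # text; the final 1-d projection and the sigmoid are assumptions).
        self.phi_w = nn.Sequential(
            nn.Linear(d_p, 16), nn.ReLU(),
            nn.Linear(16, 16), nn.ReLU(),
            nn.Linear(16, 1), nn.Sigmoid())
        # phi_r: one C-dimensional FC layer on the concatenated features.
        self.phi_r = nn.Linear(2 * d_v, num_classes)

    def forward(self, ref_feats, query_feat, reach_vecs):
        """ref_feats: (m, d_v) features f_e(G_{v_i}) of reference nodes of class C_c;
        query_feat: (d_v,) feature f_e(G_{v_j}); reach_vecs: (m, d_p) reachability
        vectors f_P(v_i, v_j, E). Returns a (num_classes,) regression score."""
        w = self.phi_w(reach_vecs).squeeze(-1)                            # w_{i->j}, Eqn. (3)
        agg = (w[:, None] * ref_feats).sum(0) / w.sum().clamp_min(1e-8)   # Eqns. (2) and (4)
        return self.phi_r(torch.cat([agg, query_feat], dim=-1))          # relation regression score
```

How the $C$-dimensional output is mapped to the per-class score $s_{\mathcal{C}_{c}\rightarrow j}$ when aggregating over each class $\mathcal{C}_{c}$ is our reading of the text and may differ from the authors' exact implementation.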

5 Experiments

Datasets Nodes Edges Classes Features Label Rates
Cora 2,708 5,429 7 1,433 0.052
Citeseer 3,327 4,732 6 3,703 0.036
Pubmed 19,717 44,338 3 500 0.003
NELL 65,755 266,144 210 5,414 0.001
Table 1: The properties (including label rates) of the graph datasets used for the semi-supervised classification task.

5.1 Experimental Settings

We evaluate our proposed GIL method on three citation network datasets, Cora, Citeseer, and Pubmed (Sen et al., 2008), and on the knowledge graph NELL dataset (Carlson et al., 2010). The statistical properties of the graph data are summarized in Table 1. Following the protocol of (Kipf & Welling, 2017; Zhuang & Ma, 2018), we split the graph data into a training set, a validation set, and a testing set. Taking into account the graph convolution and pooling modules, we alternately stack them into a multi-layer graph convolutional network. The GIL model consists of two graph convolution layers, each followed by a mean-pooling layer, a class-to-node relationship regression module, and a final softmax layer. The detailed configuration of the relationship regression module is given in the class-to-node relationship part of Section 4. The parameter $d_{p}$ in Eqn. (9) is set to the mean length of between-node reachability paths in the input graph. The channels of the 1st and 2nd convolutional layers are set to 128 and 256, respectively. The scale of the receptive field is 2 in both convolutional layers. The dropout rate is set to 0.5 in the convolution and fully connected layers to avoid over-fitting, and the ReLU unit is used as the nonlinear activation function. We pre-train our proposed GIL model for 200 iterations on the training set, where the initial learning rate, decay factor, and momentum are set to 0.05, 0.95, and 0.9, respectively. We train the GIL model using stochastic gradient descent with a batch size of 100. We then further improve the inference learning capability of the GIL model for 1,200 iterations with the validation set, where the meta-learning rates $\alpha$ and $\beta$ are both set to 0.001.

5.2 Comparison with state-of-the-arts

We compare the GIL approach with several state-of-the-art methods (Monti et al., 2017; Kipf & Welling, 2017; Zhou et al., 2004; Zhuang & Ma, 2018) on four graph datasets: Cora, Citeseer, Pubmed, and NELL. The classification accuracies of all methods are reported in Table 2. Our proposed GIL significantly outperforms the graph Laplacian regularized methods on all four datasets, including DeepWalk (Zhou et al., 2004), modularity clustering (Brandes et al., 2008), Gaussian fields (Zhu et al., 2003), and graph embedding (Yang et al., 2016). For example, we achieve much higher performance than the DeepWalk method (Zhou et al., 2004): 74.1% vs. 43.2% on the Citeseer dataset, 83.1% vs. 65.3% on the Pubmed dataset, and 78.9% vs. 58.1% on the NELL dataset. The graph embedding method (Yang et al., 2016), which considers both label information and graph structure during sampling, falls below our proposed GIL by 9.4% on the Citeseer dataset and 10.5% on the Cora dataset, respectively. This indicates that our proposed GIL better optimizes structure relations and thus improves network generalization. We further compare GIL with several existing deep graph embedding methods, including graph attention networks (Velickovic et al., 2018), dual graph convolutional networks (Zhuang & Ma, 2018), topology adaptive graph convolutional networks (Du et al., 2017), and multi-scale graph convolution (Abu-El-Haija et al., 2018). Our proposed GIL achieves large gains, e.g., 86.2% vs. 83.3% (Du et al., 2017) on the Cora dataset and 78.9% vs. 66.0% (Kipf & Welling, 2017) on the NELL dataset. On the large Pubmed dataset with a low label rate, GIL significantly outperforms existing baselines: by 3.1% over DGCN (Zhuang & Ma, 2018), 4.1% over the classic GCN (Kipf & Welling, 2017) and TAGCN (Du et al., 2017), 3.2% over AGNN (Thekumparampil et al., 2018), and 3.6% over N-GCN (Abu-El-Haija et al., 2018). This demonstrates that our proposed GIL performs well on various graph datasets by building the graph inference learning process, in which the limited label information and graph structures are well exploited in the prediction framework.

Table 2: Performance comparisons of semi-supervised classification methods.
Methods Cora Citeseer Pubmed NELL
Clustering (Brandes et al., 2008) 59.5 60.1 70.7 21.8
DeepWalk (Zhou et al., 2004) 67.2 43.2 65.3 58.1
Gaussian (Zhu et al., 2003) 68.0 45.3 63.0 26.5
G-embedding (Yang et al., 2016) 75.7 64.7 77.2 61.9
DCNN (Atwood & Towsley, 2016) 76.8 - 73.0 -
GCN (Kipf & Welling, 2017) 81.5 70.3 79.0 66.0
MoNet (Monti et al., 2017) 81.7 - 78.8 -
N-GCN (Abu-El-Haija et al., 2018) 83.0 72.2 79.5 -
GAT (Velickovic et al., 2018) 83.0 72.5 79.0 -
AGNN (Thekumparampil et al., 2018) 83.1 71.7 79.9 -
TAGCN (Du et al., 2017) 83.3 72.5 79.0 -
DGCN (Zhuang & Ma, 2018) 83.5 72.6 80.0 74.2
Our GIL 86.2 74.1 83.1 78.9

5.3 Analysis

Meta-optimization: In Table 3, we report the classification accuracies of several variants of our proposed GIL and the classic GCN method (Kipf & Welling, 2017) on the Cora dataset. To analyze the performance improvement brought by the graph inference learning process, we report the accuracies of GCN (Kipf & Welling, 2017) and our proposed GIL on Cora under two settings: "learning only on the training set $\mathcal{V}_{tr}$" and "jointly learning on the training set $\mathcal{V}_{tr}$ and the validation set $\mathcal{V}_{val}$". "GCN w/ jointly learning on $\mathcal{V}_{tr}$ & $\mathcal{V}_{val}$" achieves a better result than "GCN w/ learning on $\mathcal{V}_{tr}$" by 2.6%, which demonstrates that the network performance can be improved by employing validation samples. When using structure relations, "GIL w/ learning on $\mathcal{V}_{tr}$" obtains an improvement of 1.9% over "GCN w/ learning on $\mathcal{V}_{tr}$", which can be attributed to building connections between nodes. The meta-optimization strategy ("GIL w/ meta-training from $\mathcal{V}_{tr}\rightarrow\mathcal{V}_{val}$" vs. "GIL w/ learning on $\mathcal{V}_{tr}$") brings a gain of 2.9%, which indicates that a good inference capability can be learnt through meta-optimization. It is worth noting that GIL adopts a meta-optimization strategy to learn the inference model, i.e., a process of migrating from a training set to a validation set. In other words, the validation set is only used to teach the model how to transfer to unseen data, whereas conventional methods often employ a validation set to tune the parameters of the model of interest.

Table 3: Performance comparisons with several GIL variants and the classical GCN method on the Cora dataset.
Methods Acc. (%)
GCN (Kipf & Welling, 2017) w/ learning on $\mathcal{V}_{tr}$ 81.4
GCN w/ jointly learning on $\mathcal{V}_{tr}$ & $\mathcal{V}_{val}$ 84.0
GIL w/ learning on $\mathcal{V}_{tr}$ 83.3
GIL w/ meta-train $\mathcal{V}_{tr}\rightarrow\mathcal{V}_{val}$ 86.2
GIL + mean pooling w/ 1 conv. layer 84.5
GIL + mean pooling w/ 2 conv. layers 86.2
GIL + mean pooling w/ 3 conv. layers 85.4
GIL + 2 conv. layers w/ max pooling 85.2
GIL + 2 conv. layers w/ mean pooling 86.2

Network settings: We explore the effectiveness of our proposed GIL with the same mean-pooling mechanism but different numbers of convolutional layers, i.e., "GIL + mean pooling" with one, two, or three convolutional layers. As shown in Table 3, GIL with two convolutional layers obtains better performance on the Cora dataset than the other two settings (GIL with one or three convolutional layers). For example, the performance of "GIL w/ 1 conv. layer + mean pooling" is 1.7% lower than that of "GIL w/ 2 conv. layers + mean pooling" on the Cora dataset. Furthermore, we report the classification results of our proposed GIL with mean- and max-pooling mechanisms. GIL with mean pooling ("GIL w/ 2 conv. layers + mean pooling") yields a better result than GIL with max pooling ("GIL w/ 2 conv. layers + max pooling"), e.g., 86.2% vs. 85.2% on the Cora dataset. The reason may be that the graph network with two convolutional layers and the mean-pooling mechanism obtains optimal graph embeddings, whereas adding more layers introduces more parameters to optimize, which may lead to over-fitting.

Influence of different between-node steps: We compare the classification performance for different between-node steps for our proposed GIL and GCN (Kipf & Welling, 2017), as illustrated in Fig. 2. The between-node step is computed as the shortest path between reference nodes and query nodes. When the step between nodes is small, both GIL and GCN can predict the category information for only a small portion of the unlabeled nodes in the testing set. The reason may be that a node's category information can be disturbed by its nearest neighboring nodes with different labels, and few testing nodes lie within 1 or 2 steps. When two nodes are not connected in the graph (i.e., step $=\infty$), the GIL and GCN methods can still infer the category information for part of the unlabeled nodes by using node attributes. As the length of the reachability path increases, the inference process of the GIL method becomes more difficult and more graph structure information is employed in the prediction process. GIL outperforms the classic GCN across different between-node steps, which indicates that our proposed GIL has a better inference capability than GCN thanks to the meta-optimization mechanism from training nodes to validation nodes.

Figure 2: (a) Performance comparisons within different between-node steps on the Cora dataset. The accuracy equals the number of correctly classified nodes divided by the number of all testing samples, accumulated from step 1 to step $k$. (b) Performance comparisons with different label rates on the Pubmed dataset.

Influence of different label rates: We also explore the performance of the GIL method under different label rates; the detailed results on the Pubmed dataset are shown in Fig. 2. As the label rate increases multiplicatively, the performances of both GIL and GCN improve, but the relative gain narrows. The reason is that the reachable path lengths between unlabeled and labeled nodes shrink as more nodes are labeled, which weakens the effect of inference learning. In the extreme case, the labels of unlabeled nodes could be determined by neighbors within $1\sim 2$ steps of reachability. In summary, our proposed GIL method is particularly suited to settings with a small ratio of labeled nodes in the semi-supervised node classification task.

Inference learning process: Classification errors at different iterations on the validation set of the Cora dataset are illustrated in Fig. 3. Classification errors decrease rapidly as the number of iterations increases from the beginning to 400 iterations, and then decline slowly from 400 to 1,200 iterations. This demonstrates that the knowledge learned from the training samples can be transferred to infer node category information from the reference labeled nodes. The performance of semi-supervised classification can be further increased by improving the generalization capability of the graph CNN model.

Figure 3: Classification errors of different iterations on the validation set of the Cora dataset.
Table 4: Performance comparisons with different modules on the Cora dataset, where $f_{e}$, $f_{\mathcal{P}}$, and $f_{r}$ denote node representation, path reachability, and structure relation, respectively.
$f_{e}$ $f_{r}$ $f_{\mathcal{P}}$ Acc. (%)
- - - 56.0
✓ - - 81.5
✓ ✓ - 85.0
✓ ✓ ✓ 86.2

Module analysis: We evaluate the effectiveness of the different modules within our proposed GIL framework, including the node representation $f_{e}$, the path reachability $f_{\mathcal{P}}$, and the structure relation $f_{r}$. Note that the last one, $f_{r}$, is defined on top of the former two, so we consider the cases in Table 4 by adding modules incrementally. When no module is used, only the original node attributes are used to predict labels. The case of using only $f_{e}$ corresponds to the GCN method, which achieves 81.5% on the Cora dataset. The large gain from the relation module $f_{r}$ (from 81.5% to 85.0%) may be attributed to the ability of inference learning over attributes as well as the local topological structures implicitly encoded in $f_{e}$. The path information $f_{\mathcal{P}}$ further boosts the performance by 1.2%, i.e., 86.2% vs. 85.0%. This demonstrates that all three modules contribute to the graph inference learning capability of our method.

Computational complexity: The cost of our proposed GIL is mainly spent on the computations of node representation, between-node reachability, and class-to-node relationship, which are about $O((n_{tr}+n_{te})\cdot e\cdot d_{in}\cdot d_{out})$, $O((n_{tr}+n_{te})\cdot e\cdot P)$, and $O(n_{tr}\cdot n_{te}\cdot d_{out}^{2})$, respectively, where $n_{tr}$ and $n_{te}$ are the numbers of training and testing nodes, $d_{in}$ and $d_{out}$ denote the input and output dimensions of node representation, $e$ is about the average degree of a graph node, and $P$ is the step length of node reachability. Compared with classic graph CNNs (Kipf & Welling, 2017), our proposed GIL has a slightly higher cost due to the extra inference learning process, but it completes the testing stage within several seconds on these benchmark datasets.

6 Conclusion

In this work, we tackled the semi-supervised node classification task with a graph inference learning method, which better predicts the categories of unlabeled nodes in an end-to-end framework. We build a structure relation to capture the connection between any two graph nodes, in which node attributes, between-node paths, and graph structure information are encapsulated together. To better capture transferable knowledge, our method further learns to transfer the mined knowledge from the training samples to the validation set, finally boosting the prediction accuracy on the unlabeled nodes in the testing set. Extensive experimental results demonstrate the effectiveness of our proposed GIL for solving the semi-supervised learning problem, even in the few-shot paradigm. In the future, we will extend the graph inference method to handle more graph-related tasks, such as graph generation and social network analysis.

Acknowledgment

This work was supported by the National Natural Science Foundation of China (Nos. 61972204, 61906094, U1713208), the Natural Science Foundation of Jiangsu Province (Grant Nos. BK20191283 and BK20190019), and Tencent AI Lab Rhino-Bird Focused Research Program (No. JR201922).

References

  • Abu-El-Haija et al. (2018) Sami Abu-El-Haija, Amol Kapoor, Bryan Perozzi, and Joonseok Lee. N-gcn: Multi-scale graph convolution for semi-supervised node classification. arXiv preprint arXiv:1802.08888, 2018.
  • Atwood & Towsley (2016) James Atwood and Don Towsley. Diffusion-convolutional neural networks. In NeurIPS, pp.  1993–2001, 2016.
  • Borgwardt et al. (2007) Karsten M Borgwardt, Hans-Peter Kriegel, SVN Vishwanathan, and Nicol N Schraudolph. Graph kernels for disease outcome prediction from protein-protein interaction networks. Pacific Symposium on Biocomputing Pacific Symposium on Biocomputing, pp.  4–15, 2007.
  • Brandes et al. (2008) Ulrik Brandes, Daniel Delling, Marco Gaertler, Robert Gorke, Martin Hoefer, Zoran Nikoloski, and Dorothea Wagner. On modularity clustering. IEEE transactions on knowledge and data engineering, 20(2):172–188, 2008.
  • Carlson et al. (2010) Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R. Hruschka Jr., and Tom M. Mitchell. Toward an architecture for never-ending language learning. In AAAI, 2010.
  • Defferrard et al. (2016) Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In NeurIPS, pp.  3844–3852, 2016.
  • Du et al. (2017) Jian Du, Shanghang Zhang, Guanhang Wu, José MF Moura, and Soummya Kar. Topology adaptive graph convolutional networks. arXiv preprint arXiv:1710.10370, 2017.
  • Finn et al. (2017) Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, pp.  1126–1135, 2017.
  • Hamilton et al. (2017) Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In NeurIPS, pp.  1025–1035, 2017.
  • He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pp.  770–778, 2016.
  • Henaff et al. (2015) Mikael Henaff, Joan Bruna, and Yann LeCun. Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163, 2015.
  • Jiang et al. (2019) Jiatao Jiang, Zhen Cui, Chunyan Xu, and Jian Yang. Gaussian-induced convolution for graphs. In AAAI, volume 33, pp.  4007–4014, 2019.
  • Kashima et al. (2003) Hisashi Kashima, Koji Tsuda, and Akihiro Inokuchi. Marginalized kernels between labeled graphs. In ICML, pp.  321–328, 2003.
  • Kipf & Welling (2017) Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.
  • Li et al. (2016) Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. ICLR, 2016.
  • Liu et al. (2010) Wei Liu, Junfeng He, and Shih-Fu Chang. Large graph construction for scalable semi-supervised learning. In ICML, 2010.
  • Liu et al. (2012) Wei Liu, Jun Wang, and Shih-Fu Chang. Robust and scalable graph-based semisupervised learning. Proceedings of the IEEE, 100(9):2624–2638, 2012.
  • Luo et al. (2017) Zhiling Luo, Ling Liu, Jianwei Yin, Ying Li, and Zhaohui Wu. Deep learning of graphs with ngram convolutional neural networks. IEEE Transactions on Knowledge and Data Engineering, 29(10):2125–2139, 2017.
  • Monti et al. (2017) Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodola, Jan Svoboda, and Michael M Bronstein. Geometric deep learning on graphs and manifolds using mixture model cnns. In CVPR, pp.  5115–5124, 2017.
  • Morris et al. (2017) Christopher Morris, Kristian Kersting, and Petra Mutzel. Glocalized weisfeiler-lehman graph kernels: Global-local feature maps of graphs. In ICDM, pp.  327–336. IEEE, 2017.
  • Niepert et al. (2016) Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In ICML, pp.  2014–2023, 2016.
  • Orsini et al. (2017) Francesco Orsini, Daniele Baracchi, and Paolo Frasconi. Shift aggregate extract networks. arXiv preprint arXiv:1703.05537, 2017.
  • Page et al. (1999) Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. The pagerank citation ranking: Bringing order to the web. Technical Report 1999-66, 1999.
  • Pan et al. (2018) Shirui Pan, Ruiqi Hu, Guodong Long, Jing Jiang, Lina Yao, and Chengqi Zhang. Adversarially regularized graph autoencoder for graph embedding. In IJCAI, pp.  2609–2615, 2018.
  • Ravi & Larochelle (2017) Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In ICLR, 2017.
  • Sen et al. (2008) Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI magazine, 29(3):93–93, 2008.
  • Shervashidze et al. (2009) Nino Shervashidze, SVN Vishwanathan, Tobias Petri, Kurt Mehlhorn, and Karsten Borgwardt. Efficient graphlet kernels for large graph comparison. In Artificial Intelligence and Statistics, pp.  488–495, 2009.
  • Sung et al. (2017) Flood Sung, Li Zhang, Tao Xiang, Timothy Hospedales, and Yongxin Yang. Learning to learn: Meta-critic networks for sample efficient learning. arXiv preprint arXiv:1706.09529, 2017.
  • Thekumparampil et al. (2018) Kiran K Thekumparampil, Chong Wang, Sewoong Oh, and Li-Jia Li. Attention-based graph neural network for semi-supervised learning. arXiv preprint arXiv:1803.03735, 2018.
  • Velickovic et al. (2018) Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. ICLR, 2018.
  • Xu et al. (2017) Danfei Xu, Yuke Zhu, Christopher B Choy, and Li Fei-Fei. Scene graph generation by iterative message passing. In CVPR, pp.  5410–5419, 2017.
  • Yanardag & Vishwanathan (2015) Pinar Yanardag and SVN Vishwanathan. Deep graph kernels. In SIGKDD, pp.  1365–1374, 2015.
  • Yang et al. (2016) Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. Revisiting semi-supervised learning with graph embeddings. ICML, 2016.
  • Yu et al. (2018) Bing Yu, Haoteng Yin, and Zhanxing Zhu. Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting. In IJCAI, pp.  3634–3640, 2018.
  • Zhang et al. (2020) Tong Zhang, Zhen Cui, Chunyan Xu, Wenming Zheng, and Jian Yang. Variational pathway reasoning for eeg emotion recognition. In AAAI, 2020.
  • Zhao et al. (2019) Wenting Zhao, Zhen Cui, Chunyan Xu, Chengzheng Li, Tong Zhang, and Jian Yang. Hashing graph convolution for node classification. In CIKM, 2019.
  • Zhou et al. (2004) Dengyong Zhou, Olivier Bousquet, Thomas N Lal, Jason Weston, and Bernhard Schölkopf. Learning with local and global consistency. In NeurIPS, pp.  321–328, 2004.
  • Zhu et al. (2003) Xiaojin Zhu, Zoubin Ghahramani, and John D Lafferty. Semi-supervised learning using gaussian fields and harmonic functions. In ICML, pp.  912–919, 2003.
  • Zhuang & Ma (2018) Chenyi Zhuang and Qiang Ma. Dual graph convolutional networks for graph-based semi-supervised classification. In WWW, pp.  499–508, 2018.