
11institutetext: Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, China
11email: {22110240021,23212010010,21210240043}@m.fudan.edu.cn, {yunx,18110240037}@fudan.edu.cn
22institutetext: College of Intelligence and Computing, Tianjin University
22email: [email protected]
33institutetext: School of Informatics, University of Edinburgh
33email: [email protected]
44institutetext: Department of Computer Science, University of Illinois at Chicago
44email: [email protected]

Beyond the Known: Novel Class Discovery for Open-world Graph Learning

Yucheng Jin 11    Yun Xiong 11    Juncheng Fang 11    Xixi Wu 11    Dongxiao He 22    Jia Xing 11    Bingchen Zhao 33    Philip Yu 44
Abstract

Node classification on graphs is of great importance in many applications. Due to limited labeling capabilities and the evolving nature of real-world open scenarios, novel classes can emerge among unlabeled testing nodes. However, little attention has been paid to novel class discovery on graphs. Discovering novel classes is challenging because novel and known class nodes are correlated by edges, which makes their representations indistinguishable when message-passing GNNs are applied. Furthermore, the novel classes lack labeling information to guide the learning process. In this paper, we propose a novel method, the Open-world gRAph neuraL network (ORAL), to tackle these challenges. ORAL first detects correlations between classes through semi-supervised prototypical learning. Inter-class correlations are subsequently eliminated by the prototypical attention network, leading to distinctive representations for different classes. Furthermore, to fully explore multi-scale graph features and alleviate label deficiencies, ORAL generates pseudo-labels by aligning and ensembling label estimations from multiple stacked prototypical attention networks. Extensive experiments on several benchmark datasets show the effectiveness of the proposed method.

Keywords:
Novel class discovery · Graph neural network · Open-world learning.

1 Introduction

Graphs are ubiquitously used to reveal interactions among various entities. One fundamental learning task on graphs is node classification [12, 13, 15, 21, 22], which has many applications, such as predicting the research areas of publications in academic networks [13, 21] and identifying the functions of proteins in biological graphs [10, 22]. Graph neural networks [2, 23], which rely on extensive annotation information for training, have shown superior performance on the node classification task. To further ease the annotation burden, researchers have recently paid much attention to the semi-supervised setting [4, 6, 15, 17], where only a small proportion of the nodes on a graph is labeled.

Figure 1: Illustration of novel class discovery for open-world learning on an academic graph. Nodes represent papers, edges represent citation relationships, and each paper belongs to a certain research field (node class).

However, existing node classification methods [2, 12, 13, 15, 21, 22, 23] assume a closed-world setting [3], where unlabeled testing nodes belong to classes seen on the labeled training nodes. This assumption rarely holds for graphs in the open world, since novel classes can frequently emerge on unlabeled nodes. Take the toy academic graph in Figure 1 as an example, with nodes and edges denoting papers and citation relationships, respectively. In addition to nodes labeled with well-known research topics, there are newly emerging unlabeled papers with unknown themes. The emergence of new research papers can introduce novel research topics. A crucial problem is to learn distinguishable representations for both known and novel classes, and to facilitate novel class discovery along the way. We term this problem Open-world Graph Learning.

While open-world graph learning naturally suits many real-world applications, it poses unique challenges: (1) Novel and known classes are correlated by edges, which makes learning distinguishable representations difficult. For example, papers proposing new research areas cite existing articles in related fields on the academic graph. Consequently, novel and existing research areas are correlated by citation edges. Edges between novel and known classes can make their representations indistinguishable, since typical GNNs [2, 12, 13, 15, 23] tend to learn similar representations for connected nodes through message passing. (2) Only unlabeled nodes of novel classes are provided, so there is no labeling information to guide representation learning and novel class discovery during training. Generating pseudo-labels on novel class nodes is challenging due to the absence of labeled novel class nodes for learning the labeling mechanism. Given known class labeling information and rich features on graphs, how can we obtain novel class pseudo-labels and incorporate the generated label information for open-world graph learning?

In this paper, we propose a novel method, the Open-world gRAph neuraL network (ORAL), to tackle these challenges. ORAL first employs the prototypical attention network to detect and eliminate correlations between novel and known classes. Leveraging the labeling granularity mined from known classes, the prototypical attention network approximates the ground-truth labels with semi-supervised prototypical clustering. Distinctive representations for different classes are then obtained by eliminating message passing over inter-class edges detected from these label estimations. ORAL stacks multiple prototypical attention networks to explore graph features at different scales. To alleviate the lack of novel class labeling information, label estimations from different layers are aligned and ensembled to generate reliable pseudo-labels. The pseudo-labeling information is incorporated into the input graph to guide novel class discovery by recovering intra-class edges and removing inter-class ones. Confidence filtering and perturbation-robust consistency training are further introduced to combat pseudo-labeling noise during learning.

In summary, our contributions are as follows:

  • Our approach constitutes the first attempt to investigate open-world graph neural networks that can automatically discover novel classes on graphs.

  • We propose the prototypical attention network to detect and eliminate inter-class correlations for distinguishing novel classes from others during representation learning.

  • To ease the lack of novel class label information, we generate pseudo-labels according to multi-scale graph features and labeling granularity mined from known classes.

  • We perform extensive experiments and analyze the results, demonstrating the effectiveness of our method.

Figure 2: An overview of the proposed model ORAL.

2 Problem Formulation

Notations. A graph is represented as $G=(\mathcal{V},\mathcal{E},\mathbf{X})$, where $\mathcal{V}=\{v_i\}_{i=1,\ldots,N}$ is the node set, $e_{ij}=(v_i,v_j)\in\mathcal{E}$ is an edge indicating the relationship between two nodes, and $\mathbf{x}_i\in\mathbf{X}$ denotes the content features associated with node $v_i$.

2.0.1 Open-world Graph Learning.

The node set $\mathcal{V}$ is composed of labeled training nodes $\mathcal{V}_L$ and unlabeled testing nodes $\mathcal{V}_U$, which belong to the class sets $\mathcal{Y}^L$ and $\mathcal{Y}^U$, respectively. In open-world scenarios, novel classes can emerge on the unlabeled nodes, i.e., $\mathcal{Y}^L \subset \mathcal{Y}^U$. The aim of open-world graph learning is to discover novel classes automatically and classify the unlabeled nodes into known classes ($\mathcal{Y}^L$) or novel classes ($\mathcal{Y}^U \setminus \mathcal{Y}^L$).

3 Methodology

Overview. The framework of our proposed method ORAL is shown in Figure 2. The prototypical attention network (PAN) is first employed to eliminate inter-class correlations. Given representative nodes (prototypes) extracted by prototype learning, PAN performs semi-supervised clustering on the relatively small prototype graph, efficiently obtaining group assignments of prototypes and nodes. Message passing between groups is eliminated by the group-aware attention mechanism to generate separable representations. Furthermore, ORAL eases the lack of labeling information with the pseudo-label generator, which aligns and ensembles multiple group assignment estimations in the last stage, facilitating open-world learning through structure refinement.

3.1 Prototypical Attention Network

The prototypical attention network clusters nodes into groups, whose clustering results approximate the ground-truth labels, and eliminates correlations between classes by reducing message passing between groups. However, since the number of novel classes is unknown in the open world, we have to repeatedly perform clustering on graphs to search for the optimal clustering granularity by fitting the labeling information, which can be time-consuming. To ease the computational burden, we extract representative nodes through prototype learning and conduct semi-supervised clustering on the small prototype graph instead of the large whole graph. The node clustering result is approximated from that of the prototypes and is used to eliminate message passing through the group-aware attention mechanism.

Representative nodes are modeled as a set of trainable prototypes $\mathcal{C}=\{\mathbf{c}_1,\mathbf{c}_2,\cdots,\mathbf{c}_{N_{pro}}\}$ in the representation space, where $N_{pro}$ denotes the number of prototypes. Given a node with representation $\mathbf{h}_i$, initialized as the node feature $\mathbf{x}_i$, we compute the representativeness score of each prototype $\mathbf{c}_j$ for the node as follows,

r_{ij}=\frac{\exp(\mathbf{h}_i^{T}\mathbf{c}_j)}{\sum_{k=1}^{N_{pro}}\exp(\mathbf{h}_i^{T}\mathbf{c}_k)}. \qquad (1)

To ensure each prototype is representative of a set of nodes, we regularize the marginal distribution of representativeness scores to be close to a balanced prior,

\mathcal{L}_{reg}=\mathrm{KL}\Big[\Big(\frac{1}{N_{pro}}\cdot\mathbf{1}\Big)\,\Big\|\,\Big(\frac{1}{|\mathcal{V}|}\sum_{i=1}^{|\mathcal{V}|}\mathbf{r}_i\Big)\Big], \qquad (2)

where $\mathbf{r}_i=[r_{i1},\ldots,r_{iN_{pro}}]\in\mathbb{R}^{N_{pro}}$, $\mathbf{1}\in\mathbb{R}^{N_{pro}}$ is the all-ones vector, and $\mathrm{KL}$ denotes the Kullback-Leibler divergence. Each node is regarded as an associated instance of the prototypes with the top-$k$ largest representativeness scores. The similarity between prototypes, $\mathbf{s}\in\mathbb{R}^{N_{pro}\times N_{pro}}$, is calculated as the Jaccard index of their associated node sets, i.e., $s_{ij}=\mathrm{Jaccard}(\Gamma_i,\Gamma_j)$, where $\Gamma_i$ denotes the associated node set of the $i$-th prototype.
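To make this construction concrete, the following minimal PyTorch sketch computes the representativeness scores of Eq. (1), the balance regularizer of Eq. (2), and the Jaccard-based prototype graph; the function names, tensor interfaces, and the default top-$k$ value are our own illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def prototype_scores(h, c):
    """Eq. (1): softmax-normalized representativeness of each prototype.
    h: (N, d) node representations; c: (N_pro, d) trainable prototypes."""
    return F.softmax(h @ c.T, dim=1)                     # (N, N_pro)

def balance_regularizer(r):
    """Eq. (2): KL between a uniform prior and the marginal score distribution."""
    marginal = r.mean(dim=0)                             # (1/|V|) * sum_i r_i
    prior = torch.full_like(marginal, 1.0 / r.shape[1])  # (1/N_pro) * 1
    return F.kl_div(marginal.log(), prior, reduction="sum")

def prototype_graph(r, k=5):
    """Jaccard similarity s_ij between the top-k associated node sets."""
    assoc = torch.zeros_like(r, dtype=torch.bool)
    assoc.scatter_(1, r.topk(k, dim=1).indices, True)    # node-prototype association
    inter = assoc.T.float() @ assoc.float()              # |Γ_i ∩ Γ_j|
    sizes = assoc.sum(dim=0).float()
    union = sizes[:, None] + sizes[None, :] - inter      # |Γ_i ∪ Γ_j|
    return inter / union.clamp(min=1)                    # (N_pro, N_pro)
```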

3.1.1 Semi-supervised Prototypical Clustering.

Applying a clustering algorithm with polynomial time complexity to the small prototype graph $(\mathcal{C},\mathcal{E}^p)$, where $\mathcal{E}^p=\{(\mathbf{c}_i,\mathbf{c}_j,s_{ij})\}$, instead of the whole graph substantially eases the computational burden. Given the groups $\mathcal{C}^g=\{\mathcal{C}_1^g,\cdots,\mathcal{C}_{N_{clu}}^g\}$ discovered by clustering the prototypes, the assignment probability of a node $v_i$ to a group $\mathcal{C}_k^g$ can be approximated from its relationship scores with the prototypes,

p_{ik}=\sum_{\mathbf{c}_j\in\mathcal{C}^g_k} r_{ij}. \qquad (3)

Although prototype-based clustering improves efficiency significantly, the labeling granularity, which determines the number of classes (clusters), is still unknown. Overly fine-grained clustering splits nodes of the same class into different clusters, while overly coarse-grained clustering merges nodes of different classes into the same cluster; both cases reduce clustering accuracy. Therefore, we search for the optimal clustering granularity by maximizing the accuracy on the labeled known-class nodes. The clustering accuracy is calculated as,

\mathrm{ACC}=\max_{f\in\mathcal{P}(\mathcal{Y}^L)}\sum_{v_i\in\mathcal{V}_L}\frac{1}{|\mathcal{V}_L|}\mathds{1}(\hat{y}_i=f(y_i)), \qquad \hat{y}_i=\mathop{\arg\max}_k\, p_{ik}, \qquad (4)

where $\mathds{1}$ is the indicator function and $\hat{y}_i$ is the predicted group ID. Considering the potential misalignment between class and group IDs, we find the alignment function from the set of all permutations $\mathcal{P}(\mathcal{Y}^L)$ by solving the maximization problem with the Hungarian algorithm [16]. Groups that do not match any known class are regarded as discovered novel classes.
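A short sketch of Eq. (4), assuming SciPy is available; the helper name and array interface are ours, and the same routine also serves the test-time metric in Eq. (14).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Eq. (4): accuracy under the best permutation of group IDs.
    y_true, y_pred: integer numpy arrays over the labeled nodes."""
    n = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((n, n), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[t, p] += 1                        # co-occurrence counts
    rows, cols = linear_sum_assignment(-cost)  # Hungarian: maximize matches
    acc = cost[rows, cols].sum() / len(y_true)
    return acc, dict(zip(cols, rows))          # group ID -> class ID
```

In practice, ORAL sweeps candidate clustering granularities, keeps the one maximizing this accuracy, and treats the groups left unmatched by the returned mapping as discovered novel classes.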

3.1.2 Group-aware Attention Mechanism.

Besides the local attribute information on nodes, neighborhood information on graphs has also been shown to be important for node classification [2, 12, 13, 15, 23]. However, due to edges between known class nodes and novel class ones, aggregating all neighborhood information with existing graph convolution layers can make different classes indistinguishable in the representation space. Although graph attention networks can assign different weights to different neighbors, the learned weights are not guaranteed to encode knowledge about how to distinguish each novel class from the others.

To learn compact representations for novel class discovery, we propose the group-aware attention mechanism. Group assignments predicted by semi-supervised prototypical learning approximate the ground-truth labels and can thus be utilized to eliminate correlations between novel and known classes during neighborhood aggregation. The group-aware attention mechanism calculates attention scores for neighbors according to the group assignment similarities $e_{ik}=\cos(\mathbf{p}_i,\mathbf{p}_k)$, where $\mathbf{p}_i=[p_{i1},\ldots,p_{iN_{clu}}]$,

\alpha_{ij}=\mathrm{softmax}(e_{ij})=\frac{\exp(e_{ij})}{\sum_{k\in N(i)}\exp(e_{ik})}. \qquad (5)

The low group assignment similarities between novel and known class nodes lead to low attention weights, while the attention weights between nodes from the same class are high. Consequently, aggregating information according to the group assignments can eliminate correlations between classes due to the low attention weights on inter-class edges, obtaining compact representations,

\mathbf{h}_i^{*}=\sigma\Big(\sum_{v_j\in N(i)\cup\{v_i\}}\alpha_{ij}\mathbf{W}\mathbf{h}_j\Big), \qquad (6)

where $\sigma$ is the ReLU activation function. By stacking multiple prototypical attention networks, ORAL further predicts over the representations $\mathbf{h}_i^*$ obtained after selectively aggregating neighborhood information. Consequently, we obtain a list of predictions and node representations, $\{(\mathbf{p}^{(1)},\mathbf{h}^{(1)}),\ldots,(\mathbf{p}^{(L)},\mathbf{h}^{(L)})\}$, computed at different scales using different hops of neighborhood information, where $L$ is the number of layers and $(\mathbf{p}^{(l)},\mathbf{h}^{(l)})$ are the group assignments and representations generated by the $l$-th prototypical attention network. The prototypical attention networks are trained layer by layer with the cross-entropy loss,

\mathcal{L}_{ce}=-\sum_{v_i\in\mathcal{V}}\sum_{l=1}^{L}\log p^{(l)}_{iy_i}, \qquad (7)

where $y_i$ is the label of node $v_i$ and $p^{(l)}_{iy_i}$ is the predicted probability that node $i$ belongs to the $y_i$-th group (class) in the $l$-th layer.
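The following dense-adjacency sketch illustrates Eqs. (5)-(6); a practical implementation would use sparse message passing, and the interface is an assumption on our part rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def group_aware_attention(h, p, adj, W):
    """Eqs. (5)-(6) with a dense adjacency.
    h: (N, d) inputs; p: (N, N_clu) group assignments;
    adj: (N, N) 0/1 adjacency including self-loops; W: (d, d_out) weights."""
    p_norm = F.normalize(p, dim=1)
    e = p_norm @ p_norm.T                        # e_ik = cos(p_i, p_k)
    e = e.masked_fill(adj == 0, float("-inf"))   # restrict attention to neighbors
    alpha = F.softmax(e, dim=1)                  # Eq. (5)
    return F.relu(alpha @ (h @ W))               # Eq. (6), sigma = ReLU
```

Because inter-class edges receive near-zero attention, the aggregation behaves as if those edges were absent, which is exactly the decorrelation effect described above.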

3.2 Pseudo-label Guided Open-world Learning

To mitigate the lack of novel class labeling information, we generate reliable pseudo-labels to guide the open-world graph learning. The generation process of pseudo-labels is shown in the pseudo-label generator part, while the utilization of pseudo-labels is detailed in the pseudo-label enhanced structure refinement part.

3.2.1 Pseudo-label Generator.

ORAL introduces the pseudo-label generator to mitigate the lack of novel class labeling information. To fully leverage the rich information captured at different scales, ORAL ensembles the prediction results estimated by multiple stacked prototypical attention networks. However, ensembling multiple group assignment results is non-trivial: due to the unsupervised nature of novel class discovery, groups from different layers are not aligned, and even the number of groups varies from layer to layer.

We overcome these challenges through the proposed padding, aligning, and suppressing operations. Given the set of predictions $\{\mathbf{p}^{(1)},\ldots,\mathbf{p}^{(L)}\}$, we first pad them with zero vectors so that they have the same number of prototype groups. We then align and ensemble predictions from different layers using the Hungarian algorithm [16],

\hat{p}_{ij}=\frac{1}{L}\sum_{k=1}^{L} p^{(k)}_{ij_k}, \qquad (8)

where $j_k$ denotes the prototype group ID in the $k$-th layer that corresponds to the $j$-th group in the initial layer, and $\hat{p}_{ij}$ is the probability of assigning node $i$ to group $j$ obtained from ensemble inference. Groups that only exist in a certain layer or are associated with almost no nodes can be regarded as detection noise. We suppress neurons responding to these noisy groups by masking the prediction results,

\hat{\mathbf{p}}_i=\hat{\mathbf{p}}_i\odot\mathbf{m}, \qquad (9)

where the mask $\mathbf{m}$ is obtained by comparing the group popularity with a pre-defined threshold $\eta$, i.e., $m_j=\mathds{1}(\frac{1}{|\mathcal{V}|}\sum_i \hat{p}_{ij}>\eta)$, and $\odot$ denotes the element-wise product. For each known class and each discovered novel class, we select reliable pseudo-labels from the confident node subset, i.e., the top $\gamma$ percent of unlabeled nodes with the highest prediction probability (confidence) of belonging to that label.
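A NumPy sketch of the padding, aligning, and suppressing operations (Eqs. (8)-(9)) plus the confidence filter; aligning groups by maximizing co-assignment mass with the Hungarian algorithm follows the text, while the default threshold values are our assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ensemble_predictions(preds, eta=0.01):
    """Eqs. (8)-(9): zero-pad to a common width, align each layer's groups
    to layer 1, average, then mask rare (noisy) groups.
    preds: list of (N, K_l) group-assignment matrices, one per layer."""
    K = max(p.shape[1] for p in preds)
    padded = [np.pad(p, ((0, 0), (0, K - p.shape[1]))) for p in preds]
    ref, aligned = padded[0], [padded[0]]
    for p in padded[1:]:
        rows, cols = linear_sum_assignment(-(ref.T @ p))  # Hungarian alignment
        perm = np.empty(K, dtype=int)
        perm[rows] = cols
        aligned.append(p[:, perm])            # reorder columns to match layer 1
    p_hat = np.mean(aligned, axis=0)          # Eq. (8)
    mask = p_hat.mean(axis=0) > eta           # group popularity threshold
    return p_hat * mask                       # Eq. (9)

def confident_pseudo_labels(p_hat, labeled_mask, gamma=0.3):
    """Per group, keep the top-gamma fraction of unlabeled nodes by confidence."""
    conf, group = p_hat.max(axis=1), p_hat.argmax(axis=1)
    pseudo = np.full(len(p_hat), -1)          # -1 = no reliable pseudo-label
    for g in np.unique(group):
        idx = np.where((group == g) & ~labeled_mask)[0]
        keep = idx[np.argsort(-conf[idx])][: int(gamma * len(idx))]
        pseudo[keep] = g
    return pseudo
```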

Data: Graph $G=(\mathcal{V},\mathcal{E},\mathbf{X})$, labeled training nodes $(\mathcal{V}_L,\mathcal{Y}^L)$
Result: Predictions for the unlabeled nodes $\mathcal{V}_U$, where novel classes can emerge.
1 Randomly initialize model parameters;
2 repeat
3   for $l=1$ to $L$ do
4     Calculate group assignments $\mathbf{p}^{(l)}$ according to Eq. (3);
5     Calculate node embeddings $\mathbf{h}^{(l)}$ according to Eq. (6);
6   Update model parameters according to the loss defined in Eq. (13);
7   Update the graph structure according to Eq. (11);
8 until convergence or the iteration number reaches the threshold;
9 Predict test node labels by ensemble inference in Eq. (8);
Algorithm 1 ORAL

3.2.2 Pseudo-label Enhanced Structure Refinement.

Reliable pseudo-label information is incorporated into the input graph to guide open-world learning. Specifically, we recover intra-class edges to increase similarities between nodes with the same pseudo-label, while eliminating inter-class edges to decrease inter-class similarities. Refining the structure according to the pseudo-labels makes it easier to distinguish novel classes from others.

Recovering intra-class edges according to pseudo-labels introduces information from nodes unconnected in the original graph. However, excessive edge recovery can result in over-smoothing. We select the top $\mu$ percent of intra-class node pairs with the lowest similarities as recovered edges $\mathcal{E}_+$, where the similarity is calculated from their relationship scores to the prototypes in Eq. (1),

s^{n}_{ij}=\frac{1}{L}\sum_{l=1}^{L}\cos(\mathbf{r}^{(l)}_i,\mathbf{r}^{(l)}_j), \qquad (10)

where $\mathbf{r}^{(l)}_i$ is the relationship between node $i$ and the prototypes in the $l$-th layer. Consequently, edges between intra-class nodes associated with different prototypes are recovered, which increases similarities between prototypes of the same class and facilitates more accurate semi-supervised prototypical clustering for novel class discovery.

ORAL further eliminates inter-class edges $\mathcal{E}_-$ according to the pseudo-labels. Since ensemble pseudo-labeling incorporates knowledge from different layers, feeding the refined structure into each prototypical attention layer transfers knowledge across layers to eliminate unwanted correlations between classes. The final refined structure can be formulated as,

\hat{\mathcal{E}}=(\mathcal{E}\setminus\mathcal{E}_-)\cup\mathcal{E}_+. \qquad (11)
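An illustrative sketch of the structure refinement (Eqs. (10)-(11)), assuming pseudo-labels with -1 marking nodes without a reliable label; the brute-force pair enumeration and the treatment of $\mu$ as a fraction follow our reading of the text rather than a reference implementation.

```python
import itertools
import numpy as np

def refine_structure(edges, pseudo, sim, mu=0.015):
    """Eq. (11): drop inter-class edges, recover low-similarity intra-class pairs.
    edges: set of canonical (i, j) tuples with i < j;
    pseudo: (N,) pseudo-labels, -1 marking nodes without a reliable label;
    sim: (N, N) layer-averaged prototype-relation similarity s^n (Eq. (10))."""
    # Remove edges whose endpoints carry different confident pseudo-labels (E_-).
    kept = {(i, j) for (i, j) in edges
            if pseudo[i] == -1 or pseudo[j] == -1 or pseudo[i] == pseudo[j]}
    # Unconnected intra-class pairs, ranked by ascending similarity; the
    # lowest-similarity pairs bridge prototypes of the same class (E_+).
    cands = sorted((sim[i, j], i, j)
                   for i, j in itertools.combinations(range(len(pseudo)), 2)
                   if pseudo[i] != -1 and pseudo[i] == pseudo[j]
                   and (i, j) not in edges)
    recovered = {(i, j) for _, i, j in cands[: int(mu * len(cands))]}
    return kept | recovered
```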

To make ORAL robust to the noise introduced during structure refinement, we introduce a perturbation-consistent training loss, which augments the structure information with the adaptive augmentation technique [36] and regularizes prediction consistency between the two views,

\mathcal{L}_{con}=\sum_{v_i\in\mathcal{V}}\sum_{l=1}^{L}\mathrm{KL}(\mathbf{p}^{(l)}_i\,\|\,\bar{\mathbf{p}}^{(l)}_i), \qquad (12)

where $\bar{\mathbf{p}}^{(l)}_i$ is the prediction over the augmented graph. The overall training loss is calculated as,

\mathcal{L}=\mathcal{L}_{ce}+\mathcal{L}_{reg}+\mathcal{L}_{con}. \qquad (13)

The final predictions on unlabeled nodes are obtained by the ensemble inference in Eq. (8). The training process is summarized in Algorithm 1.
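A sketch of the consistency objective (Eq. (12)) and the total loss (Eq. (13)); the per-layer prediction lists mirror the notation above, and the KL direction matches Eq. (12). The clamping constant is our numerical-safety assumption.

```python
import torch
import torch.nn.functional as F

def consistency_loss(preds, preds_aug, eps=1e-8):
    """Eq. (12): KL(p || p_bar) summed over layers, where p_bar comes from
    the augmented view. preds, preds_aug: lists of (N, K_l) assignment matrices."""
    loss = 0.0
    for p, p_bar in zip(preds, preds_aug):
        # F.kl_div(log q, p) computes KL(p || q), so q is the augmented view.
        loss = loss + F.kl_div(p_bar.clamp_min(eps).log(), p,
                               reduction="batchmean")
    return loss

def total_loss(l_ce, l_reg, l_con):
    """Eq. (13): unweighted sum of the three objectives."""
    return l_ce + l_reg + l_con
```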

4 Experiments

4.1 Experiment settings

Table 1: Statistics of the experiment datasets.

Dataset     | # Nodes | # Edges | # Attributes | # Classes | # Known Classes
Cora        |  2708   |   5429  |     1433     |     7     |        5
AmazonPhoto |  7650   | 238162  |      745     |     8     |        6
BlogCatalog |  5196   | 343486  |     8189     |     6     |        4

4.1.1 Datasets.

We test our method on three commonly used datasets: (1) Cora [32] is a citation network where classes are the subjects of scientific publications. (2) AmazonPhoto [25] is a co-purchase network with categories of goods as node labels. (3) BlogCatalog [19] is a blog network where node labels are the topics of blogs. The dataset statistics are shown in Table 1. For each dataset, we sample 80 percent of the classes as known classes, for which we have labels during training; the remaining ones are novel classes that emerge in the test set. We further sub-sample 70 percent and 15 percent of the known-class nodes to constitute the labeled training set and the validation set. The remaining known-class nodes, along with all nodes from the novel classes, constitute the unlabeled test node set $\mathcal{V}_U$.
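For reproducibility, the following is a sketch of this split protocol under our own assumptions (uniform random class sampling, a fixed seed, and a hypothetical helper name):

```python
import numpy as np

def open_world_split(labels, known_frac=0.8, train_frac=0.7, val_frac=0.15, seed=0):
    """Split node indices: 80% of classes are known; known-class nodes are
    divided 70/15/15 into train/val/test; all novel-class nodes go to test."""
    rng = np.random.default_rng(seed)
    classes = rng.permutation(np.unique(labels))
    known = classes[: int(known_frac * len(classes))]
    test = list(np.where(~np.isin(labels, known))[0])   # novel classes: unlabeled
    train, val = [], []
    for c in known:
        idx = rng.permutation(np.where(labels == c)[0])
        n_tr, n_va = int(train_frac * len(idx)), int(val_frac * len(idx))
        train += idx[:n_tr].tolist()
        val += idx[n_tr:n_tr + n_va].tolist()
        test += idx[n_tr + n_va:].tolist()              # remaining known-class nodes
    return train, val, test
```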

4.1.2 Evaluation Metrics.

Following the novel class discovery literature [18, 27] in the CV domain, we conduct experiments both in scenarios where the number of classes is known and in scenarios where it is unknown. Model performance is measured with clustering accuracy, since the label IDs of the ground-truth labels and the predictions may not be aligned. Similar to Eq. (4), the clustering accuracy on test nodes is calculated as,

\mathrm{ACC}=\max_{f\in\mathcal{P}(\mathcal{Y}^U)}\sum_{v_i\in\mathcal{V}_U}\frac{1}{|\mathcal{V}_U|}\mathds{1}(\hat{y}_i=f(y_i)). \qquad (14)

Besides this main metric, which measures classification performance over all testing nodes, we also report values for the known-class subset (nodes in $\mathcal{V}_U$ belonging to classes in $\mathcal{Y}^L$) and the novel-class subset (nodes in $\mathcal{V}_U$ belonging to classes in $\mathcal{Y}^U \setminus \mathcal{Y}^L$).

Table 2: Performance comparison of all methods; results are averaged over ten runs. "Prior" refers to knowing the number of classes.

                      |       Without Prior          |          With Prior
Dataset      Subset   |  GCN    GCD    OCD    ORAL   |  VGAE   GCD    OCD    ORAL
Cora         All      | 39.29  50.67  53.28  69.91   | 50.60  48.08  51.03  71.39
             Known    | 67.31  54.31  56.76  73.60   | 49.48  51.69  50.56  76.97
             Novel    |  0.00  45.50  48.41  64.72   | 52.22  43.03  51.67  65.71
BlogCatalog  All      | 26.04  37.79  42.83  70.23   | 38.18  37.91  42.73  73.76
             Known    | 70.28  26.00  34.55  87.39   | 25.03  27.13  34.84  88.46
             Novel    |  0.00  44.73  47.70  60.14   | 45.92  44.25  47.37  65.11
AmazonPhoto  All      | 56.69  70.66  67.98  84.09   | 66.35  70.52  67.41  88.05
             Known    | 93.44  71.98  68.58  92.71   | 65.89  71.98  66.75  95.36
             Novel    |  0.00  68.44  67.04  70.79   | 67.06  68.27  68.43  76.77
Average      All      | 40.67  53.04  54.69  74.74   | 51.71  52.17  53.73  77.73
             Known    | 77.01  50.76  53.29  84.57   | 46.76  50.23  50.72  86.93
             Novel    |  0.00  52.89  54.38  65.21   | 54.96  51.85  55.82  69.20

4.1.3 Baselines.

We choose the following methods as baselines, which fall into three families. (1) Semi-supervised Approach. GCN [15] is a popular graph convolutional network for semi-supervised node classification; it is unable to discover novel classes and mistakenly classifies novel class nodes into known classes. (2) Unsupervised Approach. VGAE [14] is an unsupervised method for learning graph embeddings. We extend it to open-world graph learning by combining VGAE with a non-parametric k-means classifier, where the number of classes is assumed to be known a priori. (3) Open-world Semi-supervised Approaches. GCD [27] is the most popular method for discovering novel categories in the CV domain; it estimates the number of novel classes by maximizing clustering accuracy on labeled instances. OpenNCD [18] is the state-of-the-art method for open-world semi-supervised learning. To address the performance drop caused by the non-parametric classifier in GCD, OpenNCD replaces the k-means classifier with learnable prototypes and trains the network with a novel progressive bi-level contrastive learning method. We abbreviate OpenNCD as OCD in this article. We extend these methods to open-world graph learning by replacing their encoding modules with graph convolution layers [15]. When the number of novel classes is known a priori, both the open-world semi-supervised baselines and ORAL search for the clustering granularity whose number of clusters (classes) matches the prior knowledge.

Table 3: Experiment results of class number estimation, where PredNum and MAE denote the estimated class number and the mean absolute estimation error.

Method  |    Cora         |  AmazonPhoto    |  BlogCatalog    | Average
        | PredNum   MAE   | PredNum   MAE   | PredNum   MAE   |   MAE
GCD     |   6.0     1.0   |   7.1     0.9   |   6.3     0.3   |   0.7
OpenNCD |   7.4     0.4   |   8.8     0.8   |   5.6     0.4   |   0.5
ORAL    |   7.7     0.7   |   8.0     0.0   |   5.7     0.3   |   0.3
Table 4: Ablation study, where PAN stands for the prototypical attention network and POL stands for pseudo-label guided open-world learning.

  Modules  |        Without Prior          |           With Prior
PAN   POL  |   Cora      | BlogCatalog     |   Cora      | BlogCatalog
           | All   Novel | All    Novel    | All   Novel | All    Novel
 ✗     ✗   | 53.28 48.41 | 42.83  47.70    | 51.03 51.67 | 42.73  47.27
 ✓     ✗   | 65.97 55.27 | 70.44  60.38    | 64.53 49.10 | 73.10  63.80
 ✓     ✓   | 69.91 64.72 | 70.23  60.14    | 71.39 65.71 | 73.76  65.11

4.2 Main results

Table 2 presents the experimental results of the baselines and our method. From the results, we make the following observations: (1) Our proposed method consistently shows strong performance on both known and novel classes across all datasets. The improvements can be attributed to the fact that ORAL distinguishes novel classes from others by utilizing node similarities discovered by semi-supervised prototypical clustering, and generates reliable pseudo-labels to mitigate the lack of novel class labeling information. (2) Although the existing semi-supervised method GCN performs well on known classes, it is unable to discover novel classes and mistakenly classifies novel class nodes into known classes; consequently, its accuracy on novel classes is zero. (3) Given the number of classes, the ground-truth labels can be approximated by combining clustering with the unsupervised representation learning method VGAE, which results in low performance on known classes due to the lack of supervision information. (4) With the help of labeled known class nodes, open-world semi-supervised learning methods can discover novel classes automatically without knowing the number of classes in advance. ORAL achieves the lowest mean absolute error in class number estimation, as shown in Table 3. (5) The performance of the open-world semi-supervised baselines is still unsatisfactory, since they are unable to eliminate correlations between classes on graphs. We further compare the performance of baselines equipped with different graph convolution methods in Figure 3. Although the graph attention network [28] can assign learnable scores to edges, it still lacks the knowledge to distinguish novel classes from known ones. (6) By guiding the models to search for the proper labeling granularity during training, knowing the number of novel classes improves the novel class accuracy of OpenNCD and ORAL by 1.5% and 4.0%, respectively. In contrast, GCD simply incorporates this prior into the clustering step during testing, which cannot guide the learning process or improve performance [29]. (7) Due to the lack of novel class labeling information, the open-world semi-supervised baselines generally perform better on known classes than on novel ones, except on the BlogCatalog dataset. Baseline methods tend to cluster different classes into the same group on BlogCatalog due to the strong inter-class correlations. To achieve the best overall performance, the clustering accuracy is biased towards classes with more testing nodes, which constitute the novel classes in BlogCatalog.

Figure 3: Performance of baselines with different graph convolution networks. Our ORAL consistently outperforms all variants of existing open-world baseline methods, showing its superiority.
Figure 4: Performance of different prototypical attention network layers.
Figure 5: Impact of key hyper-parameters on the performance of ORAL on the BlogCatalog dataset when the class number is known.

4.3 Ablation Study

In this section, we investigate the contributions of the two main components of our model through ablation experiments. Specifically, we ablate the prototypical attention network (PAN) by replacing it with a graph convolution layer [15]. The pseudo-label guided open-world learning (POL) module is ablated by directly using the model trained on the original graph for prediction, removing the incorporation of pseudo-labeling information. The results of these variants are shown in Table 4, and we make the following observations: (1) The improvement from the PAN module is more significant on BlogCatalog, which has more inter-class edges (60%) than Cora (19%). This observation shows the effectiveness of PAN in eliminating inter-class correlations on graphs. (2) Adding the POL module further improves accuracy on novel classes, indicating the effectiveness of POL in mitigating the lack of novel class labeling information.

We also conduct experiments to show the effectiveness of the proposed ensemble inference technique in Eq. (8). The performance of the individual prototypical attention network layers is shown in Figure 4, and we make the following observations: (1) The layer that achieves the best performance varies across datasets and classes. For instance, the second layer performs best on known-class classification in the AmazonPhoto dataset, whereas it is surpassed by the third layer on novel classes in the Cora dataset. (2) Ensemble inference consistently achieves strong performance over different classes, even surpassing every individual layer. Moreover, it alleviates the challenge of manually searching for the optimal layer for prediction, particularly when labeled validation data for novel classes is unavailable.

4.4 Impact of Hyper-parameter Settings

Next, we conduct experiments to investigate the impact of key hyper-parameters on the performance of ORAL:

Impact of prototype number $N_{pro}$. Prototypes are representative nodes on graphs, modeled as trainable vectors. We vary $N_{pro}$ over {20, 40, 60, 80} and show the results in Figure 5. Performance improves as $N_{pro}$ increases, peaks at 40, and then decreases as $N_{pro}$ grows larger. Too few prototypes cannot represent the nodes on the graph, while setting $N_{pro}$ too high leads to over-fitting; both result in performance drops.

Impact of reliable pseudo-label ratio $\gamma$. The pseudo-label generator selects, for each label, the labeling information on the top $\gamma$ fraction of unlabeled nodes with the highest prediction probability (confidence) as reliable pseudo-labels. We vary $\gamma$ over {0.2, 0.3, 0.4, 0.5} and plot the results. At first, performance increases with $\gamma$, indicating the effectiveness of utilizing pseudo-labels to guide open-world learning. Performance declines when $\gamma$ exceeds 0.3, which may be caused by noise in pseudo-labels with low prediction confidence.

Impact of recovered edge ratio $\mu$. ORAL selects the top $\mu$ fraction of intra-class node pairs with the lowest similarities as recovered edges to guide open-world learning. We vary $\mu$ over {0.01, 0.015, 0.02, 0.025}. ORAL achieves the best performance when $\mu$ is 0.015: recovering too many edges leads to over-smoothing, while recovering too few edges does not incorporate enough information into the input graph to guide open-world learning.

5 Related work

In this section, we briefly introduce the relevant research lines of our work, namely graph neural networks and novel class discovery.

Graph Neural Networks. Graph Neural Networks (GNNs) are designed to apply deep learning architectures to graph-structured data [8, 24, 30], and node classification [2, 12, 13, 15, 21, 22, 23] is one of their most important downstream tasks. Early methods mainly focus on classifying nodes within a fixed set of classes with abundant labels [2, 23]. To ease the annotation burden, researchers have paid much attention to the semi-supervised setting [12, 15] and the few-shot setting [5, 20]. However, these methods are unable to automatically discover novel classes that emerge in unlabeled test data, where a method is required to estimate the number of novel classes and classify the unlabeled nodes into either known or novel classes.

Novel Class Discovery. To handle unknown novel classes during testing, early works [7, 26, 31, 33] propose out-of-distribution detection methods that reject samples not belonging to any known class based on the statistics of prediction logits. However, these methods cannot further discover the novel classes among the rejected samples. Under the assumption that the number of novel classes is known, [1, 9, 11, 35] propose to discover novel classes by clustering unlabeled data in the representation space with the help of labeled known classes. [18, 27, 34] further propose to automatically estimate the number of novel classes by utilizing the labeling granularity provided by known classes. Nonetheless, all these methods face challenges in distinguishing novel classes from others on graphs due to the unwanted correlations between classes caused by edge connections.

6 Conclusion

In this paper, we propose ORAL to discover novel classes for open-world graph learning. As far as we know, this is the first work to address this challenging yet practical problem. ORAL first employs the prototypical attention network to distinguish novel classes from others during representation learning. ORAL further generates pseudo-labels through clustering ensembles to exploit unlabeled data and mitigate the lack of novel class labeling information. Extensive experimental results on three public datasets show the superiority of our proposed method. In the future, we would like to extend our method to more challenging scenarios, such as discovering novel classes on class-imbalanced graphs.

References

  • [1] Brbić, M., Zitnik, M., Wang, S., et al.: Mars: discovering novel cell types across heterogeneous single-cell experiments. Nature Methods 17, 1200–1206 (2020)
  • [2] Bruna, J., Zaremba, W., Szlam, A., et al.: Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203 (2013)
  • [3] Cao, K., Brbic, M., Leskovec, J.: Open-world semi-supervised learning. In: Proceedings of ICLR (2022)
  • [4] Chen, D., Jacob, L., Mairal, J.: Convolutional kernel networks for graph-structured data. In: Proceedings of ICML (2020)
  • [5] Ding, K., Wang, J., Li, J., et al.: Graph prototypical networks for few-shot learning on attributed networks. In: Proceedings CIKM (2020)
  • [6] Gao, X., Hu, W., Guo, Z.: Exploring structure-adaptive graph learning for robust semi-supervised classification. In: Proceedings of ICME (2020)
  • [7] Geng, C., Huang, S.J., Chen, S.: Recent advances in open set recognition: A survey. In: TPAMI (2020)
  • [8] Gori, M., Monfardini, G., Scarselli, F.: A new model for learning in graph domains. In: Proceedings of IJCNN. IEEE (2005)
  • [9] Han, K., Vedaldi, A., Zisserman, A.: Learning to discover novel visual categories via deep transfer clustering. In: Proceedings of ICCV (2019)
  • [10] Hetzel, L., Fischer, D.S., Günnemann, S., et al.: Graph representation learning for single-cell biology. Current Opinion in Systems Biology (2021)
  • [11] Hsu, Y.C., Lv, Z., Kira, Z.: Learning to cluster in order to transfer across domains and tasks. In: Proceedings of ICLR (2018)
  • [12] Huang, W., Zhang, T., Rong, Y., et al.: Adaptive sampling towards fast graph representation learning. In: Proceedings of NeurIPS (2018)
  • [13] Izadi, M.R., Fang, Y., Stevenson, R.L., et al.: Optimization of graph neural networks with natural gradient descent. In: Proceedings of IEEE Big Data (2020)
  • [14] Kipf, T.N., Welling, M.: Variational graph auto-encoders. In: Proceedings of NeurIPS (2016)
  • [15] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. In: Proceedings of ICLR (2017)
  • [16] Kuhn, H.W.: The Hungarian method for the assignment problem. In: NRL (1955)
  • [17] Liao, R., Brockschmidt, M., Tarlow, D., et al.: Graph partition neural networks for semi-supervised classification. arXiv preprint arXiv:1803.06272 (2018)
  • [18] Liu, J., Wang, Y., Zhang, T., et al.: Open-world semi-supervised novel class discovery. In: Proceedings of IJCAI (2023)
  • [19] Liu, Y., Li, Z., Pan, S., et al.: Anomaly detection on attributed networks via contrastive self-supervised learning. In: IEEE TNNLS (2021)
  • [20] Lu, B., Gan, X., Yang, L., et al.: Geometer: Graph few-shot class-incremental learning via prototype representation. In: Proceedings of SIGKDD (2022)
  • [21] Luan, S., Hua, C., Lu, Q., et al.: Is heterophily a real nightmare for graph neural networks on performing node classification? arXiv preprint arXiv:2109.05641 (2021)
  • [22] Muzio, G., O’Bray, L., Borgwardt, K.: Biological network analysis with deep learning. In: Briefings in bioinformatics (2021)
  • [23] Niepert, M., Ahmed, M., Kutzkov, K.: Learning convolutional neural networks for graphs. In: Proceedings of ICML (2016)
  • [24] Scarselli, F., Gori, M., Tsoi, A.C., et al.: The graph neural network model. In: IEEE Trans. Neural Netw. (2008)
  • [25] Shchur, O., Mumme, M., Bojchevski, A., et al.: Pitfalls of graph neural network evaluation. arXiv preprint arXiv:1811.05868 (2018)
  • [26] Sun, X., Yang, Z., Zhang, C., et al.: Conditional Gaussian distribution learning for open set recognition. In: Proceedings of CVPR (2020)
  • [27] Vaze, S., Han, K., Vedaldi, A., et al.: Generalized category discovery. In: Proceedings of CVPR (2022)
  • [28] Velickovic, P., Cucurull, G., Casanova, A., et al.: Graph attention networks. In: Proceedings of ICLR (2018)
  • [29] Wen, X., Zhao, B., Qi, X.: Parametric classification for generalized category discovery: A baseline study. In: Proceedings of ICCV (2023)
  • [30] Wu, M., Pan, S., Zhou, C., et al.: Unsupervised domain adaptive graph convolutional networks. In: Proceedings of WWW (2020)
  • [31] Wu, M., Pan, S., Zhu, X.: Openwgl: Open-world graph learning. In: Proceedings of ICDM (2020)
  • [32] Yang, Z., Cohen, W., Salakhudinov, R.: Revisiting semi-supervised learning with graph embeddings. In: Proceedings of ICML (2016)
  • [33] Zhang, Q., Li, Q., Chen, X., et al.: A dynamic variational framework for open-world node classification in structured sequences. In: Proceedings of ICDM (2022)
  • [34] Zhao, B., Wen, X., Han, K.: Learning semi-supervised Gaussian mixture models for generalized category discovery. In: Proceedings of ICCV (2023)
  • [35] Zhong, Z., Zhu, L., Luo, Z., et al.: Openmix: Reviving known knowledge for discovering novel visual categories in an open world. In: Proceedings of CVPR (2021)
  • [36] Zhu, Y., Xu, Y., Yu, F., et al.: Graph contrastive learning with adaptive augmentation. In: Proceedings of WWW (2021)