Beyond Graph Convolutional Network: An Interpretable Regularizer-centered Optimization Framework
Abstract
Graph convolutional networks (GCNs) have been attracting widespread attention due to their encouraging performance and powerful generalization ability. However, few works provide a general view to interpret various GCNs and to guide their designs. In this paper, by revisiting the original GCN, we induce an interpretable regularizer-centered optimization framework, in which most GCNs, such as APPNP, JKNet, DAGNN, and GNN-LF/HF, can be interpreted by building appropriate regularizers. Further, under the proposed framework, we devise a dual-regularizer graph convolutional network (dubbed tsGCN) to capture topological and semantic structures from graph data. Since the derived learning rule for tsGCN contains the inverse of a large matrix and is thus time-consuming, we leverage the Woodbury matrix identity and low-rank approximation tricks to successfully reduce the high computational complexity of computing infinite-order graph convolutions. Extensive experiments on eight public datasets demonstrate that tsGCN achieves superior performance against quite a few state-of-the-art competitors on classification tasks.
Introduction
Owing to its powerful ability to aggregate neighborhood information, the Graph Convolutional Network (GCN) has been successfully applied to diverse domains, such as computer vision [1, 2, 3], recommender systems [4, 5], privacy preserving [6], and traffic forecasting [7, 8]. Rooted in a series of theoretical foundations, GCN extends convolution operations to non-Euclidean spaces and effectively propagates label signals, and therefore its variants have been extensively employed for a variety of graph-related tasks, including classification [9, 10], clustering [11, 12], and link prediction [13, 14]. In a nutshell, GCN generates graph embeddings with well-established graph convolutional layers that gather semantics from neighbors according to the network topology, and these layers are revealed to be its most critical component.
Although GCN has performed well in many machine learning tasks, many studies have pointed out certain drawbacks and made efforts toward further improvements. Bo et al. [15] indicated that the propagation mechanism can be considered a special form of low-pass filter, and presented a GCN with an adaptive frequency. Zhang et al. [16] argued that most GCN-based methods ignore global information and proposed SHNE, which leverages structure and feature similarity to capture latent semantics. Wang et al. [17] revealed that the original GCN aggregates information from node neighbors inadequately, and then developed a multi-channel GCN by utilizing a feature-based semantic graph. In spite of the performance boosts of these GCN variants, they do not establish a generalized framework, i.e., these approaches understand and enhance GCN from specific and non-generalizable perspectives; thus they are difficult to develop further and offer limited interpretability.
Consequently, a unified framework for various GCNs with better interpretability is highly desirable; unfortunately, such work remains scarce. Zhao et al. [18] linked GCN and Graph-regularized PCA (GPCA), and then proposed a multi-layer network by stacking GPCA layers. Zhu et al. [19] attempted to interpret existing GCN-based methods with a unified optimization framework, under which they devised an adjustable graph filter for a new GCN variant. Yang et al. [20] designed a family of graph convolutional layers inspired by the updating rules of two typical iterative algorithms. Although these efforts have contributed to a better understanding of GCNs, they explain GCNs only in partial aspects, which calls for a more comprehensive analysis of GCNs.
To tackle the aforementioned issues, this paper induces an interpretable regularizer-centered optimization framework, which provides a novel perspective to digest various GCNs, i.e., the framework captures the common essential properties of existing state-of-the-art GCN variants and can define them simply by devising different regularizers. Moreover, in light of the analyses of current representative GCNs, we find that most existing approaches only consider the topological regularization, while the feature-based semantic structure is underutilized; this motivates us to design a dual-regularizer graph convolutional network (called tsGCN) within the regularizer-centered optimization framework to fully explore both structures and semantics of graph data. Because performing infinite-order graph convolutions is computationally expensive, a straightforward remedy under the unified framework is to approximate the graph Laplacian with truncated polynomials, similar to the truncated Chebyshev polynomials of the vanilla GCN, which restricts the message passing of a single graph convolution to the first-order neighborhood.
The main contributions of this paper can be summarized as the following three aspects:
- Propose a regularizer-centered constrained optimization framework, which interprets various existing GCNs with specific regularizers.
- Establish a new dual-regularizer graph convolutional network (tsGCN), which exploits the topological and semantic structures of the given data, and develop an efficient algorithm to reduce the computational complexity of solving infinite-order graph convolutions.
- Conduct a series of experiments to show that tsGCN performs much better than many SOTA GCNs, and also consumes much less time than the newly proposed GNN-HF/LF.
Related Work
Graph Convolutional Networks
The original GCN was first introduced by Kipf and Welling [21], who generalized convolution operations from the Euclidean domain to the non-Euclidean domain. SGC [22] assumed that the nonlinear transform of GCN is not that significant, and devised a simplified GCN by removing the nonlinear activation functions and collapsing the weight matrices. PPNP [23] employed the relationship between PageRank and GCN to improve the propagation mechanism of GCN, and an iterative version called APPNP was further proposed to reduce the high computational complexity. Attempting to adaptively learn the influence radii for each node and task, JKNet [24] combined various aggregations at the last layer and was able to learn representations of different orders for graph substructures. GNN-LF and GNN-HF [19] considered the low-pass and the high-pass filter as convolution kernels, respectively, to improve GCN's expressive power. AdaGCN [25] leveraged an AdaBoost strategy to enhance GCN, allowing information to be shared between layers. To sum up, a main characteristic of these methods is that they explore GCN by redesigning information aggregation strategies or modifying graph convolutions, and few works try to construct a unified framework to interpret various GCNs and reveal the underlying common principles.
Further Insights on GCNs
Quite a few studies have been devoted to exploring the mechanisms of GCN for deeper insights. Li et al. [26] indicated that the convolutional operation of GCN is a special form of Laplacian smoothing, due to which GCN suffers from the so-called over-smoothing problem. Specifically, the performance of GCN decreases as the number of layers increases, which has been validated by many other studies. However, Liu et al. [27] held a different opinion that the entanglement of two steps in GCN damages the performance of deep GCNs, where the two steps are explained as propagation and transformation. Based on this view, they decoupled the two operations and further presented a deeper GCN. Zhu et al. [19] also decomposed the convolution operation of GCN into two separate stages, called aggregation and transformation, and focused on the aggregation process, formulating an optimization objective to interpret it. Yang et al. [28] explored network topology refinement, leveraging a topology optimization process for the explanation. Oono et al. [29] analyzed the forward propagation of GCN and interpreted it with a specific dynamical system, relating GCN to the underlying topological structures. Overall, these studies have contributed to the interpretability of GCNs and help researchers better understand GCNs. In this paper, we build a unified optimization framework from a novel view of graph regularizers to interpret and understand GCNs.
Mathematical Notations
For the convenience of formal descriptions, derivations, and analyses, the necessary notations are given as follows. A graph is denoted as $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathbf{A})$, where $\mathcal{V}$ marks the vertex set with $|\mathcal{V}| = n$ ($n$ is the total number of nodes in graph $\mathcal{G}$), $\mathcal{E}$ marks the edge set, and $\mathbf{A} \in \mathbb{R}^{n \times n}$ marks an affinity matrix of which $\mathbf{A}_{ij}$ measures the similarity between the $i$-th and the $j$-th node. In addition, $\mathbf{D}$ represents the degree matrix of $\mathcal{G}$ with $\mathbf{D}_{ii} = \sum_{j=1}^{n} \mathbf{A}_{ij}$, and then the normalized symmetric graph Laplacian of $\mathcal{G}$ is computed as $\mathbf{L} = \mathbf{I} - \mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}$ with $\mathbf{I}$ the identity matrix.
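As a minimal illustration of the above notations, the following NumPy sketch builds the degree matrix and the normalized symmetric graph Laplacian from a toy affinity matrix; the matrix values are hypothetical and only serve as an example.

```python
import numpy as np

# Toy affinity matrix A of a 4-node undirected graph (hypothetical values).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Degree matrix D with D_ii = sum_j A_ij.
D = np.diag(A.sum(axis=1))

# Normalized adjacency D^{-1/2} A D^{-1/2} and Laplacian L = I - D^{-1/2} A D^{-1/2}.
d_inv_sqrt = 1.0 / np.sqrt(np.diag(D))
A_norm = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
L = np.eye(A.shape[0]) - A_norm

print(np.round(L, 3))
```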
Revisiting Graph Convolutional Network
Methods | Propagation Rules | Regularizer | Projective Set |
---|---|---|---|
GCN | |||
SGC | |||
APPNP | |||
JKNet | |||
DAGNN | |||
GNN-HF | |||
GNN-LF | |||
tsGCN |
For a graph $\mathcal{G}$, the eigendecomposition (equivalently, the SVD) of its graph Laplacian is $\mathbf{L} = \mathbf{U}\boldsymbol{\Lambda}\mathbf{U}^{\top}$, where $\mathbf{U}$ is comprised of orthonormal eigenvectors and $\boldsymbol{\Lambda} = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$ is a diagonal matrix with $\lambda_i$ denoting the $i$-th eigenvalue ($1 \le i \le n$). Essentially, this decomposition induces a Fourier transform on the graph domain, where the eigenvectors correspond to Fourier components and the eigenvalues represent frequencies of the graph. For an input signal $\mathbf{x} \in \mathbb{R}^{n}$ defined on the graph $\mathcal{G}$, the corresponding graph Fourier transform of $\mathbf{x}$ is $\hat{\mathbf{x}} = \mathbf{U}^{\top}\mathbf{x}$, and its inverse transform is derived as $\mathbf{x} = \mathbf{U}\hat{\mathbf{x}}$. Consequently, the graph convolution between the signal $\mathbf{x}$ and a filter $\mathbf{g}$ is

$\mathbf{g} \ast \mathbf{x} = \mathbf{U}\big((\mathbf{U}^{\top}\mathbf{g}) \odot (\mathbf{U}^{\top}\mathbf{x})\big),$  (1)

where $\odot$ is the Hadamard product between two vectors. Particularly, denoting $\mathbf{g}_{\boldsymbol{\theta}} = \mathrm{diag}(\boldsymbol{\theta})$ parameterized by $\boldsymbol{\theta} \in \mathbb{R}^{n}$, the graph convolution between $\mathbf{x}$ and $\mathbf{g}_{\boldsymbol{\theta}}$ can be rewritten as

$\mathbf{g}_{\boldsymbol{\theta}} \ast \mathbf{x} = \mathbf{U}\mathbf{g}_{\boldsymbol{\theta}}\mathbf{U}^{\top}\mathbf{x},$  (2)

where $\boldsymbol{\theta}$ is regarded as the filter coefficients to be optimized. Especially, $\mathbf{g}_{\boldsymbol{\theta}}$ is assumed to be a polynomial of the spectrum of the graph Laplacian [30], i.e.,

$\mathbf{g}_{\boldsymbol{\theta}} = \sum_{k=0}^{K} \theta_k \boldsymbol{\Lambda}^{k},$  (3)

where $K$ is the order of the (Chebyshev) polynomials. By fixing $K = 1$ and letting $\theta = \theta_0 = -\theta_1$, the graph convolutional network (GCN) [21] takes the effective form

$\mathbf{g}_{\boldsymbol{\theta}} \ast \mathbf{x} \approx \theta\big(\mathbf{I} + \mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\big)\mathbf{x},$  (4)

where $\theta$ is a parameter to be optimized. When extending the single-channel signal $\mathbf{x}$ and filter $\theta$ to the multi-channel $\mathbf{H}^{(l)} \in \mathbb{R}^{n \times d_l}$ and $\mathbf{W}^{(l)} \in \mathbb{R}^{d_l \times d_{l+1}}$, the GCN layer is converted to

$\mathbf{H}^{(l+1)} = \sigma\big(\tilde{\mathbf{A}}\mathbf{H}^{(l)}\mathbf{W}^{(l)}\big),$  (5)

where $\tilde{\mathbf{A}} = \hat{\mathbf{D}}^{-1/2}(\mathbf{A} + \mathbf{I})\hat{\mathbf{D}}^{-1/2}$ (with $\hat{\mathbf{D}}$ the degree matrix of $\mathbf{A} + \mathbf{I}$) is a normalized version of $\mathbf{A}$, $\sigma(\cdot)$ is an activation function, and $\mathbf{H}^{(l)}$ is the output of the $l$-th layer with $\mathbf{H}^{(0)} = \mathbf{X}$ being the input feature matrix.
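To make Eq. (5) concrete, below is a minimal NumPy sketch of a single GCN layer with the renormalization trick (self-loops added before normalization); the shapes and random initialization are illustrative assumptions rather than the exact setup of any specific experiment.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: ReLU(A_hat @ H @ W), with A_hat the renormalized adjacency."""
    n = A.shape[0]
    A_self = A + np.eye(n)                                      # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_self.sum(axis=1))
    A_hat = A_self * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^{-1/2}(A+I)D^{-1/2}
    return np.maximum(A_hat @ H @ W, 0.0)                       # ReLU activation

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.standard_normal((4, 3))    # H^(0): 4 nodes, 3 input features
W = rng.standard_normal((3, 2))    # weights mapping 3 features to 2 hidden units
print(gcn_layer(A, X, W).shape)    # (4, 2)
```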
An Interpretable Regularizer-centered Optimization Framework for GCNs
Given the input $\mathbf{H}^{(l)}$ of the $l$-th layer, GCN can compute the output $\mathbf{H}^{(l+1)}$ by minimizing

$\mathcal{L}(\mathbf{H}) = \big\|\mathbf{H} - \mathbf{H}^{(l)}\mathbf{W}^{(l)}\big\|_F^2 + \mathrm{tr}\big(\mathbf{H}^{\top}\tilde{\mathbf{L}}\mathbf{H}\big), \quad \mathrm{s.t.}\ \mathbf{H} \ge 0,$  (6)

where $\tilde{\mathbf{L}} = \mathbf{I} - \tilde{\mathbf{A}}$ with $\tilde{\mathbf{A}}$ defined as in Eq. (5); the trace term is a normalized graph regularizer that preserves the pairwise similarity of any two nodes in the given graph. Besides, the first term is a fitting loss between $\mathbf{H}$ and $\mathbf{H}^{(l)}\mathbf{W}^{(l)}$, where $\mathbf{H}^{(l)}$ and $\mathbf{W}^{(l)}$ are fixed when optimizing $\mathbf{H}$. Note that the square term $\|\mathbf{H}\|_F^2$ arising from the fitting loss is an L2-regularized smoother, which can be ignored or absorbed into the graph regularizer.

Taking the derivative of $\mathcal{L}(\mathbf{H})$ with respect to $\mathbf{H}$ and setting it to zero, we obtain $\mathbf{H}$ as

$\mathbf{H} = \big(\mathbf{I} + \tilde{\mathbf{L}}\big)^{-1}\mathbf{H}^{(l)}\mathbf{W}^{(l)},$  (7)

and then there yields

$\mathbf{H}^{(l+1)} = \mathrm{ReLU}\Big(\big(\mathbf{I} + \tilde{\mathbf{L}}\big)^{-1}\mathbf{H}^{(l)}\mathbf{W}^{(l)}\Big)$  (8)

when the nonnegative constraints are further considered, where $\mathrm{ReLU}(\cdot)$ is the activation function. Here, if the matrix inverse is approximated by its first-order expansion, i.e., $(\mathbf{I} + \tilde{\mathbf{L}})^{-1} \approx \mathbf{I} - \tilde{\mathbf{L}} = \tilde{\mathbf{A}}$, then Eq. (8) leads to the updating rule (5) of GCN.
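As a toy numerical check of the step from Eq. (7) to the GCN rule, the following sketch compares the exact propagation $(\mathbf{I}+\tilde{\mathbf{L}})^{-1}\mathbf{P}$ with its first-order approximation $\tilde{\mathbf{A}}\mathbf{P}$, where $\mathbf{P}=\mathbf{H}^{(l)}\mathbf{W}^{(l)}$; all matrices are random stand-ins rather than real graph data.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 6, 3

# Random symmetric affinity matrix with self-loops, then the renormalized adjacency.
A = rng.random((n, n)); A = (A + A.T) / 2 + np.eye(n)
d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
A_tilde = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
L_tilde = np.eye(n) - A_tilde                      # normalized Laplacian

P = rng.standard_normal((n, d))                    # stands for H^(l) W^(l)

exact = np.linalg.solve(np.eye(n) + L_tilde, P)    # Eq. (7): (I + L)^{-1} P
approx = A_tilde @ P                               # first-order: (I - L) P = A_tilde P

print(np.linalg.norm(exact - approx) / np.linalg.norm(exact))  # small relative gap
```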
Usually, the activation functions in GCN are $\mathrm{ReLU}$ and $\mathrm{softmax}$, which can be converted to different projection optimizations. Concretely, the $\mathrm{ReLU}$ activation function is equivalent to projecting a point $\mathbf{x}$ onto the non-negative orthant $\mathcal{C}_1 = \{\mathbf{u} \mid \mathbf{u} \ge 0\}$, i.e.,

$\mathrm{ReLU}(\mathbf{x}) = \arg\min_{\mathbf{u} \in \mathcal{C}_1} \|\mathbf{u} - \mathbf{x}\|_2^2.$  (9)

By the way, we denote $\mathcal{C}_2 = \mathbb{R}^{n}$, which corresponds to an identity activation function. In terms of the $\mathrm{softmax}$ activation function, it can be regarded as projecting $\mathbf{x}$ onto the probability simplex $\mathcal{C}_3 = \{\mathbf{u} \mid \mathbf{u} \ge 0,\ \mathbf{1}^{\top}\mathbf{u} = 1\}$, i.e.,

$\mathrm{softmax}(\mathbf{x}) = \arg\min_{\mathbf{u} \in \mathcal{C}_3} \ -\mathbf{u}^{\top}\mathbf{x} + \langle \mathbf{u}, \log\mathbf{u} \rangle,$  (10)

where $\langle \mathbf{u}, \log\mathbf{u} \rangle$ is the negative entropy of $\mathbf{u}$ [31]. In fact, other activation functions can also be regarded as projecting a point onto some feasible set with respect to some metric.
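As a quick numerical illustration of Eqs. (9) and (10): the ReLU output coincides with the Euclidean projection onto the non-negative orthant (computed here with SciPy's non-negative least squares), and the softmax output attains a lower value of the entropy-regularized objective than randomly sampled points on the simplex. This is a sanity check under the stated formulations, not a proof.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
x = rng.standard_normal(5)

# Eq. (9): ReLU(x) equals argmin_{u >= 0} ||u - x||^2, solved here as an NNLS problem.
relu = np.maximum(x, 0.0)
proj, _ = nnls(np.eye(5), x)
print(np.allclose(relu, proj))                   # True

# Eq. (10): softmax(x) minimizes f(u) = -u.x + u.log(u) over the probability simplex.
def f(u):
    return -u @ x + u @ np.log(u + 1e-12)

soft = np.exp(x - x.max()); soft /= soft.sum()
samples = rng.dirichlet(np.ones(5), size=2000)   # random feasible points on the simplex
print(f(soft) <= min(f(u) for u in samples))     # True: softmax attains the minimum
```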
Up to now, we have utilized a constrained optimization problem to interpret GCN, including its information propagation (i.e., Eq. (7)) and its nonlinear activation functions (i.e., Eqs. (9) and (10)).
The above analyses not only explain the vanilla GCN, but also motivate a regularizer-centered optimization framework that can further unify various GCNs. By extending the optimization (6), a more general framework is written as

$\mathbf{H}^{(l+1)} = \arg\min_{\mathbf{H} \in \mathcal{C}}\ \big\|\mathbf{H} - \mathbf{H}^{(l)}\mathbf{W}^{(l)}\big\|_F^2 + \mathcal{R}(\mathbf{H}),$  (11)

where $\mathcal{R}(\mathbf{H})$ is a graph regularizer and $\mathcal{C}$ is a projective set associated with the activation function. Under this framework, different regularizers derive different GCNs, for example:
- If the regularizer $\mathcal{R}(\mathbf{H})$ is chosen as the one listed in Table 4 for GNN-HF, then it induces exactly the updating rule of GNN-HF [19].
- Similarly, another choice of regularizer (also listed in Table 4) gives rise to the updating rule of GNN-LF [19].
For more cases, the results are summarized in Table 4, and the derivation details are analogous to those of the original GCN (Eqs. (7)-(10)); they are provided in the supplementary.
Remarks. The work [19] is the most similar to ours, sharing the same research idea: both aim to propose a unified framework to interpret current GCNs and to guide the design of new GCN variants; however, the two are realized in different ways. Specifically, (1) Zhu et al. [19] develop an optimization framework to explain different GCNs' propagation processes, whereas we propose a constrained optimization framework that interprets not only various GCNs' propagation processes but also their nonlinear activation layers; (2) [19] unifies various GCNs by devising various fitting terms, which are essentially constructed from a limited set of graph filters, while our work derives different GCNs through designing different regularizers. To sum up, our work interprets GCNs as a whole (not only in part) with regularizer-centered constrained optimizations.
Datasets | #Nodes | #Edges | #Classes | #Features | #Train/Val/Test |
---|---|---|---|---|---|
Cora | 2,708 | 5,429 | 7 | 1,433 | 140/500/1,000 |
Citeseer | 3,327 | 4,732 | 6 | 3,703 | 120/500/1,000 |
Pubmed | 19,717 | 44,338 | 3 | 500 | 60/500/1,000 |
ACM | 3,025 | 13,128 | 3 | 1,870 | 60/500/1,000 |
BlogCatalog | 5,196 | 171,743 | 6 | 8,189 | 120/500/1,000 |
CoraFull | 19,793 | 65,311 | 70 | 8,710 | 1,400/500/1,000 |
Flickr | 7,575 | 239,738 | 9 | 12,047 | 180/500/1,000 |
UAI | 3,067 | 28,311 | 19 | 4,973 | 367/500/1,000 |
Metrics | Methods | Cora | Citeseer | Pubmed | ACM | BlogCatalog | CoraFull | Flickr | UAI |
---|---|---|---|---|---|---|---|---|---|
ACC | Chebyshev | 76.2 (0.7) | 69.3 (0.4) | 74.0 (0.8) | 82.8 (1.4) | 68.3 (1.6) | 57.2 (1.1) | 38.5 (1.6) | 49.7 (0.4) |
ACC | GraphSAGE | 76.7 (0.6) | 64.4 (0.9) | 75.5 (0.2) | — | 57.8 (0.7) | 59.9 (0.7) | 32.7 (1.0) | 41.7 (1.4) |
ACC | GAT | 79.1 (0.8) | 68.3 (0.5) | 78.4 (0.3) | 84.6 (0.5) | 67.1 (1.7) | 62.4 (0.4) | 40.4 (0.9) | 49.7 (3.0) |
ACC | GCN | 80.6 (1.4) | 69.1 (1.5) | 77.6 (1.3) | 88.8 (0.5) | 84.2 (0.6) | 62.8 (0.4) | 51.0 (1.2) | 58.5 (1.1) |
ACC | SGC | 79.3 (1.0) | 66.4 (1.7) | 76.8 (2.0) | 80.8 (2.7) | 81.3 (0.2) | 62.9 (2.2) | 51.0 (0.1) | 56.5 (3.5) |
ACC | APPNP | 78.0 (0.1) | 65.8 (0.2) | 78.0 (0.0) | 88.2 (0.0) | 87.7 (0.3) | 63.1 (0.5) | 57.5 (0.2) | 62.3 (1.2) |
ACC | JKNet | 83.1 (0.1) | 72.3 (0.1) | 80.1 (0.2) | 82.3 (0.6) | 75.7 (0.1) | 62.6 (0.0) | 54.0 (0.3) | 45.6 (0.5) |
ACC | DAGNN | 81.9 (0.7) | 70.0 (1.1) | 80.6 (0.7) | 87.4 (0.9) | 84.6 (1.9) | 65.6 (0.3) | 54.6 (5.9) | 46.7 (12.4) |
ACC | GNN-LF | 81.1 (0.5) | 72.3 (0.9) | 80.0 (0.4) | 90.8 (0.5) | 86.7 (0.6) | 63.5 (0.9) | 56.6 (0.6) | 36.6 (19.8) |
ACC | GNN-HF | 80.7 (0.2) | 68.8 (1.3) | 77.7 (0.2) | 91.2 (0.5) | 84.5 (0.4) | 63.0 (0.7) | 60.7 (0.4) | 54.8 (1.4) |
ACC | tsGCN (inv) | 80.3 (0.3) | 73.3 (0.4) | 78.4 (0.3) | 85.1 (1.6) | 87.8 (6.3) | 67.0 (0.9) | 53.3 (12.6) | 64.2 (1.8) |
ACC | tsGCN | 82.0 (0.3) | 73.1 (0.4) | 82.4 (0.1) | 92.8 (0.3) | 92.3 (0.5) | 67.9 (0.9) | 79.1 (3.0) | 67.9 (0.6) |
F1 | Chebyshev | 76.3 (0.7) | 65.4 (0.8) | 73.9 (0.7) | 82.5 (1.4) | 64.3 (1.6) | 40.0 (0.5) | 38.4 (1.5) | 39.1 (0.2) |
F1 | GraphSAGE | 76.7 (0.5) | 60.7 (0.5) | 74.7 (0.2) | — | 54.7 (0.6) | 51.9 (0.6) | 31.0 (1.1) | 35.3 (1.0) |
F1 | GAT | 77.1 (0.7) | 64.6 (0.5) | 78.2 (0.2) | 84.8 (0.5) | 66.3 (1.9) | 46.4 (0.4) | 38.1 (1.1) | 40.8 (1.3) |
F1 | GCN | 79.4 (1.4) | 65.2 (2.4) | 77.2 (1.4) | 88.9 (0.5) | 82.4 (0.5) | 52.8 (0.8) | 50.0 (1.7) | 45.0 (1.1) |
F1 | SGC | 77.7 (0.9) | 61.5 (1.7) | 76.5 (2.3) | 81.1 (2.6) | 80.7 (0.3) | 53.2 (2.1) | 44.2 (0.2) | 46.7 (1.7) |
F1 | APPNP | 77.6 (0.1) | 63.2 (0.2) | 77.7 (0.0) | 88.3 (0.0) | 85.7 (0.3) | 48.2 (0.7) | 56.9 (0.2) | 48.6 (1.6) |
F1 | JKNet | 82.3 (0.3) | 67.8 (0.1) | 79.3 (0.3) | 82.2 (0.6) | 75.0 (0.1) | 51.3 (0.1) | 51.1 (0.5) | 31.7 (1.5) |
F1 | DAGNN | 80.0 (0.7) | 65.7 (0.7) | 80.7 (0.7) | 87.5 (0.9) | 83.8 (2.4) | 53.0 (0.9) | 55.5 (6.7) | 39.3 (11.2) |
F1 | GNN-LF | 79.1 (0.7) | 66.7 (0.4) | 80.2 (0.5) | 90.9 (0.5) | 85.9 (0.6) | 50.5 (1.9) | 54.3 (1.0) | 29.7 (15.1) |
F1 | GNN-HF | 78.6 (0.3) | 64.3 (1.7) | 78.1 (0.2) | 91.3 (0.5) | 83.8 (0.4) | 49.0 (1.1) | 58.6 (0.6) | 44.9 (0.8) |
F1 | tsGCN (inv) | 78.5 (0.3) | 69.6 (0.4) | 78.7 (0.3) | 85.1 (1.5) | 85.2 (7.1) | 57.2 (1.1) | 52.9 (15.8) | 48.5 (0.8) |
F1 | tsGCN | 80.5 (0.5) | 69.0 (0.3) | 82.4 (0.1) | 92.8 (0.4) | 90.1 (0.6) | 58.7 (0.7) | 79.3 (2.9) | 50.1 (0.1) |
tsGCN: Topological and Semantic Regularized Graph Convolutional Network
One finding about most existing GCNs is that they often ignore feature-based semantic structures, which can weaken the representation learning ability of graph networks. Hence, we focus on two regularizers, i.e.,

$\mathcal{R}_t(\mathbf{H}) = \mathrm{tr}\big(\mathbf{H}^{\top}\mathbf{L}_t\mathbf{H}\big),$  (12)

$\mathcal{R}_s(\mathbf{H}) = \mathrm{tr}\big(\mathbf{H}^{\top}\mathbf{L}_s\mathbf{H}\big),$  (13)

where $\mathbf{L}_t$ is a graph Laplacian built from the given adjacency matrix (e.g., $\mathbf{L}_t = \mathbf{I} - \tilde{\mathbf{A}}$), and $\mathbf{L}_s$ is a graph Laplacian calculated from the pairwise similarity of any two graph nodes. Hence, we devise a dual-regularizer, i.e., $\mathcal{R}(\mathbf{H}) = \alpha\,\mathcal{R}_t(\mathbf{H}) + \beta\,\mathcal{R}_s(\mathbf{H})$, and plugging it into the optimization framework (19) yields the following updating rule

$\mathbf{H}^{(l+1)} = \sigma\Big(\big(\mathbf{I} + \alpha\mathbf{L}_t + \beta\mathbf{L}_s\big)^{-1}\mathbf{H}^{(l)}\mathbf{W}^{(l)}\Big).$  (14)
Since this method seeks to preserve both the topological and semantic structures for more accurate representations, we call it tsGCN (i.e., Topological and Semantic regularized GCN).
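A sketch of how the two Laplacians behind the dual regularizer could be constructed: $\mathbf{L}_t$ from the given adjacency matrix and $\mathbf{L}_s$ from pairwise feature similarities. The cosine-similarity construction of the semantic graph below is an illustrative assumption; the paper only requires $\mathbf{L}_s$ to come from pairwise node similarities.

```python
import numpy as np

def normalized_laplacian(S):
    """L = I - D^{-1/2} S D^{-1/2} for a non-negative symmetric similarity matrix S."""
    d = S.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return np.eye(S.shape[0]) - S * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

rng = np.random.default_rng(3)
n, f = 8, 5
A = (rng.random((n, n)) > 0.6).astype(float)
A = np.maximum(A, A.T); np.fill_diagonal(A, 0)     # random undirected adjacency
X = rng.standard_normal((n, f))                    # node features

# Topological Laplacian from the adjacency matrix (self-loops added).
L_t = normalized_laplacian(A + np.eye(n))

# Semantic Laplacian from pairwise cosine similarities of node features (assumed choice).
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
S = np.clip(Xn @ Xn.T, 0.0, None)                  # keep non-negative similarities
np.fill_diagonal(S, 1.0)
L_s = normalized_laplacian(S)

print(L_t.shape, L_s.shape)
```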
Notably, the computational complexity of the matrix inverse in Eq. (14) is $\mathcal{O}(n^3)$, which tends to be unaffordable in practical applications. To this end, a low-rank approximation is adopted, i.e., the involved dense matrix (denoted $\mathbf{S}$) is approximated by a rank-$k$ factorization $\mathbf{U}_k\boldsymbol{\Sigma}_k\mathbf{U}_k^{\top}$, where $\mathbf{U}_k \in \mathbb{R}^{n \times k}$ with $k \ll n$. This enables the Woodbury matrix identity:

$\big(\mathbf{A} + \mathbf{U}\mathbf{C}\mathbf{V}\big)^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{U}\big(\mathbf{C}^{-1} + \mathbf{V}\mathbf{A}^{-1}\mathbf{U}\big)^{-1}\mathbf{V}\mathbf{A}^{-1},$  (15)

in which only a $k \times k$ matrix needs to be inverted, so the computational cost drops substantially.
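A small numerical check of the identity in Eq. (15): when the correction term has rank $k \ll n$, only a $k \times k$ system has to be inverted (the easy part $\mathbf{A}$ is taken diagonal here so that $\mathbf{A}^{-1}$ is cheap). The matrices below are random stand-ins, not the ones actually used by tsGCN.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 200, 10

A = np.diag(rng.uniform(1.0, 2.0, size=n))       # diagonal "easy" part, cheap to invert
U = rng.standard_normal((n, k))                  # rank-k factor
C = np.diag(rng.uniform(0.5, 1.5, size=k))

direct = np.linalg.inv(A + U @ C @ U.T)          # naive O(n^3) inverse

A_inv = np.diag(1.0 / np.diag(A))
small = np.linalg.inv(np.linalg.inv(C) + U.T @ A_inv @ U)   # only a k x k inverse
woodbury = A_inv - A_inv @ U @ small @ U.T @ A_inv

print(np.allclose(direct, woodbury))             # True
```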
The optimal rank-$k$ factorization is characterized by the following problem:

$\min_{\mathrm{rank}(\mathbf{M}) \le k}\ \|\mathbf{S} - \mathbf{M}\|_F^2,$  (16)

whose optimum is attained at the $k$-truncated singular value decomposition of $\mathbf{S}$, i.e., $\mathbf{M}^{\star} = \mathbf{U}_k\boldsymbol{\Sigma}_k\mathbf{U}_k^{\top}$, where $\boldsymbol{\Sigma}_k$ is a diagonal matrix containing the $k$ largest singular values. Therefore, an optimal solution to Eq. (16) can be given in an analytic form.
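The analytic solution mentioned above is the classical Eckart-Young result: the best rank-$k$ approximation in the Frobenius norm is given by the $k$-truncated SVD. A NumPy sketch on a random symmetric stand-in matrix:

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 50, 5
S = rng.standard_normal((n, n)); S = (S + S.T) / 2     # symmetric stand-in matrix

U, sigma, Vt = np.linalg.svd(S)
S_k = U[:, :k] @ np.diag(sigma[:k]) @ Vt[:k, :]        # k-truncated SVD of S

# Any other rank-k matrix (here built from the next k singular directions) fits worse.
S_other = U[:, k:2*k] @ np.diag(sigma[k:2*k]) @ Vt[k:2*k, :]
print(np.linalg.norm(S - S_k) < np.linalg.norm(S - S_other))   # True
```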
To obtain the optimal $\mathbf{U}_k$ and $\boldsymbol{\Sigma}_k$, the iterative algorithm of [32] with QR-decompositions is leveraged as

$\mathbf{V}^{(t+1)} = \mathbf{S}\mathbf{U}^{(t)},$  (17)

$\mathbf{U}^{(t+1)}\mathbf{R}^{(t+1)} = \mathrm{QR}\big(\mathbf{V}^{(t+1)}\big),$  (18)

where $\mathrm{QR}$ denotes the QR-decomposition. Note that this algorithm converges to the $k$ largest eigenvalues and their corresponding eigenvectors when the number of iterations is large enough. Finally, there will be $\mathbf{S} \approx \mathbf{U}_k\boldsymbol{\Sigma}_k\mathbf{U}_k^{\top}$.
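The iteration of Eqs. (17)-(18) is a form of orthogonal (subspace) iteration; the following sketch runs it on a random positive semi-definite stand-in matrix and compares the recovered leading eigenvalues with NumPy's dense eigensolver.

```python
import numpy as np

def subspace_iteration(S, k, iters=500):
    """Approximate the k leading eigenpairs of a symmetric S via QR-based iteration."""
    rng = np.random.default_rng(6)
    U, _ = np.linalg.qr(rng.standard_normal((S.shape[0], k)))
    for _ in range(iters):
        V = S @ U                          # power step, as in Eq. (17)
        U, _ = np.linalg.qr(V)             # re-orthogonalize via QR, as in Eq. (18)
    ritz = np.linalg.eigvalsh(U.T @ S @ U) # Ritz values approximate the top-k eigenvalues
    return ritz[::-1], U

rng = np.random.default_rng(7)
B = rng.standard_normal((100, 100))
S = B @ B.T                                # symmetric PSD test matrix

lam_approx, _ = subspace_iteration(S, k=5)
lam_true = np.linalg.eigvalsh(S)[::-1][:5]
print(np.allclose(lam_approx, lam_true, rtol=1e-3))   # True (up to iteration accuracy)
```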
Gathering all analyses mentioned above, the procedure for tsGCN is summarized in Algorithm 1.
Experiment
This section will show tsGCN’s effectiveness and efficiency via comprehensive experiments.
Datasets
Cora, Citeseer, and Pubmed are citation networks, and CoraFull is a larger version of Cora; ACM is a paper network, and BlogCatalog and Flickr are social networks; UAI has been utilized for community detection. The detailed statistics of the above eight public datasets are summarized in Table 2.
Compared Methods
Two types of methods are employed here for comparisons. Chebyshev [33], GraphSAGE [34] and GAT [35] are classical graph neural networks. GCN, SGC [22], APPNP [23], JKNet [24], DAGNN [27], GNN-LF and GNN-HF [19] are selected as state-of-the-art GCN variants.
Parameter Setups
For all experiments, we randomly split the samples into a small set of labeled samples per class for training, a set of 500 samples for validation, and a set of 1,000 samples for testing, as listed in Table 2. For the ten baseline methods, all configurations are set as the defaults in their original papers. With respect to tsGCN, following the vanilla GCN, the learning rate, weight decay, and the size of hidden units are fixed (the detailed values are listed in Table 5 of the supplementary). The hyperparameters $\alpha$ and $\beta$ are selected per dataset, and the rank $k$ is chosen in a range related to the feature dimension of the original data.
Semi-supervised Classification
Performance Comparisons. The semi-supervised classification task is conducted on selected datasets, whose results are recorded in Table 3. Specifically, we compare our tsGCN with the ten baseline methods in terms of both accuracy and F1-score, marking the best and second-best results on each dataset. Note that tsGCN (inv) denotes tsGCN without the low-rank approximation, which directly calculates the matrix inverse in Eq. (14). From Table 3, we have the following observations:
- tsGCN achieves the best performance on most datasets, and is only slightly inferior to JKNet on the smallest dataset, Cora.
- tsGCN yields higher scores than JKNet and APPNP, especially on Pubmed, CoraFull, BlogCatalog, and Flickr, where the first two are relatively large datasets and the latter two have dense edges. tsGCN even outperforms the second-best approach, GNN-HF, by about 18% on Flickr.
It is worth mentioning that tsGCN utilizes high-order information via the infinite-order graph convolution, and JKNet and APPNP also develop different techniques for the same goal. Hence, the advantage of tsGCN over APPNP and JKNet implies that the infinite-order graph convolution implemented by the low-rank approximation not only requires less computation, but also effectively captures high-order neighborhood information and filters out significant noise.
Runtime Comparisons. This section collects the training time (i.e., runtime) of all methods on two relatively large datasets, Pubmed and CoraFull, as exhibited in Fig. 1(a): the first three columns correspond to classical GNNs, while the rest are GCNs. From Fig. 1(a), we find that tsGCN takes much less runtime than Chebyshev, GAT, and GraphSAGE, and is moderately fast among the state-of-the-art GCN variants. Specifically, tsGCN is (1) slower than SGC, JKNet, and DAGNN; (2) comparable to the original GCN; and (3) faster than the recently proposed GNN-LF and GNN-HF.
Parameter Sensitivity Analysis
Fig. 1(b) curves the accuracy of tsGCN w.r.t. various ranks with the other parameters $\alpha$ and $\beta$ fixed. Considering that different datasets hold different distributions, their optimal ranks for the optimization (16) also differ. For example, on BlogCatalog and ACM, the accuracy first rises to a high value and then keeps steady, which indicates that once the rank exceeds a moderate value, the low-rank approximation is effective and efficient enough. On Pubmed, the performance monotonically decreases as the rank becomes bigger, which implies that a very low-rank approximation is sufficient to preserve abundant information for good results. However, on other datasets such as Flickr and Cora, the accuracy generally rises to a peak first and then falls continuously as the rank increases; for these cases, the optimal ranks differ at their peaks.
Fig. 2 plots the accuracy of tsGCN w.r.t. $(\alpha, \beta)$ with the optimal ranks fixed. On Cora, Citeseer, Pubmed, BlogCatalog, and CoraFull, tsGCN performs well with a large $\alpha$ and a small $\beta$; on ACM, Flickr, and UAI, tsGCN achieves high accuracy when both parameters are large.
For the detailed settings of these hyperparameters, please refer to the code and datasets to be released on GitHub.
Ablation Study
The results of GCN, tsGCN-s, tsGCN-t, tsGCN (inv), and tsGCN are plotted in Fig. 3 (note that tsGCN-s and tsGCN-t are equipped with only the semantic and only the topological regularizer, respectively), which tells us:
- The performance is unsatisfactory when either regularizer is adopted alone, while tsGCN can always effectively fuse the two to better capture the underlying structures.
- tsGCN (inv) is even worse than the single-regularizer models on some datasets, indicating that infinite-order graph convolutions implemented by a direct matrix inverse can introduce instability into the model.
- Compared to GCN, tsGCN (inv) performs comparably or even worse, whereas tsGCN shows substantial improvements on all datasets, which indicates that the low-rank approximation enhances the robustness of infinite-order graph convolutions.
Data Visualization
Fig. 4 exhibits the graph representations learned by different methods on BlogCatalog. As can be seen clearly, the results of the three classical graph neural networks, i.e., Chebyshev, GraphSAGE, and GAT, are unsatisfactory; as for the other competitors, we observe:
- Both tsGCN (inv) and tsGCN are better than the other GCNs, which indicates that the dual-regularizer can extract more accurate inter-relationships from the topological and semantic structures.
- Comparing the embeddings learned by tsGCN with those of tsGCN (inv), the classes in the former are more clearly recognized and the clusters are more compact, which testifies to the effectiveness of the low-rank approximation.
In a nutshell, the embeddings of the proposed model show the best inter-class separation and intra-class aggregation.
Conclusion
By revisiting GCN, this paper puts forward an interpretable regularizer-centered optimization framework, in which the connections between existing GCNs and diverse regularizers are revealed. It is worth mentioning that this framework provides a new perspective to interpret existing work and to guide new GCNs simply by designing new graph regularizers. Motivated by the significant effectiveness of the feature-based semantic graph, we further combine it with the nodes' topological structures, and develop a novel dual-regularizer graph convolutional network, called tsGCN. Since the analytical updating rule of tsGCN contains a time-consuming matrix inverse, we devise an efficient algorithm with low-rank approximation tricks. Experiments on node classification tasks demonstrate that tsGCN performs much better than quite a few state-of-the-art competitors, and also show that tsGCN runs much faster than the recently proposed GCN variants GNN-HF and GNN-LF.
Acknowledgments
This work is in part supported by the National Natural Science Foundation of China (Grant Nos. U21A20472 and 62276065), the Natural Science Foundation of Fujian Province (Grant No. 2020J01130193).
Supplementary
In this supplementary, we mainly present specific details to link various GCNs with various graph regularizers under the regularizer-centered optimization framework. Besides, more experimental settings and results are provided to further enrich the main paper.
The Framework Review
An interpretable regularizer-centered constrained optimization framework is induced as

$\mathbf{H}^{(l+1)} = \arg\min_{\mathbf{H} \in \mathcal{C}}\ \big\|\mathbf{H} - \mathbf{H}^{(l)}\mathbf{W}^{(l)}\big\|_F^2 + \mathcal{R}(\mathbf{H}),$  (19)
with the aim to unify various GCNs in an interpretable way, and also to guide the design of new GCN variants. Note that the first term in optimization (19) is the fitting loss between the forward propagation $\mathbf{H}^{(l)}\mathbf{W}^{(l)}$ and the output $\mathbf{H}^{(l+1)}$, while the second term is the priors-based graph regularizer. Besides, $\mathcal{C}_1$, $\mathcal{C}_2$, and $\mathcal{C}_3$ are separately defined to be

$\mathcal{C}_1 = \{\mathbf{H} \mid \mathbf{H} \ge 0\},$  (20)

$\mathcal{C}_2 = \mathbb{R}^{n \times d},$  (21)

and

$\mathcal{C}_3 = \{\mathbf{H} \mid \mathbf{H} \ge 0,\ \mathbf{H}\mathbf{1} = \mathbf{1}\}.$  (22)

The above three sets correspond to the $\mathrm{ReLU}$, identity, and (row-wise) $\mathrm{softmax}$ activation functions frequently used in graph convolutional networks, respectively.
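For completeness, the following short derivation (a standard argument, included here as a sketch) indicates why the row-wise softmax corresponds to the entropy-regularized projection onto $\mathcal{C}_3$: forming the Lagrangian of the row-wise problem and setting its gradient to zero recovers the softmax form.

```latex
% Row-wise problem for one row u with logits x:
\min_{\mathbf{u} \ge 0,\ \mathbf{1}^{\top}\mathbf{u} = 1}\ -\mathbf{u}^{\top}\mathbf{x} + \mathbf{u}^{\top}\log\mathbf{u}
% Lagrangian (the non-negativity constraint is inactive at the optimum):
\mathcal{L}(\mathbf{u},\nu) = -\mathbf{u}^{\top}\mathbf{x} + \mathbf{u}^{\top}\log\mathbf{u} + \nu\,(\mathbf{1}^{\top}\mathbf{u} - 1)
% Stationarity:
\frac{\partial \mathcal{L}}{\partial u_i} = -x_i + \log u_i + 1 + \nu = 0
\;\Longrightarrow\; u_i = \exp(x_i - 1 - \nu)
% Enforcing the sum-to-one constraint eliminates \nu:
u_i = \frac{\exp(x_i)}{\sum_j \exp(x_j)} = \mathrm{softmax}(\mathbf{x})_i
```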
As claimed, by designing different regularizers, this framework can give rise to different GCN methods. In the following, we give specific details about how they can be derived from optimization (19).
Link Various GCNs with Various Regularizers
Methods | Propagation Rules | Regularizer | Projective Set |
---|---|---|---|
GCN | |||
SGC | |||
APPNP | |||
JKNet | |||
DAGNN | |||
GNN-HF | |||
GNN-LF | |||
tsGCN |
Theorem 1.
The updating rule of the vanilla GCN [21],

$\mathbf{H}^{(l+1)} = \mathrm{ReLU}\big(\tilde{\mathbf{A}}\mathbf{H}^{(l)}\mathbf{W}^{(l)}\big),$  (23)

is equivalent to solving the following optimization

$\mathbf{H}^{(l+1)} = \arg\min_{\mathbf{H} \in \mathcal{C}_1}\ \big\|\mathbf{H} - \mathbf{H}^{(l)}\mathbf{W}^{(l)}\big\|_F^2 + \mathcal{R}(\mathbf{H}),$  (24)

where

$\mathcal{R}(\mathbf{H}) = \mathrm{tr}\big(\mathbf{H}^{\top}\tilde{\mathbf{L}}\mathbf{H}\big).$  (25)

Proof.

Taking the derivative of the objective in Eq. (24) w.r.t. $\mathbf{H}$, we obtain

$\frac{\partial \mathcal{L}(\mathbf{H})}{\partial \mathbf{H}} = 2\big(\mathbf{H} - \mathbf{H}^{(l)}\mathbf{W}^{(l)}\big) + 2\tilde{\mathbf{L}}\mathbf{H};$  (26)

if it (the derivative) is further set to zero, then there yields

$\mathbf{H} = \big(\mathbf{I} + \tilde{\mathbf{L}}\big)^{-1}\mathbf{H}^{(l)}\mathbf{W}^{(l)}.$  (27)
Theorem 2.
Given and , the updating rule of APPNP [23]
(29) |
is equivalent to solving the following optimization
(30) |
where
(31) | ||||
Proof.
Taking the derivative of w.r.t. and setting it to zero, we come to
(32) |
which leads to
(33) |
By projecting onto , we could achieve
(34) |
which completes the proof.
Theorem 3.
The updating rule of JKNet [24]
(35) |
is equivalent to solving the following optimization
(36) |
where
(37) | ||||
Proof.
Taking the derivative of w.r.t. and setting it to zero, we have
(38) |
which leads to
(39) |
It is noted that the spectral radius of is smaller than one, indicating
(40) |
If its -order approximation is employed, then there goes
(41) |
which suggests that can be approximated by
(42) |
If denote (), then is a set of parameters with .
By projecting onto , we can realize
(43) |
which completes the proof.
Theorem 4.
Given and a trainable projection vector , the updating rule of DAGNN [27]
(44) |
is equivalent to solving the following optimization
(45) |
where
(46) | ||||
Proof.
Taking the derivative of w.r.t. and setting it to zero, we can harvest
(47) |
Similar to the proof of Theorem 3, the -order approximation of is utilized, and then we obtain
(48) |
By projecting onto , we can arrive at
(49) |
which completes the proof.
Theorem 5.
The updating rule of GNN-HF [19]
(50) |
is equivalent to solving the following optimization
(51) |
where
(52) | ||||
with and .
Proof.
Taking the derivative of the objective w.r.t. $\mathbf{H}$ and setting it to zero, we obtain
(53) |
which yields
(54) |
Substituting and into Eq. (54), we obtain
(55) |
By projecting onto the corresponding projective set, we can achieve
(56) |
which completes the proof.
Theorem 6.
The updating rule of GNN-LF [19]
(57) |
is equivalent to solving the following optimization
(58) |
where
(59) | ||||
with and .
Proof.
Taking the derivative of w.r.t. and setting it to zero, we can get
(60) |
which leads to
(61) |
Datasets/Parameters | Learning rate | Weight decay | Hidden units | |||
---|---|---|---|---|---|---|
Cora | ||||||
Citeseer | ||||||
Pubmed | ||||||
ACM | ||||||
BlogCatalog | ||||||
CoraFull | ||||||
Flickr | ||||||
UAI |
Absorbing the scale into the to-be-learnt variable , and substituting and into Eq. (61), we can harvest
(62) | ||||
By projecting onto , we can attain
(63) |
For notation consistency, we denote and as and in Table 4, completing the proof.
More Experimental Settings and Results
In this part, we provide more experimental settings and results for tsGCN, including hyperparameter settings, t-SNE visualizations of various methods, and the parameter sensitivity analysis of tsGCN w.r.t. F1-score.
Hyperparameter Settings. The detailed values of several hyperparameters are recorded in Table 5, which can be used to reproduce the reported experimental results. The code is also provided as a supplementary file.
More visualizations. We draw the t-SNE visualizations of the embeddings generated by all methods on all datasets in Figs. 5 to 11, from which we have the following observations:
- The results generally match the quantitative performance, i.e., tsGCN achieves better results than the others on most datasets.
- Embeddings generated by tsGCN achieve better inter-class separation alongside intra-class clustering than those generated by tsGCN (inv), even when their quantitative performance is comparable.
Parameter Sensitivity. It can be seen that the F1-scores of tsGCN w.r.t. $(\alpha, \beta)$ hold similar trends to the classification accuracies of tsGCN.
References
- [1] Z. Chen, X. Wei, P. Wang, Y. Guo, Multi-label image recognition with graph convolutional networks, in: CVPR, 2019, pp. 5177–5186.
- [2] W. Nie, Y. Zhao, A. Liu, Z. Gao, Y. Su, Multi-graph convolutional network for unsupervised 3d shape retrieval, in: MM, 2020, pp. 3395–3403.
- [3] Y. Wang, M. Cao, Z. Fan, S. Peng, Learning to detect 3d facial landmarks via heatmap regression with graph convolutional network, in: AAAI, 2022, pp. 2595–2603.
- [4] F. Xu, J. Lian, Z. Han, Y. Li, Y. Xu, X. Xie, Relation-aware graph convolutional networks for agent-initiated social e-commerce recommendation, in: CIKM, 2019, pp. 529–538.
- [5] Y. Chen, L. Huang, C. Wang, J. Lai, Hybrid-order gated graph neural network for session-based recommendation, IEEE Transactions on Industrial Informatics 18 (3) (2022) 1458–1467.
- [6] H. Hu, L. Cheng, J. P. Vap, M. Borowczak, Learning privacy-preserving graph convolutional network with partially observed sensitive attributes, in: WWW, 2022, pp. 3552–3561.
- [7] B. Yu, H. Yin, Z. Zhu, Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting, in: IJCAI, 2018, pp. 3634–3640.
- [8] W. Chen, L. Chen, Y. Xie, W. Cao, Y. Gao, X. Feng, Multi-range attentive bicomponent graph convolutional network for traffic forecasting, in: AAAI, 2020, pp. 3529–3536.
- [9] Y. Zhang, S. Pal, M. Coates, D. Üstebay, Bayesian graph convolutional neural networks for semi-supervised classification, in: AAAI, 2019, pp. 5829–5836.
- [10] M. Yang, Y. Shen, R. Li, H. Qi, Q. Zhang, B. Yin, A new perspective on the effects of spectrum in graph neural networks, in: ICML, 2022, pp. 25261–25279.
- [11] S. Fan, X. Wang, C. Shi, E. Lu, K. Lin, B. Wang, One2multi graph autoencoder for multi-view graph clustering, in: WWW, 2020, pp. 3070–3076.
- [12] H. Zhu, P. Koniusz, Simple spectral graph convolution, in: ICLR, 2021, pp. 1–11.
- [13] H. Chen, H. Yin, X. Sun, T. Chen, B. Gabrys, K. Musial, Multi-level graph convolutional networks for cross-platform anchor link prediction, in: KDD, 2020, pp. 1503–1511.
- [14] N. Halliwell, Evaluating explanations of relational graph convolutional network link predictions on knowledge graphs, in: AAAI, 2022, pp. 12880–12881.
- [15] D. Bo, X. Wang, C. Shi, H. Shen, Beyond low-frequency information in graph convolutional networks, in: AAAI, 2021, pp. 3950–3957.
- [16] Z. Zhang, C. Chen, Y. Chang, W. Hu, X. Xing, Y. Zhou, Z. Zheng, Shne: Semantics and homophily preserving network embedding, IEEE Transactions on Neural Networks and Learning Systems (2021) 1–12.
- [17] X. Wang, M. Zhu, D. Bo, P. Cui, C. Shi, J. Pei, Am-gcn: Adaptive multi-channel graph convolutional networks, in: KDD, 2020, pp. 1243–1253.
- [18] L. Zhao, L. Akoglu, Connecting graph convolutional networks and graph-regularized pca, arXiv preprint arXiv:2006.12294 (2020).
- [19] M. Zhu, X. Wang, C. Shi, H. Ji, P. Cui, Interpreting and unifying graph neural networks with an optimization framework, in: WWW, 2021, pp. 1215–1226.
- [20] Y. Yang, T. Liu, Y. Wang, J. Zhou, Q. Gan, Z. Wei, Z. Zhang, Z. Huang, D. Wipf, Graph neural networks inspired by classical iterative algorithms, in: ICML, 2021, pp. 11773–11783.
- [21] T. N. Kipf, M. Welling, Semi-supervised classification with graph convolutional networks, in: ICLR, 2017, pp. 1–13.
- [22] F. Wu, A. H. Souza Jr., T. Zhang, C. Fifty, T. Yu, K. Q. Weinberger, Simplifying graph convolutional networks, in: ICML, 2019, pp. 6861–6871.
- [23] J. Klicpera, A. Bojchevski, S. Günnemann, Predict then propagate: Graph neural networks meet personalized pagerank, in: ICLR, 2019, pp. 1–15.
- [24] K. Xu, C. Li, Y. Tian, T. Sonobe, K. Kawarabayashi, S. Jegelka, Representation learning on graphs with jumping knowledge networks, in: ICML, 2018, pp. 5449–5458.
- [25] K. Sun, Z. Zhu, Z. Lin, Adagcn: Adaboosting graph convolutional networks into deep models, in: ICLR, 2021, pp. 1–15.
- [26] Q. Li, Z. Han, X. Wu, Deeper insights into graph convolutional networks for semi-supervised learning, in: AAAI, 2018, pp. 3538–3545.
- [27] M. Liu, H. Gao, S. Ji, Towards deeper graph neural networks, in: KDD, 2020, pp. 338–348.
- [28] L. Yang, Z. Kang, X. Cao, D. Jin, B. Yang, Y. Guo, Topology optimization based graph convolutional network, in: AAAI, 2019, pp. 4054–4061.
- [29] K. Oono, T. Suzuki, Graph neural networks exponentially lose expressive power for node classification, in: ICLR, 2020, pp. 1–8.
- [30] D. K. Hammond, P. Vandergheynst, R. Gribonval, Wavelets on graphs via spectral graph theory, Applied and Computational Harmonic Analysis 30 (2011) 129–150.
- [31] B. Amos, Differentiable optimization-based modeling for machine learning, Ph.D. thesis, Carnegie Mellon University (2019).
- [32] J. Sun, Z. Xu, Neural diffusion distance for image segmentation, in: NeurIPS, 2019, pp. 1441–1451.
- [33] M. Defferrard, X. Bresson, P. Vandergheynst, Convolutional neural networks on graphs with fast localized spectral filtering, in: NeurIPS, 2016, pp. 1–9.
- [34] W. L. Hamilton, Z. Ying, J. Leskovec, Inductive representation learning on large graphs, in: NeurIPS, 2017, pp. 1024–1034.
- [35] P. Velickovic, G. Cucurull, A. Casanova, A. Romero, P. Liò, Y. Bengio, Graph attention networks, in: ICLR, 2018, pp. 1–12.