Deeper Insights into Deep Graph Convolutional Networks: Stability and Generalization
Abstract
Graph convolutional networks (GCNs) have emerged as powerful models for graph learning tasks, exhibiting promising performance in various domains. While their empirical success is evident, there is a growing need to understand their capabilities from a theoretical perspective. Existing theoretical research has primarily focused on the analysis of single-layer GCNs, while a comprehensive theoretical exploration of the stability and generalization of deep GCNs remains limited. In this paper, we bridge this gap by delving into the stability and generalization properties of deep GCNs, aiming to provide valuable insights by rigorously characterizing the associated upper bounds. Our theoretical results reveal that the stability and generalization of deep GCNs are influenced by certain key factors, such as the maximum absolute eigenvalue of the graph filter operators and the depth of the network. Our theoretical studies contribute to a deeper understanding of the stability and generalization properties of deep GCNs, potentially paving the way for developing more reliable and well-performing models.
keywords:
Graph convolutional networks (GCNs); Generalization gap; Deep GCNs; Uniform stability.
1 Introduction
Graph-structured data is pervasive across diverse domains, including knowledge graphs, traffic networks, and social networks, to name a few [1, 2]. Several pioneering works [3, 4] introduced the initial concept of graph neural networks (GNNs), incorporating recurrent mechanisms and requiring the neural network parameters to define contraction mappings. Concurrently, Micheli [5] introduced the neural network for graphs, commonly referred to as NN4G, over a comparable timeframe. It is worth noting that NN4G diverges from recurrent mechanisms and instead employs a feed-forward architecture, exhibiting similarities to contemporary GNNs. In recent years, (contemporary) GNNs have gained significant attention as an effective methodology for modeling graph data [6, 7, 8, 9, 10, 11]. For a comprehensive understanding of GNNs and deep learning for graphs, we refer the readers to the relevant survey papers [12, 13, 14, 15].
Among the various GNN variants, one of the most powerful and frequently used is the graph convolutional network (GCN). A widely accepted perspective posits that GCNs extend or generalize traditional spatial filters, commonly employed in Euclidean data analysis, to the realm of non-Euclidean data. Owing to this success on non-Euclidean data, GCNs have attracted widespread attention regarding their theoretical properties. Recent work on GCNs includes understanding over-smoothing [16, 17, 18, 19], interpretability and explainability [20, 21, 22, 23, 24], expressiveness [25, 26, 27], and generalization [28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41]. In this paper, we specifically address the generalization of GCNs to provide a bound on their generalization gap.
Investigating the generalization of GCNs is essential for understanding their underlying working principles and capabilities from a theoretical perspective. However, the theoretical development in this area is still in its infancy. In recent work, Verma and Zhang [36] provided a novel technique based on algorithmic stability to investigate the generalization capability of single-layer GCNs in semi-supervised learning tasks. Their results indicate that the stability of a single-layer GCN trained with the stochastic gradient descent (SGD) algorithm depends on the largest absolute eigenvalue of the graph filter operator. This finding highlights the crucial role of graph filters in determining the generalization capability of single-layer GCNs, providing guidance for designing effective graph filters for these networks. On the other hand, a number of prior studies have shown that deep GCNs possess greater expressive power than their single-layer counterparts. Consequently, it is essential to extend the generalization results of single-layer GCNs to their multi-layer counterparts. This will help us understand the effect of factors such as the graph filter and the number of layers on the generalization capability of deep GCNs.
In this paper, we study the generalization of deep GCNs. Our approach mainly follows the work in [36] by estimating the uniform stability of the learning algorithm of deep GCNs in semi-supervised learning problems, but a more sophisticated analysis is required. Our findings reveal a strong association between the generalization gap of deep GCNs and the characteristics of the graph filter, as well as the number of layers employed. Specifically, we observe that if the maximum absolute eigenvalue (or the largest singular value) of the graph filter operator remains invariant with respect to the graph size, the generalization gap diminishes asymptotically at a rate of $\mathcal{O}(1/\sqrt{m})$, where $m$ denotes the training data size, as $m$ approaches infinity. This explains why normalized graph filters perform better than non-normalized ones in deep GCNs. Additionally, our results suggest that a large number of layers can increase the generalization gap and subsequently degrade the performance of deep GCNs. This provides guidance for designing well-performing deep GCNs with a proper number of layers.
The key contributions of our paper are as follows:
1. We prove the uniform stability of deep GCNs trained with SGD, which extends the findings for single-layer GCNs presented in [36].
2. An upper bound for the generalization gap of deep GCNs is provided with rigorous proofs. Our theoretical results shed light on the crucial components influencing the generalization ability of the deep GCN model.
3. Our empirical studies across three benchmark datasets for node classification convincingly verify our theoretical findings regarding the roles of graph filters and of the depth and width of deep GCN models.
The remainder of this paper is organized as follows. In Section 2, an overview of prior studies on the generalization of GCNs (or generic GNNs) is presented, along with a comparative analysis highlighting the similarities and distinctions between our work and previous research. Section 3 offers an exposition of the essential concepts. The primary findings of this paper are given in Section 4. Experimental studies designed to validate our theoretical findings are presented in Section 5. Section 6 concludes the paper with additional remarks. Detailed proofs of our theoretical results are included in the appendices.
2 Related Work
Theoretically, contemporary research on the generalization capability of GCNs predominantly employs methodologies such as the Vapnik-Chervonenkis dimension (VC-dim) [30, 34], Rademacher complexity [31, 32, 33, 34, 35], and algorithmic stability [36, 37, 42, 43]; these constitute the mainstream categories revisited in this section. To provide a broader perspective, we also briefly mention other methodologies, such as the classic PAC-Bayesian framework [38, 39], neural tangent kernels (NTK) [40, 41], algorithmic alignment [44, 45], and statistical physics and random matrix theory [46].
VC-dim and Rademacher Complexity. In [30], Scarselli et al. examined the generalization capability of GNNs by providing upper bounds on the order of growth of the VC-dim of GNNs. While the VC-dim serves as a traditional concept for establishing learning bounds, its applicability does not account for the underlying graph structure. In [34], the authors also provided a generalization error bound for GNNs using VC-dim. However, the error bound based on VC-dim is trivial and fails to capture the beneficial impact of degree normalization. Esser et al. [34] explored the generalization upper bound using transductive Rademacher complexity (TRC), examining the impact of graph convolutions and network architectures on minimizing generalization errors and gaining insights into the conditions that enhance learning through the graph structure. Tang et al. [35] derived the upper bound for the generalization gap of popular GNNs by establishing high probability learning guarantees using transductive SGD. However, their upper bound depends on the dimensionality of parameters due to the inclusion of the parameter dimension in the TRC-based technique utilized for deriving the bounds.
Algorithmic Stability. In addition to VC-dim and Rademacher complexity, the uniform stability of learning algorithms plays a crucial role in the examination of generalization. Expanding on the earlier discoveries of Hardt et al. [47], Verma and Zhang [36] provided evidence that one-layer GCNs possess uniform stability and established an upper bound on generalization that scales with the largest absolute eigenvalue of the graph filter operator. In a continuation of the research presented in [36], Liu et al. [42] contributed to a comprehensive theoretical understanding of single-layer GCNs by analyzing the stability of their proximal SGD algorithm with regularization. However, it should be noted that these studies are limited to single-layer GCNs. Ng and Yip [37] focused their investigation on the stability and generalization properties of GCNs within eigen-domains. However, their formulation of the two-layer GCN relies on the spectral graph convolution defined in [48], which necessitates the computationally expensive eigendecomposition of the graph Laplacian. As a result, their approach fails to offer meaningful theoretical insights for node classification in large-scale scenarios. In the context of this methodology, the studies most closely related to ours are [36] and [37]. However, unlike these works, our theoretical investigation specifically centers around deep GCNs without assuming a spectral-based formulation.
Other Methodologies. The groundbreaking study conducted in [38] marks the initial endeavor to establish generalization bounds for GCNs and message passing neural networks, employing the PAC-Bayesian approach. Drawing upon an improved PAC-Bayesian analysis, the work presented in [39] establishes comprehensive generalization bounds for GNNs, highlighting a notable correlation with the graph diffusion matrix. Furthermore, the neural tangent kernel (NTK) introduced by [40] provides a promising avenue for investigating the generalization of infinitely wide multi-layer GNNs trained by gradient descent, as studied in [41]. These works, however, concentrate on the graph classification problem, instead of the node classification task (under a transductive setting), which poses a greater challenge. In addition, there exist related studies that employ theoretical frameworks distinct from ours, such as the analysis of the generalization capability of GNNs trained using topology-sampling techniques [49] or on large random graphs [50]. For a comprehensive review of the emerging theoretical perspectives on characterizing the capabilities of GNNs, we recommend interested readers refer to [51].
3 Preliminaries and Notations
In this section, we provide a comprehensive description of the problem setup examined in this paper. Additionally, we present an extensive review of fundamental concepts related to uniform stability for training algorithms, which serve as the foundation for the subsequent analysis.
3.1 Deep Graph Convolutional Networks
Let $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ denote an undirected graph with a node set $\mathcal{V}$ of size $N$, an edge set $\mathcal{E}$, and the adjacency matrix $A \in \mathbb{R}^{N \times N}$. As usual, $L = D - A$ is denoted as its conventional graph Laplacian, where $D$ signifies the degree diagonal matrix. Furthermore, $g$ represents a graph filter and is defined as a function of $L$ (or its normalized versions). We denote by $\lambda^{g}_{\max}$ the maximum absolute eigenvalue of a symmetric filter $g$ or the maximum singular value of an asymmetric $g$.
We denote by $X \in \mathbb{R}^{N \times d}$ the input features ($d$ stands for the input dimension) and by $x_i$ the feature of node $i$, while $\|X\|_F$ represents the Frobenius norm of $X$. For the input feature $X$, a deep GCN with $K$ hidden layers updates the node representations as follows:
$$H^{(k)} = \sigma\big(g\, H^{(k-1)} W_k\big), \qquad k = 1, \ldots, K,$$
where $H^{(k)} \in \mathbb{R}^{N \times d_k}$ is the output feature matrix of the $k$-th layer with $H^{(0)} = X$ (and $d_0 = d$), and the matrix $W_k \in \mathbb{R}^{d_{k-1} \times d_k}$ represents the trainable parameter matrix specific to the $k$-th layer. The function $\sigma(\cdot)$ denotes a nonlinear activation function applied entrywise within the GCN model. For simplicity, we set the final output to a single dimension, that is, the final output labels of the nodes are given by
$$f(X) = \sigma\big(g\, H^{(K)} W_{K+1}\big), \qquad (1)$$
where $W_{K+1} \in \mathbb{R}^{d_K \times 1}$ and $f(X) \in \mathbb{R}^{N}$ collects the predicted labels of all nodes.
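To make the layer-wise update concrete, the following minimal NumPy sketch implements the forward pass just described, assuming the reconstruction above (a symmetrically normalized filter and tanh activation); all function and variable names are illustrative and not taken from the official implementation used later in Section 5.

```python
import numpy as np

def normalized_filter(A):
    """Symmetrically normalized filter built from the adjacency with self-loops
    (one common choice of g; used here purely for illustration)."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def deep_gcn_forward(g, X, hidden_weights, w_out, act=np.tanh):
    """Deep GCN forward pass: K hidden layers H^{(k)} = act(g H^{(k-1)} W_k),
    followed by a single-dimensional output layer as in (1)."""
    H = X
    for W in hidden_weights:
        H = act(g @ H @ W)
    return act(g @ H @ w_out)        # one output value per node

# toy example: 4-node path graph, 3-dim features, two hidden layers of width 8
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
hidden_weights = [rng.normal(scale=0.1, size=(3, 8)), rng.normal(scale=0.1, size=(8, 8))]
w_out = rng.normal(scale=0.1, size=(8, 1))
print(deep_gcn_forward(normalized_filter(A), X, hidden_weights, w_out).shape)  # (4, 1)
```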
3.2 The SGD Algorithm
We denote by $\mathcal{D}$ the unknown joint distribution of input features and output labels. Let
$$S = \{z_1 = (x_1, y_1), \ldots, z_m = (x_m, y_m)\}$$
be the training set sampled i.i.d. from $\mathcal{D}$ and $A_S$ be a learning algorithm for a deep GCN trained on $S$. For a deep GCN model (1) with parameters $\{W_k\}_{k=1}^{K+1}$, denote by $A_S(x_i) = e_i^{\top} f(X)$ the output of node $i$, where $f$ is determined by the corresponding learned parameters and $e_i$ is the indicator vector with respect to node $i$. For a loss function $\ell$, the generalization error or risk is defined by
$$R(A_S) := \mathbb{E}_{z = (x, y)}\big[\ell\big(A_S(x), y\big)\big],$$
where the expectation is taken over $\mathcal{D}$, and the empirical error or risk is
$$R_{\mathrm{emp}}(A_S) := \frac{1}{m} \sum_{i=1}^{m} \ell\big(A_S(x_i), y_i\big).$$
When considering a randomized algorithm $A_S$,
$$\mathbb{E}_{A}\big[R(A_S)\big] - \mathbb{E}_{A}\big[R_{\mathrm{emp}}(A_S)\big] \qquad (2)$$
gives the generalization gap between the generalization error and the empirical error, where the expectation $\mathbb{E}_{A}$ corresponds to the inherent randomness of $A_S$.
In this paper, $A_S$ is taken to be the algorithm given by SGD. Following the approach employed in [36], our analysis focuses solely on the randomness of $A_S$ arising from the SGD algorithm, while disregarding the stochasticity introduced by parameter initialization. The SGD algorithm for a deep GCN (1) aims to optimize its empirical error on a dataset $S$ by updating parameters iteratively. For $t = 1, \ldots, T$ and considering the parameters $\{W_k^{(t)}\}$ obtained after $t$ iterations, the $t$-th iteration of SGD involves randomly drawing a sample $z_{i_t} = (x_{i_t}, y_{i_t})$ from the dataset $S$. Subsequently, the parameters are updated as follows:
$$W_k^{(t+1)} = W_k^{(t)} - \eta\, \nabla_{W_k} \ell\big(A_S(x_{i_t}), y_{i_t}\big), \qquad k = 1, \ldots, K+1, \qquad (3)$$
with the learning rate $\eta > 0$.
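As a minimal sketch of the update rule (3) (the per-sample gradient routine and all names below are hypothetical stand-ins, not the actual GCN gradients derived in Appendix A.1):

```python
import numpy as np

def sgd_train(params, grad_fn, train_samples, lr, T, seed=0):
    """Plain SGD as in (3): at each of T iterations draw one training sample
    uniformly at random and take a gradient step on the per-sample loss.
    `grad_fn(params, sample)` must return gradients matching `params`."""
    rng = np.random.default_rng(seed)
    for _ in range(T):
        sample = train_samples[rng.integers(len(train_samples))]
        grads = grad_fn(params, sample)
        params = [W - lr * G for W, G in zip(params, grads)]
    return params

# toy usage: least-squares regression standing in for the per-node GCN loss
rng = np.random.default_rng(1)
X_toy, y_toy = rng.normal(size=(50, 3)), rng.normal(size=50)
samples = list(zip(X_toy, y_toy))
grad_fn = lambda p, s: [2.0 * (s[0] @ p[0] - s[1]) * s[0]]   # gradient of (x.w - y)^2
w_hat = sgd_train([np.zeros(3)], grad_fn, samples, lr=0.05, T=500)
print(w_hat[0])
```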
3.3 Uniform Stability
For the sake of estimating the generalization gap of $A_S$, we invoke the notion of uniform stability of $A_S$, as adopted in [52, 36].
Let
$$S^{\setminus i} = \{z_1, \ldots, z_{i-1}, z_{i+1}, \ldots, z_m\}$$
be the dataset obtained by removing the $i$-th data point of $S$, and
$$S^{i} = \{z_1, \ldots, z_{i-1}, z_i', z_{i+1}, \ldots, z_m\}$$
the dataset obtained by replacing the $i$-th data point of $S$ with an independent sample $z_i' \sim \mathcal{D}$. Then, the formal definition of uniform stability of a randomized algorithm is given in the following.
Definition 1 (Uniform Stability [36]).
A randomized algorithm $A_S$ is considered to be $\beta_m$-uniformly stable in relation to a loss function $\ell$ when it fulfills the following condition:
$$\sup_{S,\, z}\, \Big| \mathbb{E}_{A}\big[\ell\big(A_S(x), y\big)\big] - \mathbb{E}_{A}\big[\ell\big(A_{S^{\setminus i}}(x), y\big)\big] \Big| \le \beta_m, \qquad (4)$$
where $z = (x, y)$, $i \in \{1, \ldots, m\}$, and the expectation $\mathbb{E}_{A}$ is taken over the randomness of the algorithm.
As shown in Definition 1, $\beta_m$ indicates a bound on how much a variation of the training set can influence the output of $A_S$. It further implies the following property:
$$\sup_{S,\, z}\, \Big| \mathbb{E}_{A}\big[\ell\big(A_S(x), y\big)\big] - \mathbb{E}_{A}\big[\ell\big(A_{S^{i}}(x), y\big)\big] \Big| \le 2\beta_m, \qquad (5)$$
where $z = (x, y)$, $i \in \{1, \ldots, m\}$, and $S^{i}$ is the replaced dataset defined above.
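One can probe Definition 1 empirically on a toy learner by training on $S$ and on each replaced dataset $S^{i}$ and recording the largest resulting loss deviation. The sketch below uses ridge regression as a stable stand-in learner (all names are hypothetical; this is not the GCN training pipeline of Section 5).

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """A stable stand-in learner (ridge regression), used only to illustrate the definition."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def empirical_stability(X, y, x_new, y_new, z_test):
    """Largest loss deviation over all single-sample replacements S -> S^i,
    measured at a fixed test point z_test = (x_t, y_t)."""
    x_t, y_t = z_test
    base_loss = (x_t @ ridge_fit(X, y) - y_t) ** 2
    deviations = []
    for i in range(len(y)):
        X_i, y_i = X.copy(), y.copy()
        X_i[i], y_i[i] = x_new, y_new              # replace the i-th training point
        deviations.append(abs((x_t @ ridge_fit(X_i, y_i) - y_t) ** 2 - base_loss))
    return max(deviations)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)
print(empirical_stability(X, y, rng.normal(size=5), rng.normal(),
                          (rng.normal(size=5), 0.0)))
```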
Moreover, it is known that the uniform stability of a learning algorithm yields the following upper bound on the generalization gap (2).
Lemma 1 (Stability Guarantees [36]).
Suppose that a randomized algorithm $A_S$ is $\beta_m$-uniformly stable with a bounded loss function $\ell$. Then, with probability at least $1 - \delta$ over the random draw of $S$, with $\delta \in (0, 1)$, the following inequality holds for the expected value of the generalization gap:
where $M$ is an upper bound of the loss function $\ell$, i.e., $0 \le \ell(\cdot, \cdot) \le M$.
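For concreteness, a standard form of such a stability guarantee, following [52, 36], reads
$$\mathbb{E}_{A}\big[R(A_S)\big] - \mathbb{E}_{A}\big[R_{\mathrm{emp}}(A_S)\big] \;\le\; 2\beta_m + \big(4m\beta_m + M\big)\sqrt{\frac{\log(1/\delta)}{2m}},$$
so that whenever $\beta_m = \mathcal{O}(1/m)$, the right-hand side decays at the rate $\mathcal{O}(1/\sqrt{m})$.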
4 Main Results
This section presents an upper bound on the generalization gap, as defined in (2), for deep GCNs trained using the SGD algorithm. Notably, this generalization bound, derived from a meticulous analysis of the full back-propagation algorithm, demonstrates the insight gained through the utilization of SGD.
4.1 Assumptions
First, we make some assumptions about the considered deep GCN model (1), which are necessary to derive our results.
Assumption 1. The activation function $\sigma(\cdot)$ is assumed to satisfy the following:
1. $\alpha_\sigma$-Lipschitz continuity: $|\sigma(a) - \sigma(b)| \le \alpha_\sigma |a - b|$ for all $a, b \in \mathbb{R}$;
2. $\nu_\sigma$-smoothness: $|\sigma'(a) - \sigma'(b)| \le \nu_\sigma |a - b|$ for all $a, b \in \mathbb{R}$;
3. $\sigma(0) = 0$.
With these assumptions, the derivative of $\sigma$, denoted by $\sigma'$, is bounded, i.e., $|\sigma'(\cdot)| \le \alpha_\sigma$, and thus $\|\sigma(Z)\|_F \le \alpha_\sigma \|Z\|_F$ holds for any matrix $Z$ (with $\sigma$ applied entrywise). It can easily be verified that activation functions such as ELU and tanh satisfy the above assumptions.
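As a quick numerical sanity check (illustrative only), one can verify on a grid that tanh and ELU have first derivatives bounded by 1 and bounded second-order differences, in line with the Lipschitz and smoothness assumptions above:

```python
import numpy as np

def activation_constants(f, xs, eps=1e-4):
    """Estimate sup|f'| and sup|f''| on a grid via central differences."""
    d1 = (f(xs + eps) - f(xs - eps)) / (2 * eps)
    d2 = (f(xs + eps) - 2 * f(xs) + f(xs - eps)) / eps**2
    return np.max(np.abs(d1)), np.max(np.abs(d2))

elu = lambda x: np.where(x > 0, x, np.exp(np.minimum(x, 0.0)) - 1.0)
xs = np.linspace(-6, 6, 20001)
print("tanh:", activation_constants(np.tanh, xs))  # first derivative <= 1, second bounded
print("ELU :", activation_constants(elu, xs))      # first derivative <= 1, second bounded
```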
Assumption 2. Let $\hat{y}$ and $y$ be the predicted and true labels, respectively. We denote the loss function by $\ell(\hat{y}, y)$. Similar to [37], we adopt the following assumptions for $\ell$.
1. The loss function is continuous with respect to the variables $\hat{y}$ and $y$, and is continuously differentiable with respect to $\hat{y}$.
2. The loss function is $\alpha_\ell$-Lipschitz with respect to $\hat{y}$: $|\ell(\hat{y}_1, y) - \ell(\hat{y}_2, y)| \le \alpha_\ell |\hat{y}_1 - \hat{y}_2|$.
3. The loss function is $\nu_\ell$-smooth with respect to $\hat{y}$: $|\partial_{\hat{y}}\ell(\hat{y}_1, y) - \partial_{\hat{y}}\ell(\hat{y}_2, y)| \le \nu_\ell |\hat{y}_1 - \hat{y}_2|$.
With these assumptions, $|\partial_{\hat{y}}\ell(\hat{y}, y)| \le \alpha_\ell$, and the loss function is bounded, i.e., $\ell(\cdot, \cdot) \le M$.
Assumption 3. The learned parameters $\{W_k^{(t)}\}$ obtained during the training procedure with a finite number of iterations satisfy $\max_{k, t} \|W_k^{(t)}\|_2 \le B$ for some constant $B > 0$.
Table 1: Summary of the main notations.

| Notation | Description |
|---|---|
| $g$ | the graph filter operator used in the considered deep GCNs |
| $\lambda^{g}_{\max}$ | the 2-norm of $g$, i.e., $\lambda^{g}_{\max} = \Vert g \Vert_2$ |
| $\Vert X \Vert_F$ | the Frobenius norm of the input feature $X$ |
| $K$ | the number of hidden layers of the considered deep GCNs |
| $\alpha_\sigma$, $\nu_\sigma$ | parameters w.r.t. the continuity of the activation function $\sigma$ |
| $\alpha_\ell$, $\nu_\ell$ | parameters w.r.t. the continuity of the loss function $\ell$ |
| $M$ | the upper bound of the loss function $\ell$ |
| $A_S$ | the learning algorithm for deep GCNs trained on dataset $S$ |
| $m$ | the number of samples in the training dataset $S$ |
| $\eta$ | the learning rate of the SGD algorithm |
| $T$ | the number of iterations for training using the SGD algorithm |
| $B$ | the upper bound of the 2-norm of the parameters $W_k$ |
4.2 Generalization Gap
This section presents the main results of this paper. For convenience, the notations used in the result are summarized in Table 1. Under the assumptions made in Section 4.1, the bound on the generalization gap of deep GCNs is provided in the following theorem.
Theorem 1 (Generalization gap for deep GCNs).
Consider the deep GCN model defined in equation (1), which comprises $K$ hidden layers and utilizes $g$ as the graph filter operator. The model is trained on $S$ using SGD for $T$ iterations. Under Assumptions 1, 2 and 3 stated in Section 4.1, the following bound on the expected generalization gap holds with probability at least $1 - \delta$, where $\delta \in (0, 1)$:
(6)
where
(7)
(8)
Theorem 1 establishes a fundamental connection between the generalization gap and the parameters governing deep GCNs. It implies that deep GCNs trained with the SGD algorithm become increasingly stable as the number of samples grows, provided the stability upper bound approaches zero as the sample size tends to infinity. Specifically, if the value of $\lambda^{g}_{\max}$ (the largest absolute eigenvalue of a symmetric $g$ or the maximum singular value of an asymmetric $g$) remains unaffected by the graph size $N$, a generalization gap decaying at the order of $\mathcal{O}(1/\sqrt{m})$ is obtained. To compare with the result in [36], let us discuss at length the roles of $\lambda^{g}_{\max}$ and the number of hidden layers $K$ in the generalization gap.
According to (7) and (8), and . Therefore, the bound on the generalization gap of deep GCNs in Theorem 1 is
(9)
When $K = 0$, the GCN model (1) degenerates into the single-layer GCN model considered in [36]. In this case, according to (9), we have
(10)
which coincides with the result of [36].
Remarks. Based on (9), we present some observations regarding the impact of the filter $g$ and the number of hidden layers $K$ on the generalization capacity of deep GCNs in (1).
1. Normalized vs. Unnormalized Graph Filters: We examine the three most commonly utilized filters: an unnormalized filter and two normalized ones (see the numerical sketch following these remarks). For the unnormalized filter, its maximum absolute eigenvalue grows with the graph size. Consequently, as this eigenvalue grows with $N$, the upper bound indicated by (9) grows accordingly, leading to an impractical upper bound when the graph becomes infinitely large. On the contrary, for the two normalized filters, their largest absolute eigenvalues are bounded and independent of the graph size $N$. Therefore, both filters yield a diminishing generalization gap at a rate of $\mathcal{O}(1/\sqrt{m})$ as $m$ goes to infinity. This underscores the superior performance of normalized filters over their unnormalized counterparts in deep GCNs, and it is consistent with the findings in [36, 37].
2. The Role of the Depth $K$: It is evident that, when the remaining quantities are fixed, the upper bound (9) exhibits an exponential dependence on $K$. This observation implies that a larger value of $K$ leads to an increase in the upper bound of the generalization gap, thereby offering valuable insights for the architectural design of deep GCNs. This finding diverges from those presented in [36, 37], as these studies do not account for generic deep GCNs and overlook the significance of the depth $K$.
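The contrast described in the first remark can be illustrated numerically: for Erdős-Rényi graphs of growing size, the spectral norm of the raw adjacency-based filter grows with $N$, while the symmetrically normalized and random walk filters remain of order one. The construction below is a self-contained illustration (with arbitrary parameter choices), not the experimental pipeline of Section 5.

```python
import numpy as np

def filter_spectral_norms(N, p=0.05, seed=0):
    """Compare ||g||_2 for three filters on an Erdos-Renyi graph G(N, p)."""
    rng = np.random.default_rng(seed)
    A = np.triu(rng.random((N, N)) < p, k=1).astype(float)
    A = A + A.T                                    # symmetric adjacency, no self-loops
    A_hat = A + np.eye(N)                          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    g_sym = D_inv_sqrt @ A_hat @ D_inv_sqrt        # symmetrically normalized filter
    g_rw = np.diag(1.0 / d) @ A_hat                # random walk filter
    spec = lambda M: np.linalg.norm(M, 2)
    return spec(A_hat), spec(g_sym), spec(g_rw)

for N in (100, 400, 1600):
    raw, sym, rw = filter_spectral_norms(N)
    print(f"N={N:5d}  unnormalized: {raw:7.2f}   normalized: {sym:5.2f}   random walk: {rw:5.2f}")
```

On such graphs the unnormalized value grows roughly with the expected degree, whereas the two normalized filters stay close to 1, matching the remark above.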
Furthermore, based on Theorem 1, we give a brief analysis of the impact of the width $d_k$ of the $k$-th layer on the generalization. The impact of $d_k$ on the generalization is reflected in its impact on $B$, the upper bound of the 2-norm of the parameters. More specifically, let us consider the case where the parameters belong to the set
$$\mathcal{W}_k = \big\{ W \in \mathbb{R}^{d_{k-1} \times d_k} : |W_{ij}| \le B_{\max} \ \text{for all}\ i, j \big\},$$
i.e., $\mathcal{W}_k$ is the collection of all matrices whose elements' absolute values are all at most $B_{\max}$. In this case, for $W_k \in \mathcal{W}_k$, we have
$$\|W_k\|_2 \le \|W_k\|_F \le \sqrt{d_{k-1} d_k}\, B_{\max}.$$
Therefore, a larger $d_k$ (i.e., the width of the $k$-th layer) results in a larger upper bound of $\|W_k\|_2$, which implies that a larger width results in a larger $B$ (see Assumption 3 in Section 4.1). Finally, Theorem 1 indicates that a larger $B$ leads to a larger bound on the generalization gap, and thus we conclude that a larger width leads to a larger bound on the generalization gap. To justify this argument, we add some experimental studies in Section 5; the empirical results are consistent with our analysis.
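A small numerical illustration of this width argument (the inequality $\Vert W \Vert_2 \le \Vert W \Vert_F \le \sqrt{d_{k-1} d_k}\, B_{\max}$ is assumed in the form sketched above; the matrix sizes and $B_{\max}$ below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
B_max = 0.5
for width in (16, 64, 256):
    W = rng.uniform(-B_max, B_max, size=(width, width))   # entries bounded by B_max
    bound = np.sqrt(width * width) * B_max                 # sqrt(d_{k-1} d_k) * B_max
    print(f"width {width:4d}: ||W||_2 = {np.linalg.norm(W, 2):7.3f}   bound = {bound:7.3f}")
```

Both the spectral norm and its entrywise bound grow with the layer width, which is the mechanism behind the larger $B$ (and hence the larger generalization bound) discussed above.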
Table 2 offers a concise summary of various upper bounds on the generalization gap derived through the application of uniform stability. From Table 2, we can see that all the works derive a generalization gap decaying at the order of $\mathcal{O}(1/\sqrt{m})$. However, compared to the other three works, which only consider shallow GCNs, our work explores the case of deep GCNs. We should point out that the extension from single-layer GCNs to deep GCNs is not trivial. To derive the results for deep GCNs, we tackle two significant challenges that are unique to deep GCNs and non-existent in single-layer models. The first challenge is the derivation of the gradient of the final output with respect to the learnable parameters across multiple layers, which requires determining how the gradient of the overall error of a GCN is shared among neurons in different hidden layers. In particular, in Appendix A.1, we provide a recursive formula to compute the related gradients. The second challenge is the evaluation of gradient variations between GCNs trained on different datasets. In the single-layer case, since the input feature is the same, the variation of the related gradient depends only on the variations of the learnable parameters. In the case of deep GCNs, however, the variation of the related gradients also depends on the variations of the gradients of the final output with respect to the hidden layer outputs. Please see Lemma 7 and its proof for details.
4.3 Stability Upper Bound
In this subsection, we establish the uniform stability of SGD for deep GCNs, which is the key to further proving Theorem 1.
Theorem 2 (Uniform stability of deep GCNs).
Let us consider the deep GCNs defined by equation (1). These networks are trained on a dataset $S$ using the SGD algorithm for a total of $T$ iterations, and the resulting learning algorithm is denoted as $A_S$. Assume that Assumptions 1, 2 and 3 stated in Section 4.1 are satisfied. Then, $A_S$ is $\beta_m$-uniformly stable, with $\beta_m$ satisfying the following condition:
(11)
where
With a straightforward calculation, one can see that
which decays at the rate of $\mathcal{O}(1/m)$ as $m$ tends to infinity. Together with Lemma 1, this yields the result of Theorem 1.
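To make this last step explicit, plugging a stability constant of order $\mathcal{O}(1/m)$ into the standard stability-to-generalization bound recalled after Lemma 1 (a reconstruction, not the paper's exact constants) gives
$$2\beta_m + \big(4m\beta_m + M\big)\sqrt{\frac{\log(1/\delta)}{2m}} \;=\; \mathcal{O}\!\Big(\frac{1}{m}\Big) + \mathcal{O}(1)\cdot \mathcal{O}\!\Big(\frac{1}{\sqrt{m}}\Big) \;=\; \mathcal{O}\!\Big(\frac{1}{\sqrt{m}}\Big),$$
which is precisely the $\mathcal{O}(1/\sqrt{m})$ decay of the generalization gap discussed in Section 4.2.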
The proof of Theorem 2 proceeds in two main steps:
1. Step 1: We begin by bounding the stability of deep GCNs with respect to perturbations of the learned parameters caused by changes in the training set. The result is given in Lemma 2.
2. Step 2: Next, we provide a bound for the perturbation of the learned parameters. The result is presented in Theorem 3.
Consider a set of deep GCNs defined by (1), trained using SGD for $T$ iterations. Let the parameters of two GCNs trained on $S$ and $S^{i}$ after $t$ iterations be given, respectively. We take their difference to be the perturbation of the learned parameters and define
(12)
In the following lemma, it is shown that the stability of can be bounded by .
Lemma 2.
Let and be the learnt parameters of two GCNs trained on and using SGD in the -th iteration with , and . Suppose that all the assumptions made in Section 4.1 hold. Then, after iterations, we have that for any taken from ,
(13)
where and .
(14)
So, to estimate the uniform stability of , we need to bound . Now, let us recall (3) for parameter updating, for training on ,
, and for training on ,
, where and are the samples drawn at the -th SGD iteration. Therefore, has the following iterations:
and for ,
Then, we provide two Lemmas to bound
and
Lemma 3.
Lemma 4.
Consider two GCNs with parameters and , respectively. Then, the following holds for any two samples and :
(17)
for . Note that .
The proofs of Lemma 3 and Lemma 4 are given in A.3. Using Lemma 3 and Lemma 4, we now provide a bound for .
Theorem 3.
5 Experiments
In this section, we conduct some empirical studies using three benchmark datasets commonly utilized for the node classification task, namely Cora, Citeseer, and Pubmed [53, 54]. Table 3 summarizes the basic statistics of these datasets. In our experiments, we follow the standard transductive learning problem formulation and the training/test setting used in [55]. To rigorously test our theoretical insights, our experiments aim to answer the following key questions:
1. Q1: How does the design of the graph filter $g$ influence the generalization gap?
2. Q2: How does the generalization gap change with the number of hidden layers $K$?
3. Q3: How does the width (i.e., the number of hidden units per layer) affect the generalization gap?
To address each question, we empirically estimate the generalization gap by calculating the absolute difference in loss between training and test samples. We adopt the official TensorFlow implementation of GCN [55] (https://github.com/tkipf/gcn) and the Adam optimizer with default settings. The number of training iterations is fixed to the same value for all the simulations.
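Concretely, the gap reported below is the absolute difference between the average training loss and the average test loss. A minimal, framework-agnostic sketch of this estimate (with hypothetical helper names) is:

```python
import numpy as np

def empirical_generalization_gap(loss_fn, preds, labels, train_idx, test_idx):
    """|mean training loss - mean test loss| as an estimate of the generalization gap.
    `preds` and `labels` cover all nodes; the index sets select the transductive splits."""
    train_loss = np.mean([loss_fn(preds[i], labels[i]) for i in train_idx])
    test_loss = np.mean([loss_fn(preds[i], labels[i]) for i in test_idx])
    return abs(train_loss - test_loss)

# toy usage with a squared loss and random predictions
rng = np.random.default_rng(0)
preds, labels = rng.normal(size=200), rng.normal(size=200)
squared_loss = lambda p, y: (p - y) ** 2
print(empirical_generalization_gap(squared_loss, preds, labels, range(140), range(140, 200)))
```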
Table 3: Basic statistics of the benchmark datasets.

| | Cora | Citeseer | Pubmed |
|---|---|---|---|
| # Nodes | | | |
| # Edges | | | |
| # Features | | | |
| # Classes | | | |
| Label Rate | | | |
Results and Discussion for Q1. We analyze two types of graph filters in our study: 1) the normalized graph filter $\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$, with $\tilde{A} = A + I$ and $\tilde{D}$ the corresponding degree matrix (first employed in the vanilla GCN [55] and subsequently widely used in follow-up works on GCNs), and 2) the random walk filter. To match our theoretical findings, we compare the performance of two 5-layer GCN models (with the same width for each layer), each employing one of these filters. Table 4 presents the numerical records of the test loss, the training loss, the resulting generalization gap, and $\lambda^{g}_{\max}$ for both filters. The results clearly indicate that the 5-layer GCN with the normalized graph filter exhibits a smaller generalization gap compared to the one with the random walk filter. Furthermore, Figure 1 illustrates the performance of each filter across the different datasets over the training iterations, demonstrating the superior behavior of the normalized graph filter. Overall, the empirical findings in Table 4 and Figure 1 align well with our theoretical finding regarding the impact of $\lambda^{g}_{\max}$ on the generalization gap.
Table 4: Test loss, training loss, generalization gap, and $\lambda^{g}_{\max}$ for 5-layer GCNs with the normalized graph filter and the random walk filter.

| Dataset | Graph filter | Test loss | Training loss | Generalization gap | $\lambda^{g}_{\max}$ |
|---|---|---|---|---|---|
| Cora | normalized | 1.488 | 0.136 | 1.352 | 1 |
| Cora | random walk | 1.914 | 0.118 | 1.796 | 4.746 |
| Citeseer | normalized | 2.896 | 0.235 | 2.661 | 1 |
| Citeseer | random walk | 3.206 | 0.145 | 3.061 | 4.690 |
| Pubmed | normalized | 1.594 | 0.023 | 1.571 | 1 |
| Pubmed | random walk | 2.534 | 0.037 | 2.497 | 7.131 |
Figure 1: Performance comparison of the normalized graph filter and the random walk filter over training iterations on Cora, Citeseer, and Pubmed (one panel per dataset).
Results and Discussion for Q2. In this experimental study, we try different settings of $K$, i.e., the number of hidden layers. Specifically, for each choice of $K$, we compare the performance of two $K$-layer GCNs (with the same width for each layer): one employing the normalized graph filter and one using the random walk filter. Figure 2 shows the performance comparison results for each $K$. It clearly demonstrates that, consistent with the aforementioned results for Q1, the GCN with the normalized graph filter (with smaller $\lambda^{g}_{\max}$) consistently exhibits smaller generalization gaps compared to the one with the random walk filter. It is also observed that the generalization gap becomes larger as $K$ increases, further validating our theoretical assertions regarding the influence of $K$ on the model's generalization gap.
Figure 2: Performance comparison of the normalized graph filter and the random walk filter for different numbers of hidden layers on Cora, Citeseer, and Pubmed (one panel per dataset).
Results and Discussion for Q3. To empirically investigate the impact of the width (i.e., the number of hidden units) on the generalization gap, we conduct additional experiments using a 5-layer GCN equipped with the normalized graph filter. The experiments specifically compare a 5-layer GCN configured with a larger width for each layer against the previously studied, narrower model, as illustrated in Figure 3. This setup allows for a direct comparison under varying network configurations, providing insights into how changes in the number of hidden units influence the generalization gap. As demonstrated in Figure 3, across all the datasets examined, the narrower GCN consistently exhibits smaller generalization gaps than the wider one. This observation is in harmony with our theoretical explanation presented after Theorem 1, namely that the factor $B$ (i.e., the upper bound of the 2-norm of the parameters $W_k$) directly influences the constants in the upper bound of the generalization gap.
Figure 3: Generalization gap comparison for 5-layer GCNs of different widths (with the normalized graph filter) on Cora, Citeseer, and Pubmed (one panel per dataset).
6 Conclusion and Further Remarks
This paper explores the generalization of deep GCNs by providing an upper bound on their generalization gap. Our generalization bound is obtained based on the algorithmic stability of deep GCNs trained by the SGD algorithm. Our analysis demonstrates that the algorithmic stability of deep GCNs is contingent upon two factors: the largest absolute eigenvalue (or maximum singular value) of graph filter operators and the number of layers utilized. In particular, if the aforementioned eigenvalue (or singular value) remains invariant regardless of changes in the graph size, deep GCNs exhibit robust uniform stability, resulting in an enhanced generalization capability. Additionally, our results suggest that a greater number of layers can increase the generalization gap and subsequently degrade the performance of deep GCNs. This provides guidance for designing well-performing deep GCNs with a proper number of layers [56]. Most importantly, the result of single-layer GCNs in [36] can be regarded as a special case of our results in deep GCNs without hidden layers.
While our study is primarily focused on exploring the fundamental principles of generalizability and stability in the context of a simple deep GCN model framework, it can offer preliminary insights into several pressing issues that are the subject of recent attention in the GNN domain. These include: i) the over-smoothing problem, which stands as a pivotal challenge in the development of deep GNNs [57, 58], and ii) the design of advanced GNNs tailored for heterophilic graphs, characterized by nodes whose labels significantly diverge from those of their neighbors [59, 60]. Some further remarks on these two issues are as follows:
1. We note that, given a trivial deep GCN model characterized by over-smoothed node embeddings (which typically result in significant training errors), our theoretical upper bound still holds; that is, for a given graph filter, an increase in the number of layers could potentially increase this upper bound in a probabilistic sense. This motivates the exploration of advanced deep GCN models that incorporate mechanisms to counteract over-smoothing, such as the skip connection trick used in GCNII [61] and its follow-up works. In both theory and practice, reducing the maximum absolute eigenvalue of graph filter operators is achievable through the strategic implementation of skip connections across layers, which can potentially reduce the generalization gap. From this perspective, we anticipate that our findings will inspire further studies into advanced deep GCN structures, especially those designed to mitigate the over-smoothing issue, offering a new direction for both theoretical exploration and practical application.
2. Expanding our theoretical insights to include specific models tailored for heterophilic graphs is valuable but requires deliberate effort. This involves assessing the impact of the homophily/heterophily ratio on the input graph signal, and incorporating this ratio into the upper bound estimation. It is important to clarify that, although our current empirical study considers two types of low-pass filters, the scope of our theoretical findings is not restricted to low-pass scenarios alone. To ensure a consistent and fair empirical evaluation, as in [36], we utilized benchmark graph datasets (Cora, Citeseer, Pubmed) known for their homophilic properties in node classification tasks. However, for analyses involving high-pass filters, it would be appropriate to engage with benchmark datasets representing heterophilic graphs (such as Texas, Wisconsin, Cornell, etc.). We refer readers interested in delving deeper into this topic to the recent work [46], in which the authors use analytical tools from statistical physics and random matrix theory to precisely characterize generalization in simple graph convolution networks on the contextual stochastic block model (CSBM). This research, though based on specific assumptions on the graph signal, can inspire further refinements in our theoretical framework, outcomes, and methodologies, taking into account unique graph signal characteristics (e.g., homophily/heterophily) and model complexities (e.g., low-pass/high-pass filters, depth and width of the network architecture).
In terms of future research directions, it would be valuable to extend the theoretical analysis presented in this study to encompass other commonly used learning algorithms in graph neural networks, moving beyond the scope of SGD. Moreover, our theoretical results offer insights that can inform the exploration of various strategies to enhance the generalization capability of deep graph neural networks. This could involve investigating the efficacy of regularization techniques, conducting advanced network architecture searches, or developing adaptive graph filters. Additionally, a significant area for future investigation is to establish the potential connection between the model’s stability and generalization, and the issues of over-smoothing and over-squashing encountered in deep graph neural networks. Understanding these interrelationships can potentially contribute to the development of novel techniques and algorithms that address these challenges and improve the overall effectiveness of deep graph neural networks in dealing with more complex tasks.
Appendix A Proofs
The proofs of our main results are given in this section. We first make some statements about the notations used in the paper. The transpose of a matrix is written with the superscript $\top$; the entries of a matrix are denoted with subscript pairs; however, when it helps to avoid confusion, an alternative entry notation will be used. We use $\|\cdot\|_2$ for the 2-norm of a matrix or vector and $\|\cdot\|_F$ for the Frobenius norm. The unit pulse signal at node $i$ is the vector whose elements are all 0 except the $i$-th one, which is 1. For a real-valued function of a matrix variable, its gradient with respect to that variable is denoted as
To make it easier to understand the derivation of our results, we first provide the following inequalities, which will be used frequently in the derivation.
For any matrix , , and , we have:
1. . To prove this, let be the SVD of , where and are both orthogonal matrices. Then,
2. To show this, note that
Then, the proof is complete using the first inequality.
3. , where is the maximum absolute value of the entries of . Note that holds true because . Furthermore,
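These inequalities can be spot-checked numerically. The snippet below verifies three representative bounds of this kind, $\Vert AB \Vert_F \le \Vert A \Vert_2 \Vert B \Vert_F$, $\Vert A \Vert_2 \le \Vert A \Vert_F$, and $\Vert A \Vert_F \le \sqrt{pq}\, \max_{ij} |a_{ij}|$ for $A \in \mathbb{R}^{p \times q}$, on random matrices; it is an illustration only and is assumed (rather than taken from the proofs above) to be representative of the bounds used there.

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.normal(size=(5, 7)), rng.normal(size=(7, 3))
fro = np.linalg.norm                       # default matrix norm is Frobenius
spec = lambda M: np.linalg.norm(M, 2)      # largest singular value

print(fro(A @ B) <= spec(A) * fro(B))      # ||AB||_F <= ||A||_2 ||B||_F
print(spec(A) <= fro(A))                   # ||A||_2  <= ||A||_F
p, q = A.shape
print(fro(A) <= np.sqrt(p * q) * np.abs(A).max())   # ||A||_F <= sqrt(pq) max|a_ij|
```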
A.1 Gradient computation for SGD
To work with the SGD algorithm, we provide a recursive formula for the gradient of the final output at node in the GCNs model (1) with respect to the learnable parameters.
1. For the final layer,
(19)
2. For the hidden layers,
(20)
where and
(21)
with
(22)
The notation $\odot$ represents the Hadamard product of two matrices. Equations (19) and (22) are easy to verify, while (20) and (21) are not. In the following, a detailed procedure is provided to derive (20) and (21).
First, since ,
and
Based on the above recursive formula, we prove the following lemma recursively.
Lemma 5.
A.2 Proof of Lemma 2
To prove Lemma 2, we first provide the following lemma to show the variation of output in each layer for two GCNs with different learned parameters and . Let and be their output of the hidden layer, as well as and the final output of node . The following lemma provides a bound of and based on .
Lemma 6.
Consider two GCNs with parameters and , respectively. Then, we obtain the following results for their variations.
-
1.
Their variation of outputs in hidden layers satisfies
(26) -
2.
Furthermore, for the final output of node ,
(27)
Proof.
A.3 Proof of Lemma 3 and Lemma 4
Lemma 7.
Consider two GCNs with parameters and , respectively. Then, their variation of gradients of with respect to satisfies
(28)
and for ,
(29)
where
(30)
Proof. First, according to the proof of (26) and (27), the following holds true for :
(31)
where . Furthermore,
Combining (23), (26) and (31),
which completes the proof of (28). Next, we turn to prove (29). First, for ,
By (23), (24) and (26), we have
(32)
where . Now, we need to bound .
where . By (31),
Combining (24), we have
It is easy to see that
Therefore,
Furthermore, since
we have
Finally, based on the above recursive formula of , we have
(33)
where . Finally, substituting (33) into (32),
which completes the proof of (29).
A.3.1 Proof of Lemma 3
A.3.2 Proof of Lemma 4
A.4 Proof of Theorem 3
Note that with probability and with probability . By considering (3) and incorporating the probability of the two scenarios presented in Lemmas 3 and 4, using and to denote and , respectively, we have:
Similarly, for ,
Then,
where . By (30), we have , as defined in (8). Finally, since
This completes the proof of Theorem 3.
Acknowledgment
The work of Ming Li was supported in part by the National Natural Science Foundation of China (No. U21A20473, No. 62172370). The work of Han Feng was supported in part by the Research Grants Council of Hong Kong Special Administrative Region, China, under Project CityU 11303821 and Project CityU 11315522. The work of Xiaosheng Zhuang was supported in part by the Research Grants Council of Hong Kong Special Administrative Region, China, under Project CityU 11309122 and Project CityU 11302023. The authors also thank Dr. Yi Wang (ZJNU) and Dr. Xianchen Zhou (NUDT) for discussions and dedicated efforts regarding the experimental studies.
References
- [1] Y. Liang, F. Meng, Y. Zhang, Y. Chen, J. Xu, J. Zhou, Emotional conversation generation with heterogeneous graph neural network, Artificial Intelligence 308 (2022) 103714.
- [2] Y. Ma, J. Tang, Deep learning on graphs, Cambridge University Press, 2021.
- [3] M. Gori, G. Monfardini, F. Scarselli, A new model for learning in graph domains, in: Proceedings of the IEEE International Joint Conference on Neural Networks, IEEE, 2005, pp. 729–734.
- [4] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, G. Monfardini, The graph neural network model, IEEE Transactions on Neural Networks 20 (1) (2008) 61–80.
- [5] A. Micheli, Neural network for graphs: A contextual constructive approach, IEEE Transactions on Neural Networks 20 (3) (2009) 498–511.
- [6] K. Yao, J. Liang, J. Liang, M. Li, F. Cao, Multi-view graph convolutional networks with attention mechanism, Artificial Intelligence 307 (2022) 103708.
- [7] W. L. Hamilton, Graph representation learning, Morgan & Claypool, 2020.
- [8] L. Wu, P. Cui, J. Pei, L. Zhao, Graph Neural Networks: Foundations, Frontiers, and Applications, Springer, 2022.
- [9] F. M. Bianchi, D. Grattarola, L. Livi, C. Alippi, Graph neural networks with convolutional arma filters, IEEE Transactions on Pattern Analysis and Machine Intelligence 44 (7) (2021) 3496–3507.
- [10] B. Jiang, B. Wang, S. Chen, J. Tang, B. Luo, Graph neural network meets sparse representation: Graph sparse neural networks via exclusive group lasso, IEEE Transactions on Pattern Analysis and Machine Intelligence 45 (10) (2023) 12692–12698.
- [11] H. Zhang, Y. Zhu, X. Li, Decouple graph neural networks: Train multiple simple gnns simultaneously instead of one, IEEE Transactions on Pattern Analysis and Machine Intelligence, DOI: 10.1109/TPAMI.2024.3392782.
- [12] D. Bacciu, F. Errica, A. Micheli, M. Podda, A gentle introduction to deep learning for graphs, Neural Networks 129 (2020) 203–221.
- [13] J. Zhou, G. Cui, S. Hu, Z. Zhang, C. Yang, Z. Liu, L. Wang, C. Li, M. Sun, Graph neural networks: A review of methods and applications, AI Open 1 (2020) 57–81.
- [14] Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, P. S. Yu, A comprehensive survey on graph neural networks, IEEE Transactions on Neural Networks and Learning Systems 32 (1) (2021) 4–24.
- [15] Z. Zhang, P. Cui, W. Zhu, Deep learning on graphs: A survey, IEEE Transactions on Knowledge and Data Engineering 34 (1) (2022) 249–270.
- [16] Q. Li, Z. Han, X.-M. Wu, Deeper insights into graph convolutional networks for semi-supervised learning, in: Proceedings of the 32nd AAAI Conference on Artificial Intelligence, 2018, pp. 3538–3545.
- [17] L. Zhao, L. Akoglu, PairNorm: Tackling oversmoothing in GNNs, in: International Conference on Learning Representations, 2020.
- [18] K. Oono, T. Suzuki, Graph neural networks exponentially lose expressive power for node classification, in: International Conference on Learning Representations, 2020.
- [19] Y. Rong, W. Huang, T. Xu, J. Huang, DropEdge: Towards deep graph convolutional networks on node classification, in: International Conference on Learning Representations, 2020.
- [20] H. Yuan, J. Tang, X. Hu, S. Ji, XGNN: Towards model-level explanations of graph neural networks, in: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2020, pp. 430–438.
- [21] H. Yuan, H. Yu, J. Wang, K. Li, S. Ji, On explainability of graph neural networks via subgraph explorations, in: Proceedings of the 38th International Conference on Machine Learning, 2021, pp. 12241–12252.
- [22] H. Yuan, H. Yu, S. Gui, S. Ji, Explainability in graph neural networks: A taxonomic survey, IEEE Transactions on Pattern Analysis and Machine Intelligence 45 (5) (2022) 5782–5799.
- [23] T. Schnake, O. Eberle, J. Lederer, S. Nakajima, K. T. Schütt, K.-R. Müller, G. Montavon, Higher-order explanations of graph neural networks via relevant walks, IEEE Transactions on Pattern Analysis and Machine Intelligence 44 (11) (2021) 7581–7596.
- [24] G. Bouritsas, F. Frasca, S. Zafeiriou, M. M. Bronstein, Improving graph neural network expressivity via subgraph isomorphism counting, IEEE Transactions on Pattern Analysis and Machine Intelligence 45 (1) (2022) 657–668.
- [25] K. Xu, W. Hu, J. Leskovec, S. Jegelka, How powerful are graph neural networks?, in: International Conference on Learning Representations, 2019.
- [26] Z. Chen, S. Villar, L. Chen, J. Bruna, On the equivalence between graph isomorphism testing and function approximation with GNNs, in: Proceedings of the 33rd International Conference on Neural Information Processing Systems, 2019, pp. 15868–15876.
- [27] N. Dehmamy, A.-L. Barabási, R. Yu, Understanding the representation power of graph neural networks in learning graph topology, in: Proceedings of the 33rd International Conference on Neural Information Processing Systems, 2019, pp. 15413–15423.
- [28] S. S. Du, K. Hou, R. R. Salakhutdinov, B. Poczos, R. Wang, K. Xu, Graph neural tangent kernel: Fusing graph neural networks with graph kernels, in: Proceedings of the 33rd International Conference on Neural Information Processing Systems, 2019, pp. 5723–5733.
- [29] S. Zhang, M. Wang, S. Liu, P.-Y. Chen, J. Xiong, Fast learning of graph neural networks with guaranteed generalizability: one-hidden-layer case, in: Proceedings of the 37th International Conference on Machine Learning, 2020, pp. 11268–11277.
- [30] F. Scarselli, A. C. Tsoi, M. Hagenbuchner, The Vapnik-Chervonenkis dimension of graph and recursive neural networks, Neural Networks 108 (2018) 248–259.
- [31] V. Garg, S. Jegelka, T. Jaakkola, Generalization and representational limits of graph neural networks, in: Proceedings of the 37 th International Conference on Machine Learning, 2020, pp. 3419–3430.
- [32] K. Oono, T. Suzuki, Optimization and generalization analysis of transduction through gradient boosting and application to multi-scale graph neural networks, in: Proceedings of the 34th International Conference on Neural Information Processing Systems, 2020, pp. 18917–18930.
- [33] S. Lv, Generalization bounds for graph convolutional neural networks via Rademacher complexity, arXiv preprint arXiv:2102.10234.
- [34] P. Esser, L. Chennuru Vankadara, D. Ghoshdastidar, Learning theory can (sometimes) explain generalisation in graph neural networks, in: Proceedings of the 35th International Conference on Neural Information Processing Systems, 2021, pp. 27043–27056.
- [35] H. Tang, Y. Liu, Towards understanding the generalization of graph neural networks, in: Proceedings of the 40th International Conference on Machine Learning, 2023, pp. 33674–33719.
- [36] S. Verma, Z.-L. Zhang, Stability and generalization of graph convolutional neural networks, in: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2019, pp. 1539–1548.
- [37] M. K. Ng, A. Yip, Stability and generalization of graph convolutional networks in eigen-domains, Analysis and Applications 21 (03) (2023) 819–840.
- [38] R. Liao, R. Urtasun, R. Zemel, A PAC-Bayesian approach to generalization bounds for graph neural networks, in: International Conference on Learning Representations, 2021.
- [39] H. Ju, D. Li, A. Sharma, H. R. Zhang, Generalization in graph neural networks: Improved PAC-Bayesian bounds on graph diffusion, in: Proceedings of the 26th International Conference on Artificial Intelligence and Statistics, 2023, pp. 6314–6341.
- [40] A. Jacot, F. Gabriel, C. Hongler, Neural tangent kernel: Convergence and generalization in neural networks, in: Proceedings of the 32nd International Conference on Neural Information Processing Systems, 2018, pp. 8580–8589.
- [41] S. S. Du, K. Hou, R. R. Salakhutdinov, B. Poczos, R. Wang, K. Xu, Graph neural tangent kernel: Fusing graph neural networks with graph kernels, in: Proceedings of the 33rd International Conference on Neural Information Processing Systems, 2019, pp. 5723–5733.
- [42] S. Liu, L. Wei, S. Lv, M. Li, Stability and generalization of -regularized stochastic learning for GCN, in: Proceedings of the 32nd International Joint Conference on Artificial Intelligence, 2023, pp. 5685–5693.
- [43] C. Huang, M. Li, F. Cao, H. Fujita, Z. Li, X. Wu, Are graph convolutional networks with random weights feasible?, IEEE Transactions on Pattern Analysis and Machine Intelligence 45 (3) (2023) 2751–2768.
- [44] K. Xu, J. Li, M. Zhang, S. S. Du, K.-i. Kawarabayashi, S. Jegelka, What can neural networks reason about?, in: International Conference on Learning Representations, 2020.
- [45] K. Xu, M. Zhang, J. Li, S. S. Du, K.-i. Kawarabayashi, S. Jegelka, How neural networks extrapolate: From feedforward to graph neural networks, in: International Conference on Learning Representations, 2021.
- [46] C. Shi, L. Pan, H. Hu, I. Dokmanić, Homophily modulates double descent generalization in graph convolution networks, Proceedings of the National Academy of Sciences 121 (8) (2024) e2309504121.
- [47] M. Hardt, B. Recht, Y. Singer, Train faster, generalize better: Stability of stochastic gradient descent, in: Proceedings of The 33rd International Conference on Machine Learning, 2016, pp. 1225–1234.
- [48] J. Bruna, W. Zaremba, A. Szlam, Y. LeCun, Spectral networks and locally connected networks on graphs, in: International Conference on Learning Representations, 2014.
- [49] H. Li, M. Wang, S. Liu, P.-Y. Chen, J. Xiong, Generalization guarantee of training graph convolutional networks with graph topology sampling, in: Proceedings of The 39th International Conference on Machine Learning, 2022, pp. 13014–13051.
- [50] N. Keriven, A. Bietti, S. Vaiter, Convergence and stability of graph convolutional networks on large random graphs, in: Proceedings of the 34th International Conference on Neural Information Processing Systems, 2020, pp. 21512–21523.
- [51] S. Jegelka, Theory of graph neural networks: Representation and learning, arXiv preprint arXiv:2204.07697.
- [52] A. Elisseeff, T. Evgeniou, M. Pontil, L. P. Kaelbing, Stability of randomized learning algorithms., Journal of Machine Learning Research 6 (1).
- [53] P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Galligher, T. Eliassi-Rad, Collective classification in network data, AI Magazine 29 (3) (2008) 93–93.
- [54] Z. Yang, W. Cohen, R. Salakhudinov, Revisiting semi-supervised learning with graph embeddings, in: Proceedings of the International Conference on Machine Learning, 2016, pp. 40–48.
- [55] T. N. Kipf, M. Welling, Semi-supervised classification with graph convolutional networks, in: International Conference on Learning Representations, 2017.
- [56] G. Li, C. Xiong, G. Qian, A. Thabet, B. Ghanem, Deepergcn: training deeper gcns with generalized aggregation functions, IEEE Transactions on Pattern Analysis and Machine Intelligence 45 (11) (2023) 13024–13034.
- [57] T. K. Rusch, M. M. Bronstein, S. Mishra, A survey on oversmoothing in graph neural networks, arXiv preprint arXiv:2303.10993.
- [58] T. Chen, K. Zhou, K. Duan, W. Zheng, P. Wang, X. Hu, Z. Wang, Bag of tricks for training deeper graph neural networks: A comprehensive benchmark study, IEEE Transactions on Pattern Analysis and Machine Intelligence 45 (3) (2022) 2769–2781.
- [59] X. Zheng, Y. Liu, S. Pan, M. Zhang, D. Jin, P. S. Yu, Graph neural networks for graphs with heterophily: A survey, arXiv preprint arXiv:2202.07082.
- [60] J. Zhu, Y. Yan, M. Heimann, L. Zhao, L. Akoglu, D. Koutra, Heterophily and graph neural networks: Past, present and future, IEEE Data Engineering Bulletin 47 (2) (2023) 10–32.
- [61] M. Chen, Z. Wei, Z. Huang, B. Ding, Y. Li, Simple and deep graph convolutional networks, in: Proceedings of the International Conference on Machine Learning, 2020, pp. 1725–1735.