
Improving Expressive Power of Spectral Graph Neural Networks
with Eigenvalue Correction

Kangkang Lu1, Yanhua Yu1 (corresponding author), Hao Fei2, Xuan Li1, Zixuan Yang1, Zirui Guo1, Meiyu Liang1, Mengran Yin1, Tat-Seng Chua2
Abstract

In recent years, spectral graph neural networks, characterized by polynomial filters, have garnered increasing attention and have achieved remarkable performance in tasks such as node classification. These models typically assume that the eigenvalues of the normalized Laplacian matrix are distinct from each other, thus expecting a polynomial filter to have a high fitting ability. However, this paper empirically observes that normalized Laplacian matrices frequently possess repeated eigenvalues. Moreover, we theoretically establish that the number of distinguishable eigenvalues plays a pivotal role in determining the expressive power of spectral graph neural networks. In light of this observation, we propose an eigenvalue correction strategy that can free polynomial filters from the constraints of repeated eigenvalue inputs. Concretely, the proposed eigenvalue correction strategy enhances the uniform distribution of eigenvalues, thus mitigating repeated eigenvalues, and improving the fitting capacity and expressive power of polynomial filters. Extensive experimental results on both synthetic and real-world datasets demonstrate the superiority of our method. The code is available at: https://github.com/Lukangkang123/EC-GNN

Introduction

Figure 1: The distribution of eigenvalues of the normalized Laplacian matrix on different datasets: (a) Cora, (b) Photo, (c) Chameleon, (d) Squirrel. The abscissa represents eigenvalues, and the ordinate represents the probability density.

Graph neural networks (GNNs) have recently achieved excellent performance in graph learning tasks such as anomaly detection (Gao et al. 2023a, b), molecular prediction (Fang et al. 2024, 2023; Gong, Zheng, and Feng 2025), and recommendation systems (Wu et al. 2022; Gong et al. 2025). Existing GNNs can be divided into two categories: spatial GNNs and spectral GNNs. Spatial GNNs often adopt a message-passing mechanism to learn node representations by aggregating neighbor node features. Spectral GNNs map node features to a new desired space by selectively attenuating or amplifying the Fourier coefficients induced by the normalized Laplacian matrix of the graph. This study primarily centers on the realm of spectral GNNs.

Existing spectral filters are commonly approximated using fixed-order (or truncated) polynomials. Specifically, these polynomial filters take the normalized Laplacian eigenvalues as input and map them to varying scalar values via a polynomial function. However, our empirical findings unveil that the eigenvalues of the normalized Laplacian matrix in real-world graphs tend to exhibit high multiplicity. This phenomenon is visually demonstrated in Figure 1 (additional distribution histograms for more datasets are provided in Appendix B), where a substantial number of eigenvalues cluster around the value 1. Consequently, when two frequency components share the same eigenvalue $\lambda$, they will be scaled by the same amount $h(\lambda)$ (Wang and Zhang 2022). As a result, the coefficients of these frequency components will retain the same proportion in predictions as in the input, and an abundance of repeated eigenvalues inevitably curbs the expressive capacity of polynomial filters. To illustrate, Figure 2 highlights how a filter possessing seven distinct eigenvalues exhibits weaker fitting capabilities than one endowed with eleven distinct eigenvalues.

Based on the insights gained thus far, it is apparent that the eigenvalue multiplicity of the normalized Laplacian matrix significantly impedes the polynomial filter's capacity to fit effectively. More severely, the challenge cannot be readily addressed by simply elevating the polynomial order, because even exceedingly high-order polynomials cannot differentiate outputs for identical eigenvalues. To tackle this, we propose an eigenvalue correction strategy to generate more distinct eigenvalues, which reduces eigenvalue multiplicity and in turn enhances the fitting prowess of polynomial filters across intricate and diverse scenarios. In essence, our strategy combines the original eigenvalues with eigenvalues sampled at equal intervals from [0, 2]. This amalgamation ensures a more uniform eigenvalue distribution while still retaining the frequency information encapsulated within the eigenvalues. Furthermore, we manage to enhance training efficiency without excessively elongating precomputation time, which allows us to efficiently amplify the order of polynomial spectral GNNs, contributing to a refined and potent modeling capability.

To summarize, our contributions in this work are highlighted as follows:

  • We empirically discover that the normalized Laplacian matrix of a majority of real-world graphs exhibits elevated eigenvalue multiplicity. Furthermore, we provide theoretical evidence that the distinctiveness of eigenvalues within the normalized Laplacian matrix critically shapes the efficacy of polynomial filters. Consequently, a surplus of repeated eigenvalues obstructs the potential expressiveness of these filters.

  • We introduce an innovative eigenvalue correction strategy capable of substantially diminishing eigenvalue multiplicity. By ingeniously amalgamating original eigenvalues with equidistantly sampled counterparts, our method can seamlessly integrate with existing polynomial filters of GNNs.

  • Extensive experiments on synthetic and real-world datasets unequivocally demonstrate the superiority of our proposed approach over state-of-the-art polynomial filters. This substantiates the remarkable enhancement in the fitting prowess and expressive capacity of polynomial filters, achieved through the novel eigenvalue correction strategy we present.

Figure 2: Illustration of two band-rejection filters with different numbers of eigenvalues: (a) seven eigenvalues, (b) eleven eigenvalues. The red dotted lines in (b) represent the additional eigenvalues compared to (a).

Related Work

Spectral GNNs

According to whether the filter can be learned, the spectral GNNs can be divided into predefined filters (Kipf and Welling 2017; Gasteiger, Bojchevski, and Günnemann 2019; Zhu et al. 2021) and learnable filters (Defferrard, Bresson, and Vandergheynst 2016; Chien et al. 2021; He et al. 2021; Wang and Zhang 2022).

In the category of predefined filters, GCN (Kipf and Welling 2017) uses a simplified first-order Chebyshev polynomial, which is shown to be a low-pass filter. APPNP (Gasteiger, Bojchevski, and Günnemann 2019) utilizes Personalized Page Rank (PPR) to set the weight of the filter. GNN-LF/HF (Zhu et al. 2021) designs filter weights from the perspective of graph optimization functions, which can fit high-pass and low-pass filters.

Another category of spectral GNNs employs learnable filters. ChebNet (Defferrard, Bresson, and Vandergheynst 2016) uses Chebyshev polynomials with learnable coefficients. GPR-GNN (Chien et al. 2021) extends APPNP by directly parameterizing its weights and training them with gradient descent. ARMA (Bianchi et al. 2021) learns a reasonable filter through an autoregressive moving average filter family. BernNet (He et al. 2021) uses Bernstein polynomials to learn filters and forces all coefficients to be positive. JacobiConv (Wang and Zhang 2022) adopts an orthogonal and flexible Jacobi basis to accommodate a wide range of weight functions. FavardGNN (Guo and Wei 2023) learns an optimal polynomial basis from the space of all possible orthonormal bases.

However, the above filters typically use repeated eigenvalues as input, which hinders their expressiveness. In contrast, we propose a novel eigenvalue correction strategy that can generate more distinct eigenvalues and thus improve the expressive power of polynomial filters.

Datasets | Cora | Citeseer | Pubmed | Computers | Photo | Texas | Cornell | Squirrel | Chameleon | Actor
$|\mathcal{V}|$ | 2708 | 3327 | 19717 | 13752 | 7650 | 183 | 183 | 5201 | 2277 | 7600
$|\mathcal{E}|$ | 5278 | 4552 | 44324 | 245861 | 119081 | 279 | 277 | 198353 | 31371 | 26659
$N_{\text{distinct}}$ | 2199 | 1886 | 7595 | 13344 | 7474 | 106 | 115 | 3272 | 1120 | 6419
$P_{\text{distinct}}$ | 81.2% | 56.7% | 38.5% | 97.0% | 97.7% | 57.9% | 62.8% | 62.9% | 49.2% | 84.5%
Table 1: Dataset statistics. $|\mathcal{V}|$ is the number of nodes, $|\mathcal{E}|$ is the number of edges, $N_{\text{distinct}}$ is the number of distinct eigenvalues, and $P_{\text{distinct}}$ is the proportion of distinct eigenvalues among all eigenvalues.

Expressive Power of Spectral GNNs

The Weisfeiler-Lehman test (Weisfeiler and Leman 1968) is considered a classic algorithm for judging graph isomorphism. GIN (Xu et al. 2019) shows that the 1-WL test limits the expressive power of GNNs to distinguish non-isomorphic graphs. Balcilar et al. (2021) theoretically demonstrate the equivalence of certain graph convolution procedures by bridging the gap between spectral and spatial designs of graph convolutions. JacobiConv (Wang and Zhang 2022) points out that the condition for a spectral GNN to generate arbitrary one-dimensional predictions is that the normalized Laplacian matrix has no repeated eigenvalues and the node features miss no frequency components.

Distribution of Eigenvalues of Normalized Laplacian Matrix

For random graphs, Jiang (2012) proves that the empirical distribution of the eigenvalues of the normalized Laplacian matrix of Erdős-Rényi random graphs converges to the semicircle law. Chung, Lu, and Vu (2003) also prove that the Laplacian spectrum of a random graph with given expected degrees obeys the semicircle law. Empirically, we observe that as the average degree of a random graph increases, its eigenvalue distribution becomes more concentrated around 1. Histograms of their distributions can be found in Appendix C.

For real-world graphs, we empirically find that almost all real-world graphs have many repeated eigenvalues, especially around the value 1, which is consistent with the findings of QPGCN (Wu et al. 2023) and SignNet (Lim et al. 2023). Furthermore, some works (Banerjee and Jost 2008; Mehatari and Banerjee 2015) attempt to explain the multiplicity of eigenvalue 1 through motif doubling and attachment. The number and proportion of distinct eigenvalues for real-world graphs are shown in Table 1. It is evident that almost all datasets lack enough distinct eigenvalues, and for some the proportion even falls below 50%, which hinders the fitting ability and expressive power of polynomial filters.
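The statistics in Table 1 can be reproduced, up to a numerical tolerance, with a short script. The sketch below is our own illustration (not the authors' released code); in particular, the 1e-6 tolerance for treating two eigenvalues as equal is an assumption, since the paper does not state the threshold it uses.

```python
import numpy as np

def count_distinct_eigenvalues(adj: np.ndarray, tol: float = 1e-6):
    """Count eigenvalues of the normalized Laplacian that are distinct up to `tol`."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    # L_hat = I - D^{-1/2} A D^{-1/2}
    l_hat = np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigvals = np.linalg.eigvalsh(l_hat)            # sorted, real (L_hat is symmetric)
    # two consecutive eigenvalues closer than `tol` are treated as repeated
    n_distinct = 1 + int(np.sum(np.diff(eigvals) > tol))
    return n_distinct, eigvals.size

# Example on a small random graph (hypothetical data, not one of the paper's datasets)
rng = np.random.default_rng(0)
A = (rng.random((200, 200)) < 0.05).astype(float)
A = np.triu(A, 1); A = A + A.T
n_distinct, n = count_distinct_eigenvalues(A)
print(f"{n_distinct}/{n} distinct eigenvalues ({100 * n_distinct / n:.1f}%)")
```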

Proposed Method

Preliminaries

Assume that we have a graph $\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{X})$, where $\mathcal{V}=\{v_{1},\ldots,v_{n}\}$ denotes the vertex set of $n$ nodes, $\mathcal{E}$ is the edge set, and $\mathbf{X}\in\mathbb{R}^{n\times d}$ is the node feature matrix. The corresponding adjacency matrix is $\mathbf{A}\in\{0,1\}^{n\times n}$, where $\mathbf{A}_{ij}=1$ if there is an edge between nodes $v_{i}$ and $v_{j}$, and $\mathbf{A}_{ij}=0$ otherwise. The degree matrix $\mathbf{D}=\operatorname{diag}(d_{1},\ldots,d_{n})$ of $\mathbf{A}$ is a diagonal matrix with its $i$-th diagonal entry $d_{i}=\sum_{j}\mathbf{A}_{ij}$. The normalized adjacency matrix is $\mathbf{\hat{A}}=\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}$. Let $\mathbf{I}$ denote the identity matrix. The normalized Laplacian matrix is $\mathbf{\hat{L}}=\mathbf{I}-\mathbf{\hat{A}}$. Let $\mathbf{\hat{L}}=\mathbf{U}\boldsymbol{\Lambda}\mathbf{U}^{\top}$ denote the eigen-decomposition of $\mathbf{\hat{L}}$, where $\mathbf{U}$ is the matrix of eigenvectors and $\boldsymbol{\Lambda}=\operatorname{diag}(\boldsymbol{\lambda})=\operatorname{diag}([\lambda_{1},\lambda_{2},\dots,\lambda_{n}])$ is the diagonal matrix of eigenvalues.

The Fourier transform of a graph signal $\mathbf{x}$ is defined as $\hat{\mathbf{x}}=\mathbf{U}^{\top}\mathbf{x}$, and its inverse is $\mathbf{x}=\mathbf{U}\hat{\mathbf{x}}$. Thus the graph propagation for signal $\mathbf{x}$ with kernel $\mathbf{g}$ can be defined as:

\mathbf{z}=\mathbf{g}*_{\mathcal{G}}\mathbf{x}=\mathbf{U}\left(\left(\mathbf{U}^{\top}\mathbf{g}\right)\odot\mathbf{U}^{\top}\mathbf{x}\right)=\mathbf{U}\mathbf{\hat{G}}\mathbf{U}^{\top}\mathbf{x},   (1)

where $\mathbf{\hat{G}}=\operatorname{diag}(\hat{g}_{1},\ldots,\hat{g}_{n})$ denotes the spectral kernel coefficients. In Eq. (1), the signal $\mathbf{x}$ is first transformed from the spatial domain to the spectral domain, then signals of different frequencies are enhanced or weakened by the filter $\mathbf{\hat{G}}$, and finally transformed back to the spatial domain to obtain the filtered signal $\mathbf{z}$. To avoid eigen-decomposition, current works with spectral convolution usually use polynomial functions $h(\mathbf{\hat{L}})$ to approximate different kernels; we call them polynomial spectral GNNs:

\mathbf{Z}=h(\mathbf{\hat{L}})\mathbf{X}\mathbf{W}=\mathbf{U}h(\boldsymbol{\Lambda})\mathbf{U}^{\top}\mathbf{X}\mathbf{W},   (2)

where $\mathbf{W}$ is a learnable weight matrix for feature mapping and $\mathbf{Z}$ is the prediction matrix.
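As a concrete illustration of Eq. (2), the minimal sketch below (our own, not the reference implementation) eigendecomposes $\mathbf{\hat{L}}$, evaluates $h(\boldsymbol{\Lambda})$ elementwise on the eigenvalues, and maps the transformed features back to the spatial domain.

```python
import numpy as np

def poly_spectral_filter(L_hat, X, W, alpha):
    """Z = U h(Lambda) U^T X W with h(x) = sum_k alpha[k] * x**k  (Eq. 2)."""
    lam, U = np.linalg.eigh(L_hat)                        # eigen-decomposition of L_hat
    h_lam = np.polynomial.polynomial.polyval(lam, alpha)  # h applied to each eigenvalue
    return U @ (h_lam[:, None] * (U.T @ (X @ W)))         # filter in the spectral domain

# Toy example with random data (hypothetical shapes, for illustration only)
rng = np.random.default_rng(0)
n, d, c = 6, 4, 2
A = np.array([[0,1,1,0,0,0],[1,0,1,0,0,0],[1,1,0,1,0,0],
              [0,0,1,0,1,1],[0,0,0,1,0,1],[0,0,0,1,1,0]], float)
deg = A.sum(1)
L_hat = np.eye(n) - A / np.sqrt(np.outer(deg, deg))
X, W = rng.normal(size=(n, d)), rng.normal(size=(d, c))
Z = poly_spectral_filter(L_hat, X, W, alpha=[0.5, -0.3, 0.1])   # order-2 polynomial
print(Z.shape)   # (6, 2)
```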

Expressive Power of Existing Polynomial Spectral GNNs

For the expressive power of polynomial spectral GNNs, the following lemma is given in JacobiConv:

Lemma 1

Polynomial spectral GNNs can produce any one-dimensional prediction if $\mathbf{\hat{L}}$ has no repeated eigenvalues and $\mathbf{X}$ contains all frequency components.

The proof of this lemma can be found in Appendix B.1 of JacobiConv (Wang and Zhang 2022). The lemma shows that the expressive power of polynomial spectral GNNs is related to the repeated eigenvalues.

To focus on exploring the fitting ability of the filter $h(\boldsymbol{\Lambda})$ and avoid the influence of $\mathbf{W}$, we fix the weight matrix $\mathbf{W}$ and assume that all elements of $\mathbf{U}^{\top}\mathbf{X}\mathbf{W}$ are non-zero. Thus, for the relationship between the expressive power of polynomial spectral GNNs and the distinct eigenvalues, we have the following theorem:

Theorem 1

When there are only $k$ distinct eigenvalues of the normalized Laplacian matrix, polynomial spectral GNNs can produce at most $k$ different filter coefficients, and thus can only generate one-dimensional predictions with a maximum of $k$ arbitrary elements.

We provide the proof of this theorem in Appendix A.1. Theorem 1 strictly shows that the more distinct eigenvalues there are, the more expressive polynomial spectral GNNs become. In other words, the more repeated eigenvalues, the less expressive polynomial spectral GNNs are. Thus, the number of distinct eigenvalues is the key to the expressive power of polynomial spectral GNNs. Therefore, we need to reduce the repeated eigenvalues of the normalized Laplacian matrix as much as possible, given that real-world graphs often have many repeated eigenvalues.

Eigenvalue Correction

We now elaborate on the proposed eigenvalue correction strategy. In order to reduce repeated eigenvalues, a natural solution is to sample values equally spaced in [0, 2] as new eigenvalues $\boldsymbol{\upsilon}$. Thus, the $i$-th value of $\boldsymbol{\upsilon}$ is defined as follows:

\upsilon_{i}=\frac{2i}{n-1},\quad i\in\{0,1,2,\ldots,n-1\},   (3)

where $n$ is the number of nodes, which is also the number of eigenvalues. We serialize the vector $\boldsymbol{\upsilon}$ into $\{\upsilon_{i}\}$. Obviously, the sequence $\{\upsilon_{i}\}$ is a strictly monotonically increasing arithmetic sequence with 0 as the first item and $\frac{2}{n-1}$ as the common difference. We call $\boldsymbol{\upsilon}$ equidistant eigenvalues.

Figure 3: Histogram of the probability density distribution of (a) the original eigenvalues and (b) the corrected eigenvalues.

However, the original eigenvalues $\boldsymbol{\lambda}$ of the normalized Laplacian matrix describe the frequency information and represent the speed of the signal change. Specifically, the magnitude of an eigenvalue reflects how much the signal differs between a node and its neighbors. Therefore, if we directly use $\boldsymbol{\upsilon}$ in Eq. (3) as the new eigenvalues, polynomial spectral GNNs will not be able to capture the frequency information represented by each frequency component. Hence, it is still necessary to include the original eigenvalues in the input. A simple way is to amalgamate them linearly to obtain the new eigenvalues $\boldsymbol{\mu}$:

\mu_{i}=\beta\lambda_{i}+(1-\beta)\upsilon_{i},\quad i\in\{0,1,2,\ldots,n-1\},   (4)

where $\beta\in[0,1]$ is a tunable hyperparameter, and $\{\lambda_{i}\}$ is the sequence of original eigenvalues $\boldsymbol{\lambda}$ in ascending order. Thus, we have the following theorem:

Theorem 2

When $\beta\in[0,1)$, the sequence $\{\mu_{i}\}$ is strictly monotonically increasing, that is, all $n$ elements of the sequence $\{\mu_{i}\}$ are different from each other.

The proof of this theorem is given in Appendix A.2. Theorem 2 indicates that the new eigenvalues $\boldsymbol{\mu}=\{\mu_{1},\ldots,\mu_{n}\}$ have no repeated elements when $\beta\in[0,1)$, thus satisfying the condition of Lemma 1. It should be noted that if the spacing between two eigenvalues is very small, it is difficult for a polynomial spectral GNN to generate different filter coefficients for them. Therefore, the spacing between eigenvalues is crucial to guarantee the expressive power of polynomial spectral GNNs.

It is worth noting that $\beta$ is a key hyperparameter in Eq. (4), which trades off between $\{\lambda_{i}\}$ and $\{\upsilon_{i}\}$. The smaller $\beta$, the closer $\{\mu_{i}\}$ is to $\{\upsilon_{i}\}$ and the more uniform the spacing between the new eigenvalues $\{\mu_{i}\}$, but the frequency information may not be captured. Conversely, the larger $\beta$, the closer $\{\mu_{i}\}$ is to $\{\lambda_{i}\}$ and the closer each new eigenvalue is to the frequency it represents, but the spacing between the eigenvalues may be uneven. When $\beta=1$, $\{\mu_{i}\}$ degenerates into the original eigenvalues $\{\lambda_{i}\}$ of the Laplacian matrix. When $\beta=0$, $\{\mu_{i}\}$ degenerates to the equidistantly sampled $\{\upsilon_{i}\}$ in Eq. (3), whose spacing is perfectly uniform. In short, the hyperparameter $\beta$ needs to be set to an appropriate value so that the spacing between eigenvalues is relatively uniform and not too far from the original eigenvalues.

For example, Figure 3 shows the distribution histograms of the original eigenvalues $\{\lambda_{i}\}$ and the corrected eigenvalues $\{\mu_{i}\}$ on the Cora dataset when $\beta=0.5$. We can see that, compared to Figure 3(a), Figure 3(b) has no sharp bumps, and the distribution of eigenvalues is more uniform. Hence, it is expected to fit more complex and diverse filters.
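A minimal sketch of the correction in Eqs. (3)-(4) is given below; it is our own illustration (the variable names are not from the released code). It sorts the original eigenvalues, mixes them with the equidistant ones, and checks that the result is strictly increasing for $\beta<1$, as Theorem 2 states.

```python
import numpy as np

def correct_eigenvalues(lam, beta=0.5):
    """Eigenvalue correction: mu_i = beta * lambda_i + (1 - beta) * upsilon_i."""
    lam = np.sort(lam)                        # original eigenvalues in ascending order
    n = lam.size
    upsilon = np.linspace(0.0, 2.0, n)        # equidistant eigenvalues on [0, 2]  (Eq. 3)
    mu = beta * lam + (1.0 - beta) * upsilon  # corrected eigenvalues              (Eq. 4)
    return mu

# e.g. many repeated eigenvalues at 1, as observed on real graphs
lam = np.array([0.0, 0.4, 1.0, 1.0, 1.0, 1.0, 1.6, 2.0])
mu = correct_eigenvalues(lam, beta=0.5)
assert np.all(np.diff(mu) > 0)                # strictly increasing when beta < 1 (Theorem 2)
print(mu)
```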

Combination of Corrected Eigenvalues with Existing Polynomial Filters

In order to fully take advantage of polynomial spectral GNNs, after obtaining the corrected eigenvalues, we need to integrate them with a polynomial filter. First, we set $\mathbf{E}=\mathbf{U}\operatorname{diag}(\boldsymbol{\upsilon})\mathbf{U}^{\top}$ and $\mathbf{H}=\mathbf{U}\operatorname{diag}(\boldsymbol{\mu})\mathbf{U}^{\top}$. Combined with Eq. (4) and $\mathbf{\hat{L}}=\mathbf{U}\operatorname{diag}(\boldsymbol{\lambda})\mathbf{U}^{\top}$, we have:

\mathbf{H}=\beta\mathbf{\hat{L}}+(1-\beta)\mathbf{E}.   (5)

Then, a polynomial $h(\mathbf{H})$ of order $K$ can be written as:

h(\mathbf{H})=\sum_{k=0}^{K}\alpha_{k}\mathbf{H}^{k}=\sum_{k=0}^{K}\alpha_{k}(\beta\mathbf{\hat{L}}+(1-\beta)\mathbf{E})^{k},   (6)

where $h(x)=\sum_{k=0}^{K}\alpha_{k}x^{k}$. The difference between the proposed method and existing polynomial filters is that $\mathbf{\hat{L}}$ is replaced by $\mathbf{H}$. Therefore, our method can be quite flexibly integrated into existing polynomial filters without excessive modifications. However, when the degree of the polynomial is high, the matrix multiplication becomes computationally expensive, especially for dense graphs. Fortunately, we can compute $h(\mathbf{H})$ by directly using polynomials of the eigenvalues instead of polynomials of the matrix:

h(\mathbf{H})=h(\mathbf{U}\operatorname{diag}(\boldsymbol{\mu})\mathbf{U}^{\top})=\mathbf{U}\sum_{k=0}^{K}\operatorname{diag}(\alpha_{k}\boldsymbol{\mu}^{k})\mathbf{U}^{\top}.   (7)

We find that using Eq. (7) to calculate $h(\mathbf{H})$ is much faster than Eq. (6), resulting in improved training efficiency. Moreover, we can increase the order $K$ to improve the fitting ability without adding much computational cost.
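The speed-up comes from evaluating the polynomial on the $n$ eigenvalues rather than on the $n\times n$ matrix. The sketch below (a simplified illustration under our own naming) verifies numerically that Eq. (6) and Eq. (7) produce the same $h(\mathbf{H})$ while avoiding the repeated dense matrix products.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 50, 10
# Symmetric H with corrected eigenvalues mu (Eq. 5): H = U diag(mu) U^T
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
U, mu = Q, np.linspace(0.0, 2.0, n)
H = U @ np.diag(mu) @ U.T
alpha = rng.normal(size=K + 1)

# Eq. (6): polynomial of the matrix, K dense n x n multiplications
h_H_slow = sum(alpha[k] * np.linalg.matrix_power(H, k) for k in range(K + 1))

# Eq. (7): polynomial of the eigenvalues, only O(n) work on top of the fixed U
h_mu = np.polynomial.polynomial.polyval(mu, alpha)
h_H_fast = U @ np.diag(h_mu) @ U.T

print(np.allclose(h_H_slow, h_H_fast))   # True: the two forms coincide
```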

The complete filtering process can be expressed as:

\mathbf{Z}=h(\mathbf{H})\mathbf{X}\mathbf{W}=\mathbf{U}\sum_{k=0}^{K}\operatorname{diag}(\alpha_{k}\boldsymbol{\mu}^{k})\mathbf{U}^{\top}\mathbf{X}\mathbf{W}.   (8)

Next, we present the details of the proposed eigenvalue correction strategy integrated into existing polynomial filters.

Insertion into GPR-GNN.

GPR-GNN (Chien et al. 2021) directly assigns a learnable coefficient to each order of the normalized adjacency matrix $\mathbf{\hat{A}}$, and its polynomial filter is defined as:

\sum_{k=0}^{K}\gamma_{k}\mathbf{\hat{A}}^{k}=\mathbf{U}g_{\gamma,K}(\Lambda)\mathbf{U}^{T},   (9)

where $g_{\gamma,K}(\Lambda)$ is applied element-wise, and $g_{\gamma,K}(x)=\sum_{k=0}^{K}\gamma_{k}x^{k}$. It is worth noting that GPR-GNN uses a polynomial of the normalized adjacency matrix instead of the Laplacian matrix. Fortunately, they are related via $\mathbf{\hat{A}}=\mathbf{I}-\mathbf{\hat{L}}$. Therefore, we integrate the corrected eigenvalues $\boldsymbol{\mu}$ into GPR-GNN to obtain a new polynomial filter:

\sum_{k=0}^{K}\gamma_{k}\mathbf{\hat{A}}^{k}=\mathbf{U}\sum_{k=0}^{K}\operatorname{diag}(\gamma_{k}(1-\boldsymbol{\mu})^{k})\mathbf{U}^{T}.   (10)
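Concretely, Eq. (10) amounts to evaluating the GPR polynomial at $1-\boldsymbol{\mu}$ in the spectral domain. A hedged sketch of this reading (not the official implementation) is:

```python
import numpy as np

def ec_gpr_filter(U, mu, gamma):
    """EC-GPR filter (Eq. 10): U diag(g(1 - mu)) U^T with g(x) = sum_k gamma_k * x**k."""
    g = np.polynomial.polynomial.polyval(1.0 - mu, gamma)  # g_{gamma,K}(1 - mu)
    return U @ np.diag(g) @ U.T   # applied to X W afterwards, as in Eq. (8)
```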

Insertion into BernNet.

BernNet (He et al. 2021) expresses the filtering operation with Bernstein polynomials and forces all coefficients to be positive, and its filter is defined as:

\sum_{k=0}^{K}\theta_{k}\frac{1}{2^{K}}\binom{K}{k}(2\mathbf{I}-\mathbf{L})^{K-k}\mathbf{L}^{k}.   (11)

Similarly, we have:

\mathbf{U}\sum_{k=0}^{K}\operatorname{diag}\left(\theta_{k}\frac{1}{2^{K}}\binom{K}{k}(2-\boldsymbol{\mu})^{K-k}\odot\boldsymbol{\mu}^{k}\right)\mathbf{U}^{\top}.   (12)
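Similarly, Eq. (12) evaluates the Bernstein basis elementwise on the corrected eigenvalues. The snippet below is a minimal sketch under our own naming; scipy.special.comb is assumed for the binomial coefficient.

```python
import numpy as np
from scipy.special import comb

def ec_bern_response(mu, theta):
    """Bernstein filter response on the corrected eigenvalues (Eq. 12)."""
    K = len(theta) - 1
    resp = np.zeros_like(mu)
    for k in range(K + 1):
        basis = comb(K, k) / 2.0**K * (2.0 - mu)**(K - k) * mu**k
        resp += theta[k] * basis          # theta_k are the (non-negative) learned weights
    return resp                           # the full filter is U diag(resp) U^T
```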

Insertion into JacobiConv.

JacobiConv (Wang and Zhang 2022) adopts a Jacobi basis to accommodate a wide range of weight functions owing to its orthogonality and flexibility. The iterative process of the Jacobi basis can be defined as:

P_{0}^{a,b}(x)=1,
P_{1}^{a,b}(x)=0.5a-0.5b+(0.5a+0.5b+1)x,
P_{k}^{a,b}(x)=(2k+a+b-1)\cdot\frac{(2k+a+b)(2k+a+b-2)x+a^{2}-b^{2}}{2k(k+a+b)(2k+a+b-2)}P_{k-1}^{a,b}(x)-\frac{(k+a-1)(k+b-1)(2k+a+b)}{k(k+a+b)(2k+a+b-2)}P_{k-2}^{a,b}(x),   (13)

where $a$ and $b$ are tunable hyperparameters. Unlike GPR-GNN and BernNet, JacobiConv adopts an individual filter function for each output dimension $l$:

\mathbf{Z}_{:l}=\sum_{k=0}^{K}\alpha_{kl}P_{k}^{a,b}(\mathbf{\hat{A}})(\mathbf{X}\mathbf{W})_{:l}.   (14)

Then, we integrate the corrected eigenvalues $\boldsymbol{\mu}$ into JacobiConv:

\mathbf{Z}_{:l}=\mathbf{U}\sum_{k=0}^{K}\operatorname{diag}(\alpha_{kl}\cdot P_{k}^{a,b}(1-\boldsymbol{\mu}))\mathbf{U}^{T}(\mathbf{X}\mathbf{W})_{:l}.   (15)
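The Jacobi recursion of Eq. (13), evaluated at $1-\boldsymbol{\mu}$ and combined per output channel as in Eq. (15), can be sketched as follows (written from the formulas above, not taken from the authors' code):

```python
import numpy as np

def jacobi_basis(x, K, a=1.0, b=1.0):
    """Jacobi polynomials P_0..P_K evaluated elementwise at x (Eq. 13)."""
    P = [np.ones_like(x), 0.5 * a - 0.5 * b + (0.5 * a + 0.5 * b + 1.0) * x]
    for k in range(2, K + 1):
        c0 = 2 * k + a + b
        coef1 = (c0 - 1) * (c0 * (c0 - 2) * x + a**2 - b**2) / (2 * k * (k + a + b) * (c0 - 2))
        coef2 = (k + a - 1) * (k + b - 1) * c0 / (k * (k + a + b) * (c0 - 2))
        P.append(coef1 * P[k - 1] - coef2 * P[k - 2])
    return P[: K + 1]

def ec_jacobi_channel(U, mu, XW_col, alphas, a=1.0, b=1.0):
    """One output channel of Eq. (15): U diag(sum_k alpha_kl P_k(1 - mu)) U^T (XW)_{:l}."""
    basis = jacobi_basis(1.0 - mu, K=len(alphas) - 1, a=a, b=b)
    resp = sum(al * Pk for al, Pk in zip(alphas, basis))
    return U @ (resp * (U.T @ XW_col))
```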

Experiment

In this section, we conduct a series of comprehensive experiments to demonstrate the effectiveness of the proposed method. First, we integrate the eigenvalue correction strategy into GPR-GNN, BernNet, and JacobiConv to learn filter functions on synthetic datasets and to perform node classification on real-world datasets. Then, we report ablation experiments and a sensitivity analysis of the hyperparameter $\beta$. Finally, we evaluate the time cost. All experiments are conducted on a machine with 3 NVIDIA A5000 24GB GPUs and an Intel(R) Xeon(R) Silver 4310 2.10GHz CPU.

Evaluation on Learning Filters

Following previous works (He et al. 2021; Wang and Zhang 2022), we transform 50 real images into 2D regular 4-neighbor grid graphs. The nodes are pixels, and every two neighboring pixels have an edge between the corresponding nodes. Five predefined graph filters, i.e., low ($e^{-10\lambda^{2}}$), high ($1-e^{-10\lambda^{2}}$), band ($e^{-10(\lambda-1)^{2}}$), reject ($1-e^{-10(\lambda-1)^{2}}$), and comb ($|\sin\pi\lambda|$), are used to generate ground-truth graph signals. We use the mean squared error over the 50 images as the evaluation metric.
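For reference, the five target responses listed above can be generated with the helper below; it is our own sketch of the quoted filter definitions applied to a graph spectrum, not the benchmark code.

```python
import numpy as np

FILTERS = {
    "low":    lambda lam: np.exp(-10 * lam**2),
    "high":   lambda lam: 1 - np.exp(-10 * lam**2),
    "band":   lambda lam: np.exp(-10 * (lam - 1)**2),
    "reject": lambda lam: 1 - np.exp(-10 * (lam - 1)**2),
    "comb":   lambda lam: np.abs(np.sin(np.pi * lam)),
}

def ground_truth_signal(U, lam, x, name):
    """Filter a raw pixel signal x with one of the predefined spectral responses."""
    return U @ (FILTERS[name](lam) * (U.T @ x))
```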

To verify the effectiveness of the proposed Eigenvalue Correction (EC) strategy, we integrate it into three representative spectral GNNs with trainable polynomial filters, i.e., GPR-GNN (Chien et al. 2021), BernNet (He et al. 2021), and JacobiConv (Wang and Zhang 2022). Thus, we have three variants: EC-GPR, EC-Bern, and EC-Jacobi. For fairness, all experimental settings are the same as the base model, including dataset partition, hyperparameters, random seeds, etc. We only tune the hyperparameter $\beta\in\{0,0.1,\ldots,0.9\}$ to select the best model. In addition, for a more comprehensive comparison, we also consider four other baseline methods, including GCN (Kipf and Welling 2017), GAT (Veličković et al. 2019), ARMA (Bianchi et al. 2021), and ChebNet (Defferrard, Bresson, and Vandergheynst 2016).

The quantitative results are shown in Table 2, revealing that our methods consistently outperform the corresponding base models. For example, on the low-pass and high-pass filters, our method outperforms the base models by an average of 12.9%, while on the band, reject, and comb filters, it outperforms them by an average of 77.9%. Since complex filters require more frequency components to fit, our method improves more on the band, reject, and comb filters. Moreover, GCN and GAT only perform well on the low-pass filter typical of homophilic graphs, illustrating the limited fitting ability of fixed filters. In contrast, polynomial-based GNNs perform better due to their ability to learn arbitrary filters. Nevertheless, they all take a large number of repeated eigenvalues as input, and therefore their expressive power is still weaker than that of our methods.

Method | Low | High | Band | Reject | Comb
GCN | 3.4799 | 67.6635 | 25.8755 | 21.0747 | 50.5120
GAT | 2.3574 | 21.9618 | 14.4326 | 12.6384 | 23.1813
ARMA | 1.8478 | 1.8632 | 7.6922 | 8.2732 | 15.1214
ChebyNet | 0.8220 | 0.7867 | 2.2722 | 2.5296 | 4.0735
GPR-GNN | 0.4169 | 0.0943 | 3.5121 | 3.7917 | 4.6549
EC-GPR | 0.2703 | 0.0656 | 0.2920 | 1.1266 | 1.0681
BernNet | 0.0314 | 0.0113 | 0.0411 | 0.9313 | 0.9982
EC-Bern | 0.0277 | 0.0113 | 0.0058 | 0.2391 | 0.3302
JacobiConv | 0.0003 | 0.0011 | 0.0213 | 0.0156 | 0.2933
EC-Jacobi | 0.0003 | 0.0011 | 0.0088 | 0.0018 | 0.0370
Table 2: Average loss in the learning-filters experiments.
Datasets | GCN | APPNP | ChebyNet | GPR-GNN | EC-GPR | BernNet | EC-Bern | JacobiConv | EC-Jacobi
Cora | 87.14±1.01 | 88.14±0.73 | 86.67±0.82 | 88.57±0.69 | 89.38±0.95 | 88.52±0.95 | 88.65±0.87 | 88.98±0.46 | 89.00±0.67
Citeseer | 79.86±0.67 | 80.47±0.74 | 79.11±0.75 | 80.12±0.83 | 80.66±0.85 | 80.09±0.79 | 80.26±0.60 | 80.78±0.79 | 81.23±1.00
Pubmed | 86.74±0.27 | 88.12±0.31 | 87.95±0.28 | 88.46±0.33 | 89.63±0.31 | 88.48±0.41 | 89.10±0.34 | 89.62±0.41 | 89.64±0.37
Computers | 83.32±0.33 | 85.32±0.37 | 87.54±0.43 | 86.85±0.25 | 89.89±0.43 | 87.64±0.44 | 88.36±0.48 | 90.39±0.29 | 90.33±0.28
Photo | 88.26±0.73 | 88.51±0.31 | 93.77±0.32 | 93.85±0.28 | 94.76±0.94 | 93.63±0.35 | 94.49±0.30 | 95.43±0.23 | 95.54±0.34
Chameleon | 59.61±2.21 | 51.84±1.82 | 59.28±1.25 | 67.28±1.09 | 74.20±1.07 | 68.29±1.58 | 74.20±1.31 | 74.20±1.03 | 75.62±1.51
Actor | 33.23±1.16 | 39.66±0.55 | 37.61±0.89 | 39.92±0.67 | 40.43±0.84 | 41.79±1.01 | 41.88±1.05 | 41.17±0.64 | 40.99±0.70
Squirrel | 46.78±0.87 | 34.71±0.57 | 40.55±0.42 | 50.15±1.92 | 62.43±0.67 | 51.35±0.73 | 62.74±1.04 | 57.38±1.25 | 59.88±0.86
Texas | 77.38±3.28 | 90.98±1.64 | 86.22±2.45 | 92.95±1.31 | 92.30±1.80 | 93.12±0.65 | 94.43±1.31 | 93.44±2.13 | 93.44±1.48
Cornell | 65.90±4.43 | 91.81±1.96 | 83.93±2.13 | 91.37±1.81 | 90.66±2.13 | 92.13±1.64 | 93.77±0.98 | 92.95±2.46 | 93.28±2.30
Table 3: Results on real-world datasets: mean accuracy (%) ± 95% confidence interval.

Evaluation on Real-World Datasets

We also evaluate the proposed method on the node classification task on real-world datasets. Following JacobiConv (Wang and Zhang 2022), for the homophilic graphs, we employ three citation datasets, Cora, CiteSeer, and PubMed (Yang, Cohen, and Salakhudinov 2016), and two co-purchase datasets, Computers and Photo (Shchur et al. 2018). For heterophilic graphs, we use the Wikipedia graphs Chameleon and Squirrel (Rozemberczki, Allen, and Sarkar 2021), the Actor co-occurrence graph, and the webpage graphs Texas and Cornell from WebKB3 (Pei et al. 2020). Their statistics are listed in Table 1.

Following previous work (Wang and Zhang 2022), we use the full-supervised split, i.e., 60% for training, 20% for validation, and 20% for testing. Mean accuracy and 95% confidence intervals over 10 runs are reported. We integrate the eigenvalue correction strategy into GPR-GNN (Chien et al. 2021), BernNet (He et al. 2021), and JacobiConv (Wang and Zhang 2022). Moreover, we also add the results of GCN (Kipf and Welling 2017), APPNP (Gasteiger, Bojchevski, and Günnemann 2019), and ChebyNet (Defferrard, Bresson, and Vandergheynst 2016). For a fair comparison, we only tune the hyperparameter $\beta\in\{0,0.01,\ldots,0.99\}$, and the other experimental settings remain the same as the base model. All hyperparameter $\beta$ settings can be found in Appendix D.

Datasets | GPR-GNN | EC-GPR | BernNet | EC-Bern | JacobiConv | EC-Jacobi | Decomposition
Cora | 6.73/1.40 | 4.09/0.84 | 22.25/6.06 | 4.73/1.28 | 9.50/5.02 | 9.29/3.74 | 0.62
Citeseer | 6.90/1.46 | 4.09/0.93 | 22.12/6.46 | 4.39/1.29 | 9.39/4.71 | 9.45/3.07 | 0.82
Pubmed | 7.18/1.46 | 9.15/1.86 | 20.45/8.70 | 9.73/4.15 | 9.90/7.63 | 13.79/11.26 | 58.59
Computers | 8.20/1.88 | 7.83/1.09 | 29.40/8.32 | 8.49/2.31 | 9.81/5.58 | 11.91/7.92 | 27.53
Photo | 7.09/1.56 | 4.54/1.23 | 21.04/8.74 | 5.26/2.21 | 9.23/6.66 | 9.97/8.18 | 3.86
Chameleon | 7.31/1.65 | 4.12/1.00 | 22.61/5.12 | 5.01/1.18 | 9.32/7.47 | 9.62/7.10 | 0.27
Actor | 7.17/2.54 | 4.53/1.49 | 18.74/6.68 | 5.44/2.02 | 9.37/4.25 | 9.97/3.86 | 3.56
Squirrel | 7.23/3.65 | 3.96/1.75 | 22.66/7.54 | 2.42/0.63 | 9.44/8.16 | 9.60/7.86 | 1.61
Texas | 6.83/1.41 | 3.80/0.79 | 22.42/4.76 | 4.50/0.95 | 9.58/6.47 | 9.33/6.48 | 0.03
Cornell | 7.17/1.48 | 3.89/0.80 | 22.34/4.66 | 4.49/0.93 | 9.10/5.30 | 9.53/3.18 | 0.02
Table 4: Per-epoch time (ms) / total training time (s), and eigen-decomposition time (s).

Table 3 presents the experimental results on real-world datasets, from which we have the following observations. 1) GNNs with learnable filters tend to perform better than GNNs with fixed filters, since fixed filters, such as those of GCN and APPNP, cannot learn complex filter responses. In contrast, GPR-GNN, BernNet, and JacobiConv can learn different types of filters on both homophilic and heterophilic graphs. 2) The proposed method outperforms the base models on at least eight of the ten datasets, especially on the Squirrel dataset, with an average improvement of 17% over the three base models. This is because heterophilic graphs require more frequency components than homophilic graphs to fit their patterns. Overall, we achieve a 4.2% improvement over GPR-GNN, 3.7% over BernNet, and 0.68% over JacobiConv, demonstrating that our proposed eigenvalue correction strategy can improve the expressive power of polynomial spectral GNNs. 3) With our eigenvalue correction strategy, GPR-GNN and BernNet achieve performance competitive with JacobiConv, which shows that different polynomial filters of the same order have the same expressive power.

Ablation Analysis

Figure 4: Ablation study of the proposed method EC-$x$ on four datasets with our variants EC-$x$-A and EC-$x$-B for all $x\in$ {GPR, Bern, Jacobi}.

This subsection validates our design through ablation studies. We have argued that neither the original eigenvalues nor the equally spaced eigenvalues alone are optimal, and that only combining them in a certain proportion achieves the best performance. Therefore, we design two variants, EC-$x$-A and EC-$x$-B, to verify this conjecture, where $x\in$ {GPR, Bern, Jacobi}. EC-$x$-A directly uses the equidistant eigenvalues $\boldsymbol{\upsilon}$ as input, i.e., $\beta$ in Eq. (4) equals 0. In contrast, EC-$x$-B directly uses the original eigenvalues $\boldsymbol{\lambda}$ as input, i.e., $\beta$ in Eq. (4) equals 1.

Figure 4 shows the results of the ablation experiments on two homophilic graphs, Cora and Photo, and two heterophilic graphs, Chameleon and Squirrel. From Figure 4, we have the following findings: 1) Using solely the original eigenvalues or solely the equidistant eigenvalues as input leads to a decrease in performance, corroborating our hypothesis that they need to be combined. 2) EC-$x$-B drops more on the heterophilic graphs than on the homophilic graphs, which indicates that heterophilic graphs need more frequency-component information to fit various responses.

Parameter Sensitivity Analysis

As discussed above, we introduce a key hyperparameter $\beta$ to adjust the ratio of the original eigenvalues $\boldsymbol{\lambda}$ and the equidistant eigenvalues $\boldsymbol{\upsilon}$. Now we analyze the impact of this hyperparameter on the performance on homophilic and heterophilic graphs. Similar to the ablation experiments, we select two homophilic datasets, Cora and Photo, and two heterophilic datasets, Chameleon and Squirrel. Figure 5 shows the effect of the hyperparameter $\beta$ on the performance of the three methods EC-GPR, EC-Bern, and EC-Jacobi.

Figure 5 indicates that on the Cora dataset, model performance generally first decreases and then increases as $\beta$ grows, which shows that both the equidistant eigenvalues and the original eigenvalues are useful. The trend on the Photo dataset is not prominent, possibly due to its limited number of repeated eigenvalues. In comparison, for heterophilic graphs, model performance stays at a high level when $\beta$ is less than 0.6 and drops significantly when $\beta$ is greater than 0.6. This suggests that equidistant eigenvalues contribute more on heterophilic graphs, which is consistent with our previous analysis.

Figure 5: Effect of the hyperparameter $\beta$ on model performance.

Evaluation on Time Cost

Table 4 shows our time cost compared with the base models GPR-GNN, BernNet, and JacobiConv. Our training time is similar to that of JacobiConv, 30.9% less than that of GPR-GNN, and 75.5% less than that of BernNet. In particular, we significantly improve the training efficiency of BernNet, whose time complexity is quadratic in the polynomial order $K$. This is because we replace the time-consuming polynomial of the matrix with a polynomial of the eigenvalues, thus reducing the computation time.

Since obtaining eigenvalues and eigenvectors requires eigen-decomposition, we also report the time overhead of eigen-decomposition in Table 4. We can see from Table 4 that the eigen-decomposition time for most datasets is less than 5 seconds; even for a large graph with nearly 20,000 nodes like Pubmed, the time for eigen-decomposition is less than one minute. For larger graphs, we can follow previous works (Lim et al. 2023; Bo et al. 2023; Lin, Chen, and Wang 2023; Yao, Yu, and Jiao 2021) and only take the largest and smallest $k$ eigenvalues and eigenvectors without a full eigen-decomposition, as sketched below. Moreover, the eigen-decomposition only needs to be precomputed once and can be reused throughout training. In summary, the cost of eigen-decomposition is acceptable and negligible.
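For graphs where a full eigen-decomposition is impractical, the truncation mentioned above can be realized with sparse solvers. The sketch below uses scipy.sparse.linalg.eigsh to extract the $k$ smallest and $k$ largest eigenpairs of $\mathbf{\hat{L}}$; this is a common approach we assume here, not a procedure taken from the paper's code.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def partial_spectrum(adj: sp.csr_matrix, k: int = 64):
    """Return the k smallest and k largest eigenpairs of the normalized Laplacian."""
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(np.where(deg > 0, deg ** -0.5, 0.0))
    L_hat = sp.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt
    # "SM" can be slow for large graphs; shift-invert (sigma=0) is a common alternative
    lam_small, U_small = eigsh(L_hat, k=k, which="SM")   # low-frequency end
    lam_large, U_large = eigsh(L_hat, k=k, which="LM")   # high-frequency end
    return np.concatenate([lam_small, lam_large]), np.hstack([U_small, U_large])
```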

Conclusions

In this paper, we point out that current polynomial spectral GNNs have insufficient expressive power due to a large number of repeated eigenvalues. We therefore propose a novel eigenvalue correction strategy that amalgamates the original eigenvalues with equidistant counterparts, making the distribution of eigenvalues more uniform while not deviating too far from the original frequency information. Moreover, we improve training efficiency without increasing the precomputation time too much, which allows us to efficiently increase the order of polynomial spectral GNNs. Extensive experiments on synthetic and real-world datasets demonstrate that the proposed eigenvalue correction strategy outperforms the base models.

Acknowledgments

The work was supported by the National Natural Science Foundation of China (Grant No. U22B2019), the National Key Research and Development Plan (Grant No. 2020YFB2104503), the BUPT Excellent Ph.D. Students Foundation (CX2023128), and the CCF-Baidu Open Fund.

References

  • Balcilar et al. (2021) Balcilar, M.; Guillaume, R.; Héroux, P.; Gaüzère, B.; Adam, S.; and Honeine, P. 2021. Analyzing the expressive power of graph neural networks in a spectral perspective. In Proceedings of the International Conference on Learning Representations (ICLR).
  • Banerjee and Jost (2008) Banerjee, A.; and Jost, J. 2008. On the spectrum of the normalized graph Laplacian. Linear algebra and its applications, 428(11-12): 3015–3022.
  • Bianchi et al. (2021) Bianchi, F. M.; Grattarola, D.; Livi, L.; and Alippi, C. 2021. Graph neural networks with convolutional arma filters. IEEE transactions on pattern analysis and machine intelligence, 44(7): 3496–3507.
  • Bo et al. (2023) Bo, D.; Shi, C.; Wang, L.; and Liao, R. 2023. Specformer: Spectral Graph Neural Networks Meet Transformers. In The Eleventh International Conference on Learning Representations.
  • Chien et al. (2021) Chien, E.; Peng, J.; Li, P.; and Milenkovic, O. 2021. Adaptive Universal Generalized PageRank Graph Neural Network. In International Conference on Learning Representations.
  • Chung, Lu, and Vu (2003) Chung, F.; Lu, L.; and Vu, V. 2003. Spectra of random graphs with given expected degrees. Proceedings of the National Academy of Sciences, 100(11): 6313–6318.
  • Defferrard, Bresson, and Vandergheynst (2016) Defferrard, M.; Bresson, X.; and Vandergheynst, P. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. Advances in neural information processing systems, 29.
  • Fang et al. (2023) Fang, J.; Liu, W.; Gao, Y.; Liu, Z.; Zhang, A.; Wang, X.; and He, X. 2023. Evaluating Post-hoc Explanations for Graph Neural Networks via Robustness Analysis. In Thirty-seventh Conference on Neural Information Processing Systems.
  • Fang et al. (2024) Fang, J.; Zhang, S.; Wu, C.; Yang, Z.; Liu, Z.; Li, S.; Wang, K.; Du, W.; and Wang, X. 2024. MolTC: Towards Molecular Relational Modeling In Language Models. In ACL (Findings). Association for Computational Linguistics.
  • Gao et al. (2023a) Gao, Y.; Wang, X.; He, X.; Liu, Z.; Feng, H.; and Zhang, Y. 2023a. Addressing Heterophily in Graph Anomaly Detection: A Perspective of Graph Spectrum. In WWW, 1528–1538. ACM.
  • Gao et al. (2023b) Gao, Y.; Wang, X.; He, X.; Liu, Z.; Feng, H.; and Zhang, Y. 2023b. Alleviating Structural Distribution Shift in Graph Anomaly Detection. In WSDM, 357–365. ACM.
  • Gasteiger, Bojchevski, and Günnemann (2019) Gasteiger, J.; Bojchevski, A.; and Günnemann, S. 2019. Predict then Propagate: Graph Neural Networks meet Personalized PageRank. In International Conference on Learning Representations.
  • Gong et al. (2025) Gong, C.; Yang, B.; Zheng, W.; Huang, X.; Huang, W.; Wen, J.; Sun, S.; and Yang, B. 2025. Scalable GNN Training via Parameter Freeze and Layer Detachment. In Database Systems for Advanced Applications(DASFAA).
  • Gong, Zheng, and Feng (2025) Gong, C.; Zheng, W.; and Feng, H. 2025. Accelerating DeepWalk via Context-Level Parameter Update and Huffman Tree Pruning. In Database Systems for Advanced Applications(DASFAA).
  • Guo and Wei (2023) Guo, Y.; and Wei, Z. 2023. Graph Neural Networks with Learnable and Optimal Polynomial Bases. arXiv preprint arXiv:2302.12432.
  • He et al. (2021) He, M.; Wei, Z.; Xu, H.; et al. 2021. Bernnet: Learning arbitrary graph spectral filters via bernstein approximation. Advances in Neural Information Processing Systems, 34: 14239–14251.
  • Jiang (2012) Jiang, T. 2012. Empirical distributions of Laplacian matrices of large dilute random graphs. Random Matrices: Theory and Applications, 1(03): 1250004.
  • Kipf and Welling (2017) Kipf, T. N.; and Welling, M. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In International Conference on Learning Representations.
  • Lim et al. (2023) Lim, D.; Robinson, J. D.; Zhao, L.; Smidt, T.; Sra, S.; Maron, H.; and Jegelka, S. 2023. Sign and Basis Invariant Networks for Spectral Graph Representation Learning. In The Eleventh International Conference on Learning Representations.
  • Lin, Chen, and Wang (2023) Lin, L.; Chen, J.; and Wang, H. 2023. Spectral Augmentation for Self-Supervised Learning on Graphs. In The Eleventh International Conference on Learning Representations.
  • Mehatari and Banerjee (2015) Mehatari, R.; and Banerjee, A. 2015. Effect on normalized graph Laplacian spectrum by motif attachment and duplication. Applied Mathematics and Computation, 261: 382–387.
  • Pei et al. (2020) Pei, H.; Wei, B.; Chang, K. C.-C.; Lei, Y.; and Yang, B. 2020. Geom-GCN: Geometric Graph Convolutional Networks. In International Conference on Learning Representations.
  • Rozemberczki, Allen, and Sarkar (2021) Rozemberczki, B.; Allen, C.; and Sarkar, R. 2021. Multi-scale attributed node embedding. Journal of Complex Networks, 9(2): cnab014.
  • Shchur et al. (2018) Shchur, O.; Mumme, M.; Bojchevski, A.; and Günnemann, S. 2018. Pitfalls of graph neural network evaluation. arXiv preprint arXiv:1811.05868.
  • Veličković et al. (2019) Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; and Bengio, Y. 2019. Graph Attention Networks. In International Conference on Learning Representations.
  • Wang and Zhang (2022) Wang, X.; and Zhang, M. 2022. How powerful are spectral graph neural networks. In International Conference on Machine Learning, 23341–23362. PMLR.
  • Weisfeiler and Leman (1968) Weisfeiler, B.; and Leman, A. 1968. The reduction of a graph to canonical form and the algebra which appears therein. nti, Series, 2(9): 12–16.
  • Wu et al. (2023) Wu, G.; Lin, S.; Shao, X.; Zhang, P.; and Qiao, J. 2023. QPGCN: Graph convolutional network with a quadratic polynomial filter for overcoming over-smoothing. Applied Intelligence, 53(6): 7216–7231.
  • Wu et al. (2022) Wu, S.; Sun, F.; Zhang, W.; Xie, X.; and Cui, B. 2022. Graph neural networks in recommender systems: a survey. ACM Computing Surveys, 55(5): 1–37.
  • Xu et al. (2019) Xu, K.; Hu, W.; Leskovec, J.; and Jegelka, S. 2019. How Powerful are Graph Neural Networks? In International Conference on Learning Representations.
  • Yang, Cohen, and Salakhudinov (2016) Yang, Z.; Cohen, W.; and Salakhudinov, R. 2016. Revisiting semi-supervised learning with graph embeddings. In International conference on machine learning, 40–48. PMLR.
  • Yao, Yu, and Jiao (2021) Yao, S.; Yu, D.; and Jiao, X. 2021. Perturbing eigenvalues with residual learning in graph convolutional neural networks. In Asian Conference on Machine Learning, 1569–1584. PMLR.
  • Zhu et al. (2021) Zhu, M.; Wang, X.; Shi, C.; Ji, H.; and Cui, P. 2021. Interpreting and unifying graph neural networks with an optimization framework. In Proceedings of the Web Conference 2021, 1215–1226.

Appendix A Proofs of Theorems

Below we show proofs of several theorems in the main text of the paper.

A.1 Proof of Theorem 1

Proof. We first prove the first part of Theorem 1. We know from Eq. (2) in the paper that the filter coefficients of polynomial spectral GNNs are generated by $h(\lambda)$. We assume that the polynomial $h(\lambda)$ is of sufficiently high order so that it can produce different outputs $h(\lambda_{i})$ for $k$ different inputs $\lambda_{i}$. However, when $\lambda_{i}=\lambda_{j}$, $h(\lambda_{i})=h(\lambda_{j})$; that is, $h(\lambda)$ can only produce the same output for the same input. Therefore, when the normalized Laplacian matrix $\mathbf{\hat{L}}$ has only $k$ different eigenvalues, polynomial spectral GNNs can produce at most $k$ different filter coefficients.

Next, we prove the second part of Theorem 1. Consider the filtering process $\mathbf{Z}=\mathbf{U}\operatorname{diag}(h(\boldsymbol{\lambda}))\mathbf{U}^{\top}\mathbf{X}\mathbf{W}$. We first assume all elements of $\mathbf{U}^{\top}\mathbf{X}\mathbf{W}$ are non-zero, and we set the vector $\mathbf{c}=\mathbf{U}^{\top}\mathbf{X}\mathbf{W}$ and $\boldsymbol{\theta}=h(\boldsymbol{\lambda})\odot\mathbf{c}$. Thus we have $\mathbf{Z}=\mathbf{U}\operatorname{diag}(h(\boldsymbol{\lambda}))\,\mathbf{c}=\mathbf{U}\boldsymbol{\theta}$, where $\mathbf{Z}\in\mathbb{R}^{n\times 1}$.

It can be seen that if $\lambda_{i}=\lambda_{j}$, then $\theta_{j}=\theta_{i}\frac{c_{j}}{c_{i}}$. Given that we have $k$ distinct eigenvalues, only $k$ elements of the vector $\boldsymbol{\theta}$ are arbitrary. Therefore, we can always represent $\mathbf{Z}$ by a vector $\boldsymbol{\theta}^{\prime}$ of $k$ elements and a matrix $\mathbf{V}$ of $n$ rows and $k$ columns. Thus, we have $\mathbf{V}\boldsymbol{\theta}^{\prime}=\mathbf{Z}$:

\begin{pmatrix}\mathbf{V}_{11}&\mathbf{V}_{12}&\cdots&\mathbf{V}_{1k}\\ \mathbf{V}_{21}&\mathbf{V}_{22}&\cdots&\mathbf{V}_{2k}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{V}_{n1}&\mathbf{V}_{n2}&\cdots&\mathbf{V}_{nk}\end{pmatrix}\begin{pmatrix}\theta_{1}^{\prime}\\ \theta_{2}^{\prime}\\ \vdots\\ \theta_{k}^{\prime}\end{pmatrix}=\begin{pmatrix}\mathbf{Z}_{1}\\ \mathbf{Z}_{2}\\ \vdots\\ \mathbf{Z}_{n}\end{pmatrix},   (16)

where $k\leq n$ is the number of distinct eigenvalues. We can conclude that if Eq. (16) has a solution, it has at least $n-k$ redundant equations. Without loss of generality, we assume that the last $n-k$ equations are redundant, so we have: $\forall j\in\{k+1,\cdots,n\}$, $\mathbf{Z}_{j}=\alpha_{1}\mathbf{Z}_{1}+\cdots+\alpha_{k}\mathbf{Z}_{k}$, where each $\alpha_{i}$ is a constant. Therefore, given $\mathbf{Z}_{1},\cdots,\mathbf{Z}_{k}$, we can uniquely determine $\mathbf{Z}_{k+1},\cdots,\mathbf{Z}_{n}$; thus polynomial spectral GNNs can only generate one-dimensional predictions with a maximum of $k$ arbitrary elements.
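The first part of the argument can be checked numerically on a tiny example; the sketch below (our own sanity check, not part of the paper) uses a 4-cycle, whose normalized Laplacian has eigenvalue 1 with multiplicity two, and confirms that any polynomial assigns the two repeated eigenvalues the same filter coefficient.

```python
import numpy as np

# A 4-cycle: its normalized Laplacian has eigenvalues [0, 1, 1, 2].
A = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]], float)
d = A.sum(1)
L_hat = np.eye(4) - A / np.sqrt(np.outer(d, d))
lam = np.linalg.eigvalsh(L_hat)
coeffs = np.random.default_rng(0).normal(size=8)   # an arbitrary order-7 polynomial h
h = np.polynomial.polynomial.polyval(lam, coeffs)
print(lam.round(6))   # the repeated eigenvalue 1 appears twice
print(h.round(6))     # h(1) is identical for both copies, as the proof argues
```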

A.2 Proof of Theorem 2

We formalize Theorem 2 in the paper as follows:

Theorem 3

When $\beta\in[0,1)$, $\forall i\in\{1,\cdots,n\}$, $\mu_{i}>\mu_{i-1}$.

Proof. When $\beta\in[0,1)$, given $\lambda_{i}\geq\lambda_{i-1}$, we have $\beta\lambda_{i}\geq\beta\lambda_{i-1}$, where $i\in\{1,\cdots,n\}$.

Similarly, when $\beta\in[0,1)$, given $\upsilon_{i}>\upsilon_{i-1}$, we have $(1-\beta)\upsilon_{i}>(1-\beta)\upsilon_{i-1}$, where $i\in\{1,\cdots,n\}$.

Combining the above two conclusions, we can get the following inequality:

\mu_{i}=\beta\lambda_{i}+(1-\beta)\upsilon_{i}>\beta\lambda_{i-1}+(1-\beta)\upsilon_{i-1}=\mu_{i-1}   (17)

Therefore, when $\beta\in[0,1)$, $\forall i\in\{1,\cdots,n\}$, $\mu_{i}>\mu_{i-1}$.

Appendix B Eigenvalue distribution of normalized Laplacian matrix

Figure 6 shows the distribution of the eigenvalues of the normalized Laplacian matrix in different datasets, from which we can see that the eigenvalues of all datasets are clustered around 1, so there will be many repeated eigenvalues.

Figure 6: Histogram of the distribution of eigenvalues of the normalized Laplacian matrix on different datasets: (a) Cora, (b) CiteSeer, (c) Pubmed, (d) Actor, (e) Photo, (f) Computers, (g) Chameleon, (h) Squirrel, (i) Texas, (j) Cornell.

Appendix C Eigenvalue distribution of random graphs

Figure 7 shows how the eigenvalue distribution varies with the average degree for a random graph with four thousand nodes. We find that the higher the average degree of the nodes, the more concentrated the distribution of eigenvalues is around the value 1, which is consistent with the distribution trend in Figure 1. Therefore, we speculate that the tendency of the eigenvalues of real-world graphs to concentrate around 1 is also related to their average degree.

Figure 7: The distribution of eigenvalues of random graphs with different average degrees: (a) 2, (b) 3, (c) 5, (d) 10, (e) 50, (f) 100, (g) 150, (h) 200, (i) 250, (j) 300.

Appendix D The setting of the hyperparameter $\beta$

Filters | Low | High | Band | Reject | Comb
EC-GPR | 0.80 | 0.70 | 0.00 | 0.00 | 0.00
EC-Bern | 0.60 | 0.90 | 0.70 | 0.00 | 0.00
EC-Jacobi | 0.90 | 0.90 | 0.70 | 0.70 | 0.00
Table 5: $\beta$-hyperparameter settings for the synthetic datasets.

Tables 5 and 6 present the hyperparameter $\beta$ settings of EC-GPR, EC-Bern, and EC-Jacobi on the synthetic datasets and real-world datasets.

Datasets | Cora | Citeseer | Pubmed | Computers | Photo | Chameleon | Actor | Squirrel | Texas | Cornell
EC-GPR | 0.90 | 0.25 | 0.75 | 0.15 | 0.05 | 0.09 | 0.90 | 0.23 | 0.91 | 0.77
EC-Bern | 0.63 | 0.78 | 0.05 | 0.12 | 0.05 | 0.33 | 0.86 | 0.44 | 0.86 | 0.18
EC-Jacobi | 0.50 | 0.32 | 0.50 | 0.39 | 0.47 | 0.29 | 0.26 | 0.76 | 0.40 | 0.70
Table 6: $\beta$-hyperparameter settings for the real-world datasets.