IGNN-Solver: A Graph Neural Solver for
Implicit Graph Neural Networks
Abstract
Implicit graph neural networks (IGNNs), which exhibit strong expressive power with a single layer, have recently demonstrated remarkable performance in capturing long-range dependencies (LRD) in underlying graphs while effectively mitigating the over-smoothing problem. However, IGNNs rely on computationally expensive fixed-point iterations, which lead to significant speed and scalability limitations, hindering their application to large-scale graphs. To achieve fast fixed-point solving for IGNNs, we propose a novel graph neural solver, IGNN-Solver, which leverages the generalized Anderson Acceleration method, parameterized by a tiny GNN, and learns iterative updates as a graph-dependent temporal process. To improve effectiveness on large-scale graph tasks, we further integrate sparsification and storage compression methods, specifically tailored for the IGNN-Solver, into its design. Extensive experiments demonstrate that the IGNN-Solver significantly accelerates inference on both small- and large-scale tasks, achieving a to speedup without sacrificing accuracy. This advantage becomes more pronounced as the graph scale grows, facilitating its large-scale deployment in real-world applications. The code to reproduce our results is available at https://github.com/landrarwolf/IGNN-Solver.
Index Terms:
Implicit graph neural networks, large-scale graphs, fixed-point solver, deep equilibrium models.

I Introduction
Implicit graph neural networks (IGNNs) [1, 2, 3] have emerged as a significant advancement in graph learning frameworks. Unlike traditional graph neural networks (GNNs) that stack multiple explicit layers, IGNNs utilize a single implicit layer formulated as a fixed-point equation. The solution to this fixed-point equation, known as the equilibrium, is equivalent to the output obtained by iterating an explicit layer infinitely. This allows the implicit layer to access infinite hops of neighbors, providing IGNNs with global receptive fields within just one layer [4]. As a result, IGNNs effectively address the long-standing issue of over-smoothing in conventional explicit GNNs and capture long-range dependencies in graph-structured data [5, 6, 7].
However, existing IGNNs suffer from slow speeds and have difficulty scaling to large-scale graphs [8, 9, 10, 11]. This is primarily because IGNNs derive features by solving for fixed points, demanding substantial computational resources. For example, even on the Citeseer dataset [12] classification task with a small-scale graph, IGNNs require more than forward iterative computations to nearly converge to a fixed point [1]. The computational overhead of solving fixed points is amplified by the task scale, resulting in notably slow inference speeds compared to explicit GNNs. This substantial drawback poses challenges for IGNNs in generalizing to large-scale graphs in practical scenarios.
In response to this challenge, we propose a novel graph neural solver for IGNNs, termed IGNN-Solver. It takes the IGNN layer as input, which includes the graph information matrix and layer parameters, and outputs the solution to the corresponding fixed-point equation. Unlike conventional solvers, which compute output features through iterative forward passes in a potentially large GNN (relying on root-finding algorithms like Broyden’s method [13]), the IGNN-Solver predicts the next iteration step using a tiny graph network. Consequently, the proposed IGNN-Solver remarkably accelerates the model’s inference speed without compromising accuracy and with only a slight increase in training overhead. This advantage becomes increasingly pronounced as the scale of the graph grows, making it particularly beneficial for deploying IGNNs in large-scale graph tasks.
Our IGNN-Solver comprises two components. First, we introduce a learnable initializer that estimates an optimal initial point for the optimization process. Second, we propose a generalized version of Anderson Acceleration (AA) [13], employing a tiny graph network to model the iterative updates as a sequence of graph-dependent steps. Compared to the solvers proposed for conventional implicit neural networks (INNs) [14, 15, 16], we introduce a novel improvement: learning the solver parameters through a GNN-based method. This approach avoids the potential loss of graph information, thereby improving the model’s performance. Moreover, IGNN-Solver has significantly fewer parameters than the IGNN itself, and its training is independent of the IGNN’s inference. Consequently, the training of IGNN-Solver proceeds rapidly without sacrificing generalization.
Moreover, to address the challenges posed by large-scale graphs, we incorporate both graph sparsification and storage compression into the solver design. Specifically, to prevent the network from becoming excessively large and deviating from the initial goal of using a compact network for prediction [17, 18], we utilize graph sparsification to obtain a lighter graph. Additionally, since directly mapping data from a high-dimensional space to an extremely low-dimensional one is inadvisable [19], we employ a storage compression technique to mitigate this issue.
In our experiments, we apply IGNN-Solver to nine real-world datasets from diverse domains and scales, including four large-scale datasets: Amazon-all [20], Reddit [21], ogbn-arxiv [22] and ogbn-products [22]. Our results demonstrate that the IGNN-Solver achieves higher accuracy with reduced inference time, showing a to speedup, and incurs minimal additional training overhead, constituting only of the IGNN training time.
Our main contributions are summarized as follows:
• We introduce IGNN-Solver, a method designed to predict the next fixed-point iteration of the IGNN layer. This innovative approach mitigates the need for extensive iterations, common in vanilla IGNNs, thereby substantially accelerating inference while preserving accuracy and minimizing parameter consumption.
• Compared to conventional solvers proposed for INNs, our solver introduces a novel improvement by incorporating a tiny GNN to learn parameters. This approach not only preserves graph information but also maintains the simplicity and lightweight characteristics of neural solvers.
• For large-scale graph tasks, we integrate custom-designed sparsification and storage compression methods into the solver’s architecture. These tailored techniques enable efficient processing of large-scale graphs, improving performance and reducing overhead.
• We validate our approach through extensive experiments on nine real-world datasets, including four large-scale ones. Our results demonstrate that IGNN-Solver incurs minimal computational overhead (approximately 1% of the total) and achieves up to a -fold increase in inference speed without compromising accuracy.
II Related Works
The typical GNN [12] and its variants [23, 21] have been widely used for graph data modeling in various tasks. Different GNNs have been proposed to utilize attention mechanisms [23], neighbor sampling [21], pseudo-coordinates [24], and graph fusion [25]. However, due to issues such as over-smoothing, depth, and bottlenecks, these models typically involve a finite number of aggregation layers [5, 6, 7]. To address these issues, recent works [1, 2, 3] have developed implicit graph neural networks, encouraging these models to capture long-range dependencies on graphs. Here, we highlight the contributions of our proposed IGNN-Solver through a detailed comparison with Implicit Graph Neural Networks (IGNNs) and Deep Equilibrium Models (DEQs), both of which are closely related to our approach.
II-A Implicit Graph Neural Networks
Instead of stacking a series of operators hierarchically, implicit GNNs define their outputs as solutions to nonlinear dynamical systems, an approach initially introduced by [1] to tackle the challenges associated with learning long-range dependencies in graphs. [8] proposes a new implicit graph model enabling mini-batch training without sacrificing the ability to capture long-range dependencies. Subsequently, [26] introduces a novel approach based on implicit layers to model multi-scale structures on graphs. Additionally, [3] theoretically investigates the well-posedness of the IGNN model from a monotone operator viewpoint.
Although the aforementioned IGNNs alleviate feature over-smoothing by implicitly propagating meaningful fixed points, the inherently slow inference of implicit networks remains a major obstacle to their scalability. The main reason is that the fixed-point solvers they rely on are inefficient (e.g., the Picard iteration used by IGNN [1] and the Anderson acceleration solver used by MIGNN [3]), so the solver overhead is magnified by the task scale. In comparison, our proposed IGNN-Solver accelerates the iteration process of IGNNs, addressing the most limiting drawback relative to traditional feedforward models.
Another class of IGNNs based on Neural ODEs [27] has emerged to address issues like depth and bottlenecks. For example, [28] models continuous residual layers using GCNs. [29] proposes methods for modeling static and dynamic graphs with GCNs, along with a hybrid approach where latent states evolve continuously between RNN steps in dynamic graphs. [30] tackles the problem of continuous message passing. [31] introduces a graph neural diffusion network based on the discretization of diffusion PDEs on graphs. [32] enhances graph neural diffusion with a source term and connects the model to a random walk formulation on graphs. However, these methods essentially view deep learning on graphs as a continuous diffusion process, which differs from IGNNs based on fixed-point networks, and thus requires manual tuning of the termination time and step size of the diffusion equation. In contrast, our IGNN-Solver uses implicit formulations instead of explicit diffusion discretization, which admits an equilibrium corresponding to infinite diffusion steps and expands the receptive field.
II-B Fixed-point Solvers for Deep Equilibrium Models
Traditional deep learning models use multi-layered networks, while DEQs [14] find a fixed-point of a single layer, representing the network’s equilibrium state. Viewed as infinitely deep networks, DEQs use root-finding for forward propagation and implicit differentiation for backward propagation. Therefore, DEQs capture complex patterns with constant memory and computational costs [33]. Monotone operator theory has been used to guarantee the convergence of DEQs [34] and to improve the stability of implicit networks [16].
However, it is well-known that DEQs, as typical implicit models, suffer from slow training and inference speeds [35, 8], which is highly disadvantageous for large-scale scenarios like GraphGPT [36]. To alleviate this issue, recent efforts have explored certain improved solver methods for DEQs, further optimizing root-finding problems and making these models easier to solve. [37] discusses the amortization of the cost of the iterative solver that would otherwise make implicit models slow. [9] proposes a novel gradient approximation method for implicit models, which significantly accelerates the backward passes in training implicit models. [16] discusses the superiority of learnable solvers over generic solvers in implicit models using a tiny neural network and significantly enhances the efficiency of such models through custom neural solvers.
IGNNs share a similar formulation with DEQs, as both define the output as the solution of an equilibrium equation. The difference is that IGNN’s equilibrium equations encode graph structure while DEQ’s do not, which further aggravates the slow-inference weakness and makes IGNNs even more expensive to solve. Motivated by this insight, our proposed IGNN-Solver leverages graph information to guide solver acceleration, achieving fast and meaningful implicit graph propagation, especially when the graph data is large-scale.
III Preliminaries
III-A Explicit GNNs
Let $\mathcal{G}$ represent an undirected graph, where $A \in \mathbb{R}^{n \times n}$ is the adjacency matrix indicating the relationships between the nodes in $\mathcal{G}$, and $n$ is the number of nodes. The node feature matrix $X \in \mathbb{R}^{n \times d}$ contains the features of the nodes, with $d$ representing the feature dimension. Conventional (explicit) GNNs [12, 23] feature a learnable aggregation process centered on the message-passing operation within the graph [38]. This process iteratively propagates information from each node to its neighboring nodes. The formal general structure for each layer is defined as follows:
(1)
where $Z^{(l)}$ represents the hidden node representation and $W^{(l)}$ denotes the parameters of the $l$-th layer. A commonly used GNN is the Graph Convolutional Network (GCN) [12], defined as $Z^{(l+1)} = \sigma(\hat{A} Z^{(l)} W^{(l)})$, where $W^{(l)}$ denotes the weight matrix of the $l$-th layer, $\sigma$ denotes the activation function, and $\hat{A} = \tilde{D}^{-1/2}(A + I)\tilde{D}^{-1/2}$ represents the symmetric normalized graph matrix; $\tilde{D}$ is a diagonal matrix with $\tilde{D}_{ii} = \sum_{j}(A + I)_{ij}$. GNNs leverage the above message-passing operation in (1) to learn useful information. Still, they often involve a limited number of layers due to over-smoothing [6], making it challenging for GNNs to capture the long-range dependency (LRD) on graphs.
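As a quick reference, a minimal PyTorch sketch of this GCN layer is given below (dense adjacency for readability; the function and variable names are ours, not the paper's):

```python
import torch

def gcn_layer(adj: torch.Tensor, z: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    """One GCN layer Z' = relu(A_hat @ Z @ W) with symmetric normalization (illustrative)."""
    n = adj.shape[0]
    a_tilde = adj + torch.eye(n)                         # add self-loops
    d_inv_sqrt = a_tilde.sum(dim=1).pow(-0.5)            # D_tilde^{-1/2} on the diagonal
    a_hat = d_inv_sqrt[:, None] * a_tilde * d_inv_sqrt[None, :]
    return torch.relu(a_hat @ z @ weight)
```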
III-B Implicit GNNs
III-B1 Architecture
Similar to traditional explicit GNNs, IGNNs [1, 2, 3, 4] also involve an aggregation process, with the distinction that the depth of layers (iteration step ) is infinite. The aggregation process in IGNNs is typically defined as
(2)
where represents affine transformation parameterized by , and the weight matrices and are globally shared at each iteration step. The IGNN model is formally described by
(3)
where the representation, given as the “internal state” , is obtained as the fixed-point solution of the equilibrium (2). The final representation theoretically encompasses information from all neighbors in the graph. Therefore, IGNNs capture LRD within the graph, offering better performance compared to GNNs with finite iterations [1, 2]. Another notable advantage of this framework is its memory efficiency, as it only needs to retain the current state without requiring additional intermediate representations.
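For reference, a standard instantiation of the iteration (2) and the model (3), following the formulation of [1] (the exact notation in this paper may differ slightly), reads:

```latex
Z^{(k+1)} = \phi\!\left( W Z^{(k)} \hat{A} + b_{\Omega}(X) \right), \qquad
Z^{\star}  = \phi\!\left( W Z^{\star} \hat{A} + b_{\Omega}(X) \right), \qquad
\hat{Y}    = g_{\Theta}\!\left( Z^{\star} \right),
```

where $\phi$ is a non-expansive activation, $W$ and $\Omega$ are the globally shared weight matrices, $b_{\Omega}(X)$ is an affine transformation of the input features, and $g_{\Theta}$ is the output mapping applied to the equilibrium state $Z^{\star}$.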
III-B2 Well-posedness
Notably, to obtain a valid representation for any given input from IGNNs, we need to solve the unique internal state in (3). However, there may not always exist a well-defined solution for some inputs [1]. To guarantee the existence and uniqueness of the solution to (3), we introduce the concept of well-posedness for IGNNs with activation function [1]. Specifically, the weight matrix and the adjacency matrix are said to be well-posed for if the solution of (3) exists and is unique for any .
Based on the monotone operator theorem [39, 34], a sufficient condition for the well-posedness of IGNNs is as follows. Let the nonlinearity be a non-expansive activation function (e.g., Sigmoid, Tanh, ReLU), and let .¹ Then, the IGNN model defined in (3) is well-posed as long as for some .²
¹ We represent the Kronecker product of matrices and as .
² We represent if is positive semi-definite. For the case of non-symmetric matrices , positive definiteness is defined as the positive definiteness of the symmetric component, i.e., if and only if .
It establishes that if is symmetric and for some , then the existence and uniqueness of the equilibrium point are guaranteed, ensuring the model is well-posed. This follows from the sufficient condition that is strongly monotone.
To satisfy the well-posedness of IGNNs, a commonly used method [40, 3] is to orthogonally reparameterize the weight matrix in the IGNN layer using the following Cayley map, inspired by the unitary operator [41]:
(4)
where and is a skew-symmetric matrix with as an arbitrary parameterized matrix. From (4), it follows that is positive definite and its real eigenvalues lie in . Moreover, based on the definition of , we can derive that the eigenvalues of lie in , which ensures that , thereby ensuring the well-posedness of IGNN. We refer interested readers to [3] for a detailed proof.
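For completeness, the textbook Cayley transform underlying this reparameterization maps any skew-symmetric matrix to an orthogonal one; the exact scaled variant used in (4) follows [3]:

```latex
S^{\top} = -S \;\Longrightarrow\; Q = (I - S)(I + S)^{-1} \ \text{satisfies}\ Q^{\top} Q = I .
```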
III-C Training of IGNNs
Additionally, to train the parameters during back-propagation, IGNNs compute the Jacobian by solving the equilibrium equation, as derived from implicit differentiation and the implicit function theorem [42]:
(5)
where and refers to the element-wise derivative of the map . Once is obtained, and are derived via the chain rule, eliminating the need to store activations at each layer during gradient updates.
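Concretely, writing the equilibrium as $Z^{\star} = f_{\theta}(Z^{\star}, X)$ and denoting the training loss by $\ell$, the implicit function theorem gives the standard equilibrium-model gradient consistent with (5) [14, 42]:

```latex
\frac{\partial \ell}{\partial \theta}
= \frac{\partial \ell}{\partial Z^{\star}}
  \left( I - \frac{\partial f_{\theta}(Z^{\star}, X)}{\partial Z^{\star}} \right)^{-1}
  \frac{\partial f_{\theta}(Z^{\star}, X)}{\partial \theta},
```

so the backward pass only requires solving a linear system at the equilibrium, rather than storing a chain of intermediate activations.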
However, IGNNs typically require multiple iterations to reach equilibrium, both in finding the fixed point during the forward process (2) and in solving the implicit differential during the backward process (5), resulting in significant overhead during both training and inference. Therefore, developing a fast and efficient solver to find their fixed points is crucial.
III-D Traditional Anderson Acceleration Solver
Since implicit networks typically require multiple iterations to reach equilibrium, a commonly used solver is Anderson Acceleration [13], which is widely applied in [3, 15, 14, 16]. Specifically, given an implicit network layer and input , we aim to find the fixed point of the system that represents the equilibrium state of the network output by solving the root-finding problem
(6)
by utilizing fixed-point solvers such as the classical Anderson Acceleration (AA) [13], which directly seeks the equilibrium point through quasi-Newton methods and exhibits super-linear convergence.
We briefly introduce the AA solver, as our approach is inspired by it. Algorithm 1 provides a pseudocode example, where represents the past residuals. The key idea behind AA is to accelerate the next approximate solution by leveraging information stored from past iteration steps, constructing a normalized linear combination of past updates with weights and . These weights are computed greedily to minimize the norm of the corresponding linear combination of past residuals. This approach is typically effective for accelerating the solution of implicit functions.
Meanwhile, since the computation relies only on the final output and does not require storing any intermediate activation values [14], the memory consumption during training remains constant, which is also a key reason why fixed-point solvers are attractive. A minimal sketch of the classical AA update is given below for reference.
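The sketch below is a minimal NumPy rendering of classical AA; the function and variable names are ours, and the IGNN-Solver proposed in this paper replaces the least-squares weight computation with a learned tiny GNN:

```python
import numpy as np

def anderson_acceleration(f, x0, m=5, beta=1.0, max_iter=50, tol=1e-6):
    """Classical Anderson Acceleration for the fixed point x = f(x) (illustrative)."""
    X = [x0]            # past iterates x_k
    F = [f(x0)]         # past evaluations f(x_k)
    x = F[0]
    for _ in range(max_iter):
        X.append(x)
        F.append(f(x))
        if np.linalg.norm(F[-1] - X[-1]) / (np.linalg.norm(F[-1]) + 1e-12) < tol:
            break
        n = min(m, len(X))
        # Columns of G are the n most recent residuals g_i = f(x_i) - x_i.
        G = np.stack([F[-i] - X[-i] for i in range(1, n + 1)], axis=1)
        # Solve min_alpha ||G alpha||^2 subject to sum(alpha) = 1 in closed form.
        GtG = G.T @ G + 1e-8 * np.eye(n)
        alpha = np.linalg.solve(GtG, np.ones(n))
        alpha /= alpha.sum()
        # AA update: mix past evaluations f(x_i) and past iterates x_i.
        x = sum(a * (beta * F[-i] + (1.0 - beta) * X[-i])
                for a, i in zip(alpha, range(1, n + 1)))
    return x

# Example: solve x = 0.5 * cos(x) element-wise.
# x_star = anderson_acceleration(lambda x: 0.5 * np.cos(x), np.zeros(3))
```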
IV Our Proposed IGNN-Solver
Although traditional fixed-point solvers for IGNNs demonstrate functionality, they are characterized by relatively slow performance, necessitate manual parameter tuning, and are not well-suited for graph-based applications. To address these limitations, we propose a lightweight, learnable, and content-aware fixed-point solver for IGNN that incorporates a tiny graph neural network, combining the speed advantages of the solver with the information benefits of graph learning. By encoding the relationship between past residuals and parameters and , it enables more efficient adjustments to enhance the learning capabilities of the IGNN-Solver, as shown in Figure 1.
Given the implicit model’s characteristic that its representation capability is decoupled from the forward computation (i.e., the solver has no knowledge of the task, such as node classification, and vice versa), we can train this neural solver in a lightweight, unsupervised manner. Once learned, the IGNN-Solver can solve the fixed-point equation within the IGNN layer, facilitating further model updates. Note that our solver does not alter the original structure of the model but only accelerates the solution process. Details of the training strategy involving the IGNN-Solver are provided in Appendix B.
IV-A General Formulation
Let $X$ be the input features and $\mathcal{G}$ be the graph; IGNN learns the node representation by finding the fixed point below:
(7)
The overall structure of the IGNN-Solver is shown in Algorithm 2. Specifically, it learns the parameters that control the weights among the past approximate solutions, and those that control the mixing between past residuals and the approximate solution, through a tiny solver network that accelerates the next update step. To train the IGNN-Solver, we minimize the joint loss function by back-propagating through this -step temporal process, which is discussed in Section IV-C. It is worth noting that the original IGNN is frozen (i.e., its model parameters are fixed) and only the IGNN-Solver parameters are trained here, so we do not need the ground-truth label that corresponds to input . This implies that the IGNN-Solver can also be fine-tuned during inference after deployment.
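In this notation, the fixed-point condition in (7) that the solver targets can be written schematically as (our rendering; the concrete layer is the one defined in Section III-B):

```latex
Z^{\star} = f_{\theta}\!\left( Z^{\star}, X;\, \mathcal{G} \right),
```

where $f_{\theta}$ denotes the frozen IGNN layer acting on the graph $\mathcal{G}$.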
IV-A1 Initializer
To accelerate the convergence of predicted values, we propose constructing an initializer
(8)
where the parameters of the initializer are learnable, providing a rapid and reasonable input-based initial estimate rather than simply setting the initial value to a constant (e.g., zero) or a random guess. We set the intermediate dimension to be significantly smaller than the feature dimension to reduce the training overhead of the initializer.
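A minimal PyTorch sketch of such an initializer, assuming a two-layer MLP with a narrow bottleneck (class name and sizes are our assumptions), is:

```python
import torch
import torch.nn as nn

class SolverInitializer(nn.Module):
    """Illustrative initializer for (8): maps node features X (n x d) to an
    initial guess Z_0 (n x d_state) through a bottleneck of width r << d."""
    def __init__(self, d_in: int, d_state: int, r: int = 16):
        super().__init__()
        self.down = nn.Linear(d_in, r)        # compress features to a tiny space
        self.up = nn.Linear(r, d_state)       # expand to the IGNN state dimension

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(torch.relu(self.down(x)))

# e.g. z0 = SolverInitializer(d_in=X.shape[1], d_state=64)(X)
```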
IV-A2 Improved Anderson Iterations with tiny-GNN
In the original AA solver, the parameter is predetermined and fixed [13], whereas the parameters are determined by solving a series of linear equations using the least-squares method. Although this approach operates adequately, its efficiency and adaptability in optimizing IGNN models are limited. Therefore, we propose introducing a tiny and learnable graph neural network as
(9)
where , to predict the two parameters instead of setting them as the least squares solution on the past residuals .
Remark 1 (Well-posedness of IGNN-Solver).
The proposed IGNN-Solver remains well-posed when the IGNN satisfies the reparameterization defined in (4). Note that the IGNN-Solver introduces two key improvements while maintaining the structure and parameters of the model. First, we propose an initializer for a better initial guess, establishing a more meaningful starting point, reducing solving time, and shortening the fixed-point trajectory. Second, we parameterize the Anderson weights and step sizes within their original range, ensuring that well-posedness is preserved. Consequently, the modifications introduced in the IGNN-Solver retain the foundational characteristics of classical AA without compromising the well-posedness condition. Therefore, for an IGNN whose weights satisfy the well-posedness condition, the corresponding IGNN-Solver is also well-posed, thereby guaranteeing the existence and uniqueness of its solution.
IV-B Graph Sparsification and Storage Compression
Directly using the original graph for modeling would result in an oversized network [17, 18], deviating from the initial goal of using a tiny network for prediction. Moreover, because of the curse of dimensionality, directly mapping the data from a high-dimensional space to an extremely low-dimensional one is not appropriate [19]. Therefore, in response to the challenges posed by large-scale graphs, we take into account both graph sparsification and storage compression in the design of the solver, as detailed below.
IV-B1 Graph Sparsification
Firstly, the number of edges in the graph is usually large in practice, as it grows with the scale of the input (for example, in the ogbn-products dataset, the number of edges is close to 62M). To maintain the lightweight nature of the tiny GNN and reduce the computational cost of prediction, we propose introducing graph sparsification [43, 44, 45] to obtain a lighter graph:
(10)
where represents the sparse subgraph of , obtained via a graph sparsification operator , with the parameter controlling the sparsity of . We choose the RPI-Graph approach [17] in this paper, which is a plug-and-play graph sparsification method based on the principle of relevant information. It is worth noting that this subgraph is used exclusively in the tiny-GNN predictor and does not alter the original graph of the IGNN model. An illustrative sketch of the sparsification interface follows.
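The sketch below illustrates only the interface: it uses uniform random edge sampling as a stand-in for RPI-Graph [17] (which instead selects edges by the principle of relevant information); the function name and parameters are ours:

```python
import numpy as np
import scipy.sparse as sp

def random_edge_sparsifier(adj: sp.csr_matrix, keep_ratio: float = 0.1,
                           seed: int = 0) -> sp.csr_matrix:
    """Keep a `keep_ratio` fraction of the undirected edges uniformly at random.

    A placeholder for the RPI-Graph sparsifier; the resulting subgraph plays the
    same role as the sparse subgraph in (10).
    """
    rng = np.random.default_rng(seed)
    upper = sp.triu(adj, k=1).tocoo()                  # each undirected edge once
    n_keep = max(1, int(keep_ratio * upper.nnz))
    idx = rng.choice(upper.nnz, size=n_keep, replace=False)
    kept = sp.coo_matrix((upper.data[idx], (upper.row[idx], upper.col[idx])),
                         shape=adj.shape)
    return (kept + kept.T).tocsr()                     # symmetrize back
```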
IV-B2 Storage Compression
Besides, considering that the number of nodes is relatively large (for example, approximately 169K in the ogbn-arxiv dataset), it is not appropriate to map from a high-dimensional space to a very low-dimensional one in (9) directly [25, 46, 47], owing to the curse of dimensionality [19]. Therefore, to keep the solver fast and compact, we compress the state to form a smaller yet still representative version. Specifically, we map it to an appropriately sized space with a multilayer perceptron (MLP). Then, the most recent sets of fixed-point estimates are merged into one feature matrix:
(11)
where represents the concatenation operation that stacks the compressed storage matrix in the -th iteration.
Once the smaller yet representative storage matrix is obtained as described above, we treat it as a -channel matrix and employ a tiny graph neural network layer:
(12)
to predict the relative weights assigned to the past residuals, as well as the AA mixing coefficient at the -th iteration. Notably, we normalize the original output using the function to ensure that the constraint is satisfied.
In this way, the tiny GNN autonomously learns and adjusts these parameters based on previous solver steps, while also receiving gradients from subsequent iterations; an illustrative sketch of this predictor is given below. We provide a comparison of different choices for the predictor in Section V-D, demonstrating that IGNN-Solver improves the convergence path. We also introduce the training procedure next.
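To make the data flow concrete, a minimal PyTorch sketch of such a predictor is given below. The class name, layer sizes, the sigmoid on the mixing coefficient, and the node-pooling step are our assumptions; only the overall interface (compressed recent iterates in, normalized weights and a mixing coefficient out) follows the description above:

```python
import torch
import torch.nn as nn

class TinyGNNSolverHead(nn.Module):
    """Illustrative learned predictor for the AA parameters (alpha, beta)."""
    def __init__(self, d_state: int, d_compress: int = 16, m: int = 5):
        super().__init__()
        self.m = m
        self.compress = nn.Linear(d_state, d_compress)   # storage compression, cf. (11)
        self.head = nn.Linear(m * d_compress, m + 1)     # tiny graph layer weights, cf. (12)

    def forward(self, past_z: torch.Tensor, adj_sparse: torch.Tensor):
        # past_z: (m, n, d_state) stacked recent iterates; adj_sparse: sparse (n, n)
        _, n, _ = past_z.shape
        h = self.compress(past_z)                        # (m, n, d_compress)
        h = h.permute(1, 0, 2).reshape(n, -1)            # (n, m * d_compress)
        h = torch.sparse.mm(adj_sparse, h)               # one message-passing step
        out = self.head(h).mean(dim=0)                   # pool over nodes -> (m + 1,)
        alpha = torch.softmax(out[: self.m], dim=0)      # weights over past residuals, sum to one
        beta = torch.sigmoid(out[self.m])                # mixing coefficient in (0, 1); our assumption
        return alpha, beta
```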
IV-C Training IGNN-Solver
Unlike a neural ODE solver, the fixed-point trajectory of the IGNN is not unique. Therefore, trajectory fitting is not applicable to IGNN-Solver training. Instead, the goal is simply to bring everything as close as possible to . Formally, given an IGNN-Solver that returns and , we introduce three objective functions for its training. The complete training algorithm can be found in Algorithm 2.
Initializer Loss Functions
To train the initializer, we minimize the distance between the initial guess and the fixed-point by
(13)
to facilitate the subsequent iteration process. Since the initializer directly predicts based on the input without going through any iteration, we separate this loss from the other components.
Reconstruction Loss Functions
The training of the solver does not require reference to label information or any trajectory fitting. Apart from the loss necessary for training the initializer above, we introduce a reconstruction loss
(14)
where the reconstruction loss aims to make all intermediate predictions converge to the accurate fixed point as closely as possible.
Auxiliary Loss Functions
Although we utilize and to improve the generic solver, we have empirically found that using an auxiliary loss to guide the solver’s prediction of is sometimes beneficial, especially in the early stages of training. Therefore, in practice, we suggest including this loss and gradually diminishing it as training progresses (i.e., decaying the weight of this loss to ).
(15)
The final form of the joint loss function is as follows:
(16)
where , and control the weights of the three loss functions mentioned above; we elaborate further in Section A-B.
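For readability, a plausible rendering of (13)–(16) in this notation is given below; the exact norms and the target $\hat{\alpha}_k$ of the auxiliary term are our assumptions:

```latex
\mathcal{L}_{\mathrm{init}} = \lVert z_0 - z^{\star} \rVert^2, \qquad
\mathcal{L}_{\mathrm{rec}}  = \sum_{k=1}^{m} w_k \,\lVert z_k - z^{\star} \rVert^2, \qquad
\mathcal{L}_{\mathrm{aux}}  = \sum_{k=1}^{m} \lVert \alpha_k - \hat{\alpha}_k \rVert^2,
\qquad
\mathcal{L} = \lambda_1 \mathcal{L}_{\mathrm{init}} + \lambda_2 \mathcal{L}_{\mathrm{rec}} + \lambda_3 \mathcal{L}_{\mathrm{aux}} ,
```

where $z_k$ is the solver's $k$-th iterate, $z^{\star}$ the reference fixed point, $w_k$ the per-step weights discussed in Section V-D4, and $\hat{\alpha}_k$ a reference weight vector (e.g., the classical least-squares AA weights).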
| Type | Models | Amazon-all | Reddit | ogbn-arxiv | ogbn-products |
|---|---|---|---|---|---|
| | Nodes | | | | |
| | Edges | | | | |
| Explicit | GCN [12] | | | | |
| | GAT [23] | | | | |
| | SGC [48] | | | | |
| | APPNP [49] | | | | |
| | JKNet [50] | | | | |
| | AM-GCN [25] | | | | |
| | DEMO-Net [51] | | | | |
| | GCNII [52] | | | | |
| | ACM-GCN [53] | | | | |
| Implicit | IGNN [1] | | | | |
| | EIGNN [2] | | | | |
| | MIGNN [3] | | | | |
| | IGNN-Solver (AA) | | | | |
| | IGNN-Solver (NN) | | | | |
| | IGNN-Solver (Ours) | | | | |
V Experiments
In this section, we compare the speed/accuracy Pareto curves, rather than a single point on the curve, of IGNN-Solver against IGNNs and several state-of-the-art (SOTA) GNNs across various classification tasks at both the node and graph levels. Our goal is to demonstrate that: 1) IGNN-Solver achieves nearly a to inference acceleration without sacrificing accuracy; and 2) IGNN-Solver is extremely compact, adding little overhead to training. Specifically, we compare our approach with numerous representative baselines and with two variants of our method: IGNN w. AA (using the original Anderson Acceleration to speed up IGNN) and IGNN w. NN (replacing our proposed graph neural solver with a standard neural network solver for parameter learning).
In Section V-D, we sequentially demonstrate that the training overhead of IGNN-Solver is small relative to the total time overhead of IGNN. Additionally, we provide further evidence regarding the convergence and generalizability of the IGNN-Solver. We compare the training/inference time costs per epoch between IGNN and IGNN-Solver, visualizing the advantages of our approach. Furthermore, we conduct ablation studies to validate the stability of the IGNN-Solver and assess the effectiveness of its components. Additional experimental setups and descriptions are detailed in Appendix A.
| Type | Models | Citeseer | ACM | CoraFull | BlogCatalog | Flickr |
|---|---|---|---|---|---|---|
| | Nodes | | | | | |
| | Edges | | | | | |
| Explicit | GCN [12] | | | | | |
| | GAT [23] | | | | | |
| | SGC [48] | | | | | |
| | APPNP [49] | | | | | |
| | JKNet [50] | | | | | |
| | AM-GCN [25] | | | | | |
| | DEMO-Net [51] | | | | | |
| | GCNII [52] | | | | | |
| | ACM-GCN [53] | | | | | |
| Implicit | IGNN [1] | | | | | |
| | EIGNN [2] | | | | | |
| | MIGNN [3] | | | | | |
| | IGNN-Solver (AA) | | | | | |
| | IGNN-Solver (NN) | | | | | |
| | IGNN-Solver (Ours) | | | | | |
V-A Experiments Settings
V-A1 Datasets
The evaluation is conducted on nine different-field, real-world benchmarks for node classification tasks, including four citation datasets (Citeseer, ACM, CoraFull, ogbn-arxiv), three social interaction datasets (BlogCatalog, Flickr, Reddit), and two product network datasets (Amazon-all, ogbn-products); as well as five bioinformatics benchmarks (MUTAG, PTC_MR, COX2, PROTEINS, NCI1) for graph classification tasks. Further details can be found in Appendix A. Notably, to demonstrate the scalability of the proposed model on larger datasets, we use four large-scale datasets: Amazon-all, Reddit, ogbn-arxiv, and ogbn-products, two of which are from the Open Graph Benchmark (OGB) [22].
V-A2 Baselines
We compare IGNN-Solver with a comprehensive set of baselines in graph node classification tasks, including three implicit GNNs, i.e., MIGNN [3], EIGNN [2], IGNN [1], and nine explicit/traditional GNNs, i.e., AM-GCN [25], GCN [12], GAT [23], SGC [48], APPNP [49], JKNet [50], DEMO-Net [51], GCNII [52], ACM-GCN [53]. For the sake of fairness, we use the same hidden layer configuration for all baseline models. For example, in the Flickr dataset, the dimension of the hidden layers is uniformly set to for all methods (including ours, see in Table IV). In addition, for EIGNN, the arbitrary gamma is set to , which is consistent with the original paper. For AM-GCN, the number of nearest neighbors in the KNN graph is set from . For GCN, the hidden layer setup is the same as that for others. For GAT, three attention headers are used for each layer. For SGC, the power of self-loops in the graph adjacency matrix is set to . For APPNP, we set the number of iterations to and teleport probability to . For JKNet, we set layers to in small datasets and set to in large ones. For DEMO-Net, the regularization parameter is set as , the learning rate is set as , and the hash dimension is set as . For GCNII, we set and regularization to , consistent with the original paper. For ACM-GCN, we set layers to in small datasets and set to in large ones. For our IGNN-Solver and its variants, we employ the same settings as the basic IGNN model. Furthermore, we adjust these parameters affecting the convergence and performance of IGNN through a combined approach of hierarchical grid search and manual tuning. The Adam optimizer [54] is used for optimization.
V-B Performance on Node Classification Tasks
V-B1 Experimental setup
To demonstrate the superiority of the IGNN-Solver over IGNNs in terms of both performance and efficiency, we analyze the movement of the speed/accuracy Pareto curve across various datasets, rather than concentrating on a single point on the curve, as depicted in Figures 2 and 3. All experiments are conducted five times, and the best results are reported. Furthermore, we present the node classification performance results for each method on various datasets in Tables I and II. All experiments are conducted five times, and we report the average results along with the standard deviation. Additional details on the training procedure and the hyperparameters used for each task are provided in Appendices A and B.
V-B2 Results on large-scale tasks
From Figure 2 (the speed/accuracy Pareto curve on large-scale datasets: Amazon-all, Reddit, ogbn-arxiv and ogbn-products), we observe that:
1. Comprehensive performance and efficiency: IGNN-Solver generally outperforms all other methods, particularly in large datasets with a greater graph radius. This advantage is more pronounced in such cases. The improved performance is attributed to IGNN-Solver’s significant enhancement in the convergence path and the better initialization of the fixed-point equation through the initializer;
2. Rapid inference advantage: IGNN-Solver demonstrates a notable speed advantage during inference. For instance, on the large-scale dataset Amazon-all, our IGNN-Solver achieves faster inference than IGNN while maintaining the same performance, with consistently at least acceleration; a similar pattern is observed on the other datasets.
From Table I, we observe that implicit GNNs with infinite depth generally outperform shallow explicit GNNs in most cases. Furthermore, IGNN-Solver consistently shows higher accuracy across most datasets compared to other state-of-the-art explicit GNNs such as DEMO-Net [51] and GCNII [52], indicating its superior performance.
V-B3 Results on small-scale tasks
We further evaluate IGNN-Solver on several small-scale graph node classification tasks, including Citeseer, ACM, CoraFull, BlogCatalog, and Flickr. The hyperparameters used are outlined in Table IV, Section A-B. The mean accuracy and standard deviation are reported in Table II, while the speed/accuracy Pareto curve is presented in Figure 3. In general, learning LRD is not crucial for these tasks, as the graph diameters are relatively small [3]. However, as shown in Table II, even for these small-scale node classification tasks, IGNN-Solver outperforms IGNNs and many explicit GNNs, as well as other advanced implicit models. As illustrated in Figure 3, the solver continues to improve the convergence path of IGNN, even in small-scale tasks. These results, along with those in Section V-B, confirm the expressive power of the IGNN-Solver using learnable graph neural networks, even surpassing that of many explicit and enhanced implicit GNNs.
V-C Performance on Graph Classification Tasks
| Type | Method | MUTAG | PTC_MR | COX2 | PROTEINS | NCI1 |
|---|---|---|---|---|---|---|
| Explicit | WL [55] | 84.1 ± 1.9 | 58.0 ± 2.5 | 83.2 ± 0.2 | 74.7 ± 0.5 | 84.5 ± 0.5 |
| | DCNN [56] | 67.0 | 56.6 | - | 61.3 | 62.6 |
| | DGCNN [57] | 85.8 | 58.6 | - | 75.5 | 74.4 |
| | GIN [58] | 89.4 ± 5.6 | 64.6 ± 7.0 | - | 76.2 ± 2.8 | 82.7 ± 1.7 |
| | FDGNN [59] | 88.5 ± 3.8 | 63.4 ± 5.4 | 83.3 ± 2.9 | 76.8 ± 2.9 | 77.8 ± 1.6 |
| | PK [60] | 76.0 ± 2.7 | 59.5 ± 2.4 | 81.0 ± 0.2 | 73.7 ± 0.7 | 82.5 ± 0.5 |
| | GCN [12] | 85.6 ± 5.8 | 64.2 ± 4.3 | - | 76.0 ± 3.2 | 80.2 ± 2.0 |
| Implicit | IGNN [1] | 76.0 ± 13.4 | 60.5 ± 6.4 | 79.7 ± 3.4 | 76.5 ± 3.4 | 73.5 ± 1.9 |
| | CGS [61] | 89.4 ± 5.6 | 64.7 ± 6.4 | - | 76.3 ± 4.9 | 77.6 ± 2.0 |
| | GIND [62] | 89.3 ± 7.4 | 66.9 ± 6.6 | 84.8 ± 4.2 | 77.2 ± 2.9 | 78.8 ± 1.7 |
| | MIGNN [3] | 81.8 ± 9.1 | 72.6 ± 5.4 | 85.0 ± 5.3 | 77.9 ± 2.6 | 79.6 ± 1.8 |
| | IGNN-Solver (Ours) | 90.5 ± 1.6 | 71.8 ± 3.1 | 85.3 ± 3.0 | 78.4 ± 2.6 | 80.2 ± 1.1 |
V-C1 Experimental setup
We extend our evaluation to graph classification tasks to further demonstrate the effectiveness and versatility of IGNN-Solver. We conduct experiments on several widely used bioinformatics benchmark datasets for graph classification, including MUTAG, PTC_MR, COX2, PROTEINS, and NCI1. These datasets vary in size and complexity, as detailed in Appendix A. We train the model using 10-fold cross-validation, following the experimental setup of [1]. For each dataset, standard 10-fold cross-validation is used to evaluate the test performance of the IGNN-Solver, with the baselines and their results drawn from [3, 62]. The average prediction accuracies and standard deviations are reported in Table III.
V-C2 Results and discussion
The results of the graph classification experiments, summarized in Table III, show that our IGNN-Solver outperforms the baseline models on almost all datasets, clearly demonstrating its ability to effectively capture graph structures and significantly improve classification accuracy. These findings confirm the effectiveness of the IGNN-Solver in graph classification tasks, further supporting its potential for deployment in real-world applications where graph data is prevalent.
V-D Empirical Understandings of IGNN-Solver
V-D1 Overhead in training
To investigate the proportion of training overhead attributed to the solver throughout the entire training process, we depict in Figure 4 the percentage of training computation spent on IGNN-Solver relative to the total time overhead of IGNN. Our approach is not only effective but also incurs minimal training overhead for the solver: the solver module is relatively small and requires only about of the original training time required by the IGNN model. Moreover, this proportion becomes even lower for large-scale data. For instance, on the Amazon dataset, IGNN necessitates a total runtime of hours, whereas the solver only requires approximately minutes, indicating that our solver maintains its lightweight nature across datasets of any scale.
V-D2 Efficiency study
We present additional evidence on the convergence and generalizability of the neural solvers in Figure 5, where the ordinate represents the relative residual , with denoting the Frobenius norm. We compare the convergence of a pre-trained IGNN model under two conditions: 1) canonical iteration in IGNN [1], and 2) IGNN-Solver with unrolling steps.
From Figure 5, we observe that the IGNN-Solver, trained with unrolling steps, is capable of generalizing beyond , with both solvers ultimately reaching a stable state, demonstrating good convergence. Moreover, thanks to the fixed-point solution of the graph neural parameterization, IGNN-Solver continuously enhances the convergence of the canonical iterators while being more computationally efficient. Specifically, each step of the neural solver is cheaper than the standard iterator step. This explains the observed improvement in inference efficiency of at least in Section V-B.
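For clarity, the relative residual on the ordinate is the standard quantity (stated here in our notation):

```latex
\mathrm{res}\!\left(z_k\right) = \frac{\lVert f_{\theta}(z_k, X) - z_k \rVert_F}{\lVert f_{\theta}(z_k, X) \rVert_F},
```

where $\lVert\cdot\rVert_F$ denotes the Frobenius norm.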
V-D3 Other choices for graph neural layers in IGNN-Solver
We provide additional evidence on the convergence and generalization of IGNN-Solver by exploring alternatives to our proposed graph neural (GN) layers, including neural network (NN), temporal convolutional network (TCN) [63], and gated recurrent unit (GRU) [64]. The relative residual curves on the Citeseer dataset, observed during both the warm-up and final inference stages, are presented in Figure 6. Interestingly, despite their larger number of parameters and more complex architectures, TCN and GRU consistently underperform compared to NN. Notably, IGNN-Solver (GN) significantly improves the convergence path of all solvers and exhibits the fastest rate of residual reduction, establishing itself as the most effective predictor in IGNN-Solver due to its ability to effectively exploit rich graph information.
V-D4 Ablation study
In this section, we analyze the effect of different loss components on the IGNN-Solver. The performance of IGNN-Solver and its variants on the Citeseer dataset is shown in Figure 7, with similar patterns observed across other datasets and solvers. For the main convergence loss in the fixed-point iterations (16), we design two contrasting schemes for its weights : setting them to a constant value (i.e., for all ), or setting all values except the -th term to zero (i.e., if else ). Both variant solvers still perform well, yet our suggested monotonically increasing scheme (i.e., emphasizing the later steps to a greater extent) exhibits the best performance.
Furthermore, removing either the initializer or the alpha loss negatively impacts performance, with the former having a more detrimental impact on the solver. Nonetheless, all these ablation configurations significantly outperform methods without a solver, underscoring the advantages of employing a custom learnable solver for the IGNN model.
VI Conclusion
This paper introduces IGNN-Solver, a novel graph neural solver tailored for fast fixed-point solving in IGNNs. Unlike previous approaches that impose constraints on the structure or parameters of the implicit layer, we leverage the important idea that implicit models separate the representation from the forward computation. By leveraging the generalized Anderson Acceleration method and parameterizing it with a tiny GNN, IGNN-Solver learns iterative updates as a graph-dependent temporal process. To address large-scale graph tasks, we further tailor sparsification and storage compression methods to the IGNN-Solver and integrate them into its design. Our extensive experiments demonstrate the significant acceleration in inference achieved by IGNN-Solver, with a speedup ranging from to , all while maintaining accuracy. Notably, this acceleration is particularly pronounced as the scale of the graph increases. These findings underscore the potential of IGNN-Solver for large-scale end-to-end deployment in real-world applications.
Looking ahead, IGNN-Solver opens new avenues for the integration of learnable solvers in implicit GNNs, bridging the gap between solver efficiency and adaptability to diverse graph structures. Future research could explore extending IGNN-Solver to handle dynamic or heterogeneous graphs, further broadening its applicability. Additionally, investigating the interplay between solver generalization and graph topology holds promise for uncovering deeper insights into the dynamics of graph-based learning. With its ability to combine speed, scalability, and accuracy, IGNN-Solver represents a pivotal step toward making implicit graph models a viable choice for large-scale real-world systems. We hope that IGNN-Solver inspires further innovations in learnable solvers for IGNNs, driving progress in scalable and efficient algorithms on graphs.
References
- [1] F. Gu, H. Chang, W. Zhu, S. Sojoudi, and L. El Ghaoui, “Implicit graph neural networks,” Advances in Neural Information Processing Systems, vol. 33, pp. 11 984–11 995, 2020.
- [2] J. Liu, K. Kawaguchi, B. Hooi, Y. Wang, and X. Xiao, “Eignn: Efficient infinite-depth graph neural networks,” Advances in Neural Information Processing Systems, vol. 34, pp. 18 762–18 773, 2021.
- [3] J. Baker, Q. Wang, C. D. Hauck, and B. Wang, “Implicit graph neural networks: a monotone operator viewpoint,” in International Conference on Machine Learning. PMLR, 2023, pp. 1521–1548.
- [4] Q. Chen, Y. Wang, Y. Wang, J. Chang, Q. Tian, J. Yang, and Z. Lin, “Efficient and scalable implicit graph neural networks with virtual equilibrium,” in 2022 IEEE International Conference on Big Data (Big Data). IEEE, 2022, pp. 864–873.
- [5] Q. Li, Z. Han, and X. M. Wu, “Deeper insights into graph convolutional networks for semi-supervised learning,” in 32nd AAAI Conference on Artificial Intelligence, AAAI 2018. AAAI press, 2018, pp. 3538–3545.
- [6] U. Alon and E. Yahav, “On the bottleneck of graph neural networks and its practical implications,” in International Conference on Learning Representations, 2020.
- [7] X. Zhang, Y. Xu, W. He, W. Guo, and L. Cui, “A comprehensive review of the oversmoothing in graph neural networks,” in CCF Conference on Computer Supported Cooperative Work and Social Computing. Springer, 2023, pp. 451–465.
- [8] J. Liu, B. Hooi, K. Kawaguchi, Y. Wang, C. Dong, and X. Xiao, “Scalable and effective implicit graph neural networks on large graphs,” in The Twelfth International Conference on Learning Representations, 2024. [Online]. Available: https://openreview.net/forum?id=QcMdPYBwTu
- [9] Z. Geng, X.-Y. Zhang, S. Bai, Y. Wang, and Z. Lin, “On training implicit models,” Advances in Neural Information Processing Systems, vol. 34, pp. 24 247–24 260, 2021.
- [10] Y. Yang, T. Liu, Y. Wang, Z. Huang, and D. Wipf, “Implicit vs unfolded graph neural networks,” arXiv preprint arXiv:2111.06592, 2021.
- [11] M. Nastorg, M.-A. Bucci, T. Faney, J.-M. Gratien, G. Charpiat, and M. Schoenauer, “An implicit gnn solver for poisson-like problems,” arXiv preprint arXiv:2302.10891, 2023.
- [12] T. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” in 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017.
- [13] D. G. Anderson, “Iterative procedures for nonlinear integral equations,” Journal of the ACM (JACM), vol. 12, no. 4, pp. 547–560, 1965.
- [14] S. Bai, J. Z. Kolter, and V. Koltun, “Deep equilibrium models,” Advances in neural information processing systems, vol. 32, 2019.
- [15] S. Bai, V. Koltun, and J. Z. Kolter, “Multiscale deep equilibrium models,” in Proceedings of the 34th International Conference on Neural Information Processing Systems, 2020, pp. 5238–5250.
- [16] ——, “Neural deep equilibrium solvers,” in International Conference on Learning Representations, 2021.
- [17] S. Yu, F. Alesiani, W. Yin, R. Jenssen, and J. C. Principe, “Principle of relevant information for graph sparsification,” in Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, ser. Proceedings of Machine Learning Research, J. Cussens and K. Zhang, Eds., vol. 180. PMLR, 01–05 Aug 2022, pp. 2331–2341. [Online]. Available: https://proceedings.mlr.press/v180/yu22c.html
- [18] C. Zheng, B. Zong, W. Cheng, D. Song, J. Ni, W. Yu, H. Chen, and W. Wang, “Robust graph representation learning via neural sparsification,” in International Conference on Machine Learning. PMLR, 2020, pp. 11 458–11 468.
- [19] M. Köppen, “The curse of dimensionality,” in 5th online world conference on soft computing in industrial applications (WSC5), vol. 1, 2000, pp. 4–8.
- [20] J. Yang and J. Leskovec, “Defining and evaluating network communities based on ground-truth,” in Proceedings of the ACM SIGKDD Workshop on Mining Data Semantics, 2012, pp. 1–8.
- [21] W. Hamilton, Z. Ying, and J. Leskovec, “Inductive representation learning on large graphs,” Advances in neural information processing systems, vol. 30, 2017.
- [22] W. Hu, M. Fey, M. Zitnik, Y. Dong, H. Ren, B. Liu, M. Catasta, and J. Leskovec, “Open graph benchmark: Datasets for machine learning on graphs,” Advances in neural information processing systems, vol. 33, pp. 22 118–22 133, 2020.
- [23] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio, “Graph Attention Networks,” International Conference on Learning Representations, 2018. [Online]. Available: https://openreview.net/forum?id=rJXMpikCZ
- [24] F. Monti, D. Boscaini, J. Masci, E. Rodola, J. Svoboda, and M. M. Bronstein, “Geometric deep learning on graphs and manifolds using mixture model cnns,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017, pp. 5115–5124.
- [25] X. Wang, M. Zhu, D. Bo, P. Cui, C. Shi, and J. Pei, “Am-gcn: Adaptive multi-channel graph convolutional networks,” in Proceedings of the 26th ACM SIGKDD International conference on knowledge discovery & data mining, 2020, pp. 1243–1253.
- [26] J. Liu, B. Hooi, K. Kawaguchi, and X. Xiao, “Mgnni: Multiscale graph neural networks with implicit layers,” Advances in Neural Information Processing Systems, vol. 35, pp. 21 358–21 370, 2022.
- [27] R. T. Chen, Y. Rubanova, J. Bettencourt, and D. K. Duvenaud, “Neural ordinary differential equations,” Advances in neural information processing systems, vol. 31, 2018.
- [28] P. H. Avelar, A. R. Tavares, M. Gori, and L. C. Lamb, “Discrete and continuous deep residual learning over graphs,” arXiv preprint arXiv:1911.09554, 2019.
- [29] M. Poli, S. Massaroli, J. Park, A. Yamashita, H. Asama, and J. Park, “Graph neural ordinary differential equations,” arXiv preprint arXiv:1911.07532, 2019.
- [30] L.-P. Xhonneux, M. Qu, and J. Tang, “Continuous graph neural networks,” in International conference on machine learning. PMLR, 2020, pp. 10 432–10 441.
- [31] B. Chamberlain, J. Rowbottom, M. I. Gorinova, M. Bronstein, S. Webb, and E. Rossi, “Grand: Graph neural diffusion,” in International conference on machine learning. PMLR, 2021, pp. 1407–1418.
- [32] M. Thorpe, T. Nguyen, H. Xia, T. Strohmer, A. Bertozzi, S. Osher, and B. Wang, “Grand++: Graph neural diffusion with a source term,” ICLR, 2022.
- [33] Z. Ling, X. Xie, Q. Wang, Z. Zhang, and Z. Lin, “Global convergence of over-parameterized deep equilibrium models,” in International Conference on Artificial Intelligence and Statistics. PMLR, 2023, pp. 767–787.
- [34] E. Winston and J. Z. Kolter, “Monotone operator equilibrium networks,” Advances in neural information processing systems, vol. 33, pp. 10 718–10 728, 2020.
- [35] Z. Ling, L. Li, Z. Feng, Y. Zhang, F. Zhou, R. C. Qiu, and Z. Liao, “Deep equilibrium models are almost equivalent to not-so-deep explicit models for high-dimensional Gaussian mixtures,” in Proceedings of the 41st International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, vol. 235. PMLR, 21–27 Jul 2024, pp. 30 585–30 609. [Online]. Available: https://proceedings.mlr.press/v235/ling24a.html
- [36] Q. Zhao, W. Ren, T. Li, X. Xu, and H. Liu, “Graphgpt: Graph learning with generative pre-trained transformers,” arXiv preprint arXiv:2401.00529, 2023.
- [37] Z. Huang, S. Bai, and J. Z. Kolter, “: Implicit layers for implicit representations,” Advances in Neural Information Processing Systems, vol. 34, pp. 9639–9650, 2021.
- [38] J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl, “Neural message passing for quantum chemistry,” in International conference on machine learning. PMLR, 2017, pp. 1263–1272.
- [39] E. K. Ryu and S. Boyd, “Primer on monotone operator methods,” Appl. comput. math, vol. 15, no. 1, pp. 3–43, 2016.
- [40] L. Jing, Y. Shen, T. Dubcek, J. Peurifoy, S. Skirlo, Y. LeCun, M. Tegmark, and M. Soljačić, “Tunable efficient unitary neural networks (eunn) and their application to rnns,” in International Conference on Machine Learning. PMLR, 2017, pp. 1733–1741.
- [41] P. R. Halmos, “A hilbert space problem book,” Graduate Texts in Mathematics, 1982.
- [42] S. G. Krantz and H. R. Parks, The implicit function theorem: history, theory, and applications. Springer Science & Business Media, 2002.
- [43] B. Hui, D. Yan, X. Ma, and W.-S. Ku, “Rethinking graph lottery tickets: Graph sparsity matters,” arXiv preprint arXiv:2305.02190, 2023.
- [44] J. F. Lutzeyer, C. Wu, and M. Vazirgiannis, “Sparsifying the update step in graph neural networks,” in Topological, Algebraic and Geometric Learning Workshops 2022. PMLR, 2022, pp. 258–268.
- [45] S. Zhang, M. Wang, P.-Y. Chen, S. Liu, S. Lu, and M. Liu, “Joint edge-model sparse learning is provably efficient for graph neural networks,” arXiv preprint arXiv:2302.02922, 2023.
- [46] J. Zhao, X. Wang, C. Shi, B. Hu, G. Song, and Y. Ye, “Heterogeneous graph structure learning for graph neural networks,” in Proceedings of the AAAI conference on artificial intelligence, vol. 35, 2021, pp. 4697–4705.
- [47] M. Poli, S. Massaroli, A. Yamashita, H. Asama, and J. Park, “Hypersolvers: Toward fast continuous-depth models,” Advances in Neural Information Processing Systems, vol. 33, pp. 21 105–21 117, 2020.
- [48] F. Wu, A. Souza, T. Zhang, C. Fifty, T. Yu, and K. Weinberger, “Simplifying graph convolutional networks,” in International conference on machine learning. PMLR, 2019, pp. 6861–6871.
- [49] J. Gasteiger, A. Bojchevski, and S. Günnemann, “Predict then propagate: Graph neural networks meet personalized pagerank,” in International Conference on Learning Representations, 2018.
- [50] K. Xu, C. Li, Y. Tian, T. Sonobe, K.-i. Kawarabayashi, and S. Jegelka, “Representation learning on graphs with jumping knowledge networks,” in International conference on machine learning. PMLR, 2018, pp. 5453–5462.
- [51] J. Wu, J. He, and J. Xu, “Demo-net: Degree-specific graph neural networks for node and graph classification,” in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019, pp. 406–415. [Online]. Available: https://doi.org/10.1145/3292500.3330950
- [52] M. Chen, Z. Wei, Z. Huang, B. Ding, and Y. Li, “Simple and deep graph convolutional networks,” in International conference on machine learning. PMLR, 2020, pp. 1725–1735.
- [53] S. Luan, C. Hua, Q. Lu, J. Zhu, M. Zhao, S. Zhang, X.-W. Chang, and D. Precup, “Is heterophily a real nightmare for graph neural networks to do node classification?” arXiv preprint arXiv:2109.05641, 2021.
- [54] D. P. Kingma, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
- [55] N. Shervashidze, S. Vishwanathan, T. Petri, K. Mehlhorn, and K. Borgwardt, “Efficient graphlet kernels for large graph comparison,” in Artificial intelligence and statistics. PMLR, 2009, pp. 488–495.
- [56] J. Atwood and D. Towsley, “Diffusion-convolutional neural networks,” Advances in neural information processing systems, vol. 29, 2016.
- [57] M. Zhang, Z. Cui, M. Neumann, and Y. Chen, “An end-to-end deep learning architecture for graph classification,” in Proceedings of the AAAI conference on artificial intelligence, vol. 32, no. 1, 2018.
- [58] K. Xu, W. Hu, J. Leskovec, and S. Jegelka, “How powerful are graph neural networks?” arXiv preprint arXiv:1810.00826, 2018.
- [59] C. Gallicchio and A. Micheli, “Fast and deep graph neural networks,” in Proceedings of the AAAI conference on artificial intelligence, vol. 34, no. 04, 2020, pp. 3898–3905.
- [60] M. Neumann, R. Garnett, C. Bauckhage, and K. Kersting, “Propagation kernels: efficient graph kernels from propagated information,” Machine learning, vol. 102, pp. 209–245, 2016.
- [61] J. Park, J. Choo, and J. Park, “Convergent graph solvers,” arXiv preprint arXiv:2106.01680, 2021.
- [62] Q. Chen, Y. Wang, Y. Wang, J. Yang, and Z. Lin, “Optimization-induced graph implicit nonlinear diffusion,” in International Conference on Machine Learning. PMLR, 2022, pp. 3648–3661.
- [63] C. Lea, M. D. Flynn, R. Vidal, A. Reiter, and G. D. Hager, “Temporal convolutional networks for action segmentation and detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 156–165.
- [64] K. Cho, B. van Merriënboer, D. Bahdanau, and Y. Bengio, “On the properties of neural machine translation: Encoder–decoder approaches,” in Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, D. Wu, M. Carpuat, X. Carreras, and E. M. Vecchi, Eds. Doha, Qatar: Association for Computational Linguistics, Oct. 2014, pp. 103–111. [Online]. Available: https://aclanthology.org/W14-4012
- [65] S. Yun, M. Jeong, R. Kim, J. Kang, and H. J. Kim, “Graph transformer networks,” in Advances in Neural Information Processing Systems, vol. 32. Curran Associates, Inc., 2019, pp. 11 960–11 970. [Online]. Available: https://proceedings.neurips.cc/paper/2019/file/9d63484abb477c97640154d40595a3bb-Paper.pdf
- [66] A. Bojchevski and S. Günnemann, “Deep gaussian embedding of graphs: Unsupervised inductive learning via ranking,” in International Conference on Learning Representations, 2018, pp. 1–13.
- [67] Z. Meng, S. Liang, H. Bao, and X. Zhang, “Co-embedding attributed networks,” in Proceedings of the twelfth ACM international conference on web search and data mining, 2019, pp. 393–401.
- [68] W. Feng, J. Zhang, Y. Dong, Y. Han, H. Luan, Q. Xu, Q. Yang, E. Kharlamov, and J. Tang, “Graph random neural networks for semi-supervised learning on graphs,” Advances in neural information processing systems, vol. 33, pp. 22 092–22 103, 2020.
- [69] P. Yanardag and S. Vishwanathan, “Deep graph kernels,” in Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining, 2015, pp. 1365–1374.
- [70] P.-T. De Boer, D. P. Kroese, S. Mannor, and R. Y. Rubinstein, “A tutorial on the cross-entropy method,” Annals of operations research, vol. 134, pp. 19–67, 2005.
Supplementary Material
IGNN-Solver: A Graph Neural Solver for Implicit Graph Neural Networks
Appendix A Experimental Details
The experiments are conducted on nine public datasets of different scales for node classification tasks, including five widely adopted benchmark datasets, Citeseer, ACM, CoraFull, BlogCatalog, and Flickr, and four large-scale benchmark datasets, Amazon-all, Reddit, ogbn-arxiv, and ogbn-products, as well as five bioinformatics benchmarks for graph classification tasks: MUTAG, PTC_MR, COX2, PROTEINS, and NCI1. We endeavor to conduct inference testing under almost identical hyperparameter conditions as previous work [3, 2, 1, 25, 12, 23, 48, 49, 50], including performance and efficiency comparisons. All experiments are conducted independently five times (i.e., using five random seeds) on a machine with an Intel(R) Xeon(R) Gold 6138 CPU @ 2.00GHz and a single 3090 GPU. A detailed description of all tasks and additional details are provided below.
A-A Datasets
- Citeseer [12]: A citation network of research papers divided into six categories. The network takes 3,327 scientific papers as nodes and 4,732 citation links as edges. The feature of each node is a binary word vector indicating whether the corresponding words appear in the paper.
- ACM [65]: A citation network dataset in which nodes represent papers and node features are constructed from paper keywords. Papers are divided into three categories according to the type of conference they were published in.
- CoraFull [66]: Similar to Citeseer, CoraFull is a well-known citation network labeled by paper topic, containing 19,793 scientific publications. Each paper belongs to one of 70 categories; nodes represent papers and edges represent citations.
- BlogCatalog [67]: A social network composed of bloggers and their social relationships (such as friendships). Node attributes are constructed from keywords in the user profiles, and the labels represent bloggers' interests. All nodes are divided into six categories.
- Flickr [51]: An image-sharing social network in which nodes represent users and edges correspond to friendships among users. All nodes are divided into nine classes according to the users' interest groups.
- Amazon-all [20]: The widely used Amazon-all benchmark is built from the Amazon product co-purchasing network, with products as nodes and co-purchases as edges. It includes 58 product types, each with over 5,000 products, selected from a total pool of 75,149 product types.
- Reddit [68]: A social network in which nodes represent posts and edges indicate that the same user commented on the two connected posts. Each node carries a 602-dimensional feature vector.
- ogbn-arxiv [22]: A citation network of all Computer Science (CS) arXiv papers. Each node is an arXiv paper, and each directed edge indicates that one paper cites another. Each paper comes with a 128-dimensional feature vector obtained by averaging the embeddings of the words in its title and abstract. The task is to predict which of 40 subject areas each arXiv CS paper belongs to.
- ogbn-products [22]: An undirected, unweighted graph representing an Amazon product co-purchasing network. Nodes represent products sold on Amazon, and an edge between two products indicates that they are purchased together. The task is to predict the category of each product in a multi-class classification setup, where the 47 top-level categories are used as target labels.
- MUTAG [69]: A dataset of nitroaromatic compounds, where the task is to predict the mutagenic effect on a bacterium. It contains 188 graphs with an average of 17.9 nodes per graph.
- PTC_MR [69]: A dataset of 344 chemical compounds reporting carcinogenicity for male and female rats. Each compound is represented as a graph with atoms as nodes and chemical bonds as edges, with an average of 14.3 nodes per graph. The task is binary classification of whether a compound is carcinogenic; node features encode atom types and other chemical properties.
- COX2 [69]: A dataset of 467 cyclooxygenase-2 (COX-2) inhibitors, i.e., chemical compounds that inhibit the COX-2 enzyme, which is important in inflammation and pain. Each compound is represented as a graph with atoms as nodes and chemical bonds as edges.
- PROTEINS [69]: A dataset of proteins represented as graphs, where nodes are secondary structure elements and edges indicate neighborhood in the amino-acid sequence or in 3D space. It includes 1,113 graphs with an average of 39.1 nodes per graph.
- NCI1 [69]: A dataset of chemical compounds screened for activity against non-small cell lung cancer and ovarian cancer cell lines. It contains 4,110 graphs with an average of 29.8 nodes per graph.
TABLE IV: Hyper-parameters preset by IGNN-Solver across all datasets.

| Scale | Datasets | nhid | dropout | lr | max. solver iter. | max. training steps | training | test | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Small | Citeseer | 128 | 0.5 | 0.002 | 10 | 100 | 360 | 1000 | 0.1 | 5 | 1e-4 |
| | ACM | 128 | 0.5 | 0.001 | 10 | 100 | 180 | 1000 | 0.3 | 5 | 1e-4 |
| | CoraFull | 512 | 0.5 | 0.002 | 15 | 300 | 4,200 | 1000 | 0.3 | 5 | 1e-4 |
| | BlogCatalog | 512 | 0.5 | 0.001 | 10 | 100 | 360 | 1000 | 0.5 | 5 | 1e-4 |
| | Flickr | 512 | 0.5 | 0.002 | 15 | 200 | 540 | 1000 | 0.1 | 5 | 1e-5 |
| Large | Amazon-all | 128 | 0.5 | 0.005 | 20 | 1000 | 16,970 | 28,285 | 0.3 | 5 | 1e-5 |
| | Reddit | 512 | 0.5 | 0.005 | 15 | 2000 | 139,779 | 46,593 | 0.5 | 5 | 1e-5 |
| | ogbn-arxiv | 128 | 0.5 | 0.001 | 20 | 2000 | 90,941 | 48,603 | 0.5 | 5 | 1e-5 |
| | ogbn-products | 128 | 0.5 | 0.001 | 20 | 2000 | 196,615 | 2,213,091 | 0.5 | 5 | 1e-5 |
A-B Hyper-parameter Settings in IGNN-Solver
We present in Table IV all the hyper-parameters preset for IGNN-Solver across all datasets, including the threshold on the maximum number of solver iterations in IGNN-Solver and the maximum number of training steps in IGNN. It is worth noting that the coefficient penalizing the intermediate estimation error is scheduled in a linearly increasing manner, so that the penalty is stronger when the training step is large. Similarly, a second coefficient is scheduled in a linearly decreasing manner to prevent overfitting during later training epochs.
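To make the two schedules concrete, below is a minimal Python sketch; the endpoints, the step budget `max_steps`, and the names `penalty_weight` and `reg_weight` are illustrative assumptions rather than the paper's actual settings.

```python
def linear_schedules(step, max_steps, w_min=0.1, w_max=1.0):
    """Illustrative linear schedules for two solver-training coefficients.

    The coefficient penalizing intermediate estimation errors grows
    linearly with the training step, while the second coefficient decays
    linearly to avoid overfitting in later epochs. The endpoints are
    placeholders, not the values used in the paper.
    """
    progress = min(step / max_steps, 1.0)
    penalty_weight = w_min + (w_max - w_min) * progress  # linearly increasing
    reg_weight = w_max - (w_max - w_min) * progress      # linearly decreasing
    return penalty_weight, reg_weight
```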
Appendix B Training Strategies
B-A IGNN Training with a Frozen IGNN-Solver
During the training of the IGNN model, we assume that both the initializer and the neural solver of IGNN-Solver have already approached the desired state via the training method outlined in Section IV, and they are no longer updated during the subsequent model training. For a given IGNN layer $f_\theta$, input graph $A$, and feature matrix $X$, we define the fixed-point equation

$$Z^{\star} = f_\theta(Z^{\star}, A, X), \tag{17}$$

and utilize the IGNN-Solver to solve (17), obtaining the fixed point $Z^{\star}$. The fixed point $Z^{\star}$ is then passed through a linear layer to obtain an embedding, from which, jointly with the downstream task, we compute the Binary Cross-Entropy (BCE) loss [70], back-propagate, and update the parameters $\theta$ of $f_\theta$ in a supervised way. The complete process is illustrated in Algorithm 3.
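For concreteness, the PyTorch-style sketch below outlines one supervised training step with a frozen IGNN-Solver. The callables `ignn_layer`, `solver`, and `readout`, as well as the one-step re-application used to obtain gradients through the equilibrium, are our own assumptions for illustration and need not match the released implementation or Algorithm 3.

```python
import torch
import torch.nn.functional as F

def train_step(ignn_layer, solver, readout, A, X, labels, optimizer):
    """One supervised training step of IGNN with a frozen IGNN-Solver.

    ignn_layer: callable f_theta(Z, A, X) defining the fixed-point map.
    solver:     pre-trained IGNN-Solver that predicts the fixed point of
                the map; it is kept frozen (no gradient updates) here.
    readout:    linear layer mapping the fixed point Z* to logits.
    labels:     float tensor of (multi-hot) targets for the BCE loss.
    """
    optimizer.zero_grad()

    # Solve Z* = f_theta(Z*, A, X) with the frozen neural solver.
    with torch.no_grad():
        z_star = solver(lambda Z: ignn_layer(Z, A, X), X)

    # Re-attach the equilibrium to the computation graph with one extra
    # application of f_theta so gradients flow into the IGNN parameters
    # (a common one-step approximation of the equilibrium gradient).
    z_star = ignn_layer(z_star, A, X)

    logits = readout(z_star)
    loss = F.binary_cross_entropy_with_logits(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```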
B-B Advanced IGNN Training Strategy with IGNN-Solver
Given that the model parameters $\theta$ change only gradually over the training iterations (i.e., $\theta$ at a given training step depends only on the steps that precede it), we propose training the lightweight IGNN-Solver and the large IGNN model in an alternating way. Specifically, we adopt the following procedure: (i) pre-train the IGNN-Solver; (ii) train the IGNN for a block of steps with the solver frozen, using it to accelerate fixed-point solving; (iii) fine-tune the IGNN-Solver on the updated IGNN parameters for a small number of steps; then repeat steps (ii) and (iii) until training completes.
Here, the number of solver fine-tuning steps in each block is approximately 4% to 10% of the number of IGNN training steps in that block, adjusted as needed, and the blocks together make up the total IGNN training budget. It is reassuring that the additional cost of fine-tuning the IGNN-Solver (step (iii)) is more than offset by the substantial benefit of accelerated solving (step (ii)). Moreover, we find that the proposed IGNN-Solver does not lose expressive capability when this fine-tuning ratio is set excessively small or large: thanks to its robust stability, setting the ratio too high does not noticeably disturb the normal training of the IGNN, while setting it slightly lower still yields good generalization. Thus, the IGNN-Solver is insensitive to this ratio.
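A minimal sketch of this alternating schedule is given below; the callables `train_ignn_block` and `finetune_solver_block`, the block size, and the 5% default ratio are hypothetical placeholders, not the paper's exact settings.

```python
def alternating_training(train_ignn_block, finetune_solver_block,
                         total_ignn_steps, block_size=500,
                         finetune_ratio=0.05):
    """Alternate between (ii) training the large IGNN with the solver
    frozen and (iii) briefly fine-tuning the lightweight IGNN-Solver.

    train_ignn_block(n):      runs n IGNN training steps (solver frozen).
    finetune_solver_block(n): runs n solver fine-tuning steps.
    block_size:      IGNN steps per block; blocks sum to the total budget.
    finetune_ratio:  solver steps per block, roughly 4%-10% of IGNN steps.
    """
    steps_done = 0
    while steps_done < total_ignn_steps:
        n_ignn = min(block_size, total_ignn_steps - steps_done)

        # (ii) Train the IGNN for n_ignn steps, using the frozen solver
        #      only to compute fixed points quickly.
        train_ignn_block(n_ignn)

        # (iii) Fine-tune the solver so it tracks the updated IGNN weights.
        n_solver = max(1, int(finetune_ratio * n_ignn))
        finetune_solver_block(n_solver)

        steps_done += n_ignn
```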
Appendix C More Experiments on IGNN-Solver
C-A Training Dynamics
In this section, we present the time cost of IGNN and IGNN-Solver during training and inference on the Amazon dataset. As shown in Figure 8, vanilla IGNN is relatively fast at the beginning of training. However, as training progresses and the model grows more complex, the time cost of fixed-point computation rises sharply; after 100 epochs it remains persistently high and fluctuates continuously, because the IGNN hits the maximum number of iterations without reaching the preset error tolerance.
Figure 8: Time cost of IGNN and IGNN-Solver during training and inference on the Amazon dataset.
In contrast, our IGNN-Solver incurs significantly lower training and inference time than vanilla IGNN and remains stable throughout the process. This is attributed to the solver's constant memory footprint and its fast fixed-point computation, which together constitute the key advantage of IGNN-Solver.
C-B Long-Range Dependencies (LRD)
To evaluate the capability of GNNs in capturing long-range dependencies within graphs, we follow the methodology presented in [1] by creating a synthetic dataset called Chains. In this dataset, the objective is to classify nodes positioned in a chain of length l. The class information is provided sparsely, appearing only as a feature in one end node of the chain. Our experimental setup consists of small datasets: a training set with 20 nodes, a validation set with 100 nodes, and a test set with 200 nodes. For simplicity, we focus on binary classification. Following the experimental protocol outlined in IGNN, we implement and evaluate four representative baseline models.
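For reference, the following sketch constructs one such chain under our own assumptions about the feature layout (it is not the exact generator used in [1]).

```python
import numpy as np

def make_chain(length, label, num_features=10):
    """Build one chain graph of `length` nodes for binary classification.

    The class information is injected only at one end node (node 0), so a
    model must propagate it along the chain to label the remaining nodes;
    every node in the chain shares the chain's label as its target.
    """
    # Edges connecting consecutive nodes along the chain: i -> i + 1.
    edges = np.array([(i, i + 1) for i in range(length - 1)])

    # Sparse features: only the first node carries the class indicator.
    features = np.zeros((length, num_features), dtype=np.float32)
    features[0, label] = 1.0

    labels = np.full(length, label, dtype=np.int64)
    return edges, features, labels
```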
Figure 9: Performance of IGNN-Solver, IGNN, and baseline models on the synthetic Chains dataset as the chain length increases.
As demonstrated in Figure 9, both IGNN-Solver and IGNN effectively capture long-range dependencies on longer chains, whereas the performance of the graph transformer (GTN) [65] degrades slightly once the chain length grows beyond a certain point. Convolutional GNNs with a fixed number of propagation steps, such as GCN and GAT, struggle to make meaningful predictions as the chain length grows, owing to their limited capacity to capture dependencies beyond two hops. Interestingly, increasing the number of propagation steps in these models does not lead to significant improvement in this case.