Towards Inductive Robustness: Distilling and Fostering Wave-induced Resonance in Transductive GCNs Against Graph Adversarial Attacks
Abstract
Graph neural networks (GNNs) have recently been shown to be vulnerable to adversarial attacks, where slight perturbations in the graph structure can lead to erroneous predictions. However, current robust models for defending against such attacks inherit the transductive limitations of graph convolutional networks (GCNs). As a result, they are constrained by fixed structures and do not naturally generalize to unseen nodes. Here, we discover that transductive GCNs inherently possess a distillable robustness, achieved through a wave-induced resonance process. Based on this, we foster this resonance to facilitate inductive and robust learning. Specifically, we first prove that the signal formed by GCN-driven message passing (MP) is equivalent to the edge-based Laplacian wave, where, within a wave system, resonance can naturally emerge between the signal and its transmitting medium. This resonance provides inherent resistance to malicious perturbations inflicted on the signal system. We then prove that merely three MP iterations within GCNs can induce signal resonance between nodes and edges, manifesting as a coupling between nodes and their distillable surrounding local subgraph. Consequently, we present Graph Resonance-fostering Network (GRN) to foster this resonance via learning node representations from their distilled resonating subgraphs. By capturing the edge-transmitted signals within this subgraph and integrating them with the node signal, GRN embeds these combined signals into the central node’s representation. This node-wise embedding approach allows for generalization to unseen nodes. We validate our theoretical findings with experiments, and demonstrate that GRN generalizes robustness to unseen nodes, whilst maintaining state-of-the-art classification accuracy on perturbed graphs.
Introduction
In recent years, graph neural networks (GNNs), through the capabilities afforded by inductive learning, have emerged as the most potent instruments for node classification tasks. Nevertheless, earlier transductive models, such as graph convolutional networks (GCNs) (Kipf and Welling 2017), have inadvertently introduced vulnerabilities to adversarial attacks within the GNN framework. It has been observed that perturbed graphs derived from GCNs serving as surrogate models have the potential to compromise the outputs of inductive GNNs when transferred. In real-world applications, where trust and accuracy are non-negotiable (Chen et al. 2021; Nadal et al. 2021; Zhao et al. 2022; Berberidis and Giannakis 2019; Xiao, Chen, and Shi 2019), such vulnerabilities can significantly jeopardize public trust (Kreps and Kriner 2020), distort human decision-making (Walt, Jack, and Christof 2019), and threaten human well-being (Samuel et al. 2019). Addressing the vulnerabilities introduced by transductive GCNs into the GNNs’ community is of paramount importance.
Distinct from discrete feature data like images or text, graph data comprises a set of features connected through its topological structure. This interconnectedness naturally encourages the adoption of a global input-output mechanism to establish a learning channel from features to labels, a paradigm referred to as transductive learning (Kipf and Welling 2017; Defferrard, Bresson, and Vandergheynst 2016; Bruna et al. 2013), with GCNs epitomizing this approach. This very transductive nature of GCNs offers adversaries an ideal environment for launching attacks (Liu et al. 2022). Leveraging this global input-output pattern, given sufficient computation, adversaries can invariably devise perturbations that are both concealed and effective (Sun et al. 2022). Given that adversaries exploit vulnerabilities inherent to transductive models to compromise the GNN community, the formulation of a more robust transductive model has become the prevailing defensive approach.
To defend against adversarial attacks, early research predominantly sought to fortify GCNs' tolerance to perturbations through adversarial training with random edge drops (Dai et al. 2018). Recently, a shift towards self-supervised training methods has been observed. These techniques sidestep the trap set by adversaries, who bait the model into misclassifying specific inputs. Instead of singularly focusing on enhancing the model's robustness within a given label space, they aim to expand the GCN's overall robustness to potential perturbed graphs. Key representatives of these research endeavors include: (1) RGCN (Zhu et al. 2019) employs Gaussian distributions in place of the node hidden representations in each GCN layer, aiming to mitigate the impact of adversarial modifications. (2) GCN-SVD (Entezari et al. 2020) introduces a singular value decomposition (SVD) filter before the GCN processing, designed to discard adversarial edges from the training data. (3) STABLE (Li et al. 2022) enhances GCN's forward propagation by incorporating functions that sporadically reinstate edges which were previously removed. (4) EGNN (Liu et al. 2021) leverages graph smoothing techniques to confine the perturbation space, effectively excluding the majority of non-smooth perturbations.
However, current research, aiming to improve GCN-based models into robust transductive variants against attacks, inadvertently carries over the weaknesses introduced by transduction (Hamilton, Ying, and Leskovec 2017). Specifically, these models cannot handle unseen nodes and are limited to fixed structures, lacking generalization. This restricts their applicability: if adversaries slightly adjust their tactics, defenders must retrain their models to stay safe. The root cause is that GCNs' vulnerabilities are inherent. To enhance their robustness, these vulnerabilities require targeted solutions; deviating from the context of GCNs would hinder a thorough analysis of the attack mechanisms and, in turn, obstruct the transition from transductive robust models to inductive ones. Until we harness GCNs' inherent robustness for inductive models, we will be stuck in a cycle of constantly refining transductive ones to address vulnerabilities.
In addressing this conundrum, our exploration unveils an intriguing intrinsic source of robustness within the GCN itself. Without resorting to additional designs, merely deepening the standard 2-layer GCN to a 3-layer structure endows it with an innate (albeit partial) robustness. Importantly, the mechanism underpinning this robustness can be distilled. Purposefully fostering this intrinsic mechanism paves the way for us to architect a robust inductive model. This approach serves a dual purpose: on one hand, it facilitates a precision-oriented confrontation against the perturbations devised specifically by adversaries for transductive structures, ensuring the efficacy of our defensive strategies; on the other hand, it enables us to integrate this robustness mechanism into inductive frameworks, thereby achieving a seamless melding of inductiveness and robustness.
Specifically, we demonstrate that the vibrations of node signals within GCN-driven message passing (MP) are equivalent to edge-based waves, formulated by wave equations (Friedman and Tillich 2004; Shatah and Struwe 1993). Given this equivalence, it follows that GCNs inherently possess the potential for resonance (Kovalyov 1989), allowing them to harness the natural advantages of waves in defending against perturbations (Blas and Jenkins 2022). Then, we introduce a mathematical definition for the intensity of such resonance in GCNs. This definition, which outlines the scope and weights of a node's connections to its neighbors, simultaneously adheres to four principles: universality, adaptation via MP, node-independence, and topological correlation. Subsequently, we demonstrate that for GCNs with 3 or more layers, an invariant mapping exists that translates the GCNs' outputs into resonance intensity, manifesting as nodes capturing their surrounding local weighted structure.
Informed by these insights, we introduce the Graph Resonance-fostering Network (GRN) for inductive learning. The core of GRN is that it distills the structure resonating with each node as a local resonance subgraph. Within this subgraph, GRN fosters the resonance by embedding both the node's signals and the signals transmitted through edges into the central node's representation. This embedding approach is generalizable across graph structures: if the surrounding topology of a node (including unseen ones) can be clearly determined to distill the local resonance subgraph, robust and inductive graph learning is achieved. Our contributions are:
• We propose the first inductive and robust GNN.
• We prove that a 3-layer GCN inherently possesses a distillable robustness.
• We prove the equivalence between GCN-driven signal vibrations and edge-based waves.
Preliminaries
Notations
We consider connected graphs consisting of nodes. Let be the adjacency matrix. Let the generic symbol denote the Laplacian in its broadest sense. The feature and one-hot label matrices are and , respectively. The edge connecting nodes and is written as or . The neighborhood of a node consists of all nodes for which . Let be the degree of node . The feature vector and one-hot label of node are and , respectively.
GCN
Under the topology , with as the input, the output of the -th layer of a GCN is denoted as . The -th parameter matrix of is , and denotes the features in the -th MP.
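For concreteness, the following sketch illustrates one GCN-driven MP step of the standard form (symmetric normalization followed by a nonlinearity). The toy graph, variable names, and dimensions are illustrative assumptions and are not part of our formulation.

```python
import numpy as np

def gcn_layer(adj, feats, weight, add_self_loops=True):
    """One GCN-driven MP step: ReLU(D^{-1/2} (A + I) D^{-1/2} X W)."""
    a = adj + np.eye(adj.shape[0]) if add_self_loops else adj
    deg = a.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    a_norm = d_inv_sqrt @ a @ d_inv_sqrt
    return np.maximum(a_norm @ feats @ weight, 0.0)  # ReLU activation

# Toy example: a 4-node path graph with random features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 8))
W1, W2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 7))
H1 = gcn_layer(A, X, W1)   # 1st MP
H2 = gcn_layer(A, H1, W2)  # 2nd MP
```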
Wave Equation
The edge-based wave equation relates a graph-based signal to its topological structure. Let be a constant; the equation is then defined as in (Friedman and Tillich 2004). Herein, can be instantiated as any discernible signal.
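To build intuition for signals propagating as waves over a graph, the sketch below simulates a leapfrog time discretization of a graph wave equation. It is only a qualitative stand-in: it uses the combinatorial Laplacian L = D - A, whereas the edge-based Laplacian of Friedman and Tillich (2004) is defined on edges; the step count and wave speed are arbitrary illustrative choices.

```python
import numpy as np

def simulate_graph_wave(adj, u0, c=0.2, steps=50):
    """Leapfrog discretization of d^2 u / dt^2 = -c^2 L u on a graph.

    Uses the combinatorial Laplacian L = D - A as a stand-in for the
    edge-based Laplacian; u0 is the initial node signal (zero velocity).
    """
    lap = np.diag(adj.sum(axis=1)) - adj
    u_prev, u = u0.copy(), u0.copy()
    for _ in range(steps):
        u_next = 2 * u - u_prev - (c ** 2) * (lap @ u)
        u_prev, u = u, u_next
    return u
```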
3-layer GCN Possesses Adversarial Robustness via Wave-induced Resonance
Equivalence of GCN-driven MP and Wave Equation
Here we demonstrate that the signal vibrations driven by GCNs are equivalent to waves on graph topologies and can be characterized by nonlinear wave equations.
Theorem 1.
Let and denote the signals of the -th MP and the -th wave, respectively, under the topological structure represented by . It is established that for the given and , there exists satisfying , where are the Laplacian matrices of all attribute graphs.
This study draws an analogy between the node signals in GCN-driven MP and waves, treating edges as the transmission medium. Research indicates that in systems producing waves, resonance can arise between waves and their medium (Ahmad et al. 2023; Bykov, Bezus, and Doskolovich 2019). Building on this understanding, we can reinterpret our empirical observations about the GCN training pattern: in non-adversarial contexts, GCNs converge during training to the most natural signal MP paradigm, i.e., the one congruent with the ground truth. Under this premise, the messages transmitted by nodes and edges in the graph manifest a natural coupling state, maintaining a benign mapping relationship . The key to optimizing is the resonance between node signals and edge signals . In adversarial situations, adversaries manipulate node signals by rewiring edges, which induces unnatural MP patterns, i.e., ones incongruent with the ground truth. Under this scheme, the benign resonance is disrupted, resulting in a malignant mapping relationship , where and are the perturbed graph and labels, respectively.
Mathematical Definition of Resonance in GCN
Maintaining benign resonance becomes an intuitive defensive mechanism as it intrinsically resists unnatural perturbations. To actualize control over this resonance, thereby purposefully fostering resonance within the GCN, we subsequently delineate a detailed definition of this resonance. Thus, this definition should comply with the following conditions: 1) Every node within a graph should possess a computable resonance intensity, 2) the resonance intensity of all nodes should evolve in accordance with MP, 3) each node should maintain an independent resonance intensity, and 4) the stronger a node’s connection to its surrounding topology, the greater its perceived resonance intensity.
To devise a methodology compliant with the desired conditions, we consider node and utilize its latent representation (Kula 2015) to quantify the intensity of the node signals. Furthermore, we use , the count of edges among nodes in , to measure the connectivity strength specific to the edges at the given node. Then, we use the total number of walks of length 2 originating from to any node in to quantify the magnitude of the connectivity density that exhibits in the structure.
Accordingly, we propose the following definition to quantify the resonance intensity at node :
Definition 1.
The resonance intensity of on -th MP is
(1)
The uniqueness of this definition of resonance intensity can be articulated as follows: it not only allows for an interpretable quantification of the resonance at different nodes, but is also directly observable within MP. This implies that, under such a definition, the resonance intensity of any node at any given MP epoch on a graph can be computed independently, obviating the need for the GCN computational paradigm.
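The two structural quantities referenced above (the edges among a node's first-order neighbors and the length-2 walks from the node into its neighborhood) can be read directly from the adjacency matrix, as in the sketch below. How these quantities are weighted and combined with the latent signal is given by Equation (1) and is not reproduced here; the function and variable names are illustrative.

```python
import numpy as np

def structural_terms(adj, v):
    """Structural quantities entering Definition 1 for node v.

    edges_in_nbhd : number of edges among v's first-order neighbours
                    (equivalently, triangles containing v).
    walks_len2    : number of length-2 walks from v to its neighbours.
    """
    nbrs = np.flatnonzero(adj[v])
    sub = adj[np.ix_(nbrs, nbrs)]
    edges_in_nbhd = int(sub.sum() // 2)   # each undirected edge counted twice
    a2 = adj @ adj                        # (A^2)[i, j] = #length-2 walks i -> j
    walks_len2 = int(a2[v, nbrs].sum())
    return edges_in_nbhd, walks_len2
```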
Resonance arises in 3rd MP
Definition 1 facilitates the quantification of resonance for any signal function on any graph, irrespective of whether or not it is driven by a GCN. Nonetheless, an intriguing finding has been proven: the wave system constructed by the GCN inherently and involuntarily gives rise to resonance, as outlined in the following theorem:
Theorem 2.
Let be the latent signal formed by GCN-driven MP, we have:
(2)
Theorem 2 unveils an intriguing phenomenon: for , there subsists an invariant mapping that transposes the GCN-driven signal into a resonance intensity bearing no correlation with the GCN paradigm. Given that Definition 1 has established the resonance intensity as a measure of the coupling strength between nodes and structure within the graph, we can characterize it as the degree of coupling. Consequently, it can be asserted that prior to the 3rd MP iteration, the GCN appears not yet to have delved into the coupling paradigm between nodes and structure within the graph. Subsequent to the 3rd MP, however, due to the persistent presence of the invariant mapping, it can be construed that the GCN has fortuitously assimilated the coupling paradigm within the graph during the 3rd MP, and perpetuates this paradigm into subsequent MPs.
Vast Perturbation Search Space of 3-layer GCN
In light of the current absence of an effective method for quantifying the combined adversarial robustness of a specific graph and a GCN learning from said graph, we propose an intuitive approach. For a graph , comprised of edges and represented by the adjacency matrix , and a GCN with layers, where the perturbation budget is denoted as , the number of matrix multiplication-based forward propagations required by the attack model can be construed as the highest attack cost. In this context, the number of subgraphs is independent of node features, hence we employ as the independent variable for the attack cost function, denoted as . We then present the following theorem:
Theorem 3.
For any specified graph with a node set and an adjacency matrix , in conjunction with a -layer GCN, and a maximum perturbation , the following holds:
(3)
where denotes the number of combinations.
It is revealed that adversaries face the same computational cost for matrix multiplication-based forward propagations when = 1 or 2. However, for , the cost increases dramatically, largely due to . As an example, with the Cora dataset () and a 1% perturbation rate (), the cost for = 3 becomes exponentially larger. Thus, attacking a 3-layer GCN presents a vast search space for adversaries. This insight extends Theorem 2's real-world applications and our previous findings: a 3-layer GCN can naturally create resonance-based robustness. With our defined resonance, we can further boost this robustness purposefully.
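The combinatorial blow-up behind Theorem 3 can be illustrated numerically. The snippet below only shows how fast the number of ways to place a fixed perturbation budget grows with the size of the candidate edge pool; the Cora statistics and the pool sizes are assumptions chosen for illustration and do not reproduce the exact bound of Equation (3).

```python
from math import comb

# Cora-scale illustration: n nodes, m edges, and a ~1% edge-perturbation budget.
n, m = 2708, 5429          # assumed Cora statistics
beta = max(1, m // 100)    # perturbation budget (about 1% of the edges)

# Number of ways to pick beta perturbed edges from a candidate pool of size p.
for pool in (m, 10 * m, m * (n - 2)):   # pool sizes chosen purely for illustration
    digits = len(str(comb(pool, beta)))
    print(f"pool={pool:>9,d}  C(pool, {beta}) has {digits} decimal digits")
```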
Graph Resonance-fostering Network
Principle Overview
We employ GRN to enhance the resonance of the GCN. The underlying concept of the GRN is articulated as follows. Definition 1 shows that for a node , there exists a local graph structure that resonates with it, termed the local resonance subgraph (LRS) of node and denoted as , which represents the maximal subgraph structure that node can capture. During end-to-end training, both the node signals and the signals transmitted through edges vibrate concurrently within the LRS. Consequently, for , if a learnable parameter is capable of jointly embedding the MP's result , and , into 's output representation, this aggregation intentionally accomplishes a learnable resonance, generating a local-level embedding. This identical aggregation pattern is applied across all nodes to facilitate a mapping, thereby achieving a global-level forward propagation within the GRN.
In summary, a single forward propagation of the GRN is:
(4)
Next, we provide explicit definitions for and .
: Local Resonance Subgraph
As per Definition 1, the LRS comprises three components: 1) edges formed amongst all first-order neighbors, as counted by , with edge weights equal to 1; 2) edges formed between the node and all 2nd-order (inevitably including 1st-order) neighbors, as counted by , with edge weights equal to 2; and 3) edges between the node and all first-order neighbors, as counted by , with edge weights equal to 8. Consequently, the LRS can be viewed as a weighted graph, in which the edge weights serve as attention for the joint combination of and . An illustrative example of the LRS is presented in Figure 1.
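A minimal sketch of one plausible construction of the LRS from the three components above is given below. Whether overlapping contributions are summed or overridden is a modelling choice that Figure 1 resolves; summation is assumed here, and all names are illustrative.

```python
import numpy as np

def local_resonance_subgraph(adj, v):
    """Weighted LRS of node v, one plausible reading of the three components:
    weight 1 on edges among v's first-order neighbours,
    weight 2 on edges from v to its 1st/2nd-order neighbours,
    weight 8 on edges from v to its first-order neighbours.
    Overlapping contributions are summed (a modelling assumption)."""
    n = adj.shape[0]
    w = np.zeros((n, n))
    nbrs = np.flatnonzero(adj[v])
    # 1) edges among first-order neighbours
    for i in nbrs:
        for j in nbrs:
            if i < j and adj[i, j]:
                w[i, j] += 1; w[j, i] += 1
    # 2) edges from v to every node reachable within two hops
    two_hop = np.flatnonzero((adj + adj @ adj)[v])
    for u in two_hop:
        if u != v:
            w[v, u] += 2; w[u, v] += 2
    # 3) edges from v to its first-order neighbours
    for u in nbrs:
        w[v, u] += 8; w[u, v] += 8
    return w
```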
: Edge-transmitted Signals
In the MP driven by adjacency matrices, only signals at the nodes are observable, while the signals transmitted across each edge remain imperceptible. To ascertain the quantified signals on every specific edge within , we first obtain the global edge-transmitted signals . Then, is subsequently derived through a sampling procedure on using the edge indices within .
Specifically, within , the edge-transmitted signals on are denoted as . For , is defined via a sequential procedure: 1) The edge in is deleted, producing a new graph with its adjacency matrix . 2) A new forward propagation is executed in the same layer on , obtaining a feature matrix . This matrix does not contain any messages transmitted through the edge . Consequently,
(5)
The feature of node in denoted as , is obtained. 3) In , there is no edge between the nodes . Hence, the feature transmitted from node to (i.e., ) is calculated by subtracting the feature obtained through the re-propagation on (i.e., ) from the original feature (i.e., ). Similarly, the signal transmitted through the pair could be interpreted as the average of the mutually transmitted signals. At , since MP has not been initiated, would ideally be 0. For end-to-end training, it is defined as a random infinitesimal value. In conclusion, is determined as
(6)
Figure 2 illustrates the computation of .
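The delete-and-re-propagate procedure can be sketched as follows. Here gcn_layer stands for the propagation rule of the corresponding layer (e.g. the earlier sketch), the averaging of the two directions follows the interpretation given above, and degree renormalization after the edge removal is not re-derived; all names are illustrative.

```python
import numpy as np

def edge_transmitted_signal(adj, h_prev, weight, u, v, gcn_layer):
    """Signal transmitted over edge (u, v) at one layer.

    h_prev holds the previous layer's node representations; gcn_layer is the
    propagation rule of that layer.
    """
    h = gcn_layer(adj, h_prev, weight)             # propagation with the edge
    adj_cut = adj.copy()
    adj_cut[u, v] = adj_cut[v, u] = 0.0            # delete edge (u, v)
    h_cut = gcn_layer(adj_cut, h_prev, weight)     # re-propagation without it
    # The message u -> v is what v loses when the edge is removed (and vice
    # versa); the edge signal is taken as the average of the two directions.
    return 0.5 * ((h[v] - h_cut[v]) + (h[u] - h_cut[u]))
```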
Simplifying the Computational Overhead of
Equation (5) explicates the method of re-propagation on . Given that there are ways to choose , it necessitates the computation of matrix multiplications (where remains the same for all selections and can be considered a constant matrix), thereby constituting the primary computational cost of . Here, we provide a computational method equivalent to Equation (5), reducing this number of multiplications to one.
Proposition 1.
By indexing and rearranging by rows and to obtain a matrix ,
(7)
Evidently, a single matrix multiplication, i.e., , is sufficient to iterate over all and yield the results.
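A sketch in the spirit of this shortcut is given below: the product of the features and the weight matrix is formed once, and each edge's correction is then assembled from two of its rows. Degree renormalization after removing the edge is ignored, as in the proof of Proposition 1, and the helper names are illustrative.

```python
import numpy as np

def all_edge_corrections(adj, feats, weight, edges):
    """Compute M = feats @ weight once and assemble each edge's
    re-propagation correction from two rows of M."""
    base = adj @ feats @ weight   # single matrix-multiplication pass
    m = feats @ weight            # shared, precomputed once
    corrections = {}
    for (u, v) in edges:
        # Removing edge (u, v) changes only rows u and v of A @ (feats @ weight):
        # row u loses m[v], row v loses m[u].
        corrections[(u, v)] = (base[u] - m[v], base[v] - m[u])
    return corrections
```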
Learning the Parameters
Each layer of GRN contains only the trainable parameters , and each has a distinct output representation . Thus, in accordance with the requirements of the downstream task, GRN can accommodate either supervised or unsupervised loss functions, thereby tuning its weight matrices. Specifically, we denote the discrepancy function as . In semi-supervised scenarios, the loss function is ; in unsupervised scenarios (Müller 2023), it is . Depending on the downstream application, can take various forms, such as cross-entropy. The general workflow of GRN is illustrated in Figure 3.
Experiments
Datasets
Baselines
Comparison defending models. We compare GRN with other defending models, including: 1) RGCN, which leverages Gaussian distributions for node representations to amortize the effects of adversarial attacks; 2) GNN-SVD, which is applied to a low-rank approximation of the adjacency matrix obtained by truncated SVD; 3) Pro-GNN (Jin et al. 2020), which learns a robust GNN from the intrinsic properties of nodes; 4) Jaccard (Wu et al. 2019), which defends against attacks based on the measured Jaccard similarity score; 5) EGNN (Liu et al. 2021), which filters out perturbations by - and -based graph smoothing.

Attack methods. The experiments are designed under the following attack strategies: 1) Metattack (Zügner and Günnemann 2018), a meta-learning based attack; 2) CLGA (Sixiao et al. 2022), an unsupervised attack; 3) RL-S2V (Dai et al. 2018), a reinforcement learning based attack.
Pinpointing the Layer of Resonance
In Theorem 2, we establish an equivalence relation between the -th and the -th layer's output latent representations, as derived from Equations (1) and (2). This elucidates that when , the -th layer involuntarily captures local structures, thereby inducing resonance. To facilitate experimental variable control, we first examine the equivalence relation under varying "gap layer numbers" (denoted as ). If the equivalence between Equations (1) and (2) holds only when , it substantiates the validity of Theorem 2. Specifically, we first train a 5-layer GCN, then obtain the resonance intensity, denoted as , and the actual observed signal, denoted as , for each epoch. Given these observational variables, we record their evolution over the learning process using lists and , respectively; each list chronicles its corresponding variable's fluctuations across all epochs. Subsequently, we standardize (using the standardization function ) the sequences under different and calculate the absolute difference to obtain a difference sequence:
(8)
The parameter , serving as an indicator variable, accurately encapsulates the discrepancy between and . The experimental results are illustrated in Figure 4. Owing to the large number of nodes, we display the mean value (central line) and standard deviation (shaded areas) over all nodes. As epochs progress, gradually converges to zero. After the initial several epochs, it significantly diverges from and . This validates the intriguing phenomenon mentioned in Theorem 2: a correlation is established between the signal at the 3rd layer and the graph's initial signal and structure. Subsequent experimental results echo the aforementioned findings, thereby affirming the correctness of Theorem 2 when .
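The computation of the difference sequence in Equation (8) amounts to the following per-node operation; the helper names are illustrative.

```python
import numpy as np

def standardize(x):
    """Zero-mean, unit-variance standardization of an epoch-wise sequence."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / (x.std() + 1e-12)

def difference_sequence(observed, resonance):
    """Per-epoch discrepancy between the observed layer signal and the
    resonance intensity, after standardizing both sequences."""
    return np.abs(standardize(observed) - standardize(resonance))
```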
Cliff-like Decline of Attack Success Rate on the 3-layer GCN
Intuitively, the complexity of an attack tends to increase with the number of GCN layers. Observing how the attack success rate (ASR) declines as the number of GCN layers increases helps validate our claim that the 3-layer GCN, by virtue of resonance, can significantly enhance robustness. Specifically, we start by initializing 10 GCNs with the number of layers ranging from 1 to 10. Next, we conduct experiments on 4 datasets using 3 typical attacks, setting the perturbation rate uniformly at 20%. We then train a surrogate model for each GCN separately, place perturbations in the dataset, and clear these perturbations after each attack. We repeat each attack five times and report the average ASR (depicted by the lines) and variation range (represented by the shaded areas). The results, shown in Figure 6, clearly reveal a steep drop in ASR at the 3-layer GCN. However, adding further layers does not significantly reduce the ASR, as the additional layers maintain the same resonance pattern as the 3-layer GCN to achieve adversarial robustness. These findings motivate the concept of distilling the resonance from GCNs and fostering it to design an inductive approach.
How Robust Are LRS-constructed Graphs?
Derived from a transductive model, the LRS captures distinct resonance regions and transforms a localized, unweighted graph (also perceivable as a graph with unitary weights) into a weighted one. The implementation of the LRS within the GRN enables the demarcation of a learnable resonance scope for inductive learning. Consequently, it becomes essential to validate the efficacy of the LRS through its embedding precision within the transductive model.
We begin by presenting results obtained under non-adversarial conditions. We sum the LRS of all nodes in and apply min-max scaling to all weights, thus creating a global weighted graph . We then train a standard 2-layer GCN on the Cora dataset (denoted as ) and visualize its well-trained representations. Next, we feed into to generate its visualization. Lastly, for comparison, we create a random-weighted graph whose edge distribution is the same as , and input it into to obtain the corresponding visualization. As Figure 5(a) shows, under identical weights, the representations of different categories in are tighter than those in . This suggests that introducing the LRS brings beneficial global weights, which enhance the model's performance in non-adversarial scenarios.
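The construction of the global weighted graph can be sketched as follows, reusing the LRS sketch above. Applying min-max scaling over the non-zero weights is an assumption about the exact scaling domain; the names are illustrative.

```python
import numpy as np

def global_weighted_graph(adj, lrs_fn):
    """Sum every node's LRS (e.g. local_resonance_subgraph above) and
    min-max scale the resulting non-zero weights onto [0, 1]."""
    n = adj.shape[0]
    w = np.zeros((n, n))
    for v in range(n):
        w += lrs_fn(adj, v)
    nz = w > 0
    if nz.any():
        w[nz] = (w[nz] - w[nz].min()) / (w[nz].max() - w[nz].min() + 1e-12)
    return w
```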
Then, we explored the similarity between and . Using strength distribution, akin to degree distribution for unweighted graphs (Zügner, Akbarnejad, and Günnemann 2018), we found the two weighted graphs are notably similar. Figure 5(b) confirms this, showing a stark difference from . Therefore, maintains the traits of .
We next assessed the classification accuracy of under adversarial attacks using varying Metattack perturbation rates . Figure 5(c) shows that as increases, ’s accuracy consistently edges out . This suggests that the LRS introduces a resonance in the transductive model, marginally boosting its adversarial robustness.
Generalizable Robustness of GRN
| Dataset | Ptb. rate (%) | RGCN | GNN-SVD | Pro-GNN | Jaccard | EGNN | GRN (seen 20%) | GRN (seen 40%) | GRN (seen 60%) |
|---|---|---|---|---|---|---|---|---|---|
| Cora | 0 | 83.49±0.57 | 81.14±0.79 | 85.01±0.40 | 81.74±0.36 | 85.00±0.40 | 83.74±1.68 | 86.79±2.27 | 87.75±0.93 |
| | 5 | 77.20±0.47 | 78.29±0.63 | 80.10±0.22 | 80.56±1.30 | 82.24±0.49 | 81.48±0.83 | 86.04±3.15 | 86.24±1.54 |
| | 10 | 72.65±0.40 | 70.81±1.77 | 74.45±0.28 | 75.07±1.28 | 76.38±0.35 | 79.51±1.62 | 81.38±2.58 | 82.03±1.48 |
| | 20 | 59.31±0.27 | 56.67±1.22 | 64.68±0.75 | 73.54±0.94 | 69.82±0.71 | 73.79±1.91 | 74.14±1.93 | 74.81±1.52 |
| Citeseer | 0 | 71.81±0.71 | 70.42±0.39 | 74.94±0.40 | 73.82±0.56 | 74.92±0.66 | 75.69±0.69 | 78.19±1.30 | 83.26±0.73 |
| | 5 | 71.22±0.61 | 68.86±0.47 | 72.45±0.88 | 71.41±0.65 | 73.60±0.45 | 75.40±0.95 | 77.54±1.04 | 81.66±0.70 |
| | 10 | 67.53±0.60 | 68.70±0.89 | 70.16±1.05 | 70.09±0.48 | 73.66±0.37 | 73.81±1.10 | 77.04±1.51 | 80.32±0.49 |
| | 20 | 63.20±1.70 | 57.95±1.48 | 55.84±1.28 | 67.22±1.32 | 65.91±1.20 | 71.06±1.20 | 72.72±1.28 | 77.21±0.64 |
| Pubmed | 0 | 84.57±0.39 | 83.25±0.35 | 84.96±0.08 | 84.87±0.10 | 85.94±0.10 | 80.26±0.43 | 85.51±0.66 | 87.27±0.67 |
| | 5 | 81.25±0.50 | 82.90±0.26 | 83.00±0.10 | 82.32±0.11 | 83.89±0.09 | 77.86±0.35 | 81.92±0.59 | 83.36±0.62 |
| | 10 | 78.96±0.43 | 80.35±0.21 | 80.82±0.20 | 80.77±0.11 | 82.13±0.15 | 76.62±0.55 | 79.29±0.60 | 80.99±0.58 |
| | 20 | 71.33±0.40 | 73.57±0.15 | 74.16±0.16 | 73.41±0.12 | 76.01±0.19 | 74.98±0.64 | 76.87±0.60 | 77.15±0.56 |
| Polblogs | 0 | 94.87±0.19 | 95.08±0.22 | 95.45±0.12 | 95.03±0.57 | 95.70±0.34 | 95.42±0.56 | 94.88±0.43 | 94.97±0.31 |
| | 5 | 73.28±0.18 | 88.86±0.58 | 90.98±0.69 | 90.97±0.61 | 89.97±1.25 | 90.18±0.43 | 91.22±0.38 | 89.37±0.46 |
| | 10 | 70.91±0.37 | 80.38±0.85 | 85.60±1.08 | 85.93±1.39 | 83.66±1.81 | 86.30±0.70 | 85.43±0.68 | 85.07±0.54 |
| | 20 | 57.97±0.41 | 55.33±2.07 | 73.52±0.53 | 70.47±1.27 | 75.87±0.88 | 82.03±0.79 | 81.96±0.72 | 81.56±0.18 |
We assess the adversarial robustness of the 3-layer GRN under generalization demands by comparing its accuracy against other baselines on perturbed graphs. We partition a subset of the dataset as the training set, with proportions (also called the "seen" rate) of 20%, 40%, and 60%. The data within these proportions are deemed "seen" by the GRN, while the remaining data are categorized as "unseen". Utilizing Metattack as our attack approach, we adopt the standard 2-layer GCN as the surrogate model. By adjusting , we derive the corresponding perturbed graphs. Then, we evaluate the classification performance of the baselines on these graphs, placing an emphasis on the accuracy upon model convergence. For each setting, we execute 10 iterations, tabulating both the average outcome and its variability. Table 1 reports the results.
From the data, on both clean and perturbed graphs, GRN with = 60% generally surpasses the baselines in accuracy. There are three exceptions: 1) For the Pubmed dataset at = 10%, this is due to EGNN using graph smoothing to enhance adversarial robustness; in this case, perturbations may be more pronounced in a certain area, and EGNN can leverage this by smoothing concentrated perturbation patterns. However, such incidents are rare, and as increases, GRN's accuracy returns to its peak. 2) With the Polblogs dataset at = 0%, GRN trails EGNN slightly, by 0.28%. Yet, as rises, the decline in GRN's accuracy is the least noticeable among all baselines, ensuring its top position. 3) An intriguing pattern emerging from the Polblogs dataset is the non-proportional relationship between the GRN's and its accuracy. The peculiarity of the Polblogs dataset is that its nodes lack intrinsic features; typically, scholars have used node degrees as proxies for these absent attributes. This substitution causes the inherent attributes of Polblogs to lean towards uniformity, and expanding the training set's scale exacerbates the oversmoothing phenomenon, culminating in diminished accuracy.
Ablation Studies
GRN combines the edge-transmitted signal and the node signal to form node 's representation. We conduct an ablation study to examine this process. First, we embed only , naming the model GRNZ. This resembles a depth-2 GraphSAGE with mean aggregation, indicating potential vulnerability to adversarial attacks. We then examine the combination order of and . The default GRN order is GRNE,Z. We test GRNZ,E (reversed order) and GRNshuf (shuffled rows). Results (Table 2) show that GRNZ underperforms, especially in adversarial settings, emphasizing the importance of co-embedding both signals. The precisions of GRNZ,E, GRNE,Z, and GRNshuf are comparable due to the edge-transmitted signal, which, combined with the node signal through shared parameters, yields consistent performance regardless of order. This suggests that GRN can recognize a latent graph structure, wherein edge-transmitted signals function as latent node signals, contributing to adversarial robustness and insensitivity to signal order.
| Dataset | Ptb. rate (%) | GRNE,Z (standard) | GRNZ,E | GRNshuf | GRNZ |
|---|---|---|---|---|---|
| Cora | 0 | 87.75 | 87.14±0.81 | 87.28±0.92 | 87.74±0.72 |
| | 5 | 86.24 | 86.18±0.75 | 86.34±0.98 | 85.56±0.87 |
| | 10 | 82.03 | 82.81±0.84 | 82.54±1.05 | 79.07±0.90 |
| | 20 | 74.81 | 74.53±0.80 | 74.42±1.36 | 63.54±1.30 |
| Citeseer | 0 | 83.26 | 83.07±0.64 | 83.52±0.92 | 82.23±0.97 |
| | 5 | 81.66 | 81.49±0.71 | 81.45±0.95 | 79.47±1.10 |
| | 10 | 80.32 | 80.10±0.65 | 80.46±1.17 | 74.39±1.44 |
| | 20 | 77.21 | 76.95±1.26 | 76.84±1.53 | 69.05±1.86 |
Conclusions
We addressed critical concerns surrounding the transductive nature of existing robust graph learning models. We began by establishing the equivalence between GCN-driven MP and edge-based waves. Subsequently, we demonstrated that a 3-layer GCN capitalizes on the unique resonance intrinsic to waves to achieve robustness. Delving deeper, we formalized this resonance as a coupling between a node and its surrounding local structure. We then introduced an inductive graph learning model, GRN. Experimental results not only corroborated our theoretical insights but also highlighted the exemplary robustness of the proposed GRN model.
Acknowledgments
This work is supported in part by the National Key Research and Development Program of China (No. 2020YFB1805400), the National Natural Science Foundation of China (No. U19A2068, No. 62372313), and the Sichuan Science and Technology Program (No. 2023YFG0113).
References
- Ahmad et al. (2023) Ahmad, S.; Saifullah, S.; Khan, A.; and Wazwaz, A. M. 2023. Resonance, fusion and fission dynamics of bifurcation solitons and hybrid rogue wave structures of Sawada–Kotera equation. Commun. Nonlinear. Sci. Numer. Simul., 119: 107117.
- Berberidis and Giannakis (2019) Berberidis, D.; and Giannakis, G. B. 2019. Node embedding with adaptive similarities for scalable learning over graphs. IEEE Trans. Knowl. Data Eng., 33(2): 637–650.
- Blas and Jenkins (2022) Blas, D.; and Jenkins, A. C. 2022. Detecting stochastic gravitational waves with binary resonance. Phys. Rev. D, 105(6): 064021.
- Bruna et al. (2013) Bruna, J.; Zaremba, W.; Szlam, A.; and LeCun, Y. 2013. Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203.
- Bykov, Bezus, and Doskolovich (2019) Bykov, D. A.; Bezus, E. A.; and Doskolovich, L. L. 2019. Coupled-wave formalism for bound states in the continuum in guided-mode resonant gratings. Phys. Rev. A, 99(6): 063805.
- Chen et al. (2021) Chen, W.; Feng, F.; Wang, Q.; He, X.; Song, C.; Ling, G.; and Zhang, Y. 2021. Catgcn: Graph convolutional networks with categorical node features. IEEE Trans. Knowl. Data Eng.
- Dai et al. (2018) Dai, H.; Li, H.; Tian, T.; Huang, X.; Wang, L.; Zhu, J.; and Song, L. 2018. Adversarial attack on graph structured data. In Proc. 35th Int. Conf. Mach. Learn., 1115–1124.
- Defferrard, Bresson, and Vandergheynst (2016) Defferrard, M.; Bresson, X.; and Vandergheynst, P. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. In Proc. 30th Adv. Neural Inf. Proces. Syst., volume 29.
- Entezari et al. (2020) Entezari, N.; Al-Sayouri, S. A.; Darvishzadeh, A.; and Papalexakis, E. E. 2020. All you need is low (rank) defending against adversarial attacks on graphs. In Proc. 13th Int. Conf. Web Search Data Min., 169–177.
- Friedman and Tillich (2004) Friedman, J.; and Tillich, J.-P. 2004. Wave equations for graphs and the edge-based Laplacian. Pac. J. Math., 216(2): 229–266.
- Hamilton, Ying, and Leskovec (2017) Hamilton, W.; Ying, Z.; and Leskovec, J. 2017. Inductive representation learning on large graphs. In Proc. 31st Adv. Neural Inf. Proces. Syst.
- Jin et al. (2020) Jin, W.; Ma, Y.; Liu, X.; Tang, X.; Wang, S.; and Tang, J. 2020. Graph structure learning for robust graph neural networks. In 26th ACM SIGKDD Int. Conf. Knowl. Discov. Data Min., 66–74.
- Kipf and Welling (2017) Kipf, T. N.; and Welling, M. 2017. Semi-supervised classification with graph convolutional networks. In Proc. 5th Int. Conf. Learn. Represent.
- Kovalyov (1989) Kovalyov, M. 1989. Resonance-type behaviour in a system of nonlinear wave equations. J. Differ. Equ., 77(1): 73–83.
- Kreps and Kriner (2020) Kreps, S.; and Kriner, D. 2020. Model uncertainty, political contestation, and public trust in science: Evidence from the COVID-19 pandemic. Sci. Adv., 6(43): eabd4563.
- Kula (2015) Kula, M. 2015. Metadata Embeddings for User and Item Cold-start Recommendations. arXiv preprint arXiv:1507.08439.
- Li et al. (2022) Li, K.; Liu, Y.; Ao, X.; Chi, J.; Feng, J.; Yang, H.; and He, Q. 2022. Reliable Representations Make A Stronger Defender: Unsupervised Structure Refinement for Robust GNN. In Proc. 28th ACM SIGKDD Int. Conf. Knowl. Discov. Data Min.
- Liu et al. (2022) Liu, A.; Li, B.; Li, T.; Zhou, P.; and Wang, R. 2022. AN-GCN: An Anonymous Graph Convolutional Network Against Edge-Perturbing Attacks. IEEE Trans. Neural Netw. Learn. Syst.
- Liu et al. (2021) Liu, X.; Jin, W.; Ma, Y.; Li, Y.; Liu, H.; Wang, Y.; Yan, M.; and Tang, J. 2021. Elastic graph neural networks. In Proc. 38th Int. Conf. Mach. Learn., 6837–6849.
- Müller (2023) Müller, E. 2023. Graph clustering with graph neural networks. J. Mach. Learn. Res., 24: 1–21.
- Nadal et al. (2021) Nadal, S.; Abelló, A.; Romero, O.; Vansummeren, S.; and Vassiliadis, P. 2021. Graph-driven federated data management. IEEE Trans. Knowl. Data Eng., 35(1): 509–520.
- Samuel et al. (2019) Samuel, F., G; John, B., D; Joichi, I.; Jonathan, Z., L; Andrew, B., L; and Isaac, K., S. 2019. Adversarial attacks on medical machine learning. Science, 363(6433): 1287–1289.
- Shatah and Struwe (1993) Shatah, J.; and Struwe, M. 1993. Regularity results for nonlinear wave equations. Ann. Math., 138(3): 503–518.
- Sixiao et al. (2022) Sixiao, Z.; Hongxu, C.; Xiangguo, S.; Yicong, L.; and Xu, G. 2022. Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation. In Proc. 31st Int. Conf. World Wide Web.
- Sun et al. (2022) Sun, L.; Dou, Y.; Yang, C.; Zhang, K.; Wang, J.; Philip, S. Y.; He, L.; and Li, B. 2022. Adversarial attack and defense on graph data: A survey. IEEE Trans. Knowl. Data Eng.
- Sun et al. (2020) Sun, Y.; Wang, S.; Tang, X.; Hsieh, T.-Y.; and Honavar, V. 2020. Non-target-specific node injection attacks on graph neural networks: A hierarchical reinforcement learning approach. In Proc. 29th Int. Conf. World Wide Web, volume 3.
- Walt, Jack, and Christof (2019) Walt, W.; Jack, C.; and Christof, T. 2019. Adversarial explanations for understanding image classification decisions and improved neural network robustness. Nat. Mach. Intell., 1: 508–516.
- Wu et al. (2019) Wu, H.; Wang, C.; Tyshetskiy, Y.; Docherty, A.; Lu, K.; and Zhu, L. 2019. Adversarial examples for graph data: deep insights into attack and defense. In Proc. 28th Int. Joint Conf. Artif. Intel.
- Xiao, Chen, and Shi (2019) Xiao, H.; Chen, Y.; and Shi, X. 2019. Knowledge graph embedding based on multi-view clustering framework. IEEE Trans. Knowl. Data Eng., 33(2): 585–596.
- Zhao et al. (2022) Zhao, Y.; Zhou, H.; Zhang, A.; Xie, R.; Li, Q.; and Zhuang, F. 2022. Connecting embeddings based on multiplex relational graph attention networks for knowledge graph entity typing. IEEE Trans. Knowl. Data Eng.
- Zhu et al. (2019) Zhu, D.; Zhang, Z.; Cui, P.; and Zhu, W. 2019. Robust graph convolutional networks against adversarial attacks. In Proc. 25th ACM SIGKDD Int. Conf. Knowl. Discov. Data Min., 1399–1407.
- Zügner, Akbarnejad, and Günnemann (2018) Zügner, D.; Akbarnejad, A.; and Günnemann, S. 2018. Adversarial attacks on neural networks for graph data. In Proc. 24th ACM SIGKDD Int. Conf. Knowl. Discov. Data Min., 2847–2856.
- Zügner and Günnemann (2018) Zügner, D.; and Günnemann, S. 2018. Adversarial Attacks on Graph Neural Networks via Meta Learning. In Proc. 6th Int. Conf. Learn. Represent.
Proofs
Proof of Theorem 1
Node signals vibrate from high to low dimensions during MP, which is implemented through embedding. To ensure the uniformity of node signal dimensions during the MP process, we introduce an invertible transfer function, denoted as , to standardize the dimensions of and . This function can extend the process of the -th MP to , where . This process can be interpreted as all embeddings induced by MP being completed in , i.e., embedding into dimensions at the -th MP, with the subsequent dimensions collapsing to zero. With this approach, MP can be viewed as signal propagation in a space of equal dimensions.
The edge-based Laplacian wave equation, also introduced in the main text, is:
(9) |
where is a constant. In this context, we instantiate as the signal . Then we define as a signal amplitude matrix:
Definition 2.
The amplitude in the -th MP, , represents the transmission amplitude of signals in different dimensions. It is a variable matrix that defines the adjustable parameters of the graph-based function at different times, and it performs an equal-dimensional transformation of the signal through right multiplication.
The initial speed of the first-hop MP can thus be denoted as . Consequently, we derive that . Hence, the element-wise fluctuation speed [1] of the signal , denoted as , can be described, and it has the relationship
(10) |
We regard as a sampling within the continuous signal with a minimal step , the wave described by can thus be formalized as
(11) |
One may write
(12) |
and this discrete representation iterating with a step size of can be written in a continuous expression:
(13) |
In order to establish a direct correlation between and , instead of via a chain-like propagation through , we introduce the Moore-Penrose (MP) pseudoinverse [2] to simplify Equation (Proof of Theorem 1). Specifically, we introduce a dimension matrix , which is associated with , calculated by
(14) |
Then, a direct connection between and can be obtained as
(15) |
which delineates the general term formula for equal-dimensional waves . Given that , the proposition henceforth is that, if one can establish the validity of , signifying that a exists that embeds the trailing columns of into minimal-value matrices (denoted as ), then an inverse mapping can be constructed that allows the transition from equal-dimensional waves to unequal-dimensional waves . To address this problem, we split into two segments: a left matrix and a right matrix . Given that, we have . Since , it thus holds that is of full column rank. Consequently, for any arbitrary real matrix , there exists such that , while holds. Therefore, there exists a matrix
(16) |
such that , thereby confirming the validity of . Hence, Equation (15) can be expressed inversely as
(17) |
Taking into account the dimension of variation under the wave, and the independent amplitude correlation when the signal fluctuates once, we apply rank- truncated singular value decomposition (-TSVD) [3] to to approximately obtain amplitude-correlated matrices. Given , -TSVD decomposes as , where , , and are matrices whose columns are the first left singular vectors, the first singular values, and the first right singular vectors of the , respectively. We denote: 1) ; 2) , and as the same of the above definitions; 3) . Since TSVD can decompose any real-valued matrix for any valid , we apply continuous -TSVD to , yielding:
(18) |
Since TSVD provides an approximate equality with introducing a minimum error term:
(19) |
we can summarize Equation (Proof of Theorem 1), decomposing into a form of continuous multiplication, to formalize the amplitude matrices introduced by iterations of MP:
(20) |
Substituting Equation (20) into Equation (17), we obtain:
(21) |
Now, to endow the fluctuation equation represented by Equation (21) with the capability of backpropagation, we introduce a nonlinear activation function . Subsequently, we instantiate as trainable matrices. As expounded in the preceding text, the dimensions of these matrices merely need to satisfy the condition of input-output decrement, and thus these dimensions can be set arbitrarily. We denote X as the -th trainable matrix; given a decrement dimension , the aforementioned matrices can be instantiated as . In summary, by making Equation (21) trainable, we obtain the following form of a trainable wave equation:
(22) |
We note that Equation (22) is the trainable form of the edge-based Laplacian wave equation (Equation (9)), and it is also the forward propagation form of the GCN. Through the derivation process we have provided, the equivalence between the GCN and the wave equation is thus substantiated.
Proof of Theorem 2
We first give the vectorized representation of the resonance intensity. For each pair of nodes in , if there is an edge between and , then this edge is counted in . This is indicated by . Therefore, equals the sum of over all node pairs in . Then, can be obtained in three steps:
• First, the number of edges on all node pairs of (not just pairs of neighbors of node ) is .
• Then, to ensure that , we need to include the terms and in the above summation. This is because if and only if and are both neighbors of node , i.e., if and only if , and form a triangle.
• Finally, counts the number of triangles that include node . This gives that
Then, we have
(23) |
Hence, according to Definition 1, one may write
(24) |
We denote by the matrix whose entries are all equal to . We utilize right multiplication by for row-wise summation of the matrix. Subsequently, we augment to its vectorized representation:
(25) |
Here, represents the expansion of into a square matrix by padding zeros on the right side of . Moving forward, we incorporate the Sigmoid function and elucidate its role through the matrix notation
(26) |
Then we stipulate the minimum error function as
(27) |
and denote the following minimum error terms , , and . Utilizing the aforementioned notation, we can deduce
(28) |
For Equation (Proof of Theorem 2), we extract (there are countless ways to perform this extraction; this is because right multiplication by gives , indicating that the sum of each row of an matrix is . Here, we consider the case where all elements of this matrix take the average value ), thereby deriving Equation (Proof of Theorem 2) into:
(29) |
The matrix-based Taylor series expansion [4] for is
(30) |
Since , , , and , by substituting them into Equation (30), we have
(31) |
Hence, Equation (Proof of Theorem 2), through algebraic manipulation, can be transformed into the form of a Taylor series expansion, as delineated by Equation (Proof of Theorem 2). In the ensuing steps, we transform the discrete constituents within Equation (Proof of Theorem 2) to continue its derivation:
(32) |
Substituting Equation (Proof of Theorem 2) into Equation (Proof of Theorem 2), we can ultimately derive Equation (Proof of Theorem 2) into
(33) |
where denotes the operation of summing the elements of each row of the input matrix. For , irrespective of left multiplication by any matrix or application of an element-wise activation function, its rightmost columns remain zero. As a result, when right-multiplied by (which effectively sums each row), the outcomes for and are identical, because the zeroed columns in do not contribute to the row sum. Consequently, even when is reverted back to , Equation (Proof of Theorem 2) still holds. It is then apparent that the yield of Equation (Proof of Theorem 2) corresponds to a 3rd MP scheme driven by the GCN — wherein the GCN variant deployed abstains from considering self-loops, thereby using the matrix in lieu of — and integrates the sigmoid function as its activation. Therefore, we can write . Consequently, by incorporating Equation (Proof of Theorem 2) into Equations (Proof of Theorem 2) and (25), and neglecting the error magnitudes , we derive the following expression
(34) |
Similarly, we have
(35) |
Hence, we can write
(36) |
By transposing Equation (Proof of Theorem 2) into a scalar representation (i.e., ), we complete the derivation of Equation (2).
Proof of Theorem 3
Adversaries employ the following methodology to attack node : Initially, a target label is defined, wherein and differ only in the label values of the target nodes (regardless of whether there is a single target or multiple targets), which are altered to the target labels, while the labels of all other nodes remain unchanged. Subsequently, by modifying the graph structure, specifically , under a budget constraint , a new graph with as its adjacency matrix is obtained. The objective is to maximize the similarity between the output and (i.e., minimize the loss between them) after forward propagation by .
Ultimately, as the primary goal is to manipulate the one-hot output [5], i.e., the magnitude relationship in the -th row of , an activation function is added only in the final layer to compute the loss function with the label matrix [6]. Consequently, the optimization target of the adversaries can be expressed as:
(37) |
For , perturbing less than 5% of the total edges is commonly considered inconspicuous. Hence, given the necessity to minimize the perceptibility of the perturbation, . Therefore, by introducing a sparse matrix containing only 0, 1, and -1, the perturbation process can be represented as . Simultaneously, during training, as the activation functions between layers have been disregarded (as aforementioned, this does not affect the ordering of the output), the parameter matrices of different layers can be unified into a single parameter matrix. We denote ; hence, Equation (37) can be rewritten as:
(38) |
Considering when , Equation (38) becomes:
(39) |
To quantify the cost of the equation above, we consider that in non-adversarial scenarios, the GCN can reach the computational universality of the ground-truth labels, i.e., there always exists and a minimal value such that . Since is perturbable, to maintain the concealment of the perturbations and minimize the modification degree of the GCN parameters, i.e., , it is only necessary to ensure that, with an identity activation , is as close to a zero matrix as possible. Thus, Equation (39) can be further simplified to:
(40) |
Upon observation, we consider that can represent a distinct graph , which has the same number of nodes as , with most nodes being isolated. Perturbations can be viewed as edges, where inserted edges (i.e., when an edge is inserted between nodes and , ) have a weight of , and deleted edges (i.e., when an edge is deleted between nodes and , ) have a weight of . Therefore, the equation can be approximated as another GCN training paradigm; the difference is that the training parameters have changed from to . At this point, since is a disconnected subgraph of with edges, when , the maximum number of disconnected subgraphs of with edges is the maximum number of forward propagations that the attack model needs to compute. That is, .
Proceeding to the case where , in which plays the same role as , Equation (38) can be reformulated as follows:
(41) |
Analogous to the scenario where , the objective of the adversary for can be simplified to:
(42) |
This expression can be interpreted as an extension of the , case: an attack on a graph with adjacency matrix . The maximum number of forward propagations for the attacker equals the number of largest disconnected subgraphs in , which is the same as in the case. We present illustrative examples of , , and in Figure 7.
The situation differs when . Specifically, Equation (38) for is given by:
(43) |
In the above expression, it is required to ensure:
(44) |
It can be observed that for , the maximum number of connected subgraphs in depends on the number of edges in the graph , represented by , rather than the number of edges in (which is the case for and ). In , each element represents the number of paths of length 2 from node to in . Therefore, the number of non-zero elements in can be at most , as each edge can form a path of length 2 with the other nodes. Hence, the upper bound on the number of edges contained in is . The maximum number of disconnected -edge subgraphs (i.e., ) is .
Upon obtaining , it needs to be passed to the last two terms, namely and , for computation. Considering that is a sparse matrix, is generally a zero matrix, implying that all nodes in the graph it forms are isolated. Consequently, the last two terms can be reduced to one term, meaning that after obtaining , one more matrix multiplication-based forward propagation is needed. In summary, .
Utilizing the same methodology, for , each decomposition introduces a graph with as the adjacency matrix, and a linearly increasing number of summation terms. Combining the above analysis, we can conclude that the maximum number of edges in , grows exponentially, and the number of matrix multiplication-based forward propagation summation terms determined by the maximum number of disconnected subgraphs in grows linearly. That is, when , , which completes the proof.
Proof of Proposition 1
We adopt an intuitive approach to simplify this computation. Given that contains one fewer edge than (i.e., ), and differ in only two elements; hence the subtraction results in a sparse matrix , where all elements are zero except those at the -th row, -th column and the -th row, -th column, which are 1. Given that the parameters of have been trained during the calculation of the -th layer's edge-transmitted signals, we use to represent the constant matrix from the perspective of the -th layer. Then, Equation (5) simplifies to
(45) |
Given that is a symmetric matrix with only two non-zero values, the result of , defined as
(46) |
does not need to be recalculated for each , , but can instead be reassembled by indexing the rows of according to the chosen of , , i.e.,
Through the method stated above, Equation (5) is eventually simplified to the conclusion of Proposition 1 (Equation (7)). It can be observed that Equation (7) encompasses all arbitrary selections of and , but only the and terms are required to be computed. Thus, during the computation of edge-transmitted signals in GRN, we reduce the initial computations down to a single computation. Thereafter, when transitioning among different edges, it is only necessary to conduct a column-wise concatenation and subtraction of the matrices.
Additional Experiments
In this section, we examine the behavior of the layer at which resonance arises across an expanded range of datasets. Concurrently, we delineate the strength distribution of LRS-constructed graphs drawn from additional datasets. The experimental framework employed here remains consistent with the main body of the text. It is imperative to note that, in the interest of methodological rigor and fairness, these experiments were conducted with freshly reset random initializations; consequently, under identical experimental conditions, there is a slight discrepancy between these results and those presented in the main text. Figures 8(a) and 8(b), which can be viewed as extensions of Figures 4 and 5(b) respectively, depict these outcomes. The conclusions derived from the illustrated results are congruent with the central assertions of the main text.
References
[1] Breuer, H.-P.; Huber, W.; and Petruccione, F. 1994. Fluctuation effects on wave propagation in a reaction-diffusion process. Physica D, 73(3): 259–273.
[2] Golub, G. H.; and Reinsch, C. 1971. Singular value decomposition and least squares solutions. Linear Algebra.
[3] Vannieuwenhoven, N.; Vandebril, R.; and Meerbergen, K. 2012. A new truncation strategy for the higher-order singular value decomposition. SIAM J. Sci. Comput., 34(2): 1027–1052.
[4] Al-Mohy, A. H.; and Higham, N. J. 2011. Computing the action of the matrix exponential, with an application to exponential integrators. SIAM J. Sci. Comput., 33(2): 488–511.
[5] Sun, L.; Dou, Y.; Yang, C.; et al. 2022. Adversarial attack and defense on graph data: A survey. IEEE Trans. Knowl. Data Eng., 35(8): 7693–7711.
[6] Zügner, D.; Akbarnejad, A.; and Günnemann, S. 2018. Adversarial attacks on neural networks for graph data. In Proc. 24th ACM SIGKDD Int. Conf. Knowl. Discov. Data Min., 2847–2856.