Scalable Adversarial Attack on Graph Neural Networks
with Alternating Direction Method of Multipliers
Abstract
Graph neural networks (GNNs) have achieved high performance in analyzing graph-structured data and have been widely deployed in safety-critical areas, such as finance and autonomous driving. However, only a few works have explored GNNs' robustness to adversarial attacks, and their designs are usually limited by the scale of input datasets (i.e., focusing on small graphs with only thousands of nodes). In this work, we propose SAG, the first scalable adversarial attack method based on the Alternating Direction Method of Multipliers (ADMM). We first decouple the large-scale graph into several smaller graph partitions and cast the original problem into several subproblems. Then, we propose to solve these subproblems using projected gradient descent on both the graph topology and the node features, which leads to considerably lower memory consumption than conventional attack methods. Rigorous experiments further demonstrate that SAG significantly reduces the computation and memory overhead compared with the state-of-the-art approach, making SAG applicable to graphs with large numbers of nodes and edges.
Introduction
Graph neural networks (GNNs) play an essential role in many emerging applications with graph data. In these applications, GNNs show their strength in extracting valuable information from both the features (i.e., information from individual nodes) and the topology (i.e., the relationship between nodes). For example, GNNs can effectively analyze financial data to inform loan-related policies. Another example is social networks with billions of users, where GNNs can power friendship recommendation.
Such wide deployment of GNNs motivates the investigation of their robustness and reliability. One key aspect is to effectively generate adversarial examples on graph data, so that we can better understand the "weakness" of GNNs and secure them more wisely afterward. Our initial exploration and studies identify several key properties to be considered in a GNN attack. First, the adversarial example needs to exploit both topology and feature information to attack GNNs comprehensively from all perspectives. Second, the attack method needs to be efficient in both memory and computation to cater to the huge number of nodes in graph data.
However, existing work inevitably falls short in at least one of the above aspects, as summarized in Table 1. Specifically, FGSM (Goodfellow, Shlens, and Szegedy 2015) is crafted for attacking traditional deep neural networks (DNNs). Even though it can attack node embeddings with good computation and memory efficiency, it cannot handle the graph topology, which distinguishes GNNs from DNNs. PGD (Xu et al. 2019a) is one of the state-of-the-art works designed for the GNN attack. However, it does not support the attack on node features, and it suffers from a quadratic memory overhead due to maintaining the large set of edge gradients in a dense matrix format. For example, a graph with $N$ nodes leads to an $N \times N$ dense gradient matrix, consuming more than a gigabyte of memory even for a moderate-size graph with only tens of thousands of nodes. Another work, Nettack (Zügner, Akbarnejad, and Günnemann 2018), while performing well in three aspects, falls short in computation efficiency due to its fine-grained edge manipulation in a very inefficient trial-and-error manner.
Method | Topology | Feature | Comp. Effi. | Mem. Effi. |
FGSM | ✗ | ✓ | ✓ | ✓ |
PGD | ✓ | ✗ | ✓ | ✗ |
Nettack | ✓ | ✓ | ✗ | ✓ |
SAG | ✓ | ✓ | ✓ | ✓ |
To overcome these challenges, we propose SAG, the first scalable attack on GNNs based on the alternating direction method of multipliers (ADMM), which iteratively maximizes the training loss by modifying the graph topology and the node features. ADMM has been shown to be effective in dividing and conquering a large non-convex optimization problem into multiple smaller ones to achieve both efficiency and scalability (Fan et al. 2017; Leng et al. 2018; Liu et al. 2020; Ren et al. 2019). As shown in the last row of Table 1, SAG can attack both the graph topology and the node features while maintaining computation and memory efficiency to a great extent, making it a viable and promising solution for large-scale graphs.
In summary, our major contributions are:
- We identify and analyze the key properties of effective adversarial attacks on GNNs, none of which existing methods address systematically and comprehensively.
- We propose SAG to effectively generate adversarial examples on graph neural networks based on both the topology attack and the feature attack. We formulate SAG within the ADMM optimization framework and achieve both high memory and computation efficiency.
- Evaluation shows that our proposed method launches more effective attacks than the state-of-the-art approaches while reducing the computation and memory overhead, making it applicable to large graph settings.
Related Work
Graph Adversarial Attacks. Graph adversarial attacks aim to maximize the accuracy drop of GNN models by introducing perturbations, such as modifications of the graph topology and the node representations (feature embeddings). Existing GNN attacks can be broadly classified into two major categories, the poisoning attack (Zügner, Akbarnejad, and Günnemann 2018; Zügner and Günnemann 2019) and the evasion attack (Dai et al. 2018), depending on when they happen. The former (poisoning attack) happens during the training time of the GNNs by modifying the training data, while the latter (evasion attack) takes place during GNN inference by changing test data samples. Our proposed SAG is a comprehensive attack method that covers both types of attack while offering significant computation and memory efficiency compared with existing attack approaches.
ADMM Framework. ADMM (Boyd et al. 2011) is an optimization framework that is effective in decomposing and solving constrained optimization problems. The theoretical properties of ADMM have been explored in (Gómez, Eftekhari, and Cevher 2019; Poon and Liang 2019; Yu 2019; Nishihara et al. 2015) for various convex problems under diverse constraints, showing linear convergence. Formally, ADMM can effectively solve an optimization problem under linear constraints
$$\min_{x,\,z} \; f(x) + g(z) \quad \text{s.t.} \quad Ax + Bz = c \qquad (1)$$
where $f$ and $g$ could be either differentiable, or non-differentiable but with some exploitable structural properties. Then, by introducing the augmented Lagrangian function, ADMM breaks the problem into two subproblems in $x$ and $z$ and iteratively solves one subproblem at a time. While the popular stochastic gradient descent (SGD) method can usually solve unconstrained optimization problems, SGD cannot effectively handle diverse constraints (e.g., equality constraints between variables) and usually requires ad-hoc modifications on gradients.
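For reference, the standard scaled-form ADMM iterates for problem (1) are (Boyd et al. 2011):

$$\begin{aligned} x^{k+1} &= \arg\min_{x} \; f(x) + \tfrac{\rho}{2}\big\| Ax + Bz^{k} - c + u^{k} \big\|_2^2 \\ z^{k+1} &= \arg\min_{z} \; g(z) + \tfrac{\rho}{2}\big\| Ax^{k+1} + Bz - c + u^{k} \big\|_2^2 \\ u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c \end{aligned}$$

where $\rho > 0$ is the penalty parameter and $u$ is the scaled dual variable.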
Although ADMM was originally developed to solve convex problems, recent studies successfully exploit ADMM to solve NP-hard non-convex constrained problems in CNN pruning (Fan et al. 2017; Zhang et al. 2018), compressive sensing (Yang et al. 2016), AutoML (Liu et al. 2020), top-K feature selection (Fan et al. 2017), and hardware design (Ren et al. 2019). In this paper, we focus on exploring the benefit of the ADMM framework in the context of graph-based adversarial attacks.
Symbol | Description
$G$ | A single large graph
$N$ | The number of nodes
$d$ | The length of node features
$E$ | The number of edges in the original input graph
$A$ | The original input adjacency matrix of shape $N \times N$
$X$ | The original input feature matrix of shape $N \times d$
$W$ | Fixed GNN weights
$S$ | A perturbation matrix of shape $N \times N$
$\hat{A}$ | Perturbed adjacency matrix of shape $N \times N$
$\hat{X}$ | Perturbed feature matrix of shape $N \times d$
$\epsilon_A$ | The maximum number of allowed edge perturbations
$\epsilon_X$ | The maximum norm of allowed feature perturbations
$\mathcal{L}$ | The loss function measuring the difference between the current model output and the targeted labels
$M$ | The number of subproblems to split into
$S_i$ | A graph partition of shape $(N/M) \times N$
$\hat{X}_i$ | A feature copy of shape $N \times d$
$\Pi_{\mathcal{C}}(u)$ | The projection of input $u$ onto set $\mathcal{C}$
$T$ | The number of epochs in ADMM
$\mathcal{V}_L$ | Set of nodes with ground truth labels
Methodology
Problem Formulation of Scalable Graph Attack
We first define the notation used in this paper, as summarized in Table 2. We consider an input graph $G = (A, X)$, where $A \in \{0,1\}^{N \times N}$ is the adjacency matrix encoding the edge connections between nodes, $X \in \mathbb{R}^{N \times d}$ is the associated node feature matrix, $N$ is the number of nodes in the graph, and $d$ is the feature dimension of each node. Here, only a portion of nodes $\mathcal{V}_L$ are labeled, and the goal of node classification is to predict labels for the remaining nodes. Following the common practice in the field (Xu et al. 2019a; Zügner, Akbarnejad, and Günnemann 2018), we focus on the well-established model that utilizes graph convolutional layers (Kipf and Welling 2017) for node classification. Formally, the $l$-th layer is defined as
$$H^{(l+1)} = \sigma\!\left( \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)} \right) \qquad (2)$$
where $\tilde{A} = A + I_N$ to achieve numerical stability, $\tilde{D}$ is a diagonal matrix with $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$, and $W^{(l)}$ is the weight matrix of the $l$-th layer. Since the memory and computation complexity generally increase with the number of layers, we focus on a single-layer GCN that is tractable yet still captures the idea of graph convolutions:
$$Z = \text{softmax}\!\left( \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} X W \right) \qquad (3)$$
Here, the output $Z$ contains the predicted label probabilities for individual nodes. The parameter $W$ is learned by minimizing the cross-entropy on the output of labeled nodes:
$$\min_{W} \; \mathcal{L}(A, X, W) = -\sum_{i \in \mathcal{V}_L} \ln Z_{i, y_i} \qquad (4)$$
where $y_i$ is the ground truth label for node $i$. Our experiments show that adversarial examples generated on this single-layer GCN model transfer well to GCNs with various numbers of layers and to other GNN models such as GAT. Besides, recent studies (Zügner, Akbarnejad, and Günnemann 2018) show that an $L$-layer GCN can also be linearized into a one-layer surrogate to generate adversarial attacks with tractable computation.
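To make the surrogate concrete, below is a minimal dense PyTorch sketch of the single-layer GCN in Eq. (3) and the attack loss in Eq. (4). The function names are ours, dense matrices are used purely for exposition, and `F.cross_entropy` applies the softmax of Eq. (3) internally:

```python
import torch
import torch.nn.functional as F

def gcn_forward(A, X, W):
    """Single-layer GCN logits: D^{-1/2} (A + I) D^{-1/2} X W (Eq. (3) before softmax)."""
    A_tilde = A + torch.eye(A.size(0), device=A.device)         # add self-loops
    d_inv_sqrt = A_tilde.sum(dim=1).clamp(min=1e-12).pow(-0.5)  # D^{-1/2} diagonal
    A_norm = d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]
    return A_norm @ X @ W

def attack_loss(A, X, W, labels, labeled_idx):
    """Cross-entropy over labeled nodes (Eq. (4)); the attacker maximizes this."""
    logits = gcn_forward(A, X, W)
    return F.cross_entropy(logits[labeled_idx], labels[labeled_idx])
```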
In this paper, SAG aims to generate a perturbed adjacency matrix $\hat{A}$ and a perturbed feature matrix $\hat{X}$ satisfying a pre-defined perturbation budget. Similar to (Xu et al. 2019a), we use a Boolean matrix $S \in \{0,1\}^{N \times N}$ to record the edge perturbations, where $S_{jk} = 1$ indicates the perturbation of the edge between node $j$ and node $k$. Given the original adjacency matrix $A$ and its supplement matrix $\bar{A}$ (i.e., $\bar{A} = \mathbf{1}\mathbf{1}^{T} - I - A$), we can generate the perturbed adjacency matrix as $\hat{A} = A + (\bar{A} - A) \circ S$, where $\circ$ is the Hadamard product. Formally, given the edge perturbation budget $\epsilon_A$ and the feature perturbation budget $\epsilon_X$, SAG aims to solve the following optimization problem
$$\max_{S,\,\hat{X}} \; \mathcal{L}(\hat{A}, \hat{X}, W) \quad \text{s.t.} \quad \mathbf{1}^{T} S\,\mathbf{1} \le \epsilon_A, \;\; S \in \{0,1\}^{N \times N}, \;\; \|\hat{X} - X\|_F \le \epsilon_X \qquad (5)$$
For notation simplicity, we use $\mathcal{L}$ to represent the cross-entropy loss when the context is clear. Following the common practice (Zügner, Akbarnejad, and Günnemann 2018; Xu et al. 2019a) in the field, we consider two threat models: the evasive attack and the poisoning attack. The evasive attack assumes that the GNN model is fixed and targets the test data. The poisoning attack targets the training data, and the model training phase is performed on the perturbed data after the attack.
There are several challenges in solving this optimization problem. First, popular stochastic gradient descent (SGD) methods cannot effectively solve optimization problems under constraints and usually require ad-hoc modifications on the gradients to satisfy such constraints. Second, the discrete modification of the edge perturbations makes it a combinatorial optimization problem. (Zügner, Akbarnejad, and Günnemann 2018) attacks the graph topology by changing one edge at a time and selecting the edges that achieve the maximum loss. However, this approach needs to trial-and-error over all edges, leading to prohibitive time cost. (Xu et al. 2019a) takes gradients on the matrix $S$ by relaxing the requirement $S \in \{0,1\}^{N \times N}$ to $S \in [0,1]^{N \times N}$. However, this approach takes $O(N^2)$ memory to store the gradient matrix, which is prohibitive for large graphs with tens of thousands of nodes. Besides, (Xu et al. 2019a) only supports the topology attack and cannot conduct a joint attack on the node features. We detail how the ADMM-based SAG solves these constrained optimization problems in later sections.
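As a concrete illustration of the perturbation parametrization above, the following sketch applies a (relaxed) perturbation matrix $S$ to a dense adjacency matrix; the helper name is ours:

```python
import torch

def perturb_adjacency(A, S):
    """A_hat = A + (A_bar - A) * S: entries with S = 1 flip edge existence,
    where A_bar = 1 - I - A is the supplement matrix (self-loops untouched)."""
    N = A.size(0)
    A_bar = 1.0 - torch.eye(N, device=A.device) - A
    return A + (A_bar - A) * S   # Hadamard product; S relaxed to [0, 1]
```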
[Figure 1: SAG splits the perturbation matrix S into partitions and attacks one partition at a time.]
Scalable Graph Attack Framework using ADMM
SAG achieves a scalable attack on graph data by splitting the matrix $S$ into multiple partitions and considering one partition at a time, as illustrated in Figure 1. Supposing we split the graph into $M$ partitions, we only need to take gradients on a small matrix of shape $(N/M) \times N$ that can easily fit into memory. Similar to (Xu et al. 2019a), and following the probabilistic view of the node classification problem, we first relax the discrete constraints $S \in \{0,1\}^{N \times N}$ to continuous constraints $S \in [0,1]^{N \times N}$, where $S_{jk}$ indicates the probability that an edge needs to be perturbed for the attack. Since the impact between node $j$ and node $k$ may be asymmetric (i.e., the impact of node $j$ on node $k$ may be higher than the reverse), we do not apply a symmetry constraint on $S$. Then, we split $S$ into $M$ sub-matrices $S_i$ such that $S = [S_1; S_2; \ldots; S_M]$, where $S_i$ covers the nodes with indices in $[(i-1)N/M, \, iN/M)$. Due to the cumulative property of the cross-entropy loss, we can attack by solving the following optimization problem
$$\max_{\{S_i\},\,\hat{X}} \; \sum_{i=1}^{M} \mathcal{L}_i(S_i, \hat{X}) \quad \text{s.t.} \quad \mathbf{1}^{T} S_i\,\mathbf{1} \le \epsilon_i, \;\; S_i \in [0,1]^{\frac{N}{M} \times N}, \;\; \|\hat{X} - X\|_F \le \epsilon_X \qquad (6)$$
Here, $\epsilon_i$ represents the allowed number of edge changes in each graph partition, and we set it to $\epsilon_A / M$ for simplicity.
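The row-block split itself is mechanical; a toy sketch (sizes illustrative):

```python
import torch

N, M = 8, 2                          # toy sizes; real N reaches tens of thousands
S = torch.zeros(N, N)                # relaxed perturbation matrix, entries in [0, 1]
S_parts = torch.chunk(S, M, dim=0)   # M row blocks S_i, each of shape (N/M, N)
assert all(p.shape == (N // M, N) for p in S_parts)
```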
Ideally, we could split problem (6) into $M$ sub-problems and solve them independently for memory efficiency. However, problem (6) still couples the sub-problems through the shared term $\hat{X}$. To this end, we further reformulate problem (6) into the following problem by substituting $\hat{X}$ with duplicated feature matrices $\hat{X}_i$
$$\min_{\{S_i\},\,\{\hat{X}_i\}} \; \sum_{i=1}^{M} \left[ -\mathcal{L}_i(S_i, \hat{X}_i) + \mathbb{I}_{\mathcal{S}_i}(S_i) + \mathbb{I}_{\mathcal{X}}(\hat{X}_i) \right] \quad \text{s.t.} \quad \hat{X}_i = \hat{X}_{i+1}, \;\; i = 1, \ldots, M \qquad (7)$$
For notation simplicity, we use $\mathcal{L}_i$ to represent $\mathcal{L}_i(S_i, \hat{X}_i)$ and let $\hat{X}_{M+1} := \hat{X}_1$. $\mathbb{I}_{\mathcal{C}}(u)$ is an indicator function such that $\mathbb{I}_{\mathcal{C}}(u) = 0$ if $u \in \mathcal{C}$, otherwise $\mathbb{I}_{\mathcal{C}}(u) = +\infty$. $\mathcal{S}_i$ and $\mathcal{X}$ are feasible sets
$$\mathcal{S}_i = \left\{ S_i : \mathbf{1}^{T} S_i\,\mathbf{1} \le \epsilon_i, \; S_i \in [0,1]^{\frac{N}{M} \times N} \right\}, \qquad \mathcal{X} = \left\{ \hat{X} : \|\hat{X} - X\|_F \le \epsilon_X \right\} \qquad (8)$$
Here, the popular SGD cannot be easily applied to optimization problem (7) under constraints, especially the equality ones. To this end, we adopt the ADMM framework to systematically solve the problem. First, we form the augmented Lagrangian function

$$L\big(\{S_i\}, \{\hat{X}_i\}, \{U_i\}\big) = \sum_{i=1}^{M} \left[ -\mathcal{L}_i(S_i, \hat{X}_i) + \mathbb{I}_{\mathcal{S}_i}(S_i) + \mathbb{I}_{\mathcal{X}}(\hat{X}_i) + \frac{\rho}{2}\,\big\|\hat{X}_i - \hat{X}_{i+1} + U_i\big\|_F^2 \right] \qquad (9)$$

where $\rho$ is a hyper-parameter and $U_i$ is the (scaled) dual variable.
Following the ADMM framework, we can solve problem (7) by repeating for $T$ iterations and, in each iteration, solving for each $\hat{X}_i$, $S_i$, and $U_i$ individually
$$\begin{aligned} \hat{X}_i^{k+1} &= \arg\min_{\hat{X}_i} \; -\mathcal{L}_i(S_i^{k}, \hat{X}_i) + \frac{\rho}{2}\big\|\hat{X}_{i-1}^{k+1} - \hat{X}_i + U_{i-1}^{k}\big\|_F^2 + \frac{\rho}{2}\big\|\hat{X}_i - \hat{X}_{i+1}^{k} + U_i^{k}\big\|_F^2 + \mathbb{I}_{\mathcal{X}}(\hat{X}_i) \\ S_i^{k+1} &= \arg\min_{S_i} \; -\mathcal{L}_i(S_i, \hat{X}_i^{k+1}) + \mathbb{I}_{\mathcal{S}_i}(S_i) \\ U_i^{k+1} &= U_i^{k} + \hat{X}_i^{k+1} - \hat{X}_{i+1}^{k+1} \end{aligned} \qquad (10)$$
which are, respectively, the feature update, the topology update, and the dual update. We stress that we only need to solve the minimization problem for a single graph partition $S_i$ at a time, leading to much lower memory consumption than (Xu et al. 2019a), which requires solving for the whole graph at once. Here, the main memory overhead comes from the duplicated features $\hat{X}_i$. However, the feature dimension $d$ is usually a fixed number on the order of hundreds to a few thousand, which is much smaller than the number of nodes that may reach tens of thousands, or even millions.
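The overall schedule can be summarized by the following high-level sketch. It reflects our ring-constraint reading of Eqs. (7)-(10); `feature_update` and `topology_update` stand in for the projected-gradient subroutines derived in the next subsection and are passed as callables, since their exact form is given there:

```python
import torch

def sag_admm(A, X, W, M, T, rho, feature_update, topology_update):
    """Sketch of SAG's ADMM schedule: sweep the M partitions, updating one
    partition's variables (S_i, X_i) at a time, then update the duals U_i."""
    N, d = X.shape
    step = N // M
    S = [torch.zeros(step, N) for _ in range(M)]   # relaxed topology partitions
    Xc = [X.clone() for _ in range(M)]             # duplicated feature copies
    U = [torch.zeros(N, d) for _ in range(M)]      # scaled dual variables
    for _ in range(T):                             # outer ADMM epochs
        for i in range(M):                         # inner sweep over partitions
            prv, nxt = (i - 1) % M, (i + 1) % M
            Xc[i] = feature_update(A, S[i], Xc[i], Xc[prv], Xc[nxt],
                                   U[prv], U[i], W, rho)   # Eqs. (12)-(13)
            S[i] = topology_update(A, S[i], Xc[i], W)      # Eq. (20)
        for i in range(M):
            U[i] = U[i] + Xc[i] - Xc[(i + 1) % M]          # dual update, Eq. (10)
    return S, Xc
```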
Algorithm Subroutines for SAG Optimization
In this section, we present how to efficiently solve the above problem and derive closed-form formulas for the individual minimization problems.
Feature Update. In the feature update, we aim to find the feature copy $\hat{X}_i$ that minimizes
$$-\mathcal{L}_i(S_i, \hat{X}_i) + \frac{\rho}{2}\big\|\hat{X}_{i-1} - \hat{X}_i + U_{i-1}\big\|_F^2 + \frac{\rho}{2}\big\|\hat{X}_i - \hat{X}_{i+1} + U_i\big\|_F^2 + \mathbb{I}_{\mathcal{X}}(\hat{X}_i) \qquad (11)$$
Here, $\mathcal{L}_i$ is the cross-entropy loss on the GNN predictions, and the two quadratic penalty terms are differentiable functions, but the indicator function $\mathbb{I}_{\mathcal{X}}$ cannot be easily handled with the gradient descent method due to its $+\infty$ values for $\hat{X}_i \notin \mathcal{X}$. To this end, we adopt a two-step strategy for the feature update. For the first three terms, which are differentiable, we use the gradient descent method to compute the gradient with respect to $\hat{X}_i$ and obtain a pseudo-perturbed feature matrix $\tilde{X}_i$
$$\tilde{X}_i = \hat{X}_i^{k} - \eta_k \, \nabla_{\hat{X}_i}\!\left( -\mathcal{L}_i + \frac{\rho}{2}\big\|\hat{X}_{i-1} - \hat{X}_i + U_{i-1}\big\|_F^2 + \frac{\rho}{2}\big\|\hat{X}_i - \hat{X}_{i+1} + U_i\big\|_F^2 \right) \qquad (12)$$
where $\eta_k$ is the learning rate at iteration $k$. Note that the update on the feature copy $\hat{X}_i$ depends only on the graph partition $S_i$ and does not involve the remaining graph partitions. For the term $\mathbb{I}_{\mathcal{X}}$, we resort to the projection method by projecting the pseudo-perturbed feature $\tilde{X}_i$ onto the feasible set $\mathcal{X}$:
$$\hat{X}_i^{k+1} = \Pi_{\mathcal{X}}\big(\tilde{X}_i\big) \qquad (13)$$
While computing the projection is a difficult task in general and usually requires an iterative procedure, we exploit the special structure of $\mathcal{X}$ and derive a closed-form formula to solve it analytically.
Proposition 1. Given $u$, the projection of $u$ onto $\mathcal{X}$ is

$$\Pi_{\mathcal{X}}(u) = \begin{cases} u, & \text{if } \|u - X\|_F \le \epsilon_X \\ X + \epsilon_X \dfrac{u - X}{\|u - X\|_F}, & \text{otherwise} \end{cases} \qquad (14)$$
Proof: $\Pi_{\mathcal{X}}(u)$ can be viewed as an optimization problem

$$\min_{x} \; \frac{1}{2}\|x - u\|_F^2 \quad \text{s.t.} \quad \|x - X\|_F^2 \le \epsilon_X^2 \qquad (15)$$
We can derive its Lagrangian function as

$$L(x, \lambda) = \frac{1}{2}\|x - u\|_F^2 + \lambda \left( \|x - X\|_F^2 - \epsilon_X^2 \right) \qquad (16)$$
where $\lambda \ge 0$ is the dual variable. Using the KKT conditions, we have the stationarity condition that

$$x - u + 2\lambda (x - X) = 0 \qquad (17)$$
and get that $x = \frac{u + 2\lambda X}{1 + 2\lambda}$. We can also get the complementary slackness that

$$\lambda \left( \|x - X\|_F^2 - \epsilon_X^2 \right) = 0 \qquad (18)$$
If $\lambda > 0$, we need to have $\|x - X\|_F = \epsilon_X$ and $x - X = \frac{u - X}{1 + 2\lambda}$. By reformulating, we have $x = X + \epsilon_X \frac{u - X}{\|u - X\|_F}$. If $\lambda = 0$ and $\|u - X\|_F \le \epsilon_X$, we have $x = u$.
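A direct implementation of Proposition 1 (assuming, as above, a Frobenius-norm budget):

```python
import torch

def project_feature(u, X, eps):
    """Project u onto {X_hat : ||X_hat - X||_F <= eps} (Eq. (14))."""
    delta = u - X
    norm = delta.norm()                 # Frobenius norm of the perturbation
    if norm <= eps:
        return u                        # already feasible (lambda = 0 case)
    return X + delta * (eps / norm)     # rescale onto the ball surface
```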
Topology Update. In the topology update, we aim to minimize the following function by finding $S_i$

$$-\mathcal{L}_i(S_i, \hat{X}_i) + \mathbb{I}_{\mathcal{S}_i}(S_i) \qquad (19)$$
Similar to the feature update, we first use the gradient descent method to compute the gradient with respect to $S_i$ and then use the projection method to generate a perturbed topology in the feasible set $\mathcal{S}_i$

$$S_i^{k+1} = \Pi_{\mathcal{S}_i}\!\left( S_i^{k} - \eta_k \, \nabla_{S_i}\big( -\mathcal{L}_i(S_i^{k}, \hat{X}_i^{k+1}) \big) \right) \qquad (20)$$

where $\eta_k$ is the learning rate. With the Lagrangian function and the KKT conditions, we can derive a closed-form formula to analytically project onto the feasible set $\mathcal{S}_i$. Due to the similarity with the proof of Proposition 1 and page limits, we leave the detailed proof to the appendix.
Proposition 2. Given $u$, the projection of $u$ onto $\mathcal{S}_i$ is

$$\Pi_{\mathcal{S}_i}(u) = \begin{cases} P_{[0,1]}(u), & \text{if } \mathbf{1}^{T} P_{[0,1]}(u)\,\mathbf{1} \le \epsilon_i \\ P_{[0,1]}\big(u - \mu \mathbf{1}\mathbf{1}^{T}\big), & \text{otherwise} \end{cases} \qquad (21)$$

where $P_{[0,1]}(x) = \min(1, \max(0, x))$ clips its input element-wise and, in the second case, the scalar $\mu > 0$ is chosen such that $\mathbf{1}^{T} P_{[0,1]}(u - \mu \mathbf{1}\mathbf{1}^{T})\,\mathbf{1} = \epsilon_i$.
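Proposition 2 translates into a few lines of code; the number of bisection steps below is an illustrative choice:

```python
import torch

def project_topology(u, eps, iters=40):
    """Project u onto {S : S in [0,1], sum(S) <= eps} (Eq. (21))."""
    clipped = u.clamp(0.0, 1.0)
    if clipped.sum() <= eps:            # budget already satisfied: mu = 0
        return clipped
    lo, hi = 0.0, float(u.max())        # sum is monotone decreasing in mu
    for _ in range(iters):              # bisection for the scalar mu
        mu = (lo + hi) / 2.0
        if (u - mu).clamp(0.0, 1.0).sum() > eps:
            lo = mu
        else:
            hi = mu
    return (u - (lo + hi) / 2.0).clamp(0.0, 1.0)
```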
Optimization and Complexity Analysis. We summarize SAG in Algorithm 1. There are two optimization loops. In the inner loop, we iterate through the $M$ graph partitions and update the corresponding partitioned graph perturbation $S_i$ and feature perturbation $\hat{X}_i$. At each iteration, only a single graph partition needs to be considered, and substantial memory is saved by not materializing the other partitions. In the outer loop, we repeat the optimization for $T$ (=200 by default) iterations for the algorithm to converge.
After $T$ iterations, the feature copies $\hat{X}_i$ will be identical to each other and can be directly used as the feature perturbation $\hat{X}$. Recalling that each entry of $S$ is the probability that an edge needs to be perturbed for the attack, we use the Bernoulli distribution to sample a $\{0,1\}$-valued edge perturbation between each pair of nodes. We repeat this sampling procedure multiple times and select the sample with the best (i.e., most damaging) loss as the final perturbed topology $\hat{A}$.
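A sketch of this sampling step; the number of draws `K` and the loss helper are illustrative, not the paper's exact choices:

```python
import torch

def sample_topology(S_prob, loss_fn, K=20):
    """Draw K Bernoulli samples from the relaxed probabilities S_prob and
    keep the most damaging one (highest attack loss under loss_fn)."""
    best_S, best_loss = None, -float('inf')
    for _ in range(K):
        S = torch.bernoulli(S_prob)      # {0,1}-valued edge perturbations
        loss = float(loss_fn(S))         # e.g., CE loss under the perturbation
        if loss > best_loss:
            best_S, best_loss = S, loss
    return best_S
```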
The memory complexity of SAG is
$$O\!\left(\frac{N^2}{M}\right) + O(N d) \qquad (22)$$
for storing the graph partition $S_i$ and the feature perturbation $\hat{X}_i$, respectively, since we only need to optimize one graph partition at each iteration. We store only the current graph partition in the limited GPU memory (around 10 GB) and offload the remaining graph partitions to the large host memory (more than 50 GB). Note that the same implementation strategy cannot be applied to (Xu et al. 2019a), which requires attacking the whole graph at each iteration.
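A minimal sketch of this offloading strategy (variable and helper names are ours):

```python
import torch

def sweep_partitions(S_host, update_partition):
    """Partitions live in host memory; only the active one is moved to the
    GPU for its update, then written back, bounding peak GPU memory."""
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    for i, S_i in enumerate(S_host):
        S_gpu = S_i.to(device, non_blocking=True)      # upload active partition
        S_host[i] = update_partition(S_gpu).to('cpu')  # compute, then offload
    return S_host
```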
Evaluation
In this section, we evaluate SAG on five datasets and compare with three attack algorithms to show its effectiveness.
Datasets. In this experiment, we select various datasets to cover a broad range of GNN inputs, including typical datasets (Citeseer, Cora, Pubmed, and Amazon-Computer/Photo) used by many GNN papers (Kipf and Welling 2017; Xu et al. 2019b; Hamilton, Ying, and Leskovec 2017). Details of these datasets are listed in Table 3.
Baselines. To evaluate the effectiveness of SAG, we compare it with the state-of-the-art attack methods using the adversarial attack repository DeepRobust (https://github.com/DSE-MSU/DeepRobust.git).
- FGSM (Goodfellow, Shlens, and Szegedy 2015) is a gradient-based adversarial attack that generates adversarial examples by perturbing the input features. While it was originally designed for attacking CNNs, it can be easily adapted to attack GNNs by perturbing node features.
- PGD (Xu et al. 2019a) is another gradient-based attack method tailored for discrete graph data. We select PGD for comparison because it provides a fast attack on discrete graph data by leveraging an optimization-based approach.
- Nettack (Zügner, Akbarnejad, and Günnemann 2018) is the most popular attack algorithm on graph data that incorporates both edge and feature attacks. We select Nettack for comparison because it serves as a strong baseline for SAG on the joint attack.
Models. Graph Convolutional Network (GCN) (Kipf and Welling 2017) is one of the most popular GNN architectures. It has been widely adopted in node classification, graph classification, and link prediction tasks. Besides, it is also the key backbone network for many other GNNs, such as GraphSage (Hamilton, Ying, and Leskovec 2017) and differentiable pooling (Diffpool) (Ying et al. 2018). We use a hidden dimension size of 16 for each GCN layer. Graph Attention Network (GAT) (Xu et al. 2019b), another typical category of GNN, aims to distinguish graph structures that cannot be identified by GCN. GAT differs from GCN in its aggregation function, which assigns different weights to different nodes during the aggregation. We use a setting of 8 hidden dimensions and 8 attention heads for each GAT layer.
Platforms. We implement SAG based on PyTorch Geometric (Fey and Lenssen 2019). We evaluate SAG on Dell T7910 (Ubuntu 18.04) with Intel Xeon CPU E5-2603, 64 GB host memory, and an NVIDIA 1080Ti GPU with 12 GB memory.
Dataset | #Vertex | #Edge | #Dim | #Class |
Cora | 2,708 | 10,858 | 1,433 | 7 |
Citeseer | 3,327 | 9,464 | 3,703 | 6 |
Amazon-Photo | 7,487 | 119,043 | 745 | 8 |
Amazon-Computer | 13,381 | 245,778 | 767 | 10 |
Pubmed | 19,717 | 88,676 | 500 | 3 |
Dataset | Method | Time (min) | Mem. (GB) | Evasive Acc. (%) | Poisoning Acc. (%) |
Cora | Clean | 0 | 0 | 79.63 | 79.63 |
FGSM | 0.05 | 0.64 | 70.52 | 78.87 | |
PGD | 0.25 | 1.03 | 75.70 | 70.77 | |
Nettack | 14.75 | 0.68 | 69.62 | 71.35 | |
SAG | 0.48 | 0.70 | 68.06 | 67.15 | |
Citeseer | Clean | 0 | 0 | 71.80 | 71.80 |
FGSM | 0.03 | 0.41 | 67.00 | 71.70 | |
PGD | 0.18 | 0.97 | 69.08 | 67.30 | |
Nettack | 12.63 | 0.60 | 62.91 | 67.54 | |
SAG | 0.34 | 0.82 | 62.62 | 63.68 | |
Amazon Photo | Clean | 0 | 0 | 93.11 | 93.11 |
FGSM | 0.05 | 1.17 | 89.13 | 91.37 | |
PGD | 4.21 | 3.69 | 83.02 | 82.77 | |
Nettack | 1560 | 1.84 | 87.95 | 89.25 | |
SAG | 6.13 | 1.24 | 80.52 | 80.12 | |
Amazon Computer | Clean | 0 | 0 | 89.29 | 89.29 |
FGSM | 3.28 | 2.67 | 85.62 | 88.03 | |
PGD | 17.32 | 10.59 | 77.22 | 77.93 | |
Nettack | 2578 | 4.35 | 82.41 | 85.33 | |
SAG | 18.78 | 2.91 | 74.56 | 76.31 | |
Pubmed | Clean | 0 | 0 | 77.53 | 77.53 |
FGSM | 9.83 | 3.54 | 72.93 | 74.25 | |
PGD | - | OOM | - | - | |
Nettack | 3108 | 8.17 | 67.72 | 76.25 | |
SAG | 47.62 | 3.63 | 62.71 | 36.27 |
1. Note that "-" means the measurement is not applicable to the given setting.
2. OOM refers to "Out of Memory".
Metrics. We evaluate SAG with six metrics: evasive accuracy, poisoning accuracy, topology ratio, feature ratio, memory consumption, and running time. Following the common setting (Zügner, Akbarnejad, and Günnemann 2018; Xu et al. 2019a), we report the evasive accuracy by assuming the GNN model is fixed and targeting the test data. We report the poisoning accuracy by targeting the training data and performing the model training phase after the attack. The topology ratio (%) is computed as the number of attacked edges over the number of existing edges in the clean graph dataset. The feature ratio (%) is reported as the norm of the feature perturbation over the norm of the original features in the clean graph dataset. To measure memory, we utilize NVProf to query the runtime GPU memory consumption at a pre-selected sampling frequency. To measure time, we leverage the software timer from the Python standard library. For a fair comparison, all gradient-based approaches are run for the same number of iterations.
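For concreteness, the two ratio metrics can be computed as follows (assuming a Frobenius norm for the feature ratio, matching the budget in Eq. (5)):

```python
import torch

def topology_ratio(A_clean, A_pert):
    """Attacked edges over existing clean edges, in percent."""
    changed = (A_clean != A_pert).float().sum()
    return float(100.0 * changed / A_clean.sum())

def feature_ratio(X_clean, X_pert):
    """Perturbation norm over the clean feature norm, in percent."""
    return float(100.0 * (X_pert - X_clean).norm() / X_clean.norm())
```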
Overall Performance
Table 4 shows the overall performance comparison between SAG and existing adversarial attacks under the same setting of topology ratio and feature ratio. Following the most common setting used by previous papers (Zügner, Akbarnejad, and Günnemann 2018; Xu et al. 2019a), we select the same topology attack ratio and feature attack ratio for all methods, and leave the study of diverse ratios to the ablation study. For PGD and FGSM, we only attack the topology and the features, respectively, due to their limited attacking capability. In SAG, we stick to M=2 and exhibit the impact of M in the ablation study. We observe that SAG consistently outperforms the state-of-the-art attack approaches, such as FGSM, PGD, and Nettack, on the evasive attack (up to 14.8% accuracy drop) and the poisoning attack (up to 41.3% accuracy drop) across different datasets. On the Pubmed dataset, SAG achieves a 14.8% accuracy drop in the evasive attack and a 41.3% accuracy drop in the poisoning attack. The major reason for such success is that SAG enables gradient-based joint optimization on both the features and the topology while incorporating global reasoning on the interaction between attacking different nodes. By contrast, FGSM and PGD attack only the features or the topology, and Nettack considers only one edge at a time, failing to reason about the global interaction across edges and nodes.
[Figure 2: ADMM convergence behavior of SAG on Cora and Pubmed: (a) attack loss versus epochs; (b) and (c) convergence of the feature and topology variables in the early epochs.]
Across different datasets and settings, we notice that Nettack always incurs the highest time cost. The reason is that, at each iteration, it selects only one edge or feature to attack by examining all edges and node features, and repeats this procedure until reaching the topology ratio and the feature ratio. Moreover, we observe that PGD usually has the highest memory consumption, since it requires an $N \times N$ floating-point matrix to store the edge gradients between each pair of nodes, where $N$ is the number of nodes. For Pubmed, a single matrix of this shape requires about 1.5 GB to store in single precision, and PGD processes the whole matrix at each iteration, instead of processing only a small partition as SAG does. Besides, in the time and memory comparison among these implementations, we notice the significant strength of SAG, which achieves up to 65x speedup (over Nettack on Pubmed) and 3.6x memory reduction (over PGD on Amazon-Computer). This is largely due to our effective algorithmic and implementation optimizations that reduce the runtime complexity while amortizing the memory overhead to a great extent.
Ablation Studies
In this ablation study, we focus on two representative datasets, Cora and Pubmed, to study the performance of SAG on a small graph dataset and a large graph dataset.
ADMM Convergence Behavior. We show the ADMM convergence behavior in Figure 2. Here, we adopt the same number of epochs as in the overall evaluation and show only the initial epochs in Figure 2(b) and Figure 2(c), since SAG converges quickly on those quantities. We present the result for only one partition, since the various partitions show similar results. Overall, SAG converges gracefully as the epoch number increases, demonstrating the effectiveness of our method. In Figure 2(a), with the increase of the epoch number, SAG gradually increases the attack loss by perturbing the features and the topology. In Figure 2(b), the curve starts from 0 since we initialize the feature copies to the node features of the clean graph dataset. Comparing across datasets, Pubmed converges faster than Cora since Pubmed has a much smaller feature dimension than Cora.
Topology and Feature Perturbation. For the poisoning attack on the Cora dataset (Table 5), increasing either the feature perturbation or the topology perturbation leads to an accuracy drop compared with the original clean data. Besides, we observe that, at the same level of topology perturbation, feature perturbation leads to an extra accuracy drop on average. On the Pubmed dataset, we similarly observe that feature perturbation leads to an extra accuracy drop, averaged over the various topology perturbation ratios. These results show the benefit of attacking both the topology and the features.
Cora | Feature Perturbation (%) | |||||
Topology Perturbation (%) | 0 | 1 | 2 | 4 | 8 | |
0 | 79.63 | 79.25 | 78.91 | 78.84 | 78.73 | |
5 | 70.47 | 69.11 | 68.71 | 66.52 | 65.59 | |
10 | 58.63 | 57.29 | 56.34 | 55.23 | 53.77 | |
15 | 50.15 | 49.60 | 47.53 | 47.04 | 46.73 | |
20 | 45.37 | 43.46 | 43.06 | 42.71 | 41.65 | |
Pubmed | Feature Perturbation (%) | |||||
Topology Perturbation (%) | 0 | 1 | 2 | 4 | 8 | |
0 | 77.53 | 75.34 | 73.91 | 72.55 | 68.65 | |
5 | 49.19 | 39.14 | 36.27 | 31.65 | 28.53 | |
10 | 33.24 | 28.57 | 27.81 | 26.35 | 25.30 | |
15 | 28.04 | 26.05 | 23.59 | 23.04 | 22.34 | |
20 | 23.95 | 21.62 | 21.09 | 20.71 | 20.38 |
M | Time | Memory | Accuracy Drop |
(min) | (GB) | (%) | |
1 | 33 | 5.72 | 35.37 |
2 | 42 | 3.63 | 36.27 |
4 | 62 | 2.75 | 36.18 |
8 | 95 | 2.21 | 35.97 |
Cora (%) | Pubmed (%) | |
1-layer GCN | 0.67 (0.80) | 0.35 (0.77) |
2-layer GCN | 0.78 (0.83) | 0.80 (0.85) |
4-layer GCN | 0.74 (0.81) | 0.76 (0.84) |
1-layer GAT | 0.74 (0.82) | 0.37 (0.79) |
2-layer GAT | 0.77 (0.83) | 0.78 (0.85) |
4-layer GAT | 0.75 (0.80) | 0.73 (0.82) |
1. Data format: attacked data accuracy (clean data accuracy).
M-value Impact. We also evaluate SAG for the poisoning attack on Pubmed to show the impact of the hyperparameter M (i.e., the number of graph partitions) on memory saving. As shown in Table 6, as the M value increases, the memory reduction becomes significant, since splitting the graph into M partitions and attacking individual partitions one at a time essentially reduces the memory requirement. We also observe a similar accuracy drop under different M, since SAG converges gracefully and hits similar optimal points for diverse M. Meanwhile, we observe that increasing M also brings runtime overhead in terms of time cost; for example, the M=8 setting is 33 minutes slower than the M=4 setting. This slowdown happens since we need to attack individual split graphs one at a time, leading to a small portion of system overhead on memory access. This also leads to a tradeoff among these factors when selecting the value of M. We also observe that the memory consumption does not decrease linearly as M increases. The main reason is that, as the per-partition memory shrinks, memory consumption from other sources (e.g., loading the PyTorch framework and the features) becomes the dominant component.
Transferability. To demonstrate the transferability of SAG, we further evaluate our attacked graphs (Cora and Pubmed) on GCNs (with 1, 2, and 4 layers) and GATs (with 1, 2, and 4 layers), respectively. We generate adversarial examples on a 1-layer GCN model and conduct the poisoning attack on other models by targeting the training data and training these models on the perturbed data. As shown in Table 7, SAG can effectively maximize the accuracy drop on Cora (up to 13%) and Pubmed (up to 42%). The major reason for such success in launching the poisoning attack is that the adversarial attack on the 1-layer GCN effectively captures intrinsic properties of the graph data that are agnostic to the models. We stress that, even on models with different numbers of layers (i.e., 2 and 4), the poisoned graph data can still achieve a 5% to 9% accuracy drop on Cora and Pubmed. These results demonstrate the transferability of SAG towards models with diverse architectures and numbers of layers.
Conclusion
This work focuses on GNN robustness by providing an in-depth understanding of GNNs' "weakness". We propose SAG, the first scalable adversarial attack method based on the Alternating Direction Method of Multipliers (ADMM), which successfully overcomes the limitations of previous solutions. Extensive experiments further highlight SAG's advantage in reducing computation and memory overhead over existing approaches.
References
- Boyd et al. (2011) Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; and Eckstein, J. 2011. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Found. Trends Mach. Learn. 3(1): 1–122. ISSN 1935-8237. doi:10.1561/2200000016. URL https://doi.org/10.1561/2200000016.
- Dai et al. (2018) Dai, H.; Li, H.; Tian, T.; Huang, X.; Wang, L.; Zhu, J.; and Song, L. 2018. Adversarial attack on graph structured data. arXiv preprint arXiv:1806.02371 .
- Fan et al. (2017) Fan, M.; Chang, X.; Zhang, X.; Wang, D.; and Du, L. 2017. Top-k Supervise Feature Selection via ADMM for Integer Programming. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, 1646–1653. doi:10.24963/ijcai.2017/228. URL https://doi.org/10.24963/ijcai.2017/228.
- Fey and Lenssen (2019) Fey, M.; and Lenssen, J. E. 2019. Fast Graph Representation Learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds (ICLR).
- Gómez, Eftekhari, and Cevher (2019) Gómez, F. L.; Eftekhari, A.; and Cevher, V. 2019. Fast and Provable ADMM for Learning with Generative Priors. In Wallach, H. M.; Larochelle, H.; Beygelzimer, A.; d’Alché-Buc, F.; Fox, E. B.; and Garnett, R., eds., Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, 12004–12016. URL http://papers.nips.cc/paper/9371-fast-and-provable-admm-for-learning-with-generative-priors.
- Goodfellow, Shlens, and Szegedy (2015) Goodfellow, I.; Shlens, J.; and Szegedy, C. 2015. Explaining and Harnessing Adversarial Examples. In International Conference on Learning Representations (ICLR). URL http://arxiv.org/abs/1412.6572.
- Hamilton, Ying, and Leskovec (2017) Hamilton, W.; Ying, Z.; and Leskovec, J. 2017. Inductive representation learning on large graphs. In Advances in neural information processing systems (NeurIPS), 1024–1034.
- Kipf and Welling (2017) Kipf, T. N.; and Welling, M. 2017. Semi-supervised classification with graph convolutional networks. International Conference on Learning Representations (ICLR) .
- Leng et al. (2018) Leng, C.; Dou, Z.; Li, H.; Zhu, S.; and Jin, R. 2018. Extremely Low Bit Neural Network: Squeeze the Last Bit Out With ADMM. In McIlraith, S. A.; and Weinberger, K. Q., eds., Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, 3466–3473. AAAI Press. URL https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16767.
- Liu et al. (2020) Liu, S.; Ram, P.; Vijaykeerthy, D.; Bouneffouf, D.; Bramble, G.; Samulowitz, H.; Wang, D.; Conn, A.; and Gray, A. G. 2020. An ADMM Based Framework for AutoML Pipeline Configuration. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, 4892–4899. AAAI Press. URL https://aaai.org/ojs/index.php/AAAI/article/view/5926.
- Nishihara et al. (2015) Nishihara, R.; Lessard, L.; Recht, B.; Packard, A.; and Jordan, M. 2015. A General Analysis of the Convergence of ADMM. volume 37 of Proceedings of Machine Learning Research, 343–352. Lille, France: PMLR. URL http://proceedings.mlr.press/v37/nishihara15.html.
- Poon and Liang (2019) Poon, C.; and Liang, J. 2019. Trajectory of Alternating Direction Method of Multipliers and Adaptive Acceleration. In Wallach, H. M.; Larochelle, H.; Beygelzimer, A.; d’Alché-Buc, F.; Fox, E. B.; and Garnett, R., eds., Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, 7355–7363. URL http://papers.nips.cc/paper/8955-trajectory-of-alternating-direction-method-of-multipliers-and-adaptive-acceleration.
- Ren et al. (2019) Ren, A.; Zhang, T.; Ye, S.; Li, J.; Xu, W.; Qian, X.; Lin, X.; and Wang, Y. 2019. ADMM-NN: An Algorithm-Hardware Co-Design Framework of DNNs Using Alternating Direction Methods of Multipliers. In Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS ’19, 925–938. New York, NY, USA: Association for Computing Machinery. ISBN 9781450362405. doi:10.1145/3297858.3304076. URL https://doi.org/10.1145/3297858.3304076.
- Xu et al. (2019a) Xu, K.; Chen, H.; Liu, S.; Chen, P.-Y.; Weng, T.-W.; Hong, M.; and Lin, X. 2019a. Topology attack and defense for graph neural networks: An optimization perspective. arXiv preprint arXiv:1906.04214 .
- Xu et al. (2019b) Xu, K.; Hu, W.; Leskovec, J.; and Jegelka, S. 2019b. How Powerful are Graph Neural Networks? In International Conference on Learning Representations (ICLR). URL https://openreview.net/forum?id=ryGs6iA5Km.
- Yang et al. (2016) Yang, Y.; Sun, J.; Li, H.; and Xu, Z. 2016. Deep ADMM-Net for Compressive Sensing MRI. In Lee, D. D.; Sugiyama, M.; Luxburg, U. V.; Guyon, I.; and Garnett, R., eds., Advances in Neural Information Processing Systems 29, 10–18. Curran Associates, Inc. URL http://papers.nips.cc/paper/6406-deep-admm-net-for-compressive-sensing-mri.pdf.
- Ying et al. (2018) Ying, R.; You, J.; Morris, C.; Ren, X.; Hamilton, W. L.; and Leskovec, J. 2018. Hierarchical Graph Representation Learning with Differentiable Pooling. In Proceedings of the 32nd International Conference on Neural Information Processing Systems (NeurIPS), 4805–4815. Red Hook, NY, USA.
- Yu (2019) Yu, H. 2019. A Communication Efficient Stochastic Multi-Block Alternating Direction Method of Multipliers. In Wallach, H. M.; Larochelle, H.; Beygelzimer, A.; d’Alché-Buc, F.; Fox, E. B.; and Garnett, R., eds., Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, 8622–8631. URL http://papers.nips.cc/paper/9068-a-communication-efficient-stochastic-multi-block-alternating-direction-method-of-multipliers.
- Zhang et al. (2018) Zhang, T.; Ye, S.; Zhang, K.; Tang, J.; Wen, W.; Fardad, M.; and Wang, Y. 2018. A Systematic DNN Weight Pruning Framework Using Alternating Direction Method of Multipliers. In Ferrari, V.; Hebert, M.; Sminchisescu, C.; and Weiss, Y., eds., Computer Vision – ECCV 2018, 191–207. Cham: Springer International Publishing. ISBN 978-3-030-01237-3.
- Zügner, Akbarnejad, and Günnemann (2018) Zügner, D.; Akbarnejad, A.; and Günnemann, S. 2018. Adversarial attacks on neural networks for graph data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2847–2856.
- Zügner and Günnemann (2019) Zügner, D.; and Günnemann, S. 2019. Adversarial attacks on graph neural networks via meta learning. arXiv preprint arXiv:1902.08412 .
Proof of Proposition 2
In this section, we provide the proof for Proposition 2. Similar to the proof of Proposition 1 and to existing works on projection (Leng et al. 2018; Zhang et al. 2018; Xu et al. 2019a), we utilize the Lagrangian function and the KKT conditions to derive a closed-form formula that projects a given input onto the feasible set $\mathcal{S}_i$.
Proposition 2. Given $u$, the projection of $u$ onto $\mathcal{S}_i$ is

$$\Pi_{\mathcal{S}_i}(u) = \begin{cases} P_{[0,1]}(u), & \text{if } \mathbf{1}^{T} P_{[0,1]}(u)\,\mathbf{1} \le \epsilon_i \\ P_{[0,1]}\big(u - \mu \mathbf{1}\mathbf{1}^{T}\big), & \text{otherwise} \end{cases} \qquad (23)$$

where $P_{[0,1]}(x) = \min(1, \max(0, x))$ is applied element-wise and, in the second case, $\mu > 0$ satisfies $\mathbf{1}^{T} P_{[0,1]}(u - \mu \mathbf{1}\mathbf{1}^{T})\,\mathbf{1} = \epsilon_i$.
Proof: We first transform the projection problem into an optimization problem

$$\min_{s} \; \frac{1}{2}\|s - u\|_F^2 \quad \text{s.t.} \quad \mathbf{1}^{T} s\,\mathbf{1} \le \epsilon_i, \;\; s \in [0,1]^{\frac{N}{M} \times N} \qquad (24)$$
Then, we can derive its Lagrangian function as

$$L(s, \mu) = \frac{1}{2}\|s - u\|_F^2 + \mu \left( \mathbf{1}^{T} s\,\mathbf{1} - \epsilon_i \right), \quad s \in [0,1]^{\frac{N}{M} \times N} \qquad (25)$$
where $\mu \ge 0$ is the dual variable. Using the KKT conditions, we have the stationarity condition that, minimizing over the box constraint,

$$s = P_{[0,1]}\big(u - \mu \mathbf{1}\mathbf{1}^{T}\big) \qquad (26)$$
Here, $P_{[0,1]}(x) = \min(1, \max(0, x))$, where $P_{[0,1]}$ is applied element-wise on $s$. Using the KKT conditions, we also have the complementary slackness

$$\mu \left( \mathbf{1}^{T} s\,\mathbf{1} - \epsilon_i \right) = 0 \qquad (27)$$
If $\mu = 0$ and $\mathbf{1}^{T} P_{[0,1]}(u)\,\mathbf{1} \le \epsilon_i$, we need to have $s = P_{[0,1]}(u)$. If $\mu > 0$, we need to have $\mathbf{1}^{T} s\,\mathbf{1} = \epsilon_i$ and $s = P_{[0,1]}(u - \mu \mathbf{1}\mathbf{1}^{T})$. In other words, we have $\mathbf{1}^{T} P_{[0,1]}(u - \mu \mathbf{1}\mathbf{1}^{T})\,\mathbf{1} = \epsilon_i$, where $\mu$ is a scalar variable that can be solved with the bisection method (Boyd et al. 2011).