Reinforcement Learning-based Black-Box Evasion Attacks to Link Prediction in Dynamic Graphs
Abstract
Link prediction in dynamic graphs (LPDG) is an important research problem that has diverse applications such as online recommendations, studies on disease contagion, organizational studies, etc. Various LPDG methods based on graph embedding and graph neural networks have been recently proposed and achieved state-of-the-art performance. In this paper, we study the vulnerability of LPDG methods and propose the first practical black-box evasion attack. Specifically, given a trained LPDG model, our attack aims to perturb the graph structure, without knowing the model parameters, model architecture, etc., such that the LPDG model makes as many wrong predicted links as possible. We design our attack based on a stochastic policy-based RL algorithm. Moreover, we evaluate our attack on three real-world graph datasets from different application domains. Experimental results show that our attack is both effective and efficient.
1 Introduction
Graphs are often used to describe complex systems such as social networks, biological systems, social and economic organizations, communication systems, power grids, etc. These real-world systems often evolve with time and can be modeled as dynamic graphs, where nodes/entities and links/edges are dynamically added or deleted. Links, which represent the interactions between nodes, are of great importance in the analysis of dynamic graphs, and one particularly important research problem is link prediction in dynamic graphs (LPDG). Specifically, given historical graph data of a real-world system, LPDG aims to predict its future graph structure so as to better understand the evolution process. Such information about future graphs is valuable in various applications such as online recommendations, studies on disease contagion, organizational studies, etc.
Various LPDG methods have been proposed in the past decade. Conventional methods include feature-based methods [15, 39, 7, 20], generative methods [25, 37, 44], and deep neural networks [19, 30, 5]. Recently, graph embedding (GE) methods [24, 28, 12] and graph neural networks (GNNs) [17, 31, 35] have achieved great success in many graph-related tasks (e.g., node classification, link prediction, and graph classification) on static graphs. Inspired by this success, many GE/GNN-based LPDG methods [18, 36, 22, 11, 41, 23, 10, 21] have been developed, which extend the original GE/GNN methods to dynamic graphs by introducing a recurrent mechanism that captures the temporal information of dynamic graphs. GE/GNN-based LPDG methods have achieved great success in various applications such as forecasting bitcoin user trading [23], traffic prediction [40, 36], predicting loan repayment [41], and forecasting message exchange in communication networks [23]. For instance, DyGCN [21] combines LSTM [14] and GCN [17] and achieves the state-of-the-art performance.
In this paper, in contrast to designing better LPDG methods, we take the first step to study the vulnerability of LPDG methods. In particular, we consider a practical black-box evasion attack to LPDG: given a trained LPDG model, we assume an attacker only knows the predictions obtained by querying the LPDG model, while not knowing the model parameters, model architecture, etc. Regarding the attacker's capability, we consider that the attacker can perturb a certain number of links/edges in the historical graphs, i.e., by adding new edges to or/and deleting existing edges from these graphs. The attacker's goal is then to learn to perturb the historical graphs such that the LPDG model makes as many wrong predicted links as possible in the future graph. One possible way to perform such an attack is to formulate it as an optimization problem. However, we emphasize that there are two key challenges. First, an optimization-based attack requires the attacker to know accurate gradient information related to the model parameters, which is difficult to obtain in the black-box setting. Second, an optimization-based attack on the graph structure is a binary optimization problem, which is NP-hard.
To address the above challenges, we propose a reinforcement learning (RL)-based attack to LPDG. Specifically, we model perturbing (optimal) edges in a graph as executing (optimal) actions, which is based on solving a policy function in RL. Note that solving the policy function only requires LPDG model predictions, and thus our attack does not need to know the model parameters. Moreover, our attack involves learning a nonlinear mapping from the high-dimensional graph structure space to a low-dimensional representation space; such a nonlinear mapping can be easily parameterized by a neural network and naturally fits the RL framework. We implement our RL-based attack against DyGCN by adopting the soft actor-critic algorithm [13], a stochastic policy-based RL method that has demonstrated a stronger exploration ability and a more stable training. We evaluate our attack on three real-world graph datasets from different application domains. Our attack is effective: for instance, on the Trapping dataset, it substantially reduces the link prediction performance (measured by the score) when only a small fraction of the total edges in a graph is perturbed. Our attack is also efficient: for instance, its running time is linear in the number of perturbed edges.
2 Background and Problem Definition

Link Prediction in Dynamic Graphs
Given a finite sequence of undirected graphs $\mathcal{G} = \{G_1, G_2, \dots\}$, where $G_t = (\mathcal{V}_t, \mathcal{E}_t)$ denotes a snapshot graph at time step $t$, with $\mathcal{V}_t$ the set of nodes and $\mathcal{E}_t$ the set of edges/links. $|\mathcal{V}_t|$ and $|\mathcal{E}_t|$ represent the number of nodes and number of edges in graph $G_t$, respectively. W.l.o.g., we assume all graphs in $\mathcal{G}$ have the same node set $\mathcal{V}$ (with $N = |\mathcal{V}|$ nodes) in this paper. We use $A_t \in \{0, 1\}^{N \times N}$ to represent the adjacency matrix for $G_t$, i.e., $A_{t,uv} = 1$ if there is a link between node $u$ and node $v$ in $G_t$, and $A_{t,uv} = 0$ otherwise. For description convenience, we will use the adjacency matrix $A_t$ and the graph $G_t$ interchangeably. For each node $v$, let $x_v \in \mathbb{R}^d$ be $v$'s feature vector and $X_t \in \mathbb{R}^{N \times d}$ be the feature matrix of all nodes in graph $G_t$.
Now, suppose we are given a set of training examples $\mathcal{D}_{tr} = \{(\mathcal{S}_i, A_{i,T+1})\}_{i=1}^{|\mathcal{D}_{tr}|}$, where $\mathcal{S}_i = \{A_{i,1}, A_{i,2}, \dots, A_{i,T}\}$ consists of a sequence of $T$ historical graphs and $A_{i,T+1}$ is the future graph to be predicted. Then, the goal of link prediction in dynamic graphs is to learn a function $f_\Theta$, parameterized by $\Theta$, from the historical graphs in $\{\mathcal{S}_i\}$ to predict links in the future graphs $\{A_{i,T+1}\}$. In this paper, we focus on DyGCN, as it achieves the state-of-the-art dynamic link prediction performance.
Dynamic Graph Convolutional Network (DyGCN)
DyGCN combines GCN [17] with LSTM [14] to perform link prediction in dynamic graphs. GCN was originally developed for a static graph; we first briefly review it. Suppose we are given a graph with adjacency matrix $A$ and node feature matrix $X$. GCN stacks multiple (e.g., $K$) graph convolutional layers, and each layer $k$ learns a hidden node feature matrix $H^{(k)}$. Formally, the node feature matrix at the $k$-th layer is updated as follows:
$$H^{(k)} = \sigma\big(\tilde{A}\, H^{(k-1)}\, W^{(k)}\big),$$
where $\tilde{A}$ is a normalized version of $A$, which is defined as $\tilde{A} = \tilde{D}^{-1/2}(A + I_N)\tilde{D}^{-1/2}$, where $\tilde{D}$ is the diagonal degree matrix with $\tilde{D}_{uu} = \sum_v (A + I_N)_{uv}$; $H^{(0)} = X$; $W^{(k)}$ is the parameter matrix connecting the $(k-1)$-th layer and the $k$-th layer; and $\sigma(\cdot)$ is a nonlinear activation function, e.g., ReLU$(x) = \max(0, x)$.
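To make the layer update concrete, here is a minimal PyTorch sketch (the names `normalize_adj` and `GCNLayer` are ours, not from the DyGCN implementation):

```python
import torch
import torch.nn as nn

def normalize_adj(A: torch.Tensor) -> torch.Tensor:
    """Compute the normalized adjacency D~^{-1/2} (A + I) D~^{-1/2}."""
    A_tilde = A + torch.eye(A.size(0))
    d = A_tilde.sum(dim=1)                 # degrees of A + I (always >= 1)
    D_inv_sqrt = torch.diag(d.pow(-0.5))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

class GCNLayer(nn.Module):
    """One graph convolutional layer: H^{(k)} = ReLU(A~ H^{(k-1)} W^{(k)})."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, A_norm: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.W(A_norm @ H))
```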
In order to perform LPDG, DyGCN introduces multiple GCNs at different time steps and adds an LSTM after each GCN. Specifically, at time step $t$, the GCN takes $(A_t, X_t)$ as an input, and the output of this GCN (i.e., $H_t^{(K)}$) is treated as the input of an LSTM. Moreover, the output of the LSTM at time step $t$ is denoted as $O_t$.
Now, suppose we have a set of training examples. For each training example $(\mathcal{S}, A_{T+1})$, each GCN takes a graph in $\mathcal{S}$ as an input, e.g., the GCN at time step $t$ takes $A_t$ as an input. To predict links in the graph $A_{T+1}$, DyGCN adds a fully connected layer after the final output of the LSTM at time step $T$ (i.e., $O_T$), and maps the output to a matrix $Z_{T+1}$ that has the same size as $A_{T+1}$. Formally,
$$Z_{T+1} = \mathrm{FC}_{\Theta_{FC}}(O_T),$$
where $\mathrm{FC}$ is a fully connected layer with parameters $\Theta_{FC}$. As the entries in $Z_{T+1}$ are usually not binary, DyGCN further performs the following transformation:
$$\hat{A}_{T+1,uv} = \begin{cases} 1, & \text{if } Z_{T+1,uv} \ge \tau, \\ 0, & \text{otherwise,} \end{cases}$$
where $\tau$ is a binarization threshold and $\hat{A}_{T+1}$ is the predicted graph.
We use the function $f_\Theta$ to encompass all functions involved in DyGCN, where $\Theta = \{\Theta_{GCN}, \Theta_{LSTM}, \Theta_{FC}\}$ contains all parameters. Thus, $\hat{A}_{T+1} = f_\Theta(\mathcal{S})$. To train DyGCN, we minimize the loss function $\mathcal{L}(\Theta)$, which is defined as the reconstruction loss between the predicted graph and the ground truth graph over all training examples. Specifically,
$$\mathcal{L}(\Theta) = \sum_{i=1}^{|\mathcal{D}_{tr}|} \big\| \big(Z_{i,T+1} - A_{i,T+1}\big) \odot P \big\|_F^2,$$
where the weight matrix $P$ is used to mitigate the issue caused by the sparse graph: we set $P_{uv} = \beta > 1$ if $A_{i,T+1,uv} = 1$, and $P_{uv} = 1$ otherwise. Adam [16] is used to train DyGCN, and the learnt model parameters are denoted as $\Theta^*$. Figure 1 illustrates DyGCN using a single training example.
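Putting the pieces together, the following is a simplified single-example sketch of the DyGCN pipeline and its weighted reconstruction loss; the hidden sizes and the value `beta=5.0` are our placeholders, not the paper's reported settings:

```python
class DyGCN(nn.Module):
    """Sketch: a shared GCN per time step -> LSTM over time -> FC to an N x N score matrix."""
    def __init__(self, n_nodes: int, feat_dim: int, hid_dim: int):
        super().__init__()
        self.gcn = GCNLayer(feat_dim, hid_dim)
        self.lstm = nn.LSTM(n_nodes * hid_dim, hid_dim, num_layers=2, batch_first=True)
        self.fc = nn.Linear(hid_dim, n_nodes * n_nodes)
        self.n = n_nodes

    def forward(self, A_seq, X):
        # A_seq: list of T adjacency matrices; X: the shared N x d feature matrix.
        outs = [self.gcn(normalize_adj(A), X).flatten() for A in A_seq]
        h, _ = self.lstm(torch.stack(outs).unsqueeze(0))  # shape (1, T, hid_dim)
        Z = self.fc(h[0, -1]).view(self.n, self.n)        # scores from the last LSTM output
        return Z

def recon_loss(Z, A_true, beta=5.0):
    """Weighted reconstruction loss; P up-weights existing edges to counter sparsity."""
    P = torch.where(A_true > 0, torch.full_like(A_true, beta), torch.ones_like(A_true))
    return (((Z - A_true) * P) ** 2).sum()
```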
Problem Definition
We consider black-box evasion attacks to LPDG and focus on attacking DyGCN in this paper; however, we note that our attack is applicable to any LPDG algorithm. Specifically, given a trained DyGCN model $f_{\Theta^*}$, we assume that the attacker only knows the predictions (i.e., predicted graphs) via querying $f_{\Theta^*}$, while not knowing the model parameters or model architecture. Moreover, given a testing example $(\mathcal{S}, A_{T+1})$, where $\mathcal{S} = \{A_1, \dots, A_T\}$ consists of a sequence of historical graphs, we assume the attacker can perturb a fraction $\eta$ of the graphs in $\mathcal{S}$, and in each such graph the attacker is allowed to perturb (e.g., add new edges or/and delete existing edges) a fraction $\delta$ of the total links/edges. We denote the perturbed testing example as $\mathcal{S}' = \{A'_1, \dots, A'_T\}$, where $A'_t = A_t \oplus E_t$ if $A_t$ is perturbed by a binary perturbation matrix $E_t \in \{0,1\}^{N \times N}$ ($\oplus$ denotes element-wise XOR), and $A'_t = A_t$ if not. For simplicity, we consider that the $\lceil \eta T \rceil$ most recent historical graphs, i.e., $A_{T-\lceil \eta T \rceil + 1}, \dots, A_T$, are perturbed. Then, the attacker's goal is to learn to perturb these graphs such that the link prediction performance of $f_{\Theta^*}$ evaluated on $\mathcal{S}'$ is maximally decreased. Formally, the attacker's objective function is defined as follows:
$$\max_{\{E_t\}} \; d\big(f_{\Theta^*}(\mathcal{S}'),\, A_{T+1}\big), \quad \text{s.t. } \|E_t\|_0 \le \delta\,|\mathcal{E}_t| \text{ for each perturbed } A_t, \quad (1)$$
where $d(\cdot, \cdot)$ is a function that counts the total number of unequal elements between $f_{\Theta^*}(\mathcal{S}')$ and $A_{T+1}$ at the same positions.
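The mismatch counter $d(\cdot,\cdot)$ of Equation 1 is straightforward; a one-line sketch:

```python
def d(A_pred: torch.Tensor, A_true: torch.Tensor) -> int:
    """Count the entries where the predicted and ground-truth adjacency matrices disagree."""
    return int((A_pred != A_true).sum().item())
```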
Directly solving Equation 1 to achieve the attacker’s goal is challenging, as it is a binary optimization problem that is NP-hard. In the next section, we will introduce our RL-based attack against DyGCN.
3 Our Proposed RL-based Attack

In this section, we propose to use RL to perform black-box evasion attacks to DyGCN. Our RL-based attack is based on soft actor-critic (SAC) [13], a stochastic policy-based RL method that achieves state-of-the-art performance. Compared with optimization-based attacks, RL-based attacks have two major advantages. First, perturbing (optimal) edges in a graph (i.e., adding new edges or deleting existing edges) can be naturally modeled by executing (optimal) actions based on solving a policy function in RL, whereas finding such perturbations directly with optimization methods is NP-hard. Second, our attack involves learning a nonlinear mapping from the high-dimensional graph structure space to a low-dimensional representation space; such a nonlinear mapping can be easily parameterized by a neural network and fits the RL framework.
The core idea of our attack is as follows. First, the attacker observes the current state associated with a graph. Based on the current state, the attacker executes an optimal action (i.e., adding a new edge and deleting an existing edge) by solving a policy function parameterized by a policy network, and obtains a reward, which relates to our objective function in Equation 1, by solving a soft $Q$-function parameterized by a $Q$-network. Then, the attacker observes the next state and saves the current state, action, reward, and next state as a trajectory in a replay buffer. Finally, the attacker samples trajectories from the buffer and adopts the SAC algorithm to train our attack. Figure 2 shows our proposed RL-based attack to DyGCN. The used notations and their descriptions are listed in Table 1.
The Attack Environment
An RL method consists of states, actions, rewards, and a terminal condition. We first define the attack environment from the attacker's perspective, which involves how to represent states, execute actions, define rewards, and determine the terminal condition. We take attacking a single graph $A_T$ in an example $\mathcal{S}$ as an instance to introduce our attack.
States.
We use $s_k$ to denote the state of the intermediate perturbed graph $A'^{(k)}$ at attack step $k$; thus $s_0$ is the state of the clean graph and $A'^{(0)} = A_T$. Moreover, we also use $s_k$ to indicate a low-dimensional embedding of the high-dimensional perturbed graph $A'^{(k)}$. $s_k$ aims to capture the latent structural information in the perturbed graph $A'^{(k)}$, and it is learnt via the following three steps. First, we observe all clean graphs in the example other than $A_T$ (i.e., from $A_1$ to $A_{T-1}$ in this case), and select a fraction $\kappa$ of nodes with the largest averaged degrees (called popular nodes, forming the set $\mathcal{P}$) as well as the same fraction of nodes with the smallest averaged degrees (called neglected nodes, forming the set $\mathcal{N}$) among those observed graphs. Our intuition is that nodes with the largest/smallest averaged degrees maintain the primary information in a graph. Second, we encode the two node sets $\mathcal{P}$ and $\mathcal{N}$ using two one-hot vectors $p, q \in \{0,1\}^N$. Specifically, $p_u = 1$ if $u \in \mathcal{P}$ and $p_u = 0$ if $u \notin \mathcal{P}$; similarly, $q_u = 1$ if $u \in \mathcal{N}$ and $q_u = 0$ if $u \notin \mathcal{N}$. Third, we define the representation for $A'^{(k)}$ as follows:
$$s_k = \Big[\, \mathrm{SEL}\big(A'^{(k)}\, p;\; \mathcal{P}\big),\;\; \mathrm{SEL}\big(A'^{(k)}\, q;\; \mathcal{N}\big) \Big] + \lambda\, s_{k-1}, \quad (2)$$
where $\mathrm{SEL}(\cdot\,; \mathcal{P})$ (resp. $\mathrm{SEL}(\cdot\,; \mathcal{N})$) is a function that first selects the entries of the $N$-dimensional input vector with indexes corresponding to $\mathcal{P}$ (or $\mathcal{N}$) to form a new $\kappa N$-dimensional vector, and then applies a nonlinear activation function (ReLU in our paper) to the new vector to obtain the $\kappa N$-dimensional embedding for the perturbed graph $A'^{(k)}$. Note that we introduce the smooth regularization term $\lambda\, s_{k-1}$ in order to stabilize the training and reduce overfitting; by default, we set $\lambda$ to be 0.2. With Equation 2, each entry value in $s_k$ indicates the importance of the corresponding node in $\mathcal{P}$ or $\mathcal{N}$. For implementation purposes, we also store the mapping between the indexes of $s_k$ and the real node indexes in the graph.
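The sketch below shows how Equation 2 can be computed under our reading of it; in particular, scoring a node by the number of its edges into $\mathcal{P}$ (or $\mathcal{N}$) is our interpretation of the importance values:

```python
import torch.nn.functional as F

def state_embedding(A_pert, pop_idx, neg_idx, s_prev=None, lam=0.2):
    """Embed the perturbed graph via its popular (P) and neglected (N) node sets.

    A_pert: N x N perturbed adjacency matrix (float tensor).
    pop_idx, neg_idx: index tensors for P and N; lam: smoothing weight (0.2 by default).
    """
    N = A_pert.size(0)
    p = torch.zeros(N); p[pop_idx] = 1.0      # indicator vector of P
    q = torch.zeros(N); q[neg_idx] = 1.0      # indicator vector of N
    # (A @ p)[u] counts u's edges into P, a per-node importance score.
    e_pop = F.relu((A_pert @ p)[pop_idx])     # select the entries indexed by P
    e_neg = F.relu((A_pert @ q)[neg_idx])     # select the entries indexed by N
    s = torch.cat([e_pop, e_neg])             # the 2*kappa*N-dimensional state
    if s_prev is not None:
        s = s + lam * s_prev                  # smooth regularization term
    return s
```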
Notation | Description
$\mathcal{D}_{tr}$ / $\mathcal{D}_{val}$ / $\mathcal{D}_{te}$ | Training/validation/testing set
$|\mathcal{D}_{tr}|$ / $|\mathcal{D}_{val}|$ / $|\mathcal{D}_{te}|$ | No. of training/validation/testing examples
$f_{\Theta^*}$ | Learnt DyGCN model
$A_t$ | Ground truth graph at time step $t$
$\hat{A}_t$ | Estimated graph at time step $t$
$A'_t$ | Perturbed graph at time step $t$
$\mathcal{P}$ / $\mathcal{N}$ | Popular/neglected node set
$T$ | No. of graphs in a sequence
$N$ | No. of nodes in each graph
$K$ | No. of layers in GCN
$\pi_\phi$ | Policy network
$Q_\theta$ | Soft $Q$-network
$\mathcal{B}$ | Replay buffer
$\alpha$ | Temperature hyperparameter
$\eta$ | Fraction of perturbed graphs in an example
$\kappa$ | Fraction of selected popular/neglected nodes
$\delta$ | Fraction of total edges perturbed in a graph
$M$ | No. of training episodes
Action.
We consider that the attacker only changes the link status among nodes in $\mathcal{P}$ and $\mathcal{N}$. Our motivation is based on the following assumption: when deleting edges among popular nodes or/and adding edges among neglected nodes, the graph structure can be largely changed, and thus the DyGCN model would make as many wrong predicted links as possible when evaluated on the perturbed graph. Note that this restriction also reduces the computational complexity, as the attacker does not need to search the entire graph structure. Moreover, to ensure that certain graph properties do not change significantly after the perturbation, we require that the total number of edges in the graph stays the same in each attack step. Particularly, we consider that an attacker executes an action by adding a new edge between one pair of nodes as well as deleting an existing edge between another pair of nodes. The purpose of the action is then to search for two pairs of nodes such that, when the action is executed, DyGCN's performance decreases as much as possible given the current state. Specifically, we denote $a_k = (a_k^{del}, a_k^{add})$ as the action at attack step $k$. Then, we design a policy to generate actions based on $s_k$. Formally, we define
$$a_k \sim \pi_\phi(\cdot \mid s_k), \quad (3)$$
where $\pi_\phi$ is a policy network parameterized by $\phi$. Equation 3 attempts to enforce that, when nodes in $\mathcal{P}$ (or $\mathcal{N}$) are relatively more important at state $s_k$, they are more likely to be selected in the action $a_k$. In this paper, we define our policy network as a composition of a 3-layer MLP network and a truncated normal distribution (TruncNorm) within a bounded interval. Specifically, the output of the MLP is the mean and log-deviation of a TruncNorm, and we then perform the action by sampling two pairs of nodes from the TruncNorm (with rounding). Moreover, we note that the sampling process can be naturally divided into two separate steps: we first perform an action $a_k^{del}$ to sample a pair of nodes from $\mathcal{P}$ (whose edge is deleted) and then perform another action $a_k^{add}$ to sample a pair of nodes from $\mathcal{N}$ (between which an edge is added). Formally,
$$a_k^{del} \sim \pi_\phi(\cdot \mid s_k;\, \mathcal{P}), \qquad a_k^{add} \sim \pi_\phi(\cdot \mid s_k;\, \mathcal{N}).$$
With $a_k$, we can map each of its values back to the real node index via our stored mapping; for simplicity, we still use $a_k$ to indicate the real node indexes. Moreover, when the attacker's action is to add an already existing edge or delete a nonexistent edge, we simply move to the next step. Details of training the policy network are presented in the next subsection.
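A minimal sketch of such a policy network; the MLP width, the $[0, 1]$ sampling interval, and the inverse-CDF truncation are our assumptions rather than the paper's exact choices:

```python
from torch.distributions import Normal

class PolicyNet(nn.Module):
    """3-layer MLP emitting the mean/log-std of a TruncNorm over four node indices."""
    def __init__(self, state_dim: int, hid: int = 128, n_indices: int = 4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(state_dim, hid), nn.ReLU(),
            nn.Linear(hid, hid), nn.ReLU(),
            nn.Linear(hid, 2 * n_indices),     # mean and log-std per index
        )

    def forward(self, s, low=0.0, high=1.0):
        mu, log_std = self.mlp(s).chunk(2, dim=-1)
        std = log_std.clamp(-5.0, 2.0).exp()
        # Sample from a normal truncated to [low, high] via the inverse CDF.
        base = Normal(torch.zeros_like(mu), torch.ones_like(mu))
        lo, hi = base.cdf((low - mu) / std), base.cdf((high - mu) / std)
        u = lo + (hi - lo) * torch.rand_like(mu)
        a = mu + std * base.icdf(u.clamp(1e-6, 1.0 - 1e-6))
        return a  # later rescaled and rounded to node indexes in P and N
```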
Rewards.
Given the state $s_k$ and action $a_k$, the attacker obtains a new state $s_{k+1}$. Naturally, if the link prediction performance of $f_{\Theta^*}$ at state $s_{k+1}$ is worse than that at state $s_k$, then a positive reward should be given, and otherwise a negative reward. Based on this motivation, we design the reward function as follows:
$$r_k = r(s_k, a_k) = \begin{cases} g(s_k) - g(s_{k+1}), & \text{if } g(s_{k+1}) < g(s_k), \\ -c, & \text{otherwise,} \end{cases} \quad (4)$$
where $r_k$ corresponds to our objective function in Equation 1 at the $k$-th attack step, $g(\cdot)$ is a function that measures the effectiveness of $f_{\Theta^*}$ on the perturbed graphs for link prediction at a given state, and $c > 0$ is a constant. If $g(s_{k+1}) < g(s_k)$, which means our attack generates more wrong predicted links at the $k$-th attack step, then the gap between $g(s_k)$ and $g(s_{k+1})$ is positive; furthermore, the larger this positive gap, the more effective our attack. Thus, we use this positive gap as a positive reward $r_k$. Otherwise, if our attack performs worse at attack step $k$, we set a large negative reward $-c$ to penalize this step.
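A direct sketch of Equation 4; the magnitude of the penalty `c` is our placeholder:

```python
def reward(g_prev: float, g_curr: float, c: float = 1.0) -> float:
    """Positive gap when the attack worsens link prediction; otherwise a fixed penalty."""
    if g_curr < g_prev:          # the step produced more wrong predicted links
        return g_prev - g_curr   # a larger gap means a more effective step
    return -c                    # penalize an ineffective step
```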
To obtain an optimal expected reward and guide the policy network $\pi_\phi$, we also need to solve a $Q$-function, which is parameterized by a soft $Q$-network $Q_\theta$. In our paper, the soft $Q$-network uses the same MLP architecture as the policy network. Specifically, the $Q$-network takes a (state, action) pair as an input and outputs a score that indicates the effectiveness of the action. Its associated Bellman equation is defined as:
$$Q_\theta(s_k, a_k) = r_k + \gamma\, \mathbb{E}_{s_{k+1}}\big[V(s_{k+1})\big], \quad (5)$$
where
$$V(s_k) = \mathbb{E}_{a_k \sim \pi_\phi}\big[Q_\theta(s_k, a_k) - \alpha \log \pi_\phi(a_k \mid s_k)\big] \quad (6)$$
is the state value function that denotes the expected reward obtained by the attacker at state $s_k$. The discount factor $\gamma \in [0, 1)$ limits the sum of the expected rewards. Details of training the soft $Q$-network are shown in the next subsection.
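A sketch of the soft $Q$-network, mirroring the policy's 3-layer MLP (the hidden width is our placeholder):

```python
class QNet(nn.Module):
    """Soft Q-network: a 3-layer MLP scoring (state, action) pairs."""
    def __init__(self, state_dim: int, action_dim: int, hid: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(state_dim + action_dim, hid), nn.ReLU(),
            nn.Linear(hid, hid), nn.ReLU(),
            nn.Linear(hid, 1),
        )

    def forward(self, s: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([s, a], dim=-1)).squeeze(-1)
```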
Terminal.
As required of the attacker, the number of modified edges for each perturbed graph in the sequence is limited to a fraction $\delta$ of the total edges. During the training process, the number of steps for each training episode is therefore finite and fixed. In each attack step, the attacker agent adds a new edge as well as deletes an existing edge; therefore, there are at most $\lfloor \delta\, |\mathcal{E}_t| / 2 \rfloor$ steps per episode.
Maximum Entropy Reinforcement Learning
We use the soft actor-critic (SAC) algorithm [13], a maximum entropy reinforcement learning (MERL) method, to train our attack model. This algorithm maximizes the entropy of the policy as well as the expected reward. Compared with other RL algorithms, SAC has a stronger exploration ability and a more stable training. We first define the loss functions of the policy network $\pi_\phi$ and the soft $Q$-network $Q_\theta$, and then show how to train them.
Policy Network.
Its loss function is defined as
$$J_\pi(\phi) = \mathbb{E}_{s_k \sim \mathcal{B},\, a_k \sim \pi_\phi}\big[\alpha \log \pi_\phi(a_k \mid s_k) - Q_\theta(s_k, a_k)\big], \quad (7)$$
where $\mathcal{B}$ is a replay buffer that collects a set of previous states, actions, and rewards, and $\alpha$ is a temperature parameter that controls the stochasticity of the optimal policy. In practice, $\alpha$ is an important parameter that is learnt by minimizing the following loss:
$$J(\alpha) = \mathbb{E}_{a_k \sim \pi_\phi}\big[-\alpha \log \pi_\phi(a_k \mid s_k) - \alpha\, \mathcal{H}_0\big], \quad (8)$$
where $\mathcal{H}_0$ is a predefined entropy threshold. Minimizing the loss function in Equation 7 makes the policy select multiple attractive actions whose probabilities are similar, instead of focusing on a single deterministic action.
Furthermore, to obtain a lower-variance gradient estimator involving the $Q$-function, we apply the reparameterization trick and reparameterize the policy using a neural network transformation $a_k = f_\phi(\epsilon_k; s_k)$, where $\epsilon_k$ is an input noise vector sampled from, e.g., a standard Gaussian distribution. Then, we can rewrite the loss in Equation 7 as
$$J_\pi(\phi) = \mathbb{E}_{s_k \sim \mathcal{B},\, \epsilon_k}\big[\alpha \log \pi_\phi\big(f_\phi(\epsilon_k; s_k) \mid s_k\big) - Q_\theta\big(s_k, f_\phi(\epsilon_k; s_k)\big)\big]. \quad (9)$$
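The two losses can be sketched as follows; `rsample_with_log_prob` is an assumed helper on the policy that returns a reparameterized action together with its log-probability:

```python
def policy_and_alpha_losses(policy, qnet, s, log_alpha, target_entropy):
    """Sketch of Equations 8 and 9: reparameterized policy loss and temperature loss."""
    a, log_pi = policy.rsample_with_log_prob(s)   # assumed helper (Eq. 9's f_phi)
    alpha = log_alpha.exp()
    policy_loss = (alpha.detach() * log_pi - qnet(s, a)).mean()         # Eq. (9)
    alpha_loss = (-alpha * (log_pi.detach() + target_entropy)).mean()   # Eq. (8)
    return policy_loss, alpha_loss
```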
Soft $Q$-Network.
Its loss function is defined as:
$$J_Q(\theta) = \mathbb{E}_{(s_k, a_k) \sim \mathcal{B}}\Big[\tfrac{1}{2}\Big(Q_\theta(s_k, a_k) - \big(r_k + \gamma\, \mathbb{E}_{s_{k+1}}[V(s_{k+1})]\big)\Big)^2\Big], \quad (10)$$
where $(s_k, a_k)$ and $r_k$ are sampled from the replay buffer $\mathcal{B}$, and $a_{k+1}$ (used to estimate $V(s_{k+1})$ via Equation 6) is sampled from the policy $\pi_\phi$ during the training process.
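A matching sketch of Equation 10, using a target $Q$-network for the bootstrapped value as is standard in SAC (the target network is our addition for stability):

```python
def q_loss(qnet, target_qnet, policy, batch, log_alpha, gamma=0.99):
    """Sketch of Equation 10: soft Bellman regression on replay-buffer samples."""
    s, a, r, s_next = batch                      # sampled from the replay buffer B
    with torch.no_grad():
        a_next, log_pi_next = policy.rsample_with_log_prob(s_next)
        v_next = target_qnet(s_next, a_next) - log_alpha.exp() * log_pi_next  # Eq. (6)
        target = r + gamma * v_next              # soft Bellman backup, Eq. (5)
    return 0.5 * ((qnet(s, a) - target) ** 2).mean()
```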
Attack model training.
After defining the loss functions of the parameterized policy network $\pi_\phi$, the parameterized soft $Q$-network $Q_\theta$, and the temperature parameter $\alpha$, we can now train our RL-based attack model. Suppose we are given a trained DyGCN model $f_{\Theta^*}$, a validation set, and a testing set. We use the validation set and the Adam algorithm to train our attack against the DyGCN model. Specifically, we first update the parameters $\phi$ of the policy network via Equation 9, which guides the policy improvement; then, we update the temperature parameter $\alpha$ via Equation 8 based on the updated $\phi$; third, we update the parameters $\theta$ of the soft $Q$-network via Equation 10, which evaluates the effectiveness of the policy. We iteratively and alternately update these parameters until reaching the terminal condition. Finally, we evaluate our trained attack model on the testing set. The training and evaluation processes of our attack are detailed in Algorithm 1 and Algorithm 2, respectively.
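The overall loop can be sketched as follows; `env`, `buffer`, and `policy.act` are assumed helpers (the environment perturbs one edge pair and queries the black-box DyGCN model), and the learning rates are placeholders:

```python
def train_attack(env, policy, qnet, target_qnet, buffer, episodes, target_entropy):
    """Alternate the policy (Eq. 9), temperature (Eq. 8), and Q-network (Eq. 10) updates."""
    log_alpha = torch.zeros(1, requires_grad=True)
    opt_pi = torch.optim.Adam(policy.parameters(), lr=3e-4)   # lr values are placeholders
    opt_q = torch.optim.Adam(qnet.parameters(), lr=3e-4)
    opt_al = torch.optim.Adam([log_alpha], lr=3e-4)
    for _ in range(episodes):
        s, done = env.reset(), False              # start from the clean graph state
        while not done:
            a = policy.act(s)                     # assumed sampling helper
            s_next, r, done = env.step(a)         # perturb one edge pair, query DyGCN
            buffer.add(s, a, r, s_next)
            s = s_next
            batch = buffer.sample()
            # 1) policy update, 2) temperature update, 3) Q-network update.
            pi_loss, al_loss = policy_and_alpha_losses(
                policy, qnet, batch[0], log_alpha, target_entropy)
            opt_pi.zero_grad(); pi_loss.backward(); opt_pi.step()
            opt_al.zero_grad(); al_loss.backward(); opt_al.step()
            opt_q.zero_grad()
            q_loss(qnet, target_qnet, policy, batch, log_alpha).backward()
            opt_q.step()
            # (soft updates of the target Q-network omitted for brevity)
```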
4 Evaluation
Dataset | Nodes | Edges | Graphs |
Haggle | 274 | 28.2k | 30 |
Hypertext | 113 | 20.8k | 30 |
Trapping | 1.5k | 4.6k | 30 |
Experimental Setup
Dataset description.
We evaluate our RL-based black-box evasion attack to DyGCN, a state-of-the-art LPDG method, on three graph datasets, i.e., Haggle, Hypertext, and Trapping, from three different domains. We split each dataset into 30 graphs in chronological order.
- Haggle. This is a social network available at KONECT (http://konect.uni-koblenz.de/networks/contact), representing connections between users measured by wireless devices. A node represents a user, and a link between two users indicates a contact between them. The dataset was collected over 5 days.
- Hypertext. This is a face-to-face contact network of 100 attendees of the ACM Hypertext 2009 conference held in Turin, Italy over three days from June 29 to July 1, 2009. The network is provided by SocioPatterns (http://www.sociopatterns.org/datasets). In the network, a node represents a conference attendee, and an edge represents a face-to-face contact between two attendees that was active for at least 20 seconds.
- Trapping. This is a real-world animal interaction network from Network Data Repository (http://networkrepository.com/mammalia-voles-rob-trapping.php). The animal interaction data come from published studies of wild, captive, and domesticated animals; each node represents an animal (a vole). An edge was added to the network whenever two voles were caught in at least one common trap over the primary trapping sessions. Each graph in the dataset has a 12-hour time span.
Configuration.
Each dataset is divided into a training set, a validation set, and a testing set, where each training/validation/testing example consists of a sequence of $T+1$ graphs. As each dataset has 30 graphs, the examples are constructed from these graphs. We use the training examples to train DyGCN, the validation examples to train our attack model, and the remaining testing examples to evaluate our attack. All our experiments are performed on a Linux machine with 128GB memory and 10 cores.
In DyGCN, each GCN has $K$ graph convolutional layers with 64 units per layer, and the LSTM has 2 layers. As our graph datasets do not have node features, we use the identity matrix as the node feature matrix, similar to [35]. In our attack, there are several key parameters that could affect the attack performance: the fraction $\eta$ of total graphs to be perturbed in an example, the fraction $\kappa$ of total nodes to be selected as the popular/neglected nodes, and the fraction $\delta$ of total edges to be perturbed in each graph. We also explore the impact of these parameters; when studying one specific parameter, we fix the other two at their default values.
Compared attacks.
There are no existing works on attacking dynamic link prediction in the black-box setting. Here, we propose two baseline attacks and compare them with our attack under the same setting (e.g., the same number of graphs to be attacked, the same attack budget).
- Random-whole. In this attack, in each attack step, the attacker randomly deletes an existing edge from, and randomly adds a new edge to, the whole graph.
- Random-partial. In this attack, in each attack step, the attacker randomly selects a pair of nodes from $\mathcal{P}$ and deletes the edge between them if it exists, and selects another pair of nodes from $\mathcal{N}$ and adds an edge if there is none between them (a minimal sketch of this baseline follows the list).
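Under our understanding of this baseline, a minimal Python sketch (the function name is ours):

```python
import random

def random_partial_step(A, pop_nodes, neg_nodes):
    """One Random-partial attack step: a random delete in P and a random add in N."""
    u, v = random.sample(pop_nodes, 2)   # candidate edge to delete among popular nodes
    if A[u, v] == 1:
        A[u, v] = A[v, u] = 0
    x, y = random.sample(neg_nodes, 2)   # candidate edge to add among neglected nodes
    if A[x, y] == 0:
        A[x, y] = A[y, x] = 1
    return A
```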
Evaluation metric.
Similar to existing works [5, 23, 21], we use the score metric to evaluate the performance of a link prediction algorithm; thus, the function $g(\cdot)$ in Equation 4 is this score. The higher/lower the score, the better/worse the link prediction method performs. All results are reported as the averaged score over the testing examples.
Dataset | Haggle | Hypertext | Trapping
No attack (NO) | | |
Random-whole (R-W) | | |
Random-partial (R-P) | | |
Our attack (Ours) | | |
Attack Results on DyGCN
Effectiveness of our attack.
Table 3 shows DyGCN’s performance on the clean graphs, i.e., no attack, and the perturbed graphs, i.e., with our attack and two random attacks. We observe that our attack is effective. For instance, compared with no attack, our attack has an , , and score degradation on the three datasets, respectively. Moreover, our attack is much more effective than random attacks. For instance, on Trapping, our attack has a and performance gain over Random-whole attack and Random-partial attack, respectively.
Impact of $\delta$.
Figure 3 shows DyGCN's score under all compared attacks vs. the fraction $\delta$ of total edges perturbed. We observe that as $\delta$ increases, all attacks achieve better attack performance; this is because a larger $\delta$ indicates that an attacker is capable of perturbing a larger number of links/edges. Moreover, our attack is much more effective than the random attacks at any given $\delta$: even when only a small fraction of edges can be perturbed, our attack reduces the score substantially more than the random attacks on all three datasets.
Impact of $\kappa$.
Figure 4 shows DyGCN's score under all compared attacks vs. the fraction $\kappa$ of total nodes selected as the popular/neglected node set. Similarly, we observe that as $\kappa$ increases, all attacks' performance increases. However, our attack requires a much smaller $\kappa$ than the random attacks in order to achieve a promising attack performance.
Impact of $\eta$.
Figure 5 shows DyGCN's score under all compared attacks vs. the fraction $\eta$ of total graphs perturbed in each example. Similarly, we observe that as $\eta$ increases, the score under all attacks decreases. However, our attack obtains a good attack performance even when perturbing a relatively small number of graphs.

Efficiency of our attack.
We use running time as the metric to evaluate the efficiency of training our attack. Figure 6 shows the running time vs. $\delta$, $\kappa$, and $\eta$ on Haggle; when varying one fraction, the other two are fixed at their default values. Note that we have similar observations on the other two datasets, and thus we only show results on Haggle for simplicity. We observe that the running time is linear in each of the three parameters; thus, we conclude that our attack is also efficient.
5 Conclusion
We propose the first black-box evasion attack against graph neural network-based link prediction in dynamic graphs (LPDG). Our attack aims to perturb the graph structure so as to fool the LPDG model, while knowing neither the model parameters nor the model architecture. We formulate our attack as an optimization problem, which is NP-hard, and then develop a reinforcement learning-based method to realize it. Experimental results on three real-world graph datasets show that our attack is both effective and efficient.
6 Related Work
We review existing works that perform attacks against graph data. These methods can be summarized as attacking graph-based clustering [6], graph-based collective classification [29, 32], graph embedding [9, 26, 3, 1, 2], and graph neural networks [42, 8, 43, 32, 33, 34, 27, 38]. For instance, [6] designed a practical attack against spectral clustering, a type of graph-based clustering methods. [32] proposed an attack against the collective classification method, called linearized belief propagation, by manipulating the graph structure. Zügner et al. [42] developed a Nettack method to attack graph convolutional network (GCN) [17] by perturbing both the node features and graph structure. However, we note that all these attacks focus on attacking a single static graph.
To the best of our knowledge, only one work [4] studies adversarial attacks to link prediction in dynamic graphs. However, this attack is white-box (i.e., the attacker knows all information about the trained model), and is specially designed for a specific LPDG method called deep dynamic graph embedding. In contrast, our attack is black-box and is applicable to arbitrary LPDG methods. In our work, for simplicity, we focus on attacking the state-of-the-art DyGCN method [21].
References
- [1] Aleksandar Bojchevski and Stephan Günnemann. Adversarial attacks on node embeddings via graph poisoning. In ICML, 2019.
- [2] Heng Chang, Yu Rong, Tingyang Xu, Wenbing Huang, Honglei Zhang, Peng Cui, Wenwu Zhu, and Junzhou Huang. A restricted black-box adversarial framework towards attacking graph embedding models. In AAAI, 2020.
- [3] Jinyin Chen, Yangyang Wu, Xuanheng Xu, Yixian Chen, Haibin Zheng, and Qi Xuan. Fast gradient attack on network embedding. arXiv, 2018.
- [4] Jinyin Chen, Jian Zhang, Zhi Chen, Min Du, Feifei Li, and Qi Xuan. Time-aware gradient attack on dynamic network link prediction. arXiv, 2019.
- [5] Jinyin Chen, Jian Zhang, Xuanheng Xu, Chenbo Fu, Dan Zhang, Qingpeng Zhang, and Qi Xuan. E-lstm-d: A deep learning framework for dynamic network link prediction. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2019.
- [6] Yizheng Chen, Yacin Nadji, Athanasios Kountouras, Fabian Monrose, Roberto Perdisci, Manos Antonakakis, and Nikolaos Vasiloglou. Practical attacks against graph-based clustering. In CCS, 2017.
- [7] Carter Chiu and Justin Zhan. Deep learning for link prediction in dynamic networks using weak estimators. IEEE Access, 2018.
- [8] Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, and Le Song. Adversarial attack on graph structured data. In ICML, 2018.
- [9] Quanyu Dai, Qiang Li, Jian Tang, and Dan Wang. Adversarial network embedding. In AAAI, 2018.
- [10] Palash Goyal, Sujit Rokka Chhetri, and Arquimedes Canedo. dyngraph2vec: Capturing network dynamics using dynamic graph representation learning. Knowledge-Based Systems, 2020.
- [11] Palash Goyal, Nitin Kamra, Xinran He, and Yan Liu. Dyngem: Deep embedding method for dynamic graphs. arXiv, 2018.
- [12] Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In KDD, 2016.
- [13] Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, et al. Soft actor-critic algorithms and applications. arXiv, 2018.
- [14] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 1997.
- [15] Nahla Mohamed Ahmed Ibrahim and Ling Chen. Link prediction in dynamic social networks by integrating different types of information. Applied Intelligence, 2015.
- [16] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
- [17] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.
- [18] Jundong Li, Harsh Dani, Xia Hu, Jiliang Tang, Yi Chang, and Huan Liu. Attributed network embedding for learning in a dynamic environment. In CIKM, 2017.
- [19] Xiaoyi Li, Nan Du, Hui Li, Kang Li, Jing Gao, and Aidong Zhang. A deep learning approach to link prediction in dynamic networks. In SDM, 2014.
- [20] Xiaoke Ma, Penggang Sun, and Yu Wang. Graph regularized nonnegative matrix factorization for temporal link prediction in dynamic networks. Physica A: Statistical mechanics and its applications, 2018.
- [21] Franco Manessi, Alessandro Rozza, and Mario Manzo. Dynamic graph convolutional networks. Pattern Recognition, 2020.
- [22] Giang Hoang Nguyen, John Boaz Lee, Ryan A Rossi, Nesreen K Ahmed, Eunyee Koh, and Sungchul Kim. Continuous-time dynamic network embeddings. In WWW, 2018.
- [23] Aldo Pareja, Giacomo Domeniconi, Jie Chen, Tengfei Ma, Toyotaro Suzumura, Hiroki Kanezashi, Tim Kaler, Tao B Schardl, and Charles E Leiserson. Evolvegcn: Evolving graph convolutional networks for dynamic graphs. In AAAI, 2020.
- [24] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In KDD, 2014.
- [25] Purnamrita Sarkar, Deepayan Chakrabarti, and Michael Jordan. Nonparametric link prediction in dynamic networks. In ICML, 2012.
- [26] Mingjie Sun, Jian Tang, Huichen Li, Bo Li, Chaowei Xiao, Yao Chen, and Dawn Song. Data poisoning attack against unsupervised node embedding methods. arXiv, 2018.
- [27] Yiwei Sun, Suhang Wang, Xianfeng Tang, Tsung-Yu Hsieh, and Vasant Honavar. Adversarial attacks on graph neural networks via node injections: A hierarchical reinforcement learning approach. In The Web Conference, 2020.
- [28] Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. Line: Large-scale information network embedding. In WWW, 2015.
- [29] MohamadAli Torkamani and Daniel Lowd. Convex adversarial collective classification. In ICML, 2013.
- [30] Rakshit Trivedi, Hanjun Dai, Yichen Wang, and Le Song. Know-evolve: Deep temporal reasoning for dynamic knowledge graphs. In ICML, 2017.
- [31] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. In ICLR, 2018.
- [32] Binghui Wang and Neil Zhenqiang Gong. Attacking graph-based classification via manipulating the graph structure. In CCS, 2019.
- [33] Huijun Wu, Chen Wang, Yuriy Tyshetskiy, Andrew Docherty, Kai Lu, and Liming Zhu. Adversarial examples on graph data: Deep insights into attack and defense. In IJCAI, 2019.
- [34] Kaidi Xu, Hongge Chen, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Mingyi Hong, and Xue Lin. Topology attack and defense for graph neural networks: An optimization perspective. In IJCAI, 2019.
- [35] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In ICLR, 2019.
- [36] Bing Yu, Haoteng Yin, and Zhanxing Zhu. Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting. In IJCAI, 2018.
- [37] Wenchao Yu, Wei Cheng, Charu C Aggarwal, Kai Zhang, Haifeng Chen, and Wei Wang. Netwalk: A flexible deep embedding approach for anomaly detection in dynamic networks. In KDD, 2018.
- [38] Zaixi Zhang, Jinyuan Jia, Binghui Wang, and Neil Zhenqiang Gong. Backdoor attacks to graph neural networks. arXiv, 2020.
- [39] Zhongbao Zhang, Jian Wen, Li Sun, Qiaoyu Deng, Sen Su, and Pengyan Yao. Efficient incremental dynamic link prediction algorithms in social network. Knowledge-Based Systems, 2017.
- [40] Ling Zhao, Yujiao Song, Chao Zhang, Yu Liu, Pu Wang, Tao Lin, Min Deng, and Haifeng Li. T-gcn: A temporal graph convolutional network for traffic prediction. IEEE Transactions on Intelligent Transportation Systems, 2019.
- [41] Le-kui Zhou, Yang Yang, Xiang Ren, Fei Wu, and Yueting Zhuang. Dynamic network embedding by modeling triadic closure process. In AAAI, 2018.
- [42] Daniel Zügner, Amir Akbarnejad, and Stephan Günnemann. Adversarial attacks on neural networks for graph data. In KDD, 2018.
- [43] Daniel Zügner and Stephan Günnemann. Adversarial attacks on graph neural networks via meta learning. In ICLR, 2019.
- [44] Yuan Zuo, Guannan Liu, Hao Lin, Jia Guo, Xiaoqian Hu, and Junjie Wu. Embedding temporal network via neighborhood formation. In KDD, 2018.