The Devil is in the Conflict: Disentangled Information Graph Neural Networks For Fraud Detection
Abstract
Graph-based fraud detection has received considerable attention in recent years. Owing to the great success of Graph Neural Networks (GNNs), many approaches that adopt GNNs for fraud detection have been gaining momentum. However, most existing methods are based on the strong inductive bias of homophily, which assumes that neighboring nodes tend to have the same labels or similar features. In real scenarios, fraudsters often engage in camouflage behaviors to evade detection systems, so the homophily assumption no longer holds; this is known as the inconsistency problem. In this paper, we argue that the resulting performance degradation is mainly attributed to the inconsistency between topology and attributes. To address this problem, we propose to disentangle the fraud network into two views, corresponding to topology and attributes respectively. We then propose a simple and effective method that uses an attention mechanism to adaptively fuse the two views and capture data-specific preferences. In addition, we further improve it by introducing mutual information constraints on topology and attributes. To this end, we propose the Disentangled Information Graph Neural Network (DIGNN) model, which utilizes variational bounds to find an approximate solution to our proposed optimization objective. Extensive experiments demonstrate that our model significantly outperforms state-of-the-art baselines on real-world fraud detection datasets.
Index Terms:
Graph Neural Networks, Fraud Detection, Information Theory
I Introduction
Graph-based fraud detection is a crucial task with tremendous impact in various applications, such as opinion fraud detection [1], fake news detection [2, 3], review spam [4], and financial fraud detection [5, 6]. In these scenarios, since graphs can effectively model the correlations among entities, interactive activities on a platform can be characterized as a graph, where users or objects are treated as nodes, and transactions or relations between them are treated as edges.
Numerous techniques have been proposed to detect fraudsters. Recently, driven by the powerful representation capability of graph structure and advances in Graph Neural Networks (GNNs) [7, 8, 9], many approaches harness GNNs for fraud detection on either homogeneous or heterogeneous graphs. The main idea is to leverage GNNs to learn expressive node representations with the goal of distinguishing abnormal nodes from normal ones in the latent embedding space. Message-Passing GNNs (MP-GNNs), which aggregate neighbor node features and achieve local smoothing by stacking layers, have been the mainstream in recent years. Although MP-GNNs obtain satisfactory performance in most cases, the strong inductive bias of homophily limits their representative ability on heterophilic graphs. Some works [10] point out that many GNNs can be seen as low-pass filters, so their generalization ability on high-frequency graph signals is poor. In the fraud detection task, fraudsters often imitate normal users to camouflage themselves, and hence interact with normal users more frequently; for instance, normal users account for 81% of the fraudsters' neighbor nodes in the YelpChi dataset (Figure 1). In other words, fraudsters' features are inconsistent with their behaviors (interactions, i.e., topological structure). Thus, recalling that MP-GNNs do not work well on heterophilic graphs, they fail to tackle the inconsistency phenomenon in graph-based fraud detection, and fraudsters can fool the detection system.
Recently, a few works have noticed this problem; they either employ aggregation weights to reduce the adverse impact of dissimilar neighbors, or set similarity-aware thresholds to select and re-link similar nodes. For instance, GraphConsis [11] computes a consistency score between connected node pairs as the sampling probability. PC-GNN [6] combines label information and latent embeddings in a distance function to measure similarity. Although such methods can alleviate the inconsistency problem to some extent, they discard a lot of information while filtering out dissimilar neighbors, which may lead to sub-optimal performance.
In this paper, we analyze the inconsistency problem in the graph-based fraud detection task, which has been obstructing a full understanding of this field. First, we clarify that the inconsistency problem is the bottleneck of graph fraud detection. According to [12], the underlying optimization process of GNNs is equivalent to minimizing topology and attribute constraints, and Yang et al. [13] attribute the degradation of performance to the compromise between topology and attribute. Because the camouflage behaviors (topology) of fraudsters are inconsistent with their essence (attributes), this conflict in fraud networks can impair the discriminative ability of GNNs. Second, the dominant sources of information differ across datasets, and most existing methods are not satisfactory in fusing topological structures and node attributes [14]. For example, fraudsters may possess distinguishable attributes on some platforms, yet their deceptive behaviors can still confuse the detection model. Therefore, we are motivated to explore a novel method that minimizes the conflict between topology and attribute while effectively extracting the most task-relevant information from each dataset.
We bring the concept of multi-view learning to the graph-based fraud detection task and propose a simple and effective model, Disentangled Information Graph Neural Networks (DIGNN). Technically, we first disentangle fraud networks into topology and attribute views. Next, we employ an attention mechanism to fuse the two view embeddings adaptively and extract task-relevant information. Surprisingly, we observe that this simple method surpasses all state-of-the-art baselines, which empirically supports our claim that the conflict between topology and attribute causes the inconsistency problem. Besides, to further decrease the entanglement between topology and attribute and improve performance, we design a new optimization objective based on information theory, which resorts to variational bounds to minimize the mutual information between the two views and maximize the mutual information between the view embeddings and their original inputs.
We conduct extensive experiments to compare our proposed model with existing graph-based fraud detection models, and the results demonstrate its effectiveness. The contributions of this paper can be summarized as follows:
• We analyze the cause of the inconsistency problem and point out that it is mainly attributed to the conflict between topology and attribute. In light of this, we propose a simple yet effective model, DIGNN, which first disentangles the fraud network into two views and fuses them with an attention mechanism.
• We propose a novel optimization objective based on mutual information theory and theoretically derive its variational bounds for tractable calculation.
• We verify the effectiveness of our model on real-world fraud detection datasets. Our model significantly improves the performance in terms of all commonly adopted metrics.

II Related Work
II-A Graph-based Fraud Detection
The core idea of graph-based fraud detection is to take advantage of GNNs to obtain discriminative node embeddings and find the malicious nodes in the latent space. Examples include [11, 15, 16] for review fraud detection, [2, 3] for fake news detection, and [5, 6, 17, 18, 19] for financial fraud detection. Ma et al. [20] provide a comprehensive survey of graph-based fraud detection.
Most existing GNN methods hold the homophily assumption that neighboring nodes share the same labels or similar features. However, fraudsters try to conceal themselves, so their features are inconsistent with their camouflage behaviors. Some graph-based fraud detection works have noticed this problem. GraphConsis [11] is the first to formulate and tackle the inconsistency problem, introducing three kinds of inconsistency phenomena in fraud networks. CARE-GNN [15] devises a label-aware similarity measure to find informative neighboring nodes and utilizes reinforcement learning to select similar neighbors. FRAUDRE [21] aggregates the differences between adjacent node pairs. PC-GNN [6] devises a choose operation to select beneficial neighbors based on feature similarity. IHGAT [22] encodes both sequence-like intentions and the relationships among transactions to leverage cross-interaction information.
Our model differs from all of the above works: instead of measuring similarity between adjacent node pairs, we disentangle topology and attribute and treat graph learning as a multi-view learning problem.
II-B Multi-view on GNNs
Topology and attributes are two essential components of graphs. However, existing state-of-the-art GNN models are unable to fuse topological structure and node attributes effectively. AM-GCN [14] uses $k$-nearest neighbors to construct a feature graph and combines it with the topological structure view and common embeddings. SCRL [23] designs a self-supervised approach to maximize the agreement of the embeddings in the topology graph and the feature graph. A recent work [13] claims that the interference between topology and attribute is mainly ascribed to compromises between them. LINKX [24] processes node attributes and topological structure in an orthogonal manner. In this paper, we follow this idea and extend it by proposing a novel architecture and optimization objective.
Information-theoretic methods, which take into consideration the mutual dependency of different views, have been gaining momentum in recent years. MIB [25] extends the information bottleneck principle to the unsupervised multi-view setting to discard superfluous information. DVIB [26] and CMIB [27] leverage mutual information constraints to better preserve the shared and private information of multi-view learning. To cope with the intractable computation of mutual information, these methods adopt variational inference to optimize lower and upper bounds of their objectives. In comparison, our model proposes a novel optimization objective to reconcile consistency and complementarity between the topology and attribute views. Equipped with variational inference, we also approximate the mutual information with derived bounds.
III Preliminaries
III-A Problem Statement
Definition 1. Graph-based Fraud Detection. Given a fraud network $\mathcal{G}=(\mathcal{V}, \mathbf{A}, \mathbf{X})$, where $\mathcal{V}=\{v_1,\dots,v_N\}$ is the set of nodes, $\mathbf{A}\in\{0,1\}^{N\times N}$ is the adjacency matrix ($\mathbf{A}_{ij}=1$ if $v_i$ and $v_j$ are connected, otherwise $\mathbf{A}_{ij}=0$), and $\mathbf{X}\in\mathbb{R}^{N\times d}$ denotes the node feature matrix. Each node $v_i$ is associated with a $d$-dimensional feature vector $\mathbf{x}_i$ and a label $y_i\in\{0,1\}$, where 0 denotes that the node is a normal user (negative) and 1 indicates that it is a fraudster (positive). The core idea of graph-based fraud detection is to learn discriminative node embeddings to detect the anomalous samples in the latent space.
Definition 2. Graph Neural Networks. Most GNNs follow the message passing mechanism, which uses feature transformation and aggregation operations to capture structural and attribute information. One of the most popular and representative GNN models is the graph convolutional network (GCN). The forward inference at the $l$-th layer of a GCN is formally defined as:

$$\mathbf{H}^{(l)} = \sigma\big(\hat{\mathbf{A}} \mathbf{H}^{(l-1)} \mathbf{W}^{(l)}\big), \qquad (1)$$

where $\sigma(\cdot)$ is a nonlinear activation function, $\mathbf{W}^{(l)}$ is the linear transformation matrix, $\mathbf{H}^{(l)}$ denotes the node embedding matrix at the $l$-th layer with $\mathbf{H}^{(0)}=\mathbf{X}$, and $\hat{\mathbf{A}}$ is the normalized adjacency matrix, which can be implemented by row normalization $\tilde{\mathbf{D}}^{-1}\tilde{\mathbf{A}}$ or symmetric normalization $\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-1/2}$, where $\tilde{\mathbf{A}}=\mathbf{A}+\mathbf{I}$, $\tilde{\mathbf{D}}$ is the diagonal degree matrix of $\tilde{\mathbf{A}}$, and $\mathbf{I}$ is an identity matrix.
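To make the propagation rule concrete, the following is a minimal dense PyTorch sketch of Eq. (1) with symmetric normalization; the dimensions, ReLU activation, and normalization choice are illustrative assumptions rather than any particular implementation.

```python
import torch
import torch.nn as nn

# A minimal dense sketch of the GCN propagation in Eq. (1).
class GCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)  # W^{(l)}

    def forward(self, H: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
        A_tilde = A + torch.eye(A.size(0))           # A~ = A + I (self-loops)
        d_inv_sqrt = A_tilde.sum(dim=1).pow(-0.5)    # diagonal of D~^{-1/2}
        A_hat = d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]
        return torch.relu(A_hat @ self.linear(H))    # sigma(A_hat H W)
```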
Interestingly, some works [12] have shown that representative GNN models can be regarded as solving a graph signal denoising problem: given a noisy signal $\mathbf{S}\in\mathbb{R}^{N\times d}$ on graph $\mathcal{G}$, the goal is to recover a clean signal $\mathbf{F}\in\mathbb{R}^{N\times d}$:

$$\min_{\mathbf{F}} \; \|\mathbf{F}-\mathbf{S}\|_F^2 + \lambda\,\mathrm{tr}\big(\mathbf{F}^{\top}\mathbf{L}\mathbf{F}\big), \qquad (2)$$

where $\mathbf{L}$ is the graph Laplacian, the first term guides the output signal to stay close to the original signal, and the second term encourages signal smoothness on the graph.
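As a worked illustration, the sketch below evaluates the two terms of Eq. (2); the Laplacian construction $\mathbf{L}=\mathbf{D}-\mathbf{A}$ is standard, and the trade-off weight is an assumed value.

```python
import torch

# A toy evaluation of the denoising objective in Eq. (2).
def denoising_objective(F_sig: torch.Tensor, S: torch.Tensor,
                        A: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    L = torch.diag(A.sum(dim=1)) - A               # graph Laplacian L = D - A
    fidelity = ((F_sig - S) ** 2).sum()            # ||F - S||_F^2
    smoothness = torch.trace(F_sig.T @ L @ F_sig)  # tr(F^T L F)
    return fidelity + lam * smoothness
```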

IV Method
In this section, we present our model, the Disentangled Information Graph Neural Network (DIGNN). Figure 2 gives an overview. The model consists of three main components: 1) disentangling the attributed fraud network into topology and attribute views and fusing them with an attention mechanism, where the final embeddings are trained with a cross-entropy loss; 2) minimizing the mutual information between the two views to further reduce their conflict; 3) maximizing the mutual information between the view-specific embeddings and their original inputs to maintain the semantic information of the input space.
IV-A View-specific Embedding
It is universally acknowledged that topology and attributes are of vital importance for graph learning. However, in the graph fraud detection scenario, traditional message passing along neighboring nodes is inappropriate, as graph signal smoothing makes fraudsters less distinguishable. To alleviate the inconsistency problem, we disentangle the topology and attribute information and encode them in parallel.
Given an attributed fraud network $\mathcal{G}=(\mathcal{V}, \mathbf{A}, \mathbf{X})$, it can be disentangled into a topology view $\mathcal{G}_t=(\mathcal{V}, \mathbf{A})$ and an attribute view $\mathcal{G}_a=(\mathcal{V}, \mathbf{X})$. We provide one encoder for each input view, as shown in Figure 2. Specifically, we employ Multi-Layer Perceptrons (MLPs) as encoders to obtain the view-specific embeddings $\mathbf{Z}_t$ and $\mathbf{Z}_a$:

$$\mathbf{Z}_t = \mathrm{MLP}_t(\mathbf{A}), \qquad \mathbf{Z}_a = \mathrm{MLP}_a(\mathbf{X}), \qquad (3)$$

in which $\mathbf{Z}_t, \mathbf{Z}_a \in \mathbb{R}^{N\times h}$ and $h$ is the embedding dimension. With these two embeddings, we need to fuse them to obtain the final representation and extract task-relevant information.
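A minimal sketch of the two view encoders in Eq. (3) is given below, assuming a 2-layer MLP with hidden size 32 as in our experimental settings; the toy graph sizes are made up for illustration.

```python
import torch
import torch.nn as nn

# A minimal sketch of the view-specific encoders in Eq. (3).
class ViewEncoder(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int = 32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, hid_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.mlp(x)

# The topology view encodes rows of the adjacency matrix; the attribute
# view encodes the raw node features. Toy sizes for illustration:
N, d, h = 1000, 32, 32
A = torch.randint(0, 2, (N, N)).float()
X = torch.randn(N, d)
Z_t = ViewEncoder(in_dim=N, hid_dim=h)(A)  # topology embeddings
Z_a = ViewEncoder(in_dim=d, hid_dim=h)(X)  # attribute embeddings
```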
IV-B Cross-view Fusion
Given the two view-specific embeddings $\mathbf{Z}_t$ and $\mathbf{Z}_a$, we perform cross-view fusion with an attention mechanism. For a node embedding $\mathbf{z}_v$ from view $v\in\{t,a\}$, the attention value is computed as:

$$\omega_v = \mathbf{q}^{\top} \tanh\big(\mathbf{W}\mathbf{z}_v + \mathbf{b}\big), \qquad (4)$$

where $\mathbf{q}$ denotes the learnable attention vector, $\mathbf{W}$ is the weight matrix, and $\mathbf{b}$ is the bias vector. Thus, we obtain the attention values $\omega_t$ and $\omega_a$ for the view-specific embeddings $\mathbf{Z}_t$ and $\mathbf{Z}_a$, respectively. Then we normalize them via the softmax function to get the final weights:

$$\alpha_t = \frac{\exp(\omega_t)}{\exp(\omega_t)+\exp(\omega_a)}, \qquad \alpha_a = \frac{\exp(\omega_a)}{\exp(\omega_t)+\exp(\omega_a)}. \qquad (5)$$
A larger attention weight implies that the corresponding embedding is more important, and the weights are determined by the specific dataset. The final output embedding is a combination of the two view-specific embeddings weighted by their attention weights:

$$\mathbf{Z} = \alpha_t \mathbf{Z}_t + \alpha_a \mathbf{Z}_a. \qquad (6)$$
We feed it into a linear classifier trained with a cross-entropy loss:

$$\mathcal{L}_{ce} = -\sum_{v\in\mathcal{V}_{train}} y_v \log \hat{y}_v, \qquad \hat{\mathbf{y}}_v = \phi\big(\mathbf{W}_c \mathbf{z}_v + \mathbf{b}_c\big), \qquad (7)$$

in which $\mathbf{W}_c$ and $\mathbf{b}_c$ are the weight matrix and bias vector of the linear classifier, $\phi$ is the softmax function, and $\mathcal{V}_{train}$ is the training node set.
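The following sketch wires Eqs. (4)-(7) together; the $\tanh$ scoring form and the shared projection are common attention-fusion conventions and should be read as assumptions where the text does not pin down the exact parameterization.

```python
import torch
import torch.nn as nn

# A sketch of the cross-view fusion (Eqs. (4)-(6)) and classifier (Eq. (7)).
class AttentionFusion(nn.Module):
    def __init__(self, hid_dim: int, num_classes: int = 2):
        super().__init__()
        self.W = nn.Linear(hid_dim, hid_dim)        # shared projection W, b
        self.q = nn.Linear(hid_dim, 1, bias=False)  # attention vector q
        self.clf = nn.Linear(hid_dim, num_classes)  # linear classifier

    def forward(self, Z_t: torch.Tensor, Z_a: torch.Tensor):
        w_t = self.q(torch.tanh(self.W(Z_t)))       # per-node topology score
        w_a = self.q(torch.tanh(self.W(Z_a)))       # per-node attribute score
        alpha = torch.softmax(torch.cat([w_t, w_a], dim=1), dim=1)  # Eq. (5)
        Z = alpha[:, :1] * Z_t + alpha[:, 1:] * Z_a                 # Eq. (6)
        return self.clf(Z), alpha

# Cross-entropy over training nodes, Eq. (7):
#   loss = nn.CrossEntropyLoss()(logits[train_idx], labels[train_idx])
```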
IV-C Mutual Information Optimization
So far, we have discussed how to obtain the view-specific embeddings and fuse them with an attention mechanism. However, as mentioned in [13], representative GNN models tend to lose expressive power due to the interference between attribute and topology. Despite the decoupling operation, the attention mechanism alone cannot endow our model with the ability to learn mutually exclusive embeddings. In other words, we need principles to guide the training procedure. Leveraging information theory, we propose a novel optimization objective to alleviate this problem. Furthermore, we derive variational bounds of the optimization objective and discuss its intrinsic effect and intuitive insight. Without loss of generality, we let $V_1$ and $V_2$ represent the original views and $Z_1$ and $Z_2$ represent the corresponding view-specific embeddings for ease of reading.
IV-C1 Optimization principles
The first principle aims to induce the model to learn mutually exclusive embeddings, which ameliorates the compromise problem between attribute and topology. Since mutual information measures the mutual dependence of variables, we introduce the constraint term $\min I(Z_1; Z_2)$ into our optimization objective. In this way, the model reduces redundancy and better exploits the semantic information in an embedding space of limited dimensionality.
Nevertheless, the mutual-exclusion constraint is prone to impair helpful shared information. For instance, in the Amazon dataset, the handcrafted features are highly correlated with the social network (topology), so a pure mutual-exclusion constraint would injure attribute semantics during training. The second principle therefore builds the relationship between the view-specific embeddings and their original inputs. Given the rich but distinct semantics inherent in attribute and topology, it is necessary to extract useful features while maintaining the respective information of the input data space. We thus introduce the constraint term $\max I(Z_i; V_i)$, $i\in\{1,2\}$, into our optimization objective so that the encoders preserve more view-specific information. To sum up, our mutual information optimization objective is:
$$\max \; I(Z_1; V_1) + I(Z_2; V_2) - I(Z_1; Z_2). \qquad (8)$$
IV-C2 Theoretical Analysis
Recall that the optimization objective contains the terms $I(Z_i; V_i)$ and $I(Z_1; Z_2)$. However, it is intractable to directly calculate mutual information for high-dimensional variables [28]. We therefore resort to variational bounds of the mutual information to find an approximate solution to the original objective function. In this work, capital letters (e.g., $V$, $Z$) denote random variables, and lowercase letters (e.g., $v$, $z$) denote the corresponding instances.
Lower bound of $I(Z; V)$. For conciseness, we use $I(Z; V)$ to represent either $I(Z_1; V_1)$ or $I(Z_2; V_2)$. According to the definition of mutual information, we have

$$I(Z; V) = \mathbb{E}_{p(v,z)}\left[\log\frac{p(v \mid z)}{p(v)}\right] = H(V) + \mathbb{E}_{p(v,z)}\big[\log p(v \mid z)\big] \qquad (9)$$
$$\geq H(V) + \mathbb{E}_{p(v,z)}\big[\log q(v \mid z)\big], \qquad (10)$$

where $H(V)$ is the entropy of $V$ and $q(v \mid z)$ is the variational approximation of the conditional distribution $p(v \mid z)$. Notice that the entropy of the random variable $V$ is independent of our optimization procedure. Therefore, maximizing $I(Z; V)$ is equivalent to maximizing $\mathbb{E}_{p(v,z)}[\log q(v \mid z)]$.

Upper bound of $I(Z_1; Z_2)$. The overall optimization transformation is illustrated in Figure 3. To begin with, we introduce two basic properties of mutual information [29]:

$$I(X; Y, Z) = I(X; Y) + I(X; Z \mid Y), \qquad (11)$$
$$I(X; Y \mid Z) \geq 0. \qquad (12)$$
Following [25], we have

$$I(Z_1; Z_2) \leq I(Z_1; Z_2) + I(Z_1; V \mid Z_2) = I(Z_1; V, Z_2) = I(Z_1; V) + I(Z_1; Z_2 \mid V),$$

where $V$ denotes the paired input of the two views. Here we assume that $Z_1$ is obtained from $V$ alone, i.e., $Z_1$ is conditionally independent of $Z_2$ given $V$, which means $I(Z_1; Z_2 \mid V) = 0$ holds. According to the symmetry of mutual information, we can follow the same formulation with $Z_2$ and derive an equivalent form:

$$I(Z_1; Z_2) \leq \min\big\{ I(Z_1; V), \; I(Z_2; V) \big\}. \qquad (13)$$
Then, we take the mutual information $I(Z_1; V)$ as an example and derive its upper bound. We have:

$$I(Z_1; V) = \mathbb{E}_{p(v)p_{\theta_1}(z \mid v)}\left[\log\frac{p_{\theta_1}(z \mid v)}{p(z)}\right] \qquad (14)$$
$$= \mathbb{E}\left[\log\frac{p_{\theta_1}(z \mid v)}{p_{\theta_2}(z \mid v)}\right] + \mathbb{E}\left[\log\frac{p_{\theta_2}(z \mid v)}{p(z)}\right] \qquad (15)$$
$$= \mathbb{E}_{p(v)}\big[\mathrm{KL}\big(p_{\theta_1}(z \mid v)\,\|\,p_{\theta_2}(z \mid v)\big)\big] + \mathbb{E}\left[\log\frac{p_{\theta_2}(z \mid v)}{p(z)}\right] \qquad (16)$$
$$= \mathbb{E}_{p(v)}\big[\mathrm{KL}\big(p_{\theta_1}(z \mid v)\,\|\,p_{\theta_2}(z \mid v)\big)\big] + \mathbb{E}\left[\log\frac{p_{\theta_2}(z \mid v)}{q(z)}\right] + \mathbb{E}_{p(z)}\left[\log\frac{q(z)}{p(z)}\right] \qquad (17)$$
$$= \mathbb{E}_{p(v)}\big[\mathrm{KL}\big(p_{\theta_1}(z \mid v)\,\|\,p_{\theta_2}(z \mid v)\big)\big] + \mathbb{E}\left[\log\frac{p_{\theta_2}(z \mid v)}{q(z)}\right] - \mathrm{KL}\big(p(z)\,\|\,q(z)\big) \qquad (18)$$
$$\leq \mathbb{E}_{p(v)}\big[\mathrm{KL}\big(p_{\theta_1}(z \mid v)\,\|\,p_{\theta_2}(z \mid v)\big)\big] + \mathbb{E}\left[\log\frac{p_{\theta_2}(z \mid v)}{q(z)}\right], \qquad (19)$$

where all expectations without subscripts are taken over $p(v)p_{\theta_1}(z \mid v)$.
Here $p_{\theta_1}$ and $p_{\theta_2}$ represent the encoders that encode information from the original feature space, and $q(z)$ is a prior over the latent space. The upper bound becomes tighter as the marginal distribution $p(z)$ approaches the prior $q(z)$. Observing the two terms in the upper bound, the first term measures the difference between the two latent representations produced by $p_{\theta_1}$ and $p_{\theta_2}$ for the same input $v$, while the second term measures the difference between the encoder and the approximated marginal $q(z)$. According to [26], the two terms have the same optimization direction, so we simplify the upper bound to

$$I(Z_1; V) \lesssim \mathbb{E}_{p(v)}\big[\mathrm{KL}\big(p_{\theta_1}(z \mid v)\,\|\,p_{\theta_2}(z \mid v)\big)\big]. \qquad (20)$$
Again, taking into consideration the symmetry of mutual information, we average the two forms to formulate the final upper bound:

$$I(Z_1; Z_2) \lesssim \frac{1}{2}\,\mathbb{E}_{p(v)}\big[\mathrm{KL}\big(p_{\theta_1}(z \mid v)\,\|\,p_{\theta_2}(z \mid v)\big) + \mathrm{KL}\big(p_{\theta_2}(z \mid v)\,\|\,p_{\theta_1}(z \mid v)\big)\big]. \qquad (21)$$
In practice, we minimize a reconstruction loss to equivalently maximize the lower bound of $I(Z; V)$, as done in auto-encoder models [30]. According to the type of input $v$, $q(v \mid z)$ can be any appropriate distribution with a known probability density function. Here we let $q(v \mid z)$ be a Gaussian distribution with given variance $\sigma^2$ and a deterministic mean function $g_{\phi}(z)$ parameterized by neural networks. We then have

$$\mathbb{E}_{p(v,z)}\big[\log q(v \mid z)\big] = -\,\mathbb{E}_{p(v,z)}\left[\frac{\|v - g_{\phi}(z)\|^2}{2\sigma^2}\right] + C, \qquad (22)$$

so the reconstruction loss over the two views is

$$\mathcal{L}_{rec} = \big\|\mathbf{A} - g_{\phi_t}(\mathbf{Z}_t)\big\|_F^2 + \big\|\mathbf{X} - g_{\phi_a}(\mathbf{Z}_a)\big\|_F^2, \qquad (23)$$

where $C$ is a constant independent of the parameters and $g_{\phi_t}$, $g_{\phi_a}$ are view-specific decoders.
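A minimal sketch of this reconstruction term: with a fixed-variance Gaussian $q(v \mid z)$, maximizing the log-likelihood amounts to minimizing a squared reconstruction error, as in an auto-encoder. The decoder shapes below follow the toy sizes from the encoder sketch and are assumptions.

```python
import torch
import torch.nn as nn

# A sketch of the reconstruction term in Eqs. (22)-(23); each view-specific
# decoder maps embeddings back to its original input. Toy shapes assume
# N = 1000 nodes, d = 32 features, h = 32 embedding dimensions.
decoder_t = nn.Linear(32, 1000)  # g_{phi_t}: Z_t -> rows of A
decoder_a = nn.Linear(32, 32)    # g_{phi_a}: Z_a -> X

def reconstruction_loss(Z_t: torch.Tensor, Z_a: torch.Tensor,
                        A: torch.Tensor, X: torch.Tensor) -> torch.Tensor:
    loss_t = ((decoder_t(Z_t) - A) ** 2).mean()  # topology view term
    loss_a = ((decoder_a(Z_a) - X) ** 2).mean()  # attribute view term
    return loss_t + loss_a                       # mean-scaled Eq. (23)
```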
To minimize the upper bound of $I(Z_1; Z_2)$, we define $p_{\theta_i}(z \mid v)$ and $q(z)$ as the Gaussian distributions $\mathcal{N}\big(\mu_{\theta_i}(v), \sigma^2\mathbf{I}\big)$ and $\mathcal{N}\big(\mu_0, \sigma_0^2\mathbf{I}\big)$, respectively, where $\sigma$ and $\sigma_0$ are given variances. We then employ the reparameterization trick [31] to rewrite $z = \mu_{\theta_i}(v) + \sigma\epsilon$ with $\epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$. For equal-variance Gaussians, the symmetric KL in Eq. (21) admits a closed form, and our mutual-exclusion loss is defined as

$$\mathcal{L}_{mi} = \frac{1}{2\sigma^2}\,\mathbb{E}_{p(v)}\big[\|\mu_{\theta_1}(v) - \mu_{\theta_2}(v)\|^2\big]. \qquad (24)$$
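In code, this closed form reduces Eq. (24) to a scaled squared distance between the encoder means; treating the deterministic embeddings as the means $\mu_{\theta_i}(v)$ and fixing $\sigma$ are assumptions of this sketch.

```python
import torch

# A sketch of the mutual-exclusion loss in Eq. (24): for two Gaussians with
# equal variance sigma^2, the symmetric KL of Eq. (21) has the closed form
# ||mu_1 - mu_2||^2 / (2 sigma^2), so no Monte Carlo sampling is needed.
def mutual_exclusive_loss(mu_t: torch.Tensor, mu_a: torch.Tensor,
                          sigma: float = 1.0) -> torch.Tensor:
    return ((mu_t - mu_a) ** 2).sum(dim=1).mean() / (2 * sigma ** 2)
```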
Eventually, the overall optimization objective is formulated as follows:

$$\mathcal{L} = \mathcal{L}_{ce} + \lambda_1 \mathcal{L}_{rec} + \lambda_2 \mathcal{L}_{mi}, \qquad (25)$$

where $\lambda_1$ and $\lambda_2$ are scalar factors. Moreover, it is worth noting that the reconstruction loss term is equivalent to graph signal denoising without the signal smoothness term, which is reasonable given the inconsistency problem of graph anomaly detection. Intuitively, our loss function denoises the original graph signal and, together with the supervised information, achieves mutual exclusion between attribute and topology. The training procedure is presented in Algorithm 1.
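Putting the pieces together, the sketch below shows one training step under Eq. (25). It reuses the encoder, fusion, and decoder sketches above; the learning rate and weight decay follow our experimental settings, while the instance names and $\lambda$ values are assumptions.

```python
import torch
import torch.nn.functional as F

# A sketch of one training step under the overall objective in Eq. (25),
# wiring together ViewEncoder, AttentionFusion, reconstruction_loss, and
# mutual_exclusive_loss from the sketches above (N, d as defined there).
encoder_t, encoder_a = ViewEncoder(in_dim=N), ViewEncoder(in_dim=d)
fusion = AttentionFusion(hid_dim=32)
params = (list(encoder_t.parameters()) + list(encoder_a.parameters())
          + list(fusion.parameters()) + list(decoder_t.parameters())
          + list(decoder_a.parameters()))
optimizer = torch.optim.Adam(params, lr=0.001, weight_decay=0.0005)

def train_step(A, X, labels, train_idx, lam1=0.5, lam2=0.5):
    optimizer.zero_grad()
    Z_t, Z_a = encoder_t(A), encoder_a(X)  # view-specific embeddings, Eq. (3)
    logits, _ = fusion(Z_t, Z_a)           # fused prediction, Eqs. (4)-(7)
    loss = (F.cross_entropy(logits[train_idx], labels[train_idx])
            + lam1 * reconstruction_loss(Z_t, Z_a, A, X)   # Eq. (23)
            + lam2 * mutual_exclusive_loss(Z_t, Z_a))      # Eq. (24)
    loss.backward()
    optimizer.step()
    return loss.item()
```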
V Experiments
In this section, we investigate the effectiveness of DIGNN and aim to answer the following research questions:
• RQ1: Does the disentangling operation help alleviate the inconsistency problem?
• RQ2: Does DIGNN outperform state-of-the-art methods for graph-based fraud detection?
• RQ3: How do the different components of DIGNN contribute to the performance improvement in the graph-based fraud detection task?
• RQ4: How does DIGNN perform with respect to different hyperparameters?
V-A Experiment Setup
V-A1 Datasets.
Our proposed DIGNN model is evaluated on two real-world opinion fraud network datasets: YelpChi and Amazon. The YelpChi dataset [32] collects hotel and restaurant reviews from the Yelp.com online platform. Its nodes are reviews with 32 handcrafted features, and it includes three relations: 1) R-U-R, connecting reviews posted by the same user; 2) R-S-R, connecting reviews under the same product with the same star rating; 3) R-T-R, connecting two reviews under the same product posted in the same month. The Amazon dataset [33] includes product reviews under the Musical Instruments category. Its nodes are users with 25 handcrafted features, and it also contains three relations: 1) U-P-U, connecting users who review at least one same product; 2) U-S-U, connecting users who give at least one same star rating within one week; 3) U-V-U, connecting users with top 5% mutual review text similarities (measured by TF-IDF) among all users. The statistics of the datasets are shown in Table I.
V-A2 Baselines.
We compare DIGNN with several representative state-of-the-art models to verify its effectiveness in graph-based fraud detection.
• GCN [7]: a graph convolutional network that aggregates features in the neighborhood to generate node embeddings.
• GAT [9]: a graph attention network that aggregates neighbors with different aggregation weights calculated by an attention mechanism.
• GraphSAGE [8]: an inductive GNN model that samples neighbors by connection information and aggregates features by stacking layers.
• DR-GCN [34]: a dual-regularized graph convolutional network that handles multi-class imbalanced graph representation learning.
• CARE-GNN [15]: a fraud detection GNN model that utilizes a similarity measure to enhance aggregation and reinforcement learning to obtain the optimal selection count.
• FRAUDRE [21]: a GNN model that aggregates differences between neighbors and tackles class imbalance.
• PC-GNN [6]: a state-of-the-art graph-based fraud detection method that proposes pick and choose operations to alleviate inconsistency and class imbalance.
• DIGNN∖S: a variant of DIGNN that is trained on the node features and adjacency matrix directly, without mini-batch sampling.
• DIGNN∖M: a variant of DIGNN that removes the mutual information optimization.
TABLE I: Statistics of the YelpChi and Amazon datasets (relations, number of edges per relation, and class distribution); the numeric entries are not recoverable from this copy.
TABLE II: Performance comparison (mean±std) on the YelpChi and Amazon datasets.

| Method | YelpChi F1-macro | YelpChi AUC | YelpChi GMean | Amazon F1-macro | Amazon AUC | Amazon GMean |
|---|---|---|---|---|---|---|
| GCN | 0.4929±0.0025 | 0.6274±0.0034 | 0.1886±0.0063 | 0.5461±0.0363 | 0.8328±0.0111 | 0.2570±0.0789 |
| GAT | 0.4879±0.0230 | 0.5715±0.0029 | 0.1659±0.0789 | 0.6464±0.0387 | 0.8102±0.0179 | 0.6675±0.1345 |
| GraphSAGE | 0.4405±0.1066 | 0.5439±0.0025 | 0.2589±0.1864 | 0.6416±0.0079 | 0.7589±0.0046 | 0.5949±0.0349 |
| DR-GCN | 0.5523±0.0231 | 0.5921±0.0195 | 0.4038±0.0742 | 0.6488±0.0364 | 0.8295±0.0079 | 0.5357±0.1077 |
| CARE-GNN | 0.6075±0.0128 | 0.7713±0.0015 | 0.7023±0.0044 | 0.8875±0.0040 | 0.9398±0.0032 | 0.8848±0.0012 |
| FRAUDRE | 0.5841±0.0365 | 0.7427±0.0084 | 0.6654±0.0210 | 0.8806±0.0320 | 0.9272±0.0021 | 0.8808±0.0049 |
| PC-GNN | 0.6130±0.0083 | 0.7715±0.0005 | 0.7068±0.0015 | 0.8557±0.0227 | 0.9482±0.0034 | 0.8952±0.0044 |
| DIGNN∖S (ablation) | 0.5120±0.0027 | 0.6120±0.0067 | 0.5895±0.0010 | 0.7308±0.0064 | 0.8913±0.0020 | 0.8088±0.0042 |
| DIGNN∖M (ablation) | 0.6994±0.0149 | 0.8389±0.0128 | 0.7348±0.0173 | 0.9186±0.0029 | 0.9645±0.0019 | 0.9195±0.0013 |
| DIGNN (ours) | 0.7092±0.0025 | 0.8526±0.0067 | 0.7596±0.0105 | 0.9189±0.0045 | 0.9729±0.0039 | 0.9281±0.0038 |

V-A3 Settings.
The parameters of DIGNN are optimized with the Adam [35] optimizer. The learning rate is set to 0.001 and the weight decay to 0.0005; the number of training epochs is 50; the hidden dimension of node features is 32; the scales of the reconstruction learning rate for both topology and attribute are 0.05; and the number of encoder layers is 2. The train, validation, and test ratios are set to 40%, 20%, and 40%, respectively. We use Scikit-learn [36] to implement the train-test split, keeping the imbalance ratio consistent across the three sets. It is worth noting that, to alleviate the influence of class imbalance, we employ down-sampling to train DIGNN.
GCN, GAT, and GraphSAGE suffer from the class imbalance and inconsistency problems and tend to always predict normal (negative) samples. Therefore, we follow PC-GNN and utilize a threshold-moving strategy, with the classification threshold set to 0.2 for both YelpChi and Amazon. For CARE-GNN, FRAUDRE, and PC-GNN, we use the parameters introduced by the authors.
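For concreteness, a sketch of this threshold-moving strategy, where the `logits` tensor stands in for the output of a trained baseline:

```python
import torch

# A sketch of threshold moving for the plain GNN baselines: a node is
# flagged as fraud when its predicted fraud probability exceeds 0.2
# rather than the default 0.5.
logits = torch.randn(100, 2)                # placeholder model outputs
probs = torch.softmax(logits, dim=1)[:, 1]  # probability of the fraud class
preds = (probs > 0.2).long()                # threshold moving at 0.2
```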
V-A4 Implementation.
Our model DIGNN is implemented in PyTorch 1.7.0 [37]. DIGNN, GCN, GAT, and GraphSAGE are all implemented based on PyTorch Geometric 2.0.3 [38]. For CARE-GNN, FRAUDRE, and PC-GNN, we run the source code provided by the authors. All models are run on Python 3.8.12 with one NVIDIA GeForce RTX 2080 GPU and a 3.20 GHz Intel Xeon E5-2660 CPU.
V-A5 Metrics.
The fraud detection datasets display a skewed class distribution, so accuracy is not suitable to evaluate the effectiveness of fraud detection models. The evaluation metrics should have no bias to any class. Therefore, we use three common metrics, namely F1-macro, AUC and GMean. F1-macro is the unweighted mean of the F1-score of each class. AUC is the area under the ROC Curve.
It can be computed as

$$\mathrm{AUC} = \frac{\sum_{u \in \mathcal{U}^{-}} \mathrm{rank}_u - \frac{|\mathcal{U}^{-}|\,(|\mathcal{U}^{-}| + 1)}{2}}{|\mathcal{U}^{-}| \cdot |\mathcal{U}^{+}|},$$

where $\mathcal{U}^{-}$ and $\mathcal{U}^{+}$ denote the minority and majority class sets in the testing set, respectively, and $\mathrm{rank}_u$ indicates the rank of node $u$ ordered by prediction score. GMean calculates the geometric mean of the True Positive Rate (TPR) and True Negative Rate (TNR), defined as

$$\mathrm{GMean} = \sqrt{\mathrm{TPR} \cdot \mathrm{TNR}} = \sqrt{\frac{TP}{TP + FN} \cdot \frac{TN}{TN + FP}}.$$

Higher scores on these three metrics indicate better performance.
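These metrics can be computed with scikit-learn as in the sketch below; the 0.5 decision threshold here is illustrative (the threshold-moving baselines use 0.2, as noted above).

```python
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score, confusion_matrix

# A sketch of the three evaluation metrics, assuming y_true in {0, 1} and
# y_prob the predicted fraud probability for each test node.
def evaluate(y_true, y_prob, threshold=0.5):
    y_pred = (y_prob >= threshold).astype(int)
    f1_macro = f1_score(y_true, y_pred, average="macro")
    auc = roc_auc_score(y_true, y_prob)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    gmean = np.sqrt((tp / (tp + fn)) * (tn / (tn + fp)))  # sqrt(TPR * TNR)
    return f1_macro, auc, gmean
```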




V-B Analysis of Attention Mechanism (RQ1)
To answer RQ1, we analyze the attention values and visualize them to investigate whether the values learned by our model are meaningful. The attention trends are shown in Figure 4, where the x-axis is the number of training epochs and the y-axis is the average attention value. As training proceeds, the difference between the attention values of topology and attribute becomes striking: DIGNN pays more attention to attribute on YelpChi and to topology on Amazon. This demonstrates that our model has a strong capability to extract task-relevant information from the two views.
V-C Performance Comparison (RQ2)
To answer RQ2, we compare the performance of DIGNN with state-of-the-art methods. The corresponding F1-macro, AUC, and GMean scores are shown in Table II, from which we make two observations.
First, DIGNN significantly outperforms the other SOTA baselines on all metrics on both the YelpChi and Amazon datasets. On YelpChi, our model obtains improvements of 9.62%, 8.11%, and 5.28% in F1-macro, AUC, and GMean, respectively. PC-GNN outperforms the other baselines on most metrics, but our model still surpasses it by a significant margin. On Amazon, graph-based fraud detection methods have already achieved high performance and the room for improvement is limited, yet our model still gains 3.14% in F1-macro, 2.47% in AUC, and 3.29% in GMean. In addition, the relatively low standard deviation of DIGNN shows that our model is stable.
Second, the compared baselines can be divided into two groups: traditional MP-GNNs and graph-based fraud detection methods. GCN, GAT, and GraphSAGE are traditional GNN models, and DR-GCN is designed for imbalanced node classes. They do not consider the inconsistency problem, so they perform poorly on the YelpChi and Amazon datasets. Because the Amazon dataset has a more skewed label distribution (fraudsters account for only 6.87% of all samples), more intra-class edges, which benefit traditional MP-GNNs, appear in the graph; these methods therefore achieve relatively satisfactory performance on Amazon. CARE-GNN and PC-GNN are graph-based fraud detection methods that both sample neighbors according to a similarity measure, which alleviates the inconsistency problem to a certain degree, so they perform better on these two datasets. However, the neighbor sampling strategy may discard a lot of information and lead to sub-optimal results. Instead, our DIGNN model abandons this practice, disentangles the original graph into topological structure and node attributes, and processes them in parallel.
In general, DIGNN outperforms all baselines in F1-macro, AUC, and GMean on the YelpChi and Amazon datasets, which demonstrates the effectiveness of our model.
V-D Ablation Study (RQ3)
To answer RQ3, we compare DIGNN with its two variants, DIGNN∖S and DIGNN∖M. The results on the two datasets are shown in Table II. We observe that DIGNN surpasses its variants on most metrics. The overall performance of DIGNN∖M on YelpChi and Amazon is inferior to the complete model, which verifies the effectiveness of our proposed mutual information objective. It is worth noting that DIGNN∖M is on par with DIGNN on the Amazon dataset in terms of F1-macro. This could be attributed to the smaller fraud rate of Amazon, which leads DIGNN∖M to pay more attention to the majority class without the guidance of mutual information. For DIGNN∖S, we observe that DIGNN is evidently better than the model without the sampling strategy. We suppose this is caused by noisy information in the structure view; the sampling strategy plays a denoising role on the structural information to some extent.
V-E Sensitivity Analysis (RQ4)
To answer RQ4, we further evaluate the performance of DIGNN with respect to the training ratio, the hidden dimension $h$, and the hyperparameters $\lambda_1$ and $\lambda_2$. For the training ratio, we vary the percentage of training nodes from 10% to 50% and compare DIGNN with two baselines, CARE-GNN and PC-GNN. Figure 5 shows the F1-macro, AUC, and GMean performance on the YelpChi dataset. DIGNN always achieves the best performance among the three models; even with a training ratio of 10%, DIGNN still performs better than PC-GNN trained on 50% of the samples, and it surpasses CARE-GNN and PC-GNN by a large margin in AUC.
For the hidden dimension $h$, we study the performance of DIGNN with $h$ varying from 8 to 64; the results are presented in Figure 6. As $h$ increases, the performance improves at first and then grows slowly, remaining relatively stable with respect to $h$.
For the hyperparameters $\lambda_1$ and $\lambda_2$, we vary both from 0 to 1; the corresponding results are shown in Figure 7. Owing to limited space, we only present the AUC performance on the YelpChi and Amazon datasets. It can be observed that the optimal selection of these two hyperparameters varies greatly across datasets: YelpChi and Amazon each favor a different combination of larger and smaller values, so $\lambda_1$ and $\lambda_2$ should be tuned for each dataset.
V-F Visualization
To show the effectiveness of different models more intuitively, we visualize the learned node embeddings on the YelpChi dataset. Specifically, we compare DIGNN with three other models: one traditional MP-GNN, GCN, and two graph-based fraud detection models, CARE-GNN and PC-GNN. We first take the 32-dimensional output embeddings of the last layer of these models before the softmax function, and then employ t-SNE [39] to map them into a 2-dimensional space for visualization. Because of the imbalanced class distribution, we randomly sample the same number of benign samples as fraudster samples for better visibility. The results on YelpChi are shown in Figure 8, where orange dots represent fraudsters and blue dots represent benign entities.
We can observe that, due to the strong inductive bias of homophily, GCN is unable to learn discriminative node embeddings. CARE-GNN and PC-GNN alleviate the inconsistency problem, but they still fail to separate the embeddings of fraudsters from those of benign entities. Conversely, DIGNN clearly achieves inter-class separation, and the overlap between the two kinds of nodes is relatively small. This verifies the effectiveness of our proposed DIGNN model.
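A sketch of this visualization protocol, with random arrays standing in for the learned embeddings and labels:

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# A sketch of the visualization protocol: balance the two classes by
# down-sampling benign nodes, then project 32-d embeddings to 2-d with
# t-SNE; `emb` and `labels` stand in for the outputs of a trained model.
rng = np.random.default_rng(0)
emb = rng.normal(size=(2000, 32))        # stand-in for model embeddings
labels = rng.integers(0, 2, size=2000)   # stand-in for node labels

fraud_idx = np.flatnonzero(labels == 1)
benign_idx = rng.choice(np.flatnonzero(labels == 0),
                        size=len(fraud_idx), replace=False)
idx = np.concatenate([fraud_idx, benign_idx])

xy = TSNE(n_components=2, random_state=0).fit_transform(emb[idx])
plt.scatter(xy[:, 0], xy[:, 1], c=labels[idx], s=5, cmap="coolwarm")
plt.show()
```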
VI Conclusion
In this paper, we show that a disentangling operation helps alleviate the inconsistency problem in fraud networks. To decrease the conflict between topological structure and node attributes, we propose a simple yet effective model named DIGNN. It first disentangles the attributed fraud network into two views, topology and attribute, and then fuses the two kinds of view information adaptively with an attention mechanism, which effectively extracts task-relevant information. Moreover, we design a novel optimization objective to further reduce the entanglement between the two view-specific embeddings while maintaining their semantic information. Experimental results demonstrate that DIGNN outperforms state-of-the-art methods on two real-world graph fraud detection datasets.
VII Acknowledgment
This work is jointly sponsored by National Natural Science Foundation of China (U19B2038, 62141608, 62206291) and CCF-AFSG Research Fund (20210001).
References
- [1] A. Li, Z. Qin, R. Liu, Y. Yang, and D. Li, “Spam review detection with graph convolutional networks,” in Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 2019, pp. 2703–2711.
- [2] Y. Dou, K. Shu, C. Xia, P. S. Yu, and L. Sun, “User preference-aware fake news detection,” in Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2021, pp. 2051–2055.
- [3] W. Xu, J. Wu, Q. Liu, S. Wu, and L. Wang, “Mining fine-grained semantics via graph neural networks for evidence-based fake news detection,” arXiv preprint arXiv:2201.06885, 2022.
- [4] L. Deng, C. Wu, D. Lian, Y. Wu, and E. Chen, “Markov-driven graph convolutional networksfor social spammer detection,” IEEE Transactions on Knowledge and Data Engineering, 2022.
- [5] D. Wang, J. Lin, P. Cui, Q. Jia, Z. Wang, Y. Fang, Q. Yu, J. Zhou, S. Yang, and Y. Qi, “A semi-supervised graph attentive network for financial fraud detection,” in 2019 IEEE International Conference on Data Mining (ICDM). IEEE, 2019, pp. 598–607.
- [6] Y. Liu, X. Ao, Z. Qin, J. Chi, J. Feng, H. Yang, and Q. He, “Pick and choose: a gnn-based imbalanced learning approach for fraud detection,” in Proceedings of the Web Conference 2021, 2021, pp. 3168–3177.
- [7] T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” arXiv preprint arXiv:1609.02907, 2016.
- [8] W. Hamilton, Z. Ying, and J. Leskovec, “Inductive representation learning on large graphs,” Advances in neural information processing systems, vol. 30, 2017.
- [9] P. Velickovic, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio, “Graph attention networks,” stat, vol. 1050, p. 20, 2017.
- [10] H. Nt and T. Maehara, “Revisiting graph neural networks: All we have is low-pass filters,” arXiv preprint arXiv:1905.09550, 2019.
- [11] Z. Liu, Y. Dou, P. S. Yu, Y. Deng, and H. Peng, “Alleviating the inconsistency problem of applying graph neural network to fraud detection,” in Proceedings of the 43rd international ACM SIGIR conference on research and development in information retrieval, 2020, pp. 1569–1572.
- [12] Y. Ma, X. Liu, T. Zhao, Y. Liu, J. Tang, and N. Shah, “A unified view on graph neural networks as graph signal denoising,” in Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 2021, pp. 1202–1211.
- [13] L. Yang, W. Zhou, W. Peng, B. Niu, J. Gu, C. Wang, X. Cao, and D. He, “Graph neural networks beyond compromise between attribute and topology,” 2022.
- [14] X. Wang, M. Zhu, D. Bo, P. Cui, C. Shi, and J. Pei, “Am-gcn: Adaptive multi-channel graph convolutional networks,” in Proceedings of the 26th ACM SIGKDD International conference on knowledge discovery & data mining, 2020, pp. 1243–1253.
- [15] Y. Dou, Z. Liu, L. Sun, Y. Deng, H. Peng, and P. S. Yu, “Enhancing graph neural network-based fraud detectors against camouflaged fraudsters,” in Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 2020, pp. 315–324.
- [16] Y. Wang, J. Zhang, S. Guo, H. Yin, C. Li, and H. Chen, “Decoupling representation learning and classification for gnn-based anomaly detection,” in Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2021, pp. 1239–1248.
- [17] X. Ao, Y. Liu, Z. Qin, Y. Sun, and Q. He, “Temporal high-order proximity aware behavior analysis on ethereum,” World Wide Web, vol. 24, no. 5, pp. 1565–1585, 2021.
- [18] T. Liang, G. Zeng, Q. Zhong, J. Chi, J. Feng, X. Ao, and J. Tang, “Credit risk and limits forecasting in e-commerce consumer lending service via multi-view-aware mixture-of-experts nets,” in Proceedings of the 14th ACM international conference on web search and data mining, 2021, pp. 229–237.
- [19] G. Zhang, Z. Li, J. Huang, J. Wu, C. Zhou, J. Yang, and J. Gao, “efraudcom: An e-commerce fraud detection system via competitive graph neural networks,” ACM Transactions on Information Systems (TOIS), vol. 40, no. 3, pp. 1–29, 2022.
- [20] X. Ma, J. Wu, S. Xue, J. Yang, C. Zhou, Q. Z. Sheng, H. Xiong, and L. Akoglu, “A comprehensive survey on graph anomaly detection with deep learning,” IEEE Transactions on Knowledge and Data Engineering, 2021.
- [21] G. Zhang, J. Wu, J. Yang, A. Beheshti, S. Xue, C. Zhou, and Q. Z. Sheng, “Fraudre: Fraud detection dual-resistant to graph inconsistency and imbalance,” in 2021 IEEE International Conference on Data Mining (ICDM). IEEE, 2021, pp. 867–876.
- [22] C. Liu, L. Sun, X. Ao, J. Feng, Q. He, and H. Yang, “Intention-aware heterogeneous graph attention networks for fraud transactions detection,” in Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2021, pp. 3280–3288.
- [23] C. Liu, L. Wen, Z. Kang, G. Luo, and L. Tian, “Self-supervised consensus representation learning for attributed graph,” in Proceedings of the 29th ACM International Conference on Multimedia, 2021, pp. 2654–2662.
- [24] D. Lim, F. Hohne, X. Li, S. L. Huang, V. Gupta, O. Bhalerao, and S. N. Lim, “Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods,” Advances in Neural Information Processing Systems, vol. 34, 2021.
- [25] M. Federici, A. Dutta, P. Forré, N. Kushman, and Z. Akata, “Learning robust representations via multi-view information bottleneck,” arXiv preprint arXiv:2002.07017, 2020.
- [26] F. Bao, “Disentangled variational information bottleneck for multiview representation learning,” in CAAI International Conference on Artificial Intelligence. Springer, 2021, pp. 91–102.
- [27] Z. Wan, C. Zhang, P. Zhu, and Q. Hu, “Multi-view information-bottleneck representation learning,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 11, 2021, pp. 10085–10092.
- [28] M. I. Belghazi, A. Baratin, S. Rajeshwar, S. Ozair, Y. Bengio, A. Courville, and D. Hjelm, “Mutual information neural estimation,” in International conference on machine learning. PMLR, 2018.
- [29] T. M. Cover, J. A. Thomas et al., “Entropy, relative entropy and mutual information,” Elements of information theory, vol. 2, no. 1, pp. 12–13, 1991.
- [30] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, P.-A. Manzagol, and L. Bottou, “Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion.” Journal of machine learning research, vol. 11, no. 12, 2010.
- [31] D. P. Kingma and M. Welling, “Auto-encoding variational bayes,” arXiv preprint arXiv:1312.6114, 2013.
- [32] S. Rayana and L. Akoglu, “Collective opinion spam detection: Bridging review networks and metadata,” in Proceedings of the 21th acm sigkdd international conference on knowledge discovery and data mining, 2015, pp. 985–994.
- [33] J. J. McAuley and J. Leskovec, “From amateurs to connoisseurs: modeling the evolution of user expertise through online reviews,” in Proceedings of the 22nd international conference on World Wide Web, 2013, pp. 897–908.
- [34] M. Shi, Y. Tang, X. Zhu, D. Wilson, and J. Liu, “Multi-class imbalanced graph convolutional network learning,” in Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI-20), 2020.
- [35] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
- [36] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg et al., “Scikit-learn: Machine learning in python,” the Journal of machine Learning research, vol. 12, pp. 2825–2830, 2011.
- [37] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga et al., “Pytorch: An imperative style, high-performance deep learning library,” Advances in neural information processing systems, vol. 32, 2019.
- [38] M. Fey and J. E. Lenssen, “Fast graph representation learning with pytorch geometric,” arXiv preprint arXiv:1903.02428, 2019.
- [39] L. Van der Maaten and G. Hinton, “Visualizing data using t-sne.” Journal of machine learning research, vol. 9, no. 11, 2008.