FairGP: A Scalable and Fair Graph Transformer Using Graph Partitioning
Abstract
Recent studies have highlighted significant fairness issues in Graph Transformer (GT) models, particularly against subgroups defined by sensitive features. Additionally, GTs are computationally intensive and memory-demanding, limiting their application to large-scale graphs. Our experiments demonstrate that graph partitioning can enhance the fairness of GT models while reducing computational complexity. To understand this improvement, we conducted a theoretical investigation into the root causes of fairness issues in GT models. We found that the sensitive features of higher-order nodes disproportionately influence lower-order nodes, resulting in sensitive feature bias. We propose Fairness-aware scalable GT based on Graph Partitioning (FairGP), which partitions the graph to minimize the negative impact of higher-order nodes. By optimizing attention mechanisms, FairGP mitigates the bias introduced by global attention, thereby enhancing fairness. Extensive empirical evaluations on six real-world datasets validate the superior performance of FairGP in achieving fairness compared to state-of-the-art methods. The code is available at https://github.com/LuoRenqiang/FairGP.
Introduction
Despite the reported capabilities of Graph Transformer (GT) models in graph representation learning, recent studies have exposed fairness concerns associated with these models (Luo et al. 2024b). In critical applications such as medical diagnoses (Pan et al. 2024), loan approval (Zhang et al. 2024b), and e-commerce (Agarwal et al. 2024), features such as gender, race, age, and region are legally protected to prevent discrimination and are thus considered sensitive features (Wang et al. 2024). Unfortunately, GTs generate biased predictions that discriminate against specific subgroups characterized by these sensitive features.
In addition, GTs are computationally demanding and memory-intensive when applied to large-scale graphs due to the complexity of global attention mechanisms (Dao et al. 2022; Chen et al. 2023). Consequently, training deep GTs on large-scale graphs can be resource-intensive, necessitating sophisticated techniques for partitioning nodes into smaller mini-batches to alleviate the computational overhead (Wu et al. 2023b). Partitioning strategies impact the performance of GT models and reduce the time and space complexity of algorithms (Blondel et al. 2008; Traag, Waltman, and Van Eck 2019). At the same time, these strategies may themselves have fairness implications.
Will GT with graph partitioning exhibit the same fairness issues as a vanilla GT? Various efforts have been devoted to developing fairness-aware algorithms, aiming to control the degree to which a model depends on sensitive features, measured by independence criteria such as statistical parity (measured by $\Delta_{SP}$) and equality opportunity (measured by $\Delta_{EO}$) (Zhang et al. 2024a). We calculate the fairness metrics for both vanilla GT and GT with graph partitioning separately, as shown in Figure 1. The experimental settings are detailed in Appendix A.1, along with results from three additional graph partitioning strategies. The results show that graph partitioning improves fairness in GT models, indicating its potential to address fairness issues within GTs. This observation raises a natural and fundamental question: how does graph partitioning promote fairness? To answer this question, we conducted a theoretical investigation into the underlying mechanisms.

Intuitively, the global attention mechanism in GTs tends to favour higher-order nodes (nodes with high degrees), causing the sensitive features of these nodes to disproportionately influence lower-order nodes, resulting in sensitive feature bias (Shehzad et al. 2024). Our theoretical analysis indicates that global attention among nodes with different sensitive features significantly impacts GT fairness. Minimizing the influence of higher-order nodes on nodes with varying sensitive features can effectively improve GT fairness. In addition, we categorize the attention into inter-cluster and intra-cluster attention. To enhance fairness, we optimize both inter-cluster and intra-cluster attention, thereby reducing interactions between nodes with different sensitive features. By adapting the graph partitioning, we enable a large number of lower-order nodes to avoid undue influence from higher-order nodes. This effectively addresses the fairness issue, ensuring that the sensitive features of lower-order nodes are less affected by those of higher-order nodes.
Building on this insight, we propose Fairness-aware scalable GT based on Graph Partitioning (FairGP). FairGP reduces the influence range of higher-order nodes in global attention by partitioning the graph and focusing attention within respective subgraphs. Additionally, FairGP optimizes attention to enhance the similarity between the distribution of the original sensitive features and the distribution of their embeddings. This optimization minimizes the negative influence between nodes with different sensitive features, thereby enhancing GT fairness. The main contributions of this work are summarized as follows:
- We show that GT fairness is influenced by the attention of higher-order nodes and the attention between nodes with different sensitive features. To the best of our knowledge, this work is the first to uncover the underlying cause of fairness issues in GTs.
- We propose a novel fairness-aware scalable GT, FairGP, which partitions the graph to mitigate the influence of higher-order node attention and optimizes inter-cluster attention to mitigate bias in nodes with different sensitive features.
- To verify the effectiveness of the proposed FairGP, we conduct extensive empirical experiments comparing with state-of-the-art models. The results show that FairGP consistently improves fairness by at least 40% across both statistical parity and equality opportunity over six real-world datasets.
Related Work
Fairness-aware Graph Neural Network (GNNs)
Fairness in GNNs has gained substantial attention, particularly in efforts to identify and mitigate biases associated with specific sensitive features (Zhang et al. 2024c). Various fairness-aware GNN studies aim to preserve the independence of sensitive features through pre-processing and in-processing techniques (Dong et al. 2023; Luo et al. 2024c).
Graphair (Ling et al. 2023) is a recent pre-processing algorithm. It automatically identifies fairness-aware augmentations from input graph data, aiming to circumvent sensitive features while preserving the integrity of other valuable information. FairAC (Guo, Chu, and Li 2023) proposes a fairness feature completion approach. It completes missing information by adversarial learning and achieves fair node embeddings for graphs lacking certain features. Unlike GNN methods, GTs that depend on the attention score matrix face challenges in directly identifying sensitive features in the input, making it difficult to apply pre-processing techniques effectively. In-processing methods, on the other hand, focus on message-passing or convolution propagation. FairSIN (Yang et al. 2024) introduces a neutralization-based paradigm, which incorporates additional fairness-enhancing features into node features or representations before message-passing. FMP (Jiang et al. 2024) directly addresses the use of sensitive features during forward propagation for node classification tasks using cross-entropy loss, eliminating the need for data pre-processing. However, the global attention mechanism in GTs, which facilitates direct node-to-node interactions, differs from the message-passing or convolution used in GNNs (Yin and Zhong 2023). In-processing techniques may not be directly applicable to GTs.
Scalable Graph Transformer
Recent GT studies are shifting their focus to handling the large-scale graph (Xing et al. 2024). DIFFormer (Wu et al. 2023a) introduces a scalable GT framework guided by data-invariant functions, enabling efficient processing of large-scale graphs. Similarly, SGFormer (Wu et al. 2023b) simplifies the Transformer architecture to boost computational efficiency on large-scale graphs. Polynormer (Deng, Yue, and Zhang 2024) advances the field by developing a polynomial expressive GT that can capture features in complex graph structures more effectively.
Despite these advancements, existing GT methods often lack explicit consideration of algorithmic fairness. When applied to real-world datasets, they exhibit significant fairness issues. FairGT (Luo et al. 2024b) addresses the fairness problem within GTs based on a sensitive-feature-complete graph. It faces challenges in deployment on large graphs due to its reliance on global attention over two whole graphs (the original graph and the sensitive-feature-complete graph). Scalable techniques such as mini-batching and graph partitioning are difficult to deploy in FairGT with guaranteed performance, because the sensitive-feature-complete graph relies on all edges. Addressing the fairness issues of GTs on large-scale graphs therefore remains a pressing challenge.
Preliminaries
Notations
Unless otherwise specified, we denote sets with calligraphic uppercase letters (e.g., $\mathcal{V}$), matrices with bold uppercase letters (e.g., $\mathbf{A}$), and vectors with bold lowercase letters (e.g., $\mathbf{x}$). We denote the number of elements in a set $\mathcal{V}$ as $|\mathcal{V}|$. We denote a graph as $\mathcal{G} = (\mathcal{V}, \mathbf{A}, \mathbf{X})$, where $\mathcal{V}$ is the set of nodes in the graph and $|\mathcal{V}| = n$, $\mathbf{A} \in \mathbb{R}^{n \times n}$ is the adjacency matrix, and $\mathbf{X} \in \mathbb{R}^{n \times d}$ is the node feature matrix.
We follow the convention of NumPy in Python for matrix and vector indexing: $\mathbf{A}[i,j]$ represents the entry of matrix $\mathbf{A}$ at the $i$-th row and the $j$-th column; $\mathbf{A}[i,:]$ and $\mathbf{A}[:,j]$ represent the $i$-th row and the $j$-th column of matrix $\mathbf{A}$, respectively. $\mathbf{s}$ denotes the sensitive features of nodes, and $s_v$ denotes the sensitive feature of node $v$.
Fairness Evaluation Metrics
We present two definitions of fairness for a binary ground truth label $y \in \{0, 1\}$ and a binary sensitive feature $s \in \{0, 1\}$. We use $\hat{y}$ to represent the predicted label.
Definition 1. Statistical Parity (i.e., Demographic Parity, Independence) (Dwork et al. 2012). Statistical parity requires the predictions to be independent of the sensitive features. The prediction can be formally written as:

$$P(\hat{y} \mid s = 0) = P(\hat{y} \mid s = 1). \tag{1}$$

When both the predicted labels and the sensitive features are binary, the extent of statistical parity can be quantified by $\Delta_{SP}$, defined as follows:

$$\Delta_{SP} = \left| P(\hat{y} = 1 \mid s = 0) - P(\hat{y} = 1 \mid s = 1) \right|. \tag{2}$$

The $\Delta_{SP}$ measures the acceptance-rate difference between the two sensitive subgroups with $s = 0$ and $s = 1$.
Definition 2. Equal Opportunity (Hardt, Price, and Srebro 2016). Equal opportunity requires that instances in the positive class have an equal probability of receiving a positive prediction across sensitive subgroups. For individuals with positive ground truth labels, positive predictions must not depend on sensitive features. This principle can be formulated as follows:

$$P(\hat{y} = 1 \mid y = 1, s = 0) = P(\hat{y} = 1 \mid y = 1, s = 1). \tag{3}$$

Fairness-aware algorithms prevent the allocation of unfavourable predictions to individuals solely based on their sensitive subgroup affiliation. In particular, $\Delta_{EO}$ quantifies the extent of deviation in predictions from equal opportunity:

$$\Delta_{EO} = \left| P(\hat{y} = 1 \mid y = 1, s = 0) - P(\hat{y} = 1 \mid y = 1, s = 1) \right|. \tag{4}$$

Both $\Delta_{SP}$ and $\Delta_{EO}$ are evaluated on the test set.
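As a concrete illustration (ours, not part of the original paper), a minimal NumPy sketch of Equations (2) and (4) on toy test-set arrays might look as follows; the array names are hypothetical:

```python
import numpy as np

def delta_sp(y_hat: np.ndarray, s: np.ndarray) -> float:
    """Equation (2): |P(y_hat=1 | s=0) - P(y_hat=1 | s=1)|."""
    return abs(y_hat[s == 0].mean() - y_hat[s == 1].mean())

def delta_eo(y_hat: np.ndarray, y: np.ndarray, s: np.ndarray) -> float:
    """Equation (4): |P(y_hat=1 | y=1, s=0) - P(y_hat=1 | y=1, s=1)|."""
    pos = y == 1
    return abs(y_hat[pos & (s == 0)].mean() - y_hat[pos & (s == 1)].mean())

# Toy test set: a prediction that favours subgroup s=0.
y_hat = np.array([1, 1, 1, 0, 0, 0, 0, 1])
y     = np.array([1, 0, 1, 1, 0, 1, 0, 1])
s     = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(delta_sp(y_hat, s))     # 0.5
print(delta_eo(y_hat, y, s))  # ~0.167
```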
Transformer
The Transformer architecture consists of a composition of Transformer layers. Each Transformer layer has two parts: a self-attention module and a position-wise feed-forward network (FFN). Let $\mathbf{H} \in \mathbb{R}^{n \times d}$ be the input of the self-attention module, where $d$ is the hidden dimension and $\mathbf{H}[i,:]$ is the hidden representation at position $i$. The input is projected by three matrices $\mathbf{W}_Q$, $\mathbf{W}_K$, and $\mathbf{W}_V$ to the corresponding representations $\mathbf{Q}$, $\mathbf{K}$, $\mathbf{V}$. The self-attention is then calculated as:

$$\mathbf{Q} = \mathbf{H}\mathbf{W}_Q, \quad \mathbf{K} = \mathbf{H}\mathbf{W}_K, \quad \mathbf{V} = \mathbf{H}\mathbf{W}_V, \quad \mathrm{Attention}(\mathbf{H}) = \mathrm{softmax}\left(\frac{\mathbf{Q}\mathbf{K}^\top}{\sqrt{d_K}}\right)\mathbf{V}, \tag{5}$$

where $d_K$ is the dimension of $\mathbf{K}$. To simplify the discussion, we consider single-head self-attention and assume $d_K = d$. The extension to multi-head attention is straightforward, and we omit the bias terms for simplicity.

The attention score matrix is denoted as $\hat{\mathbf{A}} \in \mathbb{R}^{n \times n}$, which contains the attention scores for all node pairs:

$$\hat{\mathbf{A}} = \mathrm{softmax}\left(\frac{\mathbf{Q}\mathbf{K}^\top}{\sqrt{d}}\right). \tag{6}$$

$\hat{\mathbf{A}}[i,j]$ represents the attention score between nodes $i$ and $j$. Due to the Softmax function, we have $\hat{\mathbf{A}}[i,j] \geq 0$ and $\sum_{j=1}^{n} \hat{\mathbf{A}}[i,j] = 1$.
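A minimal PyTorch sketch of Equations (5)-(6) (our illustration, with toy sizes), which also checks the row-stochastic property stated above:

```python
import torch

def attention_scores(H, W_Q, W_K):
    """Equation (6): row-stochastic attention score matrix."""
    Q, K = H @ W_Q, H @ W_K
    return torch.softmax(Q @ K.T / K.shape[-1] ** 0.5, dim=-1)

n, d = 5, 8
H = torch.randn(n, d)
W_Q, W_K, W_V = (torch.randn(d, d) for _ in range(3))
A_hat = attention_scores(H, W_Q, W_K)
out = A_hat @ (H @ W_V)                                  # Attention(H) in Equation (5)
assert torch.allclose(A_hat.sum(dim=1), torch.ones(n))   # each row sums to 1
assert (A_hat >= 0).all()                                # non-negative scores
```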
The Fairness Issues in GT
In this section, we analyze the underlying cause of fairness issues in GTs. Firstly, we present our empirical observations that the global attention mechanism negatively impacts fairness in GT, because global attention tends to focus on the proportions of sensitive features among higher-order nodes, while fairness-aware algorithms protect the proportions of sensitive features among all nodes (most of which are lower-order nodes). Methods that truncate the attention score matrix, such as graph partitioning, can effectively mitigate this negative impact. Then, we present theoretical findings on GT fairness and explain how graph partitioning enhances fairness. This theoretical foundation enables us to propose a GT fairness algorithm (FairGP) based on graph partitioning.
Empirical Observations
We empirically show how the global attention mechanism in GT tends to favour higher-order nodes. The experimental settings are detailed in Appendix A.2. We focus on the proportion of different sensitive features across different nodes, analyzing how these distributions are affected by GT training. This analysis includes the proportion of different sensitive features among all nodes, higher-order nodes, and prediction results. We normalize the minority subgroups as $1$. Let $\mathcal{S}_0 = \{v \in \mathcal{V} : s_v = 0\}$ and $\mathcal{S}_1 = \{v \in \mathcal{V} : s_v = 1\}$. Thus, if $|\mathcal{S}_0| \geq |\mathcal{S}_1|$, the normalized proportions $(r_0, r_1)$ over all nodes are:

$$r_0 = \frac{|\mathcal{S}_0|}{|\mathcal{S}_1|}, \qquad r_1 = 1, \tag{7}$$

and, if $|\mathcal{S}_0| < |\mathcal{S}_1|$:

$$r_0 = 1, \qquad r_1 = \frac{|\mathcal{S}_1|}{|\mathcal{S}_0|}. \tag{8}$$
The definition is similar in higher-order nodes and prediction results, and the details are shown in Appendix B.
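A small NumPy sketch of this normalization (our illustration; the degree threshold and sizes are toy values, not those of the paper's datasets):

```python
import numpy as np

def normalized_ratio(s: np.ndarray) -> tuple:
    """Return (r0, r1) with the minority subgroup scaled to 1, as in Eqs. (7)-(8)."""
    n0, n1 = (s == 0).sum(), (s == 1).sum()
    return (n0 / n1, 1.0) if n0 >= n1 else (1.0, n1 / n0)

rng = np.random.default_rng(0)
s_all = rng.integers(0, 2, size=5000)   # sensitive features of all nodes
deg = rng.poisson(5, size=5000)         # toy degree sequence
s_high = s_all[deg > 10]                # "higher-order" nodes under a toy threshold
print(normalized_ratio(s_all), normalized_ratio(s_high))
```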
| Dataset | All Nodes ($s{=}0$ / $s{=}1$) | Higher-order ($s{=}0$ / $s{=}1$) | Prediction ($s{=}0$ / $s{=}1$) |
|---|---|---|---|
| Credit | 10.17 / 1 | 2.25 / 1 | 2.94 / 1 |
| Pokec-z-R | 1.82 / 1 | 1 / 1.29 | 1 / 2.63 |
| Pokec-z-G | 1 / 1 | 1 / 1.25 | 1 / 3.10 |
| Pokec-n-R | 2.12 / 1 | 2.51 / 1 | 3.50 / 1 |
| Pokec-n-G | 1.26 / 1 | 1 / 1.37 | 1 / 2.69 |
| AMiner-L | 1.18 / 1 | 1 / 1.36 | 1 / 6.81 |
In Table 1, the larger value in each pair marks the majority sensitive subgroup within that node set. When the overall distribution of sensitive features among all nodes differs from that of higher-order nodes, GTs tend to learn to predict based on the distribution of higher-order nodes; this may even lead to the opposite prediction. Even when the majority subgroup is the same in both distributions, the proportion of sensitive features in the prediction results aligns more closely with the higher-order nodes. Consequently, the proportions of sensitive features among higher-order nodes can affect GT fairness.
Theoretical Findings
We denote the similarity between the distribution of original sensitive features and the distribution of sensitive features when they are mapped to node embeddings as sensitive feature similarity. Node embeddings are computed by the global attention mechanism, so the mapped distribution is $\hat{\mathbf{A}}\mathbf{s}$. We select the Euclidean norm to measure similarity, via the distance $\|\mathbf{s} - \hat{\mathbf{A}}\mathbf{s}\|_2$. A higher similarity indicates greater independence of sensitive attributes during training, aligning with algorithmic fairness. This analysis helps us understand how the attention mechanism impacts the preservation or alteration of sensitive feature distributions during the embedding process. By comparing these distributions, we can assess whether a GT model introduces or mitigates bias, thereby uncovering the underlying causes of fairness issues in GTs.
Theorem 1 The sensitive feature similarity is bounded by the attention scores between nodes with different sensitive features.

$$\left\| \mathbf{s} - \hat{\mathbf{A}}\mathbf{s} \right\|_2 \;\leq\; \left( \sum_{i=1}^{n} \Big( \sum_{j:\, s_j \neq s_i} \hat{\mathbf{A}}[i,j] \Big)^{2} \right)^{1/2}. \tag{9}$$

Proof. Based on the Triangle Inequality: since $\sum_{j} \hat{\mathbf{A}}[i,j] = 1$, the $i$-th entry satisfies $s_i - (\hat{\mathbf{A}}\mathbf{s})_i = \sum_{j} \hat{\mathbf{A}}[i,j](s_i - s_j) = \sum_{j:\, s_j \neq s_i} \hat{\mathbf{A}}[i,j](s_i - s_j)$. Because $|s_i - s_j| = 1$ whenever $s_j \neq s_i$, it follows that $|s_i - (\hat{\mathbf{A}}\mathbf{s})_i| \leq \sum_{j:\, s_j \neq s_i} \hat{\mathbf{A}}[i,j]$. Taking the Euclidean norm over all $n$ entries yields (9). ∎
Since global attention tends to concentrate on higher-order nodes (Xing et al. 2024), the fairness of the algorithm is predominantly influenced by these higher-order nodes with different sensitive features.
Next, we consider GT with partitioning. We denote $\hat{\mathbf{A}}^{GP}$ as the GT attention score matrix with graph partitioning. We assume even partitions, and the number of partitions (i.e., clusters) is $C$, so each cluster contains $n/C$ nodes. Let $\mathbf{B}[m,n']$ represent the attention score between clusters $m$ and $n'$. Then the approximate attention score between node $i \in \mathcal{V}_m$ and node $j \in \mathcal{V}_{n'}$ (in different clusters) with graph partitioning can be expressed as $\hat{\mathbf{A}}^{GP}[i,j] = \frac{C}{n}\mathbf{B}[m,n']$ (Xing et al. 2024). Thus,

$$\sum_{j \in \mathcal{V}_{n'}} \hat{\mathbf{A}}^{GP}[i,j] = \mathbf{B}[m,n'], \qquad i \in \mathcal{V}_m,\; m \neq n'. \tag{10}$$
Lemma 1 The sensitive feature similarity is bounded by $\sqrt{n}$, whether using a vanilla GT or a GT with graph partitioning.

$$\left\| \mathbf{s} - \hat{\mathbf{A}}\mathbf{s} \right\|_2 \leq \sqrt{n}, \qquad \left\| \mathbf{s} - \hat{\mathbf{A}}^{GP}\mathbf{s} \right\|_2 \leq \sqrt{n}. \tag{11}$$
The proof of Lemma 1 is shown in Appendix C.
Theorem 2 Compared to vanilla GT, the change in sensitive feature similarity in GT with graph partitioning is constrained by the attention mechanism between different clusters.

$$\Big| \left\| \mathbf{s} - \hat{\mathbf{A}}\mathbf{s} \right\|_2 - \left\| \mathbf{s} - \hat{\mathbf{A}}^{GP}\mathbf{s} \right\|_2 \Big| \;\leq\; \left\| (\hat{\mathbf{A}} - \hat{\mathbf{A}}^{GP})\,\mathbf{s} \right\|_2, \tag{12}$$

where the right-hand side is determined by the inter-cluster attention scores $\mathbf{B}[m,n']$ with $m \neq n'$.
Proof. Based on Lemma 1, both similarities are bounded by $\sqrt{n}$. In addition, by the Triangle Inequality,

$$\Big| \left\| \mathbf{s} - \hat{\mathbf{A}}\mathbf{s} \right\|_2 - \left\| \mathbf{s} - \hat{\mathbf{A}}^{GP}\mathbf{s} \right\|_2 \Big| \leq \left\| (\hat{\mathbf{A}} - \hat{\mathbf{A}}^{GP})\,\mathbf{s} \right\|_2.$$

Because intra-cluster attention coincides with the whole-graph attention, the difference $\hat{\mathbf{A}} - \hat{\mathbf{A}}^{GP}$ is supported only on inter-cluster node pairs. In addition, based on equation (10), if $i \in \mathcal{V}_m$ and $j \in \mathcal{V}_{n'}$ with $m \neq n'$, the corresponding entries are governed by the inter-cluster scores $\mathbf{B}[m,n']$. Thus, the change in sensitive feature similarity is constrained by the attention between different clusters. ∎

The sensitive feature similarity bound is maximized when all inter-cluster attention vanishes, i.e., $\mathbf{B}[m,n'] = 0$ for $m \neq n'$. Thus, setting inter-cluster attention scores to zero enhances GT fairness.
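The bounds above are easy to check numerically. The following NumPy sketch (ours, not the authors' code; the random row-stochastic matrix stands in for a softmax attention output) verifies the per-node step in the proof of Theorem 1 and the $\sqrt{n}$ bound of Lemma 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.random((n, n))
A /= A.sum(axis=1, keepdims=True)          # row-stochastic, like a softmax output
s = rng.integers(0, 2, size=n).astype(float)

diff = s - A @ s
# attention mass each node sends to the *other* sensitive subgroup
cross = np.array([A[i, s != s[i]].sum() for i in range(n)])
assert np.all(np.abs(diff) <= cross + 1e-12)                  # per-node step in the proof
assert np.linalg.norm(diff) <= np.linalg.norm(cross) + 1e-9   # Theorem 1, Equation (9)
assert np.linalg.norm(diff) <= np.sqrt(n)                     # Lemma 1, Equation (11)
print(np.linalg.norm(diff), np.linalg.norm(cross), np.sqrt(n))
```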
FairGP
We are now ready to present our model FairGP to enhance GT fairness with graph partitioning. An overview of FairGP is shown in Figure 2. We first encode the structural topology by constructing the node feature matrix. Next, we partition the graph into different clusters, which helps reduce attention computation time complexity and enhance fairness. Finally, we mitigate inter-cluster attention to address fairness issues arising from the higher-order node bias.
Constructing the Feature Matrix
The node feature matrix is essential for graph mining tasks. We select the eigenvectors corresponding to the $k$ largest eigenvalues of the adjacency matrix to construct the structure matrix $\mathbf{S} \in \mathbb{R}^{n \times k}$. Then, we combine the original feature matrix with the structure matrix to preserve both node features and structural information:

$$\hat{\mathbf{X}} = \mathbf{X} \parallel \mathbf{S}, \tag{13}$$

where $\parallel$ represents the concatenation operator, and $\hat{\mathbf{X}} \in \mathbb{R}^{n \times (d+k)}$ denotes the fused feature matrix. FairGT (Luo et al. 2024b) shows that the largest eigenvalues are closely tied to structural information, which tends to be fairer. Thus, the fused feature matrix contributes to a fairer encoding.
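A minimal sketch of this construction (ours), assuming $\mathbf{S}$ is built from the $k$ leading eigenvectors of a sparse symmetric adjacency matrix via SciPy's `eigsh`; the sizes are toy values:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def fused_features(adj: sp.csr_matrix, X: np.ndarray, k: int = 8) -> np.ndarray:
    """Equation (13): X' = X || S, with S the k leading eigenvectors of adj."""
    _, S = eigsh(adj, k=k, which='LA')     # Lanczos/Arnoldi-type sparse eigensolver
    return np.concatenate([X, S], axis=1)

n = 1000
adj = sp.random(n, n, density=0.01, format='csr')
adj = adj + adj.T                          # symmetrize so eigsh applies
X = np.random.rand(n, 16)
print(fused_features(adj, X).shape)        # (1000, 24)
```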
Graph Partitioning
According to the empirical observations and Theorem 1, graph partitioning improves the fairness of GT by reducing the biased influence of higher-order nodes. FairGP partitions the graph into $C$ non-overlapping clusters using METIS (Karypis and Kumar 1998). Let $\mathcal{G}_m$ represent a subgraph of $\mathcal{G}$ with node set $\mathcal{V}_m$, satisfying $\bigcup_{m=1}^{C} \mathcal{V}_m = \mathcal{V}$ and $\mathcal{V}_m \cap \mathcal{V}_{n'} = \emptyset$ for $m \neq n'$.
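For illustration, a non-overlapping METIS partition can be obtained with the `pymetis` bindings (one possible interface, not necessarily the one used by the authors; the toy adjacency list is hypothetical):

```python
import pymetis

# Toy undirected graph as an adjacency list (node i -> list of neighbours).
adjacency = [[1, 2], [0, 2], [0, 1, 3], [2, 4], [3]]
n_clusters = 2
_, membership = pymetis.part_graph(n_clusters, adjacency=adjacency)
clusters = [[v for v, m in enumerate(membership) if m == c]
            for c in range(n_clusters)]
print(clusters)  # every node belongs to exactly one cluster
```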
Fairness-aware Attention Optimization
Existing Graph Transformers utilize the global attention mechanism to capture information between all node pairs, causing the higher-order node bias. The updated hidden representations $\mathbf{H}'$ are obtained as:

$$\mathbf{H}' = \mathrm{FFN}\left( \mathrm{softmax}\left( \frac{\mathbf{Q}\mathbf{K}^\top}{\sqrt{d}} \right) \mathbf{V} \right), \tag{14}$$

where $\mathbf{Q} = \hat{\mathbf{X}}\mathbf{W}_Q$, $\mathbf{K} = \hat{\mathbf{X}}\mathbf{W}_K$, and $\mathbf{V} = \hat{\mathbf{X}}\mathbf{W}_V$. The $\mathbf{W}_Q$, $\mathbf{W}_K$, and $\mathbf{W}_V$ are trainable weights of the linear layers in the Transformer, and FFN represents a feed-forward neural network.
FairGP separates the attention into inter-cluster and intra-cluster attention. For the intra-cluster attention, where nodes $i$ and $j$ belong to the same cluster $\mathcal{V}_m$, the attention is the same as the whole-graph attention score:

$$\hat{\mathbf{A}}^{GP}[i,j] = \hat{\mathbf{A}}[i,j], \qquad i, j \in \mathcal{V}_m. \tag{15}$$
| Methods | Pokec-z-R | | | | Pokec-z-G | | | |
|---|---|---|---|---|---|---|---|---|
| | ACC(%) | AUC(%) | $\Delta_{SP}$(%) | $\Delta_{EO}$(%) | ACC(%) | AUC(%) | $\Delta_{SP}$(%) | $\Delta_{EO}$(%) |
| GCN | | | | | | | | |
| GAT | | | | | | | | |
| APPNP | | | | | | | | |
| FairGNN | | | | | | | | |
| FairSIN | | | | | | | | |
| FMP | | | | | | | | |
| FUGNN | | | | | | | | |
| DIFFormer | | | | | | | | |
| SGFormer | | | | | | | | |
| Polynormer | | | | | | | | |
| CoBFormer | | | | | | | | |
| FairGT | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM |
| FairGP | | | | | | | | |

| Methods | Credit | | | | AMiner-L | | | |
|---|---|---|---|---|---|---|---|---|
| | ACC(%) | AUC(%) | $\Delta_{SP}$(%) | $\Delta_{EO}$(%) | ACC(%) | AUC(%) | $\Delta_{SP}$(%) | $\Delta_{EO}$(%) |
| GCN | | | | | | | | |
| GAT | | | | | | | | |
| APPNP | | | | | | | | |
| FairGNN | | | | | | | | |
| FairSIN | OOM | OOM | OOM | OOM | | | | |
| FMP | | | | | | | | |
| FUGNN | | | | | | | | |
| DIFFormer | | | | | | | | |
| SGFormer | | | | | | | | |
| Polynormer | | | | | | | | |
| CoBFormer | | | | | | | | |
| FairGT | OOM | OOM | OOM | OOM | | | | |
| FairGP | | | | | | | | |
According to Theorem 2, setting the attention between different clusters to zero helps improve fairness. To achieve this goal, we optimize the attention mechanism accordingly:

$$\hat{\mathbf{A}}^{GP}[i,j] = 0, \qquad i \in \mathcal{V}_m,\; j \in \mathcal{V}_{n'},\; m \neq n'. \tag{16}$$
This effectively nullifies the attention between different clusters, thereby enhancing the GT fairness. By focusing on intra-cluster attention and eliminating inter-cluster attention, FairGP ensures a fairer representation of the graph’s structural information.
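A minimal PyTorch sketch (ours; the cluster index lists are assumed to come from the METIS step) of the resulting block-diagonal attention, where Equations (15)-(16) reduce to computing attention independently within each cluster:

```python
import torch

def fairgp_attention(H, W_Q, W_K, W_V, clusters):
    """Attention with zero inter-cluster scores (Eq. 16), computed per cluster (Eq. 15)."""
    d_K = W_K.shape[1]
    out = torch.zeros(H.shape[0], W_V.shape[1])
    for idx in clusters:                      # idx: node indices of one cluster
        Hc = H[idx]
        Qc, Kc, Vc = Hc @ W_Q, Hc @ W_K, Hc @ W_V
        A_c = torch.softmax(Qc @ Kc.T / d_K ** 0.5, dim=-1)  # intra-cluster scores only
        out[idx] = A_c @ Vc
    return out

n, d = 8, 4
H = torch.randn(n, d)
W_Q, W_K, W_V = (torch.randn(d, d) for _ in range(3))
clusters = [torch.tensor([0, 1, 2, 3]), torch.tensor([4, 5, 6, 7])]
print(fairgp_attention(H, W_Q, W_K, W_V, clusters).shape)  # torch.Size([8, 4])
```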
Complexity of FairGP
By dividing the graph into smaller, more manageable clusters, we reduce the computational burden while preserving essential structural information. Each cluster contains about $n/C$ nodes, so the time and space complexity of attention within one cluster is $O((n/C)^2)$, leading to an overall time and space complexity for FairGP of $O(n^2/C)$. For example, with $n = 10^5$ nodes and $C = 100$ clusters, the number of attention scores drops from $10^{10}$ to $10^8$. This makes our approach both scalable and efficient, ensuring that FairGP can handle large-scale graphs effectively.
Experiments
Datasets and Implementation details
For the experimental study, node classification serves as the downstream task, with four real-world datasets: Credit, Pokec-z, Pokec-n, and AMiner-L. Specifically, the Pokec datasets are labelled as Pokec-z-R and Pokec-n-R when living region is the sensitive feature, and Pokec-z-G and Pokec-n-G when gender is the sensitive feature. The details of the datasets and the implementation are given in Appendix D.
Baselines
We use three classes of baselines: GNNs, fairness-aware GNNs, and scalable GTs.
- GNNs: GCN (Kipf and Welling 2017), GAT (Velickovic et al. 2018), and APPNP (Klicpera, Bojchevski, and Gunnemann 2019).
- Fairness-aware GNNs: FairGNN (Dai and Wang 2023), FairSIN (Yang et al. 2024), FMP (Jiang et al. 2024), and FUGNN (Luo et al. 2024a), together with the fairness-aware GT FairGT (Luo et al. 2024b).
- Scalable GTs: DIFFormer (Wu et al. 2023a), SGFormer (Wu et al. 2023b), Polynormer (Deng, Yue, and Zhang 2024), and CoBFormer (Xing et al. 2024).
Multi-Class Sensitive Features
There are multi-class sensitive features in the AMiner-L dataset. The previous fairness evaluation metrics are formulated for binary scenarios, but they can be easily extended to multi-class sensitive feature scenarios following previous work (Jiang et al. 2023). The main idea is to guarantee fairness across all pairs of sensitive subgroups, which can be quantified by the variance across the different sensitive subgroups.
Specifically, the fairness evaluation metrics in multi-class sensitive features are defined as follows:
$$\Delta_{SP} = \mathrm{Var}\big( \{ P(\hat{y} = 1 \mid s = i) \}_{i=1}^{K} \big), \qquad \Delta_{EO} = \mathrm{Var}\big( \{ P(\hat{y} = 1 \mid y = 1, s = i) \}_{i=1}^{K} \big), \tag{17}$$

where $K$ denotes the number of sensitive classes.
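A direct NumPy sketch of this variance-based extension (our illustration; array names are hypothetical):

```python
import numpy as np

def delta_sp_multi(y_hat: np.ndarray, s: np.ndarray) -> float:
    """Equation (17), left: variance of acceptance rates across the K sensitive classes."""
    return np.var([y_hat[s == c].mean() for c in np.unique(s)])

def delta_eo_multi(y_hat: np.ndarray, y: np.ndarray, s: np.ndarray) -> float:
    """Equation (17), right: variance of true-positive rates across the K classes."""
    pos = y == 1
    return np.var([y_hat[pos & (s == c)].mean() for c in np.unique(s)])
```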
Comparison Results
Table 2 offers a detailed comparison of the fairness evaluation metrics for our proposed FairGP method against various baseline models across four real-world datasets. Further comparison results for the other datasets are included in Appendix E. Notably, Pokec-z-R and Pokec-z-G are the same dataset examined under two different sensitive features; the consistent results across both confirm that FairGP effectively handles the multi-sensitive-feature setting. We report overall Accuracy, AUC, $\Delta_{SP}$, and $\Delta_{EO}$. The table clearly indicates that FairGP consistently achieves superior fairness, validating the efficacy of our approach in fairness-aware node classification. FairGP not only improves fairness compared to traditional GTs but also surpasses existing fairness-aware GNN methods in fairness metrics. This highlights its effectiveness in addressing fairness-related issues in scalable GTs, demonstrating the practicality of FairGP for real-world applications.
Ablation Study
FairGP as a fair scalable GT achieves its objectives through three key components: feature matrix, graph partitioning and attention optimization. To comprehensively assess the contributions of these three elements, we conducted an ablation study. Specifically, we aim to examine the extent to which feature matrix, graph partitioning and attention optimization contribute to improving prediction fairness. In this analysis, we systematically remove each component independently to assess their individual impact.
| Dataset | Metric | FairGP | w/o FM | w/o GP | w/o AO |
|---|---|---|---|---|---|
| Pokec-z-R | ACC | | | | |
| | AUC | | | | |
| Pokec-z-G | ACC | | | | |
| | AUC | | | | |
| AMiner-L | ACC | | | | |
| | AUC | | | | |
The FairGP without graph partitioning is denoted as w/o GP. Since GT without graph partitioning encounters out-of-memory issues in large-scale graphs, we employ mini-batch processing, a typical scalable technique, to keep it running. Even when compared to results using the mini-batch technique, FairGP shows a significant improvement in fairness, with enhancements of at least 25% across all datasets, and in some instances, exceeding 90%. These results validate our findings that graph partitioning improves GT fairness.
The FairGP with a feature matrix that excludes structural features is denoted as w/o FM, and FairGP without attention optimization is denoted as w/o AO. Notably, when compared with w/o FM and w/o AO, FairGP consistently outperforms them in terms of both accuracy and fairness. The results shown in Table 3 reinforce the necessity of graph partitioning and attention optimization within FairGP. More ablation experiments on the other datasets are shown in Appendix F.
Training Cost Comparison
FairGP, which partitions a graph into clusters, is efficient compared to other GTs. To validate the efficiency of FairGP, we compare its training time with the GT baselines. For a fair comparison, we standardize key parameters across all methods, setting the hidden dimension to 128, the number of layers to 1, and the number of heads to 1. In addition, all models are trained for 100 epochs, and runtimes are reported in seconds. The runtimes for different sensitive features on the same dataset are identical; therefore, we combine the results of Pokec-z-R and Pokec-z-G into Pokec-z, and apply the same approach to Pokec-n. The runtimes of FairGP and the GT baselines are shown in Table 4, and the runtimes for computing the structure matrix $\mathbf{S}$ are shown in Appendix G.
| Dataset | DIFFormer | SGFormer | Polynormer | CoBFormer | FairGP |
|---|---|---|---|---|---|
| Pokec-z | | | | | |
| Pokec-n | | | | | |
| Aminer-L | | | | | |
Conclusion
We showed how graph partitioning addresses GT fairness issues on large-scale graphs. We proposed FairGP, a novel algorithm to enhance the fairness of scalable GTs. FairGP is built on graph partitioning: graph structure information is encoded in the node features to enhance the sensitive feature similarity, and the attention computation is adapted to decrease the negative impact of global attention. We also presented an empirical study of the efficacy of the proposed approach. Across diverse real-world datasets, FairGP achieves the best $\Delta_{SP}$ and $\Delta_{EO}$ compared with state-of-the-art methods. Moreover, in the vast majority of cases, FairGP not only improves fairness but also achieves leading predictive performance, further demonstrating its effectiveness and superiority in large-scale graph applications. Since FairGP relies on access to the entire graph for partitioning, it is less suited for applications where privacy preservation or significant misinformation is a concern. In future work, we aim to extend FairGP to situations with limited sensitive features and to attribute-constrained and more complex scenarios.
References
- Agarwal, N.; Sirohi, A. K.; Kumar, S.; and Jayadeva. 2024. No Prejudice! Fair Federated Graph Neural Networks for Personalized Recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence.
- Blondel, V. D.; Guillaume, J.-L.; Lambiotte, R.; and Lefebvre, E. 2008. Fast Unfolding of Communities in Large Networks. Journal of Statistical Mechanics: Theory and Experiment, 2008(10): P10008.
- Chen, J.; Gao, K.; Li, G.; and He, K. 2023. NAGphormer: A Tokenized Graph Transformer for Node Classification in Large Graphs. In International Conference on Learning Representations.
- Dai, E.; and Wang, S. 2023. Learning Fair Graph Neural Networks with Limited and Private Sensitive Attribute Information. IEEE Transactions on Knowledge and Data Engineering, 35(7): 7103–7117.
- Dao, T.; Fu, D.; Ermon, S.; Rudra, A.; and Re, C. 2022. FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness. In Advances in Neural Information Processing Systems.
- Deng, C.; Yue, Z.; and Zhang, Z. 2024. Polynormer: Polynomial-Expressive Graph Transformer in Linear Time. In International Conference on Learning Representations.
- Dong, Y.; Ma, J.; Wang, S.; Chen, C.; and Li, J. 2023. Fairness in Graph Mining: A Survey. IEEE Transactions on Knowledge and Data Engineering, 35(01): 10583–10602.
- Dwork, C.; Hardt, M.; Pitassi, T.; Reingold, O.; and Zemel, R. S. 2012. Fairness through Awareness. In Proceedings of Innovations in Theoretical Computer Science.
- Guo, D.; Chu, Z.; and Li, S. 2023. Fair Attribute Completion on Graph with Missing Attributes. In International Conference on Learning Representations.
- Hardt, M.; Price, E.; and Srebro, N. 2016. Equality of Opportunity in Supervised Learning. In Advances in Neural Information Processing Systems.
- I-cheng, Y.; and Che-hui, L. 2009. The Comparisons of Data Mining Techniques for the Predictive Accuracy of Probability of Default of Credit Card Clients. Expert Systems with Applications, 36(2): 2473–2480.
- Jiang, Z.; Han, X.; Fan, C.; Liu, Z.; Mostafavi, A.; and Hu, X. 2023. Fairness and Explainability: Bridging the Gap towards Fair Model Explanations. In Proceedings of the AAAI Conference on Artificial Intelligence.
- Jiang, Z.; Han, X.; Fan, C.; Liu, Z.; Mostafavi, A.; and Hu, X. 2024. Chasing Fairness in Graphs: A GNN Architecture Perspective. In Proceedings of the AAAI Conference on Artificial Intelligence.
- Karypis, G.; and Kumar, V. 1998. A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs. SIAM Journal on Scientific Computing, 20(1): 359–392.
- Kipf, T. N.; and Welling, M. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In International Conference on Learning Representations.
- Klicpera, J.; Bojchevski, A.; and Gunnemann, S. 2019. Predict then Propagate: Graph Neural Networks Meet Personalized PageRank. In International Conference on Learning Representations.
- Ling, H.; Jiang, Z.; Luo, Y.; Ji, S.; and Zou, N. 2023. Learning Fair Graph Representations via Automated Data Augmentations. In International Conference on Learning Representations.
- Luo, R.; Huang, H.; Yu, S.; Han, Z.; He, E.; Zhang, X.; and Xia, F. 2024a. FUGNN: Harmonizing Fairness and Utility in Graph Neural Networks. In Proceedings of the 30th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
- Luo, R.; Huang, H.; Yu, S.; Zhang, X.; and Xia, F. 2024b. FairGT: A Fairness-aware Graph Transformer. In Proceedings of the 33rd International Joint Conference on Artificial Intelligence.
- Luo, R.; Tang, T.; Xia, F.; Liu, J.; Xu, C.; Zhang, L. Y.; Xiang, W.; and Zhang, C. 2024c. Algorithmic Fairness: A Tolerance Perspective. arXiv preprint arXiv:2405.09543.
- Pan, C.; Xu, J.; Yu, Y.; Yang, Z.; Wu, Q.; Wang, C.; Chen, L.; and Yang, Y. 2024. Towards Fair Graph Federated Learning via Incentive Mechanisms. In Proceedings of the AAAI Conference on Artificial Intelligence.
- Shehzad, A.; Xia, F.; Abid, S.; Peng, C.; Yu, S.; Zhang, D.; and Verspoor, K. 2024. Graph Transformers: A Survey. arXiv preprint arXiv:2407.09777.
- Takac, L.; and Michal, Z. 2012. Data Analysis in Public Social Networks. In Proceedings of the International Scientific Conference and International Workshop Present Day Trends of Innovations.
- Traag, V. A.; Waltman, L.; and Van Eck, N. J. 2019. From Louvain to Leiden: Guaranteeing Well-connected Communities. Scientific Reports, 9(1): 1–12.
- Velickovic, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; and Bengio, Y. 2018. Graph Attention Networks. In International Conference on Learning Representations.
- Wan, H.; Zhang, Y.; Zhang, J.; and Tang, J. 2019. AMiner: Search and Mining of Academic Social Networks. Data Intelligence, 1(1): 58–76.
- Wang, Z.; Xie, Y.; Li, Z.; Jia, X.; Jiang, Z.; Jia, A.; and Xu, S. 2024. SimFair: Physics-guided Fairness-aware Learning with Simulation Models. In Proceedings of the AAAI Conference on Artificial Intelligence.
- Wu, Q.; Yang, C.; Zhao, W.; He, Y.; Wipf, D.; and Yan, J. 2023a. DIFFormer: Scalable Graph Transformers Induced by Energy Constrained Diffusion. In International Conference on Learning Representations.
- Wu, Q.; Zhao, W.; Yang, C.; Zhang, H.; Nie, F.; Jiang, H.; Bian, Y.; and Yan, J. 2023b. SGFormer: Simplifying and Empowering Transformers for Large-Graph Representations. In Advances in Neural Information Processing Systems.
- Xing, Y.; Wang, X.; Li, Y.; Huang, H.; and Shi, C. 2024. Less is More: On the Over-Globalizing Problem in Graph Transformers. In International Conference on Machine Learning.
- Yang, C.; Liu, J.; Yan, Y.; and Shi, C. 2024. FairSIN: Achieving Fairness in Graph Neural Networks through Sensitive Information Neutralization. In Proceedings of the AAAI Conference on Artificial Intelligence.
- Yin, S.; and Zhong, G. 2023. LGI-GT: Graph Transformers with Local and Global Operators Interleaving. In Proceedings of the 32nd International Joint Conference on Artificial Intelligence.
- Zhang, B.; Dong, Y.; Chen, C.; Zhu, Y.; Luo, M.; and Li, J. 2024a. Adversarial Attacks on Fairness of Graph Neural Networks. In International Conference on Learning Representations.
- Zhang, G.; Cheng, D.; Yuan, G.; and Zhang, S. 2024b. Learning Fair Representations via Rebalancing Graph Structure. Information Processing and Management, 61: 103570.
- Zhang, Z.; Zhang, M.; Yu, Y.; Yang, C.; Liu, J.; and Shi, C. 2024c. Endowing Pre-trained Graph Models with Provable Fairness. In Proceedings of the Web Conference.
Appendix A.1
Experiment Settings: All experiments are conducted on the Ubuntu operating system, using two V100 GPUs with 16 GB memory each, with CUDA 11.6, PyTorch 1.13.1, and Torch Geometric 2.5.3. For hyperparameters, we search the hidden dimension over {16, 32, 64, 128} and the number of partition clusters over {100, 150, 200, 250, 300}, and choose the METIS algorithm (Karypis and Kumar 1998) for graph partitioning. For the training set size, we select 6,000 nodes for Credit, 5,000 nodes for AMiner-L, and 1,000 nodes for the remaining datasets. Note that vanilla GT would cause an out-of-memory error; therefore, we implemented it with a mini-batch strategy.
Impact of Graph Partitioning on Fairness: We also test the effect of different partitioning strategies on GT fairness. Specifically, we select Louvain (Blondel et al. 2008), Leiden (Traag, Waltman, and Van Eck 2019), and random partitioning. The hidden dimension is set to 64 and the cluster number to 100; the results are shown in Tables 5, 6, and 7. Using partitioning strategies improves fairness performance in our experiments.
| Dataset | $\Delta_{SP}$(%) | | $\Delta_{EO}$(%) | |
|---|---|---|---|---|
| | Louvain | Vanilla | Louvain | Vanilla |
| Aminer-L | | | | |
| Credit | | | | |
| Pokec-n-G | | | | |
| Pokec-n-R | | | | |
| Pokec-z-G | | | | |
| Pokec-z-R | | | | |
| Dataset | $\Delta_{SP}$(%) | | $\Delta_{EO}$(%) | |
|---|---|---|---|---|
| | Leiden | Vanilla | Leiden | Vanilla |
| Aminer-L | | | | |
| Credit | | | | |
| Pokec-n-G | | | | |
| Pokec-n-R | | | | |
| Pokec-z-G | | | | |
| Pokec-z-R | | | | |
| Dataset | $\Delta_{SP}$(%) | | $\Delta_{EO}$(%) | |
|---|---|---|---|---|
| | Random | Vanilla | Random | Vanilla |
| Aminer-L | | | | |
| Credit | | | | |
| Pokec-n-G | | | | |
| Pokec-n-R | | | | |
| Pokec-z-G | | | | |
| Pokec-z-R | | | | |
Appendix A.2
The experimental settings for the empirical observations are the same as in Appendix A.1. The statistical information is based on the node features in each dataset. We denote nodes with degrees greater than 100 as higher-order nodes.
Appendix B
We normalize the minority subgroups as $1$. We denote $\mathcal{H}$ as the higher-order node set, and let $\mathcal{H}_0 = \{v \in \mathcal{H} : s_v = 0\}$ and $\mathcal{H}_1 = \{v \in \mathcal{H} : s_v = 1\}$. For higher-order nodes, if $|\mathcal{H}_0| \geq |\mathcal{H}_1|$:

$$r_0^{\mathcal{H}} = \frac{|\mathcal{H}_0|}{|\mathcal{H}_1|}, \qquad r_1^{\mathcal{H}} = 1, \tag{18}$$

and, if $|\mathcal{H}_0| < |\mathcal{H}_1|$:

$$r_0^{\mathcal{H}} = 1, \qquad r_1^{\mathcal{H}} = \frac{|\mathcal{H}_1|}{|\mathcal{H}_0|}. \tag{19}$$

For prediction results, let $\hat{\mathcal{P}}_0 = \{v : \hat{y}_v = 1, s_v = 0\}$ and $\hat{\mathcal{P}}_1 = \{v : \hat{y}_v = 1, s_v = 1\}$. If $|\hat{\mathcal{P}}_0| \geq |\hat{\mathcal{P}}_1|$:

$$r_0^{\hat{y}} = \frac{|\hat{\mathcal{P}}_0|}{|\hat{\mathcal{P}}_1|}, \qquad r_1^{\hat{y}} = 1, \tag{20}$$

and if $|\hat{\mathcal{P}}_0| < |\hat{\mathcal{P}}_1|$:

$$r_0^{\hat{y}} = 1, \qquad r_1^{\hat{y}} = \frac{|\hat{\mathcal{P}}_1|}{|\hat{\mathcal{P}}_0|}. \tag{21}$$
Appendix C
Lemma 1 The similarity (measured by the Euclidean norm) between the distribution of original sensitive features and the distribution of sensitive features when they are mapped to node embeddings is bounded by $\sqrt{n}$, whether using a vanilla GT or a GT with graph partitioning.

$$\left\| \mathbf{s} - \hat{\mathbf{A}}\mathbf{s} \right\|_2 \leq \sqrt{n}, \qquad \left\| \mathbf{s} - \hat{\mathbf{A}}^{GP}\mathbf{s} \right\|_2 \leq \sqrt{n}. \tag{22}$$

Proof. Both $\hat{\mathbf{A}}$ and $\hat{\mathbf{A}}^{GP}$ are row-stochastic, so each entry of $\hat{\mathbf{A}}\mathbf{s}$ is a convex combination of the binary values $s_j \in \{0, 1\}$ and therefore lies in $[0, 1]$. Hence every component of $\mathbf{s} - \hat{\mathbf{A}}\mathbf{s}$ has absolute value at most $1$, giving $\|\mathbf{s} - \hat{\mathbf{A}}\mathbf{s}\|_2 \leq \sqrt{n}$; the same argument applies to $\hat{\mathbf{A}}^{GP}$. ∎
Appendix D
The statistics of these six datasets are shown in Table 8.
Dataset | # Nodes | # Edges | Sensitive feature | Label |
---|---|---|---|---|
Credit | Age | Credit | ||
Pokec-z-R | Region | Field | ||
Pokec-z-G | Gender | Field | ||
Pokec-n-R | Region | Field | ||
Pokec-n-G | Gender | Field | ||
AMiner-L | Affiliation | Field |
- Credit (I-cheng and Che-hui 2009): The edges of the Credit dataset are based on similarities in spending and payment patterns. Age is considered the sensitive feature, while the label indicates whether an individual defaults on credit card payments in the following month.
- Pokec-z and Pokec-n (Takac and Michal 2012): Pokec-z and Pokec-n are constructed by sampling users from two provinces, Zilinsky and Nitriansky, respectively. Each user is represented as a node, with edges denoting friendship relations. Node features are based on user profiles, including attributes such as living region, gender, spoken language, and age. The sensitive features are living region and gender, with the classification task focusing on predicting users' working fields.
- AMiner-L (Wan et al. 2019): This dataset is a coauthor network constructed from the AMiner network. The sensitive feature is the continent of each researcher's affiliation. The associated task is to predict the primary research field of each researcher.
For datasets with more than two classes of ground truth labels for the node classification task, we simplify the problem by keeping the first two label classes unchanged while merging the remaining classes into one of them. This simplification helps balance the classes and makes the task more manageable. Next, we partition the data by randomly selecting a fixed percentage of nodes as the validation set and the same percentage as the test set, ensuring that each category of nodes remains balanced in these sets. For the training set, we randomly select either a fixed percentage of nodes or a fixed number of nodes in each class of ground truth labels, whichever is smaller. This partitioning strategy is consistent with the approach used in several prior studies (Jiang et al. 2024; Luo et al. 2024a; Yang et al. 2024). By maintaining a balanced representation of classes in the training, validation, and test sets, we ensure that the model is evaluated fairly and consistently across different partitions. This strategy allows us to effectively measure the performance and fairness of the proposed FairGP method against the baseline approaches.
Appendix E
Further comparison results for the datasets Pokec-n-R and Pokec-n-G are shown in Table 9. Notably, Pokec-n-R and Pokec-n-G are the same dataset examined under two different sensitive features. The consistent results further validate FairGP's effectiveness in addressing the multi-sensitive-feature challenge.
| Methods | Pokec-n-R | | | |
|---|---|---|---|---|
| | ACC(%) | AUC(%) | $\Delta_{SP}$(%) | $\Delta_{EO}$(%) |
| GCN | | | | |
| GAT | | | | |
| APPNP | | | | |
| FairGNN | | | | |
| FairSIN | | | | |
| FMP | | | | |
| FUGNN | | | | |
| DIFFormer | | | | |
| SGFormer | | | | |
| Polynormer | | | | |
| CoBFormer | | | | |
| FairGT | OOM | OOM | OOM | OOM |
| FairGP | | | | |

| Methods | Pokec-n-G | | | |
|---|---|---|---|---|
| | ACC(%) | AUC(%) | $\Delta_{SP}$(%) | $\Delta_{EO}$(%) |
| GCN | | | | |
| GAT | | | | |
| APPNP | | | | |
| FairGNN | | | | |
| FairSIN | | | | |
| FMP | | | | |
| FUGNN | | | | |
| DIFFormer | | | | |
| SGFormer | | | | |
| Polynormer | | | | |
| CoBFormer | | | | |
| FairGT | OOM | OOM | OOM | OOM |
| FairGP | | | | |
Appendix F
Further ablation results for the Credit, Pokec-n-R, and Pokec-n-G datasets are shown in Table 10. Even when compared to results using the mini-batch technique, FairGP shows a significant improvement in fairness, with enhancements of at least 50% across these three datasets, and in some instances exceeding 96%. These results validate our findings that graph partitioning improves GT fairness.
| Dataset | Metric | FairGP | w/o FM | w/o GP | w/o AO |
|---|---|---|---|---|---|
| Credit | ACC | | | | |
| | AUC | | | | |
| Pokec-n-R | ACC | | | | |
| | AUC | | | | |
| Pokec-n-G | ACC | | | | |
| | AUC | | | | |
Appendix G
The time required to compute the structure matrix $\mathbf{S}$ is short, and $\mathbf{S}$ can be reused across different hyperparameter experiments on the same dataset. In particular, FairGP selects only the $k$ principal eigenvectors using the Arnoldi package, which keeps the computation time consistently low. Table 11 shows the runtime required for computing $\mathbf{S}$.
| Dataset | Pokec-z | Pokec-n | Aminer-L |
|---|---|---|---|
| Computing $\mathbf{S}$ (s) | 8.43 | 7.91 | 10.95 |