Simple Multigraph Convolution Networks
Abstract.
Existing multigraph convolution methods either ignore the cross-view interaction among multiple graphs, or induce extremely high computational cost due to standard cross-view polynomial operators. To alleviate this problem, this paper proposes Simple MultiGraph Convolution Networks (SMGCN), which first extract consistent cross-view topology from the multigraphs, including edge-level and subgraph-level topology, and then perform polynomial expansion based on the raw multigraphs and the consistent topologies. In theory, SMGCN utilizes the consistent topologies in the polynomial expansion rather than the standard cross-view polynomial expansion, which performs credible cross-view spatial message-passing, follows the spectral convolution paradigm, and effectively reduces the complexity of the standard polynomial expansion. Experimental results demonstrate that SMGCN achieves state-of-the-art performance on the ACM and DBLP multigraph benchmark datasets. Our code is publicly available.
1. Introduction
Multigraph convolution is an important tool in the fields of graph machine learning and signal processing, which aims to filter graph signals in the spatial and/or spectral domain of multiple graphs.
In recent years, multigraph convolution methods have been widely used in many fields, such as social network analysis (Sless et al., 2018; Cui et al., 2021), recommendation systems (Li et al., 2023; Zou et al., 2022), and bioinformatics (Yan et al., 2019; Bhadra et al., 2019), because their data are usually represented as a multigraph, which contains multiple views' graphs describing different relations over the same objects. For example, in social network analysis, nodes can be users, and different views can represent different social relations between users, such as friendship, following, and co-occurrence.
In recent years, several multigraph convolution methods have been proposed, including building robust node features to regularize and aggregate the embeddings learned from each view (Qu et al., 2017), learning embeddings of each view in parallel and concatenating their min, max and mean as the final embedding (Wang et al., 2020), summing the embeddings learned from different orders of each view's graph (Butler et al., 2023), and the Multi-Input Multi-Output GCN (MIMO-GCN) (Butler et al., 2023) that computes embeddings from all combinations of different views' graphs.

However, existing multigraph convolution methods still have difficulty in effectively resolving the conflict between effectiveness and efficiency. As Figure 1 shows, on the one hand, a part of the methods integrate the multigraphs directly, e.g., via selection or weighted-sum operators, which is efficient but ignores the cross-view interaction among multiple graphs. On the other hand, several methods such as MIMO (Butler et al., 2023) perform standard cross-view polynomial expansions on the multigraphs, which integrate the high-order structure from a global perspective, but whose computational cost rapidly increases with the number of polynomial orders and views. To alleviate this conflict, this paper proposes Simple MultiGraph Convolution Networks (SMGCN), which contain a consistent topology extraction module and a simple polynomial expansion module. To be specific, SMGCN first extracts the consistent cross-view topology at the edge level and the subgraph level, then performs polynomial expansion between the raw multigraphs and the edge-level and subgraph-level topologies to induce the filtered graph signal. More importantly, SMGCN is a simple and efficient method that performs credible cross-view spatial message-passing and follows the spectral convolution paradigm. In the experiments, we evaluate SMGCN on two well-known multigraph benchmark datasets, ACM and DBLP. The experimental results demonstrate the superiority of SMGCN from the perspectives of both effectiveness and efficiency.
The main contributions of this paper are summarized as the following two points:
• In theory, SMGCN performs cross-view interaction via a simple and efficient spectral convolution focusing on extracting edge-level and subgraph-level consistent topology. It effectively alleviates the conflict between effectiveness and efficiency in previous works.
• In the simulations, experimental results demonstrate that SMGCN can achieve state-of-the-art performance with lower computational cost.
Notations. In this paper, a $V$-view multigraph is denoted as $\mathcal{G} = \{G_1, \dots, G_V\}$ (we treat $V$ as larger than 2 by default, since standard cross-view polynomial expansion can be used directly when $V = 2$), where $G_v = (\mathcal{V}, \mathcal{E}_v)$, $\mathcal{V}$ is the set of nodes shared by all views, and $\mathcal{E}_v$ is the set of edges in view $v$. For ease of understanding, we use $\mathcal{A} = \{A_1, \dots, A_V\}$ to represent the adjacency matrices of all views' graphs, where $A_v \in \mathbb{R}^{n \times n}$ is the adjacency matrix of view $v$ and $n$ is the number of nodes. $D_v$ is the degree matrix of view $v$, where $D_v(i, i) = \sum_{j} A_v(i, j)$. $X \in \mathbb{R}^{n \times d}$ is the node feature matrix, where $d$ is the dimension of the node features. $I$ is the identity matrix. $W$ denotes a learnable projection matrix. $K$ is the order of polynomial expansion.
2. Related Works
In this section, we introduce some representative works on multigraph convolution methods, including Parallel GCN (P-GCN) (Wang et al., 2020), Merged GCN (M-GCN) (Butler et al., 2023) and Multi-Input Multi-Output GCN (MIMO-GCN) (Butler et al., 2023).
1) P-GCN is a fundamental method for multigraph convolution, which learns node embeddings based on each view's graph and then concatenates the min, max and mean of these embeddings as the final embedding. This method only focuses on the first-order information of each view's graph separately. The concrete formulation can be written as follows:
$$H_v = A_v X W_v, \quad v = 1, \dots, V,$$
$$Z = \big[\, \min(\{H_v\}_{v=1}^{V}) \;\|\; \max(\{H_v\}_{v=1}^{V}) \;\|\; \operatorname{mean}(\{H_v\}_{v=1}^{V}) \,\big], \qquad (1)$$
where $W_v \in \mathbb{R}^{d \times d'}$ is the projection matrix for the graph in view $v$ and $H_v \in \mathbb{R}^{n \times d'}$ is the node embedding matrix for the graph in view $v$. $Z \in \mathbb{R}^{n \times 3d'}$ is the final node embedding matrix, where $d'$ is the dimension of the node embedding and $\|$ denotes column-wise concatenation. $\min$, $\max$ and $\operatorname{mean}$ are set operators, which compute the minimum, maximum and mean values across all elements in all inputs, respectively.
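For concreteness, a minimal PyTorch sketch of this min/max/mean fusion is given below; the class name, dense adjacency inputs, and the absence of normalization or nonlinearity are our own illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class PGCNLayer(nn.Module):
    """Sketch of P-GCN: one projection per view, then element-wise min/max/mean fusion."""
    def __init__(self, num_views, d_in, d_out):
        super().__init__()
        self.lins = nn.ModuleList([nn.Linear(d_in, d_out, bias=False) for _ in range(num_views)])

    def forward(self, adjs, x):
        # adjs: list of V dense (n, n) adjacency matrices; x: (n, d_in) node features
        h = torch.stack([adj @ lin(x) for adj, lin in zip(adjs, self.lins)], dim=0)  # (V, n, d_out)
        # fuse across the view dimension and concatenate the three statistics
        return torch.cat([h.min(dim=0).values, h.max(dim=0).values, h.mean(dim=0)], dim=-1)
```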
2) M-GCN is a fundamental method for multigraph convolution, which considers the high-order information of each view's graph and then sums the embeddings learned from different orders of each view's graph as the final embedding. This method focuses on different orders of each view's graph separately. The concrete formulation can be written as follows:
$$Z = \sum_{v=1}^{V} \Big( X W_{v,0} + \sum_{k=1}^{K} A_v^{k} X W_{v,k} \Big), \qquad (2)$$
where $W_{v,0}$ is the projection matrix for the identity matrix (i.e., the zeroth order $A_v^{0} = I$), and $W_{v,k}$ is the projection matrix for the $k$-th order of $A_v$.
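A matching sketch of the per-view, per-order sum in M-GCN (same assumptions as above) could be:

```python
import torch
import torch.nn as nn

class MGCNLayer(nn.Module):
    """Sketch of M-GCN: sum embeddings from orders 0..K of every view's adjacency."""
    def __init__(self, num_views, K, d_in, d_out):
        super().__init__()
        self.K = K
        # one projection matrix per (view, order) pair; order 0 corresponds to the identity matrix
        self.lins = nn.ModuleList([
            nn.ModuleList([nn.Linear(d_in, d_out, bias=False) for _ in range(K + 1)])
            for _ in range(num_views)
        ])

    def forward(self, adjs, x):
        z = 0
        for adj, lins in zip(adjs, self.lins):
            p = x                      # A_v^0 X = X
            z = z + lins[0](p)
            for k in range(1, self.K + 1):
                p = adj @ p            # A_v^k X, computed iteratively
                z = z + lins[k](p)
        return z
```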
3) MIMO-GCN is the state-of-the-art method for multigraph convolution, which aims at high-order interaction among multiple graphs and considers every arrangement of the views' graphs with a given order $k$. The embeddings learned from all combinations of different views' graphs are then summed as the final embedding. This method focuses on the interaction among different views' graphs of different orders. The concrete formulation can be written as follows:
$$Z = \sum_{k=0}^{K} P_k(\mathcal{A}, X), \qquad (3)$$
where $P_k(\mathcal{A}, X)$ is the polynomial expansion of $\mathcal{A} = \{A_1, \dots, A_V\}$ with order $k$. It can be written as follows:
$$P_k(\mathcal{A}, X) = \sum_{(v_1, \dots, v_k) \in \mathcal{S}_k} A_{v_1} A_{v_2} \cdots A_{v_k} X W_{(v_1, \dots, v_k)}, \qquad (4)$$
where $\mathcal{S}_k$ is the set of all possible arrangements of length $k$ of $\{1, \dots, V\}$, and $W_{(v_1, \dots, v_k)}$ is the projection matrix for the arrangement $(v_1, \dots, v_k)$; for $k = 0$, $P_0(\mathcal{A}, X) = X W_{0}$.
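To make the combinatorial growth of Eq. (4) explicit, the sketch below enumerates every length-$k$ arrangement of views with `itertools.product`; the function and argument names are ours, and `lin_for` is assumed to map each arrangement tuple (including the empty tuple for order 0) to its own projection.

```python
import itertools

def mimo_polynomial(adjs, x, lin_for, K):
    """Sketch of the MIMO-GCN expansion: sum over all view arrangements of length 0..K.

    adjs: list of V dense (n, n) adjacencies; x: (n, d) features;
    lin_for: dict mapping an arrangement tuple (v_1, ..., v_k) to an nn.Linear projection.
    """
    z = lin_for[()](x)                                    # order-0 term: X W_0
    for k in range(1, K + 1):
        for arrangement in itertools.product(range(len(adjs)), repeat=k):
            p = x
            for v in reversed(arrangement):               # builds A_{v_1} ... A_{v_k} X
                p = adjs[v] @ p
            z = z + lin_for[arrangement](p)               # V^k terms at order k
    return z
```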
3. Simple Multigraph Convolution Networks (SMGCN)
In this section, we first propose the edge-level and subgraph-level consistent topology extraction modules, then introduce the proposed SMGCN model.
3.1. Credible Edge-level Topology Extraction
Given a multigraph $\mathcal{A} = \{A_1, \dots, A_V\}$, we first calculate the sum of all views, $S = \sum_{v=1}^{V} A_v$, then we filter the raw edges via a cross-view voting scheme, i.e., if an edge exists in at least two views, we treat it as a credible raw edge. The concrete formulation can be written as follows:
$$A_E(i, j) = \begin{cases} 1, & \text{if } \sum_{v=1}^{V} \mathbb{1}\!\left[A_v(i, j) > 0\right] \geq 2, \\ 0, & \text{otherwise}, \end{cases} \qquad (5)$$
where $A_E$ is the similarity matrix representing the credible edge-level topology (for unweighted graphs, the vote count equals $S(i, j)$). Obviously, since $A_E$ keeps only the edges supported by the cross-view topology, it can be seen as a summarization of the raw multigraph $\mathcal{A}$; thus it can be used as a representative structure to perform high-order interaction with the raw multigraph.
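As a quick illustration, the cross-view voting of Eq. (5) can be written in a few lines of PyTorch (assuming dense adjacencies and a vote threshold of two views, as described above):

```python
import torch

def credible_edge_topology(adjs, min_votes=2):
    """Sketch of Eq. (5): keep an edge only if it appears in at least `min_votes` views."""
    votes = torch.stack([(adj > 0).float() for adj in adjs], dim=0).sum(dim=0)  # (n, n) vote counts
    return (votes >= min_votes).float()                                          # credible topology A_E
```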
3.2. Credible Subgraph-level Topology Extraction
The edge-level topology extracted above summarizes the credible topology structures of the raw multigraphs and focuses on the global structure of each view's graph. In this subsection, we focus on extracting credible subgraph-level topology, i.e., latent subgraph structure. Formally, suppose $A_1, \dots, A_V$ are unweighted graphs (if they are weighted graphs, this step can be skipped); we first calculate the triangle-based weighted similarity matrix for each view as follows:
$$T_v = A_v \odot (A_v A_v), \qquad (6)$$
where $\odot$ denotes the element-wise product, so that $T_v(i, j)$ weights the edge $(i, j)$ by the number of triangles it participates in within view $v$.

In Figure 2, we illustrate the triangle-based weighted similarity matrix generation procedure. Then we extract the first $k$ nearest neighbors matrix for each view as follows:
$$\hat{T}_v(i, j) = \begin{cases} T_v(i, j), & \text{if } j \in \mathcal{N}_k^{(v)}(i), \\ 0, & \text{otherwise}, \end{cases} \qquad (7)$$
where $\mathcal{N}_k^{(v)}(i)$ is the set of the $k$ nearest neighbors of sample $i$ in the $v$-th view. Afterwards, following the same principle as the credible edge-level topology extraction, we calculate the sum of all views, $\hat{S} = \sum_{v=1}^{V} \hat{T}_v$, then we filter the edges via the cross-view voting scheme as follows:
$$A_S(i, j) = \begin{cases} 1, & \text{if } \sum_{v=1}^{V} \mathbb{1}\!\left[\hat{T}_v(i, j) > 0\right] \geq 2, \\ 0, & \text{otherwise}, \end{cases} \qquad (8)$$
where $A_S$ is the similarity matrix representing the credible subgraph-level topology. Naturally, $A_S$ can be seen as a summarization of the generated subgraph-level similarity matrices $\{\hat{T}_v\}_{v=1}^{V}$. More importantly, $A_S$ may introduce new edges that are absent from the raw multigraph due to the subgraph generation procedure, which provides a new perspective on the cross-view interactions of the multigraph.
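Under the same assumptions (dense, unweighted adjacencies) and with our reading of the triangle-based weighting as common-neighbor counts, Eqs. (6)-(8) can be sketched as:

```python
import torch

def credible_subgraph_topology(adjs, k=10, min_votes=2):
    """Sketch of Eqs. (6)-(8): triangle-based weights -> per-view k-NN -> cross-view voting."""
    kept = []
    for adj in adjs:
        tri = adj * (adj @ adj)                                # (6): weight edge (i, j) by its triangle count
        keep = torch.zeros_like(tri)
        topk = torch.topk(tri, k=min(k, tri.size(1)), dim=1)   # (7): each node keeps its k strongest neighbors
        keep.scatter_(1, topk.indices, (topk.values > 0).float())
        kept.append(keep)
    votes = torch.stack(kept, dim=0).sum(dim=0)                # (8): cross-view voting
    return (votes >= min_votes).float()                        # credible topology A_S
```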
3.3. Simple High-order Interaction Formulation
According to the credible edge-level topology $A_E$ and the credible subgraph-level topology $A_S$, we design the following simple high-order interaction formulation to obtain the hidden-layer multigraph representation ($A_E$ and $A_S$ can be symmetrized in practice; this is optional):
$$H = X W_X + \sum_{k=1}^{K} \Big( A_E^{\,k} X W_{E,k} + A_S^{\,k} X W_{S,k} \Big), \qquad (9)$$
where $W_X$ is the projection matrix for the node features, $W_{E,k}$ and $W_{S,k}$ are the projection matrices for the $k$-th polynomial combinations of $A_E$ and $A_S$, and $K$ is the order of polynomial expansion. Compared with previous works such as the standard cross-view polynomial expansion of MIMO, SMGCN cleverly performs the cross-view interaction of the multigraph via the two extracted credible cross-view topologies $A_E$ and $A_S$, which significantly reduces the computational cost. More importantly, SMGCN can be seen as a pruned form of the standard cross-view high-order polynomial interaction among $\{A_1, \dots, A_V\}$, thus it also satisfies the spectral properties of the standard polynomial expansion, such as Theorem 1 and Lemma 2 in (Butler et al., 2023).
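Putting the pieces together, a minimal SMGCN layer under our reconstruction of Eq. (9) (dense matrices, no normalization, hypothetical class and parameter names) might look like:

```python
import torch
import torch.nn as nn

class SMGCNLayer(nn.Module):
    """Sketch of Eq. (9): a feature term plus K powers of A_E and A_S, each with its own projection."""
    def __init__(self, K, d_in, d_out):
        super().__init__()
        self.lin_x = nn.Linear(d_in, d_out, bias=False)
        self.lin_e = nn.ModuleList([nn.Linear(d_in, d_out, bias=False) for _ in range(K)])
        self.lin_s = nn.ModuleList([nn.Linear(d_in, d_out, bias=False) for _ in range(K)])

    def forward(self, a_e, a_s, x):
        h = self.lin_x(x)
        p_e, p_s = x, x
        for lin_e, lin_s in zip(self.lin_e, self.lin_s):
            p_e, p_s = a_e @ p_e, a_s @ p_s    # A_E^k X and A_S^k X, computed iteratively
            h = h + lin_e(p_e) + lin_s(p_s)
        return h
```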
Computational Cost Comparison. We can evaluate the computational cost of SMGCN based on the number of parameters. For instance, suppose all projection matrices (including $W_X$, $W_{E,k}$ and $W_{S,k}$) have size $d \times d'$ and the order is $K$. In SMGCN, the number of parameters is $(2K + 1)\,d d'$, which is linear in the polynomial order. However, due to the standard cross-view polynomial expansion, the number of parameters of MIMO (without the post-processing pruning operation) is $\sum_{k=0}^{K} V^{k}\, d d' = \frac{V^{K+1} - 1}{V - 1}\, d d'$, which grows exponentially with $K$. According to Lemma 1 in (Butler et al., 2023), the post-processing pruning operation can remove a portion of these terms, but the concrete computational cost is hard to evaluate since it is determined by a threshold.
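For intuition, plugging the ACM setting ($V = 4$, $K = 3$) into the two counts above gives $2K + 1 = 7$ projection matrices for SMGCN versus $\sum_{k=0}^{3} 4^{k} = 85$ for unpruned MIMO; a throwaway helper (based on our reconstruction of the parameter counts) makes the gap explicit:

```python
def num_projections(V, K):
    """Compare projection-matrix counts: SMGCN (our reconstruction) vs. unpruned MIMO."""
    smgcn = 2 * K + 1                             # W_X plus one matrix per power of A_E and A_S
    mimo = sum(V ** k for k in range(K + 1))      # one matrix per view arrangement of length 0..K
    return smgcn, mimo

print(num_projections(4, 3))   # (7, 85) for the ACM setting with order 3
```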
4. Experiments
4.1. Benchmark Datasets
In this section, we introduce the benchmark datasets used in our experiments: ACM (Lv et al., 2021) and DBLP (Fu et al., 2020). To perform tasks on multi-view graphs, we follow the usage in (Fan et al., 2020), which turns a graph with multi-relational edges and multiple node types into a multi-view graph with a single node type. The statistics of these datasets are summarized in Table 1.
For the ACM dataset, we consider four views, including the co-author, co-subject, co-cite and co-refer relations between papers. For the DBLP dataset, we consider three views, including the co-paper, co-conference and co-term relations between authors. All the datasets are available in PyTorch Geometric (Fey and Lenssen, 2019).
Table 1. Statistics of the benchmark datasets.
Dataset | Target Node Type | #Nodes | Feature Dim. | #Views | View Names (Edge Number)
ACM     | Paper            | 3025   | 1902         | 4      | Co-author (30752), Co-subject (2214064), Co-cite (5343), Co-refer (5343)
DBLP    | Author           | 4057   | 334          | 3      | Co-paper (6572), Co-conference (3225446), Co-term (7611879)
Table 2. Node classification results on ACM and DBLP.
Model    | ACM (4 views) ACC | F1    | NMI   | DBLP (3 views) ACC | F1    | NMI
P-GCN    | 93.77             | 93.82 | 76.89 | 91.43              | 90.68 | 73.31
M-GCN    | 94.10             | 94.15 | 77.94 | 93.12              | 92.47 | 77.48
MIMO-GCN | 94.33             | 94.40 | 79.04 | 92.57              | 91.86 | 76.04
SMGCN    | 95.04             | 95.11 | 80.80 | 93.40              | 92.78 | 78.15
Figure 3. Number of parameters of different models on (a) the ACM dataset and (b) the DBLP dataset.
4.2. Experimental Settings and Evaluation Metrics
We perform node classification tasks on both datasets, using the given train/test split of each dataset. 10% of the nodes in the original training set are used as the validation set. We grid search the learning rate and weight decay over {0.1, 0.01, 0.001} and {0, 1e-5, 1e-4, 1e-3, 1e-2}, respectively. The optimizer is Adam. All hidden dimensions are set to 128. All activation functions are ReLU. One linear layer is used to project the node embedding to the classification space. Due to memory limitations, we set the maximum order of polynomial expansion to 3. Accuracy, macro F1 and NMI are used as the evaluation metrics.
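A plain grid-search loop matching these settings could be organized as below; `build_model`, `train`, and `evaluate` are hypothetical helpers standing in for the actual training pipeline.

```python
import itertools
import torch

LRS = [0.1, 0.01, 0.001]
WEIGHT_DECAYS = [0.0, 1e-5, 1e-4, 1e-3, 1e-2]

def grid_search(build_model, train, evaluate):
    """Pick the (lr, weight_decay) pair with the best validation accuracy."""
    best_cfg, best_acc = None, -1.0
    for lr, wd in itertools.product(LRS, WEIGHT_DECAYS):
        model = build_model(hidden_dim=128, K=3)   # hidden size and max order from Section 4.2
        optim = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=wd)
        train(model, optim)
        val_acc = evaluate(model, split="val")
        if val_acc > best_acc:
            best_cfg, best_acc = (lr, wd), val_acc
    return best_cfg, best_acc
```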
4.3. Results and Analysis
Results are shown in Table 2. We can see that our method outperforms all the baseline methods on both datasets, achieving state-of-the-art performance.
As Figure 3 shows, our method has the same cross-view modeling ability as MIMO-GCN, but has far fewer parameters and a much slower parameter growth as $K$ increases. Our method also has a parameter count comparable to simpler methods such as P-GCN and M-GCN, which have no cross-view modeling ability. The visualization of the learned node embeddings is shown in Figure 4, which shows that our method learns effective node embeddings reflecting the node label information. The visualization of the learned $A_E$ and $A_S$ is shown in Figure 5, which shows that $A_E$ focuses more on the global structures, while $A_S$ focuses more on the local structures.
Figure 4. Visualization of (a) raw node features and (b) learned node embeddings.
Figure 5. Visualization of the learned credible edge-level topology and subgraph-level topology.
5. Conclusion
In this paper, we propose a simple and efficient multigraph convolution method named SMGCN. Different from previous works, SMGCN efficiently performs spatial cross-view message-passing by extracting credible cross-view topology at the edge level and the subgraph level. It is worth noting that the proposed extraction methods for credible cross-view topology are just simple and effective instances. The idea, i.e., extracting edge-level and subgraph-level topologies to perform cross-view message-passing, has enlightening implications for the field of multigraph convolution. We hope to see more effective extraction methods emerge in the future.
References
- Bhadra et al. (2019) Tapas Bhadra, Saurav Mallik, and Sanghamitra Bandyopadhyay. 2019. Identification of Multiview Gene Modules Using Mutual Information-Based Hypograph Mining. IEEE Transactions on Systems, Man, and Cybernetics: Systems 49, 6 (June 2019), 1119–1130.
- Butler et al. (2023) Landon Butler, Alejandro Parada-Mayorga, and Alejandro Ribeiro. 2023. Convolutional Learning on Multigraphs. IEEE Transactions on Signal Processing 71 (2023), 933–946.
- Cui et al. (2021) Wanqiu Cui, Junping Du, Dawei Wang, Feifei Kou, and Zhe Xue. 2021. MVGAN: Multi-View Graph Attention Network for Social Event Detection. ACM Transactions on Intelligent Systems and Technology 12, 3 (June 2021), 1–24.
- Fan et al. (2020) Shaohua Fan, Xiao Wang, Chuan Shi, Emiao Lu, Ken Lin, and Bai Wang. 2020. One2Multi Graph Autoencoder for Multi-view Graph Clustering. In Proceedings of The Web Conference 2020. ACM.
- Fey and Lenssen (2019) Matthias Fey and Jan E. Lenssen. 2019. Fast Graph Representation Learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds.
- Fu et al. (2020) Xinyu Fu, Jiani Zhang, Ziqiao Meng, and Irwin King. 2020. MAGNN: Metapath Aggregated Graph Neural Network for Heterogeneous Graph Embedding. In Proceedings of The Web Conference 2020. ACM.
- Li et al. (2023) Qi Li, Wenping Chen, Zhaoxi Fang, Changtian Ying, and Chen Wang. 2023. A multi-view contrastive learning for heterogeneous network embedding. Scientific Reports 13, 1 (April 2023).
- Lv et al. (2021) Qingsong Lv, Ming Ding, Qiang Liu, Yuxiang Chen, Wenzheng Feng, Siming He, Chang Zhou, Jianguo Jiang, Yuxiao Dong, and Jie Tang. 2021. Are We Really Making Much Progress? Revisiting, Benchmarking and Refining Heterogeneous Graph Neural Networks. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. ACM.
- Qu et al. (2017) Meng Qu, Jian Tang, Jingbo Shang, Xiang Ren, Ming Zhang, and Jiawei Han. 2017. An Attention-based Collaboration Framework for Multi-View Network Representation Learning. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. ACM.
- Sless et al. (2018) Liat Sless, Noam Hazon, Sarit Kraus, and Michael Wooldridge. 2018. Forming k coalitions and facilitating relationships in social networks. Artificial Intelligence 259 (June 2018), 217–245.
- Wang et al. (2020) Duo Wang, Mateja Jamnik, and Pietro Lio. 2020. Abstract Diagrammatic Reasoning with Multiplex Graph Networks. In International Conference on Learning Representations (ICLR).
- Yan et al. (2019) Ke Yan, Xiaozhao Fang, Yong Xu, and Bin Liu. 2019. Protein fold recognition based on multi-view modeling. Bioinformatics 35, 17 (Jan. 2019), 2982–2990.
- Zou et al. (2022) Ding Zou, Wei Wei, Xian-Ling Mao, Ziyang Wang, Minghui Qiu, Feida Zhu, and Xin Cao. 2022. Multi-level Cross-view Contrastive Learning for Knowledge-aware Recommender System. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM.