
Simple Multigraph Convolution Networks

Danyang Wu 0000-0002-0309-1409 College of Information Engineering,
Northwest A&F University
Xinjie Shen 0009-0004-9176-5400 South China University of Technology Jitao Lu 0000-0002-6065-2639 Northwestern Polytechnical University Jin Xu 0009-0001-8735-3532 South China University of Technology
Pazhou Lab
 and  Feiping Nie 0000-0002-0871-6519 Northwestern Polytechnical University
(2024)
Abstract.

Existing multigraph convolution methods either ignore the cross-view interaction among multiple graphs or incur extremely high computational cost due to standard cross-view polynomial operators. To alleviate this problem, this paper proposes Simple MultiGraph Convolution Networks (SMGCN), which first extracts consistent cross-view topology from multigraphs, including edge-level and subgraph-level topology, and then performs polynomial expansion based on the raw multigraphs and the consistent topologies. In theory, SMGCN uses the consistent topologies in the polynomial expansion rather than standard cross-view polynomial expansion, which performs credible cross-view spatial message-passing, follows the spectral convolution paradigm, and effectively reduces the complexity of standard polynomial expansion. Experimental results demonstrate that SMGCN achieves state-of-the-art performance on the ACM and DBLP multigraph benchmark datasets. Our code is available here.

Multigraph convolution networks, Multiview graph learning.
journalyear: 2024; copyright: acmlicensed; conference: Companion Proceedings of the ACM Web Conference 2024, May 13–17, 2024, Singapore, Singapore; booktitle: Companion Proceedings of the ACM Web Conference 2024 (WWW ’24 Companion), May 13–17, 2024, Singapore, Singapore; doi: 10.1145/3589335.3651560; isbn: 979-8-4007-0172-6/24/05

1. Introduction

Multigraph convolution is an important tool in the graph machine learning and signal processing fields; it aims to filter graph signals in the spatial and/or spectral domain of multiple graphs.

In recent years, multigraph convolution methods have been widely used in many fields, such as social network analysis (Sless et al., 2018; Cui et al., 2021), recommender systems (Li et al., 2023; Zou et al., 2022), and bioinformatics (Yan et al., 2019; Bhadra et al., 2019), because their data are usually represented as multigraphs, in which each view's graph encodes a different relation over the same objects. For example, in social network analysis, a node can be a user, and different views can correspond to different social relations between users, such as friendship, following, and co-occurrence.

Several multigraph convolution methods have been proposed, including building robust node features to regularize and aggregate the embeddings learned from each view (Qu et al., 2017), learning embeddings of each view in parallel and concatenating their min, max, and mean as the final embedding (Wang et al., 2020), summing embeddings learned from different orders of each view's graph (Butler et al., 2023), and the Multi-Input Multi-Output GCN (MIMO-GCN) (Butler et al., 2023), which computes embeddings from all combinations of different views' graphs.

Refer to caption
Figure 1. Overview of the proposed SMGCN.

However, existing multigraph convolution methods still have difficulty resolving the conflict between effectiveness and efficiency. As Figure 1 shows, on the one hand, some methods integrate the multigraphs directly, e.g., via selection or weighted-sum operators, which is efficient but ignores the cross-view interaction among multiple graphs. On the other hand, several methods, such as MIMO (Butler et al., 2023), perform standard cross-view polynomial expansions on multigraphs, which integrate high-order structure from a global perspective, but whose computational cost grows rapidly with the number of polynomial orders and views. To alleviate this conflict, this paper proposes Simple MultiGraph Convolution Networks (SMGCN), which contains a consistent topology extraction module and a simple polynomial expansion module. Specifically, SMGCN first extracts the consistent cross-view topology at the edge level and subgraph level, and then performs polynomial expansion between the raw multigraphs and the edge-level and subgraph-level topologies to produce the filtered graph signal. More importantly, SMGCN is a simple and efficient method that also performs credible cross-view spatial message-passing and follows the spectral convolution paradigm. In experiments, we evaluate SMGCN on two well-known multigraph benchmark datasets, ACM and DBLP. The experimental results demonstrate the superiority of SMGCN in terms of both effectiveness and efficiency.

The main contributions of this paper are twofold:

  • In theory, SMGCN performs cross-view interaction via a simple and efficient spectral convolution focusing on extracting edge-level and subgraph-level consistent topology. It effectively alleviates the conflict between effectiveness and efficiency in previous works.

  • In experiments, results demonstrate that SMGCN achieves state-of-the-art performance with lower computational cost.

Notations. In this paper, an $m$-view multigraph is denoted as $\mathcal{G}=(\mathcal{V},\{\mathcal{E}^{(1)},\ldots,\mathcal{E}^{(m)}\})$; we assume $m>2$ by default, since standard cross-view polynomial expansion can be used directly when $m=2$. $\mathcal{V}$ is the set of nodes and $\mathcal{E}^{(v)}$ is the set of edges in view $v$. For ease of understanding, we use $\{\mathbf{A}^{(1)},\ldots,\mathbf{A}^{(m)}\}$ to represent the adjacency matrices of the views, where $\mathbf{A}^{(v)}\in\mathbb{R}^{n\times n}$ is the adjacency matrix of view $v$ and $n$ is the number of nodes. $\mathbf{D}^{(v)}$ is the degree matrix of view $v$, where $\mathbf{D}^{(v)}_{i,i}=\sum_{j=1}^{n}\mathbf{A}^{(v)}_{i,j}$. $\mathbf{X}\in\mathbb{R}^{n\times d}$ is the node feature matrix, where $d$ is the dimension of the node features. $\mathbf{I}$ is the identity matrix, $\mathbf{W}$ denotes a learnable projection matrix, and $K$ is the order of the polynomial expansion.
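To make the notation concrete, the following is a minimal NumPy sketch of the data layout assumed throughout the paper; the toy sizes and random graphs are purely illustrative.

```python
import numpy as np

# Toy sizes (illustrative only): n nodes, m views, d-dimensional features.
n, m, d = 5, 3, 4
rng = np.random.default_rng(0)

# An m-view multigraph stored as a list of n x n adjacency matrices A^(v).
A_views = []
for _ in range(m):
    A = (rng.random((n, n)) > 0.6).astype(float)
    A = np.maximum(A, A.T)          # make each view undirected
    np.fill_diagonal(A, 0)          # no self-loops in the raw graphs
    A_views.append(A)

# Degree matrices D^(v) with D_ii = sum_j A_ij.
D_views = [np.diag(A.sum(axis=1)) for A in A_views]

# Node feature matrix X in R^{n x d}.
X = rng.random((n, d))
```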

2. Related Works

In this section, we introduce representative works on multigraph convolution methods, including Parallel GCN (P-GCN) (Wang et al., 2020), Merged GCN (M-GCN) (Butler et al., 2023), and Multi-Input Multi-Output GCN (MIMO-GCN) (Butler et al., 2023).

1) P-GCN is a fundamental method for multigraph convolution. It learns node embeddings from each view's graph and then concatenates the min, max, and mean of these embeddings as the final embedding. This method only considers the first-order information of each view's graph separately. The concrete formulation is as follows:

(1) \mathbf{Z}^{(v)}=\mathbf{D}^{(v)-\frac{1}{2}}\mathbf{A}^{(v)}\mathbf{D}^{(v)-\frac{1}{2}}\mathbf{X}\mathbf{W}^{(v)}\quad\forall v\in\{1,\ldots,m\},
\mathbf{Z}=\mathbf{Concat}\big(\mathbf{min}(\{\mathbf{Z}^{(1)},\ldots,\mathbf{Z}^{(m)}\}),\mathbf{max}(\{\mathbf{Z}^{(1)},\ldots,\mathbf{Z}^{(m)}\}),\mathbf{mean}(\{\mathbf{Z}^{(1)},\ldots,\mathbf{Z}^{(m)}\})\big),

where $\mathbf{W}^{(v)}$ is the projection matrix for the graph in view $v$ and $\mathbf{Z}^{(v)}$ is the node embedding matrix for the graph in view $v$. $\mathbf{Z}\in\mathbb{R}^{n\times c}$ is the final node embedding matrix, where $c$ is the dimension of the node embedding. $\mathbf{min}(\cdot)$, $\mathbf{max}(\cdot)$, and $\mathbf{mean}(\cdot)$ are set operators that compute the element-wise minimum, maximum, and mean across the input matrices, respectively.
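As a hedged illustration of Eq. (1), the following NumPy sketch implements one P-GCN layer; the element-wise min/max/mean across views and the helper names are our own assumptions, not the authors' reference code.

```python
import numpy as np

def normalize_adj(A, eps=1e-12):
    """Symmetric normalization D^{-1/2} A D^{-1/2} used in Eq. (1)."""
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(A.sum(axis=1), eps))
    return (A * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]

def pgcn_layer(A_views, X, W_views):
    """One P-GCN layer: per-view propagation Z^(v), then Concat(min, max, mean)."""
    Z_views = [normalize_adj(A) @ X @ W for A, W in zip(A_views, W_views)]
    Z_stack = np.stack(Z_views)                        # shape (m, n, c)
    return np.concatenate(
        [Z_stack.min(axis=0), Z_stack.max(axis=0), Z_stack.mean(axis=0)], axis=1
    )
```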

2) M-GCN is another fundamental method for multigraph convolution. It considers the high-order information of each view's graph and sums the embeddings learned from different orders of each view's graph as the final embedding. This method considers different orders of each view's graph separately. The concrete formulation is as follows:

(2) \mathbf{Z}=\mathbf{I}\mathbf{X}\mathbf{W}_{\mathbf{I}}+\sum_{i=1}^{m}\sum_{k=1}^{K}(\mathbf{A}^{(i)})^{k}\mathbf{X}\mathbf{W}_{(\mathbf{A}^{(i)})^{k}},

where $\mathbf{W}_{\mathbf{I}}$ is the projection matrix for the identity matrix and $\mathbf{W}_{(\mathbf{A}^{(i)})^{k}}$ is the projection matrix for the $k$-th order of $\mathbf{A}^{(i)}$.
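A minimal sketch of Eq. (2), assuming the per-view projection matrices are supplied as a nested list indexed by view and power; this layout is our own illustrative choice.

```python
import numpy as np

def mgcn_layer(A_views, X, W_I, W_powers):
    """M-GCN layer of Eq. (2): identity term plus (A^(i))^k terms for k = 1..K.
    W_powers[i][k-1] holds the projection matrix associated with (A^(i))^k."""
    Z = X @ W_I
    for i, A in enumerate(A_views):
        A_pow = np.eye(A.shape[0])
        for k in range(len(W_powers[i])):
            A_pow = A_pow @ A                          # now equals (A^(i))^(k+1)
            Z = Z + A_pow @ X @ W_powers[i][k]
    return Z
```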

3) MIMO-GCN is the state-of-the-art method for multigraph convolution. It models high-order interactions among multiple graphs by considering every arrangement of the views' graphs up to a given order $K$, and then sums the embeddings learned from all these combinations of different views' graphs as the final embedding. This method focuses on the interaction among different views' graphs at different orders. The concrete formulation is as follows:

(3) \mathbf{Z}=\mathbf{I}\mathbf{X}\mathbf{W}_{\mathbf{I}}+\mathbf{Poly}(\{\mathbf{A}^{(1)},\ldots,\mathbf{A}^{(m)}\},K),

where $\mathbf{Poly}(\{\mathbf{A}^{(1)},\ldots,\mathbf{A}^{(m)}\},K)$ is the polynomial expansion of $\{\mathbf{A}^{(1)},\ldots,\mathbf{A}^{(m)}\}$ with order $K$. It can be written as follows:

(4) \mathbf{Poly}(\{\mathbf{A}^{(1)},\ldots,\mathbf{A}^{(m)}\},K)=\sum_{k=1}^{K}\sum_{p_{k}\in P_{k}}p_{k}\mathbf{X}\mathbf{W}_{p_{k}},

where $P_{k}$ is the set of all possible arrangements of length $k$ of $\{\mathbf{A}^{(1)},\ldots,\mathbf{A}^{(m)}\}$ and $\mathbf{W}_{p_{k}}$ is the projection matrix for $p_{k}$.
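For later comparison with SMGCN, here is a hedged sketch of the expansion in Eqs. (3)-(4), where $P_k$ is enumerated as all ordered length-$k$ sequences of views; the dictionary W_terms, assumed to map each index tuple to its projection matrix, is our own layout choice.

```python
import itertools
import numpy as np

def mimo_poly(A_views, X, W_I, W_terms, K):
    """Sketch of Eqs. (3)-(4): Z = X W_I + sum over all arrangements p_k with k <= K."""
    n = X.shape[0]
    Z = X @ W_I
    for k in range(1, K + 1):
        for idx in itertools.product(range(len(A_views)), repeat=k):
            P = np.eye(n)
            for i in idx:                              # p_k = A^(i1) A^(i2) ... A^(ik)
                P = P @ A_views[i]
            Z = Z + P @ X @ W_terms[idx]
    return Z
```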

3. Simple Multigraph Convolution Networks (SMGCN)

In this section, we first propose the edge-level and subgraph-level consistent topology extraction modules, then introduce the proposed SMGCN model.

3.1. Credible Edge-level Topology Extraction

Given a multigraph $\{\mathbf{A}^{(1)},\ldots,\mathbf{A}^{(m)}\}$, we first calculate the sum over all views $\sum_{v=1}^{m}\mathbf{A}^{(v)}$ and then filter the raw edges via a cross-view voting scheme, i.e., if an edge exists in at least two views, we treat it as a credible raw edge. The concrete formulation is as follows:

(5) e_{i,j}=\begin{cases}1,&\text{if}~\sum_{v=1}^{m}\mathcal{I}(a^{(v)}_{i,j}>0)\geqslant 2,\\0,&\text{otherwise},\end{cases}

where $\mathbf{E}\in\mathbb{R}^{n\times n}$ is the similarity matrix representing the edge-level credible topology. Since $\mathbf{E}$ retains only the edges supported by the cross-view topology, it can be seen as a summarization of the raw multigraph $\{\mathbf{A}^{(1)},\ldots,\mathbf{A}^{(m)}\}$ and can therefore serve as a representative structure for high-order interaction with the raw multigraph.
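A small sketch of the voting rule in Eq. (5), assuming binary or non-negative adjacency matrices; the threshold of two views follows the text.

```python
import numpy as np

def credible_edge_topology(A_views, min_votes=2):
    """Eq. (5): keep edge (i, j) only if it appears in at least `min_votes` views."""
    votes = sum((A > 0).astype(int) for A in A_views)  # cross-view vote count per edge
    return (votes >= min_votes).astype(float)          # E in {0, 1}^{n x n}
```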

3.2. Credible Subgraph-level Topology Extraction

The edge-level topology extracted above summarizes the credible topology structures of the raw multigraphs and focuses on the global structure of each view's graph. In this subsection, we focus on extracting credible subgraph-level topology, i.e., latent subgraph structure. Formally, suppose $\{\mathbf{A}^{(1)},\ldots,\mathbf{A}^{(m)}\}$ are unweighted graphs (if they are weighted, this step can be skipped); we first calculate the triangle-based weighted similarity matrix for each view as follows:

(6) \mathbf{\widehat{A}}^{(v)}=\left(\mathbf{A}^{(v)}+\mathbf{I}\right)^{2}\circ\left(\mathbf{A}^{(v)}+\mathbf{I}\right).
Refer to caption
Figure 2. Triangle-based weighted similarity matrix generation. Self-loop is omitted in this figure.

In Figure 2, we illustrate the generation of the triangle-based weighted similarity matrix. We then extract the first-nearest-neighbor matrix $\tilde{\mathbf{A}}^{(v)}$ from $\widehat{\mathbf{A}}^{(v)}$ for each view as follows:

(7) \tilde{a}^{(v)}_{i,j}=\begin{cases}1,&\text{if}~i\in\mathcal{N}_{1}^{(v)}(j)~\text{or}~j\in\mathcal{N}_{1}^{(v)}(i),\\0,&\text{otherwise},\end{cases}

where $\mathcal{N}_{1}^{(v)}(i)$ is the set containing the nearest neighbor of sample $i$ in the $v$-th view, measured by $\widehat{\mathbf{A}}^{(v)}$. Afterwards, following the same principle as the credible edge-level topology extraction, we calculate the sum over all views $\sum_{v=1}^{m}\tilde{\mathbf{A}}^{(v)}$ and then filter the edges via the cross-view voting scheme as follows:

(8) s_{i,j}=\begin{cases}1,&\text{if}~\sum_{v=1}^{m}\mathcal{I}(\tilde{a}^{(v)}_{i,j}>0)\geqslant 2,\\0,&\text{otherwise},\end{cases}

where $\mathbf{S}\in\mathbb{R}^{n\times n}$ is the similarity matrix representing the subgraph-level credible topology. Naturally, $\mathbf{S}$ can be seen as a summarization of the generated subgraph-level similarity matrices $\{\tilde{\mathbf{A}}^{(1)},\ldots,\tilde{\mathbf{A}}^{(m)}\}$. More importantly, $\mathbf{S}$ may introduce edges that are absent from the raw multigraph due to the subgraph generation procedure, which provides a new perspective on the cross-view interactions of the multigraph.
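The whole subgraph-level pipeline (Eqs. (6)-(8)) can be sketched as below; how ties and isolated nodes are handled in the nearest-neighbor step is our own assumption, since the text does not specify it.

```python
import numpy as np

def subgraph_level_topology(A_views, min_votes=2):
    """Sketch of Eqs. (6)-(8): triangle-based weighting, 1-NN sparsification per view,
    then cross-view voting to obtain S."""
    n = A_views[0].shape[0]
    nn_views = []
    for A in A_views:
        B = A + np.eye(n)
        A_hat = (B @ B) * B                            # Eq. (6): (A + I)^2 ∘ (A + I)
        W = A_hat - np.diag(np.diag(A_hat))            # ignore self-similarity
        nn = np.argmax(W, axis=1)                      # strongest neighbour of each node
        T = np.zeros((n, n))
        T[np.arange(n), nn] = 1.0
        T = np.maximum(T, T.T)                         # Eq. (7): i in N1(j) or j in N1(i)
        nn_views.append(T)
    votes = sum((T > 0).astype(int) for T in nn_views)
    return (votes >= min_votes).astype(float)          # Eq. (8): S
```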

3.3. Simple High-order Interaction Formulation

Based on the credible edge-level topology $\mathbf{E}$ and subgraph-level topology $\mathbf{S}$, we design the following simple high-order interaction formulation to obtain the hidden-layer multigraph representation (for clarity, we denote $\mathbf{T}^{(1)}=\mathbf{E}$ and $\mathbf{T}^{(2)}=\mathbf{S}$; in practice, $(\mathbf{A}^{(v)})^{k}(\mathbf{T}^{(t)})^{K-k}$ can optionally be symmetrized):

(9) \mathbf{Z}=\mathbf{I}\mathbf{X}\mathbf{W}_{\mathbf{I}}+\sum_{t=1}^{2}\sum_{v=1}^{m}\sum_{k=1}^{K}(\mathbf{A}^{(v)})^{k}(\mathbf{T}^{(t)})^{K-k}\mathbf{X}\mathbf{W}_{t,v,k},

where $\mathbf{W}_{\mathbf{I}}$ is the projection matrix for the node features $\mathbf{I}\mathbf{X}$ and $\mathbf{W}_{t,v,k}$ is the projection matrix for each polynomial combination of $\{\mathbf{A}^{(1)},\ldots,\mathbf{A}^{(m)}\}$ and $\{\mathbf{T}^{(1)},\mathbf{T}^{(2)}\}$; $K$ is the order of the polynomial expansion. Compared with previous works such as the standard cross-view polynomial expansion of MIMO, SMGCN performs the cross-view interaction of the multigraph by extracting two credible cross-view topologies $\{\mathbf{T}^{(1)},\mathbf{T}^{(2)}\}$, which significantly reduces the computational cost. More importantly, SMGCN can be seen as a pruned form of the standard cross-view high-order polynomial interaction among $\{\mathbf{A}^{(1)},\ldots,\mathbf{A}^{(m)},\mathbf{T}^{(1)},\mathbf{T}^{(2)}\}$, and thus it also satisfies the spectral properties of standard polynomial expansion, such as Theorem 1 and Lemma 2 in (Butler et al., 2023).
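A hedged NumPy sketch of Eq. (9); the dictionary layout of W_terms and the omission of the optional symmetrization are our own illustrative choices.

```python
import numpy as np

def smgcn_layer(A_views, E, S, X, W_I, W_terms, K):
    """SMGCN layer of Eq. (9): Z = X W_I + sum_{t,v,k} (A^(v))^k (T^(t))^(K-k) X W_{t,v,k}.
    W_terms[(t, v, k)] holds the projection matrix of the corresponding term."""
    n = X.shape[0]
    Z = X @ W_I
    for t, T in enumerate((E, S)):                     # T^(1) = E, T^(2) = S
        for v, A in enumerate(A_views):
            A_pow = np.eye(n)
            for k in range(1, K + 1):
                A_pow = A_pow @ A                      # (A^(v))^k
                T_pow = np.linalg.matrix_power(T, K - k)
                Z = Z + A_pow @ T_pow @ X @ W_terms[(t, v, k)]
    return Z
```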

Computational Cost Comparison. We can evaluate the computational cost of SMGCN by the number of parameters. For instance, suppose all projection matrices (including $\mathbf{W}_{\mathbf{I}}$ and $\mathbf{W}_{t,v,k}$) have size $d_{0}\times d_{1}$ and the order is $K$. In SMGCN, the number of parameters is $O(mKd_{0}d_{1})$, which is linear w.r.t. the polynomial order. In contrast, due to the standard cross-view polynomial expansion, the number of projection matrices in MIMO (without the post-processing pruning operation) is $A_{m+K-1}^{m-1}=\frac{(m+K-1)!}{(m-1)!}$. According to Lemma 1 in (Butler et al., 2023), the post-processing pruning operation can remove some of the terms when $K>3$, but its concrete computational cost is hard to evaluate since it is determined by a threshold.
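A short arithmetic check of this comparison: the SMGCN term count follows directly from Eq. (9) (2mK projections plus W_I), while the MIMO count below simply evaluates the unpruned expression quoted above.

```python
from math import factorial

def smgcn_num_params(m, K, d0, d1):
    """SMGCN: 2*m*K projection matrices W_{t,v,k} plus W_I, each of size d0 x d1."""
    return (2 * m * K + 1) * d0 * d1

def mimo_num_terms(m, K):
    """Unpruned MIMO term count quoted in the text: (m + K - 1)! / (m - 1)!."""
    return factorial(m + K - 1) // factorial(m - 1)

# ACM-like setting: m = 4 views, K = 3, 128-dimensional projections.
print(smgcn_num_params(4, 3, 128, 128))   # 25 projections -> 409600 parameters
print(mimo_num_terms(4, 3))               # 120 polynomial terms before pruning
```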

4. Experiments

4.1. Benchmark Datasets

In this section, we introduce the benchmark datasets used in our experiments: ACM (Lv et al., 2021) and DBLP (Fu et al., 2020). To perform tasks on multi-view graphs, we follow the usage in (Fan et al., 2020), which turns a graph with multi-relational edges and multiple node types into a multi-view graph with a single node type. The statistics of these datasets are summarized in Table 1.

For the ACM dataset, we consider four views: the co-author, co-subject, co-cite, and co-refer relations between papers. For the DBLP dataset, we consider three views: the co-paper, co-conference, and co-term relations between authors. All datasets are available in PyTorch Geometric (Fey and Lenssen, 2019).

Dataset | Target Node Type | #Nodes | #Features | #Views | View Names (Edge Number)
ACM     | Paper            | 3025   | 1902      | 4      | Co-author (30752), Co-subject (2214064), Co-cite (5343), Co-refer (5343)
DBLP    | Author           | 4057   | 334       | 3      | Co-paper (6572), Co-conference (3225446), Co-term (7611879)

Table 1. Statistics of benchmark datasets.

Model     | ACM (4 views)        | DBLP (3 views)
          | ACC    F1     NMI    | ACC    F1     NMI
P-GCN     | 93.77  93.82  76.89  | 91.43  90.68  73.31
M-GCN     | 94.10  94.15  77.94  | 93.12  92.47  77.48
MIMO-GCN  | 94.33  94.40  79.04  | 92.57  91.86  76.04
SMGCN     | 95.04  95.11  80.80  | 93.40  92.78  78.15

Table 2. Metrics of different methods on ACM and DBLP dataset.

Refer to caption

(a) Parameters of models on ACM dataset

Refer to caption

(b) Parameters of models on DBLP dataset

Figure 3. Parameters of models on ACM and DBLP dataset.

4.2. Experimental Settings and Evaluation Metrics

We perform node classification on both datasets with the given train/test split of the corresponding dataset; 10% of the nodes in the original training set are used as the validation set. We grid-search the learning rate and weight decay over {0.1, 0.01, 0.001} and {0, 1e-5, 1e-4, 1e-3, 1e-2}, respectively. The optimizer is Adam for all models, all hidden dimensions are set to 128, and all activation functions are ReLU. One linear layer is used to project the node embeddings into the classification space. Due to memory limitations, we set the maximum order of polynomial expansion K to 3. Accuracy, macro F1, and NMI are used as evaluation metrics.
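For reproducibility, the grid described above can be spelled out as a small sketch; the loop body is a placeholder, since the exact training script is not part of this paper.

```python
import itertools

# Hyperparameter grid from the text (Adam, hidden size 128, ReLU, K = 3).
learning_rates = [0.1, 0.01, 0.001]
weight_decays = [0, 1e-5, 1e-4, 1e-3, 1e-2]

for lr, wd in itertools.product(learning_rates, weight_decays):
    # Train SMGCN with Adam(lr=lr, weight_decay=wd) and select the best
    # configuration on the 10% validation split (training loop omitted).
    pass
```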

4.3. Results and Analysis

Results are shown in Table 2. We can see that our method outperforms all the baseline methods on both datasets, achieving state-of-the-art performance.

As Figure 3 shows, our method has the same cross-view capability as MIMO-GCN, but has far fewer parameters and a much slower parameter growth with increasing K. Our method also has a parameter count comparable to simpler methods such as P-GCN and M-GCN, which have no cross-view capability. The visualization of the learned node embeddings is shown in Figure 4, which shows that our method learns effective node embeddings reflecting the node label information. The visualization of $\mathbf{T}^{(1)}$ and $\mathbf{T}^{(2)}$ learned by our method is shown in Figure 5, which shows that $\mathbf{T}^{(1)}$ focuses more on global structures, while $\mathbf{T}^{(2)}$ focuses more on local structures.

Refer to caption

(a) Node features

Refer to caption

(b) Node embeddings

Figure 4. Visualization of node embedding learned by our method on ACM dataset.

Refer to caption

(a) $\mathbf{T}^{(1)}$

Refer to caption

(b) $\mathbf{T}^{(2)}$

Figure 5. Visualization of $\mathbf{T}^{(1)}$ and $\mathbf{T}^{(2)}$ learned by our method on the ACM dataset. Yellow boxes represent nodes in the same connected component.

5. Conclusion

In this paper, we propose a simple and efficient multigraph convolution method named SMGCN. Different from previous works, SMGCN efficiently performs spatial cross-view message-passing by extracting credible cross-view topology at both the edge level and the subgraph level. It is worth noting that the proposed extraction methods for credible cross-view topology are just simple and effective instances. The underlying idea, i.e., extracting edge-level and subgraph-level topology to perform cross-view message-passing, has enlightening implications for the field of multigraph convolution. We hope to see more effective extraction methods emerge in the future.

References

  • Bhadra et al. (2019) Tapas Bhadra, Saurav Mallik, and Sanghamitra Bandyopadhyay. 2019. Identification of Multiview Gene Modules Using Mutual Information-Based Hypograph Mining. IEEE Transactions on Systems, Man, and Cybernetics: Systems 49, 6 (June 2019), 1119–1130.
  • Butler et al. (2023) Landon Butler, Alejandro Parada-Mayorga, and Alejandro Ribeiro. 2023. Convolutional Learning on Multigraphs. IEEE Transactions on Signal Processing 71 (2023), 933–946.
  • Cui et al. (2021) Wanqiu Cui, Junping Du, Dawei Wang, Feifei Kou, and Zhe Xue. 2021. MVGAN: Multi-View Graph Attention Network for Social Event Detection. ACM Transactions on Intelligent Systems and Technology 12, 3 (June 2021), 1–24.
  • Fan et al. (2020) Shaohua Fan, Xiao Wang, Chuan Shi, Emiao Lu, Ken Lin, and Bai Wang. 2020. One2Multi Graph Autoencoder for Multi-view Graph Clustering. In Proceedings of The Web Conference 2020. ACM.
  • Fey and Lenssen (2019) Matthias Fey and Jan E. Lenssen. 2019. Fast Graph Representation Learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds.
  • Fu et al. (2020) Xinyu Fu, Jiani Zhang, Ziqiao Meng, and Irwin King. 2020. MAGNN: Metapath Aggregated Graph Neural Network for Heterogeneous Graph Embedding. In Proceedings of The Web Conference 2020. ACM.
  • Li et al. (2023) Qi Li, Wenping Chen, Zhaoxi Fang, Changtian Ying, and Chen Wang. 2023. A multi-view contrastive learning for heterogeneous network embedding. Scientific Reports 13, 1 (April 2023).
  • Lv et al. (2021) Qingsong Lv, Ming Ding, Qiang Liu, Yuxiang Chen, Wenzheng Feng, Siming He, Chang Zhou, Jianguo Jiang, Yuxiao Dong, and Jie Tang. 2021. Are we really making much progress?. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery Data Mining. ACM.
  • Qu et al. (2017) Meng Qu, Jian Tang, Jingbo Shang, Xiang Ren, Ming Zhang, and Jiawei Han. 2017. An Attention-based Collaboration Framework for Multi-View Network Representation Learning. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. ACM.
  • Sless et al. (2018) Liat Sless, Noam Hazon, Sarit Kraus, and Michael Wooldridge. 2018. Forming k coalitions and facilitating relationships in social networks. Artificial Intelligence 259 (June 2018), 217–245.
  • Wang et al. (2020) Duo Wang, Mateja Jamnik, and Pietro Lio. 2020. Abstract Diagrammatic Reasoning with Multiplex Graph Networks.
  • Yan et al. (2019) Ke Yan, Xiaozhao Fang, Yong Xu, and Bin Liu. 2019. Protein fold recognition based on multi-view modeling. Bioinformatics 35, 17 (Jan. 2019), 2982–2990.
  • Zou et al. (2022) Ding Zou, Wei Wei, Xian-Ling Mao, Ziyang Wang, Minghui Qiu, Feida Zhu, and Xin Cao. 2022. Multi-level Cross-view Contrastive Learning for Knowledge-aware Recommender System. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM.