Non-Graph Data Clustering via Bipartite Graph Convolution
Abstract
Since the representative capacity of graph-based clustering methods is usually limited by the graph constructed on the original features, it is attractive to investigate whether graph neural networks (GNNs), a strong extension of neural networks to graphs, can be applied to augment the capacity of graph-based clustering methods. The core problems mainly come from two aspects. On the one hand, the graph is unavailable in the most general clustering scenes, so that how to construct a graph on non-graph data, and the quality of that graph, is usually the most important part. On the other hand, given $n$ samples, graph-based clustering methods usually consume at least $\mathcal{O}(n^2)$ time to build graphs, and the graph convolution requires nearly $\mathcal{O}(n^2)$ for a dense graph and $\mathcal{O}(|\mathcal{E}|)$ for a sparse one with $|\mathcal{E}|$ edges. Accordingly, both graph-based clustering and GNNs suffer from a severe inefficiency problem. To tackle these problems, we propose a novel clustering method, AnchorGAE, with the self-supervised estimation of the graph and efficient graph convolution. We first show how to convert a non-graph dataset into a graph dataset, by introducing the generative graph model and anchors. A bipartite graph is built via generating anchors and estimating the connectivity distributions of original points and anchors. We then show that the constructed bipartite graph can reduce the computational complexity of graph convolution from $\mathcal{O}(n^2)$ and $\mathcal{O}(|\mathcal{E}|)$ to $\mathcal{O}(n)$. The succeeding steps for clustering can be easily designed as $\mathcal{O}(n)$ operations. Interestingly, the anchors naturally lead to a siamese architecture with the help of the Markov process. Furthermore, the estimated bipartite graph is updated dynamically according to the features extracted by the GNN modules, to promote the quality of the graph by exploiting the high-level information extracted by GNNs. However, we theoretically prove that this self-supervised paradigm frequently results in a collapse, which in experiments often occurs after 2-3 update iterations, especially when the model is well-trained. A specific strategy is accordingly designed to prevent the collapse. The experiments support the theoretical analysis and show the superiority of AnchorGAE.
Index Terms:
Graph Convolution Network, Efficient Clustering, Anchors, Siamese Network, Self-Supervised Learning.

1 Introduction
Graph-based clustering methods [1, 2, 3, 4, 5, 6] are important in the clustering field since they can capture the data topology and group data points non-linearly via constructing a graph. For instance, spectral clustering [1] originates from a relaxed graph-cut problem, e.g., Ratio Cut [2], Normalized Cut [3], Balanced Cut [7], etc. In graph-based methods, it is easier to formally study diverse clustering-related tasks with the help of graph theory [8].
Given $n$ data points, graph-based clustering methods suffer from severe inefficiency in practice due to the construction of graphs, which usually needs $\mathcal{O}(n^2)$ time without any acceleration tricks. For some methods such as spectral clustering [1], the eigenvalue decomposition, an $\mathcal{O}(n^3)$ operation, is also required. To alleviate this problem, plenty of variants of spectral clustering have been developed [9, 10, 11]. An important variant is to introduce some representative points [12] to accelerate both graph construction and optimization [13, 14]. Some works [15, 16] recently attempt to extend these methods to multi-view scenarios. However, the graph constructed from original features may only contain low-level information (i.e., the representative capacity is limited), since the construction is usually based on the original features without any mapping (e.g., neural networks). The limited ability to extract the high-level relationships of data results in inaccurate graphs and thus becomes the bottleneck of graph-based clustering methods.
With the rise of deep learning, the utilization of deep neural networks (DNNs) is widely regarded as an effective way to exploit the deep information of data via extracting latent features [17]. Some models [18, 19, 20] based on convolutional neural networks (CNNs) have achieved impressive improvements on clustering, but CNNs can not be applied to non-Euclidean data [21], such as social networks, recommendation systems, word vectors, data mining data, etc. Compared with Euclidean data (e.g., images, videos, etc.), the main difference of non-Euclidean data is that the relationships among features are not regular (or even do not exist). It indicates that the classical convolution operation can not be utilized and the existing CNN-based clustering models can not be applied to general clustering scenes. Although some deep clustering models [22, 23, 24, 18, 19] have been proposed, most methods aim to introduce some widely used modules from supervised scenarios to clustering, and few models focus on how to utilize deep models to further promote the capacity of graph-based methods. SpectralNet [25] is an important attempt to connect neural networks and classical spectral clustering. It utilizes neural networks to promote capacity and simulate spectral decomposition.

Graph neural networks (GNNs) [26, 27, 28, 29, 30, 31], which attempt to extend classical neural networks to graph-type data while considering the graph structure, have attracted much attention. GNNs are designed for graph-type data, such as social networks, recommendation systems, citation networks, etc. In these practical scenarios, links are easy and cheap to obtain and graphs are provided as prior information. For example, in citation networks, each paper is viewed as a node in the graph and the citation relationships, which are easily available, are modeled as links. As GNNs have shown impressive capacity and performance on graph-type data, it is attractive to employ GNNs to promote the capacity of graph-based clustering methods. Nevertheless, there are two major hurdles. On the one hand, in most scenarios (e.g., text, signals, etc.), graphs are not provided as prior information, which indicates that the existing GNN-based models for node clustering cannot be applied to these non-graph data without artificial construction of graphs. This goes back to the same dilemma as graph-based clustering, i.e., how to construct a graph containing enough information. On the other hand, inefficiency is also a well-known problem for GNNs [32, 33]. The forward propagation of each node in a GNN layer relies on neighboring samples, such that the computational complexity of the graph convolution increases exponentially with the growth of layers. For a dense graph, the complexity of a multi-layer GNN is almost $\mathcal{O}(n^2)$ time. For a sparse one, the computational cost becomes $\mathcal{O}(|\mathcal{E}|)$, where $|\mathcal{E}|$ represents the number of edges. Accordingly, each training step approximately depends on all data points and the training is quite time-consuming. It also results in the unavailability of batch gradient descent. To accelerate GNNs, some works focus on how to apply batch gradient descent [34, 35, 36, 32]. The same inefficiency problem also exists in graph-based clustering.
The main motivation of this paper is to efficiently improve the capacity of graph-based clustering via GNNs on non-graph data. The efficiency guarantees the practicability, and the non-graph property of data ensures the universality of our model. Although SDCN [37] aims to extend GNNs to the data clustering task, it simply constructs KNN graphs and thus fails to exploit the high-level information of data. The major contributions of the proposed model, namely AnchorGAE, are listed as follows:
• As no graph is provided in general, we aim to construct the graph via estimating the connectivity distribution of each sample from the perspective of generative models. Under this generative perspective, anchors are introduced to accelerate the graph construction and the computation of GNNs. The produced anchors and connectivity distributions lead to a bipartite graph convolution operation so that the computational complexity of each forward propagation of GNNs is reduced from $\mathcal{O}(n^2)$ or $\mathcal{O}(|\mathcal{E}|)$ to $\mathcal{O}(n)$.
• With the help of the Markov process and the reconstruction goal provided by generative graph models, a siamese architecture is naturally designed to transform anchors into the embedding space.
• We theoretically prove that a collapse, where points are split into plenty of tiny groups and these groups scatter randomly, may happen due to the self-supervised update of the graph (and connectivity distributions), especially when the model is well-trained. We therefore devise a specific strategy to update the graph to prevent the collapse. The experiments also support the theoretical analysis and verify the effectiveness of the designed strategy.
2 Preliminary and Related Work
In this section, we introduce the existing work about graph neural networks and fast clustering algorithms.
2.1 Preliminary
Let $\mathcal{G} = \{\mathcal{V}, \mathcal{E}, \mathcal{W}\}$ be a graph where $\mathcal{V}$, $\mathcal{E}$, and $\mathcal{W}$ represent nodes, edges, and weights, respectively. For an unweighted graph, $\mathcal{W}$ can be ignored since the weights of edges are all regarded as 1. All matrices and vectors are represented by upper-case and lower-case letters in boldface, respectively. $|\cdot|$ represents the size of a set, and $n = |\mathcal{V}|$. For a real number $a$, $\lfloor a \rfloor$ represents the maximum integer that is not larger than $a$. $\boldsymbol{1}_n$ denotes an $n$-dimension vector whose entries all equal 1. In computer science, the adjacency matrix, which is denoted by $\boldsymbol{A}$ in this paper, is a common way to represent the edges of a graph. Specifically, for an unweighted graph, $a_{ij} = 1$ if $(v_i, v_j) \in \mathcal{E}$ and $a_{ij} = 0$ otherwise. A popular GNN model is the graph convolution network (GCN) [28], where the graph convolution layer is formulated as [28]
$H^{(l+1)} = \sigma\big(\hat{L} H^{(l)} W^{(l)}\big), \qquad (1)$

where $\sigma(\cdot)$ is an activation function, $H^{(l)}$ is the representation of data points at the $l$-th layer, $W^{(l)}$ denotes the coefficients to learn, $\hat{L} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$ (with $\tilde{A} = A + I$) is often viewed as a variant of the normalized Laplacian matrix, and $\tilde{D}$ is the degree matrix of $\tilde{A}$. Specifically, $\tilde{D}$ is a diagonal matrix where $\tilde{d}_{ii} = \sum_{j} \tilde{a}_{ij}$.
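As a concrete reference, a minimal NumPy sketch of one such layer is given below; the dense matrices and the ReLU activation are illustrative choices, not the configuration used later in this paper.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer in the form of Eq. (1): sigma(L_hat @ H @ W),
    where L_hat is the symmetrically normalized adjacency with self-loops.
    A: (n, n) adjacency, H: (n, d) features, W: (d, d') weights."""
    A_tilde = A + np.eye(A.shape[0])                   # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    L_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^{-1/2} A~ D^{-1/2}
    return np.maximum(L_hat @ H @ W, 0.0)              # ReLU as an example activation
```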
2.2 Graph Neural Networks
In recent years, how to effectively extract features of graphs has attracted a lot of attention [28, 31]. Inspired by CNNs, a few works [27, 38, 28] attempt to investigate convolution operations on graph-type data. With the rise of the attention mechanism, GAT [29, 39, 40] introduces self-attention into graph neural networks. Meanwhile, GNNs can be applied to sequential tasks [41, 30], which are known as spatial-temporal graph neural networks (STGNNs). Some theoretical works [42, 43, 44, 45, 46, 47] have also been conducted in recent years. Some works [46] study the impact of over-smoothing, while [47] identifies the over-squashing problem. To extend GCN [28] to unsupervised learning, graph auto-encoders [48, 49, 50, 51, 52], based on the auto-encoder [53, 54], have been developed. Different from traditional auto-encoders, GCN [28] is used as an encoder and the decoder calculates inner products of embeddings to reconstruct graphs. To build a symmetric architecture, GALA [50] designs a sharpening operator since the graph convolution is equivalent to a smoothing operator. SDCN [37] has tried to utilize GNNs for clustering, while it just constructs a KNN graph as preprocessing and suffers from the inefficiency problem as well. Although GNNs have achieved outstanding performance [28, 55], they require too much memory and time. Some methods [33, 34, 36, 35, 56] for accelerating training and reducing memory consumption have been proposed. For instance, SGC [35] attempts to accelerate training by removing the non-linear activations in GCN, which reduces the number of parameters. More importantly, it compresses multiple layers into one layer such that stochastic gradient descent can be used. GraphSAGE [33] samples a subset of neighbors for each node to reduce the computational complexity, while FastGCN [32] samples nodes independently. ClusterGCN [34] tries to find an approximate graph with multiple connected components via METIS [57] to employ batch gradient descent.
2.3 Clustering on Large Scale Data
In traditional clustering, $k$-means [58] works efficiently on large-scale data but it can only deal with spherical data, which limits its application. By virtue of deep learning, deep clustering [22, 17, 23, 18, 19, 59] often aims to learn expressive representations and then feed the representations into a simple clustering module. The whole optimization is based on batch gradient descent so that these methods are still available on large-scale data. However, they frequently suffer from over-fitting and the performance heavily relies on the tuning of neural networks. For graph-based clustering methods [3, 2, 1], both the computational complexity and the space complexity of graph construction are $\mathcal{O}(n^2)$. In particular, solving spectral clustering needs eigenvalue decomposition, which is also time-consuming. To apply spectral clustering to large-scale data, a lot of variants [12, 9, 10, 11] have been developed. Anchor-based methods [12, 13, 14] are important extensions of spectral clustering. The core idea is to construct a graph implicitly via some representative points, which are usually named anchors. By constructing a bipartite graph between samples and anchors, they only need to perform singular value decomposition on the Laplacian of the bipartite graph [13]. Anchor-based spectral clustering provides an elegant scheme to improve the efficiency of graph-based models.
Remark 1.
Although several CNN-based clustering methods [19, 18, 20] have achieved good performance, they focus on vision data and the main improvement is brought by the classical convolution operator, so that they can not be applied to non-Euclidean data, e.g., word vectors, data mining data, etc. Therefore, the CNN-based methods will not be used as the main competitors in this paper. It should be emphasized that CNN-based methods and AnchorGAE are not mutually exclusive. AnchorGAE aims to find a general paradigm to improve the clustering baseline under the most general settings. It may be an attractive topic in the future to extend the core idea of AnchorGAE to vision-specific clustering by combining CNNs.
3 Methodology
In this section, we first revisit the main problems that we attempt to address in this paper. In Section 3.2, we show how to construct a bipartite graph on the non-graph data from a generative perspective. The probability perspective is vital to the derivation of the fast bipartite graph convolution and the reconstruction goal of the unsupervised model. In Section 3.3, the core idea of AnchorGAE and its efficiency are elaborated. In Section 3.4, we will find that a siamese structure is naturally induced by the bipartite graph convolution. The rigorous analysis of the degeneration caused by the simple update of graphs is provided in Section 3.5. Finally, the methods to obtain the final clustering assignments are discussed. The core idea of AnchorGAE is illustrated in Figure 1.
3.1 Revisit the Critical Problems
Before elaborating on how to efficiently improve the capacity of graph-based clustering via GNNs on non-graph data, we introduce the major hurdles respectively.
No Prior Graph: In clustering scenarios, only features are given as the prior information. The graph is usually constructed as a preprocessing step in graph-based clustering methods. The key to performance is the quality of the constructed graph. When the scattering of data is too complicated to precisely capture the key information based on the original data (which is the main bottleneck of most existing graph-based clustering methods), it is a rational scheme to map the data into a latent space and then build a more precise graph in this space. As we focus on graph-based models, GNNs are a more rational scheme compared with classical neural networks. Moreover, utilizing the features extracted by GNNs to update the graph leads to a self-supervised procedure. The graph construction usually results in $\mathcal{O}(n^2)$ costs, which also causes the inefficiency of graph-based clustering methods. The scheme to solve this problem is elaborated in Section 3.2.
Inefficiency of GNNs: In the forward propagation of GNNs, computing the hidden feature of one node usually requires its neighboring nodes. This computation results in the inefficiency of GNNs on large graphs [60], especially with the increase of depth. In this paper, we simply focus on GCN, the widely used variant of GNNs, since the theoretical analysis does not depend on specific GNN modules. For instance, it requires $\mathcal{O}(n^2)$ time to compute $\hat{L} H^{(l)} W^{(l)}$ in the forward propagation. If $\hat{L}$ is sparse enough, the computational complexity is acceptable. If there are plenty of edges in $\hat{L}$, the computational complexity can be roughly regarded as $\mathcal{O}(n^2)$, so that GCNs can not be applied in this case. We will show how to reduce it to $\mathcal{O}(n)$ in Sections 3.3 and 3.4.
Inefficiency of Graph-Based Clustering Methods: Even after the graph construction, the computational complexity reaches $\mathcal{O}(n^3)$ due to the eigenvalue decomposition. This problem is easy to address after introducing the bipartite graph convolution, and the details can be found in Section 3.6.
3.2 Generative Perspective for Bipartite Graphs
By virtue of the generative perspective introduced in this section, the probability transition matrix is accordingly built so that the Markov process can be used to implement the efficient bipartite graph convolution, which is elaborated in Section 3.3. The generative perspective also provides a training goal, reconstructing the distribution, for AnchorGAE.
3.2.1 Generative Perspective for Graphs
For a graph $\mathcal{G} = \{\mathcal{V}, \mathcal{E}\}$, an edge $(v_i, v_j) \in \mathcal{E}$ can be regarded as a sampling from an underlying connectivity distribution $p(\cdot \mid v_i)$ of $v_i$, where $v_i$ denotes the $i$-th node. Note that $\sum_{v \in \mathcal{V}} p(v \mid v_i) = 1$. Accordingly, a graph can be viewed as multiple samplings regarding the connectivity distributions $\{p(\cdot \mid v_i)\}_{i=1}^{n}$. Before the further discussion of weighted graphs, we further clarify the difference between the generative perspective and the discriminative perspective. In the discriminative view, the existence of an edge is certain and the model should train a classifier to predict $p((v_i, v_j) \in \mathcal{E})$ and $p((v_i, v_j) \notin \mathcal{E})$. Note that $|\mathcal{E}| \ll n^2$ in most cases. In this paper, the generative view provides an elegant Bayesian framework to simultaneously capture the local topology and offer a reconstruction goal for GAE.
With the generative perspective, for a weighted graph $\mathcal{G} = \{\mathcal{V}, \mathcal{E}, \mathcal{W}\}$ where $\mathcal{W}$ represents the weights on edges, $\mathcal{W}$ can be viewed as an estimation of the underlying connectivity distributions. From the generative perspective, the underlying connectivity distributions can be supposed to exist in any dataset, while the connectivity distributions are agnostic on non-graph datasets. However, no matter which type the data is, this does not affect the existence of connectivity distributions. The vital idea that allows AnchorGAE to be applied to non-graph data is to estimate these distributions. Besides, they are also the target for our unsupervised model to reconstruct. The following assumption helps to estimate the connectivity distributions.
Assumption 1.
Given an ideal metric $d(\cdot, \cdot)$ of the difference between two nodes, $p(v_j \mid v_i)$ is negatively related to $d(v_i, v_j)$, i.e., a larger $d(v_i, v_j)$ implies a smaller $p(v_j \mid v_i)$. An ideal metric indicates a metric that faithfully reflects the intrinsic difference between nodes.
In this paper, each node $v_i$ is described by a $d$-dimension vector, i.e., $\boldsymbol{x}_i \in \mathbb{R}^d$. Therefore, the metric can be reformulated as
$d(v_i, v_j) = \|f(\boldsymbol{x}_i) - f(\boldsymbol{x}_j)\|_2^2, \qquad (2)$

where $f(\cdot)$ represents an ideal mapping of the raw features. The implementation is discussed in Section 3.3.
Remark 2.
Although the above assumption seems too strong to hold on raw data representation, the goal of our model is to find an approximately ideal mapping function to build a well-structured graph.
3.2.2 Bipartite Graph Simplification with Anchors
To efficiently apply graph convolution, we attempt to simplify the graph via some representative points, which are named anchors or landmarks. Let $\mathcal{U} = \{u_1, u_2, \ldots, u_m\}$ denote the anchors, where $m \ll n$ and the $j$-th anchor is also described by a $d$-dimension vector $\boldsymbol{u}_j \in \mathbb{R}^d$. Then, we can get a new bipartite graph with anchors, $\mathcal{G}_B = \{\mathcal{V} \cup \mathcal{U}, \mathcal{E}_B, \mathcal{W}_B\}$. Specifically, $\mathcal{E}_B \subseteq \mathcal{V} \times \mathcal{U}$ and $\mathcal{W}_B$ is the corresponding weights of $\mathcal{E}_B$. The bipartite property means that only edges between $\mathcal{V}$ and $\mathcal{U}$ are allowed in $\mathcal{G}_B$. This bipartite property is from the following assumption.
Assumption 2.
Given an ideal set of anchors $\mathcal{U}$, the original graph $\mathcal{G}$ can be constructed from the bipartite graph $\mathcal{G}_B$ and $\mathcal{U}$.
Since both connectivity distributions and anchors are unknown in general data, we need to estimate them. First, we intend to estimate the anchors and the connectivity distributions $p(u_j \mid v_i)$ alternately by
(3) |
according to Assumption 1. Specifically speaking, in the ideal metric space introduced by Assumption 1, the expected divergence between $v_i$ and the anchors it connects to should be minimum. Note that we only focus on the estimation of $p(u_j \mid v_i)$ and overlook the reverse distribution $p(v_i \mid u_j)$ for now. However, the above problem has a trivial solution, i.e.,
(4) |
It indicates that each original node (from $\mathcal{V}$) only connects with its nearest anchor (from $\mathcal{U}$). To avoid such ill connectivity distributions and anchor graph, we turn to the following regularized problem,
(5) |
where $\boldsymbol{\pi}_m$ represents the uniform distribution over the $m$ anchors and $\mathcal{D}(\cdot, \cdot)$ is a metric of two discrete distributions. Intuitively, $p(\cdot \mid v_i)$ should be sparse such that anchors far from $v_i$ are ignored. In practice, the Kullback-Leibler divergence (KL-divergence) is usually used as $\mathcal{D}$ to measure the divergence of two probability distributions. However, it will return a dense solution of $p(\cdot \mid v_i)$. Instead, we employ a simple measure as
(6) |
Let $d_{i,(k)}$ be the $k$-th smallest value of $\{d(v_i, u_j)\}_{j=1}^{m}$. If the regularization coefficient is chosen in the interval determined by $d_{i,(k)}$ and $d_{i,(k+1)}$, the solution $p(\cdot \mid v_i)$ will be $k$-sparse (only $k$ entries are non-zero). Meanwhile, the problem can be transformed into an equivalent vector form [4] and solved by
(7) |
In other words, the regularization hyper-parameter is converted to the sparsity $k$, which is much easier to tune.
When the connectivity distributions are fixed, the anchors are computed by solving the following problem,
(8) |
where $\boldsymbol{u}_j$ is the representation of the anchor $u_j$ in the original space. If we take the derivative of the above equation regarding $\boldsymbol{u}_j$,
(9) |
and set it to 0, then we have
(10) |
In sum, problem (6) can be solved iteratively through Eq. (7) and (10), which is summarized as Algorithm 1.
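A minimal NumPy sketch of this alternating scheme is given below. It assumes the common $k$-sparse closed form for Eq. (7) (each sample keeps only its $k$ nearest anchors, weighted by the gap to the $(k+1)$-th smallest distance) and the probability-weighted mean of Eq. (10); the function names and the random initialization of anchors are ours, not the paper's code.

```python
import numpy as np

def sparse_connectivity(X, anchors, k):
    """k-sparse connectivity distributions over anchors (Eq. (7) style sketch):
    each sample keeps its k nearest anchors, weighted by the gap between the
    (k+1)-th smallest squared distance and its own, then normalized."""
    D = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)  # (n, m) squared distances
    idx = np.argsort(D, axis=1)
    P = np.zeros_like(D)
    for i in range(X.shape[0]):
        nn = idx[i, :k]                                # k nearest anchors
        gaps = D[i, idx[i, k]] - D[i, nn]              # distance gaps (non-negative)
        P[i, nn] = gaps / max(gaps.sum(), 1e-12)       # normalize to a distribution
    return P

def update_anchors(X, P):
    """Anchors as probability-weighted means of the samples (Eq. (10) style sketch)."""
    return (P.T @ X) / np.maximum(P.sum(axis=0)[:, None], 1e-12)

def init_bipartite_graph(X, m=100, k=3, n_iter=10, seed=0):
    """Alternate the two closed-form steps, as in Algorithm 1 (sketch)."""
    rng = np.random.default_rng(seed)
    anchors = X[rng.choice(X.shape[0], m, replace=False)]  # random initialization
    for _ in range(n_iter):
        P = sparse_connectivity(X, anchors, k)
        anchors = update_anchors(X, P)
    return P, anchors
```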
Build a Generative Bipartite Graph by $p(u_j \mid v_i)$ and $p(v_i \mid u_j)$: Then we turn to model $p(v_i \mid u_j)$, which is used to compute the probability between two raw nodes. Rather than solving a problem like problem (6), $p(v_i \mid u_j)$ is set by a simple normalized step,

$p(v_i \mid u_j) = \frac{p(u_j \mid v_i)}{\sum_{i'=1}^{n} p(u_j \mid v_{i'})}, \qquad (11)$

via utilizing $p(u_j \mid v_i)$ calculated by Eq. (7). To show how the above formulations construct a bipartite graph, and to simplify the succeeding discussion about the fast convolution, we further reformulate $p(u_j \mid v_i)$ and $p(v_i \mid u_j)$ in matrix form. Let $\boldsymbol{P} \in \mathbb{R}^{n \times m}$ be a matrix where $P_{ij} = p(u_j \mid v_i)$. Then
(12) |
represent an unnormalized bipartite graph and its degree matrix, respectively. Note that the unnormalized graph does not conform to the generative perspective for graphs, though it is a bipartite graph. According to Eq. (11), the generative bipartite graph can be defined as
(13) |
which can also be regarded as a probability transition matrix.
3.3 Efficient Bipartite Graph Convolution
As the goal is to capture the structure information, a multi-layer GCN is a rational scheme to implement the ideal mapping defined in Eq. (2). In this subsection, we will show why the bipartite graph convolution accelerates GCN.
We utilize the bipartite graph, defined in Eq. (12), to accelerate the graph convolution. By virtue of the Markov process [61], the connectivity between two raw nodes can be obtained by the one-step transition probability over anchors, i.e.,
(14) |
Similarly, the transition probability between two anchors can be obtained. Formally, the constructed graphs of samples and anchors are defined as
(15) |
Accordingly, the former is the weighted adjacency matrix of the samples, constructed via anchors, and the latter is the weighted adjacency matrix of the anchors.
Remark 3.
Therefore, the constructed adjacency of samples can be directly used as a part of the graph convolution operator. Since
(16) |
the degree matrix of the constructed sample graph is the identity matrix. Therefore, the output of a GCN layer is formulated as
(17) |
Here, we explain why the above equation accelerates the operation. It should be emphasized that the $n \times n$ adjacency of samples is not required to be computed explicitly. The core is to rearrange the order of matrix multiplications as follows
(18) |
According to the above order of computation, the computational complexity is reduced to $\mathcal{O}(nmd)$. Since $m$, $d$, and the output dimension are usually much smaller than $n$, the complexity can be regarded as $\mathcal{O}(nm)$. Moreover, if the amount of anchors is small enough, the required time of each forward propagation of a GCN layer is $\mathcal{O}(n)$.
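The reordering can be sketched as follows, assuming the $n \times m$ bipartite matrix is row-stochastic (each row is a connectivity distribution over anchors) so that only the anchor-side degrees need to be inverted; variable names are illustrative.

```python
import numpy as np

def bipartite_gcn_layer(B, H, W):
    """sigma(B Delta^{-1} B^T H W) without forming the n x n matrix.
    B: (n, m) row-stochastic sample-to-anchor matrix, H: (n, d) features,
    W: (d, d') weights; Delta is the anchor-side degree (column sums of B).
    The chosen multiplication order costs O(n m (d + d')) instead of O(n^2 d)."""
    delta = np.maximum(B.sum(axis=0), 1e-12)   # anchor degrees, shape (m,)
    T = H @ W                                  # (n, d')
    T = B.T @ T                                # (m, d')
    T = T / delta[:, None]                     # apply Delta^{-1}
    return np.maximum(B @ T, 0.0)              # (n, d'), ReLU as an example
```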
3.4 Siamese Architecture for Anchor Mapping
After showing the accelerated GCN mapping based on anchors and the bipartite graph, we discuss more details of the whole procedure in this section, especially the changes of architecture brought by anchors.
As the task is unsupervised, the whole architecture is based on graph auto-encoder (GAE). The encoder consists of multiple bipartite GCN layers defined in Eq. (17),
(19) |
where $L$ denotes the number of layers. The embedding (denoted by $\boldsymbol{Z}$), which is produced by the encoder, is a non-linear mapping of the raw representations of the samples.
However, the above encoder only transforms the samples, and there is no edge between samples and anchors in the constructed graph. In other words, the anchors can not be projected via the network with the sample adjacency. Meanwhile, GCN layers suffer from the out-of-sample problem. Accordingly, the anchors can not be directly projected into the deep feature space by the encoder. To map anchors into the same feature space, a siamese [62] encoder is thus designed. The encoder for anchors shares the same parameters but has its own graph structure. Surprisingly, the one-step probability transition matrix defined in Eq. (15) gives us a graph that is only composed of anchors. As
(20) |
the corresponding degree matrix of the anchor graph is also the identity matrix, i.e.,
(21) |
Accordingly, the embedding of anchors is formulated as
(22) |
where the input is the raw representation of the anchors.
Loss: As the underlying connectivity distributions have been estimated, we use them, rather than the adjacency matrix used in existing GAEs, as the target for the decoder to rebuild. According to Assumption 1, the connectivity probability should be reconstructed according to the Euclidean distances between the embeddings of samples and anchors. Formally speaking, the decoder is
(23) |
Instead of using the mean-square error (MSE) to reconstruct the graph, AnchorGAE intends to reconstruct the underlying connectivity distributions and employs the cross-entropy to measure the divergence,
(24) |
The GCN part of AnchorGAE is trained via minimizing this cross-entropy loss.
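A compact PyTorch-style sketch of the siamese encoder and the reconstruction loss is given below. The shared-weight convolutions follow the spirit of Eqs. (17) and (22); the softmax-over-negative-squared-distance decoder and the class name are our assumptions for illustration rather than the exact implementation.

```python
import torch
import torch.nn.functional as F

class AnchorGAESketch(torch.nn.Module):
    """Siamese bipartite-GCN sketch: samples and anchors share W1, W2 but use
    their own graphs (B Delta^{-1} B^T for samples, Delta^{-1} B^T B for anchors),
    both applied without materializing an n x n matrix."""

    def __init__(self, d_in, d_hid, d_out):
        super().__init__()
        self.W1 = torch.nn.Linear(d_in, d_hid, bias=False)
        self.W2 = torch.nn.Linear(d_hid, d_out, bias=False)

    def encode(self, X, U, B):
        delta = B.sum(dim=0).clamp_min(1e-12)                    # anchor degrees
        conv_v = lambda H: B @ ((B.t() @ H) / delta[:, None])    # B Delta^{-1} B^T H
        conv_u = lambda H: (B.t() @ (B @ H)) / delta[:, None]    # Delta^{-1} B^T B H
        Z = conv_v(self.W2(F.relu(conv_v(self.W1(X)))))          # sample embeddings
        C = conv_u(self.W2(F.relu(conv_u(self.W1(U)))))          # anchor embeddings
        return Z, C

    def loss(self, Z, C, Q):
        """Cross-entropy between the estimated distributions Q (n x m) and the
        decoder output; the decoder is assumed to be a softmax over negative
        squared Euclidean distances between sample and anchor embeddings."""
        log_q_hat = F.log_softmax(-torch.cdist(Z, C) ** 2, dim=1)
        return -(Q * log_q_hat).sum(dim=1).mean()
```

Training then amounts to minimizing `loss` over the shared weights with a standard optimizer, with the estimated distributions of Section 3.2 serving as the reconstruction targets.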
So far, how to obtain the anchors at the beginning has been ignored. In deep learning, a common manner is to pre-train a neural network. Nevertheless, the pre-training of raw graph auto-encoders would result in nearly $\mathcal{O}(n^2)$ time, which violates the motivation of our model. Therefore, we first compute the anchors and transition probability on the raw features. Then the anchors and transition probability are rectified dynamically, which is elaborated in Section 3.5.
3.4.1 Why Not Reconstruct the Features?
One may ask why AnchorGAE only reconstructs the constructed graph rather than the features, which are given a priori. The goal of the classical auto-encoder is to restore the input features. However, the learned representations are often not promising enough since there is no constraint on the latent space. Accordingly, several works (e.g., adversarial auto-encoder [63], variational auto-encoder [54]) constrain the representations. More importantly, contrastive learning [64] does not reconstruct the features anymore and focuses on how to pull similar samples together and push dissimilar samples away. Contrastive learning has achieved impressive performance in recent years.
In AnchorGAE, the graph is constructed on the learned representations. Define a mapping $g(\cdot)$ that builds a graph from representations. Then the constructed graph can be roughly formulated as $g(\phi(\boldsymbol{X}))$, where $\phi(\cdot)$ denotes the GNN mapping. On the one hand, the composite mapping, $g \circ \phi$, removes the redundant information of $\boldsymbol{X}$. The reconstruction of the graph can be regarded as the restoration of the simplified $\boldsymbol{X}$. On the other hand, the edges of graphs indicate similar sample pairs and dissimilar pairs. The reconstruction of the graph is therefore consistent with the idea of contrastive learning. That is why AnchorGAE only reconstructs the constructed graph.
Table I: Clustering results (ACC and NMI) on the six datasets.

Method | ISOLET ACC | ISOLET NMI | SEGMENT ACC | SEGMENT NMI | USPS ACC | USPS NMI | MNIST-test ACC | MNIST-test NMI | MNIST-full ACC | MNIST-full NMI | Fashion ACC | Fashion NMI
---|---|---|---|---|---|---|---|---|---|---|---|---
K-Means | 0.621 | 0.775 | 0.559 | 0.590 | 0.648 | 0.628 | 0.548 | 0.501 | 0.534 | 0.500 | 0.474 | 0.512 |
Nystrom | 0.648 | 0.780 | 0.562 | 0.549 | 0.668 | 0.619 | 0.575 | 0.505 | 0.589 | 0.502 | 0.563 | 0.522 |
CSC | 0.661 | 0.785 | 0.606 | 0.555 | 0.726 | 0.696 | 0.623 | 0.572 | 0.592 | 0.569 | 0.515 | 0.501 |
KASP | 0.630 | 0.754 | 0.554 | 0.544 | 0.712 | 0.693 | 0.654 | 0.600 | 0.620 | 0.555 | 0.538 | 0.529 |
LSC | 0.657 | 0.774 | 0.562 | 0.557 | 0.727 | 0.694 | 0.702 | 0.615 | 0.714 | 0.623 | 0.634 | 0.629 |
SNC | 0.553 | 0.631 | 0.567 | 0.535 | 0.703 | 0.589 | 0.570 | 0.523 | — | — | — | — |
SGC | 0.464 | 0.665 | 0.524 | 0.611 | 0.657 | 0.801 | 0.719 | 0.757 | — | — | — | — |
GAE | 0.612 | 0.783 | 0.442 | 0.475 | 0.679 | 0.821 | 0.696 | 0.758 | — | — | — | — |
DEC | 0.425 | 0.685 | 0.143 | 0.000 | 0.621 | 0.588 | 0.813 | 0.803 | 0.797 | 0.810 | 0.516 | 0.541 |
SpectralNet | 0.528 | 0.781 | 0.515 | 0.540 | 0.678 | 0.818 | 0.820 | 0.817 | — | — | — | — |
ClusterGCN-L2 | 0.581 | 0.772 | 0.580 | 0.601 | 0.659 | 0.769 | 0.632 | 0.666 | 0.707 | 0.730 | 0.532 | 0.605 |
ClusterGCN-L4 | 0.574 | 0.738 | 0.593 | 0.599 | 0.662 | 0.769 | 0.631 | 0.665 | 0.708 | 0.732 | 0.532 | 0.606 |
Ours-A (fixed anchors & graph) | 0.615 | 0.814 | 0.443 | 0.490 | 0.490 | 0.475 | 0.290 | 0.351 | 0.468 | 0.635 | 0.588 | 0.630 |
Ours-B (fixed sparsity k) | 0.373 | 0.532 | 0.285 | 0.125 | 0.265 | 0.174 | 0.284 | 0.228 | 0.289 | 0.232 | 0.194 | 0.112 |
Ours-C (KNN) | 0.654 | 0.778 | 0.555 | 0.511 | 0.729 | 0.746 | 0.776 | 0.735 | 0.793 | 0.743 | 0.533 | 0.553 |
AnchorGAE-L2 | 0.663 | 0.787 | 0.635 | 0.613 | 0.853 | 0.828 | 0.823 | 0.773 | 0.833 | 0.773 | 0.645 | 0.607 |
AnchorGAE-L4 | 0.591 | 0.756 | 0.576 | 0.508 | 0.781 | 0.775 | 0.810 | 0.762 | 0.808 | 0.771 | 0.613 | 0.590 |
3.5 Will the Adaptive Update Work?
Since the initialized anchors and transition probability are calculated from the original features, the information hidden in them may be limited, especially when the data distribution is complicated. To precisely exploit the high-level information in a self-supervised way, a feasible method is to adaptively update the anchors according to the learned embedding.
To achieve this goal, we first need to recompute the anchors. According to Eq. (7) and (10), we can obtain the anchors under the deep representations. However, the input of our model requires anchors under the raw features. In other words, it is inevitable to compute the pre-images of the updated anchors in the original space. Unfortunately, it is hard to ensure an invertible mapping. A scheme is to utilize classical auto-encoders, since the decoder that tries to reconstruct the raw input from the embedding can be regarded as an approximation of the inverse mapping of the encoder. To simplify the discussion in this paper, we simply set
(25) |
to estimate the anchors under the original feature space.
In our expectation, the self-supervised update of anchors and transition probability should become more and more precise. However, the performance of AnchorGAE becomes worse and worse after several updates of anchors, which we show both theoretically and empirically. An interesting phenomenon is that a better reconstruction indicates a more severe collapse. The following theorem provides a rigorous description of this collapse.
Theorem 1.
Let be the -largest one of and be the updated distribution with the same after step 6 of Algorithm 2. If , then there exists a constant , so that when ,
holds for any .
The above conclusion means that the updated connectivity distribution is a sparse and uniform distribution. In other words, the constructed graph degenerates into an unweighted graph, which is usually unexpected in clustering. More intuitively, the degeneration is caused by projecting samples from the identical cluster into multiple tiny and cohesive clusters, as shown in Figure 2. These tiny clusters have no connection with each other and they scatter randomly in the representation space, such that the clustering result is disturbed. To avoid this collapse, we propose two strategies: (1) increase $k$, which represents the sparsity of neighbors; (2) decrease the number of anchors, $m$. In this paper, we dynamically increase $k$, which is equivalent to attempting to connect these tiny groups before they scatter randomly, as follows
(26) |
Suppose that there are $c$ clusters and the number of samples in the smallest cluster is represented by $n_{\min}$. Then we can define the upper-bound of $k$ and the increment of sparsity as
(27) |
where $k_0$ denotes the sparsity of the initialized anchor graph and $T$ is the number of iterations to update the anchors. If we have no prior information about the smallest cluster, the upper-bound can be set by a simple heuristic based on $n$ and $c$. This brief modification helps a lot to guarantee the stability and performance of AnchorGAE.
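The adaptive update with the growing sparsity can be sketched as below; `train_gae` is a hypothetical helper standing for retraining the encoder of Section 3.4 and returning sample and anchor embeddings (as arrays), while `sparse_connectivity` and `update_anchors` reuse the earlier sketch. The linear schedule towards `k_max` is one reading of Eqs. (26) and (27).

```python
def self_supervised_update(model, X, U0, B0, k0, k_max, T):
    """Adaptive update with incremental sparsity (sketch of Eqs. (25)-(27)):
    after each retraining, connectivity distributions are re-estimated from
    the embeddings, anchors are mapped back to the raw space as weighted
    means of raw features (Eq. (25)), and k grows towards k_max so that the
    tiny groups described by Theorem 1 stay connected."""
    U, B, k = U0, B0, k0
    step = max((k_max - k0) // max(T, 1), 1)    # increment of sparsity
    for _ in range(T):
        Z, C = train_gae(model, X, U, B)        # hypothetical helper: retrain encoder, return embeddings
        k = min(k + step, k_max)                # Eq. (26): enlarge the sparsity
        P = sparse_connectivity(Z, C, k)        # re-estimate p(u_j | v_i) in the embedding space
        U = update_anchors(X, P)                # Eq. (25): anchors back in the raw feature space
        B = P                                   # rows already sum to 1: a transition matrix
    return model, U, B
```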
3.6 Obtain Clustering Assignments
After training AnchorGAE, we can simply run $k$-means on the learned embedding. Due to Assumption 1, the Euclidean distances between similar points will be small, so it is appropriate to use $k$-means on the deep representations.
Another method is to perform spectral clustering on the constructed graph, which is the same as anchor-based spectral clustering. Let
(28) |
be the normalized Laplacian matrix. It should be emphasized that, since the degree matrix of the constructed graph is the identity, it is also the unnormalized Laplacian matrix. Accordingly, the objective of spectral clustering is
(29) |
where the optimization variable is the soft indicator matrix. Then the eigenvalue decomposition of the $n \times n$ Laplacian can be transformed into the singular value decomposition of the $n \times m$ normalized bipartite matrix, which reduces the complexity from $\mathcal{O}(n^3)$ to $\mathcal{O}(nm^2)$.
Finally, one can perform clustering on the bipartite graph directly rather than the constructed graph. Let the bipartite graph be
(30) |
The degree matrix is represented as
(31) |
Then the spectral clustering with normalized cut is to optimize
(32) |
where
(33) |
Since the degree matrix of the samples is the identity, the above problem can be further reformulated as
(34) |
The closed-form solution can be calculated by
(35) |
where the two blocks of the solution are the leading left and right singular vectors of the normalized bipartite graph. The details can be found in [65]. In our experiments, we report the results obtained by the above technique based on bipartite graphs, since it usually outperforms the simple $k$-means.
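A sketch of this final step is given below, assuming the bipartite matrix is row-stochastic so that only the anchor-side degrees need normalization; discretizing by row-normalization plus $k$-means is a common choice here rather than the exact spectral-rotation procedure of [65].

```python
import numpy as np
from sklearn.cluster import KMeans

def bipartite_spectral_clustering(B, n_clusters):
    """Normalized-cut spectral clustering on the learned bipartite graph,
    solved by an SVD of the (n, m) normalized matrix instead of an
    eigendecomposition of an (n, n) Laplacian."""
    delta = np.maximum(B.sum(axis=0), 1e-12)        # anchor degrees
    B_hat = B / np.sqrt(delta)                      # B Delta^{-1/2} (sample degrees are already 1)
    U, _, _ = np.linalg.svd(B_hat, full_matrices=False)
    F = U[:, :n_clusters]                           # leading left singular vectors as soft indicators
    F = F / np.maximum(np.linalg.norm(F, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(F)
```

Combined with the earlier sketches, the whole pipeline reads: initialize the bipartite graph on the raw features, train the siamese encoder while periodically updating the graph with growing sparsity, and finally feed the learned bipartite graph into a routine like the one above.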






Table II: Statistics of the benchmark datasets.

Dataset | # Features | # Size | # Classes
---|---|---|---
ISOLET | 617 | 1560 | 26 |
SEGMENT | 19 | 2310 | 7 |
USPS | 256 | 9298 | 10 |
MNIST-test | 784 | 10000 | 10 |
MNIST-full | 784 | 70000 | 10 |
Fashion-MNIST | 784 | 70000 | 10 |
4 Experiment
In this section, the experimental results and parametric settings of AnchorGAE are illustrated. According to these experimental results, we analyze the performance of AnchorGAE objectively. Additionally, the visualization of data is displayed. The experimental results strongly support the theoretical analysis.
4.1 Datasets and Comparative Methods
The performance of AnchorGAE is evaluated on 6 non-graph datasets, including 2 UCI datasets (ISOLET [66] and SEGMENT [66]) and 4 image datasets (USPS [67], MNIST-test [68], MNIST-full [68], and Fashion-MNIST [69]). Note that the two UCI datasets highlight the difference between AnchorGAE and CNN-based clustering models: since each data point can only be represented by a vector, CNN-based models can not be applied to these datasets. Note that Fashion-MNIST is denoted by Fashion in Table I. The details of these datasets are shown in Table II.
AnchorGAE is compared with 11 methods, including 5 anchor-based methods (Nystrom [9], CSC [10], KASP [11], LSC [13], and SNC [14]), 3 GAE-based methods (SGC [35], GAE [48], and ClusterGCN [34]), 2 deep methods (DEC [17] and SpectralNet [25]), and the classical K-Means [58]. All codes are downloaded from the authors' homepages.
4.2 Experimental Setup
There are three clustering evaluation criteria used in our experiments, including the clustering accuracy (ACC), normalized mutual information (NMI), and running time. In our experiments, the number of anchor points is set from 100 to 1000 with a step of 100. The best results are reported in Table I. The activation function of the last layer is set to linear, while the other layers use the ReLU function. Particularly, ClusterGCN recommends 4 convolution layers. To keep the convolution structure of ClusterGCN consistent with the others, we modify the 4-layer convolution structure (denoted by ClusterGCN-L4) to 2 layers (denoted by ClusterGCN-L2) and retain the 4-layer structure as a competitor. To apply ClusterGCN to unsupervised tasks, we use the same GAE loss and decoder architecture as the other models. To test the scalability of AnchorGAE with a deeper network, we design 2-layer (denoted by AnchorGAE-L2) and 4-layer (denoted by AnchorGAE-L4) convolution structures separately. The neurons of the 2-layer structure are set as 128 and 64, or 256 and 32. For the 4-layer structure, we set the first three layers as 16 and the last layer as $c+1$, where $c$ is the number of labels. The maximum number of iterations to update the network is 200, while the maximum number of epochs to update the anchors is set as 5. The initial sparsity is set as 3.

To investigate the impact of anchors, we run AnchorGAE with different numbers of anchors and different initial sparsity. We also conduct 3 ablation experiments based on AnchorGAE-L2 to study the impact of different parts. Firstly, we fix the anchors and transition probability during training, which is denoted by Ours-A. Secondly, we fix the sparsity $k$, which is denoted by Ours-B. Thirdly, we use a KNN graph, instead of a weighted graph, with the adaptive update and incremental sparsity in our model, which is denoted by Ours-C. All methods are run 10 times and the average results are reported. All experiments are conducted on a PC with an NVIDIA GeForce GTX 1660 SUPER GPU.


4.3 Experimental Results
ACC and NMI are shown in Table I while the runtime is reported in Figure 5. For ACC and NMI, the best results are bolded and suboptimal results are underlined. In particular, some methods either require too much memory to run on MNIST-full and Fashion-MNIST, or the codes provided by the authors fail to run on them, so they are denoted by a horizontal line in Table I and marked accordingly in Figure 5. Besides, the visualization of SEGMENT is shown in Figure 3. Figure 4 shows the impact of the number of anchors, while the effect of the initial sparsity is illustrated in Figure 6.
From the experimental results, we can conclude that:
1. On almost all datasets, AnchorGAE-L2 obtains the best results in all metrics. On MNIST-full, AnchorGAE obtains the suboptimal NMI, while DEC, which consists of many more layers, outperforms AnchorGAE; meanwhile, AnchorGAE, with only 2 layers, achieves better ACC than DEC. AnchorGAE needs to compute anchors, so it requires more time.
2. On MNIST-full, normal GAE-based methods fail to run due to out-of-memory errors, since these methods need to store the graph structure in each convolution. AnchorGAE can convolve without constructing a graph explicitly, which not only needs less memory but also reduces runtime.
3. When the sparsity $k$ is fixed, or the anchors and transition probability are not updated, AnchorGAE usually gets poor results on many datasets. For instance, Ours-B drops about 55% in both ACC and NMI on MNIST-full, while Ours-A drops about 40% in ACC and about 15% in NMI.
4. To ensure fair competition and distinguish between anchor-based methods and GAE-based methods, we set the same number of anchors for AnchorGAE-L4 and the other anchor-based methods. Note that the first two layers of AnchorGAE-L4 are the same as AnchorGAE-L2. Through observing the experimental results, AnchorGAE-L4 obtains acceptable performance on most of the datasets but not the best results. This may be caused by the too small dimension of the final embedding.
5. If the initial sparsity is set too large, it results in dense connections. As a result, a great quantity of wrong information is captured.
6. From the comparison between AnchorGAE and Ours-C, we find that weighted graphs are still important for clustering even after introducing non-linear GNN mapping modules.
5 Proof of Theorem 1
For an arbitrary probability distribution , let be the -largest one. For simplicity, let where represents the mapping function implemented by the trained GCN encoder. Note that the connectivity distribution is estimated through the current representations, i.e.,
(36) |
where and denotes the mapping function before the new training of GCN encoders.
We define and as the solutions at the -th iteration. To prove the theorem clearly, the iterative update of and anchors is shown in Algorithm 1. The following lemma aims to depict the first estimated connectivity distribution after one perfect training of GCNs.
Lemma 1.
Suppose that
where ensures the sparsity as . Let . If for any , then ,
(37) |
Proof.
Without loss of generality, we focus on the connectivity distribution of and suppose that . Let and . According to the definitions,
(38) |
If for any , we have . Clearly, . Suppose that . Therefore, we have
(39) |
where . Combine with the condition, , and we have
(40) |
Similarly, since ,
(41) |
If we update the connectivity distribution based on , then for any ,
(42) |
Furthermore, for any ,
For any , we have the following formulation based on the above result
The proof is easy to extend to other nodes. Hence, the lemma is proved. ∎
The following lemma shows that the updated anchors will approximate the mean vectors of the connected nodes if approaches the sparse and uniform distribution.
Lemma 2.
Let denote the nodes that connect the anchor and represent a -sparse solution given by Theorem 1. If for any , then
(43) |
Proof.
To keep the notation uncluttered, let and
(44) |
Note that . Let and we have
(45) |
where . According to the condition, we have
Accordingly, we have
Hence, the lemma is proved. ∎
With the following lemma, we can prove Theorem 1 via the mathematical induction method.
Lemma 3.
Suppose that for , and . Then there exists such that if and does not connect with , then for any , and share no anchors.
Proof.
At first, let . Hence, . For an arbitrary node connected with and and an anchor which disconnects with , suppose that there exists a node such that connects with and but disconnects with . According to the triangular inequality, we have
According to the triangular inequality,
Therefore,
(46) |
If we define as
(47) |
then Ineq. (46) results in a contradiction when . ∎
Proof of Theorem 1.
The proof is to analyze each iteration of step 6. Let and we can obtain that provided that , according to Lemma 1. Certainly, the conclusion constructs the precondition of Lemma 2.
Let . According to Lemma 2, we have and we can further derive that
and
Therefore, for , we have
Due to that is upper-bounded, the above conclusions are provided as the preconditions of Lemma 3. To keep simplicity, let . Therefore, we know that for any ,
and
Furthermore, as
we have
Similarly, we can also infer that
According to Lemma 3, there exists a constant such that when , approximates the sparse and uniform distribution,
(48) |
We can repeat the above proof for every iteration to update and , and therefore, the theorem is proved. ∎
6 Conclusion
In this paper, we propose an anchor-based siamese GCN clustering model with adaptive bipartite graph convolution (AnchorGAE). The generative perspective for weighted graphs helps us build the relationship between non-graph-type data and graph-type data, and provides a reconstruction goal. Since the anchor-based bipartite graph factorizes the adjacency matrix, we can rearrange the order of matrix multiplications to accelerate the graph convolution. Experiments show that AnchorGAE consumes less time, especially on MNIST-full, and obtains impressive performance. Ablation experiments also verify the necessity of updating the anchors, transition probability, and sparsity in a self-supervised way. From both theoretical and empirical aspects, we show that the simple update would cause a collapse when the model is well-trained. The designed mechanism to connect different groups in advance is also verified by experiments. Since the raw-space representations of anchors are only roughly estimated, we will focus on how to exactly compute them from the embedding space in future work. To further speed up the model, it is also worth developing stochastic gradient descent for AnchorGAE in the future.
References
- [1] A. Y. Ng, M. I. Jordan, and Y. Weiss, “On spectral clustering: Analysis and an algorithm,” in Advances in neural information processing systems, 2002, pp. 849–856.
- [2] L. Hagen and A. B. Kahng, “New spectral methods for ratio cut partitioning and clustering,” IEEE transactions on computer-aided design of integrated circuits and systems, vol. 11, no. 9, pp. 1074–1085, 1992.
- [3] J. Shi and J. Malik, “Normalized cuts and image segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888–905, 2000.
- [4] F. Nie, H. Huang, and C. Ding, “Low-rank matrix recovery via efficient schatten p-norm minimization,” in Proceedings of the Twenty-sixth AAAI conference on artificial intelligence, 2012.
- [5] X. Li, M. Chen, F. Nie, and Q. Wang, “A multiview-based parameter free framework for group detection,” in Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, 2017, pp. 4147–4153.
- [6] X. Li, H. Zhang, and R. Zhang, “Matrix completion via non-convex relaxation and adaptive correlation learning,” IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1–1, 2022.
- [7] X. Chen, W. Hong, F. Nie, J. Z. Huang, and L. Shen, “Enhanced balanced min cut,” International Journal of Computer Vision, pp. 1–14, 2020.
- [8] F. R. Chung, Spectral graph theory. American Mathematical Soc., 1997, vol. 92.
- [9] C. Fowlkes, S. Belongie, F. Chung, and J. Malik, “Spectral grouping using the nystrom method,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 2, pp. 214–225, 2004.
- [10] H. Shinnou and M. Sasaki, “Spectral clustering for a large data set by reducing the similarity matrix size,” in International Conference on Language Resources & Evaluation, 2008.
- [11] D. Yan, L. Huang, and M. I. Jordan, “Fast approximate spectral clustering,” in ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2009.
- [12] W. Liu, J. He, and S. F. Chang, “Large graph construction for scalable semi-supervised learning,” in International Conference on Machine Learning, 2010.
- [13] D. Cai and X. Chen, “Large scale spectral clustering via landmark-based sparse representation,” IEEE Trans Cybern, vol. 45, no. 8, pp. 1669–1680, 2015.
- [14] X. Chen, F. Nie, J. Z. Huang, and M. Yang, “Scalable normalized cut with improved spectral rotation,” in Twenty-Sixth International Joint Conference on Artificial Intelligence, 2017.
- [15] X. Li, H. Zhang, R. Wang, and F. Nie, “Multiview clustering: A scalable and parameter-free bipartite graph fusion method,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 1, pp. 330–344, 2022.
- [16] Q. Qiang, B. Zhang, F. Wang, and F. Nie, “Fast multi-view discrete clustering with anchor graphs,” in Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021, pp. 9360–9367.
- [17] J. Xie, R. Girshick, and A. Farhadi, “Unsupervised deep embedding for clustering analysis,” in International conference on machine learning, 2016, pp. 478–487.
- [18] J. Yang, D. Parikh, and D. Batra, “Joint unsupervised learning of deep representations and image clusters,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 5147–5156.
- [19] K. Ghasedi Dizaji, A. Herandi, C. Deng, W. Cai, and H. Huang, “Deep clustering via joint convolutional autoencoder embedding and relative entropy minimization,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 5736–5745.
- [20] X. Yang, C. Deng, F. Zheng, J. Yan, and W. Liu, “Deep spectral clustering using dual autoencoder network,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 4066–4075.
- [21] M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst, “Geometric deep learning: Going beyond euclidean data,” IEEE Signal Process. Mag., vol. 34, no. 4, pp. 18–42, 2017.
- [22] W. Hu, T. Miyato, S. Tokui, E. Matsumoto, and M. Sugiyama, “Learning discrete representations via information maximizing self-augmented training,” in Proceedings of the 34th International Conference on Machine Learning, ICML 2017, vol. 70, 2017, pp. 1558–1567.
- [23] H. Zhang, P. Li, R. Zhang, and X. Li, “Embedding graph auto-encoder for graph clustering,” IEEE Transactions on Neural Networks and Learning Systems, pp. 1–1, 2022.
- [24] X. Peng, J. Feng, S. Xiao, W.-Y. Yau, J. T. Zhou, and S. Yang, “Structured autoencoders for subspace clustering,” IEEE Transactions on Image Processing, vol. 27, no. 10, pp. 5076–5086, 2018.
- [25] U. Shaham, K. Stanton, H. Li, B. Nadler, R. Basri, and Y. Kluger, “Spectralnet: Spectral clustering using deep neural networks,” arXiv preprint arXiv:1801.01587, 2018.
- [26] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini, “The graph neural network model,” IEEE Transactions on Neural Networks, vol. 20, no. 1, pp. 61–80, 2008.
- [27] J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun, “Spectral networks and locally connected networks on graphs,” arXiv preprint arXiv:1312.6203, 2013.
- [28] T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” in ICLR, 2017.
- [29] P. Velickovic, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio, “Graph attention networks,” in 6th International Conference on Learning Representations, ICLR 2018, 2018.
- [30] M. Li and Z. Zhu, “Spatial-temporal fusion graph neural networks for traffic flow forecasting,” in Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, 2021, pp. 4189–4196.
- [31] Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and P. S. Yu, “A comprehensive survey on graph neural networks,” IEEE Transactions on Neural Networks and Learning Systems, pp. 1–21, 2020.
- [32] J. Chen, T. Ma, and C. Xiao, “Fastgcn: Fast learning with graph convolutional networks via importance sampling,” in 6th International Conference on Learning Representations, ICLR 2018, 2018.
- [33] W. L. Hamilton, R. Ying, and J. Leskovec, “Inductive representation learning on large graphs,” in Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017, pp. 1025–1035.
- [34] W.-L. Chiang, X. Liu, S. Si, Y. Li, S. Bengio, and C.-J. Hsieh, “Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks,” in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019, pp. 257–266.
- [35] F. Wu, T. Zhang, A. Holanda de Souza, C. Fifty, T. Yu, and K. Q. Weinberger, “Simplifying graph convolutional networks,” Proceedings of Machine Learning Research, 2019.
- [36] J. Chen, J. Zhu, and L. Song, “Stochastic training of graph convolutional networks with variance reduction,” arXiv preprint arXiv:1710.10568, 2017.
- [37] D. Bo, X. Wang, C. Shi, M. Zhu, E. Lu, and P. Cui, “Structural deep clustering network,” in WWW. ACM / IW3C2, 2020, pp. 1400–1410.
- [38] M. Defferrard, X. Bresson, and P. Vandergheynst, “Convolutional neural networks on graphs with fast localized spectral filtering,” in Advances in neural information processing systems, 2016, pp. 3844–3852.
- [39] C. Wang, S. Pan, R. Hu, G. Long, J. Jiang, and C. Zhang, “Attributed graph clustering: a deep attentional embedding approach,” in Proceedings of the 28th International Joint Conference on Artificial Intelligence, 2019, pp. 3670–3676.
- [40] R. Cheng and Q. Li, “Modeling the momentum spillover effect for stock prediction via attribute-driven graph attention networks,” in Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, 2021, pp. 55–62.
- [41] Y. Li, R. Yu, C. Shahabi, and Y. Liu, “Diffusion convolutional recurrent neural network: Data-driven traffic forecasting,” in 6th International Conference on Learning Representations, ICLR 2018, 2018.
- [42] K. Xu, W. Hu, J. Leskovec, and S. Jegelka, “How powerful are graph neural networks?” in 7th International Conference on Learning Representations, ICLR 2019, 2019.
- [43] T. Cai, S. Luo, K. Xu, D. He, T. Liu, and L. Wang, “Graphnorm: A principled approach to accelerating graph neural network training,” in Proceedings of the 38th International Conference on Machine Learning, ICML 2021, vol. 139, 2021, pp. 1204–1215.
- [44] K. Xu, M. Zhang, S. Jegelka, and K. Kawaguchi, “Optimization of graph neural networks: Implicit acceleration by skip connections and more depth,” in Proceedings of the 38th International Conference on Machine Learning, ICML 2021, vol. 139, 2021, pp. 11 592–11 602.
- [45] K. Oono and T. Suzuki, “Graph neural networks exponentially lose expressive power for node classification,” in International Conference on Learning Representations, 2020.
- [46] W. Cong, M. Ramezani, and M. Mahdavi, “On provable benefits of depth in training graph convolutional networks,” Advances in Neural Information Processing Systems, vol. 34, 2021.
- [47] U. Alon and E. Yahav, “On the bottleneck of graph neural networks and its practical implications,” in 9th International Conference on Learning Representations, ICLR 2021, 2021.
- [48] T. Kipf and M. Welling, “Variational graph auto-encoders,” arXiv preprint arXiv:1611.07308, 2016.
- [49] S. Pan, R. Hu, G. Long, J. Jiang, L. Yao, and C. Zhang, “Adversarially regularized graph autoencoder for graph embedding.” in IJCAI, 2018, pp. 2609–2615.
- [50] J. Park, M. Lee, H. J. Chang, K. Lee, and J. Y. Choi, “Symmetric graph convolutional autoencoder for unsupervised graph representation learning,” in Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 6519–6528.
- [51] X. Zhang, H. Liu, Q. Li, and X. M. Wu, “Attributed graph clustering via adaptive graph convolution,” in 28th International Joint Conference on Artificial Intelligence, IJCAI 2019, 2019, pp. 4327–4333.
- [52] X. Li, H. Zhang, and R. Zhang, “Adaptive graph auto-encoder for general data clustering,” IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1–1, 2021.
- [53] G. E. Hinton and R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science, vol. 313, no. 5786, pp. 504–507, 2006.
- [54] D. P. Kingma and M. Welling, “Auto-encoding variational bayes,” in ICLR, 2014.
- [55] K. Hassani and A. H. K. Ahmadi, “Contrastive multi-view representation learning on graphs,” in International conference on machine learning, 2020.
- [56] H. Zhu and P. Koniusz, “Simple spectral graph convolution,” in 9th International Conference on Learning Representations, 2021.
- [57] G. Karypis and V. Kumar, “A fast and high quality multilevel scheme for partitioning irregular graphs,” Siam Journal on Scientific Computing, vol. 20, no. 1, pp. 359–392, 1998.
- [58] J. Macqueen, “Some methods for classification and analysis of multivariate observations,” in Proc of Berkeley Symposium on Mathematical Statistics & Probability, 1965.
- [59] Y. Li, P. Hu, J. Z. Liu, D. Peng, J. T. Zhou, and X. Peng, “Contrastive clustering,” in Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Virtual Event, February 2-9, 2021, 2021, pp. 8547–8555.
- [60] X. Liu, M. Yan, L. Deng, G. Li, X. Ye, D. Fan, S. Pan, and Y. Xie, “Survey on graph neural network acceleration: An algorithmic perspective,” in Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, 2022.
- [61] F. Spitzer, Principles of random walk. Springer Science & Business Media, 2013, vol. 34.
- [62] S. Chopra, R. Hadsell, and Y. Lecun, “Learning a similarity metric discriminatively, with application to face verification,” in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), 2005.
- [63] A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey, “Adversarial autoencoders,” arXiv preprint arXiv:1511.05644, 2015.
- [64] T. Chen, S. Kornblith, M. Norouzi, and G. E. Hinton, “A simple framework for contrastive learning of visual representations,” in Proceedings of the 37th International Conference on Machine Learning, ICML 2020, ser. Proceedings of Machine Learning Research, vol. 119, 2020, pp. 1597–1607.
- [65] F. Nie, X. Wang, C. Deng, and H. Huang, “Learning a structured optimal bipartite graph for co-clustering,” in Advances in Neural Information Processing Systems, 2017, pp. 4129–4138.
- [66] D. Dua and C. Graff, “UCI machine learning repository,” 2017. [Online]. Available: http://archive.ics.uci.edu/ml
- [67] J. J. Hull, “A database for handwritten text recognition research,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 5, pp. 550–554, 2002.
- [68] Y. Lecun and L. Bottou, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
- [69] H. Xiao, K. Rasul, and R. Vollgraf, “Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms,” arXiv preprint arXiv: 1708.07747, 2017.
Hongyuan Zhang received the B.E. degree in software engineering from Xidian University, Xi'an, China in 2019. He is currently pursuing the Ph.D. degree from the School of Computer Science and the School of Artificial Intelligence, OPtics and ElectroNics (iOPEN), Northwestern Polytechnical University, Xi'an, China.

Jiankun Shi received the B.E. degree in software engineering from Henan University, Zhengzhou, China in 2020. He is currently working toward the master's degree with the School of Artificial Intelligence, OPtics and ElectroNics (iOPEN), Northwestern Polytechnical University, Xi'an, China.

Rui Zhang (M'19) received the Ph.D. degree in computer science at Northwestern Polytechnical University, Xi'an, China in 2018. He currently serves as an Associate Professor with the School of Artificial Intelligence, OPtics and ElectroNics (iOPEN), Northwestern Polytechnical University, Xi'an, China.

Xuelong Li (M'02-SM'07-F'12) is a Full Professor with the School of Artificial Intelligence, OPtics and ElectroNics (iOPEN), Northwestern Polytechnical University, Xi'an, China.