Residual Hyperbolic Graph Convolution Networks
Abstract
Hyperbolic graph convolutional networks (HGCNs) have demonstrated strong representational capabilities for modeling hierarchically structured graphs. However, as in general GCNs, over-smoothing may occur as the number of model layers increases, limiting the representation capabilities of most current HGCN models. In this paper, we propose residual hyperbolic graph convolutional networks (-HGCNs) to address the over-smoothing problem. We introduce a hyperbolic residual connection function to overcome the over-smoothing problem and theoretically prove its effectiveness. Moreover, we use product manifolds and HyperDrop to facilitate the -HGCNs. The distinctive features of the -HGCNs are as follows: (1) The hyperbolic residual connection preserves the initial node information in each layer and adds a hyperbolic identity mapping to prevent node features from becoming indistinguishable. (2) Product manifolds in -HGCNs are set up with different origin points in different components to facilitate the extraction of feature information from a wider range of perspectives, which enhances the representational capability of -HGCNs. (3) HyperDrop adds multiplicative Gaussian noise to hyperbolic representations, such that perturbations can be introduced to alleviate the over-fitting problem without destroying the hyperbolic geometry. Experimental results demonstrate the effectiveness of -HGCNs under various graph convolution layers and different structures of product manifolds.
Introduction
Hyperbolic graph convolutional networks (HGCNs) have been an emerging research topic due to their superior capabilities for modeling hierarchical graphs (Chami et al. 2019; Dai et al. 2021). Different from Euclidean spaces, whose volume expands polynomially with radius, hyperbolic spaces have volume that grows exponentially with radius, which is well suited to the geometry of hierarchical data. Benefiting from this property, great progress has been made by generalizing Euclidean methods to hyperbolic spaces, such as hyperbolic graph convolutional networks (Chami et al. 2019; Dai et al. 2021), hyperbolic image embeddings (Khrulkov et al. 2020), and hyperbolic word embeddings (Nickel and Kiela 2017, 2018). However, the over-smoothing issue impedes the development of deep HGCNs: over-smoothing means that node features become indistinguishable after passing through a large number of graph convolution layers. It has been proved that graph convolution is a special form of Laplacian smoothing (Li, Han, and Wu 2018). Smoothing on nodes can reduce intra-class differences, while over-smoothing makes the model less discriminative by rendering node features indistinguishable.
In this paper, we propose residual hyperbolic graph convolutional networks (-HGCNs) to address the over-smoothing problem. Specifically, we introduce a hyperbolic residual connection function and incorporate product manifolds and HyperDrop into HGCNs. The details are as follows: (1) The hyperbolic residual connection transmits the initial node information to each layer to prevent node features from becoming indistinguishable, and the hyperbolic identity mapping prevents performance degradation caused by deepening the models. (2) Product manifolds pick different origin points in different components, so that the same input has different embedding results in different components, giving them the ability to view the graph structure from different perspectives. This enhances the representational ability of -HGCNs. (3) HyperDrop adds multiplicative Gaussian noise to hyperbolic neurons to alleviate the over-fitting issue, inheriting the insight of training with noise while preserving the hyperbolic geometry. Extensive experiments demonstrate the effectiveness of -HGCNs under various graph convolution layers and different structures of product manifolds. The contributions of this paper are summarized as follows:
• We propose -HGCN, a product-manifold-based deep hyperbolic graph convolutional network that makes up for the deficiency of existing HGCNs in capturing long-range relationships in graphs.
• We design a hyperbolic residual connection that addresses the over-smoothing issue while deepening HGCNs, and we theoretically prove its effectiveness.
• We propose to use product manifolds with different origin points in different components, which enables -HGCNs to extract more comprehensive features from the data.
• We develop HyperDrop, a regularization method tailored for hyperbolic representations. It improves the generalization ability of -HGCNs and alleviates the over-fitting issue.
Related Work
Graph convolutional networks typically embed graphs in Euclidean spaces since Euclidean spaces are the most commonly used and the easiest to compute in. Many researchers have noticed that Euclidean spaces have limitations when modeling data with hierarchical structure. De Sa et al. (De Sa et al. 2018) showed that it is not possible to embed trees into Euclidean spaces with arbitrarily low distortion, even in an infinite-dimensional Euclidean space, whereas trees can be embedded into a two-dimensional hyperbolic space with arbitrarily low distortion. Such a surprising fact benefits from an appealing property of hyperbolic spaces: unlike the volume of a ball in Euclidean space, which expands polynomially with radius, the volume of a ball in hyperbolic space grows exponentially with radius (Liu, Nickel, and Kiela 2019). Thus, hyperbolic spaces are commonly viewed as smooth versions of trees and are more suitable for modeling hierarchical data.
Several works discovered that graphs, e.g., biological networks and social networks, exhibit highly hierarchical structure (Krioukov et al. 2010; Papadopoulos et al. 2012). Krioukov et al. (Krioukov et al. 2010) proved that typical properties of such graphs, such as power-law degree distributions and strong clustering, are closely related to the curvature of hyperbolic spaces. Based on the above observations, generalizing GCNs from Euclidean spaces to hyperbolic spaces has become an emerging research topic. Liu et al. (Liu, Nickel, and Kiela 2019) and Chami et al. (Chami et al. 2019) first bridged this gap and concurrently proposed HGCNs. Following these works, many advanced techniques have been proposed to improve HGCNs. Dai et al. (Dai et al. 2021) discovered that performing graph convolution in the tangent space distorts the global structure of hyperbolic spaces, because the tangent space is only a local approximation of a hyperbolic manifold. Yao et al. (Yao, Pi, and Chen 2022) designed a hyperbolic skipped knowledge graph convolutional network to capture network structure characteristics in hyperbolic knowledge embeddings. Liu et al. (Liu and Lang 2023) proposed a multi-curvature hyperbolic heterogeneous graph convolutional network (McH-HGCN) based on type triplets for heterogeneous graphs.
Preliminaries
Hyperbolic Manifold.
A Riemannian manifold, or Riemannian space, is a real and smooth manifold equipped with a positive-definite metric tensor. It is a topological space that is locally homeomorphic to a Euclidean space at each point, and the local Euclidean space is termed the tangent space.
Lorentz Model.
An $n$-dimensional Lorentz model $(\mathcal{L}^n, g_{\mathcal{L}})$ is defined by the manifold $\mathcal{L}^n = \{\mathbf{x} \in \mathbb{R}^{n+1} : \langle \mathbf{x}, \mathbf{x} \rangle_{\mathcal{L}} = -1,\ x_0 > 0\}$, where the Lorentz inner product is defined as

$\langle \mathbf{x}, \mathbf{y} \rangle_{\mathcal{L}} = \mathbf{x}^{\top} g_{\mathcal{L}}\, \mathbf{y} = -x_0 y_0 + \sum_{i=1}^{n} x_i y_i,$ (1)

and the metric tensor is $g_{\mathcal{L}} = \mathrm{diag}([-1, 1, \ldots, 1])$, where $\mathrm{diag}(\cdot)$ denotes a diagonal matrix.
Exponential and logarithmic maps.
Mappings between a Riemannian manifold and its tangent spaces are termed exponential and logarithmic maps. Let $\mathbf{x}$ be a point on the Lorentz manifold $\mathcal{L}^n$, $\mathcal{T}_{\mathbf{x}}\mathcal{L}^n$ be the tangent space at $\mathbf{x}$, and $\mathbf{v} \in \mathcal{T}_{\mathbf{x}}\mathcal{L}^n$ be a vector on the tangent space. The exponential map that projects $\mathbf{v}$ onto the manifold is defined as

$\exp_{\mathbf{x}}(\mathbf{v}) = \cosh(\|\mathbf{v}\|_{\mathcal{L}})\, \mathbf{x} + \sinh(\|\mathbf{v}\|_{\mathcal{L}})\, \frac{\mathbf{v}}{\|\mathbf{v}\|_{\mathcal{L}}},$ (2)

where $\|\mathbf{v}\|_{\mathcal{L}} = \sqrt{\langle \mathbf{v}, \mathbf{v} \rangle_{\mathcal{L}}}$ is the norm of $\mathbf{v}$. The logarithmic map, inverse to the exponential map at $\mathbf{x}$, is given by

$\log_{\mathbf{x}}(\mathbf{y}) = \operatorname{arcosh}(-\langle \mathbf{x}, \mathbf{y} \rangle_{\mathcal{L}})\, \frac{\mathbf{y} + \langle \mathbf{x}, \mathbf{y} \rangle_{\mathcal{L}}\, \mathbf{x}}{\|\mathbf{y} + \langle \mathbf{x}, \mathbf{y} \rangle_{\mathcal{L}}\, \mathbf{x}\|_{\mathcal{L}}}.$ (3)
Parallel Transport.
The generalization of parallel translation to non-Euclidean geometry is termed parallel transport. For two points $\mathbf{x}, \mathbf{y}$ on the Lorentz model, the parallel transport of a tangent vector $\mathbf{v} \in \mathcal{T}_{\mathbf{x}}\mathcal{L}^n$ from the tangent space at $\mathbf{x}$ to the tangent space at $\mathbf{y}$, along the geodesic connecting them on the Lorentz model, is defined as

$P_{\mathbf{x} \to \mathbf{y}}(\mathbf{v}) = \mathbf{v} + \frac{\langle \mathbf{y}, \mathbf{v} \rangle_{\mathcal{L}}}{1 - \langle \mathbf{x}, \mathbf{y} \rangle_{\mathcal{L}}} (\mathbf{x} + \mathbf{y}).$ (4)
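As a concrete reference for these maps, the following sketch implements the Lorentz inner product, the exponential and logarithmic maps, and parallel transport for the curvature −1 Lorentz model in NumPy. The helper names and the fixed curvature are our own assumptions rather than notation taken from the paper.

```python
import numpy as np

def lorentz_inner(x, y):
    """Lorentz inner product <x, y>_L = -x_0 * y_0 + sum_i x_i * y_i."""
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def exp_map(x, v):
    """Exponential map: project tangent vector v at x onto the Lorentz manifold."""
    vn = np.sqrt(max(lorentz_inner(v, v), 1e-12))   # Lorentzian norm (positive on tangent vectors)
    return np.cosh(vn) * x + np.sinh(vn) * v / vn

def log_map(x, y):
    """Logarithmic map: pull y back to the tangent space at x (inverse of exp_map)."""
    alpha = -lorentz_inner(x, y)                    # alpha >= 1 for points on the manifold
    u = y - alpha * x                               # component of y Lorentz-orthogonal to x
    un = np.sqrt(max(lorentz_inner(u, u), 1e-12))
    return np.arccosh(np.clip(alpha, 1.0, None)) * u / un

def parallel_transport(x, y, v):
    """Transport tangent vector v from T_x to T_y along the geodesic from x to y."""
    return v + lorentz_inner(y, v) / (1.0 - lorentz_inner(x, y)) * (x + y)

# Example: map a tangent vector at the origin o = (1, 0, ..., 0) onto the manifold and back.
o = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 0.3, -0.2])      # tangent at o: the time coordinate is zero
p = exp_map(o, v)
assert np.allclose(log_map(o, p), v, atol=1e-6)
```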
Method
We propose residual hyperbolic graph convolutional networks (-HGCNs) to address the over-smoothing problem and enhance the representational ability of HGCNs.
Let $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ denote a graph with a vertex set $\mathcal{V}$ and an edge set $\mathcal{E}$. $\mathbf{X} \in \mathbb{R}^{N \times d}$ denotes the node features, which typically lie in Euclidean spaces, where $N$ is the number of nodes and $d$ is the dimension of the node features.
Residual Hyperbolic Graph Convolution
Lorentz Operations.
Due to the strict manifold constraint of the Lorentz model, basic operations (matrix multiplication, vector addition, etc.) cannot be trivially generalized to Lorentz representations. Based on the maps mentioned above, we define the following operations.
Definition 1
(Lorentz matrix-vector multiplication). The Lorentz matrix-vector multiplication of a weight matrix and a point on the Lorentz model is defined as
(5) |
Definition 2
(Lorentz scalar multiplication). The Lorentz scalar multiplication of a scalar and a point on the Lorentz model is defined as
(6) |
Definition 3
(Lorentz vector addition). The Lorentz vector addition of two points on the Lorentz model is defined as
(7) |
where is the parallel transport operator defined in Eq (4).
Definition 4
(Lorentz activation function). For , the Lorentz activation function on the Lorentz model is defined as
(8) |
where the inner activation can be any Euclidean activation function, such as ReLU.
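The exact formulas of Definitions 1–4 are given above; as one concrete and common way to realize such operations, the sketch below performs them in the tangent space at the origin, reusing `exp_map`, `log_map`, and `parallel_transport` from the previous sketch. This tangent-space realization is an illustrative assumption and may differ in detail from the paper's exact definitions.

```python
import numpy as np  # assumes lorentz_inner, exp_map, log_map, parallel_transport from the sketch above

o = np.array([1.0, 0.0, 0.0])   # origin of a 2-dimensional Lorentz model

def project_tangent_at_origin(v):
    """Force the time coordinate to zero so that v lies in the tangent space at the origin."""
    v = v.copy()
    v[0] = 0.0
    return v

def lorentz_matvec(W, x, origin=o):
    """Illustrative Lorentz matrix-vector multiplication: apply W in the tangent space at the origin."""
    v = project_tangent_at_origin(W @ log_map(origin, x))
    return exp_map(origin, v)

def lorentz_scalar_mul(r, x, origin=o):
    """Illustrative Lorentz scalar multiplication: scale the tangent representation of x by r."""
    return exp_map(origin, r * log_map(origin, x))

def lorentz_add(x, y, origin=o):
    """Illustrative Lorentz vector addition: transport log_o(y) to the tangent space at x, then exp."""
    v = parallel_transport(origin, x, log_map(origin, y))
    return exp_map(x, v)

def lorentz_activation(x, origin=o, act=lambda t: np.maximum(t, 0.0)):
    """Illustrative Lorentz activation: apply an activation (e.g., ReLU) in the tangent space at the origin."""
    return exp_map(origin, act(log_map(origin, x)))
```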
Residual Hyperbolic Graph Convolution.
Since the input of a hyperbolic graph convolutional network is required to be hyperbolic, we construct the initial Lorentz node features, where the $i$-th row, the Lorentz feature of the $i$-th node, is generated by
(9) | ||||
where $\mathbf{x}_i$ denotes the $i$-th row of $\mathbf{X}$. Such a construction is based on the fact that $[0, \mathbf{x}_i]$ can be viewed as a tangent vector on the tangent space at the origin point, satisfying $\langle [0, \mathbf{x}_i], \mathbf{o} \rangle_{\mathcal{L}} = 0$, where $\mathbf{o} = (1, 0, \ldots, 0)^{\top}$ is the origin point of the Lorentz model.
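This construction can be sketched as follows: a Euclidean feature vector is prepended with a zero time coordinate, which makes it a valid tangent vector at the origin, and is then mapped onto the manifold with the exponential map (reusing `exp_map` from the earlier sketch; variable names are ours).

```python
import numpy as np  # assumes exp_map from the earlier sketch

def euclidean_to_lorentz(x_e):
    """Lift a Euclidean feature vector onto the Lorentz model via exp at the origin.

    [0, x_e] is a valid tangent vector at o = (1, 0, ..., 0) because <o, [0, x_e]>_L = 0."""
    d = x_e.shape[0]
    o = np.zeros(d + 1)
    o[0] = 1.0
    return exp_map(o, np.concatenate(([0.0], x_e)))

X = np.random.randn(5, 4)                             # 5 nodes, 4-dimensional Euclidean features
H0 = np.stack([euclidean_to_lorentz(x) for x in X])   # initial Lorentz node features, shape (5, 5)
# Every row h satisfies <h, h>_L = -1, i.e., it lies on the Lorentz manifold.
```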
The performance of a hyperbolic graph convolutional network declines as the number of graph convolution layers increases, which is called the over-smoothing issue (Li, Han, and Wu 2018). This is because graph convolution has been proved to be a special form of Laplacian smoothing, which makes node features tend to become indistinguishable after many graph convolutions (Li, Han, and Wu 2018). Inspired by (Chen et al. 2020), we design a hyperbolic residual connection and a hyperbolic identity mapping to tackle this issue. The residual hyperbolic graph convolution operator is defined as
(10) | ||||
where , , and denote the Lorentz matrix-vector multiplication (Definition 1), the Lorentz scalar multiplication (Definition 2), and the Lorentz vector addition (Definition 3), respectively. Here is the Lorentz activation function (Definition 4), and and are hyper-parameters that control the weights of the hyperbolic residual connection and the hyperbolic identity mapping.
Formally, at the -th layer, residual hyperbolic graph convolution performs like
(11) | ||||
where takes the node features from the previous layer, and outputs the node features at the -th layer.
Compared to the convolution operator in vanilla GCNs, i.e. , relieves the over-smoothing issue through two modifications: (1) the hyperbolic residual connection adds information paths from the initial node features to each graph convolution layer, such that no matter how deep a hyperbolic graph convolutional network is, the node features at the top layer still combine the initial node features and avoid becoming indistinguishable; (2) the hyperbolic identity mapping ensures that a deep hyperbolic graph convolutional network is not worse than a shallow model. In the extreme case where the values of are set to zero after the -th layer, the model degenerates to an -layer GCN no matter how deep it is.
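To make the layer structure concrete, the following sketch implements a residual hyperbolic graph convolution layer by pulling the node features back to the tangent space at the origin, applying the GCNII-style initial residual and identity mapping there, and mapping the result back to the manifold. This tangent-space realization and the names `A_hat`, `W`, `alpha`, `beta` are our assumptions; the paper's operator is defined through the Lorentz operations of Definitions 1–4 and may differ in detail.

```python
import numpy as np  # assumes exp_map and log_map from the earlier sketches

def residual_hgcn_layer(H, H0, A_hat, W, alpha, beta, origin):
    """One residual hyperbolic graph convolution layer (illustrative tangent-space realization).

    H, H0 : (n, d+1) current and initial Lorentz node features
    A_hat : (n, n) normalized adjacency matrix with self-loops
    W     : (d, d) Euclidean weight matrix acting on the spatial tangent coordinates
    """
    n, dim = H.shape
    T = np.stack([log_map(origin, h)[1:] for h in H])      # spatial tangent coordinates, (n, d)
    T0 = np.stack([log_map(origin, h)[1:] for h in H0])
    S = (1.0 - alpha) * (A_hat @ T) + alpha * T0            # hyperbolic residual connection
    S = S @ ((1.0 - beta) * np.eye(dim - 1) + beta * W)     # hyperbolic identity mapping
    S = np.maximum(S, 0.0)                                  # ReLU in the tangent space
    V = np.concatenate([np.zeros((n, 1)), S], axis=1)       # re-attach zero time coordinate
    return np.stack([exp_map(origin, v) for v in V])
```

Stacking such layers while always feeding the same initial features `H0` mirrors the role of the hyperbolic residual connection described above; setting `alpha = 0` recovers a plain (non-residual) smoothing layer.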
Effectiveness of Hyperbolic Residual Connection
This section theoretically explains the effectiveness of our network architecture. Inspired by (Cai and Wang 2020), we first define a hyperbolic version of the "Dirichlet energy" for tracking node embeddings as follows. The Dirichlet energy measures the "smoothness" of a unit-norm function on the graph. Indistinguishable node embeddings, which characterize the over-smoothing issue, result in small Dirichlet energy. For details of the formulas and proofs in this section, see the supplementary material.
Definition 5
The Dirichlet energy $E(f)$ of a scalar function $f \in \mathbb{R}^{N}$ on the graph $\mathcal{G}$ is defined as

$E(f) = f^{\top} \tilde{\Delta} f = \frac{1}{2} \sum_{i,j} a_{ij} \Big( \frac{f_i}{\sqrt{1 + d_i}} - \frac{f_j}{\sqrt{1 + d_j}} \Big)^{2},$

where $\tilde{\Delta} = I - \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$ denotes the augmented normalized Laplacian, while $A = (a_{ij})$ and $D = \mathrm{diag}(d_1, \ldots, d_N)$ are the adjacency and degree matrices of $\mathcal{G}$ (with $\tilde{A} = A + I$ and $\tilde{D} = D + I$). For a vector field $F \in \mathbb{R}^{N \times d}$, its Dirichlet energy is $E(F) = \mathrm{tr}\big(F^{\top} \tilde{\Delta} F\big)$.
Note that the defined Dirichlet energy always pulls back the node embedding in Lorentz space to its tangent space at the origin point.
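As a reference point, the sketch below computes a Dirichlet energy of Lorentz node features by first pulling them back to the tangent space at the origin and then applying the degree-normalized form used by Cai and Wang (2020); the exact normalization in the paper's definition may differ.

```python
import numpy as np  # assumes log_map from the earlier sketches

def dirichlet_energy(H, A, origin):
    """Dirichlet energy of Lorentz node features, pulled back to the tangent space at the origin.

    Follows the degree-normalized form E = 1/2 * sum_{i,j} A_ij || f_i/sqrt(1+d_i) - f_j/sqrt(1+d_j) ||^2."""
    F = np.stack([log_map(origin, h) for h in H])          # pull back to the tangent space at the origin
    deg = A.sum(axis=1)
    Fn = F / np.sqrt(1.0 + deg)[:, None]                   # degree-normalized features
    diff = Fn[:, None, :] - Fn[None, :, :]                 # pairwise differences, shape (n, n, d+1)
    return 0.5 * np.sum(A[:, :, None] * diff ** 2)
```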
If the hyperbolic graph convolution operator as in (10) has no original input, i.e. therein, then
(12) |
where is the smallest non-zero eigenvalue of and denotes the maximal singular value of . Note that for any matrix . Hence by the triangle inequality,
(13) |
Actually for any weight matrix may be estimated by
Lemma 1
If is a weight matrix, i.e. , then for any with , .
Hence, combined with (12) and (13), we easily prove that in HGCNs without initial input, we have . Thus, if the graph does not have enough expansion, say , then HGCNs would over-smooth exponentially as the number of layers increases.
The above result suggests adding an initial input as in , seeking to interfere with the decrease of the Dirichlet energy, which is the motivation of -HGCNs.
We investigate in detail the interference of the initial input. For simplicity, we assume that the features in the process all have positive entries, so that ReLU does not affect the evaluation of the Dirichlet energy. Thus, utilizing the same argument as in the proof of (12) in the case with initial input, we have
(14) |
with and the last equality of which is due to linearity of parallel transport, and .
By definition of parallel transport, we have
(15) |
Further by definition,
(16) |
Also by definition, . Then again by the property of parallel transport, we have
(17) |
where the coefficients depend on and on the Lorentzian norms of the above three vectors. Noting that only depends on and , the effect of is never negligible. Also (),
which removes from and re-scales it by a large factor. Then altogether we have
(18) |
where is similarly negligible. Thus we prove that in -HGCNs with initial input, the Dirichlet energy is bounded away from zero even if the Dirichlet energy in the corresponding HGCNs without initial input decreases to zero.
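The following toy experiment, a Euclidean construction of our own rather than the paper's hyperbolic setting, illustrates the two regimes: plain smoothing drives the Dirichlet energy to zero exponentially, whereas adding an initial residual keeps it bounded away from zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
A = np.triu((rng.random((n, n)) < 0.2).astype(float), 1)
A = A + A.T                                              # random undirected toy graph
A_tilde = A + np.eye(n)                                  # add self-loops
deg = A_tilde.sum(axis=1)
P = A_tilde / np.sqrt(deg)[:, None] / np.sqrt(deg)[None, :]   # normalized adjacency D^-1/2 A D^-1/2
L = np.eye(n) - P                                        # normalized Laplacian

def dirichlet(X):
    return np.trace(X.T @ L @ X)

X0 = rng.standard_normal((n, 4))
alpha = 0.1
X_plain, X_res = X0.copy(), X0.copy()
for layer in range(1, 33):
    X_plain = P @ X_plain                                # plain smoothing: energy decays toward zero
    X_res = (1 - alpha) * (P @ X_res) + alpha * X0       # initial residual: energy stays bounded away from zero
    if layer in (1, 8, 32):
        print(f"layer {layer:2d}: plain {dirichlet(X_plain):.2e}, residual {dirichlet(X_res):.2e}")
```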
Product Manifold
We use the product manifold of the Lorentz models as embedding space. The Lorentz components of the product manifold are independent of each other.
The product manifold is the Cartesian product of a sequence of Riemannian manifolds, each of which is called a component. Given a sequence of the Lorentz models where denotes the dimension of the -th component, the product manifold is defined as . The coordinate of a point on is written as where . Similarly, the coordinate of a tangent vector is written as where . For and , the exponential and logarithmic maps on are defined as
(19) |
(20) |
Different from ordinary product manifolds, we use components that are copies of Lorentz spaces with randomly prescribed origin points. This gives -HGCNs the ability to extract node features from a wider range of perspectives. Mathematically, such a construction of product manifolds is inspired by the general construction of manifolds using Euclidean strata with different coordinates.
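A minimal sketch of such a product manifold is given below: each component keeps its own origin point, and the exponential/logarithmic maps are applied component-wise at that origin (reusing `exp_map` and `log_map` from the earlier sketches; the class name, the way random origins are sampled, and the "[2×8]"-style configuration shown are our assumptions).

```python
import numpy as np  # assumes exp_map and log_map from the earlier sketches

def random_origin(dim, scale=0.5, rng=np.random.default_rng(0)):
    """Sample a random point on a dim-dimensional Lorentz model to serve as a component's origin."""
    o = np.zeros(dim + 1)
    o[0] = 1.0
    v = np.concatenate(([0.0], scale * rng.standard_normal(dim)))
    return exp_map(o, v)

class ProductLorentz:
    """Product of Lorentz components, each with its own (possibly different) origin point."""
    def __init__(self, dims, origins):
        self.dims = dims            # spatial dimensions of the components
        self.origins = origins      # one origin point per component

    def exp(self, v_parts):
        # Component-wise exponential map at each component's own origin.
        return [exp_map(o, v) for o, v in zip(self.origins, v_parts)]

    def log(self, x_parts):
        # Component-wise logarithmic map at each component's own origin.
        return [log_map(o, x) for o, x in zip(self.origins, x_parts)]

# Example: a product of eight 2-dimensional Lorentz components (a "[2x8]"-style configuration).
dims = [2] * 8
manifold = ProductLorentz(dims, [random_origin(d) for d in dims])
```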
Hyperbolic Dropout
Hyperbolic Dropout (HyperDrop) adds multiplicative Gaussian noise on Lorentz components to regularize HGCNs and alleviate the over-fitting issue. Concretely, let denote a product manifold of Lorentz models where denotes the dimension of the -th Lorentz component. Given an input on the product manifold where , HyperDrop is formulated as
(21) | ||||
where is the multiplicative Gaussian noise drawn from the Gaussian distribution . Following (Srivastava et al. 2014), we set where denotes drop rate. denotes the Lorentz scalar multiplication that is the generalization of scalar multiplication to the Lorentz representations, defined in Definition 2. could be any realization of a desirable function, such as a neural network with parameters .
It is noted that we sample from the Gaussian distribution instead of the Bernoulli distribution used in standard dropout for the following reason. If the noise is drawn from the Bernoulli distribution and happens to be zero (with probability equal to the drop rate) at the -th neural network layer, the information flow of the -th Lorentz component will be interrupted, leading to the deactivation of the -th Lorentz component after the -th neural network layer. In contrast, for noise drawn from a Gaussian distribution with mean value 1, being exactly equal to zero is a small-probability event. Thus, the -th Lorentz component always works.
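A minimal sketch of HyperDrop, assuming the Lorentz scalar multiplication is realized via the log/exp maps at each component's origin: every component representation is scaled by a multiplicative factor drawn from a Gaussian with mean 1 and variance determined by the drop rate, following the multiplicative-Gaussian variant of Srivastava et al. (2014). The function and argument names are illustrative.

```python
import numpy as np  # assumes exp_map and log_map from the earlier sketches

def hyperdrop(x_parts, origins, drop_rate, rng=np.random.default_rng(), training=True):
    """HyperDrop: multiplicative Gaussian noise on each Lorentz component.

    x_parts : list of component representations on the Lorentz models
    origins : origin point of each component
    The noise is drawn from N(1, drop_rate / (1 - drop_rate)) and applied through the
    Lorentz scalar multiplication (scaling in the tangent space at the component's origin)."""
    if not training:
        return x_parts                                   # no noise at inference time
    sigma = np.sqrt(drop_rate / (1.0 - drop_rate))
    out = []
    for x, o in zip(x_parts, origins):
        eps = rng.normal(loc=1.0, scale=sigma)           # multiplicative Gaussian noise, mean 1
        out.append(exp_map(o, eps * log_map(o, x)))      # Lorentz scalar multiplication by eps
    return out
```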
We may interpret HyperDrop from a Bayesian perspective. For convenience, we take a single Lorentz model and the Lorentz linear transformation as an example, i.e. and . We have
(22) |
equal to
(23) | ||||
where denotes the Lorentz matrix-vector multiplication as defined in Definition 1, is the matrix with as entries, and is the matrix with as entries. Eq (23) can be interpreted as a Bayesian treatment in which the posterior distribution of the weight is given by a Gaussian distribution, i.e. . The HyperDrop sampling procedure in Eq (22) can then be interpreted as arising from a reparameterization of the posterior on the parameter, as shown in Eq (23).
Residual Hyperbolic Graph Convolutional Network
We now investigate the architecture of -HGCN and its effect in preventing over-smoothing. In fact, we always combine -HGCN with the initial input. Note that our model is of the form , where are copies of -dimensional Lorentz spaces with randomly prescribed origin points . Then for any , its Dirichlet energy is defined as
(24) |
Then, by an argument similar to the proof of (12), we can estimate each component separately and take the maximum among them, which may be better behaved than any single component due to possible fluctuations.
Network Architecture.
Let denote the product manifold of Lorentz models where is the dimension of the -th Lorentz model. The initial node features on the product manifold of Lorentz models are given by ()
(25) | ||||
where denotes the -th row of the node features on the -th Lorentz component.
The graph convolution on the product manifold of the Lorentz models combined with HyperDrop is realized by instantiating in Eq (21) as the hyperbolic graph convolution operator , i.e. Eq (11) becomes
(26) | ||||
where is the node features on the -th Lorentz component at the -th layer. is the -th row (i.e., the -th node) of . denotes the Lorentz scalar multiplication defined in Definition 2. is the random multiplicative noise drawn from the Gaussian distribution . We set where denotes the drop rate. The node features at the last layer can be used for downstream tasks. Taking the node classification task as an example, we map node features to the tangent spaces of the product manifolds, and send tangent representations to a fully-connected layer followed by a softmax for classification.
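For completeness, a sketch of the classification readout described above: the final node features on each Lorentz component are mapped to the tangent space at that component's origin, the tangent representations are concatenated (our assumption for how the components are combined), and a fully-connected layer with softmax produces class probabilities (`W_fc` and `b_fc` are illustrative parameters).

```python
import numpy as np  # assumes log_map from the earlier sketches

def classify_node(h_parts, origins, W_fc, b_fc):
    """Map each component to its tangent space, concatenate, then apply a fully-connected layer + softmax."""
    z = np.concatenate([log_map(o, h) for o, h in zip(origins, h_parts)])   # tangent representation
    logits = W_fc @ z + b_fc
    e = np.exp(logits - logits.max())                                       # numerically stable softmax
    return e / e.sum()
```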
Experiments
Experiments are performed on the semi-supervised node classification task. We first evaluate the performance of -HGCN under different model configurations, including various graph convolution layers and different structures of product manifolds. Then, we compare with several state-of-the-art Euclidean GCNs and HGCNs, showing that -HGCN achieves competitive results. Further, we compare with DropConnect (Wan et al. 2013), a related regularization method for deep GCNs.
Datasets | Pubmed | Citeseer | Cora | Airport |
---|---|---|---|---|
Classes | ||||
Nodes | ||||
Edges | ||||
Features |
Datasets and Baselines
We use four standard, commonly used graph datasets: the citation networks Pubmed, Citeseer, and Cora (Sen et al. 2008), and the Airport dataset. Dataset statistics are summarized in Table 1. Experimental details are provided in the supplementary material.
Datasets | Methods | 4 layers | 8 layers | 16 layers | 32 layers | ||||
---|---|---|---|---|---|---|---|---|---|
Original | HyperDrop | Original | HyperDrop | Original | HyperDrop | Original | HyperDrop | ||
Pubmed | GCNII | ||||||||
-HGCN[2×8] | |||||||||
-HGCN[4×4] | |||||||||
-HGCN[8×2] | |||||||||
-HGCN[16×1] | |||||||||
Citeseer | GCNII | ||||||||
-HGCN[2×8] | |||||||||
-HGCN[4×4] | |||||||||
-HGCN[8×2] | |||||||||
-HGCN[16×1] | |||||||||
Cora | GCNII | ||||||||
-HGCN[2×8] | |||||||||
-HGCN[4×4] | |||||||||
-HGCN[8×2] | |||||||||
-HGCN[16×1] | |||||||||
Airport | GCNII | ||||||||
-HGCN[2×8] | |||||||||
-HGCN[4×4] | |||||||||
-HGCN[8×2] | |||||||||
-HGCN[16×1] |
Validation Experiments
Here we demonstrate the effectiveness of the -HGCN and our regularization method under different model configurations. For -HGCN, increasing the number of hyperbolic graph convolution layers almost always brings improvements on three datasets.
Methods | Pubmed | Citeseer | Cora | |
---|---|---|---|---|
Euclidean | GCN(Kipf and Welling 2017) | |||
GAT(Veličković et al. 2017) | ||||
GraphSage(Hamilton, Ying, and Leskovec 2017) | ||||
SGC(Wu et al. 2019) | ||||
APPNP(Klicpera, Bojchevski, and Günnemann 2019) | ||||
GCNII(8)(Chen et al. 2020) | ||||
Hyperbolic | HGCN (Chami et al. 2019) | - | ||
H2HGCN(Dai et al. 2021) | - | |||
GCN (Bachmann, Bécigneul, and Ganea 2020) | ||||
LGCN (Zhang et al. 2021) | ||||
-HGCN[16×1](8) | ||||
-HGCN[16×1](8)+HyperDrop | ||||
-HGCN[2×8](8) | ||||
-HGCN[2×8](8)+HyperDrop |
Layers | -HGCN[2×8] | -HGCN[4×4] | -HGCN[8×2] | -HGCN[16×1] | ||||
---|---|---|---|---|---|---|---|---|
with IRC | w/o IRC | with IRC | w/o IRC | with IRC | w/o IRC | with IRC | w/o IRC | |
2 | ||||||||
4 | ||||||||
8 | ||||||||
16 |
Methods | Pubmed | Citeseer | Cora |
---|---|---|---|
w/o dropout | |||
DropConnect | |||
HyperDrop | |||
Both |
The criterion for judging the effectiveness of HyperDrop is whether the performance of -HGCN improves with the help of HyperDrop. In Table 2, we report the performance of HyperDrop with various graph convolution layers and structures of product manifolds on Pubmed, Citeseer, Cora, and Airport. We observe that in most experiments, HyperDrop improves the performance of -HGCN. For example, on Cora, -HGCN[2×8] obtains , , and gains with , , and layers. The stable improvements demonstrate that HyperDrop can effectively improve the generalization ability of -HGCN.
We also compare -HGCN with a deep Euclidean method, GCNII. The hyperbolic residual connection and hyperbolic identity mapping in -HGCN are inspired by GCNII. The main difference is that -HGCN performs graph representation learning in hyperbolic spaces, while GCNII works in Euclidean spaces. As shown in Table 2, -HGCN shows superiority compared to GCNII. In fact, -HGCN is only a baseline model we developed for evaluating the effectiveness of HyperDrop, and we forgo extra training tricks for a clear evaluation. For example, in Section Comparisons with DropConnect, -HGCN obtains the same mean accuracy as GCNII with layers on Cora when using HyperDrop and DropConnect (Wan et al. 2013) together. DropConnect is used in other hyperbolic graph convolutional networks, such as HGCN (Chami et al. 2019) and LGCN (Zhang et al. 2021). We claim that the superior performance of -HGCN compared to GCNII benefits from the representational capability of hyperbolic spaces when dealing with hierarchically structured data. This confirms the significance of hyperbolic representation learning.
Ablation Experiments
We conducted ablation experiments on the Pubmed dataset to observe the effect of the proposed hyperbolic residual connection on -HGCN performance under different dimension configurations. As can be seen from Table 4, without the hyperbolic residual connection, the performance of the model decreases to varying degrees. Moreover, as the number of model layers increases, the decline in performance becomes more pronounced. The experimental results show that the hyperbolic residual connection greatly benefits model performance.
Performance Comparisons
The comparisons with several state-of-the-art Euclidean GCNs and HGCNs are shown in Table 3. We make three observations. First and most importantly, compared with other HGCNs, which are typically shallow models, -HGCN shows better results on Pubmed and Citeseer. On Pubmed, HGCN also achieves the best accuracy, . Note that HGCN uses an extra link prediction task to pre-train the model, while -HGCN does not use this training trick, for a clear evaluation of HyperDrop; the performance of HGCN decreases when the link prediction pre-training is not used. On Cora, LGCN achieves the highest mean accuracy among HGCNs. Note that both HGCN and LGCN utilize the DropConnect (Wan et al. 2013) technique for training. As shown in Section Comparisons with DropConnect, -HGCN obtains mean accuracy on Cora when also using DropConnect, which is higher than that of LGCN. Second, both -HGCN[16×1] and -HGCN[2×8] benefit from HyperDrop on the three datasets. This shows that HyperDrop alleviates the over-fitting issue in hyperbolic graph convolutional networks and improves the generalization ability of -HGCN on the test set. Third, compared with Euclidean GCNs, -HGCN combined with HyperDrop achieves the best results on Pubmed and Citeseer. This confirms the superiority of hyperbolic representation learning when modeling graph data.
Comparisons with DropConnect
Table 5 shows the performance of HyperDrop and DropConnect (Wan et al. 2013) on Pubmed, Citeseer, and Cora using a -dimensional -HGCN. Since there was no dropout method tailored for hyperbolic representations before HyperDrop, some works (Chami et al. 2019; Zhang et al. 2021) use DropConnect as a regularization. DropConnect is a variant of dropout that randomly zeros out elements of the Euclidean parameters of a model; it can be used in hyperbolic graph convolutional networks because their parameters are Euclidean.
For DropConnect, we search the drop rate from to and report the best results. DropConnect obtains improvements on Pubmed and Cora but not on Citeseer. In contrast, HyperDrop achieves stable improvements on all three datasets, and higher mean accuracy on Pubmed and Citeseer compared to DropConnect. HyperDrop and DropConnect are two dropout methods: the former works on hyperbolic representations and the latter on Euclidean parameters. They can work together effectively for better generalization of -HGCN. As the results on Citeseer and Cora show, using HyperDrop and DropConnect together yields better performance than using only HyperDrop or DropConnect individually.
Conclusion
In this paper, we have proposed -HGCN, a product-manifold-based residual hyperbolic graph convolutional network for overcoming the over-smoothing problem. The hyperbolic residual connection and hyperbolic identity mapping prevent node representations from becoming indistinguishable. The product manifold with different origin points also provides a wider range of perspectives on the data. A novel hyperbolic dropout method, HyperDrop, is proposed to alleviate the over-fitting issue while deepening models. Experiments have demonstrated the effectiveness of -HGCNs under various graph convolution layers and different structures of product manifolds.
Acknowledgements
This work was supported by the Natural Science Foundation of China (NSFC) under Grants No. 62172041 and No. 62176021, Natural Science Foundation of Shenzhen under Grant No. JCYJ20230807142703006, and 2023 Shenzhen National Science Foundation(No. 20231128220938001).
References
- Bachmann, Bécigneul, and Ganea [2020] Bachmann, G.; Bécigneul, G.; and Ganea, O. 2020. Constant curvature graph convolutional networks. In International Conference on Machine Learning (ICML), 486–496.
- Cai and Wang [2020] Cai, C.; and Wang, Y. 2020. A note on over-smoothing for graph neural networks. arXiv preprint arXiv:2006.13318.
- Chami et al. [2019] Chami, I.; Ying, Z.; Ré, C.; and Leskovec, J. 2019. Hyperbolic graph convolutional neural networks. In Advances in Neural Information Processing Systems (NeurIPS), 4868–4879.
- Chen et al. [2020] Chen, M.; Wei, Z.; Huang, Z.; Ding, B.; and Li, Y. 2020. Simple and deep graph convolutional networks. In International Conference on Machine Learning (ICML), 1725–1735.
- Dai et al. [2021] Dai, J.; Wu, Y.; Gao, Z.; and Jia, Y. 2021. A hyperbolic-to-hyperbolic graph convolutional network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 154–163.
- De Sa et al. [2018] De Sa, C.; Gu, A.; Ré, C.; and Sala, F. 2018. Representation tradeoffs for hyperbolic embeddings. Proceedings of Machine Learning Research, 80: 4460.
- Hamilton, Ying, and Leskovec [2017] Hamilton, W.; Ying, Z.; and Leskovec, J. 2017. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems (NeurIPS), 1024–1034.
- Khrulkov et al. [2020] Khrulkov, V.; Mirvakhabova, L.; Ustinova, E.; Oseledets, I.; and Lempitsky, V. 2020. Hyperbolic image embeddings. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 6418–6428.
- Kipf and Welling [2017] Kipf, T. N.; and Welling, M. 2017. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR).
- Klicpera, Bojchevski, and Günnemann [2019] Klicpera, J.; Bojchevski, A.; and Günnemann, S. 2019. Predict then propagate: Graph neural networks meet personalized pagerank. In International Conference on Learning Representations (ICLR).
- Krioukov et al. [2010] Krioukov, D.; Papadopoulos, F.; Kitsak, M.; Vahdat, A.; and Boguná, M. 2010. Hyperbolic geometry of complex networks. Physical Review E, 82(3): 036106.
- Li, Han, and Wu [2018] Li, Q.; Han, Z.; and Wu, X.-M. 2018. Deeper insights into graph convolutional networks for semi-supervised learning. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 3538–3545.
- Liu, Nickel, and Kiela [2019] Liu, Q.; Nickel, M.; and Kiela, D. 2019. Hyperbolic graph neural networks. In Advances in Neural Information Processing Systems (NeurIPS), 8230–8241.
- Liu and Lang [2023] Liu, Y.; and Lang, B. 2023. McH-HGCN: multi-curvature hyperbolic heterogeneous graph convolutional network with type triplets. Neural Computing and Applications, 35(20): 15033–15049.
- Nickel and Kiela [2017] Nickel, M.; and Kiela, D. 2017. Poincaré embeddings for learning hierarchical representations. In Advances in Neural Information Processing Systems (NeurIPS), 6338–6347.
- Nickel and Kiela [2018] Nickel, M.; and Kiela, D. 2018. Learning continuous hierarchies in the lorentz model of hyperbolic geometry. In International Conference on Machine Learning (ICML), 3776–3785.
- Papadopoulos et al. [2012] Papadopoulos, F.; Kitsak, M.; Serrano, M. Á.; Boguná, M.; and Krioukov, D. 2012. Popularity versus similarity in growing networks. Nature, 489(7417): 537–540.
- Sen et al. [2008] Sen, P.; Namata, G.; Bilgic, M.; Getoor, L.; Galligher, B.; and Eliassi-Rad, T. 2008. Collective classification in network data. AI magazine, 29(3): 93–93.
- Srivastava et al. [2014] Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; and Salakhutdinov, R. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research (JMLR), 15(1): 1929–1958.
- Veličković et al. [2017] Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; and Bengio, Y. 2017. Graph attention networks. In International Conference on Learning Representations (ICLR).
- Wan et al. [2013] Wan, L.; Zeiler, M.; Zhang, S.; Le Cun, Y.; and Fergus, R. 2013. Regularization of neural networks using dropconnect. In International Conference on Machine Learning (ICML), 1058–1066.
- Wu et al. [2019] Wu, F.; Zhang, T.; Souza Jr, A. H. d.; Fifty, C.; Yu, T.; and Weinberger, K. Q. 2019. Simplifying graph convolutional networks. In International Conference on Machine Learning (ICML), 6861–6871.
- Yao, Pi, and Chen [2022] Yao, S.; Pi, D.; and Chen, J. 2022. Knowledge embedding via hyperbolic skipped graph convolutional networks. Neurocomputing, 480: 119–130.
- Zhang et al. [2021] Zhang, Y.; Wang, X.; Shi, C.; Liu, N.; and Song, G. 2021. Lorentzian graph convolutional networks. In Proceedings of the Web Conference (WWW), 1249–1261.