
Residual Hyperbolic Graph Convolution Networks

Yangkai Xue1, Jindou Dai1, Zhipeng Lu2, Yuwei Wu1,2*, Yunde Jia2,1
*Corresponding authors.
Abstract

Hyperbolic graph convolutional networks (HGCNs) have demonstrated representational capabilities for modeling hierarchical-structured graphs. However, as in general GCNs, over-smoothing may occur as the number of model layers increases, limiting the representation capabilities of most current HGCN models. In this paper, we propose residual hyperbolic graph convolutional networks (\mathcal{R}-HGCNs) to address the over-smoothing problem. We introduce a hyperbolic residual connection function to overcome the over-smoothing problem and theoretically prove its effectiveness. Moreover, we use product manifolds and HyperDrop to facilitate the \mathcal{R}-HGCNs. The distinctive features of the \mathcal{R}-HGCNs are as follows: (1) The hyperbolic residual connection preserves the initial node information in each layer and adds a hyperbolic identity mapping to prevent node features from being indistinguishable. (2) Product manifolds in \mathcal{R}-HGCNs are set up with different origin points in different components to facilitate the extraction of feature information from a wider range of perspectives, which enhances the representation capability of \mathcal{R}-HGCNs. (3) HyperDrop adds multiplicative Gaussian noise into hyperbolic representations, so that perturbations can be added to alleviate the over-fitting problem without deconstructing the hyperbolic geometry. Experimental results demonstrate the effectiveness of \mathcal{R}-HGCNs under various graph convolution layers and different structures of product manifolds.

Introduction

Hyperbolic graph convolutional networks (HGCNs) have been an emerging research topic due to their superior capability of modeling hierarchical graphs (Chami et al. 2019; Dai et al. 2021). Unlike Euclidean spaces, whose volume expands polynomially with radius, hyperbolic spaces have volume that grows exponentially with radius, which is well-suited to the geometry of hierarchical data. Benefiting from this property, great progress has been made by generalizing Euclidean methods to hyperbolic spaces, such as hyperbolic graph convolutional networks (Chami et al. 2019; Dai et al. 2021), hyperbolic image embeddings (Khrulkov et al. 2020), and hyperbolic word embeddings (Nickel and Kiela 2017, 2018). However, the over-smoothing issue impedes the development of deep HGCNs: node features become indistinguishable after passing through a large number of graph convolution layers. It has been proved that graph convolution is a special form of Laplacian smoothing (Li, Han, and Wu 2018). Smoothing on nodes can reduce intra-class differences, while over-smoothing makes the model less discriminative with indistinguishable node features.

In this paper, we propose residual hyperbolic graph convolutional networks (\mathcal{R}-HGCNs) to address the over-smoothing problem. Specifically, we introduce a hyperbolic residual connection function and incorporate product manifolds and HyperDrop into HGCNs. The details are as follows: (1) The hyperbolic residual connection transmits the initial node information to each layer to prevent node features from being indistinguishable, and the hyperbolic identity mapping prevents the performance degradation caused by deepening the models. (2) Product manifolds pick different origin points on different components, which makes the same input have different embedding results in different components, giving them the ability to view the graph structure from different perspectives. This enhances the representational ability of \mathcal{R}-HGCNs. (3) HyperDrop adds multiplicative Gaussian noise on hyperbolic neurons to alleviate the over-fitting issue, inheriting the insight of training with noise while preserving the hyperbolic geometry. Extensive experiments demonstrate the effectiveness of \mathcal{R}-HGCNs under various graph convolution layers and different structures of product manifolds. The contributions of this paper are summarized as follows:

  • We propose \mathcal{R}-HGCN, a product-manifold-based deep hyperbolic graph convolutional network that makes up for the deficiency of existing HGCNs in capturing long-range relationships in graphs.

  • We design a hyperbolic residual connection that addresses the over-smoothing issue while deepening HGCNs, and theoretically prove the effectiveness of the hyperbolic residual connection.

  • We propose to use product manifolds with different origin points in different components, which enables \mathcal{R}-HGCNs to extract more comprehensive features from the data.

  • We develop HyperDrop, a regularization method tailored for hyperbolic representations. It can improve \mathcal{R}-HGCNs’ generalization ability, alleviating the over-fitting issue.

Related Work

Graph convolutional networks typically embed graphs in Euclidean spaces since Euclidean spaces are the most commonly used and easy to compute in. Many researchers have noticed that Euclidean spaces have limitations in modeling data with hierarchical structure. De Sa et al. (De Sa et al. 2018) claimed that it is not possible to embed trees into Euclidean spaces with arbitrarily low distortion, even in an infinite-dimensional Euclidean space. Meanwhile, trees can be embedded into a two-dimensional hyperbolic space with arbitrarily low distortion. Such a surprising fact benefits from an appealing property of hyperbolic spaces: unlike the volume of a ball in Euclidean space, which expands polynomially with radius, the volume of a ball in hyperbolic space grows exponentially with radius (Liu, Nickel, and Kiela 2019). Thus, hyperbolic spaces are commonly viewed as a smooth version of trees and are more suitable for modeling hierarchical data.

Several works discovered that graphs, e.g., biological networks and social networks, exhibit a highly hierarchical structure (Krioukov et al. 2010; Papadopoulos et al. 2012). Krioukov et al. (Krioukov et al. 2010) proved that typical properties of such graphs, such as power-law degree distributions and strong clustering, are closely related to the curvature of hyperbolic spaces. Based on the above observations, generalizing GCNs from Euclidean spaces to hyperbolic spaces has been an emerging research topic. Liu et al. (Liu, Nickel, and Kiela 2019) and Chami et al. (Chami et al. 2019) first bridged the research gap and concurrently proposed HGCNs. Following the above works, many advanced techniques have been proposed to improve HGCNs. Dai et al. (Dai et al. 2021) discovered that performing graph convolution in the tangent space distorts the global structure of hyperbolic spaces because the tangent space is only a local approximation of the hyperbolic manifold. Yao et al. (Yao, Pi, and Chen 2022) design a hyperbolic skipped knowledge graph convolutional network to capture network structure characteristics in hyperbolic knowledge embeddings. Liu and Lang (Liu and Lang 2023) propose a multi-curvature hyperbolic heterogeneous graph convolutional network (McH-HGCN) based on type triplets for heterogeneous graphs.

Preliminaries

Hyperbolic Manifold.

A Riemannian manifold or Riemannian space (\mathcal{M},g) is a real and smooth manifold \mathcal{M} equipped with a positive-definite metric tensor g. It is a topological space that is locally homeomorphic to a Euclidean space at each point \vec{x}\in\mathcal{M}, and the local Euclidean space is termed the tangent space \mathcal{T}_{\vec{x}}\mathcal{M}.

Lorentz Model.

A d-dimensional Lorentz model (\mathcal{L}^{d}, g) is defined by the manifold \mathcal{L}^{d}=\{\vec{x}=[x_{0},x_{1},\cdots,x_{d}]\in\mathbb{R}^{d+1}:\langle\vec{x},\vec{x}\rangle_{\mathcal{L}}=-1, x_{0}>0\}, where the Lorentz inner product is defined as

\langle\vec{x},\vec{y}\rangle_{\mathcal{L}}=\vec{x}^{\top}g\vec{y}=-x_{0}y_{0}+\sum_{i=1}^{d}x_{i}y_{i}, (1)

and the metric tensor is g=\mathrm{diag}([-1,1,\cdots,1]), where \mathrm{diag}(\cdot) denotes a diagonal matrix.
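As a concrete reference for the definitions above, the following minimal NumPy sketch implements the Lorentz inner product of Eq. (1) and checks the manifold constraint; the function names are ours, not from any released code.

```python
import numpy as np

def lorentz_inner(x, y):
    """Lorentz inner product of Eq. (1): -x_0*y_0 + sum_i x_i*y_i."""
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def on_lorentz_manifold(x, tol=1e-6):
    """Check the Lorentz model constraint <x, x>_L = -1 and x_0 > 0."""
    return abs(lorentz_inner(x, x) + 1.0) < tol and x[0] > 0

o = np.array([1.0, 0.0, 0.0, 0.0])   # origin of the 3-dimensional Lorentz model
print(lorentz_inner(o, o))           # -1.0
print(on_lorentz_manifold(o))        # True
```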

Exponential and logarithmic maps.

Mappings between a Riemannian manifold and its tangent spaces are termed exponential and logarithmic maps. Let \vec{x} be a point on the Lorentz manifold \mathcal{L}, \mathcal{T}_{\vec{x}}\mathcal{L} be the tangent space at \vec{x}, and \vec{v} be a vector on the tangent space \mathcal{T}_{\vec{x}}\mathcal{L}. The exponential map \mathrm{exp}_{\vec{x}}(\vec{v}) that projects \vec{v} onto the manifold \mathcal{L} is defined as

\mathrm{exp}_{\vec{x}}(\vec{v})=\mathrm{cosh}(\|\vec{v}\|_{\mathcal{L}})\vec{x}+\mathrm{sinh}(\|\vec{v}\|_{\mathcal{L}})\frac{\vec{v}}{\|\vec{v}\|_{\mathcal{L}}}, (2)

where \|\vec{v}\|_{\mathcal{L}}=\sqrt{\langle\vec{v},\vec{v}\rangle_{\mathcal{L}}} is the norm of \vec{v}. The logarithmic map, inverse to the exponential map at \vec{x}, is given by

\mathrm{log}_{\vec{x}}(\vec{y})=\frac{\mathrm{arcosh}(-\langle\vec{x},\vec{y}\rangle_{\mathcal{L}})}{\sqrt{\langle\vec{x},\vec{y}\rangle_{\mathcal{L}}^{2}-1}}\big(\vec{y}+\langle\vec{x},\vec{y}\rangle_{\mathcal{L}}\vec{x}\big). (3)
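The two maps translate directly from Eqs. (2) and (3); the sketch below continues the NumPy snippet above (it reuses lorentz_inner and the origin o), and the clipping constants are our own numerical safeguards.

```python
import numpy as np

def lorentz_norm(v):
    """||v||_L = sqrt(<v, v>_L); non-negative for tangent vectors."""
    return np.sqrt(np.clip(lorentz_inner(v, v), 0.0, None))

def exp_map(x, v, eps=1e-9):
    """Exponential map of Eq. (2): project a tangent vector v at x onto L."""
    n = lorentz_norm(v)
    if n < eps:                       # exp_x(0) = x
        return x
    return np.cosh(n) * x + np.sinh(n) * v / n

def log_map(x, y, eps=1e-9):
    """Logarithmic map of Eq. (3), the inverse of exp_map at x."""
    a = -lorentz_inner(x, y)          # a >= 1 for two points on the manifold
    denom = np.sqrt(max(a * a - 1.0, eps))
    return np.arccosh(np.clip(a, 1.0, None)) / denom * (y - a * x)

v = np.array([0.0, 0.3, -0.2, 0.1])   # a tangent vector at the origin o
y = exp_map(o, v)
print(np.allclose(log_map(o, y), v))  # True up to numerical error
```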

Parallel Transport.

The generalization of parallel translation to non-Euclidean geometry is termed parallel transport. For two points \vec{x},\vec{y}\in\mathcal{L} on the Lorentz model, the parallel transport of a tangent vector \vec{v}\in\mathcal{T}_{\vec{x}}\mathcal{L} on the tangent space at \vec{x} to the tangent space \mathcal{T}_{\vec{y}}\mathcal{L} at \vec{y}, along a smooth curve on the Lorentz model, is defined as

P_{\vec{x}\to\vec{y}}(\vec{v})=\vec{v}-\frac{\langle\mathrm{log}_{\vec{x}}(\vec{y}),\vec{v}\rangle_{\mathcal{L}}}{d_{\mathcal{L}}(\vec{x},\vec{y})^{2}}\big(\mathrm{log}_{\vec{x}}(\vec{y})+\mathrm{log}_{\vec{y}}(\vec{x})\big). (4)
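Eq. (4) can be sketched in the same way; the snippet below reuses lorentz_inner, exp_map, log_map, o, and v from the sketches above, and takes d_L(x, y) = arcosh(-<x, y>_L) as the geodesic distance.

```python
import numpy as np

def lorentz_distance(x, y):
    """Geodesic distance on the Lorentz model: d_L(x, y) = arcosh(-<x, y>_L)."""
    return np.arccosh(np.clip(-lorentz_inner(x, y), 1.0, None))

def parallel_transport(x, y, v, eps=1e-9):
    """Eq. (4): transport a tangent vector v from T_x L to T_y L."""
    log_xy = log_map(x, y)
    log_yx = log_map(y, x)
    d2 = max(lorentz_distance(x, y) ** 2, eps)
    return v - lorentz_inner(log_xy, v) / d2 * (log_xy + log_yx)

x = exp_map(o, np.array([0.0, 0.4, 0.0, 0.0]))
w = parallel_transport(o, x, v)       # move the tangent vector v from o to x
print(abs(lorentz_inner(x, w)))       # ~0: the transported vector is tangent at x
```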

Method

We propose residual hyperbolic graph convolutional networks (\mathcal{R}-HGCNs) to address the over-smoothing problem and enhance the representational ability of HGCNs.

Let \mathcal{G}=(\mathcal{V},\mathcal{E}) denote a graph with a vertex set \mathcal{V} and an edge set \mathcal{E}. \boldsymbol{X}\in\mathbb{R}^{n\times d} denotes the node features that typically lie in Euclidean spaces, n is the number of nodes, and d is the dimension of the node features.

Residual Hyperbolic Graph Convolution

Lorentz Operations.

Due to the strict manifold constraint of the Lorentz model, basic operations (matrix multiplication, vector addition, etc.) are non-trivial to generalize to Lorentz representations. Based on the maps mentioned above, we define the following operations.

Definition 1

(Lorentz matrix-vector multiplication). Let \boldsymbol{W} be a (d+1)\times(d+1) real matrix and \vec{x}\in\mathcal{L} be an input in the Lorentz model. Then we define the Lorentz matrix-vector multiplication as

\boldsymbol{W}\otimes\vec{x}:=\mathrm{exp}_{\vec{o}}\big(\boldsymbol{W}\mathrm{log}_{\vec{o}}(\vec{x})\big), (5)

where \mathrm{exp}(\cdot) and \mathrm{log}(\cdot) are defined in Eqs. (2) and (3).

Definition 2

(Lorentz scalar multiplication). The Lorentz scalar multiplication of a scalar \xi and \vec{x}\in\mathcal{L} on the Lorentz model is defined as

\xi\odot\vec{x}:=\mathrm{exp}_{\vec{o}}\big(\xi\,\mathrm{log}_{\vec{o}}(\vec{x})\big). (6)
Definition 3

(Lorentz vector addition). The Lorentz vector addition of \vec{x},\vec{y}\in\mathcal{L} on the Lorentz model is defined as

\vec{x}\oplus\vec{y}:=\mathrm{exp}_{\vec{x}}\big(P_{\vec{o}\to\vec{x}}(\mathrm{log}_{\vec{o}}(\vec{y}))\big), (7)

where P_{\vec{o}\to\vec{x}}(\cdot) is the parallel transport operator defined in Eq. (4).

Definition 4

(Lorentz activation function). For \vec{x}\in\mathcal{L}, the Lorentz activation function on the Lorentz model is defined as

\sigma_{\mathcal{L}}(\vec{x}):=\mathrm{exp}_{\vec{o}}\big(\sigma(\mathrm{log}_{\vec{o}}(\vec{x}))\big), (8)

where \sigma(\cdot) can be any activation function such as \mathrm{ReLU}(\cdot).
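Definitions 1-4 reduce to thin wrappers around the origin-based exp and log maps; the sketch below reuses exp_map, log_map, and parallel_transport from the preliminaries and is only meant to make the operators concrete, not to reproduce the authors' implementation.

```python
import numpy as np

def lorentz_matvec(W, x, o):
    """Definition 1: W ⊗ x = exp_o(W log_o(x))."""
    return exp_map(o, W @ log_map(o, x))

def lorentz_scalar_mul(xi, x, o):
    """Definition 2: ξ ⊙ x = exp_o(ξ log_o(x))."""
    return exp_map(o, xi * log_map(o, x))

def lorentz_add(x, y, o):
    """Definition 3: x ⊕ y = exp_x(P_{o→x}(log_o(y)))."""
    return exp_map(x, parallel_transport(o, x, log_map(o, y)))

def lorentz_relu(x, o):
    """Definition 4 with σ = ReLU: σ_L(x) = exp_o(ReLU(log_o(x)))."""
    return exp_map(o, np.maximum(log_map(o, x), 0.0))
```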

Residual Hyperbolic Graph Convolution.

Since the input of a hyperbolic graph convolutional network is required to be hyperbolic, we construct the initial Lorentz node features \boldsymbol{H}^{(0)}\in\mathbb{R}^{n\times(d+1)}, whose i-th row \boldsymbol{H}_{i}^{(0)}, the Lorentz feature of the i-th node, is generated by

\boldsymbol{H}_{i}^{(0)}=\mathrm{exp}_{\vec{o}}\big([0,\boldsymbol{X}_{i}]\big)=\Big[\mathrm{cosh}\big(\|\boldsymbol{X}_{i}\|_{2}\big),\ \mathrm{sinh}\big(\|\boldsymbol{X}_{i}\|_{2}\big)\frac{\boldsymbol{X}_{i}}{\|\boldsymbol{X}_{i}\|_{2}}\Big], (9)

where \boldsymbol{X}_{i} denotes the i-th row of \boldsymbol{X}. Such a construction is based on the fact that [0,\boldsymbol{X}_{i}]\in\mathbb{R}^{d+1} can be viewed as a tangent vector on the tangent space at the origin point, satisfying \langle\vec{o},[0,\boldsymbol{X}_{i}]\rangle_{\mathcal{L}}=0, where \vec{o}=[1,0,\cdots,0] is the origin point of the Lorentz model.
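The closed form of Eq. (9) can be sketched as a single vectorized function; it reuses on_lorentz_manifold from the first snippet, and the toy features are made up for illustration.

```python
import numpy as np

def lift_to_lorentz(X, eps=1e-9):
    """Map Euclidean features X (n x d) onto the Lorentz model via Eq. (9), row by row."""
    norms = np.clip(np.linalg.norm(X, axis=1, keepdims=True), eps, None)
    time = np.cosh(norms)                    # first (time-like) coordinate
    space = np.sinh(norms) * X / norms       # remaining d coordinates
    return np.concatenate([time, space], axis=1)   # n x (d+1)

X = np.random.randn(5, 3)                    # toy Euclidean node features
H0 = lift_to_lorentz(X)
print(all(on_lorentz_manifold(h) for h in H0))   # True: every row lies on L^3
```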

The performance of a hyperbolic graph convolutional network declines as the number of graph convolution layers increases, which is called the over-smoothing issue (Li, Han, and Wu 2018). This is because graph convolution has been proved to be a special form of Laplacian smoothing, making node features tend to be indistinguishable after extensive graph convolutions (Li, Han, and Wu 2018). Inspired by (Chen et al. 2020), we design a hyperbolic residual connection and a hyperbolic identity mapping to tackle this issue. The residual hyperbolic graph convolution operator \mathrm{hgc}(\cdot) is defined as

\mathrm{hgc}(\boldsymbol{H})=\sigma_{\mathcal{L}}\Big(\big((1-\beta)\boldsymbol{I}+\beta\boldsymbol{W}\big)\otimes\boldsymbol{\bar{H}}\Big), (10)
\boldsymbol{\bar{H}}=\big((1-\alpha)\odot(\boldsymbol{\tilde{A}}\otimes\boldsymbol{H})\big)\oplus\big(\alpha\odot\boldsymbol{H}_{0}\big),

where \otimes, \odot, and \oplus are the Lorentz matrix-vector multiplication (Definition 1), the Lorentz scalar multiplication (Definition 2), and the Lorentz vector addition (Definition 3), respectively. Here \sigma_{\mathcal{L}}(\cdot) is the Lorentz activation function (Definition 4). \alpha and \beta are hyper-parameters that control the weights of the hyperbolic residual connection and the hyperbolic identity mapping.

Formally, at the \ell-th layer, the residual hyperbolic graph convolution performs as

\boldsymbol{H}^{(\ell)}=\mathrm{hgc}(\boldsymbol{H}^{(\ell-1)})=\sigma_{\mathcal{L}}\Big(\big((1-\beta_{\ell})\boldsymbol{I}+\beta_{\ell}\boldsymbol{W}^{(\ell)}\big)\otimes\boldsymbol{\bar{H}}\Big), (11)
\boldsymbol{\bar{H}}=\big((1-\alpha_{\ell})\odot(\boldsymbol{\tilde{A}}\otimes\boldsymbol{H}^{(\ell-1)})\big)\oplus\big(\alpha_{\ell}\odot\boldsymbol{H}^{(0)}\big),

where \mathrm{hgc}(\cdot) takes the node features \boldsymbol{H}^{(\ell-1)} from the previous layer and outputs the node features \boldsymbol{H}^{(\ell)} at the \ell-th layer.

Compared to the convolution operator in vanilla GCNs, i.e., \boldsymbol{H}^{(\ell)}=\sigma(\boldsymbol{\tilde{A}}\boldsymbol{H}^{(\ell-1)}\boldsymbol{W}), \mathrm{hgc}(\cdot) relieves the over-smoothing issue through two modifications: (1) the hyperbolic residual connection adds information paths from the initial node features to each graph convolution layer, so that no matter how deep a hyperbolic graph convolutional network is, the node features at the top layer still combine the initial node features and avoid becoming indistinguishable; (2) the hyperbolic identity mapping ensures that a deep hyperbolic graph convolutional network is not worse than a shallow model. In the extreme case where the values of \beta are set to zero after the i-th layer, the model degenerates to an i-layer GCN no matter how deep it is.
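A minimal per-node sketch of the operator in Eqs. (10)-(11) is given below. It reuses exp_map, log_map, and lorentz_add from the earlier snippets; A_tilde stands for the matrix \tilde{A} used for aggregation, and the variable names and shapes are our assumptions rather than the authors' released code. We apply σ directly to the tangent vectors, assuming they stay in the tangent space at o so that log_o and exp_o cancel.

```python
import numpy as np

def log_o_rows(H, o):
    """Row-wise log map at the origin: pull every node feature back to T_o L."""
    return np.stack([log_map(o, h) for h in H])

def exp_o_rows(V, o):
    """Row-wise exp map at the origin."""
    return np.stack([exp_map(o, v) for v in V])

def hgc(H, H0, A_tilde, W, alpha, beta, o, sigma=lambda t: np.maximum(t, 0.0)):
    """One residual hyperbolic graph convolution, Eqs. (10)-(11)."""
    dp1 = H.shape[1]
    # (1 - alpha) ⊙ (A_tilde ⊗ H): aggregate over neighbours in T_o L, then rescale.
    agg = exp_o_rows((1.0 - alpha) * (A_tilde @ log_o_rows(H, o)), o)
    # alpha ⊙ H0: rescaled initial features (the hyperbolic residual connection).
    res = exp_o_rows(alpha * log_o_rows(H0, o), o)
    # H_bar = agg ⊕ res, applied node by node (Definition 3).
    H_bar = np.stack([lorentz_add(a, r, o) for a, r in zip(agg, res)])
    # Hyperbolic identity mapping ((1 - beta) I + beta W) ⊗ H_bar, followed by σ_L.
    M = (1.0 - beta) * np.eye(dp1) + beta * W
    return exp_o_rows(sigma(log_o_rows(H_bar, o) @ M.T), o)
```

Stacking this operator while always feeding the same H0 gives the deep model discussed above.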

Effectiveness of Hyperbolic Residual Connection

This section theoretically explains the effectiveness of our network architecture. Inspired by (Cai and Wang 2020), we first define a hyperbolic version of "Dirichlet energy" for tracking node embeddings as follows. The Dirichlet energy measures the "smoothness" of a unit-norm function on the graph; the indistinguishable node features that cause the over-smoothing issue result in a small Dirichlet energy. For details of the formulas and proofs in this section, see the supplementary material.

Definition 5

The Dirichlet energy E(f) of a scalar function f\in\mathcal{L}^{d}\subset\mathbb{R}^{d+1} on the graph G is defined as

E(f)=\log_{o}(f)^{T}\tilde{\Delta}\log_{o}(f),

where \tilde{\Delta}=I_{d+1}-\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}, \tilde{A}=A+I_{d+1}, \tilde{D}=D+I_{d+1}, while A and D are the adjacency and degree matrices of G. For a vector field F_{(d+1)\times c}=(f_{1},\dots,f_{c}), its Dirichlet energy is

E(F)=\mathrm{tr}\big(\log_{o}(F)^{T}\tilde{\Delta}\log_{o}(F)\big).

Note that the defined Dirichlet energy always pulls back the node embedding in Lorentz space to its tangent space at the origin point.
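The energy of Definition 5 is straightforward to compute; the sketch below reuses log_map from the preliminaries and builds the augmented normalized Laplacian over the n nodes of the graph, which is our reading of the definition.

```python
import numpy as np

def augmented_laplacian(A):
    """Delta_tilde = I - D_tilde^{-1/2} A_tilde D_tilde^{-1/2}, with A_tilde = A + I."""
    n = A.shape[0]
    A_tilde = A + np.eye(n)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))
    return np.eye(n) - d_inv_sqrt @ A_tilde @ d_inv_sqrt

def dirichlet_energy(F, A, o):
    """E(F) = tr(log_o(F)^T Delta_tilde log_o(F)), Definition 5."""
    V = np.stack([log_map(o, f) for f in F])   # pull each node feature back to T_o L
    return float(np.trace(V.T @ augmented_laplacian(A) @ V))
```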

If the hyperbolic graph convolution operator \mathrm{hgc}(\cdot) in Eq. (10) has no initial input, i.e., \bar{H}=\tilde{P}\otimes H therein, then

E(H^{(l)})\leq(1-\lambda)^{2}\|(1-\beta_{l})I+\beta_{l}\boldsymbol{W}^{(l)}\|_{2}^{2}\,E(H^{(l-1)}), (12)

where 0<\lambda<2 is the smallest non-zero eigenvalue of \tilde{\Delta} and \|X\|_{2} denotes the maximal singular value of X. Note that \|X\|_{2}=\max_{\|u\|_{2}=1}\|Xu\|_{2} for any matrix X. Hence, by the triangle inequality,

\|((1-\beta_{l})I+\beta_{l}\boldsymbol{W}^{(l)})u\|_{2}\leq 1-\beta_{l}+\beta_{l}\|\boldsymbol{W}^{(l)}u\|_{2}. (13)

Actually, \|Xu\|_{2} for any weight matrix X may be estimated by the following lemma.

Lemma 1

If X=(X_{ij}) is an n\times n weight matrix, i.e., \sum_{j=1}^{n}X_{ij}=1 and X_{ij}\geq 0, then for any u\in\mathbb{R}^{n} with \|u\|_{2}=1, \|Xu\|_{2}\leq\sqrt{n}.

Hence, combining (12) and (13) with Lemma 1, we easily prove that in HGCNs without initial input, E(H^{(l)})\leq d(1-\lambda)^{2}E(H^{(l-1)}). Thus, if the graph G does not have enough expansion, say (1-\lambda)<\frac{1}{\sqrt{d}}, then HGCNs become exponentially over-smoothing as the number of layers increases.
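As a toy illustration of the expansion condition, one can compute the smallest non-zero eigenvalue λ of \tilde{\Delta} for a small graph and compare 1-λ with 1/\sqrt{d}; the graph and the dimension below are made up, and the snippet reuses augmented_laplacian from the previous sketch.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)        # a toy 4-node graph
eigvals = np.linalg.eigvalsh(augmented_laplacian(A))
lam = min(val for val in eigvals if val > 1e-8)  # smallest non-zero eigenvalue
d = 16                                           # an example embedding dimension
print(1.0 - lam, 1.0 / np.sqrt(d))
# The decay bound d * (1 - lam)^2 < 1 holds only when the first printed value
# is smaller than the second.
```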

The above result suggests adding an initial input as in \mathrm{hgc}(\cdot),

\bar{H}=\big((1-\alpha_{l})\odot(\tilde{P}\otimes H^{(l-1)})\big)\oplus\big(\alpha_{l}\odot H^{(0)}\big),

seeking to interfere with the decrease of the Dirichlet energy, which is the motivation of \mathcal{R}-HGCNs.

We investigate in detail the interference of the initial input. For simplicity, we assume that the features in the process all have positive entries so that ReLU does not affect the evaluation of the Dirichlet energy. Thus, utilizing the same argument as in the proof of (12) in the case with initial input, we have

E(H^{(l)})=(y_{l}\log_{o}(z))^{T}\tilde{\Delta}(y_{l}\log_{o}(z)), (14)

with y_{l}=(1-\beta_{l})I+\beta_{l}W^{(l)} and z=\exp_{z_{1}}(\alpha_{l}P_{o\rightarrow z_{1}}(\log_{o}(H^{(0)}))), where the last equality is due to the linearity of parallel transport, and z_{1}=\exp_{o}((1-\alpha_{l})\log_{o}(\tilde{P}\otimes H^{(l-1)}))=\exp_{o}(z_{2}).

By definition of parallel transport, we have

P_{o\rightarrow z_{1}}(\log_{o}(H^{(0)}))=\log_{o}(H^{(0)})-\frac{\langle\log_{o}(z_{1}),\log_{o}(H^{(0)})\rangle_{\mathcal{L}}}{d_{\mathcal{L}}(o,z_{1})}\big(z_{2}+\log_{z_{1}}(o)\big). (15)

Further by definition,

z_{1}=\cosh\big((1-\alpha_{l})\|\tilde{P}\log_{o}(H^{(l-1)})\|_{\mathcal{L}}\big)\,o+\frac{\sinh\big((1-\alpha_{l})\|\tilde{P}\log_{o}(H^{(l-1)})\|_{\mathcal{L}}\big)}{\|\tilde{P}\log_{o}(H^{(l-1)})\|_{\mathcal{L}}}\tilde{P}\log_{o}(H^{(l-1)}). (16)

Also by definition, \exp_{z_{1}}(x)=\cosh(\|x\|_{\mathcal{L}})z_{1}+\frac{\sinh(\|x\|_{\mathcal{L}})}{\|x\|_{\mathcal{L}}}x. Then, again by the property of parallel transport, we have

z=\theta_{l}\log_{o}(H^{(0)})+\phi_{l}\tilde{P}H^{(l-1)}+\psi_{l}o, (17)

where \theta_{l},\phi_{l},\psi_{l} are coefficients depending on \alpha_{l},\beta_{l} and the Lorentzian norms of the above three vectors. Noting that \theta_{l} only depends on \alpha_{l} and H^{(0)}, the effect of \log_{o}(H^{(0)}) is never negligible. Also, writing z=(z_{0},\dots),

\log_{o}(z)=\frac{\mathrm{arcosh}(z_{0})}{\sqrt{z_{0}^{2}-1}}(z-z_{0}o),

which removes z_{0} from z and re-scales it by a large factor. Then, altogether we have

E(H^{(l)})=(\tilde{\theta}_{l}\log_{o}(H^{(0)})+\tilde{\phi}_{l}\tilde{P}H^{(l-1)}+\tilde{\psi}_{l}o)^{T}\tilde{\Delta}(\cdots)=\tilde{\theta}_{l}^{2}E(H^{(0)})+\cdots, (18)

where \tilde{\theta}_{l} is similarly not negligible. Thus, we prove that in \mathcal{R}-HGCNs with initial input, the Dirichlet energy E(H^{(l)}) is bounded away from zero even if the Dirichlet energy in the corresponding HGCNs without initial input decreases to zero.

Product Manifold

We use the product manifold of Lorentz models as the embedding space. The Lorentz components of the product manifold are independent of each other.

The product manifold is the Cartesian product of a sequence of Riemannian manifolds, each of which is called a component. Given a sequence of Lorentz models \mathcal{L}^{d_{1}}_{1},\cdots,\mathcal{L}^{d_{j}}_{j},\cdots,\mathcal{L}^{d_{k}}_{k}, where d_{j} denotes the dimension of the j-th component, the product manifold is defined as \mathbb{L}=\mathcal{L}^{d_{1}}_{1}\times\cdots\times\mathcal{L}^{d_{j}}_{j}\times\cdots\times\mathcal{L}^{d_{k}}_{k}. The coordinate of a point \vec{x} on \mathbb{L} is written as \vec{x}=[\vec{x}_{1},\cdots,\vec{x}_{j},\cdots,\vec{x}_{k}], where \vec{x}_{j}\in\mathcal{L}^{d_{j}}_{j}. Similarly, the coordinate of a tangent vector \vec{v}\in\mathcal{T}_{\vec{x}}\mathbb{L} is written as \vec{v}=[\vec{v}_{1},\cdots,\vec{v}_{j},\cdots,\vec{v}_{k}], where \vec{v}_{j}\in\mathcal{T}_{\vec{x}_{j}}\mathcal{L}^{d_{j}}_{j}. For \vec{x},\vec{y}\in\mathbb{L} and \vec{v}\in\mathcal{T}_{\vec{x}}\mathbb{L}, the exponential and logarithmic maps on \mathbb{L} are defined as

\mathrm{exp}_{\vec{x}}(\vec{v})=[\mathrm{exp}_{\vec{x}_{1}}(\vec{v}_{1}),\cdots,\mathrm{exp}_{\vec{x}_{j}}(\vec{v}_{j}),\cdots,\mathrm{exp}_{\vec{x}_{k}}(\vec{v}_{k})], (19)
\mathrm{log}_{\vec{x}}(\vec{y})=[\mathrm{log}_{\vec{x}_{1}}(\vec{y}_{1}),\cdots,\mathrm{log}_{\vec{x}_{j}}(\vec{y}_{j}),\cdots,\mathrm{log}_{\vec{x}_{k}}(\vec{y}_{k})]. (20)

Different from ordinary product manifolds, we use \mathbb{L}=(\mathcal{L}^{d})_{o_{1}}\times(\mathcal{L}^{d})_{o_{2}}\times\cdots\times(\mathcal{L}^{d})_{o_{k}}, where (\mathcal{L}^{d})_{o_{i}} are copies of d-dimensional Lorentz spaces with randomly prescribed origin points o_{i}\in\mathcal{L}^{d}. This gives \mathcal{R}-HGCNs the ability to extract node features from a wider range of perspectives. Mathematically, such a construction of product manifolds is inspired by the general construction of manifolds using Euclidean strata with different coordinates.
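The per-component-origin construction can be sketched as follows, reusing exp_map and log_map from the preliminaries; how the random origins are drawn (here, exp of a random tangent vector at the canonical origin) is our assumption.

```python
import numpy as np

def random_origin(d, scale=0.5, rng=np.random):
    """A random point on L^d: exp at the canonical origin of a random tangent vector."""
    base = np.zeros(d + 1)
    base[0] = 1.0
    tangent = np.concatenate([[0.0], scale * rng.standard_normal(d)])
    return exp_map(base, tangent)

def product_exp(origins, v_blocks):
    """Eq. (19) with per-component origins: apply exp_{o_i} block by block."""
    return [exp_map(o_i, v_i) for o_i, v_i in zip(origins, v_blocks)]

def product_log(origins, x_blocks):
    """Eq. (20) with per-component origins: apply log_{o_i} block by block."""
    return [log_map(o_i, x_i) for o_i, x_i in zip(origins, x_blocks)]

k, d = 4, 3                                      # k copies of a d-dimensional Lorentz model
origins = [random_origin(d) for _ in range(k)]
```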

Hyperbolic Dropout

Hyperbolic Dropout (HyperDrop) adds multiplicative Gaussian noise on Lorentz components to regularize HGCNs and alleviate the over-fitting issue. Concretely, let \mathbb{L}=\mathcal{L}^{d_{1}}_{1}\times\cdots\times\mathcal{L}^{d_{j}}_{j}\times\cdots\times\mathcal{L}^{d_{k}}_{k} denote a product manifold of k Lorentz models, where d_{j} denotes the dimension of the j-th Lorentz component. Given an input \vec{l}=[\vec{l}_{1},\cdots,\vec{l}_{j},\cdots,\vec{l}_{k}]\in\mathbb{L} on the product manifold, where \vec{l}_{j}\in\mathcal{L}_{j}^{d_{j}}, HyperDrop is formulated as

\vec{y}=[\vec{y}_{1},\cdots,\vec{y}_{j},\cdots,\vec{y}_{k}], (21)
\vec{y}_{j}=\xi_{j}\odot f_{\theta_{j}}(\vec{l}_{j}),
\xi_{j}\sim\mathcal{N}(1,\sigma^{2}),

where \xi_{j} is the multiplicative Gaussian noise drawn from the Gaussian distribution \mathcal{N}(1,\sigma^{2}). Following (Srivastava et al. 2014), we set \sigma^{2}=\eta/(1-\eta), where \eta denotes the drop rate. \odot denotes the Lorentz scalar multiplication, the generalization of scalar multiplication to Lorentz representations defined in Definition 2. f_{\theta_{j}}(\cdot) could be any realization of a desirable function, such as a neural network with parameters \theta_{j}.

It is noted that we sample \xi_{j} from the Gaussian distribution instead of the Bernoulli distribution used in standard dropout for the following reason. If \xi_{j} is drawn from the Bernoulli distribution and happens to be 0 (with a probability of \eta) at the \ell-th neural network layer, the information flow of the j-th Lorentz component will be interrupted, leading to the deactivation of the j-th Lorentz component after the \ell-th neural network layer. In contrast, when \xi_{j} is drawn from a Gaussian distribution with mean value 1, the event that it is exactly equal to 0 has negligible probability. Thus, the j-th Lorentz component always works.
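A minimal sketch of Eq. (21) is given below, reusing exp_map and log_map from the preliminaries; f is left as an identity placeholder, whereas in the paper it is instantiated as the graph convolution operator.

```python
import numpy as np

def hyperdrop(l_blocks, origins, drop_rate, f=lambda x: x, rng=np.random):
    """Eq. (21): per component, y_j = ξ_j ⊙ f(l_j) with ξ_j ~ N(1, η/(1-η))."""
    sigma = np.sqrt(drop_rate / (1.0 - drop_rate))
    out = []
    for l_j, o_j in zip(l_blocks, origins):
        xi_j = rng.normal(loc=1.0, scale=sigma)
        # ξ_j ⊙ f(l_j) = exp_{o_j}(ξ_j log_{o_j}(f(l_j)))  (Definition 2)
        out.append(exp_map(o_j, xi_j * log_map(o_j, f(l_j))))
    return out
```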

We may interpret HyperDrop from a Bayesian perspective. For convenience, we take a single Lorentz model and the Lorentz linear transformation as an example, i.e., \vec{y}=\xi\odot f_{\theta}(\vec{l}) and f_{\theta}(\vec{l})=\vec{l}\otimes\theta. We have

\vec{y}=\xi\odot(\vec{l}\otimes\theta),\quad\xi\sim\mathcal{N}(1,\sigma^{2}), (22)

equal to

\vec{y}=\vec{l}\otimes\boldsymbol{M},\quad\mathrm{with}~m_{r,c}=\xi\theta_{r,c},~\mathrm{and}~\xi\sim\mathcal{N}(1,\sigma^{2}), (23)

where \otimes denotes the Lorentz matrix-vector multiplication as defined in Definition 1, \boldsymbol{M} is the matrix with m_{r,c} as entries, and \theta is the matrix with \theta_{r,c} as entries. Eq. (23) can be interpreted as a Bayesian treatment in which the posterior distribution of the weight is given by a Gaussian distribution, i.e., q_{\phi}(m_{r,c})=\mathcal{N}(\theta_{r,c},\sigma^{2}\theta_{r,c}^{2}). The HyperDrop sampling procedure in Eq. (22) can be interpreted as arising from a reparameterization of the posterior on the parameter \boldsymbol{M}, as shown in Eq. (23).
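For completeness, the step from Eq. (22) to Eq. (23) can be written out as follows. This is our sketch of the argument, reading \vec{l}\otimes\theta as \mathrm{exp}_{\vec{o}}(\theta\,\mathrm{log}_{\vec{o}}(\vec{l})) and assuming that \theta\,\mathrm{log}_{\vec{o}}(\vec{l}) stays in the tangent space at \vec{o} so that \mathrm{log}_{\vec{o}}\circ\mathrm{exp}_{\vec{o}} cancels:

\xi\odot(\vec{l}\otimes\theta)=\mathrm{exp}_{\vec{o}}\big(\xi\,\mathrm{log}_{\vec{o}}\big(\mathrm{exp}_{\vec{o}}(\theta\,\mathrm{log}_{\vec{o}}(\vec{l}))\big)\big)=\mathrm{exp}_{\vec{o}}\big((\xi\theta)\,\mathrm{log}_{\vec{o}}(\vec{l})\big)=\vec{l}\otimes\boldsymbol{M},\quad\boldsymbol{M}=\xi\theta,

which is exactly Eq. (23) with m_{r,c}=\xi\theta_{r,c}.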

Residual Hyperbolic Graph Convolutional Network

We then investigate our architecture of \mathcal{R}-HGCN and its effect of preventing over-smoothing. In fact, we always combine \mathcal{R}-HGCN with the initial input. Note that our model is of the form \mathbb{L}=(\mathcal{L}^{d})_{o_{1}}\times(\mathcal{L}^{d})_{o_{2}}\times\cdots\times(\mathcal{L}^{d})_{o_{k}}, where (\mathcal{L}^{d})_{o_{i}} are copies of d-dimensional Lorentz spaces with randomly prescribed origin points o_{i}\in\mathcal{L}^{d}. Then, for any x=(x_{1},\dots,x_{k})\in\mathbb{L}, its Dirichlet energy is defined as

E(x)=\max_{1\leq i\leq k}\big(\log_{o_{i}}(x_{i})^{T}\tilde{\Delta}\log_{o_{i}}(x_{i})\big):=\max_{1\leq i\leq k}E_{i}(x_{i}). (24)

Then, by an argument similar to the proof of (12), we can estimate each E_{i}(H_{i}^{(l)}) separately and then take the maximum among them, which may be better behaved than any single component due to possible fluctuations.
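In code, Eq. (24) is just a maximum over per-component energies; the one-liner below reuses dirichlet_energy from the earlier sketch, with each component evaluated at its own origin o_i.

```python
def product_dirichlet_energy(F_blocks, A, origins):
    """Eq. (24): the maximum of the per-component Dirichlet energies."""
    return max(dirichlet_energy(F_j, A, o_i) for F_j, o_i in zip(F_blocks, origins))
```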

Network Architecture.

Let \mathbb{L}=(\mathcal{L}^{d})_{o_{1}}\times(\mathcal{L}^{d})_{o_{2}}\times\cdots\times(\mathcal{L}^{d})_{o_{k}} denote the product manifold of k Lorentz models, where d_{j} is the dimension of the j-th Lorentz model. The initial node features \boldsymbol{H}^{(0)} on the product manifold of Lorentz models are given by (\vec{o}=[\vec{o}_{1},\cdots,\vec{o}_{j},\cdots,\vec{o}_{k}])

\boldsymbol{H}^{(0)}=[\boldsymbol{H}^{1,(0)},\cdots,\boldsymbol{H}^{j,(0)},\cdots,\boldsymbol{H}^{k,(0)}], (25)
\boldsymbol{H}^{j,(0)}_{i}=\mathrm{exp}_{\vec{o}}\big([0,\boldsymbol{X}_{i}]\big),

where \boldsymbol{H}^{j,(0)}_{i}\in\mathbb{R}^{(d_{j}+1)} denotes the i-th row of the node features \boldsymbol{H}^{j,(0)}\in\mathbb{R}^{n\times(d_{j}+1)} on the j-th Lorentz component.

The graph convolution on the product manifold of Lorentz models combined with HyperDrop is realized by instantiating f(\cdot) in Eq. (21) as the hyperbolic graph convolution operator \mathrm{hgc}(\cdot), i.e., Eq. (11) becomes

\boldsymbol{H}^{(\ell)}=[\boldsymbol{H}^{1,(\ell)},\cdots,\boldsymbol{H}^{j,(\ell)},\cdots,\boldsymbol{H}^{k,(\ell)}], (26)
\boldsymbol{H}_{i}^{j,(\ell)}=\xi_{i}^{j}\odot\mathrm{hgc}(\boldsymbol{H}^{j,(\ell-1)}),
\xi_{i}^{j}\sim\mathcal{N}(1,\sigma^{2}),

where \boldsymbol{H}^{j,(\ell)}\in\mathbb{R}^{n\times(d_{j}+1)} is the node features on the j-th Lorentz component at the \ell-th layer, and \boldsymbol{H}_{i}^{j,(\ell)}\in\mathbb{R}^{(d_{j}+1)} is the i-th row (i.e., the i-th node) of \boldsymbol{H}^{j,(\ell)}. \odot denotes the Lorentz scalar multiplication defined in Definition 2. \xi_{i}^{j} is the random multiplicative noise drawn from the Gaussian distribution \mathcal{N}(1,\sigma^{2}). We set \sigma^{2}=\eta/(1-\eta), where \eta denotes the drop rate. The node features at the last layer can be used for downstream tasks. Taking the node classification task as an example, we map node features to the tangent spaces of the product manifolds and send the tangent representations to a fully-connected layer followed by a softmax for classification.
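Putting the pieces together, one \mathcal{R}-HGCN layer of Eq. (26) can be sketched as below; it reuses hgc, exp_map, and log_map from the earlier snippets, and the argument layout (lists of per-component feature and weight matrices) is our assumption.

```python
import numpy as np

def r_hgcn_layer(H_blocks, H0_blocks, A_tilde, W_blocks, origins,
                 alpha, beta, drop_rate, rng=np.random):
    """One R-HGCN layer on the product manifold, Eq. (26)."""
    sigma = np.sqrt(drop_rate / (1.0 - drop_rate))
    out_blocks = []
    for H_j, H0_j, W_j, o_j in zip(H_blocks, H0_blocks, W_blocks, origins):
        Z_j = hgc(H_j, H0_j, A_tilde, W_j, alpha, beta, o_j)   # Eq. (11), per component
        noisy = []
        for z in Z_j:                                          # HyperDrop, node by node
            xi = rng.normal(loc=1.0, scale=sigma)
            noisy.append(exp_map(o_j, xi * log_map(o_j, z)))   # ξ ⊙ z (Definition 2)
        out_blocks.append(np.stack(noisy))
    return out_blocks
```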

Experiments

Experiments are performed on the semi-supervised node classification task. We first evaluate the performance of \mathcal{R}-HGCN under different model configurations, including various numbers of graph convolution layers and different structures of product manifolds. Then, we compare with several state-of-the-art Euclidean GCNs and HGCNs, showing that \mathcal{R}-HGCN achieves competitive results. Further, we compare with DropConnect (Wan et al. 2013), a related regularization method for deep GCNs.

Datasets | Pubmed | Citeseer | Cora | Airport
Classes | 3 | 6 | 7 | 4
Nodes | 19,717 | 3,327 | 2,708 | 3,188
Edges | 44,338 | 4,732 | 5,429 | 18,631
Features | 500 | 3,703 | 1,433 | 4
Table 1: Dataset statistics.

Datasets and Baselines

We use four standard, commonly-used graph datasets: Pubmed, Citeseer, Cora, and Airport (Sen et al. 2008). Dataset statistics are summarized in Table 1. Experimental details are given in the supplementary material.

Datasets | Methods | 4 layers (Original / HyperDrop) | 8 layers (Original / HyperDrop) | 16 layers (Original / HyperDrop) | 32 layers (Original / HyperDrop)
Pubmed | GCNII | 79.3±0.3 | 79.9±0.3 | 79.9±1.7 | 80.0±1.9
Pubmed | \mathcal{P}-HGCN[2×8] | 79.1±0.2 / \textbf{79.8}±0.3 | 80.0±0.1 / \textbf{80.3}±0.1 | 79.1±0.2 / 79.2±0.3 | 79.2±0.3 / 79.5±0.4
Pubmed | \mathcal{P}-HGCN[4×4] | 79.0±0.2 / 79.5±0.2 | 79.8±0.1 / 80.1±0.3 | 79.9±0.3 / 80.0±0.2 | 79.8±0.4 / \textbf{80.3}±0.3
Pubmed | \mathcal{P}-HGCN[8×2] | 79.0±0.2 / 79.4±0.2 | 79.7±0.2 / 79.9±0.2 | 80.0±0.3 / \textbf{80.1}±0.3 | 80.1±0.2 / 80.3±0.4
Pubmed | \mathcal{P}-HGCN[16×1] | 78.7±0.3 / 79.2±0.3 | 79.4±0.1 / 79.9±0.2 | 79.3±0.3 / 79.5±0.4 | 79.2±0.2 / 80.1±0.4
Citeseer | GCNII | 69.3±2 | 70.6±1.4 | 70.5±1.3 | 70.8±1.6
Citeseer | \mathcal{P}-HGCN[2×8] | 71.4±0.2 / 72.0±0.4 | 72.1±0.2 / 72.3±0.3 | 71.9±0.5 / 71.9±0.7 | 72.1±0.6 / 72.3±0.7
Citeseer | \mathcal{P}-HGCN[4×4] | 71.4±0.2 / 72.0±0.4 | 71.9±0.3 / 72.2±0.3 | 71.5±0.5 / 71.7±1.0 | 71.4±0.2 / 71.9±0.7
Citeseer | \mathcal{P}-HGCN[8×2] | 71.2±0.8 / 70.4±0.9 | 68.6±0.8 / 71.1±0.7 | 72.0±0.2 / 71.9±0.9 | 72.0±0.1 / 72.0±0.5
Citeseer | \mathcal{P}-HGCN[16×1] | 71.3±0.3 / \textbf{72.4}±0.3 | 71.9±0.3 / \textbf{72.5}±0.5 | 72.2±0.4 / \textbf{72.3}±0.4 | 72.3±0.6 / \textbf{72.5}±0.9
Cora | GCNII | 76.6±2.4 | 79.4±1.4 | 81.3±1.0 | 81.5±1.4
Cora | \mathcal{P}-HGCN[2×8] | 80.3±0.7 / \textbf{82.0}±0.5 | 81.5±0.2 / \textbf{82.2}±0.4 | 81.8±0.2 / \textbf{82.1}±0.2 | 81.9±0.3 / 82.1±0.1
Cora | \mathcal{P}-HGCN[4×4] | 80.3±0.8 / 81.7±0.5 | 81.3±0.2 / 81.9±0.4 | 81.7±0.2 / 81.9±0.2 | 81.6±0.5 / 82.1±0.7
Cora | \mathcal{P}-HGCN[8×2] | 77.9±0.8 / 80.8±0.8 | 80.2±0.9 / 81.5±0.3 | 81.7±0.2 / 81.8±0.2 | 81.9±0.5 / \textbf{82.3}±0.8
Cora | \mathcal{P}-HGCN[16×1] | 79.4±0.8 / 81.9±0.7 | 80.4±0.4 / 82.5±0.4 | 81.6±0.6 / 81.5±0.5 | 81.6±0.7 / 81.4±0.6
Airport | GCNII | 88.9±0.5 | 89.1±0.3 | 89.2±0.4 | 89.6±0.7
Airport | \mathcal{P}-HGCN[2×8] | 88.9±0.4 / \textbf{89.1}±0.2 | 89.2±1.0 / 89.4±0.9 | 89.1±0.3 / 89.3±0.5 | 89.7±0.2 / 90.0±0.4
Airport | \mathcal{P}-HGCN[4×4] | 88.6±0.6 / 89.0±0.2 | 89.3±0.4 / 89.4±0.3 | 89.6±0.1 / 89.6±0.2 | 89.7±0.1 / \textbf{90.2}±0.7
Airport | \mathcal{P}-HGCN[8×2] | 88.7±0.2 / 88.7±0.4 | 89.3±0.5 / \textbf{89.6}±0.7 | 89.4±0.2 / \textbf{89.8}±0.5 | 89.6±0.4 / 89.7±0.7
Airport | \mathcal{P}-HGCN[16×1] | 88.5±0.7 / 88.6±0.4 | 88.6±0.3 / 88.5±0.5 | 88.7±0.2 / 88.9±0.5 | 89.0±0.3 / 89.2±0.5
Table 2: Comparisons over various numbers of graph convolution layers and different structures of 16-dimensional product manifolds, with and without HyperDrop. We also compare with a related work, GCNII, using a 16-dimensional embedding space. Mean accuracy (%) and standard deviation are reported. \mathcal{P}-HGCN[d×m] denotes the \mathcal{P}-HGCN with a product manifold of m d-dimensional Lorentz models.

Validation Experiments

Here we demonstrate the effectiveness of the \mathcal{R}-HGCN and our regularization method under different model configurations. For \mathcal{R}-HGCN, increasing the number of hyperbolic graph convolution layers almost always brings improvements on three datasets.

Methods | Pubmed | Citeseer | Cora
Euclidean:
GCN (Kipf and Welling 2017) | 79.1±0.3 | 71.2±0.6 | 81.3±0.5
GAT (Veličković et al. 2017) | 77.7±0.2 | 70.9±0.4 | 82.4±0.6
GraphSage (Hamilton, Ying, and Leskovec 2017) | 77.3±0.3 | 67.8±1.1 | 77.3±0.8
SGC (Wu et al. 2019) | 69.3±0.0 | 78.9±0.0 | 80.9±0.0
APPNP (Klicpera, Bojchevski, and Günnemann 2019) | 80.1±0.2 | 71.6±0.4 | \textbf{83.7}±0.5
GCNII(8) (Chen et al. 2020) | 79.9±0.3 | 72.4±0.9 | 83.5±0.7
Hyperbolic:
HGCN (Chami et al. 2019) | \textbf{80.3}±0.3 | - | 79.9±0.2
H2HGCN (Dai et al. 2021) | 79.9±0.5 | - | 82.8±0.4
\kappa GCN (Bachmann, Bécigneul, and Ganea 2020) | 78.3±0.6 | 70.7±0.5 | 80.0±0.6
LGCN (Zhang et al. 2021) | 78.6±0.7 | 71.9±0.7 | 83.3±0.7
\mathcal{P}-HGCN[16×1](8) | 79.4±0.1 | 71.9±0.3 | 80.4±0.4
\mathcal{P}-HGCN[16×1](8)+HyperDrop | 79.9±0.2 | \textbf{72.5}±0.5 | 82.5±0.4
\mathcal{P}-HGCN[2×8](8) | 80.0±0.1 | 72.1±0.2 | 81.5±0.2
\mathcal{P}-HGCN[2×8](8)+HyperDrop | \textbf{80.3}±0.1 | 72.3±0.3 | 82.2±0.4
Table 3: Mean accuracy (%) and standard deviation on Pubmed, Citeseer, and Cora. We set the dimension of the embedding spaces to 16 for all methods and the number of graph convolution layers to 8 (number in parentheses) for deep models, i.e., GCNII and \mathcal{P}-HGCN. \mathcal{P}-HGCN[d×m] denotes the \mathcal{P}-HGCN with a product manifold of m d-dimensional Lorentz models.
Layers | \mathcal{P}-HGCN[2×8] (with HRC / w/o HRC) | \mathcal{P}-HGCN[4×4] (with HRC / w/o HRC) | \mathcal{P}-HGCN[8×2] (with HRC / w/o HRC) | \mathcal{P}-HGCN[16×1] (with HRC / w/o HRC)
2 | 78.9±0.4 / 78.9±0.2 | 79.0±0.3 / 78.8±0.3 | 78.8±0.3 / 78.9±0.4 | 78.7±0.2 / 78.6±0.4
4 | 79.1±0.4 / 79.3±0.2 | 79.0±0.2 / 79.0±0.6 | 79.2±0.3 / 78.7±0.4 | 79.2±0.3 / 78.6±0.4
8 | 80.3±0.0 / 78.6±0.2 | 80.2±0.2 / 79.1±0.2 | 80.1±0.1 / 78.5±0.6 | 79.8±0.2 / 76.1±3.5
16 | 79.2±0.3 / 29.8±11.1 | 80.0±0.2 / 60.1±7.8 | 80.1±0.3 / 60.1±7.8 | 79.5±0.4 / 49.9±2.1
Table 4: Testing accuracy (%) comparisons over different numbers of layers and model structures, with and without the hyperbolic residual connection (HRC). \mathcal{P}-HGCN[d×m] denotes the \mathcal{P}-HGCN with a product manifold of m d-dimensional Lorentz models.
Methods | Pubmed | Citeseer | Cora
w/o dropout | 79.4±0.1 | 71.9±0.3 | 80.3±0.4
DropConnect | 79.7±0.3 | 71.9±0.3 | 83.0±0.6
HyperDrop | 79.9±0.2 | 72.5±0.5 | 82.5±0.4
Both | 79.9±0.3 | 72.7±0.5 | 83.5±0.9
Table 5: Comparisons of HyperDrop and DropConnect.

The criterion for judging the effectiveness of HyperDrop is whether the performance of \mathcal{R}-HGCN is improved with the help of HyperDrop. In Table 2, we report the performance of HyperDrop with various graph convolution layers and structures of product manifolds on Pubmed, Citeseer, Cora, and Airport. We observe that in most experiments, HyperDrop improves the performance of \mathcal{R}-HGCN. For example, on Cora, \mathcal{R}-HGCN[2×8] obtains 1.7%, 0.7%, 0.3%, and 0.2% gains with 4, 8, 16, and 32 layers, respectively. The stable improvements demonstrate that HyperDrop can effectively improve the generalization ability of \mathcal{R}-HGCN.

We also compare \mathcal{R}-HGCN with a deep Euclidean method, GCNII. The hyperbolic residual connection and hyperbolic identity mapping in \mathcal{R}-HGCN are inspired by GCNII. The main difference between \mathcal{R}-HGCN and GCNII is that \mathcal{R}-HGCN performs graph representation learning in hyperbolic spaces while GCNII works in Euclidean spaces. As shown in Table 2, \mathcal{R}-HGCN shows superiority compared to GCNII. Actually, \mathcal{R}-HGCN is only a baseline model we developed for evaluating the effectiveness of HyperDrop, and we give up extra training tricks for clear evaluations. For example, in Section Comparisons with DropConnect, \mathcal{R}-HGCN obtains the same mean accuracy of 83.5% as GCNII with 8 layers on Cora when using HyperDrop and DropConnect (Wan et al. 2013) together. DropConnect is used in other hyperbolic graph convolutional networks, such as HGCN (Chami et al. 2019) and LGCN (Zhang et al. 2021). We claim that the superior performance of \mathcal{R}-HGCN compared to GCNII benefits from the representation capabilities of hyperbolic spaces when dealing with hierarchical-structure data. It confirms the significance of hyperbolic representation learning.

Ablation Experiments

We conducted ablation experiments on the Pubmed dataset to observe the effect of the proposed hyperbolic residual connection on \mathcal{R}-HGCN performance under different product-manifold configurations. As can be seen from Table 4, without the hyperbolic residual connection, the performance of the model decreases to varying degrees. Moreover, as the number of model layers increases, the decline in model performance becomes more pronounced. The experimental results prove that the hyperbolic residual connection is greatly helpful to the model performance.

Performance Comparisons

The comparisons with several state-of-the-art Euclidean GCNs and HGCNs are shown in Table 3. We have three observations. First and most importantly, compared with other HGCNs, which are typically shallow models, \mathcal{R}-HGCN shows better results on Pubmed and Citeseer. Though, on Pubmed, HGCN also achieves the best accuracy, 80.3%. Note that HGCN uses an extra link prediction task for pre-training while \mathcal{R}-HGCN does not use this training trick, for a clear evaluation of HyperDrop; the performance of HGCN decreases when the link prediction pre-training is not used. On Cora, LGCN achieves the highest mean accuracy among HGCNs, 83.3%. Note that both HGCN and LGCN utilize the DropConnect (Wan et al. 2013) technique for training. As shown in Section Comparisons with DropConnect, \mathcal{R}-HGCN obtains 83.5% mean accuracy on Cora when also using DropConnect, which is 0.2% higher than that of LGCN. Second, both \mathcal{R}-HGCN[16×1] and \mathcal{R}-HGCN[2×8] benefit from HyperDrop on the three datasets. It proves that HyperDrop alleviates the over-fitting issue in hyperbolic graph convolutional networks and improves the generalization ability of \mathcal{R}-HGCN on the test set. Third, compared with Euclidean GCNs, \mathcal{R}-HGCN combined with HyperDrop achieves the best results on Pubmed and Citeseer. It confirms the superiority of hyperbolic representation learning in modeling graph data.

Comparisons with DropConnect

Table 5 shows the performance of HyperDrop and DropConnect (Wan et al. 2013) on Pubmed, Citeseer, and Cora using a 16-dimensional \mathcal{R}-HGCN. Since there was no dropout method tailored for hyperbolic representations before HyperDrop, some works (Chami et al. 2019; Zhang et al. 2021) use DropConnect as a regularization. DropConnect is a variant of dropout that randomly zeros out elements of the Euclidean parameters of the model, and it can be used in hyperbolic graph convolutional networks because their parameters are Euclidean.

For DropConnect, we search the drop rate from 0.1 to 0.9 and report the best results. DropConnect obtains improvements on Pubmed and Cora but not on Citeseer. In contrast, HyperDrop achieves stable improvements on all three datasets, and higher mean accuracy on Pubmed and Citeseer compared to DropConnect. HyperDrop and DropConnect are two dropout methods: the former works on hyperbolic representations and the latter on Euclidean parameters. They can work together effectively for a better generalization of \mathcal{R}-HGCN. As the results on Citeseer and Cora show, using HyperDrop and DropConnect together performs better than using only HyperDrop or DropConnect individually.

Conclusion

In this paper, we have proposed \mathcal{R}-HGCN, a product-manifold-based residual hyperbolic graph convolutional network for overcoming the over-smoothing problem. The hyperbolic residual connection and hyperbolic identity mapping prevent node representations from becoming indistinguishable. The product manifold with different origin points also provides a wider range of perspectives on the data. A novel hyperbolic dropout method, HyperDrop, is proposed to alleviate the over-fitting issue while deepening models. Experiments have demonstrated the effectiveness of \mathcal{R}-HGCNs under various graph convolution layers and different structures of product manifolds.

Acknowledgements

This work was supported by the Natural Science Foundation of China (NSFC) under Grants No. 62172041 and No. 62176021, the Natural Science Foundation of Shenzhen under Grant No. JCYJ20230807142703006, and the 2023 Shenzhen National Science Foundation (No. 20231128220938001).

References

  • Bachmann, Bécigneul, and Ganea [2020] Bachmann, G.; Bécigneul, G.; and Ganea, O. 2020. Constant curvature graph convolutional networks. In International Conference on Machine Learning (ICML), 486–496.
  • Cai and Wang [2020] Cai, C.; and Wang, Y. 2020. A note on over-smoothing for graph neural networks. arXiv preprint arXiv:2006.13318.
  • Chami et al. [2019] Chami, I.; Ying, Z.; Ré, C.; and Leskovec, J. 2019. Hyperbolic graph convolutional neural networks. In Advances in Neural Information Processing Systems (NeurIPS), 4868–4879.
  • Chen et al. [2020] Chen, M.; Wei, Z.; Huang, Z.; Ding, B.; and Li, Y. 2020. Simple and deep graph convolutional networks. In International Conference on Machine Learning (ICML), 1725–1735.
  • Dai et al. [2021] Dai, J.; Wu, Y.; Gao, Z.; and Jia, Y. 2021. A hyperbolic-to-hyperbolic graph convolutional network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 154–163.
  • De Sa et al. [2018] De Sa, C.; Gu, A.; Ré, C.; and Sala, F. 2018. Representation tradeoffs for hyperbolic embeddings. Proceedings of Machine Learning Research, 80: 4460.
  • Hamilton, Ying, and Leskovec [2017] Hamilton, W.; Ying, Z.; and Leskovec, J. 2017. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems (NeurIPS), 1024–1034.
  • Khrulkov et al. [2020] Khrulkov, V.; Mirvakhabova, L.; Ustinova, E.; Oseledets, I.; and Lempitsky, V. 2020. Hyperbolic image embeddings. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 6418–6428.
  • Kipf and Welling [2017] Kipf, T. N.; and Welling, M. 2017. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR).
  • Klicpera, Bojchevski, and Günnemann [2019] Klicpera, J.; Bojchevski, A.; and Günnemann, S. 2019. Predict then propagate: Graph neural networks meet personalized pagerank. In International Conference on Learning Representations (ICLR).
  • Krioukov et al. [2010] Krioukov, D.; Papadopoulos, F.; Kitsak, M.; Vahdat, A.; and Boguná, M. 2010. Hyperbolic geometry of complex networks. Physical Review E, 82(3): 036106.
  • Li, Han, and Wu [2018] Li, Q.; Han, Z.; and Wu, X.-M. 2018. Deeper insights into graph convolutional networks for semi-supervised learning. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 3538–3545.
  • Liu, Nickel, and Kiela [2019] Liu, Q.; Nickel, M.; and Kiela, D. 2019. Hyperbolic graph neural networks. In Advances in Neural Information Processing Systems (NeurIPS), 8230–8241.
  • Liu and Lang [2023] Liu, Y.; and Lang, B. 2023. McH-HGCN: multi-curvature hyperbolic heterogeneous graph convolutional network with type triplets. Neural Computing and Applications, 35(20): 15033–15049.
  • Nickel and Kiela [2017] Nickel, M.; and Kiela, D. 2017. Poincaré embeddings for learning hierarchical representations. In Advances in Neural Information Processing Systems (NeurIPS), 6338–6347.
  • Nickel and Kiela [2018] Nickel, M.; and Kiela, D. 2018. Learning continuous hierarchies in the lorentz model of hyperbolic geometry. In International Conference on Machine Learning (ICML), 3776–3785.
  • Papadopoulos et al. [2012] Papadopoulos, F.; Kitsak, M.; Serrano, M. Á.; Boguná, M.; and Krioukov, D. 2012. Popularity versus similarity in growing networks. Nature, 489(7417): 537–540.
  • Sen et al. [2008] Sen, P.; Namata, G.; Bilgic, M.; Getoor, L.; Galligher, B.; and Eliassi-Rad, T. 2008. Collective classification in network data. AI magazine, 29(3): 93–93.
  • Srivastava et al. [2014] Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; and Salakhutdinov, R. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research (JMLR), 15(1): 1929–1958.
  • Veličković et al. [2017] Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; and Bengio, Y. 2017. Graph attention networks. In International Conference on Learning Representations (ICLR).
  • Wan et al. [2013] Wan, L.; Zeiler, M.; Zhang, S.; Le Cun, Y.; and Fergus, R. 2013. Regularization of neural networks using dropconnect. In International Conference on Machine Learning (ICML), 1058–1066.
  • Wu et al. [2019] Wu, F.; Zhang, T.; Souza Jr, A. H. d.; Fifty, C.; Yu, T.; and Weinberger, K. Q. 2019. Simplifying graph convolutional networks. In International Conference on Machine Learning (ICML), 6861–6871.
  • Yao, Pi, and Chen [2022] Yao, S.; Pi, D.; and Chen, J. 2022. Knowledge embedding via hyperbolic skipped graph convolutional networks. Neurocomputing, 480: 119–130.
  • Zhang et al. [2021] Zhang, Y.; Wang, X.; Shi, C.; Liu, N.; and Song, G. 2021. Lorentzian graph convolutional networks. In Proceedings of the Web Conference (WWW), 1249–1261.