
Graph as Point Set

Xiyuan Wang    Pan Li    Muhan Zhang
Abstract

Graph is a fundamental data structure to model interconnections between entities. Set, on the contrary, stores independent elements. To learn graph representations, current Graph Neural Networks (GNNs) primarily use message passing to encode the interconnections. In contrast, this paper introduces a novel graph-to-set conversion method that bijectively transforms interconnected nodes into a set of independent points and then uses a set encoder to learn the graph representation. This conversion method holds dual significance. Firstly, it enables using set encoders to learn from graphs, thereby significantly expanding the design space of GNNs. Secondly, for Transformer, a specific set encoder, we provide a novel and principled approach to inject graph information losslessly, different from all the heuristic structural/positional encoding methods adopted in previous graph transformers. To demonstrate the effectiveness of our approach, we introduce Point Set Transformer (PST), a transformer architecture that accepts a point set converted from a graph as input. Theoretically, PST exhibits superior expressivity for both short-range substructure counting and long-range shortest path distance tasks compared to existing GNNs. Extensive experiments further validate PST’s outstanding real-world performance. Besides Transformer, we also devise a Deepset-based set encoder, which achieves performance comparable to representative GNNs, affirming the versatility of our graph-to-set method.


1 Introduction

Graph, composed of interconnected nodes, has a wide range of applications and has been extensively studied. In graph machine learning, a central focus is to effectively leverage node connections. Various architectures have arisen for graph tasks, exhibiting significant divergence in their approaches to utilizing adjacency information.

Two primary paradigms have evolved for encoding adjacency information. The first paradigm involves message passing between nodes via edges. Notable methods in this category include Message Passing Neural Network (MPNN) (Gilmer et al., 2017), a foundational framework for GNNs such as GCN (Kipf & Welling, 2017), GIN (Xu et al., 2019a), and GraphSAGE (Hamilton et al., 2017). Subgraph-based GNNs (Zhang & Li, 2021; Huang et al., 2023b; Bevilacqua et al., 2022; Qian et al., 2022; Frasca et al., 2022; Zhao et al., 2022; Zhang et al., 2023a) select subgraphs from the whole graph and run MPNN within each subgraph. These models aggregate messages from neighbors to update the central nodes’ representations. Additionally, Graph Transformers (GTs) integrate adjacency information into the attention matrix (Mialon et al., 2021; Kreuzer et al., 2021; Wu et al., 2021; Dwivedi & Bresson, 2020; Ying et al., 2021; Shirzad et al., 2023) (note that some early GTs have options to not use adjacency matrix by using only positional encodings, but the performance is significantly worse (Dwivedi & Bresson, 2020)). Some recent GTs even directly incorporate message-passing layers into their architectures (Rampásek et al., 2022; Kim et al., 2021). In summary, this paradigm relies on adjacency relationships to facilitate information exchange among nodes.

The second paradigm designs permutation-equivariant neural networks that directly take adjacency matrices as input. This category includes high-order Weisfeiler-Leman tests (Maron et al., 2019a), invariant graph networks (Maron et al., 2019b), and relational pooling (Chen et al., 2020). Additionally, various studies have explored manual feature extraction from the adjacency matrix, including random walk structural encoding (Dwivedi et al., 2022a; Li et al., 2020), Laplacian matrix eigenvectors (Wang et al., 2022; Lim et al., 2023; Huang et al., 2023a), and shortest path distances (Li et al., 2020). However, these approaches typically serve as data augmentation steps for other models, rather than constituting an independent paradigm.

Both paradigms heavily rely on adjacency information in graph encoding. In contrast, this paper explores whether we can give up the adjacency matrix in graph models while achieving competitive performance. As shown in Figure 1, our graph-to-set method converts interconnected nodes into independent points, subsequently encoded by a set encoder like Transformer. Leveraging symmetric rank decomposition, we break down the augmented adjacency matrix $A+D$ into $QQ^{T}$, where $Q$ is a full-column-rank matrix whose rows serve as node coordinates. This representation enables us to express the presence of edges as inner products of coordinate vectors ($Q_{i}$ and $Q_{j}$). Consequently, interlinked nodes can be transformed into independent points plus supplementary coordinates without information loss. Theoretically, two graphs are isomorphic iff the two converted point sets are equal up to an orthogonal transformation (because for any $QQ^{T}=A+D$, $QR$ is also a solution where $R$ is any orthogonal matrix). This equivalence empowers us to encode the set with coordinates in an orthogonal-transformation-equivariant manner, akin to E(3)-equivariant models designed for 3D geometric deep learning. Importantly, our approach is versatile, allowing for any equivariant set encoder, thereby significantly expanding the design space of GNNs. Furthermore, for Transformer, a specific set encoder, our method offers a novel and principled way to inject graph information losslessly. In Appendix D, we additionally show that it unifies various heuristic structural/positional encodings in previous GTs, including random walk (Li et al., 2020; Dwivedi et al., 2023; Rampásek et al., 2022), heat kernel (Mialon et al., 2021), and resistance distance (Zhang et al., 2023b).
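As a concrete illustration of this conversion (a minimal sketch we add for exposition, not the paper’s implementation, assuming NumPy), the following code decomposes $A+D$ of a small graph into coordinates and checks that inner products of the coordinates recover the augmented adjacency, and that any orthogonal transformation of the coordinates preserves it:

```python
import numpy as np

# A small undirected graph: a 4-cycle.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))

# Decompose the augmented adjacency A + D into coordinates Q via eigendecomposition.
lam, U = np.linalg.eigh(A + D)
keep = lam > 1e-8                               # keep non-zero eigenvalues
Q = U[:, keep] * np.sqrt(lam[keep])             # row Q_i = coordinate of node i

# Edge existence (and degrees) are recovered from inner products <Q_i, Q_j>.
assert np.allclose(Q @ Q.T, A + D)

# Rotating the coordinates by any orthogonal R encodes the same graph.
R, _ = np.linalg.qr(np.random.randn(Q.shape[1], Q.shape[1]))
assert np.allclose((Q @ R) @ (Q @ R).T, A + D)
```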

Figure 1: Our method first converts the input graph to a point set and then encodes it with a set encoder. $O(r)$ denotes the set of $r$-dimensional orthogonal transformations.

To instantiate our method, we introduce an orthogonal-transformation-equivariant Transformer, namely Point Set Transformer (PST), to encode the point set. PST provably surpasses existing models in both long-range and short-range expressivity. Extensive experiments verify these claims across synthetic datasets, graph property prediction datasets, and long-range graph benchmarks. Specifically, PST outperforms all baselines on the QM9 (Wu et al., 2017) dataset. Moreover, our graph-to-set method is not constrained to one specific set encoder. We also propose a DeepSet (Segol & Lipman, 2020)-based model, which achieves performance comparable to GIN (Xu et al., 2019b) on our datasets.

Differences from eigendecomposition. Note that our graph-to-set method is distinct from previous approaches that decompose adjacency matrices for positional encodings (Dwivedi et al., 2023; Wang et al., 2022; Lim et al., 2023; Bo et al., 2023). The key difference is that previous methods primarily rely on eigendecomposition (EVD), whereas our method is based on symmetric rank decomposition (SRD). Their differences are as follows:

  • SRD enables a practical conversion of graph problems into set problems. SRD of a matrix is unique up to a single orthogonal transformation, while EVD is unique up to a combination of orthogonal transformations within each eigenspace. This difference allows SRD-based models to easily maintain symmetry, ensuring consistent predictions for isomorphic graphs, while EVD-based methods (Lim et al., 2023) struggle because they need to deal with each eigenspace individually, making them less suitable for graph-level tasks where eigenspaces vary between graphs.

  • Due to this advantage of SRD, we can utilize set encoders with coordinates to capture graph structure, thus expanding the design space of GNNs. Moreover, our method provides a principled way to add graph information to Transformers. Note that previous GTs usually require multiple heuristic encodings together. Besides node positional encodings, they also use adjacency matrices: Grit (Ma et al., 2023) and GraphiT (Mialon et al., 2021) use the random walk matrix (normalized adjacency) as relative positional encoding (RPE). Graph Transformer (Dwivedi & Bresson, 2020), Graphormer (Ying et al., 2021), and SAN (Kreuzer et al., 2021) use the adjacency matrix as RPE. Dwivedi & Bresson (2020)’s ablation shows that adjacency is crucial. GPS (Rampásek et al., 2022), Exphormer (Shirzad et al., 2023), higher-order Transformer (Kim et al., 2021), and Graph ViT/MLP-Mixer (He et al., 2023) even directly incorporate message-passing blocks, which use the adjacency matrix to guide message passing between nodes.

In summary, this paper introduces a novel approach to graph representation learning by converting interconnected graphs into independent points and subsequently encoding them using an orthogonal-transformation-equivariant set encoder like our Point Set Transformer. This innovative approach outperforms existing methods in both long- and short-range tasks, as validated by comprehensive experiments.

2 Preliminary

For a matrix $Z\in\mathbb{R}^{a\times b}$, we define $Z_{i}\in\mathbb{R}^{b}$ as the $i$-th row (as a column vector), and $Z_{ij}\in\mathbb{R}$ as its $(i,j)$ element. For a vector $\Lambda\in\mathbb{R}^{a}$, $\text{diag}(\Lambda)\in\mathbb{R}^{a\times a}$ is the diagonal matrix with $\Lambda$ as its diagonal elements. And for $S\in\mathbb{R}^{a\times a}$, $\text{diagonal}(S)\in\mathbb{R}^{a}$ represents the vector of its diagonal elements.

Let $\mathcal{G}=(V,E,X)$ denote an undirected graph. Here, $V=\{1,2,3,...,n\}$ is the set of $n$ nodes, $E\subseteq V\times V$ is the set of edges, and $X\in\mathbb{R}^{n\times d}$ is the node feature matrix, whose $v$-th row $X_{v}$ is the feature of node $v$. The edge set $E$ can also be represented by the adjacency matrix $A\in\mathbb{R}^{n\times n}$, where $A_{uv}$ is $1$ if the edge exists (i.e., $(u,v)\in E$) and $0$ otherwise. A graph $\mathcal{G}$ can also be represented by the pair $(V,A,X)$ or $(A,X)$. The degree matrix $D$ is a diagonal matrix with the node degrees (row sums of $A$) as its diagonal elements.

Given a permutation function $\pi:\{1,2,3,...,n\}\to\{1,2,3,...,n\}$, the permuted graph is $\pi(\mathcal{G})=(\pi(A),\pi(X))$, where $\pi(A)\in\mathbb{R}^{n\times n}$ with $\pi(A)_{\pi(u)\pi(v)}=A_{uv}$, and $\pi(X)\in\mathbb{R}^{n\times d}$ with $\pi(X)_{\pi(v)}=X_{v}$ for all $u,v\in V$. Essentially, the permutation $\pi$ reindexes each node $v$ to $\pi(v)$ while preserving the original graph structure and node features. Two graphs are isomorphic iff they can be mapped to each other through a permutation.

Definition 2.1.

Graphs $\mathcal{G}_{1}=(A_{1},X_{1})$ and $\mathcal{G}_{2}=(A_{2},X_{2})$ are isomorphic, denoted as $\mathcal{G}_{1}\simeq\mathcal{G}_{2}$, if there exists a permutation $\pi$ such that $\pi(A_{1})=A_{2}$ and $\pi(X_{1})=X_{2}$.

Isomorphic graphs can be transformed into each other by merely reindexing their nodes. In graph tasks, models should assign the same prediction to isomorphic graphs.
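For concreteness (our own sketch, not code from the paper), the reindexing above can be written with a permutation matrix $P$ satisfying $P_{\pi(u),u}=1$, so that $\pi(A)=PAP^{T}$ and $\pi(X)=PX$:

```python
import numpy as np

def permute_graph(A, X, perm):
    """Apply the node reindexing v -> perm[v] to (A, X) via a permutation matrix."""
    n = A.shape[0]
    P = np.zeros((n, n))
    P[perm, np.arange(n)] = 1.0                  # P[pi(u), u] = 1
    return P @ A @ P.T, P @ X                    # pi(A), pi(X)

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
X = np.arange(6, dtype=float).reshape(3, 2)      # toy node features
perm = np.array([2, 0, 1])                       # pi(0)=2, pi(1)=0, pi(2)=1

A2, X2 = permute_graph(A, X, perm)
# pi(A)_{pi(u), pi(v)} = A_{uv} and pi(X)_{pi(v)} = X_v hold by construction.
assert np.allclose(A2[perm[:, None], perm[None, :]], A)
assert np.allclose(X2[perm], X)
```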

Symmetric Rank Decomposition (SRD). Decomposing a matrix into the product of two full-rank matrices is well-known (Puntanen et al., 2011). We further show that a positive semi-definite matrix can be decomposed into the product of a single full-column-rank matrix and its transpose.

Definition 2.2.

(Symmetric Rank Decomposition, SRD) Given a (symmetric) positive semi-definite matrix $L\in\mathbb{R}^{n\times n}$ of rank $r$, its SRD is $Q\in\mathbb{R}^{n\times r}$, where $L=QQ^{T}$.

As $L=QQ^{T}$, $\text{rank}(Q)=\text{rank}(L)=r$, which implies that $Q$ must be full column rank. Moreover, two SRDs of the same matrix are equal up to an orthogonal transformation. Let $O(r)$ denote the set of orthogonal matrices in $\mathbb{R}^{r\times r}$.

Proposition 2.3.

Matrices $Q_{1}$ and $Q_{2}$ in $\mathbb{R}^{n\times r}$ are SRDs of the same matrix iff there exists $R\in O(r)$ such that $Q_{1}=Q_{2}R$.

SRD is closely related to eigendecomposition. Let $L=U\text{diag}(\Lambda)U^{T}$ denote the eigendecomposition of $L$, where $\Lambda\in\mathbb{R}^{r}$ is the vector of non-zero eigenvalues, and $U\in\mathbb{R}^{n\times r}$ is the matrix whose columns are the corresponding eigenvectors. $Q=U\text{diag}(\Lambda^{1/2})$ yields an SRD of $L$, where the superscript denotes the element-wise square root operation.
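The construction above and Proposition 2.3 can be checked numerically; the following is a small sketch of ours (assuming NumPy), not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random PSD matrix L of rank r.
n, r = 6, 3
B = rng.standard_normal((n, r))
L = B @ B.T

# SRD via eigendecomposition: Q = U diag(Lambda^{1/2}) on the non-zero eigenpairs.
lam, U = np.linalg.eigh(L)
keep = lam > 1e-8
Q1 = U[:, keep] * np.sqrt(lam[keep])
assert np.allclose(Q1 @ Q1.T, L)

# Proposition 2.3 (one direction): Q1 R is another SRD of L for any R in O(r).
R, _ = np.linalg.qr(rng.standard_normal((r, r)))
Q2 = Q1 @ R
assert np.allclose(Q2 @ Q2.T, L)
```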

3 Graph as Point Set

In this section, we present our innovative method for converting graphs into sets of points. We first show that Symmetric Rank Decomposition (SRD) can theoretically achieve this transformation: two graphs are isomorphic iff the sets of coordinates generated by SRD are equal up to orthogonal transformations. Additionally, we parameterize SRD for better real-world performance. Proof details are in Appendix A.

3.1 Symmetric Rank Decomposition for Coordinates

A natural approach to breaking down the interconnections between nodes is to decompose the adjacency matrix. While previous methods often used eigendecomposition outputs as supplementary node features, these features are not unique. Consequently, models relying on them fail to provide consistent predictions for isomorphic graphs, ultimately leading to poor generalization. To address this, we show that Symmetric Rank Decomposition (SRD) can convert graph-level tasks into set-level tasks with perfect alignment. Since SRD only applies to positive semi-definite matrices, we use the augmented adjacency matrix $D+A$, which is always positive semi-definite (proof in Appendix A.2).

Theorem 3.1.

Given two graphs $\mathcal{G}=(V,A,X)$ and $\mathcal{G}^{\prime}=(V^{\prime},A^{\prime},X^{\prime})$ with respective degree matrices $D$ and $D^{\prime}$, $\mathcal{G}\simeq\mathcal{G}^{\prime}$ iff $\exists R\in O(r),\ \{\!\!\{(X_{v},RQ_{v})\,|\,v\in V\}\!\!\}=\{\!\!\{(X^{\prime}_{v},Q^{\prime}_{v})\,|\,v\in V^{\prime}\}\!\!\}$, where $Q$ and $Q^{\prime}$ are the SRDs of $D+A$ and $D^{\prime}+A^{\prime}$ respectively, and $r$ is the rank of $Q$.

In this theorem, the graph $\mathcal{G}=(V,A,X)$ is converted to a set of points $\{(X_{v},Q_{v})\,|\,v\in V\}$, where $X_{v}$ is the original node feature of $v$, and $Q_{v}$, the $v$-th row of the SRD of $D+A$, is the $r$-dimensional coordinate of node $v$. Consequently, two graphs are isomorphic iff their point sets are equal up to an orthogonal transformation. Intuitively, we can imagine that the graph is mapped into an $r$-dimensional space, where each node has a coordinate, and the inner product between two coordinates represents edge existence. This mapping is not unique, since we can freely rotate the coordinates through an orthogonal transformation without changing inner products. This conversion can be loosely likened to the reverse process of constructing a molecular graph from atoms’ 3D coordinates, where Euclidean distances between atoms determine node connections in the graph.

Leveraging Theorem 3.1, we can convert a graph into a set and employ a set encoder for encoding it. Our method consistently produces representations for isomorphic graphs when the encoder is orthogonal transformation-invariant. The method’s expressivity hinges on the set encoder’s ability to differentiate non-equal sets, with greater encoder power enhancing overall performance on graph tasks.
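To make Theorem 3.1 concrete, the sketch below (ours, with an off-the-shelf Kabsch/Procrustes alignment as an assumed verification step) builds SRD coordinates of $D+A$ for a graph and a permuted copy and recovers the orthogonal transformation $R$ relating the two point sets:

```python
import numpy as np

def coords(A):
    """SRD coordinates of D + A (rows = node coordinates)."""
    D = np.diag(A.sum(axis=1))
    lam, U = np.linalg.eigh(D + A)
    keep = lam > 1e-8
    return U[:, keep] * np.sqrt(lam[keep])

# A small graph and an isomorphic (permuted) copy.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
perm = np.array([2, 3, 1, 0])
P = np.zeros((4, 4))
P[perm, np.arange(4)] = 1.0
A_perm = P @ A @ P.T

Q, Q_perm = coords(A), coords(A_perm)

# Align the permuted original coordinates P @ Q with Q_perm:
# the best orthogonal R (Kabsch step) gives an exact match, as Theorem 3.1 predicts.
M = (P @ Q).T @ Q_perm
U_, _, Vt = np.linalg.svd(M)
R = U_ @ Vt
assert np.allclose((P @ Q) @ R, Q_perm, atol=1e-6)
```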

3.2 Parameterized Coordinates

In this section, we enhance SRD’s practical performance through parameterization. As shown in Section 2, SRD can be implemented via eigendecomposition: $Q=U\text{diag}(\Lambda^{1/2})$, where $\Lambda\in\mathbb{R}^{r}$ denotes the non-zero eigenvalues of the decomposed matrix, and $U\in\mathbb{R}^{n\times r}$ denotes the corresponding eigenvectors. To parameterize SRD, we replace the element-wise square root with a function $f:\mathbb{R}^{r}\to\mathbb{R}^{r}$. This alteration further eliminates the non-negativity constraint on eigenvalues and enables the use of various symmetric matrices containing adjacency information to generate coordinates. Additionally, for model flexibility, the coordinates can include multiple channels, with each channel corresponding to a distinct eigenvalue function.

Definition 3.2.

(Parameterized SRD, PSRD) With a $d$-channel eigenvalue function $f:\mathbb{R}^{r}\to\mathbb{R}^{r\times d}$ and an adjacency function $\mathcal{Z}:\mathbb{R}^{n\times n}\to\mathbb{R}^{n\times n}$ producing symmetric matrices, the PSRD coordinate of a graph $\mathcal{G}=(V,A,X)$ is $\mathcal{Q}(\mathcal{Z}(A),f)\in\mathbb{R}^{n\times r\times d}$, whose $i$-th channel is $U\text{diag}(f_{i}(\Lambda))\in\mathbb{R}^{n\times r}$, where $\Lambda\in\mathbb{R}^{r}$ and $U\in\mathbb{R}^{n\times r}$ are the non-zero eigenvalues and corresponding eigenvectors of $\mathcal{Z}(A)$, and $f_{i}:\mathbb{R}^{r}\to\mathbb{R}^{r}$ is the $i$-th channel of $f$.

In this definition, $\mathcal{Z}$ maps the adjacency matrix to its variants like the Laplacian matrix, and $f$ transforms eigenvalues. $\mathcal{Q}(\mathcal{Z}(A),f)_{u}\in\mathbb{R}^{r\times d}$ is node $u$’s coordinate. Similar to SRD, PSRD can also convert graph isomorphism problems to set equality problems.
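A sketch of PSRD coordinates with the Laplacian as the adjacency function $\mathcal{Z}$ and a fixed, hand-picked two-channel eigenvalue function $f$ (the paper learns $f$; our choice of $f$ here is only an illustrative assumption):

```python
import numpy as np

def psrd_coordinates(A, f):
    """PSRD coordinates Q(Z(A), f) with Z(A) = graph Laplacian D - A.

    f maps the vector of non-zero eigenvalues (r,) to an (r, d) array;
    channel i of the output is U @ diag(f(lam)[:, i]).
    """
    L = np.diag(A.sum(axis=1)) - A                  # Z(A): Laplacian
    lam, U = np.linalg.eigh(L)
    keep = np.abs(lam) > 1e-8                       # non-zero eigenpairs
    lam, U = lam[keep], U[:, keep]
    F = f(lam)                                      # (r, d)
    # Stack channels: result has shape (n, r, d).
    return np.stack([U * F[:, i] for i in range(F.shape[1])], axis=-1)

# A toy two-channel eigenvalue function (fixed, not learned).
def f(lam):
    return np.stack([np.sqrt(np.abs(lam)), np.exp(-lam)], axis=-1)

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = psrd_coordinates(A, f)
print(Q.shape)   # (n, r, 2): one r-dimensional coordinate per node and channel
```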

Theorem 3.3.

Given a permutation-equivariant adjacency function $\mathcal{Z}$, for graphs $\mathcal{G}=(V,A,X)$ and $\mathcal{G}^{\prime}=(V^{\prime},A^{\prime},X^{\prime})$:

  • If the eigenvalue function $f$ is permutation-equivariant and $\mathcal{G}\simeq\mathcal{G}^{\prime}$, then the two point sets with PSRD coordinates are equal up to an orthogonal transformation, i.e., $\exists R\in O(r),\ \{\!\!\{(X_{v},R\,\mathcal{Q}(\mathcal{Z}(A),f)_{v})\,|\,v\in V\}\!\!\}=\{\!\!\{(X^{\prime}_{v},\mathcal{Q}(\mathcal{Z}(A^{\prime}),f)_{v})\,|\,v\in V^{\prime}\}\!\!\}$, where $r$ is the rank of the coordinates.

  • If $\mathcal{Z}$ is injective, then for all $d\geq 2$, there exists a continuous permutation-equivariant function $f:\mathbb{R}^{r}\to\mathbb{R}^{r\times d}$ such that if the two point sets with PSRD coordinates are equal up to an orthogonal transformation, then $\mathcal{G}\simeq\mathcal{G}^{\prime}$.

Given permutation-equivariant $f$ and $\mathcal{Z}$, the point sets with PSRD coordinates are equal up to an orthogonal transformation for isomorphic graphs. Moreover, there exists an $f$ making the reverse direction true. Therefore, we can safely employ permutation-equivariant eigenvalue functions, ensuring consistent predictions for isomorphic graphs. An expressive eigenvalue function also allows for the lossless conversion of graph-level tasks into set problems. In our implementation, we utilize DeepSet (Segol & Lipman, 2020) due to its universal expressivity for permutation-equivariant set functions. The detailed architecture is shown in Figure 3 in Appendix G.
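The exact eigenvalue-function architecture is given in Figure 3 in Appendix G; below is only a minimal permutation-equivariant DeepSet sketch in PyTorch, with hidden sizes and layer depths chosen by us for illustration:

```python
import torch
import torch.nn as nn

class EquivariantDeepSet(nn.Module):
    """A minimal permutation-equivariant DeepSet mapping eigenvalues (r,) to (r, d)."""

    def __init__(self, hidden: int = 64, d: int = 2):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.rho = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, d))

    def forward(self, lam: torch.Tensor) -> torch.Tensor:
        # lam: (r,) eigenvalues. A per-element encoding plus a shared pooled summary
        # keeps the map equivariant: permuting lam permutes the output rows.
        h = self.phi(lam.unsqueeze(-1))                      # (r, hidden)
        pooled = h.sum(dim=0, keepdim=True).expand_as(h)     # (r, hidden), permutation-invariant
        return self.rho(torch.cat([h, pooled], dim=-1))      # (r, d)

f = EquivariantDeepSet()
out = f(torch.tensor([0.5, 1.2, 3.0]))   # shape (3, d): one d-channel value per eigenvalue
```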

In summary, we use SRD and its parameterized generalization to decompose the adjacency matrix or its variants into coordinates. Thus, we transform a graph into a point set where each point represents a node and includes both the original node feature and the coordinates as its features.

4 Point Set Transformer

Our method, as depicted in Figure 1, comprises two steps: converting the graph into a set of independent points and encoding the set. Section 3 demonstrates the bijective transformation of the graph into a set. To encode this point set, we introduce a novel architecture, Point Set Transformer (PST), designed to maintain orthogonality invariance and deliver remarkable expressivity. Additionally, to highlight our method’s versatility, we propose a DeepSet (Segol & Lipman, 2020)-based set encoder in Appendix K.

PST’s architecture is depicted in Figure 4 in Appendix G. PST operates with two types of representations for each point: scalars, which remain invariant to coordinate orthogonal transformations, and vectors, which adapt equivariantly to coordinate changes. For a point $i$, its scalar representation is $s_{i}\in\mathbb{R}^{d}$, and its vector representation is $v_{i}\in\mathbb{R}^{r\times d}$, where $d$ is the hidden dimension and $r$ is the rank of the coordinates. $s_{i}$ and $v_{i}$ are initialized with the input node feature $X_{i}$ and the PSRD coordinates (detailed in Section 3.2) containing graph structure information, respectively.

Similar to conventional transformers, PST comprises multiple layers. Each layer incorporates two key components:

Scalar-Vector Mixer. This component, akin to the feed-forward network in a Transformer, transforms point features individually. To enable information exchange between vectors and scalars, we employ the following architecture:

$s_{i}^{\prime}\leftarrow\text{MLP}_{1}(s_{i}\,\|\,\text{diagonal}(W_{1}v_{i}^{T}v_{i}W_{2}^{T}))$,  (1)
$v_{i}^{\prime}\leftarrow v_{i}\,\text{diag}(\text{MLP}_{2}(s_{i}))W_{3}+v_{i}W_{4}$  (2)

Here, $W_{1},W_{2},W_{3}$, and $W_{4}\in\mathbb{R}^{d\times d}$ are learnable matrices for mixing different channels of the vector features. Additionally, $\text{MLP}_{1}:\mathbb{R}^{2d}\to\mathbb{R}^{d}$ and $\text{MLP}_{2}:\mathbb{R}^{d}\to\mathbb{R}^{d}$ represent two multi-layer perceptrons transforming scalar representations. The operation $\text{diagonal}(W_{1}v_{i}^{T}v_{i}W_{2}^{T})$ takes the diagonal elements of a matrix, which translates vectors to scalars, while $v_{i}\text{diag}(\text{MLP}_{2}(s_{i}))$ transforms scalar features into vectors. As $v_{i}^{T}R^{T}Rv_{i}=v_{i}^{T}v_{i}$ for all $R\in O(r)$, the scalar update is invariant to orthogonal transformations of the coordinates. Similarly, the vector update is equivariant to $O(r)$.
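A PyTorch sketch of Equations (1)-(2); the MLP depths and initialization are our own assumptions rather than the paper’s exact configuration:

```python
import torch
import torch.nn as nn

class ScalarVectorMixer(nn.Module):
    """Equations (1)-(2): per-point mixing of scalar s_i (d,) and vector v_i (r, d)."""

    def __init__(self, d: int):
        super().__init__()
        self.W1 = nn.Parameter(torch.randn(d, d) / d ** 0.5)
        self.W2 = nn.Parameter(torch.randn(d, d) / d ** 0.5)
        self.W3 = nn.Parameter(torch.randn(d, d) / d ** 0.5)
        self.W4 = nn.Parameter(torch.randn(d, d) / d ** 0.5)
        self.mlp1 = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, d))
        self.mlp2 = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, s: torch.Tensor, v: torch.Tensor):
        # s: (n, d) scalars, v: (n, r, d) vectors.
        gram = v.transpose(1, 2) @ v                                        # (n, d, d), O(r)-invariant
        inv = torch.diagonal(self.W1 @ gram @ self.W2.T, dim1=-2, dim2=-1)  # (n, d)
        s_new = self.mlp1(torch.cat([s, inv], dim=-1))                      # Eq. (1)
        gate = self.mlp2(s).unsqueeze(1)                                    # (n, 1, d)
        v_new = (v * gate) @ self.W3 + v @ self.W4                          # Eq. (2)
        return s_new, v_new

mixer = ScalarVectorMixer(d=16)
s, v = torch.randn(5, 16), torch.randn(5, 8, 16)   # n=5 points, rank r=8
s2, v2 = mixer(s, v)
```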

Attention Layer. Akin to ordinary attention layers, this component computes pairwise attention scores to linearly combine point representations.

$\text{Atten}_{ij}=\text{MLP}((W_{q}^{s}s_{i}\odot W_{k}^{s}s_{j})\,\|\,\text{diagonal}(W_{q}^{v}v_{i}^{T}v_{j}W_{k}^{v}))$  (3)

Here, $W_{q}^{s}$ and $W_{q}^{v}$ denote the linear transformations for scalar and vector queries, respectively, while $W_{k}^{s}$ and $W_{k}^{v}$ are for keys. The equation computes the inner products of queries and keys, similar to standard attention mechanisms. It is easy to see that $\text{Atten}_{ij}$ is also invariant to $O(r)$.

Then we linearly combine point representations with attention scores as the coefficients:

$s_{i}\leftarrow\sum_{j}\text{Atten}_{ij}s_{j}^{\prime},\quad v_{i}\leftarrow\sum_{j}\text{Atten}_{ij}v_{j}^{\prime}$  (4)
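A PyTorch sketch of Equations (3)-(4), with a single scalar attention score per pair and an MLP of our own choosing (multi-head and normalization details from Appendix G are omitted); for brevity it applies the combination to the layer inputs rather than the mixer outputs $s^{\prime},v^{\prime}$:

```python
import torch
import torch.nn as nn

class PSTAttention(nn.Module):
    """Equations (3)-(4): O(r)-invariant attention scores and linear combination."""

    def __init__(self, d: int):
        super().__init__()
        self.Wqs = nn.Parameter(torch.randn(d, d) / d ** 0.5)
        self.Wks = nn.Parameter(torch.randn(d, d) / d ** 0.5)
        self.Wqv = nn.Parameter(torch.randn(d, d) / d ** 0.5)
        self.Wkv = nn.Parameter(torch.randn(d, d) / d ** 0.5)
        self.mlp = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, 1))

    def forward(self, s: torch.Tensor, v: torch.Tensor):
        # s: (n, d), v: (n, r, d)
        qs, ks = s @ self.Wqs, s @ self.Wks                   # scalar queries / keys
        scal = qs.unsqueeze(1) * ks.unsqueeze(0)              # (n, n, d): q_i ⊙ k_j
        qv, kv = v @ self.Wqv.T, v @ self.Wkv                 # (n, r, d) each
        # diagonal(Wqv v_i^T v_j Wkv) for every pair (i, j): inner products over r.
        vec = torch.einsum('irm,jrm->ijm', qv, kv)            # (n, n, d), O(r)-invariant
        atten = self.mlp(torch.cat([scal, vec], dim=-1)).squeeze(-1)   # Eq. (3): (n, n)
        s_out = atten @ s                                     # Eq. (4), applied to the inputs here
        v_out = torch.einsum('ij,jrd->ird', atten, v)         # Eq. (4) for vectors
        return s_out, v_out

attn = PSTAttention(d=16)
s_out, v_out = attn(torch.randn(5, 16), torch.randn(5, 8, 16))
```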

Each transformer layer has time complexity $O(n^{2}r)$ and space complexity $O(n^{2}+nr)$.

Pooling. After several layers, we pool all points’ scalar representations as the set representation $s$.

$s\leftarrow\text{Pool}(\{s_{i}\,|\,i\in V\})$,  (5)

where Pool is a pooling function such as sum, mean, or max.

5 Expressivity

In this section, we delve into the theoretical expressivity of our methods. Our PSRD coordinates and the PST architecture exhibit strong long-range expressivity, allowing for efficient computation of distance metrics between nodes, as well as short-range expressivity, enabling the counting of paths and cycles rooted at each node. Therefore, our model is more expressive than many existing models, including GIN (equivalent to the 1-WL test) (Xu et al., 2019b), PPGN (equivalent to the 2-FWL test, more expressive in some cases) (Maron et al., 2019a), GPS (Rampásek et al., 2022), and Graphormer (Ying et al., 2021) (two representative graph transformers). More details are in Appendix B.

5.1 Long Range Expressivity

This section demonstrates that the inner products of PSRD coordinates exhibit strong long-range expressivity, which PST inherits by utilizing these inner products in its attention layers.

When assessing a model’s capacity to capture long-range interactions (LRI), a key measure is its ability to compute the shortest path distance (spd) between nodes. Since formally characterizing LRI can be challenging, we focus on analyzing models’ performance on this specific measure. We observe that existing models vary significantly in their capacity to calculate spd. Moreover, we find an intuitive explanation for these differences: the spd between nodes can be expressed as $\text{spd}(i,j,A)=\arg\min_{k}\{k\,|\,A^{k}_{ij}>0\}$, so the ability to compute $A^{K}$, the $K$-th power of the adjacency matrix $A$, can serve as a straightforward indicator. Different models need different numbers of layers to compute $A^{K}$.
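As a quick illustration of this formula (our own sketch, assuming NumPy), the spd matrix can be read off from successive powers of $A$:

```python
import numpy as np

def spd_from_powers(A: np.ndarray) -> np.ndarray:
    """spd(i, j, A) = argmin_k { k : (A^k)_{ij} > 0 }, with spd(i, i) = 0."""
    n = A.shape[0]
    dist = np.full((n, n), np.inf)
    np.fill_diagonal(dist, 0)
    Ak = np.eye(n)
    for k in range(1, n):                     # shortest paths have length at most n - 1
        Ak = Ak @ A
        dist[(Ak > 0) & np.isinf(dist)] = k   # first power with a nonzero (i, j) entry
    return dist

# Path graph 0-1-2-3: spd(0, 3) should be 3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(spd_from_powers(A)[0, 3])   # 3.0
```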

PSRD coordinates. PSRD coordinates can capture arbitrarily large shortest path distances through their inner products in one step. To illustrate this, we decompose the adjacency matrix as $A=U\text{diag}(\Lambda)U^{T}$ and employ $U$ and $U\text{diag}(\Lambda^{K})$ as coordinates. Their inner products are as follows:

$\overbrace{U\text{diag}(\Lambda^{K})U^{T}\rightarrow A^{K}}^{1~\text{step}}$  (6)
Theorem 5.1.

There exist permutation-equivariant functions $f_{k}$, $k=0,1,2,...,K$, such that for all graphs $\mathcal{G}=(A,X)$, the shortest path distance between nodes $i,j$ is a function of $\langle\mathcal{Q}(A,f_{0})_{i},\mathcal{Q}(A,f_{k})_{j}\rangle$, $k=0,1,2,...,K$, where $\mathcal{Q}(A,f)$ is the PSRD coordinate defined in Section 3.2 and $K$ is the maximum shortest path distance between nodes.
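A numerical check of Equation (6) (our sketch, assuming NumPy and keeping all eigenpairs for simplicity): the coordinates $U$ and $U\text{diag}(\Lambda^{K})$ recover $A^{K}$ in a single inner-product step, from which spd can be read off as in the previous sketch.

```python
import numpy as np

A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)   # a 5-cycle

lam, U = np.linalg.eigh(A)
K = 4

coords_0 = U                       # coordinates built with f_0(lam) = 1
coords_K = U * (lam ** K)          # coordinates built with f_K(lam) = lam^K

# One inner-product "step" recovers A^K.
assert np.allclose(coords_0 @ coords_K.T, np.linalg.matrix_power(A, K))
```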

2-FWL. A powerful graph isomorphism test, the 2-Folklore-Weisfeiler-Leman test (2-FWL), and its neural network version PPGN (Maron et al., 2019a) produce node pair representations in a matrix $X\in\mathbb{R}^{n\times n}$. $X$ is initialized with $A$, and each layer updates $X$ with $XX$. So intuitively, computing $A^{K}$ takes $\lceil\log_{2}K\rceil$ layers.

$\overbrace{A\rightarrow A^{2}=AA\rightarrow A^{4}=A^{2}A^{2}\rightarrow\cdots\rightarrow A^{K}=A^{K/2}A^{K/2}}^{\lceil\log_{2}K\rceil~\text{layers}}$  (7)
Theorem 5.2.

Let $c^{k}(\mathcal{G})_{ij}$ denote the color of node tuple $(i,j)$ of graph $\mathcal{G}$ at iteration $k$. Given graphs $\mathcal{G}=(A,X)$ and $\mathcal{G}^{\prime}=(A^{\prime},X^{\prime})$, for all $K\in\mathbb{N}^{+}$, if two node tuples $(i,j)$ in $\mathcal{G}$ and $(i^{\prime},j^{\prime})$ in $\mathcal{G}^{\prime}$ have $\text{spd}(i,j,A)<\text{spd}(i^{\prime},j^{\prime},A^{\prime})\leq 2^{K}$, then $c^{K}(\mathcal{G})_{ij}\neq c^{K}(\mathcal{G}^{\prime})_{i^{\prime}j^{\prime}}$. Moreover, for all $L>2^{K}$, there exist $i,j,i^{\prime},j^{\prime}$ such that $\text{spd}(i,j,A)>\text{spd}(i^{\prime},j^{\prime},A^{\prime})\geq L$ while $c^{K}(\mathcal{G})_{ij}=c^{K}(\mathcal{G}^{\prime})_{i^{\prime}j^{\prime}}$.

In other words, $K$ iterations of 2-FWL can distinguish pairs of nodes with different spds, as long as that distance is at most $2^{K}$. Moreover, $K$-iteration 2-FWL cannot differentiate all tuples with spd $>2^{K}$ from other tuples with different spds, which indicates that $K$-iteration 2-FWL is effective in counting shortest path distances up to a maximum of $2^{K}$.

MPNN. Intuitively, each MPNN layer uses $AX$ to update the node representations $X$. However, this operation in general cannot compute $A^{K}$ unless the initial node feature $X=I$.

$\overbrace{X\rightarrow AX\rightarrow A^{2}X=AAX\rightarrow\cdots\rightarrow A^{K}X=AA^{K-1}X}^{K~\text{layers}}$  (8)
Theorem 5.3.

There exists a graph pair that MPNNs cannot differentiate, but whose sets of all-pair spds are different.

If MPNNs could compute the spd between node pairs, they would be able to distinguish this graph pair by their sets of spds. However, we show that no MPNN can distinguish the pair, thus proving that MPNNs cannot compute spd.

Graph Transformers (GTs) are known for their strong long-range capacity (Dwivedi et al., 2022b), as they can aggregate information from the entire graph to update each node’s representation. However, aggregating information from the entire graph is not equivalent to capturing distances between nodes, and some GTs also fail to compute spd between nodes. Details are in Appendix C. Note that this slightly counter-intuitive result arises because we take a new perspective to study long-range interaction, rather than claiming that GTs are weak in long-range capacity.

Besides shortest path distances, our PSRD coordinates also enable the unification of various structural encodings (distance metrics between nodes), including random walk (Li et al., 2020; Dwivedi et al., 2023; Rampásek et al., 2022), heat kernel (Mialon et al., 2021), and resistance distance (Zhang & Li, 2021; Zhang et al., 2023b). Further insights and details are shown in Table 5 in Appendix D.

5.2 Short Range Expressivity

This section shows PST’s expressivity in representative short-range tasks: path and cycle counting.

Theorem 5.4.

A one-layer PST can count paths of length 1 and 2, a two-layer PST can count paths of length 3 and 4, and a four-layer PST can count paths of length 5 and 6. Here, “count” means that the $(i,j)$ element of the attention matrix in the last layer can express the number of paths between nodes $i$ and $j$.

Therefore, with enough layers, our PST models can count the number of paths of length $\leq 6$ between nodes. Furthermore, our PST can also count cycles.

Theorem 5.5.

A one-layer PST can count cycles of length 3, a three-layer PST can count cycles of length 4 and 5, and a five-layer PST can count cycles of length 6 and 7. Here, “count” means the representation of node $i$ in the last layer can express the number of cycles involving node $i$.

Therefore, with enough layers, PST can count the number of cycles of length $\leq 7$ involving each node. Given that even 2-FWL is restricted to counting cycles up to length 7 (Fürer, 2017), the cycle counting power of our Point Set Transformer is at least on par with 2-FWL.
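For intuition about the counting targets (a sketch of ours, not the paper’s evaluation code), the number of 3-cycles involving node $i$ can be read from $(A^{3})_{ii}/2$, a standard identity:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

# Triangles through each node: closed walks of length 3 from i back to i, divided by 2
# (each triangle is traversed in two directions).
triangles_per_node = np.diag(np.linalg.matrix_power(A, 3)) / 2
print(triangles_per_node)   # node-level 3-cycle counts, e.g. [1. 2. 2. 1.] here
```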

6 Related Work

Graph Neural Network with Eigen-Decomposition. Our approach employs coordinates derived from the symmetric rank decomposition (SRD) of adjacency or related matrices, differing from prior studies that primarily rely on eigendecomposition (EVD). While both approaches have similarities, SRD transforms the graph isomorphism problem into a set problem bijectively, which is challenging for EVD, because the SRD of a matrix is unique up to a single orthogonal transformation, while EVD is unique up to multiple orthogonal transformations in different eigenspaces. This key theoretical difference has profound implications for model design. Early efforts, like Dwivedi et al. (2023), introduce eigenvectors into MPNNs’ input node features (Gilmer et al., 2017), and subsequent works, such as Graph Transformers (GTs) (Dwivedi & Bresson, 2020; Kreuzer et al., 2021), incorporate eigenvectors as node positional encodings. However, due to the non-uniqueness of eigenvectors, these models produce varying predictions for isomorphic graphs, limiting their generalization. Lim et al. (2023) partially solve the non-uniqueness problem. However, their solutions are limited to cases with constant eigenvalue multiplicity in graph tasks due to the properties of EVD. On the other hand, approaches like Wang et al. (2022), Bo et al. (2023), and Huang et al. (2024) completely solve non-uniqueness and even apply permutation-equivariant functions to eigenvalues, similar to our PSRD. However, these methods aim to enhance existing MPNNs and GTs with heuristic features. In contrast, we perfectly align graph-level tasks with set-level tasks through SRD, allowing us to convert orthogonal-transformation-equivariant set encoders to graph encoders and to inject graph structure information into Transformers in a principled way.

Table 1: Normalized MAE ($\downarrow$) on substructure counting tasks. Following Huang et al. (2023b), a model can count a structure if its test loss is $\leq 10$ units (yellow cells in the table), measured on a scale of $10^{-3}$. TT: Tailed Triangle. CC: Chordal Cycle. TR: Triangle-Rectangle.
Method 2-Path 3-Path 4-Path 5-path 6-path 3-Cycle 4-Cycle 5-Cycle 6-Cycle 7-cycle TT CC TR
MPNN 1.0 67.3 159.2 235.3 321.5 351.5 274.2 208.8 155.5 169.8 363.1 311.4 297.9
IDGNN 1.9 1.8 27.3 68.6 78.3 0.6 2.2 49 49.5 49.9 105.3 45.4 62.8
NGNN 1.5 2.1 24.4 75.4 82.6 0.3 1.3 40.2 43.9 52.2 104.4 39.2 72.9
GNNAK 4.5 40.7 7.5 47.9 48.8 0.4 4.1 13.3 23.8 79.8 4.3 11.2 131.1
I2-GNN 1.5 2.6 4.1 54.4 63.8 0.3 1.6 2.8 8.2 39.9 1.1 1.0 1.3
PPGN 0.3 1.7 4.1 15.1 21.7 0.3 0.9 3.6 7.1 27.1 2.6 1.5 14.4
PSDS $2.2_{\pm 0.1}$ $2.6_{\pm 0.4}$ $4.9_{\pm 0.8}$ $9.9_{\pm 0.5}$ $15.8_{\pm 0.2}$ $0.6_{\pm 0.7}$ $2.2_{\pm 0.3}$ $5.8_{\pm 0.6}$ $25.1_{\pm 0.7}$ $57.7_{\pm 0.3}$ $6.0_{\pm 1.3}$ $29.8_{\pm 3.0}$ $56.4_{\pm 4.7}$
PST $0.7_{\pm 0.1}$ $1.1_{\pm 0.1}$ $1.5_{\pm 0.1}$ $2.2_{\pm 0.1}$ $3.3_{\pm 0.3}$ $0.8_{\pm 0.1}$ $1.9_{\pm 0.2}$ $3.1_{\pm 0.3}$ $4.9_{\pm 0.3}$ $8.6_{\pm 0.5}$ $3.0_{\pm 0.1}$ $4.0_{\pm 0.7}$ $9.2_{\pm 0.9}$

Equivariant Point Cloud and 3-D Molecule Neural Networks. Equivariant point cloud and 3-D molecule tasks share resemblances: both involve unordered sets of 3-D coordinate points as input and require models to produce predictions invariant/equivariant to orthogonal transformations and translations of coordinates. Several works (Chen et al., 2021; Winkels & Cohen, 2018; Cohen et al., 2018; Gasteiger et al., 2021) introduce specialized equivariant convolution operators to preserve prediction symmetry, yet are later surpassed by models that learn both invariant and equivariant representations for each point, transmitting these representations between nodes. Notably, certain models (Satorras et al., 2021; Schütt et al., 2021; Deng et al., 2021; Wang & Zhang, 2022) directly utilize vectors mirroring input coordinate changes as equivariant features, while others (Thomas et al., 2018; Batzner et al., 2022; Fuchs et al., 2020; Hutchinson et al., 2021; Worrall et al., 2017; Weiler et al., 2018) incorporate high-order irreducible representations of the orthogonal group, achieving proven universal expressivity (Dym & Maron, 2021). Our Point Set Transformer (PST) similarly learns both invariant and equivariant point representations. However, due to the specific conversion of point sets from graphs, PST’s architecture varies from existing models. While translation invariance characterizes point clouds and molecules, graph properties are sensitive to coordinate translations in our method. Hence, we adopt inner products of coordinates. Additionally, these prior works center on 3D point spaces, whereas our coordinates exist in high-dimensional space, rendering existing models and theoretical expressivity results based on high-order irreducible representations incompatible with our framework.

7 Experiments

In our experiments, we evaluate our model across three dimensions: substructure counting for short-range expressivity, real-world graph property prediction for practical performance, and the Long-Range Graph Benchmark (Dwivedi et al., 2022b) to assess long-range interactions. Our primary model, Point Set Transformer (PST) with PSRD coordinates derived from the Laplacian matrix, performs well on all tasks. Moreover, our graph-to-set method is adaptable to various configurations. In the ablation study (see Appendix H), another set encoder, Point Set DeepSet (PSDS, introduced in Appendix K), SRD coordinates instead of PSRD, and coordinates decomposed from the adjacency matrix and the normalized adjacency matrix all demonstrate good performance, highlighting the versatility of our approach. Although PST has higher time complexity than existing Graph Transformers and is slower on large graphs, it shows similar scalability to our baselines on real-world graph property prediction datasets (see Appendix I). Our PST uses fewer or comparable parameters than baselines across all datasets. Dataset details, experiment settings, and hyperparameters are provided in Appendix E and F.

7.1 Graph substructure counting

As Chen et al. (2020) highlight, the ability to count substructures is a crucial metric for assessing expressivity. We evaluate our model’s substructure counting capabilities on synthetic graphs following Huang et al. (2023b). The considered substructures include paths of lengths 2 to 6, cycles of lengths 3 to 7, and other substructures like tailed triangles (TT), chordal cycles (CC), and triangle-rectangles (TR). Our task involves predicting the number of paths originating from each node and the cycles and other substructures in which each node participates. We compare our Point Set Transformer (PST) with expressive GNN models, including ID-GNN (You et al., 2021), NGNN (Zhang & Li, 2021), GNNAK+ (Zhao et al., 2022), I2-GNN (Huang et al., 2023b), and PPGN (Maron et al., 2019a). Baseline results are from Huang et al. (2023b), where uncertainties are unknown.

Results are in Table 1. Following Huang et al. (2023b), a model can count a substructure if its normalized test Mean Absolute Error (MAE) is below $10^{-2}$ (10 units in the table). Remarkably, our PST counts all listed substructures, which aligns with our Theorem 5.4 and Theorem 5.5, while the second-best model, I2-GNN, counts only 10 out of 13 substructures. PSDS can also count 8 out of 13 substructures, showcasing the versatility of our graph-to-set method.

Table 2: MAE ($\downarrow$) on the QM9 dataset. * denotes models with 3D coordinates or features as input. LRP: Deep LRP (Chen et al., 2020). DF: 2-DRFWL(2) GNN (Zhou et al., 2023). 1GNN: 1-GNN. 123: 1-2-3-GNN (Morris et al., 2019).
Target Unit LRP NGNN I2GNN DF PSDS PST 1GNN* DTNN* 123* PPGN* PST*
$\mu$ $10^{-1}$D 3.64 4.28 4.28 3.46 $3.53_{\pm 0.05}$ $\mathbf{3.19_{\pm 0.04}}$ 4.93 2.44 4.76 2.31 $\mathbf{0.23_{\pm 0.01}}$
$\alpha$ $10^{-1}a_0^3$ 2.98 2.90 2.30 2.22 $2.05_{\pm 0.02}$ $\mathbf{1.89_{\pm 0.04}}$ 7.80 9.50 2.70 3.82 $\mathbf{0.78_{\pm 0.05}}$
$\varepsilon_{\text{homo}}$ $10^{-2}$meV 6.91 7.21 7.10 6.15 $6.56_{\pm 0.03}$ $\mathbf{5.98_{\pm 0.09}}$ 8.73 10.56 9.17 7.51 $\mathbf{2.98_{\pm 0.08}}$
$\varepsilon_{\text{lumo}}$ $10^{-2}$meV 7.54 8.08 7.27 6.12 $6.31_{\pm 0.05}$ $\mathbf{5.84_{\pm 0.08}}$ 9.66 13.93 9.55 7.81 $\mathbf{2.20_{\pm 0.07}}$
$\Delta\varepsilon$ $10^{-2}$meV 9.61 10.34 10.34 8.82 $9.13_{\pm 0.04}$ $\mathbf{8.46_{\pm 0.07}}$ 13.33 30.48 13.06 11.05 $\mathbf{4.47_{\pm 0.09}}$
$R^{2}$ $a_0^2$ 19.30 20.50 18.64 15.04 $14.35_{\pm 0.02}$ $\mathbf{13.08_{\pm 0.16}}$ 34.10 17.00 22.90 16.07 $\mathbf{0.93_{\pm 0.03}}$
ZPVE $10^{-2}$meV 1.50 0.54 0.38 0.46 $0.41_{\pm 0.02}$ $\mathbf{0.39_{\pm 0.01}}$ 3.37 4.68 0.52 17.42 $\mathbf{0.26_{\pm 0.01}}$
$U_{0}$ meV 11.24 8.03 5.74 4.24 $3.53_{\pm 0.11}$ $\mathbf{3.46_{\pm 0.17}}$ 63.13 66.12 1.16 6.37 $3.33_{\pm 0.19}$
$U$ meV 11.24 9.82 5.61 4.16 $\mathbf{3.49_{\pm 0.05}}$ $3.55_{\pm 0.10}$ 56.60 66.12 3.02 6.37 $3.26_{\pm 0.05}$
$H$ meV 11.24 8.30 7.32 3.95 $\mathbf{3.47_{\pm 0.04}}$ $3.49_{\pm 0.20}$ 60.68 66.12 1.14 6.23 $3.29_{\pm 0.21}$
$G$ meV 11.24 13.31 7.10 4.24 $3.56_{\pm 0.14}$ $\mathbf{3.55_{\pm 0.17}}$ 52.79 66.12 1.28 6.48 $3.25_{\pm 0.15}$
$C_{v}$ $10^{-2}$cal/mol/K 12.90 17.40 7.30 9.01 $8.35_{\pm 0.09}$ $7.77_{\pm 0.15}$ 27.00 243.00 9.44 18.40 $\mathbf{3.63_{\pm 0.13}}$

7.2 Graph properties prediction

We conduct experiments on four real-world graph datasets: QM9 (Wu et al., 2017), ZINC, ZINC-full (Gómez-Bombarelli et al., 2016), and ogbg-molhiv (Hu et al., 2020). PST excels in performance, and PSDS performs comparably to GIN (Xu et al., 2019a). PST also outperforms all baselines on TU datasets (Ivanov et al., 2019) (see Appendix J).

For the QM9 dataset, we compare PST with various expressive GNNs, including models considering Euclidean distances (1-GNN, 1-2-3-GNN (Morris et al., 2019), DTNN (Wu et al., 2017), PPGN (Maron et al., 2019a)) and those focusing solely on graph structure (Deep LRP (Chen et al., 2020), NGNN (Zhang & Li, 2021), I2-GNN (Huang et al., 2023b), 2-DRFWL(2) GNN (Zhou et al., 2023)). For a fair comparison, we introduce two versions of our model: PST without Euclidean distance (PST) and PST with Euclidean distance (PST*). Results in Table 2 show that PST outperforms all baseline models without Euclidean distance on 11 out of 12 targets, with an average 11% reduction in loss compared to the strongest baseline, 2-DRFWL(2) GNN. PST* outperforms all Euclidean distance-based baselines on 8 out of 12 targets, with an average 4% reduction in loss compared to the strongest baseline, 1-2-3-GNN. Both models rank second in performance on the remaining targets. PSDS without Euclidean distance also outperforms baselines on 6 out of 12 targets.

Table 3: Results on graph property prediction tasks.
zinc zinc-full molhiv
MAE $\downarrow$ MAE $\downarrow$ AUC $\uparrow$
GIN $0.163_{\pm 0.004}$ $0.088_{\pm 0.002}$ $77.07_{\pm 1.49}$
GNN-AK+ $0.080_{\pm 0.001}$ - $79.61_{\pm 1.19}$
ESAN $0.102_{\pm 0.003}$ $0.029_{\pm 0.003}$ $78.25_{\pm 0.98}$
SUN $0.083_{\pm 0.003}$ $0.024_{\pm 0.003}$ $80.03_{\pm 0.55}$
SSWL $0.083_{\pm 0.003}$ $\underline{0.022_{\pm 0.002}}$ $79.58_{\pm 0.35}$
DRFWL $0.077_{\pm 0.002}$ $0.025_{\pm 0.003}$ $78.18_{\pm 2.19}$
CIN $0.079_{\pm 0.006}$ $\underline{0.022_{\pm 0.002}}$ $\mathbf{80.94_{\pm 0.57}}$
NGNN $0.111_{\pm 0.003}$ $0.029_{\pm 0.001}$ $78.34_{\pm 1.86}$
Graphormer $0.122_{\pm 0.006}$ $0.052_{\pm 0.005}$ $\underline{80.51_{\pm 0.53}}$
GPS $0.070_{\pm 0.004}$ - $78.80_{\pm 1.01}$
GMLP-Mixer $0.077_{\pm 0.003}$ - $79.97_{\pm 1.02}$
SAN $0.139_{\pm 0.006}$ - $77.75_{\pm 0.61}$
Specformer $0.066_{\pm 0.003}$ - $78.89_{\pm 1.24}$
SignNet $0.084_{\pm 0.006}$ $0.024_{\pm 0.003}$ -
Grit $\mathbf{0.059_{\pm 0.002}}$ $0.024_{\pm 0.003}$ -
PSDS $0.162_{\pm 0.007}$ $0.049_{\pm 0.002}$ $74.92_{\pm 1.18}$
PST $\underline{0.063_{\pm 0.003}}$ $\mathbf{0.018_{\pm 0.001}}$ $80.32_{\pm 0.71}$

For the ZINC, ZINC-full, and ogbg-molhiv datasets, we evaluate PST and PSDS against a range of expressive GNNs and graph transformers (GTs). This set of models includes expressive MPNNs and subgraph GNNs: GIN (Xu et al., 2019b), SUN (Frasca et al., 2022), SSWL (Zhang et al., 2023a), 2-DRFWL(2) GNN (Zhou et al., 2023), CIN (Bodnar et al., 2021), NGNN (Zhang & Li, 2021), and GTs: Graphormer (Ying et al., 2021), GPS (Rampásek et al., 2022), Graph MLP-Mixer (He et al., 2023), Specformer (Bo et al., 2023), SignNet (Lim et al., 2023), and Grit (Ma et al., 2023). Performance results for the expressive GNNs are sourced from (Zhou et al., 2023), while those for the Graph Transformers are extracted from (He et al., 2023; Ma et al., 2023; Lim et al., 2023). The comprehensive results are presented in Table 3. Notably, our PST outperforms all baseline models on the ZINC-full dataset, achieving an 18% reduction in loss. On the ogbg-molhiv dataset, our PST also delivers competitive results, with only CIN and Graphormer surpassing it. Overall, PST demonstrates exceptional performance across these four diverse datasets, and PSDS also performs comparably to representative GNNs like GIN (Xu et al., 2019a).

7.3 Long Range Graph Benchmark

To assess the long-range capacity of our Point Set Transformer (PST), we conducted experiments on the Long Range Graph Benchmark (Dwivedi et al., 2022b). Following He et al. (2023), we compared our model to a range of baseline models, including GCN (Kipf & Welling, 2017), GINE (Xu et al., 2019a), GatedGCN (Bresson & Laurent, 2017), SAN (Kreuzer et al., 2021), Graphormer (Ying et al., 2021), GMLP-Mixer, Graph ViT (He et al., 2023), and Grit (Ma et al., 2023). PST outperforms all baselines on the PascalVOC-SP and Peptides-Func datasets and achieves the third-highest performance on the Peptides-Struct dataset. PSDS consistently outperforms GCN and GINE. These results showcase the remarkable long-range interaction capturing abilities of our methods across various benchmark datasets. Note that a contemporary work (Tönshoff et al., 2023) points out that even vanilla MPNNs can achieve similar performance to Graph Transformers on LRGB with better hyperparameters, which implies that LRGB is not a rigorous benchmark. However, for comparison with previous work, we maintain the original settings on the LRGB datasets.

Table 4: Results on Long Range Graph Benchmark. * means using Random Walk Structural Encoding (Dwivedi et al., 2022a), and ** means Laplacian Eigenvector Encoding (Dwivedi et al., 2023).
Model PascalVOC-SP Peptides-Func Peptides-Struct
F1 score $\uparrow$ AP $\uparrow$ MAE $\downarrow$
GCN $0.1268_{\pm 0.0060}$ $0.5930_{\pm 0.0023}$ $0.3496_{\pm 0.0013}$
GINE $0.1265_{\pm 0.0076}$ $0.5498_{\pm 0.0079}$ $0.3547_{\pm 0.0045}$
GatedGCN $0.2873_{\pm 0.0219}$ $0.5864_{\pm 0.0077}$ $0.3420_{\pm 0.0013}$
GatedGCN* $0.2860_{\pm 0.0085}$ $0.6069_{\pm 0.0035}$ $0.3357_{\pm 0.0006}$
Transformer** $0.2694_{\pm 0.0098}$ $0.6326_{\pm 0.0126}$ $0.2529_{\pm 0.0016}$
SAN* $0.3216_{\pm 0.0027}$ $0.6439_{\pm 0.0075}$ $0.2545_{\pm 0.0012}$
SAN** $0.3230_{\pm 0.0039}$ $0.6384_{\pm 0.0121}$ $0.2683_{\pm 0.0043}$
GraphGPS $0.3748_{\pm 0.0109}$ $0.6535_{\pm 0.0041}$ $0.2500_{\pm 0.0005}$
Exphormer $0.3975_{\pm 0.0037}$ $0.6527_{\pm 0.0043}$ $0.2481_{\pm 0.0007}$
GMLP-Mixer - $0.6970_{\pm 0.0080}$ $0.2475_{\pm 0.0015}$
Graph ViT - $0.6942_{\pm 0.0075}$ $\mathbf{0.2449_{\pm 0.0016}}$
Grit - $\mathbf{0.6988_{\pm 0.0082}}$ $0.2460_{\pm 0.0012}$
PSDS $0.2134_{\pm 0.0050}$ $0.5965_{\pm 0.0064}$ $0.2621_{\pm 0.0036}$
PST $\mathbf{0.4010_{\pm 0.0072}}$ $\mathbf{0.6984_{\pm 0.0051}}$ $0.2470_{\pm 0.0015}$

8 Conclusion

We introduce a novel approach employing symmetric rank decomposition to transform the interconnected nodes in a graph into independent points with coordinates. Additionally, we propose the Point Set Transformer to encode the point set. Our approach demonstrates remarkable theoretical expressivity and excels in real-world performance, addressing both short-range and long-range tasks effectively. It extends the design space of GNNs and provides a principled way to inject graph structural information into Transformers.

9 Limitations

PST’s scalability is still constrained by the Transformer architecture. To overcome this, acceleration techniques such as sparse attention and linear attention could be explored, which will be our future work.

Impact Statement

This paper presents work whose goal is to advance the field of graph representation learning and will improve the design of graph generation and prediction models. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.

Acknowledgement

Xiyuan Wang and Muhan Zhang are partially supported by the National Key R&D Program of China (2022ZD0160300), the National Key R&D Program of China (2021ZD0114702), the National Natural Science Foundation of China (62276003), and Alibaba Innovative Research Program. Pan Li is supported by the National Science Foundation award IIS-2239565.

References

  • Akiba et al. (2019) Akiba, T., Sano, S., Yanase, T., Ohta, T., and Koyama, M. Optuna: A next-generation hyperparameter optimization framework. In SIGKDD, 2019.
  • Batzner et al. (2022) Batzner, S., Musaelian, A., Sun, L., Geiger, M., Mailoa, J. P., Kornbluth, M., Molinari, N., Smidt, T. E., and Kozinsky, B. E (3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. Nature communications, 13(1):1–11, 2022.
  • Bevilacqua et al. (2022) Bevilacqua, B., Frasca, F., Lim, D., Srinivasan, B., Cai, C., Balamurugan, G., Bronstein, M. M., and Maron, H. Equivariant subgraph aggregation networks. In ICLR, 2022.
  • Bo et al. (2023) Bo, D., Shi, C., Wang, L., and Liao, R. Specformer: Spectral graph neural networks meet transformers. In ICLR, 2023.
  • Bodnar et al. (2021) Bodnar, C., Frasca, F., Otter, N., Wang, Y., Liò, P., Montúfar, G. F., and Bronstein, M. M. Weisfeiler and lehman go cellular: CW networks. In NeurIPS, 2021.
  • Bouritsas et al. (2023) Bouritsas, G., Frasca, F., Zafeiriou, S., and Bronstein, M. M. Improving graph neural network expressivity via subgraph isomorphism counting. TPAMI, 45(1), 2023.
  • Bresson & Laurent (2017) Bresson, X. and Laurent, T. Residual gated graph convnets, 2017.
  • Chen et al. (2021) Chen, H., Liu, S., Chen, W., Li, H., and Jr., R. W. H. Equivariant point network for 3d point cloud analysis. In CVPR, 2021.
  • Chen et al. (2020) Chen, Z., Chen, L., Villar, S., and Bruna, J. Can graph neural networks count substructures? In NeurIPS, 2020.
  • Cohen et al. (2018) Cohen, T. S., Geiger, M., Köhler, J., and Welling, M. Spherical cnns. In ICLR, 2018.
  • Deng et al. (2021) Deng, C., Litany, O., Duan, Y., Poulenard, A., Tagliasacchi, A., and Guibas, L. J. Vector neurons: A general framework for so(3)-equivariant networks. In ICCV, 2021.
  • Dwivedi & Bresson (2020) Dwivedi, V. P. and Bresson, X. A generalization of transformer networks to graphs, 2020.
  • Dwivedi et al. (2022a) Dwivedi, V. P., Luu, A. T., Laurent, T., Bengio, Y., and Bresson, X. Graph neural networks with learnable structural and positional representations. In ICLR, 2022a.
  • Dwivedi et al. (2022b) Dwivedi, V. P., Rampásek, L., Galkin, M., Parviz, A., Wolf, G., Luu, A. T., and Beaini, D. Long range graph benchmark. In NeurIPS, 2022b.
  • Dwivedi et al. (2023) Dwivedi, V. P., Joshi, C. K., Luu, A. T., Laurent, T., Bengio, Y., and Bresson, X. Benchmarking graph neural networks. J. Mach. Learn. Res., 24:43:1–43:48, 2023.
  • Dym & Maron (2021) Dym, N. and Maron, H. On the universality of rotation equivariant point cloud networks. In ICLR, 2021.
  • Feng et al. (2022) Feng, J., Chen, Y., Li, F., Sarkar, A., and Zhang, M. How powerful are k-hop message passing graph neural networks. In NeurIPS, 2022.
  • Fey & Lenssen (2019) Fey, M. and Lenssen, J. E. Fast graph representation learning with pytorch geometric. CoRR, abs/1903.02428, 2019.
  • Frasca et al. (2022) Frasca, F., Bevilacqua, B., Bronstein, M. M., and Maron, H. Understanding and extending subgraph gnns by rethinking their symmetries. In NeurIPS, 2022.
  • Fuchs et al. (2020) Fuchs, F., Worrall, D., Fischer, V., and Welling, M. Se(3)-transformers: 3d roto-translation equivariant attention networks. NeurIPS, 2020.
  • Fürer (2017) Fürer, M. On the combinatorial power of the weisfeiler-lehman algorithm. In CIAC, volume 10236, pp.  260–271, 2017.
  • Gasteiger et al. (2021) Gasteiger, J., Becker, F., and Günnemann, S. Gemnet: Universal directional graph neural networks for molecules. In NeurIPS, 2021.
  • Gilmer et al. (2017) Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O., and Dahl, G. E. Neural message passing for quantum chemistry. In ICML, 2017.
  • Gómez-Bombarelli et al. (2016) Gómez-Bombarelli, R., Duvenaud, D., Hernández-Lobato, J. M., Aguilera-Iparraguirre, J., Hirzel, T. D., Adams, R. P., and Aspuru-Guzik, A. Automatic chemical design using a data-driven continuous representation of molecules, 2016.
  • Hamilton et al. (2017) Hamilton, W. L., Ying, Z., and Leskovec, J. Inductive representation learning on large graphs. In NeurIPS, 2017.
  • He et al. (2023) He, X., Hooi, B., Laurent, T., Perold, A., LeCun, Y., and Bresson, X. A generalization of vit/mlp-mixer to graphs. In ICML, 2023.
  • Hu et al. (2020) Hu, W., Fey, M., Zitnik, M., Dong, Y., Ren, H., Liu, B., Catasta, M., and Leskovec, J. Open graph benchmark: Datasets for machine learning on graphs. In NeurIPS, 2020.
  • Huang et al. (2023a) Huang, Y., Lu, W., Robinson, J., Yang, Y., Zhang, M., Jegelka, S., and Li, P. On the stability of expressive positional encodings for graph neural networks. arXiv preprint arXiv:2310.02579, 2023a.
  • Huang et al. (2023b) Huang, Y., Peng, X., Ma, J., and Zhang, M. Boosting the cycle counting power of graph neural networks with i$^2$-gnns. In ICLR, 2023b.
  • Huang et al. (2024) Huang, Y., Lu, W., Robinson, J., Yang, Y., Zhang, M., Jegelka, S., and Li, P. On the stability of expressive positional encodings for graph neural networks. ICLR, 2024.
  • Hutchinson et al. (2021) Hutchinson, M. J., Lan, C. L., Zaidi, S., Dupont, E., Teh, Y. W., and Kim, H. Lietransformer: Equivariant self-attention for lie groups. In ICML, 2021.
  • Ivanov et al. (2019) Ivanov, S., Sviridov, S., and Burnaev, E. Understanding isomorphism bias in graph data sets. CoRR, abs/1910.12091, 2019.
  • Kim et al. (2021) Kim, J., Oh, S., and Hong, S. Transformers generalize deepsets and can be extended to graphs & hypergraphs. In NeurIPS, 2021.
  • Kipf & Welling (2017) Kipf, T. N. and Welling, M. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.
  • Kreuzer et al. (2021) Kreuzer, D., Beaini, D., Hamilton, W. L., Létourneau, V., and Tossou, P. Rethinking graph transformers with spectral attention. In NeurIPS, 2021.
  • Li et al. (2020) Li, P., Wang, Y., Wang, H., and Leskovec, J. Distance encoding: Design provably more powerful neural networks for graph representation learning. In NeurIPS, 2020.
  • Lim et al. (2023) Lim, D., Robinson, J. D., Zhao, L., Smidt, T. E., Sra, S., Maron, H., and Jegelka, S. Sign and basis invariant networks for spectral graph representation learning. In ICLR, 2023.
  • Ma et al. (2023) Ma, L., Lin, C., Lim, D., Romero-Soriano, A., Dokania, P. K., Coates, M., Torr, P. H. S., and Lim, S. Graph inductive biases in transformers without message passing. In ICML, 2023.
  • Maron et al. (2019a) Maron, H., Ben-Hamu, H., Serviansky, H., and Lipman, Y. Provably powerful graph networks. In NeurIPS, 2019a.
  • Maron et al. (2019b) Maron, H., Ben-Hamu, H., Shamir, N., and Lipman, Y. Invariant and equivariant graph networks. In ICLR, 2019b.
  • Mialon et al. (2021) Mialon, G., Chen, D., Selosse, M., and Mairal, J. Graphit: Encoding graph structure in transformers, 2021.
  • Morris et al. (2019) Morris, C., Ritzert, M., Fey, M., Hamilton, W. L., Lenssen, J. E., Rattan, G., and Grohe, M. Weisfeiler and leman go neural: Higher-order graph neural networks. In AAAI, 2019.
  • Paszke et al. (2019) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Köpf, A., Yang, E. Z., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. Pytorch: An imperative style, high-performance deep learning library. In NeurIPS, pp.  8024–8035, 2019.
  • Perepechko & Voropaev (2009) Perepechko, S. and Voropaev, A. The number of fixed length cycles in an undirected graph. explicit formulae in case of small lengths. MMCP, 148, 2009.
  • Puntanen et al. (2011) Puntanen, S., Styan, G. P., and Isotalo, J. Matrix tricks for linear statistical models: our personal top twenty. Springer, 2011.
  • Qian et al. (2022) Qian, C., Rattan, G., Geerts, F., Niepert, M., and Morris, C. Ordered subgraph aggregation networks. In NeurIPS, 2022.
  • Rampásek et al. (2022) Rampásek, L., Galkin, M., Dwivedi, V. P., Luu, A. T., Wolf, G., and Beaini, D. Recipe for a general, powerful, scalable graph transformer. In NeurIPS, 2022.
  • Satorras et al. (2021) Satorras, V. G., Hoogeboom, E., and Welling, M. E (n) equivariant graph neural networks. In ICML, 2021.
  • Schütt et al. (2021) Schütt, K., Unke, O., and Gastegger, M. Equivariant message passing for the prediction of tensorial properties and molecular spectra. In ICML, 2021.
  • Segol & Lipman (2020) Segol, N. and Lipman, Y. On universal equivariant set networks. In ICLR, 2020.
  • Shervashidze et al. (2011) Shervashidze, N., Schweitzer, P., van Leeuwen, E. J., Mehlhorn, K., and Borgwardt, K. M. Weisfeiler-lehman graph kernels. JMLR, 2011.
  • Shirzad et al. (2023) Shirzad, H., Velingker, A., Venkatachalam, B., Sutherland, D. J., and Sinop, A. K. Exphormer: Sparse transformers for graphs. In ICML, 2023.
  • Thomas et al. (2018) Thomas, N., Smidt, T., Kearnes, S., Yang, L., Li, L., Kohlhoff, K., and Riley, P. Tensor field networks: Rotation-and translation-equivariant neural networks for 3d point clouds. arXiv preprint arXiv:1802.08219, 2018.
  • Tönshoff et al. (2023) Tönshoff, J., Ritzert, M., Rosenbluth, E., and Grohe, M. Where did the gap go? reassessing the long-range graph benchmark. CoRR, abs/2309.00367, 2023.
  • Wang et al. (2022) Wang, H., Yin, H., Zhang, M., and Li, P. Equivariant and stable positional encoding for more powerful graph neural networks. In ICLR, 2022.
  • Wang & Zhang (2022) Wang, X. and Zhang, M. Graph neural network with local frame for molecular potential energy surface. LoG, 2022.
  • Weiler et al. (2018) Weiler, M., Geiger, M., Welling, M., Boomsma, W., and Cohen, T. 3d steerable cnns: Learning rotationally equivariant features in volumetric data. In NeurIPS, 2018.
  • Wijesinghe & Wang (2022) Wijesinghe, A. and Wang, Q. A new perspective on ”how graph neural networks go beyond weisfeiler-lehman?”. In ICLR, 2022.
  • Winkels & Cohen (2018) Winkels, M. and Cohen, T. S. 3d g-cnns for pulmonary nodule detection, 2018.
  • Worrall et al. (2017) Worrall, D. E., Garbin, S. J., Turmukhambetov, D., and Brostow, G. J. Harmonic networks: Deep translation and rotation equivariance. In CVPR, 2017.
  • Wu et al. (2017) Wu, Z., Ramsundar, B., Feinberg, E. N., Gomes, J., Geniesse, C., Pappu, A. S., Leswing, K., and Pande, V. S. Moleculenet: A benchmark for molecular machine learning, 2017.
  • Wu et al. (2021) Wu, Z., Jain, P., Wright, M. A., Mirhoseini, A., Gonzalez, J. E., and Stoica, I. Representing long-range context for graph neural networks with global attention. In NeurIPS, 2021.
  • Xu et al. (2019a) Xu, K., Hu, W., Leskovec, J., and Jegelka, S. How powerful are graph neural networks? In ICLR, 2019a.
  • Xu et al. (2019b) Xu, K., Hu, W., Leskovec, J., and Jegelka, S. How powerful are graph neural networks? In ICLR, 2019b.
  • Ying et al. (2021) Ying, C., Cai, T., Luo, S., Zheng, S., Ke, G., He, D., Shen, Y., and Liu, T. Do transformers really perform badly for graph representation? In NeurIPS, 2021.
  • You et al. (2021) You, J., Selman, J. M. G., Ying, R., and Leskovec, J. Identity-aware graph neural networks. In AAAI, 2021.
  • Zhang et al. (2023a) Zhang, B., Feng, G., Du, Y., He, D., and Wang, L. A complete expressiveness hierarchy for subgraph gnns via subgraph weisfeiler-lehman tests. In ICML, 2023a.
  • Zhang et al. (2023b) Zhang, B., Luo, S., Wang, L., and He, D. Rethinking the expressive power of gnns via graph biconnectivity. In ICLR, 2023b.
  • Zhang & Li (2021) Zhang, M. and Li, P. Nested graph neural networks. In NeurIPS, 2021.
  • Zhang et al. (2018) Zhang, M., Cui, Z., Neumann, M., and Chen, Y. An end-to-end deep learning architecture for graph classification. In AAAI, 2018.
  • Zhao et al. (2022) Zhao, L., Jin, W., Akoglu, L., and Shah, N. From stars to subgraphs: Uplifting any GNN with local structure awareness. In ICLR, 2022.
  • Zhou et al. (2023) Zhou, J., Feng, J., Wang, X., and Zhang, M. Distance-restricted folklore weisfeiler-leman gnns with provable cycle counting power, 2023.

Appendix A Proof

A.1 Proof of Proposition 2.3

The matrices Q_{1}^{T}Q_{1} and Q_{2}^{T}Q_{2} in \mathbb{R}^{r\times r} are full rank and thus invertible. This allows us to derive the following equations:

L=Q_{1}Q_{1}^{T}=Q_{2}Q_{2}^{T} (9)
Q_{1}Q_{1}^{T}=Q_{2}Q_{2}^{T}\Rightarrow Q_{1}^{T}Q_{1}Q_{1}^{T}=Q_{1}^{T}Q_{2}Q_{2}^{T} (10)
\Rightarrow Q_{1}^{T}=(Q_{1}^{T}Q_{1})^{-1}Q_{1}^{T}Q_{2}Q_{2}^{T} (11)
\Rightarrow\exists R\in\mathbb{R}^{r\times r},Q_{1}^{T}=RQ_{2}^{T} (12)
\Rightarrow\exists R\in\mathbb{R}^{r\times r},Q_{1}=Q_{2}R (13)
Q_{1}Q_{1}^{T}=Q_{2}RR^{T}Q_{2}^{T}=Q_{2}Q_{2}^{T}\Rightarrow Q_{2}^{T}Q_{2}RR^{T}Q_{2}^{T}Q_{2}=Q_{2}^{T}Q_{2}Q_{2}^{T}Q_{2} (14)
\Rightarrow RR^{T}=(Q_{2}^{T}Q_{2})^{-1}Q_{2}^{T}Q_{2}Q_{2}^{T}Q_{2}(Q_{2}^{T}Q_{2})^{-1}=I (15)

Since RR^{T}=I, R is orthogonal, so any two full-column-rank decompositions Q of the same matrix are connected by an orthogonal transformation. Conversely, if Q_{1}=Q_{2}R for some orthogonal R with RR^{T}=I, then L=Q_{1}Q_{1}^{T}=Q_{2}RR^{T}Q_{2}^{T}=Q_{2}Q_{2}^{T}.
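As a concrete illustration (a numpy sketch of our own, with Q_1 obtained from the eigendecomposition of L = D + A restricted to its non-zero eigenvalues), the matrix R recovered below is orthogonal and maps one decomposition to the other:

```python
import numpy as np

# Build a small undirected graph and L = D + A (positive semi-definite, see A.2).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(1)) + A

# Symmetric rank decomposition Q1: keep eigenvectors of non-zero eigenvalues.
lam, U = np.linalg.eigh(L)
keep = lam > 1e-8
Q1 = U[:, keep] * np.sqrt(lam[keep])           # n x r, full column rank
r = Q1.shape[1]

# A second decomposition Q2 of the same L, obtained by an arbitrary rotation.
R_true, _ = np.linalg.qr(np.random.randn(r, r))
Q2 = Q1 @ R_true
assert np.allclose(Q2 @ Q2.T, L)

# Recover the orthogonal matrix connecting the two decompositions.
R_hat = np.linalg.inv(Q2.T @ Q2) @ Q2.T @ Q1    # Q1 = Q2 @ R_hat
assert np.allclose(R_hat @ R_hat.T, np.eye(r))  # R_hat is orthogonal
assert np.allclose(Q1, Q2 @ R_hat)
```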

A.2 Matrix D+A is Always Positive Semi-Definite

\forall x\in{\mathbb{R}}^{n} (here E contains each undirected edge in both directions),

x^{T}(D+A)x=\sum_{(i,j)\in E}x_{i}x_{j}+\sum_{i\in V}(\sum_{j\in V}A_{ij})x_{i}^{2} (16)
=\sum_{(i,j)\in E}x_{i}x_{j}+\frac{1}{2}\sum_{(i,j)\in E}x_{i}^{2}+\frac{1}{2}\sum_{(i,j)\in E}x_{j}^{2} (17)
=\frac{1}{2}\sum_{(i,j)\in E}(x_{i}+x_{j})^{2}\geq 0 (18)

Therefore, D+A is always positive semi-definite.
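This identity is easy to check numerically (a numpy sketch of our own; E below ranges over ordered node pairs, i.e., each undirected edge appears in both directions, matching the derivation above):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    n = int(rng.integers(3, 10))
    A = (rng.random((n, n)) < 0.4).astype(float)
    A = np.triu(A, 1); A = A + A.T                  # random simple undirected graph
    D = np.diag(A.sum(1))
    x = rng.standard_normal(n)

    quad = x @ (D + A) @ x
    i, j = np.nonzero(A)                            # ordered edge pairs
    identity = 0.5 * ((x[i] + x[j]) ** 2).sum()     # 1/2 * sum over E of (x_i + x_j)^2
    assert np.isclose(quad, identity)
    assert np.linalg.eigvalsh(D + A).min() > -1e-9  # D + A is PSD
```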

A.3 Proof of Theorem 3.1

We restate the theorem here:

Theorem A.1.

Given two graphs \mathcal{G}=(V,A,X) and \mathcal{G}^{\prime}=(V^{\prime},A^{\prime},X^{\prime}) with degree matrices D and D^{\prime}, respectively, the two graphs are isomorphic (\mathcal{G}\simeq\mathcal{G}^{\prime}) if and only if \exists R\in O(r),\{\!\!\{(X_{v},RQ_{v})|v\in V\}\!\!\}=\{\!\!\{(X_{v}^{\prime},Q_{v}^{\prime})|v\in V^{\prime}\}\!\!\}, where r denotes the rank of Q, and Q and Q^{\prime} are the symmetric rank decompositions of D+A and D^{\prime}+A^{\prime}, respectively.

Proof.

Two graphs are isomorphic \Leftrightarrow \exists\pi\in\Pi_{n}, \pi(A)=A^{\prime} and \pi(X)=X^{\prime}.

Now we prove that \exists\pi\in\Pi_{n}, \pi(A)=A^{\prime} and \pi(X)=X^{\prime} \Leftrightarrow \exists R\in O(r),\{\!\!\{(X_{v},RQ_{v})|v\in V\}\!\!\}=\{\!\!\{(X_{v}^{\prime},Q_{v}^{\prime})|v\in V^{\prime}\}\!\!\}.

When \exists\pi\in\Pi_{n}, \pi(A)=A^{\prime} and \pi(X)=X^{\prime}, as

\pi(Q)\pi(Q)^{T}=\pi(A+D)=A^{\prime}+D^{\prime}=Q^{\prime}{Q^{\prime}}^{T}, (19)

according to Proposition 2.3, \exists R\in O(r),\pi(Q)R^{T}=Q^{\prime}. Moreover, \pi(X)=X^{\prime}, so

\{\!\!\{(X_{v},RQ_{v})|v\in V\}\!\!\}=\{\!\!\{(X_{v}^{\prime},Q_{v}^{\prime})|v\in V^{\prime}\}\!\!\} (20)

When \exists R\in O(r),\{\!\!\{(X_{v},RQ_{v})|v\in V\}\!\!\}=\{\!\!\{(X_{v}^{\prime},Q_{v}^{\prime})|v\in V^{\prime}\}\!\!\}, there exists a permutation \pi\in\Pi_{n} with \pi(X)=X^{\prime},\pi(Q)R^{T}=Q^{\prime}. Therefore,

\pi(A+D)=\pi(Q)\pi(Q)^{T}=\pi(Q)R^{T}R\pi(Q)^{T}=Q^{\prime}{Q^{\prime}}^{T}=A^{\prime}+D^{\prime} (21)

As A=(D+A)-\frac{1}{2}\text{diag}((D+A)\vec{1}) and A^{\prime}=(D^{\prime}+A^{\prime})-\frac{1}{2}\text{diag}((D^{\prime}+A^{\prime})\vec{1}), where \vec{1}\in{\mathbb{R}}^{n} is the all-one vector, applying the same function to both sides of \pi(D+A)=D^{\prime}+A^{\prime} gives

\pi(A)=A^{\prime} (22)

∎
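The construction behind this theorem is easy to exercise numerically (a numpy sketch of our own; the coordinates Q come from the eigendecomposition of D + A restricted to non-zero eigenvalues, and A is recovered exactly as in the last step of the proof):

```python
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [1, 1, 0, 0]], dtype=float)
L = np.diag(A.sum(1)) + A

lam, U = np.linalg.eigh(L)
keep = lam > 1e-8
Q = U[:, keep] * np.sqrt(lam[keep])        # per-node coordinates Q_v (rows of Q)

# The Gram matrix of the point set gives back D + A ...
assert np.allclose(Q @ Q.T, L)
# ... and A itself is recovered via A = (D+A) - 1/2 * diag((D+A) 1).
ones = np.ones(A.shape[0])
A_rec = L - 0.5 * np.diag(L @ ones)
assert np.allclose(A_rec, A)

# Rotating all coordinates by a common orthogonal R changes nothing.
R, _ = np.linalg.qr(np.random.randn(Q.shape[1], Q.shape[1]))
assert np.allclose((Q @ R) @ (Q @ R).T, L)
```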

A.4 Proof of Theorem 3.3

Now we restate the theorem.

Theorem A.2.

Given two graphs {\mathcal{G}}=(V,A,X) and {\mathcal{G}}^{\prime}=(V^{\prime},A^{\prime},X^{\prime}), and an injective permutation-equivariant function \mathcal{Z} mapping an adjacency matrix to a symmetric matrix: (1) For all permutation-equivariant functions f, if {\mathcal{G}}\simeq{\mathcal{G}}^{\prime}, then the two sets of PSRD coordinates are equal up to an orthogonal transformation, i.e., \exists R\in O(r),\{\!\!\{(X_{v},R\mathcal{Q}(\mathcal{Z}(A),f)_{v})|v\in V\}\!\!\}=\{\!\!\{(X_{v}^{\prime},\mathcal{Q}(\mathcal{Z}(A^{\prime}),f)_{v})|v\in V^{\prime}\}\!\!\}, where r is the rank of A and \mathcal{Q}(\mathcal{Z}(A),f),\mathcal{Q}(\mathcal{Z}(A^{\prime}),f) are the PSRD coordinates of A and A^{\prime}, respectively. (2) There exists a continuous permutation-equivariant function f:{\mathbb{R}}^{r}\to{\mathbb{R}}^{r\times 2} such that {\mathcal{G}}\simeq{\mathcal{G}}^{\prime} if \exists R\in O(r),\{\!\!\{(X_{v},R\mathcal{Q}(\mathcal{Z}(A),f_{1})_{v},R\mathcal{Q}(\mathcal{Z}(A),f_{2})_{v})|v\in V\}\!\!\}=\{\!\!\{(X_{v}^{\prime},\mathcal{Q}(\mathcal{Z}(A^{\prime}),f_{1})_{v},\mathcal{Q}(\mathcal{Z}(A^{\prime}),f_{2})_{v})|v\in V^{\prime}\}\!\!\}, where f_{1}:{\mathbb{R}}^{r}\to{\mathbb{R}}^{r} and f_{2}:{\mathbb{R}}^{r}\to{\mathbb{R}}^{r} are the two output channels of f.

Proof.

First, as \mathcal{Z} is an injective permutation-equivariant function, for all permutations \pi\in\Pi_{n},

\pi(\mathcal{Z}(A))=\mathcal{Z}(A^{\prime})\Leftrightarrow\mathcal{Z}(\pi(A))=\mathcal{Z}(A^{\prime})\Leftrightarrow\pi(A)=A^{\prime}. (23)

Therefore, the two graphs are isomorphic \Leftrightarrow \exists\pi\in\Pi_{n},\pi(X)=X^{\prime},\pi(Z)=Z^{\prime}, where Z,Z^{\prime} denote \mathcal{Z}(A),\mathcal{Z}(A^{\prime}), respectively.

In this proof, we denote the eigendecompositions as Z=U\text{diag}(\Lambda)U^{T} and Z^{\prime}=U^{\prime}\text{diag}(\Lambda^{\prime}){U^{\prime}}^{T}, where the elements of \Lambda and \Lambda^{\prime} are sorted in ascending order. Let the multiplicities of the eigenvalues of Z be r_{1},r_{2},...,r_{l}, corresponding to eigenvalues \lambda_{1},\lambda_{2},...,\lambda_{l}.

(1) If {\mathcal{G}}\simeq{\mathcal{G}}^{\prime}, there exists a permutation \pi\in\Pi_{n} with \pi(X)=X^{\prime},\pi(Z)=Z^{\prime}.

\pi(Z)=Z^{\prime}\Rightarrow Z^{\prime}=\pi(U)\text{diag}(\Lambda)\pi(U)^{T}=U^{\prime}\text{diag}(\Lambda^{\prime}){U^{\prime}}^{T}. (24)

\pi(U)\text{diag}(\Lambda)\pi(U)^{T} is also an eigendecomposition of Z^{\prime}, so \Lambda=\Lambda^{\prime} as both are sorted in ascending order. Moreover, since \pi(U) and U^{\prime} are both matrices of eigenvectors, they can differ only in the choice of bases within each eigensubspace. So there exists a block-diagonal matrix V with orthogonal matrices V_{1}\in O(r_{1}),V_{2}\in O(r_{2}),...,V_{l}\in O(r_{l}) as diagonal blocks such that \pi(U)V=U^{\prime}.

As f is a permutation-equivariant function,

\Lambda_{i}=\Lambda_{j}\Rightarrow\exists\pi\in\Pi_{r},\pi(i)=j,\pi(j)=i,\pi(\Lambda)=\Lambda (25)
\Rightarrow\exists\pi\in\Pi_{r},\pi(i)=j,\pi(j)=i,\pi(f(\Lambda))=f(\pi(\Lambda))=f(\Lambda) (26)
\Rightarrow f(\Lambda)_{i}=f(\Lambda)_{j} (27)

Therefore, f produces the same value at positions with the same eigenvalue, and \text{diag}(f(\Lambda)) can be considered a block-diagonal matrix with f_{1}I_{r_{1}},f_{2}I_{r_{2}},...,f_{l}I_{r_{l}} as diagonal blocks, where f_{i}\in{\mathbb{R}} is f(\Lambda)_{p_{i}} for a position p_{i} with \Lambda_{p_{i}}=\lambda_{i}, and I_{r_{i}} is the r_{i}\times r_{i} identity matrix.

Therefore,

V\text{diag}(f(\Lambda))=\text{diag}(f_{1}V_{1},f_{2}V_{2},...,f_{l}V_{l})=\text{diag}(f(\Lambda))V. (28)

Therefore,

\pi(Q(Z,f))V=\pi(U)\text{diag}(f(\Lambda))V (29)
=\pi(U)V\text{diag}(f(\Lambda)) (30)
=U^{\prime}\text{diag}(f(\Lambda^{\prime})) (31)
=Q(Z^{\prime},f) (32)

As VV^{T}=I, i.e., V\in O(r),

\exists R\in O(r),\{\!\!\{(X_{v},R\mathcal{Q}(\mathcal{Z}(A),f)_{v})|v\in V\}\!\!\}=\{\!\!\{(X_{v}^{\prime},\mathcal{Q}(\mathcal{Z}(A^{\prime}),f)_{v})|v\in V^{\prime}\}\!\!\} (33)

(2) We simply define f_{1} as the element-wise square root of the absolute value, f_{1}(\Lambda)_{i}=\sqrt{|\Lambda_{i}|}, and f_{2} as the same quantity multiplied by the sign, f_{2}(\Lambda)_{i}=\text{sgn}(\Lambda_{i})\sqrt{|\Lambda_{i}|}. Therefore, f_{1},f_{2} are continuous and permutation-equivariant.

If \exists R\in O(r),

\{\!\!\{(X_{v},R\mathcal{Q}(\mathcal{Z}(A),f_{1})_{v},R\mathcal{Q}(\mathcal{Z}(A),f_{2})_{v})|v\in V\}\!\!\}=\{\!\!\{(X_{v}^{\prime},\mathcal{Q}(\mathcal{Z}(A^{\prime}),f_{1})_{v},\mathcal{Q}(\mathcal{Z}(A^{\prime}),f_{2})_{v})|v\in V^{\prime}\}\!\!\}, then there exists a permutation \pi\in\Pi_{n} such that

\pi(X)=X^{\prime} (34)
\pi(U)\text{diag}(f_{1}(\Lambda))R^{T}=U^{\prime}\text{diag}(f_{1}(\Lambda^{\prime})) (35)
\pi(U)\text{diag}(f_{2}(\Lambda))R^{T}=U^{\prime}\text{diag}(f_{2}(\Lambda^{\prime})). (36)

Therefore,

\pi(Z)=\pi(U)\text{diag}(f_{1}(\Lambda))\text{diag}(f_{2}(\Lambda))\pi(U)^{T} (37)
=\pi(U)\text{diag}(f_{1}(\Lambda))R^{T}R\,\text{diag}(f_{2}(\Lambda))\pi(U)^{T} (38)
=U^{\prime}\text{diag}(f_{1}(\Lambda^{\prime}))\text{diag}(f_{2}(\Lambda^{\prime})){U^{\prime}}^{T} (39)
=Z^{\prime}. (40)

As \pi(Z)=Z^{\prime} and \pi(X)=X^{\prime}, the two graphs are isomorphic. ∎
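The two channels chosen in part (2) can be illustrated numerically (a numpy sketch of our own; Z is taken to be the adjacency matrix itself, which may be indefinite, and the product of the two coordinate channels reconstructs Z):

```python
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

lam, U = np.linalg.eigh(A)                   # A can have negative eigenvalues
keep = np.abs(lam) > 1e-8
lam, U = lam[keep], U[:, keep]

f1 = np.sqrt(np.abs(lam))                    # f_1(lambda) = sqrt(|lambda|)
f2 = np.sign(lam) * np.sqrt(np.abs(lam))     # f_2(lambda) = sgn(lambda) sqrt(|lambda|)
Q1 = U * f1                                  # PSRD coordinates Q(Z, f_1)
Q2 = U * f2                                  # PSRD coordinates Q(Z, f_2)

# Q(Z, f_1) Q(Z, f_2)^T = U diag(f_1 f_2) U^T = U diag(lambda) U^T = Z.
assert np.allclose(Q1 @ Q2.T, A)
```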

A.5 Proof of Theorem 5.3

Let H_{l} denote a circle of l nodes. Let G_{l} denote a graph with two connected components, one being H_{\lfloor l/2\rfloor} and the other H_{\lceil l/2\rceil}. Obviously, there exists a node pair in G_{l} whose shortest path distance is infinity, while H_{l} has no such node pair, so the multiset of shortest path distances easily distinguishes them. However, both are regular graphs with all node degrees equal to 2, so MPNN cannot distinguish them:

Lemma A.3.

For all nodes v,u in G_{l} and H_{l}, they have the same representation produced by a k-layer MPNN, for all k\in{\mathbb{N}}.

Proof.

We prove it by induction.

k=0: at initialization, all nodes have the same trivial node feature, so their representations are identical.

Assume the (k-1)-layer MPNN produces the same representation h for all nodes. At the k-th layer, each node's representation is updated from its own representation and its two neighbors' representations as follows:

h,\{\!\!\{h,h\}\!\!\} (41)

So all nodes still have the same representation. ∎

A.6 Proof of Theorem 5.2 and B.3

Given two functions f,g, we say f can express g if there exists a function \phi with \phi\circ f=g, which is equivalent to: for arbitrary inputs H,G, f(H)=f(G)\Rightarrow g(H)=g(G). We use f\to g to denote that f can express g. If both f\to g and g\to f, there exists a bijective mapping between the outputs of f and g, denoted as f\leftrightarrow g.

Here are some basic rules:

  • ghfgfhg\to h\Rightarrow f\circ g\to f\circ h.

  • gh,fsfgshg\to h,f\to s\Rightarrow f\circ g\to s\circ h.

  • ff is bijective, fggf\circ g\to g

The 2-folklore Weisfeiler-Leman (2-FWL) test produces a color h_{ij}^{t} for each node pair (i,j) at the t-th iteration. It updates the colors as follows:

h_{ij}^{t+1}=\text{hash}(h_{ij}^{t},\{\!\!\{(h_{ik}^{t},h_{kj}^{t})|k\in V\}\!\!\}). (42)

The color of the whole graph is

h_{G}^{t}=\text{hash}(\{\!\!\{h_{ij}^{t}|(i,j)\in V\times V\}\!\!\}). (43)

Initially, the tuple color hashes the node features and the edge between the node pair, so h_{ij}^{0}\to(\delta_{ij},A_{ij},X_{i},X_{j}).
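The update in Equations (42)-(43) can be implemented directly as color refinement over node pairs. Below is a minimal Python sketch (our own illustration, not the paper's code): the injective hash is replaced by relabeling distinct color tuples with integers, and several graphs are refined jointly so that the relabeling is consistent across them. Calling fwl2([(A1, X1), (A2, X2)], t) and comparing the two returned multisets mimics t iterations of the test.

```python
def relabel(cols):
    # Injective "hash" shared across graphs: map distinct colors to integers.
    all_colors = sorted({c for col in cols for c in col.values()}, key=repr)
    ids = {c: t for t, c in enumerate(all_colors)}
    return [{p: ids[c] for p, c in col.items()} for col in cols]

def fwl2(graphs, iters):
    """Jointly refine pair colors of several graphs with 2-FWL.
    graphs: list of (A, X); returns one multiset of pair colors per graph."""
    cols = []
    for A, X in graphs:
        n = A.shape[0]
        cols.append({(i, j): (i == j, A[i, j], X[i], X[j])
                     for i in range(n) for j in range(n)})
    cols = relabel(cols)
    for _ in range(iters):
        new = []
        for (A, X), col in zip(graphs, cols):
            n = A.shape[0]
            new.append({(i, j): (col[i, j],
                                 tuple(sorted((col[i, k], col[k, j]) for k in range(n))))
                        for i in range(n) for j in range(n)})
        cols = relabel(new)
    return [sorted(c.values()) for c in cols]
```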

We are going to prove that

Lemma A.4.

For all t\in{\mathbb{N}}, h_{ij}^{t} can express A_{ij}^{k} for k=0,1,2,...,2^{t}, where A is the adjacency matrix of the input graph.

Proof.

We prove it by induction on t.

  • When t=0, h_{ij}^{0}\to A_{ij},I_{ij} by initialization.

  • If t>0, assume \forall t^{\prime}<t, h_{ij}^{t^{\prime}}\to A^{k^{\prime}} for k^{\prime}=0,1,2,...,2^{t^{\prime}}. Then for all k=0,1,2,...,2^{t},

    h_{ij}^{t}\to\text{hash}(h_{ij}^{t-1},\{\!\!\{(h_{iv}^{t-1},h_{vj}^{t-1})|v\in V\}\!\!\}) (44)
    \to\text{hash}(h_{ij}^{t-1},\{\!\!\{(A_{iv}^{\lfloor k/2\rfloor},A_{vj}^{\lceil k/2\rceil})|v\in V\}\!\!\}) (45)
    \to\sum_{v\in V}A_{iv}^{\lfloor k/2\rfloor}A_{vj}^{\lceil k/2\rceil} (46)
    \to A_{ij}^{k} (47)

To prove that t-iteration 2-FWL cannot compute shortest path distances larger than 2^{t}, we are going to construct an example.

Lemma A.5.

Let H_{l} denote a circle of l nodes. Let G_{l} denote a graph with two connected components, one being H_{\lfloor l/2\rfloor} and the other H_{\lceil l/2\rceil}. \forall K\in{\mathbb{N}}^{+}, 2-FWL cannot distinguish H_{l_{K}} and G_{l_{K}}, where l_{K}=2\times 2\times 2^{K}. However, H_{l_{K}} contains node tuples whose shortest path distance is 2^{K}+1 while G_{l_{K}} does not, so any model that can count shortest path distances up to 2^{K}+1 can distinguish them.

Proof.

Given a fixed K, we enumerate the iterations of 2-FWL. Given the two graphs H_{l_{K}} and G_{l_{K}}, we partition all tuples in each graph according to the shortest path distance between their nodes: c_{0},c_{1},...,c_{l},...,c_{2^{K}}, where c_{l} contains all tuples with shortest path distance l, and c_{>2^{K}} contains all tuples with shortest path distance larger than 2^{K}. We are going to prove that at the k-th iteration, k\leq K, all c_{i},i\leq 2^{k} tuples have the same representation (denoted h^{k}_{i}), and c_{2^{k}+1},c_{2^{k}+2},...,c_{2^{K}},c_{>2^{K}} tuples all have the same representation (denoted h^{k}_{2^{k}+1}).

Initially, all c_{0} tuples have representation h_{0}^{0}, all c_{1} tuples have the same representation h_{1}^{0} in both graphs, and all other tuples have the same representation h_{2}^{0}.

Assume at the k-th iteration, all c_{i},i\leq 2^{k} tuples have the same representation h^{k}_{i}, and c_{2^{k}+1},c_{2^{k}+2},...,c_{2^{K}},c_{>2^{K}} tuples all have the same representation h^{k}_{2^{k}+1}. At the (k+1)-th iteration, each representation is updated as follows.

h_{ij}^{t+1}\leftarrow h_{ij}^{t},\{\!\!\{(h_{iv}^{t},h_{vj}^{t})|v\in V\}\!\!\} (48)

For all tuples, the multiset has l_{K} elements in total.

For c_{0} tuples, the multiset has one (h_{0}^{k},h_{0}^{k}) for v=i, two (h_{t}^{k},h_{t}^{k}) for each t=1,2,..,2^{k} (as v is a t-hop neighbor of i), and all remaining elements are (h_{2^{k}+1}^{k},h_{2^{k}+1}^{k}) (as v is outside the 2^{k}-hop neighborhood of i).

For c_{t},t=1,2,...,2^{k} tuples: the multiset has one (h_{a}^{k},h_{t-a}^{k}) for each a=0,1,2,..,t (as v is on the shortest path between i and j), one (h_{a}^{k},h_{2^{k}+1}^{k}) for each a=1,2,...,2^{k}, and one (h_{2^{k}+1}^{k},h_{a}^{k}) for each a=1,2,...,2^{k}, with all other elements being (h_{2^{k}+1}^{k},h_{2^{k}+1}^{k}).

For c_{t},t=2^{k}+1,2^{k}+2,...,2^{k+1} tuples: the multiset has one (h_{a}^{k},h_{t-a}^{k}) for each a=t-2^{k},t-2^{k}+1,...,2^{k} (as v is on the shortest path between i and j), one (h_{a}^{k},h_{t-a}^{k}) for each a\in\{0,1,2,...,t-2^{k}-1\}\cup\{2^{k}+1,2^{k}+2,...,2^{k+1}\}, one (h_{a}^{k},h_{2^{k}+1}^{k}) for each a=1,2,...,2^{k}, and one (h_{2^{k}+1}^{k},h_{a}^{k}) for each a=1,2,...,2^{k}, with all other elements being (h_{2^{k}+1}^{k},h_{2^{k}+1}^{k}).

For c_{t},t=2^{k+1}+1,...,2^{K},>2^{K} tuples: the multisets are all the same: two (h_{a}^{k},h_{2^{k}+1}^{k}) and two (h_{2^{k}+1}^{k},h_{a}^{k}) for each a=1,2,3,...,2^{k}, with all other elements being (h_{2^{k}+1}^{k},h_{2^{k}+1}^{k}). ∎

A.7 Proof of Theorem 5.1

We can simply choose f_{k}(\Lambda)=\Lambda^{k}. Then \langle\mathcal{Q}(A,f_{0})_{i},\mathcal{Q}(A,f_{k})_{j}\rangle=A_{ij}^{k}. The shortest path distance is

spd(i,j,A)=\min\{k\in{\mathbb{N}}\,|\,A_{ij}^{k}>0\} (49)
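This recipe can be checked directly (a numpy sketch of our own; the shortest path distance is computed both from powers of A, as in Equation (49), and by breadth-first search):

```python
import numpy as np
from collections import deque

def spd_from_powers(A, max_k=None):
    n = A.shape[0]
    max_k = max_k or n
    spd = np.full((n, n), np.inf)
    P = np.eye(n)                             # P = A^k, starting from A^0
    for k in range(max_k + 1):
        spd[(P > 0) & np.isinf(spd)] = k      # first k with A^k_ij > 0
        P = P @ A
    return spd

def spd_bfs(A):
    n = A.shape[0]
    spd = np.full((n, n), np.inf)
    for s in range(n):
        spd[s, s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.nonzero(A[u])[0]:
                if np.isinf(spd[s, v]):
                    spd[s, v] = spd[s, u] + 1
                    q.append(v)
    return spd

A = (np.random.rand(8, 8) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                # random simple undirected graph
assert np.allclose(spd_from_powers(A), spd_bfs(A))
```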

A.8 Proof of Theorem 5.4 and  5.5

This section assumes that the input graph is undirected and unweighted with no self-loops. Let A denote the adjacency matrix of the graph. Note that A^{T}=A and A\odot A=A.

An L-path is a sequence of edges [(i_{1},i_{2}),(i_{2},i_{3}),...,(i_{L},i_{L+1})] in which all nodes are distinct. An L-cycle is an L-path except that i_{1}=i_{L+1}. Two paths/cycles are considered equivalent if their sets of edges are equal. The count of L-paths from node u to v is the number of non-equivalent paths with i_{1}=u,i_{L+1}=v. The count of L-cycles rooted at node u is the number of non-equivalent cycles involving node u.

Perepechko & Voropaev (2009) show that the number of paths can be expressed as a polynomial of A, the adjacency matrix of the input unweighted graph. Specifically, let P_{L} denote the path matrix whose (u,v) element is the number of L-paths from u to v; Perepechko & Voropaev (2009) provide formulae expressing P_{L} in terms of A for small L.
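These formulae are easy to verify numerically. The sketch below (numpy and networkx, our own illustration) checks the 3-path formula P_3 = A^3 + A - Ad(A^2) - d(A^2)A (Equation 56 below) against brute-force enumeration of simple paths on a small random graph; d(X) denotes the diagonal part of X, as in the rest of this section:

```python
import numpy as np
import networkx as nx

def d(X):                       # diagonal part of a matrix, as a matrix
    return np.diag(np.diag(X))

G = nx.gnp_random_graph(7, 0.5, seed=0)
A = nx.to_numpy_array(G)

# Perepechko & Voropaev's formula for the 3-path matrix.
P3 = A @ A @ A + A - A @ d(A @ A) - d(A @ A) @ A

# Brute force: count simple paths with exactly 3 edges between each ordered pair.
n = A.shape[0]
P3_brute = np.zeros((n, n))
for u in range(n):
    for v in range(n):
        if u == v:
            continue
        for p in nx.all_simple_paths(G, u, v, cutoff=3):
            if len(p) == 4:     # 4 nodes = 3 edges
                P3_brute[u, v] += 1

off = ~np.eye(n, dtype=bool)
assert np.allclose(P3[off], P3_brute[off])   # formula holds off the diagonal
```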

This section considers a weakened version of the Point Set Transformer. Each layer still consists of an sv-mixer and multi-head attention. However, the multi-head attention matrix uses the scalar and vector features before the sv-mixer for Q,K, and uses the features after the sv-mixer for V.

At the k-th layer, the sv-mixer is:

s_{i}^{\prime}\leftarrow\text{MLP}_{1}(s_{i}\|\text{diag}(W_{1}v_{i}^{T}v_{i}W_{2})) (50)
v_{i}^{\prime}\leftarrow v_{i}\text{diag}(\text{MLP}_{2}(s_{i}^{\prime}))W_{3}+v_{i}W_{4} (51)

The attention layer is:

Y_{ij}=\text{MLP}_{3}(K_{ij}),\quad K_{ij}=(W_{q}^{s}s_{i}\odot W_{k}^{s}s_{j})\|\text{diag}(W_{q}^{v}v_{i}^{T}v_{j}W_{k}^{v}), (52)

As s^{\prime}_{i} and v^{\prime}_{i} can express s_{i} and v_{i}, the weakened version can be expressed by the original version.

s_{i}\leftarrow\text{MLP}_{4}(s_{i}^{\prime}\|\sum_{j}\text{Atten}_{ij}s_{j}^{\prime}) (53)
v_{i}\leftarrow W_{5}(v_{i}^{\prime}\|\sum_{j}\text{Atten}_{ij}v_{j}^{\prime}) (54)

Let Y^{k} denote the attention matrix at the k-th layer. Y^{k} is a learnable function of A. Let {\mathbb{Y}}^{k} denote the space of polynomials of A that Y^{k} can express. Each element of it is a function from {\mathbb{R}}^{n\times n} to {\mathbb{R}}^{n\times n}.

We are going to prove some lemmas about {\mathbb{Y}}.

Lemma A.6.

{\mathbb{Y}}^{k}\subseteq{\mathbb{Y}}^{k+1}

Proof.

As there are residual connections, the scalar and vector representations at layer k+1 can always contain those of layer k, so the attention matrix at layer k+1 can always express that of layer k. ∎

Lemma A.7.

If y_{1},y_{2},...,y_{s}\in{\mathbb{Y}}^{k}, their Hadamard product y_{1}\odot y_{2}\odot...\odot y_{s}\in{\mathbb{Y}}^{k}.

Proof.

As (y_{1}\odot y_{2}\odot...\odot y_{s})_{ij}=\prod_{l=1}^{s}(y_{l})_{ij} is an element-wise polynomial on a compact domain, there exists an MLP (denoted g) that takes the (i,j) elements of y_{1},y_{2},...,y_{s} and produces the corresponding element of their Hadamard product. Assume g_{0} is the \text{MLP}_{3} in Equation 52 that produces the concatenation of y_{1},y_{2},..,y_{s}; using g\circ g_{0} (the composition of the two MLPs) as the \text{MLP}_{3} in Equation 52 produces the Hadamard product. ∎

Lemma A.8.

If y_{1},y_{2},...,y_{s}\in{\mathbb{Y}}^{k}, their linear combination \sum_{l=1}^{s}a_{l}y_{l}\in{\mathbb{Y}}^{k}, where a_{l}\in{\mathbb{R}}.

Proof.

As (\sum_{l=1}^{s}a_{l}y_{l})_{ij}=\sum_{l=1}^{s}a_{l}(y_{l})_{ij} is element-wise linear, it can be produced by a linear layer (denoted g). Assume g_{0} is the \text{MLP}_{3} in Equation 52 that produces the concatenation of y_{1},y_{2},..,y_{s}; using g\circ g_{0} as the \text{MLP}_{3} in Equation 52 produces the linear combination. ∎

Lemma A.9.

\forall s>0, A^{s}\in{\mathbb{Y}}^{1}.

Proof.

As shown in Section 5.1, the inner product of coordinates can produce AsA^{s}. ∎

Lemma A.10.

\forall y_{1},y_{2},y_{3}\in{\mathbb{Y}}^{k},s\in{\mathbb{N}}^{+}: d(y_{1})y_{2},\ y_{2}d(y_{1}),\ d(y_{1})y_{2}d(y_{3}),\ y_{1}A^{s},\ A^{s}y_{1},\ y_{1}A^{s}y_{2}\in{\mathbb{Y}}^{k+1}

Proof.

According to Equation 50 and Equation 52, s_{i}^{\prime} at the k-th layer can express y_{ii} for all y\in{\mathbb{Y}}^{k}. Therefore, at the (k+1)-th layer in Equation 52, \text{MLP}_{3} can first compute the (i,j) element (y_{2})_{ij} from s_{i},s_{j},v_{i},v_{j}, then multiply (y_{2})_{ij} with (y_{1})_{ii} from s_{i} and (y_{3})_{jj} from s_{j}, and thus produce d(y_{1})y_{2},y_{2}d(y_{1}),d(y_{1})y_{2}d(y_{3}).

Moreover, according to Equation 54, v_{i} at the (k+1)-th layer can express \sum_{k}(y_{1})_{ik}v_{k} and \sum_{k}(y_{2})_{ik}v_{k}. So at the (k+1)-th layer, the (i,j) element of the attention matrix can express \langle\sum_{k}(y_{1})_{ik}v_{k},v_{j}\rangle, \langle v_{i},\sum_{k}(y_{1})_{jk}v_{k}\rangle, and \langle\sum_{k}(y_{1})_{ik}v_{k},\sum_{k}(y_{2})_{jk}v_{k}\rangle, corresponding to y_{1}A^{s}, A^{s}y_{1}, and y_{1}A^{s}y_{2}, respectively. ∎

Therefore,

Lemma A.11.
  • \forall l>0,a_{i}>0: \odot_{i=1}^{l}A^{a_{i}}\in{\mathbb{Y}}^{1}.

  • \forall s_{1}>0,l_{1},l_{2}>0,a_{i},b_{i}>0: A^{s_{1}}d(\odot_{i=1}^{l_{1}}A^{a_{i}}),\ d(\odot_{i=1}^{l_{1}}A^{a_{i}})A^{s_{1}},\ d(\odot_{i=1}^{l_{1}}A^{a_{i}})A^{s_{1}}d(\odot_{i=1}^{l_{2}}A^{b_{i}})\in{\mathbb{Y}}^{2}.

  • \forall s_{1},s_{2}>0,l>0,a_{i}>0: A^{s_{1}}d(\odot_{i=1}^{l}A^{a_{i}})A^{s_{2}}\in{\mathbb{Y}}^{3}.

Therefore, we come to our main theorem.

Theorem A.12.

The attention matrix of a 1-layer PST can express P_{2}, a 2-layer PST can express P_{3}, a 3-layer PST can express P_{4} and P_{5}, and a 5-layer PST can express P_{6}.

Proof.

As shown in (Perepechko & Voropaev, 2009),

P_{2}=A^{2} (55)

There is only one kind of basis, \odot_{i=1}^{l}A^{a_{i}}. A 1-layer PST can express it.

P_{3}=A^{3}+A-Ad(A^{2})-d(A^{2})A (56)

There are three kinds of bases: \odot_{i=1}^{l}A^{a_{i}} (A^{3},A), A^{s_{1}}d(\odot_{i=1}^{l}A^{a_{i}}) (Ad(A^{2})), and d(\odot_{i=1}^{l}A^{a_{i}})A^{s_{1}} (d(A^{2})A). A 2-layer PST can express it.

P_{4}=A^{4}+A^{2}+3A\odot A^{2}-d(A^{3})A-d(A^{2})A^{2}-Ad(A^{3})-A^{2}d(A^{2})-Ad(A^{2})A (57)

There are four kinds of bases: \odot_{i=1}^{l}A^{a_{i}} (A^{4},A^{2},A\odot A^{2}), A^{s_{1}}d(\odot_{i=1}^{l}A^{a_{i}}) (Ad(A^{3}),A^{2}d(A^{2})), d(\odot_{i=1}^{l}A^{a_{i}})A^{s_{1}} (d(A^{3})A,d(A^{2})A^{2}), and A^{s_{1}}d(\odot_{i=1}^{l}A^{a_{i}})A^{s_{2}} (Ad(A^{2})A). A 3-layer PST can express it.

P_{5}=A^{5}+3A^{3}+4A (58)
+3A^{2}\odot A^{2}\odot A+3A\odot A^{3}-4A\odot A^{2} (59)
-d(A^{4})A-d(A^{3})A^{2}-d(A^{2})A^{3}+2d(A^{2})^{2}A-2d(A^{2})A-4d(A^{2})A (60)
-Ad(A^{4})-A^{2}d(A^{3})-A^{3}d(A^{2})+2Ad(A^{2})^{2}-2Ad(A^{2})-4Ad(A^{2}) (61)
+d(A^{2})Ad(A^{2}) (62)
+3(A\odot A^{2})A (63)
+3A(A\odot A^{2}) (64)
-Ad(A^{3})A-Ad(A^{2})A^{2}-A^{2}d(A^{2})A (65)
+d(Ad(A^{2})A)A (66)

The bases appear in:

  • {\mathbb{Y}}^{1}

    • \odot_{i=1}^{l}A^{a_{i}}: A^{5},A^{3},A,A^{2}\odot A^{2}\odot A,A\odot A^{3},A\odot A^{2}.

  • {\mathbb{Y}}^{2}

    • A^{s_{1}}d(\odot_{i=1}^{l}A^{a_{i}}): Ad(A^{4}),A^{2}d(A^{3}),A^{3}d(A^{2}),Ad(A^{2})^{2},Ad(A^{2}).

    • d(\odot_{i=1}^{l}A^{a_{i}})A^{s_{1}}: d(A^{4})A,d(A^{3})A^{2},d(A^{2})A^{3},d(A^{2})^{2}A,d(A^{2})A.

    • d(\odot_{i=1}^{l_{1}}A^{a_{i}})A^{s_{1}}d(\odot_{i=1}^{l_{2}}A^{b_{i}}): d(A^{2})Ad(A^{2}).

    • A^{s_{1}}(\odot_{i=1}^{l}A^{a_{i}}): A(A\odot A^{2}).

    • (\odot_{i=1}^{l}A^{a_{i}})A^{s_{1}}: (A\odot A^{2})A.

  • {\mathbb{Y}}^{3}:

    • A^{s}{\mathbb{Y}}^{2}: Ad(A^{3})A, Ad(A^{2})A^{2}, A^{2}d(A^{2})A.

    • d({\mathbb{Y}}^{2})A^{s}: d(Ad(A^{2})A)A.

A 3-layer PST can express it.

The formula for the 6-path matrix is quite long:

P_{6}=A^{6}+4A^{4}+12A^{2} (68)
+3A\odot A^{4}+6A\odot A^{2}\odot A^{3}+A^{2}\odot A^{2}\odot A^{2}-4A^{2}\odot A^{2}+44A\odot A^{2} (69)
-d(A^{5})A-d(A^{4})A^{2}-d(A^{3})A^{3}-5d(A^{3})A-d(A^{2})A^{4}-7d(A^{2})A^{2} (70)
+2d(A^{2})^{2}A^{2}+4(d(A^{2})\odot d(A^{3}))A (71)
-Ad(A^{5})-A^{4}d(A^{2})-A^{3}d(A^{3})-5Ad(A^{3})-A^{2}d(A^{4})-7A^{2}d(A^{2}) (72)
+2A^{2}d(A^{2})^{2}+4A(d(A^{2})\odot d(A^{3})) (73)
+d(A^{2})Ad(A^{3})+d(A^{3})Ad(A^{2})+d(A^{2})A^{2}d(A^{2}) (74)
+2(A\odot A^{3})A+2(A\odot A^{2}\odot A^{2})A+(A^{2}\odot A^{2}\odot A)A-3(A\odot A^{2})A+(A\odot A^{3})A (75)
+(A\odot A^{2})A^{2}+2(A\odot A^{2})A^{2}-(A\odot A^{2})A (76)
+2A(A\odot A^{3})+2A(A\odot A^{2}\odot A^{2})+A(A^{2}\odot A^{2}\odot A)-3A(A\odot A^{2})+A(A\odot A^{3}) (77)
+A^{2}(A\odot A^{2})+2A^{2}(A\odot A^{2})-A(A\odot A^{2}) (78)
-8A\odot(A(A\odot A^{2}))-8A\odot((A\odot A^{2})A) (79)
-12d(A^{2})(A\odot A^{2})-12(A\odot A^{2})d(A^{2}) (80)
-Ad(A^{4})A-Ad(A^{2})A^{3}-A^{3}d(A^{2})A-Ad(A^{3})A^{2}-A^{2}d(A^{3})A-A^{2}d(A^{2})A^{2} (81)
-10Ad(A^{2})A+2Ad(A^{2})^{2}A (82)
+d(A^{2})Ad(A^{2})A+Ad(A^{2})Ad(A^{2}) (83)
-3A\odot(Ad(A^{2})A) (84)
-4Ad((A\odot A^{2})A)-4Ad(A(A\odot A^{2})) (85)
+3A(A\odot A^{2})A (86)
-4d(A(A\odot A^{2}))A-4d((A\odot A^{2})A)A (87)
+d(Ad(A^{3})A)A+d(Ad(A^{2})A)A^{2}+d(Ad(A^{2})A^{2})A+d(A^{2}d(A^{2})A)A (88)
+Ad(Ad(A^{3})A)+A^{2}d(Ad(A^{2})A)+Ad(A^{2}d(A^{2})A)+Ad(Ad(A^{2})A^{2}) (89)
+Ad(Ad(A^{2})A)A (90)

The bases appear in:

  • {\mathbb{Y}}^{1}

    • \odot_{i=1}^{l}A^{a_{i}}: A^{6},A^{4},A^{2},A\odot A^{4},A\odot A^{2}\odot A^{3},A^{2}\odot A^{2}\odot A^{2},A^{2}\odot A^{2},A\odot A^{2}.

  • {\mathbb{Y}}^{2}

    • A^{s_{1}}d(\odot_{i=1}^{l}A^{a_{i}}): Ad(A^{5}), A^{4}d(A^{2}), A^{3}d(A^{3}), Ad(A^{3}), A^{2}d(A^{4}), A^{2}d(A^{2}), A^{2}d(A^{2})^{2}, A(d(A^{2})\odot d(A^{3})).

    • d(\odot_{i=1}^{l}A^{a_{i}})A^{s_{1}}: d(A^{5})A, d(A^{4})A^{2}, d(A^{3})A^{3}, d(A^{3})A, d(A^{2})A^{4}, d(A^{2})A^{2}, d(A^{2})^{2}A^{2}, (d(A^{2})\odot d(A^{3}))A.

    • d(\odot_{i=1}^{l_{1}}A^{a_{i}})A^{s_{1}}d(\odot_{i=1}^{l_{2}}A^{b_{i}}): d(A^{2})Ad(A^{3}), d(A^{3})Ad(A^{2}), d(A^{2})A^{2}d(A^{2}).

    • A^{s_{1}}(\odot_{i=1}^{l}A^{a_{i}}): A(A\odot A^{3}), A(A\odot A^{2}\odot A^{2}), A(A^{2}\odot A^{2}\odot A), A(A\odot A^{2}), A^{2}(A\odot A^{2}).

    • (\odot_{i=1}^{l}A^{a_{i}})A^{s_{1}}: (A\odot A^{3})A, (A\odot A^{2}\odot A^{2})A, (A^{2}\odot A^{2}\odot A)A, (A\odot A^{2})A, (A\odot A^{2})A^{2}.

    • {\mathbb{Y}}^{2}\odot{\mathbb{Y}}^{2}: A\odot(A(A\odot A^{2})), A\odot((A\odot A^{2})A).

    • d({\mathbb{Y}}^{1}){\mathbb{Y}}^{1}: d(A^{2})(A\odot A^{2}).

    • {\mathbb{Y}}^{1}d({\mathbb{Y}}^{1}): (A\odot A^{2})d(A^{2}).

  • {\mathbb{Y}}^{3}:

    • A^{s}{\mathbb{Y}}^{2}: Ad(A^{4})A, Ad(A^{2})A^{3}, A^{3}d(A^{2})A, Ad(A^{3})A^{2}, A^{2}d(A^{3})A, A^{2}d(A^{2})A^{2}, Ad(A^{2})A, Ad(A^{2})^{2}A, Ad(A^{2})Ad(A^{2}), A(A\odot A^{2})A.

    • {\mathbb{Y}}^{2}A^{s}: d(A^{2})Ad(A^{2})A.

    • {\mathbb{Y}}^{3}\odot{\mathbb{Y}}^{3}: A\odot(Ad(A^{2})A).

    • d({\mathbb{Y}}^{2}){\mathbb{Y}}^{2}: d(Ad(A^{2})A)A, d(A(A\odot A^{2}))A, d((A\odot A^{2})A)A.

    • {\mathbb{Y}}^{2}d({\mathbb{Y}}^{2}): Ad((A\odot A^{2})A), Ad(A(A\odot A^{2})).

  • {\mathbb{Y}}^{4}:

    • d({\mathbb{Y}}^{3}){\mathbb{Y}}^{3}: d(Ad(A^{3})A)A, d(Ad(A^{2})A)A^{2}, d(Ad(A^{2})A^{2})A, d(A^{2}d(A^{2})A)A.

    • {\mathbb{Y}}^{3}d({\mathbb{Y}}^{3}): Ad(Ad(A^{3})A), A^{2}d(Ad(A^{2})A), Ad(A^{2}d(A^{2})A), Ad(Ad(A^{2})A^{2}).

  • {\mathbb{Y}}^{5}:

    • A^{s_{1}}d({\mathbb{Y}}^{3})A^{s_{2}}: Ad(Ad(A^{2})A)A.

A 5-layer PST can express it. ∎

Counting cycles is closely related to counting paths. An (L+1)-cycle containing edge (i,j) can be decomposed into an L-path from i to j plus the edge (i,j). Therefore, the vector of cycle counts rooted at each node is C_{L+1}=\text{diagonal}(AP_{L}).

Theorem A.13.

The diagonal elements of the attention matrix of a 2-layer PST can express C_{3}, a 3-layer PST can express C_{4}, a 4-layer PST can express C_{5} and C_{6}, and a 6-layer PST can express C_{7}.

Proof.

It is a direct corollary of Theorem A.12, as C_{L+1}=\text{diagonal}(AP_{L}) and \forall k, P_{L}\in{\mathbb{Y}}^{k}\Rightarrow AP_{L}\in{\mathbb{Y}}^{k+1}. ∎
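As a small numerical sanity check of this decomposition for L = 2 (a numpy/networkx sketch of our own; note that diagonal(A P_2) = diagonal(A^3) visits every triangle through a node once per orientation, so it equals twice networkx's per-node triangle count, the constant factor depending on the cycle-counting convention):

```python
import numpy as np
import networkx as nx

G = nx.gnp_random_graph(10, 0.4, seed=1)
A = nx.to_numpy_array(G)

P2 = A @ A                                   # 2-path matrix (Equation 55)
C3_rooted = np.diag(A @ P2)                  # = diag(A^3), closed 3-walks per node
tri = np.array([t for _, t in sorted(nx.triangles(G).items())])
assert np.allclose(C3_rooted, 2 * tri)       # each triangle counted in both directions
```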

Appendix B Expressivity Comparison with Other Models

Algorithm A is considered more expressive than algorithm B if it can differentiate all pairs of graphs that algorithm B can distinguish. If, in addition, there is a pair of graphs that algorithm A can distinguish while B cannot, we say that A is strictly more expressive than B. We will first demonstrate the greater expressivity of our model by using PST to simulate other models. Subsequently, we will establish the strictness by providing concrete examples.

Our transformer incorporates inner products of coordinates, which naturally allows us to express shortest path distances and various node-to-node distance metrics. These concepts are discussed in more detail in Section 5.1. This naturally leads to the following theorem, which compares our PST with GIN (Xu et al., 2019a).

Theorem B.1.

A k-layer Point Set Transformer is strictly more expressive than a k-layer GIN.

Proof.

We first prove that one PST layer can simulate a GIN layer.

Given node features s_{i} and v_{i}, without loss of generality we can assume that one channel of v_{i} contains U\text{diag}(\Lambda^{1/2}). The sv-mixer can simulate an MLP applied to s_{i}, leading to s_{i}^{\prime}. A GIN layer will then update node representations as follows,

s_{i}\leftarrow s_{i}^{\prime}+\sum_{j\in N(i)}s_{j}^{\prime} (92)

By inner products of coordinates, the attention matrix can express the adjacency matrix. Set W_{q}^{s},W_{k}^{s}=0, and let W_{q}^{v},W_{k}^{v} be diagonal matrices whose only non-zero diagonal elements are at the rows corresponding to the channel of U\text{diag}(\Lambda^{1/2}). Then

K_{ij}=(W_{q}^{s}s_{i}\odot W_{k}^{s}s_{j})\|\text{diagonal}(W_{q}^{v}v_{i}^{T}v_{j}W_{k}^{v})\to\langle\text{diag}(\Lambda^{1/2})U_{i},\text{diag}(\Lambda^{1/2})U_{j}\rangle=A_{ij} (93)

Let the MLP express an identity function:

\text{Atten}_{ij}=\text{MLP}(K_{ij})\to A_{ij} (94)

The attention layer will produce

s_{i}\leftarrow\sum_{j}A_{ij}s_{j}^{\prime}=\sum_{j\in N(i)}s_{j}^{\prime} (95)

With the residual connection, the layer can express GIN:

s_{i}\leftarrow s_{i}^{\prime}+s_{i}=s_{i}^{\prime}+\sum_{j\in N(i)}s_{j}^{\prime} (96)

Moreover, as shown in Theorem 5.3, MPNN cannot compute shortest path distance, while PST can. So PST is strictly more expressive. ∎

Moreover, our transformer is strictly more expressive than some representative graph transformers, including Graphormer (Ying et al., 2021) and GPS with RWSE as structural encoding (Rampásek et al., 2022).

Theorem B.2.

A k-layer Point Set Transformer is strictly more expressive than a k-layer Graphormer and a k-layer GPS.

Proof.

We first prove that a k-layer Point Set Transformer is more expressive than a k-layer Graphormer and a k-layer GPS.

At initialization, besides the original node features, Graphormer further adds node degree features and GPS further utilizes RWSE. Our PST can add these features with its first sv-mixer:

s_{i}^{\prime}\leftarrow\text{MLP}_{1}(s_{i}\|\text{diagonal}(W_{1}v_{i}^{T}v_{i}W_{2}^{T})) (97)

Here, \text{diagonal}(W_{1}v_{i}^{T}v_{i}W_{2}^{T}) adds coordinate inner products, which can express RWSE (the diagonal elements of the random walk matrix) and degree (see Appendix D), to the node features.

Then we prove that one PST layer can express one GPS layer and one Graphormer layer. PST's attention matrix is as follows,

\text{Atten}_{ij}=\text{MLP}(K_{ij}),\quad K_{ij}=(W_{q}^{s}s_{i}\odot W_{k}^{s}s_{j})\|\text{diagonal}(W_{q}^{v}v_{i}^{T}v_{j}W_{k}^{v})\to\langle\text{diag}(\Lambda^{1/2})U_{i},\text{diag}(\Lambda^{1/2})U_{j}\rangle (98)

The Hadamard product (W_{q}^{s}s_{i}\odot W_{k}^{s}s_{j}) followed by the MLP can express the inner product of node representations used in Graphormer and GPS. The inner product of coordinates can express the adjacency matrix used in GPS and Graphormer and the shortest path distance used in Graphormer. Therefore, PST's attention matrix can express the attention matrices of GPS and Graphormer.

To prove strictness, Figure 2(c) in (Zhang et al., 2023b) provides an example. As PST can capture resistance distance and simulate 1-WL, it can differentiate the two graphs according to Theorem 4.2 in (Zhang et al., 2023b). However, Graphormer cannot distinguish the two graphs, as proved in (Zhang et al., 2023b).

For GPS, the two graphs in Figure 2(c) have the same RWSE: the RWSE is

\text{diagonal}(\hat{U}\text{diag}(\hat{\Lambda}^{k})\hat{U}^{T}),k=1,2,3,..., (99)

where \hat{U}\text{diag}(\hat{\Lambda})\hat{U}^{T} is the eigendecomposition of the normalized adjacency matrix D^{-1/2}AD^{-1/2}. By computation, we find that the two graphs share the same \hat{\Lambda}. Moreover, \text{diagonal}(\hat{U}\text{diag}(\hat{\Lambda}^{k})\hat{U}^{T}) are equal in the two graphs for k=0,1,2,...,9, where 9 is the number of nodes in the graphs. \hat{\Lambda}^{k} with larger k are only linear combinations of \hat{\Lambda}^{k} for k=0,1,...,9, and thus \text{diagonal}(\hat{U}\text{diag}(\hat{\Lambda}^{k})\hat{U}^{T}) with larger k are also linear combinations of those for k=0,1,...,9. So the RWSE of the two graphs are equal and equivalent to simply assigning a feature h_{1} to the center node and a feature h_{2} to all other nodes in both graphs. Then GPS simply runs a model that is a submodule of Graphormer on the graph and thus cannot differentiate the two graphs either. ∎

Even against a highly expressive method such as 2-FWL, our models can surpass it in expressivity with a limited number of layers:

Theorem B.3.

For all K>0, there exists a pair of graphs that a K-iteration 2-FWL method fails to distinguish, while a 1-layer Point Set Transformer can.

Proof.

It is a direct corollary of Theorem 5.2. ∎

Appendix C Some Graph Transformers Fail to Compute Shortest Path Distance

First, we demonstrate that computing inner products of node representations alone cannot accurately determine the shortest path distance when the node representations are permutation-equivariant. Consider Figure 2 as an illustration. When node representations are permutation-equivariant, nodes v_{2} and v_{3} share identical representations. Consequently, the pairs (v_{1},v_{2}) and (v_{1},v_{3}) yield the same inner products of node representations, despite having different shortest path distances. It follows that the attention matrices of some Graph Transformers are incapable of computing the shortest path distance.

Theorem C.1.

GPS with RWSE (Rampásek et al., 2022) and Graphormer without shortest path distance encoding cannot compute the shortest path distance with the elements of their attention matrices.

Proof.

Their attention matrix elements are functions of the inner products of node representations and the adjacency matrix:

\text{Atten}_{ij}=\langle s_{i},s_{j}\rangle\|A_{ij}. (100)

This element is equal for the node pairs (v_{1},v_{2}) and (v_{1},v_{3}) in Figure 2, yet the two pairs have different shortest path distances. ∎

Furthermore, while Graph Transformers gather information from the entire graph, they may not have the capacity to emulate multiple MPNNs with just a single transformer layer. To address this, we introduce the concept of a vanilla Graph Transformer, which applies a standard Transformer to nodes using the adjacency matrix for relative positional encoding. This leads us to the following theorem.

Theorem C.2.

For all k\in{\mathbb{N}}, there exists a pair of graphs that a (k+1)-layer MPNN can differentiate while a k-layer MPNN and a k-layer vanilla Graph Transformer cannot.

Proof.

Let H_{l} denote a circle of l nodes. Let G_{l} denote a graph with two components, one being H_{\lfloor l/2\rfloor} and the other H_{\lceil l/2\rceil}. Let H_{l}^{\prime} denote H_{l} with a unique feature 1 added to one node (as all nodes are symmetric, the selection of the node does not matter), and let G_{l}^{\prime} denote G_{l} with a unique feature 1 added to one node. All other nodes have feature 0. Now we prove that

Lemma C.3.

For all K\in{\mathbb{N}}, a (K+1)-layer MPNN can distinguish H^{\prime}_{4(K+1)} and G^{\prime}_{4(K+1)}, while a K-layer MPNN and a K-layer vanilla Graph Transformer cannot.

Given H^{\prime}_{4(K+1)} and G^{\prime}_{4(K+1)}, we assign each node a color according to its distance to the node with the extra label 1: c_{0} (the labeled node itself), c_{1} (the two nodes connected to the labeled node), c_{2} (the two nodes whose shortest path distance to the labeled node is 2), …, c_{K} (the two nodes whose shortest path distance to the labeled node is K), and c_{>K} (nodes whose shortest path distance to the labeled node is larger than K). Now, by simulating the process of MPNN, we prove that at the k-th layer, k\leq K, \forall i\leq k, c_{i} nodes have the same representation (denoted h^{k}_{i}), respectively, and c_{k+1},c_{k+2},...,c_{K},c_{>K} nodes all have the same representation (denoted h^{k}_{k+1}).

Initially, all c_{0} nodes have representation h_{0}^{0} and all other nodes have representation h_{1}^{0} in both graphs.

Assume at the k-th layer, \forall i\leq k, c_{i} nodes have the same representation h^{k}_{i}, respectively, and c_{k+1},c_{k+2},...,c_{K},c_{>K} nodes all have the same representation h^{k}_{k+1}. At the (k+1)-th layer, the c_{0} node has two neighbors with representation h_{1}^{k}. Each c_{i} node with 0<i\leq k has two neighbors with representations h^{k}_{i-1} and h^{k}_{i+1}, respectively. c_{k+1} nodes have two neighbors with representations h^{k}_{k} and h^{k}_{k+1}. All other nodes have two neighbors with representation h^{k}_{k+1}. So c_{i},i\leq k+1 nodes have the same representation (denoted h^{k+1}_{i}), respectively, and c_{k+2},...,c_{K},c_{>K} nodes all have the same representation (denoted h^{k+1}_{k+2}).

The same induction also holds for the K-layer vanilla Graph Transformer.

However, in the (K+1)-th message passing layer, only one node in G_{4(K+1)} is at shortest path distance K+1 from the labeled node, and it has two neighbors with representation h^{K}_{K}, while no such node exists in H_{4(K+1)}. So a (K+1)-layer MPNN can distinguish them. ∎

The issue with a vanilla Graph Transformer is that, although it collects information from all nodes in the graph, it can only determine the presence of features among 1-hop neighbors. It lacks the ability to recognize features of higher-order neighbors, such as those in 2-hop or 3-hop neighborhoods. A straightforward solution to this problem is to manually include the shortest path distance as a feature. However, our analysis highlights that naively aggregating information from the entire graph is insufficient for capturing long-range interactions.

Figure 2: The failure of using inner products of permutation-equivariant node representations to predict shortest path distance. v_{2} and v_{3} have equal node representations due to symmetry. Therefore, (v_{1},v_{2}) and (v_{1},v_{3}) have the same inner products of node representations but different shortest path distances.
Table 5: Connection between existing structural embeddings and our parameterized coordinates. The eigendecompositions are \hat{A}=\hat{U}\hat{\Lambda}\hat{U}^{T}, D-A=\tilde{U}\tilde{\Lambda}\tilde{U}^{T}, and A=U\Lambda U^{T}. d_{i} denotes the degree of node i.
Method | Description | Connection
Random walk matrix (Li et al., 2020; Dwivedi et al., 2023; Rampásek et al., 2022) | The k-step random walk matrix is (D^{-1}A)^{k}, whose element (i,j) is the probability that a k-step random walk starting from node i ends at node j. | (D^{-1}A)^{k}_{ij}=(D^{-1/2}\hat{A}^{k}D^{1/2})_{ij}=\sqrt{\frac{d_{j}}{d_{i}}}\langle\hat{U}_{i},\text{diag}(\hat{\Lambda}^{k})\hat{U}_{j}\rangle
Heat kernel matrix (Mialon et al., 2021) | The heat kernel is a solution of the heat equation. Its element (i,j) represents how much heat diffuses from node i to node j. | (I+\tilde{U}(\text{diag}(\exp(-t\tilde{\Lambda}))-I)\tilde{U}^{T})_{ij}=\delta_{ij}+\langle\tilde{U}_{i},(\text{diag}(\exp(-t\tilde{\Lambda}))-I)\tilde{U}_{j}\rangle
Resistance distance (Zhang & Li, 2021; Zhang et al., 2023b) | Its element (i,j) is the resistance between nodes i,j when viewing the graph as an electrical network. It is computed from the pseudo-inverse of the Laplacian matrix L. | (\tilde{U}\text{diag}(\tilde{\Lambda}^{-1})\tilde{U}^{T})_{ij}=\langle\tilde{U}_{i},\text{diag}(\tilde{\Lambda}^{-1})\tilde{U}_{j}\rangle
Equivariant and stable Laplacian PE (Wang et al., 2022) | The encoding of node pair (i,j) is \|1_{K}\odot(U_{i}-U_{j})\|, where 1_{K} is a vector in {\mathbb{R}}^{r} whose elements corresponding to the K largest eigenvalues of L are 1. | \|1_{K}\odot(U_{i}-U_{j})\|^{2}=\langle U_{i},\text{diag}(1_{K})U_{i}\rangle+\langle U_{j},\text{diag}(1_{K})U_{j}\rangle-2\langle U_{i},\text{diag}(1_{K})U_{j}\rangle
Degree and number of triangles (Bouritsas et al., 2023) | d_{i} is the number of edges incident to node i, and t_{i} is the number of triangles rooted at node i. | d_{i}=\langle U_{i},\text{diag}(\Lambda^{2})U_{i}\rangle,\quad t_{i}=\langle U_{i},\text{diag}(\Lambda^{3})U_{i}\rangle

Appendix D Connection with Structural Embeddings

We show the equivalence between the structural embeddings and the inner products of our PSRD coordinates in Table 5. The inner products of PSRD coordinates can unify a wide range of positional encodings, including random walk (Li et al., 2020; Dwivedi et al., 2023; Rampásek et al., 2022), heat kernel (Mialon et al., 2021), and resistance distance (Zhang & Li, 2021; Zhang et al., 2023b).
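As a concrete example, the first row of Table 5 (the random walk matrix) can be checked numerically with the sketch below (numpy, our own illustration; \hat{A}=D^{-1/2}AD^{-1/2} with eigendecomposition \hat{U}\text{diag}(\hat{\Lambda})\hat{U}^{T}, on a graph with no isolated nodes so that D is invertible):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
d = A.sum(1)
D_inv_sqrt = np.diag(d ** -0.5)
A_hat = D_inv_sqrt @ A @ D_inv_sqrt
lam_hat, U_hat = np.linalg.eigh(A_hat)

k = 3
RW_k = np.linalg.matrix_power(np.diag(1 / d) @ A, k)      # (D^{-1} A)^k
inner = (U_hat * lam_hat ** k) @ U_hat.T                   # <U_i, diag(lam^k) U_j>
scale = np.sqrt(d[None, :] / d[:, None])                   # sqrt(d_j / d_i)
assert np.allclose(RW_k, scale * inner)
```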

Appendix E Datasets

Table 6: Statistics of the datasets. #Nodes and #Edges denote the average number of nodes and edges per graph. In the split column, 'fixed' means the dataset uses the split provided in the original release; otherwise, the split is given as training ratio/validation ratio/test ratio.
#Graphs #Nodes #Edges Task Metric Split
Synthetic 5,000 18.8 31.3 Node Regression MAE 0.3/0.2/0.5.
QM9 130,831 18.0 18.7 Regression MAE 0.8/0.1/0.1
ZINC 12,000 23.2 24.9 Regression MAE fixed
ZINC-full 249,456 23.2 24.9 Regression MAE fixed
ogbg-molhiv 41,127 25.5 27.5 Binary classification AUC fixed
MUTAG 188 17.9 39.6 Classification Accuracy 10-fold cross validation
PTC-MR 344 14.3 14.7 Classification Accuracy 10-fold cross validation
PROTEINS 1113 39.1 145.6 Classification Accuracy 10-fold cross validation
IMDB-BINARY 1000 19.8 193.1 Classification Accuracy 10-fold cross validation
PascalVOC-SP 11,355 479.4 2710.5 Node Classification Macro F1 fixed
Peptides-func 15,535 150.9 307.3 Classification AP fixed
Peptides-struct 15,535 150.9 307.3 Regression MAE fixed

We summarize the statistics of our datasets in Table 6. Synthetic is the dataset used in the substructure counting tasks provided by Huang et al. (2023b); it consists of random graphs with substructure counts as node labels. QM9 (Wu et al., 2017), ZINC (Gómez-Bombarelli et al., 2016), and ogbg-molhiv are three molecular datasets. QM9 uses 13 quantum chemistry properties as graph labels and provides both the graph and the coordinates of each atom. ZINC provides the graph structure only and aims to predict constrained solubility. Ogbg-molhiv is an Open Graph Benchmark dataset whose task is to use the graph structure to predict whether a molecule can inhibit HIV virus replication. We further use MUTAG, PTC-MR, PROTEINS, and IMDB-BINARY from the TU database (Ivanov et al., 2019). MUTAG comprises 188 mutagenic aromatic and heteroaromatic nitro compounds. PROTEINS represents secondary structure elements as nodes, with edges between neighbors in the amino-acid sequence or in 3D space. PTC-MR is a collection of 344 chemical compounds, with 19 node labels, whose task is to classify carcinogenicity for rats. IMDB-BINARY features ego-networks of actors/actresses in movie collaborations, and the task is to classify the movie genre of each graph. We also use three datasets from the Long Range Graph Benchmark (Dwivedi et al., 2022b), which consist of larger graphs. PascalVOC-SP comes from the computer vision domain: each node represents a superpixel, and the task is to predict the semantic segmentation label of each node. Peptides-func and Peptides-struct are peptide molecular graph datasets; the task of Peptides-func is to predict peptide functions, and that of Peptides-struct is to predict 3D properties of the peptides.

Appendix F Experimental Details

Our code is available in the supplementary material. It is primarily based on PyTorch (Paszke et al., 2019) and PyG (Fey & Lenssen, 2019). All experiments are conducted on NVIDIA RTX 3090 GPUs on a Linux server. We use L1 loss for regression tasks and cross-entropy loss for classification tasks. We select hyperparameters by running Optuna (Akiba et al., 2019) to optimize the validation score. We run each experiment with 8 different seeds and report the results averaged over seeds at the epoch achieving the best validation metric. For optimization, we use the AdamW optimizer with a cosine annealing scheduler. Hyperparameters for each dataset are shown in Table 7. All PST and PSDS models (except those in the ablation study) decompose the Laplacian matrix to produce coordinates.
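The optimization setup roughly corresponds to the following PyTorch sketch (our own minimal illustration with linear warmup followed by cosine annealing; the training script in the supplementary material is authoritative, and `model` is assumed to be any nn.Module):

```python
import math
import torch

def build_optimizer(model, lr, weight_decay, warmup_epochs, cosine_epochs):
    opt = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=weight_decay)

    def schedule(epoch):
        if epoch < warmup_epochs:                        # linear warmup
            return (epoch + 1) / warmup_epochs
        t = (epoch - warmup_epochs) % cosine_epochs      # cosine annealing
        return 0.5 * (1 + math.cos(math.pi * t / cosine_epochs))

    sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda=schedule)
    return opt, sched

# e.g. ZINC-like settings from Table 7: lr=1e-3, wd=0.1, 17 warmup / 17 cosine epochs:
# opt, sched = build_optimizer(model, 1e-3, 0.1, 17, 17)
# each epoch: train(...); sched.step()
```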

Table 7: Hyperparameters of our model for each dataset. #warm is the number of warmup epochs, #cos is the number of cosine annealing epochs, wd is the weight decay, gn is the magnitude of the Gaussian noise injected into the point coordinates, hiddim is the hidden dimension, bs is the batch size, lr is the learning rate, #epoch is the number of training epochs, and #param is the number of parameters.
dataset #warm #cos wd gn #layer hiddim bs lr #epoch #param
Synthetic 10 15 6e-4 1e-6 9 96 16 0.0006 300 961k
qm9 1 40 1e-1 1e-5 8 128 256 0.001 150 1587k
ZINC 17 17 1e-1 1e-4 6 80 128 0.001 800 472k
ZINC-full 40 40 1e-1 1e-6 8 80 512 0.003 400 582k
ogbg-molhiv 5 5 1e-1 1e-6 6 96 24 0.001 300 751k
MUTAG 20 1 1e-7 1e-4 2 48 64 2e-3 70 82k
PTC-MR 25 1 1e-1 1e-3 4 16 64 3e-3 70 15k
PROTEINS 25 1 1e-7 3e-3 2 48 8 1.5e-3 80 82k
IMDB-BINARY 35 1 1e-7 1e-5 3 48 64 3e-3 80 100k
PascalVOC-SP 5 5 1e-1 1e-5 4 96 6 0.001 40 527k
Peptide-func 40 20 1e-1 1e-6 6 128 2 0.0003 80 1337k
Peptide-struct 40 20 1e-1 1e-6 6 128 2 0.0003 40 1337k

ZINC, ZINC-full, PascalVOC-SP, Peptides-func, and Peptides-struct have a 500k parameter budget; the other datasets have no parameter limit. Graphormer (Ying et al., 2021) uses 47000k parameters on ogbg-molhiv, and 1-2-3-GNN uses 929k parameters on QM9. Our PST follows the budgets on ZINC, ZINC-full, and PascalVOC-SP. However, on the Peptides-func and Peptides-struct datasets, we find that the hidden dimension is crucial for good performance, so we use a hidden dimension and number of transformer layers comparable to the baselines. This leads to about 1M parameters because our PST employs two sets of parameters (one for scalars and one for vectors), which results in roughly twice the parameters for the same hidden dimension and number of transformer layers. We therefore also run the baselines with larger hidden dimensions on these datasets. The results are shown in Table 8. When PST and the baselines all use about 1M parameters, our PST outperforms the baselines under the same budget, showing that our method is effective on the two datasets.

Table 8: Results on the Peptides-func and Peptides-struct datasets with a 1M parameter budget.
peptide-func peptide-struct
Graph-MLPMixer 0.6970 ± 0.0080 0.2475 ± 0.0015
GraphGPS 0.6535 ± 0.0041 0.2500 ± 0.0005
PST 0.6984 ± 0.0051 0.2470 ± 0.0015

Other datasets we explored do not have explicit parameter constraints, and it is worth noting that our PST uses fewer parameters than representative baselines in these cases.

Hyperparameter Configuration:

Our experiments involved tuning seven hyperparameters: depth (4, 8), hidden dimension (64, 256), learning rate (0.0001, 0.003), number of warmup epochs (0, 40), number of cosine annealing epochs (1, 20), magnitude of Gaussian noise added to the input (1e-6, 1e-4), and weight decay (1e-6, 1e-1). We observed that [1] also used seven hyperparameters in their setup. The batch size was determined by GPU memory constraints and was not a tuned hyperparameter.
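Interpreting each pair above as a (low, high) search range — an assumption on our part — the corresponding Optuna search space could be sketched as follows; the objective below is a placeholder for training PST and returning the validation score.

```python
# A hedged sketch of an Optuna search space matching the hyperparameters listed above.
import optuna

def objective(trial: optuna.Trial) -> float:
    depth = trial.suggest_int("depth", 4, 8)
    hiddim = trial.suggest_int("hiddim", 64, 256)
    lr = trial.suggest_float("lr", 1e-4, 3e-3, log=True)
    n_warmup = trial.suggest_int("warmup_epochs", 0, 40)
    n_cos = trial.suggest_int("cosine_epochs", 1, 20)
    noise = trial.suggest_float("gaussian_noise", 1e-6, 1e-4, log=True)
    wd = trial.suggest_float("weight_decay", 1e-6, 1e-1, log=True)
    # ... build the model with these hyperparameters, train, and return the validation metric ...
    return 0.0

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=100)
```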

Appendix G Architecture

Figure 3: The pipeline of parameterized SRD. We first decompose the Laplacian matrix (or another matrix) to obtain the non-zero eigenvalues and the corresponding eigenvectors. The eigenvalues are then transformed with a DeepSet (Segol & Lipman, 2020). Multiplying the transformed eigenvalues with the eigenvectors yields the coordinates.
Figure 4: Architecture of Point Set Transformer (PST). (a) PST contains several layers, each composed of a scalar-vector (sv)-mixer and an attention layer. (b) The architecture of the sv-mixer. (c) The architecture of the attention layer. $s_{i}$ and $s_{i}^{\prime}$ denote the scalar representations of node $i$, and $\vec{v}_{i}$ and $\vec{v}_{i}^{\prime}$ denote its vector representations. $x_{i}$ is the initial feature of node $i$. $Q_{i}$ denotes the point coordinates of node $i$ produced by parameterized SRD in Section 3.2.

The architecture of parameterized SRD is shown in Figure 3. As illustrated in Section 3.2, it first performs eigendecomposition to obtain the non-zero eigenvalues and the corresponding eigenvectors, then uses a DeepSet (Segol & Lipman, 2020) to process the eigenvalues, producing coordinates with multiple channels. The architecture of PST is shown in Figure 4. As illustrated in Section 4, it is composed of scalar-vector mixers and attention layers.
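The following is a minimal PyTorch sketch of this pipeline under simplifying assumptions: a dense adjacency matrix, a plain per-eigenvalue MLP standing in for the DeepSet, and per-channel scaling of the eigenvectors by the transformed eigenvalues in place of the exact PSRD parameterization of Section 3.2.

```python
# A minimal sketch of producing multi-channel coordinates from a graph Laplacian.
import torch
from torch import nn

def psrd_coordinates(adj: torch.Tensor, eig_net: nn.Module, eps: float = 1e-6):
    """adj: (n, n) dense adjacency matrix -> coordinates of shape (n, r, channels)."""
    deg = adj.sum(dim=-1)
    lap = torch.diag(deg) - adj                          # graph Laplacian
    eigval, eigvec = torch.linalg.eigh(lap)              # eigendecomposition
    mask = eigval.abs() > eps                            # keep non-zero eigenvalues
    eigval, eigvec = eigval[mask], eigvec[:, mask]       # (r,), (n, r)
    scaled = eig_net(eigval.unsqueeze(-1))               # (r, channels): transformed eigenvalues
    coords = eigvec.unsqueeze(-1) * scaled.unsqueeze(0)  # scale eigenvectors per channel
    return coords                                        # (n, r, channels)

eig_net = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 4))
adj = torch.tensor([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])
print(psrd_coordinates(adj, eig_net).shape)              # torch.Size([3, 2, 4])
```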

Appendix H Ablation

To assess the design choices made in our Point Set Transformer (PST), we conducted ablation experiments. First, we replace the PSRD coordinates (see Section 3.2) with SRD coordinates, resulting in a reduced version referred to as PST-gc. Additionally, we introduce a variant called PST-onelayer, which differs from PST in that it computes the attention matrix only once and does not mix information between scalar and vector features. Furthermore, PST decomposes the Laplacian matrix by default to produce coordinates; PST-adj instead decomposes the adjacency matrix, and PST-normadj the normalized adjacency matrix. Similar to PST, PSDS takes node coordinates as input, but it uses a DeepSet (Segol & Lipman, 2020) rather than a Transformer as the set encoder. For comparison, we also include our strongest baseline on the QM9 dataset, DF (Zhou et al., 2023).

The results of the ablation study conducted on the QM9 dataset are summarized in Table 9. Notably, PST-gc exhibits only a slight increase in test loss compared to PST, and even outperforms PST on 4 out of 12 target metrics, highlighting the effectiveness of the Graph as Point Set approach even with vanilla symmetric rank decomposition. In contrast, PST-onelayer performs significantly worse, underscoring the advantages of PST over previous methods that merely augment adjacency matrices with spectral features. PST-adj and PST-normadj achieve performance similar to PST, illustrating that the choice of matrix to decompose has little effect. PSDS performs worse than PST, but it still outperforms our strongest baseline DF, showing the potential of combining set encoders other than the Transformer with our conversion from graph to set. On the Long Range Graph Benchmark (Table 10), PST maintains a significant performance edge over PST-onelayer. Moreover, the gap between PST and PST-gc widens, further confirming the effectiveness of the parameterized SRD coordinates in modeling long-range interactions.

Table 9: Ablation study on the QM9 dataset.
$\mu$ $\alpha$ $\varepsilon_{\text{homo}}$ $\varepsilon_{\text{lumo}}$ $\Delta\varepsilon$ $R^{2}$ ZPVE $U_{0}$ $U$ $H$ $G$ $C_{v}$
Unit $10^{-1}$D $10^{-1}a_{0}^{3}$ $10^{-2}$meV $10^{-2}$meV $10^{-2}$meV $a_{0}^{2}$ $10^{-2}$meV meV meV meV meV $10^{-2}$cal/mol/K
PST 3.19±0.04 1.89±0.04 5.98±0.09 5.84±0.08 8.46±0.07 13.08±0.16 0.39±0.01 3.46±0.17 3.55±0.10 3.49±0.20 3.55±0.17 7.77±0.15
PST-onelayer 3.72±0.02 2.25±0.05 6.62±0.11 6.67±0.07 9.37±0.15 15.95±0.29 0.55±0.01 3.46±0.06 3.50±0.14 3.50±0.03 3.45±0.07 9.62±0.24
PST-gc 3.34±0.02 1.93±0.03 6.08±0.11 6.10±0.10 8.65±0.10 13.71±0.12 0.40±0.01 3.38±0.13 3.43±0.12 3.33±0.08 3.29±0.11 8.04±0.15
PST-adj 3.16±0.02 1.86±0.01 6.31±0.06 6.10±0.05 8.84±0.01 13.60±0.09 0.39±0.01 3.59±0.12 3.73±0.08 3.65±0.06 3.60±0.016 7.62±0.21
PST-normadj 3.22±0.04 1.85±0.02 5.97±0.23 6.15±0.07 8.79±0.04 13.42±0.15 0.41±0.01 3.36±0.25 3.41±0.24 3.46±0.18 3.38±0.23 8.10±0.12
PSDS 3.53±0.05 2.05±0.02 6.56±0.03 6.31±0.05 9.13±0.04 14.35±0.02 0.41±0.02 3.53±0.11 3.49±0.05 3.47±0.04 3.56±0.14 8.35±0.09
DF 3.46 2.22 6.15 6.12 8.82 15.04 0.46 4.24 4.16 3.95 4.24 9.01
Table 10: Ablation study on the Long Range Graph Benchmark datasets.
Model PascalVOC-SP Peptides-Func Peptides-Struct
PST 0.4010±0.0072 0.6984±0.0051 0.2470±0.0015
PST-onelayer 0.3229±0.0051 0.6517±0.0076 0.2634±0.0019
PST-gc 0.4007±0.0039 0.6439±0.0342 0.2564±0.0120

Appendix I Scalability

We present the training time per epoch and GPU memory consumption in Table 11 and Table 12. Due to its architecture, PST has a higher time complexity than existing Graph Transformers and does not scale well to large graphs such as those in the PascalVOC-SP dataset. However, on the ZINC dataset, PST is the second fastest model, and its memory consumption is comparable to that of existing models with strong expressivity, such as SUN and SSWL, and notably lower than that of PPGN.
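The numbers in these tables can be measured with standard PyTorch utilities; the sketch below records the wall-clock time of one training epoch and the peak GPU memory, with the model, loader, and loss as placeholders.

```python
# A small sketch of measuring per-epoch training time and peak GPU memory.
import time
import torch

def profile_one_epoch(model, loader, optimizer, criterion, device="cuda"):
    torch.cuda.reset_peak_memory_stats(device)
    model.train()
    start = time.time()
    for batch in loader:
        batch = batch.to(device)
        optimizer.zero_grad()
        loss = criterion(model(batch), batch.y)
        loss.backward()
        optimizer.step()
    torch.cuda.synchronize(device)
    elapsed = time.time() - start
    peak_gb = torch.cuda.max_memory_allocated(device) / 1024 ** 3
    return elapsed, peak_gb
```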

Table 11: Training time per epoch and GPU memory consumption on the ZINC dataset with batch size 128.
PST SUN SSWL PPGN Graphormer GPS SAN-GPS
Time/s 15.20 20.93 45.30 20.21 123.79 11.70 79.08
Memory/GB 4.08 3.72 3.89 20.37 0.07 0.25 2.00
Table 12: Training time per epoch and GPU memory consumption on the PascalVOC-SP dataset with batch size 6.
PST SUN SSWL PPGN Graphormer GPS SAN-GPS
Time/s 15.20 20.93 45.30 20.21 123.79 11.70 79.08
Memory/GB 4.08 3.72 3.89 20.37 0.07 0.25 2.00

Appendix J Results on TU datasets

Following the setting of Feng et al. (2022), we test our PST on four TU datasets (Ivanov et al., 2019). The results are shown in Table 13. Baselines include the WL subtree kernel (Shervashidze et al., 2011), GIN (Xu et al., 2019a), DGCNN (Zhang et al., 2018), GraphSNN (Wijesinghe & Wang, 2022), GIN-AK+ (Zhao et al., 2022), and three variants of KP-GNN (Feng et al., 2022): KP-GCN, KP-GraphSAGE, and KP-GIN. We use 10-fold cross-validation, where 9 folds are used for training and 1 fold for testing, and report the average test accuracy. Our PST consistently outperforms all baselines.
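A minimal sketch of this evaluation protocol is given below, assuming scikit-learn for the fold splits; train_and_eval is a hypothetical callback that trains PST on the training folds and returns the test accuracy.

```python
# A hedged sketch of 10-fold cross-validation on the TU datasets.
import numpy as np
from sklearn.model_selection import StratifiedKFold

def cross_validate(labels, train_and_eval, seed=0):
    accs = []
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    for train_idx, test_idx in skf.split(np.zeros(len(labels)), labels):
        accs.append(train_and_eval(train_idx, test_idx))  # test accuracy on the held-out fold
    return float(np.mean(accs)), float(np.std(accs))
```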

Table 13: TU dataset evaluation results.
Method MUTAG PTC-MR PROTEINS IMDB-B
WL 90.4±5.7 59.9±4.3 75.0±3.1 73.8±3.9
GIN 89.4±5.6 64.6±7.0 75.9±2.8 75.1±5.1
DGCNN 85.8±1.7 58.6±2.5 75.5±0.9 70.0±0.9
GraphSNN 91.24±2.5 66.96±3.5 76.51±2.5 76.93±3.3
GIN-AK+ 91.30±7.0 68.20±5.6 77.10±5.7 75.60±3.7
KP-GCN 91.7±6.0 67.1±6.3 75.8±3.5 75.9±3.8
KP-GraphSAGE 91.7±6.5 66.5±4.0 76.5±4.6 76.4±2.7
KP-GIN 92.2±6.5 66.8±6.8 75.8±4.6 76.6±4.2
PST 94.4±3.5 68.8±4.6 80.7±3.5 78.9±3.6

Appendix K Point Set DeepSet

Besides the Transformer, we also propose a DeepSet (Segol & Lipman, 2020)-based set encoder for point sets, Point Set DeepSet (PSDS), to illustrate the versatility of our graph-to-set method. Similar to PST, PSDS operates with points carrying two types of representations: scalars, which remain invariant to orthogonal transformations of the coordinates, and vectors, which change equivariantly with the coordinates. For a point $i$, its scalar representation is denoted by $s_{i}\in\mathbb{R}^{d}$ and its vector representation by $v_{i}\in\mathbb{R}^{r\times d}$, where $d$ is the hidden dimension and $r$ is the rank of the coordinates. $s_{i}$ is initialized with the input node feature $X_{i}$, and $v_{i}$ is initialized with the parameterized coordinates containing graph structural information, as detailed in Section 3.2. Similar to DeepSet, PSDS transforms point representations individually, aggregates them to produce global features, and combines the global features with the individual point representations to update them.

Scalar-Vector Mixer. This component individually transforms point representations. To enable the information exchange between vector and scalar features, we design a mixer architecture as follows.

$s_{i}^{\prime}\leftarrow\text{MLP}_{1}\big(s_{i}\,\|\,\text{diagonal}(W_{1}v_{i}^{T}v_{i}W_{2}^{T})\big),$  (101)
$v_{i}^{\prime}\leftarrow v_{i}\,\text{diag}(\text{MLP}_{2}(s_{i}))W_{3}+v_{i}W_{4}$  (102)

Here, $W_{1},W_{2},W_{3}$, and $W_{4}\in\mathbb{R}^{d\times d}$ are learnable matrices mixing the channels of the vector features. Additionally, $\text{MLP}_{1}:\mathbb{R}^{2d}\to\mathbb{R}^{d}$ and $\text{MLP}_{2}:\mathbb{R}^{d}\to\mathbb{R}^{d}$ are two multi-layer perceptrons transforming the scalar representations. The operation $\text{diagonal}(W_{1}v_{i}^{T}v_{i}W_{2}^{T})$ takes the diagonal elements of a matrix, which translates vectors to scalars, while $v_{i}\,\text{diag}(\text{MLP}_{2}(s_{i}))$ transforms scalar features into vectors. As $v_{i}^{T}R^{T}Rv_{i}=v_{i}^{T}v_{i}$ for all $R\in O(r)$, the scalar update is invariant to orthogonal transformations of the coordinates. Similarly, the vector update is equivariant to $O(r)$.
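A minimal PyTorch sketch of this mixer is given below; parameter initialization and normalization are simplified relative to any released implementation, and shapes follow the text (scalars of shape (n, d), vectors of shape (n, r, d)).

```python
# A minimal sketch of the PSDS scalar-vector mixer (Eqs. 101-102).
import torch
from torch import nn

class SVMixer(nn.Module):
    def __init__(self, d: int):
        super().__init__()
        self.mlp1 = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, d))
        self.mlp2 = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
        self.W1 = nn.Parameter(torch.randn(d, d) / d ** 0.5)
        self.W2 = nn.Parameter(torch.randn(d, d) / d ** 0.5)
        self.W3 = nn.Parameter(torch.randn(d, d) / d ** 0.5)
        self.W4 = nn.Parameter(torch.randn(d, d) / d ** 0.5)

    def forward(self, s: torch.Tensor, v: torch.Tensor):
        # s: (n, d) scalar features, v: (n, r, d) vector features
        gram = v.transpose(1, 2) @ v                                      # (n, d, d) = v^T v, O(r)-invariant
        inv = torch.diagonal(self.W1 @ gram @ self.W2.T, dim1=1, dim2=2)  # diagonal(W1 v^T v W2^T)
        s_new = self.mlp1(torch.cat([s, inv], dim=-1))                    # Eq. 101
        gate = torch.diag_embed(self.mlp2(s))                             # (n, d, d) = diag(MLP2(s))
        v_new = v @ gate @ self.W3 + v @ self.W4                          # Eq. 102, O(r)-equivariant
        return s_new, v_new
```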

Aggregator. This component aggregates the individual point representations into global features $s$, $v$, and $v_{sq}$.

$s\leftarrow\sum_{i\in V}\text{MLP}_{3}(s_{i})$  (103)
$v\leftarrow\sum_{i\in V}v_{i}W_{5}$  (104)
$v_{sq}\leftarrow\sum_{i\in V}v_{i}W_{6}W_{7}v_{i}^{T}$  (105)

Here, $W_{5},W_{6}$, and $W_{7}\in\mathbb{R}^{d\times d}$ are linear transformations of the vector features, and $\text{MLP}_{3}:\mathbb{R}^{d}\to\mathbb{R}^{d}$ is an MLP transforming the scalars. The global feature $s\in\mathbb{R}^{d}$ is a scalar, $v\in\mathbb{R}^{r\times d}$ is a vector, and $v_{sq}\in\mathbb{R}^{r\times r}$ is the sum of quadratic forms of the vector features.
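A matching sketch of the aggregator, with the same shape conventions as above (the sums run over all points in the set):

```python
# A minimal sketch of the PSDS aggregator (Eqs. 103-105).
import torch
from torch import nn

class Aggregator(nn.Module):
    def __init__(self, d: int):
        super().__init__()
        self.mlp3 = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
        self.W5 = nn.Parameter(torch.randn(d, d) / d ** 0.5)
        self.W6 = nn.Parameter(torch.randn(d, d) / d ** 0.5)
        self.W7 = nn.Parameter(torch.randn(d, d) / d ** 0.5)

    def forward(self, s: torch.Tensor, v: torch.Tensor):
        # s: (n, d), v: (n, r, d)
        g_s = self.mlp3(s).sum(dim=0)                                  # (d,),   Eq. 103
        g_v = (v @ self.W5).sum(dim=0)                                 # (r, d), Eq. 104
        g_sq = (v @ self.W6 @ self.W7 @ v.transpose(1, 2)).sum(dim=0)  # (r, r), Eq. 105
        return g_s, g_v, g_sq
```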

Point Representation Update. Each point representation is updated by combining global features.

$s_{i}\leftarrow\text{MLP}_{4}(s_{i}+s)$  (106)
$v_{i}\leftarrow v_{sq}v_{i}+vW_{8}$  (107)

$s_{i}$ is combined with the global scalar $s$ and transformed with an MLP $\text{MLP}_{4}:\mathbb{R}^{d}\to\mathbb{R}^{d}$. $v_{i}$ is combined with $v_{sq}$ and with $v$ through a linear layer $W_{8}\in\mathbb{R}^{d\times d}$.
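The corresponding update step, again as a simplified sketch:

```python
# A minimal sketch of the PSDS point-representation update (Eqs. 106-107).
import torch
from torch import nn

class PointUpdate(nn.Module):
    def __init__(self, d: int):
        super().__init__()
        self.mlp4 = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
        self.W8 = nn.Parameter(torch.randn(d, d) / d ** 0.5)

    def forward(self, s, v, g_s, g_v, g_sq):
        # s: (n, d), v: (n, r, d); global features g_s: (d,), g_v: (r, d), g_sq: (r, r)
        s_new = self.mlp4(s + g_s)              # Eq. 106
        v_new = g_sq @ v + g_v @ self.W8        # Eq. 107, g_v W8 broadcast over all points
        return s_new, v_new
```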

Pooling. After several layers, we pool all points' scalar representations into the set representation $s$.

$s\leftarrow\text{Pool}(\{s_{i}\,|\,i\in V\}),$  (108)

where Pool is a pooling function such as sum, mean, or max.
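Putting the sketches above together, one PSDS layer followed by sum pooling can be exercised as follows; the shapes and random inputs are purely illustrative, and SVMixer, Aggregator, and PointUpdate refer to the classes sketched earlier in this appendix.

```python
# Illustrative forward pass through one PSDS layer and sum pooling (Eq. 108).
import torch

n, r, d = 5, 3, 16                       # points, coordinate rank, hidden dimension
s = torch.randn(n, d)                    # scalar features (initialized from node features)
v = torch.randn(n, r, d)                 # vector features (initialized from PSRD coordinates)

mixer, agg, update = SVMixer(d), Aggregator(d), PointUpdate(d)
s, v = mixer(s, v)
s, v = update(s, v, *agg(s, v))
set_repr = s.sum(dim=0)                  # sum pooling over points
print(set_repr.shape)                    # torch.Size([16])
```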