Graph as Point Set
Abstract
A graph is a fundamental data structure for modeling interconnections between entities, whereas a set stores independent elements. To learn graph representations, current Graph Neural Networks (GNNs) primarily use message passing to encode the interconnections. In contrast, this paper introduces a novel graph-to-set conversion method that bijectively transforms interconnected nodes into a set of independent points and then uses a set encoder to learn the graph representation. This conversion method holds dual significance. Firstly, it enables using set encoders to learn from graphs, thereby significantly expanding the design space of GNNs. Secondly, for Transformer, a specific set encoder, we provide a novel and principled approach to inject graph information losslessly, different from all the heuristic structural/positional encoding methods adopted in previous graph transformers. To demonstrate the effectiveness of our approach, we introduce Point Set Transformer (PST), a transformer architecture that accepts a point set converted from a graph as input. Theoretically, PST exhibits superior expressivity for both short-range substructure counting and long-range shortest path distance tasks compared to existing GNNs. Extensive experiments further validate PST's outstanding real-world performance. Besides Transformer, we also devise a DeepSet-based set encoder, which achieves performance comparable to representative GNNs, affirming the versatility of our graph-to-set method.
1 Introduction
Graphs, composed of interconnected nodes, have a wide range of applications and have been extensively studied. In graph machine learning, a central focus is to effectively leverage node connections. Various architectures have arisen for graph tasks, exhibiting significant divergence in how they utilize adjacency information.
Two primary paradigms have evolved for encoding adjacency information. The first paradigm involves message passing between nodes via edges. Notable methods in this category include Message Passing Neural Network (MPNN) (Gilmer et al., 2017), a foundational framework for GNNs such as GCN (Kipf & Welling, 2017), GIN (Xu et al., 2019a), and GraphSAGE (Hamilton et al., 2017). Subgraph-based GNNs (Zhang & Li, 2021; Huang et al., 2023b; Bevilacqua et al., 2022; Qian et al., 2022; Frasca et al., 2022; Zhao et al., 2022; Zhang et al., 2023a) select subgraphs from the whole graph and run MPNN within each subgraph. These models aggregate messages from neighbors to update the central nodes’ representations. Additionally, Graph Transformers (GTs) integrate adjacency information into the attention matrix (Mialon et al., 2021; Kreuzer et al., 2021; Wu et al., 2021; Dwivedi & Bresson, 2020; Ying et al., 2021; Shirzad et al., 2023) (note that some early GTs have options to not use adjacency matrix by using only positional encodings, but the performance is significantly worse (Dwivedi & Bresson, 2020)). Some recent GTs even directly incorporate message-passing layers into their architectures (Rampásek et al., 2022; Kim et al., 2021). In summary, this paradigm relies on adjacency relationships to facilitate information exchange among nodes.
The second paradigm designs permutation-equivariant neural networks that directly take adjacency matrices as input. This category includes high-order Weisfeiler-Leman tests (Maron et al., 2019a), invariant graph networks (Maron et al., 2019b), and relational pooling (Chen et al., 2020). Additionally, various studies have explored manual feature extraction from the adjacency matrix, including random walk structural encoding (Dwivedi et al., 2022a; Li et al., 2020), Laplacian matrix eigenvectors (Wang et al., 2022; Lim et al., 2023; Huang et al., 2023a), and shortest path distances (Li et al., 2020). However, these approaches typically serve as data augmentation steps for other models, rather than constituting an independent paradigm.
Both paradigms rely heavily on adjacency information for graph encoding. In contrast, this paper explores whether we can give up the adjacency matrix in graph models while achieving competitive performance. As shown in Figure 1, our innovative graph-to-set method converts interconnected nodes into independent points, which are subsequently encoded by a set encoder like Transformer. Leveraging our symmetric rank decomposition, we break down the augmented adjacency matrix $D+A$ into $QQ^T$, where $Q$ is a full-column-rank matrix whose rows are node coordinates. This representation enables us to express the presence of edges as inner products of coordinate vectors ($(D+A)_{uv}=Q_u^TQ_v$). Consequently, interlinked nodes can be transformed into independent points plus supplementary coordinates without information loss. Theoretically, two graphs are isomorphic iff the two converted point sets are equal up to an orthogonal transformation (because for any valid $Q$, $QR$ is also valid, where $R$ is any orthogonal matrix). This equivalence empowers us to encode the set with coordinates in an orthogonal-transformation-equivariant manner, akin to E(3)-equivariant models designed for 3D geometric deep learning. Importantly, our approach is versatile, allowing the use of any equivariant set encoder and thereby significantly expanding the design space of GNNs. Furthermore, for Transformer, a specific set encoder, our method offers a novel and principled way to inject graph information losslessly. In Appendix D, we additionally show that it unifies various heuristic structural/positional encodings in previous GTs, including random walk (Li et al., 2020; Dwivedi et al., 2023; Rampásek et al., 2022), heat kernel (Mialon et al., 2021), and resistance distance (Zhang et al., 2023b).

To instantiate our method, we introduce an orthogonal-transformation-equivariant Transformer, namely Point Set Transformer (PST), to encode the point set. PST provably surpasses existing models in long-range and short-range expressivity. Extensive experiments verify these claims across synthetic datasets, graph property prediction datasets, and long-range graph benchmarks. Specifically, PST outperforms all baselines on the QM9 (Wu et al., 2017) dataset. Moreover, our graph-to-set method is not constrained to one specific set encoder. We also propose a DeepSet (Segol & Lipman, 2020)-based model, which achieves performance comparable to GIN (Xu et al., 2019b) on our datasets.
Differences from eigendecomposition. Note that our graph-to-set method is distinct from previous approaches that decompose adjacency matrices for positional encodings (Dwivedi et al., 2023; Wang et al., 2022; Lim et al., 2023; Bo et al., 2023). The key difference is that previous methods primarily rely on eigendecomposition (EVD), whereas our method is based on symmetric rank decomposition (SRD). Their differences are as follows:
• SRD enables a practical conversion of graph problems into set problems. SRD of a matrix is unique up to a single orthogonal transformation, while EVD is unique up to a combination of orthogonal transformations within each eigenspace. This difference allows SRD-based models to easily maintain symmetry, ensuring consistent predictions for isomorphic graphs, while EVD-based methods (Lim et al., 2023) struggle because they need to deal with each eigenspace individually, making them less suitable for graph-level tasks where eigenspaces vary between graphs.
• Due to this advantage of SRD, we can utilize set encoders with coordinates to capture graph structure, thus expanding the design space of GNNs. Moreover, our method provides a principled way to add graph information to Transformers. Note that previous GTs usually require multiple heuristic encodings together. Besides node positional encodings, they also use adjacency matrices: Grit (Ma et al., 2023) and GraphiT (Mialon et al., 2021) use the random walk matrix (normalized adjacency) as relative positional encoding (RPE). Graph Transformer (Dwivedi & Bresson, 2020), Graphormer (Ying et al., 2021), and SAN (Kreuzer et al., 2021) use the adjacency matrix as RPE. Dwivedi & Bresson (2020)'s ablation shows that adjacency is crucial. GPS (Rampásek et al., 2022), Exphormer (Shirzad et al., 2023), higher-order Transformer (Kim et al., 2021), and Graph ViT/MLP-Mixer (He et al., 2023) even directly incorporate message-passing blocks, which use the adjacency matrix to guide message passing between nodes.
In summary, this paper introduces a novel approach to graph representation learning by converting interconnected graphs into independent points and subsequently encoding them using an orthogonal-transformation-equivariant set encoder like our Point Set Transformer. This innovative approach outperforms existing methods in both long- and short-range tasks, as validated by comprehensive experiments.
2 Preliminary
For a matrix $Z\in\mathbb{R}^{n\times d}$, we define $Z_i$ as its $i$-th row (as a column vector) and $Z_{ij}$ as its $(i,j)$-th element. For a vector $\Lambda\in\mathbb{R}^{r}$, $\mathrm{diag}(\Lambda)$ is the diagonal matrix with $\Lambda$ as its diagonal elements. And for a square matrix $W$, $\mathrm{diagonal}(W)$ represents the vector of its diagonal elements.
Let $\mathcal{G}=(V,E,X)$ denote an undirected graph. Here, $V=\{1,2,\dots,n\}$ is the set of $n$ nodes, $E\subseteq V\times V$ is the set of edges, and $X\in\mathbb{R}^{n\times d}$ is the node feature matrix, whose $v$-th row $X_v$ is the feature of node $v$. The edge set can also be represented using the adjacency matrix $A\in\mathbb{R}^{n\times n}$, where $A_{uv}$ is $1$ if the edge exists (i.e., $(u,v)\in E$) and $0$ otherwise. A graph can also be represented by the pair $(V,E,X)$ or $(A,X)$. The degree matrix $D$ is a diagonal matrix with node degrees (row sums of $A$) as its diagonal elements.
Given a permutation function $\pi:\{1,2,\dots,n\}\to\{1,2,\dots,n\}$, the permuted graph is $(\pi(A),\pi(X))$, where $\pi(A)_{\pi(u)\pi(v)}=A_{uv}$ and $\pi(X)_{\pi(u)}=X_u$ for all $u,v\in V$. Essentially, the permutation reindexes each node $u$ to $\pi(u)$ while preserving the original graph structure and node features. Two graphs are isomorphic iff they can be mapped to each other through a permutation.
Definition 2.1.
Graphs $(A,X)$ and $(A',X')$ are isomorphic, denoted as $(A,X)\simeq(A',X')$, if there exists a permutation $\pi$ such that $\pi(A)=A'$ and $\pi(X)=X'$.
Isomorphic graphs can be transformed into each other by merely reindexing their nodes. In graph tasks, models should assign the same prediction to isomorphic graphs.
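To make the permutation action concrete, here is a minimal NumPy sketch (our illustration, not code from the paper) that builds a permutation matrix and checks that $\pi(A)_{\pi(u)\pi(v)}=A_{uv}$ and $\pi(X)_{\pi(u)}=X_u$.

```python
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # a 3-node path graph
X = np.array([[1.0], [2.0], [3.0]])      # toy node features

pi = np.array([2, 0, 1])                 # node u is reindexed to pi[u]
P = np.zeros((3, 3))
P[pi, np.arange(3)] = 1                  # permutation matrix with P[pi[u], u] = 1

A_perm = P @ A @ P.T                     # pi(A)
X_perm = P @ X                           # pi(X)

for u in range(3):
    assert np.allclose(X_perm[pi[u]], X[u])        # features follow their nodes
    for v in range(3):
        assert A_perm[pi[u], pi[v]] == A[u, v]     # structure preserved under reindexing
```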
Symmetric Rank Decomposition (SRD). Decomposing a rank-$r$ matrix into the product of two full-rank matrices is well known (Puntanen et al., 2011). We further use the fact that a positive semi-definite matrix can be decomposed with a single full-column-rank matrix.
Definition 2.2.
(Symmetric Rank Decomposition, SRD) Given a (symmetric) positive semi-definite matrix $L\in\mathbb{R}^{n\times n}$ of rank $r$, its SRD is $Q\in\mathbb{R}^{n\times r}$ with $L=QQ^T$.
As $L=QQ^T$, $\mathrm{rank}(Q)=\mathrm{rank}(L)=r$, which implies that $Q$ must be of full column rank. Moreover, two SRDs of the same matrix are equal up to an orthogonal transformation. Let $O(r)$ denote the set of orthogonal matrices in $\mathbb{R}^{r\times r}$.
Proposition 2.3.
Matrices $Q_1$ and $Q_2$ in $\mathbb{R}^{n\times r}$ are SRDs of the same matrix iff there exists $R\in O(r)$ such that $Q_1=Q_2R$.
SRD is closely related to eigendecomposition. Let $U\mathrm{diag}(\Lambda)U^T$ denote the eigendecomposition of $L$ restricted to non-zero eigenvalues, where $\Lambda\in\mathbb{R}^{r}$ is the vector of non-zero eigenvalues and $U\in\mathbb{R}^{n\times r}$ is the matrix whose columns are the corresponding eigenvectors. Then $U\mathrm{diag}(\Lambda^{1/2})$ yields an SRD of $L$, where the superscript $1/2$ denotes the element-wise square root operation.
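Below is a minimal NumPy sketch (our illustration, assuming the eigendecomposition-based construction above, not the paper's implementation) that computes an SRD of $D+A$ and checks that any orthogonal transformation of it reconstructs the same matrix, as in Proposition 2.3.

```python
import numpy as np

def srd(L, tol=1e-8):
    """Return Q with Q @ Q.T == L for a positive semi-definite L (non-zero eigenpairs only)."""
    lam, U = np.linalg.eigh(L)                   # L = U diag(lam) U^T
    keep = lam > tol                             # drop (numerically) zero eigenvalues
    return U[:, keep] * np.sqrt(lam[keep])       # Q = U diag(lam^{1/2})

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(1)) + A                        # augmented adjacency matrix D + A
Q = srd(L)
assert np.allclose(Q @ Q.T, L)                   # Q is a valid SRD

# Any Q R with R orthogonal is another SRD of the same matrix (Proposition 2.3).
rng = np.random.default_rng(0)
R, _ = np.linalg.qr(rng.normal(size=(Q.shape[1], Q.shape[1])))
assert np.allclose((Q @ R) @ (Q @ R).T, L)
```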
3 Graph as Point Set
In this section, we present our innovative method for converting graphs into sets of points. We first show that Symmetric Rank Decomposition (SRD) can theoretically achieve this transformation: two graphs are isomorphic iff the sets of coordinates generated by SRD are equal up to orthogonal transformations. Additionally, we parameterize SRD for better real-world performance. Proof details are in Appendix A.
3.1 Symmetric Rank Decomposition for Coordinates
A natural approach to breaking down the interconnections between nodes is to decompose the adjacency matrix. While previous methods often used eigendecomposition outputs as supplementary node features, these features are not unique. Consequently, models relying on them fail to provide consistent predictions for isomorphic graphs, ultimately leading to poor generalization. To address this, we show that Symmetric Rank Decomposition (SRD) can convert graph-level tasks into set-level tasks with perfect alignment. Since SRD only applies to positive semi-definite matrices, we use the augmented adjacency matrix $D+A$, which is always positive semi-definite (proof in Appendix A.2).
Theorem 3.1.
Given two graphs $(A,X)$ and $(A',X')$ with respective degree matrices $D$ and $D'$, $(A,X)\simeq(A',X')$ iff there exists $R\in O(r)$ such that $\{(X_v,RQ_v)\,|\,v\in V\}=\{(X'_v,Q'_v)\,|\,v\in V'\}$, where $Q$ and $Q'$ are the SRDs of $D+A$ and $D'+A'$ respectively, and $r$ is the rank of $D+A$.
In this theorem, the graph is converted to a set of points $\{(X_v,Q_v)\,|\,v\in V\}$, where $X_v$ is the original node feature of $v$, and $Q_v$, the $v$-th row of the SRD of $D+A$, is the $r$-dimensional coordinate of node $v$. Consequently, two graphs are isomorphic iff their point sets are equal up to an orthogonal transformation. Intuitively, we can imagine that the graph is mapped into an $r$-dimensional space, where each node has a coordinate, and the inner product between two coordinates represents edge existence. This mapping is not unique, since we can freely rotate the coordinates through an orthogonal transformation without changing inner products. This conversion can be loosely likened to the reverse process of constructing a molecular graph from atoms' 3D coordinates, where Euclidean distances between atoms determine node connections in the graph.
Leveraging Theorem 3.1, we can convert a graph into a set and employ a set encoder for encoding it. Our method produces identical representations for isomorphic graphs as long as the encoder is orthogonal-transformation-invariant. The method's expressivity hinges on the set encoder's ability to differentiate non-equal sets, with a more powerful encoder yielding better overall performance on graph tasks.
3.2 Parameterized Coordinates
In this section, we enhance SRD's practical performance through parameterization. As shown in Section 2, SRD can be implemented via eigendecomposition: $Q=U\mathrm{diag}(\Lambda^{1/2})$, where $\Lambda$ denotes the non-zero eigenvalues of the decomposed matrix and $U$ denotes the corresponding eigenvectors. To parameterize SRD, we replace the element-wise square root with a learnable function $f$ applied to the eigenvalues. This alteration further eliminates the constraint of non-negativity on eigenvalues and enables the use of various symmetric matrices containing adjacency information to generate coordinates. Additionally, for model flexibility, the coordinates can include multiple channels, with each channel corresponding to a distinct eigenvalue function.
Definition 3.2.
(Parameterized SRD, PSRD) With a $d$-channel eigenvalue function $f:\mathbb{R}^{r}\to\mathbb{R}^{r\times d}$ and an adjacency function $g$ producing symmetric matrices, the PSRD coordinate of a graph with adjacency matrix $A$ is $Q(A,f,g)\in\mathbb{R}^{n\times r\times d}$, whose $k$-th channel is $U\mathrm{diag}(f_k(\Lambda))$, where $\Lambda\in\mathbb{R}^{r}$ and $U\in\mathbb{R}^{n\times r}$ are the non-zero eigenvalues and corresponding eigenvectors of $g(A)$, and $f_k$ is the $k$-th channel of $f$.
In the definition, $g$ maps the adjacency matrix to one of its variants, such as the Laplacian matrix, and $f$ transforms the eigenvalues. $Q(A,f,g)_v$ is node $v$'s coordinate. Similar to SRD, PSRD can also convert graph isomorphism problems to set equality problems.
Theorem 3.3.
Given a permutation-equivariant adjacency function $g$, for graphs $(A,X)$ and $(A',X')$:

• If the eigenvalue function $f$ is permutation-equivariant and $(A,X)\simeq(A',X')$, then the two point sets with PSRD coordinates are equal up to an orthogonal transformation, i.e., $\exists R\in O(r)$, $\{(X_v,RQ(A,f,g)_v)\,|\,v\in V\}=\{(X'_v,Q(A',f,g)_v)\,|\,v\in V'\}$, where $r$ is the rank of the coordinates.

• If $g$ is injective, there exists a continuous permutation-equivariant eigenvalue function $f$ such that, if the two point sets with PSRD coordinates are equal up to an orthogonal transformation, then $(A,X)\simeq(A',X')$.
Given permutation-equivariant $f$ and $g$, the point sets with PSRD coordinates are equal up to an orthogonal transformation for isomorphic graphs. Moreover, there exists an $f$ making the reverse direction true. Therefore, we can safely employ permutation-equivariant eigenvalue functions, ensuring consistent predictions for isomorphic graphs. An expressive eigenvalue function also allows for the lossless conversion of graph-level tasks into set problems. In implementation, we utilize DeepSet (Segol & Lipman, 2020) due to its universal expressivity for permutation-equivariant set functions. The detailed architecture is shown in Figure 3 in Appendix G.
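As a sketch of how PSRD coordinates can be computed (assuming plain NumPy and hand-picked eigenvalue functions in place of the learned DeepSet), each channel applies its own function to the spectrum of $g(A)$:

```python
import numpy as np

def psrd_coordinates(A, eig_fns, g=lambda A: np.diag(A.sum(1)) + A, tol=1e-8):
    """Coordinates of shape (n, r, num_channels); channel k is U diag(f_k(lam))."""
    lam, U = np.linalg.eigh(g(A))
    keep = np.abs(lam) > tol                      # keep non-zero eigenvalues only
    lam, U = lam[keep], U[:, keep]
    channels = [U * f(lam) for f in eig_fns]      # each U diag(f(lam)) has shape (n, r)
    return np.stack(channels, axis=-1)

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

# Two illustrative channels; in the paper the eigenvalue functions are learned (by a DeepSet).
eig_fns = [lambda lam: np.sqrt(np.abs(lam)),      # recovers plain SRD when g(A) = D + A
           lambda lam: np.exp(-lam)]              # a smooth spectral filter
coords = psrd_coordinates(A, eig_fns)
print(coords.shape)                               # (4, r, 2)
```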
In summary, we use SRD and its parameterized generalization to decompose the adjacency matrix or its variants into coordinates. Thus, we transform a graph into a point set where each point represents a node and includes both the original node feature and the coordinates as its features.
4 Point Set Transformer
Our method, as depicted in Figure 1, comprises two steps: converting the graph into a set of independent points and encoding the set. Section 3 demonstrates the bijective transformation of the graph into a set. To encode this point set, we introduce a novel architecture, Point Set Transformer (PST), designed to maintain invariance to orthogonal transformations of the coordinates while delivering remarkable expressivity. Additionally, to highlight our method's versatility, we propose a DeepSet (Segol & Lipman, 2020)-based set encoder in Appendix K.
PST's architecture is depicted in Figure 4 in Appendix G. PST operates with two types of representations for each point: scalars, which remain invariant to orthogonal transformations of the coordinates, and vectors, which change equivariantly with the coordinates. For a point $i$, its scalar representation is $s_i\in\mathbb{R}^{d}$, and its vector representation is $v_i\in\mathbb{R}^{r\times d}$, where $d$ is the hidden dimension and $r$ is the rank of the coordinates. $s_i$ and $v_i$ are initialized with the input node feature and the PSRD coordinates (detailed in Section 3.2) containing graph structure information, respectively.
Similar to conventional transformers, PST comprises multiple layers. Each layer incorporates two key components:
Scalar-Vector Mixer. This component, akin to the feed-forward network in Transformer, individually transforms point features. To enable information exchange between vectors and scalars, we employ the following architecture.
$$s_i' \leftarrow \mathrm{MLP}_1\big(s_i \,\|\, \mathrm{diagonal}(W_1^T v_i^T v_i W_2)\big), \qquad (1)$$
$$v_i' \leftarrow v_i\,\mathrm{diag}(\mathrm{MLP}_2(s_i)). \qquad (2)$$
Here, $W_1$ and $W_2$ are learnable matrices for mixing different channels of vector features. Additionally, $\mathrm{MLP}_1$ and $\mathrm{MLP}_2$ represent two multi-layer perceptrons transforming scalar representations. The $\mathrm{diagonal}$ operation takes the diagonal elements of a matrix, which translates vectors to scalars, while $\mathrm{diag}$ transforms scalar features into a diagonal matrix that rescales the vectors. As $v_i^Tv_i$ is unchanged when $v_i$ is replaced by $Rv_i$ for any $R\in O(r)$, the scalar update is invariant to orthogonal transformations of the coordinates. Similarly, the vector update is equivariant to $O(r)$.
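The following PyTorch sketch shows one way to realize such a mixer that is consistent with the description above; it is a simplified reading rather than the exact published layer, and the module and parameter names are ours.

```python
import torch
import torch.nn as nn

class ScalarVectorMixer(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.W1 = nn.Linear(d, d, bias=False)    # channel mixing of vector features
        self.W2 = nn.Linear(d, d, bias=False)
        self.mlp1 = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, d))
        self.mlp2 = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, s, v):
        # s: (n, d) scalars, v: (n, r, d) vectors; orthogonal transformations act on the r axis.
        gram_diag = (self.W1(v) * self.W2(v)).sum(dim=1)      # diagonal(W1^T v^T v W2), O(r)-invariant
        s_new = self.mlp1(torch.cat([s, gram_diag], dim=-1))  # scalar update (cf. Eq. 1)
        v_new = v * self.mlp2(s).unsqueeze(1)                 # per-channel gating, O(r)-equivariant (cf. Eq. 2)
        return s_new, v_new

s, v = torch.randn(5, 16), torch.randn(5, 8, 16)              # 5 points, rank 8, hidden dim 16
s2, v2 = ScalarVectorMixer(16)(s, v)
```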
Attention Layer. Akin to ordinary attention layers, this component computes pairwise attention scores to linearly combine point representations.
$$\mathrm{Attn}_{ij} \leftarrow \langle W_q^s s_i,\, W_k^s s_j\rangle + \langle v_i W_q^v,\, v_j W_k^v\rangle \qquad (3)$$
Here, $W_q^s$ and $W_q^v$ denote the linear transformations producing scalar and vector queries, respectively, while $W_k^s$ and $W_k^v$ produce the keys. The equation computes the inner products of queries and keys (the Frobenius inner product for the vector part), similar to standard attention mechanisms. It is easy to see that $\mathrm{Attn}_{ij}$ is also invariant to $O(r)$ transformations of the coordinates.
Then we linearly combine point representations with attention scores as the coefficients:
$$s_i \leftarrow \sum_{j}\mathrm{Attn}_{ij}\, s_j, \qquad v_i \leftarrow \sum_{j}\mathrm{Attn}_{ij}\, v_j. \qquad (4)$$
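A single-head PyTorch sketch of Equations 3 and 4 is given below (our simplification; a softmax normalization is added for stability, and the paper's multi-head version may differ in details).

```python
import torch
import torch.nn as nn

class PointSetAttention(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.qs, self.ks = nn.Linear(d, d), nn.Linear(d, d)    # scalar query/key maps
        self.qv = nn.Linear(d, d, bias=False)                  # vector query/key channel mixers
        self.kv = nn.Linear(d, d, bias=False)

    def forward(self, s, v):
        # s: (n, d), v: (n, r, d)
        attn = self.qs(s) @ self.ks(s).T                                   # scalar inner products
        attn = attn + torch.einsum('irc,jrc->ij', self.qv(v), self.kv(v))  # Frobenius products, O(r)-invariant
        attn = torch.softmax(attn, dim=-1)
        s_out = attn @ s                                                   # combine scalar representations
        v_out = torch.einsum('ij,jrc->irc', attn, v)                       # combine vector representations
        return s_out, v_out

s, v = torch.randn(5, 16), torch.randn(5, 8, 16)
s2, v2 = PointSetAttention(16)(s, v)
```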
Each transformer layer has time complexity $O(n^2rd)$ and space complexity $O(n^2+nrd)$, where $n$ is the number of nodes.
Pooling. After several layers, we pool all points' scalar representations into the set representation $s_{\mathcal{G}}$:

$$s_{\mathcal{G}} = \mathrm{Pool}(\{s_i \,|\, i\in V\}), \qquad (5)$$

where $\mathrm{Pool}$ is a pooling function such as sum, mean, or max.
5 Expressivity
In this section, we delve into the theoretical expressivity of our methods. Our PSRD coordinates and the PST architecture exhibit strong long-range expressivity, allowing for efficient computation of distance metrics between nodes, as well as short-range expressivity, enabling the counting of paths and cycles rooted at each node. Therefore, our model is more expressive than many existing models, including GIN (equivalent to the 1-WL test) (Xu et al., 2019b), PPGN (equivalent to the 2-FWL test, more expressive in some cases) (Maron et al., 2019a), GPS (Rampásek et al., 2022), and Graphormer (Ying et al., 2021) (two representative graph transformers). More details are in Appendix B.
5.1 Long Range Expressivity
This section demonstrates that the inner products of PSRD coordinates exhibit strong long-range expressivity, which PST inherits by using these inner products in its attention layers.
When assessing a model's capacity to capture long-range interactions (LRI), a key measure is its ability to compute the shortest path distance (spd) between nodes. Since formally characterizing LRI can be challenging, we focus on analyzing models' performance on this specific measure. We observe that existing models vary significantly in their capacity to calculate spd. Moreover, we find an intuitive explanation for these differences: the spd between nodes can be expressed as $\mathrm{spd}(i,j)=\min\{k \,|\, (A^k)_{ij}>0\}$, so the ability to compute $A^k$, the $k$-th power of the adjacency matrix $A$, serves as a straightforward indicator. Different models need different numbers of layers to compute $A^k$.
PSRD coordinates. PSRD coordinates can capture arbitrarily large shortest path distances through their inner products in one step. To illustrate this, we eigendecompose the adjacency matrix as $A=U\mathrm{diag}(\Lambda)U^T$ and employ two coordinate channels $Q^{(1)}=U\mathrm{diag}(f_1(\Lambda))$ and $Q^{(2)}=U\mathrm{diag}(f_2(\Lambda))$. Their inner products are as follows:

$$\langle Q^{(1)}_i, Q^{(2)}_j\rangle = \big(U\mathrm{diag}(f_1(\Lambda)f_2(\Lambda))U^T\big)_{ij}, \quad\text{e.g., } f_1(\Lambda)f_2(\Lambda)=\Lambda^k \;\Rightarrow\; \langle Q^{(1)}_i, Q^{(2)}_j\rangle=(A^k)_{ij}. \qquad (6)$$
Theorem 5.1.
There exist permutation-equivariant eigenvalue functions $f_1,f_2,\dots,f_l$ such that, for all graphs $(A,X)$, the shortest path distance between nodes $i$ and $j$ is a function of the inner products $\{\langle Q(A,f_k)_i, Q(A,f_{k'})_j\rangle \,|\, k,k'\in[l]\}$, where $Q(A,f_k)$ is the PSRD coordinate defined in Section 3.2 and $l$ is the maximum shortest path distance between nodes.
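A small NumPy sketch of this argument (our illustration, using power functions $f_k(\Lambda)=\Lambda^k$ as one concrete choice of eigenvalue functions) recovers $(A^k)_{ij}$ from coordinate inner products and reads off the shortest path distance:

```python
import numpy as np

A = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)   # a 5-node path graph
lam, U = np.linalg.eigh(A)                      # A = U diag(lam) U^T

def coord(f):
    """Coordinate channel U diag(f(lam))."""
    return U * f(lam)

def spd(i, j, max_k=10):
    Q0 = coord(np.ones_like)                    # f_0 = 1, so Qk[i] @ Q0[j] = (A^k)_{ij}
    for k in range(1, max_k + 1):
        Qk = coord(lambda x, k=k: x ** k)       # f_k(lam) = lam^k
        if Qk[i] @ Q0[j] > 1e-8:                # first k with (A^k)_{ij} > 0
            return k
    return np.inf

assert spd(0, 4) == 4                           # distance between the two path endpoints
assert np.allclose(coord(lambda x: x ** 3)[0] @ coord(np.ones_like)[2],
                   np.linalg.matrix_power(A, 3)[0, 2])
```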
2-FWL. A powerful graph isomorphism test, the 2-Folklore-Weisfeiler-Leman test (2-FWL), and its neural version PPGN (Maron et al., 2019a) produce node-pair representations in a matrix $X\in\mathbb{R}^{n\times n\times d}$, initialized with the adjacency matrix $A$. Each iteration updates $X$ with an operation akin to matrix multiplication,

$$X_{ij} \leftarrow \mathrm{HASH}\big(X_{ij}, \{\!\!\{(X_{il}, X_{lj}) \,|\, l\in V\}\!\!\}\big), \qquad (7)$$

so intuitively, each iteration can only double the power of $A$ that is expressible, and computing high powers of $A$ requires multiple iterations.
Theorem 5.2.
Let denote the color of node tuple of graph at iteration . Given graphs and , for all , if two node tuples in and in have , then . Moreover, for all , there exists , such that while .
In other words, iterations of 2-FWL can distinguish pairs of nodes with different spds, as long as that distance is at most . Moreover, -iteration 2-FWL cannot differentiate all tuples with spd from other tuples with different spds, which indicates that -iteration 2-FWL is effective in counting shortest path distances up to a maximum of .
MPNN. Intuitively, each MPNN layer uses $AX$ to update node representations $X$, as in Equation 8. However, this operation in general cannot compute $A^k$ unless the initial node features identify individual nodes (e.g., $X=I$).

$$X \leftarrow \mathrm{UPDATE}\big(X,\, AX\big) \qquad (8)$$
Theorem 5.3.
There exists a pair of graphs that MPNNs cannot differentiate, but whose multisets of all-pair shortest path distances are different.
If MPNNs can compute spd between node pairs, they should be able to distinguish this graph pair from the sets of spd. However, we show no MPNNs can distinguish the pair, thus proving that MPNNs cannot compute spd.
Graph Transformers (GTs) are known for their strong long-range capacity (Dwivedi et al., 2022b), as they can aggregate information from the entire graph to update each node's representation. However, aggregating information from the entire graph is not equivalent to capturing the distance between nodes, and some GTs also fail to compute spd between nodes. Details are in Appendix C. Note that this slightly counter-intuitive result arises because we take a new perspective on long-range interactions, rather than claiming that GTs are weak in long-range capacity.
Besides shortest path distances, our PSRD coordinates also enable the unification of various structural encodings (distance metrics between nodes), including random walk (Li et al., 2020; Dwivedi et al., 2023; Rampásek et al., 2022), heat kernel (Mialon et al., 2021), and resistance distance (Zhang & Li, 2021; Zhang et al., 2023b). Further insights and details are shown in Table 5 in Appendix D.
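As an illustration of this unification (standard definitions of the encodings computed with NumPy; not taken from Appendix D), both the $k$-step random-walk landing probabilities and the heat kernel are functions of the same eigendecomposition that PSRD coordinates use:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
deg = A.sum(1)
An = np.diag(deg ** -0.5) @ A @ np.diag(deg ** -0.5)   # normalized adjacency D^{-1/2} A D^{-1/2}
lam, U = np.linalg.eigh(An)

k, t = 3, 1.0
# k-step random-walk landing probabilities: diag((D^{-1}A)^k) = diag(D^{-1/2} An^k D^{1/2}).
rw = np.diag(np.diag(deg ** -0.5) @ (U * lam ** k) @ U.T @ np.diag(deg ** 0.5))
# Heat kernel of the normalized Laplacian I - An: U diag(exp(-t(1 - lam))) U^T.
heat = (U * np.exp(-t * (1.0 - lam))) @ U.T

assert np.allclose(rw, np.diag(np.linalg.matrix_power(np.diag(1 / deg) @ A, k)))
assert np.allclose(heat, heat.T)
```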
5.2 Short-Range Expressivity
This section shows PST’s expressivity in representative short-range tasks: path and cycle counting.
Theorem 5.4.
A one-layer PST can count paths of length and , a two-layer PST can count paths of length and , and a four-layer PST can count paths of length and . Here, “count” means that the element of the attention matrix in the last layer can express the number of paths between nodes and .
Therefore, with enough layers, our PST can count the numbers of paths of these lengths between nodes. Furthermore, our PST can also count cycles.
Theorem 5.5.
A one-layer PST can count cycles of length , a three-layer PST can count cycles of length and , and a five-layer PST can count cycles of length and . Here, “count” means the representation of node in the last layer can express the number of cycles involving node .
Therefore, with enough layers, PST can count the numbers of cycles of these lengths rooted at each node. Given that even 2-FWL is restricted to counting cycles of length at most 7 (Fürer, 2017), the cycle counting power of our Point Set Transformer is at least on par with 2-FWL.
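For intuition, the sketch below (a standard closed-walk identity, not the paper's construction) shows the simplest instance of such cycle statistics: triangle counts per node and in total from powers of the adjacency matrix, verified by brute force.

```python
import numpy as np
from itertools import combinations

A = np.array([[0, 1, 1, 1, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 1],
              [1, 0, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
A3 = np.linalg.matrix_power(A, 3)

tri_per_node = np.diag(A3) / 2     # (A^3)_{ii} counts each triangle at i twice (two directions)
tri_total = np.trace(A3) / 6       # each triangle contributes 6 closed 3-walks

brute = sum(A[i, j] * A[j, k] * A[k, i] for i, j, k in combinations(range(5), 3))
assert tri_total == brute
print(tri_per_node, tri_total)
```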
6 Related Work
Graph Neural Network with Eigen-Decomposition. Our approach employs coordinates derived from the symmetric rank decomposition (SRD) of adjacency or related matrices, differing from prior studies that primarily rely on eigendecomposition (EVD). While both approaches have similarities, SRD transforms the graph isomorphism problem into a set problem bijectively, which is challenging for EVD, because SRD of a matrix is unique up to a single orthogonal transformation, while EVD is unique up to multiple orthogonal transformations in different eigenspaces. This key theoretical difference has profound implications for model design. Early efforts, like Dwivedi et al. (2023), introduce eigenvectors into MPNNs' input node features (Gilmer et al., 2017), and subsequent works, such as Graph Transformers (GTs) (Dwivedi & Bresson, 2020; Kreuzer et al., 2021), incorporate eigenvectors as node positional encodings. However, due to the non-uniqueness of eigenvectors, these models produce varying predictions for isomorphic graphs, limiting their generalization. Lim et al. (2023) partially solve the non-uniqueness problem. However, their solutions are limited to cases with constant eigenvalue multiplicity in graph tasks due to the properties of EVD. On the other hand, approaches like Wang et al. (2022), Bo et al. (2023), and Huang et al. (2024) completely solve non-uniqueness and even apply permutation-equivariant functions to eigenvalues, similar to our PSRD. However, these methods aim to enhance existing MPNNs and GTs with heuristic features. In contrast, we perfectly align graph-level tasks with set-level tasks through SRD, allowing us to convert orthogonal-transformation-equivariant set encoders into graph encoders and to inject graph structure information into Transformers in a principled way.
Table 1: Normalized test MAE on the substructure counting tasks (TT: tailed triangle, CC: chordal cycle, TR: triangle-rectangle).

Method | 2-Path | 3-Path | 4-Path | 5-Path | 6-Path | 3-Cycle | 4-Cycle | 5-Cycle | 6-Cycle | 7-Cycle | TT | CC | TR
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
MPNN | 1.0 | 67.3 | 159.2 | 235.3 | 321.5 | 351.5 | 274.2 | 208.8 | 155.5 | 169.8 | 363.1 | 311.4 | 297.9 |
IDGNN | 1.9 | 1.8 | 27.3 | 68.6 | 78.3 | 0.6 | 2.2 | 49 | 49.5 | 49.9 | 105.3 | 45.4 | 62.8 |
NGNN | 1.5 | 2.1 | 24.4 | 75.4 | 82.6 | 0.3 | 1.3 | 40.2 | 43.9 | 52.2 | 104.4 | 39.2 | 72.9 |
GNNAK | 4.5 | 40.7 | 7.5 | 47.9 | 48.8 | 0.4 | 4.1 | 13.3 | 23.8 | 79.8 | 4.3 | 11.2 | 131.1 |
I2-GNN | 1.5 | 2.6 | 4.1 | 54.4 | 63.8 | 0.3 | 1.6 | 2.8 | 8.2 | 39.9 | 1.1 | 1.0 | 1.3 |
PPGN | 0.3 | 1.7 | 4.1 | 15.1 | 21.7 | 0.3 | 0.9 | 3.6 | 7.1 | 27.1 | 2.6 | 1.5 | 14.4 |
PSDS | |||||||||||||
PST |
Equivariant Point Cloud and 3-D Molecule Neural Networks. Equivariant point cloud and 3-D molecule tasks share resemblances: both involve unordered sets of 3-D coordinate points as input and require models to produce predictions invariant/equivariant to orthogonal transformations and translations of coordinates. Several works (Chen et al., 2021; Winkels & Cohen, 2018; Cohen et al., 2018; Gasteiger et al., 2021) introduce specialized equivariant convolution operators to preserve prediction symmetry, yet are later surpassed by models that learn both invariant and equivariant representations for each point, transmitting these representations between nodes. Notably, certain models (Satorras et al., 2021; Schütt et al., 2021; Deng et al., 2021; Wang & Zhang, 2022) directly utilize vectors mirroring input coordinate changes as equivariant features, while others (Thomas et al., 2018; Batzner et al., 2022; Fuchs et al., 2020; Hutchinson et al., 2021; Worrall et al., 2017; Weiler et al., 2018) incorporate high-order irreducible representations of the orthogonal group, achieving proven universal expressivity (Dym & Maron, 2021). Our Point Set Transformer (PST) similarly learns both invariant and equivariant point representations. However, due to the specific conversion of point sets from graphs, PST’s architecture varies from existing models. While translation invariance characterizes point clouds and molecules, graph properties are sensitive to coordinate translations in our method. Hence, we adopt inner products of coordinates. Additionally, these prior works center on 3D point spaces, whereas our coordinates exist in high-dimensional space, rendering existing models and theoretical expressivity results based on high-order irreducible representations incompatible with our framework.
7 Experiments
In our experiments, we evaluate our model across three dimensions: substructure counting for short-range expressivity, real-world graph property prediction for practical performance, and the Long-Range Graph Benchmark (Dwivedi et al., 2022b) to assess long-range interactions. Our primary model, Point Set Transformer (PST) with PSRD coordinates derived from the Laplacian matrix, performs well on all tasks. Moreover, our graph-to-set method is adaptable to various configurations. In the ablation study (see Appendix H), an alternative set encoder, Point Set DeepSet (PSDS, introduced in Appendix K), SRD coordinates in place of PSRD, and coordinates decomposed from the adjacency matrix or the normalized adjacency matrix all demonstrate good performance, highlighting the versatility of our approach. Although PST has higher time complexity than existing Graph Transformers and is slower on large graphs, it shows similar scalability to our baselines on real-world graph property prediction datasets (see Appendix I). Our PST uses fewer or comparable parameters than baselines across all datasets. Dataset details, experiment settings, and hyperparameters are provided in Appendices E and F.
7.1 Graph substructure counting
As Chen et al. (2020) highlight, the ability to count substructures is a crucial metric for assessing expressivity. We evaluate our model’s substructure counting capabilities on synthetic graphs following Huang et al. (2023b). The considered substructures include paths of lengths 2 to 6, cycles of lengths 3 to 7, and other substructures like tailed triangles (TT), chordal cycles (CC), and triangle-rectangle (TR). Our task involves predicting the number of paths originating from each node and the cycles and other substructures in which each node participates. We compare our Point Set Transformer (PST) with expressive GNN models, including ID-GNNs (You et al., 2021), NGNNs (Zhang & Li, 2021), GNNAK+(Zhao et al., 2022), I2-GNN(Huang et al., 2023b), and PPGN (Maron et al., 2019a). Baseline results are from Huang et al. (2023b), where uncertainties are unknown.
Results are in Table 1. Following Huang et al. (2023b), a model can count a substructure if its normalized test Mean Absolute Error (MAE) is below (10 units in the table). Remarkably, our PST counts all listed substructures, which aligns with our Theorem 5.4 and Theorem 5.5, while the second-best model, I2-GNN, counts only 10 out of 13 substructures. PSDS can also count 8 out of 13 substructures, showcasing the versatility of our graph-to-set method.
Table 2: MAE on QM9 targets. Models marked with * use Euclidean distances between atoms as additional input.

Target | Unit | LRP | NGNN | I2GNN | DF | PSDS | PST | 1GNN* | DTNN* | 123* | PPGN* | PST*
---|---|---|---|---|---|---|---|---|---|---|---|---|
D | 3.64 | 4.28 | 4.28 | 3.46 | 4.93 | 2.44 | 4.76 | 2.31 | ||||
2.98 | 2.90 | 2.30 | 2.22 | 7.80 | 9.50 | 2.70 | 3.82 | |||||
meV | 6.91 | 7.21 | 7.10 | 6.15 | 8.73 | 10.56 | 9.17 | 7.51 | ||||
meV | 7.54 | 8.08 | 7.27 | 6.12 | 9.66 | 13.93 | 9.55 | 7.81 | ||||
meV | 9.61 | 10.34 | 10.34 | 8.82 | 13.33 | 30.48 | 13.06 | 11.05 | ||||
19.30 | 20.50 | 18.64 | 15.04 | 34.10 | 17.00 | 22.90 | 16.07 | |||||
ZPVE | meV | 1.50 | 0.54 | 0.38 | 0.46 | 3.37 | 4.68 | 0.52 | 17.42 | |||
meV | 11.24 | 8.03 | 5.74 | 4.24 | 63.13 | 66.12 | 1.16 | 6.37 | ||||
meV | 11.24 | 9.82 | 5.61 | 4.16 | 56.60 | 66.12 | 3.02 | 6.37 | ||||
meV | 11.24 | 8.30 | 7.32 | 3.95 | 60.68 | 66.12 | 1.14 | 6.23 | ||||
meV | 11.24 | 13.31 | 7.10 | 4.24 | 52.79 | 66.12 | 1.28 | 6.48 | ||||
cal/mol/K | 12.90 | 17.40 | 7.30 | 9.01 | 27.00 | 243.00 | 9.44 | 18.40 |
7.2 Graph property prediction
We conduct experiments on four real-world graph datasets: QM9 (Wu et al., 2017), ZINC, ZINC-full (Gómez-Bombarelli et al., 2016), and ogbg-molhiv (Hu et al., 2020). PST excels in performance, and PSDS performs comparably to GIN (Xu et al., 2019a). PST also outperforms all baselines on TU datasets (Ivanov et al., 2019) (see Appendix J).
For the QM9 dataset, we compare PST with various expressive GNNs, including models considering Euclidean distances (1-GNN, 1-2-3-GNN (Morris et al., 2019), DTNN (Wu et al., 2017), PPGN (Maron et al., 2019a)) and those focusing solely on graph structure (Deep LRP (Chen et al., 2020), NGNN (Zhang & Li, 2021), I2-GNN (Huang et al., 2023b), 2-DRFWL(2) GNN (Zhou et al., 2023)). For a fair comparison, we introduce two versions of our model: PST without Euclidean distance (PST) and PST with Euclidean distance (PST*). Results in Table 2 show PST outperforms all baseline models without Euclidean distance on 11 out of 12 targets, with an average 11% reduction in loss compared to the strongest baseline, 2-DRFWL(2) GNN. PST* outperforms all Euclidean distance-based baselines on 8 out of 12 targets, with an average 4% reduction in loss compared to the strongest baseline, 1-2-3-GNN. Both models rank second in performance for the remaining targets. PSDS without Euclidean distance also outperforms baselines on 6 out of 12 targets.
Table 3: Results on ZINC, ZINC-full, and ogbg-molhiv.

Model | ZINC (MAE) | ZINC-full (MAE) | molhiv (AUC)
GIN | |||
GNN-AK+ | – | ||
ESAN | |||
SUN | |||
SSWL | |||
DRFWL | |||
CIN | |||
NGNN | |||
Graphormer | |||
GPS | - | ||
GMLP-Mixer | - | ||
SAN | - | ||
Specformer | - | ||
SignNet | - | ||
Grit | - | ||
PSDS | |||
PST |
For the ZINC, ZINC-full, and ogbg-molhiv datasets, we evaluate PST and PSDS against a range of expressive GNNs and graph transformers (GTs). This set of models includes expressive MPNNs and subgraph GNNs: GIN (Xu et al., 2019b), SUN (Frasca et al., 2022), SSWL (Zhang et al., 2023a), 2-DRFWL(2) GNN (Zhou et al., 2023), CIN (Bodnar et al., 2021), NGNN (Zhang & Li, 2021); and GTs: Graphormer (Ying et al., 2021), GPS (Rampásek et al., 2022), Graph MLP-Mixer (He et al., 2023), Specformer (Bo et al., 2023), SignNet (Lim et al., 2023), and Grit (Ma et al., 2023). Performance results for the expressive GNNs are sourced from (Zhou et al., 2023), while those for the Graph Transformers are extracted from (He et al., 2023; Ma et al., 2023; Lim et al., 2023). The comprehensive results are presented in Table 3. Notably, our PST outperforms all baseline models on the ZINC-full dataset, reducing the loss by 18%. On the ogbg-molhiv dataset, our PST also delivers competitive results, with only CIN and Graphormer surpassing it. Overall, PST demonstrates exceptional performance across these four diverse datasets, and PSDS also performs comparably to representative GNNs like GIN (Xu et al., 2019a).
7.3 Long Range Graph Benchmark
To assess the long-range capacity of our Point Set Transformer (PST), we conducted experiments on the Long Range Graph Benchmark (Dwivedi et al., 2022b). Following He et al. (2023), we compare our model to a range of baseline models, including GCN (Kipf & Welling, 2017), GINE (Xu et al., 2019a), GatedGCN (Bresson & Laurent, 2017), SAN (Kreuzer et al., 2021), Graphormer (Ying et al., 2021), GMLP-Mixer, Graph ViT (He et al., 2023), and Grit (Ma et al., 2023). PST outperforms all baselines on the PascalVOC-SP and Peptides-Func datasets and achieves the third-highest performance on the Peptides-Struct dataset. PSDS consistently outperforms GCN and GINE. These results showcase the remarkable long-range interaction capturing abilities of our methods across various benchmark datasets. Note that a contemporary work (Tönshoff et al., 2023) points out that even vanilla MPNNs can achieve performance similar to Graph Transformers on LRGB with better hyperparameters, which implies that LRGB is not a rigorous benchmark. However, for comparison with previous work, we maintain the original settings on the LRGB datasets.
Table 4: Results on the Long Range Graph Benchmark.

Model | PascalVOC-SP | Peptides-Func | Peptides-Struct
---|---|---|---|
F1 score | AP | MAE | |
GCN | |||
GINE | |||
GatedGCN | |||
GatedGCN* | |||
Transformer** | |||
SAN* | |||
SAN** | |||
GraphGPS | |||
Exphormer | |||
GMLP-Mixer | - | ||
Graph ViT | - | ||
Grit | - | ||
PSDS | |||
PST |
8 Conclusion
We introduce a novel approach employing symmetric rank decomposition to transform the interconnected nodes in a graph into independent points with coordinates. Additionally, we propose the Point Set Transformer to encode the point set. Our approach demonstrates remarkable theoretical expressivity and excels in real-world performance, addressing both short-range and long-range tasks effectively. It extends the design space of GNNs and provides a principled way to inject graph structural information into Transformers.
9 Limitations
PST’s scalability is still constrained by the Transformer architecture. To overcome this, acceleration techniques such as sparse attention and linear attention could be explored, which will be our future work.
Impact Statement
This paper presents work whose goal is to advance the field of graph representation learning and improve the design of graph generation and prediction models. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
Acknowledgement
Xiyuan Wang and Muhan Zhang are partially supported by the National Key R&D Program of China (2022ZD0160300), the National Key R&D Program of China (2021ZD0114702), the National Natural Science Foundation of China (62276003), and Alibaba Innovative Research Program. Pan Li is supported by the National Science Foundation award IIS-2239565.
References
- Akiba et al. (2019) Akiba, T., Sano, S., Yanase, T., Ohta, T., and Koyama, M. Optuna: A next-generation hyperparameter optimization framework. In SIGKDD, 2019.
- Batzner et al. (2022) Batzner, S., Musaelian, A., Sun, L., Geiger, M., Mailoa, J. P., Kornbluth, M., Molinari, N., Smidt, T. E., and Kozinsky, B. E (3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. Nature communications, 13(1):1–11, 2022.
- Bevilacqua et al. (2022) Bevilacqua, B., Frasca, F., Lim, D., Srinivasan, B., Cai, C., Balamurugan, G., Bronstein, M. M., and Maron, H. Equivariant subgraph aggregation networks. In ICLR, 2022.
- Bo et al. (2023) Bo, D., Shi, C., Wang, L., and Liao, R. Specformer: Spectral graph neural networks meet transformers. In ICLR, 2023.
- Bodnar et al. (2021) Bodnar, C., Frasca, F., Otter, N., Wang, Y., Liò, P., Montúfar, G. F., and Bronstein, M. M. Weisfeiler and lehman go cellular: CW networks. In NeurIPS, 2021.
- Bouritsas et al. (2023) Bouritsas, G., Frasca, F., Zafeiriou, S., and Bronstein, M. M. Improving graph neural network expressivity via subgraph isomorphism counting. TPAMI, 45(1), 2023.
- Bresson & Laurent (2017) Bresson, X. and Laurent, T. Residual gated graph convnets, 2017.
- Chen et al. (2021) Chen, H., Liu, S., Chen, W., Li, H., and Jr., R. W. H. Equivariant point network for 3d point cloud analysis. In CVPR, 2021.
- Chen et al. (2020) Chen, Z., Chen, L., Villar, S., and Bruna, J. Can graph neural networks count substructures? In NeurIPS, 2020.
- Cohen et al. (2018) Cohen, T. S., Geiger, M., Köhler, J., and Welling, M. Spherical cnns. In ICLR, 2018.
- Deng et al. (2021) Deng, C., Litany, O., Duan, Y., Poulenard, A., Tagliasacchi, A., and Guibas, L. J. Vector neurons: A general framework for so(3)-equivariant networks. In ICCV, 2021.
- Dwivedi & Bresson (2020) Dwivedi, V. P. and Bresson, X. A generalization of transformer networks to graphs, 2020.
- Dwivedi et al. (2022a) Dwivedi, V. P., Luu, A. T., Laurent, T., Bengio, Y., and Bresson, X. Graph neural networks with learnable structural and positional representations. In ICLR, 2022a.
- Dwivedi et al. (2022b) Dwivedi, V. P., Rampásek, L., Galkin, M., Parviz, A., Wolf, G., Luu, A. T., and Beaini, D. Long range graph benchmark. In NeurIPS, 2022b.
- Dwivedi et al. (2023) Dwivedi, V. P., Joshi, C. K., Luu, A. T., Laurent, T., Bengio, Y., and Bresson, X. Benchmarking graph neural networks. J. Mach. Learn. Res., 24:43:1–43:48, 2023.
- Dym & Maron (2021) Dym, N. and Maron, H. On the universality of rotation equivariant point cloud networks. In ICLR, 2021.
- Feng et al. (2022) Feng, J., Chen, Y., Li, F., Sarkar, A., and Zhang, M. How powerful are k-hop message passing graph neural networks. In NeurIPS, 2022.
- Fey & Lenssen (2019) Fey, M. and Lenssen, J. E. Fast graph representation learning with pytorch geometric. CoRR, abs/1903.02428, 2019.
- Frasca et al. (2022) Frasca, F., Bevilacqua, B., Bronstein, M. M., and Maron, H. Understanding and extending subgraph gnns by rethinking their symmetries. In NeurIPS, 2022.
- Fuchs et al. (2020) Fuchs, F., Worrall, D., Fischer, V., and Welling, M. Se(3)-transformers: 3d roto-translation equivariant attention networks. NeurIPS, 2020.
- Fürer (2017) Fürer, M. On the combinatorial power of the weisfeiler-lehman algorithm. In CIAC, volume 10236, pp. 260–271, 2017.
- Gasteiger et al. (2021) Gasteiger, J., Becker, F., and Günnemann, S. Gemnet: Universal directional graph neural networks for molecules. In NeurIPS, 2021.
- Gilmer et al. (2017) Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O., and Dahl, G. E. Neural message passing for quantum chemistry. In ICML, 2017.
- Gómez-Bombarelli et al. (2016) Gómez-Bombarelli, R., Duvenaud, D., Hernández-Lobato, J. M., Aguilera-Iparraguirre, J., Hirzel, T. D., Adams, R. P., and Aspuru-Guzik, A. Automatic chemical design using a data-driven continuous representation of molecules, 2016.
- Hamilton et al. (2017) Hamilton, W. L., Ying, Z., and Leskovec, J. Inductive representation learning on large graphs. In NeurIPS, 2017.
- He et al. (2023) He, X., Hooi, B., Laurent, T., Perold, A., LeCun, Y., and Bresson, X. A generalization of vit/mlp-mixer to graphs. In ICML, 2023.
- Hu et al. (2020) Hu, W., Fey, M., Zitnik, M., Dong, Y., Ren, H., Liu, B., Catasta, M., and Leskovec, J. Open graph benchmark: Datasets for machine learning on graphs. In NeurIPS, 2020.
- Huang et al. (2023a) Huang, Y., Lu, W., Robinson, J., Yang, Y., Zhang, M., Jegelka, S., and Li, P. On the stability of expressive positional encodings for graph neural networks. arXiv preprint arXiv:2310.02579, 2023a.
- Huang et al. (2023b) Huang, Y., Peng, X., Ma, J., and Zhang, M. Boosting the cycle counting power of graph neural networks with i$^2$-gnns. In ICLR, 2023b.
- Huang et al. (2024) Huang, Y., Lu, W., Robinson, J., Yang, Y., Zhang, M., Jegelka, S., and Li, P. On the stability of expressive positional encodings for graph neural networks. ICLR, 2024.
- Hutchinson et al. (2021) Hutchinson, M. J., Lan, C. L., Zaidi, S., Dupont, E., Teh, Y. W., and Kim, H. Lietransformer: Equivariant self-attention for lie groups. In ICML, 2021.
- Ivanov et al. (2019) Ivanov, S., Sviridov, S., and Burnaev, E. Understanding isomorphism bias in graph data sets. CoRR, abs/1910.12091, 2019.
- Kim et al. (2021) Kim, J., Oh, S., and Hong, S. Transformers generalize deepsets and can be extended to graphs & hypergraphs. In NeurIPS, 2021.
- Kipf & Welling (2017) Kipf, T. N. and Welling, M. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.
- Kreuzer et al. (2021) Kreuzer, D., Beaini, D., Hamilton, W. L., Létourneau, V., and Tossou, P. Rethinking graph transformers with spectral attention. In NeurIPS, 2021.
- Li et al. (2020) Li, P., Wang, Y., Wang, H., and Leskovec, J. Distance encoding: Design provably more powerful neural networks for graph representation learning. In NeurIPS, 2020.
- Lim et al. (2023) Lim, D., Robinson, J. D., Zhao, L., Smidt, T. E., Sra, S., Maron, H., and Jegelka, S. Sign and basis invariant networks for spectral graph representation learning. In ICLR, 2023.
- Ma et al. (2023) Ma, L., Lin, C., Lim, D., Romero-Soriano, A., Dokania, P. K., Coates, M., Torr, P. H. S., and Lim, S. Graph inductive biases in transformers without message passing. In ICML, 2023.
- Maron et al. (2019a) Maron, H., Ben-Hamu, H., Serviansky, H., and Lipman, Y. Provably powerful graph networks. In NeurIPS, 2019a.
- Maron et al. (2019b) Maron, H., Ben-Hamu, H., Shamir, N., and Lipman, Y. Invariant and equivariant graph networks. In ICLR, 2019b.
- Mialon et al. (2021) Mialon, G., Chen, D., Selosse, M., and Mairal, J. Graphit: Encoding graph structure in transformers, 2021.
- Morris et al. (2019) Morris, C., Ritzert, M., Fey, M., Hamilton, W. L., Lenssen, J. E., Rattan, G., and Grohe, M. Weisfeiler and leman go neural: Higher-order graph neural networks. In AAAI, 2019.
- Paszke et al. (2019) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Köpf, A., Yang, E. Z., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. Pytorch: An imperative style, high-performance deep learning library. In NeurIPS, pp. 8024–8035, 2019.
- Perepechko & Voropaev (2009) Perepechko, S. and Voropaev, A. The number of fixed length cycles in an undirected graph. explicit formulae in case of small lengths. MMCP, 148, 2009.
- Puntanen et al. (2011) Puntanen, S., Styan, G. P., and Isotalo, J. Matrix tricks for linear statistical models: our personal top twenty. Springer, 2011.
- Qian et al. (2022) Qian, C., Rattan, G., Geerts, F., Niepert, M., and Morris, C. Ordered subgraph aggregation networks. In NeurIPS, 2022.
- Rampásek et al. (2022) Rampásek, L., Galkin, M., Dwivedi, V. P., Luu, A. T., Wolf, G., and Beaini, D. Recipe for a general, powerful, scalable graph transformer. In NeurIPS, 2022.
- Satorras et al. (2021) Satorras, V. G., Hoogeboom, E., and Welling, M. E (n) equivariant graph neural networks. In ICML, 2021.
- Schütt et al. (2021) Schütt, K., Unke, O., and Gastegger, M. Equivariant message passing for the prediction of tensorial properties and molecular spectra. In ICML, 2021.
- Segol & Lipman (2020) Segol, N. and Lipman, Y. On universal equivariant set networks. In ICLR, 2020.
- Shervashidze et al. (2011) Shervashidze, N., Schweitzer, P., van Leeuwen, E. J., Mehlhorn, K., and Borgwardt, K. M. Weisfeiler-lehman graph kernels. JMLR, 2011.
- Shirzad et al. (2023) Shirzad, H., Velingker, A., Venkatachalam, B., Sutherland, D. J., and Sinop, A. K. Exphormer: Sparse transformers for graphs. In ICML, 2023.
- Thomas et al. (2018) Thomas, N., Smidt, T., Kearnes, S., Yang, L., Li, L., Kohlhoff, K., and Riley, P. Tensor field networks: Rotation-and translation-equivariant neural networks for 3d point clouds. arXiv preprint arXiv:1802.08219, 2018.
- Tönshoff et al. (2023) Tönshoff, J., Ritzert, M., Rosenbluth, E., and Grohe, M. Where did the gap go? reassessing the long-range graph benchmark. CoRR, abs/2309.00367, 2023.
- Wang et al. (2022) Wang, H., Yin, H., Zhang, M., and Li, P. Equivariant and stable positional encoding for more powerful graph neural networks. In ICLR, 2022.
- Wang & Zhang (2022) Wang, X. and Zhang, M. Graph neural network with local frame for molecular potential energy surface. LoG, 2022.
- Weiler et al. (2018) Weiler, M., Geiger, M., Welling, M., Boomsma, W., and Cohen, T. 3d steerable cnns: Learning rotationally equivariant features in volumetric data. In NeurIPS, 2018.
- Wijesinghe & Wang (2022) Wijesinghe, A. and Wang, Q. A new perspective on ”how graph neural networks go beyond weisfeiler-lehman?”. In ICLR, 2022.
- Winkels & Cohen (2018) Winkels, M. and Cohen, T. S. 3d g-cnns for pulmonary nodule detection, 2018.
- Worrall et al. (2017) Worrall, D. E., Garbin, S. J., Turmukhambetov, D., and Brostow, G. J. Harmonic networks: Deep translation and rotation equivariance. In CVPR, 2017.
- Wu et al. (2017) Wu, Z., Ramsundar, B., Feinberg, E. N., Gomes, J., Geniesse, C., Pappu, A. S., Leswing, K., and Pande, V. S. Moleculenet: A benchmark for molecular machine learning, 2017.
- Wu et al. (2021) Wu, Z., Jain, P., Wright, M. A., Mirhoseini, A., Gonzalez, J. E., and Stoica, I. Representing long-range context for graph neural networks with global attention. In NeurIPS, 2021.
- Xu et al. (2019a) Xu, K., Hu, W., Leskovec, J., and Jegelka, S. How powerful are graph neural networks? In ICLR, 2019a.
- Xu et al. (2019b) Xu, K., Hu, W., Leskovec, J., and Jegelka, S. How powerful are graph neural networks? In ICLR, 2019b.
- Ying et al. (2021) Ying, C., Cai, T., Luo, S., Zheng, S., Ke, G., He, D., Shen, Y., and Liu, T. Do transformers really perform badly for graph representation? In NeurIPS, 2021.
- You et al. (2021) You, J., Selman, J. M. G., Ying, R., and Leskovec, J. Identity-aware graph neural networks. In AAAI, 2021.
- Zhang et al. (2023a) Zhang, B., Feng, G., Du, Y., He, D., and Wang, L. A complete expressiveness hierarchy for subgraph gnns via subgraph weisfeiler-lehman tests. In ICML, 2023a.
- Zhang et al. (2023b) Zhang, B., Luo, S., Wang, L., and He, D. Rethinking the expressive power of gnns via graph biconnectivity. In ICLR, 2023b.
- Zhang & Li (2021) Zhang, M. and Li, P. Nested graph neural networks. In NeurIPS, 2021.
- Zhang et al. (2018) Zhang, M., Cui, Z., Neumann, M., and Chen, Y. An end-to-end deep learning architecture for graph classification. In AAAI, 2018.
- Zhao et al. (2022) Zhao, L., Jin, W., Akoglu, L., and Shah, N. From stars to subgraphs: Uplifting any GNN with local structure awareness. In ICLR, 2022.
- Zhou et al. (2023) Zhou, J., Feng, J., Wang, X., and Zhang, M. Distance-restricted folklore weisfeiler-leman gnns with provable cycle counting power, 2023.
Appendix A Proof
A.1 Proof of Proposition 2.3
Let $Q_1,Q_2\in\mathbb{R}^{n\times r}$ be SRDs of the same matrix $L$, so that $Q_1Q_1^T = Q_2Q_2^T = L$. Since $Q_1$ and $Q_2$ are of full column rank, $Q_1^TQ_1$ and $Q_2^TQ_2$ are invertible, and the column spaces of $Q_1$, $Q_2$, and $L$ all coincide. Hence there exists $R\in\mathbb{R}^{r\times r}$ with $Q_2 = Q_1R$.

Substituting this into $Q_1Q_1^T = Q_2Q_2^T$ gives $Q_1Q_1^T = Q_1RR^TQ_1^T$. Multiplying by $Q_1^T$ on the left and $Q_1$ on the right yields $(Q_1^TQ_1)(Q_1^TQ_1) = (Q_1^TQ_1)RR^T(Q_1^TQ_1)$, so $RR^T=I$ by the invertibility of $Q_1^TQ_1$. Therefore $R$ is orthogonal and $Q_1 = Q_2R^T$ with $R^T\in O(r)$, i.e., any two SRDs of the same matrix are connected by an orthogonal transformation.

Conversely, if there exists an orthogonal matrix $R$ with $Q_1=Q_2R$, then $Q_1Q_1^T = Q_2RR^TQ_2^T = Q_2Q_2^T$, so $Q_1$ and $Q_2$ are SRDs of the same matrix.
A.2 Matrix D+A is Always Positive Semi-Definite
For any $x\in\mathbb{R}^{n}$,

$$x^T(D+A)x = \sum_{v\in V} d_vx_v^2 + 2\sum_{(u,v)\in E} x_ux_v = \sum_{(u,v)\in E}(x_u+x_v)^2 \ge 0.$$

Therefore, $D+A$ is always positive semi-definite.
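A quick numerical sanity check of this fact (our NumPy snippet, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    n = int(rng.integers(2, 10))
    A = (rng.random((n, n)) < 0.4).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                   # random simple undirected graph
    L = np.diag(A.sum(1)) + A                     # D + A
    assert np.linalg.eigvalsh(L).min() > -1e-9    # non-negative spectrum => PSD
```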
A.3 Proof of Theorem 3.1
We restate the theorem here:
Theorem A.1.
Given two graphs $(A,X)$ and $(A',X')$ with degree matrices $D$ and $D'$, respectively, the two graphs are isomorphic ($(A,X)\simeq(A',X')$) if and only if there exists $R\in O(r)$ such that $\{(X_v,RQ_v)\,|\,v\in V\}=\{(X'_v,Q'_v)\,|\,v\in V'\}$, where $r$ denotes the rank of $D+A$, and $Q$ and $Q'$ are the symmetric rank decompositions of $D+A$ and $D'+A'$ respectively.
Proof.
Two graphs are isomorphic , and .
Now we prove that , and .
When , there exists permutation , . Therefore,
(21) |
∎
As , , where is an vector with all elements .
(22) |
A.4 Proof of Theorem 3.3
Now we restate the theorem.
Theorem A.2.
Given two graphs and , and an injective permutation-equivariant function mapping adjacency matrix to a symmetric matrix: (1) For all permutation-equivariant function , if , then the two sets of PSRD coordinates are equal up to an orthogonal transformation, i.e., , where is the rank of , are the PSRD coordinates of and respectively. (2) There exists a continuous permutation-equivariant function , such that if , where and are two output channels of .
Proof.
First, as is an injective permutation equivariant function, forall permutation
(23) |
Therefore, two matrix are isomorphic , where denote respectively.
In this proof, we denote eigendecomposition as and , where elements in and are sorted in ascending order. Let the multiplicity of eigenvalues in be , corresponding to eigenvalues .
(1) If , there exists a permutation , .
(24) |
is also an eigendecomposition of , so as they are both sorted in ascending order. Moreover, since are both matrices of eigenvectors, they can differ only in the choice of bases in each eigensubspace. So there exists a block diagonal matrix with orthogonal matrix as diagonal blocks that .
As is a permutation equivariant function,
(25) | ||||
(26) | ||||
(27) |
Therefore, will produce the same value on positions with the same eigenvalue. Therefore, can be consider as a block diagonal matrix with as diagonal blocks, where is , is a position that , and is an identity matrix .
Therefore,
(28) |
Therefore,
(29) | ||||
(30) | ||||
(31) | ||||
(32) |
As ,
(33) |
(2) We simply define $f_1$ as the element-wise absolute value followed by the square root, and $f_2$ as the element-wise absolute value followed by the square root and multiplied by the sign. Therefore, $f_1$ and $f_2$ are continuous and permutation-equivariant.
if ,
. then there exist , so that
(34) |
(35) |
(36) |
Therefore,
(37) | ||||
(38) | ||||
(39) | ||||
(40) |
As , two graphs are isomorphic.
∎
A.5 Proof of Theorem 5.3
Let $C_n$ denote a cycle of $n$ nodes. Let $2C_n$ denote the graph with two connected components, each of which is $C_n$. Obviously, there exists a node pair in $2C_n$ with shortest path distance equal to infinity, while $C_{2n}$ does not have such a node pair, so the multisets of shortest path distances easily distinguish $C_{2n}$ from $2C_n$. However, both are regular graphs with all node degrees equal to $2$, so MPNNs cannot distinguish them:
Lemma A.3.
For all nodes $v$ in $C_{2n}$ and $2C_n$, a $k$-layer MPNN produces the same representation, for all $k$.
Proof.
We prove it by induction on the number of layers $k$.

At initialization ($k=0$), all nodes carry the same trivial node feature, so their representations are the same.

Assume a $k$-layer MPNN still produces the same representation $h^{(k)}$ for all nodes. At the $(k+1)$-th layer, each node's representation is updated from its own representation and the representations of its two neighbors:

$$h_v^{(k+1)} = \mathrm{UPDATE}\Big(h_v^{(k)},\ \mathrm{AGGREGATE}\big(\{\!\!\{h_u^{(k)} \,|\, u\in N(v)\}\!\!\}\big)\Big) = \mathrm{UPDATE}\Big(h^{(k)},\ \mathrm{AGGREGATE}\big(\{\!\!\{h^{(k)}, h^{(k)}\}\!\!\}\big)\Big).$$

So all nodes still have the same representation. ∎
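The counterexample can be verified mechanically. The sketch below (our NumPy implementation of standard 1-WL color refinement, which upper-bounds MPNN expressivity) shows that $C_8$ and two disjoint copies of $C_4$ receive identical color multisets in every round:

```python
import numpy as np

def cycle(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    return A

def wl_colors(A, rounds=10):
    """Multiset of node colors after 1-WL color refinement with trivial initial features."""
    n = len(A)
    colors = [0] * n
    for _ in range(rounds):
        signatures = [(colors[i], tuple(sorted(colors[j] for j in range(n) if A[i, j])))
                      for i in range(n)]
        relabel = {sig: c for c, sig in enumerate(sorted(set(signatures)))}
        colors = [relabel[sig] for sig in signatures]
    return sorted(colors)

C8 = cycle(8)
two_C4 = np.block([[cycle(4), np.zeros((4, 4))],
                   [np.zeros((4, 4)), cycle(4)]])
assert wl_colors(C8) == wl_colors(two_C4)   # 1-WL, and hence MPNNs, cannot tell them apart
```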
A.6 Proof of Theorem 5.2 and B.3
Given two function , can be expressed by means that there exists a function , which is equivalent to given arbitrary input , . We use to denote that can be expressed with . If both and , there exists a bijective mapping between the output of to the output of , denoted as .
Here are some basic rule.
-
•
.
-
•
.
-
•
is bijective,
The 2-folklore Weisfeiler-Leman test produces a color for each node pair at the $k$-th iteration. It updates the colors as follows,
(42) |
The color of the whole graph is
(43) |
Initially, tuple color hashes the node feature and edge between the node pair, .
We are going to prove that
Lemma A.4.
Forall , can express , , where is the adjacency matrix of the input graph.
Proof.
We prove it by induction on .
-
•
When , in initialization.
-
•
If . For all
(44) (45) (46) (47)
∎
To prove that -iteration 2-FWL cannot compute shortest path distance larger than , we are going to construct an example.
Lemma A.5.
Let denote a circle of nodes. Let denote a graph of two connected components, one is and the other is . , 2-FWL can not distinguish and , where . However, contains node tuple with shortest path distance between them while does not, any model count up to shortest path distance can count it.
Proof.
Given a fixed , we enumerate the iterations of 2-FWL. Given two graphs , , we partition all tuples in each graph according to the shortest path distance between nodes: , where contains all tuples with shortest path distance between them as , and contains all tuples with shortest path distance between them larger than . We are going to prove that at -th layer , all nodes have the same representation (denoted as ) nodes all have the same representation (denoted as ).
Initially, all tuples have representation , all tuples have the same representation in both graph, and all other tuples have the same representation .
Assume at -th layer, all nodes have the same representation , tuples all have the same representation . At -th layer, each representation is updated as follows.
(48) |
For all tuples, the multiset has elements in total.
For tuples, the multiset have 1 as , 2 for respectively as v is the -hop neighbor of , and all elements left are as is not in the -hop neighbor of .
For tuples: the multiset have 1 for respectively as is on the shortest path between , and 1 for respectively, and 1 for respectively, with other elements are .
For tuples: the multiset have 1 for respectively as is on the shortest path between , 1 for respectively as is on the shortest path between , and 1 for respectively, and 1 for respectively, with other elements are .
For : the multiset are all the same : 2 and 2 for , respectively. ∎
A.7 Proof of Theorem 5.1
We can simply choose $f_k(\Lambda)=\Lambda^{k}$ for $k=1,\dots,l$ together with $f_0(\Lambda)=\mathbf{1}$. Then $\langle Q(A,f_k)_i, Q(A,f_0)_j\rangle = (U\mathrm{diag}(\Lambda^{k})U^T)_{ij} = (A^{k})_{ij}$. The shortest path distance is

$$\mathrm{spd}(i,j) = \min\{k \,|\, (A^{k})_{ij} > 0\}. \qquad (49)$$
A.8 Proof of Theorem 5.4 and 5.5
This section assumes that the input graph is undirected and unweighted with no self-loops. Let $A$ denote the adjacency matrix of the graph; note that $A$ is symmetric with zero diagonal.
An $l$-path is a sequence of edges $(v_0,v_1),(v_1,v_2),\dots,(v_{l-1},v_l)$, where all nodes are distinct from each other. An $l$-cycle is an $l$-path except that $v_0=v_l$. Two paths/cycles are considered equivalent if their sets of edges are equal. The count of $l$-paths from node $i$ to node $j$ is the number of non-equivalent $l$-paths with $v_0=i$ and $v_l=j$. The count of $l$-cycles rooted at node $i$ is the number of non-equivalent $l$-cycles involving node $i$.
Perepechko & Voropaev (2009) show that the number of paths can be expressed with a polynomial of $A$, the adjacency matrix of the input unweighted graph. Specifically, let $P_l$ denote the path matrix whose element $(P_l)_{ij}$ is the number of $l$-paths from $i$ to $j$; Perepechko & Voropaev (2009) provide formulae expressing $P_l$ with $A$ for small $l$.
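For the smallest case, the following brute-force sketch (our illustration of the definitions above, not the closed-form formulae of Perepechko & Voropaev) counts $l$-paths directly and confirms that, off the diagonal, the number of 2-paths equals $(A^2)_{ij}$:

```python
import numpy as np

def count_paths(A, l):
    """P[i, j] = number of l-edge paths (all nodes distinct) from i to j."""
    n = len(A)
    P = np.zeros((n, n), dtype=int)

    def extend(path):
        if len(path) == l + 1:
            P[path[0], path[-1]] += 1
            return
        for nxt in range(n):
            if A[path[-1], nxt] and nxt not in path:
                extend(path + [nxt])

    for start in range(n):
        extend([start])
    return P

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
P2, A2 = count_paths(A, 2), np.linalg.matrix_power(A, 2)
off_diag = ~np.eye(4, dtype=bool)
assert np.array_equal(P2[off_diag], A2[off_diag].astype(int))   # 2-paths match (A^2)_{ij} for i != j
```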
This section considers a weakened version of the Point Set Transformer. Each layer still consists of an sv-mixer and multi-head attention. However, the multi-head attention computes its attention matrix from the scalar and vector features before the sv-mixer, while using the features after the sv-mixer for its remaining inputs.
At the $k$-th layer, the sv-mixer computes:
(50)
(51)
Attention layer:
(52)
Since the features after the sv-mixer can express the features before it (via the residual connections), the weakened version can be expressed by the original version.
(53)
(54)
Let $\mathrm{Attn}^k$ denote the attention matrix at the $k$-th layer; it is a learnable function of the input features. Let $\mathcal{A}_k$ denote the polynomial space (in $A$) that $\mathrm{Attn}^k$ can express. Each element of it is a function from node pairs to real numbers.
We are going to prove some lemmas about this space.
Lemma A.6.
The attention matrix of layer $k+1$ can express that of layer $k$; that is, $\mathcal{A}_k \subseteq \mathcal{A}_{k+1}$.
Proof.
As there exist residual connections, the scalar and vector representations of layer $k+1$ can always contain those of layer $k$, so the attention matrix of layer $k+1$ can always express that of layer $k$. ∎
Lemma A.7.
If two matrices can be expressed by the attention matrix, their Hadamard product can also be expressed.
Proof.
As the Hadamard product is an element-wise polynomial on a compact domain, there exists an MLP (denoted as $\mathrm{MLP}_0$) that takes the elements of the two matrices and produces the corresponding elements of their Hadamard product. Assume $\mathrm{MLP}_1$ is the MLP in Equation 53 that produces the concatenation of the two matrices; using $\mathrm{MLP}_0 \circ \mathrm{MLP}_1$ (the composition of the two MLPs) as the MLP in Equation 53 produces the Hadamard product. ∎
Lemma A.8.
If two matrices can be expressed by the attention matrix, their linear combinations can also be expressed, where the combination coefficients are arbitrary real numbers.
Proof.
Lemma A.9.
The attention matrix of the first layer can express the adjacency matrix $A$ and its powers.
Proof.
As shown in Section 5.1, the inner product of coordinates can produce these matrices. ∎
Lemma A.10.
Proof.
According to Equation 50 and Equation 52, the attention matrix at the $k$-th layer can express the required elements for all node pairs. Therefore, at the $(k+1)$-th layer, the update in Equation 52 can first compute the corresponding element from the scalar features, then multiply it with the remaining factors, and thus produce the product.
Moreover, according to Equation 53, the attention matrix at the $(k+1)$-th layer can express this quantity, so at the $(k+1)$-th layer the corresponding elements can be expressed.
∎
Therefore, we have the following lemma.
Lemma A.11.
• , .
• , .
• ,
Therefore, we come to our main theorem.
Theorem A.12.
The attention matrix of 1-layer PST can express , 2-layer PST can express , 3-layer PST can express , 5-layer PST can express .
Proof.
(56)
Three kinds of basis terms appear; a 2-layer PST can express each of them and hence the formula.
(57)
Four kinds of basis terms appear; a 3-layer PST can express each of them and hence the formula.
(58)-(66)
The basis terms above are grouped according to the smallest attention space that contains them; the individual memberships follow from Lemmas A.6-A.11. Every basis term lies in the space expressible by the attention matrix of at most a 3-layer PST, so a 3-layer PST can express the formula.
The formula for the next path matrix is quite long.
(68)-(90)
The basis terms above are again grouped according to the smallest attention space that contains them; the memberships follow from Lemmas A.6-A.11. Every basis term lies in the space expressible by the attention matrix of at most a 5-layer PST, so a 5-layer PST can express the formula. ∎
Counting cycles is closely related to counting paths. A $k$-cycle containing edge $(u,v)$ can be decomposed into a $(k-1)$-path from $u$ to $v$ and the edge $(u,v)$. Therefore, the vector of counts of $k$-cycles rooted at each node can be expressed with the $(k-1)$-path matrix and the adjacency matrix.
Theorem A.13.
The diagonal elements of attention matrix of 2-layer PST can express , 3-layer PST can express , 4-layer PST can express , 6-layer PST can express .
Proof.
It is a direct corollary of Theorem A.12, as the rooted cycle counts are functions of the corresponding path matrices and the adjacency matrix. ∎
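Under this decomposition, each $k$-cycle through a node $u$ is found twice when pairing a $(k-1)$-path from $u$ to a neighbor $v$ with the closing edge $(u,v)$, which suggests the count $\tfrac{1}{2}\sum_v (P_{k-1})_{uv} A_{uv}$ for $k \ge 3$; the brute-force check below (reusing the `path_matrix_bruteforce` helper from the earlier sketch) verifies this reading for triangles on a small graph.

```python
# A brute-force check of the path-to-cycle reduction described above: pairing a (k-1)-path
# from u to a neighbor v with the closing edge (u, v) finds each k-cycle through u exactly
# twice, so the rooted count should equal 0.5 * sum_v (P_{k-1})[u, v] * A[u, v] (k >= 3).
# `path_matrix_bruteforce` is the helper from the earlier sketch.
import numpy as np
from itertools import permutations

def rooted_cycle_counts_bruteforce(A: np.ndarray, k: int) -> np.ndarray:
    """count[u] = number of distinct k-cycles (as edge sets) passing through node u."""
    n = A.shape[0]
    counts = np.zeros(n, dtype=int)
    seen = set()
    for nodes in permutations(range(n), k):                 # candidate cycle u_0, ..., u_{k-1}
        if all(A[nodes[i], nodes[(i + 1) % k]] for i in range(k)):
            edges = frozenset(frozenset((nodes[i], nodes[(i + 1) % k])) for i in range(k))
            if edges not in seen:
                seen.add(edges)
                for u in nodes:
                    counts[u] += 1
    return counts

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]])
k = 3
P = path_matrix_bruteforce(A, k - 1)
assert np.array_equal(rooted_cycle_counts_bruteforce(A, k), (P * A).sum(axis=1) // 2)
```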
Appendix B Expressivity Comparison with Other Models
Algorithm A is considered more expressive than algorithm B if it can differentiate between all pairs of graphs that algorithm B can distinguish. If, in addition, there is a pair of graphs that algorithm A can distinguish while B cannot, we say that A is strictly more expressive than B. We will first demonstrate the greater expressivity of our model by using PST to simulate other models. Subsequently, we will establish the strictness of our model by providing a concrete example.
Our transformer incorporates inner products of coordinates, which naturally allows us to express shortest path distances and various node-to-node distance metrics. These concepts are discussed in more detail in Section 5.1. This naturally leads to the following theorem, which compares our PST with GIN (Xu et al., 2019a).
Theorem B.1.
A -layer Point Set Transformer is strictly more expressive than a -layer GIN.
Proof.
We first prove that one PST layer can simulate a GIN layer.
Given node features and coordinates, we can assume without loss of generality that one channel of the scalar representation contains the node feature. The sv-mixer can simulate an MLP applied to it, leading to the transformed feature. A GIN layer then updates node representations as follows,
(92)
By inner products of coordinates, the attention matrix can express the adjacency matrix. We set the remaining weights appropriately and let one weight matrix be a diagonal matrix whose only nonzero diagonal entries lie at the rows corresponding to the channel of the node feature.
(93)
Let the MLP express an identity function.
(94)
The attention layer will then produce
(95)
With the residual connection, the layer can express the GIN update
(96)
Moreover, as shown in Theorem 5.3, MPNN cannot compute shortest path distance, while PST can. So PST is strictly more expressive. ∎
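The following numerical sketch illustrates the two ingredients of this proof, without reproducing the exact PST parameterization: coordinates obtained from the Laplacian eigendecomposition have a Gram matrix from which the adjacency matrix can be recovered, and an attention step with those weights plus a scaled residual matches GIN's aggregation before its MLP.

```python
# A numerical sketch of the two steps above (not the exact PST parameterization):
# coordinates from the Laplacian eigendecomposition have a Gram matrix from which A is
# recovered, and one attention step with those weights plus a scaled residual matches
# GIN's aggregation (1 + eps) * x_u + sum_{v ~ u} x_v before GIN's MLP.
import torch

torch.manual_seed(0)
A = torch.tensor([[0, 1, 0, 0, 1],
                  [1, 0, 1, 0, 0],
                  [0, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1],
                  [1, 0, 0, 1, 0]], dtype=torch.float)
X = torch.randn(A.shape[0], 8)
eps = 0.3

L = torch.diag(A.sum(1)) - A                     # graph Laplacian
lam, Q = torch.linalg.eigh(L)
coords = Q * lam.clamp(min=0).sqrt()             # Gram matrix of these coordinates equals L
gram = coords @ coords.T
A_rec = torch.diag(torch.diag(gram)) - gram      # recover A = diag(L) - L
assert torch.allclose(A_rec, A, atol=1e-5)

out = (1 + eps) * X + A_rec @ X                  # scaled residual + attention-weighted sum
assert torch.allclose(out, (1 + eps) * X + A @ X, atol=1e-4)
```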
Moreover, our transformer is strictly more expressive than some representative graph transformers, including Graphormer (Ying et al., 2021) and GPS with RWSE as structural encoding (Rampásek et al., 2022).
Theorem B.2.
A -layer Point Set Transformer is strictly more expressive than a -layer Graphormer and a -layer GPS.
Proof.
We first prove that a -layer Point Set Transformer is more expressive than a -layer Graphormer and a -layer GPS.
At initialization, besides the original node features, Graphormer further adds node degree features and GPS further utilizes RWSE. Our PST can add these features within the first sv-mixer layer.
(97)
Here, we add coordinate inner products, which can express RWSE (the diagonal elements of the random walk matrix) and node degree (see Appendix D), to the node features.
Then we prove that one PST layer can express one GPS layer and one Graphormer layer. PST's attention matrix is as follows,
(98)
The Hadamard product with an MLP can express the inner product of node representations used in Graphormer and GPS. The inner product of coordinates can express the adjacency matrix used in GPS and Graphormer and the shortest path distance used in Graphormer. Therefore, PST's attention matrix can express the attention matrices of GPS and Graphormer.
To prove strictness, Figure 2(c) in (Zhang et al., 2023b) provides an example. As PST can capture resistance distance and simulate 1-WL, it can differentiate the two graphs according to Theorem 4.2 in (Zhang et al., 2023b). However, Graphormer cannot distinguish the two graphs, as proved in (Zhang et al., 2023b).
For GPS, the two graphs in Figure 2(c) have the same RWSE. The RWSE is
(99)
where the eigendecomposition of the normalized adjacency matrix is used. By computation, we find that the two graphs share the same spectral quantities. Moreover, the relevant terms are equal in the two graphs for all steps up to the number of nodes in the graphs, and the terms with larger step counts are only linear combinations of them and thus also equal. So the RWSEs in the two graphs are equal and are equivalent to simply assigning one feature to the center node and another feature to all other nodes in both graphs. Then GPS simply runs a model that is a submodule of Graphormer on the graph and thus cannot differentiate the two graphs either. ∎
Even against a highly expressive method such as 2-FWL, our models can surpass it in expressivity with a limited number of layers:
Theorem B.3.
For all , there exists a pair of graphs that a -iteration 2-FWL method fails to distinguish, while a -layer Point Set Transformer can.
Proof.
It is a direct corollary of Theorem 5.2. ∎
Appendix C Some Graph Transformers Fail to Compute Shortest Path Distance
First, we demonstrate that computing inner products of node representations alone cannot accurately determine the shortest path distance when the node representations are permutation-equivariant. Consider Figure 2 as an illustration. In cases where node representations exhibit permutation-equivariance, the two highlighted nodes share identical representations. Consequently, the two highlighted pairs yield the same inner products of node representations despite having different shortest path distances. It then becomes evident that the attention matrices of some Graph Transformers are incapable of accurately computing the shortest path distance.
Theorem C.1.
GPS with RWSE (Rampásek et al., 2022) and Graphormer without shortest path distance encoding cannot compute the shortest path distance from the elements of their attention matrices.
Proof.
Their attention matrix elements are functions of the inner products of node representations and the adjacency matrix.
(100)
This element is equal for the two node pairs in Figure 2. However, the two pairs have different shortest path distances. ∎
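A small self-contained example in the same spirit (not the graphs of Figure 2): on a 6-cycle, any permutation-equivariant encoder assigns all nodes identical representations because the graph is vertex-transitive, so the pairs (0, 1) and (0, 3) receive identical inner products even though their shortest path distances are 1 and 3.

```python
# A self-contained example in the spirit of the argument above: on a 6-cycle, any
# permutation-equivariant encoder assigns every node the same representation (the graph
# is vertex-transitive), so the pairs (0, 1) and (0, 3) get identical inner products even
# though their shortest path distances are 1 and 3.
import numpy as np

n = 6
A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)   # 6-cycle adjacency
X = np.ones((n, 4))                       # permutation-equivariant (constant) initialization
for _ in range(3):                        # a few rounds of sum aggregation as an MPNN stand-in
    X = np.tanh(A @ X + X)

assert np.allclose(X[0] @ X[1], X[0] @ X[3])   # equal inner products, distances 1 vs 3
```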
Furthermore, while Graph Transformers gather information from the entire graph, they may not have the capacity to emulate multiple MPNN layers with just a single transformer layer. To analyze this, we introduce the concept of a vanilla Graph Transformer, which applies a standard Transformer to nodes using the adjacency matrix for relative positional encoding. This leads us to the following theorem.
Theorem C.2.
For all , there exists a pair of graphs that a -layer MPNN can differentiate while a -layer MPNN and a -layer vanilla Graph Transformer cannot.
Proof.
Let $C_n$ denote a cycle of $n$ nodes, and let the second graph consist of two components, each of which is a cycle. Label the first graph by adding a unique feature to one node of the single cycle (as all of its nodes are symmetric, the selection of the node does not matter), and label the second graph by adding a unique feature to one node of the two-component graph. All other nodes share the same feature. Now we prove the following lemma.
Lemma C.3.
For all $k$, a $(k+1)$-layer MPNN can distinguish the two labeled graphs, while a $k$-layer MPNN and a vanilla Graph Transformer of corresponding depth cannot.
Given the two labeled graphs, we assign each node a color according to its distance to the node with the extra label: color $0$ (the labeled node itself), color $1$ (the two nodes connected to the labeled node), color $2$ (the two nodes whose shortest path distance to the labeled node is $2$), …, color $k$ (the two nodes whose shortest path distance to the labeled node is $k$), and a final color (nodes whose shortest path distance to the labeled node is larger than $k$). Now, by simulating the process of MPNN, we prove that at the $j$-th layer ($j \le k$), nodes of the same color have the same representation, and nodes with the final color all have the same representation.
Initially, the labeled node has one representation and all other nodes have another representation, identically in both graphs.
Assume that at the $j$-th layer, nodes of the same color have the same representation, and nodes with the final color all have the same representation. At the $(j+1)$-th layer, the labeled node has two neighbors with the same representation; every node of an intermediate color has two neighbors whose representations correspond to the two adjacent colors; nodes at the boundary of the colored region have one neighbor with the color-$j$ representation and one with the final representation; and all remaining nodes have two neighbors with the final representation. So nodes of the same color again share the same representation, and the remaining nodes all share the same representation.
The same induction also holds for the vanilla Graph Transformer of corresponding depth.
However, in the $(k+1)$-th message passing layer, only one node in one of the graphs is at shortest path distance $k+1$ from the labeled node and has two neighbors with the color-$k$ representation, while no such node exists in the other graph. So a $(k+1)$-layer MPNN can distinguish them. ∎
The issue with a vanilla Graph Transformer is that, although it collects information from all nodes in the graph, it can only determine the presence of features in 1-hop neighbors. It lacks the ability to recognize features in higher-order neighbors, such as those in 2-hop or 3-hop neighbors. A straightforward solution to this problem is to manually include the shortest path distance as a feature. However, our analysis highlights that aggregating information from the entire graph is insufficient for capturing long-range interactions.

Method | Description | Connection |
---|---|---|
Random walk matrix (Li et al., 2020; Dwivedi et al., 2023; Rampásek et al., 2022) | The $k$-step random walk matrix is $(D^{-1}A)^k$, whose element $(u,v)$ is the probability that a $k$-step random walk starting from node $u$ ends at node $v$. | |
Heat kernel matrix (Mialon et al., 2021) | The heat kernel is a solution of the heat equation. Its element $(u,v)$ represents how much heat diffuses from node $u$ to node $v$. | |
Resistance distance (Zhang & Li, 2021; Zhang et al., 2023b) | Its element $(u,v)$ is the effective resistance between nodes $u$ and $v$ when the graph is viewed as an electrical network. It can be computed from the pseudo-inverse of the Laplacian matrix. | |
Equivariant and stable Laplacian PE (Wang et al., 2022) | The encoding of a node pair is built from a vector whose elements correspond to the largest eigenvalues of the Laplacian. | |
Degree and number of triangles (Bouritsas et al., 2023) | The degree is the number of edges incident to a node, and the triangle count is the number of triangles rooted at the node. | |
Appendix D Connection with Structural Embeddings
We show the equivalence between the structural embeddings and the inner products of our PSRD coordinates in Table 5. The inner products of PSRD coordinates can unify a wide range of positional encodings, including random walk (Li et al., 2020; Dwivedi et al., 2023; Rampásek et al., 2022), heat kernel (Mialon et al., 2021), and resistance distance (Zhang & Li, 2021; Zhang et al., 2023b).
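The sketch below illustrates this equivalence numerically on a small graph: the heat kernel, the pseudo-inverse of the Laplacian underlying resistance distance, and the random-walk diagonal (RWSE) are all obtained by rescaling eigenvector inner products with a function of the eigenvalues. It is an illustration of the connections in Table 5 rather than our implementation.

```python
# An illustration (not our implementation) of the connections in Table 5: the heat kernel,
# the Laplacian pseudo-inverse behind resistance distance, and the random-walk diagonal
# (RWSE) are all eigenvector inner products rescaled by a function of the eigenvalues.
import numpy as np
from scipy.linalg import expm

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
deg = A.sum(1)
L = np.diag(deg) - A
lam, U = np.linalg.eigh(L)                                   # Laplacian eigendecomposition

# Heat kernel: exp(-t L) = U diag(exp(-t lam)) U^T.
t = 0.7
heat = (U * np.exp(-t * lam)) @ U.T
assert np.allclose(heat, expm(-t * L))

# Resistance distance: R_uv = L+_uu + L+_vv - 2 L+_uv, with L+ built from nonzero eigenvalues.
inv_lam = np.where(lam > 1e-8, 1.0 / np.where(lam > 1e-8, lam, 1.0), 0.0)
Lp = (U * inv_lam) @ U.T
assert np.allclose(Lp, np.linalg.pinv(L))
R = np.diag(Lp)[:, None] + np.diag(Lp)[None, :] - 2 * Lp
assert np.all(R[A > 0] <= 1 + 1e-9)                          # adjacent nodes: resistance <= 1

# RWSE: diag((D^-1 A)^k) equals the diagonal of the k-th power of the normalized adjacency.
k = 3
lam2, U2 = np.linalg.eigh(A / np.sqrt(deg)[:, None] / np.sqrt(deg)[None, :])
rwse = np.diag((U2 * lam2 ** k) @ U2.T)
assert np.allclose(rwse, np.diag(np.linalg.matrix_power(np.diag(1 / deg) @ A, k)))
```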
Appendix E Datasets
Dataset | #Graphs | #Nodes (avg.) | #Edges (avg.) | Task | Metric | Split
---|---|---|---|---|---|---
Synthetic | 5,000 | 18.8 | 31.3 | Node Regression | MAE | 0.3/0.2/0.5
QM9 | 130,831 | 18.0 | 18.7 | Regression | MAE | 0.8/0.1/0.1
ZINC | 12,000 | 23.2 | 24.9 | Regression | MAE | fixed
ZINC-full | 249,456 | 23.2 | 24.9 | Regression | MAE | fixed
ogbg-molhiv | 41,127 | 25.5 | 27.5 | Binary classification | AUC | fixed
MUTAG | 188 | 17.9 | 39.6 | Classification | Accuracy | 10-fold cross validation
PTC-MR | 344 | 14.3 | 14.7 | Classification | Accuracy | 10-fold cross validation
PROTEINS | 1,113 | 39.1 | 145.6 | Classification | Accuracy | 10-fold cross validation
IMDB-BINARY | 1,000 | 19.8 | 193.1 | Classification | Accuracy | 10-fold cross validation
PascalVOC-SP | 11,355 | 479.4 | 2,710.5 | Node Classification | Macro F1 | fixed
Peptides-func | 15,535 | 150.9 | 307.3 | Classification | AP | fixed
Peptides-struct | 15,535 | 150.9 | 307.3 | Regression | MAE | fixed
We summarize the statistics of our datasets in Table 6. Synthetic is the dataset used for the substructure counting tasks provided by Huang et al. (2023b); it consists of random graphs with substructure counts as node labels. QM9 (Wu et al., 2017), ZINC (Gómez-Bombarelli et al., 2016), and ogbg-molhiv are three molecular datasets. QM9 uses 13 quantum chemistry properties as graph labels and provides both the graph and the coordinates of each atom. ZINC provides the graph structure only and aims to predict constrained solubility. Ogbg-molhiv is one of the Open Graph Benchmark datasets and aims to use the graph structure to predict whether a molecule can inhibit HIV virus replication. We further use MUTAG, PTC-MR, PROTEINS, and IMDB-BINARY from the TU database (Ivanov et al., 2019). MUTAG comprises 188 mutagenic aromatic and heteroaromatic nitro compounds. PTC-MR involves 344 chemical compounds with 19 node label types, classifying carcinogenicity for rats. PROTEINS represents secondary structure elements as nodes with edges between neighbors in the amino-acid sequence or in 3D space. IMDB-BINARY features ego-networks of actors/actresses in movie collaborations, with the task of classifying the movie genre of each graph. We also use three datasets from the Long Range Graph Benchmark (Dwivedi et al., 2022b), which consist of larger graphs. PascalVOC-SP comes from the computer vision domain; each node represents a superpixel, and the task is to predict the semantic segmentation label of each node. Peptides-func and Peptides-struct are peptide molecular graphs: the task in Peptides-func is to predict peptide function, while Peptides-struct is to predict 3D properties of the peptide.
Appendix F Experimental Details
Our code, available in the supplementary material, is primarily based on PyTorch (Paszke et al., 2019) and PyG (Fey & Lenssen, 2019). All our experiments are conducted on NVIDIA RTX 3090 GPUs on a Linux server. We use L1 loss for regression tasks and cross-entropy loss for classification tasks. We select the hyperparameters by running Optuna (Akiba et al., 2019) to optimize the validation score. We run each experiment with 8 different seeds, reporting the averaged results at the epoch achieving the best validation metric. For optimization, we use the AdamW optimizer and a cosine annealing scheduler. Hyperparameters for the datasets are shown in Table 7. All PST and PSDS models (except those in the ablation study) decompose the Laplacian matrix to produce coordinates.
Dataset | #warmup epochs | #cosine epochs | weight decay | Gaussian noise | #layers | hidden dim | batch size | lr | #epochs | #params
---|---|---|---|---|---|---|---|---|---|---|
Synthetic | 10 | 15 | 6e-4 | 1e-6 | 9 | 96 | 16 | 0.0006 | 300 | 961k |
qm9 | 1 | 40 | 1e-1 | 1e-5 | 8 | 128 | 256 | 0.001 | 150 | 1587k |
ZINC | 17 | 17 | 1e-1 | 1e-4 | 6 | 80 | 128 | 0.001 | 800 | 472k |
ZINC-full | 40 | 40 | 1e-1 | 1e-6 | 8 | 80 | 512 | 0.003 | 400 | 582k |
ogbg-molhiv | 5 | 5 | 1e-1 | 1e-6 | 6 | 96 | 24 | 0.001 | 300 | 751k |
MUTAG | 20 | 1 | 1e-7 | 1e-4 | 2 | 48 | 64 | 2e-3 | 70 | 82k |
PTC-MR | 25 | 1 | 1e-1 | 1e-3 | 4 | 16 | 64 | 3e-3 | 70 | 15k |
PROTEINS | 25 | 1 | 1e-7 | 3e-3 | 2 | 48 | 8 | 1.5e-3 | 80 | 82k |
IMDB-BINARY | 35 | 1 | 1e-7 | 1e-5 | 3 | 48 | 64 | 3e-3 | 80 | 100k |
PascalVOC-SP | 5 | 5 | 1e-1 | 1e-5 | 4 | 96 | 6 | 0.001 | 40 | 527k |
Peptide-func | 40 | 20 | 1e-1 | 1e-6 | 6 | 128 | 2 | 0.0003 | 80 | 1337k |
Peptide-struct | 40 | 20 | 1e-1 | 1e-6 | 6 | 128 | 2 | 0.0003 | 40 | 1337k |
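For reference, the warmup and cosine-annealing columns above correspond to a schedule of the following form; this is one common PyTorch realization with a placeholder model, not necessarily the exact scheduler code used in our experiments.

```python
# One common PyTorch realization of the schedule (linear warmup, then cosine annealing),
# matching the #warmup/#cosine columns of Table 7 in spirit; the model is a placeholder
# and this is not necessarily the exact scheduler code used in our experiments.
import torch

model = torch.nn.Linear(16, 1)                  # placeholder model
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-1)
warmup = torch.optim.lr_scheduler.LinearLR(opt, start_factor=0.1, total_iters=10)
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=15)
sched = torch.optim.lr_scheduler.SequentialLR(opt, [warmup, cosine], milestones=[10])

for epoch in range(25):
    opt.step()                                  # placeholder for one epoch of training
    sched.step()
```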
ZINC, ZINC-full, PascalVOC-SP, Peptides-func, and Peptides-struct have 500k parameter budgets; other datasets have no parameter limit. Graphormer (Ying et al., 2021) takes 47,000k parameters on ogbg-molhiv, and 1-2-3-GNN takes 929k parameters on QM9. Our PST follows the budgets on ZINC, ZINC-full, and PascalVOC-SP. However, on the Peptides-func and Peptides-struct datasets, we find that the hidden dimension is quite crucial for good performance, so we use a hidden dimension and number of transformer layers comparable to the baselines. This leads to about 1M parameters because our PST employs two sets of parameters (one for scalars and one for vectors), which results in roughly twice the parameters at the same hidden dimension and number of transformer layers. We therefore also conduct experiments with baselines using larger hidden dimensions on these datasets. The results are shown in Table 8. When PST and the baselines are all scaled to roughly 1M parameters, our PST outperforms the baselines under the same budget, showing that our method is effective on the two datasets.
Model | Peptides-func | Peptides-struct
---|---|---|
Graph-MLPMixer | ||
GraphGPS | ||
PST |
Other datasets we explored do not have explicit parameter constraints, and it is worth noting that our PST has fewer parameters than representative baselines in these cases.
Hyperparameter Configuration:
Our experiments involved tuning seven hyperparameters: depth (4, 8), hidden dimension (64, 256), learning rate (0.0001, 0.003), number of warmup epochs (0, 40), number of cosine annealing epochs (1, 20), magnitude of Gaussian noise added to the input (1e-6, 1e-4), and weight decay (1e-6, 1e-1). We observed that [1] also used seven hyperparameters in their setup. Batch size was determined by GPU memory constraints and was not a hyperparameter that we optimized.
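A sketch of this search with Optuna's standard API is given below; `train_and_evaluate` is a hypothetical stand-in for the actual training routine, the trial budget is arbitrary, and the search ranges mirror the ones stated above.

```python
# A sketch of the hyperparameter search with Optuna's standard API; `train_and_evaluate`
# is a hypothetical stand-in for the real training routine, the trial budget is arbitrary,
# and the search ranges mirror the ones stated in the text.
import optuna

def train_and_evaluate(config: dict) -> float:
    # Hypothetical placeholder: the real routine trains a model and returns a validation score.
    return -abs(config["lr"] - 1e-3)

def objective(trial: optuna.Trial) -> float:
    config = {
        "depth": trial.suggest_int("depth", 4, 8),
        "hidden_dim": trial.suggest_int("hidden_dim", 64, 256),
        "lr": trial.suggest_float("lr", 1e-4, 3e-3, log=True),
        "warmup_epochs": trial.suggest_int("warmup_epochs", 0, 40),
        "cosine_epochs": trial.suggest_int("cosine_epochs", 1, 20),
        "noise_magnitude": trial.suggest_float("noise_magnitude", 1e-6, 1e-4, log=True),
        "weight_decay": trial.suggest_float("weight_decay", 1e-6, 1e-1, log=True),
    }
    return train_and_evaluate(config)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=100)
print(study.best_params)
```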
Appendix G Architecture


The architecture of parameterized SRD is shown in Figure 3. As illustrated in Section 3.2, it first performs eigendecomposition to obtain the nonzero eigenvalues and the corresponding eigenvectors, then uses a DeepSet (Segol & Lipman, 2020) to process the eigenvalues, leading to coordinates with multiple channels. The architecture of PST is shown in Figure 4. As illustrated in Section 4, it is composed of scalar-vector mixers and attention layers.
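A compact PyTorch sketch of the parameterized SRD pipeline is given below; module names, dimensions, and the elementwise eigenvalue network (a simplification of the DeepSet described above) are illustrative assumptions rather than the released implementation.

```python
# A compact sketch of parameterized SRD (module names and sizes are illustrative, and the
# elementwise eigenvalue network simplifies the DeepSet described above): eigendecompose
# the Laplacian, keep nonzero eigenvalues, and rescale the eigenvectors per output channel.
import torch
import torch.nn as nn

class PSRDCoordinates(nn.Module):
    def __init__(self, num_channels: int, hidden: int = 32):
        super().__init__()
        self.eig_net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                     nn.Linear(hidden, num_channels))

    def forward(self, A: torch.Tensor) -> torch.Tensor:
        L = torch.diag(A.sum(-1)) - A                       # graph Laplacian
        lam, U = torch.linalg.eigh(L)
        keep = lam > 1e-6                                   # drop (near-)zero eigenvalues
        lam, U = lam[keep], U[:, keep]                      # U: (n, r), lam: (r,)
        scale = self.eig_net(lam.unsqueeze(-1))             # (r, num_channels)
        # One (n, r) coordinate matrix per channel: eigenvectors rescaled channel-wise.
        return U.unsqueeze(0) * scale.T.unsqueeze(1)        # (num_channels, n, r)

A = torch.tensor([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
coords = PSRDCoordinates(num_channels=4)(A)
print(coords.shape)                                         # torch.Size([4, 3, 2])
```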
Appendix H Ablation
To assess the design choices made in our Point Set Transformer (PST), we conducted ablation experiments. First, we replace the PSRD coordinates (see Section 3.2) with plain SRD coordinates, resulting in a reduced version referred to as PST-gc. Additionally, we introduce a variant called PST-onelayer, which differs from PST in that it computes the attention matrix only once and does not mix information between scalar and vector features. Furthermore, PST decomposes the Laplacian matrix by default to produce coordinates; PST-adj instead decomposes the adjacency matrix, and PST-normadj the normalized adjacency matrix. Similar to PST, PSDS takes node coordinates as input, but it uses DeepSet (Segol & Lipman, 2020) rather than a transformer as the set encoder. For better comparison, we also include our strongest baseline on the QM9 dataset, DF (Zhou et al., 2023).
The results of the ablation study conducted on the QM9 dataset are summarized in Table 2. Notably, PST-gc exhibits only a slight increase in test loss compared to PST and even outperforms PST on some of the target metrics, highlighting the effectiveness of the Graph as Point Set approach with vanilla symmetric rank decomposition. In contrast, PST-onelayer performs significantly worse, underscoring the advantages of PST over previous methods that augment adjacency matrices with spectral features. PST-adj and PST-normadj achieve performance similar to PST, illustrating that the choice of the matrix to decompose matters little. PSDS performs worse than PST, but it still outperforms our strongest baseline DF, showing the potential of combining set encoders other than the transformer with our conversion from graph to set. On the long-range graph benchmark, PST maintains a significant performance edge over PST-onelayer. However, the gap between PST and PST-gc widens, further confirming the effectiveness of parameterized coordinates in modeling long-range interactions.
ZPVE | ||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
Unit | D | meV | meV | meV | meV | meV | meV | meV | meV | cal/mol/K | ||
PST | ||||||||||||
PST-onelayer | ||||||||||||
PST-gc | ||||||||||||
PST-adj | ||||||||||||
PST-normadj | ||||||||||||
PSDS | ||||||||||||
DF |
Model | PascalVOC-SP | Peptides-Func | Peptides-Struct |
---|---|---|---|
PST | |||
PST-onelayer | |||
PST-gc |
Appendix I Scalability
We present training time per epoch and GPU memory consumption in Table 11 and Table 12. Due to its architecture, PST has higher time complexity than existing Graph Transformers and does not scale well to large graphs such as those in the PascalVOC-SP dataset. However, on the ZINC dataset, PST ranks as the second fastest model, and its memory consumption is comparable to existing models with strong expressivity, such as SUN and SSWL, and notably lower than PPGN.
PST | SUN | SSWL | PPGN | Graphormer | GPS | SAN-GPS | |
---|---|---|---|---|---|---|---|
Time/s | 15.20 | 20.93 | 45.30 | 20.21 | 123.79 | 11.70 | 79.08 |
Memory/GB | 4.08 | 3.72 | 3.89 | 20.37 | 0.07 | 0.25 | 2.00 |
PST | SUN | SSWL | PPGN | Graphormer | GPS | SAN-GPS | |
---|---|---|---|---|---|---|---|
Time/s | 15.20 | 20.93 | 45.30 | 20.21 | 123.79 | 11.70 | 79.08 |
Memory/GB | 4.08 | 3.72 | 3.89 | 20.37 | 0.07 | 0.25 | 2.00 |
Appendix J Results on TU datasets
Following the setting of Feng et al. (2022), we test our PST on four TU datasets (Ivanov et al., 2019). The results are shown in Table 13. Baselines include WL subtree kernel (Shervashidze et al., 2011), GIN (Xu et al., 2019a), DGCNN (Zhang et al., 2018), GraphSNN (Wijesinghe & Wang, 2022), GNN-AK+ (Zhao et al., 2022), and three variants of KP-GNN (Feng et al., 2022) (KP-GCN, KP-GraphSAGE, and KP-GIN). We use 10-fold cross-validation, where 9 folds are for training and 1 fold is for testing. The average test accuracy is reported. Our PST consistently outperforms our baselines.
Method | MUTAG | PTC-MR | PROTEINS | IMDB-B |
---|---|---|---|---|
WL | 90.4±5.7 | 59.9±4.3 | 75.0±3.1 | 73.8±3.9
GIN | 89.4±5.6 | 64.6±7.0 | 75.9±2.8 | 75.1±5.1
DGCNN | 85.8±1.7 | 58.6±2.5 | 75.5±0.9 | 70.0±0.9
GraphSNN | 91.24±2.5 | 66.96±3.5 | 76.51±2.5 | 76.93±3.3
GIN-AK+ | 91.30±7.0 | 68.20±5.6 | 77.10±5.7 | 75.60±3.7
KP-GCN | 91.7±6.0 | 67.1±6.3 | 75.8±3.5 | 75.9±3.8
KP-GraphSAGE | 91.7±6.5 | 66.5±4.0 | 76.5±4.6 | 76.4±2.7
KP-GIN | 92.2±6.5 | 66.8±6.8 | 75.8±4.6 | 76.6±4.2
PST | 94.4±3.5 | 68.8±4.6 | 80.7±3.5 | 78.9±3.6
Appendix K Point Set DeepSet
Besides the Transformer, we also propose a DeepSet (Segol & Lipman, 2020)-based set encoder for point sets, Point Set DeepSet (PSDS), to illustrate the versatility of our graph-to-set method. Similar to PST, PSDS operates on points carrying two types of representations: scalars, which remain invariant to coordinate orthogonal transformations, and vectors, which adapt equivariantly to coordinate changes. For a point, its scalar representation is denoted by $s_i \in \mathbb{R}^{d}$ and its vector representation by $v_i \in \mathbb{R}^{d \times r}$, where $d$ is the hidden dimension and $r$ is the rank of the coordinates. The scalar representation is initialized with the input node feature, and the vector representation is initialized with the parameterized coordinates containing graph structural information, as detailed in Section 3.2. Similar to DeepSet, PSDS transforms point representations individually, aggregates them to produce global features, and combines the global features with individual point representations to update the point representations.
Scalar-Vector Mixer. This component individually transforms point representations. To enable the information exchange between vector and scalar features, we design a mixer architecture as follows.
(101)
(102)
Here, and are learnable matrices for mixing different channels of vector features. Additionally, and represent two multi-layer perceptrons transforming scalar representations. The operation takes the diagonal elements of a matrix, which translates vectors to scalars, while transforms scalar features into vectors. As , the scalar update is invariant to orthogonal transformations of the coordinates. Similarly, the vector update is equivariant to .
Aggregator. This component aggregates individual point representations for global features .
(103)
(104)
(105)
Here, the two learnable matrices denote linear transformations of the vectors, and the MLP transforms the scalars. The global scalar feature is a scalar, the global vector feature is a vector, and the last aggregated quantity is the sum of squares of each vector.
Point Representation Update. Each point representation is updated by combining global features.
(106)
(107)
The scalar representation is combined with the global scalar feature and transformed with an MLP. The vector representation is combined with the global vector features through a linear layer.
Pooling. After several layers, we pool all points’ scalar representations as the set representation .
(108)
where Pool is a pooling function such as sum, mean, or max.
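The following PyTorch sketch mirrors the component wiring of one PSDS layer (scalar-vector mixer, aggregator, point update) and the final pooling; it does not reproduce the exact forms of Equations 101-108, and all module names and sizes are illustrative assumptions. Scalars are updated only through quantities invariant to orthogonal transformations of the coordinate dimension, and vectors only through equivariant maps, matching the requirements stated above.

```python
# An illustrative sketch of one PSDS layer and the final pooling. It mirrors the component
# wiring described above (scalar-vector mixer, aggregator, point update, pooling) but does
# not reproduce the exact forms of Equations 101-108; all module names and sizes are
# assumptions. Scalars are updated only via O(r)-invariant quantities and vectors only via
# O(r)-equivariant maps.
import torch
import torch.nn as nn

class PSDSLayer(nn.Module):
    def __init__(self, hidden: int):
        super().__init__()
        self.mix_v1 = nn.Linear(hidden, hidden, bias=False)     # channel mixing for vectors
        self.mix_v2 = nn.Linear(hidden, hidden, bias=False)
        self.scalar_mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                        nn.Linear(hidden, hidden))
        self.scalar_to_vec = nn.Linear(hidden, hidden, bias=False)
        self.agg_mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden))
        self.update_mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                        nn.Linear(hidden, hidden))
        self.update_vec = nn.Linear(2 * hidden, hidden, bias=False)

    def forward(self, s, v):
        # s: (n, hidden) scalars; v: (n, hidden, r) vectors, r = coordinate rank.
        # Scalar-vector mixer: vectors reach the scalar path only through invariant inner products.
        mixed = self.mix_v1(v.transpose(1, 2)).transpose(1, 2)
        inv = torch.einsum("nhr,nhr->nh", mixed, v)
        s = self.scalar_mlp(torch.cat([s, inv], dim=-1))
        v = self.mix_v2(v.transpose(1, 2)).transpose(1, 2) * self.scalar_to_vec(s).unsqueeze(-1)

        # Aggregator: global scalar from summed scalars and squared norms of the summed vectors.
        v_sum = v.sum(0)                                        # (hidden, r), equivariant
        g_s = self.agg_mlp(torch.cat([s.sum(0), (v_sum ** 2).sum(-1)], dim=-1))

        # Point representation update: combine each point with the global features.
        s = self.update_mlp(torch.cat([s, g_s.expand_as(s)], dim=-1))
        v_cat = torch.cat([v, v_sum.unsqueeze(0).expand_as(v)], dim=1)
        v = self.update_vec(v_cat.transpose(1, 2)).transpose(1, 2)
        return s, v

layer = PSDSLayer(hidden=16)
s, v = layer(torch.randn(7, 16), torch.randn(7, 16, 5))
set_repr = s.sum(0)                                             # pooled set representation
```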