Random Projections of Sparse Adjacency Matrices
Abstract
We analyze a random projection method for adjacency matrices, studying its utility in representing sparse graphs. We show that these random projections retain the functionality of their underlying adjacency matrices while having extra properties that make them attractive as dynamic graph representations. In particular, they can represent graphs of different sizes and vertex sets in the same space, allowing for the aggregation and manipulation of graphs in a unified manner. We also provide results on how the size of the projections needs to scale in order to preserve accurate graph operations, showing that the size of the projections can scale linearly with the number of vertices while accurately retaining first-order graph information. We conclude by characterizing our random projection as a distance-preserving map of adjacency matrices analogous to the usual Johnson-Lindenstrauss map.
1 Introduction
The adjacency matrix is a popular and flexible graph representation, encoding a graph's edgeset in an explicit and easy-to-access manner. Furthermore, many natural graph operations correspond to linear algebraic operations on the adjacency matrix, such as edge addition/deletion, edge or vertex queries, and edge composition. The properties of the adjacency matrix and related quantities like the graph Laplacian also yield important insights about the underlying graph [19, 1].
However, a major defect of the adjacency matrix is that its size scales quadratically with the number of vertices. In many real world applications where the underlying graph is extremely large, this presents a fundamental problem. For example, graphs in the Stanford Large Network Collection include graphs derived from social, communication, and road networks that each have millions of vertices. The adjacency matrices of such large graphs would require trillions of parameters to store and work with, which is infeasible in practice. Hence, there has been much interest in graph compression techniques. There is a particular interest in sparse graph techniques, since a majority of real world graphs tend to have sparsely connected vertices.
In a previous work [18], we introduced a graph embedding method that represents a graph's edgeset as a random sum. This method can be viewed as a random projection technique for adjacency matrices, and in this paper we discuss its application to large sparse graphs. In particular, we show that these random projections have the same space and operational time complexity as popular sparse graph representations like the Compressed Sparse Row (CSR) format. Moreover, they retain all of the linear-algebraic graph operations of adjacency matrices, giving a flexible and computationally efficient representation. Interestingly, the random projection described in this paper is an inner-product preserving projection on the space of adjacency matrices, analogous to the Johnson-Lindenstrauss map for vectors [7].
2 Related Work
We proposed our random projection method in previous work [18], framing it as a graph embedding method in the spirit of hyperdimensional computing (HDC) [8, 10, 6]. HDC graph representations encode vertices as high-dimensional vectors, bind these vectors to generate edge embeddings, and sum these edge embeddings to represent the entire edgeset: a bind-and-sum approach [15, 12, 17, 16, 9]. Our method can be classified as a bind-and-sum method, since we assign a random vector to each vertex and represent an edge by binding the source and target vertices via the tensor (outer) product. However, our method is purely a random projection and does not seek to learn a good node or graph embedding, instead leveraging pseudo-orthogonality to maintain properties of the adjacency matrix.
Our randomized projections can also be viewed as a "reverse" adjacency matrix factorization. These techniques seek to find an informative and compressed factorization of the adjacency matrix (or adjacency tensor for typed graphs). For example, the RESCAL algorithm [14] applied to an adjacency matrix learns an entity matrix $M$ and a relational matrix $R$ such that the adjacency matrix is approximated by $M^\top R M$. Here, the columns of $M$ are the entity (vertex) embeddings that encode information about each entity, and the matrix $R$ encodes information about entity-entity interactions. We work with the source-target factorization of an adjacency matrix, $A = S T^\top$, where $S$ and $T$ are matrices whose $k$-th columns are the coordinate vectors of the source and target vertex of the $k$-th edge, respectively. Our method encodes each vertex as a random vector and then replaces the columns of $S$ and $T$ with their corresponding random vertex codes.
As a random projection of matrices, our method falls within the field of randomized numerical linear algebra [13]. These are random algorithms and techniques for solving problems in linear algebra, such as solving linear systems and finding maximum/minimum eigenvalues. These techniques aim to offer advantages in speed and memory over their deterministic counterparts, and our method aims to provide both in representing and working with a sparse adjacency matrix. Among these algorithms, the FastRP algorithm [4] is another example of a random projection method. It computes node embeddings by taking a weighted sum of powers of the adjacency matrix and then randomly projecting it into a smaller matrix whose columns form the node embeddings. While the goal of FastRP is to generate node embeddings that capture multi-hop information, our method seeks to informatively embed the adjacency matrix itself.
3 Intuition Behind the Projection
Given a directed graph $G = (V, E)$ and an arbitrary ordering of its vertex set $V = \{v_1, \dots, v_n\}$, its adjacency matrix can be expressed as the sum of outer products of coordinate vectors:

$$A \;=\; \sum_{(v_i, v_j) \in E} e_i e_j^\top,$$

where $e_i$ denotes the $i$-th standard coordinate vector.
In this form, it is easy to see that many graph operations correspond to linear operations on the adjacency matrix. For example, adding or deleting the edge $(v_i, v_j)$ corresponds to adding or subtracting the matrix $e_i e_j^\top$ from $A$; querying whether the graph contains the edge $(v_i, v_j)$ corresponds to the product $e_i^\top A e_j$; the matrix powers $A^k$ correspond to $k$-length paths. One property these fundamental graph operations share is that they only require orthonormality of the vertex vectors. If we swap the coordinate vectors with any set of orthonormal vectors, the transformed adjacency matrix retains all of its functionality.
For example, let $Q$ be any $n \times n$ orthogonal matrix, whose columns $q_1, \dots, q_n$ form an orthonormal basis. Changing bases to $Q$, our transformed adjacency matrix takes the form:

$$A_Q \;=\; Q A Q^\top \;=\; \sum_{(v_i, v_j) \in E} q_i q_j^\top.$$
In this form, we can perform all the usual adjacency matrix operations by swapping coordinate vectors with their counterparts in $Q$. For example, adding or deleting the edge $(v_i, v_j)$ now corresponds to adding or subtracting the matrix $q_i q_j^\top$ from $A_Q$; the edge query of $(v_i, v_j)$ becomes $q_i^\top A_Q q_j$; matrix powers still correspond to finite-length paths in the sense that $q_i^\top A_Q^k q_j \geq 1$ if and only if there is a $k$-length path from $v_i$ to $v_j$.
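As a quick illustration of these operations, the following minimal NumPy sketch builds a small adjacency matrix as a sum of outer products and checks that edge queries and path counts survive an orthonormal change of basis. The toy edge set and sizes are illustrative choices, not taken from the paper.

```python
import numpy as np

n = 5
edges = [(0, 1), (1, 2), (2, 3)]
I = np.eye(n)

# Adjacency matrix as a sum of outer products of coordinate vectors.
A = sum(np.outer(I[i], I[j]) for i, j in edges)
print(I[0] @ A @ I[1])                      # 1.0 -> (0, 1) is an edge
print(np.linalg.matrix_power(A, 3)[0, 3])   # 1.0 -> 3-step path 0 -> 1 -> 2 -> 3

# The same operations after an orthonormal change of basis Q.
Q, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((n, n)))
A_Q = Q @ A @ Q.T
print(Q[:, 0] @ A_Q @ Q[:, 1])                             # 1.0 (up to rounding)
print(Q[:, 0] @ np.linalg.matrix_power(A_Q, 3) @ Q[:, 3])  # 1.0 (up to rounding)
```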
From the above discussion, we see that orthonormality is the key property required for exact graph functionality. Hence, relaxing to approximate orthonormality allows us to compress adjacency matrices while retaining graph functionality in a minimally noisy manner. We make use of random pseudo-orthonormal vectors, which are unit vectors whose pairwise dot products are negligible with high probability. While we can pack at most $d$ orthonormal vectors into a $d$-dimensional space, we can pack many more pseudo-orthonormal vectors into the same space. This allows us to compress adjacency matrices, and in the case of sparse matrices this compression effect is especially pronounced.
4 Random Projection of Adjacency Matrices
Consider a directed graph $G = (V, E)$ with $|V| = n$. An ordering of the vertex set induces the adjacency matrix $A \in \mathbb{R}^{n \times n}$:

$$A \;=\; \sum_{(v_i, v_j) \in E} e_i e_j^\top.$$
We then construct a random projection matrix $\Phi \in \mathbb{R}^{d \times n}$, where the columns $x_1, \dots, x_n$ of $\Phi$ are sampled i.i.d. from the uniform distribution on the unit sphere $S^{d-1} \subset \mathbb{R}^d$. The random projection is then given by the following equation:

$$P \;=\; \Phi A \Phi^\top.$$
Expressing the adjacency matrix as the sum of outer products between coordinate vectors, the projection swaps each coordinate vector $e_i$ with the $i$-th column of $\Phi$:

$$P \;=\; \Phi A \Phi^\top \;=\; \sum_{(v_i, v_j) \in E} x_i x_j^\top.$$

In light of the discussion of Section 3, this can be viewed as a pseudo-orthogonal basis change.
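The construction itself is a few lines of NumPy. The helper `random_codes`, the names `Phi` and `P`, and the sizes below are our own illustrative choices for this sketch, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_codes(n, d):
    """n i.i.d. vertex codes drawn uniformly from the unit sphere S^{d-1}."""
    X = rng.standard_normal((d, n))
    return X / np.linalg.norm(X, axis=0)    # normalize each column

n, d = 1000, 512                    # illustrative sizes
Phi = random_codes(n, d)            # column Phi[:, i] is the code x_i of vertex i
edges = [(0, 1), (1, 2), (2, 3)]    # a toy edge set

# Projection of the adjacency matrix: P = Phi A Phi^T = sum of x_i x_j^T over edges.
P = np.zeros((d, d))
for i, j in edges:
    P += np.outer(Phi[:, i], Phi[:, j])
```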
4.1 Graph Operations on Random Projections
We saw in Section 3 that operations in a new orthonormal basis require swapping coordinate vectors with the new basis vectors. Similarly, we can perform all the usual graph operations with the projection $P$ by substituting the appropriate random code for each vertex. For example, to query if the edge $(v_i, v_j)$ is in the edgeset, we would usually compute the product $e_i^\top A e_j$. This returns 1 if $(v_i, v_j)$ is an edge and 0 otherwise. The randomized analogue then takes the form $x_i^\top P x_j$. This quantity is close to 1 with high probability if $(v_i, v_j)$ is an edge and close to 0 with high probability otherwise.
In general, every graph operation has an analogue on the projected matrix obtained by substituting in the random vertex codes: rather than using the coordinate vector $e_i$ associated with vertex $v_i$, when working with the randomized projection we use the $i$-th column $x_i$ of the projection matrix $\Phi$ instead.
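Continuing the sketch above (reusing `Phi` and `P` from the previous block), the randomized edge query looks as follows; `edge_query` is an illustrative helper name.

```python
def edge_query(P, Phi, i, j):
    """Randomized edge query: close to 1 if (i, j) is an edge, close to 0 otherwise."""
    return Phi[:, i] @ P @ Phi[:, j]

print(edge_query(P, Phi, 0, 1))   # close to 1: (0, 1) is an edge
print(edge_query(P, Phi, 0, 3))   # close to 0: (0, 3) is not an edge
```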
4.2 Changing the Vertex Set and Graph Aggregation
One attractive property of our random projections is that they transform nicely under changes to the underlying vertex set. Suppose we wish to expand our graph by adding a new vertex $v_{n+1}$ and new edges $(v_{n+1}, v_j)$. With the usual adjacency matrix, this would require expanding $A$ to an $(n+1) \times (n+1)$ matrix and filling in the appropriate entries of the added column and row. However, for the projected matrix we need only generate a new random vector $x_{n+1}$ and add the appropriate edges to the projection:

$$P \;\leftarrow\; P + \sum_{j} x_{n+1} x_j^\top.$$
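Continuing the same sketch, adding a vertex only appends a column to the codebook and adds outer products to `P`; the new edge below is an arbitrary choice for illustration.

```python
x_new = rng.standard_normal(d)
x_new /= np.linalg.norm(x_new)             # fresh code for the new vertex
Phi = np.column_stack([Phi, x_new])        # codebook gains one column
P += np.outer(x_new, Phi[:, 0])            # add the new edge (v_new, v_0); P stays d x d
print(edge_query(P, Phi, Phi.shape[1] - 1, 0))   # close to 1
```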
Our random matrix projection can be viewed as a projection method for the set of graphs whose vertices lie in a fixed vertex set $V$. In light of the above discussion, this projection can be naturally extended to sets containing $V$ or restricted to subsets of $V$ by expanding or restricting the set of random vertex codes, respectively. This property is particularly attractive for dynamic graph representations, where the edge and vertex sets of the graph change over time. The dimension of the projected matrices stays the same under changes to the vertex set, and the addition/deletion of edges involving new vertices is simple. We only need to keep in mind the total capacity of the projection space: how many edges we can add to a projected matrix before accuracy in graph operations begins to break down. We analyze this behavior in Section 5.
4.3 Translation between Different Projections
Once we assign a random code to each vertex and construct our projection matrix $\Phi_1$, that vertex-vector codebook is fixed. However, we might desire to reassign a new random vector to each vertex. This translation procedure has a simple analogue for projected matrices. Suppose we have a vertex set $V$ with two different random projection matrices $\Phi_1$ and $\Phi_2$. The columns $x_i$ of $\Phi_1$ and $y_i$ of $\Phi_2$ are the random vectors assigned to vertex $v_i$ under each projection, respectively. In order to swap the vectors $x_i$ with the vectors $y_i$, we construct the translation matrix $T = \Phi_2 \Phi_1^\top$. By pseudo-orthonormality, we see that $T x_i \approx y_i$ for every $i$. Hence, if $P_1$ is the projection of $A$ under $\Phi_1$ and $P_2$ is the projection under $\Phi_2$, we have the following relation:

$$T P_1 T^\top \;\approx\; P_2.$$
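A sketch of the translation procedure, again reusing the helpers from the running example; `Phi1`, `Phi2`, and `T` are illustrative names. Note that the translated projection is noisier than one built directly in the new codebook.

```python
Phi1, Phi2 = random_codes(n, d), random_codes(n, d)   # two codebooks for the same vertices
P1 = sum(np.outer(Phi1[:, i], Phi1[:, j]) for i, j in edges)

T = Phi2 @ Phi1.T               # translation matrix
P2_approx = T @ P1 @ T.T        # approximately the projection under Phi2
print(Phi2[:, 0] @ P2_approx @ Phi2[:, 1])   # roughly 1, with extra translation noise
```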
4.4 Graph Subsets and Aggregation
Given a subset $S \subseteq V$, the subgraph generated by $S$, denoted $G_S$, is the subgraph generated by all edges between vertices in $S$. Given the projected adjacency matrix $P$, we can extract the projection of the subgraph by the following procedure. Let $\Phi_S$ be the matrix whose columns are the random codes assigned to the vertices of $S$, and define $\Pi_S = \Phi_S \Phi_S^\top$. By pseudo-orthonormality, we have the following approximate relation:

$$\Pi_S \, P \, \Pi_S \;\approx\; P_{G_S},$$

where $P_{G_S}$ denotes the random projection of $G_S$.
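A sketch of the subgraph extraction, continuing the running example; `Pi_S` plays the role of the operator defined above.

```python
S = [0, 1, 2]                      # vertex subset
Phi_S = Phi[:, S]                  # codes of the vertices in S
Pi_S = Phi_S @ Phi_S.T             # approximate projector onto the span of their codes
P_S = Pi_S @ P @ Pi_S              # approximates the projection of the subgraph on S
print(edge_query(P_S, Phi, 0, 1))  # close to 1: both endpoints lie in S
print(edge_query(P_S, Phi, 2, 3))  # close to 0: vertex 3 lies outside S
```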
This subset procedure, along with graph translation, can be applied to aggregate multiple graphs into a single large graph. For example, suppose we have two disjoint graphs $G_1$ and $G_2$. Their aggregate graph is the result of combining their vertex and edge sets together. We saw earlier that edge addition corresponds to adding the projected edges to our projected matrices. Therefore, given projections $P_1$ and $P_2$, the projection of their aggregate is the sum of their projections:

$$P_{G_1 \cup G_2} \;=\; P_1 + P_2.$$
This can be combined in tandem with the subsetting procedure to combine selected subgraphs from a collection of graphs. Even when their vertex codes are different, the translation procedure mentioned previously allows us to translate them all into a single codebook before aggregation.
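Aggregation is then just addition of projections, as in this continuation of the sketch (the two toy edge sets are arbitrary and disjoint):

```python
edges_a, edges_b = [(10, 11), (11, 12)], [(20, 21)]
P_a = sum(np.outer(Phi[:, i], Phi[:, j]) for i, j in edges_a)
P_b = sum(np.outer(Phi[:, i], Phi[:, j]) for i, j in edges_b)
P_union = P_a + P_b                       # projection of the aggregate graph
print(edge_query(P_union, Phi, 20, 21))   # close to 1
```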
One interesting application of this suite of procedures is allowing a divide-and-conquer approach to storing a large sparse graph. Graph operations on the projected matrices depend on the size of the projection space (see Section 5), so breaking a large graph into subgraphs allows us to store their projections in a smaller projection space. Operations involving an individual subgraph also happen in a lower-dimensional space and have decreased time complexity. However, when the need arises to operate on information from a pool of these subgraphs, we can use the above subset and pooling procedures to easily generate a wide suite of projections that correspond to those generated by their graph aggregates.
The above examples demonstrate that we can represent graphs of varying size and different vertex sets in the same projection space. Translation and pooling operations are of fixed dimension, allowing for a unified way to manipulate and combine a broad range of graphs. This suggests that random projections are also well suited to applications where graph aggregation is an important operation, such as the divide-and-conquer approach described above.
5 Scaling Properties of Random Projections
We now study how the size of the random projection space needs to scale with the underlying graph. Importantly, we show that the size scales with the size of the edgeset rather than the vertex set. This makes our method particularly amenable to sparse graphs, where the size of the edgeset is proportional to the size of the vertex set.
5.1 $k$-Order Graph Operations
First, we need to define $k$-order graph operations. This is important because each order requires a different scaling of the projection space. We say a graph operation has order $k$ if it can be expressed as an operation involving the $k$-th power of the adjacency matrix. For example, the edge query is a first order graph operation because it is a function of the first power of the adjacency matrix: $q(v_i, v_j) = e_i^\top A e_j$.
5.2 First and Second Order Scaling
We first characterize how the size of the projection needs to scale in order to retain accurate first and second order graph operations. Intuitively, these correspond to edge information of the 1-hop and 2-hop neighborhoods of the graph respectively. We give two informal statements on how the dimension $d$ of the projection space must scale with the number of edges $E$, subject to a constraint on the vertex connectivity (the number of edges each vertex participates in). An account of the technical results justifying these statements is given in Appendix A.
Property 1 (First Order Scaling).
Let $G$ be a graph with $E$ edges and maximum node degree $O(\sqrt{E})$. A random projection into $\mathbb{R}^{d \times d}$ needs to have $d = \Omega(\sqrt{E})$ in order to retain accurate first order operations.
Property 2 (Second Order Scaling).
Let $G$ be a graph with $E$ edges and maximum node degree $O(E^{1/3})$. A random projection into $\mathbb{R}^{d \times d}$ needs to have $d = \Omega(E^{2/3})$ in order to retain accurate second order operations.
For sparse graphs, the node degree condition is satisfied for all or a majority of vertices. Indeed, a closer inspection of the proofs in Appendix A shows that as long as the vertices of the relevant edges satisfy this or a much looser bound on the node degree, the results still hold. This allows us to include sparse graphs that might have central vertices, which act like hubs and connect to many other vertices. If we are only interested in first order properties of the graph, the first result implies that we can compress a large sparse adjacency matrix using $O(n^2)$ parameters into a smaller $d \times d$ matrix using $O(E) = O(n)$ parameters, resulting in a drastic compression for large sparse graphs.
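As a concrete, hypothetical illustration of this scaling: a sparse graph with $n = 10^6$ vertices and $E = 10^7$ edges has an adjacency matrix with $n^2 = 10^{12}$ entries, while a first-order-accurate projection with $d = \Theta(\sqrt{E}) \approx 3{,}200$ occupies only $d^2 \approx 10^7$ entries, a reduction of roughly five orders of magnitude.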
5.3 $k$-Order Scaling
While first and possibly second order operations comprise the bulk of the interesting graph operations, for completeness we have the following informal scaling statement for $k$-order graph operations.
Property 3 ($k$-Order Scaling).
Let $G$ be a graph with $E$ edges and maximum node degree $O(E^{1/(k+1)})$. A random projection into $\mathbb{R}^{d \times d}$ needs to have $d = \Omega(E^{k/(k+1)})$ in order to retain accurate $k$-order operations.
6 Edge Representations of Sparse Matrices
In this section, we analyze, for various sparse matrix representations, both their size complexity and the time complexity of graph operations. We also consider the time complexity of numerical sparse matrix methods when appropriate. In this manner, we hope to contextualize both the strengths and weaknesses of our random projections relative to other sparse matrix representations. Table 1 summarizes the results of this section.
6.1 Alternate Sparse Matrix Representations
We first consider the coordinate list (CL) representation. This represents a sparse matrix as a list of 3-tuples $(i, j, v)$, one per non-zero entry, where $i$ and $j$ are the row and column indices while $v$ is the value. A related representation is the Dictionary of Keys (DoK) representation, which is similar to the coordinate list except that each row-column index pair $(i, j)$ serves as a key with value $v$. Finally, the CSR format represents a matrix using three arrays containing the non-zero values, their column indices, and the running count of non-zero entries above each row, respectively. The CSR format can be easily computed from the previous two representations and vice versa.
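For concreteness, the following short SciPy sketch builds the same toy matrix in all three formats (SciPy's `scipy.sparse` module implements COO, DoK, and CSR); the matrix itself is arbitrary.

```python
import numpy as np
from scipy import sparse

# Coordinate list (COO): one (row, column, value) triple per non-zero entry.
rows, cols, vals = [0, 1, 2], [1, 2, 3], [1.0, 1.0, 1.0]
A_coo = sparse.coo_matrix((vals, (rows, cols)), shape=(4, 4))

A_dok = A_coo.todok()   # dictionary of keys: {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0}
A_csr = A_coo.tocsr()   # three arrays: values, column indices, row pointers
print(dict(A_dok.items()))
print(A_csr.data, A_csr.indices, A_csr.indptr)
```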
6.2 Space Complexity
A sparse graph with $n$ vertices has $E = O(n)$ edges. Hence, from their descriptions, all three of the alternate sparse matrix representations require $O(E) = O(n)$ parameters. For random projections, if we are only concerned with representing first-order properties, Theorem A.1 establishes that the projections must have $d = \Omega(\sqrt{E})$, meaning the projections are of size $O(d^2) = O(E) = O(n)$.
6.3 Graph Operation Complexity
We analyze some graph operations where the complexity differs based on the representation. For data structure operations, we use their complexity in Python as stated in the Python Wiki. Throughout this section, our random projections are $d \times d$ matrices.
6.3.1 Edge Query
To look up if an edge $(i, j)$ exists in a coordinate list, we merely check if the list contains a tuple whose first two entries are $(i, j)$. Since the coordinate list has length $E$, list lookup has complexity $O(E)$. As a three-list format, the CSR representation has the same complexity. However, the "in" operator for dictionaries is $O(1)$ on average, meaning edge lookup in DoK is $O(1)$. The edge query for random projections is the product $x_i^\top P x_j$, which has complexity $O(d^2)$. If our projection space is calibrated to preserve only first-order operations, then this edge query has complexity $O(E)$.
6.3.2 Edge Composition
Edge composition is a more complicated case. For each of the sparse matrix representations, edge composition requires a nested procedure where, for each edge $(i, j)$, one needs to identify all edges $(j, k)$ and return the edges $(i, k)$; this is naively $O(E^2)$. Importantly, this naive algorithm is not parallelizable, making it especially painful to compute. Alternatively, edge composition naturally corresponds to the second power of the adjacency matrix. For a sparse adjacency matrix with $E$ non-zero entries, numerical methods for sparse matrix multiplication can improve substantially on this naive bound [2].
As for random projections, the naive algorithm for the multiplication of two $d \times d$ matrices has complexity $O(d^3)$ [5, 11]. In order to retain accuracy for second-order operations, Property 2 states that $d = \Omega(E^{2/3})$. Thus, computing the second matrix power of our projections has naive complexity $O(E^2)$, while methods like Strassen's algorithm [11] can speed it up even further.
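A sketch of edge composition via the second matrix power of a projection; the sizes and edges below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 512
Phi = rng.standard_normal((d, n))
Phi /= np.linalg.norm(Phi, axis=0)          # i.i.d. unit-sphere vertex codes

edges = [(0, 1), (1, 2)]                    # composable: (0, 1) then (1, 2)
P = sum(np.outer(Phi[:, i], Phi[:, j]) for i, j in edges)

P2 = P @ P                                  # second matrix power of the projection
print(Phi[:, 0] @ P2 @ Phi[:, 2])           # close to 1: composed edge (0, 2)
print(Phi[:, 0] @ P2 @ Phi[:, 1])           # close to 0: (0, 1) is not a 2-hop edge
```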
6.4 Fast Numerical Linear Algebra
One important aspect of our random projections is that they benefit from numerical methods for linear algebra, such as parallel computation. This means that the time complexity of graph operations can be substantially reduced through these methods. The other sparse matrix representations do not enjoy these benefits, as we saw with the edge composition example. Hence, in practice the time complexity of graph operations with random projections can be significantly lower than the naive counts above, and the method automatically benefits from advances in numerical linear algebra.
| | Random Projections | DoK | CL | CSR | Sparse NLA |
|---|---|---|---|---|---|
| Space Complexity | $O(d^2)$ | $O(E)$ | $O(E)$ | $O(E)$ | N/A |
| Edge Query | $O(d^2)$ | $O(1)$ | $O(E)$ | $O(E)$ | N/A |
| Edge Composition | $O(d^3)$ | $O(E^2)$ | $O(E^2)$ | $O(E^2)$ | N/A |
| Matrix Addition | $O(d^2)$ | N/A | N/A | N/A | $O(E)$ |
| Matrix Multiplication | $O(d^3)$ | N/A | N/A | N/A | see [2] |

Table 1: Space and operation complexity of a $d \times d$ random projection versus the Dictionary of Keys (DoK), coordinate list (CL), and CSR representations and sparse numerical linear algebra (Sparse NLA) routines; for first-order accuracy, $d^2 = O(E)$.
7 Johnson-Lindenstrauss Analogue for Graphs
In the introduction, we stated that our random projection method is a norm-preserving map analogous to the one given by the Johnson-Lindenstrauss (JL) lemma for vectors. Here, we first discuss a natural inner product on the space of adjacency matrices. We then state results showing how our random projections preserve this inner product and its associated norm for a finite set of adjacency matrices, characterizing our method as a JL map for the space of graphs.
7.1 A Natural Inner Product for Adjacency Matrices
In our previous work [18], we noted that there is a natural inner product on the space of adjacency matrices that counts the number of shared edges. Consider two graphs $G_1 = (V_1, E_1)$ and $G_2 = (V_2, E_2)$ whose vertex sets are contained in some larger set $V$. Given an ordering of $V$, we have the induced adjacency matrices $A_1$ and $A_2$. Define the inner product between their adjacency matrices as:

$$\langle A_1, A_2 \rangle \;=\; \operatorname{tr}\!\left(A_1^\top A_2\right).$$
Expressing each adjacency matrix as the sum of coordinate outer products, we see that this function counts the number of edges common to both graphs:

$$\langle A_1, A_2 \rangle \;=\; \sum_{(v_i, v_j) \in E_1} \sum_{(v_k, v_l) \in E_2} (e_i^\top e_k)(e_j^\top e_l) \;=\; |E_1 \cap E_2|.$$
One can check this is indeed an inner product, and in fact it generates the Frobenius norm: $\langle A, A \rangle = \|A\|_F^2$.
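A small numerical check of this inner product (the toy graphs below are arbitrary):

```python
import numpy as np

def adjacency(edges, n):
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = 1.0
    return A

n = 6
A1 = adjacency([(0, 1), (1, 2), (2, 3)], n)
A2 = adjacency([(1, 2), (2, 3), (4, 5)], n)

print(np.trace(A1.T @ A2))             # 2.0 -> shared edges (1, 2) and (2, 3)
print(np.linalg.norm(A1, 'fro') ** 2)  # 3.0 -> squared Frobenius norm = number of edges
```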
Note that we assumed the vertex sets of both graphs were contained in some larger set $V$. This is not an issue for a finite set of graphs with finite vertex sets $V_1, \dots, V_k$, as we can define $V = \bigcup_i V_i$. The adjacency matrix of each graph with respect to this larger set is a block matrix whose single non-zero block is equal to the original adjacency matrix.
7.2 Random Projections are a JL Map
In light of the graph inner product defined in Section 7.1, we almost have a JL-type result, since the Frobenius norm of matrices coincides with the Euclidean norm if we regard matrices in $\mathbb{R}^{n \times n}$ as vectors in $\mathbb{R}^{n^2}$. We confirm this by proving that, with high probability, our random projection method is a map that satisfies the conditions of the JL Lemma.
Theorem 7.1 (Random Projections are a JL map).
Consider a set of $k$ graphs with adjacency matrices $A_1, \dots, A_k$. For small $\epsilon > 0$ and $d = \Omega(\epsilon^{-2} \log k)$, with high probability our random projections $P_1, \dots, P_k$ satisfy, for all pairs $i, j$:

$$(1 - \epsilon)\,\|A_i - A_j\|_F^2 \;\leq\; \|P_i - P_j\|_F^2 \;\leq\; (1 + \epsilon)\,\|A_i - A_j\|_F^2. \tag{1}$$
8 Discussion
We presented a random projection method for sparse adjacency matrices. These random projections exploit the pseudo-orthonormality of random vectors to drastically compress the adjacency matrix while still retaining all of its functionality. While the exact scaling depends on the graph properties we wish to preserve, Theorem A.1 shows that we can compress a sparse $n \times n$ adjacency matrix into a $d \times d$ matrix with only $O(n)$ entries while preserving first order graph operations. These random projections also enjoy properties that their underlying adjacency matrices do not. Sections 4.2 and 4.3 show that random projections allow us to represent graphs of varying size and different vertex sets in the same projection space. This common space is equipped with modification and aggregation operations that apply to all graphs in the space, and random projections provide a unified way of representing and working with graphs. The complexity analysis of Section 6 shows that these random projections are competitive with existing sparse matrix representations and numerical methods, and they can take advantage of numerical techniques for speeding up linear algebra operations. All these properties suggest that random adjacency matrix projections are a dynamic, flexible, and expressive graph compression technique well suited to a variety of applications where large sparse matrices occur. One interesting application of our random compression technique would be to the field of graph neural networks [20]. Such networks use the adjacency matrix during the linear portion of their computations, and random projections could help extend such networks to both large sparse graphs as well as time-varying, dynamic graphs.
References
- [1] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15:1373–1396, 2003.
- [2] Keivan Borna and Sohrab Fard. A note on the multiplication of sparse matrices. Open Computer Science, 4(1):1–11, 2014.
- [3] Stéphane Boucheron, Gábor Lugosi, and Olivier Bousquet. Concentration Inequalities, pages 208–240. Springer Berlin Heidelberg, Berlin, Heidelberg, 2004.
- [4] Haochen Chen, Syed Fahad Sultan, Yingtao Tian, Muhao Chen, and Steven Skiena. Fast and accurate network embeddings via very sparse random projection, 2019.
- [5] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Third Edition. The MIT Press, 3rd edition, 2009.
- [6] Ross W. Gayler and Simon D. Levy. A distributed basis for analogical mapping. 2009.
- [7] William B. Johnson and Joram Lindenstrauss. Extensions of lipschitz mappings into a hilbert space. Contemporary Mathematics, 26, 1984.
- [8] Pentti Kanerva. Sparse Distributed Memory. MIT Press, Cambridge, MA, USA, 1988.
- [9] Jaeyoung Kang, Minxuan Zhou, Abhinav Bhansali, Weihong Xu, Anthony Thomas, and Tajana Rosing. Relhd: A graph-based learning on fefet with hyperdimensional computing. 2022 IEEE 40th International Conference on Computer Design (ICCD), pages 553–560, 2022.
- [10] Denis Kleyko, Dmitri A. Rachkovskij, Evgeny Osipov, and Abbas Jawdat Rahim. A survey on hyperdimensional computing aka vector symbolic architectures, part ii: Applications, cognitive models, and challenges. ArXiv, abs/2112.15424, 2021.
- [11] Yan Li, Sheng-Long Hu, Jie Wang, and Zheng-Hai Huang. An introduction to the computational complexity of matrix multiplication. Journal of the Operations Research Society of China, 8(1):29–43, 2020.
- [12] Yunpu Ma, Marcel Hildebrandt, Volker Tresp, and Stephan Baier. Holistic representations for memorization and inference. In UAI, 2018.
- [13] Per-Gunnar Martinsson and Joel Tropp. Randomized numerical linear algebra: Foundations and algorithms, 2021.
- [14] Maximilian Nickel, Xueyan Jiang, and Volker Tresp. Reducing the rank in relational factorization models by including observable patterns. In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems, volume 27. Curran Associates, Inc., 2014.
- [15] Maximilian Nickel, Lorenzo Rosasco, and Tomaso A. Poggio. Holographic embeddings of knowledge graphs. In AAAI, 2016.
- [16] Igor O. Nunes, Mike Heddes, Tony Givargis, Alexandru Nicolau, and Alexander V. Veidenbaum. Graphhd: Efficient graph classification using hyperdimensional computing. 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE), pages 1485–1490, 2022.
- [17] Prathyush Poduval, Haleh Alimohamadi, Ali Zakeri, Farhad Imani, M. Hassan Najafi, Tony Givargis, and Mohsen Imani. Graphd: Graph-based hyperdimensional memorization for brain-like cognitive learning. Frontiers in Neuroscience, 16, 2022.
- [18] Frank Qiu. Graph embeddings via tensor products and approximately orthonormal codes, 2022.
- [19] Ulrike von Luxburg. A tutorial on spectral clustering, 2007.
- [20] Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. Graph neural networks: A review of methods and applications. AI Open, 1:57–81, 2020.
Appendix A Scaling Theorems
We provide theorems justifying the properties listed in Section 5. These are mainly an adaptation of results found in our previous work [18]. Throughout this section, we abuse notation and use the same symbol to denote both a vertex and its random code. Similarly, we use the same symbol to denote a graph and its random projection.
A.1 Auxiliary Lemma for Spherical Random Vectors
We will need the following lemma [18].
Lemma A.1.
Let $\epsilon = x^\top y$ be the dot product between two vectors $x, y$ sampled uniformly and independently from the unit sphere $S^{d-1} \subset \mathbb{R}^d$. Then $\mathbb{E}[\epsilon] = 0$ and $\operatorname{Var}(\epsilon) = \tfrac{1}{d}$.
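A quick Monte Carlo check of the lemma (the dimension and sample count below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
d, trials = 100, 200_000

u = rng.standard_normal((trials, d))
u /= np.linalg.norm(u, axis=1, keepdims=True)   # uniform on the unit sphere
v = rng.standard_normal((trials, d))
v /= np.linalg.norm(v, axis=1, keepdims=True)

eps = np.einsum('td,td->t', u, v)   # dot products of independent spherical vectors
print(eps.mean())                   # approximately 0
print(eps.var(), 1 / d)             # approximately 1/d
```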
A.2 Property 1: First-Order Scaling
The following theorem and its proof are an abbreviated adaptation of Theorem 10.7 in our previous work [18]. Given an edge $(v_i, v_j)$ and adjacency matrix $A$, the edge query $e_i^\top A e_j$ returns 1 if $(v_i, v_j)$ is an edge of the graph and 0 otherwise. Our goal is to prove that the edge query analogue for our random projection returns the correct value with high probability, showing that our random projection accurately retains first-order edge information. An edge query is a true query if the queried edge exists in the graph and a false query if it does not.
Theorem A.1.
Let $G$ be a graph with $E$ edges and maximum node degree $O(\sqrt{E})$. For a random projection into $\mathbb{R}^{d \times d}$, we have the following:

1. If $q_T$ denotes the result of a true query, then for any $t > 0$:

$$\Pr\big(|q_T - 1| \geq t\big) \;\leq\; 2\exp\!\left(-\frac{t^2}{2\,(\sigma^2 + t/3)}\right), \qquad \sigma^2 = O\!\left(\frac{\sqrt{E}}{d} + \frac{E}{d^2}\right). \tag{2}$$

2. If $q_F$ denotes the result of a false query, then for any $t > 0$:

$$\Pr\big(|q_F| \geq t\big) \;\leq\; 2\exp\!\left(-\frac{t^2}{2\,(\sigma^2 + t/3)}\right), \qquad \sigma^2 = O\!\left(\frac{\sqrt{E}}{d} + \frac{E}{d^2}\right). \tag{3}$$
Proof.
Consider the random projection of a graph with $E$ edges:

$$P \;=\; \sum_{k=1}^{E} x_{s_k} x_{t_k}^\top, \tag{4}$$

where $(s_k, t_k)$ denotes the $k$-th edge and the vertex codes $x_1, \dots, x_n$ are random vectors drawn i.i.d. from the uniform distribution on the unit sphere $S^{d-1} \subset \mathbb{R}^d$.

We first prove equation 2 by considering the result of querying our random projection for the existence of the edge $(v_i, v_j)$, which is a valid edge. We assume $c$ of the other edges share one vertex with $(v_i, v_j)$, and WLOG we assume they all share vertex $v_i$. The result of the true query can be expressed as:

$$q_T \;=\; x_i^\top P x_j \;=\; 1 \;+\; \sum_{k=1}^{c} x_{t_k}^\top x_j \;+\; \sum_{l=1}^{E-1-c} (x_i^\top x_{s_l})(x_{t_l}^\top x_j). \tag{5}$$

From Lemma A.1, both sums have mean 0, with the first sum having variance $c/d$ and the second having variance $(E-1-c)/d^2$. Hence, the total variance is $\sigma^2 = c/d + (E-1-c)/d^2$, and applying Bernstein's inequality gives:

$$\Pr\big(|q_T - 1| \geq t\big) \;\leq\; 2\exp\!\left(-\frac{t^2}{2\,(\sigma^2 + t/3)}\right).$$

By assumption, we can bound $c = O(\sqrt{E})$, and plugging in this worst case gives equation 2.

Similarly, now suppose we query by a spurious edge $(v_i, v_j) \notin E$. Assuming $c$ of the edges share one vertex with $(v_i, v_j)$, we use the same argument as above to derive the false query bound given by equation 3. ∎
Letting $q$ denote a general edge query and $\mu = \mathbb{E}[q]$ its expected value, Theorem A.1 can be summarized as:

$$\Pr\big(|q - \mu| \geq t\big) \;\leq\; 2\exp\!\left(-\frac{t^2}{2\,(\sigma^2 + t/3)}\right), \qquad \sigma^2 = O\!\left(\frac{\sqrt{E}}{d} + \frac{E}{d^2}\right).$$

Rewriting this variance in terms of the ratio $r = \sqrt{E}/d$:

$$\sigma^2 \;=\; O\!\left(r + r^2\right).$$

If $d \gg \sqrt{E}$ then $r \ll 1$ and the variance is small. Hence, if $d = \Omega(\sqrt{E})$ then $q$ is close to $\mu$ with high probability. Since $\mu = 1$ for a true query and $\mu = 0$ for a false query, $q$ is close to the correct value with high probability. This justifies the statement of Property 1.
A.3 Property 2: Second-Order Scaling
The following theorem and its proof are an abbreviated adaptation of Theorem 11.7 from our previous work [18]. Two edges are composable if the target vertex of one is the source vertex of the other, and an edge is in the second power of the adjacency matrix if and only if it is the composition of two composable edges. We need to show that the second matrix power accurately represents this edge information. To this end, we prove the edge query returns the correct result with high probability when applied to the second power of the random projection.
Theorem A.2.
Let $G$ be a graph with composable edges $(a, b)$ and $(b, c)$ along with $N$ nuisance edges, and assume $G$ has maximum node degree $D$. For a random projection into $\mathbb{R}^{d \times d}$, we have the following results for edge queries involving the projection's second matrix power:

1. If $q_T$ denotes the result of a true query of the second matrix power, then for any $t > 0$:

$$\Pr\big(|q_T - 1| \geq t\big) \;\leq\; 2\exp\!\left(-\frac{t^2}{2\,(\sigma^2 + t/3)}\right), \qquad \sigma^2 = O\!\left(\frac{D^2}{d} + \frac{N D}{d^2} + \frac{N^2}{d^3}\right). \tag{6}$$

2. If $q_F$ denotes the result of a false query of the second matrix power, then for any $t > 0$:

$$\Pr\big(|q_F| \geq t\big) \;\leq\; 2\exp\!\left(-\frac{t^2}{2\,(\sigma^2 + t/3)}\right), \qquad \sigma^2 = O\!\left(\frac{D^2}{d} + \frac{N D}{d^2} + \frac{N^2}{d^3}\right). \tag{7}$$
Proof.
For the nuisance edges, assume $c_1$ of them have a common vertex with $(a, b)$ and $c_2$ of them have a common source or target with $(b, c)$, and let $c = c_1 + c_2$. Our projection can be expressed as:

$$P \;=\; x_a x_b^\top + x_b x_c^\top + \sum_{k=1}^{N} x_{s_k} x_{t_k}^\top, \tag{8}$$

where the sum runs over the nuisance edges. From equation 8, we see that the second matrix power should contain the composed edge $(a, c)$, with the remaining terms acting as noise.

We first prove equation 6 and query the second matrix power for the presence of the edge $(a, c)$. Letting $\epsilon$ denote the dot product of two independent spherical vectors, the result of this edge query is a sum of terms, each a product of independent $\epsilon$'s. Grouping terms by how many $\epsilon$'s they contain, we write our true query as:

$$q_T \;=\; x_a^\top P^2 x_c \;=\; 1 \;+\; \sum_{O(c^2)} \epsilon \;+\; \sum_{O(Nc)} \epsilon\,\epsilon \;+\; \sum_{O(N^2)} \epsilon\,\epsilon\,\epsilon,$$

where the subscripts indicate the order of the number of terms in each group. Using Lemma A.1, we see the variance of the error terms is $O\!\left(\frac{c^2}{d} + \frac{Nc}{d^2} + \frac{N^2}{d^3}\right)$. An application of Bernstein's inequality gives:

$$\Pr\big(|q_T - 1| \geq t\big) \;\leq\; 2\exp\!\left(-\frac{t^2}{2\,(\sigma^2 + t/3)}\right), \qquad \sigma^2 = O\!\left(\frac{c^2}{d} + \frac{Nc}{d^2} + \frac{N^2}{d^3}\right).$$

Since $c_1, c_2 = O(D)$ and hence $c = O(D)$, plugging in this worst case gives equation 6.

Similarly, now we consider querying by a spurious edge. We assume that $c$ of the edges have either a common source or target vertex with the queried edge. Then a similar computation as above shows that we can express the false query as:

$$q_F \;=\; \sum_{O(c^2)} \epsilon \;+\; \sum_{O(Nc)} \epsilon\,\epsilon \;+\; \sum_{O(N^2)} \epsilon\,\epsilon\,\epsilon.$$

An application of Bernstein's inequality and the worst-case bound $c = O(D)$ gives equation 7. ∎
As with Theorem A.1, the bounds of Theorem A.2 can be expressed in terms of the query variance $\sigma^2$:

$$\sigma^2 \;=\; O\!\left(\frac{D^2}{d} + \frac{N D}{d^2} + \frac{N^2}{d^3}\right).$$

If the maximum node degree satisfies $D = O(E^{1/3})$ and we bound $N \leq E$, then in both cases all terms in the variance can be expressed as powers of the ratio $E^{2/3}/d$. Hence, as long as $d = \Omega(E^{2/3})$ we see that the edge query is close to its correct value with high probability. This justifies the statement of Property 2.
A.4 Property 3: $k$-Order Scaling
We give a proof sketch adapted from Section 12.1 of our previous work [18]. We aim to analyze the accuracy of recovering edge information from the $k$-th matrix power of our random projections and how the dimension of the projection space needs to scale with the number of edges $E$.
As in the proof of Theorem A.2, let $\epsilon$ denote the dot product of two distinct spherical vectors. From the proofs of Theorems A.1 and A.2, we need to control the edge query variance to ensure that true and false queries are close to their expected values with high probability. Intuitively, if the node degree is small then performing an edge query on the $k$-th matrix power will result in a majority of the error terms being products of $k+1$ independent $\epsilon$'s. Such terms will have mean 0 and variance $d^{-(k+1)}$. Since the true and false query scores are 1 and 0 respectively, for accurate recovery we need the edge query variance to be less than 1. As a sum of roughly $E^k$ such terms, the variance of the noise will be approximately $E^k / d^{k+1}$, implying that $d = \Omega(E^{k/(k+1)})$ in order to retain accuracy. The node degree bound of $O(E^{1/(k+1)})$ ensures that the remaining terms of the variance can be expressed as powers of $E^{k/(k+1)}/d$.
Appendix B JL Lemma for Graphs
Here, we aim to prove Theorem 7.1 by establishing intermediate results. Importantly, along the way we show how our random projection method preserves the graph inner product of Section 7.1.
B.1 Random Projections Preserve Inner Products
Theorem B.1.
Consider two graphs $G_1$ and $G_2$ with $E_1$ and $E_2$ edges respectively. Suppose they have $m$ edges in common. Among all pairs of edges in $E_1 \times E_2$, suppose $c$ of these edge pairs share exactly one vertex. Let $P_1$ and $P_2$ denote the random projections of their adjacency matrices $A_1$ and $A_2$ into $\mathbb{R}^{d \times d}$. For any $t > 0$, we have:

$$\Pr\Big(\big|\langle P_1, P_2 \rangle - \langle A_1, A_2 \rangle\big| \geq t\Big) \;\leq\; 2\exp\!\left(-\frac{t^2}{2\,(\sigma^2 + t/3)}\right), \qquad \sigma^2 = \frac{c}{d} + \frac{E_1 E_2}{d^2}. \tag{9}$$
Proof.
We can express their random projections as:

$$P_1 \;=\; \sum_{(v_i, v_j) \in E_1} x_i x_j^\top, \qquad P_2 \;=\; \sum_{(v_k, v_l) \in E_2} x_k x_l^\top,$$

where the $x_i$ are random spherical vectors, one per vertex. Computing their inner product:

$$\langle P_1, P_2 \rangle \;=\; \sum_{(v_i, v_j) \in E_1} \sum_{(v_k, v_l) \in E_2} (x_i^\top x_k)(x_j^\top x_l) \;=\; \langle A_1, A_2 \rangle + R. \tag{10}$$

We proceed to bound the error term $R$ in equation 10. By assumption, $c$ of the terms in $R$ share one vertex, with the remaining terms sharing no vertices:

$$R \;=\; \sum_{k=1}^{c} \epsilon_k \;+\; \sum_{l=1}^{E_1 E_2 - m - c} \epsilon_l\, \epsilon_l',$$

where each $\epsilon$ is the dot product of independent spherical vectors. Using Lemma A.1 and Bernstein's inequality [3], we get the following bound:

$$\Pr\big(|R| \geq t\big) \;\leq\; 2\exp\!\left(-\frac{t^2}{2\,(\sigma^2 + t/3)}\right), \qquad \sigma^2 = \frac{c}{d} + \frac{E_1 E_2 - m - c}{d^2}.$$
∎
B.2 Random Projections are a JL Map for Graphs
Theorem B.2.
Let $G$ be a graph with $E$ edges and adjacency matrix $A$. For small $\epsilon > 0$ and sufficiently large $d$, its random projection $P$ into $\mathbb{R}^{d \times d}$ satisfies:

$$\Pr\Big((1 - \epsilon)\,\|A\|_F^2 \;\leq\; \|P\|_F^2 \;\leq\; (1 + \epsilon)\,\|A\|_F^2\Big) \;\geq\; 1 - 2\exp\!\big(-C\,d\,\epsilon^2\big), \tag{11}$$

for a constant $C$ depending on the sparsity of $G$.
Theorem B.3.
Let $G_1$ and $G_2$ be two graphs with $E_1 + E_2 = E$ total edges. For small $\epsilon > 0$ and sufficiently large $d$, the random projections $P_1$ and $P_2$ of their adjacency matrices $A_1$ and $A_2$ into $\mathbb{R}^{d \times d}$ satisfy the following:

$$\Pr\Big((1 - \epsilon)\,\|A_1 - A_2\|_F^2 \;\leq\; \|P_1 - P_2\|_F^2 \;\leq\; (1 + \epsilon)\,\|A_1 - A_2\|_F^2\Big) \;\geq\; 1 - 2\exp\!\big(-C\,d\,\epsilon^2\big). \tag{12}$$
Proof.
If we include signed edges, the difference graph $G_1 - G_2$ has at most $E$ total edges, and its random projection is $P_1 - P_2$. An application of Theorem B.2 gives the result. ∎
B.2.1 Proof of Theorem 7.1
Proof.
From Theorem B.3, for any fixed pair $i, j$ our random projections satisfy the following inequalities with probability at least $1 - 2\exp(-C d \epsilon^2)$:

$$(1 - \epsilon)\,\|A_i - A_j\|_F^2 \;\leq\; \|P_i - P_j\|_F^2 \;\leq\; (1 + \epsilon)\,\|A_i - A_j\|_F^2. \tag{13}$$

If we want this to hold for a set of $k$ sparse adjacency matrices, a union bound over all pairs shows that equation 13 holds simultaneously with probability at least $1 - 2\binom{k}{2}\exp(-C d \epsilon^2)$. For a fixed probability threshold $\delta$ of violating the inequalities 13, let us choose the optimal $d$ given $k$, denoted $d_k$. That is, $d_k$ is the smallest integer such that $2\binom{k}{2}\exp(-C d_k \epsilon^2) \leq \delta$. The optimal $d_k$ satisfies the following inequalities:

$$2\binom{k}{2}\exp\!\big(-C d_k \epsilon^2\big) \;\leq\; \delta \;<\; 2\binom{k}{2}\exp\!\big(-C (d_k - 1)\,\epsilon^2\big).$$

Combining the two inequalities gives:

$$\frac{1}{C \epsilon^2}\,\log\!\left(\frac{2\binom{k}{2}}{\delta}\right) \;\leq\; d_k \;<\; \frac{1}{C \epsilon^2}\,\log\!\left(\frac{2\binom{k}{2}}{\delta}\right) + 1.$$

Thus, we have $d_k = O(\epsilon^{-2} \log k)$, which matches the scaling of the usual Johnson-Lindenstrauss lemma and establishes our random projections as a JL map for graphs. ∎