
Semisupervised regression in latent structure networks on unknown manifolds

Aranyak Acharyya, Joshua Agterberg, Michael W. Trosset, Youngser Park, Carey E. Priebe
Department of Applied Mathematics and Statistics, Johns Hopkins University, Baltimore, USA; Institute for Data Engineering and Science, Department of Electrical and Systems Engineering, and Department of Statistics and Data Science, University of Pennsylvania, Philadelphia, USA; Department of Statistics, Indiana University, Bloomington, USA; Centre for Imaging Science, Johns Hopkins University, Baltimore, USA
Abstract

Random graphs are increasingly becoming objects of interest for modeling networks in a wide range of applications. Latent position random graph models posit that each node is associated with a latent position vector, and that these vectors follow some geometric structure in the latent space. In this paper, we consider random dot product graphs, in which an edge is formed between two nodes with probability given by the inner product of their respective latent positions. We assume that the latent position vectors lie on an unknown one-dimensional curve and are coupled with a response covariate via a regression model. Using the geometry of the underlying latent position vectors, we propose a manifold learning and graph embedding technique to predict the response variable on out-of-sample nodes, and we establish convergence guarantees for these responses. Our theoretical results are supported by simulations and an application to Drosophila brain data.

Keywords: network inference, vertex covariates, random dot product graph, manifold learning, regression

1 Introduction

Random graphs have long been an area of interest for scientists from different disciplines, primarily because of their applicability in modeling networks (Erdos1984OnTE, goldenberg2010survey). Latent position random graphs (Hoff2002LatentSA) constitute a category of random graphs in which each node is associated with an unobserved vector, known as its latent position. One popular model, the random dot product graph model (Young2007RandomDP), comprises a subcategory of network models in which the probability of edge formation between a pair of nodes is given by the inner product of their respective latent position vectors. This model was further generalized to the generalized random dot product graph model (RubinDelanchy2022ASI), which replaces the inner product with an indefinite inner product. A survey of inference problems under the random dot product graph model can be found in athreya2017statistical. In rubin2020manifold, it is shown that under certain regularity conditions, latent position random graphs can be equivalently thought of as generalized random dot product graphs whose latent positions lie on a low dimensional manifold, which motivates the model we study in this work. Consider observing a random dot product graph whose latent positions lie on an unknown one-dimensional manifold in ambient space \mathbb{R}^{d}, and suppose responses are recorded at some of these nodes. We choose to work in a semisupervised setting because, in realistic scenarios, collecting observations is easier than obtaining labels corresponding to those observations. It is assumed that the responses are linked to the scalar pre-images of the corresponding latent positions via a regression model. In this semisupervised setting, we aim to predict the responses at the out-of-sample nodes.

The semisupervised learning framework in network analysis problems has been considered in a number of previous works. In 1326716, a framework for regularization on graphs with labeled and unlabeled nodes was developed to predict the labels of the unlabeled nodes. In 6789755, a graph-based dimensionality reduction technique was developed for representing data that lie on a low dimensional manifold in a high dimensional ambient space. In the context of latent position networks with underlying low dimensional manifold structure, athreya2021estimation discusses the problem of carrying out inference on the distribution of the latent positions of a random dot product graph, which are assumed to lie on a known low dimensional manifold in a high dimensional ambient space. Moreover, trosset2020learning studies the problem of two-sample hypothesis testing for equality of means in a random dot product graph whose latent positions lie on a one-dimensional manifold in a high dimensional ambient space, where the manifold is unknown and hence must be estimated. To be more precise, trosset2020learning proposes a methodology to learn the underlying manifold, and proves that the power of the statistical test based on the resulting embeddings can approach the power of the test based on knowledge of the true manifold.

In this paper, we study the problem of predicting a response covariate in a semisupervised setting, for a random dot product graph whose latent positions lie on an unknown one-dimensional manifold in ambient space \mathbb{R}^{d}. Our main result establishes a convergence guarantee for the predicted responses when the manifold is learned using a particular manifold learning procedure (see Section 2.3). As a corollary to our main result, we derive a convergence guarantee for the power of the test for model validity based on the resulting embeddings. To help develop intuition, we first consider the problem of regression parameter estimation assuming the underlying manifold is known, and we show that a particular estimator is consistent in this setting.

We present an illustrative example of an application of our theoretical results. A connectome dataset consisting of a network of 100 Kenyon cell neurons in larval Drosophila (details in Eichler2017TheCC) indicates the presence of an underlying low dimensional manifold structure. Each node (that is, each Kenyon cell) is coupled with a response covariate, and the latent position of each node is estimated by a six-dimensional vector using adjacency spectral embedding (see Section 2.2). A scatterplot is obtained for each pair of dimensions of the estimated latent positions, yielding a 6\times 6 matrix of scatterplots (Figure 5). Each dimension is seen to be approximately related to the others, and hence it is assumed that the latent positions lie on a one-dimensional manifold in six-dimensional ambient space. In order to capture the underlying structure, we construct a localization graph on the estimated latent positions and embed the dissimilarity matrix of shortest path distances into one dimension (see Section 2.3 for a description of the method of embedding). A scatterplot of the first two dimensions of the estimated latent positions is presented in Figure 1, where the size of the points varies with the value of the associated response covariate. A scatterplot of the responses y_{i} against the one-dimensional embeddings \hat{z}_{i} is also presented, along with a fitted regression line indicating a significant effect. These results suggest that it is reasonable to posit that the responses are linked to the embeddings via a simple linear regression model.

Figure 1: Illustrative application of response prediction in latent structure networks on unknown manifolds. Our methodology is applied to the connectome of the right hemisphere of the Drosophila larval mushroom body. Left panel: scatterplot of two dimensions of the estimated latent positions for the 100 Kenyon cell neurons, obtained from spectral embedding of the network; the dot size represents the response variable y_{i} (the distance in microns between the bundle entry point of neuron i and the mushroom body neuropil). Right panel: plot of responses y_{i} against learned 1-d embeddings \hat{z}_{i} approximating geodesic distances along this curve, for the 100 Kenyon cell neurons, together with the regression line. In the left panel we observe that a one-dimensional curve captures nonlinear structure in the spectral embedding. In the right panel we observe that response regressed against geodesic distance indicates a significant effect (p<0.01 for H_{0}:a=0 in y_{i}=a\hat{z}_{i}+b+\eta_{i}).

Our analysis raises the question of testing the validity of a simple linear regression model assumed to be linking the nodal responses to the scalar pre-images of the latent positions. Our theory shows that the power of the test for validity of the model based on the raw-stress embeddings approximates the power of the test based on the true regressors.

In Section 2, we discuss key points about random dot product graphs and manifold learning. Section 3 discusses the models and the methods to carry out subsequent inference on the corresponding regression models. Section 4 presents our theoretical results in both the settings of known and unknown manifolds. Section 5 presents our findings from simulations. Section 6 revisits our connectome application. Section 7 discusses the results and poses some questions that require further investigation. The proofs of our theoretical results are given in Section 8.

2 Background notations, definitions and results

In this section, we introduce and explain the notations used throughout the paper. We also state relevant definitions and results pertaining to random dot product graphs (in Section 2.2) and manifold learning (in Section 2.3).

2.1 Notations

We shall denote a vector by a bold lower-case letter, \mathbf{x} for instance. Bold upper-case letters such as \mathbf{P} will be used to represent matrices. The Frobenius norm and the maximum row-norm of a matrix \mathbf{B} will be denoted respectively by \left\lVert\mathbf{B}\right\rVert_{F} and \left\lVert\mathbf{B}\right\rVert_{2,\infty}. The i-th row of a matrix \mathbf{B} will be denoted by \mathbf{B}_{i*}, and the j-th column of \mathbf{B} will be denoted by \mathbf{B}_{*j}. We will denote the n\times n identity matrix by \mathbf{I}_{n}, and \mathbf{J}_{n} will denote the n\times n matrix whose every entry equals one. Also, \mathbf{H}_{n}=\mathbf{I}_{n}-\frac{1}{n}\mathbf{J}_{n} will denote the n\times n centering matrix. Unless otherwise mentioned, \left\lVert\mathbf{x}\right\rVert will represent the Euclidean norm of a vector \mathbf{x}. The set of all orthogonal d\times d matrices will be denoted by \mathcal{O}(d). The set of all positive integers will be denoted by \mathbb{N}, and for any n\in\mathbb{N}, [n] will denote the set \{1,2,\dots,n\}.

2.2 Preliminaries on Random Dot Product Graphs

A graph is an ordered pair (V,E) where V is the set of vertices (or nodes) and E\subset V\times V is the set of edges connecting vertices. The adjacency matrix \mathbf{A} of a graph is defined by \mathbf{A}_{ij}=1 if (i,j)\in E, and \mathbf{A}_{ij}=0 otherwise. Here, we deal with hollow and undirected graphs; hence \mathbf{A} is symmetric and \mathbf{A}_{ii}=0 for all i. Latent position random graphs are those for which each node is associated with a vector, called its latent position and denoted by \mathbf{x}_{i}, and the probability of formation of an edge between the i-th and j-th nodes is given by \kappa(\mathbf{x}_{i},\mathbf{x}_{j}), where \kappa is a suitable kernel.

Random vectors drawn from an arbitrary probability distribution cannot serve as latent positions of a random dot product graph, as their magnitudes can be unbounded whereas probabilities must lie in the interval [0,1]. The following definition allows us to work with a restricted class of distributions more amenable to random dot product graphs.

Definition 1.

(Inner product distribution): If F is a probability distribution function on \mathbb{R}^{d} such that for any \mathbf{x},\mathbf{y}\in\mathrm{supp}(F), \mathbf{x}^{T}\mathbf{y}\in[0,1], then F is called an inner product distribution on \mathbb{R}^{d}.

Next, we define the random dot product graph, the basis of the models considered in this paper.

Definition 2.

(Random Dot Product Graph): Suppose G is a hollow, undirected random graph with latent positions \mathbf{x}_{1},\dots,\mathbf{x}_{n}\in\mathbb{R}^{d}. Let \mathbf{X}=[\mathbf{x}_{1}|\dots|\mathbf{x}_{n}]^{T} be its latent position matrix and \mathbf{A} be its adjacency matrix. The graph G is called a random dot product graph if for all i<j, \mathbf{A}_{ij}\sim\mathrm{Bernoulli}(\mathbf{x}_{i}^{T}\mathbf{x}_{j}) independently. The probability distribution of \mathbf{A} is given by

P[\mathbf{A}]=\prod_{i<j}(\mathbf{x}_{i}^{T}\mathbf{x}_{j})^{\mathbf{A}_{ij}}(1-\mathbf{x}_{i}^{T}\mathbf{x}_{j})^{1-\mathbf{A}_{ij}}.

If the latent positions \mathbf{x}_{1},\dots,\mathbf{x}_{n}\sim^{iid}F, where F is an inner product distribution, then we write (\mathbf{A},\mathbf{X})\sim RDPG(F). The distribution of the adjacency matrix conditional upon \mathbf{x}_{1},\dots,\mathbf{x}_{n} is

P[\mathbf{A}|\mathbf{X}]=\prod_{i<j}(\mathbf{x}_{i}^{T}\mathbf{x}_{j})^{\mathbf{A}_{ij}}(1-\mathbf{x}_{i}^{T}\mathbf{x}_{j})^{1-\mathbf{A}_{ij}}.
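For concreteness, the following minimal R sketch samples a hollow, undirected adjacency matrix from given latent positions, as in Definition 2; the helper name sample_rdpg is ours, not from any package.

# Sample a hollow, undirected RDPG adjacency matrix from latent positions.
# X: n x d matrix whose rows x_i satisfy x_i^T x_j in [0, 1].
sample_rdpg <- function(X) {
  n <- nrow(X)
  P <- X %*% t(X)                        # edge probabilities P_ij = x_i^T x_j
  A <- matrix(0, n, n)
  for (i in seq_len(n - 1)) {
    for (j in (i + 1):n) {
      A[i, j] <- rbinom(1, 1, P[i, j])   # A_ij ~ Bernoulli(x_i^T x_j), independently
      A[j, i] <- A[i, j]                 # undirected: A is symmetric
    }
  }
  A                                      # diagonal left at 0 (hollow)
}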

The latent positions of a random dot product graph are typically unknown and need to be estimated in practice. The following definition puts forth an estimate of these latent positions via adjacency spectral embedding.

Definition 3.

(Adjacency spectral embedding): Suppose \lambda_{i} is the i-th largest (in magnitude) eigenvalue of \mathbf{A}, and let \mathbf{v}_{i} be the corresponding orthonormal eigenvector. Define \mathbf{S}=\mathrm{diag}(\lambda_{1},\dots,\lambda_{d}) and \mathbf{V}=[\mathbf{v}_{1}|\dots|\mathbf{v}_{d}]. We define \hat{\mathbf{X}}, the adjacency spectral embedding of \mathbf{A} into \mathbb{R}^{d}, via \hat{\mathbf{X}}=\mathbf{V}|\mathbf{S}|^{\frac{1}{2}}.
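In R, Definition 3 can be computed directly from the eigendecomposition of the adjacency matrix; the following sketch (the function name ase is ours) keeps the d leading eigenpairs by absolute eigenvalue.

# Adjacency spectral embedding of a symmetric adjacency matrix A into R^d.
ase <- function(A, d) {
  e   <- eigen(A, symmetric = TRUE)
  idx <- order(abs(e$values), decreasing = TRUE)[1:d]   # top-d eigenvalues in magnitude
  V   <- e$vectors[, idx, drop = FALSE]
  sweep(V, 2, sqrt(abs(e$values[idx])), `*`)            # X_hat = V |S|^{1/2}
}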

Now, we present two results from the literature which give us the consistency and asymptotic normality of suitably rotated adjacency spectral estimates of the latent positions of a random dot product graph.

Theorem 1.

(Theorem 8 from athreya2017statistical): Suppose \mathbf{x}_{1},\mathbf{x}_{2},\dots,\mathbf{x}_{n}\in\mathbb{R}^{d} denote the latent positions of a random dot product graph with n nodes, and let \mathbf{X}^{(n)}=[\mathbf{x}_{1}|\mathbf{x}_{2}|\dots|\mathbf{x}_{n}]^{T} be the latent position matrix. Assume that for all n sufficiently large, \mathrm{rank}(\mathbf{X}^{(n)})=d, and there exists a>0 such that b_{n}=\max_{i\in[n]}\sum_{j=1}^{n}\mathbf{x}_{i}^{T}\mathbf{x}_{j}>(\log(n))^{4+a}. Suppose \mathbf{A}^{(n)} denotes the corresponding adjacency matrix and let \hat{\mathbf{X}}^{(n)} be the adjacency spectral embedding of \mathbf{A}^{(n)} in \mathbb{R}^{d}. Then there exist a constant C>0 and a sequence \mathbf{W}^{(n)}\in\mathcal{O}(d) such that

\lim_{n\to\infty}\mathbb{P}\left[\max_{i\in[n]}\left\lVert(\hat{\mathbf{X}}^{(n)}\mathbf{W}^{(n)}-\mathbf{X}^{(n)})_{i*}\right\rVert\leq\frac{C(\log(n))^{2}}{\sqrt{b_{n}}}\right]=1    (1)

Observe that under the assumption b_{n}>(\log(n))^{4+a}, Eq. (1) implies \left\lVert\hat{\mathbf{X}}^{(n)}\mathbf{W}^{(n)}-\mathbf{X}^{(n)}\right\rVert_{2,\infty}\to^{P}0 as n\to\infty.

Henceforth, we will denote this optimally rotated adjacency spectral embedding by \tilde{\mathbf{X}}^{(n)}, that is, \tilde{\mathbf{X}}^{(n)}=\hat{\mathbf{X}}^{(n)}\mathbf{W}^{(n)}. To simplify notation we will often omit the superscript n, and we will use \tilde{\mathbf{x}}_{i} to denote the i-th row of \tilde{\mathbf{X}}. Observe that in attempting to estimate the true latent positions from the adjacency matrix, we encounter an inherent non-identifiability issue: for any \mathbf{W}\in\mathcal{O}(d), \mathbb{E}(\mathbf{A}|\mathbf{X})=\mathbf{X}\mathbf{X}^{T}=(\mathbf{X}\mathbf{W})(\mathbf{X}\mathbf{W})^{T}. This is why the adjacency spectral embedding must be suitably rotated in order to approximate the true latent position matrix.
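When the true latent position matrix is available (as in the known-manifold simulations of Section 5.1), the optimal rotation can be computed by orthogonal Procrustes alignment; a minimal R sketch (the function name procrustes_rotation is ours) is given below.

# Orthogonal Procrustes: W_hat = argmin_{W in O(d)} || Xhat W - X ||_F,
# obtained from the SVD of Xhat^T X.
procrustes_rotation <- function(Xhat, X) {
  s <- svd(t(Xhat) %*% X)
  s$u %*% t(s$v)
}
# Optimally rotated embedding: Xtilde <- Xhat %*% procrustes_rotation(Xhat, X)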

Theorem 2.

(Theorem 9 from athreya2017statistical): Suppose (\mathbf{A}^{(n)},\mathbf{X}^{(n)})\sim RDPG(F) are the adjacency matrices and latent position matrices for a sequence of random dot product graphs whose latent positions are generated from an inner product distribution F on \mathbb{R}^{d}. Let

\mathbf{\Sigma}(\mathbf{x_{0}})=\Delta^{-1}\mathbb{E}_{\mathbf{x}\sim F}\left[\mathbf{x_{0}}^{T}\mathbf{x}(1-\mathbf{x_{0}}^{T}\mathbf{x})\mathbf{x}\mathbf{x}^{T}\right]\Delta^{-1}    (2)

where \Delta=\mathbb{E}_{\mathbf{x}\sim F}[\mathbf{x}\mathbf{x}^{T}]. Then, there exists a sequence \mathbf{W}^{(n)}\in\mathcal{O}(d) such that for all \mathbf{u}\in\mathbb{R}^{d},

\lim_{n\to\infty}\mathbb{P}\left[\sqrt{n}(\hat{\mathbf{X}}^{(n)}\mathbf{W}^{(n)}-\mathbf{X}^{(n)})_{i*}\leq\mathbf{u}\right]=\int_{\mathbf{x}\in\mathrm{supp}(F)}\Phi_{\mathbf{0},\mathbf{\Sigma}(\mathbf{x})}(\mathbf{u})\,dF(\mathbf{x})    (3)

where \Phi_{\mathbf{0},\mathbf{\Sigma}(\mathbf{x})}(\cdot) denotes the distribution function of the multivariate normal N(\mathbf{0},\mathbf{\Sigma}(\mathbf{x})) distribution.

Under suitable regularity conditions, combining Theorem 2 with the delta method gives the asymptotic distribution of \sqrt{n}(\gamma(\tilde{\mathbf{x}}_{i})-\gamma(\mathbf{x}_{i})) for a function \gamma:\mathbb{R}^{d}\to\mathbb{R}; this distribution depends on the true distribution F of the latent positions. Therefore, \mathrm{var}(\gamma(\tilde{\mathbf{x}}_{i})-\gamma(\mathbf{x}_{i})) can be approximated from the optimally rotated adjacency spectral estimates \tilde{\mathbf{x}}_{i} and their empirical distribution function. In a random dot product graph for which the latent positions lie on a known one-dimensional manifold in ambient space \mathbb{R}^{d}, and the nodal responses are linked to the scalar pre-images of the latent positions via a simple linear regression model, we can use the approximated variance of (\gamma(\tilde{\mathbf{x}}_{i})-\gamma(\mathbf{x}_{i})) (motivated by work in Chapters 2 and 3 of fuller1987measurement) to improve the performance (in terms of mean squared error) of the naive estimators of the regression parameters, which are obtained by replacing \mathbf{x}_{i} with \tilde{\mathbf{x}}_{i} in the least squares estimates of the regression parameters. We demonstrate this method in detail in Section 7 (see Figure 7).

Remark 1.

Suppose F is a probability distribution satisfying \mathrm{supp}(F)=K\subset\mathbb{R}^{d}, and \mathbf{z}_{1},\dots,\mathbf{z}_{n}\sim^{iid}F are the latent positions of a hollow, symmetric latent position random graph with associated kernel \kappa. It is shown in rubin2020manifold that if \kappa\in L^{2}(\mathbb{R}^{d}\times\mathbb{R}^{d}), then there exists a mapping q:\mathbb{R}^{d}\to L^{2}(\mathbb{R}^{d}) such that the graph can be equivalently represented as a generalized random dot product graph with latent positions \mathbf{x}_{i}=q(\mathbf{z}_{i})\in L^{2}(\mathbb{R}^{d}). If \kappa is assumed to be Hölder continuous with exponent c, then the Hausdorff dimension of q(K) can be bounded by \frac{d}{c}, as shown in rubin2020manifold. In whiteley2022discovering, it is shown that if K is a Riemannian manifold, then stronger assumptions lead to the conclusion that q(K)\subset L^{2}(\mathbb{R}^{d}) is also a Riemannian manifold diffeomorphic to K. Thus, under suitable regularity assumptions, any latent position graph can be treated as a generalized random dot product graph with latent positions on a low dimensional manifold.

Having stated the relevant definitions and results pertaining to random dot product graphs, in the following section we introduce the manifold learning technique we will use in this paper. For the sake of clarity, we note that manifold learning in general has nothing to do with the random dot product graph model; hence Section 2.2 and Section 2.3 can be read independently.

2.3 Manifold learning by raw-stress minimization

Our main model is based on a random dot product graph whose latent positions lie on a one-dimensional Riemannian manifold. Since one-dimensional Riemannian manifolds are isometric to one-dimensional Euclidean space, we wish to represent the latent positions as points on the real line. This is the motivation behind the use of the following manifold learning technique, which relies upon approximation of geodesic distances by shortest path distances on localization graphs (tenenbaum2000global, bernstein2000graph, trosset2021rehabilitating). Given points \mathbf{x}_{1},\dots,\mathbf{x}_{n}\in\mathcal{M}, where \mathcal{M} is an unknown one-dimensional manifold in ambient space \mathbb{R}^{d}, the goal is to find \hat{z}_{1},\dots,\hat{z}_{n}\in\mathbb{R} such that the interpoint Euclidean distances between the \hat{z}_{i} approximately equal the interpoint geodesic distances between the \mathbf{x}_{i}. However, the interpoint geodesic distances between the \mathbf{x}_{i} are unknown. The following result shows how to estimate these unknown geodesic distances under suitable regularity assumptions.

Theorem 3.

(Theorem 3 from trosset2021rehabilitating): Suppose \mathcal{M} is a one-dimensional compact Riemannian manifold in ambient space \mathbb{R}^{d}. Let r_{0} and s_{0} be the minimum radius of curvature and the minimum branch separation of \mathcal{M}. Suppose \nu>0 is given and suppose \lambda>0 is chosen such that \lambda<s_{0} and \lambda<\frac{2}{\pi}r_{0}\sqrt{24\nu}. Additionally, assume \mathbf{x}_{1},\dots,\mathbf{x}_{n}\in\mathcal{M} are such that for every \mathbf{u}\in\mathcal{M} there exists some i\in[n] with d_{M}(\mathbf{u},\mathbf{x}_{i})<\delta. A localization graph is constructed with the \mathbf{x}_{i} as nodes under the following rule: two nodes \mathbf{x}_{i} and \mathbf{x}_{j} are joined by an edge if \left\lVert\mathbf{x}_{i}-\mathbf{x}_{j}\right\rVert<\lambda. When \delta<\frac{\nu\lambda}{4}, the following condition holds for all i,j\in[n]:

(1-\nu)d_{M}(\mathbf{x}_{i},\mathbf{x}_{j})\leq d_{n,\lambda}(\mathbf{x}_{i},\mathbf{x}_{j})\leq(1+\nu)d_{M}(\mathbf{x}_{i},\mathbf{x}_{j}),

where d_{n,\lambda}(\mathbf{x}_{i},\mathbf{x}_{j}) denotes the shortest path distance between \mathbf{x}_{i} and \mathbf{x}_{j} in the localization graph.

Given the dissimilarity matrix \mathbf{D}=\left(d_{n,\lambda}(\mathbf{x}_{i},\mathbf{x}_{j})\right)_{i,j=1}^{n}, the raw-stress function at (z_{1},\dots,z_{n}) is defined as

\sigma(z_{1},\dots,z_{n})=\sum_{i<j}w_{ij}(|z_{i}-z_{j}|-d_{n,\lambda}(\mathbf{x}_{i},\mathbf{x}_{j}))^{2}

where w_{ij}\geq 0 are weights. For the purpose of learning the manifold \mathcal{M}, we set w_{ij}=1 for all i,j, and compute

(\hat{z}_{1},\dots,\hat{z}_{n})=\arg\min\sigma(z_{1},\dots,z_{n})=\arg\min\sum_{i<j}(|z_{i}-z_{j}|-d_{n,\lambda}(\mathbf{x}_{i},\mathbf{x}_{j}))^{2}.

Since the scalars \hat{z}_{i} are obtained by embedding \mathbf{D} into one dimension by minimizing the raw stress, we shall henceforth refer to the \hat{z}_{i} as the one-dimensional raw-stress embeddings of \mathbf{D}.
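The whole pipeline (localization graph, shortest path distances, one-dimensional raw-stress embedding) can be sketched in R as follows. The igraph package is assumed here for shortest paths, and the smacof package (whose mds function is used in Section 5.2) for stress minimization; the function name raw_stress_embed_1d is ours. The optional argument l allows the shortest path distances to be computed on the full localization graph while only the first l points are embedded, as in Algorithm 1b.

library(igraph)   # assumed here for shortest path distances
library(smacof)   # mds() is used for stress minimization, as in Section 5.2

# One-dimensional raw-stress embedding of the rows of X (points in R^d).
raw_stress_embed_1d <- function(X, lambda, l = nrow(X)) {
  E <- as.matrix(dist(X))                      # pairwise Euclidean distances
  W <- ifelse(E < lambda, E, 0)                # keep only edges shorter than lambda
  diag(W) <- 0
  g <- graph_from_adjacency_matrix(W, mode = "undirected", weighted = TRUE)
  D <- distances(g)[1:l, 1:l, drop = FALSE]    # shortest path distances d_{n,lambda}
  if (any(is.infinite(D)))
    stop("localization graph is disconnected; increase lambda")
  fit <- mds(D, ndim = 1, type = "ratio")      # iterative majorization of the stress
  as.numeric(fit$conf)                         # one-dimensional embeddings z_hat
}

By default smacof initializes with classical (Torgerson) scaling, in line with Remark 2 below.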

Remark 2.

In practice, raw stress is minimized numerically by iterative majorization (Chapter 8 of borg2005modern). Standard algorithms can sometimes be trapped in nonglobal minima. However, nearly optimal values can be obtained by repeated iterations of the Guttman transform (Chapter 8 of borg2005modern) when the configurations are initialized by classical multidimensional scaling. In this paper, for the theoretical results, we assume that the global minimum is achieved.

3 Model and Methodology

Here we describe our models under the assumptions of both a known and an unknown manifold. In each case, we assume that we observe a random dot product graph for which the latent positions of the nodes lie on a one-dimensional manifold in d-dimensional ambient space. Under the assumption that the underlying manifold is known, each node is coupled with a response linked to the scalar pre-image of the corresponding latent position via a regression model, and our goal is to estimate the regression parameters. When the underlying manifold is assumed to be unknown, our model involves a network with a small number of labeled nodes and a large number of unlabeled nodes, and our objective is to predict the response at a given unlabeled node, assuming that the responses for the labeled nodes are linked to the scalar pre-images of the respective latent positions via a regression model. In the unknown-manifold setting we can approximate the true regressors only up to scale and location transformations, and due to this non-identifiability issue we carry out prediction of responses instead of estimation of the regression parameters.

Remark 3.

We remind the reader that the setting of a known manifold is not realistic. We include it to familiarize the reader with the best-case scenario, for the sake of comparison with the results obtained in the realistic setting of an unknown manifold.

3.1 Regression parameter estimation on known manifold

Suppose \psi:[0,L]\to\mathbb{R}^{d} is a known function that is a bijection onto its image, and let \mathcal{M}=\psi([0,L]) be a one-dimensional compact Riemannian manifold. Consider a random dot product graph for which the nodes (with latent positions \mathbf{x}_{i}) lie on the known one-dimensional manifold \mathcal{M} in d-dimensional ambient space. Let t_{1},\dots,t_{n} be the scalar pre-images of the latent positions, so that \mathbf{x}_{i}=\psi(t_{i}) for all i\in[n], where n is the number of nodes of the graph. Suppose, for each i, the i-th node is coupled with a response y_{i} which is linked to the latent position via the following regression model

y_{i}=\alpha+\beta t_{i}+\epsilon_{i},\quad i\in[n]    (4)

where \epsilon_{i}\sim^{iid}N(0,\sigma^{2}_{\epsilon}) for all i\in[n]. Our goal is to estimate \alpha and \beta.
If the true regressors t_{i} were known, we could estimate \alpha and \beta by their ordinary least squares estimates, given by

\hat{\beta}_{true}=\frac{\sum_{i=1}^{n}(y_{i}-\bar{y})(t_{i}-\bar{t})}{\sum_{i=1}^{n}(t_{i}-\bar{t})^{2}},\qquad\hat{\alpha}_{true}=\bar{y}-\hat{\beta}_{true}\bar{t}.    (5)

Since the true latent positions \mathbf{x}_{i} are unknown, we estimate the true regressors t_{i} by \hat{t}_{i}=\arg\min_{t}\left\lVert\tilde{\mathbf{x}}_{i}-\psi(t)\right\rVert, where \tilde{\mathbf{x}}_{i} is the optimally rotated adjacency spectral estimate of the i-th latent position \mathbf{x}_{i}. The existence of \hat{t}_{i} is guaranteed by the compactness of the manifold \mathcal{M}=\psi([0,L]). We then substitute \hat{t}_{i} for t_{i} in \hat{\alpha}_{true} and \hat{\beta}_{true} to obtain the substitute (or plug-in) estimators, given by

\hat{\beta}_{sub}=\frac{\sum_{i=1}^{n}(y_{i}-\bar{y})(\hat{t}_{i}-\bar{\hat{t}})}{\sum_{i=1}^{n}(\hat{t}_{i}-\bar{\hat{t}})^{2}},\qquad\hat{\alpha}_{sub}=\bar{y}-\hat{\beta}_{sub}\bar{\hat{t}}.    (6)

The steps to compute \hat{\alpha}_{sub} and \hat{\beta}_{sub} are formally stated in Algorithm 1a.

Algorithm 1a EST(\mathbf{A}\in\mathbb{R}^{n\times n},\mathbf{W}\in\mathcal{O}(d),d,\psi:[0,L]\to\mathbb{R}^{d},\{y_{i}\}_{i=1}^{n})
1: Compute the adjacency spectral embedding \hat{\mathbf{X}} of the adjacency matrix \mathbf{A} into \mathbb{R}^{d}.
2: Use the given rotation matrix \mathbf{W} to get the optimally rotated adjacency spectral embedding \tilde{\mathbf{X}}=\hat{\mathbf{X}}\mathbf{W}.
3: Obtain the pre-images of the projections of the estimated latent positions onto the manifold by \hat{t}_{i}=\arg\min_{t}\left\lVert\tilde{\mathbf{x}}_{i}-\psi(t)\right\rVert.
4: Compute the substitute estimators \hat{\beta}_{sub}=\frac{\sum_{i=1}^{n}(y_{i}-\bar{y})(\hat{t}_{i}-\bar{\hat{t}})}{\sum_{i=1}^{n}(\hat{t}_{i}-\bar{\hat{t}})^{2}},\quad\hat{\alpha}_{sub}=\bar{y}-\hat{\beta}_{sub}\bar{\hat{t}}.
5: return (\hat{\alpha}_{sub},\hat{\beta}_{sub}).
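A minimal R sketch of Algorithm 1a follows, assuming the helpers ase() and procrustes_rotation() defined above and a known curve psi supplied as an R function of a scalar t; est_known_manifold is our name for it, and base R's optimize() is used for the one-dimensional projection step.

# Algorithm 1a (EST): substitute estimators of the regression parameters.
# A: adjacency matrix, W: rotation in O(d), psi: known curve on [0, L], y: responses.
est_known_manifold <- function(A, W, d, psi, L, y) {
  Xtilde <- ase(A, d) %*% W               # steps 1-2: rotated spectral embedding
  # step 3: pre-images t_hat_i = argmin_t || xtilde_i - psi(t) ||
  t_hat <- apply(Xtilde, 1, function(x)
    optimize(function(t) sum((x - psi(t))^2), interval = c(0, L))$minimum)
  # step 4: plug t_hat into the ordinary least squares formulas
  beta_sub  <- sum((y - mean(y)) * (t_hat - mean(t_hat))) / sum((t_hat - mean(t_hat))^2)
  alpha_sub <- mean(y) - beta_sub * mean(t_hat)
  c(alpha_sub = alpha_sub, beta_sub = beta_sub)   # step 5
}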

3.2 Prediction of responses on unknown manifold

Here, we assume that \psi:[0,L]\to\mathbb{R}^{d} is unknown and arclength parameterized, that is, \left\lVert\dot{\psi}(t)\right\rVert=1 for all t. Additionally, assume that \mathcal{M}=\psi([0,L]) is a compact Riemannian manifold. Consider an n-node random dot product graph whose nodes (with latent positions \mathbf{x}_{i}) lie on the unknown manifold \mathcal{M}=\psi([0,L]) in ambient space \mathbb{R}^{d}. Assume that the first s nodes of the graph are coupled with a response covariate and that the response y_{i} at the i-th node is linked to the latent position via a linear regression model

y_{i}=\alpha+\beta t_{i}+\epsilon_{i},\quad i\in[s]    (7)

where \epsilon_{i}\sim^{iid}N(0,\sigma^{2}_{\epsilon}) for all i\in[s]. Our goal is to predict the response for the r-th node, where r>s. First, we compute the adjacency spectral estimates \hat{\mathbf{x}}_{i} of the latent positions of all n nodes. We then construct a localization graph on the adjacency spectral estimates \hat{\mathbf{x}}_{i} under the following rule: join two nodes \hat{\mathbf{x}}_{i} and \hat{\mathbf{x}}_{j} if and only if \left\lVert\hat{\mathbf{x}}_{i}-\hat{\mathbf{x}}_{j}\right\rVert<\lambda, for some pre-determined \lambda>0 known as the neighbourhood parameter. Denoting the shortest path distance between \hat{\mathbf{x}}_{i} and \hat{\mathbf{x}}_{j} by d_{n,\lambda}(\hat{\mathbf{x}}_{i},\hat{\mathbf{x}}_{j}), we embed the dissimilarity matrix \mathbf{D}=\left(d_{n,\lambda}(\hat{\mathbf{x}}_{i},\hat{\mathbf{x}}_{j})\right)_{i,j=1}^{l} into one dimension by minimizing the raw-stress criterion, thus obtaining

(\hat{z}_{1},\dots,\hat{z}_{l})=\arg\min\sum_{i=1}^{l}\sum_{j=1}^{l}(|z_{i}-z_{j}|-d_{n,\lambda}(\hat{\mathbf{x}}_{i},\hat{\mathbf{x}}_{j}))^{2}    (8)

where l is such that s<r<l\leq n. We then fit a simple linear regression model to the bivariate data (y_{i},\hat{z}_{i})_{i=1}^{s} and use it to predict the response at \hat{z}_{r}, corresponding to the r-th node. The above procedure for predicting the response at the r-th node from the given observations is formally described in Algorithm 1b.

Algorithm 1b PRED(\mathbf{A}\in\mathbb{R}^{n\times n},d,\lambda,l,\{y_{i}\}_{i=1}^{s},r)
1: Obtain the adjacency spectral estimates \hat{\mathbf{x}}_{1},\dots,\hat{\mathbf{x}}_{n}\in\mathbb{R}^{d} of the latent positions from the adjacency matrix \mathbf{A}.
2: Construct a localization graph with the \hat{\mathbf{x}}_{i} as vertices by the following rule: join two vertices \hat{\mathbf{x}}_{i},\hat{\mathbf{x}}_{j} if and only if \left\lVert\hat{\mathbf{x}}_{i}-\hat{\mathbf{x}}_{j}\right\rVert<\lambda.
3: For every i,j\in[n], get the shortest path distance d_{n,\lambda}(\hat{\mathbf{x}}_{i},\hat{\mathbf{x}}_{j}).
4: Obtain (\hat{z}_{1},\dots,\hat{z}_{l})=\arg\min\sum_{i=1}^{l}\sum_{j=1}^{l}(|z_{i}-z_{j}|-d_{n,\lambda}(\hat{\mathbf{x}}_{i},\hat{\mathbf{x}}_{j}))^{2}.
5: Compute \hat{b}=\frac{\sum_{i=1}^{s}(y_{i}-\bar{y})(\hat{z}_{i}-\bar{\hat{z}})}{\sum_{i=1}^{s}(\hat{z}_{i}-\bar{\hat{z}})^{2}} and \hat{a}=\bar{y}-\hat{b}\bar{\hat{z}}.
6: For r>s, compute \tilde{y}_{r}=\hat{a}+\hat{b}\hat{z}_{r}.
7: return \tilde{y}_{r}.
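A corresponding R sketch of Algorithm 1b, assuming ase() and raw_stress_embed_1d() from above (pred_unknown_manifold is our name; here the localization graph is built on all n estimated positions, while only the first l are embedded).

# Algorithm 1b (PRED): predict the response at the r-th node, s < r <= l.
pred_unknown_manifold <- function(A, d, lambda, l, y, r) {
  s     <- length(y)                                  # responses observed at nodes 1..s
  Xhat  <- ase(A, d)                                  # step 1
  z_hat <- raw_stress_embed_1d(Xhat, lambda, l = l)   # steps 2-4
  # step 5: simple linear regression of y_i on z_hat_i for the labeled nodes
  b_hat <- sum((y - mean(y)) * (z_hat[1:s] - mean(z_hat[1:s]))) /
           sum((z_hat[1:s] - mean(z_hat[1:s]))^2)
  a_hat <- mean(y) - b_hat * mean(z_hat[1:s])
  a_hat + b_hat * z_hat[r]                            # step 6: predicted response
}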

4 Main results

In this section we present our theoretical results showing consistency of the estimators of the regression parameters on a known manifold, and convergence guarantees for the predicted responses based on the raw-stress embeddings on an unknown manifold. In the setting of an unknown manifold, as a corollary to consistency of the predicted responses, we also derive a convergence guarantee for a test for validity of a simple linear regression model based on an approximate F-statistic.

4.1 The case of known manifold

Recall that we observe a random dot product graph with n nodes for which the latent positions \mathbf{x}_{i} lie on a one-dimensional manifold. The following result shows that we can consistently estimate (\alpha,\beta) by (\hat{\alpha}_{sub},\hat{\beta}_{sub}).

Theorem 4.

Suppose \psi:[0,L]\to\mathbb{R}^{d} is bijective, and its inverse \gamma satisfies \left\lVert\nabla\gamma(\mathbf{w})\right\rVert<K for all \mathbf{w}\in\psi([0,L]), for some K>0. Let \mathbf{x}_{i}=\psi(t_{i}) be the latent position of the i-th node of a random dot product graph with n nodes, and assume y_{i}=\alpha+\beta t_{i}+\epsilon_{i}, \epsilon_{i}\sim^{iid}N(0,\sigma^{2}_{\epsilon}) for all i\in[n]. Assume \mathbf{x}_{i}\sim^{iid}F for all i, where F is an inner product distribution on \mathbb{R}^{d}. Let \mathbf{X}^{(n)}=[\mathbf{x}_{1}|\dots|\mathbf{x}_{n}] be the latent position matrix and suppose \hat{\mathbf{X}}^{(n)} is the adjacency spectral embedding of the adjacency matrix \mathbf{A}^{(n)} into \mathbb{R}^{d}. Assume \hat{\mathbf{W}}^{(n)}=\arg\min_{\mathbf{W}\in\mathcal{O}(d)}\left\lVert\hat{\mathbf{X}}^{(n)}\mathbf{W}-\mathbf{X}^{(n)}\right\rVert_{F} is known. Then, as n\to\infty, we have \hat{\alpha}_{sub}\to^{P}\alpha and \hat{\beta}_{sub}\to^{P}\beta, where (\hat{\alpha}_{sub},\hat{\beta}_{sub})=\mathrm{EST}(\mathbf{A}^{(n)},d,\hat{\mathbf{W}}^{(n)},\psi,\{y_{i}\}_{i=1}^{n}) (see Algorithm 1a).

A rough sketch of the proof of Theorem 4 is as follows. Note that \hat{t}_{i}=\arg\min_{t}\left\lVert\tilde{\mathbf{x}}_{i}-\psi(t)\right\rVert, where \tilde{\mathbf{x}}_{i} is the optimally rotated adjacency spectral estimate of \mathbf{x}_{i}, and t_{i}=\arg\min_{t}\left\lVert\mathbf{x}_{i}-\psi(t)\right\rVert. Recall that Theorem 1 tells us that \tilde{\mathbf{x}}_{i} is consistent for \mathbf{x}_{i}. This enables us to prove that \hat{t}_{i} is consistent for t_{i}, which ultimately leads to the consistency of \hat{\alpha}_{sub} and \hat{\beta}_{sub}, because \hat{\alpha}_{sub} and \hat{\beta}_{sub} are computed by replacing the true regressors t_{i} with \hat{t}_{i} in the expressions for \hat{\alpha}_{true} and \hat{\beta}_{true}, which are consistent for \alpha and \beta respectively. Theorem 4 gives us the consistency of the substitute estimators of the regression parameters under the assumption that the gradient of the inverse function is bounded. Since continuously differentiable functions have bounded gradients on compact subsets of their domain, we can apply Theorem 4 whenever \gamma=\psi^{-1} can be expressed as the restriction of a function with continuous partial derivatives. As a direct consequence of Theorem 4, our next result demonstrates that the substitute estimators asymptotically attain the optimal variance among linear unbiased estimators.

Corollary 1.

Conditional upon the true regressors t_{i}, in the setting of Theorem 4 the following two conditions hold:
(A) \mathbb{E}(\hat{\alpha}_{sub})\to\alpha and \mathbb{E}(\hat{\beta}_{sub})\to\beta as n\to\infty;
(B) for any two linear unbiased estimators \tilde{\alpha} and \tilde{\beta} and an arbitrary \delta>0,
\mathrm{var}(\hat{\alpha}_{sub})\leq\mathrm{var}(\tilde{\alpha})+\delta and \mathrm{var}(\hat{\beta}_{sub})\leq\mathrm{var}(\tilde{\beta})+\delta for sufficiently large n.

4.2 The case of unknown manifold

Recall that our goal is to predict responses in a semisupervised setting for a random dot product graph on an unknown one-dimensional manifold in ambient space \mathbb{R}^{d}. We provide justification for the use of Algorithm 1b for this purpose by showing that the predicted response \tilde{y}_{r} at the r-th node based on the raw-stress embeddings approaches the predicted response based on the true regressors t_{i} as n\to\infty.

Intuition suggests that in order to carry out inference on the regression model, we must learn the unknown manifold \mathcal{M}. We exploit the availability of a large number of unlabeled nodes, whose latent positions lie on the one-dimensional manifold, to learn the manifold. Observe that since the underlying manifold \psi([0,L]) is arclength parameterized, the geodesic distance between any two points on it equals the distance between their corresponding pre-images. Results from the literature (bernstein2000graph) show that if an appropriate localization graph is constructed on a sufficiently large number of points on a manifold, then the shortest path distance between two points approximates the geodesic distance between those two points. Therefore, on a localization graph with an appropriately chosen neighbourhood parameter \lambda, constructed on the adjacency spectral estimates \hat{\mathbf{x}}_{1},\hat{\mathbf{x}}_{2},\dots,\hat{\mathbf{x}}_{n}, the shortest path distance d_{n,\lambda}(\hat{\mathbf{x}}_{i},\hat{\mathbf{x}}_{j}) between \hat{\mathbf{x}}_{i} and \hat{\mathbf{x}}_{j} is expected to approximate the geodesic distance d_{M}(\mathbf{x}_{i},\mathbf{x}_{j})=|t_{i}-t_{j}|. Note that there is no need to rotate the adjacency spectral estimates \hat{\mathbf{x}}_{i}, because the shortest path distance is a sum of Euclidean distances and Euclidean distances are invariant to orthogonal transformations. Thus, when the dissimilarity matrix \mathbf{D}=\left(d_{n,\lambda}(\hat{\mathbf{x}}_{i},\hat{\mathbf{x}}_{j})\right)_{i,j=1}^{l} is embedded into one dimension by minimization of raw stress, we obtain embeddings \hat{z}_{1},\dots,\hat{z}_{l} for which the interpoint distance |\hat{z}_{i}-\hat{z}_{j}| approximates the interpoint distance |t_{i}-t_{j}|. In other words, the estimated distances from the raw-stress embeddings applied to the adjacency spectral estimates of the latent positions approximate the true geodesic distances, as demonstrated by the following result. This argument is the basis for the construction of a sequence of predicted responses based on the raw-stress embeddings, which approach the predicted responses based on the true regressors as the number of auxiliary nodes goes to infinity.

Theorem 5.

(Theorem 4 from trosset2020learning): Suppose the function \psi:[0,L]\to\mathbb{R}^{d} is such that \left\lVert\dot{\psi}(t)\right\rVert=1 for all t\in[0,L]. Let \mathbf{x}_{i}\in\mathbb{R}^{d} be the latent position of the i-th node of the random dot product graph, let t_{i}\in\mathbb{R} be such that \mathbf{x}_{i}=\psi(t_{i}) for all i, and assume t_{i}\sim^{iid}g, where g is an absolutely continuous probability density function satisfying \mathrm{supp}(g)=[0,L] and g_{min}>0. Let \hat{\mathbf{x}}_{i} be the adjacency spectral estimate of the true latent position \mathbf{x}_{i} for all i. There exist sequences \{n_{K}\}_{K=1}^{\infty} of numbers of nodes and \{\lambda_{K}\}_{K=1}^{\infty} of neighbourhood parameters satisfying n_{K}\to\infty and \lambda_{K}\to 0 as K\to\infty, such that for any fixed integer l satisfying s<l<n_{K} for all K,

(|\hat{z}_{i}-\hat{z}_{j}|-|t_{i}-t_{j}|)\to^{P}0,\text{ for all }i,j\in[l]    (9)

holds, where (\hat{z}_{1},\dots,\hat{z}_{l}) is the minimizer of the raw-stress criterion, that is,

(\hat{z}_{1},\dots,\hat{z}_{l})=\arg\min\sum_{i=1}^{l}\sum_{j=1}^{l}(|z_{i}-z_{j}|-d_{n_{K},\lambda_{K}}(\hat{\mathbf{x}}_{i},\hat{\mathbf{x}}_{j}))^{2}.    (10)

Theorem 5 shows that the one-dimensional raw-stress embeddings \hat{z}_{1},\dots,\hat{z}_{l} satisfy (|\hat{z}_{i}-\hat{z}_{j}|-|t_{i}-t_{j}|)\to 0 as K\to\infty, for all i,j\in[l]. This means that for every i\in[l], \hat{z}_{i} approximates t_{i} up to scale and location transformations. Since in simple linear regression the accuracy of the predicted response is unaffected by scale and location transformations of the regressor, we can expect the predicted response at a particular node based on the \hat{z}_{i} to approach the predicted response based on the true regressors t_{i}. The following theorem, our key result in this paper, demonstrates that this is indeed the case.

Theorem 6.

Consider a random dot product graph for which the latent position of each node lies on an arclength parameterized one-dimensional manifold \psi([0,L]), where \psi is unknown. Let \mathbf{x}_{i}=\psi(t_{i}) be the latent position of the i-th node for all i. Assume y_{i}=\alpha+\beta t_{i}+\epsilon_{i}, \epsilon_{i}\sim^{iid}N(0,\sigma^{2}_{\epsilon}) for i\in[s], where s is a fixed integer. The predicted response at the r-th node based on the true regressors is \hat{y}_{r}=\hat{\alpha}_{true}+\hat{\beta}_{true}t_{r}. There exist sequences n_{K}\to\infty of numbers of nodes and \lambda_{K}\to 0 of neighbourhood parameters such that for every r>s, |\hat{y}_{r}-\tilde{y}_{r}^{(K)}|\to^{P}0 as K\to\infty, where \tilde{y}_{r}^{(K)}=\mathrm{PRED}(\mathbf{A}^{(K)},d,\lambda_{K},l,\{y_{i}\}_{i=1}^{s},r) (see Algorithm 1b), \mathbf{A}^{(K)} being the adjacency matrix when the number of nodes is n_{K} and l being a fixed natural number that satisfies l>r>s.

Recall that the validity of a simple linear regression model can be tested by an F-test whose test statistic depends on the predicted responses based on the true regressors. Since we have a way to approximate the predicted responses based on the true regressors by predicted responses based on the raw-stress embeddings, we can also devise a test whose power approximates the power of the F-test based on the true regressors, as shown by the following result.

Corollary 2.

In the setting of Theorem 6, suppose \{(\tilde{y}_{1}^{(K)},\tilde{y}_{2}^{(K)},\dots,\tilde{y}_{s}^{(K)})\}_{K=1}^{\infty} is the sequence of vectors of predicted responses at the first s nodes of the random dot product graph, based on the raw-stress embeddings \hat{z}_{1},\dots,\hat{z}_{s}. Define

F^{*}=(s-2)\frac{\sum_{i=1}^{s}(\hat{y}_{i}-\bar{y})^{2}}{\sum_{i=1}^{s}(y_{i}-\hat{y}_{i})^{2}},\qquad\hat{F}^{(K)}=(s-2)\frac{\sum_{i=1}^{s}(\tilde{y}^{(K)}_{i}-\bar{y})^{2}}{\sum_{i=1}^{s}(y_{i}-\tilde{y}^{(K)}_{i})^{2}}.    (11)

Consider testing the null hypothesis H_{0}:\beta=0 against H_{1}:\beta\neq 0 in the absence of the true regressors t_{i}, with the following decision rule: reject H_{0} in favour of H_{1} at level of significance \tilde{\alpha} if \hat{F}^{(K)}>c_{\tilde{\alpha}}, where c_{\tilde{\alpha}} is the (1-\tilde{\alpha})-th quantile of the F_{1,s-2} distribution. If the power of this test is denoted by \hat{\pi}^{(K)}, then \lim_{K\to\infty}\hat{\pi}^{(K)}=\pi^{*}, where \pi^{*} is the power of the test whose decision rule is to reject H_{0} in favour of H_{1} at level of significance \tilde{\alpha} if F^{*}>c_{\tilde{\alpha}}.

Thus, if one wants to perform a test for model validity in the absence of the true regressors t_{i}, then a test of approximately equal power, based on the raw-stress embeddings \hat{z}_{i} for a graph with a sufficiently large number of auxiliary nodes, can be used instead.
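In R, the approximate test of Corollary 2 can be carried out as in the following sketch (approx_F_test is our name); the predicted responses ytilde can be taken as the fitted values from regressing the observed responses on the raw-stress embeddings of the labeled nodes.

# Approximate F-test of H0: beta = 0 based on predicted responses from the
# raw-stress embeddings (Eq. (11)); y and ytilde are both of length s.
approx_F_test <- function(y, ytilde, level = 0.05) {
  s     <- length(y)
  F_hat <- (s - 2) * sum((ytilde - mean(y))^2) / sum((y - ytilde)^2)
  crit  <- qf(1 - level, df1 = 1, df2 = s - 2)   # (1 - level)-quantile of F_{1, s-2}
  list(statistic = F_hat, critical_value = crit, reject = F_hat > crit)
}
# Example: ytilde <- fitted(lm(y ~ z_hat[1:length(y)]))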

5 Simulation

In this section, we present our simulation results demonstrating support for our theorems. We conducted simulations on 100 Monte Carlo samples of graphs on known and unknown one-dimensional manifolds.

5.1 The case of known manifold

We take the manifold to be \psi([0,1]), where \psi:[0,1]\to\mathbb{R}^{3} is the Hardy-Weinberg curve, given by \psi(t)=(t^{2},2t(1-t),(1-t)^{2}). The number of nodes, n, varies from 600 to 2500 in steps of 100. For each n, we repeat the following procedure over 100 Monte Carlo samples. A sample t_{1},\dots,t_{n}\sim^{iid}U[0,1] is generated, and responses y_{i} are sampled via the regression model y_{i}=\alpha+\beta t_{i}+\epsilon_{i}, \epsilon_{i}\sim^{iid}N(0,\sigma^{2}_{\epsilon}), i\in[n], where \alpha=2.0, \beta=5.0, \sigma_{\epsilon}=0.1. An undirected hollow random dot product graph with latent positions \mathbf{x}_{i}=\psi(t_{i}), i\in[n], is generated. More specifically, the (i,j)-th element of the adjacency matrix \mathbf{A} satisfies \mathbf{A}_{ij}\sim\mathrm{Bernoulli}(\mathbf{x}_{i}^{T}\mathbf{x}_{j}) for all i<j, with \mathbf{A}_{ij}=\mathbf{A}_{ji} for all i,j\in[n] and \mathbf{A}_{ii}=0 for all i\in[n]. We denote the true latent position matrix by \mathbf{X}=[\mathbf{x}_{1}|\mathbf{x}_{2}|\dots|\mathbf{x}_{n}]^{T}, and its adjacency spectral estimate by \hat{\mathbf{X}}. We compute

\hat{\mathbf{W}}=\arg\min_{\mathbf{W}\in\mathcal{O}(d)}\left\lVert\mathbf{X}-\hat{\mathbf{X}}\mathbf{W}\right\rVert_{F}

and finally, set \tilde{\mathbf{X}}=\hat{\mathbf{X}}\hat{\mathbf{W}} to be the optimally rotated adjacency spectral estimate of the latent position matrix \mathbf{X}. Then we obtain \hat{t}_{i}=\arg\min_{t}\left\lVert\tilde{\mathbf{x}}_{i}-\psi(t)\right\rVert for i\in[n], and compute \hat{\alpha}_{sub} and \hat{\beta}_{sub}. Setting \boldsymbol{\theta}=(\alpha,\beta), \hat{\boldsymbol{\theta}}_{true}=(\hat{\alpha}_{true},\hat{\beta}_{true}) and \hat{\boldsymbol{\theta}}_{sub}=(\hat{\alpha}_{sub},\hat{\beta}_{sub}), we compute the sample mean squared errors (MSE) of \hat{\boldsymbol{\theta}}_{true} and \hat{\boldsymbol{\theta}}_{sub} over the 100 Monte Carlo samples and plot them against n. The plot is given in Figure 2.
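For reference, a single Monte Carlo replicate of this experiment could be sketched in R as follows, using the helpers sample_rdpg(), ase(), procrustes_rotation() and est_known_manifold() introduced above (all of them our own names).

# One replicate of the known-manifold simulation (Hardy-Weinberg curve).
psi_hw <- function(t) c(t^2, 2 * t * (1 - t), (1 - t)^2)
n      <- 600
t_true <- runif(n)                                   # t_i ~ U[0, 1]
y      <- 2.0 + 5.0 * t_true + rnorm(n, sd = 0.1)    # alpha = 2, beta = 5, sigma_eps = 0.1
X      <- t(sapply(t_true, psi_hw))                  # n x 3 latent positions on the curve
A      <- sample_rdpg(X)
W_hat  <- procrustes_rotation(ase(A, 3), X)          # uses the true X (see Remark 4)
theta_sub <- est_known_manifold(A, W_hat, d = 3, psi = psi_hw, L = 1, y = y)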

Remark 4.

The fact that the optimal rotation matrix \hat{\mathbf{W}} needs to be computed from the true latent position matrix \mathbf{X} is what makes inference on the regression model in the scenario of a known manifold unrealistic, because \mathbf{X} is typically unknown.

Figure 2: Plot showing consistency of the substitute estimator of the regression parameter vector on a known manifold. For 100 Monte Carlo samples, substitute estimates are computed using the projections of the optimally rotated adjacency spectral estimates of the latent positions onto the manifold, and then the sample MSEs of the estimator based on the true regressors and of the substitute estimator are computed. For graphs of moderate size (n\leq 2000), the substitute estimator performs significantly worse than the estimator based on the true regressors. However, as the number of nodes increases, the difference in performance between the estimators diminishes and the mean squared errors of both estimators approach zero.

5.2 The case of unknown manifold

We assume that the underlying arclength parameterized manifold is \psi([0,1]), where \psi:[0,1]\to\mathbb{R}^{4} is given by \psi(t)=(t/2,t/2,t/2,t/2). We take the number of nodes at which responses are recorded to be s=20. Here, m denotes the number of auxiliary nodes, and n=m+s denotes the total number of nodes. We vary n over the set \{500,750,1000,\dots,3000\}. For each n, we repeat the following procedure over 100 Monte Carlo samples. A sample t_{1},\dots,t_{s}\sim^{iid}U[0,1] is generated, and responses y_{i} are generated from the regression model y_{i}=\alpha+\beta t_{i}+\epsilon_{i}, \epsilon_{i}\sim^{iid}N(0,\sigma^{2}_{\epsilon}) for all i\in[s], where \alpha=2.0, \beta=5.0, \sigma_{\epsilon}=0.01. Additionally, we sample the pre-images of the auxiliary nodes, t_{s+1},\dots,t_{n}\sim^{iid}U[0,1]. Thus, a random dot product graph with latent positions \mathbf{x}_{i}=\psi(t_{i}), i\in[n], is generated and the adjacency spectral estimates \hat{\mathbf{x}}_{i} of its latent positions are computed. A localization graph is constructed with the first n/10 of the adjacency spectral estimates as nodes under the following rule: two nodes \hat{\mathbf{x}}_{i} and \hat{\mathbf{x}}_{j} of the localization graph are joined by an edge if and only if \left\lVert\hat{\mathbf{x}}_{i}-\hat{\mathbf{x}}_{j}\right\rVert<\lambda, where \lambda=0.8\times 0.99^{K-1} when n is the K-th term in the vector (500,750,1000,\dots,3000). Then, the shortest path distance matrix \mathbf{D}=\left(d_{n,\lambda}(\hat{\mathbf{x}}_{i},\hat{\mathbf{x}}_{j})\right)_{i,j=1}^{l} of the first l=s+1 estimated latent positions is embedded into one dimension by raw-stress minimization using the mds function of the smacof package in R, and one-dimensional embeddings \hat{z}_{i} are obtained. The responses y_{i} are regressed upon the one-dimensional raw-stress embeddings \hat{z}_{i} for i\in[s], and the predicted response \tilde{y}_{s+1} for the (s+1)-th node is computed. The predicted response \hat{y}_{s+1} for the (s+1)-th node based on the true regressors t_{i} is also obtained. Since the true regressors t_{i} are identically distributed, the distribution of each predicted response is invariant to the label of the node; hence we drop the subscript and use \hat{y}_{true} to denote the predicted response based on the true regressors t_{i}, and \hat{y}_{sub} to denote the predicted response based on the raw-stress embeddings \hat{z}_{i} (since the raw-stress embeddings are used as substitutes for the true regressors in predicting the response). Finally, the sample mean of the squared differences (\hat{y}_{sub}-\hat{y}_{true})^{2} over all the Monte Carlo samples is computed and plotted against n. The plot is given in Figure 3.
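A single replicate of this experiment could be sketched in R roughly as follows, reusing sample_rdpg() and pred_unknown_manifold() from above (for simplicity this sketch builds the localization graph on all n estimated positions rather than on the first n/10, as in the text).

# One replicate of the unknown-manifold simulation.
psi_line <- function(t) rep(t / 2, 4)          # arclength-parameterized curve in R^4
s <- 20; n <- 500; lambda <- 0.8
t_true <- runif(n)
y      <- 2.0 + 5.0 * t_true[1:s] + rnorm(s, sd = 0.01)
X      <- t(sapply(t_true, psi_line))
A      <- sample_rdpg(X)
# predicted response at node s + 1 based on the raw-stress embeddings
y_sub  <- pred_unknown_manifold(A, d = 4, lambda = lambda, l = s + 1, y = y, r = s + 1)
# benchmark: predicted response at node s + 1 based on the true regressors
df     <- data.frame(y = y, t = t_true[1:s])
y_true <- predict(lm(y ~ t, data = df), newdata = data.frame(t = t_true[s + 1]))
(y_sub - y_true)^2                             # squared difference for this replicate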

Figure 3: Plot showing consistency of the predicted responses based on raw-stress embeddings on an unknown manifold. The arclength parameterized manifold is taken to be \psi([0,1]), where \psi(t)=(t/2,t/2,t/2,t/2). The sample mean of the squared difference between the predicted response \hat{y}_{sub} based on the raw-stress embeddings and the predicted response \hat{y}_{true} based on the true regressors is plotted against the total number of nodes in the graph. The substitute predicted response \hat{y}_{sub} performs significantly worse than \hat{y}_{true} for a moderate number of auxiliary nodes (n\leq 1000). However, as the number of auxiliary nodes increases further, the squared difference between \hat{y}_{true} and \hat{y}_{sub} goes to zero.


Next, we focus on the issue of hypothesis testing based on the raw-stress embeddings \hat{z}_{i}. In order to test the validity of the model

y_{i}=\alpha+\beta t_{i}+\epsilon_{i}

where \epsilon_{i}\sim^{iid}N(0,\sigma^{2}_{\epsilon}), i\in[s], one would test H_{0}:\beta=0 against H_{1}:\beta\neq 0. If the true regressors t_{i} were known, we would use the test statistic F^{*} of Eq. (11), but Corollary 2 tells us that with a sufficiently large number of auxiliary latent positions, one can construct a test based on the one-dimensional raw-stress embeddings \hat{z}_{i} whose power approximates the power of the test based on the true F-statistic F^{*}. We present a plot in Figure 4 that supports Corollary 2. We show that the power of the test based on the raw-stress embeddings approaches the power of the test based on the true regressors for a chosen value of the pair of regression parameters. The setting is almost the same as the previous one, except that here the number n of nodes varies from 100 to 1000 in steps of 50. For each n, the true F-statistic (based on the true regressors t_{i}) and the estimated F-statistic (based on the raw-stress embeddings \hat{z}_{i}) are computed for 100 Monte Carlo samples, and the powers of the two tests are estimated by the proportion of Monte Carlo samples for which the statistics exceed a particular threshold. Then, for each n, the difference between the empirically estimated powers of the two tests (one based on the raw-stress embeddings and the other based on the true regressors) is computed and plotted against n. The plot is given in Figure 4, which shows that the difference between the empirically estimated powers of the tests based on the true regressors and on the raw-stress embeddings approaches zero as the number of auxiliary nodes grows.
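The empirical power comparison could be sketched in R as follows (empirical_powers is our name; it reuses sample_rdpg(), ase() and raw_stress_embed_1d() from above, and the regression F-statistic reported by lm() coincides with Eq. (11)).

# Estimate the powers of the F-tests based on the true regressors and on the
# raw-stress embeddings by the rejection proportion over Monte Carlo samples.
empirical_powers <- function(n_mc, n, s, lambda, level = 0.05,
                             alpha = 2.0, beta = 5.0, sigma_eps = 0.01) {
  reject <- matrix(FALSE, n_mc, 2)
  crit   <- qf(1 - level, 1, s - 2)
  for (m in seq_len(n_mc)) {
    t_true <- runif(n)
    y      <- alpha + beta * t_true[1:s] + rnorm(s, sd = sigma_eps)
    X      <- t(sapply(t_true, function(t) rep(t / 2, 4)))
    A      <- sample_rdpg(X)
    z_hat  <- raw_stress_embed_1d(ase(A, 4), lambda, l = s)
    F_true <- summary(lm(y ~ t_true[1:s]))$fstatistic[1]
    F_hat  <- summary(lm(y ~ z_hat))$fstatistic[1]
    reject[m, ] <- c(F_true, F_hat) > crit
  }
  colMeans(reject)    # empirical powers: (true regressors, raw-stress embeddings)
}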

Figure 4: Plot of the difference between the empirical powers of tests for model validity based on the one-dimensional raw-stress embeddings and on the true regressors. The arclength parameterized manifold is taken to be \psi([0,1]), where \psi(t)=(t/2,t/2,t/2,t/2). For a small fixed number (s=20) of nodes, responses y_{i} are generated from y_{i}=\alpha+\beta t_{i}+\epsilon_{i}, \epsilon_{i}\sim^{iid}N(0,\sigma^{2}_{\epsilon}). A large number (n-s) of auxiliary nodes are generated on \psi([0,1]) and a localization graph is constructed on the adjacency spectral estimates corresponding to the first n/2 nodes. When n is the K-th term of the vector (100,150,200,\dots,1000), the neighbourhood parameter is taken to be \lambda=0.9\times 0.99^{K-1}. The dissimilarity matrix of the shortest path distances is embedded into one dimension by minimization of the raw-stress criterion. In order to test H_{0}:\beta=0 vs H_{1}:\beta\neq 0, the test statistic F^{*} based on the true regressors t_{i} and the statistic \hat{F} based on the one-dimensional raw-stress embeddings \hat{z}_{i} are compared, where n is the total number of nodes in the graph. The corresponding powers are empirically estimated by the proportion of times, in a collection of 100 Monte Carlo samples, that the test statistics reject H_{0}, for every n varying from 100 to 500 in steps of 25. The plot shows that the difference between the estimated powers of the two tests goes to zero, indicating that the tests based on the raw-stress embeddings are almost as good as the tests based on the true regressors for a sufficiently large number of auxiliary nodes.

6 Application

In this section, we demonstrate the application of our methodology to real world data. Howard Hughes Medical Institute Janelia reconstructed the complete wiring diagram of the higher order parallel fibre system for associative learning in the larval Drosophila brain, the mushroom body. There are n=100 datapoints corresponding to 100 Kenyon cell neurons forming a network in a latent space. The distance (in microns) between the bundle entry point of a Kenyon cell neuron and the mushroom body neuropil is treated as the response corresponding to that neuron. We carry out hypothesis testing to test whether a simple linear regression model links the responses on the neurons of the right hemisphere of the larval Drosophila (Eichler2017TheCC, https://doi.org/10.48550/arxiv.1705.03297, athreya2017statistical) to some dimension-reduced version of the latent positions of the neurons.

A directed graph representing a network of the 100 Kenyon cell neurons is observed. Since the graph under consideration is directed, the adjacency spectral embedding takes into account both the left and right singular vectors of the adjacency matrix. The latent position of each node is estimated by a 6-dimensional vector formed by concatenating the top 3 left singular vectors, scaled by the diagonal matrix of the corresponding singular values, with the top 3 right singular vectors, scaled analogously. For each pair of components we obtain a scatterplot of the bivariate dataset of all 100 points, yielding the 6\times 6 matrix of scatterplots shown in Figure 5.
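A minimal numpy sketch of this construction is given below; the function name is ours, and the scatterplot-matrix call in the comment (pandas.plotting.scatter_matrix) is just one convenient way to reproduce a figure like Figure 5.

```python
import numpy as np

def directed_spectral_estimates(A, d=3):
    # 2d-dimensional estimates for a directed graph: top-d left and top-d right
    # singular vectors of the adjacency matrix A, each scaled by the
    # corresponding singular values, concatenated columnwise
    U, S, Vt = np.linalg.svd(A)
    left = U[:, :d] * S[:d]           # scaled left singular vectors
    right = Vt[:d, :].T * S[:d]       # scaled right singular vectors
    return np.hstack([left, right])   # here: a 100 x 6 matrix of estimates

# X_hat = directed_spectral_estimates(A)                   # A: 100 x 100 adjacency matrix
# pandas.plotting.scatter_matrix(pandas.DataFrame(X_hat))  # 6 x 6 matrix of scatterplots
```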

Refer to caption
Figure 5: Matrix of scatterplots indicating an underlying low-dimensional structure in the network of 100 Kenyon cell neurons in larval Drosophila. A directed graph is considered in which every node represents a neuron. A 6-dimensional adjacency spectral estimate is obtained for every node by concatenating the 3 leading left singular vectors, scaled by the corresponding singular values, with the 3 leading right singular vectors, scaled by the corresponding singular values. A scatterplot is then obtained for every pair of these 6 components. Since each component appears to be approximately a function of every other, an underlying 1-dimensional structure is assumed.

Figure 5 shows that every component is approximately related to every other component, indicating that the six-dimensional datapoints may lie on a one-dimensional manifold. We construct a localization graph with neighbourhood parameter \lambda=0.50 on the 6-dimensional estimates of the latent positions and embed the dissimilarity matrix \mathbf{D}=\left(d_{100,0.5}(\hat{\mathbf{x}}_{i},\hat{\mathbf{x}}_{j})\right)_{i,j=1}^{100} of shortest path distances into one dimension by minimizing the raw-stress criterion, obtaining the 1-dimensional embeddings \hat{z}_{i}. Figure 6 plots the responses y_{i} against the one-dimensional raw-stress embeddings \hat{z}_{i}.
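The following sketch (function name ours) mirrors this step using scipy's shortest-path routine and scikit-learn's SMACOF-based MDS, whose metric stress minimization stands in for the raw-stress minimization; it assumes the 6-dimensional estimates are stored row-wise in X_hat.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import shortest_path
from sklearn.manifold import MDS

def raw_stress_embedding_1d(X_hat, lam, random_state=0):
    # localization graph: connect points at Euclidean distance <= lam,
    # weighting each edge by that distance
    E = squareform(pdist(X_hat))
    W = np.where(E <= lam, E, 0.0)               # zero entries are treated as non-edges
    D = shortest_path(W, directed=False)         # shortest-path dissimilarities
    if np.isinf(D).any():
        raise ValueError("localization graph is disconnected; increase lam")
    # 1-dimensional embedding by (metric) stress minimization
    mds = MDS(n_components=1, dissimilarity="precomputed",
              metric=True, random_state=random_state)
    return mds.fit_transform(D).ravel()          # z_hat_1, ..., z_hat_n

# z_hat = raw_stress_embedding_1d(X_hat, lam=0.50)
```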

Refer to caption
Figure 6: Scatterplot indicating that the responses and the 1-dimensional raw-stress embeddings are linked via a simple linear regression model. From the 6-dimensional estimates of the latent positions corresponding to 100 Kenyon cell neurons forming a directed network in larval Drosophila, 1-dimensional embeddings \hat{z}_{i} are obtained by raw-stress minimization of the shortest path distances. The distance (y_{i}) between the bundle entry point of the i-th neuron and the mushroom body neuropil is treated as the response corresponding to the i-th neuron. The scatterplot of y_{i} against \hat{z}_{i}, with fitted regression line y=4356.1+1296.6x, indicates a significant effect (p<0.01 for H_{0}:b=0 vs H_{1}:b\neq 0 in y_{i}=a+b\hat{z}_{i}+\eta_{i}).

The plot in Figure 6 suggests that a simple linear regression model links the responses with the raw-stress embeddings. We wish to check the validity of the model

yi=a+bz^i+ηiy_{i}=a+b\hat{z}_{i}+\eta_{i}

where \eta_{i}\sim^{iid}N(0,\sigma^{2}_{\eta}), i\in[n]. For that purpose, we test H_{0}:b=0 vs H_{1}:b\neq 0 at level of significance 0.01. The value of the F-statistic, with degrees of freedom 1 and 98, is found to be 9.815. This yields the p-value P[F_{1,98}>9.815]=0.0023, which is lower than our level of significance 0.01. Therefore, we conclude that a simple linear regression model with y_{i} as the dependent variable and \hat{z}_{i} as the independent variable holds. Using Corollary 2, we conclude that the responses on the neurons are linked to the scalar pre-images of the latent positions via a simple linear regression model. Moreover, if the distance between the bundle entry point and the mushroom body neuropil is not recorded for some Kenyon cell neurons, then these values can be predicted using the one-dimensional raw-stress embeddings \hat{z}_{i} as proxies for the true regressors.
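As a quick numerical check (a sketch, not part of the original analysis), the reported p-value can be recovered from the F-statistic and its degrees of freedom:

```python
from scipy.stats import f

p_value = f.sf(9.815, 1, 98)   # P[F_{1,98} > 9.815] ~ 0.0023
```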

7 Conclusion

In this work, we derived theoretical and numerical results for models in which the latent positions of a random dot product graph lie on a one-dimensional manifold in a high dimensional ambient space. We demonstrated that for a known manifold, the parameters of a simple linear regression model, linking the response variable recorded at each node of the graph to the scalar pre-images of the latent positions of the nodes, can be estimated consistently even though the true regressors are unknown. A key result of our work is that even when the manifold is unknown (the more realistic scenario), it can be learned well enough under favourable conditions to obtain predicted responses that are close to the predicted responses based on the true regressors.

We use the convergence guarantees for raw-stress embeddings (trosset2020learning) to obtain consistent estimators of the interpoint distances of the regressors when the underlying manifold is unknown. We demonstrate that, as the number of auxiliary latent positions grows to infinity, at every auxiliary node the predicted response based on the raw-stress embeddings approaches the predicted response based on the true regressors.

Observe that while the substitute estimators of the regression parameters (or the predicted responses based on the raw-stress embeddings) asymptotically perform nearly as well as their counterparts based on the true regressors, in real-life scenarios we may be dealing with small samples, where the substitute estimators (or the predicted responses) are likely to perform poorly. When the underlying manifold is known, we can overcome this issue by taking into account the measurement errors, namely the differences between the estimated regressors and the true regressors, and adjusting the estimators of the regression parameters accordingly (fuller1987measurement). We conduct a simulation to compare the performances of the estimator based on the true regressors, the substitute (or naive) estimator and the measurement-error-adjusted estimators, on a known manifold. For a regression model y_{i}=\beta t_{i}+\epsilon_{i}, \epsilon_{i}\sim^{iid}N(0,\sigma^{2}_{\epsilon}), where the regressors t_{i} are estimated by \hat{t}_{i}, the measurement-error-adjusted estimator is given by \hat{\beta}_{adj,\sigma}=\frac{\sum_{i=1}^{n}y_{i}\hat{t}_{i}}{\sum_{i=1}^{n}\hat{t}_{i}^{2}-\sum_{i=1}^{n}\Gamma_{i}} where \Gamma_{i}=\mathrm{var}(\hat{t}_{i}-t_{i}). In most realistic scenarios, the true values of \Gamma_{i} are unknown. However, if they admit consistent estimates \hat{\Gamma}_{i}, then we can use the proxy \hat{\beta}_{adj,\hat{\sigma}}=\frac{\sum_{i=1}^{n}y_{i}\hat{t}_{i}}{\sum_{i=1}^{n}\hat{t}_{i}^{2}-\sum_{i=1}^{n}\hat{\Gamma}_{i}}. In order to compare the performances of these estimators, we sample a random dot product graph whose nodes lie on a one-dimensional curve in a high dimensional ambient space. We compute the adjacency spectral estimates of the latent positions, project them onto the manifold, and obtain estimates of the regressors, which are then used to compute the values of \hat{\beta}_{true}, \hat{\beta}_{naive}, \hat{\beta}_{adj,\sigma} and \hat{\beta}_{adj,\hat{\sigma}} for 100 Monte Carlo samples. A boxplot of the values of these estimators computed over the 100 Monte Carlo samples is shown in Figure 7.
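A minimal sketch of the naive and measurement-error-adjusted slope estimators defined above (function name ours); it takes the estimated regressors \hat{t}_{i} and estimated error variances \hat{\Gamma}_{i} as inputs.

```python
import numpy as np

def slope_estimators(y, t_hat, Gamma_hat):
    # naive (plug-in) and measurement-error-adjusted estimators of beta in
    # y_i = beta * t_i + eps_i, when t_i is observed only through t_hat_i
    # with (estimated) error variances Gamma_hat_i = var(t_hat_i - t_i)
    numerator = np.sum(y * t_hat)
    beta_naive = numerator / np.sum(t_hat ** 2)
    beta_adj = numerator / (np.sum(t_hat ** 2) - np.sum(Gamma_hat))
    return beta_naive, beta_adj
```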

Refer to caption
Figure 7: Boxplot of the squared errors of the 4 estimators of the regression slope parameter, where the intercept term of the regression model is zero. On each of 100 Monte Carlo samples, a random graph of n=800 nodes is generated for which the latent position of the i-th node is given by \mathbf{x}_{i}=(t_{i}^{2},2t_{i}(1-t_{i}),(1-t_{i})^{2}) where t_{i}\sim^{iid}U[0,1]. The response y_{i} is generated at the i-th node via the regression model y_{i}=\beta t_{i}+\epsilon_{i}, \epsilon_{i}\sim^{iid}N(0,\sigma^{2}_{\epsilon}) where \beta=5.0, \sigma_{\epsilon}=0.1. The naive estimator was computed by plugging in the pre-images of the projections of the optimally rotated adjacency spectral estimates of the latent positions. In order to compute \hat{\beta}_{adj,\sigma}, we plug in the sum of sample variances obtained from another set of Monte Carlo samples where the graphs are generated from the same model. We obtain \sum_{i=1}^{n}\hat{\Gamma}_{i} by using the delta method on the asymptotic variance (see 2) of the optimally rotated adjacency spectral estimates of the latent positions, and thus compute \hat{\beta}_{adj,\hat{\sigma}}.


Figure 7 clearly shows that the measurement-error-adjusted estimators \hat{\beta}_{adj,\sigma} and \hat{\beta}_{adj,\hat{\sigma}} outperform the naive estimator \hat{\beta}_{naive}. Moreover, the performances of \hat{\beta}_{adj,\sigma} and \hat{\beta}_{adj,\hat{\sigma}} do not differ by a significant amount. This assures us that the use of the measurement-error-adjusted estimator \hat{\beta}_{adj,\hat{\sigma}} is pragmatic and effective, since \hat{\beta}_{adj,\sigma} cannot be computed in many realistic scenarios, owing to the true values of \Gamma_{i} being unknown.

Unfortunately, in the case of unknown manifolds one cannot readily apply this same methodology, as only the interpoint distances, and not the embeddings themselves, are preserved, as discussed in Section 4. We believe this is an interesting problem for future work. Another direction for future investigation is whether the results for one-dimensional manifolds can be generalized to k-dimensional manifolds with k>1.

Availability of data and materials

The dataset analyzed in this study is included in the published article Eichler2017TheCC.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

AA developed the theory, conducted experiments and wrote the manuscript. JA developed the theory and edited the manuscript. MWT developed the theory and edited the manuscript. YP conducted experiments and edited the manuscript. CEP formulated the problem, developed the theory and edited the manuscript.

Acknowledgements

This work is partially supported by the Johns Hopkins Mathematical Institute for Data Science (MINDS) Fellowship.

References

  • (1) Erdos, P.L., Rényi, A.: On the evolution of random graphs. Transactions of the American Mathematical Society 286, 257–257 (1984)
  • (2) Goldenberg, A., Zheng, A.X., Fienberg, S.E., Airoldi, E.M., et al.: A survey of statistical network models. Foundations and Trends® in Machine Learning 2(2), 129–233 (2010)
  • (3) Hoff, P.D., Raftery, A.E., Handcock, M.S.: Latent space approaches to social network analysis. Journal of the American Statistical Association 97, 1090–1098 (2002)
  • (4) Young, S.J., Scheinerman, E.R.: Random dot product graph models for social networks. In: Workshop on Algorithms and Models for the Web-Graph (2007)
  • (5) Rubin-Delanchy, P., Priebe, C.E., Tang, M., Cape, J.: A statistical interpretation of spectral embedding: The generalised random dot product graph. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 84, 1446–1473 (2022)
  • (6) Athreya, A., Fishkind, D.E., Tang, M., Priebe, C.E., Park, Y., Vogelstein, J.T., Levin, K., Lyzinski, V., Qin, Y.: Statistical inference on random dot product graphs: a survey. The Journal of Machine Learning Research 18(1), 8393–8484 (2017)
  • (7) Rubin-Delanchy, P.: Manifold structure in graph embeddings. Advances in Neural Information Processing Systems 33, 11687–11699 (2020)
  • (8) Belkin, M., Matveeva, I., Niyogi, P.: Tikhonov regularization and semi-supervised learning on large graphs. In: 2004 IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 3, p. 1000 (2004). doi:10.1109/ICASSP.2004.1326716
  • (9) Belkin, M., Niyogi, P.: Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation 15(6), 1373–1396 (2003). doi:10.1162/089976603321780317
  • (10) Athreya, A., Tang, M., Park, Y., Priebe, C.E.: On estimation and inference in latent structure random graphs. Statistical Science 36(1), 68–88 (2021)
  • (11) Trosset, M.W., Gao, M., Tang, M., Priebe, C.E.: Learning 1-dimensional submanifolds for subsequent inference on random dot product graphs. arXiv preprint arXiv:2004.07348 (2020)
  • (12) Eichler, K., Li, F., Litwin-Kumar, A., Park, Y., Andrade, I.V., Schneider-Mizell, C.M., Saumweber, T., Huser, A., Eschbach, C., Gerber, B., Fetter, R.D., Truman, J.W., Priebe, C.E., Abbott, L.F., Thum, A.S., Zlatic, M., Cardona, A.: The complete connectome of a learning and memory centre in an insect brain. Nature 548, 175–182 (2017)
  • (13) Fuller, W.A.: Measurement error models (1987)
  • (14) Whiteley, N., Gray, A., Rubin-Delanchy, P.: Discovering latent topology and geometry in data: a law of large dimension. arXiv preprint arXiv:2208.11665 (2022)
  • (15) Tenenbaum, J.B., De Silva, V., Langford, J.C.: A global geometric framework for nonlinear dimensionality reduction. science 290(5500), 2319–2323 (2000)
  • (16) Bernstein, M., De Silva, V., Langford, J.C., Tenenbaum, J.B.: Graph approximations to geodesics on embedded manifolds. Technical report, Citeseer (2000)
  • (17) Trosset, M.W., Buyukbas, G.: Rehabilitating Isomap: Euclidean Representation of Geodesic Structure (2021). 2006.10858
  • (18) Borg, I., Groenen, P.J.: Modern Multidimensional Scaling: Theory and Applications. Springer, ??? (2005)
  • (19) Priebe, C.E., Park, Y., Tang, M., Athreya, A., Lyzinski, V., Vogelstein, J.T., Qin, Y., Cocanougher, B., Eichler, K., Zlatic, M., Cardona, A.: Semiparametric spectral modeling of the Drosophila connectome. arXiv (2017). doi:10.48550/ARXIV.1705.03297. https://arxiv.org/abs/1705.03297
  • (20) Alyakin, A.A., Agterberg, J., Helm, H.S., Priebe, C.E.: Correcting a Nonparametric Two-sample Graph Hypothesis Test for Graphs with Different Numbers of Vertices (2020). 2008.09434
  • (21) Athreya, A., Priebe, C., Tang, M., Lyzinski, V., Marchette, D., Sussman, D.: A limit theorem for scaled eigenvectors of random dot product graphs. Sankhya A 1(78), 1–18 (2016)
  • (22) Cape, J., Tang, M., Priebe, C.E.: The two-to-infinity norm and singular subspace geometry with applications to high-dimensional statistics. The Annals of Statistics (2019)
  • (23) Tang, M., Athreya, A., Sussman, D.L., Lyzinski, V., Priebe, C.E.: A nonparametric two-sample hypothesis testing problem for random dot product graphs. arXiv (2014). doi:10.48550/ARXIV.1409.2344. https://arxiv.org/abs/1409.2344
  • (24) Trosset, M.W., Priebe, C.E.: Approximate Information Tests on Statistical Submanifolds. arXiv (2019). doi:10.48550/ARXIV.1903.08656. https://arxiv.org/abs/1903.08656
  • (25) Sekhon, S.: A Result on Convergence of Double Sequences of Measurable Functions (2021). 2104.09819

8 Appendix

Theorem 4: Suppose \psi:[0,L]\to\mathbb{R}^{d} is injective, so that its inverse \gamma is well defined on \psi([0,L]), and suppose \gamma satisfies \left\lVert\nabla\gamma(\mathbf{w})\right\rVert<K for all \mathbf{w}\in\psi([0,L]), for some K>0. Let \mathbf{x}_{i}=\psi(t_{i}) be the latent position of the i-th node of a random dot product graph with n nodes, and assume y_{i}=\alpha+\beta t_{i}+\epsilon_{i}, \epsilon_{i}\sim^{iid}N(0,\sigma^{2}_{\epsilon}) for all i\in[n]. Assume \mathbf{x}_{i}\sim^{iid}F for all i, where F is an inner product distribution on \mathbb{R}^{d}. Suppose \tilde{\mathbf{x}}_{i} is the optimally rotated adjacency spectral estimate of \mathbf{x}_{i} for all i, and
\hat{t}_{i}=\arg\min_{t}\left\lVert\tilde{\mathbf{x}}_{i}-\psi(t)\right\rVert. Then, \hat{\alpha}_{sub}\to^{P}\alpha and \hat{\beta}_{sub}\to^{P}\beta as n\to\infty.

Proof: Set \mathbf{u}_{i}=\tilde{\mathbf{x}}_{i}-\mathbf{x}_{i} for all i\in[n] and note that, by Theorem 1, \max_{i}\left\lVert\mathbf{u}_{i}\right\rVert\to^{P}0 as n\to\infty. Let \mathbf{h}_{i} be the vector of minimum length for which \tilde{\mathbf{x}}_{i}+\mathbf{h}_{i}\in\psi([0,L]). Note that \left\lVert\mathbf{h}_{i}\right\rVert\leq\left\lVert\mathbf{u}_{i}\right\rVert for all i.
Setting 𝐪i=𝐡i+𝐮i\mathbf{q}_{i}=\mathbf{h}_{i}+\mathbf{u}_{i} and using Taylor’s theorem, we observe that for all ii,

t^i=γ(𝐱i+𝐪i)=ti+𝐪iTγ(𝐱i)+o(𝐪i).\displaystyle\hat{t}_{i}=\gamma(\mathbf{x}_{i}+\mathbf{q}_{i})=t_{i}+\mathbf{q}_{i}^{T}\nabla\gamma(\mathbf{x}_{i})+o(\left\lVert\mathbf{q}_{i}\right\rVert).

Hence, by the Cauchy–Schwarz inequality,

|t^iti|𝐪iγ(𝐱i)+o(𝐪i)K𝐪i+o(𝐪i).|\hat{t}_{i}-t_{i}|\leq\left\lVert\mathbf{q}_{i}\right\rVert\left\lVert\nabla\gamma(\mathbf{\mathbf{x}}_{i})\right\rVert+o(\left\lVert\mathbf{q}_{i}\right\rVert)\leq K\left\lVert\mathbf{q}_{i}\right\rVert+o(\left\lVert\mathbf{q}_{i}\right\rVert).

Note that \left\lVert\mathbf{q}_{i}\right\rVert\leq 2\left\lVert\mathbf{u}_{i}\right\rVert by the triangle inequality, and therefore \max_{i}\left\lVert\mathbf{q}_{i}\right\rVert\to^{P}0 as n\to\infty, which implies \max_{i}|\hat{t}_{i}-t_{i}|\to^{P}0 as n\to\infty.
Recall that the regression parameter estimators based on true regressor values are

β^true=1ni=1n(yiy¯)(tit¯)1ni=1n(tit¯)2,α^true=y¯β^truet¯,\hat{\beta}_{true}=\frac{\frac{1}{n}\sum_{i=1}^{n}(y_{i}-\bar{y})(t_{i}-\bar{t})}{\frac{1}{n}\sum_{i=1}^{n}(t_{i}-\bar{t})^{2}},\hskip 14.22636pt\hat{\alpha}_{true}=\bar{y}-\hat{\beta}_{true}\bar{t},

and the substitute or plug-in estimators are

β^sub=1ni=1n(yiy¯)(t^it^¯)1ni=1n(t^it^¯)2,α^sub=y¯β^subt^¯.\hat{\beta}_{sub}=\frac{\frac{1}{n}\sum_{i=1}^{n}(y_{i}-\bar{y})(\hat{t}_{i}-\bar{\hat{t}})}{\frac{1}{n}\sum_{i=1}^{n}(\hat{t}_{i}-\bar{\hat{t}})^{2}},\hskip 14.22636pt\hat{\alpha}_{sub}=\bar{y}-\hat{\beta}_{sub}\bar{\hat{t}}.

Note that, by the triangle inequality, \max_{i}|\hat{t}_{i}-t_{i}|\to^{P}0 as n\to\infty implies |\bar{t}-\bar{\hat{t}}|\to^{P}0, |\frac{1}{n}\sum_{i=1}^{n}(t_{i}-\bar{t})^{2}-\frac{1}{n}\sum_{i=1}^{n}(\hat{t}_{i}-\bar{\hat{t}})^{2}|\to^{P}0 and |\frac{1}{n}\sum_{i=1}^{n}y_{i}(t_{i}-\bar{t})-\frac{1}{n}\sum_{i=1}^{n}y_{i}(\hat{t}_{i}-\bar{\hat{t}})|\to^{P}0. Thus, as n\to\infty, |\hat{\beta}_{sub}-\hat{\beta}_{true}|\to^{P}0 and |\hat{\alpha}_{sub}-\hat{\alpha}_{true}|\to^{P}0. Recalling that \hat{\alpha}_{true} and \hat{\beta}_{true} are consistent for \alpha and \beta respectively, we conclude that \hat{\alpha}_{sub} and \hat{\beta}_{sub} are also consistent for \alpha and \beta.
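For reference, both pairs of estimators appearing in the proof are ordinary least-squares fits, differing only in the regressors supplied; a minimal sketch (function name ours):

```python
import numpy as np

def simple_ols(y, x):
    # returns (beta_hat, alpha_hat) for a simple linear regression of y on x
    xc, yc = x - x.mean(), y - y.mean()
    beta_hat = (yc @ xc) / (xc @ xc)
    return beta_hat, y.mean() - beta_hat * x.mean()

# beta_true_hat, alpha_true_hat = simple_ols(y, t)      # true regressors t_i
# beta_sub_hat,  alpha_sub_hat  = simple_ols(y, t_hat)  # plug-in regressors t_hat_i
```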


Corollary 1: Conditioning upon the true regressors t_{i} in the setting of Theorem 4, the following two statements hold:
(A) \mathbb{E}(\hat{\alpha}_{sub})\to\alpha and \mathbb{E}(\hat{\beta}_{sub})\to\beta as n\to\infty;
(B) for any two linear unbiased estimators \tilde{\alpha} and \tilde{\beta} and an arbitrary \delta>0,
\mathrm{var}(\hat{\alpha}_{sub})\leq\mathrm{var}(\tilde{\alpha})+\delta and \mathrm{var}(\hat{\beta}_{sub})\leq\mathrm{var}(\tilde{\beta})+\delta for sufficiently large n.

Proof: From Theorem 4 it follows directly that, as n\to\infty, \mathbb{E}(\hat{\alpha}_{sub})\to\alpha and \mathbb{E}(\hat{\beta}_{sub})\to\beta. Moreover, note that (\hat{\alpha}_{sub}-\hat{\alpha}_{true})\to^{P}0 and (\hat{\beta}_{sub}-\hat{\beta}_{true})\to^{P}0 as n\to\infty. Thus, for any \delta>0, \mathrm{var}(\hat{\alpha}_{sub})\leq\mathrm{var}(\hat{\alpha}_{true})+\delta and \mathrm{var}(\hat{\beta}_{sub})\leq\mathrm{var}(\hat{\beta}_{true})+\delta for sufficiently large n. Recalling that \hat{\alpha}_{true} and \hat{\beta}_{true} are best linear unbiased estimators of \alpha and \beta respectively, (B) follows.


Theorem 6: Consider a random dot product graph for which each node lies on an arclength parameterized one-dimensional manifold \psi([0,L]), where \psi is unknown. Let \mathbf{x}_{i}=\psi(t_{i}) be the latent position of the i-th node for all i. Assume y_{i}=\alpha+\beta t_{i}+\epsilon_{i}, \epsilon_{i}\sim^{iid}N(0,\sigma^{2}_{\epsilon}) for i\in[s], where s is a fixed integer. The predicted response at the r-th node based on the true regressors is \hat{y}_{r}=\hat{\alpha}_{true}+\hat{\beta}_{true}t_{r}. There exist a sequence n_{K}\to\infty of numbers of nodes and a sequence \lambda_{K}\to 0 of neighbourhood parameters such that for every r>s, |\hat{y}_{r}-\tilde{y}_{r}^{(K)}|\to^{P}0 as K\to\infty, where \tilde{y}_{r}^{(K)}=\mathrm{PRED}(\mathbf{A}^{(K)},d,\lambda_{K},l,\left\{y_{i}\right\}_{i=1}^{s},r) (see Algorithm 1b), \mathbf{A}^{(K)} being the adjacency matrix when the number of nodes is n_{K} and l being a fixed natural number that satisfies l>r>s.

Proof: Fix l\in\mathbb{N} such that s<r\leq l. For each K\in\mathbb{N}, choose the number of nodes n_{K} to be observed and an appropriate \lambda_{K} such that Eq. (9) holds, and recall from Eq. (10) that (\hat{z}_{1}^{(K)},\dots,\hat{z}_{l}^{(K)}) is the minimizer of the raw-stress criterion:

σl(z1,zl)=12i=1lj=1l(|zizj|dnK,λK(𝐱^i,𝐱^j))2\sigma_{l}(z_{1},...z_{l})=\frac{1}{2}\sum_{i=1}^{l}\sum_{j=1}^{l}(|z_{i}-z_{j}|-d_{n_{K},\lambda_{K}}(\hat{\mathbf{x}}_{i},\hat{\mathbf{x}}_{j}))^{2}

From Theorem 5, we know that for all i,j[l]i,j\in[l], as KK\to\infty,

(|z^i(K)z^j(K)||titj|)P0.(|\hat{z}_{i}^{(K)}-\hat{z}_{j}^{(K)}|-|t_{i}-t_{j}|)\to^{P}0. (12)

Define

β~(K)=i=1s(yiy¯)(z^i(K)z^¯(K))i=1s(z^i(K)z^¯(K))2,α~(K)=y¯β~(K)z^¯(K)\tilde{\beta}^{(K)}=\frac{\sum_{i=1}^{s}(y_{i}-\bar{y})(\hat{z}_{i}^{(K)}-\bar{\hat{z}}^{(K)})}{\sum_{i=1}^{s}(\hat{z}_{i}^{(K)}-\bar{\hat{z}}^{(K)})^{2}},\hskip 28.45274pt\tilde{\alpha}^{(K)}=\bar{y}-\tilde{\beta}^{(K)}\bar{\hat{z}}^{(K)} (13)

where z^¯(K)=1si=1sz^i(K)\bar{\hat{z}}^{(K)}=\frac{1}{s}\sum_{i=1}^{s}\hat{z}_{i}^{(K)}. Then we can define the predictor of yry_{r} based on z^i(K)\hat{z}_{i}^{(K)}’s to be

y~r(K)=α~(K)+β~(K)z^r(K)=y¯+β~(K)(z^r(K)z^¯(K)).\tilde{y}_{r}^{(K)}=\tilde{\alpha}^{(K)}+\tilde{\beta}^{(K)}\hat{z}_{r}^{(K)}=\bar{y}+\tilde{\beta}^{(K)}(\hat{z}_{r}^{(K)}-\bar{\hat{z}}^{(K)}).

If original tit_{i}’s were known, the predictor of yry_{r} would be

y^r=α^+β^tr=y¯+β^(trt¯).\hat{y}_{r}=\hat{\alpha}+\hat{\beta}t_{r}=\bar{y}+\hat{\beta}(t_{r}-\bar{t}).

Now, recall that for all i,j[l]i,j\in[l], (|z^i(K)z^j(K)||titj|)P0(|\hat{z}_{i}^{(K)}-\hat{z}_{j}^{(K)}|-|t_{i}-t_{j}|)\to^{P}0 as KK\to\infty. Thus, for any τ>0\tau>0 and ν>0\nu>0, there exists K0K_{0}\in\mathbb{N} such that for all KK0K\geq K_{0}, with probability at least (1ν)(1-\nu), for all i[l]i\in[l],

|a(K)ti+b(K)z^i(K)|τ\displaystyle|a^{(K)}t_{i}+b^{(K)}-\hat{z}_{i}^{(K)}|\leq\tau |a(K)(tit¯)(z^i(K)z^¯(K))|2τ\displaystyle\implies|a^{(K)}(t_{i}-\bar{t})-(\hat{z}_{i}^{(K)}-\bar{\hat{z}}^{(K)})|\leq 2\tau (14)

where a(K){1,+1}a^{(K)}\in\left\{-1,+1\right\}, b(K)b^{(K)}\in\mathbb{R}. Note that (a(K))2=1(a^{(K)})^{2}=1 for all KK. Thus, taking sufficiently large KK, we can bring i=1syi(tit¯)(trt¯)\sum_{i=1}^{s}y_{i}(t_{i}-\bar{t})(t_{r}-\bar{t}) and i=1syi(z^i(K)z^¯(K))(z^r(K)z^¯(K))\sum_{i=1}^{s}y_{i}(\hat{z}_{i}^{(K)}-\bar{\hat{z}}^{(K)})(\hat{z}^{(K)}_{r}-\bar{\hat{z}}^{(K)}) arbitrarily close with arbitrarily high probability. We can also bring i=1s(z^i(K)z^¯(K))2\sum_{i=1}^{s}(\hat{z}_{i}^{(K)}-\bar{\hat{z}}^{(K)})^{2} and i=1s(tit¯)2\sum_{i=1}^{s}(t_{i}-\bar{t})^{2} arbitrarily close with arbitrarily high probability, by choosing sufficiently large KK. Recall that

y~r(K)=y¯+i=1syi(z^i(K)z^¯(K))(z^r(K)z^¯(K))i=1s(z^i(K)z^¯(K))2,\displaystyle\tilde{y}_{r}^{(K)}=\bar{y}+\frac{\sum_{i=1}^{s}y_{i}(\hat{z}_{i}^{(K)}-\bar{\hat{z}}^{(K)})(\hat{z}^{(K)}_{r}-\bar{\hat{z}}^{(K)})}{\sum_{i=1}^{s}(\hat{z}_{i}^{(K)}-\bar{\hat{z}}^{(K)})^{2}}, (15)
y^r=y¯+i=1syi(tit¯)(trt¯)i=1s(tit¯)2.\displaystyle\hat{y}_{r}=\bar{y}+\frac{\sum_{i=1}^{s}y_{i}(t_{i}-\bar{t})(t_{r}-\bar{t})}{\sum_{i=1}^{s}(t_{i}-\bar{t})^{2}}.

Thus, we can bring y^r\hat{y}_{r} and y~r(K)\tilde{y}_{r}^{(K)} arbitrarily close with arbitrarily high probability, by choosing sufficiently large KK, which means |y~r(K)y^r|P0|\tilde{y}_{r}^{(K)}-\hat{y}_{r}|\to^{P}0 as KK\to\infty.

Corollary 2: In the setting of Theorem 6, suppose \left\{(\tilde{y}_{1}^{(K)},\tilde{y}_{2}^{(K)},\dots,\tilde{y}_{s}^{(K)})\right\}_{K=1}^{\infty} is the sequence of vectors of predicted responses at the first s nodes of the random dot product graph, based on the raw-stress embeddings \hat{z}_{1},\dots,\hat{z}_{s}. Define

F=(s2)i=1s(y^iy¯)2i=1s(yiy^i)2,F^(K)=(s2)i=1s(y~i(K)y¯)2i=1s(yiy~i(K))2.F^{*}=(s-2)\frac{\sum_{i=1}^{s}(\hat{y}_{i}-\bar{y})^{2}}{\sum_{i=1}^{s}(y_{i}-\hat{y}_{i})^{2}},\hskip 28.45274pt\hat{F}^{(K)}=(s-2)\frac{\sum_{i=1}^{s}(\tilde{y}^{(K)}_{i}-\bar{y})^{2}}{\sum_{i=1}^{s}(y_{i}-\tilde{y}^{(K)}_{i})^{2}}. (16)

Consider testing the null hypothesis H_{0}:\beta=0 against H_{1}:\beta\neq 0 in the absence of the true regressors t_{i}, with the decision rule: reject H_{0} in favour of H_{1} at level of significance \tilde{\alpha} if \hat{F}^{(K)}>c_{\tilde{\alpha}}, where c_{\tilde{\alpha}} is the (1-\tilde{\alpha})-th quantile of the F_{1,s-2} distribution. If the power of this test is denoted by \hat{\pi}^{(K)}, then \lim_{K\to\infty}\hat{\pi}^{(K)}=\pi^{*}, where \pi^{*} is the power of the test whose decision rule is to reject H_{0} in favour of H_{1} at level of significance \tilde{\alpha} if F^{*}>c_{\tilde{\alpha}}.

Proof: By Theorem 5 and the argument in the proof of Theorem 6, we have \max_{i\in[s]}|\tilde{y}_{i}^{(K)}-\hat{y}_{i}|\to^{P}0, which implies (\hat{F}^{(K)}-F^{*})\to^{P}0 as K\to\infty. Thus, for any (\alpha,\beta)\in\mathbb{R}^{2}, as K\to\infty, P_{\alpha,\beta}[\hat{F}^{(K)}>c_{\tilde{\alpha}}]\to P_{\alpha,\beta}[F^{*}>c_{\tilde{\alpha}}], and in particular \hat{\pi}^{(K)}\to\pi^{*}.