
High-dimensional sliced inverse regression with endogeneity

Linh H. Nghiem (corresponding author: [email protected]), School of Mathematics and Statistics, University of Sydney, Australia; Francis K.C. Hui, Research School of Finance, Actuarial Studies and Statistics, The Australian National University, Australia; Samuel Muller, School of Mathematical and Physical Sciences, Macquarie University, Sydney, Australia; A.H. Welsh
Abstract

Sliced inverse regression (SIR) is a popular sufficient dimension reduction method that identifies a small number of linear transformations of the covariates which retain the full regression information about the response. In high-dimensional settings, SIR can be combined with sparsity penalties to achieve sufficient dimension reduction and variable selection simultaneously. Both classical and sparse SIR estimators, however, assume the covariates are exogenous. Endogeneity can arise in a variety of situations, such as when variables are omitted or are measured with error. In this article, we show such endogeneity invalidates SIR estimators, leading to inconsistent estimation of the true central subspace. To address this challenge, we propose a two-stage Lasso SIR estimator, which first constructs a sparse high-dimensional instrumental variables model to obtain fitted values of the covariates spanned by the instruments, and then applies SIR augmented with a Lasso penalty on these fitted values. We establish theoretical bounds for the estimation and selection consistency of the true central subspace for the proposed estimators, allowing the number of covariates and instruments to grow exponentially with the sample size. Simulation studies and applications to two real-world datasets in nutrition and genetics illustrate the superior empirical performance of the two-stage Lasso SIR estimator compared with existing methods that disregard endogeneity and/or nonlinearity in the outcome model.

Keywords: Dimension reduction, high-dimension, instrumental variable, Lasso, measurement error, statistical genomics


1 Introduction

Sufficient dimension reduction (SDR, Ma and Zhu, 2013; Li, 2018) refers to a class of methods that assume the outcome $y\in\mathbb{R}$ depends on a set of covariates $\mathbf{X}\in\mathbb{R}^{p}$ through a small number of linear combinations $\bm{\beta}_{1}^{\top}\mathbf{X},\ldots,\bm{\beta}_{d}^{\top}\mathbf{X}$, with $d\ll p$. These linear combinations are known as sufficient predictors and retain the full regression information between the response and covariates. Since the pioneering work of Li (1991), a vast literature has grown on different SDR approaches, ranging from inverse-moment-based and regression-based methods (e.g., Cook and Forzani, 2008; Bura et al., 2022), to forward regression (e.g., Xia et al., 2002; Dai et al., 2024), and semiparametric techniques (e.g., Zhou and Zhu, 2016; Sheng and Yin, 2016). Among the various SDR methods, sliced inverse regression (SIR, Li, 1991) remains the most widely used due to its simplicity and computational efficiency, and as such continues to be the subject of much research (see Tan et al., 2018; Xu et al., 2022, among many others). When the number of covariates $p$ is large, it is often (further) assumed each linear combination depends only on a few covariates, i.e., the vectors $\bm{\beta}_{1},\ldots,\bm{\beta}_{d}$ are sparse. Recent literature has thus sought to combine SIR and other SDR methods with variable selection to achieve sparse sufficient dimension reduction (e.g., Lin et al., 2018, 2019; Nghiem et al., 2023c).

Much of the literature dedicated to SIR and many other SDR methods, including those mentioned above, has focused on the setting where all covariates are exogenous, and little has been done to tackle sufficient dimension reduction with endogeneity. While there is some related literature on single index models with endogeneity (Hu et al., 2015; Gao et al., 2020), these approaches require approximating the link function and are not easy to generalize to multiple index models. More specifically, letting $\mathbf{B}=[\bm{\beta}_{1},\ldots,\bm{\beta}_{d}]$ denote a $p\times d$ matrix, we can formulate SDR as a multiple index model

$$y=f(\mathbf{B}^{\top}\mathbf{X},\varepsilon), \qquad (1)$$

where $f$ denotes an unknown link function, and $\varepsilon\in\mathbb{R}$ is random noise. In (1), the target of estimation is the column space of $\mathbf{B}$, also known as the central subspace. Almost all current SDR methods assume the random noise $\varepsilon$ is independent of $\mathbf{X}$, implying $y\perp\mathbf{X}\mid\mathbf{B}^{\top}\mathbf{X}$, where `$\perp$' denotes statistical independence. However, this assumption may not be reasonable in many settings where the random noise $\varepsilon$ is dependent upon $\mathbf{X}$, i.e., the covariates are endogenous.

We discuss two examples, corresponding to two data applications in Section 6, where such endogeneity can arise.

  • Omitted-variable bias occurs when one or more (truly important) covariates are not included in the model. A classic example relates to studying the effect of education on future earnings. It is well known that future earnings depend on many other variables that are also highly correlated with education level, such as family background or an individual's overall abilities (Card, 1999). Such variables are usually not fully known or not collected, so the ordinary least squares fit of a regression model of earnings on education levels and other observed factors produces a biased estimate for the true effect of education (Card, 1999). Similarly, in Section 6.1, we consider a genomic study that relates a biological outcome (e.g., body weight) to gene expressions of mice. Beyond the gene expressions, there can be potentially unmeasured confounding factors, such as those related to environmental conditions, which influence both gene expression levels and the outcome.

  • Covariate measurement error arises when some or all covariates are not accurately measured. In the second data application in Section 6.2, we consider a nutritional study relating an individual's health outcomes to their long-term diet, collected through 24-hour recall interviews that are subject to measurement errors (Carroll et al., 2006). Let $\mathbf{X}$ denote the long-term dietary covariates. The observed data can then be modeled by $\mathbf{W}=\mathbf{X}+\mathbf{E}$, with $\mathbf{E}$ denoting an additive measurement error independent of $\mathbf{X}$. In this situation, even when in the true model (1) it holds that $\varepsilon\perp\mathbf{X}$, a naive analysis which replaces $\mathbf{X}$ by $\mathbf{W}$ results in endogeneity, since we can write $y=f(\mathbf{B}^{\top}\mathbf{W}-\mathbf{B}^{\top}\mathbf{E},\varepsilon)$ and $\mathbf{E}$ is correlated with $\mathbf{W}$. This causes naive estimators that ignore measurement errors to be biased and inconsistent.

As an aside, note that while omitted variables lead to endogeneity because too few covariates are included in the model, Fan and Liao (2014) argued that endogeneity also arises when a large number of covariates are included in the model, i.e., $\mathbf{X}$ is high-dimensional in (1). Specifically, for large $p$ there is a high probability that at least one covariate is correlated with the random noise. For instance, genome-wide association studies typically include thousands of gene expression covariates to model a clinical outcome, and at least one gene expression is likely to correlate with random noise reflecting experimental conditions or environmental perturbations (Lin et al., 2015).

In this paper, we show under fairly general conditions that endogeneity also invalidates sliced inverse regression, leading to inconsistent estimates of the true column space of $\mathbf{B}$. To address this issue, we propose a two-stage Lasso SIR method to estimate the central subspace in the presence of endogeneity when valid instrumental variables are available. In the first stage, we construct a sparse multivariate linear model regressing all the covariates against the instruments and then obtain fitted values from this model. In the second stage, we apply a sparse sliced inverse regression between the response and the fitted values from the first stage, to perform simultaneous SDR and variable selection.

Addressing endogeneity via instrumental variables has been extensively studied in the econometrics and statistics literature; see Angrist and Krueger (2001) for a review. Classical instrumental variable estimators are mostly studied with a linear model for the outcome, i.e., $y=\mathbf{X}^{\top}\bm{\beta}_{0}+\varepsilon$, and often involve computing a two-stage least squares estimator or a limited information maximum likelihood estimator. Hansen et al. (2008) showed in this setting that these estimators are consistent for $\bm{\beta}_{0}$ when the number of instruments $q$ increases slowly with the sample size. More recently however, a number of methods have combined instrumental variables with regularization techniques on the covariates and/or instrumental variables to improve their performance in high-dimensional settings, e.g., the Lasso-type generalized methods of moments (Caner, 2009; Caner et al., 2018), focused generalized methods of moments (Fan and Liao, 2014), and two-stage penalized least squares (Lin et al., 2015); the last of these methods inspires this article. Another line of research has focused on the use of sparsity-inducing penalties to estimate optimal instruments from a large set of a-priori valid instruments (see for instance Belloni et al., 2012; Chernozhukov et al., 2015), or to screen out invalid instruments (e.g., Windmeijer et al., 2019). To the best of our knowledge though, none of these methods allow for the presence of non-linear link functions in the model for the response, e.g., the multiple index model (1), and more broadly they have not been adapted to the SDR context.

Under a setting where both the number of covariates $p$ and the number of instrumental variables $q$ can grow exponentially with the sample size $n$, we derive theoretical bounds on the estimation and selection consistency of the two-stage Lasso SIR estimator. Simulation studies demonstrate the proposed estimator exhibits superior finite sample performance compared with a single-stage Lasso SIR estimator when endogeneity is present but ignored, and with the two-stage regularized estimator proposed by Lin et al. (2015) when the outcome model is non-linear in nature. We apply the proposed method to two real data applications: studying the effects of different nutrients on cholesterol levels (covariate measurement error), and identifying genes that are relevant to the body weight of mice (omitted variable bias).

The remainder of this article is structured as follows. In Section 2, we briefly review two versions of sliced inverse regression and prove that endogeneity causes SIR to be inconsistent for estimating the target subspace. In Section 3, we formulate the proposed two-stage Lasso estimators, and derive their theoretical properties in Section 4. Section 5 presents simulation studies to compare the performance of the proposed estimator with existing methods, while Section 6 discusses two data applications. Finally, Section 7 offers some concluding remarks.

2 SIR and endogeneity

In this section, we first establish the setting of interest, namely a high-dimensional multiple index model for the response along with a high-dimensional linear instrumental variable model for the covariates characterizing endogeneity. We then review SIR and a version augmented with the Lasso penalty (Section 2.1), and demonstrate the inconsistency of SIR under endogeneity in Section 2.2.

Before proceeding, we set out some notation to be used throughout the paper. For any matrix $\mathbf{A}$ with elements $a_{ij}$, let $\|\mathbf{A}\|_{1}=\max_{j}\sum_{i}|a_{ij}|$, $\|\mathbf{A}\|_{\infty}=\max_{i}\sum_{j}|a_{ij}|$, and $\|\mathbf{A}\|_{\max}=\max_{i,j}|a_{ij}|$ denote the matrix $1$-norm, $\infty$-norm, and element-wise maximum norm, respectively. Also, let $\|\mathbf{A}\|_{2}$ denote the $\ell_{2}$-operator norm, i.e., the largest singular value of $\mathbf{A}$. For any vector $\mathbf{v}$ with elements $v_{i}$, let $\|\mathbf{v}\|_{2}=(\sum_{i}v_{i}^{2})^{1/2}$, $\|\mathbf{v}\|_{1}=\sum_{i}|v_{i}|$, and $\|\mathbf{v}\|_{\infty}=\max_{i}|v_{i}|$ denote its Euclidean norm, $\ell_{1}$ norm, and $\ell_{\infty}$ norm, respectively. Finally, for an index set $I$, let $\mathbf{v}_{I}$ denote the subvector of $\mathbf{v}$ formed from the elements of $\mathbf{v}$ whose indices $i\in I$, let $|I|$ denote its cardinality, and let $I^{c}$ denote its complement.
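As a quick numerical illustration of these norms (our own sketch rather than part of the original development), the following R snippet evaluates each quantity directly from the definitions above for a small, arbitrary matrix and vector.

    # Evaluate the matrix and vector norms defined above for a small example
    A <- matrix(c(1, -2, 3, 4, 0, -1), nrow = 2)  # a 2 x 3 matrix
    v <- c(2, -1, 0.5)
    norm_1   <- max(colSums(abs(A)))  # matrix 1-norm: maximum absolute column sum
    norm_inf <- max(rowSums(abs(A)))  # matrix infinity-norm: maximum absolute row sum
    norm_max <- max(abs(A))           # element-wise maximum norm
    norm_op  <- max(svd(A)$d)         # l2-operator norm: largest singular value
    v_l2  <- sqrt(sum(v^2))           # Euclidean norm of v
    v_l1  <- sum(abs(v))              # l1 norm of v
    v_inf <- max(abs(v))              # l-infinity norm of v
    I <- c(1, 3); v_I <- v[I]         # subvector indexed by I, with cardinality length(I)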

Consider a set of $n$ independent observations $\{(y_{i},\mathbf{X}_{i}^{\top},\mathbf{Z}_{i}^{\top});i=1,\ldots,n\}$ where, without loss of generality, we assume $E(\mathbf{X}_{i})=E(\mathbf{Z}_{i})=\bm{0}$. Each triplet $(y_{i},\mathbf{X}_{i}^{\top},\mathbf{Z}_{i}^{\top})$ follows the multiple index model (1) coupled with the linear instrumental variable model. That is, the full model is given by

$$y_{i}=f(\mathbf{X}_{i}^{\top}\mathbf{B},\varepsilon_{i}),\quad\mathbf{X}_{i}=\bm{\Gamma}^{\top}\mathbf{Z}_{i}+\mathbf{U}_{i},\quad i=1,\ldots,n, \qquad (2)$$

where $\mathbf{B}\in\mathbb{R}^{p\times d}$, $\bm{\Gamma}\in\mathbb{R}^{q\times p}$, $\varepsilon_{i}\in\mathbb{R}$ denotes the random noise for the outcome model, and $\mathbf{U}_{i}\in\mathbb{R}^{p}$ denotes the random noise for the instrumental variable model. The vector $(\mathbf{U}_{i}^{\top},\varepsilon_{i})^{\top}\in\mathbb{R}^{p+1}$ is assumed to have mean zero and finite covariance matrix $\bm{\Sigma}$, whose diagonal elements are denoted by $\sigma_{j}^{2}$ for $j=1,\ldots,p+1$. To characterize endogeneity between $\mathbf{X}$ and $\varepsilon$, the off-diagonal elements in the last row and column of $\bm{\Sigma}$ are generally non-zero, while the instrumental variable $\mathbf{Z}_{i}$ satisfies $\mathbf{Z}_{i}\perp(\mathbf{U}_{i}^{\top},\varepsilon_{i})^{\top}$ for all $i=1,\ldots,n$.

We seek to estimate the column space of $\mathbf{B}$ when the dimensions $p$ and $q$ can grow exponentially with the sample size $n$. A common assumption in such settings is that sparsity holds. Specifically: i) each covariate $X_{j}$ depends upon only a few instrumental variables, such that in the instrumental variable model the $j$th column of $\bm{\Gamma}$ is a sparse vector with at most $r_{j}<q$ non-zero elements, where $r=\max_{j=1,\ldots,p}r_{j}$ is bounded; and ii) the multiple index model is sparse such that $y$ depends only on $s<p$ covariates, i.e., at most $s$ rows of $\mathbf{B}$ are non-zero row vectors. Equivalently, if $\mathcal{P}(\mathbf{B})=\mathbf{B}(\mathbf{B}^{\top}\mathbf{B})^{-1}\mathbf{B}^{\top}$ denotes the corresponding projection matrix, then only $s$ diagonal elements of $\mathcal{P}(\mathbf{B})$ are non-zero. Without loss of generality, we assume $S=\{1,\ldots,s\}$, i.e., the first $s$ covariates index this set of non-zero elements, or, equivalently, the support set of the column space of $\mathbf{B}$. We let $\mathcal{X}$, $\mathcal{Z}$ and $\mathbf{y}$ denote the $n\times p$ matrix of covariates, the $n\times q$ matrix of instrumental variables, and the $n\times 1$ vector of responses, whose $i$th rows are given by $\mathbf{X}_{i}^{\top}$, $\mathbf{Z}_{i}^{\top}$ and $y_{i}$, respectively. Let $\mathbf{x}_{j}$ and $\mathbf{z}_{j}$ denote the $j$th column of $\mathcal{X}$ and $\mathcal{Z}$, respectively. For the remainder of this article, we use $y$, $\mathbf{X}$, and $\mathbf{Z}$ to denote the random variables from which $(y_{i},\mathbf{X}_{i}^{\top},\mathbf{Z}_{i}^{\top})$ is drawn.

2.1 Review of sliced inverse regression and Lasso SIR

The sliced inverse regression or SIR estimator (Li, 1991) of the column space of $\mathbf{B}$ was developed under the assumption that the random noise $\varepsilon$ is independent of the covariates $\mathbf{X}$. The key assumption of SIR is the following linearity condition, which is satisfied when the covariates $\mathbf{X}$ follow an elliptical distribution.

Condition 2.1.

The conditional expectation $E(\mathbf{X}\mid\mathbf{B}^{\top}\mathbf{X})$ is a linear function of $\mathbf{B}^{\top}\mathbf{X}$.

Under this condition, the population foundation for SIR is that the inverse regression curve $E(\mathbf{X}\mid y)$ is contained in the linear subspace spanned by $\bm{\Sigma}_{\mathbf{X}}\mathbf{B}$ (Theorem 3.1, Li, 1991). As a result, letting $\{\bm{\eta}^{*}_{1},\ldots,\bm{\eta}^{*}_{d}\}$ denote the set of eigenvectors associated with the $d$ largest eigenvalues of $\bm{\Lambda}^{*}=\operatorname{Cov}\{E(\mathbf{X}\mid y)\}$, we have $\bm{\Sigma}_{\mathbf{X}}\bm{\beta}_{k}\propto\bm{\eta}^{*}_{k}$ for $k=1,\ldots,d$. Since $\bm{\Lambda}^{*}=\bm{\Sigma}_{\mathbf{X}}-E\{\operatorname{Cov}(\mathbf{X}\mid y)\}$, we can estimate $\bm{\Lambda}^{*}$ from estimates of $\bm{\Sigma}_{\mathbf{X}}$ and $\mathbf{T}=E\{\operatorname{Cov}(\mathbf{X}\mid y)\}$. This approach to estimation for SIR is recommended by Li (1991), and used by many authors in the literature (Tan et al., 2018; Zou, 2006), although it is not the only approach possible (Lin et al., 2019).

In more detail, suppose we estimate $\bm{\Sigma}_{\mathbf{X}}$ by the sample covariance $\widehat{\bm{\Sigma}}_{\mathbf{X}}=n^{-1}\mathcal{X}^{\top}\mathcal{X}$. When $n<p$, this sample covariance matrix is not invertible, but the Lasso SIR estimator generally does not require such invertibility, as we will see in a moment. Then, partitioning the data into $H$ non-overlapping slices $S_{1},\ldots,S_{H}$ based on the order statistics of $y$, we compute the sample covariance matrices within each slice and average them to form

$$\widehat{\mathbf{T}}=\dfrac{1}{H}\sum_{h=1}^{H}\left\{\frac{1}{c-1}\sum_{i\in S_{h}}(\mathbf{X}_{i}-\bar{\mathbf{X}}_{h})(\mathbf{X}_{i}-\bar{\mathbf{X}}_{h})^{\top}\right\},\quad\text{where}\;\bar{\mathbf{X}}_{h}=\dfrac{1}{c}\sum_{i\in S_{h}}\mathbf{X}_{i},$$

where $c$ denotes the slice size and we have assumed $n=cH$ to simplify the notation. Zhu et al. (2006) showed that $\widehat{\mathbf{T}}$ can be rewritten in the form

$$\widehat{\mathbf{T}}=\dfrac{1}{H(c-1)}\mathcal{X}^{\top}(\mathbf{I}_{H}\otimes\mathbf{P}_{c})\mathcal{X},\quad\text{where}\;\mathbf{P}_{c}=\mathbf{I}_{c}-\dfrac{1}{c}\mathbf{e}_{c}\mathbf{e}_{c}^{\top},$$

and $\mathbf{e}_{c}$ denotes the $c\times 1$ vector of ones. We can then estimate $\bm{\Lambda}^{*}$ by $\widehat{\bm{\Lambda}}^{*}=\widehat{\bm{\Sigma}}_{\mathbf{X}}-\widehat{\mathbf{T}}=n^{-1}\mathcal{X}^{\top}\mathbf{D}\mathcal{X}$, where $\mathbf{D}=\mathbf{I}_{n}-\{c/(c-1)\}(\mathbf{I}_{H}\otimes\mathbf{P}_{c})$. It follows that the eigenvector $\bm{\eta}^{*}_{k}$ can be estimated by $\hat{\bm{\eta}}^{*}_{k}$, the eigenvector associated with the $k$th largest eigenvalue $\hat{\lambda}^{*}_{k}$ of $\widehat{\bm{\Lambda}}^{*}$.
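To make the slicing construction concrete, the following R sketch (a minimal illustration under the simplifying assumption that $n=cH$ exactly; the function name sir_lambda is ours) reorders the data by the response, forms the matrix $\mathbf{D}$, and computes $\widehat{\bm{\Lambda}}^{*}=n^{-1}\mathcal{X}^{\top}\mathbf{D}\mathcal{X}$ together with its eigen-decomposition.

    # Sketch: slice-based estimate of Lambda* = Cov{E(X | y)}, assuming n = c * H exactly
    sir_lambda <- function(X, y, H = 10) {
      n <- nrow(X); c <- n / H
      stopifnot(c == round(c))
      ord <- order(y)                        # order observations by the response,
      X_ord <- X[ord, , drop = FALSE]        # so consecutive blocks of size c form the slices
      P_c <- diag(c) - matrix(1 / c, c, c)   # P_c = I_c - (1/c) e_c e_c^T
      D <- diag(n) - (c / (c - 1)) * kronecker(diag(H), P_c)
      Lambda_hat <- crossprod(X_ord, D %*% X_ord) / n   # n^{-1} X^T D X
      eig <- eigen(Lambda_hat, symmetric = TRUE)
      list(Lambda = Lambda_hat, values = eig$values, vectors = eig$vectors,
           D = D, order = ord)
    }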

For settings where $p$ is large and $\bm{\beta}_{k}$ is sparse, consider replacing the population relationship $\bm{\Sigma}_{\mathbf{X}}\bm{\beta}_{k}\propto\bm{\eta}^{*}_{k}$ by its sample counterpart. Then we have $\mathcal{X}^{\top}\mathcal{X}\bm{\beta}_{k}\propto\hat{\bm{\eta}}^{*}_{k}\propto(\hat{\lambda}^{*}_{k})^{-1}\mathcal{X}^{\top}\mathbf{D}\mathcal{X}\hat{\bm{\eta}}^{*}_{k}$, from which Lin et al. (2019) proposed the Lasso SIR estimator

$$\hat{\bm{\beta}}_{k}^{*}=\arg\min_{\bm{\beta}}\frac{1}{2n}\left\lVert\tilde{\bm{y}}^{*}_{k}-\mathcal{X}\bm{\beta}\right\rVert_{2}^{2}+\mu_{k}\|\bm{\beta}\|_{1}, \qquad (3)$$

with the pseudo-response vector $\tilde{\bm{y}}^{*}_{k}=(\hat{\lambda}^{*}_{k})^{-1}\mathbf{D}\mathcal{X}\hat{\bm{\eta}}^{*}_{k}$, and $\mu_{k}>0$ denoting a tuning parameter for the $k$th pseudo-response. The Lasso penalty can be straightforwardly replaced by a more general sparsity-inducing penalty $\sum_{j=1}^{p}P_{n}(|\beta_{j}|)$ (e.g., the adaptive Lasso penalty in Nghiem et al., 2023c). With an appropriate choice of $\mu_{k}$, the solution $\hat{\bm{\beta}}_{k}^{*}$ is sparse, facilitating variable selection for each dimension of the estimator. Variable selection for the estimated central subspace is obtained by taking the union of the estimated support sets across all dimensions $k=1,\ldots,d$.
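Putting the pieces together, a minimal sketch of the Lasso SIR estimator (3) might look as follows in R; it reuses the hypothetical sir_lambda() helper above and uses cv.glmnet to select each $\mu_{k}$ by ten-fold cross-validation.

    # Sketch of the one-stage Lasso SIR estimator in (3), reusing sir_lambda() from above
    library(glmnet)
    lasso_sir <- function(X, y, d = 1, H = 10) {
      fit <- sir_lambda(X, y, H = H)
      X_ord <- X[fit$order, , drop = FALSE]
      B_hat <- matrix(0, ncol(X), d)
      for (k in seq_len(d)) {
        # pseudo-response: y~_k = (lambda_k)^{-1} D X eta_k on the slice-ordered data
        y_tilde <- as.numeric(fit$D %*% X_ord %*% fit$vectors[, k]) / fit$values[k]
        cvfit <- cv.glmnet(X_ord, y_tilde, intercept = FALSE)
        B_hat[, k] <- as.numeric(coef(cvfit, s = "lambda.min"))[-1]  # drop intercept entry
      }
      B_hat   # each column estimates the direction of beta_k up to scale
    }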

2.2 Inconsistency of SIR under endogeneity

When endogeneity is present, the inverse regression curve $E(\mathbf{X}\mid y)$ is no longer guaranteed to lie in the subspace spanned by $\bm{\Sigma}_{\mathbf{X}}\mathbf{B}$, even when Condition 2.1 holds for the marginal distribution of $\mathbf{X}$. The following proposition establishes this result in the case where $(\mathbf{X}^{\top},\varepsilon)^{\top}$ follows a multivariate Gaussian distribution.

Proposition 1.

Assume $(\mathbf{X}^{\top},\varepsilon)^{\top}$ follows a $(p+1)$-dimensional Gaussian distribution. Let $\bm{\Sigma}_{\mathbf{X},\varepsilon}=\operatorname{Cov}(\mathbf{X},\varepsilon)$. Then the inverse regression curve $E(\mathbf{X}\mid y)$ is contained in the linear subspace spanned by $\bm{\Sigma}_{\mathbf{X}}\mathbf{B}$ if and only if $\bm{\Sigma}_{\mathbf{X}}^{-1}\bm{\Sigma}_{\mathbf{X},\varepsilon}\in\text{col}(\mathbf{B})$, where $\text{col}(\mathbf{B})$ denotes the column space of $\mathbf{B}$.

In Proposition 1, the vector $\bm{\Sigma}_{\mathbf{X}}^{-1}\bm{\Sigma}_{\mathbf{X},\varepsilon}$ is the coefficient vector of the best linear predictor of $\varepsilon$ from $\mathbf{X}$, so this proposition implies that even when endogeneity is present, the population foundation for SIR only holds when the random noise can be predicted from the linear combination $\mathbf{B}^{\top}\mathbf{X}$ itself. This condition is obviously restrictive, since it requires the same set of linear combinations to be used to predict both the response $y$ and the random noise $\varepsilon$.
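For instance, suppose $p=2$, $d=1$ and $\mathbf{B}=(1,0)^{\top}$, so the single index depends only on the first covariate, and take $\bm{\Sigma}_{\mathbf{X}}=\mathbf{I}_{2}$ with $\bm{\Sigma}_{\mathbf{X},\varepsilon}=(0,c)^{\top}$ for some $c\neq 0$, so the noise is correlated only with the second covariate. Then $\bm{\Sigma}_{\mathbf{X}}^{-1}\bm{\Sigma}_{\mathbf{X},\varepsilon}=(0,c)^{\top}\notin\text{col}(\mathbf{B})=\text{span}\{(1,0)^{\top}\}$, and Proposition 1 shows the inverse regression curve $E(\mathbf{X}\mid y)$ is not contained in the span of $\bm{\Sigma}_{\mathbf{X}}\mathbf{B}$.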

When the condition in Proposition 1 is not satisfied, the Lasso SIR estimator in (3) is generally not a consistent estimator of the column space of $\mathbf{B}$. The following proposition makes this precise in a simple setting where the number of covariates $p$ is fixed and the penalization is asymptotically negligible.

Proposition 2.

Let $\widehat{\mathbf{B}}^{*}$ denote the Lasso SIR estimator formulated in (3) with tuning parameters $\mu_{k}\to 0$ for all $k=1,\ldots,d$ as $n\to\infty$. Then $\mathcal{P}(\widehat{\mathbf{B}}^{*})\to\mathcal{P}(\mathbf{B}^{*})$, where $\mathbf{B}^{*}=\bm{\Sigma}_{\mathbf{X}}^{-1}\bm{\eta}^{*}$, with $\bm{\eta}^{*}$ the eigenvectors of $\bm{\Lambda}^{*}=\operatorname{Cov}\{E(\mathbf{X}\mid y)\}$. As a result, if the covariance vector $\bm{\Sigma}_{\mathbf{X},\varepsilon}\notin\bm{\Sigma}_{\mathbf{X}}\,\text{col}(\mathbf{B})$, then $\widehat{\mathbf{B}}^{*}$ is inconsistent for the column space of $\mathbf{B}$, and $\|\mathcal{P}(\widehat{\mathbf{B}}^{*})-\mathcal{P}(\mathbf{B})\|\neq o_{p}(1)$ as $n\to\infty$.

Propositions 1 and 2 imply that when the covariance vector $\bm{\Sigma}_{\mathbf{X},\varepsilon}\notin\bm{\Sigma}_{\mathbf{X}}\,\text{col}(\mathbf{B})$, endogeneity invalidates SIR as an estimator of the overall central subspace. Note this does not necessarily imply selection inconsistency, i.e., inconsistency in estimating the support set $S=\{1,\ldots,s\}$ corresponding to the non-zero elements of the diagonal of $\mathcal{P}(\mathbf{B})$. Selection consistency can still be achieved if the column space of $\mathbf{B}^{*}$ has the same support as that of $\mathbf{B}$; see Supplementary Material S1 for an empirical demonstration. On the other hand, when the number of covariates $p$ grows, it is unlikely that $\mathbf{B}^{*}$ will share the same support as $\mathbf{B}$. This motivates us to propose a new SIR method to handle endogeneity in high-dimensional settings.

3 A two-stage Lasso SIR estimator

To overcome the general inconsistency of the SIR estimator when endogeneity is present, we propose a two-stage approach for SDR and variable selection inspired by the classical two-stage least squares estimator (Hansen et al., 2008). Let $\widehat{\mathbf{X}}=E(\mathbf{X}\mid\mathbf{Z})=\bm{\Gamma}^{\top}\mathbf{Z}$ denote the conditional expectation of the covariates given the instruments. We first impose a new linearity condition to ensure the conditional expectation $E(\widehat{\mathbf{X}}\mid y)$ behaves similarly to $E(\mathbf{X}\mid y)$ when no endogeneity is present.

Condition 3.1.

The conditional expectation $E(\widehat{\mathbf{X}}\mid\mathbf{B}^{\top}\mathbf{X})$ is a linear function of $\mathbf{B}^{\top}\mathbf{X}$.

Despite being stronger than Condition 2.1, Condition 3.1 is satisfied when the linear instrumental variable model holds and the $(p+q)$-dimensional random vector $(\mathbf{X}^{\top},\mathbf{Z}^{\top})^{\top}$ follows an elliptical distribution. In our simulation study in Section 5, we show the proposed method still exhibits reasonable performance even when this condition is not satisfied, for example, when $\mathbf{Z}$ is binary. We then have the following result.

Lemma 1.

Assume Condition 3.1 holds. Then $E(\widehat{\mathbf{X}}\mid y)$ is contained in the linear subspace spanned by $\bm{\Sigma}_{\widehat{\mathbf{X}}}\mathbf{B}$.

Note the covariance matrix $\bm{\Sigma}_{\widehat{\mathbf{X}}}$ need not be of full rank and invertible; this occurs when the number of endogenous covariates $p$ is greater than the number of instruments $q$. Nevertheless, Lemma 1 motivates us to propose a two-stage Lasso SIR (abbreviated to 2SLSIR) estimator for the column space of $\mathbf{B}$, assuming its entries are sparse, based on the observed response vector $\mathbf{y}$, covariate matrix $\mathcal{X}$, and instrument matrix $\mathcal{Z}$. As we shall see, while theoretically we need to understand the behavior of its inverse, computationally $\bm{\Sigma}_{\widehat{\mathbf{X}}}$ does not have to be inverted to construct the 2SLSIR estimator.

The two stages of the 2SLSIR estimator are detailed as follows.

  • Stage 1: Identify and estimate the effects of the instruments $\mathbf{Z}$, and subsequently obtain the predicted values of the covariates $\mathbf{X}$. To accomplish this, we fit the multivariate instrumental variable model given by the second equation in (2), with the covariates $\mathbf{X}$ being the $p$-dimensional response and the instruments $\mathbf{Z}$ being the $q$-dimensional predictor. We assume each endogenous covariate depends only on a few instrumental variables (Belloni et al., 2012; Lin et al., 2015), thus motivating a sparse first-stage estimator $\hat{\bm{\Gamma}}$ whose columns can be computed in parallel, with the estimated $j$th column given by

    $$\hat{\bm{\gamma}}_{j}=\arg\min_{\bm{\gamma}}\dfrac{1}{2n}\|\mathbf{x}_{j}-\mathcal{Z}\bm{\gamma}\|_{2}^{2}+\mu_{1j}\|\bm{\gamma}\|_{1},\quad j=1,\ldots,p,$$

    where $\mu_{1j}>0$ denotes the tuning parameter for the $j$th covariate in the first stage. After the estimator $\hat{\bm{\Gamma}}=[\hat{\bm{\gamma}}_{1},\ldots,\hat{\bm{\gamma}}_{p}]$ is formed, we construct the first-stage prediction matrix $\widehat{\mathcal{X}}=\mathcal{Z}\hat{\bm{\Gamma}}$.

  • Stage 2: Compute a sparse estimator for the column space of $\mathbf{B}$ by applying the Lasso SIR estimator (3) with the response vector $\mathbf{y}$ and the first-stage prediction matrix $\widehat{\mathcal{X}}$. Specifically, let $\widehat{\bm{\Lambda}}=(nc)^{-1}\widehat{\mathcal{X}}^{\top}\mathbf{D}\widehat{\mathcal{X}}=(nc)^{-1}\hat{\bm{\Gamma}}^{\top}\mathcal{Z}^{\top}\mathbf{D}\mathcal{Z}\hat{\bm{\Gamma}}$, and define the pseudo-response vector $\tilde{\bm{y}}_{k}=(\hat{\lambda}_{k})^{-1}\mathbf{D}\widehat{\mathcal{X}}\hat{\bm{\eta}}_{k}$, where $\hat{\bm{\eta}}_{k}$ denotes the eigenvector associated with the $k$th largest eigenvalue $\hat{\lambda}_{k}$ of $\widehat{\bm{\Lambda}}$, and $\mathbf{D}$ is defined as in Section 2.1. Then the 2SLSIR estimator for the $k$th column of $\mathbf{B}$ is given by

    $$\hat{\bm{\beta}}_{k}=\arg\min_{\bm{\beta}}\dfrac{1}{2n}\|\tilde{\bm{y}}_{k}-\widehat{\mathcal{X}}\bm{\beta}\|_{2}^{2}+\mu_{2k}\|\bm{\beta}\|_{1},\quad k=1,\ldots,d, \qquad (4)$$

    where $\mu_{2k}>0$ is a tuning parameter for the $k$th pseudo-response in the second stage.

We highlight that, computationally, both stages can be implemented straightforwardly and in parallel using standard coordinate descent algorithms. In this paper, we apply the R package glmnet (Friedman et al., 2010) to implement the 2SLSIR estimator, and select the tuning parameters $\mu_{1j}$ and $\mu_{2k}$ via the default ten-fold cross-validation available in the software.
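For illustration, a minimal R sketch of the two stages is given below; it reuses the hypothetical sir_lambda() helper from Section 2.1, uses cv.glmnet for every Lasso fit, and ignores the $(nc)^{-1}$ scaling of $\widehat{\bm{\Lambda}}$ since only the direction of each $\hat{\bm{\beta}}_{k}$ is of interest.

    # Sketch of the 2SLSIR estimator, assuming sir_lambda() from Section 2.1 is available
    library(glmnet)
    two_stage_lasso_sir <- function(X, Z, y, d = 1, H = 10) {
      p <- ncol(X)
      # Stage 1: Lasso regression of each covariate on the instruments (columns fit separately)
      Gamma_hat <- sapply(seq_len(p), function(j) {
        cvfit <- cv.glmnet(Z, X[, j], intercept = FALSE)
        as.numeric(coef(cvfit, s = "lambda.min"))[-1]
      })                          # q x p matrix of first-stage coefficients
      X_hat <- Z %*% Gamma_hat    # first-stage prediction matrix
      # Stage 2: Lasso SIR applied to the response and the first-stage predictions
      fit <- sir_lambda(X_hat, y, H = H)
      X_hat_ord <- X_hat[fit$order, , drop = FALSE]
      B_hat <- matrix(0, p, d)
      for (k in seq_len(d)) {
        y_tilde <- as.numeric(fit$D %*% X_hat_ord %*% fit$vectors[, k]) / fit$values[k]
        cvfit <- cv.glmnet(X_hat_ord, y_tilde, intercept = FALSE)
        B_hat[, k] <- as.numeric(coef(cvfit, s = "lambda.min"))[-1]
      }
      B_hat
    }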

4 Theoretical analysis

We study the properties of the 2SLSIR estimator for the column space of $\mathbf{B}$ under the multiple index model (2), when the dimensionality of the covariates $p$ and instruments $q$ can grow exponentially with the sample size $n$. Compared with the theoretical results for the linear outcome model in Lin et al. (2015), say, the major challenge here lies in the nonlinear nature of the conditional expectation $E(\widehat{\mathbf{X}}\mid y)$, and subsequently the behavior of the eigenvectors of the covariance matrix $\bm{\Lambda}=\operatorname{Cov}\{E(\widehat{\mathbf{X}}\mid y)\}$, which requires more complex technical developments. We will use $C$, $C^{\prime}$, $C^{\prime\prime}$ to denote generic positive constants. Also, without loss of generality, we assume each column of the instrumental matrix $\mathcal{Z}$ is standardized such that $\mathbf{z}_{j}^{\top}\mathbf{z}_{j}=1$ for $j=1,\ldots,q$.

Following the notation in Wainwright (2019, Chapter 7), we say an $m\times m$ positive semi-definite matrix $\mathbf{A}$ satisfies a restricted eigenvalue (RE) condition over a set $S$ with parameters $(\kappa,\nu)$ if $\|\mathbf{A}^{1/2}\bm{\delta}\|_{2}^{2}\geq\kappa\|\bm{\delta}\|_{2}^{2}$ for all $\bm{\delta}\in\mathcal{C}(S,\nu)=\{\bm{\delta}\in\mathbb{R}^{m}:\|\bm{\delta}_{S^{c}}\|_{1}\leq\nu\|\bm{\delta}_{S}\|_{1}\}$. Also, for a vector $\mathbf{v}$, let $\text{supp}(\mathbf{v})=\{j:v_{j}\neq 0\}$ be the support set of $\mathbf{v}$. Let $\sigma_{\max}=\max_{1\leq j\leq p}\sigma_{j}$, where we recall $\sigma^{2}_{j}$ are the diagonal elements of the covariance matrix $\bm{\Sigma}$ corresponding to $(\mathbf{U}_{i}^{\top},\varepsilon_{i})^{\top}\in\mathbb{R}^{p+1}$. We assume each $\sigma_{j}^{2}$, and hence $\sigma_{\max}$, is constant with respect to the sample size.

The following technical conditions on the distribution of the instruments $\mathbf{Z}$ and the conditional expectation $E(\mathbf{X}\mid\mathbf{Z})=\bm{\Gamma}^{\top}\mathbf{Z}$ are required.

Condition 4.1.

The instrument random vector $\mathbf{Z}$ is sub-Gaussian with a $q\times q$ covariance matrix $\bm{\Sigma}_{\mathbf{Z}}$; that is, there exists a constant $C$ such that $\left\lVert\mathbf{Z}\right\rVert_{\psi_{2}}\leq C$, where $\left\lVert\mathbf{Z}\right\rVert_{\psi_{2}}=\sup_{\mathbf{v}\in\mathbb{R}^{q},\left\lVert\mathbf{v}\right\rVert=1}\left\lVert\mathbf{v}^{\top}\mathbf{Z}\right\rVert_{\psi_{2}}$ and $\left\lVert u\right\rVert_{\psi_{2}}=\sup_{m}m^{-1/2}\left(E|u|^{m}\right)^{1/m}$.

Condition 4.2.

There exists a constant $L>0$ such that $\|\bm{\Gamma}\|_{1}\leq L$.

Condition 4.3.

The quantities $p,q,r,s$ satisfy $rs\left\{(\log pq)/n\right\}^{1/2}=o(1)$.

The sub-Gaussian condition is often imposed in high-dimensional data analysis (e.g., Tan et al., 2018; Wainwright, 2019; Nghiem et al., 2023b). Note a direct consequence of Condition 4.1 is that the instrumental matrix $\mathcal{Z}$ satisfies the RE condition with parameters $(\kappa,3)$ with high probability when $n>Cr\log q$ for some finite $\kappa$ (Banerjee et al., 2014); we will condition on this event in our development below. Furthermore, letting $\widehat{\bm{\Sigma}}_{\mathbf{Z}}=\mathcal{Z}^{\top}\mathcal{Z}/n$ denote the sample covariance matrix of the instrument data, then

$$\left\|\widehat{\bm{\Sigma}}_{\mathbf{Z}}-\bm{\Sigma}_{\mathbf{Z}}\right\|_{\max}\leq C\sqrt{\frac{\log q}{n}}, \qquad (5)$$

with probability at least $1-\exp(-C^{\prime}\log q)$ (see Ravikumar et al., 2011; Tan et al., 2018, for analogous results). Condition 4.2 is also imposed in the theoretical analysis of the two-stage penalized least squares method in Lin et al. (2015), and is necessary to bound the estimation error from the first stage. Finally, in the theoretical analysis below we allow the number of covariates $p$ and instruments $q$, and the sparsity levels $s$ and $r$, to depend on the sample size $n$, subject to the rate of growth dictated by Condition 4.3. As a special case, if $r$ and $s$ are bounded then $p$ and $q$ can grow exponentially with $n$.

4.1 First stage results

In the first stage of the 2SLSIR estimation procedure, each column of the matrix $\bm{\Gamma}$ is estimated by a Lasso regression of $\mathbf{x}_{j}$ on the design matrix $\mathcal{Z}$. We obtain the following result as a straightforward application of Banerjee et al. (2014); all proofs are provided in Supplementary Material S2.

Theorem 1.

Assume Conditions 4.1 and 4.2 are satisfied, and set

$$\mu_{1j}=C\sigma_{j}\sqrt{\dfrac{\log pq}{n}},\quad j=1,\ldots,p. \qquad (6)$$

Then with probability at least $1-\exp(-C^{\prime}\log q)$, the columns of $\widehat{\bm{\Gamma}}$ satisfy $\left\lVert\hat{\bm{\gamma}}_{j}-\bm{\gamma}_{j}\right\rVert_{1}\leq C\sigma_{\max}r\{\log(pq)/n\}^{1/2}$. It follows that with probability at least $1-\exp(-C^{\prime}\log pq)$, we obtain

$$\left\|\widehat{\bm{\Gamma}}-\bm{\Gamma}\right\|_{1}\leq C\sigma_{\max}r\sqrt{\frac{\log pq}{n}}. \qquad (7)$$

Theorem 1 allows us to establish estimation consistency for the column space of $\mathbf{B}$ in the following section.

4.2 Second stage results

In the second stage, we compute the Lasso SIR estimator using the outcome $y$ and the fitted values $\widehat{\mathcal{X}}=\mathcal{Z}\widehat{\bm{\Gamma}}$ obtained from the first stage. We require the following additional technical assumptions.

Condition 4.4.

There exists a constant $c$ such that $\lambda_{d}\geq c>0$, where $\lambda_{d}$ is the $d$th largest eigenvalue of $\bm{\Lambda}=\operatorname{Cov}\{E(\widehat{\mathbf{X}}\mid y)\}$.

Condition 4.5.

The covariance matrix $\bm{\Sigma}_{\widehat{\mathbf{X}}}=\bm{\Gamma}^{\top}\bm{\Sigma}_{\mathbf{Z}}\bm{\Gamma}$ satisfies the RE condition with parameters $(\kappa_{1},3)$ for some constant $\kappa_{1}>0$, and $\|\bm{\Sigma}_{\widehat{\mathbf{X}}}\|_{\max}\leq C<\infty$.

Condition 4.6.

Let $y_{(1)},\ldots,y_{(n)}$ be the order statistics of the sample response data $y_{1},\ldots,y_{n}$. For $C>0$, let $\Pi_{n}(C)$ denote the collection of all $n$-point partitions $-C\leq y_{(1)}\leq\cdots\leq y_{(n)}\leq C$ of the interval $[-C,C]$. We assume the vector-valued function $\bm{m}(y)=E(\mathbf{Z}\mid y)$ has total variation of order $1/4$. That is, for any fixed $C>0$, we have

$$\lim_{n\rightarrow\infty}\frac{1}{n^{1/4}}\sup_{\Pi_{n}(C)}\sum_{i=2}^{n}\left\|\bm{m}\left\{y_{(i)}\right\}-\bm{m}\left\{y_{(i-1)}\right\}\right\|_{\infty}=0.$$

Condition 4.4 is analogous to the coverage condition imposed on $\bm{\Lambda}$ in the SIR literature (e.g., Li, 1991). Condition 4.5 ensures the loss function corresponding to the second stage of the proposed Lasso SIR estimator does not have too small a curvature, and is similar to a condition imposed by Lin et al. (2015) for the two-stage penalized least squares method. Finally, Condition 4.6 controls the smoothness of the conditional expectation $\bm{m}(y)=E(\mathbf{Z}\mid y)$. A similar condition is used in Tan et al. (2018) when no endogeneity is present, and similar conditions on the total variation have been introduced in much earlier work on SIR (e.g., Hsing and Carroll, 1992; Zhu et al., 2006).

We first consider the case $d=1$, i.e., the single index model version of (2). Here, Lemma 1 implies the covariance matrix $\bm{\Lambda}=\operatorname{Cov}\{E(\widehat{\mathbf{X}}\mid y)\}$ has rank $d=1$ and hence can be written in the form $\bm{\Lambda}=\lambda\bm{\eta}\bm{\eta}^{\top}$, where $\lambda>0$ denotes the non-zero eigenvalue of $\bm{\Lambda}$ and $\bm{\eta}$ is the corresponding eigenvector. Without loss of generality, assume $\|\bm{\beta}\|=1$. For convenience below, we omit the subscript $k$ and use $\hat{\lambda}$ and $\hat{\bm{\eta}}$ to denote the largest eigenvalue and its corresponding eigenvector of $\widehat{\bm{\Lambda}}$, respectively, noting both were defined above (4). We also write $\tilde{\mathbf{y}}_{1}=\tilde{\mathbf{y}}$. With this notation, the 2SLSIR estimator for the column space of $\bm{\beta}$ is given by

$$\hat{\bm{\beta}}=\operatorname*{arg\,min}_{\bm{\beta}\in\mathbb{R}^{p}}\left\{\dfrac{1}{2n}\|\tilde{\mathbf{y}}-\widehat{\mathcal{X}}\bm{\beta}\|_{2}^{2}+\mu_{2}\|\bm{\beta}\|_{1}\right\}. \qquad (8)$$

By Lemma 1, we have $\bm{\Sigma}_{\widehat{\mathbf{X}}}\bm{\beta}=\xi\bm{\eta}$, where $\xi=\left\lVert\bm{\Sigma}_{\widehat{\mathbf{X}}}\bm{\beta}\right\rVert_{2}>0$ is a constant. Letting $\tilde{\bm{\eta}}=\mathcal{P}(\bm{\eta})\hat{\bm{\eta}}=\bm{\eta}\bm{\eta}^{\top}\hat{\bm{\eta}}=\bm{\Sigma}_{\widehat{\mathbf{X}}}\tilde{\bm{\beta}}$, where $\tilde{\bm{\beta}}=\xi^{-1}\bm{\beta}\bm{\eta}^{\top}\hat{\bm{\eta}}$, it is straightforward to see $\tilde{\bm{\beta}}$ has the same direction as $\bm{\beta}$, i.e., $\mathcal{P}(\tilde{\bm{\beta}})=\mathcal{P}(\bm{\beta})$, when $\bm{\eta}^{\top}\hat{\bm{\eta}}\neq 0$. The following result guarantees $\bm{\eta}^{\top}\hat{\bm{\eta}}\neq 0$ holds with high probability.

Lemma 2.

Let $\hat{\rho}=\bm{\eta}^{\top}\hat{\bm{\eta}}$, and assume Conditions 4.1-4.6 are satisfied. Then with probability at least $1-\exp(-C^{\prime}\log q)-\exp(-C^{\prime\prime}\log pq)$, it holds that

$$\hat{\rho}^{2}\geq 1-\frac{CL}{\lambda}\left\{L\sqrt{\dfrac{\log q}{n}}+\sigma_{\max}r\sqrt{\dfrac{\log pq}{n}}\right\}.$$

We now proceed to condition on the event $\bm{\eta}^{\top}\hat{\bm{\eta}}\neq 0$ and on an RE property for the sample covariance matrix $\widehat{\bm{\Sigma}}_{\widehat{\mathbf{X}}}$.

Lemma 3.

Suppose we condition on $\hat{\rho}=\bm{\eta}^{\top}\hat{\bm{\eta}}\neq 0$ and on the sample covariance matrix $\widehat{\bm{\Sigma}}_{\widehat{\mathbf{X}}}$ satisfying an RE property with parameters $(\kappa_{2},3)$; see Proposition S2 in Supplementary Material S2, which shows the latter holds with high probability. If the tuning parameter satisfies $\mu_{2}\geq 2\left\|\hat{\bm{\eta}}-n^{-1}\widehat{\mathcal{X}}^{\top}\widehat{\mathcal{X}}\tilde{\bm{\beta}}\right\|_{\infty}$, then any solution $\hat{\bm{\beta}}$ defined in (8) satisfies

$$\left\lVert\hat{\bm{\beta}}-\tilde{\bm{\beta}}\right\rVert_{2}\leq\dfrac{12}{\kappa_{2}}\mu_{2}\sqrt{s}.$$

We emphasize that the theoretical analyses for our proposed 2SLSIR estimator do not require the covariance matrix $\bm{\Sigma}_{\widehat{\mathbf{X}}}$ to have full rank, meaning the results in this section still hold even when $q<p$. Following Lemma 3, we obtain the following main result on the consistency of the 2SLSIR estimator for the single index model.

Theorem 2.

Assume Conditions 4.1-4.6 are satisfied, and choose the first stage tuning parameters $\mu_{1j}$ as in (6). If we further select the second stage tuning parameter to satisfy

$$\mu_{2}=C\left(\sqrt{\dfrac{L^{*}}{\lambda}}+L^{*}\right)\;\text{with}\;L^{*}=L\left\{L\sqrt{\dfrac{\log q}{n}}+\sigma_{\max}r\sqrt{\dfrac{\log pq}{n}}\right\}, \qquad (9)$$

then with probability at least $1-\exp(-C^{\prime}\log q)-\exp(-C^{\prime\prime}\log pq)$, the 2SLSIR estimator $\hat{\bm{\beta}}$ satisfies

$$\left\|\mathcal{P}(\hat{\bm{\beta}})-\mathcal{P}(\bm{\beta})\right\|_{F}\leq C\frac{\sqrt{s}}{\kappa_{2}}\left(\sqrt{\dfrac{L^{*}}{\lambda}}+L^{*}\right).$$

Compared with the bound for the one-stage Lasso SIR estimator established by Lin et al. (2019), the theoretical bound for the 2SLSIR estimator has an additional factor of $\sigma_{\max}r\left\{\log(pq)/n\right\}^{1/2}$, reflecting the consequences of endogeneity. Moreover, compared with the result for the two-stage penalized least squares estimator in the linear outcome model of Lin et al. (2015), the bound in Lemma 2 involves an additional term depending upon the largest eigenvalue $\lambda$ of $\operatorname{Cov}\{E(\widehat{\mathbf{X}}\mid y)\}$, reflecting the consequences of non-linearity. Intuitively, the estimation error of $\hat{\bm{\beta}}$ depends on the estimation error of $\bm{\Sigma}_{\widehat{\mathbf{X}}}=\bm{\Gamma}^{\top}\bm{\Sigma}_{\mathbf{Z}}\bm{\Gamma}$, which in turn depends upon the estimation error of both the coefficients $\hat{\bm{\Gamma}}$ from the first stage and the covariance matrix $\bm{\Sigma}_{\mathbf{Z}}$. Both $p$ and $q$ enter this bound logarithmically.

Next, we establish the selection consistency of the 2SLSIR estimator. Let $\hat{S}=\{j:\hat{\beta}_{j}\neq 0\}$, and partition the matrix $\bm{\Sigma}_{\widehat{\mathbf{X}}}=\bm{\Gamma}^{\top}\bm{\Sigma}_{\mathbf{Z}}\bm{\Gamma}$ as

$$\bm{\Sigma}_{\widehat{\mathbf{X}}}=\begin{bmatrix}\mathbf{C}_{SS}&\mathbf{C}_{SS^{c}}\\ \mathbf{C}_{S^{c}S}&\mathbf{C}_{S^{c}S^{c}}\end{bmatrix}.$$

Let $\phi=\|(\mathbf{C}_{SS})^{-1}\|_{\infty}$, and note in this case we have $\xi=\left\lVert\bm{\Sigma}_{\widehat{\mathbf{X}}}\bm{\beta}\right\rVert_{2}=\left\lVert\mathbf{C}_{SS}\bm{\beta}_{S}\right\rVert_{2}$.

Condition 4.7.

There exists a constant $0<\alpha\leq 1$ such that $\|\mathbf{C}_{S^{c}S}(\mathbf{C}_{SS})^{-1}\|_{\infty}\leq 1-\alpha$.

Condition 4.7 is similar to the mutual incoherence condition for the ordinary Lasso estimator in the linear model without endogeneity (e.g., see Section 7.5.1 of Wainwright, 2019) and is standard in many high-dimensional regression contexts (Li et al., 2017).

Theorem 3.

Assume Conditions 4.1-4.7 are satisfied, and choose the first stage tuning parameters $\mu_{1j}$ as in (6). If there exists a constant $C$ such that

$$CsL\left\{L\sqrt{\dfrac{\log q}{n}}+\sigma_{\max}r\sqrt{\dfrac{\log pq}{n}}\right\}\leq\frac{\alpha}{\phi(4-\alpha)}, \qquad (10)$$

and the second stage tuning parameter is also selected to satisfy

$$\min_{j\in S}|\beta_{j}|>\frac{2\xi\hat{\rho}}{2-\alpha}\phi\mu_{2}, \qquad (11)$$

then with probability at least $1-\exp(-C^{\prime}\log q)-\exp(-C^{\prime\prime}\log pq)$, there exists a solution $\hat{\bm{\beta}}$ of (8) whose support satisfies $\hat{S}=S$.

Since the mutual incoherence condition 4.7 is imposed on the population quantity $\bm{\Sigma}_{\widehat{\mathbf{X}}}$, the additional assumption in (10) is necessary to ensure that this condition is satisfied for the sample covariance matrix $\widehat{\bm{\Sigma}}_{\widehat{\mathbf{X}}}$ with high probability. Furthermore, (11) ensures that the minimum non-zero value $\tilde{\beta}_{j}$ can be identified, and has an additional factor $\xi\hat{\rho}$ in the numerator, reflecting another consequence of non-linearity. Note the lower bound is small when $\alpha$ is small; that is, the lower the correlation between the important and non-important variables, the smaller the magnitude of non-zero components which can be identified.

4.3 Extension to multiple index models

Finally, we generalize the above results to multiple index models. For $k=1,\ldots,d$, let $\widehat{\mathcal{S}}_{k}=\{j:\hat{\beta}_{jk}\neq 0\}$ be the estimated support set for the vector $\hat{\bm{\beta}}_{k}$, and let $\widehat{\mathcal{S}}=\cup_{k}\widehat{\mathcal{S}}_{k}$. Recall that $\mathbf{B}$ is only identifiable up to a rotational transformation, which preserves only the zero rows of $\mathbf{B}$; hence selection in the multiple index model refers only to selecting the non-zero rows of $\mathbf{B}$. Equivalently, we wish the set $\widehat{\mathcal{S}}$ to consistently estimate the support set $\mathcal{S}$ of the diagonal of $\mathcal{P}(\mathbf{B})$.

Theorem 4.

Assume Conditions 4.1-4.6 are satisfied, and choose the first stage tuning parameters $\mu_{1j}$ as in (6). If the second stage tuning parameters $\mu_{2k}$ are also selected to satisfy

$$\mu_{2k}=C\left(\sqrt{\dfrac{L^{*}}{\lambda_{d}}}+L^{*}\right)\;\text{where}\;L^{*}=L\left\{L\sqrt{\dfrac{\log q}{n}}+\sigma_{\max}r\sqrt{\dfrac{\log pq}{n}}\right\},$$

for $k=1,\ldots,d$, then:

  (a)

    With probability at least $1-d\exp(-C^{\prime}\log q)-d\exp(-C^{\prime\prime}\log pq)$, the 2SLSIR estimator $\widehat{\mathbf{B}}=(\hat{\bm{\beta}}_{1},\ldots,\hat{\bm{\beta}}_{d})$, whose columns $\hat{\bm{\beta}}_{k}$ are defined by (4), satisfies

    $$\left\|\mathcal{P}(\widehat{\mathbf{B}})-\mathcal{P}(\mathbf{B})\right\|_{F}\leq C\frac{\sqrt{s}}{\kappa_{2}}\left(\sqrt{\dfrac{L^{*}}{\lambda_{d}}}+L^{*}\right).$$
  (b)

    If Condition 4.7 also holds and at least one tuning parameter $\mu_{2k}$ satisfies

    $$\min_{j\in\mathcal{S}_{k}}|\beta_{jk}|>\frac{2\xi_{k}\hat{\rho}_{k}}{2-\alpha}\phi\mu_{2k},$$

    where $\xi_{k}=\|\bm{\Sigma}_{\widehat{\mathbf{X}}}\bm{\beta}_{k}\|_{2}$ and $\hat{\rho}_{k}=\bm{\eta}_{k}^{\top}\hat{\bm{\eta}}_{k}$ for $k=1,\ldots,d$, then $\widehat{\mathcal{S}}=\mathcal{S}$ with the same probability.

Parts (a) and (b) of Theorem 4 for multiple index models essentially require Theorems 2 and 3, respectively, to hold for every dimension. As such, the upper bound on the estimation error of $\mathcal{P}(\widehat{\mathbf{B}})$ depends on the smallest non-zero eigenvalue $\lambda_{d}$ of $\bm{\Lambda}$, and requires all sample eigenvectors $\hat{\bm{\eta}}_{k}$ to not be orthogonal to their corresponding population eigenvectors $\bm{\eta}_{k}$ for $k=1,\ldots,d$. Under these conditions, we can establish bounds for the estimation and selection consistency for the endogenous multiple index model (2).

5 Simulation studies

We conducted a numerical study to evaluate the performance of the proposed 2SLSIR estimator for recovering the (sparsity patterns of the) central subspace. We compared this against two alternatives: the sparse Lasso SIR (LSIR) estimator of Lin et al. (2018, 2019), which ignores endogeneity and does not use the instruments, and, in the case of the single index model $(d=1)$, the two-stage Lasso (2SLasso) estimator of Lin et al. (2015), which assumes linearity and additive errors in the response model, i.e., $y_{i}=\mathbf{X}_{i}^{\top}\mathbf{B}+\varepsilon_{i}$.

We simulated data $\{(y_{i},\mathbf{X}_{i},\mathbf{Z}_{i});i=1,\ldots,n\}$ from a linear instrumental variable model $\mathbf{X}_{i}=\bm{\Gamma}^{\top}\mathbf{Z}_{i}+\mathbf{U}_{i}$ coupled with one of the following four models for the response:

$$(i):\;y_{i}=\exp(\mathbf{X}_{i}^{\top}\bm{\beta}_{1}+\varepsilon_{i}),\qquad(ii):\;y_{i}=(1/2)(\mathbf{X}_{i}^{\top}\bm{\beta}_{1}+\varepsilon_{i})^{3},$$
$$(iii):\;y_{i}=(\mathbf{X}_{i}^{\top}\bm{\beta}_{2})\exp(\mathbf{X}_{i}^{\top}\bm{\beta}_{1}/4+\varepsilon_{i}),\qquad(iv):\;y_{i}=\frac{\exp(\mathbf{X}_{i}^{\top}\bm{\beta}_{1}+\varepsilon_{i})}{3/2+\mathbf{X}_{i}^{\top}\bm{\beta}_{2}+\varepsilon_{i}}.$$

These single and double index models are similar to those previously used in the literature to assess SDR methods (e.g., Lin et al., 2019). The true sparse matrix $\mathbf{B}=\bm{\beta}_{1}$ in the single index models (i) and (ii), and $\mathbf{B}=[\bm{\beta}_{1},\bm{\beta}_{2}]$ in the multiple index models (iii) and (iv), was constructed by randomly selecting $s=5$ non-zero rows of $\mathbf{B}$ and generating each entry in these rows from a uniform distribution on the two disjoint intervals $[-1,-0.5]$ and $[0.5,1]$, which we denote as $U([-1,-0.5]\cup[0.5,1])$. All remaining entries were set to zero. We assumed the number of dimensions $d$ was known throughout the simulation study.

For the instrumental variable model, we followed the same generation mechanism as in Lin et al. (2015). Specifically, to represent different levels of instrument strength, we considered three choices for $\bm{\Gamma}$: i) all instruments are strong, where we sampled $r=5$ non-zero entries for each column from the uniform distribution $U([-b,-a]\cup[a,b])$ with $(a,b)=(0.75,1)$; ii) all instruments are weak, where we sampled the same number of non-zero entries but with $(a,b)=(0.5,0.75)$; iii) instruments are of mixed strength, such that for each column of $\bm{\Gamma}$ we sampled $r=50$ non-zero entries comprising five strong/weak instruments with $(a,b)=(0.5,1)$ and 45 very weak instruments with $(a,b)=(0.05,0.1)$. Next, we generated the random noise vector $(\mathbf{U}_{i}^{\top},\varepsilon_{i})^{\top}$ from a $(p+1)$-variate Gaussian distribution with zero mean vector and covariance matrix $\bm{\Sigma}=(\sigma_{ij})$ specified as follows. We set $\sigma_{ij}=(0.2)^{|i-j|}$ for $i,j=1,\ldots,p$, and $\sigma_{p+1,p+1}=1$. For the last column/row of $\bm{\Sigma}$ characterizing endogeneity, the non-zero entries were located at the indices of the non-zero rows of $\mathbf{B}$, with five of these entries randomly set to $0.3$. Finally, we generated the instruments $\mathbf{Z}_{i}$ from a $q$-variate Gaussian distribution with zero mean vector and two choices for the covariance matrix $\bm{\Sigma}_{\mathbf{Z}}$, namely the $q\times q$ identity matrix and the AR(1) covariance matrix with autocorrelation $0.5$. In Supplementary Material S4, we report additional simulation results where the instruments $\mathbf{Z}$ were generated from a Bernoulli distribution, with largely similar trends to those seen below.

We set $p=q=100$ and varied the sample size $n\in\{500,1000\}$, simulating $100$ datasets for each configuration. For each simulated dataset, all tuning parameters across the three methods (2SLSIR, LSIR, 2SLasso) were selected via ten-fold cross-validation. We assessed performance using the estimation error of the projection matrix, $\|\mathcal{P}(\mathbf{B})-\mathcal{P}(\widehat{\mathbf{B}})\|_{F}$, along with the F1 score for variable selection.
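For concreteness, the following R sketch (our own illustration rather than the exact simulation code) generates one dataset of size $n=500$ under model (i) with strong instruments and $\bm{\Sigma}_{\mathbf{Z}}=\mathbf{I}_{q}$, and shows how the projection-matrix error can be computed for any estimate; it uses MASS::mvrnorm for the Gaussian noise.

    # Sketch: one simulated dataset under model (i), strong instruments, Sigma_Z = I_q
    set.seed(1)
    n <- 500; p <- 100; q <- 100; s <- 5; r <- 5
    runif_disjoint <- function(m, a, b) sample(c(-1, 1), m, replace = TRUE) * runif(m, a, b)
    beta1 <- numeric(p)
    supp <- sample(p, s)
    beta1[supp] <- runif_disjoint(s, 0.5, 1)          # nonzero entries from U([-1,-0.5] U [0.5,1])
    Gamma <- matrix(0, q, p)
    for (j in seq_len(p)) Gamma[sample(q, r), j] <- runif_disjoint(r, 0.75, 1)  # strong instruments
    Sigma <- matrix(0, p + 1, p + 1)                  # covariance of (U_i, eps_i)
    Sigma[1:p, 1:p] <- 0.2^abs(outer(1:p, 1:p, "-"))
    Sigma[p + 1, p + 1] <- 1
    Sigma[p + 1, supp] <- Sigma[supp, p + 1] <- 0.3   # endogeneity in the last row/column
    Ueps <- MASS::mvrnorm(n, mu = rep(0, p + 1), Sigma = Sigma)
    Z <- matrix(rnorm(n * q), n, q)                   # instruments with Sigma_Z = I_q
    X <- Z %*% Gamma + Ueps[, 1:p]
    y <- exp(drop(X %*% beta1) + Ueps[, p + 1])       # response model (i)
    proj <- function(B) { B <- as.matrix(B); B %*% solve(crossprod(B)) %*% t(B) }
    # err <- norm(proj(beta1) - proj(b_hat), type = "F")   # for an estimate b_hat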

Across both the single and double index models (Tables 1 and 2, respectively), the proposed 2SLSIR estimator performed best overall in terms of both estimation error and variable selection. Indeed, it is the only estimator among the three whose estimation error consistently decreased when the sample size increased from $n=500$ to $n=1000$; this lends empirical support to the estimation consistency results established in Section 4. For the single index models (i) and (ii), the two-stage Lasso (2SLasso) estimator performed even worse than the single-stage Lasso SIR (LSIR) estimator in terms of estimation error, particularly when the degree of non-linearity in the response model was relatively large.

Table 1: Simulation results for the Lasso SIR (LSIR) estimator, the two-stage Lasso (2SLasso) estimator, and the proposed two-stage Lasso SIR (2SLSIR) estimator, in the single index models (i) and (ii). Standard errors are included in parentheses. The lowest estimation error and the highest F1 score are highlighted in each row.
$\bm{\Sigma}_{\mathbf{Z}}$  $n$  Model  $\bm{\Gamma}$  Error (LSIR, 2SLasso, 2SLSIR)  F1 (LSIR, 2SLasso, 2SLSIR)
Diag 500 (i) Strong 0.16 (0.05) 1.06 (0.22) 0.12 (0.05) 1.00 (0.01) 0.51 (0.18) 1.00 (0.00)
Weak 0.26 (0.07) 1.05 (0.21) 0.17 (0.08) 0.99 (0.03) 0.51 (0.19) 1.00 (0.01)
Mixed 0.16 (0.05) 1.06 (0.22) 0.13 (0.06) 1.00 (0.00) 0.54 (0.19) 1.00 (0.00)
(ii) Strong 0.16 (0.05) 0.28 (0.09) 0.12 (0.05) 1.00 (0.01) 0.96 (0.07) 1.00 (0.00)
Weak 0.26 (0.06) 0.33 (0.10) 0.19 (0.07) 0.99 (0.03) 0.95 (0.08) 1.00 (0.01)
Mixed 0.16 (0.06) 0.25 (0.08) 0.12 (0.05) 1.00 (0.00) 0.98 (0.04) 1.00 (0.00)
1000 (i) Strong 0.17 (0.05) 0.97 (0.22) 0.09 (0.04) 1.00 (0.00) 0.57 (0.18) 1.00 (0.00)
Weak 0.28 (0.06) 0.93 (0.23) 0.13 (0.05) 0.99 (0.03) 0.58 (0.20) 1.00 (0.00)
Mixed 0.20 (0.05) 0.96 (0.22) 0.09 (0.03) 1.00 (0.01) 0.58 (0.15) 1.00 (0.00)
(ii) Strong 0.18 (0.06) 0.18 (0.07) 0.09 (0.03) 1.00 (0.00) 1.00 (0.02) 1.00 (0.00)
Weak 0.29 (0.07) 0.22 (0.08) 0.12 (0.05) 0.99 (0.04) 0.99 (0.04) 1.00 (0.00)
Mixed 0.20 (0.06) 0.17 (0.05) 0.08 (0.03) 0.99 (0.04) 1.00 (0.01) 1.00 (0.00)
AR1 500 (i) Strong 0.18 (0.06) 1.09 (0.19) 0.13 (0.06) 1.00 (0.00) 0.53 (0.18) 1.00 (0.00)
Weak 0.29 (0.08) 1.07 (0.23) 0.19 (0.09) 0.99 (0.04) 0.56 (0.21) 1.00 (0.01)
Mixed 0.19 (0.06) 1.07 (0.18) 0.14 (0.06) 1.00 (0.01) 0.52 (0.14) 1.00 (0.00)
(ii) Strong 0.18 (0.06) 0.28 (0.09) 0.14 (0.05) 1.00 (0.01) 0.97 (0.06) 1.00 (0.00)
Weak 0.29 (0.06) 0.34 (0.11) 0.20 (0.08) 0.99 (0.04) 0.94 (0.08) 1.00 (0.02)
Mixed 0.18 (0.07) 0.27 (0.09) 0.13 (0.06) 1.00 (0.03) 0.97 (0.06) 1.00 (0.00)
1000 (i) Strong 0.21 (0.07) 1.03 (0.21) 0.10 (0.04) 1.00 (0.02) 0.55 (0.17) 1.00 (0.00)
Weak 0.33 (0.07) 0.99 (0.23) 0.14 (0.06) 0.97 (0.05) 0.57 (0.19) 1.00 (0.00)
Mixed 0.25 (0.06) 1.04 (0.21) 0.10 (0.04) 0.99 (0.02) 0.54 (0.17) 1.00 (0.00)
(ii) Strong 0.22 (0.06) 0.19 (0.06) 0.09 (0.04) 1.00 (0.03) 0.99 (0.03) 1.00 (0.00)
Weak 0.33 (0.07) 0.23 (0.07) 0.13 (0.06) 0.98 (0.06) 0.98 (0.04) 1.00 (0.00)
Mixed 0.25 (0.06) 0.18 (0.05) 0.09 (0.04) 0.99 (0.03) 1.00 (0.02) 1.00 (0.00)
Table 2: Simulation results for the Lasso SIR (LSIR) estimator and the proposed two-stage Lasso SIR (2SLSIR) estimator, in the double index models (iii) and (iv). Standard errors are included in parentheses. The lowest estimation error and the highest F1 score are highlighted in each row.
$\bm{\Sigma}_{\mathbf{Z}}$  $n$  Model  $\bm{\Gamma}$  Error (LSIR, 2SLSIR)  F1 (LSIR, 2SLSIR)
diag 500 (iii) Strong 0.81 (0.22) 0.67 (0.24) 0.64 (0.14) 0.76 (0.13)
Weak 1.00 (0.21) 0.98 (0.24) 0.58 (0.13) 0.67 (0.16)
Mixed 0.92 (0.22) 0.71 (0.24) 0.52 (0.14) 0.77 (0.15)
(iv) Strong 0.42 (0.16) 0.41 (0.17) 0.92 (0.11) 0.91 (0.11)
Weak 0.51 (0.19) 0.49 (0.21) 0.86 (0.15) 0.86 (0.13)
Mixed 0.44 (0.20) 0.43 (0.21) 0.89 (0.12) 0.90 (0.12)
1000 (iii) Strong 0.91 (0.21) 0.47 (0.21) 0.58 (0.14) 0.87 (0.14)
Weak 1.04 (0.20) 0.68 (0.27) 0.55 (0.11) 0.78 (0.14)
Mixed 0.99 (0.16) 0.50 (0.20) 0.47 (0.07) 0.86 (0.12)
(iv) Strong 0.36 (0.17) 0.29 (0.17) 0.95 (0.12) 0.97 (0.08)
Weak 0.46 (0.17) 0.34 (0.19) 0.88 (0.13) 0.94 (0.11)
Mixed 0.41 (0.20) 0.27 (0.15) 0.89 (0.15) 0.97 (0.08)
AR1 500 (iii) Strong 0.88 (0.25) 0.73 (0.28) 0.63 (0.15) 0.76 (0.15)
Weak 1.03 (0.22) 1.02 (0.30) 0.57 (0.13) 0.70 (0.17)
Mixed 0.99 (0.21) 0.76 (0.27) 0.51 (0.15) 0.75 (0.15)
(iv) Strong 0.46 (0.18) 0.44 (0.20) 0.89 (0.13) 0.89 (0.12)
Weak 0.55 (0.19) 0.53 (0.22) 0.84 (0.15) 0.83 (0.14)
Mixed 0.47 (0.22) 0.45 (0.24) 0.86 (0.14) 0.88 (0.13)
1000 (iii) Strong 0.99 (0.20) 0.52 (0.20) 0.54 (0.13) 0.84 (0.12)
Weak 1.09 (0.20) 0.70 (0.26) 0.52 (0.10) 0.78 (0.16)
Mixed 1.06 (0.16) 0.52 (0.24) 0.45 (0.07) 0.85 (0.14)
(iv) Strong 0.39 (0.16) 0.30 (0.18) 0.93 (0.10) 0.97 (0.08)
Weak 0.52 (0.17) 0.36 (0.18) 0.85 (0.12) 0.93 (0.11)
Mixed 0.46 (0.21) 0.31 (0.20) 0.86 (0.14) 0.96 (0.08)

6 Real data applications

We demonstrate our proposed method on two examples with endogeneity, one due to omitted variables and the other due to measurement error.

6.1 Omitted variables: Mouse obesity data

We applied the proposed 2SLSIR estimator to perform joint SDR and variable selection on a mouse obesity dataset from Wang et al., (2006), and compared the results with those of Lin et al., (2015), who analyzed the same data using their two-stage Lasso (2SLasso) estimator, along with the single-stage Lasso SIR (LSIR) estimator. Briefly, the study comprised 334 mice fed a high-fat diet, which were genotyped at 1327 single nucleotide polymorphisms (SNPs) and had gene expression profiled on microarrays that include probes for 23,388 genes. We followed the same pre-processing steps as in Lin et al., (2015), and obtained a complete dataset with $p=2825$ genes and $q=1250$ SNPs on $n=287$ mice. We treated the SNPs as instruments ($\mathbf{Z}$), and focused on identifying important genes ($\mathbf{X}$) related to body weight ($y$). As mentioned in Section 1, the relationship between gene expression covariates and the outcome is likely confounded by omitted variables, experimental conditions, and environmental perturbations, so endogeneity due to omitted variable bias is expected to be a critical issue in this application. As justified in Lin et al., (2015), genetic variants (represented by SNPs) are valid instruments because the mice in this study come from the same genetic cross.

To select the number of structural dimensions $d$, we used $k$-means clustering of the (normalized) eigenvalues of the estimated matrix $\widehat{\bm{\Lambda}}$, similar to Lin et al., (2019), with 2SLSIR and 2SLasso both selecting $d=1$. Next, for each of the three estimators, we applied stability selection (Meinshausen and Bühlmann, 2010) to compute the empirical selection probability of each gene over 100 random subsamples of size $\lfloor n/2\rfloor$ at each value of the tuning parameter. The upper bound for the per-family error rate was set to one, and the selection cutoff was set to $0.75$, meaning a gene is regarded as stable in a given run when its empirical selection probability is at least $0.75$. Figure 1 presents the selection path for each gene as a function of the second-stage tuning parameter $\mu_{2}$. Using a threshold of $0.5$ on the maximum selection probability across all tuning parameters, LSIR selected seven genes, 2SLasso selected nine genes, while the proposed 2SLSIR selected six genes. Only one gene was chosen with a very high selection probability by all three methods, while one additional gene was commonly selected by 2SLSIR and 2SLasso.
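
To make the procedure concrete, the following is a minimal sketch of this stability selection computation (NumPy; the routine `fit_and_select` is a hypothetical stand-in for the second-stage fit of any of the three estimators, returning the indices of covariates with non-zero coefficients at a given tuning parameter):

```python
import numpy as np

def stability_selection(X, y, fit_and_select, mu_grid, n_subsamples=100, seed=1):
    """Empirical selection probabilities over random subsamples of size n/2.

    fit_and_select(X_sub, y_sub, mu) is a user-supplied routine returning the
    indices of covariates with non-zero coefficients at tuning parameter mu.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    probs = np.zeros((len(mu_grid), p))
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=n // 2, replace=False)
        for j, mu in enumerate(mu_grid):
            selected = fit_and_select(X[idx], y[idx], mu)
            probs[j, selected] += 1.0 / n_subsamples
    # report covariates whose maximum selection probability over the grid is >= 0.5
    stable = np.where(probs.max(axis=0) >= 0.5)[0]
    return probs, stable
```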

Refer to caption
Figure 1: Stability selection paths for the three methods applied to the mouse obesity data, based on 100 random subsamples. The gene selected by all three methods with a maximum selection probability of at least 0.5 is displayed in red, while the additional gene selected only by the two-stage methods is displayed in blue.

To further interpret the biological consequences of the selected genes, we conducted a Gene Ontology (GO) enrichment analysis for the genes chosen by each method. By annotating these genes for their participation in the gene ontology, this analysis identifies which biological functions, known as GO terms, are over-represented among the selected genes (Bleazard et al., 2015). Over-representation is measured by the $p$-value, adjusted for multiple comparisons, of the null hypothesis that the gene set is randomly associated with the GO term, so a smaller $p$-value indicates a more significantly enriched biological function. The gene ontology is divided into three main groups: biological processes (BP), molecular function (MF), and cellular component (CC). Figure 2 indicates that all three methods capture a strong association between obesity and lipid-related processes, including lipid transport, metabolism, and regulation. The one-stage LSIR focuses on genes regulating lipid transport and metabolic processes (e.g., lipid transporter activity, monocarboxylic acid transport), linking body weight to lipid dynamics at the cellular level. By contrast, both the 2SLasso and 2SLSIR estimators emphasize plasma lipoprotein metabolism (e.g., plasma lipoprotein particle clearance, protein-lipid complex). However, while the former puts more emphasis on the interplay between lipid metabolism and immune regulation (e.g., immunoglobulin binding), the latter focuses more on lipid catabolism and extracellular lipid transport (e.g., extracellular matrix structural constituent, lipase activity).

Refer to caption
Figure 2: Top ten GO terms (biological processes, cellular components and molecular functions) obtained from the enrichment analysis for the genes selected by LassoSIR (left), two-stage Lasso (middle), and two-stage LassoSIR (right).

6.2 Measurement error: NHANES data

The National Health and Nutrition Examination Survey (NHANES) aims to assess the health and nutritional status of individuals in the United States and to track how this status evolves over time (Centers for Disease Control and Prevention, 2018). During the 2009-2010 survey period, participants were interviewed about their demographic background and nutritional habits, and undertook a series of health examinations. To assess participants' nutritional habits, dietary data were collected using two 24-hour recall interviews, in which participants self-reported the amounts of a set of food items consumed during the 24 hours before each interview. Based on these recalls, daily aggregated consumption of water, food energy, and other nutrition components, such as total fat and total sugar, was computed. In this section, we revisit the data application in Nghiem et al., (2023a), seeking to understand the relationship between the total cholesterol level ($y$) of $n=3343$ women and their long-term intakes of $p=42$ nutrition components; in particular, we ask which linear combinations of these components carry the full regression information, and whether these combinations are sparse.

Let $\mathbf{W}_{ij}$ denote the vector of the 42 nutritional covariates recorded at the $j$th interview, $j=1,2$. Dietary recall data are well known to be subject to measurement error, so we assume $\mathbf{W}_{ij}=\mathbf{X}_{i}+\mathbf{E}_{ij}$, where $\mathbf{X}_{i}$ denotes the $i$th participant's latent long-term nutritional intake, and $\mathbf{E}_{ij}$ denotes an independent additive measurement error following a multivariate Gaussian distribution with zero mean and covariance $\bm{\Sigma}_{u}$. We used an instrumental variable approach to account for this measurement error. Specifically, following Carroll et al., (2006, Section 6.5.1), we treated the first replicate $\mathbf{W}_{i1}=\mathbf{W}_{i}$ as a surrogate for $\mathbf{X}_{i}$ and the second replicate $\mathbf{W}_{i2}=\mathbf{Z}_{i}$ as an instrument; note that in this case the number of instruments $q=p$. We then applied the linear instrumental variable model $\mathbf{X}_{i}=\bm{\Gamma}^{\top}\mathbf{Z}_{i}+\mathbf{U}_{i}$, where $\bm{\Gamma}$ is allowed to be a $q\times p$ column-sparse matrix. It follows that we can write $\mathbf{W}_{i}=\bm{\Gamma}^{\top}\mathbf{Z}_{i}+\tilde{\mathbf{U}}_{i}$ with $\tilde{\mathbf{U}}_{i}=\mathbf{U}_{i}+\mathbf{E}_{i1}$. We then applied the proposed 2SLSIR estimator with the true covariate matrix $\mathcal{X}$ replaced by the $n\times p$ matrix $\mathcal{W}$, selecting $d=1$ as the structural dimension following Nghiem et al., (2023a).
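
For illustration, the first stage above amounts to a column-wise Lasso regression of the surrogate matrix on the instrument matrix; a minimal sketch using scikit-learn is given below, assuming centred inputs (the function name and the single tuning parameter `alpha`, standing in for the $\mu_{1,j}$, are ours for illustration; in practice each column's tuning parameter would be chosen by cross-validation):

```python
import numpy as np
from sklearn.linear_model import Lasso

def first_stage(W, Z, alpha=0.1):
    """Column-wise Lasso of the surrogate W (n x p) on the instruments Z (n x q).

    Returns the estimated q x p coefficient matrix Gamma_hat and the fitted
    values X_hat = Z @ Gamma_hat used in the second-stage Lasso SIR.
    """
    q, p = Z.shape[1], W.shape[1]
    Gamma_hat = np.zeros((q, p))
    for j in range(p):
        fit = Lasso(alpha=alpha, fit_intercept=False).fit(Z, W[:, j])
        Gamma_hat[:, j] = fit.coef_
    X_hat = Z @ Gamma_hat
    return Gamma_hat, X_hat
```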

We first examined the estimated coefficient matrix $\widehat{\bm{\Gamma}}$ from the first stage of the 2SLSIR estimation procedure. Figure S1 in the Supplementary Materials shows that the largest element in each column of $\widehat{\bm{\Gamma}}$ is typically the diagonal element, so the most important instrument for modelling the consumption of a dietary component in the first interview is, unsurprisingly, the consumption of the same component in the second interview. Overall, each column of $\widehat{\bm{\Gamma}}$ is relatively sparse, with the number of non-zero coefficients per column ranging from 3 to 20.

Next, we applied the same stability selection approach as in Section 6.1 to the 2SLSIR estimator and compared the resulting paths with those of the LSIR and 2SLasso estimators. With a threshold of 0.5 on the maximum selection probability across all tuning parameters, LSIR and 2SLasso each selected five nutritional covariates, while the proposed 2SLSIR selected six. Figure 3 presents the selection path for each nutrition component as a function of the tuning parameter $\mu_{2}$. The two components selected by all three methods were vitamin D and vitamin K, while 2SLSIR and 2SLasso additionally selected folic acid in common. Vitamin D had the largest estimated coefficient in both the 2SLSIR and 2SLasso estimators, and the second largest in the LSIR estimator.

Refer to caption
Figure 3: Stability selection paths for the three methods applied to the NHANES data, based on 100 random subsamples. Components selected by all three methods with a maximum selection probability of at least 0.5 are displayed in red, while the additional component selected only by the two-stage methods is displayed in blue.

7 Discussion

In this paper, we propose an innovative two-stage Lasso SIR method to perform simultaneous sufficient dimension reduction and variable selection when endogeneity is present. We establish theoretical bounds for the estimation and selection consistency of the proposed estimator, allowing the dimensionality of both the covariates and the instruments to grow exponentially with the sample size. Our simulation studies demonstrate that the 2SLSIR estimator exhibits superior performance compared with the one-stage Lasso SIR method, which ignores endogeneity, and the two-stage Lasso method, which ignores non-linearity in the outcome model. We apply the proposed method to two data applications where endogeneity is well known to be present, one in nutrition and one in statistical genomics.

Future research could extend the proposed two-stage Lasso SIR method to accommodate a possibly non-linear relationship between the covariates and the instruments in the first stage. Different sparsity structures on the coefficient matrix $\bm{\Gamma}$, such as a low-rank structure, can also be considered. One could also examine endogeneity in other sufficient dimension reduction methods, such as inverse regression models (Cook and Forzani, 2008; Bura et al., 2022). Finally, the methods can be adapted to more complex settings, such as clustered or longitudinal data (Nghiem and Hui, 2024).

Acknowledgments

This work was supported by the Australian Research Council Discovery Project Grant DP230101908. We thank Shila Ghazanfar for useful discussions regarding the mouse obesity data application.

References

  • Angrist and Krueger, (2001) Angrist, J. D. and Krueger, A. B. (2001). Instrumental variables and the search for identification: From supply and demand to natural experiments. Journal of Economic Perspectives, 15(4):69–85.
  • Banerjee et al., (2014) Banerjee, A., Chen, S., Fazayeli, F., and Sivakumar, V. (2014). Estimation with norm regularization. In Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N., and Weinberger, K., editors, Advances in Neural Information Processing Systems, volume 27. Curran Associates, Inc.
  • Belloni et al., (2012) Belloni, A., Chen, D., Chernozhukov, V., and Hansen, C. (2012). Sparse models and methods for optimal instruments with an application to eminent domain. Econometrica, 80(6):2369–2429.
  • Bleazard et al., (2015) Bleazard, T., Lamb, J. A., and Griffiths-Jones, S. (2015). Bias in microRNA functional enrichment analysis. Bioinformatics, 31(10):1592–1598.
  • Bura et al., (2022) Bura, E., Forzani, L., Arancibia, R. G., Llop, P., and Tomassi, D. (2022). Sufficient reductions in regression with mixed predictors. Journal of Machine Learning Research, 23(102):1–47.
  • Caner, (2009) Caner, M. (2009). Lasso-type GMM estimator. Econometric Theory, 25(1):270–290.
  • Caner et al., (2018) Caner, M., Han, X., and Lee, Y. (2018). Adaptive elastic net GMM estimation with many invalid moment conditions: Simultaneous model and moment selection. Journal of Business & Economic Statistics, 36(1):24–46.
  • Card, (1999) Card, D. (1999). The causal effect of education on earnings. Handbook of Labor Economics, 3:1801–1863.
  • Carroll et al., (2006) Carroll, R. J., Ruppert, D., and Stefanski, L. A. (2006). Measurement error in nonlinear models, volume 105. CRC Press.
  • Centers for Disease Control and Prevention, (2018) Centers for Disease Control and Prevention (2018). National health and nutrition examination survey data. https://wwwn.cdc.gov/nchs/nhanes/continuousnhanes/default.aspx?BeginYear=2009.
  • Chernozhukov et al., (2015) Chernozhukov, V., Hansen, C., and Spindler, M. (2015). Post-selection and post-regularization inference in linear models with many controls and instruments. American Economic Review, 105(5):486–490.
  • Cook and Forzani, (2008) Cook, R. D. and Forzani, L. (2008). Principal fitted components for dimension reduction in regression. Statistical Science, 23(4):485 – 501.
  • Dai et al., (2024) Dai, S., Wu, P., and Yu, Z. (2024). New forest-based approaches for sufficient dimension reduction. Statistics and Computing, 34(5):176.
  • Fan and Liao, (2014) Fan, J. and Liao, Y. (2014). Endogeneity in high dimensions. Annals of Statistics, 42(3):872.
  • Friedman et al., (2010) Friedman, J., Hastie, T., and Tibshirani, R. (2010). Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33:1–22.
  • Gao et al., (2020) Gao, J., Kim, N., and Saart, P. W. (2020). On endogeneity and shape invariance in extended partially linear single index models. Econometric Reviews, 39(4):415–435.
  • Hansen et al., (2008) Hansen, C., Hausman, J., and Newey, W. (2008). Estimation with many instrumental variables. Journal of Business & Economic Statistics, 26(4):398–422.
  • Horn and Johnson, (2012) Horn, R. A. and Johnson, C. R. (2012). Matrix analysis. Cambridge University Press.
  • Hsing and Carroll, (1992) Hsing, T. and Carroll, R. J. (1992). An asymptotic theory for sliced inverse regression. The Annals of Statistics, pages 1040–1061.
  • Hu et al., (2015) Hu, Y., Shiu, J.-L., and Woutersen, T. (2015). Identification and estimation of single-index models with measurement error and endogeneity. The Econometrics Journal, 18(3):347–362.
  • Li, (2018) Li, B. (2018). Sufficient dimension reduction: Methods and applications with R. CRC Press, Florida.
  • Li et al., (2017) Li, J., Cheng, K., Wang, S., Morstatter, F., Trevino, R. P., Tang, J., and Liu, H. (2017). Feature selection: A data perspective. ACM computing surveys (CSUR), 50(6):1–45.
  • Li, (1991) Li, K.-C. (1991). Sliced inverse regression for dimension reduction. Journal of the American Statistical Association, 86:316–327.
  • Lin et al., (2018) Lin, Q., Zhao, Z., and Liu, J. S. (2018). On consistency and sparsity for sliced inverse regression in high dimensions. The Annals of Statistics, 46:580–610.
  • Lin et al., (2019) Lin, Q., Zhao, Z., and Liu, J. S. (2019). Sparse sliced inverse regression via lasso. Journal of the American Statistical Association, 114(528):1726–1739.
  • Lin et al., (2015) Lin, W., Feng, R., and Li, H. (2015). Regularization methods for high-dimensional instrumental variables regression with an application to genetical genomics. Journal of the American Statistical Association, 110(509):270–288.
  • Ma and Zhu, (2013) Ma, Y. and Zhu, L. (2013). A review on dimension reduction. International Statistical Review, 81:134–150.
  • Meinshausen and Bühlmann, (2010) Meinshausen, N. and Bühlmann, P. (2010). Stability selection. Journal of the Royal Statistical Society Series B: Statistical Methodology, 72(4):417–473.
  • Nghiem and Hui, (2024) Nghiem, L. H. and Hui, F. K. C. (2024). Random effects model-based sufficient dimension reduction for independent clustered data. Journal of the American Statistical Association, Accepted for publication.
  • (30) Nghiem, L. H., Hui, F. K. C., Muller, S., and Welsh, A. H. (2023a). Likelihood-based surrogate dimension reduction. Statistics and Computing, 34(1):51.
  • (31) Nghiem, L. H., Hui, F. K. C., Müller, S., and Welsh, A. H. (2023b). Screening methods for linear errors-in-variables models in high dimensions. Biometrics, 79(2):926–939.
  • (32) Nghiem, L. H., Hui, F. K. C., Müller, S., and Welsh, A. H. (2023c). Sparse sliced inverse regression via cholesky matrix penalization. Statistica Sinica, 33:1–33.
  • Ravikumar et al., (2011) Ravikumar, P., Wainwright, M. J., Raskutti, G., and Yu, B. (2011). High-dimensional covariance estimation by minimizing 1\ell_{1}-penalized log-determinant divergence. Electronic Journal of Statistics, 5.
  • Sheng and Yin, (2016) Sheng, W. and Yin, X. (2016). Sufficient dimension reduction via distance covariance. Journal of Computational and Graphical Statistics, 25(1):91–104.
  • Tan et al., (2018) Tan, K. M., Wang, Z., Zhang, T., Liu, H., and Cook, R. D. (2018). A convex formulation for high-dimensional sparse sliced inverse regression. Biometrika, 105(4):769–782.
  • Wainwright, (2019) Wainwright, M. J. (2019). High-dimensional statistics: A non-asymptotic viewpoint, volume 48. Cambridge University Press.
  • Wang et al., (2006) Wang, S., Yehya, N., Schadt, E. E., Wang, H., Drake, T. A., and Lusis, A. J. (2006). Genetic and genomic analysis of a fat mass trait with complex inheritance reveals marked sex specificity. PLoS Genetics, 2(2):e15.
  • Windmeijer et al., (2019) Windmeijer, F., Farbmacher, H., Davies, N., and Davey Smith, G. (2019). On the use of the lasso for instrumental variables estimation with some invalid instruments. Journal of the American Statistical Association, 114(527):1339–1350.
  • Xia et al., (2002) Xia, Y., Tong, H., Li, W., and Zhu, L.-X. (2002). An adaptive estimation of dimension reduction space. Journal of the Royal Statistical Society Series B, 64:363–410.
  • Xu et al., (2022) Xu, K., Zhu, L., and Fan, J. (2022). Distributed sufficient dimension reduction for heterogeneous massive data. Statistica Sinica, 32:2455–2476.
  • Yang, (1977) Yang, S.-S. (1977). General distribution theory of the concomitants of order statistics. The Annals of Statistics, pages 996–1002.
  • Zhou and Zhu, (2016) Zhou, J. and Zhu, L. (2016). Principal minimax support vector machine for sufficient dimension reduction with contaminated data. Computational Statistics & Data Analysis, 94:33–48.
  • Zhu et al., (2006) Zhu, L., Miao, B., and Peng, H. (2006). On sliced inverse regression with high-dimensional covariates. Journal of the American Statistical Association, 101(474):630–643.
  • Zou, (2006) Zou, H. (2006). The adaptive lasso and its oracle properties. Journal of the American Statistical Association, 101:1418–1429.

Supplementary Materials

S1 Performance of Lasso SIR under endogeneity

In this section, we consider a simple setting to demonstrate the performance of Lasso SIR when endogeneity is present. Suppose the $(p+1)$-dimensional random vector $(\mathbf{X},\varepsilon)^{\top}$ has a multivariate Gaussian distribution with zero mean, marginal covariance matrix $\bm{\Sigma}_{X}=\mathbf{I}_{p}$ for $\mathbf{X}$, and marginal variance $\operatorname{Var}(\varepsilon)=1$. In this case, it follows that

[𝐗ε]Np+1(𝟎,[𝐈p𝚺𝐗,ε𝚺𝐗,ε1]),\begin{bmatrix}\mathbf{X}\\ \varepsilon\\ \end{bmatrix}\sim N_{p+1}\left(\bm{0},\begin{bmatrix}\mathbf{I}_{p}&\bm{\Sigma}_{\mathbf{X},\varepsilon}\\ \bm{\Sigma}_{\mathbf{X},\varepsilon}^{\top}&1\end{bmatrix}\right), (12)

where $\bm{\Sigma}_{\mathbf{X},\varepsilon}$ is the vector of correlations between $\mathbf{X}$ and $\varepsilon$. Furthermore, assume the outcome $y$ is generated from the linear model $y=\mathbf{X}^{\top}\bm{\beta}+\varepsilon$. Then we have

[𝐗y]Np+1(𝟎,[𝐈p𝜷+𝚺𝐗,ε𝜷+𝚺𝐗,ε𝜷𝜷+2𝚺𝐗,ε𝜷+1]),\begin{bmatrix}\mathbf{X}\\ y\\ \end{bmatrix}\sim N_{p+1}\left(\bm{0},\begin{bmatrix}\mathbf{I}_{p}&\bm{\beta}+\bm{\Sigma}_{\mathbf{X},\varepsilon}\\ \bm{\beta}^{\top}+\bm{\Sigma}_{\mathbf{X},\varepsilon}^{\top}&\bm{\beta}^{\top}\bm{\beta}+2\bm{\Sigma}_{\mathbf{X},\varepsilon}\bm{\beta}+1\end{bmatrix}\right),

and, as a result,

E(𝐗y)=1𝜷𝜷+2𝚺𝐗,ε𝜷+1(𝜷+𝚺𝐗,ε)y,E(\mathbf{X}\mid y)=\dfrac{1}{\bm{\beta}^{\top}\bm{\beta}+2\bm{\Sigma}_{\mathbf{X},\varepsilon}\bm{\beta}+1}(\bm{\beta}+\bm{\Sigma}_{\mathbf{X},\varepsilon})y,

so the leading eigenvector $\bm{\eta}^{*}$ of $\bm{\Lambda}^{*}=\operatorname{\text{Cov}}\left\{E(\mathbf{X}\mid y)\right\}$ is the normalized vector $\bm{\beta}+\bm{\Sigma}_{\mathbf{X},\varepsilon}$.
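To see this explicitly, note that $\operatorname{Var}(y)=\bm{\beta}^{\top}\bm{\beta}+2\bm{\Sigma}_{\mathbf{X},\varepsilon}^{\top}\bm{\beta}+1$, so

\bm{\Lambda}^{*}=\operatorname{Cov}\left\{E(\mathbf{X}\mid y)\right\}=\dfrac{\operatorname{Var}(y)}{\left(\bm{\beta}^{\top}\bm{\beta}+2\bm{\Sigma}_{\mathbf{X},\varepsilon}^{\top}\bm{\beta}+1\right)^{2}}(\bm{\beta}+\bm{\Sigma}_{\mathbf{X},\varepsilon})(\bm{\beta}+\bm{\Sigma}_{\mathbf{X},\varepsilon})^{\top}=\dfrac{(\bm{\beta}+\bm{\Sigma}_{\mathbf{X},\varepsilon})(\bm{\beta}+\bm{\Sigma}_{\mathbf{X},\varepsilon})^{\top}}{\bm{\beta}^{\top}\bm{\beta}+2\bm{\Sigma}_{\mathbf{X},\varepsilon}^{\top}\bm{\beta}+1},

a rank-one matrix whose leading eigenvector is $(\bm{\beta}+\bm{\Sigma}_{\mathbf{X},\varepsilon})/\lVert\bm{\beta}+\bm{\Sigma}_{\mathbf{X},\varepsilon}\rVert_{2}$.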

To demonstrate the performance of Lasso SIR, we conducted a small simulation study in which we set $p=4$ and $\bm{\beta}=(1,1,0,0)^{\top}$, so the targets of estimation for Lasso SIR are the column space of $\bm{\beta}$ and its support set $\mathcal{S}=\{1,2\}$. We generated $(\mathbf{X}_{i},\varepsilon_{i})$ from the multivariate Gaussian distribution (12), with three scenarios for the correlation vector $\bm{\Sigma}_{\mathbf{X},\varepsilon}$: (I) $\bm{\Sigma}_{\mathbf{X},\varepsilon}=(0.5,0.5,0,0)^{\top}$, (II) $\bm{\Sigma}_{\mathbf{X},\varepsilon}=(0.5,-0.5,0,0)^{\top}$, and (III) $\bm{\Sigma}_{\mathbf{X},\varepsilon}=(-0.5,-0.5,0.5,0.5)^{\top}$. We then simulated the outcome from either the linear model $y=\mathbf{X}^{\top}\bm{\beta}+\varepsilon$ or the non-linear model $y=\sin(\mathbf{X}^{\top}\bm{\beta}+\varepsilon)$. We varied $n\in\{100,500,1000\}$, and for each simulation configuration we generated 1000 samples.
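
A minimal sketch of this data-generating process is given below (NumPy; the function and scenario labels are ours for illustration):

```python
import numpy as np

def generate_data(n, link="linear", scenario="I", seed=0):
    """Draw (X, eps) jointly Gaussian as in (12) and form y from the chosen link."""
    rng = np.random.default_rng(seed)
    p = 4
    beta = np.array([1.0, 1.0, 0.0, 0.0])
    sigma_xe = {"I": np.array([0.5, 0.5, 0.0, 0.0]),
                "II": np.array([0.5, -0.5, 0.0, 0.0]),
                "III": np.array([-0.5, -0.5, 0.5, 0.5])}[scenario]
    # joint covariance of (X, eps): identity for X, unit variance for eps,
    # and cross-covariance sigma_xe
    Sigma = np.eye(p + 1)
    Sigma[:p, p] = sigma_xe
    Sigma[p, :p] = sigma_xe
    draws = rng.multivariate_normal(np.zeros(p + 1), Sigma, size=n)
    X, eps = draws[:, :p], draws[:, p]
    index = X @ beta + eps
    y = index if link == "linear" else np.sin(index)
    return X, y, beta
```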

For each generated sample, we computed the Lasso SIR estimator $\hat{\bm{\beta}}^{*}$ with the tuning parameter $\mu$ selected by 10-fold cross-validation, as implemented in the LassoSIR package. We report the error in estimating the column space spanned by $\bm{\beta}$, as measured by $\lVert\mathcal{P}(\bm{\beta})-\mathcal{P}(\hat{\bm{\beta}}^{*})\rVert_{F}$. For variable selection performance, we computed the area under the receiver operating characteristic curve (AUC); an AUC of one indicates a clear separation between the estimated coefficients of the unimportant variables and those of the important variables in $\hat{\bm{\beta}}^{*}$.
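
The two performance measures can be computed as in the following sketch (scikit-learn's `roc_auc_score` is used for the AUC, treating the absolute estimated coefficients as scores for recovering the true support; the helper names are ours):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def projection(b):
    """Orthogonal projection matrix onto the column space of b."""
    b = np.atleast_2d(b).T if b.ndim == 1 else b
    return b @ np.linalg.pinv(b.T @ b) @ b.T

def evaluate(beta_true, beta_hat):
    error = np.linalg.norm(projection(beta_true) - projection(beta_hat), "fro")
    auc = roc_auc_score((beta_true != 0).astype(int), np.abs(beta_hat))
    return error, auc
```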

Table S3 shows that the performance of the Lasso SIR estimator aligns with our theoretical results. In the first case, the correlation vector $\bm{\Sigma}_{\mathbf{X},\varepsilon}$ is proportional to the true $\bm{\beta}$, so Lasso SIR maintains both estimation and variable selection consistency, as evidenced by an estimation error that decreases as $n$ increases and an AUC of one. In the second case, the Lasso SIR estimator consistently estimates the space spanned by $\bm{\eta}^{*}=(1.5,0.5,0,0)^{\top}/\sqrt{2.5}$, which has a different direction but the same support as the true $\bm{\beta}$. Hence, Lasso SIR is not estimation consistent, but still attains an AUC of one. In the last case, the Lasso SIR estimator consistently estimates the space spanned by $\bm{\eta}^{*}=(0.5,0.5,0.5,0.5)^{\top}$, which has both a different direction and a different support from the true $\bm{\beta}$. As a result, the Lasso SIR estimator is neither estimation nor selection consistent.

Table S3: Performance of the Lasso SIR estimator under endogeneity, as measured by the estimation error 𝒫(𝜷)𝒫(𝜷^)F\left\lVert\mathcal{P}(\bm{\beta})-\mathcal{P}(\hat{\bm{\beta}}^{*})\right\rVert_{F} and variable selection performance (AUC).
𝚺𝐗,ε\bm{\Sigma}_{\mathbf{X},\varepsilon} nn Linear link Sine link
Error AUC Error AUC
(I) 100 0.095 1.000 0.876 0.853
500 0.039 1.000 0.402 0.992
1000 0.028 1.000 0.258 1.000
(II) 100 0.659 1.000 0.768 0.907
500 0.643 1.000 0.674 0.995
1000 0.639 1.000 0.661 1.000
(III) 100 1.001 0.758 1.003 0.739
500 1.000 0.744 0.999 0.756
1000 1.000 0.728 0.999 0.747

S2 Proofs

S2.1 Proof of Proposition 1

Without loss of generality, assume $(\mathbf{X},\varepsilon)$ has mean zero, marginal covariance matrix $\bm{\Sigma}_{X}=\mathbf{I}_{p}$ for $\mathbf{X}$, and marginal variance $\operatorname{Var}(\varepsilon)=1$. In this case, it follows that

[𝐗ε]Np+1(𝟎,[𝐈p𝚺𝐗,ε𝚺𝐗,ε1]).\begin{bmatrix}\mathbf{X}\\ \varepsilon\\ \end{bmatrix}\sim N_{p+1}\left(\bm{0},\begin{bmatrix}\mathbf{I}_{p}&\bm{\Sigma}_{\mathbf{X},\varepsilon}\\ \bm{\Sigma}_{\mathbf{X},\varepsilon}^{\top}&1\end{bmatrix}\right).

By the law of iterated expectation, we have

E(𝐗y)=E{E(𝐗𝐁𝐗,ε,y)y}=E{E(𝐗𝐁𝐗,ε)y}.E(\mathbf{X}\mid y)=E\left\{E(\mathbf{X}\mid\mathbf{B}^{\top}\mathbf{X},\varepsilon,y)\mid y\right\}=E\left\{E(\mathbf{X}\mid\mathbf{B}^{\top}\mathbf{X},\varepsilon)\mid y\right\}.

Next, by the property of the multivariate Gaussian distribution we obtain

E(𝐗𝐁𝐗,ε)=[𝐁𝚺𝐗,ε][𝐁𝐁𝐁𝚺𝐗,ε𝚺𝐗,ε𝐁1]1[𝐁𝐗ε]\displaystyle E(\mathbf{X}\mid\mathbf{B}^{\top}\mathbf{X},\varepsilon)=\begin{bmatrix}\mathbf{B}&\bm{\Sigma}_{\mathbf{X},\varepsilon}\end{bmatrix}\begin{bmatrix}\mathbf{B}^{\top}\mathbf{B}&\mathbf{B}^{\top}\bm{\Sigma}_{\mathbf{X},\varepsilon}\\ \bm{\Sigma}_{\mathbf{X},\varepsilon}^{\top}\mathbf{B}&1\end{bmatrix}^{-1}\begin{bmatrix}\mathbf{B}^{\top}\mathbf{X}\\ \varepsilon\end{bmatrix}
\displaystyle=\begin{bmatrix}\mathbf{B}&\bm{\Sigma}_{\mathbf{X},\varepsilon}\end{bmatrix}\begin{bmatrix}(\mathbf{B}^{\top}\mathbf{B}-\mathbf{B}^{\top}\bm{\Sigma}_{\mathbf{X},\varepsilon}\bm{\Sigma}_{\mathbf{X},\varepsilon}^{\top}\mathbf{B})^{-1}&-(\mathbf{B}^{\top}\mathbf{B}-\mathbf{B}^{\top}\bm{\Sigma}_{\mathbf{X},\varepsilon}\bm{\Sigma}_{\mathbf{X},\varepsilon}^{\top}\mathbf{B})^{-1}\mathbf{B}^{\top}\bm{\Sigma}_{\mathbf{X},\varepsilon}\\ -\bm{\Sigma}_{\mathbf{X},\varepsilon}^{\top}\mathbf{B}(\mathbf{B}^{\top}\mathbf{B}-\mathbf{B}^{\top}\bm{\Sigma}_{\mathbf{X},\varepsilon}\bm{\Sigma}_{\mathbf{X},\varepsilon}^{\top}\mathbf{B})^{-1}&1+\bm{\Sigma}_{\mathbf{X},\varepsilon}^{\top}\mathbf{B}(\mathbf{B}^{\top}\mathbf{B}-\mathbf{B}^{\top}\bm{\Sigma}_{\mathbf{X},\varepsilon}\bm{\Sigma}_{\mathbf{X},\varepsilon}^{\top}\mathbf{B})^{-1}\mathbf{B}^{\top}\bm{\Sigma}_{\mathbf{X},\varepsilon}\end{bmatrix}\begin{bmatrix}\mathbf{B}^{\top}\mathbf{X}\\ \varepsilon\end{bmatrix}

which lies in the subspace spanned by $\mathbf{B}$ if and only if $\bm{\Sigma}_{\mathbf{X},\varepsilon}$ can be written in the form $\bm{\Sigma}_{\mathbf{X},\varepsilon}=\mathbf{B}\mathbf{A}$ for some vector $\mathbf{A}\in\mathbb{R}^{d}$, i.e., $\bm{\Sigma}_{\mathbf{X},\varepsilon}\in\text{col}(\mathbf{B})$.

S2.2 Proof of Proposition 2

Let 𝜼~k=𝜼k𝜼k𝜼^k\tilde{\bm{\eta}}_{k}^{*}=\bm{\eta}_{k}^{*}\bm{\eta}_{k}^{*\top}\hat{\bm{\eta}}^{*}_{k}, so each element of 𝜼~k\tilde{\bm{\eta}}_{k}^{*} always has the same sign as that of 𝜼^k\hat{\bm{\eta}}_{k}^{*}. Let 𝜷~k=𝚺𝐗1𝜼~k=𝜷k𝜼k𝜼^k\tilde{\bm{\beta}}_{k}^{*}=\bm{\Sigma}_{\mathbf{X}}^{-1}\tilde{\bm{\eta}}_{k}^{*}=\bm{\beta}_{k}^{*}\bm{\eta}_{k}^{*\top}\hat{\bm{\eta}}^{*}_{k}. Since pp is fixed, the cross product 𝜼k𝜼^k0{\bm{\eta}}_{k}^{*\top}\hat{\bm{\eta}}_{k}^{*}\neq 0 with probability one for all k=1,,dk=1,\ldots,d. As a result, we have 𝒫(𝐁~)=𝒫(𝐁)\mathcal{P}(\widetilde{\mathbf{B}}^{*})=\mathcal{P}({\mathbf{B}}^{*}) with probability one.

From the definition of the Lasso-SIR estimator and writing 𝜷^k=𝜷~k+𝜹k\hat{\bm{\beta}}_{k}^{*}=\tilde{\bm{\beta}}_{k}^{*}+\bm{\delta}_{k}^{*}, we have

\frac{1}{n}\left\lVert\tilde{\bm{y}}^{*}_{k}-\mathcal{X}\tilde{\bm{\beta}}^{*}_{k}-\mathcal{X}\bm{\delta}_{k}^{*}\right\rVert^{2}+\mu_{k}\|\tilde{\bm{\beta}}^{*}_{k}+\bm{\delta}_{k}^{*}\|_{1}\leq\frac{1}{n}\left\lVert\tilde{\bm{y}}^{*}_{k}-\mathcal{X}\tilde{\bm{\beta}}^{*}_{k}\right\rVert^{2}+\mu_{k}\|\tilde{\bm{\beta}}^{*}_{k}\|_{1}.

Equivalently, we have

𝜹k(𝒳𝒳n)𝜹k2𝜹k(𝒳𝒚~kn𝒳𝒳n𝜷~k)μk(𝜷~k1𝜷~k+𝜹k1).\bm{\delta}_{k}^{*^{\top}}\left(\frac{\mathcal{X}^{\top}\mathcal{X}}{n}\right)\bm{\delta}_{k}^{*}-2\bm{\delta}_{k}^{*^{\top}}\left(\dfrac{\mathcal{X}^{\top}\tilde{\bm{y}}^{*}_{k}}{n}-\dfrac{\mathcal{X}^{\top}\mathcal{X}}{n}\tilde{\bm{\beta}}_{k}^{*}\right)\leq\mu_{k}\left(\|\tilde{\bm{\beta}}^{*}_{k}\|_{1}-\|\tilde{\bm{\beta}}^{*}_{k}+\bm{\delta}_{k}^{*}\|_{1}\right).

By construction, $n^{-1}\mathcal{X}^{\top}\tilde{\bm{y}}_{k}^{*}=\hat{\bm{\eta}}_{k}^{*}$, the eigenvector associated with the $k$th largest eigenvalue of $\hat{\bm{\Lambda}}^{*}$. By the law of large numbers, $\hat{\bm{\Lambda}}^{*}\xrightarrow{p}\bm{\Lambda}^{*}$, so $\lVert\hat{\bm{\eta}}_{k}^{*}-\tilde{\bm{\eta}}_{k}^{*}\rVert=o_{p}(1)$. Furthermore, as $n\to\infty$, we have $n^{-1}\mathcal{X}^{\top}\mathcal{X}\xrightarrow{p}\bm{\Sigma}_{\mathbf{X}}$, so $n^{-1}\mathcal{X}^{\top}\mathcal{X}\tilde{\bm{\beta}}_{k}^{*}\xrightarrow{p}\tilde{\bm{\eta}}_{k}^{*}$. Therefore, as $n\to\infty$, the vector $\bm{\delta}_{k}^{*}$ satisfies

\bm{\delta}_{k}^{*\top}\bm{\Sigma}_{\mathbf{X}}\bm{\delta}_{k}^{*}\leq\mu_{k}\left(\|\tilde{\bm{\beta}}^{*}_{k}\|_{1}-\|\tilde{\bm{\beta}}^{*}_{k}+\bm{\delta}_{k}^{*}\|_{1}\right)+2\bm{\delta}_{k}^{*\top}\left(\hat{\bm{\eta}}_{k}^{*}-\tilde{\bm{\eta}}_{k}^{*}\right).

Since $\mu_{k}=o(1)$ and $\lVert\hat{\bm{\eta}}_{k}^{*}-\tilde{\bm{\eta}}_{k}^{*}\rVert=o_{p}(1)$, we have $\lVert\bm{\delta}_{k}^{*}\rVert=o_{p}(1)$ for each $k$, and hence $\mathcal{P}(\widehat{\mathbf{B}}^{*})\xrightarrow{p}\mathcal{P}(\widetilde{\mathbf{B}}^{*})=\mathcal{P}({\mathbf{B}}^{*})$ as $n\to\infty$.

S2.3 Proof of Lemma 1

The instrument $\mathbf{Z}$ is independent of both $\varepsilon$ and $\mathbf{U}$, so the conditional expectation $\widehat{\mathbf{X}}=E(\mathbf{X}\mid\mathbf{Z})=\bm{\Gamma}^{\top}\mathbf{Z}$ is also independent of both $\varepsilon$ and $\mathbf{U}$. By the properties of conditional expectation, we have

E(𝐗^y)\displaystyle E(\widehat{\mathbf{X}}\mid y) =E[E(𝐗^𝐁𝐗,y)y]\displaystyle=E\left[E\left(\widehat{\mathbf{X}}\mid\mathbf{B}^{\top}\mathbf{X},y\right)\mid y\right]
=E[E(𝐗^𝐁𝐗,ε)y]\displaystyle=E\left[E\left(\widehat{\mathbf{X}}\mid\mathbf{B}^{\top}\mathbf{X},\varepsilon\right)\mid y\right] (since y is a deterministic function of 𝐗𝐁 and ε)\displaystyle\hfill\text{(since $y$ is a deterministic function of $\mathbf{X}^{\top}\mathbf{B}$ and $\varepsilon$})
=E[E(𝐗^𝐁𝐗)y]\displaystyle=E\left[E\left(\widehat{\mathbf{X}}\mid\mathbf{B}^{\top}\mathbf{X}\right)\mid y\right] (since ε\varepsilon and 𝐗^\widehat{\mathbf{X}} are independent)
=(i)Cov(𝐗^,𝐗)𝐁[Var(𝐁𝐗)]1𝐁E(𝐗y),\displaystyle\stackrel{{\scriptstyle(i)}}{{=}}\operatorname{\text{Cov}}(\widehat{\mathbf{X}},\mathbf{X})\mathbf{B}\left[\operatorname{\text{Var}}(\mathbf{B}^{\top}\mathbf{X})\right]^{-1}\mathbf{B}^{\top}E(\mathbf{X}\mid y),

where step $(i)$ follows from Condition 3.1 in the main text and Lemma 1.01 of Li, (2018). The last equation shows that $E(\widehat{\mathbf{X}}\mid y)$ is contained in the space spanned by $\bm{\Sigma}_{\widehat{\mathbf{X}}}\mathbf{B}$, and the required result follows.

S2.4 Proof of Lemma 2

Recall that $\bm{\eta}$ and $\hat{\bm{\eta}}$ are the eigenvectors associated with the largest eigenvalues of $\bm{\Lambda}$ and $\widehat{\bm{\Lambda}}$, respectively. Hence, $\hat{\bm{\eta}}$ maximizes $\textbf{v}^{\top}\hat{\bm{\Lambda}}\textbf{v}$ subject to $\lVert\textbf{v}\rVert=1$, and

𝜼𝚲^𝜼𝜼^𝚲^𝜼^.\bm{\eta}^{\top}\hat{\bm{\Lambda}}\bm{\eta}\leq\hat{\bm{\eta}}^{\top}\hat{\bm{\Lambda}}\hat{\bm{\eta}}.

Equivalently, we have $\langle\hat{\bm{\Lambda}},\bm{\eta}\bm{\eta}^{\top}\rangle\leq\langle\hat{\bm{\Lambda}},\hat{\bm{\eta}}\hat{\bm{\eta}}^{\top}\rangle$, and hence $\langle\hat{\bm{\Lambda}},\bm{\eta}\bm{\eta}^{\top}-\hat{\bm{\eta}}\hat{\bm{\eta}}^{\top}\rangle\leq 0$, where for two matrices $\mathbf{A}_{1}$ and $\mathbf{A}_{2}$, we let $\langle\mathbf{A}_{1},\mathbf{A}_{2}\rangle=\text{trace}(\mathbf{A}_{1}^{\top}\mathbf{A}_{2})$ denote their inner product. Writing $\hat{\bm{\Lambda}}=\bm{\Lambda}+(\hat{\bm{\Lambda}}-\bm{\Lambda})$, we obtain

𝚲,𝜼𝜼𝜼^𝜼^𝚲^𝚲,𝜼𝜼𝜼^𝜼^.\langle\bm{\Lambda},\bm{\eta}\bm{\eta}^{\top}-\hat{\bm{\eta}}\hat{\bm{\eta}}^{\top}\rangle\leq\langle\hat{\bm{\Lambda}}-\bm{\Lambda},\bm{\eta}\bm{\eta}^{\top}-\hat{\bm{\eta}}\hat{\bm{\eta}}^{\top}\rangle. (13)

For the left-hand side of (13), since $\bm{\Lambda}=\lambda\bm{\eta}\bm{\eta}^{\top}$ and $\bm{\eta}^{\top}\bm{\eta}=1$, we have $\langle\bm{\Lambda},\bm{\eta}\bm{\eta}^{\top}\rangle=\lambda$ and $\langle\bm{\Lambda},\hat{\bm{\eta}}\hat{\bm{\eta}}^{\top}\rangle=\lambda(\bm{\eta}^{\top}\hat{\bm{\eta}})^{2}=\lambda\hat{\rho}^{2}$, so that $\langle\bm{\Lambda},\bm{\eta}\bm{\eta}^{\top}-\hat{\bm{\eta}}\hat{\bm{\eta}}^{\top}\rangle=\lambda(1-\hat{\rho}^{2})$.

Turning to the right hand side of (13), applying Hölder’s inequality we obtain

𝚲^𝚲,𝜼𝜼𝜼^𝜼^𝚲^𝚲max𝜼𝜼𝜼^𝜼^12𝚲^𝚲max,\langle\hat{\bm{\Lambda}}-\bm{\Lambda},\bm{\eta}\bm{\eta}^{\top}-\hat{\bm{\eta}}\hat{\bm{\eta}}^{\top}\rangle\leq\left\lVert\hat{\bm{\Lambda}}-\bm{\Lambda}\right\rVert_{\max}\left\lVert\bm{\eta}\bm{\eta}^{\top}-\hat{\bm{\eta}}\hat{\bm{\eta}}^{\top}\right\rVert_{1}\leq\sqrt{2}\left\lVert\hat{\bm{\Lambda}}-\bm{\Lambda}\right\rVert_{\max},

where for any matrix 𝐀\mathbf{A}, we let 𝐀max\left\lVert\mathbf{A}\right\rVert_{\max} denote its elementwise maximum norm, and the second inequality above follows from 𝜼𝜼𝜼^𝜼^1𝜼22+𝜼^22=2\left\lVert\bm{\eta}\bm{\eta}^{\top}-\hat{\bm{\eta}}\hat{\bm{\eta}}^{\top}\right\rVert_{1}\leq\sqrt{\left\lVert\bm{\eta}\right\rVert_{2}^{2}+\left\lVert\hat{\bm{\eta}}\right\rVert_{2}^{2}}=\sqrt{2}. Section S2.5 studies the properties of 𝚲^\widehat{\bm{\Lambda}}; in particular, it establishes that there exist positive constants CC and CC^{\prime} such that

𝚲^𝚲maxCL{Llogqn+σmaxrlogpqn}\left\lVert\hat{\bm{\Lambda}}-\bm{\Lambda}\right\rVert_{\max}\leq CL\left\{L\sqrt{\dfrac{\log q}{n}}+\sigma_{\max}r\sqrt{\dfrac{\log pq}{n}}\right\}

with probability at least 1exp(Clogq)exp(C′′logpq)1-\exp(-C^{\prime}\log q)-\exp(-C^{\prime\prime}\log pq). With the same probability then, we obtain

ρ^21CLλ{Llogqn+σmaxrlogpqn},\hat{\rho}^{2}\geq 1-\frac{CL}{\lambda}\left\{L\sqrt{\dfrac{\log q}{n}}+\sigma_{\max}r\sqrt{\dfrac{\log pq}{n}}\right\},

as required.

S2.5 Properties of 𝚲^\hat{\bm{\Lambda}}

The developments below follow closely the work of Tan et al., (2018). Note

𝚲=Cov{E(𝐗^y)}=𝚪Cov{E(𝐙y)}𝚪=𝚪(𝚺𝐙𝚵)𝚪,\bm{\Lambda}=\operatorname{\text{Cov}}\left\{E\left(\widehat{\mathbf{X}}\mid y\right)\right\}=\bm{\Gamma}^{\top}\operatorname{\text{Cov}}\left\{E\left({\mathbf{Z}}\mid y\right)\right\}\bm{\Gamma}=\bm{\Gamma}^{\top}\left(\bm{\Sigma}_{\mathbf{Z}}-{\mathbf{\Xi}}\right)\bm{\Gamma},

where 𝚵=E{Cov(𝐙y)}\mathbf{\Xi}=E\left\{\operatorname{\text{Cov}}\left({\mathbf{Z}}\mid y\right)\right\}. Similarly, we can write 𝚲^=𝚪^(𝚺^𝐙𝚵^)𝚪^\widehat{\bm{\Lambda}}=\widehat{\bm{\Gamma}}^{\top}(\widehat{\bm{\Sigma}}_{\mathbf{Z}}-\widehat{\mathbf{\Xi}})\widehat{\bm{\Gamma}}, with

𝚵^=1H(c1)𝒵(𝐈H𝐏c)𝒵,\widehat{\mathbf{\Xi}}=\dfrac{1}{H(c-1)}\mathcal{Z}^{\top}(\mathbf{I}_{H}\otimes\mathbf{P}_{c})\mathcal{Z}, (14)

with $\mathbf{P}_{c}=\mathbf{I}_{c}-c^{-1}\mathbf{e}_{c}\mathbf{e}_{c}^{\top}$ the $c\times c$ centering matrix, where $\mathbf{e}_{c}=(1,\ldots,1)^{\top}$ is the $c\times 1$ vector of ones and $\mathbf{I}_{c}$ denotes the $c\times c$ identity matrix. Define $\mathbf{Q}_{c}=\mathbf{P}_{c}-\left(1-c^{-1}\right)\mathbf{I}_{c}=c^{-1}\left(\mathbf{I}_{c}-\mathbf{e}_{c}\mathbf{e}_{c}^{\top}\right)$, and let $\bm{\Omega}=\bm{\Sigma}_{\mathbf{Z}}-\mathbf{\Xi}$ and $\widehat{\bm{\Omega}}=\widehat{\bm{\Sigma}}_{\mathbf{Z}}-\widehat{\mathbf{\Xi}}$.
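
For concreteness, a small NumPy sketch of the estimator in (14) is given below, assuming the rows of $\mathcal{Z}$ have been sorted by the response and grouped into $H$ slices of $c$ observations each (so $n=Hc$); the Kronecker product is formed directly for clarity rather than efficiency:

```python
import numpy as np

def xi_hat(Z_sorted, H):
    """Average within-slice covariance estimator in (14); Z_sorted is n x q with rows ordered by y."""
    n, q = Z_sorted.shape
    c = n // H
    P_c = np.eye(c) - np.ones((c, c)) / c      # within-slice centering matrix
    middle = np.kron(np.eye(H), P_c)           # I_H kron P_c
    Z = Z_sorted[:H * c]
    return Z.T @ middle @ Z / (H * (c - 1))
```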

We now bound the elementwise maximum norm $\left\lVert\widehat{\bm{\Lambda}}-\bm{\Lambda}\right\rVert_{\max}$. By the triangle inequality, for any $j,k=1,\ldots,p$, we have

|𝜸j𝛀𝜸k𝜸^j𝛀^𝜸^k|\displaystyle|\bm{\gamma}_{j}^{\top}\bm{\Omega}\bm{\gamma}_{k}-\hat{\bm{\gamma}}_{j}^{\top}\widehat{\bm{\Omega}}\hat{\bm{\gamma}}_{k}| (15)
=|(𝜸j𝛀𝜸k𝜸^j𝛀𝜸k)+(𝜸^j𝛀𝜸k𝜸^j𝛀^𝜸k)+(𝜸^j𝛀^𝜸k𝜸^j𝛀^𝜸^k)|\displaystyle=|(\bm{\gamma}_{j}^{\top}\bm{\Omega}\bm{\gamma}_{k}-\hat{\bm{\gamma}}_{j}^{\top}\bm{\Omega}\bm{\gamma}_{k})+(\hat{\bm{\gamma}}_{j}^{\top}\bm{\Omega}\bm{\gamma}_{k}-\hat{\bm{\gamma}}_{j}^{\top}\widehat{\bm{\Omega}}\bm{\gamma}_{k})+(\hat{\bm{\gamma}}_{j}^{\top}\widehat{\bm{\Omega}}\bm{\gamma}_{k}-\hat{\bm{\gamma}}_{j}^{\top}\widehat{\bm{\Omega}}\hat{\bm{\gamma}}_{k})|
|𝜸j𝛀𝜸k𝜸^j𝛀𝜸k|+|𝜸^j𝛀𝜸k𝜸^j𝛀^𝜸k|+|𝜸^j𝛀^𝜸k𝜸^j𝛀^𝜸^k|\displaystyle\leq|\bm{\gamma}_{j}^{\top}{\bm{\Omega}}\bm{\gamma}_{k}-\hat{\bm{\gamma}}_{j}^{\top}{\bm{\Omega}}\bm{\gamma}_{k}|+|\hat{\bm{\gamma}}_{j}^{\top}{\bm{\Omega}}\bm{\gamma}_{k}-\hat{\bm{\gamma}}_{j}^{\top}\widehat{\bm{\Omega}}\bm{\gamma}_{k}|+|\hat{\bm{\gamma}}_{j}^{\top}\widehat{\bm{\Omega}}\bm{\gamma}_{k}-\hat{\bm{\gamma}}_{j}^{\top}\widehat{\bm{\Omega}}\hat{\bm{\gamma}}_{k}|
𝜸^j𝜸j1𝛀𝜸k+𝜸^j1𝛀𝛀^max𝜸k1+𝛀^𝜸^j𝜸^k𝜸k1\displaystyle\leq\|\hat{\bm{\gamma}}_{j}-\bm{\gamma}_{j}\|_{1}\|{\bm{\Omega}}\bm{\gamma}_{k}\|_{\infty}+\|\hat{\bm{\gamma}}_{j}\|_{1}\|\bm{\Omega}-\widehat{\bm{\Omega}}\|_{\max}\|\bm{\gamma}_{k}\|_{1}+\|\widehat{\bm{\Omega}}\hat{\bm{\gamma}}_{j}\|_{\infty}\|\hat{\bm{\gamma}}_{k}-\bm{\gamma}_{k}\|_{1}
𝜸^j𝜸j1𝛀max𝜸k1+𝜸^j1𝛀𝛀^max𝜸k1+𝛀^max𝜸^j1𝜸^k𝜸k1\displaystyle\leq\|\hat{\bm{\gamma}}_{j}-\bm{\gamma}_{j}\|_{1}\left\lVert\bm{\Omega}\right\rVert_{\max}\left\lVert\bm{\gamma}_{k}\right\rVert_{1}+\|\hat{\bm{\gamma}}_{j}\|_{1}\|\bm{\Omega}-\widehat{\bm{\Omega}}\|_{\max}\|\bm{\gamma}_{k}\|_{1}+\left\lVert\widehat{\bm{\Omega}}\right\rVert_{\max}\left\lVert\hat{\bm{\gamma}}_{j}\right\rVert_{1}\|\hat{\bm{\gamma}}_{k}-\bm{\gamma}_{k}\|_{1}
=T1+T2+T3.\displaystyle=T_{1}+T_{2}+T_{3}.

From Theorem 1 in the main text, we have $\|\hat{\bm{\gamma}}_{j}-\bm{\gamma}_{j}\|_{1}\leq C\sigma_{\max}r\sqrt{\log(pq)/n}$ with probability at least $1-\exp(-C^{\prime}\log pq)$ for each $j=1,\ldots,p$. Furthermore, by Condition 4.2 in the main text, we have $\left\lVert\bm{\gamma}_{k}\right\rVert_{1}\leq L$ and $\|\bm{\Omega}\|_{\max}\leq C$. Therefore, we obtain

T_{1}\leq CL\sigma_{\max}r\sqrt{\dfrac{\log pq}{n}}. (16)

For the second term T2T_{2}, by the triangle inequality we have that

𝜸^j1=𝜸^j𝜸j+𝜸j1𝜸^j𝜸j1+𝜸j1Cσmaxrlogpqn+LCL,\|\hat{\bm{\gamma}}_{j}\|_{1}=\|\hat{\bm{\gamma}}_{j}-\bm{\gamma}_{j}+\bm{\gamma}_{j}\|_{1}\leq\|\hat{\bm{\gamma}}_{j}-\bm{\gamma}_{j}\|_{1}+\|\bm{\gamma}_{j}\|_{1}\leq C{\sigma_{\max}}r\sqrt{\dfrac{\log pq}{n}}+L\leq C^{\prime}L,

with probability at least $1-\exp(-C^{\prime\prime}\log pq)$, assuming ${\sigma_{\max}}r\sqrt{\log (pq)/{n}}=o(1)$. Next, Lemma 4 provides an upper bound on $\|\bm{\Omega}-\widehat{\bm{\Omega}}\|_{\max}$.

Lemma 4.

Under the Conditions 4.1-4.5 in the main text, we have 𝛀𝛀^maxClog(q)/n\|\bm{\Omega}-\widehat{\bm{\Omega}}\|_{\max}\leq C\sqrt{\log(q)/n} with probability at least 1exp(Clogq)1-\exp(-C^{\prime}\log q).

The proofs of Lemma 4 and the other technical results in this section are deferred to Section S3. Combining Lemma 4 with the bound on $\|\hat{\bm{\gamma}}_{j}\|_{1}$ above and applying the union bound, we obtain

T_{2}\leq CL^{2}\sqrt{\dfrac{\log q}{n}}, (17)

with a probability at least 1exp(Clogq)exp(C′′logpq)1-\exp(-C^{\prime}\log q)-\exp(-C^{\prime\prime}\log pq).

Finally for the term T3T_{3}, by the triangle inequality we also have

𝛀^max𝛀^𝛀max+𝛀maxC,\left\lVert\widehat{\bm{\Omega}}\right\rVert_{\max}\leq\left\lVert\widehat{\bm{\Omega}}-\bm{\Omega}\right\rVert_{\max}+\left\lVert{\bm{\Omega}}\right\rVert_{\max}\leq C,

with a probability at least 1exp(Clogq)1-\exp(-C^{\prime}\log q), assuming (logq)/n=o(1)(\log q)/n=o(1). Hence, with a probability of at least 1exp(Clogq)exp(C′′logpq)1-\exp(-C^{\prime}\log q)-\exp(-C^{\prime\prime}\log pq), we have

T3CLσmaxrlogpqn.T_{3}\leq CL\sigma_{\max}r\sqrt{\dfrac{\log pq}{n}}. (18)

Substituting (16)-(18) into (15) and taking the union bound, we finally obtain

𝚲^𝚲max=maxj,k|𝜸j𝛀𝜸k𝜸^j𝛀^𝜸^k|CL{Llogqn+σmaxrlogpqn},\left\lVert\widehat{\bm{\Lambda}}-\bm{\Lambda}\right\rVert_{\max}=\max_{j,k}|\bm{\gamma}_{j}^{\top}\bm{\Omega}\bm{\gamma}_{k}-\hat{\bm{\gamma}}_{j}^{\top}\widehat{\bm{\Omega}}\hat{\bm{\gamma}}_{k}|\leq CL\left\{L\sqrt{\dfrac{\log q}{n}}+\sigma_{\max}r\sqrt{\dfrac{\log pq}{n}}\right\}, (19)

with probability at least $1-\exp(-C^{\prime}\log q)-\exp(-C^{\prime\prime}\log pq)$.

S2.6 Proof of Lemma 3

By the optimality of 𝜷^\hat{\bm{\beta}}, we obtain

12n𝐲~𝒳^𝜷^22+μ2𝜷^112n𝐲~𝒳^𝜷~22+μ2𝜷~1.\dfrac{1}{2n}\|\tilde{\mathbf{y}}-\hat{\mathcal{X}}\hat{\bm{\beta}}\|_{2}^{2}+\mu_{2}\|\hat{\bm{\beta}}\|_{1}\leq\dfrac{1}{2n}\|\tilde{\mathbf{y}}-\hat{\mathcal{X}}{\tilde{\bm{\beta}}}\|_{2}^{2}+\mu_{2}\|{\tilde{\bm{\beta}}}\|_{1}. (20)

Letting 𝜹=𝜷^𝜷~\bm{\delta}=\hat{\bm{\beta}}-\tilde{\bm{\beta}} and substituting into (20), we obtain

12n𝐲~𝒳^𝜷~𝒳^𝜹22+μ2𝜹+𝜷~112n𝐲~𝒳^𝜷~22+μ2𝜷~1.\dfrac{1}{2n}\|\tilde{\mathbf{y}}-\hat{\mathcal{X}}\tilde{\bm{\beta}}-\hat{\mathcal{X}}\bm{\delta}\|_{2}^{2}+\mu_{2}\|\bm{\delta}+\tilde{\bm{\beta}}\|_{1}\leq\dfrac{1}{2n}\|\tilde{\mathbf{y}}-\hat{\mathcal{X}}{\tilde{\bm{\beta}}}\|_{2}^{2}+\mu_{2}\|{\tilde{\bm{\beta}}}\|_{1}.

Expanding the square, cancelling the same terms on both sides, and rearranging terms, we have

12n𝒳^𝜹221n𝜹𝒳^(𝐲~𝒳^𝜷~)μ2(𝜷~1𝜷~+𝜹1).\dfrac{1}{2n}\|\hat{\mathcal{X}}\bm{\delta}\|_{2}^{2}-\dfrac{1}{n}\bm{\delta}^{\top}\hat{\mathcal{X}}^{\top}(\tilde{\mathbf{y}}-\hat{\mathcal{X}}\tilde{\bm{\beta}})\leq\mu_{2}\left(\|\tilde{\bm{\beta}}\|_{1}-\|\tilde{\bm{\beta}}+\bm{\delta}\|_{1}\right). (21)

By definition, we have 𝜼^=n1𝒳^𝐲~\hat{\bm{\eta}}=n^{-1}\hat{\mathcal{X}}^{\top}\tilde{\mathbf{y}}, and so (21) is equivalent to

12n𝒳^𝜹22𝜹(𝜼^1n𝒳^𝒳^𝜷~)+μ2(𝜷~1𝜷~+𝜹1).\dfrac{1}{2n}\|\hat{\mathcal{X}}\bm{\delta}\|_{2}^{2}\leq\bm{\delta}^{\top}\left(\hat{\bm{\eta}}-\dfrac{1}{n}\hat{\mathcal{X}}^{\top}\hat{\mathcal{X}}\tilde{\bm{\beta}}\right)+\mu_{2}\left(\|\tilde{\bm{\beta}}\|_{1}-\|\tilde{\bm{\beta}}+\bm{\delta}\|_{1}\right).

Applying Hölder’s inequality and the condition on μ2\mu_{2} from the Lemma, we obtain

𝜹(𝜼^1n𝒳^𝒳^𝜷~)\displaystyle\bm{\delta}^{\top}\left(\hat{\bm{\eta}}-\dfrac{1}{n}\hat{\mathcal{X}}^{\top}\hat{\mathcal{X}}\tilde{\bm{\beta}}\right) 𝜹1𝜼^1n𝒳^𝒳^𝜷~12μ2𝜹1.\displaystyle\leq\|\bm{\delta}\|_{1}\left\|\hat{\bm{\eta}}-\dfrac{1}{n}\hat{\mathcal{X}}^{\top}\hat{\mathcal{X}}\tilde{\bm{\beta}}\right\|_{\infty}\leq\dfrac{1}{2}\mu_{2}\|\bm{\delta}\|_{1}.

It follows that

12𝜹1+(𝜷~1𝜷~+𝜹1)0.\frac{1}{2}\|\bm{\delta}\|_{1}+\left(\|\tilde{\bm{\beta}}\|_{1}-\|\tilde{\bm{\beta}}+\bm{\delta}\|_{1}\right)\geq 0. (22)

Furthermore, applying the (reverse) triangle inequality gives

𝜷~1𝜷~+𝜹1=𝜷~S1𝜷~S+𝜹S1𝜹Sc1𝜹S1𝜹Sc1.\displaystyle\left\lVert\tilde{{\bm{\beta}}}\right\rVert_{1}-\left\lVert\tilde{\bm{\beta}}+\bm{\delta}\right\rVert_{1}=\left\lVert\tilde{{\bm{\beta}}}_{S}\right\rVert_{1}-\left\lVert\tilde{{\bm{\beta}}}_{S}+{\bm{\delta}}_{S}\right\rVert_{1}-\left\lVert\bm{\delta}_{S^{c}}\right\rVert_{1}\leq\left\lVert{\bm{\delta}}_{S}\right\rVert_{1}-\left\lVert{\bm{\delta}}_{S^{c}}\right\rVert_{1}.

Since 𝜹1=𝜹S1+𝜹Sc1\left\lVert{\bm{\delta}}\right\rVert_{1}=\left\lVert{\bm{\delta}}_{S}\right\rVert_{1}+\left\lVert{\bm{\delta}}_{S^{c}}\right\rVert_{1}, then equation (22) further leads to

0\displaystyle 0 12𝜹1+𝜷~1𝜷~+𝜹1\displaystyle\leq\dfrac{1}{2}\left\lVert{\bm{\delta}}\right\rVert_{1}+\left\lVert\tilde{{\bm{\beta}}}\right\rVert_{1}-\left\lVert\tilde{\bm{\beta}}+\bm{\delta}\right\rVert_{1}
(iv)12𝜹S1+12𝜹Sc1+𝜹S1𝜹Sc1=32𝜹S112𝜹Sc1\displaystyle\stackrel{{\scriptstyle(iv)}}{{\leq}}\dfrac{1}{2}{\left\lVert{\bm{\delta}}_{S}\right\rVert_{1}+\dfrac{1}{2}\left\lVert{\bm{\delta}}_{S^{c}}\right\rVert_{1}}+\left\lVert{\bm{\delta}}_{S}\right\rVert_{1}-\left\lVert{\bm{\delta}}_{S^{c}}\right\rVert_{1}=\dfrac{3}{2}\left\lVert{\bm{\delta}}_{S}\right\rVert_{1}-\dfrac{1}{2}\left\lVert{\bm{\delta}}_{S^{c}}\right\rVert_{1}
(v)32𝜹S1.\displaystyle\stackrel{{\scriptstyle(v)}}{{\leq}}\dfrac{3}{2}\left\lVert{\bm{\delta}}_{S}\right\rVert_{1}.

It follows from step (iv) that 𝜹Sc13𝜹S1\left\lVert{\bm{\delta}}_{S^{c}}\right\rVert_{1}\leq 3\left\lVert{\bm{\delta}}_{S}\right\rVert_{1}, or 𝜹𝒞(S,3)\bm{\delta}\in\mathcal{C}(S,3).

Next, we apply the following result which establishes the sample covariance matrix n1𝒳^𝒳^n^{-1}\widehat{\mathcal{X}}^{\top}\widehat{\mathcal{X}} satisfies the RE condition with a parameter κ2\kappa_{2} with high probability.

Proposition 3.

Assume Conditions 4.1-4.2, 4.5 and the requirements of Theorem 1 are satisfied. Then there exists a constant κ2>0\kappa_{2}>0 such that 𝚺^𝐗^1/2𝛅22κ2𝛅22\left\lVert\widehat{\bm{\Sigma}}_{\widehat{\mathbf{X}}}^{1/2}\bm{\delta}\right\rVert_{2}^{2}\geq\kappa_{2}\left\lVert\bm{\delta}\right\rVert_{2}^{2}, for any δ𝒞(S,3)\delta\in\mathcal{C}(S,3), with probability at least 1exp(Clogq)exp(C′′logpq)1-\exp(-C^{\prime}\log q)-\exp(-C^{\prime\prime}\log pq), for constants CC^{\prime} and C′′C^{\prime\prime}.

Conditioned on this RE condition, step (v) implies

12κ2𝜹2212n𝒳^𝜹2212μ2𝜹1+μ2(𝜷~1𝜷^1)32μ2𝜹S132μ2s𝜹S232μ2s𝜹2.\begin{split}\dfrac{1}{2}\kappa_{2}\left\lVert\bm{\delta}\right\rVert_{2}^{2}{\leq}\dfrac{1}{2n}\left\lVert\hat{\mathcal{X}}{\bm{\delta}}\right\rVert_{2}^{2}&{\leq}\dfrac{1}{2}\mu_{2}\left\lVert{\bm{\delta}}\right\rVert_{1}+\mu_{2}\left(\left\lVert\tilde{{\bm{\beta}}}\right\rVert_{1}-\left\lVert\hat{{\bm{\beta}}}\right\rVert_{1}\right)\\ &{\leq}\dfrac{3}{2}\mu_{2}\left\lVert\bm{\delta}_{S}\right\rVert_{1}{\leq}\dfrac{3}{2}\mu_{2}\sqrt{s}\left\lVert\bm{\delta}_{S}\right\rVert_{2}\leq\dfrac{3}{2}\mu_{2}\sqrt{s}\left\lVert\bm{\delta}\right\rVert_{2}.\end{split}

As a result, we obtain

𝜷^𝜷~2=𝜹23κ2μ2s,\left\lVert\hat{\bm{\beta}}-\tilde{\bm{\beta}}\right\rVert_{2}=\left\lVert\bm{\delta}\right\rVert_{2}\leq\dfrac{3}{\kappa_{2}}\mu_{2}\sqrt{s},

as required.

S2.7 Proof of Theorem 2

Recall that $\tilde{\bm{\beta}}=\xi^{-1}\bm{\beta}(\hat{\bm{\eta}}^{\top}\bm{\eta})$ with $\xi=\lVert\bm{\Sigma}_{\widehat{\mathbf{X}}}\bm{\beta}\rVert_{2}>0$. By Lemma 2, the term $\hat{\bm{\eta}}^{\top}\bm{\eta}$ is non-zero with high probability; hence, $P(\bm{\beta})=P(\tilde{\bm{\beta}})$ with the same probability. Next, by the Cauchy–Schwarz inequality, we have

P(𝜷^)P(𝜷)F=P(𝜷^)P(𝜷~)F4𝜷^𝜷~2𝜷~2=C𝜷^𝜷~2.\left\|P(\hat{\bm{\beta}})-P({\bm{\beta}})\right\|_{F}=\left\|P(\hat{\bm{\beta}})-P({\tilde{\bm{\beta}}})\right\|_{F}\leq 4\frac{\|\hat{\bm{\beta}}-\tilde{\bm{\beta}}\|_{2}}{\|\tilde{\bm{\beta}}\|_{2}}=C\|\hat{\bm{\beta}}-\tilde{\bm{\beta}}\|_{2}.

Thus the required result will follow from Lemma 3, provided the choice of $\mu_{2}$ given in the theorem satisfies $\mu_{2}\geq 2\|\hat{\bm{\eta}}-n^{-1}\hat{\mathcal{X}}^{\top}\hat{\mathcal{X}}\tilde{\bm{\beta}}\|_{\infty}$ with probability tending to one. It thus remains to find an upper bound for $\|\hat{\bm{\eta}}-n^{-1}\hat{\mathcal{X}}^{\top}\hat{\mathcal{X}}\tilde{\bm{\beta}}\|_{\infty}$.

By the triangle inequality, we have

𝜼^1n𝒳^𝒳^𝜷~\displaystyle\left\|\hat{\bm{\eta}}-\dfrac{1}{n}\hat{\mathcal{X}}^{\top}\hat{\mathcal{X}}\tilde{\bm{\beta}}\right\|_{\infty} =𝜼^𝚺𝐗^𝜷~+𝚺𝐗^𝜷~1n𝒳^𝒳^𝜷~\displaystyle=\left\|\hat{\bm{\eta}}-\bm{\Sigma}_{\widehat{\mathbf{X}}}\tilde{\bm{\beta}}+\bm{\Sigma}_{\widehat{\mathbf{X}}}\tilde{\bm{\beta}}-\dfrac{1}{n}\hat{\mathcal{X}}^{\top}\hat{\mathcal{X}}\tilde{\bm{\beta}}\right\|_{\infty}
𝜼^𝚺𝐗^𝜷~+(𝚺𝐗^𝚺^𝐗^)𝜷~.\displaystyle\leq\left\|\hat{\bm{\eta}}-\bm{\Sigma}_{\widehat{\mathbf{X}}}\tilde{\bm{\beta}}\right\|_{\infty}+\left\|\left(\bm{\Sigma}_{\widehat{\mathbf{X}}}-\widehat{\bm{\Sigma}}_{\widehat{\mathbf{X}}}\right)\tilde{\bm{\beta}}\right\|_{\infty}. (23)

We will upper bound each term on the right-hand side of (23). For the first term, note that by definition $\bm{\Sigma}_{\widehat{\mathbf{X}}}\tilde{\bm{\beta}}=\tilde{\bm{\eta}}=\bm{\eta}\bm{\eta}^{\top}\widehat{\bm{\eta}}$, the projection of $\widehat{\bm{\eta}}$ onto the eigenvector $\bm{\eta}$ associated with the largest eigenvalue of $\bm{\Lambda}=\operatorname{\text{Var}}\left\{E(\widehat{\mathbf{X}}\mid y)\right\}$. Hence, we obtain

𝜼^𝚺𝐗^𝜷~=𝜼^𝜼~\displaystyle\left\|\widehat{\bm{\eta}}-\bm{\Sigma}_{{\widehat{\mathbf{X}}}}\tilde{\bm{\beta}}\right\|_{\infty}=\left\|\widehat{\bm{\eta}}-\tilde{\bm{\eta}}\right\|_{\infty} 𝜼^𝜼~2\displaystyle\leq\left\lVert\widehat{\bm{\eta}}-\tilde{\bm{\eta}}\right\rVert_{2}
=(i)1ρ^2\displaystyle\stackrel{{\scriptstyle(i)}}{{=}}\sqrt{1-\hat{\rho}^{2}}
(ii)CLλ{Llogqn+σmaxrlogpqn},\displaystyle\stackrel{{\scriptstyle(ii)}}{{\leq}}C\sqrt{\frac{L}{\lambda}\left\{L\sqrt{\dfrac{\log q}{n}}+\sigma_{\max}r\sqrt{\dfrac{\log pq}{n}}\right\}},

for a large positive constant CC, where step (i) above follows since

𝜼^𝜼~22=(𝜼^𝜼~)(𝜼^𝜼~)\displaystyle\left\lVert\widehat{\bm{\eta}}-\tilde{\bm{\eta}}\right\rVert_{2}^{2}=(\widehat{\bm{\eta}}-\tilde{\bm{\eta}})^{\top}(\widehat{\bm{\eta}}-\tilde{\bm{\eta}}) =𝜼^𝜼^2𝜼^𝜼~+𝜼~𝜼~\displaystyle=\widehat{\bm{\eta}}^{\top}\widehat{\bm{\eta}}-2\widehat{\bm{\eta}}^{\top}\tilde{\bm{\eta}}+\tilde{\bm{\eta}}^{\top}\tilde{\bm{\eta}}
=12(𝜼^𝜼)(𝜼𝜼^)+𝜼^𝜼𝜼𝜼𝜼𝜼^\displaystyle=1-2(\widehat{\bm{\eta}}^{\top}\bm{\eta})(\bm{\eta}^{\top}\widehat{\bm{\eta}})+\widehat{\bm{\eta}}^{\top}\bm{\eta}\bm{\eta}^{\top}\bm{\eta}\bm{\eta}^{\top}\widehat{\bm{\eta}}
=12ρ^2+ρ^2=1ρ^2,\displaystyle=1-2\hat{\rho}^{2}+\hat{\rho}^{2}=1-\hat{\rho}^{2},

and step (ii) follows from applying Lemma 2.

For the second term on the right-hand side of (23), using (42) we have

(𝚺𝐗^1n𝒳^𝒳^)𝜷~\displaystyle\left\|\left(\bm{\Sigma}_{\widehat{\mathbf{X}}}-\dfrac{1}{n}\hat{\mathcal{X}}^{\top}\hat{\mathcal{X}}\right)\tilde{\bm{\beta}}\right\|_{\infty} 𝜷~1max1j,kp|𝜸j𝚺𝐙𝜸k𝜸^j𝚺^𝐙𝜸^k|\displaystyle\leq\|\tilde{\bm{\beta}}\|_{1}\max_{1\leq j,k\leq p}|\bm{\gamma}_{j}^{\top}\bm{\Sigma}_{\mathbf{Z}}\bm{\gamma}_{k}-\hat{\bm{\gamma}}_{j}^{\top}\widehat{\bm{\Sigma}}_{\mathbf{Z}}\hat{\bm{\gamma}}_{k}|
CL{Llogqn+σmaxrlogpqn}\displaystyle\leq CL\left\{L\sqrt{\dfrac{\log q}{n}}+\sigma_{\max}r\sqrt{\dfrac{\log pq}{n}}\right\}

Hence with the choice of μ2\mu_{2} as stated in the theorem, we have

μ22𝜼^n1𝒳^𝒳^𝜷~\mu_{2}\geq 2\|\hat{\bm{\eta}}-n^{-1}\hat{\mathcal{X}}^{\top}\hat{\mathcal{X}}\tilde{\bm{\beta}}\|_{\infty} (24)

with probability at least $1-\exp\left(-C^{\prime}\log q\right)-\exp(-C^{\prime\prime}\log pq)$. Applying the bound from Lemma 3, we finally have

P(𝜷^)P(𝜷)FCsκ2(L1λ+L1),L1=L{Llogqn+σmaxrlogpqn}\left\|P(\hat{\bm{\beta}})-P({\bm{\beta}})\right\|_{F}\leq C\frac{\sqrt{s}}{\kappa_{2}}\left(\sqrt{\dfrac{L_{1}}{\lambda}}+L_{1}\right),~{}L_{1}=L\left\{L\sqrt{\dfrac{\log q}{n}}+\sigma_{\max}r\sqrt{\dfrac{\log pq}{n}}\right\}

with the same probability, as required.

S2.8 Proof of Theorem 3

The proof of this theorem follows the same steps as the primal-dual witness technique, which is used to establish variable selection consistency of the Lasso estimator in the standard linear model (see Section 7.5.2, Wainwright, 2019), as well as of the two-stage Lasso estimator of Lin et al., (2015). In our context, the proposed two-stage Lasso SIR estimator is the minimizer of

L(𝜷)=12n𝐲~𝒳^𝜷22+μ2𝜷1.L(\bm{\beta})=\dfrac{1}{2n}\|\tilde{\mathbf{y}}-\hat{\mathcal{X}}\bm{\beta}\|_{2}^{2}+\mu_{2}\|\bm{\beta}\|_{1}.

Applying the Karush–Kuhn–Tucker optimality conditions, the estimator $\hat{\bm{\beta}}$ must satisfy

1n𝒳^S^(𝐲~𝒳^𝜷^)=μ2sgn(𝜷^S^),\dfrac{1}{n}\hat{\mathcal{X}}^{\top}_{\hat{S}}(\tilde{\mathbf{y}}-\hat{\mathcal{X}}\hat{\bm{\beta}})=\mu_{2}\text{sgn}(\hat{\bm{\beta}}_{\hat{S}}), (25)

and

1n𝒳^S^c(𝐲~𝒳^𝜷^)μ2.\left\|\dfrac{1}{n}\hat{\mathcal{X}}^{\top}_{\hat{S}^{c}}(\tilde{\mathbf{y}}-\hat{\mathcal{X}}\hat{\bm{\beta}})\right\|_{\infty}\leq\mu_{2}. (26)

It suffices to find a solution $\hat{\bm{\beta}}$ such that both (25) and (26) hold. To achieve this, the idea of the primal-dual witness technique is to first set $\hat{\bm{\beta}}_{S^{c}}=\bm{0}$, then construct $\hat{\bm{\beta}}_{S}$ from (25), and finally show that the resulting $\hat{\bm{\beta}}$ satisfies (26). Central to this development is the following lemma concerning the sample covariance matrix $\widehat{\bm{\Sigma}}_{\widehat{\mathbf{X}}}$.

Lemma 5.

Assume Condition 4.7 is satisfied. If there exists a constant CC such that

CsL{Llogqn+σmaxrlogpqn}αϕ(4α)CsL\left\{L\sqrt{\dfrac{\log q}{n}}+\sigma_{\max}r\sqrt{\dfrac{\log pq}{n}}\right\}\leq\frac{\alpha}{\phi(4-\alpha)} (27)

then with probability at least $1-\exp(-C^{\prime}\log q)-\exp(-C^{\prime\prime}\log pq)$, the matrix $\widehat{\bm{\Sigma}}_{\widehat{\mathbf{X}}}=\widehat{\bm{\Gamma}}^{\top}\widehat{\bm{\Sigma}}_{\mathbf{Z}}\widehat{\bm{\Gamma}}$, where $\widehat{\bm{\Gamma}}$ is obtained from the first stage with the tuning parameters $\mu_{1,j}$ as stated in Theorem 1, satisfies

𝐂^SS14α2(2α)φ,𝐂^ScS𝐂^SS11α2.\left\|\widehat{\mathbf{C}}_{SS}^{-1}\right\|_{\infty}\leq\frac{4-\alpha}{2(2-\alpha)}\varphi,\quad\left\|\widehat{\mathbf{C}}_{S^{c}S}\widehat{\mathbf{C}}_{SS}^{-1}\right\|_{\infty}\leq 1-\frac{\alpha}{2}. (28)

Using a similar argument as in the proof of Theorem 2, there also exist constants CC^{\prime} and C′′C^{\prime\prime} such that

𝜼^S𝐂^SS𝜷~S𝜼^1n𝒳^𝒳^𝜷~α4αμ2,\left\|\hat{\bm{\eta}}_{S}-\widehat{\mathbf{C}}_{SS}\tilde{\bm{\beta}}_{S}\right\|_{\infty}\leq\|\hat{\bm{\eta}}-\dfrac{1}{n}\hat{\mathcal{X}}^{\top}\hat{\mathcal{X}}\tilde{\bm{\beta}}\|_{\infty}\leq\dfrac{\alpha}{4-\alpha}\mu_{2}, (29)

with probability at least 1exp(Clogq)exp(C′′logpq)1-\exp(-C^{\prime}\log q)-\exp(-C^{\prime\prime}\log pq). Hence, we will condition on both events (28) and (29) in the remaining part of the proof.

Next, recall 𝐂^SS=n1𝒳^S𝒳^S\widehat{\mathbf{C}}_{SS}=n^{-1}\hat{\mathcal{X}}^{\top}_{S}\hat{\mathcal{X}}_{S}, and by definition of 𝐲~\tilde{\mathbf{y}} we have n1𝒳^𝐲~=𝜼^n^{-1}\hat{\mathcal{X}}^{\top}\tilde{\mathbf{y}}=\hat{\bm{\eta}}, which is the eigenvector associated with the largest eigenvalue of 𝚲^\hat{\bm{\Lambda}}. Since 𝜷^Sc=0\hat{\bm{\beta}}_{S^{c}}=0, then (25) is equivalent to

μ2sgn(𝜷^S^)=1n𝒳^S^(𝐲~𝒳^𝜷^)\displaystyle\mu_{2}\text{sgn}(\hat{\bm{\beta}}_{\hat{S}})=\dfrac{1}{n}\hat{\mathcal{X}}^{\top}_{\hat{S}}(\tilde{\mathbf{y}}-\hat{\mathcal{X}}\hat{\bm{\beta}}) =𝜼^S1n𝒳^S𝒳^S𝜷^S\displaystyle=\hat{\bm{\eta}}_{S}-\dfrac{1}{n}\hat{\mathcal{X}}_{S}^{\top}\hat{\mathcal{X}}_{S}\hat{\bm{\beta}}_{S}
=𝜼^S𝐂^SS(𝜷^S𝜷~S)𝐂^SS𝜷~S\displaystyle=\hat{\bm{\eta}}_{S}-\widehat{\mathbf{C}}_{SS}(\hat{\bm{\beta}}_{S}-\tilde{\bm{\beta}}_{S})-\widehat{\mathbf{C}}_{SS}\tilde{\bm{\beta}}_{S}
=(𝜼^S𝐂^SS𝜷~S)𝐂^SS(𝜷^S𝜷~S),\displaystyle=(\hat{\bm{\eta}}_{S}-\widehat{\mathbf{C}}_{SS}\tilde{\bm{\beta}}_{S})-\widehat{\mathbf{C}}_{SS}(\hat{\bm{\beta}}_{S}-\tilde{\bm{\beta}}_{S}),

where 𝜷~\tilde{\bm{\beta}} is defined previously. Hence, we obtain

𝜷^S𝜷~S=𝐂^SS1{(𝜼^S𝐂^SS𝜷~S)μ2sgn(𝜷^S^)},\hat{\bm{\beta}}_{S}-\tilde{\bm{\beta}}_{S}=\widehat{\mathbf{C}}_{SS}^{-1}\left\{(\hat{\bm{\eta}}_{S}-\widehat{\mathbf{C}}_{SS}\tilde{\bm{\beta}}_{S})-\mu_{2}\text{sgn}(\hat{\bm{\beta}}_{\hat{S}})\right\}, (30)

and therefore

𝜷^S𝜷~S𝐂^SS1{𝜼^S𝐂^SS𝜷~S+μ2}.\|\hat{\bm{\beta}}_{S}-\tilde{\bm{\beta}}_{S}\|_{\infty}\leq\|\widehat{\mathbf{C}}_{SS}^{-1}\|_{\infty}\left\{\left\|\hat{\bm{\eta}}_{S}-\widehat{\mathbf{C}}_{SS}\tilde{\bm{\beta}}_{S}\right\|_{\infty}+\mu_{2}\right\}. (31)

Substituting (28) and (29) into (31), we obtain

𝜷^S𝜷~S4α2(2α)φ(α4αμ2+μ2)=2(2α)φμ2<minjS|β~j|,\|\hat{\bm{\beta}}_{S}-\tilde{\bm{\beta}}_{S}\|_{\infty}\leq\frac{4-\alpha}{2(2-\alpha)}\varphi\left(\dfrac{\alpha}{4-\alpha}\mu_{2}+\mu_{2}\right)=\dfrac{2}{(2-\alpha)}\varphi\mu_{2}<\min_{j\in S}|\tilde{\beta}_{j}|,

where the last step follows from

\min_{j\in S}|\tilde{\beta}_{j}|=\xi^{-1}|\bm{\eta}^{\top}\widehat{\bm{\eta}}|\min_{j\in S}|\beta_{j}|\geq\xi^{-1}\hat{\rho}\min_{j\in S}|\beta_{j}|>\dfrac{2}{(2-\alpha)}\varphi\mu_{2},

and by the choice of μ2\mu_{2} as stated in the theorem. Thus we have sgn(𝜷^S)=sgn(𝜷~S)\text{sgn}(\hat{\bm{\beta}}_{S})=\text{sgn}(\tilde{\bm{\beta}}_{S}), and since 𝜷^Sc=𝟎\hat{\bm{\beta}}_{S^{c}}=\bm{0} by construction then we have S^=S\hat{S}=S.

It remains to show that 𝜷^\hat{\bm{\beta}} satisfies (26). We have

1n𝒳^S^c(𝐲~𝒳^𝜷^)\displaystyle\dfrac{1}{n}\hat{\mathcal{X}}^{\top}_{\hat{S}^{c}}(\tilde{\mathbf{y}}-\hat{\mathcal{X}}\hat{\bm{\beta}}) =𝜼^Sc1n𝒳^Sc𝒳^S𝜷^S\displaystyle=\hat{\bm{\eta}}_{S^{c}}-\dfrac{1}{n}\hat{\mathcal{X}}_{S^{c}}^{\top}\hat{\mathcal{X}}_{S}\hat{\bm{\beta}}_{S}
=𝜼^Sc1n𝒳^Sc𝒳^S(𝜷^S𝜷~S)1n𝒳^Sc𝒳^S𝜷~S\displaystyle=\hat{\bm{\eta}}_{S^{c}}-\dfrac{1}{n}\hat{\mathcal{X}}^{\top}_{S^{c}}\hat{\mathcal{X}}_{S}(\hat{\bm{\beta}}_{S}-\tilde{\bm{\beta}}_{S})-\dfrac{1}{n}\hat{\mathcal{X}}_{S^{c}}^{\top}\hat{\mathcal{X}}_{S}\tilde{\bm{\beta}}_{S}
=(𝜼^Sc1n𝒳^Sc𝒳^S𝜷~S)𝐂^ScS𝐂^SS1{(𝜼^S𝐂^SS𝜷~S)μ2sgn(𝜷^S)},\displaystyle=\left(\hat{\bm{\eta}}_{S^{c}}-\dfrac{1}{n}\hat{\mathcal{X}}_{S^{c}}^{\top}\hat{\mathcal{X}}_{S}\tilde{\bm{\beta}}_{S}\right)-\widehat{\mathbf{C}}_{S^{c}S}\widehat{\mathbf{C}}_{SS}^{-1}\left\{(\hat{\bm{\eta}}_{S}-\widehat{\mathbf{C}}_{SS}\tilde{\bm{\beta}}_{S})-\mu_{2}\text{sgn}(\hat{\bm{\beta}}_{{S}})\right\},

where the last equality follows from (30). Hence, we obtain

1n𝒳^S^c(𝐲~𝒳^𝜷^)\displaystyle\left\|\dfrac{1}{n}\hat{\mathcal{X}}^{\top}_{\hat{S}^{c}}(\tilde{\mathbf{y}}-\hat{\mathcal{X}}\hat{\bm{\beta}})\right\|_{\infty} 𝜼^Sc1n𝒳^Sc𝒳^S𝜷~S+𝐂^ScS𝐂^SS1(𝜼^S𝐂^SS𝜷~S)μ2sgn(𝜷^S)\displaystyle\leq\left\|\hat{\bm{\eta}}_{S^{c}}-\dfrac{1}{n}\hat{\mathcal{X}}_{S^{c}}^{\top}\hat{\mathcal{X}}_{S}\tilde{\bm{\beta}}_{S}\right\|_{\infty}+\|\widehat{\mathbf{C}}_{S^{c}S}\widehat{\mathbf{C}}_{SS}^{-1}\|_{\infty}\|(\hat{\bm{\eta}}_{S}-\widehat{\mathbf{C}}_{SS}\tilde{\bm{\beta}}_{S})-\mu_{2}\text{sgn}(\hat{\bm{\beta}}_{{S}})\|_{\infty}
𝜼^1n𝒳^𝒳^𝜷~+𝐂^ScS𝐂^SS1(𝜼^S1n𝒳^𝒳^𝜷~)μ2sgn(𝜷^S)\displaystyle\leq\left\|\hat{\bm{\eta}}-\dfrac{1}{n}\hat{\mathcal{X}}^{\top}\hat{\mathcal{X}}\tilde{\bm{\beta}}\right\|_{\infty}+\|\widehat{\mathbf{C}}_{S^{c}S}\widehat{\mathbf{C}}_{SS}^{-1}\|_{\infty}\left\|(\hat{\bm{\eta}}_{S}-\dfrac{1}{n}\widehat{\mathcal{X}}^{\top}\widehat{\mathcal{X}}\tilde{\bm{\beta}})-\mu_{2}\text{sgn}(\hat{\bm{\beta}}_{{S}})\right\|_{\infty}
α4αμ2+(1α2)(α4αμ2+μ2)=μ2.\displaystyle\leq\dfrac{\alpha}{4-\alpha}\mu_{2}+\left(1-\frac{\alpha}{2}\right)\left(\dfrac{\alpha}{4-\alpha}\mu_{2}+\mu_{2}\right)=\mu_{2}.

The required results follow from this inequality.
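
As an aside (not part of the formal argument), the primal–dual witness construction above can be illustrated numerically. In the following Python sketch, a randomly generated toy design and pseudo-response stand in for \hat{\mathcal{X}} and \tilde{\mathbf{y}} (all names and numerical values are illustrative assumptions, not output of the estimator); the restricted solution on S is built as in (30), and strict dual feasibility on S^{c}, cf. (26), is then checked.

import numpy as np

rng = np.random.default_rng(2)
n, p, s, mu2 = 200, 20, 3, 0.1

# Toy stand-ins: a random design playing the role of the fitted covariates and
# a pseudo-response whose signal lives on the first s coordinates.
X = rng.standard_normal((n, p))
beta_target = np.zeros(p)
beta_target[:s] = 1.0
y_tilde = X @ beta_target + 0.1 * rng.standard_normal(n)

S, Sc = np.arange(s), np.arange(s, p)
C = X.T @ X / n                    # sample analogue of C_hat
eta = X.T @ y_tilde / n            # sample analogue of eta_hat

# Restricted solution on S, cf. (30), plugging in the sign of the target vector
# as in a primal-dual witness construction.
beta_S = np.linalg.solve(C[np.ix_(S, S)], eta[S] - mu2 * np.sign(beta_target[S]))
beta_hat = np.zeros(p)
beta_hat[S] = beta_S

# Check sign agreement on S and strict dual feasibility on S^c, cf. (26);
# together these certify that beta_hat is a Lasso solution supported on S.
signs_ok = np.array_equal(np.sign(beta_S), np.sign(beta_target[S]))
dual = np.abs(X[:, Sc].T @ (y_tilde - X @ beta_hat) / n).max()
print(signs_ok, dual, dual <= mu2)

When the printed dual correlation falls below \mu_{2} and the signs on S agree with those of the target vector, the restricted solution is certified as a Lasso solution with support S, which mirrors the logic of the proof.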

S2.9 Proof of Theorem 4

Consider the multiple index model with \mathbf{B}\in\mathbb{R}^{p\times d}. In this case, the quantities \bm{\eta} and \widehat{\bm{\eta}} are the p\times d semi-orthogonal matrices of eigenvectors of \bm{\Lambda} and \widehat{\bm{\Lambda}}, respectively. Let \widetilde{\bm{\eta}}=P(\bm{\eta})\widehat{\bm{\eta}}=\bm{\eta}\bm{\eta}^{\top}\widehat{\bm{\eta}} be the projection of \widehat{\bm{\eta}} onto \bm{\eta}. By Lemma 1, we have \Sigma_{\widehat{\mathbf{X}}}\mathbf{B}=\bm{\eta}\widetilde{\mathbf{D}}, where \widetilde{\mathbf{D}} is a d\times d diagonal matrix with non-zero diagonal elements, so that \bm{\eta}=\Sigma_{\widehat{\mathbf{X}}}\mathbf{B}\widetilde{\mathbf{D}}^{-1}. Hence, we can write

\widetilde{\bm{\eta}}=\Sigma_{\widehat{\mathbf{X}}}\mathbf{B}\widetilde{\mathbf{D}}^{-1}\bm{\eta}^{\top}\widehat{\bm{\eta}}=\Sigma_{\widehat{\mathbf{X}}}\widetilde{\mathbf{B}},

where \widetilde{\mathbf{B}}=\mathbf{B}\widetilde{\mathbf{D}}^{-1}\bm{\eta}^{\top}\widehat{\bm{\eta}}=\mathbf{B}\widetilde{\mathbf{D}}^{-1}\mathbf{H}. When \mathbf{H}=\bm{\eta}^{\top}\widehat{\bm{\eta}} is a d\times d invertible matrix, the column space spanned by \widetilde{\mathbf{B}} and that spanned by \mathbf{B} are the same, and so are the index sets of their non-zero rows.

Proposition 4 establishes a lower bound on the trace of 𝐇2\mathbf{H}^{2}.

Proposition 4.

Assume Conditions 4.1–4.6 are satisfied. Then there exist positive constants CC, CC^{\prime}, and C′′C^{\prime\prime} such that

trace(𝐇2)dCLdλd{Llogqn+σmaxrlogpqn},\text{trace}(\mathbf{H}^{2})\geq d-\dfrac{CL\sqrt{d}}{\lambda_{d}}\left\{L\sqrt{\dfrac{\log q}{n}}+\sigma_{\max}r\sqrt{\dfrac{\log pq}{n}}\right\},

with probability at least 1-\exp(-C^{\prime}\log q)-\exp(-C^{\prime\prime}\log pq).

Furthermore, since all singular values of \mathbf{H}=\bm{\eta}^{\top}\widehat{\bm{\eta}} are at most one, the Cauchy–Schwarz inequality gives \text{trace}(\mathbf{H}^{2})\leq\text{trace}(\mathbf{H}\mathbf{H}^{\top})\leq d. By Condition 4.3 with r^{2}\log(pq)/n\to 0 and Condition 4.4 with \lambda_{d}=O(1), we thus have \text{trace}(\mathbf{H}^{2})\to d with probability at least 1-\exp(-C^{\prime}\log q)-\exp(-C^{\prime\prime}\log pq). This implies that, with the same probability, the matrix \mathbf{H} is invertible.
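
As a quick numerical aside (not needed for the argument), the bound \text{trace}(\mathbf{H}\mathbf{H}^{\top})\leq d and the link between \text{trace}(\mathbf{H}\mathbf{H}^{\top}) being close to d and the invertibility of \mathbf{H} can be checked directly. In the Python sketch below, two randomly generated semi-orthogonal matrices are illustrative stand-ins for \bm{\eta} and \widehat{\bm{\eta}}.

import numpy as np

rng = np.random.default_rng(0)
p, d = 50, 3

# Random semi-orthogonal stand-ins for the population and estimated eigenvector matrices.
eta, _ = np.linalg.qr(rng.standard_normal((p, d)))
eta_hat, _ = np.linalg.qr(rng.standard_normal((p, d)))

H = eta.T @ eta_hat                    # d x d overlap matrix
# All singular values of H lie in [0, 1], so trace(H^2) <= trace(H H') <= d.
assert np.trace(H @ H) <= np.trace(H @ H.T) + 1e-10
assert np.trace(H @ H.T) <= d + 1e-10
# If trace(H H') is close to d, every singular value of H is close to one and H
# is invertible; for two unrelated random subspaces the overlap is typically far from d.
print(np.trace(H @ H.T), np.linalg.matrix_rank(H))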

Conditioning on the invertibility of 𝐇\mathbf{H}, Lemma 3 holds for each column 𝜷~k,k=1,,d\tilde{\bm{\beta}}_{k},~{}k=1,\ldots,d. As a result, we have

𝜷^k𝜷~k23κ2μ2ks,\left\lVert\hat{\bm{\beta}}_{k}-\tilde{\bm{\beta}}_{k}\right\rVert_{2}\leq\dfrac{3}{\kappa_{2}}\mu_{2k}\sqrt{s},

for any \mu_{2k}\geq 2\left\lVert\widehat{\bm{\eta}}_{k}-n^{-1}\widehat{\mathcal{X}}^{\top}\widehat{\mathcal{X}}\widetilde{\bm{\beta}}_{k}\right\rVert_{\infty}. Hence, by a similar argument to the proof of Theorem 2, and with the choice of \mu_{2} as stated in the theorem, there exist constants C, C^{\prime}, and C^{\prime\prime} such that

\left\|P(\widehat{\mathbf{B}})-P(\mathbf{B})\right\|_{F}\leq C\frac{\sqrt{s}}{\kappa_{2}}\left(\sqrt{\dfrac{L_{1}}{\lambda_{d}}}+L_{1}\right),\quad L_{1}=L\left\{L\sqrt{\dfrac{\log q}{n}}+\sigma_{\max}r\sqrt{\dfrac{\log pq}{n}}\right\}

with probability at least 1-\exp(-C^{\prime}\log q)-\exp(-C^{\prime\prime}\log pq).

Finally, turning to variable selection consistency, let \mathcal{S}_{k}=\left\{j:\beta_{jk}\neq 0\right\} and \widehat{\mathcal{S}}_{k}=\left\{j:\hat{\beta}_{jk}\neq 0\right\}. By a similar argument to that in the proof of Theorem 3, under the condition on \mu_{2k} as stated in Theorem 4, we have \widehat{\mathcal{S}}_{k}=\mathcal{S}_{k} for k=1,\ldots,d with probability at least 1-\exp(-C^{\prime}\log q)-\exp(-C^{\prime\prime}\log pq). Note the diagonal element of the estimated projection matrix satisfies [\mathcal{P}(\widehat{\mathbf{B}})]_{jj}=0 if and only if \hat{\beta}_{jk}=0 for all k=1,\ldots,d. Hence, taking the union bound over the d columns, we have \widehat{\mathcal{S}}=\mathcal{S} with probability at least 1-d\exp(-C^{\prime}\log q)-d\exp(-C^{\prime\prime}\log pq).
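
The final step, which reads off the selected variables from the diagonal of the projection matrix, can be illustrated with a small Python sketch; the matrix \mathbf{B} below is a randomly generated row-sparse stand-in, not output from the estimator.

import numpy as np

rng = np.random.default_rng(1)
p, d, s = 30, 2, 5

# A randomly generated row-sparse p x d matrix whose first s rows are non-zero,
# standing in for the basis of the central subspace.
B = np.zeros((p, d))
B[:s, :] = rng.standard_normal((s, d))

def proj(M):
    # Projection matrix onto the column space of M.
    return M @ np.linalg.inv(M.T @ M) @ M.T

P_B = proj(B)
# [P(B)]_{jj} = b_j' (B'B)^{-1} b_j vanishes exactly when the j-th row of B is zero,
# which links the row support of B to the diagonal of P(B).
active = np.where(np.abs(np.diag(P_B)) > 1e-10)[0]
print(active)                       # expected: 0, ..., s-1

# Subspace estimation error is measured through projection matrices,
# here against a slightly perturbed copy of B.
B_hat = B + 0.01 * rng.standard_normal((p, d))
print(np.linalg.norm(proj(B_hat) - P_B, ord="fro"))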

S3 Additional derivations

S3.1 Proof of Lemma 4

By the triangle inequality, we have

𝛀^𝛀max=(𝚺𝐙𝚺^𝐙)+(𝚵𝚵^)max𝚺^𝐙𝚺𝐙max+𝚵^𝚵max.\left\lVert\widehat{\bm{\Omega}}-\bm{\Omega}\right\rVert_{\max}=\left\lVert(\bm{\Sigma}_{\mathbf{Z}}-\widehat{\bm{\Sigma}}_{\mathbf{Z}})+(\mathbf{\Xi}-\widehat{\mathbf{\Xi}})\right\rVert_{\max}\leq\left\lVert\widehat{\bm{\Sigma}}_{\mathbf{Z}}-\bm{\Sigma}_{\mathbf{Z}}\right\rVert_{\max}+\left\lVert\widehat{\mathbf{\Xi}}-\mathbf{\Xi}\right\rVert_{\max}.

Also, by the property of sub-Gaussian random variables, there exist positive constants CC and CC^{\prime} such that

\left\|\widehat{\bm{\Sigma}}_{\mathbf{Z}}-\bm{\Sigma}_{\mathbf{Z}}\right\|_{\max}\leq C\sqrt{\frac{\log q}{n}}, (32)

with probability at least 1exp(Clogq)1-\exp(-C^{\prime}\log q) (Ravikumar et al.,, 2011; Tan et al.,, 2018). Next, we will bound the elementwise maximum norm 𝚵^𝚵max.\left\lVert\widehat{\mathbf{\Xi}}-\mathbf{\Xi}\right\rVert_{\max}.

Following Zhu et al., (2006), we introduce the following notation. Let y_{(1)},\ldots,y_{(n)} denote the order statistics of the outcome data y_{1},\ldots,y_{n}, and \mathbf{Z}_{(1)},\ldots,\mathbf{Z}_{(n)} be the corresponding concomitants in the \mathcal{Z} data. Define \bm{m}(y_{(i)})=E(\mathbf{Z}_{(i)}\mid y_{(i)})\in\mathbb{R}^{q} and \bm{\epsilon}_{(i)}=\mathbf{Z}_{(i)}-\bm{m}(y_{(i)}), and let \mathbf{M}_{n} be the n\times q matrix whose ith row is \bm{m}(y_{(i)}) and \mathbf{E}_{n} be the n\times q matrix whose ith row is \bm{\epsilon}_{(i)}. Note that E(\bm{\epsilon}_{(i)})=0 and \operatorname{Cov}\left\{\bm{m}(y_{(i)}),\bm{\epsilon}_{(i)}\right\}=\bm{0} for all i=1,\ldots,n.

Let Z_{(i)j}, m_{j}(y_{(i)}) and \epsilon_{(i)j} denote the jth elements of \mathbf{Z}_{(i)}, \bm{m}(y_{(i)}) and \bm{\epsilon}_{(i)}, respectively, for j=1,\ldots,q. Similar to Proposition 2 in the Supplementary Materials of Tan et al., (2018), we obtain the following properties of \bm{m}(y_{(i)}) and \bm{\epsilon}_{(i)}, whose proof is omitted.

Proposition 5.

Under Condition 4.1, Z_{(i)j}, m_{j}(y_{(i)}) and \epsilon_{(i)j} are sub-Gaussian, with sub-Gaussian norms bounded by \left\lVert\mathbf{Z}_{j}\right\rVert_{\psi_{2}}\leq C\sqrt{(\bm{\Sigma}_{\mathbf{Z}})_{jj}}, \left\lVert\mathbf{Z}_{j}\right\rVert_{\psi_{2}} and 2\left\lVert\mathbf{Z}_{j}\right\rVert_{\psi_{2}}, respectively, for i=1,\ldots,n and j=1,\ldots,q.

Noting n=cHn=cH and using the same decomposition as in Zhu et al., (2006), write

𝚵^\displaystyle\widehat{\mathbf{\Xi}} =1H(c1){(11c)𝐄n𝐄n}+1H(c1){𝐄n(𝐈H𝐐c)𝐄n}\displaystyle=\frac{1}{H(c-1)}\left\{\left(1-\frac{1}{c}\right)\mathbf{E}_{n}^{\top}\mathbf{E}_{n}\right\}+\frac{1}{H(c-1)}\left\{\mathbf{E}_{n}^{\top}\left(\mathbf{I}_{H}\otimes\mathbf{Q}_{c}\right)\mathbf{E}_{n}\right\} (33)
+1H(c1){𝐄n(𝐈H𝐏c)𝐌n+𝐌n(𝐈H𝐏c)𝐄n}+1H(c1)𝐌n(𝐈H𝐏c)𝐌n\displaystyle+\frac{1}{H(c-1)}\left\{\mathbf{E}_{n}^{\top}\left(\mathbf{I}_{H}\otimes\mathbf{P}_{c}\right)\mathbf{M}_{n}+\mathbf{M}_{n}^{\top}\left(\mathbf{I}_{H}\otimes\mathbf{P}_{c}\right)\mathbf{E}_{n}\right\}+\frac{1}{H(c-1)}\mathbf{M}_{n}^{\top}\left(\mathbf{I}_{H}\otimes\mathbf{P}_{c}\right)\mathbf{M}_{n}
:=𝐉1+𝐉2+𝐉3+𝐉4,\displaystyle:=\mathbf{J}_{1}+\mathbf{J}_{2}+\mathbf{J}_{3}+\mathbf{J}_{4},

where \otimes denotes the Kronecker product, 𝐏c=𝐈cc1𝐞c𝐞c\mathbf{P}_{c}=\mathbf{I}_{c}-c^{-1}\mathbf{e}_{c}\mathbf{e}_{c}^{\top}, 𝐐c=c1(𝐈c𝐞c𝐞c)\mathbf{Q}_{c}=c^{-1}(\mathbf{I}_{c}-\mathbf{e}_{c}\mathbf{e}_{c}^{\top}), and 𝐞c\mathbf{e}_{c} is the c×1c\times 1 vector of ones. By the triangle inequality, we obtain

𝚵𝚵^max𝐉1𝚵max+𝐉2max+𝐉3max+𝐉4max.\|\mathbf{\Xi}-\widehat{\mathbf{\Xi}}\|_{\max}\leq\|\mathbf{J}_{1}-{\mathbf{\Xi}}\|_{\max}+\left\lVert\mathbf{J}_{2}\right\rVert_{\max}+\left\lVert\mathbf{J}_{3}\right\rVert_{\max}+\left\lVert\mathbf{J}_{4}\right\rVert_{\max}. (34)

We proceed by bounding each term on the right hand side of (34).

For the first term \|\mathbf{J}_{1}-{\mathbf{\Xi}}\|_{\max}, noting that \mathbf{J}_{1}=n^{-1}\mathbf{E}_{n}^{\top}\mathbf{E}_{n}=n^{-1}\sum_{i=1}^{n}\bm{\epsilon}_{(i)}\bm{\epsilon}_{(i)}^{\top}, we have \text{E}(\mathbf{J}_{1})=\mathbf{\Xi}. On combining this with Proposition 5, it follows that each element of \mathbf{J}_{1}-\mathbf{\Xi} is a centered sub-exponential random variable with sub-exponential norm bounded by CL^{2}, since \left\lVert\mathbf{Z}\right\rVert_{\psi_{2}}\leq L by Condition 4.1. Therefore, with probability at least 1-\exp(-C^{\prime}\log q), we have

𝐉1𝚵maxClogqn.\|\mathbf{J}_{1}-{\mathbf{\Xi}}\|_{\max}\leq C\sqrt{\frac{\log q}{n}}. (35)

For term 𝐉2max\left\lVert\mathbf{J}_{2}\right\rVert_{\max}, we note that 𝐐c\mathbf{Q}_{c} has all diagonal elements zero and off-diagonal elements c1-c^{-1}, so each element of 𝐉2\mathbf{J}_{2} is the sum of nn terms of the form [H(c1)]1ϵijϵij[H(c-1)]^{-1}\epsilon_{ij}\epsilon_{i^{\prime}j^{\prime}} with iii\neq i^{\prime} (note jj and jj^{\prime} can be the same). Next, conditional on the order statistics y(1),,y(n)y_{(1)},\ldots,y_{(n)}, the quantities ϵi\bm{\epsilon}_{i} and ϵi\bm{\epsilon}_{i^{\prime}} are independent (Yang,, 1977). On combining this with Proposition 5, we have that ϵijϵij\epsilon_{ij}\epsilon_{i^{\prime}j^{\prime}} is zero-mean sub-exponential with sub-exponential norm bounded by CL2CL^{2}. Therefore, with probability at least 1exp(Clogq)1-\exp(-C^{\prime}\log q), we have

𝐉2maxClogqn.\|\mathbf{J}_{2}\|_{\max}\leq C\sqrt{\frac{\log q}{n}}. (36)

Next, we consider the term \mathbf{J}_{3}, for which it suffices to consider the term \mathbf{E}_{n}^{\top}(\mathbf{I}_{H}\otimes\mathbf{P}_{c})\mathbf{M}_{n}. Let \widetilde{\mathbf{M}}_{n} denote the n\times q matrix whose ith row is given by \bm{m}(y_{(i)})-\bm{m}(y_{(i+1)}) for 1\leq i\leq n-1, and whose nth row is \bm{m}(y_{(n)}). It is straightforward to see that \widetilde{\mathbf{M}}_{n}=\mathbf{A}_{n}\mathbf{M}_{n} and \mathbf{M}_{n}=\mathbf{A}_{n}^{-1}\widetilde{\mathbf{M}}_{n}, where for any t, we have

\mathbf{A}_{t}=\left(\begin{array}{cccc}1&-1&&\\ &1&\ddots&\\ &&\ddots&-1\\ &&&1\end{array}\right)_{t\times t},\qquad\mathbf{A}_{t}^{-1}=\left(\begin{array}{cccc}1&1&\cdots&1\\ &1&\cdots&1\\ &&\ddots&\vdots\\ &&&1\end{array}\right)_{t\times t}.

Hence, by the property of the Kronecker product, we obtain

\displaystyle\left\lVert\mathbf{E}_{n}^{\top}\left(\mathbf{I}_{H}\otimes\mathbf{P}_{c}\right)\mathbf{M}_{n}\right\rVert_{\max}=\left\lVert\mathbf{E}_{n}^{\top}\left(\mathbf{I}_{H}\otimes\mathbf{P}_{c}\right)\mathbf{A}_{n}^{-1}\widetilde{\mathbf{M}}_{n}\right\rVert_{\max}
=𝐄n{𝐈H(𝐏c𝐀c1)}𝐌~nmax\displaystyle=\left\lVert\mathbf{E}_{n}^{\top}\left\{\mathbf{I}_{H}\otimes\left(\mathbf{P}_{c}\mathbf{A}_{c}^{-1}\right)\right\}\widetilde{\mathbf{M}}_{n}\right\rVert_{\max}
𝐄nmax{𝐈H(𝐏c𝐀c1)}𝐌~n1.\displaystyle\leq\left\lVert\mathbf{E}_{n}\right\rVert_{\max}\left\lVert\left\{\mathbf{I}_{H}\otimes\left(\mathbf{P}_{c}\mathbf{A}_{c}^{-1}\right)\right\}\widetilde{\mathbf{M}}_{n}\right\rVert_{1}.

We have \left\lVert\mathbf{E}_{n}\right\rVert_{\max}=\max_{i,j}|\epsilon_{(i)j}|, where all the \epsilon_{(i)j} are sub-Gaussian random variables by Proposition 5, for i=1,\ldots,n and j=1,\ldots,q. Therefore, with probability at least 1-\exp(-C^{\prime}\log q), we have \left\lVert\mathbf{E}_{n}\right\rVert_{\max}\leq C\sqrt{\log q}. Also note that every column of the matrix \mathbf{I}_{H}\otimes\left(\mathbf{P}_{c}\mathbf{A}_{c}\right) has sum of absolute values at most two, i.e., \left\lVert\mathbf{I}_{H}\otimes\left(\mathbf{P}_{c}\mathbf{A}_{c}\right)\right\rVert_{1}\leq 2. On combining these results, we obtain

{𝐈H(𝐏c𝐀c)}𝐌~n12𝐌~n1i=2n𝒎(y(i))𝒎(y(i1))Cn1/4,\left\lVert\left\{\mathbf{I}_{H}\otimes\left(\mathbf{P}_{c}\mathbf{A}_{c}\right)\right\}\widetilde{\mathbf{M}}_{n}\right\rVert_{1}\leq 2\left\lVert\widetilde{\mathbf{M}}_{n}\right\rVert_{1}\leq\sum_{i=2}^{n}\left\lVert\bm{m}\left(y_{(i)}\right)-\bm{m}\left(y_{(i-1)}\right)\right\rVert_{\infty}\leq Cn^{1/4},

where the last inequality follows from Condition 4.6. Therefore, with probability at least 1-\exp(-C^{\prime}\log q), we obtain

𝐉3maxC(1nlogqn1/4)=Cn1/4logqn.\left\lVert\mathbf{J}_{3}\right\rVert_{\max}\leq C\left(\dfrac{1}{n}\sqrt{\log q}~{}n^{1/4}\right)=\dfrac{C}{n^{1/4}}\sqrt{\frac{\log q}{n}}. (37)
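
The two algebraic facts used in bounding \mathbf{J}_{3}, namely the block identity (\mathbf{I}_{H}\otimes\mathbf{P}_{c})\mathbf{A}_{n}^{-1}=\mathbf{I}_{H}\otimes(\mathbf{P}_{c}\mathbf{A}_{c}^{-1}) and the column-sum bound \lVert\mathbf{P}_{c}\mathbf{A}_{c}\rVert_{1}\leq 2, can be verified numerically. The Python sketch below uses small illustrative values of H and c and is only a sanity check, not part of the proof.

import numpy as np

H, c = 4, 5                                     # toy numbers of slices and slice size
n = H * c

e_c = np.ones((c, 1))
P_c = np.eye(c) - e_c @ e_c.T / c               # within-slice centering matrix
A_n = np.eye(n) - np.diag(np.ones(n - 1), 1)    # A_n: 1 on the diagonal, -1 above it
A_n_inv = np.triu(np.ones((n, n)))              # A_n^{-1}: upper triangular of ones
A_c = np.eye(c) - np.diag(np.ones(c - 1), 1)
A_c_inv = np.triu(np.ones((c, c)))

assert np.allclose(A_n @ A_n_inv, np.eye(n))
# P_c annihilates constant vectors, so the all-ones blocks of A_n^{-1} above the
# diagonal are mapped to zero and the product becomes block diagonal:
assert np.allclose(np.kron(np.eye(H), P_c) @ A_n_inv,
                   np.kron(np.eye(H), P_c @ A_c_inv))
# Operator 1-norm (maximum absolute column sum) of P_c A_c is at most two,
# as used in bounding the J_3 term.
assert np.abs(P_c @ A_c).sum(axis=0).max() <= 2 + 1e-12
print("identities verified")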

Finally, we consider the term 𝐉4\mathbf{J}_{4}. Substituting 𝐌n=𝐀n1𝐌~n\mathbf{M}_{n}=\mathbf{A}_{n}^{-1}\widetilde{\mathbf{M}}_{n} and using properties of Kronecker products, we have

𝐌n(𝐈H𝐏c)𝐌nmax=𝐌~n(𝐈H𝐀c𝐏c𝐀c1)𝐌~nmax.\left\lVert\mathbf{M}_{n}^{\top}\left(\mathbf{I}_{H}\otimes\mathbf{P}_{c}\right)\mathbf{M}_{n}\right\rVert_{\max}=\left\lVert\widetilde{\mathbf{M}}_{n}^{\top}\left(\mathbf{I}_{H}\otimes\mathbf{A}_{c}^{-\top}\mathbf{P}_{c}\mathbf{A}_{c}^{-1}\right)\widetilde{\mathbf{M}}_{n}\right\rVert_{\max}.

Since all the elements of 𝐈H𝐀c𝐏c𝐀c1\mathbf{I}_{H}\otimes\mathbf{A}_{c}^{-\top}\mathbf{P}_{c}\mathbf{A}_{c}^{-1} have absolute value less than (c/4)(c/4), then

𝐉4maxCn𝐌~n12Cn[supΠn(B)i=2nm{y(i)}m{y(i1)}]2Cn,\left\lVert\mathbf{J}_{4}\right\rVert_{\max}\leq\dfrac{C}{n}\left\lVert\widetilde{\mathbf{M}}_{n}\right\rVert_{1}^{2}\leq\dfrac{C}{n}\left[\sup_{\Pi_{n}(B)}\sum_{i=2}^{n}\left\|m\left\{y_{(i)}\right\}-m\left\{y_{(i-1)}\right\}\right\|_{\infty}\right]^{2}\leq\dfrac{C}{\sqrt{n}}, (38)

where the last inequality follows from Condition 4.6.

Substituting (35)-(38) into (34), we obtain

𝚵𝚵^maxClogqn,\left\lVert\mathbf{\Xi}-\widehat{\mathbf{\Xi}}\right\rVert_{\max}\leq C\sqrt{\frac{\log q}{n}}, (39)

with probability at least 1exp(Clogq)1-\exp(-C^{\prime}\log q). Hence, with the same probability we obtain

𝛀^𝛀maxClogqn,\left\lVert\widehat{\bm{\Omega}}-\bm{\Omega}\right\rVert_{\max}\leq C\sqrt{\frac{\log q}{n}},

as required.

S3.2 Proof of Proposition 3

Let 𝜹𝒞(S,3)\bm{\delta}\in\mathcal{C}(S,3). Then we have

𝚺^𝐗^1/2𝜹22=𝜹𝚺^𝐗^𝜹=𝜹(𝚺^𝐗^𝚺𝐗^)𝜹+𝜹𝚺𝐗^𝜹\displaystyle\left\lVert\widehat{\bm{\Sigma}}^{1/2}_{\widehat{\mathbf{X}}}\bm{\delta}\right\rVert_{2}^{2}=\bm{\delta}^{\top}\widehat{\bm{\Sigma}}_{\widehat{\mathbf{X}}}\bm{\delta}=\bm{\delta}^{\top}(\widehat{\bm{\Sigma}}_{\widehat{\mathbf{X}}}-\bm{\Sigma}_{\widehat{\mathbf{X}}})\bm{\delta}+\bm{\delta}^{\top}\bm{\Sigma}_{\widehat{\mathbf{X}}}\bm{\delta} κ1𝜹22𝜹(𝚺^𝐗^𝚺𝐗^)𝜹,\displaystyle\geq\kappa_{1}\left\lVert\bm{\delta}\right\rVert_{2}^{2}-\bm{\delta}^{\top}(\widehat{\bm{\Sigma}}_{\widehat{\mathbf{X}}}-\bm{\Sigma}_{\widehat{\mathbf{X}}})\bm{\delta}, (40)

where the last inequality follows from Condition 4.5 on the restricted eigenvalue of 𝚺𝐗^\bm{\Sigma}_{\widehat{\mathbf{X}}}. Next, we provide an upper bound on the second term. We have

𝜹(𝚺^𝐗^𝚺𝐗^)𝜹𝜹12𝚺^𝐗^𝚺𝐗^max.\bm{\delta}^{\top}(\widehat{\bm{\Sigma}}_{\widehat{\mathbf{X}}}-\bm{\Sigma}_{\widehat{\mathbf{X}}})\bm{\delta}\leq\left\lVert\bm{\delta}\right\rVert_{1}^{2}\left\lVert\widehat{\bm{\Sigma}}_{\widehat{\mathbf{X}}}-\bm{\Sigma}_{\widehat{\mathbf{X}}}\right\rVert_{\max}.

Since 𝜹𝒞(S,3)\bm{\delta}\in\mathcal{C}(S,3), i.e 𝜹Sc13𝜹S1\left\lVert\bm{\delta}_{S^{c}}\right\rVert_{1}\leq 3\left\lVert\bm{\delta}_{S}\right\rVert_{1}, then we obtain 𝜹12=(𝜹S1+𝜹Sc1)216𝜹S1216s𝜹22\left\lVert\bm{\delta}\right\rVert_{1}^{2}=(\left\lVert\bm{\delta}_{S}\right\rVert_{1}+\left\lVert\bm{\delta}_{S^{c}}\right\rVert_{1})^{2}\leq 16\left\lVert\bm{\delta}_{S}\right\rVert_{1}^{2}\leq 16s\left\lVert\bm{\delta}\right\rVert_{2}^{2}. Furthermore, noting that 𝚺𝐗^=𝚪𝚺𝐙𝚪\bm{\Sigma}_{\widehat{\mathbf{X}}}=\bm{\Gamma}^{\top}\bm{\Sigma}_{\mathbf{Z}}\bm{\Gamma} and 𝚺^𝐗^=𝚪^𝚺^𝐙𝚪^\widehat{\bm{\Sigma}}_{\widehat{\mathbf{X}}}=\widehat{\bm{\Gamma}}^{\top}\widehat{\bm{\Sigma}}_{\mathbf{Z}}\widehat{\bm{\Gamma}}, then we have

𝚺^𝐗^𝚺𝐗^max=maxj,k|𝜸j𝚺𝐙𝜸k𝜸^j𝚺^𝐙𝜸^k|.\left\lVert\widehat{\bm{\Sigma}}_{\widehat{\mathbf{X}}}-\bm{\Sigma}_{\widehat{\mathbf{X}}}\right\rVert_{\max}=\max_{j,k}|\bm{\gamma}_{j}^{\top}\bm{\Sigma}_{\mathbf{Z}}\bm{\gamma}_{k}-\hat{\bm{\gamma}}_{j}^{\top}\widehat{\bm{\Sigma}}_{\mathbf{Z}}\hat{\bm{\gamma}}_{k}|.

By the triangle inequality, we obtain

|𝜸j𝚺𝐙𝜸k𝜸^j𝚺^𝐙𝜸^k|\displaystyle|\bm{\gamma}_{j}^{\top}\bm{\Sigma}_{\mathbf{Z}}\bm{\gamma}_{k}-\hat{\bm{\gamma}}_{j}^{\top}\widehat{\bm{\Sigma}}_{\mathbf{Z}}\hat{\bm{\gamma}}_{k}| =|(𝜸j𝚺𝐙𝜸k𝜸^j𝚺𝐙𝜸k)+(𝜸^j𝚺𝐙𝜸k𝜸^j𝚺^𝐙𝜸k)+(𝜸^j𝚺^𝐙𝜸k𝜸^j𝚺^𝐙𝜸^k)|\displaystyle=|(\bm{\gamma}_{j}^{\top}\bm{\Sigma}_{\mathbf{Z}}\bm{\gamma}_{k}-\hat{\bm{\gamma}}_{j}^{\top}\bm{\Sigma}_{\mathbf{Z}}\bm{\gamma}_{k})+(\hat{\bm{\gamma}}_{j}^{\top}\bm{\Sigma}_{\mathbf{Z}}\bm{\gamma}_{k}-\hat{\bm{\gamma}}_{j}^{\top}\widehat{\bm{\Sigma}}_{\mathbf{Z}}\bm{\gamma}_{k})+(\hat{\bm{\gamma}}_{j}^{\top}\widehat{\bm{\Sigma}}_{\mathbf{Z}}\bm{\gamma}_{k}-\hat{\bm{\gamma}}_{j}^{\top}\widehat{\bm{\Sigma}}_{\mathbf{Z}}\hat{\bm{\gamma}}_{k})| (41)
|𝜸j𝚺𝐙𝜸k𝜸^j𝚺𝐙𝜸k|+|𝜸^j𝚺𝐙𝜸k𝜸^j𝚺^𝐙𝜸k|+|𝜸^j𝚺^𝐙𝜸k𝜸^j𝚺^𝐙𝜸^k|\displaystyle\leq|\bm{\gamma}_{j}^{\top}\bm{\Sigma}_{\mathbf{Z}}\bm{\gamma}_{k}-\hat{\bm{\gamma}}_{j}^{\top}\bm{\Sigma}_{\mathbf{Z}}\bm{\gamma}_{k}|+|\hat{\bm{\gamma}}_{j}^{\top}\bm{\Sigma}_{\mathbf{Z}}\bm{\gamma}_{k}-\hat{\bm{\gamma}}_{j}^{\top}\widehat{\bm{\Sigma}}_{\mathbf{Z}}\bm{\gamma}_{k}|+|\hat{\bm{\gamma}}_{j}^{\top}\widehat{\bm{\Sigma}}_{\mathbf{Z}}\bm{\gamma}_{k}-\hat{\bm{\gamma}}_{j}^{\top}\widehat{\bm{\Sigma}}_{\mathbf{Z}}\hat{\bm{\gamma}}_{k}|
𝜸^j𝜸j1𝚺𝐙𝜸k+𝜸^j1𝚺𝐙𝚺^𝐙max𝜸k1+𝚺^𝐙𝜸^j𝜸^k𝜸k1\displaystyle\leq\|\hat{\bm{\gamma}}_{j}-\bm{\gamma}_{j}\|_{1}\|\bm{\Sigma}_{\mathbf{Z}}\bm{\gamma}_{k}\|_{\infty}+\|\hat{\bm{\gamma}}_{j}\|_{1}\|\bm{\Sigma}_{\mathbf{Z}}-\widehat{\bm{\Sigma}}_{\mathbf{Z}}\|_{\max}\|\bm{\gamma}_{k}\|_{1}+\|\widehat{\bm{\Sigma}}_{\mathbf{Z}}\hat{\bm{\gamma}}_{j}\|_{\infty}\|\hat{\bm{\gamma}}_{k}-\bm{\gamma}_{k}\|_{1}
=T~1+T~2+T~3.\displaystyle=\tilde{T}_{1}+\tilde{T}_{2}+\tilde{T}_{3}.

The three terms T~1\tilde{T}_{1}, T~2\tilde{T}_{2} and T~3\tilde{T}_{3} are similar to T1,T2T_{1},T_{2} and T3T_{3} in Section S2.5 with 𝛀\bm{\Omega} and 𝛀^\widehat{\bm{\Omega}} replaced by 𝚺𝐙\bm{\Sigma}_{\mathbf{Z}} and 𝚺^𝐙\widehat{\bm{\Sigma}}_{\mathbf{Z}}, respectively. It follows that by a similar argument, we obtain

\left\lVert\widehat{\bm{\Sigma}}_{\widehat{\mathbf{X}}}-\bm{\Sigma}_{\widehat{\mathbf{X}}}\right\rVert_{\max}=\max_{j,k}|\bm{\gamma}_{j}^{\top}\bm{\Sigma}_{\mathbf{Z}}\bm{\gamma}_{k}-\hat{\bm{\gamma}}_{j}^{\top}\widehat{\bm{\Sigma}}_{\mathbf{Z}}\hat{\bm{\gamma}}_{k}|\leq CL\left\{L\sqrt{\dfrac{\log q}{n}}+\sigma_{\max}r\sqrt{\dfrac{\log pq}{n}}\right\}, (42)

with probability at least 1exp(Clogq)exp(C′′logpq)1-\exp(-C^{\prime}\log q)-\exp(-C^{\prime\prime}\log pq).

Substituting the above into (40) and under the assumption L12σmaxrs(logpq)/n=o(1)L_{1}^{2}\sigma_{\max}rs\sqrt{(\log pq)/{n}}=o(1), then there exists a positive constant 0<κ2<κ10<\kappa_{2}<\kappa_{1} such that

𝚺^𝐗^1/2𝜹22κ2𝜹22,\left\lVert\widehat{\bm{\Sigma}}^{1/2}_{\widehat{\mathbf{X}}}\bm{\delta}\right\rVert_{2}^{2}\geq\kappa_{2}\left\lVert\bm{\delta}\right\rVert_{2}^{2},

with probability at least 1exp(Clogq)exp(C′′logpq)1-\exp(-C^{\prime}\log q)-\exp(-C^{\prime\prime}\log pq), as required.
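
The elementary inequality driving this proof, namely |\bm{\delta}^{\top}(\widehat{\bm{\Sigma}}_{\widehat{\mathbf{X}}}-\bm{\Sigma}_{\widehat{\mathbf{X}}})\bm{\delta}|\leq\lVert\bm{\delta}\rVert_{1}^{2}\lVert\widehat{\bm{\Sigma}}_{\widehat{\mathbf{X}}}-\bm{\Sigma}_{\widehat{\mathbf{X}}}\rVert_{\max}\leq 16s\lVert\bm{\delta}\rVert_{2}^{2}\lVert\widehat{\bm{\Sigma}}_{\widehat{\mathbf{X}}}-\bm{\Sigma}_{\widehat{\mathbf{X}}}\rVert_{\max} for \bm{\delta}\in\mathcal{C}(S,3), is easy to check numerically. In the Python sketch below, a random vector in the cone and a small random symmetric matrix are illustrative stand-ins for \bm{\delta} and \widehat{\bm{\Sigma}}_{\widehat{\mathbf{X}}}-\bm{\Sigma}_{\widehat{\mathbf{X}}}.

import numpy as np

rng = np.random.default_rng(3)
p, s = 40, 4
S = np.arange(s)

# A random vector in the cone C(S, 3): the off-support l1 norm is at most
# three times the on-support l1 norm.
delta = np.zeros(p)
delta[S] = rng.standard_normal(s)
off = rng.standard_normal(p - s)
delta[s:] = off / np.abs(off).sum() * 2.7 * np.abs(delta[S]).sum()

# A small random symmetric matrix standing in for Sigma_hat - Sigma.
E = rng.standard_normal((p, p)) * 1e-3
E = (E + E.T) / 2

quad = abs(delta @ E @ delta)
bound1 = np.abs(delta).sum() ** 2 * np.abs(E).max()
bound2 = 16 * s * (delta @ delta) * np.abs(E).max()
# |delta' E delta| <= ||delta||_1^2 ||E||_max <= 16 s ||delta||_2^2 ||E||_max.
assert quad <= bound1 + 1e-12 and bound1 <= bound2 + 1e-12
print(quad, bound1, bound2)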

S3.3 Proof of the inequalities in (28)

From equation (42) in the proof of Proposition 3, with probability at least 1-\exp(-C\log q)-\exp(-C^{\prime}\log pq) we have

\displaystyle\left\lVert\widehat{\bm{\Sigma}}_{\widehat{\mathbf{X}}}-\bm{\Sigma}_{\widehat{\mathbf{X}}}\right\rVert_{\max}=\max_{j,k}|\bm{\gamma}_{j}^{\top}\bm{\Sigma}_{\mathbf{Z}}\bm{\gamma}_{k}-\hat{\bm{\gamma}}_{j}^{\top}\widehat{\bm{\Sigma}}_{\mathbf{Z}}\hat{\bm{\gamma}}_{k}|\leq CL\left\{L\sqrt{\dfrac{\log q}{n}}+\sigma_{\max}r\sqrt{\dfrac{\log pq}{n}}\right\}.

Conditioning on this event, and combining it with (27), we obtain

\varphi\|\widehat{\mathbf{C}}_{SS}-\mathbf{C}_{SS}\|_{\infty}\leq C\varphi sL\left\{L\sqrt{\dfrac{\log q}{n}}+\sigma_{\max}r\sqrt{\dfrac{\log pq}{n}}\right\}\leq\frac{\alpha}{4-\alpha},

and similarly,

φ𝐂^ScS𝐂ScSα4α.\varphi\|\widehat{\mathbf{C}}_{S^{c}S}-\mathbf{C}_{S^{c}S}\|_{\infty}\leq\frac{\alpha}{4-\alpha}.

Next, by an error bound for matrix inversion (Horn and Johnson,, 2012,  Section 5.8), we obtain

\|(\widehat{\mathbf{C}}_{SS})^{-1}-(\mathbf{C}_{SS})^{-1}\|_{\infty}\leq\frac{\varphi\|\widehat{\mathbf{C}}_{SS}-\mathbf{C}_{SS}\|_{\infty}}{1-\varphi\|\widehat{\mathbf{C}}_{SS}-\mathbf{C}_{SS}\|_{\infty}}\varphi\leq\frac{\alpha\varphi}{2(2-\alpha)}.

Applying the triangle inequality, we have

\|(\widehat{\mathbf{C}}_{SS})^{-1}\|_{\infty}\leq\|(\mathbf{C}_{SS})^{-1}\|_{\infty}+\|(\widehat{\mathbf{C}}_{SS})^{-1}-(\mathbf{C}_{SS})^{-1}\|_{\infty}\leq\varphi+\frac{\alpha\varphi}{2(2-\alpha)}=\frac{(4-\alpha)\varphi}{2(2-\alpha)},

which proves the first inequality in (28).

To show the second inequality, we can write

\widehat{\mathbf{C}}_{S^{c}S}(\widehat{\mathbf{C}}_{SS})^{-1}-\mathbf{C}_{S^{c}S}(\mathbf{C}_{SS})^{-1}=(\widehat{\mathbf{C}}_{S^{c}S}-\mathbf{C}_{S^{c}S})(\widehat{\mathbf{C}}_{SS})^{-1}-\mathbf{C}_{S^{c}S}(\mathbf{C}_{SS})^{-1}(\widehat{\mathbf{C}}_{SS}-\mathbf{C}_{SS})(\widehat{\mathbf{C}}_{SS})^{-1}.

It follows that

𝐂^ScS(𝐂^SS)1𝐂ScS(𝐂SS)1\displaystyle\|\widehat{\mathbf{C}}_{S^{c}S}(\widehat{\mathbf{C}}_{SS})^{-1}-\mathbf{C}_{S^{c}S}(\mathbf{C}_{SS})^{-1}\|_{\infty}
𝐂^ScS𝐂ScS(𝐂^SS)1+𝐂ScS(𝐂SS)1𝐂^SS𝐂SS(𝐂^SS)1\displaystyle\leq\|\widehat{\mathbf{C}}_{S^{c}S}-\mathbf{C}_{S^{c}S}\|_{\infty}\|(\widehat{\mathbf{C}}_{SS})^{-1}\|_{\infty}+\|\mathbf{C}_{S^{c}S}(\mathbf{C}_{SS})^{-1}\|_{\infty}\|\widehat{\mathbf{C}}_{SS}-\mathbf{C}_{SS}\|_{\infty}\|(\widehat{\mathbf{C}}_{SS})^{-1}\|_{\infty}
α(4α)φ×4α2(2α)φ+(1α)×α(4α)φ×4α2(2α)φ=α2,\displaystyle\leq\frac{\alpha}{(4-\alpha)\varphi}\times\dfrac{4-\alpha}{2(2-\alpha)}\varphi+(1-\alpha)\times\frac{\alpha}{(4-\alpha)\varphi}\times\dfrac{4-\alpha}{2(2-\alpha)}\varphi=\frac{\alpha}{2},

as required.
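
The matrix inversion perturbation bound from Horn and Johnson, (2012, Section 5.8) used above, \lVert\widehat{\mathbf{C}}_{SS}^{-1}-\mathbf{C}_{SS}^{-1}\rVert_{\infty}\leq\varphi^{2}\lVert\widehat{\mathbf{C}}_{SS}-\mathbf{C}_{SS}\rVert_{\infty}/(1-\varphi\lVert\widehat{\mathbf{C}}_{SS}-\mathbf{C}_{SS}\rVert_{\infty}), can also be checked numerically. The matrices in the Python sketch below are randomly generated stand-ins for \mathbf{C}_{SS} and \widehat{\mathbf{C}}_{SS}.

import numpy as np

rng = np.random.default_rng(4)
s = 5

# Randomly generated stand-ins for C_SS and its sample version.
C = np.eye(s) + 0.1 * rng.standard_normal((s, s))
C_hat = C + 1e-3 * rng.standard_normal((s, s))

def norm_inf(M):
    # Matrix infinity norm: maximum absolute row sum.
    return np.abs(M).sum(axis=1).max()

phi = norm_inf(np.linalg.inv(C))
delta = norm_inf(C_hat - C)

# Perturbation bound for the inverse (Horn and Johnson, 2012, Section 5.8):
# ||C_hat^{-1} - C^{-1}||_inf <= phi^2 delta / (1 - phi delta), provided phi delta < 1.
lhs = norm_inf(np.linalg.inv(C_hat) - np.linalg.inv(C))
rhs = phi ** 2 * delta / (1 - phi * delta)
assert phi * delta < 1 and lhs <= rhs + 1e-12
print(lhs, rhs)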

S3.4 Proof of Proposition 4

The proof of this proposition generalizes the proof of Lemma 2 above. By the variational definition of eigenvectors, 𝜼^\hat{\bm{\eta}} maximizes trace(v𝚲^v)\text{trace}(\textbf{v}^{\top}\hat{\bm{\Lambda}}\textbf{v}) subject to vv=𝐈d\textbf{v}^{\top}\textbf{v}=\mathbf{I}_{d}, and so

trace(𝜼𝚲^𝜼)trace(𝜼^𝚲^𝜼^),\text{trace}(\bm{\eta}^{\top}\hat{\bm{\Lambda}}\bm{\eta})\leq\text{trace}(\hat{\bm{\eta}}^{\top}\hat{\bm{\Lambda}}\hat{\bm{\eta}}),

or equivalently 𝚲^,𝜼𝜼𝚲^,𝜼^𝜼^\langle\hat{\bm{\Lambda}},\bm{\eta}\bm{\eta}^{\top}\rangle\leq\langle\hat{\bm{\Lambda}},\hat{\bm{\eta}}\hat{\bm{\eta}}^{\top}\rangle, where for two matrices 𝐀1\mathbf{A}_{1} and 𝐀2\mathbf{A}_{2} we let 𝐀1,𝐀2=trace(𝐀1𝐀2)\langle\mathbf{A}_{1},\mathbf{A}_{2}\rangle=\text{trace}(\mathbf{A}_{1}^{\top}\mathbf{A}_{2}) denote their dot product. Writing 𝚲^=𝚲+(𝚲^𝚲)\hat{\bm{\Lambda}}=\bm{\Lambda}+(\hat{\bm{\Lambda}}-\bm{\Lambda}), it follows that

𝚲,𝜼𝜼𝜼^𝜼^𝚲^𝚲,𝜼𝜼𝜼^𝜼^.\langle\bm{\Lambda},\bm{\eta}\bm{\eta}^{\top}-\hat{\bm{\eta}}\hat{\bm{\eta}}^{\top}\rangle\leq\langle\hat{\bm{\Lambda}}-\bm{\Lambda},\bm{\eta}\bm{\eta}^{\top}-\hat{\bm{\eta}}\hat{\bm{\eta}}^{\top}\rangle. (43)

For the left-hand side of (43), by an eigendecomposition we have 𝚲=𝜼diag(λ1,,λd)𝜼\bm{\Lambda}=\bm{\eta}\text{diag}(\lambda_{1},\ldots,\lambda_{d})\bm{\eta}^{\top} and 𝜼𝜼=𝐈d\bm{\eta}^{\top}\bm{\eta}=\mathbf{I}_{d}. It follows that

𝚲,𝜼𝜼𝜼^𝜼^\displaystyle\langle\bm{\Lambda},\bm{\eta}\bm{\eta}^{\top}-\hat{\bm{\eta}}\hat{\bm{\eta}}^{\top}\rangle =trace(𝚲)trace(𝜼diag(λ1,,λd)𝜼𝜼^𝜼^)\displaystyle=\text{trace}(\bm{\Lambda})-\text{trace}(\bm{\eta}\text{diag}(\lambda_{1},\ldots,\lambda_{d})\bm{\eta}^{\top}\widehat{\bm{\eta}}\widehat{\bm{\eta}}^{\top})
=trace(𝚲)trace(diag(λ1,,λd)𝐇2)\displaystyle=\text{trace}(\bm{\Lambda})-\text{trace}(\text{diag}(\lambda_{1},\ldots,\lambda_{d})\mathbf{H}^{2})
=k=1dλk(1𝐇kk2)λd(dtrace(𝐇2)).\displaystyle=\sum_{k=1}^{d}\lambda_{k}(1-\mathbf{H}^{2}_{kk})\geq\lambda_{d}(d-\text{trace}(\mathbf{H}^{2})).

For the right-hand side of (43), applying Hölder’s inequality we have

𝚲^𝚲,𝜼𝜼𝜼^𝜼^\displaystyle\langle\hat{\bm{\Lambda}}-\bm{\Lambda},\bm{\eta}\bm{\eta}^{\top}-\hat{\bm{\eta}}\hat{\bm{\eta}}^{\top}\rangle 𝚲^𝚲max𝜼𝜼𝜼^𝜼^1\displaystyle\leq\left\lVert\hat{\bm{\Lambda}}-\bm{\Lambda}\right\rVert_{\max}\left\lVert\bm{\eta}\bm{\eta}^{\top}-\hat{\bm{\eta}}\hat{\bm{\eta}}^{\top}\right\rVert_{1}
d𝚲^𝚲max\displaystyle\leq\sqrt{d}\left\lVert\hat{\bm{\Lambda}}-\bm{\Lambda}\right\rVert_{\max}
CLd{Llogqn+σmaxrlogpqn},\displaystyle\leq CL\sqrt{d}\left\{L\sqrt{\dfrac{\log q}{n}}+\sigma_{\max}r\sqrt{\dfrac{\log pq}{n}}\right\},

where the last inequality follows from Lemma 4 above. Hence, we have

λd{dtrace(𝐇2)}CLd{Llogqn+σmaxrlogpqn},\lambda_{d}\left\{d-\text{trace}(\mathbf{H}^{2})\right\}\leq CL\sqrt{d}\left\{L\sqrt{\dfrac{\log q}{n}}+\sigma_{\max}r\sqrt{\dfrac{\log pq}{n}}\right\},

or equivalently

trace(𝐇2)dCLdλd{Llogqn+σmaxrlogpqn},\text{trace}(\mathbf{H}^{2})\geq d-\dfrac{CL\sqrt{d}}{\lambda_{d}}\left\{L\sqrt{\dfrac{\log q}{n}}+\sigma_{\max}r\sqrt{\dfrac{\log pq}{n}}\right\},

as required.
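
As a numerical aside, the curvature inequality \langle\bm{\Lambda},\bm{\eta}\bm{\eta}^{\top}-\widehat{\bm{\eta}}\widehat{\bm{\eta}}^{\top}\rangle\geq\lambda_{d}\{d-\text{trace}(\mathbf{H}\mathbf{H}^{\top})\} can be verified on randomly generated inputs. In the Python sketch below, \bm{\eta}, \widehat{\bm{\eta}} and the eigenvalues are illustrative stand-ins, and \mathbf{H}\mathbf{H}^{\top} is written for the quantity denoted \mathbf{H}^{2} in the proof.

import numpy as np

rng = np.random.default_rng(5)
p, d = 30, 3

eta, _ = np.linalg.qr(rng.standard_normal((p, d)))       # stand-in for eta
eta_hat, _ = np.linalg.qr(rng.standard_normal((p, d)))   # stand-in for eta_hat
lam = np.array([3.0, 2.0, 1.0])                          # lambda_1 >= ... >= lambda_d > 0
Lam = eta @ np.diag(lam) @ eta.T

H = eta.T @ eta_hat
lhs = np.trace(Lam @ (eta @ eta.T - eta_hat @ eta_hat.T))
# <Lambda, eta eta' - eta_hat eta_hat'> = sum_k lambda_k {1 - (H H')_kk}
#   >= lambda_d {d - trace(H H')}, since every diagonal entry of H H' is at most one.
rhs = lam[-1] * (d - np.trace(H @ H.T))
assert lhs >= rhs - 1e-10
print(lhs, rhs)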

S4 Additional simulation results

Table S4: Simulation results for the Lasso SIR (LSIR) estimator, the two-stage Lasso (2SLasso) estimator, and the proposed two-stage Lasso SIR (2SLSIR) estimator, under the single index models (i) and (ii), with the instrumental variables \mathbf{Z} independently generated from the signed Bernoulli distribution. Standard errors are included in parentheses. The lowest estimation error and the highest F1 score are highlighted in each row.
n Model 𝚪 Error (LSIR / 2SLasso / 2SLSIR) F1 (LSIR / 2SLasso / 2SLSIR)
500 (i) Strong 0.28 (0.07) 1.05 (0.22) 0.13 (0.04) 0.99 (0.03) 0.52 (0.18) 1.00 (0.00)
Weak 0.37 (0.08) 1.01 (0.24) 0.18 (0.06) 0.96 (0.07) 0.54 (0.18) 0.99 (0.03)
Mixed 0.29 (0.08) 1.04 (0.20) 0.11 (0.03) 0.98 (0.05) 0.57 (0.18) 1.00 (0.00)
(ii) Strong 0.26 (0.08) 0.24 (0.07) 0.13 (0.05) 1.00 (0.02) 0.98 (0.04) 1.00 (0.00)
Weak 0.35 (0.07) 0.31 (0.08) 0.18 (0.07) 0.97 (0.05) 0.94 (0.08) 1.00 (0.01)
Mixed 0.30 (0.06) 0.24 (0.08) 0.11 (0.04) 0.99 (0.03) 0.98 (0.05) 1.00 (0.00)
1000 (i) Strong 0.30 (0.07) 0.98 (0.22) 0.09 (0.03) 0.99 (0.03) 0.57 (0.18) 1.00 (0.00)
Weak 0.38 (0.07) 0.90 (0.23) 0.13 (0.04) 0.96 (0.06) 0.63 (0.16) 1.00 (0.01)
Mixed 0.35 (0.06) 0.89 (0.24) 0.06 (0.02) 0.97 (0.07) 0.63 (0.17) 1.00 (0.00)
(ii) Strong 0.31 (0.07) 0.18 (0.05) 0.09 (0.03) 0.99 (0.04) 1.00 (0.02) 1.00 (0.00)
Weak 0.39 (0.07) 0.22 (0.07) 0.12 (0.04) 0.94 (0.08) 0.98 (0.05) 1.00 (0.01)
Mixed 0.34 (0.06) 0.16 (0.05) 0.07 (0.02) 0.97 (0.07) 1.00 (0.01) 1.00 (0.00)
Table S5: Simulation results for the Lasso SIR (LSIR) estimator, the two-stage Lasso (2SLasso) estimator, and the proposed two-stage Lasso SIR (2SLSIR) estimator, under the double index models (iii) and (iv), with the instrumental variables \mathbf{Z} independently generated from the signed Bernoulli distribution. Standard errors are included in parentheses. The lowest estimation error and the highest F1 score are highlighted in each row.
n Model 𝚪 Error (LSIR / 2SLSIR) F1 (LSIR / 2SLSIR)
500 (iii) Strong 0.88 (0.23) 0.67 (0.26) 0.61 (0.16) 0.73 (0.16)
Weak 1.02 (0.21) 0.92 (0.29) 0.56 (0.12) 0.62 (0.17)
Mixed 0.98 (0.19) 0.70 (0.26) 0.49 (0.11) 0.70 (0.16)
(iv) Strong 0.40 (0.21) 0.40 (0.24) 0.93 (0.11) 0.93 (0.13)
Weak 0.50 (0.20) 0.49 (0.25) 0.88 (0.13) 0.85 (0.15)
Mixed 0.43 (0.21) 0.39 (0.23) 0.88 (0.14) 0.91 (0.14)
1000 (iii) Strong 0.97 (0.21) 0.49 (0.29) 0.55 (0.13) 0.86 (0.15)
Weak 1.08 (0.20) 0.66 (0.29) 0.53 (0.10) 0.73 (0.18)
Mixed 1.05 (0.16) 0.50 (0.28) 0.45 (0.07) 0.84 (0.14)
(iv) Strong 0.38 (0.21) 0.31 (0.24) 0.95 (0.13) 0.97 (0.11)
Weak 0.49 (0.22) 0.35 (0.24) 0.88 (0.15) 0.93 (0.13)
Mixed 0.43 (0.22) 0.29 (0.22) 0.88 (0.16) 0.96 (0.12)

S5 Additional results for the application

Figure S4: Heatmap of the absolute values of the elements of \widehat{\bm{\Gamma}} from the first stage of the 2SLSIR estimator applied to the NHANES data. Each column represents the consumption of one dietary component in the first interview, which is treated as a covariate, while each row represents the consumption of one dietary component in the second interview, which is treated as an instrument.