
Cross-Validated Loss-Based Covariance Matrix Estimator Selection in High Dimensions

Philippe Boileau
Graduate Group in Biostatistics and Center for Computational Biology, UC Berkeley
Nima S. Hejazi
Division of Biostatistics, Department of Population Health Sciences, Weill Cornell Medicine
Mark J. van der Laan
Division of Biostatistics, Department of Statistics,
and Center for Computational Biology, UC Berkeley
Sandrine Dudoit  
Department of Statistics, Division of Biostatistics,
and Center for Computational Biology, UC Berkeley
To whom correspondence should be addressed: [email protected]
Abstract

The covariance matrix plays a fundamental role in many modern exploratory and inferential statistical procedures, including dimensionality reduction, hypothesis testing, and regression. In low-dimensional regimes, where the number of observations far exceeds the number of variables, the optimality of the sample covariance matrix as an estimator of this parameter is well-established. High-dimensional regimes do not admit such a convenience. Thus, a variety of estimators have been derived to overcome the shortcomings of the canonical estimator in such settings. Yet, selecting an optimal estimator from among the plethora available remains an open challenge. Using the framework of cross-validated loss-based estimation, we develop the theoretical underpinnings of just such an estimator selection procedure. We propose a general class of loss functions for covariance matrix estimation and establish accompanying finite-sample risk bounds and conditions for the asymptotic optimality of the cross-validation selector. In numerical experiments, we demonstrate the optimality of our proposed selector in moderate sample sizes and across diverse data-generating processes. The practical benefits of our procedure are highlighted in a dimension reduction application to single-cell transcriptome sequencing data.

Keywords: covariance matrix estimation; cross-validation; dimension reduction; high-dimensional statistics; loss-based estimation

1 Introduction

The covariance matrix underlies numerous exploratory and inferential statistical procedures central to analyses regularly performed in diverse fields. For instance, in computational biology, this statistical parameter serves as a key ingredient in many popular dimensionality reduction, clustering, and classification methods which are regularly relied upon in quality control assessments, exploratory data analysis, and, recently, the discovery and characterization of novel types of cells. Other important areas in which the covariance matrix is critical include financial economics, climate modeling and weather forecasting, imaging data processing and analysis, probabilistic graphical modeling, and text corpora compression and retrieval. Even more fundamentally, the covariance matrix plays a key role in assessing the strengths of linear relationships within multivariate data, in generating simultaneous confidence bands and regions, and in the construction and evaluation of hypothesis tests. Accurate estimation of this parameter is therefore essential.

When the number of observations in a data set far exceeds the number of features, the estimator of choice for the covariance matrix is the sample covariance matrix: it is an efficient estimator under minimal regularity assumptions on the data-generating distribution (Anderson, 2003; Smith, 2005). In high-dimensional regimes, however, this simple estimator has undesirable properties. When the number of features outstrips the number of observations, the sample covariance matrix is singular. Even when the number of observations slightly exceeds the number of features, the sample covariance matrix is numerically unstable on account of an overly large condition number (Golub and Van Loan, 1996). Its eigenvalues are also generally over-dispersed when compared to those of the population covariance matrix (Johnstone, 2001; Ledoit and Wolf, 2004): the leading eigenvalues are positively biased, while the trailing eigenvalues are negatively biased (Marčenko and Pastur, 1967).

High-dimensional data have become increasingly widespread in myriad scientific domains, with many examples arising from challenges posed by cutting-edge biological sequencing technologies. Accordingly, researchers have turned to developing novel covariance matrix estimators to remediate the deficiencies of the sample covariance matrix. Under certain sparsity assumptions, Bickel and Levina (2008b, a); Rothman et al. (2009); Lam and Fan (2009); Cai et al. (2010), and Cai and Liu (2011), among others, demonstrated that the true covariance matrix can be estimated consistently under specific losses by applying element-wise thresholding or tapering functions to the sample covariance matrix. Another thread of the literature, which includes notable contributions by Stock and Watson (2002); Bai (2003); Fan et al. (2008); Fan et al. (2013, 2016, 2019), and Onatski (2012), has championed methods employing factor models in covariance matrix estimation. Other popular proposals include the families of estimators inspired by the empirical Bayes framework (Robbins, 1964; Efron, 2012), formulated by Schäfer and Strimmer (2005) and Ledoit and Wolf (2004, 2012, 2015, 2018). We briefly review several of these estimator families in Section LABEL:supp:lit of the Online Supplement.

Despite the flexibility afforded by the apparent wealth of candidate estimators, this variety poses many practical issues. Namely, identifying the most appropriate estimator from among a collection of candidates is itself a significant challenge. A partial answer to this problem has come in the form of data-adaptive approaches designed to select the optimal estimator within a particular class (for example, Bickel and Levina, 2008b, a; Cai and Liu, 2011; Fan et al., 2013; Fang et al., 2016; Bartz, 2016). Such approaches, however, are inherently limited by their focus on relatively narrow families of covariance matrix estimators. The successful application of such estimator selection frameworks requires, as a preliminary step, that the practitioner make a successful choice among estimator families, injecting a degree of subjectivity in their deployment. The broader question of selecting an optimal estimator from among a diverse library of candidates has remained unaddressed. We offer a general loss-based framework building upon the seminal work of van der Laan and Dudoit (2003); Dudoit and van der Laan (2005); van der Vaart et al. (2006) for asymptotically optimal covariance matrix estimator selection based upon cross-validation.

In the cross-validated loss-based estimation framework, the parameter of interest is defined as the risk minimizer, with respect to the data-generating distribution, based on a loss function chosen to reflect the problem at hand. Candidate estimators may be generated in a variety of manners, including as empirical risk minimizers with respect to an empirical distribution over parameter subspaces corresponding to models for the data-generating distribution. One would ideally select as optimal estimator that which minimizes the “true” risk with respect to the data-generating distribution. However, as this distribution is typically unknown, one turns to cross-validation for risk estimation. van der Laan and Dudoit (2003); Dudoit and van der Laan (2005); van der Vaart et al. (2006) have shown that, under general conditions on the data-generating distribution and loss function, the cross-validated estimator selector is asymptotically optimal in the sense that it performs asymptotically as well in terms of risk as an optimal oracle selector based on the true, unknown data-generating distribution. These results extend prior work summarized by Györfi et al. (2002, Ch. 7–8).

Focusing specifically on the covariance matrix as the parameter of interest, we address the choice of loss function and candidate estimators, and derive new, high-dimensional asymptotic optimality results for multivariate cross-validated estimator selection procedures. Requiring generally nonrestrictive assumptions about the structure of the true covariance matrix, the proposed framework accommodates arbitrary families of covariance matrix estimators. The method therefore enables the objective selection of an optimal estimator while fully taking advantage of the plethora of available estimators. As such, it generalizes existing, but more limited, data-adaptive estimator selection frameworks where the library of candidate estimators is narrowed based on available subject matter knowledge, or, as is more commonly the case, for convenience’s sake.

2 Problem Formulation and Background

Consider a data set 𝐗n×J={X1,,Xn:XiJ}\mathbf{X}_{n\times J}=\{X_{1},\ldots,X_{n}:X_{i}\in\mathbb{R}^{J}\}, comprising nn independent and identically distributed (i.i.d.) random vectors, where nJn\approx J or n<Jn<J. Let XiP0X_{i}\sim P_{0}\in\mathcal{M}, where P0P_{0} denotes the true data-generating distribution and \mathcal{M} the statistical model, that is, a collection of possible data-generating distributions PP for XiX_{i}. We assume a nonparametric statistical model \mathcal{M} for P0P_{0}, minimizing assumptions on the form of P0P_{0}. We denote by PnP_{n} the empirical distribution of the nn random vectors forming 𝐗n×J\mathbf{X}_{n\times J}. Letting 𝔼[Xi]=0\mathbb{E}[X_{i}]=0 without loss of generality and defining 𝝍0Var[Xi]\bm{\psi}_{0}\coloneqq\text{Var}[X_{i}], we take as our goal the estimation of the covariance matrix 𝝍0\bm{\psi}_{0}. This is accomplished by identifying the “optimal” estimator of 𝝍0\bm{\psi}_{0} from among a collection of candidates, where, as detailed below, optimality is defined in terms of risk.

For any distribution PP\in\mathcal{M}, define its covariance matrix as 𝝍=Ψ(P)\bm{\psi}=\Psi(P), where Ψ\Psi is a mapping from the model \mathcal{M} to the set of J×JJ\times J symmetric, positive semi-definite matrices. Furthermore, candidate estimators of the covariance matrix are defined as 𝝍^kΨ^k(Pn)\hat{\bm{\psi}}_{k}\coloneqq\hat{\Psi}_{k}(P_{n}) for k=1,,Kk=1,\ldots,K in terms of mappings Ψ^k\hat{\Psi}_{k} from the empirical distribution PnP_{n} to 𝚿{𝝍J×J|𝝍=𝝍}\bm{\Psi}\coloneqq\{\bm{\psi}\in\mathbb{R}^{J\times J}|\bm{\psi}=\bm{\psi}^{\top}\}. While this notation suggests that the number of candidate estimators KK is fixed, and we treat it as such throughout, this framework may be extended such that KK grows as a function of nn and JJ. It also follows that {𝝍=Ψ(P):P}𝚿\{\bm{\psi}=\Psi(P):P\in\mathcal{M}\}\subset\bm{\Psi}; that is, the set of all covariance matrices corresponding to the data-generating distributions PP belonging to the model \mathcal{M} is a subset of 𝚿\bm{\Psi}.

In order to assess the optimality of estimators in the set 𝚿\bm{\Psi}, we introduce a generic loss function L(X;𝝍,η)L(X;\bm{\psi},\eta) characterizing a cost applicable to any 𝝍𝚿\bm{\psi}\in\bm{\Psi} and XPX\sim P\in\mathcal{M}, and where η\eta is a (possibly empty) nuisance parameter. Specific examples of loss functions for the covariance estimation setting are proposed in Section 3.1. Define HH as the mapping from the model \mathcal{M} to the nuisance parameter space 𝐇{η=H(P):P}\mathbf{H}\coloneqq\{\eta=H(P):P\in\mathcal{M}\} and let η^H^(Pn)\hat{\eta}\coloneqq\hat{H}(P_{n}) denote a generic nuisance parameter estimator, where H^\hat{H} is a mapping from PnP_{n} to 𝐇\mathbf{H}. Given any η𝐇\eta\in\mathbf{H}, the risk under PP\in\mathcal{M} for any 𝝍𝚿\bm{\psi}\in\bm{\Psi} is defined as the expected value of L(X;𝝍,η)L(X;\bm{\psi},\eta) with respect to PP:

Θ(𝝍,η,P)L(x;𝝍,η)𝑑P(x)=𝔼P[L(X;𝝍,η)].\begin{split}\Theta(\bm{\psi},\eta,P)&\coloneqq\int L(x;\bm{\psi},\eta)dP(x)\\ &=\mathbb{E}_{P}\left[L(X;\bm{\psi},\eta)\right].\end{split}

Under the additional constraint on the loss function that a risk minimizer exists under the true data-generating distribution P0P_{0}, the minimizer is given by the parameter of interest

𝝍0argmin𝝍𝚿Θ(𝝍,η0,P0),\bm{\psi}_{0}\coloneqq\operatorname*{\arg\!\min}_{\bm{\psi}\in\mathbf{\Psi}}\Theta(\bm{\psi},\eta_{0},P_{0}), (1)

where η0H(P0)\eta_{0}\coloneqq H(P_{0}). The risk minimizer need not be unique. The optimal risk under P0P_{0} is

θ0min𝝍𝚿Θ(𝝍,η0,P0),\theta_{0}\coloneqq\min_{\bm{\psi}\in\mathbf{\Psi}}\;\Theta(\bm{\psi},\eta_{0},P_{0}),

which is to say that a risk minimizer 𝝍0\bm{\psi}_{0} attains risk θ0\theta_{0}.

For any given estimator 𝝍^k\hat{\bm{\psi}}_{k} of 𝝍0\bm{\psi}_{0}, its conditional risk given PnP_{n} with respect to the true data-generating distribution P0P_{0} is

θ~n(k,η0)𝔼P0[L(X;Ψ^k(Pn),η0)Pn]=Θ(𝝍^k,η0,P0).\begin{split}\tilde{\theta}_{n}(k,\eta_{0})&\coloneqq\mathbb{E}_{P_{0}}[L(X;\hat{\Psi}_{k}(P_{n}),\eta_{0})\mid P_{n}]\\ &=\Theta(\hat{\bm{\psi}}_{k},\eta_{0},P_{0}).\end{split}

Defining the risk difference of the kthk^{\text{th}} estimator as θ~n(k,η0)θ0\tilde{\theta}_{n}(k,\eta_{0})-\theta_{0}, the index of the estimator that achieves the minimal risk difference is

k~nargmink{1,,K}θ~n(k,η0)θ0.\tilde{k}_{n}\coloneqq\operatorname*{\arg\!\min}_{k\in\{1,\ldots,K\}}\;\tilde{\theta}_{n}(k,\eta_{0})-\theta_{0}.

The subscript nn emphasizes that the risk and optimal estimator index are conditional on the empirical distribution PnP_{n}. They are therefore random variables.

Given the high-dimensional nature of the data, it is generally most convenient to study the performance of estimators of 𝝍0\bm{\psi}_{0} using Kolmogorov asymptotics, that is, in the setting in which both nn\rightarrow\infty and JJ\rightarrow\infty such that J/nm<J/n\rightarrow m<\infty. Historically, estimators have been derived within this high-dimensional asymptotic regime to improve upon the finite-sample performance of estimators motivated by traditional asymptotic arguments. After all, the sample covariance matrix retains its asymptotic optimality properties when JJ is fixed, even though it is known to perform poorly in high-dimensional settings.

Naturally, it would be desirable for an estimator selection procedure to select the estimator indexed by k~n\tilde{k}_{n}; however, this quantity depends on the true, unknown data-generating distribution P0P_{0}. As a substitute for the candidates’ true conditional risks, we employ instead the cross-validated estimators of these same conditional risks.

Cross-validation (CV) consists of randomly, and possibly repeatedly, partitioning a data set into a training set and a validation set. The observations in the training set are fed to the candidate estimators and the observations in the validation set are used to evaluate the performance of these estimators (Breiman and Spector, 1992; Friedman et al., 2001). A range of CV schemes have been proposed and investigated, both theoretically and computationally; Dudoit and van der Laan (2005) provide a thorough review of popular CV schemes and their properties. Among the variety, VV-fold stands out as an approach that has gained traction on account of its relative computational feasibility and good performance. Any CV scheme can be expressed in terms of a binary random vector BnB_{n}, which assigns observations into either the training or validation set. Observation ii is said to lie in the training set when Bn(i)=0B_{n}(i)=0 and in the validation set otherwise. The training set therefore contains i(1Bn(i))=n(1pn)\sum_{i}(1-B_{n}(i))=n(1-p_{n}) observations and the validation set iBn(i)=npn\sum_{i}B_{n}(i)=np_{n} observations, where pnp_{n} is the fixed validation set proportion corresponding to the chosen CV procedure. The empirical distributions of the training and validation sets are denoted by Pn,Bn0P_{n,B_{n}}^{0} and Pn,Bn1P_{n,B_{n}}^{1}, respectively, for any given realization of BnB_{n}. BnB_{n}, we emphasize, is independent of PnP_{n}.
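To make the split-vector notation concrete, the following minimal R sketch constructs the realizations of BnB_n corresponding to a V-fold scheme; the sample size, fold count, and variable names are illustrative choices of ours, not part of any package.

```r
# A minimal sketch of the split-vector representation of V-fold CV:
# for fold v, B_n(i) = 1 places observation i in the validation set.
set.seed(1)
n <- 20
V <- 5
fold_id <- sample(rep(seq_len(V), length.out = n))              # random fold assignment
B_n <- lapply(seq_len(V), function(v) as.integer(fold_id == v)) # one realization of B_n per fold
sapply(B_n, mean)                                               # validation proportions p_n, each roughly 1 / V
```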

Using this general definition, the cross-validated estimator of a candidate Ψ^k\hat{\Psi}_{k}’s risk is

θ^pn,n(k,H^(Pn,Bn0))𝔼Bn[Θ(Ψ^k(Pn,Bn0),H^(Pn,Bn0),Pn,Bn1)]=𝔼Bn[1npni=1n𝕀(Bn(i)=1)L(Xi;Ψ^k(Pn,Bn0),H^(Pn,Bn0))],\begin{split}\hat{\theta}_{p_{n},n}(k,\hat{H}(P_{n,B_{n}}^{0}))&\coloneqq\mathbb{E}_{B_{n}}\left[\Theta(\hat{\Psi}_{k}(P_{n,B_{n}}^{0}),\hat{H}(P_{n,B_{n}}^{0}),P_{n,B_{n}}^{1})\right]\\ &=\mathbb{E}_{B_{n}}\left[\frac{1}{np_{n}}\sum_{i=1}^{n}\mathbb{I}(B_{n}(i)=1)L(X_{i};\hat{\Psi}_{k}(P^{0}_{n,B_{n}}),\hat{H}(P_{n,B_{n}}^{0}))\right],\end{split}

for a nuisance parameter estimator mapping H^\hat{H}. Here, 𝔼Bn[]\mathbb{E}_{B_{n}}[\cdot] denotes the expectation with respect to BnB_{n}. The corresponding cross-validated selector is

k^pn,nargmink{1,,K}θ^pn,n(k,H^(Pn,Bn0)).\hat{k}_{p_{n},n}\coloneqq\operatorname*{\arg\!\min}_{k\in\{1,\ldots,K\}}\hat{\theta}_{p_{n},n}(k,\hat{H}(P_{n,B_{n}}^{0})).

As a benchmark, the unknown cross-validated conditional risk of the kkth estimator is

θ~pn,n(k,η0)𝔼Bn[Θ(Ψ^k(Pn,Bn0),η0,P0)].\tilde{\theta}_{p_{n},n}(k,\eta_{0})\coloneqq\mathbb{E}_{B_{n}}\left[\Theta(\hat{\Psi}_{k}(P_{n,B_{n}}^{0}),\eta_{0},P_{0})\right].

The cross-validated oracle selector is then

k~pn,nargmink{1,,K}θ~pn,n(k,η0).\tilde{k}_{p_{n},n}\coloneqq\operatorname*{\arg\!\min}_{k\in\{1,\ldots,K\}}\tilde{\theta}_{p_{n},n}(k,\eta_{0}).

As before, the pnp_{n} and nn subscripts highlight the dependence of these objects on the CV procedure and the empirical distribution PnP_{n}, respectively, thus making them random variables.

Ideally, the cross-validated estimator selection procedure should identify a k^pn,n\hat{k}_{p_{n},n} that is asymptotically (in nn, JJ, and possibly KK) equivalent in terms of risk to the oracle k~pn,n\tilde{k}_{p_{n},n}, under a set of nonrestrictive assumptions based on the choice of loss function, target parameter space, estimator ranges, and, if applicable, nuisance parameter space, in the sense that

θ~pn,n(k^pn,n,η0)θ0θ~pn,n(k~pn,n,η0)θ0𝑃1 as n,J.\frac{\tilde{\theta}_{p_{n},n}(\hat{k}_{p_{n},n},\eta_{0})-\theta_{0}}{\tilde{\theta}_{p_{n},n}(\tilde{k}_{p_{n},n},\eta_{0})-\theta_{0}}\overset{P}{\longrightarrow}1\text{ as }n,J\rightarrow\infty. (2)

That is, the estimator selected via CV is equivalent in terms of risk to the CV scheme’s oracle estimator chosen from among all candidates.

When Equation (2) holds, a further step may be taken by relating the performance of the cross-validated selector to that of the full-dataset oracle selector, k~n\tilde{k}_{n}:

θ~n(k^pn,n,η0)θ0θ~n(k~n,η0)θ0𝑃1 as n,J.\frac{\tilde{\theta}_{n}(\hat{k}_{p_{n},n},\eta_{0})-\theta_{0}}{\tilde{\theta}_{n}(\tilde{k}_{n},\eta_{0})-\theta_{0}}\overset{P}{\longrightarrow}1\text{ as }n,J\rightarrow\infty. (3)

When the cross-validated selection procedure’s full-dataset conditional risk difference converges in probability to that of the full-dataset oracle’s, the chosen estimator is asymptotically optimal. In other words, the data-adaptively selected estimator performs asymptotically as well, with respect to the chosen loss, as the candidate that would be picked from the collection of estimators if the true data-generating distribution were known.

3 Loss Functions and Estimator Selection

3.1 Proposed Loss Function

The choice of loss function should reflect the goals of the estimation task. While loss functions based on the sample covariance matrix and either the Frobenius or the spectral norms are often employed in the covariance matrix estimation literature, Dudoit and van der Laan (2005)’s estimator selection framework is more amenable to loss functions that operate over random vectors. Accordingly, we propose the observation-level Frobenius loss:

L(X;𝝍,η0)XX𝝍F,η02=j=1Jl=1Jη0(jl)(X(j)X(l)ψ(jl))2,\begin{split}L(X;\bm{\psi},\eta_{0})&\coloneqq\lVert XX^{\top}-\bm{\psi}\rVert^{2}_{F,\eta_{0}}\\ &=\sum_{j=1}^{J}\sum_{l=1}^{J}\eta_{0}^{(jl)}(X^{(j)}X^{(l)}-\psi^{(jl)})^{2},\end{split} (4)

where X(j)X^{(j)} is the jjth element of a random vector XPX\sim P\in\mathcal{M}, ψ(jl)\psi^{(jl)} is the entry in the jjth row and llth column of an arbitrary covariance matrix 𝝍𝚿\bm{\psi}\in\mathbf{\Psi}, and η0\eta_{0} is a J×JJ\times J matrix acting as a scaling factor, that is, a potential nuisance parameter. For an estimator η^\hat{\eta} of η0\eta_{0}, the cross-validated risk estimator of the kkth candidate estimator Ψ^k\hat{\Psi}_{k} under the observation-level Frobenius loss is

θ^pn,n(k,H^(Pn,Bn0))=𝔼Bn[1npni=1n𝕀(Bn(i)=1)XiXiΨ^k(Pn,Bn0)F,H^(Pn,Bn0)2].\begin{split}\hat{\theta}_{p_{n},n}(k,\hat{H}(P_{n,B_{n}}^{0}))&=\mathbb{E}_{B_{n}}\left[\frac{1}{np_{n}}\sum_{i=1}^{n}\mathbb{I}(B_{n}(i)=1)\lVert X_{i}X^{\top}_{i}-\hat{\Psi}_{k}(P_{n,B_{n}}^{0})\rVert^{2}_{F,\hat{H}(P_{n,B_{n}}^{0})}\right].\end{split}

Ledoit and Wolf (2004), Bickel and Levina (2008a), and Rothman et al. (2009), among others, have employed analogous (scaled) Frobenius losses to prove various optimality results, defining η0(jl)=1/J\eta_{0}^{(jl)}=1/J, j,l\forall j,l. This particular choice of scaling factor is such that whatever the value of JJ, 𝐈J×JF,η0=1\lVert\mathbf{I}_{J\times J}\rVert_{F,\eta_{0}}=1. With such a scaling factor, the loss function may be viewed as a relative loss whose yardstick is the J×JJ\times J identity matrix. A similarly reasonable option for when the true covariance matrix is assumed to be dense is η0(jl)=1/J2\eta_{0}^{(jl)}=1/J^{2}. This weighting scheme effectively computes the average squared error across every entry of the covariance matrix; however, when the scaling factor is constant, it only impacts the interpretation of the loss. Constant scaling factors have no impact on our asymptotic analysis. Since it need not be estimated, it is not a nuisance parameter in the conventional sense.
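To fix ideas, the sketch below implements the observation-level Frobenius loss with a constant scaling factor and the V-fold cross-validated risk and selector of Section 2 for a toy library of two candidate estimators. It is a minimal illustration with simulated data and ad hoc candidates, not the cvCovEst implementation.

```r
# A minimal sketch (not the cvCovEst implementation): observation-level
# Frobenius loss with constant scaling and its V-fold cross-validated risk,
# for a toy library of two candidate estimators. Data are assumed mean zero.
set.seed(1)
n <- 60; J <- 80
X <- matrix(rnorm(n * J), n, J)

# Candidate estimators: mappings from a training sample to a symmetric matrix
sample_cov <- function(x) crossprod(x) / nrow(x)             # uncentered, since E[X] = 0
shrink_cov <- function(x, rho = 0.5) {                       # naive shrinkage toward the identity
  (1 - rho) * crossprod(x) / nrow(x) + rho * diag(ncol(x))
}
candidates <- list(sample = sample_cov, shrink = shrink_cov)

# Observation-level Frobenius loss of Equation (4), constant scaling factor
obs_frob_loss <- function(x_i, psi) sum((tcrossprod(x_i) - psi)^2)

# V-fold cross-validated risk estimator and selector
V <- 5
fold_id <- sample(rep(seq_len(V), length.out = n))
cv_risk <- function(est) {
  mean(sapply(seq_len(V), function(v) {
    psi_hat <- est(X[fold_id != v, , drop = FALSE])          # fit on the training set
    mean(apply(X[fold_id == v, , drop = FALSE], 1,           # evaluate on the validation set
               obs_frob_loss, psi = psi_hat))
  }))
}
risks <- sapply(candidates, cv_risk)
names(which.min(risks))                                      # the cross-validated selection
```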

When the scaling factor of Equation (4) is constant, the risk minimizers are identical for the cross-validated observation-level Frobenius risk and the common cross-validated Frobenius risk (used by, for example, Bickel and Levina, 2008a; Rothman et al., 2009; Fan et al., 2013; Fang et al., 2016).

Proposition 1

Define the cross-validated Frobenius risk for an estimator Ψ^k\hat{\Psi}_{k} as

R^n(Ψ^k,η0)𝔼Bn[𝐒n(Pn,Bn1)Ψ^k(Pn,Bn0)F,η02],\hat{R}_{n}(\hat{\Psi}_{k},\eta_{0})\coloneqq\mathbb{E}_{B_{n}}\left[\lVert\mathbf{S}_{n}(P_{n,B_{n}}^{1})-\hat{\Psi}_{k}(P_{n,B_{n}}^{0})\rVert^{2}_{F,\eta_{0}}\right], (5)

where 𝐒n(Pn,Bn1)\mathbf{S}_{n}(P_{n,B_{n}}^{1}) is the sample covariance matrix computed over the validation set Pn,Bn1P_{n,B_{n}}^{1}, and η0\eta_{0} is some constant scaling matrix. Then, R^n(Ψ^k,η0)θ^pn,n(k,η0)\hat{R}_{n}(\hat{\Psi}_{k},\eta_{0})-\hat{\theta}_{p_{n},n}(k,\eta_{0}) is constant with respect to Ψ^k(Pn,Bn0)\hat{\Psi}_{k}(P_{n,B_{n}}^{0}) such that

k^pn,n=argmink{1,,K}θ^pn,n(k,η0)=argmink{1,,K}R^n(Ψ^k,η0).\begin{split}\hat{k}_{p_{n},n}&=\operatorname*{\arg\!\min}_{k\in\{1,\ldots,K\}}\hat{\theta}_{p_{n},n}(k,\eta_{0})\\ &=\operatorname*{\arg\!\min}_{k\in\{1,\ldots,K\}}\hat{R}_{n}(\hat{\Psi}_{k},\eta_{0}).\end{split}

Note that the traditional Frobenius loss corresponds to the sum of the squared eigenvalues of the difference between the sample covariance matrix and the estimate. Proposition 1 therefore implies a similar relationship for our observation-level Frobenius loss, which may thus serve as a surrogate for a loss based on the spectral norm.
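The following short numerical check illustrates Proposition 1 for a single split with constant scaling and simulated mean-zero data: the gap between the Frobenius risk computed against the validation-set sample covariance matrix and the observation-level Frobenius risk does not depend on the candidate matrix, so the two criteria rank candidates identically. All objects are illustrative placeholders.

```r
# Single-split numerical check of Proposition 1 (constant scaling, E[X] = 0).
set.seed(2)
n <- 40; J <- 30
X <- matrix(rnorm(n * J), n, J)
val <- seq_len(10)                                  # validation indices, i.e., B_n(i) = 1
X_val <- X[val, ]
S_val <- crossprod(X_val) / nrow(X_val)             # uncentered validation-set sample covariance

gap <- function(psi) {
  obs_risk  <- mean(apply(X_val, 1, function(x) sum((tcrossprod(x) - psi)^2)))
  frob_risk <- sum((S_val - psi)^2)
  frob_risk - obs_risk                              # constant in psi by Proposition 1
}
psi_1 <- diag(J)                                    # two arbitrary candidate matrices
psi_2 <- matrix(0.3, J, J); diag(psi_2) <- 1
c(gap(psi_1), gap(psi_2))                           # identical up to numerical error
```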

We are not restricted to a constant scaling factor matrix. One might consider weighting the covariance matrix’s off-diagonal elements’ errors by their corresponding diagonal entries, especially useful when the random variables are of different scales. Such a scaling factor might offer a more equitable evaluation across all entries of the parameter:

Lweighted(X;𝝍,η0)j=1Jl=1J1ψ0(jj)ψ0(ll)(X(j)X(l)ψ(jl))2.\begin{split}L_{\text{weighted}}(X;\bm{\psi},\eta_{0})&\coloneqq\sum_{j=1}^{J}\sum_{l=1}^{J}\frac{1}{\sqrt{\psi_{0}^{(jj)}\psi_{0}^{(ll)}}}(X^{(j)}X^{(l)}-\psi^{(jl)})^{2}.\end{split}

Here, η0=diag(𝝍0)\eta_{0}=\text{diag}(\bm{\psi}_{0}) is a genuine nuisance parameter which can be estimated via the diagonal entries of the sample covariance matrix.
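A brief sketch of this weighted loss with the plug-in nuisance estimate described above follows; the training matrix and candidate estimate are illustrative placeholders.

```r
# Sketch of the weighted observation-level loss: the nuisance eta_0 = diag(psi_0)
# is replaced by the diagonal of the (uncentered) sample covariance matrix of
# the training set. All objects here are illustrative.
set.seed(6)
X_train <- matrix(rnorm(50 * 10), 50, 10)               # illustrative training data
d_hat <- diag(crossprod(X_train) / nrow(X_train))       # plug-in estimate of diag(psi_0)
weighted_loss <- function(x_i, psi, d_hat) {
  w <- 1 / sqrt(outer(d_hat, d_hat))                    # eta^(jl) = 1 / sqrt(psi_0^(jj) psi_0^(ll))
  sum(w * (tcrossprod(x_i) - psi)^2)
}
weighted_loss(X_train[1, ], diag(10), d_hat)            # loss of one observation for a candidate estimate
```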

Finally, the covariance matrix 𝝍0\bm{\psi}_{0} is the risk minimizer of the observation-level Frobenius loss if the integral with respect to XX and the partial differential operators with respect to 𝝍\bm{\psi} are interchangeable.

Proposition 2

Let the integral with respect to XX and the partial differential operators with respect to 𝛙\bm{\psi} be interchangeable, and let η\eta be some fixed J×JJ\times J matrix. Then

𝝍0=argmin𝝍𝚿Θ(𝝍,η,P0)\bm{\psi}_{0}=\operatorname*{\arg\!\min}_{\bm{\psi}\in\bm{\Psi}}\;\Theta(\bm{\psi},\eta,P_{0})

for Θ()\Theta(\cdot) defined under the observation-level Frobenius loss.

The proof is provided in the Online Supplement. Our proposed loss therefore satisfies the condition of Equation (1). The main results of the paper, however, relate only to the constant scaling factor case. In a minor abuse of notation, we set η0=𝟏\eta_{0}=\bm{1}, and suppress dependence of the loss function on the scaling factor wherever possible throughout the remainder of the text.

3.2 Optimality of the Cross-validated Estimator Selector

Having defined a suitable loss function, we turn to a discussion of the theoretical properties of the cross-validated estimator selection procedure. Specifically, we present, in Theorem 1, sufficient conditions under which the method is asymptotically equivalent in terms of risk to the commensurate CV oracle selector (as per Equation (2)). This theorem extends the general framework of Dudoit and van der Laan (2005) for use in high-dimensional multivariate estimator selection. Adapting their existing theory to this setting requires a judicious choice of loss function, new assumptions, and updated proofs reflecting the use of high-dimensional asymptotics. Corollary 1 then builds on Theorem 1 and details conditions under which the procedure produces asymptotically optimal selections in the sense of Equation (3). All proofs are provided in Section LABEL:supp:proofs of the Online Supplement.

Theorem 1

Let X1,,XnX_{1},\ldots,X_{n} be a random sample of nn i.i.d. random vectors of dimension JJ, such that XiP0,i=1,,nX_{i}\sim P_{0}\in\mathcal{M},i=1,\ldots,n. Assume, without loss of generality, that 𝔼[Xi]=0\mathbb{E}[X_{i}]=0, and define 𝛙0Var[Xi]\bm{\psi}_{0}\coloneqq\text{Var}[X_{i}]. Denote the set of KK candidate estimators by {Ψ^k():k=1,,K}\{\hat{\Psi}_{k}(\cdot):k=1,\ldots,K\}. Next, define the observation-level Frobenius loss function as L(X;𝛙)XX𝛙F,12L(X;\bm{\psi})\coloneqq\lVert XX^{\top}-\bm{\psi}\rVert^{2}_{F,1}. Finally, designate pnp_{n} as the proportion of observations in any given validation set. Consider the following assumptions:

Assumption 1

For any PP\in\mathcal{M} and XPX\sim P, maxj=1,,J(|X(j)|)<M1<\max_{j=1,\ldots,J}\>(\lvert X^{(j)}\rvert)<\sqrt{M_{1}}<\infty almost surely (a.s.).

Assumption 2

Define 𝚿{𝛙J×J|𝛙=𝛙,|ψ(jl)|<M2<\bm{\Psi}\coloneqq\{\bm{\psi}\in\mathbb{R}^{J\times J}|\;\bm{\psi}=\bm{\psi}^{\top},\lvert\psi^{(jl)}\rvert<M_{2}<\infty, j,l=1,,J}\forall\;j,l=1,\ldots,J\}, and assume that Ψ^k(Pn),𝛙0𝚿\hat{\Psi}_{k}(P_{n}),\bm{\psi}_{0}\in\bm{\Psi}.

Finite-Sample Result. Let M¯(J)4(M1+M2)2J2\overline{M}(J)\coloneqq 4(M_{1}+M_{2})^{2}J^{2} and c(δ,M¯(J))2(1+δ)2M¯(J)(1/3+1/δ)c(\delta,\overline{M}(J))\coloneqq 2(1+\delta)^{2}\overline{M}(J)(1/3+1/\delta). Then, for any δ>0\delta>0,

0𝔼P0[θ~pn,n(k^pn,n)θ0](1+2δ)𝔼P0[θ~pn,n(k~pn,n)θ0]+2c(δ,M¯(J))1+log(K)npn.0\leq\mathbb{E}_{P_{0}}[\tilde{\theta}_{p_{n},n}(\hat{k}_{p_{n},n})-\theta_{0}]\leq(1+2\delta)\mathbb{E}_{P_{0}}[\tilde{\theta}_{p_{n},n}(\tilde{k}_{p_{n},n})-\theta_{0}]+2c(\delta,\overline{M}(J))\frac{1+\log(K)}{np_{n}}. (6)

High-Dimensional Asymptotic Result. The finite-sample result in Equation (6) has the following asymptotic implications: If c(δ,M¯(J))(1+log(K))/(npn𝔼P0[θ~pn,n(k~pn,n)θ0])0c(\delta,\overline{M}(J))(1+\log(K))/(np_{n}\mathbb{E}_{P_{0}}[\tilde{\theta}_{p_{n},n}(\tilde{k}_{p_{n},n})-\theta_{0}])\rightarrow 0 and J/nm<J/n\rightarrow m<\infty as n,Jn,J\rightarrow\infty, then

𝔼P0[θ~pn,n(k^pn,n)θ0]𝔼P0[θ~pn,n(k~pn,n)θ0]1.\frac{\mathbb{E}_{P_{0}}[\tilde{\theta}_{p_{n},n}(\hat{k}_{p_{n},n})-\theta_{0}]}{\mathbb{E}_{P_{0}}[\tilde{\theta}_{p_{n},n}(\tilde{k}_{p_{n},n})-\theta_{0}]}\rightarrow 1. (7)

Further, if c(δ,M¯(J))(1+log(K))/(npn(θ~pn,n(k~pn,n)θ0))𝑃0c(\delta,\overline{M}(J))(1+\log(K))/(np_{n}(\tilde{\theta}_{p_{n},n}(\tilde{k}_{p_{n},n})-\theta_{0}))\overset{P}{\rightarrow}0 as n,Jn,J\rightarrow\infty, then

θ~pn,n(k^pn,n)θ0θ~pn,n(k~pn,n)θ0𝑃1.\frac{\tilde{\theta}_{p_{n},n}(\hat{k}_{p_{n},n})-\theta_{0}}{\tilde{\theta}_{p_{n},n}(\tilde{k}_{p_{n},n})-\theta_{0}}\overset{P}{\rightarrow}1. (8)

The proof relies on special properties of the random variable ZkL(X;Ψ^k(Pn))L(X;𝝍0)Z_{k}~{}\coloneqq~{}L(X;\hat{\Psi}_{k}(P_{n}))~{}-~{}L(X;\bm{\psi}_{0}) and on an application of Bernstein’s inequality (Bennett, 1962). Together, they are used to show that 2c(δ,M¯(J))(1+log(K))/(npn)2c(\delta,\overline{M}(J))(1+\log(K))/(np_{n}) is a finite-sample bound for comparing the performance of the cross-validated selector, k^pn,n\hat{k}_{p_{n},n}, against that of the oracle selector over the training sets, k~pn,n\tilde{k}_{p_{n},n}. Once this bound is established, the high-dimensional asymptotic results follow immediately.

Only a few sufficient conditions are required to provide finite-sample bounds on the expected risk difference of the estimator selected via our CV procedure. First, that each element of the random vector XX be bounded, and, second, that the entries of all covariance matrices in the parameter space and the set of possible estimates be bounded. Together, these assumptions allow for the definition of M¯(J)\overline{M}(J), the object permitting the extension of the loss-based estimation framework to the high-dimensional covariance matrix estimation problem.

The first assumption is technical in nature: it makes the proofs tractable. While it may appear stringent, and, for instance, is not satisfied by Gaussian distributions, we believe it to be generally applicable. We stress that parametric data-generating distributions, like those exhibiting Gaussianity, rarely reflect reality; they are merely mathematical conveniences. (Anecdotally, one cannot help but be reminded that “Everyone is sure of this [that errors are normally distributed] \ldots since the experimentalists believe that it is a mathematical theorem, and the mathematicians that it is an experimentally determined fact”; Poincaré, 1912, p. 171.) Most random variables, or transformations thereof, arising in scientific practice are bounded by limitations of the physical, electronic, or biological measurement process; thus, our method remains widely applicable. For example, in microarray and next-generation sequencing experiments, the raw data are images on a 16-bit scale, constraining them to [0,216)[0,2^{16}). Similarly, measurements of immunologic markers, of substantial interest in vaccine efficacy trials of HIV-1, COVID-19, and other infectious diseases, are bounded by the limits of detection and/or quantitation imposed by assay biotechnology.

Verifying that the additional assumptions required by Theorem 1’s asymptotic results hold proves to be more challenging. We write f(y)=O(g(y))f(y)=O(g(y)) if |f|\lvert f\rvert is bounded above by gg, f(y)=o(g(y))f(y)=o(g(y)) if ff is dominated by gg, f(y)=Ω(g(y))f(y)=\Omega(g(y)) if ff is bounded below by gg, and f(y)=ω(g(y))f(y)=\omega(g(y)) if ff dominates gg, all in asymptotics with respect to nn and JJ. Further, a subscript “P” might be added to these bounds, denoting a convergence in probability. Now, note that c(δ,M¯(J))(1+log(K))/(npn)=O(J)c(\delta,\overline{M}(J))(1+\log(K))/(np_{n})=O(J) for fixed pnp_{n} and as J/nm>0J/n\rightarrow m>0. Then the conditions associated with Equation (7) and Equation (8) hold so long as 𝔼P0[θ~pn,n(k~pn,n)θ0]=ω(J)\mathbb{E}_{P_{0}}[\tilde{\theta}_{p_{n},n}(\tilde{k}_{p_{n},n})-\theta_{0}]=\omega(J) and θ~pn,n(k~pn,n)θ0=ωP(J)\tilde{\theta}_{p_{n},n}(\tilde{k}_{p_{n},n})-\theta_{0}=\omega_{P}(J), respectively.

These requirements do not seem particularly restrictive given that the complexity of the problem generally increases with the number of features: there are many more entries in the covariance matrix requiring estimation than there are observations, so consistent estimation in terms of the Frobenius risk is not possible in high dimensions without additional assumptions about the data-generating process. This intuition is corroborated by our extensive simulation study in the following section.

Some additional insight might be gained by identifying conditions under which these assumptions are unmet for popular structural beliefs about the true covariance matrix. In particular, we consider the sparse covariance matrices defined in Bickel and Levina (2008a) and accompanying hard-thresholding estimators (see Section LABEL:supp:thresh-est of the Online Supplement):

Proposition 3

In addition to Assumptions 1 and 2 of Theorem 1, assume that ψ0\psi_{0} is a member of the following set of matrices:

{ψ:ψ(jj)<M2,l=1JI(ψ(jl)0)<s(J) for all j=1,,J}\left\{\psi:\psi^{(jj)}<M_{2},\sum_{l=1}^{J}I(\psi^{(jl)}\neq 0)<s(J)\text{ for all }j=1,\ldots,J\right\}

where s(J)s(J) is the row sparsity, that the hard-thresholding estimator is in the library of candidates, and that it uses a “sufficiently large” thresholding hyperparameter value in the sense of Bickel and Levina (2008a). Then, by Theorem 2 of Bickel and Levina (2008a), we have
𝔼P0[θ~pn,n(k~pn,n)θ0]=o(J)\mathbb{E}_{P_{0}}[\tilde{\theta}_{p_{n},n}(\tilde{k}_{p_{n},n})-\theta_{0}]=o(J) if s(J)/J=o(1/logJ)s(J)/J=o(1/\log J) asymptotically in nn and JJ.

Proposition 3 states that the conditions for achieving the asymptotic results of Theorem 1 are not met if the proportion of non-zero elements in the covariance matrix’s row with the most non-zero elements converges to zero faster than 1/logJ1/\log J and the library of candidates possesses a hard-thresholding estimator whose thresholding hyperparameter is reasonable in the sense of Bickel and Levina (2008a)’s Theorem 2 and its subsequent discussion. Plainly, the true covariance matrix cannot be too sparse if the collection of considered estimators contains the hard-thresholding estimator with a correctly specified thresholding hyperparameter value.

This implies that banded covariance matrices whose number of bands is fixed in JJ do not meet the criteria for our theory to apply, assuming that one of the candidate estimators correctly specifies the number of bands. Nevertheless, we observe empirically in Section 4 that, when the true covariance matrix is banded or tapered, our cross-validated procedure selects an optimal estimator more quickly in terms of nn and JJ than for any other type of true covariance matrix.

These results are likely explained by the relatively low complexity of the estimation problem in this setting. High-dimensional asymptotic arguments are perhaps unnecessary when the proportion of entries needing to be estimated in the true covariance matrix quickly converges to zero. These limitations of our theory reflect stringent, and typically unverifiable, structural assumptions about the estimand. We reiterate that the conditions of Theorem 1 are generally satisfied. In situations where the true covariance matrix is known to possess this level of sparsity, practitioners might instead appeal to Equation (39) of Bickel and Levina (2008a) to support their use of a cross-validated estimator selection procedure. This result, coupled with that of Proposition 1, likely explains the aforementioned simulation findings for the banded and tapered covariance matrices.

Now, Theorem 1’s high-dimensional asymptotic results relate the performance of the cross-validated selector to that of the oracle selector for the CV scheme. As indicated by the expression in Equation (3), however, we would like our cross-validated procedure to be asymptotically equivalent to the oracle over the entire data set. The conditions to obtain this desired result are provided in Corollary 1, a minor adaptation of previous work by Dudoit and van der Laan (2005). This extension accounts for increasing JJ, thereby permitting its use in high-dimensional asymptotics.

Corollary 1

Building upon Assumptions 1 and 2 of Theorem 1, we introduce the additional assumptions that, as n,Jn,J\rightarrow\infty and J/nm<J/n\rightarrow m<\infty, pn0p_{n}\rightarrow 0, c(δ,M¯(J))(1+log(K))/(npn(θ~pn,n(k~pn,n)θ0))𝑃0c(\delta,\overline{M}(J))(1+\log(K))/(np_{n}(\tilde{\theta}_{p_{n},n}(\tilde{k}_{p_{n},n})-\theta_{0}))\overset{P}{\rightarrow}0, and

θ~pn,n(k~pn,n)θ0θ~n(k~n)θ0𝑃1.\frac{\tilde{\theta}_{p_{n},n}(\tilde{k}_{p_{n},n})-\theta_{0}}{\tilde{\theta}_{n}(\tilde{k}_{n})-\theta_{0}}\overset{P}{\rightarrow}1. (9)

Under these assumptions, it follows that

θ~pn,n(k^pn,n)θ0θ~n(k~n)θ0𝑃1.\frac{\tilde{\theta}_{p_{n},n}(\hat{k}_{p_{n},n})-\theta_{0}}{\tilde{\theta}_{n}(\tilde{k}_{n})-\theta_{0}}\overset{P}{\longrightarrow}1. (10)

The proof is a direct application of the asymptotic results of Theorem 1.

As before, the assumption that c(δ,M¯(J))(1+log(K))/(npn(θ~pn,n(k~pn,n)θ0))𝑃0c(\delta,\overline{M}(J))(1+\log(K))/(np_{n}(\tilde{\theta}_{p_{n},n}(\tilde{k}_{p_{n},n})-\theta_{0}))\overset{P}{\rightarrow}0 remains difficult to verify, but essentially requires the estimation error of the oracle to increase quickly as the number of features grows. That is, npn(θ~pn,n(k~pn,n)θ0)=ωP(J)np_{n}(\tilde{\theta}_{p_{n},n}(\tilde{k}_{p_{n},n})-\theta_{0})=\omega_{P}(J). We posit that this condition is generally satisfied, similarly to the asymptotic results of Theorem 1.

Now, a sufficient condition for Equation (9) is that there exists a γ>0\gamma>0 such that

(nγ(θ~n(k~n)θ0),(n(1pn))γ(θ~pn,n(k~pn,n)θ0))𝑑(Z,Z),\left(n^{\gamma}(\tilde{\theta}_{n}(\tilde{k}_{n})-\theta_{0}),\;(n(1-p_{n}))^{\gamma}(\tilde{\theta}_{p_{n},n}(\tilde{k}_{p_{n},n})-\theta_{0})\right)\overset{d}{\rightarrow}(Z,Z), (11)

for a single random variable ZZ with (Z>a)=1\mathbb{P}(Z>a)=1 for some a>0a>0. For single-split validation, where (Bn=b)=1\mathbb{P}(B_{n}=b)=1 for some b{0,1}nb\in\{0,1\}^{n}, it suffices to assume that there exists a γ>0\gamma>0 such that nγ(θ~n(k~n)θ0)𝑑Zn^{\gamma}(\tilde{\theta}_{n}(\tilde{k}_{n})-\theta_{0})\overset{d}{\rightarrow}Z for a random variable ZZ with (Z>a)=1\mathbb{P}(Z>a)=1 for some a>0a>0.

Equation (9) essentially requires that the (appropriately scaled) distributions of the cross-validated and full-dataset conditional risk differences of their respective oracle selections converge in distribution as pn0p_{n}\rightarrow 0. Again, this condition is unrestrictive. As pn0p_{n}\rightarrow 0, the composition of each training set becomes increasingly similar to that of the full-dataset. The resulting estimates produced by each candidate estimator over the training sets and the full-dataset will correspondingly converge. Naturally, so too will the cross-validated and full-dataset conditional risk difference distributions of their respective selections.

While the number of candidates in the estimator library KK has been assumed to be fixed in this discussion of the proposed method’s asymptotic results, it may be allowed to grow as a function of nn and JJ. Of course, this will negatively impact the convergence rates of c(δ,M¯(J))(1+log(K))/(npn𝔼P0[θ~pn,n(k~pn,n)θ0])c(\delta,\overline{M}(J))(1+\log(K))/(np_{n}\mathbb{E}_{P_{0}}[\tilde{\theta}_{p_{n},n}(\tilde{k}_{p_{n},n})-\theta_{0}]) and c(δ,M¯(J))(1+log(K))/(npn(θ~pn,n(k~pn,n)θ0))c(\delta,\overline{M}(J))(1+\log(K))/(np_{n}(\tilde{\theta}_{p_{n},n}(\tilde{k}_{p_{n},n})-\theta_{0})). The sufficient conditions outlined in the asymptotic results of Theorem 1 are achieved so long as the library of candidates does not grow too aggressively. That is, we can make the additional assumptions that K=o(exp{𝔼P0[θ~pn,n(k~pn,n)θ0]/J})K=o(\text{exp}\{\mathbb{E}_{P_{0}}[\tilde{\theta}_{p_{n},n}(\tilde{k}_{p_{n},n})-\theta_{0}]/J\}) and K=oP(exp{(θ~pn,n(k~pn,n)θ0)/J})K=o_{P}(\text{exp}\{(\tilde{\theta}_{p_{n},n}(\tilde{k}_{p_{n},n})-\theta_{0})/J\}) such that the results of Equations (7) and (8) are achieved, respectively.

Finally, we have assumed thus far that 𝔼P0[X]\mathbb{E}_{P_{0}}[X] is known. This is generally not the case in practice. In place of a random vector centered at zero, we might instead consider the set of nn demeaned random vectors 𝐗~n×J\tilde{\mathbf{X}}_{n\times J} where X~i=XiX¯\tilde{X}_{i}=X_{i}-\bar{X} and X¯(j)=1/nXi(j)\bar{X}^{(j)}=1/n\sum X_{i}^{(j)}. It follows from the details given in Remark LABEL:supp:remark:unknown-mean of the Online Supplement that the asymptotic results of Theorem 1 and Corollary 1 apply to 𝐗~n×J\tilde{\mathbf{X}}_{n\times J}.

4 Simulation Study

4.1 Simulation Study Design

We conducted a series of simulation experiments using prominent covariance models to verify the theoretical results of our cross-validated estimator selection procedure. These models are described below.

Model 1:

A dense covariance matrix, where

ψ(jl)={1,j=l0.5,otherwise.\psi^{(jl)}=\begin{cases}1,&j=l\\ 0.5,&\text{otherwise}\\ \end{cases}.
Model 2:

An AR(1) model, where ψ(jl)=0.7|jl|\psi^{(jl)}=0.7^{\lvert j-l\rvert}. This covariance matrix, corresponding to a common timeseries model, is approximately sparse for large JJ, since the off-diagonal elements quickly shrink to zero.

Model 3:

An MA(1) model, where ψ(jl)=0.7|jl|𝕀(|jl|1)\psi^{(jl)}=0.7^{\lvert j-l\rvert}\cdot\mathbb{I}(\lvert j-l\rvert\leq 1). This covariance matrix, corresponding to another common timeseries model, is truly sparse. Only the diagonal, subdiagonal, and superdiagonal contain non-zero elements.

Model 4:

An MA(2) model, where

ψ(jl)={1,j=l0.6,|jl|=10.3,|jl|=20,otherwise.\psi^{(jl)}=\begin{cases}1,&j=l\\ 0.6,&\lvert j-l\rvert=1\\ 0.3,&\lvert j-l\rvert=2\\ 0,&\text{otherwise}\end{cases}.

This timeseries model is similar to Model 3, but slightly less sparse.

Model 5:

A random covariance matrix model. First, a J×JJ\times J random matrix whose elements are i.i.d. Uniform(0,1)\text{Uniform}(0,1) is generated. Next, entries below 1/41/4 are set to 11, entries between 1/41/4 and 1/21/2 are set to 1-1, and the remaining entries are set to 0. The square of this matrix is then computed and added to the identity matrix 𝐈J×J\mathbf{I}_{J\times J}. Finally, the corresponding correlation matrix is computed and used as the model’s covariance matrix.

Model 6:

A Toeplitz covariance matrix, where

ψ(jl)={1,j=l0.6|jl|1.3,otherwise.\psi^{(jl)}=\begin{cases}1,&j=l\\ 0.6\lvert j-l\rvert^{-1.3},&\text{otherwise}\end{cases}.

Like the AR(1) model, this covariance matrix is approximately sparse for large JJ. However, the off-diagonal entries decay less quickly as their distance from the diagonal increases.

Model 7:

A Toeplitz covariance matrix with alternating signs, where

ψ(jl)={1,j=l(1)|jl|0.6|jl|1.3,otherwise.\psi^{(jl)}=\begin{cases}1,&j=l\\ (-1)^{\lvert j-l\rvert}0.6\lvert j-l\rvert^{-1.3},&\text{otherwise}\end{cases}.

This model is almost identical to Model 6, though the signs of the covariance matrix’s entries are alternating.

Model 8:

A covariance matrix inspired by the latent variable model described in the Online Supplement’s Equation (LABEL:supp:factor-model). Let 𝜷J×3=(β1,,βJ)\bm{\beta}_{J\times 3}~{}=~{}(\beta_{1},\ldots,\beta_{J})^{\top}, where βj\beta_{j} are randomly generated using a N(0,𝐈3×3)N(0,\mathbf{I}_{3\times 3}) distribution for j=1,,Jj=1,\ldots,J. Then 𝝍=𝜷𝜷+𝐈J×J\bm{\psi}=\bm{\beta}\bm{\beta}^{\top}+\mathbf{I}_{J\times J} is the covariance matrix of a model with three latent factors.

Each covariance model was used to generate data sets consisting of n{50,100,200,500}n\in\{50,100,200,500\} i.i.d. multivariate Gaussian, mean-zero observations. The uniform boundedness condition of Theorem 1’s Assumption 1 is therefore not satisfied; we do this purposefully to further stress that this assumption is not limiting in many practical settings. For each model and sample size, five data dimension ratios were considered: J/n{0.3,0.5,1,2,5}J/n\in\{0.3,0.5,1,2,5\}. Together, the eight covariance models, four sample sizes, and five dimensionality ratios result in 160160 distinct simulation settings. For each such setting, the performance of the cross-validated selector with respect to the various oracle selectors and several well-established estimators is evaluated based on aggregation across 200200 Monte Carlo repetitions.
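As an illustration of one such setting, the sketch below simulates a single data set from Model 2 (the AR(1) covariance) with assumed values n = 100 and J/n = 2; the remaining settings differ only in the covariance model, sample size, and dimension ratio.

```r
# Illustrative draw from one simulation setting (assumption: Model 2, AR(1)).
set.seed(3)
n <- 100
J <- 2 * n
psi_0 <- 0.7^abs(outer(seq_len(J), seq_len(J), "-"))   # psi^(jl) = 0.7^|j - l|
R <- chol(psi_0)                                       # psi_0 = t(R) %*% R
X <- matrix(rnorm(n * J), n, J) %*% R                  # n i.i.d. N(0, psi_0) observations
```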

We applied our estimator selection procedure, which we refer to as cvCovEst, using a 5-fold CV scheme. The library of candidate estimators is provided in the Online Supplement’s Table LABEL:supp:table:sim-hyperparams, which includes details on these estimators’ possible hyperparameters. Seventy-four estimators make up the library of candidates. We note that no penalty is attributed to estimators generating rank-deficient estimates, like the sample covariance matrix when J>nJ>n, though this limitation is generally of practical importance. When the situation dictates that the resulting estimate must be positive-definite, the library of candidates should be assembled accordingly.
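A schematic call to the cvCovEst R package (Boileau et al., 2021) reflecting this setup is sketched below. The estimator and argument names follow our reading of the package documentation and may differ across versions; the data matrix and hyperparameter grids are illustrative placeholders rather than the full 74-estimator library used in the study.

```r
# Schematic cvCovEst call (argument names assumed; see the package documentation).
library(cvCovEst)
set.seed(5)
X <- matrix(rnorm(100 * 200), nrow = 100, ncol = 200)   # placeholder n = 100, J = 200 data matrix

selection <- cvCovEst(
  dat = X,
  estimators = c(sampleCovEst, linearShrinkLWEst, thresholdingEst, poetEst),
  estimator_params = list(
    thresholdingEst = list(gamma = c(0.2, 0.4)),        # illustrative hyperparameter grids
    poetEst = list(k = c(1L, 2L), lambda = c(0.1, 0.2))
  ),
  cv_scheme = "v_fold",
  v = 5                                                 # 5-fold CV, as in the simulation study
)
str(selection, max.level = 1)                           # inspect the selection and its CV risk
```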

4.2 Simulation Study Results

To examine empirically the optimality results of Theorem 1, we computed analytically, for each replication, the cross-validated conditional risk differences of the cross-validated selection, k^pn,n\hat{k}_{p_{n},n}, and the cross-validated oracle selection, k~pn,n\tilde{k}_{p_{n},n}. The Monte Carlo expectations of the risk differences stratified by nn, J/nJ/n, and the covariance model were computed from the cross-validated conditional risk differences. The ratios of the expected risk differences are presented in Figure 1A. These results make clear that, for virtually all models considered, the estimator chosen by our proposed cross-validated selection procedure has a risk difference asymptotically identical on average to that of the cross-validated oracle.

Figure 1: (A) Comparison of the cross-validated selection and cross-validated oracle selection’s mean cross-validated conditional risk differences. (B) Comparison of the cross-validated selection and oracle selection’s full-dataset conditional risk differences. Note the differing y-axis scales for the different models.

A stronger result, corresponding to Equation (8) of Theorem 1, is presented in the Online Supplement’s Figure LABEL:supp:sim:conv-in-prob. For all but Models 1 and 8, we find that our algorithm’s selection is virtually equivalent to the cross-validated oracle selection for n200n\geq 200 and J/n0.5J/n\geq 0.5. Even for Model 8, in which the covariance matrices are more difficult to estimate due to their dense structures, we find that our selector identifies the optimal estimator with probability tending to 11 for n200n\geq 200 and J/n=5J/n=5.

More impressive still are the results presented in Figure 1B that characterize the full-dataset conditional risk difference ratios. For all covariance matrix models considered, with the exception of Model 1, our procedure’s selections attain near asymptotic optimality for moderate values of nn and J/nJ/n. This suggests that our loss-based estimator selection approach’s theoretical guarantee, as outlined in Corollary 1, is achievable in many practical settings.

In addition to verifying our method’s asymptotic behavior, we compared its estimates against those of competing approaches. We computed the Frobenius and spectral norms of the difference between each procedure’s estimate and the corresponding true covariance matrix. The mean norms over all simulations were then computed for each covariance matrix estimation procedure, again stratified by nn, J/nJ/n, and the covariance matrix model (Figures LABEL:supp:mean-frobenius and LABEL:supp:mean-spectral of the Online Supplement). Our data-adaptive procedure is found to perform at least as well as the best alternative estimation strategy across all simulation scenarios under both norms.

5 Real Data Examples

Single-cell transcriptome sequencing (scRNA-seq) allows researchers to study the gene expression profiles of individual cells. The fine-grained transcriptomic data that it provides have been used to identify rare cell populations and to elucidate the developmental relationships between diverse cellular states.

Figure 2: Comparisons of scRNA-seq data sets’ UMAP embeddings based on vanilla PCA or PCA with the cross-validated selection’s covariance matrix estimate. The data sets consist of (A) 285 cells collected from the visual cortex of mice and (B) 2,816 mouse brain cells. Distinct cell types are indicated by color.

Given that a typical scRNA-seq data set possesses tens of thousands of features (genes), most workflows prescribe a dimensionality reduction step. In addition to reducing the amount of computational resources needed to analyze the data, reducing the dimensions mitigates the effect of corrupting noise on interesting biological signal. The lower-dimensional embedding is then used in downstream analyses, like novel cell-type discovery via clustering.

One of the most popular methods used to reduce the dimensionality of scRNA-seq data is uniform manifold approximation and projection (UMAP) (McInnes et al., 2018). This method captures the most salient non-linear relationships among a high-dimensional data set’s features and projects them onto a reduced-dimensional space. Instead of applying UMAP directly, the scRNA-seq data set’s leading principal components (PCs) are often used as an initialization.

This initial dimensionality reduction by PCA is believed to play a helpful role in denoising. However, PCA typically relies on the sample covariance matrix, and so when the data set is high-dimensional, the resulting principal components are known to be poor estimates of those of the population (Johnstone and Lu, 2009). We hence posit that our cross-validated estimator selection procedure could form a basis for an improved PCA. That is, we hope that the eigenvectors resulting from the eigendecomposition of our estimated covariance matrix could be used to generate a set of estimates closer to the true PCs in terms of risk. These PCs could then be fed to UMAP to produce an enhanced embedding. Indeed, additional simulation results provided in Section LABEL:supp:spectral-norm-sim of the Online Supplement suggest that, in terms of the spectral norm, cvCovEst produces estimates of the leading eigenvalue that are at least as accurate as those produced by the sample covariance matrix.

We applied our procedure to two scRNA-seq data sets for which the cell types are known a priori. These data were obtained from the scRNAseq Bioconductor R package (Risso and Cole, 2020), and prepared for analysis using a workflow outlined in Amezquita et al. (2020). A 5-fold CV scheme was used; the library of candidate estimators is provided in the Online Supplement’s Table LABEL:supp:table:scrnaseq-hyperparams. We expect that cells of the same class will form tight, distinct clusters within the low-dimensional representations. The resulting embeddings, which we refer to as the cvCovEst-based embeddings, were then compared to those produced by UMAP using traditional PCA for initialization, which we refer to as the PCA-based embeddings. For each embedding, the 20 leading PCs were fed to UMAP. The first data set is a collection of 285 mouse visual cortex cells (Tasic et al., 2016), and the second data set consists of 2,816 mouse brain cells (Zeisel et al., 2015). The 1,000 most variable genes of each data set were used to compute the PCs of both embeddings.
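A minimal sketch of the cvCovEst-based embedding workflow described above follows. The expression matrix, covariance estimate, and use of the uwot package for UMAP are illustrative placeholders standing in for the actual scRNA-seq processing pipeline.

```r
# Sketch of the cvCovEst-based embedding workflow. `Y` stands in for the
# cells-by-genes expression matrix (restricted to the most variable genes) and
# `psi_hat` for the covariance matrix estimate of the selected estimator; both
# are simulated placeholders here. Requires the uwot package for UMAP.
set.seed(4)
n_cells <- 300; n_genes <- 100
Y <- matrix(rnorm(n_cells * n_genes), n_cells, n_genes)      # placeholder expression matrix
psi_hat <- crossprod(scale(Y, scale = FALSE)) / n_cells      # placeholder covariance estimate

Y_c <- scale(Y, center = TRUE, scale = FALSE)                # demean the features
eig <- eigen(psi_hat, symmetric = TRUE)
pcs <- Y_c %*% eig$vectors[, 1:20]                           # scores on the 20 leading PCs
emb <- uwot::umap(pcs)                                       # 2-D UMAP embedding of the PC scores
plot(emb, xlab = "UMAP 1", ylab = "UMAP 2", pch = 19)        # color by known cell type in practice
```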

Figure 3: Tasic dataset: Diagnostic plots and tables generated using the cvCovEst R package. (A) The top-left plot presents the cross-validated Frobenius risk of the estimator selected by our method. kk represents the number of potential latent factors, and lambda the thresholding value used. The top-right panel contains a line plot of the selected estimator’s eigenvalues. The bottom-left plot displays the absolute values of the estimated correlation matrix output by the cvCovEst selection, and the bottom-right table lists the best performing estimators from all classes of estimators considered. (B) Side-by-side line plots of the estimated leading eigenvalues of the cvCovEst selection and the sample covariance matrix.

The resulting UMAP plots are presented in Figure 2. Though the two embeddings generated for each data set are qualitatively similar, the low-dimensional representation relying on our loss-based approach is more refined in Figure 2A. A number of cells erroneously clustered in the PCA-based embedding are correctly represented in the cvCovEst-based embedding. This explains the 41% increase in average silhouette width of our method relative to the traditional approach. Further insight is gleaned from the diagnostic plots of Figure 3. Figure 3A indicates that cvCovEst selected the POET estimator (Fan et al., 2013) with 5 latent factors and a thresholding hyperparameter of 0.3. It also shows that the selected estimator significantly improves upon the sample covariance matrix in terms of the cross-validated Frobenius risk. Figure 3B provides further insight into the discrepancies between the UMAP results of Figure 2A: the sample covariance matrix likely over-estimates many of the leading eigenvalues.

The embeddings in Figure 2B are qualitatively identical, and so too are their average silhouette widths. This is expected, as the Zeisel et al. (2015) data set is not truly high-dimensional. The sample covariance matrix is likely a reasonable estimator in this setting. Ideally, data-adaptive selection procedures should be cognizant of this. Indeed, cvCovEst, when applied to the Zeisel et al. (2015) data set, selects an estimator whose cross-validated empirical risk is only slightly smaller than that of the sample covariance matrix, and whose leading PCs are virtually identical (Figure LABEL:supp:fig:zeisel-diagnostics of the Online Supplement).

6 Discussion

This work extends Dudoit and van der Laan (2005)’s framework for asymptotically optimal, data-adaptive estimator selection to the problem of covariance matrix estimation in high-dimensional settings. We provide sufficient conditions under which our cross-validated procedure is asymptotically optimal in terms of risk, and show that it generalizes the cross-validated hyperparameter selection procedures employed by existing estimation approaches. Future work might derive analogous results for other loss functions, or perhaps even for other parameters like the precision matrix.

The simulation study provides evidence that near-optimal results are achieved in data sets with relatively modest numbers of observations and many features across models indexed by diverse covariance matrix structures. These results also establish that our cross-validated procedure performs as well as the best bespoke estimation procedure in a variety of settings. Our scRNA-seq data examples further illustrate the utility of our approach in fields where high-dimensional data are collected routinely.

Practitioners need no longer rely upon intuition alone when deciding which candidate estimator is best from among a library of diverse estimators. We expect that a variety of computational procedures relying upon the accurate estimation of the covariance matrix beyond the exploratory analyses considered here, like clustering and latent variable estimation, stand to benefit from the application of this framework.


SUPPLEMENTARY MATERIAL

Online Supplement:

All technical proofs and additional simulation results are included in the online supplement. A brief review of popular covariance matrix estimators is also included.

cvCovEst R package:

The cross-validated covariance matrix estimator selection procedure is implemented in cvCovEst (Boileau et al., 2021), an open-source R package (R Core Team, 2021). This package is available via the Comprehensive R Archive Network (CRAN) at https://CRAN.R-project.org/package=cvCovEst. Internally, the cvCovEst package relies upon the generalized CV framework of the origami R package (Coyle and Hejazi, 2018).

Simulation and Analysis Code:

Results in Sections 4 and 5 were produced using version 0.1.3 of the cvCovEst R package. The code and data used to produce these analyses are publicly available on GitHub at https://github.com/PhilBoileau/pub_cvCovEst.

Acknowledgements

We thank Brian Collica and Jamarcus Liu for significant contributions to the development of the cvCovEst R package. PB is funded by the Fonds de recherche du Québec - Nature et technologies, and the Natural Sciences and Engineering Research Council of Canada. NSH gratefully acknowledges the support of the National Science Foundation (award no. DMS 2102840).

References

  • Amezquita et al. (2020) Amezquita, R. A., Lun, A. T. L., Becht, E., Carey, V. J., Carpp, L. N., Geistlinger, L., Marini, F., Rue-Albrecht, K., Risso, D., Soneson, C., Waldron, L., Pagès, H., Smith, M. L., Huber, W., Morgan, M., Gottardo, R., and Hicks, S. C. (2020), “Orchestrating single-cell analysis with Bioconductor,” Nature Methods, 17, 137–145, URL https://doi.org/10.1038/s41592-019-0654-x.
  • Anderson (2003) Anderson, T. (2003), An introduction to multivariate statistical analysis, Hoboken, New Jersey: John Wiley & Sons, Inc., third edition.
  • Bai (2003) Bai, J. (2003), “Inferential Theory for Factor Models of Large Dimensions,” Econometrica, 71, 135–171.
  • Bartz (2016) Bartz, D. (2016), “Cross-validation based Nonlinear Shrinkage,” URL https://arxiv.org/abs/1611.00798.
  • Bennett (1962) Bennett, G. (1962), “Probability inequalities for the sum of independent random variables,” Journal of the American Statistical Association, 57, 33–45.
  • Bickel and Levina (2008a) Bickel, P. J. and Levina, E. (2008a), “Covariance Regularization by Thresholding,” Annals of Statistics, 36, 2577–2604, URL http://www.jstor.org/stable/25464728.
  • Bickel and Levina (2008b) — (2008b), “Regularized Estimation of Large Covariance Matrices,” Annals of Statistics, 36, 199–227, URL http://www.jstor.org/stable/25464621.
  • Boileau et al. (2021) Boileau, P., Hejazi, N. S., Collica, B., van der Laan, M. J., and Dudoit, S. (2021), “cvCovEst: Cross-validated covariance matrix estimator selection and evaluation in R,” Journal of Open Source Software, 6, 3273, URL https://doi.org/10.21105/joss.03273.
  • Breiman and Spector (1992) Breiman, L. and Spector, P. (1992), “Submodel selection and evaluation in regression. The X-random case,” International Statistical Review, 291–319.
  • Cai and Liu (2011) Cai, T. and Liu, W. (2011), “Adaptive Thresholding for Sparse Covariance Matrix Estimation,” Journal of the American Statistical Association, 106, 672–684, URL https://doi.org/10.1198/jasa.2011.tm10560.
  • Cai et al. (2010) Cai, T. T., Zhang, C.-H., and Zhou, H. H. (2010), “Optimal rates of convergence for covariance matrix estimation,” Annals of Statistics, 38, 2118–2144.
  • Coyle and Hejazi (2018) Coyle, J. and Hejazi, N. (2018), “origami: A Generalized Framework for Cross-Validation in R,” Journal of Open Source Software, 3, 512, URL https://doi.org/10.21105/joss.00512.
  • Dudoit and van der Laan (2005) Dudoit, S. and van der Laan, M. J. (2005), “Asymptotics of cross-validated risk estimation in estimator selection and performance assessment,” Statistical Methodology, 2, 131–154.
  • Efron (2012) Efron, B. (2012), Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction, Cambridge University Press.
  • Fan et al. (2008) Fan, J., Fan, Y., and Lv, J. (2008), “High dimensional covariance matrix estimation using a factor model,” Journal of Econometrics, 147, 186 – 197, URL http://www.sciencedirect.com/science/article/pii/S0304407608001346. Econometric modelling in finance and risk management: An overview.
  • Fan et al. (2013) Fan, J., Liao, Y., and Mincheva, M. (2013), “Large covariance estimation by thresholding principal orthogonal complements,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75, 603–680.
  • Fan et al. (2016) Fan, J., Liao, Y., and Wang, W. (2016), “Projected principal component analysis in factor models,” Annals of Statistics, 44, 219–254.
  • Fan et al. (2019) Fan, J., Wang, W., and Zhong, Y. (2019), “Robust covariance estimation for approximate factor models,” Journal of Econometrics, 208, 5 – 22. Special Issue on Financial Engineering and Risk Management.
  • Fang et al. (2016) Fang, Y., Wang, B., and Feng, Y. (2016), “Tuning-parameter selection in regularized estimations of large covariance matrices,” Journal of Statistical Computation and Simulation, 86, 494–509, URL https://doi.org/10.1080/00949655.2015.1017823.
  • Friedman et al. (2001) Friedman, J., Hastie, T., and Tibshirani, R. (2001), The Elements of Statistical Learning, volume 1, Springer Series in Statistics, New York: Springer.
  • Golub and Van Loan (1996) Golub, G. H. and Van Loan, C. F. (1996), Matrix Computations, Johns Hopkins University Press, third edition.
  • Györfi et al. (2002) Györfi, L., Kohler, M., Krzyzak, A., and Walk, H. (2002), A Distribution-Free Theory of Nonparametric Regression., Springer series in statistics, Springer.
  • Johnstone (2001) Johnstone, I. M. (2001), “On the distribution of the largest eigenvalue in principal components analysis,” Annals of Statistics, 295–327.
  • Johnstone and Lu (2009) Johnstone, I. M. and Lu, A. Y. (2009), “On Consistency and Sparsity for Principal Components Analysis in High Dimensions,” Journal of the American Statistical Association, 104, 682–693. PMID: 20617121.
  • Lam and Fan (2009) Lam, C. and Fan, J. (2009), “Sparsistency and rates of convergence in large covariance matrix estimation,” Annals of Statistics, 37, 4254–4278.
  • Ledoit and Wolf (2004) Ledoit, O. and Wolf, M. (2004), “A well-conditioned estimator for large-dimensional covariance matrices,” Journal of Multivariate Analysis, 88, 365 – 411.
  • Ledoit and Wolf (2012) — (2012), “Nonlinear shrinkage estimation of large-dimensional covariance matrices,” Annals of Statistics, 40, 1024–1060.
  • Ledoit and Wolf (2015) — (2015), “Spectrum estimation: A unified framework for covariance matrix estimation and PCA in large dimensions,” Journal of Multivariate Analysis, 139, 360 – 384.
  • Ledoit and Wolf (2018) — (2018), “Analytical nonlinear shrinkage of large-dimensional covariance matrices,” ECON - Working Papers 264, Department of Economics - University of Zurich, URL https://EconPapers.repec.org/RePEc:zur:econwp:264.
  • Marčenko and Pastur (1967) Marčenko, V. A. and Pastur, L. A. (1967), “Distribution of eigenvalues for some sets of random matrices,” Mathematics of the USSR-Sbornik, 1, 457–483.
  • McInnes et al. (2018) McInnes, L., Healy, J., and Melville, J. (2018), “UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction,” arXiv, 1802.03426, URL http://arxiv.org/abs/1802.03426.
  • Onatski (2012) Onatski, A. (2012), “Asymptotics of the principal components estimator of large factor models with weakly influential factors,” Journal of Econometrics, 168, 244 – 258.
  • Poincaré (1912) Poincaré, H. (1912), Calcul des probabilités, volume 1, Gauthier-Villars.
  • R Core Team (2021) R Core Team (2021), R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria, URL https://www.R-project.org/.
  • Risso and Cole (2020) Risso, D. and Cole, M. (2020), scRNAseq: Collection of Public Single-Cell RNA-Seq Datasets. R package version 2.3.17.
  • Robbins (1964) Robbins, H. (1964), “The empirical Bayes approach to statistical decision problems,” Annals of Mathematical Statistics, 35, 1–20.
  • Rothman et al. (2009) Rothman, A. J., Levina, E., and Zhu, J. (2009), “Generalized Thresholding of Large Covariance Matrices,” Journal of the American Statistical Association, 104, 177–186, URL https://doi.org/10.1198/jasa.2009.0101.
  • Schäfer and Strimmer (2005) Schäfer, J. and Strimmer, K. (2005), “A Shrinkage Approach to Large-Scale Covariance Matrix Estimation and Implications for Functional Genomics,” Statistical Applications in Genetics and Molecular Biology, 4.
  • Smith (2005) Smith, S. (2005), “Covariance, subspace, and intrinsic Cramér-Rao bounds,” IEEE Transactions on Signal Processing, 53, 1610–1630.
  • Stock and Watson (2002) Stock, J. H. and Watson, M. W. (2002), “Forecasting Using Principal Components from a Large Number of Predictors,” Journal of the American Statistical Association, 97, 1167–1179, URL http://www.jstor.org/stable/3085839.
  • Tasic et al. (2016) Tasic, B., Menon, V., Nguyen, T. N., Kim, T. K., Jarsky, T., Yao, Z., Levi, B., Gray, L. T., Sorensen, S. A., Dolbeare, T., Bertagnolli, D., Goldy, J., Shapovalova, N., Parry, S., Lee, C., Smith, K., Bernard, A., Madisen, L., Sunkin, S. M., Hawrylycz, M., Koch, C., and Zeng, H. (2016), “Adult mouse cortical cell taxonomy revealed by single cell transcriptomics,” Nature Neuroscience, 19, 335–346, URL https://doi.org/10.1038/nn.4216.
  • van der Laan and Dudoit (2003) van der Laan, M. J. and Dudoit, S. (2003), “Unified Cross-Validation Methodology For Selection Among Estimators and a General Cross-Validated Adaptive Epsilon-Net Estimator: Finite Sample Oracle Inequalities and Examples,” Working Paper 130, University of California, Berkeley, URL https://biostats.bepress.com/ucbbiostat/paper130/.
  • van der Vaart et al. (2006) van der Vaart, A. W., Dudoit, S., and van der Laan, M. J. (2006), “Oracle inequalities for multi-fold cross validation,” Statistics and Decisions, 24, 351–371.
  • Zeisel et al. (2015) Zeisel, A., Muñoz-Manchado, A. B., Codeluppi, S., Lönnerberg, P., La Manno, G., Juréus, A., Marques, S., Munguba, H., He, L., Betsholtz, C., Rolny, C., Castelo-Branco, G., Hjerling-Leffler, J., and Linnarsson, S. (2015), “Cell types in the mouse cortex and hippocampus revealed by single-cell RNA-seq,” Science, 347, 1138–1142.