
Sharper rates of convergence for the tensor graphical Lasso estimator

Shuheng Zhou                           Kristjan Greenewald
University of California, Riverside        MIT-IBM Watson AI Lab, Cambridge, MA
Abstract

Many modern datasets exhibit dependencies among observations as well as variables. This gives rise to the challenging problem of analyzing high-dimensional matrix-variate data with unknown dependence structures. To address this challenge, Kalaitzis et al. (2013) proposed the Bigraphical Lasso (BiGLasso), an estimator for precision matrices of matrix-normals based on the Cartesian product of graphs. Subsequently, Greenewald, Zhou and Hero (GZH 2019) introduced a multiway tensor generalization of the BiGLasso estimator, known as the TeraLasso estimator. In this paper, we provide sharper rates of convergence in the Frobenius and operator norm for both BiGLasso and TeraLasso estimators for estimating inverse covariance matrices. This improves upon the rates presented in GZH 2019. In particular, (a) we strengthen the bounds for the relative errors in the operator and Frobenius norm by a factor of approximately $\log p$; (b) crucially, this improvement allows for finite-sample estimation errors in both norms to be derived for the two-way Kronecker sum model. The two-way regime is important because it is the setting that is the most theoretically challenging and, simultaneously, the most common in applications. Normality is not needed in our proofs; instead, we consider sub-gaussian ensembles and derive tight concentration of measure bounds, using tensor unfolding techniques. The proof techniques may be of independent interest.

1 Introduction

Matrix and tensor-valued data with complex dependencies are ubiquitous in modern statistics and machine learning, flowing from sources as diverse as medical and radar imaging modalities, spatial-temporal and meteorological data collected from sensor networks and weather stations, and biological, neuroscience and spatial gene expression data aggregated over trials and time points. Learning useful structures from these large scale, complex and high-dimensional data in the low sample regime is an important task. Undirected graphs are often used to describe high dimensional distributions. Under sparsity conditions, the graph can be estimated using $\ell_{1}$-penalization methods, such as the graphical Lasso (GLasso) [12] and multiple (nodewise) regressions [27]. Under suitable conditions, including the independence among samples, such approaches yield consistent (and sparse) estimation in terms of graphical structure and fast convergence rates with respect to the operator and Frobenius norm for the covariance matrix and its inverse. The independence assumptions mentioned above substantially simplify mathematical derivations, but they tend to be very restrictive.

To remedy this, recent work has demonstrated another regime where further improvements in the sample size lower bounds are possible under additional structural assumptions, which arise naturally in the above-mentioned contexts for data with complex dependencies. For example, the matrix-normal model [7] as studied in [1, 24, 35, 43] restricts the topology of the graph to tensor product graphs, where the precision matrix corresponds to a Kronecker product over two component graphs. In [43], the author showed that one can estimate the covariance and inverse covariance matrices well using only one instance from the matrix variate normal distribution. However, such a normality assumption is also not needed, as elaborated in a recent paper by the same author [44]. More specifically, while the precision matrix encodes conditional independence relations for Gaussian distributions, for the more general sub-gaussian matrix variate model, this no longer holds. However, the inverse covariance matrix still encodes certain zero correlation relations between the residual errors and the covariates in a regression model, analogous to the commonly known Gaussian regression model [23]. See [44], where such a regression model is introduced for sub-gaussian matrix variate data. See also [17] and references therein for more recent applications of matrix variate models in genomics.

Along similar lines, the Bigraphical Lasso framework was proposed to parsimoniously model conditional dependence relationships of matrix variate data based on the Cartesian product of graphs. As pointed out in [19], the associativity of the Kronecker sum yields an approach to the modeling of datasets organized into 3 or higher-order tensors. In [16], we explored this possibility to a great extent, by (a) introducing the tensor graphical Lasso (TeraLasso) procedure for estimating sparse $L$-way decomposable inverse covariance matrices for all $L\geq 2$; and (b) showing the rates of convergence in the Frobenius and the operator norm for estimating this class of inverse covariance matrices for sub-gaussian tensor-valued data.

Consider the mean zero $L$-order random tensor $\boldsymbol{\mathscr{X}}\in\mathbb{R}^{d_{1}\times\dots\times d_{L}}$, and assume that we are given $n$ independent samples $\boldsymbol{\mathscr{X}}_{1},\ldots,\boldsymbol{\mathscr{X}}_{n}\sim\boldsymbol{\mathscr{X}}$. Here $\sim$ represents that two vectors follow the same distribution. Denote by $\mathbf{p}=[d_{1},\dots,d_{L}]$ the vector of component dimensions and $p$ the product of the $d_{j}$s. Hence

\displaystyle\mathrm{vec}(\boldsymbol{\mathscr{X}})\in\mathbb{R}^{p},\quad\text{where }\quad p=\prod\nolimits_{k}d_{k}\quad\text{and}
\displaystyle m_{k}=\prod\nolimits_{i\neq k}d_{i}={p}/{d_{k}} (1)

is the effective sample size we have to estimate the relations among the $d_{k}$ features along the $k^{th}$ mode in the tensor model. It was shown that due to the element replication inherent in the Cartesian product structure, the precision matrix in the TeraLasso model can be accurately estimated from limited data samples of high dimensional variables with multiway coordinates such as space, time and replicates. In particular, single sample convergence was proved for $L>2$ and empirically observed for all $L$. In contrast, direct application of the models in [12, 45] and the analysis frameworks in [30, 45, 46] requires the sample size $n$ to scale proportionally to $p$ when the total number of edges is $\propto p$, which is still often too large to be practical. As a result, it is common to assume certain axes of $\boldsymbol{\mathscr{X}}$ are i.i.d., often an overly simplistic model.

Contributions. Previously in [16], we provided theoretical guarantees for the TeraLasso estimator (2) when the sample size is low, including single-sample convergence when $L\geq 3$. In the present work, we strengthen the bounds for the relative errors in the operator and Frobenius norm in [16] by a factor of $\log p$, improving upon those in Theorem 3.2 (as originally proved in [16]). These faster rates of convergence are stated in Theorem 3.1 in the present paper. This significant improvement is due to the tighter error bounds on the diagonal component of the loss function, cf. Lemma 2.2. We now show that the TeraLasso estimator achieves low errors with a constant number of replicates, namely $n=O(1)$, even in the $L=2$ regime. This closes the gap between the low single-sample error for the two-way model empirically observed in [16] and the theoretical bounds therein. The key technical innovation in the present work is the uniform concentration of measure bounds on the trace terms appearing in the diagonal component of the loss function (2). The proof techniques may be of independent interest. In the proofs, we highlight tensor unfolding techniques and applications of the Hanson-Wright inequality, which are crucial in analyzing sub-gaussian tensor-valued data with complex dependencies. Experiments that confirm the derived statistical rates are provided in the arXiv preprint arXiv:2304.00441v1.

1.1 Related Work

Models similar to the Kronecker sum precision model have been successfully used in a variety of fields, including regularization of multivariate splines [39, 9, 21, 40], design of physical networks [18, 36, 11], and Sylvester equations arising from the discretization of separable $L$-dimensional PDEs with tensorized finite elements [13, 22, 4, 34, 10]. Additionally, Kronecker sums find extensive use in applied mathematics and statistics, including beam propagation physics [2], control theory [26, 5], fluid dynamics [8], errors-in-variables [32], and spatio-temporal modeling and neural processes [14, 33]. When the data indeed follow a matrix normal model, the BiGLasso [19] and TeraLasso [16] also effectively recover the conditional dependence graphs and precision matrices simultaneously for a class of Gaussian graphical models by restricting the topology to Cartesian product graphs. We provided a composite gradient-based optimization algorithm, and obtained algorithmic and statistical rates of convergence for estimating structured precision matrices for tensor-valued data [16]. Recently, several methods have arisen that can speed up the numerical convergence of the optimization of the BiGLasso objective of [19], cf. (2) with $L=2$. A Newton-based optimization algorithm for $L=2$ was presented in [41] that provides significantly faster convergence in ill-conditioned settings. Subsequently, [25] also developed a scalable flip-flop approach, building upon the original BiGLasso flip-flop algorithm as derived in [19]. Using a Kronecker sum eigenvalue decomposition similar to that of [16] to make the memory requirements scalable, their algorithm also provides faster numerical convergence than the first-order algorithm presented in [16], especially when the precision matrix is ill-conditioned. They also provided a Gaussian copula approach for applying the model to certain non-Gaussian data. As our focus in the current work is the statistical convergence rate, not optimization convergence, we continue to use the first-order optimization algorithm of [16] in our experiments because it is flexible enough to handle $L>2$. For $L=2$, it will recover the same estimate as the algorithms of [25] and [41], since the objective function is convex and all three algorithms are guaranteed to converge to the unique global minimum. Although non-convex regularizers were also considered in [16], we do not consider these in the present work. Instead, we focus on deriving sharp statistical rates of convergence in the Frobenius and operator norm for estimating the precision matrix (5) with the convex objective (2). Subsequent to [16], a related SG-PALM estimator was presented in [37], where the precision matrix is the square of an $L$-way Kronecker sum. See [38] for a recent survey of multiway covariance models.

Organization. The rest of the paper is organized as follows. In Section 2, we define the model and the method, with discussions. Section 3 presents the main technical results. Section 4 presents key technical proofs regarding the diagonal component of the loss function. We conclude in Section 5. We place additional technical proofs in the supplementary material.

1.2 Definitions and notations

Let $e_{1},\ldots,e_{n}$ be the canonical basis of ${\mathbb{R}}^{n}$. Let $B_{2}^{n}$ and $\mathbb{S}^{n-1}$ be the unit Euclidean ball and the unit sphere of ${\mathbb{R}}^{n}$, respectively. For a set $J\subset\{1,\ldots,n\}$, denote $E_{J}=\operatorname{span}\{e_{j}:j\in J\}$. We denote by $[n]$ the set $\{1,\ldots,n\}$. We use $A$ for matrices, $\boldsymbol{\mathscr{A}}$ for tensors, and $\mathbf{a}$ for vectors. For $\boldsymbol{\mathscr{A}}\in\mathbb{R}^{d_{1}\times d_{2}\times\ldots\times d_{N}}$, we use $\mathrm{vec}(\boldsymbol{\mathscr{A}})\in\mathbb{R}^{d_{1}d_{2}\cdots d_{N}}$ as in [20], and define $\boldsymbol{\mathscr{A}}^{T}\in\mathbb{R}^{d_{N}\times\dots\times d_{2}\times d_{1}}$ by analogy to the matrix transpose, i.e. $[\boldsymbol{\mathscr{A}}^{T}]_{i_{1},\dots,i_{N}}=\boldsymbol{\mathscr{A}}_{i_{N},\dots,i_{1}}$.

We refer to a vector $v\in{\mathbb{R}}^{n}$ with at most $d\in[n]$ nonzero entries as a $d$-sparse vector. For a finite set $V$, the cardinality is denoted by $\left\lvert V\right\rvert$. For a vector $x=(x_{1},\ldots,x_{n})\in{\mathbb{R}}^{n}$, denote by $\left\lVert x\right\rVert_{2}=\sqrt{\sum_{i=1}^{n}x_{i}^{2}}$ and $\left\lvert x\right\rvert_{1}:=\sum_{j}\left\lvert x_{j}\right\rvert$. For a given vector $x\in{\mathbb{R}}^{m}$, $\mathrm{diag}(x)$ denotes the diagonal matrix whose main diagonal entries are the entries of $x$. For a symmetric matrix $A$, let $\phi_{\max}(A)$ and $\phi_{\min}(A)$ be the largest and the smallest eigenvalue of $A$ respectively. For a matrix $A$, we use $\left\lVert A\right\rVert_{2}$ to denote its operator norm and $\left\lVert A\right\rVert_{F}$ the Frobenius norm, given by $\left\lVert A\right\rVert_{F}=(\sum_{i,j}a_{ij}^{2})^{1/2}$. For a matrix $A=(a_{ij})$ of size $m\times n$, let $\left\lVert A\right\rVert_{\infty}=\max_{i}\sum_{j=1}^{n}|a_{ij}|$ and $\left\lVert A\right\rVert_{1}=\max_{j}\sum_{i=1}^{m}|a_{ij}|$ denote the maximum absolute row and column sum of the matrix $A$ respectively. Let $\left\lVert A\right\rVert_{\max}=\max_{i,j}|a_{ij}|$ denote the componentwise matrix maximum norm. Let $\mathrm{diag}(A)$ be the diagonal of $A$. Let $\mathrm{offd}(A)=A-\mathrm{diag}(A)$. Let $\kappa(A)=\phi_{\max}(A)/\phi_{\min}(A)$ denote the condition number of the matrix $A$. Denote by $\left\lvert A\right\rvert$ the determinant of $A$. We use the inner product $\langle A,B\rangle={\rm tr}(A^{T}B)$ throughout. The inner product of two same-sized tensors $\boldsymbol{\mathscr{X}},\boldsymbol{\mathscr{Y}}\in{\mathbb{R}}^{d_{1}\times d_{2}\times\ldots\times d_{N}}$ is the sum of the products of their entries, i.e.,

\displaystyle\;\langle{\,\boldsymbol{\mathscr{X}},\boldsymbol{\mathscr{Y}}\,}\rangle\;=\sum_{i_{1}=1}^{d_{1}}\sum_{i_{2}=1}^{d_{2}}\ldots\sum_{i_{N}=1}^{d_{N}}x_{i_{1}i_{2}\ldots i_{N}}y_{i_{1}i_{2}\ldots i_{N}}, (2)

where $x_{i_{1},\dots,i_{N}}$ denotes the $(i_{1},\dots,i_{N})$ element of $\boldsymbol{\mathscr{X}}$. Fibers are the higher-order analogue of matrix rows and columns. Following the definition in [20], the tensor unfolding or matricization of $\boldsymbol{\mathscr{X}}$ along the $k$th mode is denoted by ${\bf{X}}^{(k)}$, and is formed by arranging the mode-$k$ fibers as columns of the resulting matrix of dimension $d_{k}\times m_{k}$. Denote by ${\bf{X}}^{(k)T}$ its transpose. When extracted from the tensor, fibers are always assumed to be oriented as column vectors. The specific permutation of columns is not important so long as it is consistent across related calculations [20]. Denote by $X_{j}^{(k)}$ the $j^{th}$ column vector of ${\bf{X}}^{(k)}\in{\mathbb{R}}^{d_{k}\times m_{k}}$. One can compute the corresponding (rescaled) Gram matrix $S^{k}$ as ${\bf{X}}^{(k)}{\bf{X}}^{(k)T}/m_{k}$. Thus, we have $m_{k}$ columns to compute $m_{k}S^{k}={\bf{X}}^{(k)}{\bf{X}}^{(k)T}$:

\displaystyle{\bf{X}}^{(k)}{\bf{X}}^{(k)T}=\sum_{j=1}^{m_{k}}X_{j}^{(k)}\otimes X_{j}^{(k)}. (3)
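To make the mode-$k$ unfolding and the rescaled Gram matrix in (3) concrete, here is a minimal NumPy sketch (ours, not from the paper; the helper names unfold and gram are illustrative). It relies on the fact noted above that the column permutation of the unfolding does not affect $S^{k}$.

import numpy as np

def unfold(X, k):
    # Mode-k unfolding: arrange the mode-k fibers of X as columns,
    # giving a d_k x m_k matrix, where m_k is the product of the other dimensions.
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def gram(X, k):
    # Rescaled mode-k Gram matrix S^k = X^(k) X^(k)^T / m_k, cf. eq. (3).
    Xk = unfold(X, k)
    return Xk @ Xk.T / Xk.shape[1]

# Toy order-3 tensor with dimensions [d1, d2, d3] = [4, 5, 6]
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5, 6))
print(unfold(X, 0).shape, gram(X, 0).shape)   # (4, 30) (4, 4)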

For two numbers $a,b$, $a\wedge b:=\min(a,b)$ and $a\vee b:=\max(a,b)$. We write $a\asymp b$ if $ca\leq b\leq Ca$ for some positive absolute constants $c,C$ that are independent of $n,m$, sparsity, and sampling parameters. We write $f=O(g)$ or $f\ll g$ if $\left\lvert f\right\rvert\leq Cg$ for some absolute constant $C<\infty$, and $f=\Omega(g)$ or $f\gg g$ if $g=O(f)$. We write $f=o(g)$ if $f/g\to 0$ as $n\to\infty$, where the parameter $n$ will be the size of the matrix under consideration. In this paper, $C,c,c^{\prime},C_{0},C_{1},\ldots$, etc., denote various absolute positive constants which may change from line to line.

2 The model and the method

For a random variable $Z$, the subgaussian (or $\psi_{2}$) norm of $Z$, denoted by $\left\lVert Z\right\rVert_{\psi_{2}}$, is defined as $\left\lVert Z\right\rVert_{\psi_{2}}=\inf\{t>0\;:\;{\mathbb{E}}\exp(Z^{2}/t^{2})\leq 2\}$. Consider the tensor-valued data $\boldsymbol{\mathscr{X}}$ generated from a subgaussian random vector $Z=(Z_{j})\in{\mathbb{R}}^{p}$ with independent mean-zero unit variance components whose $\psi_{2}$ norms are uniformly bounded:

\displaystyle\rm{vec}\{\,\boldsymbol{\mathscr{X}}\,\}=\Sigma_{0}^{1/2}Z,\;\;\text{ where } (4)
\displaystyle\mathbb{E}\left(Z_{j}\right)=0,\quad{\mathbb{E}}Z_{j}^{2}=1,\;\text{ and}\;\left\lVert Z_{j}\right\rVert_{\psi_{2}}\leq C_{0},\;\forall j.

We refer to $\boldsymbol{\mathscr{X}}\in\mathbb{R}^{d_{1}\times\dots\times d_{L}}$ as an order-$L$ sub-gaussian random tensor with covariance $\Sigma_{0}\in\mathbb{R}^{p\times p}$ throughout this paper. Let $\Omega_{0}=\Sigma_{0}^{-1}$. We assume that the precision matrix $\Omega_{0}=\Psi_{1}\oplus\dots\oplus\Psi_{L}$ of $\boldsymbol{\mathscr{X}}$ is the $L$-way Kronecker sum of matrix components $\{\Psi_{k}\}_{k=1}^{L}$. As such, we have

\displaystyle\Omega_{0}=\sum_{k=1}^{L}I_{[d_{1:k-1}]}\otimes\Psi_{k}\otimes I_{[d_{k+1:L}]}, (5)
\displaystyle\text{where }\;{I}_{[d_{k:\ell}]}:=\underbrace{{I}_{d_{k}}\otimes\dots\otimes{I}_{d_{\ell}}}_{\ell-k+1\;\mathrm{factors}},

where $\otimes$ denotes the Kronecker (direct) product and $\ell\geq k$. The precision matrix (5) has an immediate connection to the $L$ positive-semidefinite Gram matrices $S_{n}^{k}\succeq 0\in\mathbb{R}^{d_{k}\times d_{k}}$ associated with each mode of the tensor $\boldsymbol{\mathscr{X}}$, through tensor unfolding. Let $\boldsymbol{\mathscr{X}}_{1},\ldots,\boldsymbol{\mathscr{X}}_{n}\in\mathbb{R}^{d_{1}\times\dots\times d_{L}}\sim\boldsymbol{\mathscr{X}}$ be $n$ i.i.d. random tensors following (4). We define the mode-$k$ Gram matrix $S_{n}^{k}$ and its corresponding factor-wise marginal covariance $\Sigma_{0}^{(k)}=\mathbb{E}[S^{k}]$ as follows:

\displaystyle S_{n}^{k}=\frac{1}{nm_{k}}\sum_{i=1}^{n}{\bf{X}}^{(k,i)}[{\bf{X}}^{(k,i)}]^{T}\quad\mathrm{and} (6)
\displaystyle\Sigma_{0}^{(k)}=\frac{1}{m_{k}}\mathbb{E}[{\bf{X}}^{(k)}{\bf{X}}^{(k)T}],\quad k=1,\dots,L; (7)
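For intuition about the Kronecker sum structure in (5), the following small sketch (ours, not the estimation code of [16]) builds $\Omega_{0}=\Psi_{1}\oplus\dots\oplus\Psi_{L}$ explicitly from the factors; it materializes the full $p\times p$ matrix, so it is only meant for small dimensions.

import numpy as np
from functools import reduce

def kron_all(mats):
    # Kronecker product of a list of matrices (an empty list gives the 1 x 1 identity).
    return reduce(np.kron, mats, np.eye(1))

def kron_sum(psis):
    # L-way Kronecker sum Psi_1 (+) ... (+) Psi_L as in eq. (5).
    ds = [Psi.shape[0] for Psi in psis]
    p = int(np.prod(ds))
    Omega = np.zeros((p, p))
    for k, Psi in enumerate(psis):
        left = kron_all([np.eye(d) for d in ds[:k]])       # I_[d_{1:k-1}]
        right = kron_all([np.eye(d) for d in ds[k + 1:]])  # I_[d_{k+1:L}]
        Omega += np.kron(np.kron(left, Psi), right)
    return Omega

# Example with L = 2: Psi_1 (+) Psi_2 = Psi_1 kron I_{d2} + I_{d1} kron Psi_2
Psi1 = np.array([[2.0, -0.5], [-0.5, 2.0]])
Psi2 = np.diag([1.0, 1.5, 2.0])
print(kron_sum([Psi1, Psi2]).shape)   # (6, 6)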

See [15]. Denote by $\mathcal{K}_{\mathbf{p}}^{\sharp}$ the set of positive definite matrices that are decomposable into a Kronecker sum of fixed factor dimensions $\mathbf{p}=[d_{1},\ldots,d_{L}]$:

\displaystyle\quad\quad\quad\mathcal{K}_{\mathbf{p}}^{\sharp}=\{A\succ 0\;|\;A\in\mathcal{K}_{\mathbf{p}}\subset\mathbb{R}^{p\times p}\}, (8)
\displaystyle\mathcal{K}_{\mathbf{p}}=\{{A}:\exists\>{B}_{k}\in\mathbb{R}^{d_{k}\times d_{k}}\>\mathrm{s.t.}\>{A}={B}_{1}\oplus\dots\oplus{B}_{L}\}.

The TeraLasso estimator [16] minimizes the negative $\ell_{1}$-penalized Gaussian log-likelihood function $Q(\Omega)$ over the domain $\mathcal{K}_{\mathbf{p}}^{\sharp}$ of precision matrices $\Omega$, where

\displaystyle\quad Q(\Omega):=-\log\left\lvert\Omega\right\rvert+\;\langle{\,\widehat{S},\Omega\,}\rangle\;+\sum_{k=1}^{L}m_{k}\rho_{n,k}\left\lvert{\Psi}_{k}\right\rvert_{1,{\rm off}}, (9)
\displaystyle\quad\quad\quad\text{ where }\;\;\widehat{S}=\frac{1}{n}\sum_{i=1}^{n}\rm{vec}\{\,\boldsymbol{\mathscr{X}}_{i}^{T}\,\}(\rm{vec}\{\,\boldsymbol{\mathscr{X}}_{i}^{T}\,\})^{T}, (10)
\displaystyle\text{and for each $k$},\quad\left\lvert{\Psi}_{k}\right\rvert_{1,{\rm off}}=\sum_{i\neq j}\left\lvert\Psi_{k,ij}\right\rvert,\text{ and }\quad\rho_{n,k}>0

is a penalty parameter to be specified. Here, the weight $m_{k}=p/d_{k}$ (1) for each $k$ is determined by the number of times the structure $\Psi_{k}$ is replicated in $\Omega_{0}$. Then for $\mathcal{K}_{\mathbf{p}}^{\sharp}$ as in (8), we have

\displaystyle\quad\quad\quad\text{(TeraLasso)}\quad\widehat{\Omega}:=\mathop{\text{arg\,min}}_{\Omega\in\mathcal{K}_{\mathbf{p}}^{\sharp}}Q(\Omega)=
\displaystyle\mathop{\text{arg\,min}}_{\Omega\in\mathcal{K}_{\mathbf{p}}^{\sharp}}\big{(}-\log|\Omega|+\sum_{k=1}^{L}m_{k}\big{(}\langle S_{n}^{k},\Psi_{k}\rangle+\rho_{n,k}\left\lvert{\Psi}_{k}\right\rvert_{1,{\rm off}}\big{)}\big{)}.

The objective function (2) depends on the training data via the coordinate-wise Gram matrices $S_{n}^{k}$: we replace the trace term $\;\langle{\,\widehat{S},\Omega\,}\rangle\;$ in (9) with the weighted sum of component-wise trace terms in (2), in view of (5) and (6), cf. Lemma 2.1. Here and in [16], the set of penalty parameters $\{\rho_{n,k},k=1,\ldots,L\}$ is chosen to dominate the maximum of entrywise errors for estimating the population $\Sigma_{0}^{(k)}$ (7) with the sample $S^{k}_{n}$ as in (6), for each $k\leq L$ on event $\mathcal{T}$; cf. (12). This choice works equally well for the subgaussian model (4). For $L=2$ and $\Omega_{0}=\Psi_{1}\oplus\Psi_{2}=\Psi_{1}\otimes I_{d_{2}}+I_{d_{1}}\otimes\Psi_{2}$, the objective function (2) is similar in spirit to the BiGLasso objective [19], where $S^{k}_{n},k=1,2$, correspond to the Gram matrices computed from the row and column vectors of matrix variate samples $X_{1},\ldots,X_{n}\in{\mathbb{R}}^{d_{1}\times d_{2}}$ respectively. When $\Omega_{0}=\Psi_{1}\otimes\Psi_{2}$ is a Kronecker product rather than a Kronecker sum over the factors, the objective function (2) is also closely related to the Gemini estimators in [43], where $\log\left\lvert\Omega_{0}\right\rvert$ is a linear combination of $\log\left\lvert\Psi_{k}\right\rvert,k=1,2$. When $\boldsymbol{\mathscr{X}}$ follows a multivariate Gaussian distribution and the precision matrix $\Omega_{0}$ has a decomposition of the form (5), the sparsity pattern of $\Psi_{k}$ for each $k$ corresponds to the conditional independence graph across the $k^{\text{th}}$ dimension of the data. Similar to the graphical Lasso, incorporating an $\ell_{1}$-penalty promotes a sparse graphical structure in the $\Psi_{k}$ and, by extension, in $\widehat{\Omega}$. See for example [6, 3, 42, 46, 29, 19, 43, 16] and references therein.
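Because the eigenvalues of a Kronecker sum are sums of one eigenvalue from each factor (the additivity used repeatedly below), the $\log\left\lvert\Omega\right\rvert$ term in (9) can be computed from the factors alone, which is why the objective in (2) only needs the component-wise Gram matrices. The sketch below (ours) simply evaluates the objective for given factors; it is not the optimization algorithm of [16], and S_list and rho_list denote the mode-wise Gram matrices $S_{n}^{k}$ and penalties $\rho_{n,k}$ supplied by the user.

import numpy as np
from itertools import product

def kron_sum_logdet(psis):
    # log|Psi_1 (+) ... (+) Psi_L| via additivity of Kronecker-sum eigenvalues;
    # assumes the Kronecker sum is positive definite.
    eigs = [np.linalg.eigvalsh(Psi) for Psi in psis]
    return float(sum(np.log(sum(combo)) for combo in product(*eigs)))

def teralasso_objective(psis, S_list, rho_list):
    # Penalized negative log-likelihood Q(Omega) of (9), written as in (2)
    # with the weighted component-wise trace terms m_k <S^k, Psi_k>.
    ds = [Psi.shape[0] for Psi in psis]
    p = int(np.prod(ds))
    val = -kron_sum_logdet(psis)
    for Psi, Sk, rho, d in zip(psis, S_list, rho_list, ds):
        m_k = p // d
        offdiag_l1 = np.abs(Psi).sum() - np.abs(np.diag(Psi)).sum()
        val += m_k * (np.trace(Sk @ Psi) + rho * offdiag_l1)
    return val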

2.1 The projection perspective

Throughout this paper, the subscript $n$ is omitted from $S_{n}^{k}$ and $\rho_{n,k}$ ($\delta_{n,k}$) in case $n=1$ to avoid clutter in the notation. Lemma 2.1 explains the smoothing ideas. Intuitively, we use the $m_{k}$ fibers to estimate relations between and among the $d_{k}$ features along the $k^{th}$ mode, as encoded in $\Psi_{k}$. Hence, this forms the aggregation of all data from modes other than $k$, which allows the uniform concentration of measure bounds of Lemma 2.2 to be established.

Lemma 2.1.

(KS trace: Projection lemma) Consider the mean zero $L$-order random tensor $\boldsymbol{\mathscr{X}}\in\mathbb{R}^{d_{1}\times\dots\times d_{L}}$. Denote by $X_{j}^{(k)}\in{\mathbb{R}}^{d_{k}}$ the $j^{\text{th}}$ column vector in the matrix ${\bf{X}}^{(k)}\in{\mathbb{R}}^{d_{k}\times m_{k}}$ formed by tensor unfolding. Now let ${\bf{Y}}^{(k)}={\bf{X}}^{(k)T}$. Denote by $Y_{i}^{(k)}\in{\mathbb{R}}^{m_{k}}$ the $i^{\text{th}}$ row vector in ${\bf{X}}^{(k)}$. Denote by $T:=\;\langle{\,\widehat{S},\Omega_{0}\,}\rangle\;$. Then for the sample covariance $\widehat{S}:=\rm{vec}\{\,\boldsymbol{\mathscr{X}}^{T}\,\}\otimes\rm{vec}\{\,\boldsymbol{\mathscr{X}}^{T}\,\}$ and $\Omega_{0}$ as in (5),

\displaystyle T=\sum_{k=1}^{L}\;\langle{\,m_{k}S^{k},\Psi_{k}\,}\rangle\;=\sum_{k=1}^{L}\sum_{j=1}^{m_{k}}\;\langle{\,\Psi_{k},X_{j}^{(k)}\otimes X_{j}^{(k)}\,}\rangle\;,

where $m_{k}S^{k}$ is the same as in (3). Then we also have

\displaystyle T=\sum_{k=1}^{L}\sum_{i,j}^{d_{k}}\Psi_{k,ij}\;\langle{\,Y^{(k)}_{i},Y^{(k)}_{j}\,}\rangle\;=\sum_{k=1}^{L}{\rm tr}({\bf{Y}}^{(k)}\Psi_{k}{\bf{Y}}^{(k)T}).

Here $\rm{vec}\{\,A\,\}$ of a matrix $A\in\mathbb{R}^{d_{k}\times m_{k}}$ is obtained by stacking the columns of $A$ into a long vector of size $p=d_{k}m_{k}$.
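Lemma 2.1 is also easy to check numerically. A small sanity-check sketch (ours), reusing the unfold/gram and kron_sum helpers from the earlier sketches, and assuming the vectorization convention under which $\rm{vec}\{\,\boldsymbol{\mathscr{X}}^{T}\,\}$ corresponds to the row-major (C-order) flattening of $\boldsymbol{\mathscr{X}}$, matching the Kronecker ordering in (5):

import numpy as np

rng = np.random.default_rng(1)
ds = [3, 4, 5]
p = int(np.prod(ds))
# Arbitrary symmetric factors; the identity is linear in Psi_k,
# so positive definiteness is not needed for this check.
psis = []
for d in ds:
    A = rng.standard_normal((d, d))
    psis.append((A + A.T) / 2)
Omega0 = kron_sum(psis)                       # Kronecker sum as in eq. (5)

X = rng.standard_normal(ds)                   # one tensor sample (n = 1)
v = X.reshape(-1)                             # vec{X^T} under the assumed convention
lhs = v @ Omega0 @ v                          # <S_hat, Omega_0> with S_hat = v v^T
rhs = sum((p // d) * np.trace(gram(X, k) @ psis[k])   # sum_k m_k <S^k, Psi_k>
          for k, d in enumerate(ds))
print(np.allclose(lhs, rhs))                  # True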

Lemma 2.2.

Let $d_{\max}=\max_{k}d_{k}$ and $m_{\min}:=\min_{k}m_{k}$. Let $\Delta_{\Omega}\in\mathcal{K}_{\mathbf{p}}$. Under the conditions in Lemma 2.1, we have for $\Sigma_{0}=\Omega_{0}^{-1}$,

\displaystyle\frac{\left\lvert\;\langle{\,\mathrm{diag}(\Delta_{\Omega}),\widehat{S}-\Sigma_{0}\,}\rangle\;\right\rvert}{\left\lVert\Sigma_{0}\right\rVert_{2}\left\lVert\mathrm{diag}(\Delta_{\Omega})\right\rVert_{F}}\leq C_{\mathrm{diag}}\sqrt{d_{\max}L}\big{(}1+\sqrt{\frac{d_{\max}}{m_{\min}}}\big{)}

with probability at least $1-\sum_{k=1}^{L}2\exp(-cd_{k})$.

Discussions. For simplicity, we state Lemma 2.1 for the trace term $\;\langle{\,\widehat{S},\Omega_{0}\,}\rangle\;$ in case $n=1$, with obvious extensions for $n>1$ and for any $\Omega\in\mathcal{K}_{\mathbf{p}}^{\sharp}$. Notice that following (3), $Y_{j}^{(k)}\in{\mathbb{R}}^{m_{k}},\forall j\in[d_{k}]$, and

\displaystyle\forall i,j\in[d_{k}],\quad m_{k}S^{k}_{ij}=\;\langle{\,Y_{i}^{(k)},Y_{j}^{(k)}\,}\rangle\;,

which in turn can be interpreted as the tensor inner product (2) with $N=L-1$. We mention in passing that $m_{\min}$ (resp. $nm_{\min}$) appears in the rates of convergence in Theorems 3.1 and 3.2 as the effective sample size for estimating $\Omega_{0}$ for $n=1$ (resp. $n>1$). We prove Lemma 2.2 in Section 4. Before leaving this section, we define the support set of $\Omega_{0}$.

Definition 2.3.

(The support set of $\Omega_{0}$) For each $\Psi_{k}$, $k=1,\dots,L$, denote its support set by $\mathop{\text{\rm supp}}(\mathrm{offd}(\Psi_{k}))=\{(i,j):i\neq j,\Psi_{k,ij}\neq 0\}$. Let $s_{k}:=\left\lvert\mathop{\text{\rm supp}}(\mathrm{offd}(\Psi_{k}))\right\rvert$, for all $k$. Similarly, denote the support set of $\Omega_{0}$ by $\mathcal{S}=\{(i,j):i\neq j,\Omega_{0,ij}\neq 0\}$, with $s:=\left\lvert\mathcal{S}\right\rvert=\sum_{k=1}^{L}m_{k}s_{k}$.

3 The main theorem

In the present work, due to the tighter error bound on the diagonal component of the loss function as stated in Lemma 2.2, we achieve the sharper rates of convergence in Theorem 3.1, which significantly improve upon the earlier results in [16] as stated in Theorem 3.2. Specifically, for the relative errors in the operator and Frobenius norm, the factor $p\log p$ in the earlier bounds is replaced with $p$ in Theorem 3.1 of the present work. Under suitable assumptions on the sparsity parameter $s$ and the dimensions $d_{k},\forall k\in[L]$, consistency and the rate of convergence in the operator norm can be obtained for all $n$ and $L$. First, we state the set of assumptions, (A1), (A2) and (A3).

  • (A1)

    Let $\min_{k}m_{k}\geq\log p$. Denote by $\delta_{n,k}\asymp\left\lVert\Sigma_{0}\right\rVert_{2}\sqrt{\frac{\log p}{nm_{k}}}$ for $k=1,\ldots,L$. Suppose $\rho_{n,k}=\delta_{n,k}/\varepsilon_{k}$, where $0<\varepsilon_{k}<1$ for all $k$.

  • (A2)

    The smallest eigenvalue $\phi_{\min}({\Omega}_{0})=\sum_{k=1}^{L}\phi_{\min}({\Psi}_{k})\geq\underline{k}_{\Omega}>0$, and the largest eigenvalue $\phi_{\max}({\Omega}_{0})=\sum_{k=1}^{L}\phi_{\max}({\Psi}_{k})\leq\overline{k}_{\Omega}<\infty$.

  • (A3)

    The sample size $n$ satisfies the following:

    \displaystyle n(m_{\min})^{2}\geq C^{2}(L+1)\kappa(\Sigma_{0})^{4}(s\log p+Lp),

    where $m_{\min}:=\min_{k}m_{k}$, $s=\sum_{k}m_{k}s_{k}$ is as in Definition 2.3, and $C$ is an absolute constant.

Theorem 3.1.

(Main result) Suppose (A1), (A2), and (A3) hold. Then for absolute constants $C,c$, and $C_{L}:=C\sqrt{L+1}$, with probability $\geq 1-L\exp(-c\log p)$,

\displaystyle{\left\lVert\widehat{\Omega}-\Omega_{0}\right\rVert_{F}}/{\left\lVert\Omega_{0}\right\rVert_{2}}\leq C\kappa(\Sigma_{0})\big{(}\frac{s\log p+Lp}{nm_{\min}}\big{)}^{1/2},
\displaystyle{\left\lVert\widehat{\Omega}-\Omega_{0}\right\rVert_{2}}/{\left\lVert\Omega_{0}\right\rVert_{2}}\leq C_{L}\kappa(\Sigma_{0})\big{(}\frac{s\log p+Lp}{nm^{2}_{\min}}\big{)}^{1/2},
\displaystyle{\left\lVert\widehat{\Omega}-\Omega_{0}\right\rVert_{F}}/{\left\lVert\Omega_{0}\right\rVert_{F}}\leq C_{L}\kappa(\Sigma_{0})\big{(}\frac{s\log p+Lp}{nm^{2}_{\min}}\big{)}^{1/2}.

The condition number for $\Sigma_{0}=\Omega_{0}^{-1}$ is defined as

\displaystyle\kappa(\Sigma_{0})=\kappa(\Omega_{0})=\left\lVert\Omega_{0}\right\rVert_{2}\left\lVert\Omega_{0}^{-1}\right\rVert_{2}=\frac{\sum_{k=1}^{L}\phi_{\max}({\Psi}_{k})}{\sum_{k=1}^{L}\phi_{\min}({\Psi}_{k})},

where we have used the additivity of the eigenvalues of the Kronecker sum. Following [16], we focus on error bounds for the estimate of $\Omega_{0}$ itself, rather than the individual factors, in the present work. We emphasize that we retain essentially the same error bound as that in [16] for the off-diagonal component of the trace terms in (2). Denote by $\mathcal{T}=\bigcap_{k=1}^{L}\mathcal{T}_{k}$, where

\displaystyle\text{ event }\;\mathcal{T}_{k}=\left\{\max_{i\not=j}\left\lvert{S}^{k}_{n,ij}-\Sigma_{0,ij}^{(k)}\right\rvert\leq\delta_{n,k}\right\}, (12)
\displaystyle\text{ for }\delta_{n,k}\asymp\left\lVert\Sigma_{0}\right\rVert_{2}\sqrt{{\log p}/{(nm_{k})}}>0. (13)

Event $\mathcal{T}$ is needed to control the off-diagonal component of the loss function as in the supplementary Lemma A.3, cf. Proof of Lemma 12 [15].

Intuitively, we use $nm_{k}$ fibers to estimate relations between and among the $d_{k}$ features along the $k^{th}$ mode as encoded in $\Psi_{k}$, and this allows optimal statistical rates of convergence to be derived in terms of entrywise errors for estimating $\Sigma_{0}^{(k)}$ (7) with $S^{k}_{n}$ (6). Correspondingly, the events $\{\mathcal{T}_{k},k=1,\ldots,L\}$ in (12), which were originally defined in [16], are used in the present work to reflect this sample aggregation, with $nm_{k}$ being the effective sample size for estimating $\Psi_{k}$. Indeed, as shown in Theorem 3.2 (restated from [16]), these entrywise error bounds already enabled a significant improvement in the sample size lower bound needed to estimate the parameters and the associated conditional independence graphs along different coordinates such as space, time and experimental conditions. However, these entrywise error bounds are not sufficient to achieve the operator norm-type bounds of Theorem 3.1 for inverse covariance estimation. In fact, using the entrywise error bounds to control the diagonal components of the trace terms would result in an extra $\log p$ factor in the sample size lower bound and correspondingly a slower rate of convergence. This extraneous $\log p$ factor is undesirable since the diagonal component of the loss function dominates the overall rate of convergence in sparse settings for inverse covariance estimation.

Theorem 3.2 ([16], restated).

Suppose (A1) and (A2) hold and $n(m_{\min})^{2}\geq C^{2}\kappa(\Sigma_{0})^{4}(s+p)(L+1)^{2}\log p$, where $s=\sum_{k}m_{k}s_{k}$ is as in Definition 2.3. Then

\displaystyle\frac{\|\widehat{{\Omega}}-{\Omega}_{0}\|_{F}}{\|\Omega_{0}\|_{2}}=O_{p}\big{(}\kappa(\Sigma_{0})\sqrt{L+1}\big{(}\frac{(s+p)\log p}{nm_{\min}}\big{)}^{1/2}\big{)},
\displaystyle\frac{\|\widehat{{\Omega}}-{\Omega}_{0}\|_{2}}{\|\Omega_{0}\|_{2}}=O_{p}\big{(}\kappa(\Sigma_{0})(L+1)\big{(}\frac{(s+p)\log p}{nm^{2}_{\min}}\big{)}^{1/2}\big{)}.

Summary. The worst aspect ratio is defined as $\max_{k}({d_{k}}/{m_{k}})$, that is, ${d_{\max}}/{m_{\min}}={p}/{m_{\min}^{2}}$. Clearly, a smaller aspect ratio implies a faster rate of convergence for the relative errors in the operator and Frobenius norm. First, observe that for the relative error in the operator norm in Theorem 3.2,

\displaystyle(L+1)(s+p)\log p\quad\text{therein is replaced with}
\displaystyle\quad s\log p+Lp,\quad\text{cf. Theorem 3.1}.

The same improvement holds true for the Frobenius norm error. Here we eliminate the extraneous $\log p$ factor from the diagonal component of the error through a new concentration of measure analysis in the present work; cf. Lemma 2.2. This is a significant improvement for two reasons: (a) since $p$ is the product of the $d_{k}$s, $\log p=O(\sum_{k}\log d_{k})$ is often nontrivial, especially for larger $L$; and (b) more importantly, for $L=2$ and $n=O(1)$ (in contrast to $L>2$), the error bound in the operator norm in Theorem 3.2 from [16] will diverge for any $s\geq 0$ as $p=d_{1}d_{2}$ increases, since

\displaystyle\frac{p\log p}{m_{\min}^{2}}=\frac{d_{1}d_{2}\log p}{(d_{1}\wedge d_{2})^{2}}\geq\log p, (14)

where $m_{\min}=p/(d_{1}\vee d_{2})=d_{1}\wedge d_{2}$ and equality holds only when $d_{1}=d_{2}$. As a result, in Theorem 3.2 [16], the sample size lower bound, namely, $n(m_{\min})^{2}\geq C^{2}\kappa(\Sigma_{0})^{4}(s+p)(L+1)^{2}\log p$, implies that $n=\Omega(\log p)$, since $m_{\min}^{2}\leq p$ in view of (14). In contrast, the lower bound on $nm^{2}_{\min}$ in (A3) is less stringent, saving a factor of $O(\log p)$ so long as $p=\Omega(s\log p)$.

This is consistent with the successful finite sample experiments in both [16] and the present work, where for $L=2$, bounded errors in the operator norm are observed as $p$ increases. As a result, our new bound supports the use of the TeraLasso estimator when $L=2$, so long as a small number of replicates are available, that is, when $n=o(\log p)$, in a way that the previous Theorem 3.2 cannot. More precisely, in finite sample settings, namely, when $n=O(1)$, the relative errors will still be bounded at $O_{p}(1)$ for $L=2$, for example, when the two dimensions are of the same order, $d_{1}\asymp d_{2}$, and will rapidly converge to zero for $L>2$; cf. Theorem 3.4.
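To see the gap concretely, the toy calculation below (ours) compares the quantities inside the square roots of the operator-norm bounds of Theorems 3.2 and 3.1, for $L=2$, $n=1$, $s=0$ and $d_{1}=d_{2}=d$ (so $m_{\min}=d$ and $p=d^{2}$); consistent with (14), the old factor grows like $\log p$ while the new one stays at $L$.

import numpy as np

L, n, s = 2, 1, 0
for d in [10, 100, 1000, 10000]:
    p, m_min = d * d, d
    old = (s + p) * np.log(p) / (n * m_min**2)      # Theorem 3.2 factor: grows like log p
    new = (s * np.log(p) + L * p) / (n * m_min**2)  # Theorem 3.1 factor: stays at L = 2
    print(f"d = {d:>5}:  old = {old:6.2f},  new = {new:4.2f}")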

Single sample convergence. First of all, both Theorems 3.1 and 3.2 imply $n=1$ convergence for the relative error in the operator norm when $L\geq 3$ and $d_{1}\asymp\ldots\asymp d_{L}$, which we refer to as the cubic tensor setting, since potentially $m^{2}_{\min}\geq m_{\min}d_{\max}\log p=p\log p$ will hold. See Theorem 3.4 in the sequel. However, when the $d_{k}$s are skewed, this may not be the case. To make this clear, we first state Corollary 3.3.

Corollary 3.3 (Dependence on aspect ratio for n=1n=1).

Suppose (A1), (A2) and (A3) hold for $n=1$. Then with probability at least $1-L\exp(-c\log p)$, we have for some absolute constants $c,C$,

\displaystyle(\left\lVert\widehat{\Omega}-\Omega_{0}\right\rVert_{2}/{\left\lVert\Omega_{0}\right\rVert_{2}})\vee(\left\lVert\widehat{\Omega}-\Omega_{0}\right\rVert_{F}/\left\lVert\Omega_{0}\right\rVert_{F})
\displaystyle\leq C\kappa(\Sigma_{0})\big{(}{\frac{Ld_{\max}}{m_{\min}}}\big{)}^{1/2}\big{(}\sum_{k=1}^{L}\frac{s_{k}\log p}{d_{k}}+L\big{)}^{1/2}.

Under the bounded aspect ratio regime, the relative errors in the operator and Frobenius norm for estimating the precision matrix $\Omega_{0}$ depend on the decay of the worst aspect ratio ${d_{\max}}/{m_{\min}}$ and the average of ${s_{k}\log p}/{d_{k}}$ over all modes, which represents the relative sparsity levels (sparsity / dimension) in an average sense. For $L>2$, the aspect ratio is typically much less than 1 and convergence happens rapidly. If the sparse support set is small relative to the nominal dimension $d_{k}$ along each mode, for example, when $\frac{s_{k}\log p}{d_{k}}=O(1)$, this convergence is at the rate of decay of the worst aspect ratio. In this case, the diagonal component dominates the rate of convergence and this is essentially optimal, since the largest component, with dimension $d_{\max}$, has $d_{\max}$ parameters to be estimated and $m_{\min}=p/d_{\max}$ effective samples for the task. Moreover, $L$ is needed in the bound since we estimate $L$ components all together using one sample in case $n=1$. As a first example, we consider the cubic setting where $d_{1}\asymp\ldots\asymp d_{L}\asymp p^{1/L}$.

Theorem 3.4.

(The cubic tensor: $n=1$) Suppose all conditions in Theorem 3.1 hold. Suppose $d_{k}=O(m_{k})$ for all $k$. Suppose $m_{1}\asymp m_{2}\asymp\ldots\asymp m_{L}$. Then,

\displaystyle\frac{\left\lVert\widehat{\Omega}-\Omega_{0}\right\rVert_{F}}{\kappa(\Sigma_{0})\left\lVert\Omega_{0}\right\rVert_{2}}=O_{P}\big{(}\big{(}\sum_{k=1}^{L}s_{k}\log p+Ld_{\max}\big{)}^{1/2}\big{)},\text{ and}
\displaystyle\frac{\left\lVert\widehat{\Omega}-\Omega_{0}\right\rVert_{2}}{\kappa(\Sigma_{0})\left\lVert\Omega_{0}\right\rVert_{2}}=O_{P}\big{(}\big{(}L\sum_{k=1}^{L}s_{k}\log p+L^{2}d_{\max}\big{)}^{1/2}/{m_{\min}^{1/2}}\big{)}.

Suppose in addition $d_{1}\asymp\ldots\asymp d_{L}=\Omega\big{(}({\log p}/{L})\sum_{k}s_{k}\big{)}$. Then

\displaystyle\frac{\left\lVert\widehat{\Omega}-\Omega_{0}\right\rVert_{2}}{\left\lVert\Omega_{0}\right\rVert_{2}}\vee\frac{\left\lVert\widehat{\Omega}-\Omega_{0}\right\rVert_{F}}{\left\lVert\Omega_{0}\right\rVert_{F}}=O_{p}\big{(}L\kappa(\Sigma_{0})p^{{1}/{L}-{1}/{2}}\big{)}.

The proofs of Theorems 3.1 and 3.4 and of Corollary 3.3 appear in the supplementary Section A.

Optimality of the rate. In words, a tensor is cubical if all $d_{j}$s are of the same order. Then

\displaystyle\text{aspect ratio}\quad:=\frac{d_{\max}}{m_{\min}}\asymp\frac{p^{1/L}}{p^{1-1/L}}=p^{2/L-1}. (15)

Note that for $L>2$, we obtain a fast rate of convergence in the operator norm for $n=1$, since in the cubic tensor setting, the effective sample size $m_{\min}$ increases significantly faster than $\sqrt{p}$, given that $d_{\max}=o(p^{1/2})$. More precisely, Theorem 3.4 above considers the cubic tensor setting with $n=1$, and shows that convergence will occur for the dense cubic case, so long as $m_{\min}=p/d_{\max}=\Omega\big{(}L\log p\sum_{j=1}^{L}s_{j}+L^{2}d_{\max}\big{)}$, which is a reasonable assumption in case $L>2$ and holds under (A3). In other words, the relative errors in the operator and Frobenius norm are bounded so long as the effective sample size $m_{\min}$ is at least $L^{2}d_{\max}\geq L\sum_{k}d_{k}$, which is roughly $L$ times the total number of (unique) diagonal entries in $\{\Psi_{k},k=1,\ldots,L\}$, and also at least $L\log p$ times $\sum_{k}s_{k}$, which in turn denotes the size of the total supports $\sum_{k}\left\lvert\mathcal{S}_{k}\right\rvert$ over the off-diagonal components of the factor matrices $\{\Psi_{1},\ldots,\Psi_{L}\}$. Consider now an even more special case. Suppose that in the cubic tensor setting we have in addition $d_{\max}=\Omega(\log p\sum_{j}s_{j}/L)$. Then the error in the operator norm is again dominated by the square root of the aspect ratio parameter. In other words, to achieve the near optimal rate of $O_{P}(p^{{1}/{L}-{1}/{2}})$, it is sufficient for each axis dimension $d_{k},k\in[L]$, to dominate the average sparsity across all factors, namely $\sum_{k}s_{k}/L$, by a $\log p$ factor. A more general result has been stated in Corollary 3.3.

4 The new concentration bounds

Throughout this proof, we assume $n=1$ for simplicity. We now provide an outline for proving the upper bound on the diagonal component of the main result of the paper. Recall the true parameter $\Omega_{0}=\Psi_{1}\oplus\dots\oplus\Psi_{L}$, where $\Psi_{k}\in{\mathbb{R}}^{d_{k}\times d_{k}}$ (5). Let $\Omega\in\mathcal{K}_{\mathbf{p}}$. Since $\Omega_{0}\in\mathcal{K}_{\mathbf{p}}$, we have

\displaystyle\Delta_{\Omega}:=\Omega-\Omega_{0}=\Delta_{\Psi_{1}}\oplus\Delta_{\Psi_{2}}\oplus\ldots\oplus\Delta_{\Psi_{L}}, (16)

for some $\Delta_{\Psi_{k}}\in{\mathbb{R}}^{d_{k}\times d_{k}}$ whose off-diagonal (but not diagonal) elements are uniquely determined. For self-containment, we state Lemma 4.2, where we also fix the notation used throughout this section. Here we use the trace-zero convention, which guarantees the uniqueness of the $\Delta^{\prime}_{\Psi_{k}}$ in (17). We then restate Lemma 2.2 as Lemma 4.3. The off-diagonal component has been dealt with in [16], cf. the supplementary Lemma A.3. Hence, in the rest of this section, we prove Lemma 4.3. First, we state the following bounds on the $\varepsilon$-nets of $\mathbb{S}^{d_{k}-1},\forall k$.

Lemma 4.1.

[28] Let $1/2>\varepsilon>0$. For each $k\in[L]$, one can construct an $\varepsilon$-net $\Pi_{d_{k}}$ which satisfies

\Pi_{d_{k}}\subset\mathbb{S}^{d_{k}-1}\;\text{ and }\;\left\lvert\Pi_{d_{k}}\right\rvert\leq(1+2/\varepsilon)^{d_{k}}.
Lemma 4.2.

(Decomposition lemma) [15] Let $\Omega\in\mathcal{K}_{\mathbf{p}}$. Then $\Delta_{\Omega}=\Omega-\Omega_{0}\in\mathcal{K}_{\mathbf{p}}$. To obtain a uniquely determined representation, we rewrite (16) as follows:

\displaystyle\Delta_{\Omega}=\Delta^{\prime}_{\Omega}+\tau_{\Omega}I_{p},\quad\text{ where}\quad\tau_{\Omega}={\rm tr}(\Delta_{\Omega})/p,\quad\text{ and }
\displaystyle\Delta^{\prime}_{\Omega}=\Delta^{\prime}_{\Psi_{1}}\oplus\ldots\oplus\Delta^{\prime}_{\Psi_{L}},\;\;\text{ where }\;\;{\rm tr}(\Delta^{\prime}_{\Psi_{k}})=0 (17)

for all $k$. Thus we have

\displaystyle\mathrm{diag}(\Delta^{\prime}_{\Omega})=\sum_{k=1}^{L}\mathrm{diag}(\widetilde{\Delta}_{k})\quad\text{ where} (18)
\displaystyle\mathrm{diag}(\widetilde{\Delta}_{k}):=I_{[d_{1:k-1}]}\otimes\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}})\otimes I_{[d_{k+1:L}]}, (19)

and moreover,

\displaystyle\quad\left\lVert\mathrm{diag}(\Delta_{\Omega})\right\rVert_{F}^{2}=\sum_{k=1}^{L}m_{k}\left\lVert\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}})\right\rVert_{F}^{2}+p\tau_{\Omega}^{2}, (20)
\displaystyle\sum_{k=1}^{L}\sqrt{d_{k}}\left\lVert\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}})\right\rVert_{F}\leq\sqrt{\frac{Ld_{\max}}{m_{\min}}}\left\lVert\mathrm{diag}(\Delta_{\Omega})\right\rVert_{F}.
Proof.

The existence of such a parameterization in (17) is given in Lemma 7 of [15], from which (20) immediately follows by orthogonality of the decomposition. Now we have by elementary inequalities:

k=1Ldkdiag(ΔΨk)F=k=1Ldkmkmkdiag(ΔΨk)F\displaystyle\sum_{k=1}^{L}\sqrt{d_{k}}\left\lVert\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}})\right\rVert_{F}=\sum_{k=1}^{L}\sqrt{\frac{d_{k}}{m_{k}}}\sqrt{m_{k}}\left\lVert\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}})\right\rVert_{F}
\displaystyle\leq maxkdkmkL(k=1Lmkdiag(ΔΨk)F2)1/2dmaxmminLdiag(ΔΩ)F.\displaystyle\max_{k}\sqrt{\frac{d_{k}}{m_{k}}}\sqrt{L}\big{(}\sum_{k=1}^{L}m_{k}\left\lVert\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}})\right\rVert_{F}^{2}\big{)}^{1/2}\leq\frac{\sqrt{d_{\max}}}{\sqrt{m_{\min}}}\sqrt{L}\left\lVert\mathrm{diag}(\Delta_{\Omega})\right\rVert_{F}.

Thus the lemma holds in view of (20).  \;\;\scriptstyle\Box

Lemma 4.3.

(New diagonal bound) Let $c,c^{\prime},C_{0},C_{1},\ldots$ be absolute constants. Following the notation in Lemma 4.2, where $\tau_{\Omega}={\rm tr}(\Delta_{\Omega})/p$, we have with probability at least $1-\sum_{k}\exp(-cd_{k})-c^{\prime}/{p^{4}}$,

\displaystyle\left\lvert\;\langle{\,\mathrm{diag}(\Delta_{\Omega}),\widehat{S}-\Sigma_{0}\,}\rangle\;\right\rvert/\left\lVert\Sigma_{0}\right\rVert_{2}\leq
\displaystyle C_{0}\sum_{k=1}^{L}d_{k}\left\lVert\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}})\right\rVert_{F}+C_{1}\sqrt{Ld_{\max}}\left\lVert\mathrm{diag}(\Delta_{\Omega})\right\rVert_{F},

and hence Lemma 2.2 holds.

Note that when $d_{k}\leq\sqrt{p}$ for all $k$, or equivalently, when $\max_{k}\sqrt{\frac{d_{k}}{m_{k}}}\leq 1$, we do not need to pay the extra factor of $\sqrt{\log p}$ as in Lemma 13 of [15] on the diagonal portion of the error bound, resulting in the improved rates of convergence in Theorem 3.1. Note also that when $d_{k}=o(m_{k}\log p),\forall k$, the bound in Lemma 4.3 still leads to an improvement in the overall rate.

Lemma 4.4.

Let $\mathbb{S}^{d_{k}-1}$ be the unit sphere in ${\mathbb{R}}^{d_{k}}$. Construct an $\varepsilon$-net $\Pi_{d_{k}}\subset\mathbb{S}^{d_{k}-1}$ such that $\left\lvert\Pi_{d_{k}}\right\rvert\leq(1+2/\varepsilon)^{d_{k}}$, where $0<\varepsilon<1/2$, as in Lemma 4.1. Recall ${\bf{Y}}^{(k)}=({\bf{X}}^{(k)})^{T}$. Let $\delta=(\delta_{1},\ldots,\delta_{d_{k}})$. Define the event ${\mathcal{G}}_{k}$ as:

\displaystyle\sup_{\delta\in\Pi_{d_{k}}}\sum_{i=1}^{d_{k}}\delta_{i}\big{(}\;\langle{\,Y^{(k)}_{i},Y^{(k)}_{i}\,}\rangle\;-{\mathbb{E}}\;\langle{\,Y^{(k)}_{i},Y^{(k)}_{i}\,}\rangle\;\big{)}\leq t_{k},
\displaystyle\quad\quad\quad\text{where}\quad t_{k}:=C_{m}\left\lVert\Sigma_{0}\right\rVert_{2}(\sqrt{p}\vee d_{k}). (21)

Let $C_{m},c$ be some absolute constants. Let ${\mathcal{G}}={\mathcal{G}}_{1}\cap\ldots\cap{\mathcal{G}}_{L}$. Then $\mathbb{P}\left({\mathcal{G}}\right)\geq 1-\sum_{k}\exp(-cd_{k})$. Moreover, we have by a standard approximation argument, on event ${\mathcal{G}}$, simultaneously for all $k$,

\displaystyle\sup_{\delta\in\mathbb{S}^{d_{k}-1}}\sum_{i=1}^{d_{k}}\delta_{i}\big{(}\;\langle{\,Y^{(k)}_{i},Y^{(k)}_{i}\,}\rangle\;-{\mathbb{E}}\;\langle{\,Y^{(k)}_{i},Y^{(k)}_{i}\,}\rangle\;\big{)}\leq\frac{t_{k}}{1-\varepsilon}.

Proof idea. Notice that the expression for $t_{k}$ clearly depends on the dimension $d_{k}$ of $\Psi_{k}$. Let $\delta\in\mathbb{R}^{d_{k}}$. Using the notation in Lemma 4.2, let $\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}})=\mathrm{diag}(\delta_{1},\ldots,\delta_{d_{k}})$ and

\displaystyle\mathrm{diag}(\widetilde{\Delta}_{k}):=I_{[d_{1:k-1}]}\otimes\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}})\otimes I_{[d_{k+1:L}]}. (22)

Now for each $1\leq k\leq L$, following Lemma 2.1, we have

\displaystyle\;\langle{\,\mathrm{diag}(\widetilde{\Delta}_{k}),\widehat{S}-\Sigma_{0}\,}\rangle\;=m_{k}\;\langle{\,S^{k}-{\mathbb{E}}(S^{k}),\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}})\,}\rangle\; (23)
\displaystyle={\rm tr}({\bf{Y}}^{(k)}\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}}){\bf{Y}}^{(k)T})-{\mathbb{E}}\,{\rm tr}({\bf{Y}}^{(k)}\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}}){\bf{Y}}^{(k)T})
\displaystyle=\sum_{j=1}^{d_{k}}\delta_{j}\big{(}\;\langle{\,Y^{(k)}_{j},Y^{(k)}_{j}\,}\rangle\;-{\mathbb{E}}\;\langle{\,Y^{(k)}_{j},Y^{(k)}_{j}\,}\rangle\;\big{)}.

To bound the probability of event ${\mathcal{G}}_{k}$, we use the Hanson-Wright inequality [31], cf. Theorem 1.1 therein, and the union bound. The rest is deferred to Section 4.2.

4.1 Proof of Lemma 4.3

Besides ${\mathcal{G}}$, we need the following event $\mathcal{D}_{0}$:

\displaystyle\quad\mathcal{D}_{0}=\left\{\left\lvert\;\langle{\,I_{p},\widehat{S}-\Sigma_{0}\,}\rangle\;\right\rvert\leq C\sqrt{p\log p}\left\lVert\Sigma_{0}\right\rVert_{2}\right\}. (24)

Suppose ${\mathcal{G}}\cap\mathcal{D}_{0}$ holds. Denote by

\displaystyle\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}})=\mathrm{diag}(\delta^{k}_{1},\ldots,\delta^{k}_{d_{k}})=:\mathrm{diag}(\delta^{k}), (25)

where $\left\lVert\delta^{k}\right\rVert_{2}:=\left\lVert\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}})\right\rVert_{F}$. Denote by

\displaystyle t_{k}^{\prime}=t_{k}\left\lVert\delta^{k}\right\rVert_{2}=C_{m}\left\lVert\Sigma_{0}\right\rVert_{2}\left\lVert\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}})\right\rVert_{F}\big{(}\sqrt{p}\vee d_{k}\big{)},

for $t_{k}$ as in (21). For each index $1\leq k\leq L$, on event ${\mathcal{G}}_{k}$, simultaneously for all $\mathrm{diag}(\widetilde{\Delta}_{k})$ as in (18) and (22), we have

|diag(Δ~k),S^Σ0|=|mkSk𝔼Sk,diag(ΔΨk)|\displaystyle\left\lvert\;\langle{\,\mathrm{diag}(\widetilde{\Delta}_{k}),\widehat{S}-\Sigma_{0}\,}\rangle\;\right\rvert=\left\lvert m_{k}\;\langle{\,S^{k}-{\mathbb{E}}S^{k},\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}})\,}\rangle\;\right\rvert
=\displaystyle= δk2|j=1dkδjkδk2(Yj(k),Yj(k)𝔼Yj(k),Yj(k))|\displaystyle\left\lVert\delta^{k}\right\rVert_{2}\left\lvert\sum_{j=1}^{d_{k}}\frac{\delta^{k}_{j}}{\left\lVert\delta^{k}\right\rVert_{2}}\big{(}\;\langle{\,Y^{(k)}_{j},Y^{(k)}_{j}\,}\rangle\;-{\mathbb{E}}\;\langle{\,Y^{(k)}_{j},Y^{(k)}_{j}\,}\rangle\;\big{)}\right\rvert
\displaystyle\leq δk2supδ𝕊dk1i=1dkδi(Yi(k),Yi(k)𝔼Yi(k),Yi(k)).\displaystyle\left\lVert\delta^{k}\right\rVert_{2}\sup_{\delta\in\mathbb{S}^{d_{k}-1}}\sum_{i=1}^{d_{k}}\delta_{i}\big{(}\;\langle{\,Y^{(k)}_{i},Y^{(k)}_{i}\,}\rangle\;-{\mathbb{E}}\;\langle{\,Y^{(k)}_{i},Y^{(k)}_{i}\,}\rangle\;\big{)}.

Now, on event ${\mathcal{G}}$, we have by Lemma 4.4, simultaneously for all $\Delta^{\prime}_{\Omega}$ as in (18),

|diag(ΔΩ),S^Σ0|k|diag(Δ~k),S^Σ0|\displaystyle\left\lvert\;\langle{\,\mathrm{diag}(\Delta^{\prime}_{\Omega}),\widehat{S}-\Sigma_{0}\,}\rangle\;\right\rvert\leq\sum_{k}\left\lvert\;\langle{\,\mathrm{diag}(\widetilde{\Delta}_{k}),\widehat{S}-\Sigma_{0}\,}\rangle\;\right\rvert
\displaystyle\leq ktkδk21ε=kCmΣ02diag(ΔΨk)F(pdk).\displaystyle\sum_{k}\frac{t_{k}\left\lVert\delta^{k}\right\rVert_{2}}{1-\varepsilon}=\sum_{k}C_{m}\left\lVert\Sigma_{0}\right\rVert_{2}\left\lVert\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}})\right\rVert_{F}(\sqrt{p}\vee d_{k}).

By the bound immediately above and (24), we obtain

on𝒢𝒟0,|diag(ΔΩ),S^Σ0|\displaystyle\text{on}\quad{\mathcal{G}}\cap\mathcal{D}_{0},\quad\left\lvert\;\langle{\,\mathrm{diag}(\Delta_{\Omega}),\widehat{S}-\Sigma_{0}\,}\rangle\;\right\rvert
\displaystyle\leq |τpIp,S^Σ0|+|diag(ΔΩ),S^Σ0|\displaystyle\left\lvert\;\langle{\,\tau_{p}I_{p},\widehat{S}-\Sigma_{0}\,}\rangle\;\right\rvert+\left\lvert\;\langle{\,\mathrm{diag}(\Delta^{\prime}_{\Omega}),\widehat{S}-\Sigma_{0}\,}\rangle\;\right\rvert
\displaystyle\leq C0Σ02(τΩplogp+k=1Lpdiag(ΔΨk)F)\displaystyle C_{0}\left\lVert\Sigma_{0}\right\rVert_{2}\big{(}\tau_{\Omega}\sqrt{p\log p}+\sum_{k=1}^{L}\sqrt{p}\left\lVert\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}})\right\rVert_{F}\big{)}
+CmΣ02k=1Ldkdiag(ΔΨk)F=:rdiag,1+rdiag,2,\displaystyle+C_{m}\left\lVert\Sigma_{0}\right\rVert_{2}\sum_{k=1}^{L}d_{k}\left\lVert\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}})\right\rVert_{F}=:r_{\mathrm{diag},1}+r_{\mathrm{diag},2},

where by Lemma 4.2, for $r_{\mathrm{diag},2}/(C_{m}\left\lVert\Sigma_{0}\right\rVert_{2})$,

k=1Ldkdiag(ΔΨk)FdmaxmminLdmaxdiag(ΔΩ)F,\displaystyle\sum_{k=1}^{L}d_{k}\left\lVert\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}})\right\rVert_{F}\leq\sqrt{\frac{d_{\max}}{m_{\min}}}\sqrt{Ld_{\max}}\left\lVert\mathrm{diag}(\Delta_{\Omega})\right\rVert_{F},

and by the Cauchy-Schwarz inequality,

rdiag,1C0Σ02:=τΩplogp+k=1Ldkmkdiag(ΔΨk)F\displaystyle\frac{r_{\mathrm{diag},1}}{C_{0}\left\lVert\Sigma_{0}\right\rVert_{2}}:=\tau_{\Omega}\sqrt{p}\sqrt{\log p}+\sum_{k=1}^{L}\sqrt{d_{k}}\sqrt{m_{k}}\left\lVert\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}})\right\rVert_{F}
\displaystyle\leq (logp+k=1Ldk)1/2(k=1Lmkdiag(ΔΨk)F2+τΩ2p)1/2\displaystyle\big{(}\log p+\sum_{k=1}^{L}d_{k}\big{)}^{1/2}\big{(}\sum_{k=1}^{L}m_{k}\left\lVert\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}})\right\rVert_{F}^{2}+\tau_{\Omega}^{2}p\big{)}^{1/2}
\displaystyle\leq c(k=1Ldk)1/2diag(ΔΩ)FcLdmaxdiag(ΔΩ)F,\displaystyle c\big{(}\sum_{k=1}^{L}d_{k}\big{)}^{1/2}\left\lVert\mathrm{diag}(\Delta_{\Omega})\right\rVert_{F}\leq c\sqrt{Ld_{\max}}\left\lVert\mathrm{diag}(\Delta_{\Omega})\right\rVert_{F},

where $\log p=\sum_{k=1}^{L}\log d_{k}\leq\sum_{k=1}^{L}d_{k}$, since $\log d_{k}\leq d_{k}$ for each $k$, and the last line holds by (20). Putting things together, we have

rdiagΣ02\displaystyle\frac{r_{\mathrm{diag}}}{\left\lVert\Sigma_{0}\right\rVert_{2}} \displaystyle\leq C1dmaxLdiag(ΔΩ)F(1dmax/mmin).\displaystyle C_{1}\sqrt{d_{\max}}\sqrt{L}\left\lVert\mathrm{diag}(\Delta_{\Omega})\right\rVert_{F}\big{(}1\vee\sqrt{{d_{\max}}/{m_{\min}}}\big{)}.

To bound the probability of event $\mathcal{D}_{0}$, we rewrite the trace as a quadratic form:

\displaystyle\;\langle{\,\widehat{S}-\Sigma_{0},I\,}\rangle\;={\rm tr}(\widehat{S}-\Sigma_{0})=Z^{T}\Sigma_{0}Z-{\mathbb{E}}(Z^{T}\Sigma_{0}Z),

where $Z\in{\mathbb{R}}^{p}$ is the same as in (4). Thus, we have by the Hanson-Wright inequality [31], cf. Theorem 1.1 therein,

\displaystyle\mathbb{P}\left(\left\lvert\;\langle{\,\widehat{S}-\Sigma_{0},I\,}\rangle\;\right\rvert>C\left\lVert\Sigma_{0}\right\rVert_{2}\sqrt{p\log p}\right)
\displaystyle\leq 2\exp\big{(}-c\min\big{(}\frac{C^{2}p\log p\left\lVert\Sigma_{0}\right\rVert_{2}^{2}}{\left\lVert\Sigma_{0}\right\rVert_{F}^{2}},C\sqrt{p\log p}\big{)}\big{)}\leq\frac{1}{p^{4}},

where $(C^{2}\wedge C)c\geq 4$ and $\left\lVert\Sigma_{0}\right\rVert_{F}\leq\sqrt{p}\left\lVert\Sigma_{0}\right\rVert_{2}$.

Hence by Lemma 4.4 and the bound immediately above,

\displaystyle\mathbb{P}\left({\mathcal{G}}\cap\mathcal{D}_{0}\right)\geq 1-c^{\prime}\exp(-\log p)-\sum_{k}\exp(-cd_{k}).

The lemma thus holds upon adjusting the constants.  \;\;\scriptstyle\Box

4.2 Proof of Lemma 4.4

Set $t_{k}>0$. First, we rewrite the trace term in (23) as a quadratic form in subgaussian random variables,

\displaystyle\quad\;\langle{\,\mathrm{diag}(\widetilde{\Delta}_{k}),\widehat{S}-\Sigma_{0}\,}\rangle\;=Z^{T}WZ-{\mathbb{E}}(Z^{T}WZ), (26)
\displaystyle\text{with $Z\in{\mathbb{R}}^{p}$ as in (4) and }W:=\Sigma_{0}^{1/2}\mathrm{diag}(\widetilde{\Delta}_{k})\Sigma_{0}^{1/2}.

Then $\left\lVert W\right\rVert\leq\left\lVert\mathrm{diag}(\widetilde{\Delta}_{k})\right\rVert\left\lVert\Sigma_{0}\right\rVert_{2}$, where $\left\lVert\cdot\right\rVert$ represents the operator or the Frobenius norm. Now for $\delta\in{\mathbb{R}}^{d_{k}}$, by (23) and (26), and the Hanson-Wright inequality, we have

(|i=1dkδiδ2(Yi(k)22𝔼Yi(k)22)|tk)\displaystyle\mathbb{P}\left(\left\lvert\sum_{i=1}^{d_{k}}\frac{\delta_{i}}{\left\lVert\delta\right\rVert_{2}}\big{(}\left\lVert Y^{(k)}_{i}\right\rVert_{2}^{2}-{\mathbb{E}}\left\lVert Y^{(k)}_{i}\right\rVert_{2}^{2}\big{)}\right\rvert\geq t_{k}\right) (27)
=\displaystyle= (|diag(Δ~k),S^Σ0|tkδ2)\displaystyle\mathbb{P}\left(\left\lvert\;\langle{\,\mathrm{diag}(\widetilde{\Delta}_{k}),\widehat{S}-\Sigma_{0}\,}\rangle\;\right\rvert\geq t_{k}{\left\lVert\delta\right\rVert_{2}}\right)
=\displaystyle= (|ZTWZ𝔼(ZTWZ)|tkδ2)\displaystyle\mathbb{P}\left(\left\lvert Z^{T}WZ-{\mathbb{E}}(Z^{T}WZ)\right\rvert\geq t_{k}{\left\lVert\delta\right\rVert_{2}}\right)\leq
2exp[cmin(tk2δ22WF2,tkδ2W2)]=:p1.\displaystyle 2\exp\left[-c\min\big{(}\frac{t_{k}^{2}\left\lVert\delta\right\rVert_{2}^{2}}{\left\lVert W\right\rVert_{F}^{2}},\frac{t_{k}{\left\lVert\delta\right\rVert_{2}}}{\left\lVert W\right\rVert_{2}}\big{)}\right]=:p_{1}.

Now for all δ=(δ1,,δdk)\delta=(\delta_{1},\ldots,\delta_{d_{k}}) and diag(ΔΨk)=diag(δ)\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}})=\mathrm{diag}(\delta), we have diag(Δ~k)2=diag(ΔΨk)2\left\lVert\mathrm{diag}(\widetilde{\Delta}_{k})\right\rVert_{2}=\left\lVert\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}})\right\rVert_{2} and diag(Δ~k)F=mkdiag(ΔΨk)F=mkδ2\left\lVert\mathrm{diag}(\widetilde{\Delta}_{k})\right\rVert_{F}=\sqrt{m_{k}}\left\lVert\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}})\right\rVert_{F}=\sqrt{m_{k}}\left\lVert\delta\right\rVert_{2} by (22). Thus

W2Σ02diag(Δ~k)2Σ02δ2,and\displaystyle\left\lVert W\right\rVert_{2}\leq\left\lVert\Sigma_{0}\right\rVert_{2}\left\lVert\mathrm{diag}(\widetilde{\Delta}_{k})\right\rVert_{2}\leq\left\lVert\Sigma_{0}\right\rVert_{2}\left\lVert\delta\right\rVert_{2},\;\text{and }
WFΣ02diag(Δ~k)F=Σ02mkδ2.\displaystyle\left\lVert W\right\rVert_{F}\leq\left\lVert\Sigma_{0}\right\rVert_{2}\left\lVert\mathrm{diag}(\widetilde{\Delta}_{k})\right\rVert_{F}={\left\lVert\Sigma_{0}\right\rVert_{2}}\sqrt{m_{k}}\left\lVert\delta\right\rVert_{2}.

Recall Πdk\Pi_{d_{k}} is an ε\varepsilon-net of the sphere 𝕊dk1\mathbb{S}^{d_{k}-1}, where 0<ε<1/20<\varepsilon<1/2. Then for tk:=CmΣ02(pdk)t_{k}:=C_{m}\left\lVert\Sigma_{0}\right\rVert_{2}(\sqrt{p}\vee d_{k}) as in (21), we have by (27) and the union bound,

(δΠdk:i=1dkδi(Yi(k)22𝔼Yi(k)22)tk)\displaystyle\mathbb{P}\left(\exists\delta\in\Pi_{d_{k}}:\sum_{i=1}^{d_{k}}\delta_{i}\big{(}\left\lVert Y^{(k)}_{i}\right\rVert_{2}^{2}-{\mathbb{E}}\left\lVert Y^{(k)}_{i}\right\rVert_{2}^{2}\big{)}\geq t_{k}\right)
=:\displaystyle=: ( event 𝒢kcoccurs )(1+2/ε)dkp1\displaystyle\mathbb{P}\left(\text{ event }\;{\mathcal{G}}^{c}_{k}\;\text{occurs }\right)\leq(1+2/\varepsilon)^{d_{k}}p_{1}
\displaystyle\leq 5dkexp(cmin(Cm2Σ022pmkΣ022,CmΣ02dkΣ02))\displaystyle 5^{d_{k}}\exp\big{(}-c\min\big{(}\frac{C_{m}^{2}\left\lVert\Sigma_{0}\right\rVert_{2}^{2}p}{m_{k}\left\lVert\Sigma_{0}\right\rVert_{2}^{2}},\frac{C_{m}\left\lVert\Sigma_{0}\right\rVert_{2}d_{k}}{\left\lVert\Sigma_{0}\right\rVert_{2}}\big{)}\big{)}
\displaystyle\leq exp(dklog5cdk(Cm2Cm))\displaystyle\exp(d_{k}\log 5-cd_{k}(C_{m}^{2}\wedge C_{m}))
\displaystyle\leq exp(cdklog5).\displaystyle\exp(-c^{\prime}d_{k}\log 5).

The “moreover” statement follows from a standard approximation argument. Suppose event 𝒢{\mathcal{G}} holds. Denote by

y=(Y1(k)22𝔼Y1(k)22,,Ydk(k)22𝔼Ydk(k)22).\displaystyle y=\big{(}\left\lVert Y_{1}^{(k)}\right\rVert_{2}^{2}-{\mathbb{E}}\left\lVert Y_{1}^{(k)}\right\rVert_{2}^{2},\ldots,\left\lVert Y_{d_{k}}^{(k)}\right\rVert_{2}^{2}-{\mathbb{E}}\left\lVert Y_{d_{k}}^{(k)}\right\rVert_{2}^{2}\big{)}.

We have for δ=(δ1,,δdk)𝕊dk1\delta=(\delta_{1},\ldots,\delta_{d_{k}})\in\mathbb{S}^{d_{k}-1},

supδΠdkδ,yy2=supδ𝕊dk1y,δ11εsupδΠdkδ,y.\displaystyle\sup_{\delta\in\Pi_{d_{k}}}\;\langle{\,\delta,y\,}\rangle\;\leq\left\lVert y\right\rVert_{2}=\sup_{\delta\in\mathbb{S}^{d_{k}-1}}\;\langle{\,y,\delta\,}\rangle\;\leq\frac{1}{1-\varepsilon}\sup_{\delta\in\Pi_{d_{k}}}\;\langle{\,\delta,y\,}\rangle\;.

The first inequality is immediate. To see the last one, notice that for δ𝕊dk1\delta\in\mathbb{S}^{d_{k}-1} that attains the supremum in

y2=supδ𝕊dk1y,δ,\displaystyle\left\lVert y\right\rVert_{2}=\sup_{\delta\in\mathbb{S}^{d_{k}-1}}\;\langle{\,y,\delta\,}\rangle\;,

we can find δ0Πdk\delta_{0}\in\Pi_{d_{k}} such that δδ02ε\left\lVert\delta-\delta_{0}\right\rVert_{2}\leq\varepsilon. Now

δ0,y\displaystyle\;\langle{\,\delta_{0},y\,}\rangle\; =\displaystyle= δ,yδδ0,y\displaystyle\;\langle{\,\delta,y\,}\rangle\;-\;\langle{\,\delta-\delta_{0},y\,}\rangle\;
\displaystyle\geq δ,ysupδ𝕊dk1εδ,y\displaystyle\;\langle{\,\delta,y\,}\rangle\;-\sup_{\delta\in\mathbb{S}^{d_{k}-1}}\varepsilon\;\langle{\,\delta,y\,}\rangle\;
=\displaystyle= (1ε)supδ𝕊dk1δ,y,\displaystyle(1-\varepsilon)\sup_{\delta\in\mathbb{S}^{d_{k}-1}}\;\langle{\,\delta,y\,}\rangle\;,

and hence

supδΠdkδ,y\displaystyle\sup_{\delta\in\Pi_{d_{k}}}\;\langle{\,\delta,y\,}\rangle\; \displaystyle\geq (1ε)supδ𝕊dk1δ,y=(1ε)y2.\displaystyle(1-\varepsilon)\sup_{\delta\in\mathbb{S}^{d_{k}-1}}\;\langle{\,\delta,y\,}\rangle\;=(1-\varepsilon)\left\lVert y\right\rVert_{2}.

The lemma thus holds.  \;\;\scriptstyle\Box
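As a numerical illustration of the approximation step above (not needed for the proof), the following minimal Python sketch builds a unit vector δ0 within distance ε of the maximizer δ = y/‖y‖2, standing in for a point of the net Πdk, and checks that ⟨δ0, y⟩ ≥ (1 − ε)‖y‖2. The Gaussian choice of y and the small-rotation construction of δ0 are ours, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
d_k, eps = 50, 0.25                       # dimension and net resolution (illustrative)

y = rng.standard_normal(d_k)
delta = y / np.linalg.norm(y)             # maximizer of <delta, y> over the unit sphere

# Stand-in for a net point delta_0: rotate delta by an angle theta = eps, so that
# ||delta - delta_0||_2 = 2 sin(theta / 2) <= eps.
e = np.zeros(d_k); e[0] = 1.0
u = e - (e @ delta) * delta               # direction orthogonal to delta
u /= np.linalg.norm(u)                    # (almost surely nonzero for random y)
delta0 = np.cos(eps) * delta + np.sin(eps) * u
assert np.linalg.norm(delta - delta0) <= eps

# The approximation step: <delta_0, y> >= (1 - eps) ||y||_2.
print(delta0 @ y, ">=", (1 - eps) * np.linalg.norm(y))
```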

5 Conclusion

In this work, we present tighter theoretical bounds on the statistical convergence of the 1\ell_{1} regularized TeraLasso estimator of precision matrices with Kronecker sum structure. The key innovation in the present work is to derive tight concentration bounds for the trace terms on the diagonal component of the loss function (2), which were not available in [16]. Crucially, this improvement allows finite-sample statistical convergence rates to be derived for the two-way Kronecker sum model, which were missing from [16]; this setting was also deemed the most demanding, due to the lack of sample replications. This closes the gap between the low single-sample error observed for the two-way model in [16] and the previous lack of a theoretical guarantee for this particular case.

Acknowledgement

We thank Harrison Zhou for helpful discussions. We thank the Simons Institute for the Theory of Computing at the University of California, Berkeley, on the occasion of the Algorithmic Advances for Statistical Inference with Combinatorial Structure Workshop, and the organizers of the International Conference on Statistics and Related Fields (ICON STARF), University of Luxembourg, for their kind invitations, where SZ presented a talk including this work in 2021, in the middle of the pandemic.

Appendix A Proof of Theorem 3.1

Recall (2) is equivalent to the following:

Ω^=argminΩ𝒦𝐩(log|Ω|+S^,Ω+k=1Lmkρn,k|Ψk|1,off),\displaystyle\widehat{\Omega}=\arg\min_{\Omega\in\mathcal{K}_{\mathbf{p}}^{\sharp}}\big{(}-\log\left\lvert\Omega\right\rvert+\;\langle{\,\widehat{S},\Omega\,}\rangle\;+\sum_{k=1}^{L}m_{k}\rho_{n,k}\left\lvert{\Psi}_{k}\right\rvert_{1,{\rm off}}\big{)},

where S^\widehat{S} is as defined in (10), in view of (6). First, we define the unified event 𝒜\mathcal{A} on which all of the events we need hold simultaneously, i.e.

𝒜=𝒯𝒟0𝒢, where𝒢=𝒢1𝒢L,\displaystyle\mathcal{A}=\mathcal{T}\cap\mathcal{D}_{0}\cap{\mathcal{G}},\quad\text{ where}\quad\mathcal{G}=\mathcal{G}_{1}\cap\dots\cap\mathcal{G}_{L},

whose probability of holding yields the probability bound stated in the main Theorem 3.1. We focus on the case n=1n=1. For n>1n>1, we defer the proof to Section A.3. First, we state Lemma A.1, which is proved in  [15], cf. Lemma 8 therein.

Lemma A.1.

(Lemma 8 of [15]) For all Ω𝒦𝐩\Omega\in\mathcal{K}_{\mathbf{p}}, Ω2L+1minkmkΩF\left\lVert\Omega\right\rVert_{2}\leq\sqrt{\frac{L+1}{\min_{k}m_{k}}}\left\lVert\Omega\right\rVert_{F}.
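The inequality in Lemma A.1 can be checked numerically. In the following minimal Python sketch, the helper kron_sum is our own illustrative construction of the Kronecker sum (it is not code from [15] or [16]), and the symmetric factors are drawn at random.

```python
import numpy as np
from functools import reduce

def kron_sum(factors):
    """Kronecker sum Psi_1 ⊕ ... ⊕ Psi_L = sum_k I ⊗ Psi_k ⊗ I."""
    dims = [f.shape[0] for f in factors]
    out = np.zeros((int(np.prod(dims)),) * 2)
    for k, Psi in enumerate(factors):
        left = np.eye(int(np.prod(dims[:k])))
        right = np.eye(int(np.prod(dims[k + 1:])))
        out += reduce(np.kron, [left, Psi, right])
    return out

rng = np.random.default_rng(2)
dims = [3, 4, 5]                                   # d_1, d_2, d_3; p = 60
factors = [rng.standard_normal((d, d)) for d in dims]
factors = [(F + F.T) / 2 for F in factors]         # symmetric factors
Omega = kron_sum(factors)

L = len(dims)
m_min = Omega.shape[0] // max(dims)                # min_k m_k = p / max_k d_k
lhs = np.linalg.norm(Omega, 2)                     # operator norm
rhs = np.sqrt((L + 1) / m_min) * np.linalg.norm(Omega, 'fro')
print(f"||Omega||_2 = {lhs:.3f} <= sqrt((L+1)/min_k m_k) ||Omega||_F = {rhs:.3f}")
```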

In the proof of Theorem 3.1 that follows, our strategy will be to show that several events controlling the concentration of the sample covariance matrix (in the n=1n=1 case, simply an outer product) hold with high probability, and then show that given these events hold, the statistical error bounds in Theorem 3.1 hold. The off-diagonal events are as defined in (12). We defer the definitions of new diagonal events to Section 4. We use the following notation to describe errors in the precision matrix and its factors. For Ω𝒦𝐩\Omega\in\mathcal{K}_{\mathbf{p}} let ΔΩ=ΩΩ0𝒦𝐩\Delta_{\Omega}=\Omega-\Omega_{0}\in\mathcal{K}_{\mathbf{p}}. Since both Ω\Omega and Ω0\Omega_{0} are Kronecker sums,

ΔΩ\displaystyle\Delta_{\Omega} =\displaystyle= ΔΨ1ΔΨ2ΔΨL\displaystyle\Delta_{\Psi_{1}}\oplus\Delta_{\Psi_{2}}\oplus\ldots\oplus\Delta_{\Psi_{L}}

for some ΔΨk\Delta_{\Psi_{k}} whose off-diagonal (but not diagonal) elements are uniquely determined. For an index set SS and a matrix W=[wij]W=[w_{ij}], write WS(wijI((i,j)S))W_{S}\equiv(w_{ij}I((i,j)\in S)), where I()I(\cdot) is an indicator function.

A.1 Preliminary results

Before we show the proof of Theorem 3.1, we need to state the following lemmas. Proofs of Lemmas A.1 and A.2 appear in [15] (cf. Lemmas 8 and 10 therein). We then present an error bound for the off-diagonal component of the loss function, which appears as Lemma 12 in [15] and follows from the concentration of measure bounds on elements of offd(SkΣ0(k))\mathrm{offd}(S^{k}-\Sigma_{0}^{(k)}); cf. (12). Combined with our new concentration bound on the diagonal component of the loss function, cf. Lemma 2.2, we obtain the improved overall rate of convergence as stated in Theorem 3.1.

Lemma A.2.

Let Ω00\Omega_{0}\succ 0. Let S={(i,j):Ω0ij0,ij}S=\{(i,j):\ \Omega_{0ij}\neq 0,\ i\neq j\} and Sc={(i,j):Ω0ij=0,ij}S^{c}=\{(i,j):\ \Omega_{0ij}=0,\ i\neq j\}. Then for all Δ𝒦𝐩\Delta\in\mathcal{K}_{\mathbf{p}}, we have

|Ω0+Δ|1,off|Ω0|1,off\displaystyle\left|\Omega_{0}+\Delta\right|_{1,{\rm off}}-\left|\Omega_{0}\right|_{1,{\rm off}} \displaystyle\geq |ΔSc|1|ΔS|1\displaystyle\left|\Delta_{S^{c}}\right|_{1}-\left|\Delta_{S}\right|_{1} (28)

where by disjointness of supp(offd(Ψk)):={(i,j):ij,Ψk,ij0},k=1,,L\mathop{\text{\rm supp}\kern 0.86108pt}(\mathrm{offd}(\Psi_{k})):=\{(i,j):i\not=j,\;\Psi_{k,ij}\not=0\},k=1,\ldots,L,

|ΔS|1=k=1Lmk|ΔΨk,S|1 and |ΔSc|1=k=1Lmk|ΔΨk,Sc|1.\displaystyle\left|\Delta_{S}\right|_{1}=\sum_{k=1}^{L}m_{k}\left|\Delta_{\Psi_{k},S}\right|_{1}\;\text{ and }\;\left|\Delta_{S^{c}}\right|_{1}=\sum_{k=1}^{L}m_{k}\left|\Delta_{\Psi_{k},S^{c}}\right|_{1}.

Lemma A.3 follows from [15]; cf. Lemmas 11 and 12 therein.

Lemma A.3.

With probability at least 12Lexp(clogp)1-2L\exp(-c^{\prime}\log p),

|offd(ΔΩ),S^Σ0|k=1Lmk|ΔΨk|1,offδk,\displaystyle\left\lvert\;\langle{\,\mathrm{offd}(\Delta_{\Omega}),\widehat{S}-\Sigma_{0}\,}\rangle\;\right\rvert\leq\sum_{k=1}^{L}m_{k}\left|\Delta_{\Psi_{k}}\right|_{1,{\rm off}}\delta_{k},
where δklogpmkΣ02,k.\displaystyle\text{ where }\delta_{k}\asymp\sqrt{\frac{\log p}{m_{k}}}\left\lVert\Sigma_{0}\right\rVert_{2},\;\forall k.

Next we state Lemma A.4, an immediate corollary of (28); it is a deterministic result identical to Lemma 10 of [15], and so its proof is omitted. Lemma A.5 follows immediately from Lemmas A.3 and A.4.

Lemma A.4.

(Deterministic bounds) Let ρk0\rho_{k}\geq 0. Denote by

Δg\displaystyle\Delta_{g} :=\displaystyle:= k=1Lmkρk(|Ψk+ΔΨk|1,off|Ψk|1,off),\displaystyle\sum_{k=1}^{L}m_{k}\rho_{k}\left(\left|\Psi_{k}+\Delta_{\Psi_{k}}\right|_{1,{\rm off}}-\left|\Psi_{k}\right|_{1,{\rm off}}\right), (29)
thenΔg\displaystyle\text{ then}\quad\Delta_{g} \displaystyle\geq k=1Lmkρk(|ΔΨk,Sc|1|ΔΨk,S|1).\displaystyle\sum_{k=1}^{L}m_{k}\rho_{k}\left(\left|\Delta_{\Psi_{k},S^{c}}\right|_{1}-\left|\Delta_{\Psi_{k},S}\right|_{1}\right).
Lemma A.5.

Suppose that dk=O(mk)d_{k}=O(m_{k}) for all kk. Under the settings of Lemmas A.3 and A.4, we have for the choice of ρk=δk/εk,k\rho_{k}=\delta_{k}/\varepsilon_{k},\forall k, where 0<εk<10<\varepsilon_{k}<1 and δklogpmkΣ02\delta_{k}\asymp\sqrt{\frac{\log p}{m_{k}}}\left\lVert\Sigma_{0}\right\rVert_{2},

Δg+offd(ΔΩ),S^Σ0\displaystyle\Delta_{g}+\;\langle{\,\mathrm{offd}(\Delta_{\Omega}),\widehat{S}-\Sigma_{0}\,}\rangle\; \displaystyle\geq 2maxkρk|ΔS|1,\displaystyle-2\max_{k}\rho_{k}\left|\Delta_{S}\right|_{1}, (30)

where Δg\Delta_{g} is as in Lemma A.4.

Lemma A.6 follows from Lemmas 2.2 and A.5. We provide a proof of Lemmas A.5 and A.6 in Appendix B for completeness. Since p=kdk2Lp=\prod_{k}d_{k}\geq 2^{L} so long as dk2d_{k}\geq 2, we have logpLlog2\log p\geq L\log 2 and hence exp(clogp)>L\exp(c\log p)>L for sufficiently large cc.

Lemma A.6.

Suppose that n=1n=1. Let s=k=1Lmksks=\sum_{k=1}^{L}m_{k}s_{k}. Then, under the settings of Lemmas 2.2 and A.5, we have with probability at least 1Lexp(clogp)1-L\exp(-c^{\prime}\log p),

|Δg+ΔΩ,S^Σ0|\displaystyle\left\lvert\Delta_{g}+\;\langle{\,\Delta_{\Omega},\widehat{S}-\Sigma_{0}\,}\rangle\;\right\rvert \displaystyle\leq CΣ02T3 where\displaystyle C^{\prime}\left\lVert\Sigma_{0}\right\rVert_{2}T_{3}\;\text{ where}\;
T3\displaystyle T_{3} :=\displaystyle:= slogp+LpΔΩFmmin.\displaystyle\frac{\sqrt{s\log p+Lp}\left\lVert\Delta_{\Omega}\right\rVert_{F}}{\sqrt{m_{\min}}}.
Proposition A.7.

Set C>36(maxk1εkCdiag)C>36(\max_{k}\frac{1}{\varepsilon_{k}}\vee C_{\mathrm{diag}}) for CdiagC_{\mathrm{diag}} as in Lemma 4.3. Let

r𝐩\displaystyle r_{\mathbf{p}} =\displaystyle= CΣ02slogp+Lp/mmin\displaystyle C\left\lVert\Sigma_{0}\right\rVert_{2}\sqrt{s\log p+Lp}/\sqrt{m_{\min}} (31)
where M=12ϕmax2(Ω0)=12ϕmin2(Σ0).\displaystyle M=\frac{1}{2}{\phi^{2}_{\max}(\Omega_{0})}=\frac{1}{2\phi^{2}_{\min}(\Sigma_{0})}. (32)

Let ΔΩ𝒦𝐩\Delta_{\Omega}\in\mathcal{K}_{\mathbf{p}} such that ΔΩF=Mr𝐩\left\lVert\Delta_{\Omega}\right\rVert_{F}=Mr_{\mathbf{p}}. Then ΔΩ212ϕmin(Ω0).\left\lVert\Delta_{\Omega}\right\rVert_{2}\leq\frac{1}{2}\phi_{\min}(\Omega_{0}).

Proof.

Indeed, by Lemma A.1, we have for all Δ𝒯n\Delta\in{\mathcal{T}_{n}},

Δ2\displaystyle\left\lVert\Delta\right\rVert_{2} \displaystyle\leq L+1minkmkΔF=L+1mminMr𝐩\displaystyle\sqrt{\frac{L+1}{\min_{k}m_{k}}}\|\Delta\|_{F}=\sqrt{\frac{L+1}{m_{\min}}}Mr_{\mathbf{p}}
\displaystyle\leq L+1mminC21ϕmin2(Σ0)Σ02slogp+pLmmin\displaystyle\sqrt{\frac{L+1}{m_{\min}}}\frac{C}{2}\frac{1}{\phi_{\min}^{2}(\Sigma_{0})}\left\lVert\Sigma_{0}\right\rVert_{2}\sqrt{\frac{s\log p+pL}{m_{\min}}}
\displaystyle\leq 12ϕmin(Ω0)=12ϕmax(Σ0)\displaystyle\frac{1}{2}\phi_{\min}(\Omega_{0})=\frac{1}{2\phi_{\max}(\Sigma_{0})}

so long as mmin2>2C2(L+1)κ(Σ0)4(slogp+pL)m_{\min}^{2}>2C^{2}(L+1)\kappa(\Sigma_{0})^{4}(s\log p+pL), where κ(Σ0)\kappa(\Sigma_{0}) is the condition number of Σ0\Sigma_{0}.  \;\;\scriptstyle\Box

A.2 Proof of Theorem 3.1

We will only show the proof for n=1n=1. Let

G(ΔΩ)\displaystyle G(\Delta_{\Omega}) =\displaystyle= Q(Ω0+ΔΩ)Q(Ω0)\displaystyle Q(\Omega_{0}+\Delta_{\Omega})-Q(\Omega_{0}) (33)

be the difference between the objective function (A) at Ω0+ΔΩ\Omega_{0}+\Delta_{\Omega} and at Ω0\Omega_{0}. Clearly Δ^Ω=Ω^Ω0\widehat{\Delta}_{\Omega}=\widehat{\Omega}-\Omega_{0} minimizes G(ΔΩ)G(\Delta_{\Omega}), which is a convex function with a unique minimizer on 𝒦𝐩\mathcal{K}_{\mathbf{p}}^{\sharp} (cf. Theorem 5 [15]). Let r𝐩r_{\mathbf{p}} be as defined in (31) for some large enough absolute constant CC to be specified, and

𝒯n={ΔΩ𝒦𝐩:ΔΩ=ΩΩ0,Ω,Ω0𝒦𝐩,ΔΩF=Mr𝐩}.\mathcal{T}_{n}=\left\{\Delta_{\Omega}\in\mathcal{K}_{\mathbf{p}}:\Delta_{\Omega}=\Omega-\Omega_{0},\Omega,\Omega_{0}\in\mathcal{K}_{\mathbf{p}}^{\sharp},\|\Delta_{\Omega}\|_{F}=Mr_{\mathbf{p}}\right\}. (34)

In particular, we set C>36(maxk1εkCdiag)C>36(\max_{k}\frac{1}{\varepsilon_{k}}\vee C_{\mathrm{diag}}) in r𝐩r_{\mathbf{p}}, for absolute constant CdiagC_{\mathrm{diag}} as in Lemma 4.3.
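For concreteness, the penalized objective Q(·) whose difference defines G(·) in (33) can be evaluated as in the minimal Python sketch below for the two-factor case. The function names kron_sum_2way and Q_objective are ours; this is only an illustrative evaluation of the objective, not the optimization algorithm of [16].

```python
import numpy as np

def kron_sum_2way(Psi1, Psi2):
    """Two-factor Kronecker sum Psi_1 ⊕ Psi_2 = Psi_1 ⊗ I + I ⊗ Psi_2."""
    d1, d2 = Psi1.shape[0], Psi2.shape[0]
    return np.kron(Psi1, np.eye(d2)) + np.kron(np.eye(d1), Psi2)

def Q_objective(Psi1, Psi2, S_hat, rho1, rho2):
    """-log|Omega| + <S_hat, Omega> + sum_k m_k rho_k |Psi_k|_{1,off},
    for Omega = Psi_1 ⊕ Psi_2 (an illustrative sketch of the objective)."""
    Omega = kron_sum_2way(Psi1, Psi2)
    d1, d2 = Psi1.shape[0], Psi2.shape[0]
    m1, m2 = d2, d1                                  # m_k = p / d_k
    sign, logdet = np.linalg.slogdet(Omega)
    assert sign > 0, "Omega must be positive definite"
    off1 = np.abs(Psi1 - np.diag(np.diag(Psi1))).sum()
    off2 = np.abs(Psi2 - np.diag(np.diag(Psi2))).sum()
    return -logdet + np.sum(S_hat * Omega) + m1 * rho1 * off1 + m2 * rho2 * off2

# Single-sample (n = 1) Gram matrix as in (10), evaluated at identity factors.
rng = np.random.default_rng(6)
d1, d2 = 3, 4
x = rng.standard_normal(d1 * d2)
S_hat = np.outer(x, x)
print(Q_objective(np.eye(d1), np.eye(d2), S_hat, rho1=0.1, rho2=0.1))
```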

Proposition A.8 follows from [45].

Proposition A.8.

If G(Δ)>0G(\Delta)>0 for all Δ𝒯n\Delta\in\mathcal{T}_{n} as defined in (34), then G(Δ)>0G(\Delta)>0 for all Δ\Delta in

𝒱n={Δ𝒦𝐩:Δ=ΩΩ0,Ω,Ω0𝒦𝐩,ΔF>Mr𝐩}\displaystyle\mathcal{V}_{n}=\{\Delta\in\mathcal{K}_{\mathbf{p}}:\Delta=\Omega-\Omega_{0},\Omega,\Omega_{0}\in\mathcal{K}_{\mathbf{p}}^{\sharp},\|\Delta\|_{F}>Mr_{\mathbf{p}}\}

for r𝐩r_{\mathbf{p}} as in (31). Hence if G(Δ)>0G(\Delta)>0 for all Δ𝒯n\Delta\in\mathcal{T}_{n}, then G(Δ)>0G(\Delta)>0 for all Δ𝒯n𝒱n\Delta\in\mathcal{T}_{n}\cup\mathcal{V}_{n}.

Proposition A.9.

Suppose G(ΔΩ)>0G(\Delta_{\Omega})>0 for all ΔΩ𝒯n\Delta_{\Omega}\in\mathcal{T}_{n}. We then have

Δ^ΩF<Mr𝐩.\displaystyle\left\lVert\widehat{\Delta}_{\Omega}\right\rVert_{F}<Mr_{\mathbf{p}}.
Proof.

By definition, G(0)=0G(0)=0, so G(Δ^Ω)G(0)=0G(\widehat{\Delta}_{\Omega})\leq G(0)=0. Thus if G(ΔΩ)>0G(\Delta_{\Omega})>0 on 𝒯n\mathcal{T}_{n}, then by Proposition A.8, Δ^Ω𝒯n𝒱n\widehat{\Delta}_{\Omega}\notin\mathcal{T}_{n}\cup\mathcal{V}_{n} where 𝒱n\mathcal{V}_{n} is defined therein. The proposition thus holds.  \;\;\scriptstyle\Box

Lemma A.10.

Under (A1) - (A3), for all Δ𝒯n\Delta\in{\mathcal{T}_{n}} for which r𝐩=o(minkmkL+1)r_{\mathbf{p}}=o\left(\sqrt{\frac{\min_{k}m_{k}}{L+1}}\right),

log|Ω0+Δ|log|Ω0|Σ0,Δ29Ω022ΔF2.\displaystyle\log|\Omega_{0}+\Delta|-\log|\Omega_{0}|\leq\langle\Sigma_{0},\Delta\rangle-\frac{2}{9\|\Omega_{0}\|_{2}^{2}}\left\lVert\Delta\right\rVert_{F}^{2}.

The proof of Lemma A.10 is deferred to Section B.3.
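The curvature bound in Lemma A.10 can also be illustrated numerically. The sketch below is ours and only illustrative: it uses a generic symmetric perturbation Δ with ‖Δ‖2 ≤ φmin(Ω0)/2 (rather than a Kronecker-sum structured element of 𝒯n), which is enough for the displayed inequality to hold.

```python
import numpy as np

rng = np.random.default_rng(3)
p = 40

# A well-conditioned positive definite Omega_0 (illustrative choice).
A = rng.standard_normal((p, p))
Omega0 = A @ A.T / p + np.eye(p)
Sigma0 = np.linalg.inv(Omega0)

# Symmetric perturbation scaled so that ||Delta||_2 <= phi_min(Omega_0) / 2.
Delta = rng.standard_normal((p, p)); Delta = (Delta + Delta.T) / 2
Delta *= 0.5 * np.linalg.eigvalsh(Omega0)[0] / np.linalg.norm(Delta, 2)

_, logdet1 = np.linalg.slogdet(Omega0 + Delta)
_, logdet0 = np.linalg.slogdet(Omega0)
lhs = logdet1 - logdet0
rhs = (np.trace(Sigma0 @ Delta)
       - 2.0 / (9 * np.linalg.norm(Omega0, 2) ** 2) * np.linalg.norm(Delta, 'fro') ** 2)
print(f"log|Omega0+Delta| - log|Omega0| = {lhs:.5f} <= {rhs:.5f}")
```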

By Proposition A.9, it remains to show that G(ΔΩ)>0G(\Delta_{\Omega})>0 on 𝒯n\mathcal{T}_{n} under the settings of Lemma A.6.

Lemma A.11.

With probability at least 1Lexp(clogp)1-L\exp(-c^{\prime}\log p), we have G(Δ)>0G(\Delta)>0 for all Δ𝒯n\Delta\in\mathcal{T}_{n}.

Proof.

By Lemma A.10, if r𝐩minkmk/(L+1)r_{\mathbf{p}}\leq\sqrt{\min_{k}m_{k}/(L+1)}, we can express (33) as

G(ΔΩ)=\displaystyle G(\Delta_{\Omega})= (35)
Ω0+ΔΩ,S^log|Ω0+ΔΩ|Ω0,S^+log|Ω0|\displaystyle\langle\Omega_{0}+\Delta_{\Omega},\widehat{S}\rangle-\log|\Omega_{0}+\Delta_{\Omega}|-\langle\Omega_{0},\widehat{S}\rangle+\log|\Omega_{0}|
+kρkmk(|Ψk,0+ΔΨ,k|1,off|Ψk,0|1,off)Δg\displaystyle+\underbrace{\sum_{k}\rho_{k}m_{k}(|\Psi_{k,0}+\Delta_{\Psi,k}|_{1,{\rm off}}-|\Psi_{k,0}|_{1,{\rm off}})}_{\Delta_{g}}
\displaystyle\geq ΔΩ,S^Σ0+29Ω022ΔΩF2+Δg.\displaystyle\quad\quad\quad\;\langle{\,\Delta_{\Omega},\widehat{S}-\Sigma_{0}\,}\rangle\;+\frac{2}{9\|\Omega_{0}\|_{2}^{2}}\|\Delta_{\Omega}\|_{F}^{2}+\Delta_{g}.

By Lemma A.6 and (35), we have for all ΔΩ𝒯n\Delta_{\Omega}\in\mathcal{T}_{n}, with C=maxk(2εk)2CdiagC^{\prime}=\max_{k}(\frac{2}{\varepsilon_{k}})\vee 2C_{\mathrm{diag}}, and for L1L\geq 1,

G(ΔΩ)\displaystyle G(\Delta_{\Omega}) \displaystyle\geq 29Ω022ΔΩF2|Δg+ΔΩ,S^Σ0|\displaystyle\frac{2}{9\|\Omega_{0}\|_{2}^{2}}\|\Delta_{\Omega}\|_{F}^{2}-\left\lvert{\Delta_{g}}+\;\langle{\,\Delta_{\Omega},\widehat{S}-\Sigma_{0}\,}\rangle\;\right\rvert
\displaystyle\geq 29Ω022ΔΩF2CΣ02minkmkslogp+LpΔΩF\displaystyle\frac{2}{9\left\lVert\Omega_{0}\right\rVert_{2}^{2}}\left\lVert\Delta_{\Omega}\right\rVert_{F}^{2}-\frac{C^{\prime}\left\lVert\Sigma_{0}\right\rVert_{2}}{\sqrt{\min_{k}m_{k}}}\sqrt{s\log p+Lp}\left\lVert\Delta_{\Omega}\right\rVert_{F}
=:W,\displaystyle=:W,

where by Lemma A.6, we have with probability at least 1Lexp(clogp)1-L\exp(-c^{\prime}\log p), for dk=O(mk)d_{k}=O(m_{k}),

|Δg+ΔΩ,S^Σ0|\displaystyle\left\lvert\Delta_{g}+\;\langle{\,\Delta_{\Omega},\widehat{S}-\Sigma_{0}\,}\rangle\;\right\rvert \displaystyle\leq CΣ02slogp+LpΔΩFmmin.\displaystyle C^{\prime}\left\lVert\Sigma_{0}\right\rVert_{2}\frac{\sqrt{s\log p+Lp}\left\lVert\Delta_{\Omega}\right\rVert_{F}}{\sqrt{m_{\min}}}.

Now W>0W>0 for ΔΩF=Mr𝐩\left\lVert\Delta_{\Omega}\right\rVert_{F}=Mr_{\mathbf{p}}, where M=12ϕmin2(Σ0)M=\frac{1}{2\phi_{\min}^{2}(\Sigma_{0})}, since

CΣ021minkmk(Lp+slogp)1Mr𝐩=CCM\displaystyle C^{\prime}\left\lVert\Sigma_{0}\right\rVert_{2}\sqrt{\frac{1}{\min_{k}m_{k}}}\sqrt{(Lp+s\log p)}\frac{1}{Mr_{\mathbf{p}}}=\frac{C^{\prime}}{CM}
=2CCϕmin2(Σ0)<29Ω022,\displaystyle=\frac{2C^{\prime}}{C}\phi_{\min}^{2}(\Sigma_{0})<\frac{2}{9\left\lVert\Omega_{0}\right\rVert_{2}^{2}},

which holds so long as CC is chosen to be large enough in r𝐩r_{\mathbf{p}} as defined in (31). For example, we set C=18C=36(maxk(1εk)Cdiag)C=18C^{\prime}=36(\max_{k}(\frac{1}{\varepsilon_{k}})\vee C_{\mathrm{diag}}).  \;\;\scriptstyle\Box

Theorem 3.1 follows from Proposition A.9 immediately. Combining Lemmas A.3 and 2.2 using the union bound implies both events hold with probability at least 1Lexp(clogp)1-L\exp(-c^{\prime}\log p). The error in the operator norm immediately follows from the Frobenius norm error bound and Lemma A.1.  \;\;\scriptstyle\Box

A.3 Extension to multiple samples n>1n>1

To complete the proof, it remains to present the case of n>1n>1. Incorporating n>1n>1 directly into the proof above is relatively straightforward but notation-dense; hence it suffices to note that having nn independent samples essentially increases the mkm_{k} replication to nmknm_{k}, and propagate this fact through the proof. We also note that the multi-sample n>1n>1 case can be converted to the single sample n=1n=1 regime to obtain a result directly. To see this, note that nn independent samples with precision matrix Ω0p×p\Omega_{0}\in\mathbb{R}^{p\times p} can be represented as a single sample with the block-diagonal precision matrix, i.e. Ω0\Omega_{0} repeated nn times blockwise along the diagonal, specifically, Ω(n)=InΩ0pn×pn\Omega^{(n)}=I_{n}\otimes\Omega_{0}\in\mathbb{R}^{pn\times pn}. Recall that by definition of the Kronecker sum, Ω(n)=InΩ0=0n×nΨ1ΨL\Omega^{(n)}=I_{n}\otimes\Omega_{0}=0_{n\times n}\oplus\Psi_{1}\oplus\dots\oplus\Psi_{L} is a (L+1)(L+1)-order Kronecker sum with p(n)=pnp^{(n)}=pn, achieved by introducing an all-zero factor Ψ0=0n×n\Psi_{0}=0_{n\times n} with d0=nd_{0}=n (and m0=pm_{0}=p). Since this extra factor is zero, the operator norms are not affected. The sparsity factor of Ω(n)\Omega^{(n)} is s(n)=sns^{(n)}=sn since the non-zero elements are replicated nn times, and each co-dimension mk(n):=p(n)/dk=nmkm^{(n)}_{k}:=p^{(n)}/d_{k}=nm_{k} for k>0k>0. Hence the single sample convergence result can be applied with L(n)=L+1L^{(n)}=L+1, yielding for ndmaxn\leq d_{\max} and L2L\geq 2

Ω^Ω02/Ω02=\displaystyle\left\lVert\widehat{\Omega}-\Omega_{0}\right\rVert_{2}/\left\lVert\Omega_{0}\right\rVert_{2}=
Cκ(Σ0)L(n)+1s(n)logp(n)+L(n)p(n)[mmin(n)]2\displaystyle C\kappa(\Sigma_{0})\sqrt{L^{(n)}+1}\sqrt{\frac{s^{(n)}\log p^{(n)}+L^{(n)}p^{(n)}}{[m^{(n)}_{\min}]^{2}}}
=\displaystyle= Cκ(Σ0)L+2s(logp+logn)+(L+1)pnmmin2\displaystyle C\kappa(\Sigma_{0})\sqrt{L+2}\sqrt{\frac{s(\log p+\log n)+(L+1)p}{nm_{\min}^{2}}}
\displaystyle\leq C83κ(Σ0)L+1slogp+Lpnmmin2\displaystyle C\sqrt{\frac{8}{3}}\kappa(\Sigma_{0})\sqrt{L+1}\sqrt{\frac{s\log p+Lp}{nm_{\min}^{2}}}

since mmin(n)=min(m0,nmmin)=min(dmaxmmin,nmmin)=nmminm^{(n)}_{\min}=\min(m_{0},nm_{\min})=\min(d_{\max}m_{\min},nm_{\min})=nm_{\min} whenever ndmaxn\leq d_{\max}. Hence Theorem 3.1 is recovered for ndmaxn\leq d_{\max}, with constant slightly worse than could be obtained by incorporating nn directly into the proof.
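The identity In ⊗ Ω0 = 0n×n ⊕ Ψ1 ⊕ ⋯ ⊕ ΨL used above can be verified directly; a minimal Python sketch for L = 2 (with illustrative random symmetric factors) is:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d1, d2 = 3, 2, 4                    # n samples; factor dimensions d_1, d_2

Psi1 = rng.standard_normal((d1, d1)); Psi1 = (Psi1 + Psi1.T) / 2
Psi2 = rng.standard_normal((d2, d2)); Psi2 = (Psi2 + Psi2.T) / 2

# Omega_0 = Psi_1 ⊕ Psi_2 (p x p with p = d_1 d_2).
Omega0 = np.kron(Psi1, np.eye(d2)) + np.kron(np.eye(d1), Psi2)

# Block-diagonal precision of n independent samples ...
lhs = np.kron(np.eye(n), Omega0)

# ... equals the 3-factor Kronecker sum 0_{n x n} ⊕ Psi_1 ⊕ Psi_2.
p = d1 * d2
rhs = (np.kron(np.zeros((n, n)), np.eye(p))
       + np.kron(np.eye(n), np.kron(Psi1, np.eye(d2)))
       + np.kron(np.kron(np.eye(n), np.eye(d1)), Psi2))
print(np.allclose(lhs, rhs))           # True
```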

A.4 Proof of Corollary 3.3

Denote by s=kmksks=\sum_{k}m_{k}s_{k}. Then for n=1n=1 and p=dmaxmmin=mkdkp=d_{\max}m_{\min}=m_{k}d_{k} for all kk,

Lslogp+Lpmmin2\displaystyle\sqrt{L}\sqrt{\frac{s\log p+Lp}{m_{\min}^{2}}} =\displaystyle= Ldmaxmminkmksklogp+Lpdmaxmmin\displaystyle\sqrt{L\frac{d_{\max}}{m_{\min}}}\sqrt{\frac{\sum_{k}m_{k}s_{k}\log p+Lp}{d_{\max}m_{\min}}}
=\displaystyle= Ldmaxmminkmksklogp+Lpp\displaystyle\sqrt{L\frac{d_{\max}}{m_{\min}}}\sqrt{\frac{\sum_{k}m_{k}s_{k}\log p+Lp}{p}}
=\displaystyle= Ldmaxmminksklogpdk+L<1\displaystyle\sqrt{L\frac{d_{\max}}{m_{\min}}}\sqrt{\sum_{k}\frac{s_{k}\log p}{d_{k}}+L}<1

by (A3). The corollary thus follows from Theorem 3.1.  \;\;\scriptstyle\Box

A.5 Proof of Theorem 3.4

Suppose that m1m2mLm_{1}\asymp m_{2}\asymp\ldots\asymp m_{L}. Denote by s=kmksks=\sum_{k}m_{k}s_{k}. Then

slogp+Lp(minkmk)\displaystyle\sqrt{\frac{s\log p+Lp}{(\min_{k}m_{k})}} =\displaystyle= kmksklogp+Lpmmin\displaystyle\sqrt{\frac{\sum_{k}m_{k}s_{k}\log p+Lp}{m_{\min}}}
\displaystyle\approx L1Lksklogp+dmax\displaystyle\sqrt{L}\sqrt{\frac{1}{L}\sum_{k}s_{k}\log p+d_{\max}}

The theorem thus follows from Theorem 3.1.  \;\;\scriptstyle\Box

Appendix B Proof of preliminary results in Section A

B.1 Proof of Lemma A.5

First, we prove (30). We have by (29)

Δg+offd(ΔΩ),S^Σ0\displaystyle\Delta_{g}+\;\langle{\,\mathrm{offd}(\Delta_{\Omega}),\widehat{S}-\Sigma_{0}\,}\rangle\;
\displaystyle\geq k=1Lmkρk(|Ψk+ΔΨk|1,off|Ψk|1,off)\displaystyle\sum_{k=1}^{L}m_{k}\rho_{k}\left(\left|\Psi_{k}+\Delta_{\Psi_{k}}\right|_{1,{\rm off}}-\left|\Psi_{k}\right|_{1,{\rm off}}\right)
+offd(ΔΩ),S^Σ0=:S2\displaystyle+\;\langle{\,\mathrm{offd}(\Delta_{\Omega}),\widehat{S}-\Sigma_{0}\,}\rangle\;=:S_{2}

where under the settings of Lemma A.3,

S2\displaystyle S_{2} \displaystyle\geq k=1Lmkρk(|ΔΨk,Sc|1|ΔΨk,S|1)k=1Lmk|ΔΨk|1,offδk\displaystyle\sum_{k=1}^{L}m_{k}\rho_{k}\left(\left|\Delta_{\Psi_{k},S^{c}}\right|_{1}-\left|\Delta_{\Psi_{k},S}\right|_{1}\right)-\sum_{k=1}^{L}m_{k}\left|\Delta_{\Psi_{k}}\right|_{1,{\rm off}}\delta_{k}
\displaystyle\geq k=1Lmkρk(|ΔΨk,Sc|1|ΔΨk,S|1)\displaystyle\sum_{k=1}^{L}m_{k}\rho_{k}\left(\left|\Delta_{\Psi_{k},S^{c}}\right|_{1}-\left|\Delta_{\Psi_{k},S}\right|_{1}\right)
k=1Lmkδk(|ΔΨk,Sc|1+|ΔΨk,S|1)\displaystyle-\sum_{k=1}^{L}m_{k}\delta_{k}\left(\left|\Delta_{\Psi_{k},S^{c}}\right|_{1}+\left|\Delta_{\Psi_{k},S}\right|_{1}\right)
\displaystyle\geq k=1Lmk(ρk+δk)|ΔΨk,S|1\displaystyle-\sum_{k=1}^{L}m_{k}(\rho_{k}+\delta_{k})\left|\Delta_{\Psi_{k},S}\right|_{1}
\displaystyle\geq 2maxkρkk=1Lmk|ΔΨk,S|1\displaystyle-2\max_{k}\rho_{k}\sum_{k=1}^{L}m_{k}\left|\Delta_{\Psi_{k},S}\right|_{1}
=\displaystyle= 2maxkρk|ΔS|1;\displaystyle-2\max_{k}\rho_{k}\left|\Delta_{S}\right|_{1};

Thus (30) holds.  \;\;\scriptstyle\Box

B.2 Proof of Lemma A.6

We focus on the case dkmkkd_{k}\leq m_{k}\;\forall k. Decomposing the trace term into its off-diagonal and diagonal components, we write

Δ,S^Σ0+Δg\displaystyle\;\langle{\,\Delta,\widehat{S}-\Sigma_{0}\,}\rangle\;+\Delta_{g} :=\displaystyle:= offd(Δ),S^Σ0+Δg+\displaystyle\;\langle{\,\mathrm{offd}(\Delta),\widehat{S}-\Sigma_{0}\,}\rangle\;+\Delta_{g}+
diag(Δ),S^Σ0\displaystyle\;\langle{\,\mathrm{diag}(\Delta),\widehat{S}-\Sigma_{0}\,}\rangle\;

Then we have by (30) and (2.2), with probability at least 1k=1L2exp(cdk)2Lexp(clogp)1-\sum_{k=1}^{L}2\exp(-cd_{k})-2L\exp(-c^{\prime}\log p), for dk=O(mk)d_{k}=O(m_{k}) and |ΔS|1sΔSF\left|\Delta_{S}\right|_{1}\leq\sqrt{s}\left\lVert\Delta_{S}\right\rVert_{F}, where s=k=1Lmksks=\sum_{k=1}^{L}m_{k}s_{k},

|Δg+offd(ΔΩ),S^Σ0|2maxkρk|ΔS|1\displaystyle\left\lvert\Delta_{g}+\;\langle{\,\mathrm{offd}(\Delta_{\Omega}),\widehat{S}-\Sigma_{0}\,}\rangle\;\right\rvert\leq 2\max_{k}\rho_{k}\left|\Delta_{S}\right|_{1}
\displaystyle\leq 2maxk(1εklogpmk)Σ02sΔΩ,SF\displaystyle 2\max_{k}\big{(}\frac{1}{\varepsilon_{k}}\sqrt{\frac{\log p}{m_{k}}}\big{)}\left\lVert\Sigma_{0}\right\rVert_{2}\sqrt{s}\left\lVert\Delta_{\Omega,S}\right\rVert_{F}

and

|diag(ΔΩ),S^Σ0|\displaystyle\left\lvert\;\langle{\,\mathrm{diag}(\Delta_{\Omega}),\widehat{S}-\Sigma_{0}\,}\rangle\;\right\rvert
\displaystyle\leq CdiagΣ02dmaxLdiag(ΔΩ)F(1+dmaxmmin).\displaystyle C_{\mathrm{diag}}\left\lVert\Sigma_{0}\right\rVert_{2}\sqrt{d_{\max}}\sqrt{L}\left\lVert\mathrm{diag}(\Delta_{\Omega})\right\rVert_{F}\big{(}1+\sqrt{\frac{d_{\max}}{m_{\min}}}\big{)}.

The lemma thus holds by the triangle inequality: for dmaxpd_{\max}\leq\sqrt{p}, which holds since dkmkd_{k}\leq m_{k} for all kk implies dmaxmmind_{\max}\leq m_{\min} and hence dmax2pd_{\max}^{2}\leq p,

|Δ,S^Σ0+Δg||offd(Δ),S^Σ0+Δg|\displaystyle\left\lvert\;\langle{\,\Delta,\widehat{S}-\Sigma_{0}\,}\rangle\;+\Delta_{g}\right\rvert\leq\left\lvert\;\langle{\,\mathrm{offd}(\Delta),\widehat{S}-\Sigma_{0}\,}\rangle\;+\Delta_{g}\right\rvert
+|diag(Δ),S^Σ0|\displaystyle+\left\lvert\;\langle{\,\mathrm{diag}(\Delta),\widehat{S}-\Sigma_{0}\,}\rangle\;\right\rvert
\displaystyle\leq 2CoffdΣ02slogpmminΔΩ,SF+\displaystyle 2C_{\mathrm{offd}}\left\lVert\Sigma_{0}\right\rVert_{2}\sqrt{\frac{s\log p}{m_{\min}}}\left\lVert\Delta_{\Omega,S}\right\rVert_{F}+
CdiagΣ02dmaxLdiag(ΔΩ)F(1+dmaxmmin)\displaystyle C_{\mathrm{diag}}\left\lVert\Sigma_{0}\right\rVert_{2}\sqrt{d_{\max}}\sqrt{L}\left\lVert\mathrm{diag}(\Delta_{\Omega})\right\rVert_{F}\left(1+\sqrt{\frac{d_{\max}}{m_{\min}}}\right)
\displaystyle\leq 2CoffdCdiagΣ02(slogpmminΔΩ,SF\displaystyle 2C_{\mathrm{offd}}\vee C_{\mathrm{diag}}\left\lVert\Sigma_{0}\right\rVert_{2}\big{(}\sqrt{\frac{s\log p}{m_{\min}}}\left\lVert\Delta_{\Omega,S}\right\rVert_{F}
+Ldiag(ΔΩ)Fp+dmax2mmin)\displaystyle+\sqrt{L}\left\lVert\mathrm{diag}(\Delta_{\Omega})\right\rVert_{F}\frac{\sqrt{p}+d_{\max}}{2\sqrt{m_{\min}}}\big{)}
\displaystyle\leq CΣ02T3\displaystyle C^{\prime}\left\lVert\Sigma_{0}\right\rVert_{2}T_{3}

where Coffd=maxk(1/εk)C_{\mathrm{offd}}=\max_{k}\big{(}1/{\varepsilon_{k}}\big{)}, C=2(CdiagCoffd)C^{\prime}=2(C_{\mathrm{diag}}\vee C_{\mathrm{offd}}), and where by Cauchy-Schwarz inequality,

slogpoffd(ΔΩ)F+Lpdiag(ΔΩ)F\displaystyle\sqrt{s\log p}\left\lVert\mathrm{offd}(\Delta_{\Omega})\right\rVert_{F}+\sqrt{Lp}\left\lVert\mathrm{diag}(\Delta_{\Omega})\right\rVert_{F}
\displaystyle\leq slogp+pLΔΩF\displaystyle\sqrt{s\log p+pL}\left\lVert\Delta_{\Omega}\right\rVert_{F}

\;\;\scriptstyle\Box

B.3 Proof of Lemma A.10

We first state Proposition B.1.

Proposition B.1.

Under (A1)-(A3), for all Δ𝒯n\Delta\in{\mathcal{T}_{n}},

Δ2Mr𝐩L+1minkmk12ϕmin(Ω0)\displaystyle\left\lVert\Delta\right\rVert_{2}\leq Mr_{\mathbf{p}}\sqrt{\frac{L+1}{\min_{k}m_{k}}}\leq\frac{1}{2}\phi_{\min}(\Omega_{0}) (36)

so that Ω0+vΔ0,vI[0,1]\Omega_{0}+v\Delta\succ 0,\forall v\in I\supset[0,1], where II is an open interval containing [0,1][0,1].

Proof.

By Proposition A.7, (36) holds for Δ𝒯n\Delta\in{\mathcal{T}_{n}}. Next, it is sufficient to show that Ω0+(1+ε)Δ0\Omega_{0}+(1+\varepsilon)\Delta\succ 0 and Ω0εΔ0\Omega_{0}-\varepsilon\Delta\succ 0 for some 1>ε>01>\varepsilon>0. Indeed, for ε<1\varepsilon<1,

ϕmin(Ω0+(1+ε)Δ)\displaystyle\phi_{\min}(\Omega_{0}+(1+\varepsilon)\Delta) \displaystyle\geq ϕmin(Ω0)(1+ε)Δ2\displaystyle\phi_{\min}(\Omega_{0})-(1+\varepsilon)\left\lVert\Delta\right\rVert_{2}
>\displaystyle> ϕmin(Ω0)2L+1minkmkMr𝐩>0\displaystyle\phi_{\min}(\Omega_{0})-2\sqrt{\frac{L+1}{\min_{k}m_{k}}}Mr_{\mathbf{p}}>0

by the definition of 𝒯n\mathcal{T}_{n} and (36); the bound for Ω0εΔ\Omega_{0}-\varepsilon\Delta follows in the same way.  \;\;\scriptstyle\Box

Thus we have that log|Ω0+vΔ|\log|\Omega_{0}+v\Delta| is infinitely differentiable on the open interval I[0,1]I\supset[0,1] of vv. This allows us to use Taylor’s formula with integral remainder to prove Lemma A.10, following steps identical to those in [15], drawn from [30]; the details are hence omitted.  \;\;\scriptstyle\Box

Appendix C Some details for the proof of Lemmas 4.3 and 4.4

Let the sample covariance S^:=vec{𝓧T}vec{𝓧T}\widehat{S}:=\rm{vec}\{\,\boldsymbol{\mathscr{X}}^{T}\,\}\otimes\rm{vec}\{\,\boldsymbol{\mathscr{X}}^{T}\,\} be as in (10) and Σ0=Ω01p×p\Sigma_{0}=\Omega_{0}^{-1}\in\mathbb{R}^{p\times p} be the true covariance matrix. Thus we write vec{𝓧T}Σ01/2Z\rm{vec}\{\,\boldsymbol{\mathscr{X}}^{T}\,\}\sim\Sigma_{0}^{1/2}Z, where ZpZ\in\mathbb{R}^{p} denotes an isotropic sub-gaussian random vector with independent coordinates. Let

diag(Δ~k)\displaystyle\mathrm{diag}(\widetilde{\Delta}_{k}) :=\displaystyle:= I[d1:k1]diag(ΔΨk)I[dk+1:L].\displaystyle I_{[d_{1:k-1}]}\otimes\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}})\otimes I_{[d_{k+1:L}]}. (37)

This explains (26). Consequently, by (37)

diag(Δ~k)F2=mkdiag(ΔΨk)F2.\left\lVert\mathrm{diag}(\widetilde{\Delta}_{k})\right\rVert_{F}^{2}=m_{k}\left\lVert\mathrm{diag}(\Delta^{\prime}_{\Psi_{k}})\right\rVert_{F}^{2}.
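A minimal Python sketch, assuming the C-order vec convention for the data tensor (which may differ in ordering from the convention behind (4)), verifies both the Frobenius-norm identity above and the quadratic-form representation behind (23) and (26):

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(5)
dims = [3, 4, 5]                      # d_1, d_2, d_3; p = 60
k = 1                                 # illustrate the second mode (0-based index)
d_k = dims[k]
m_k = int(np.prod(dims)) // d_k

delta = rng.standard_normal(d_k)      # diagonal entries of diag(Delta'_{Psi_k})
# Construction (37): diag(Delta~_k) = I_{[d_{1:k-1}]} ⊗ diag(delta) ⊗ I_{[d_{k+1:L}]}
blocks = [np.eye(int(np.prod(dims[:k]))), np.diag(delta),
          np.eye(int(np.prod(dims[k + 1:])))]
Delta_tilde = reduce(np.kron, blocks)

# Frobenius identity: ||diag(Delta~_k)||_F^2 = m_k ||diag(delta)||_F^2.
print(np.isclose(np.linalg.norm(Delta_tilde, 'fro') ** 2,
                 m_k * np.linalg.norm(delta) ** 2))

# Quadratic-form identity behind (23): with X a d_1 x d_2 x d_3 tensor and
# x = vec(X) under C-ordering (so Kronecker factors act mode by mode),
# <x x^T, diag(Delta~_k)> = sum_i delta_i ||Y_i^{(k)}||_2^2,
# where Y_i^{(k)} are the rows of the mode-k unfolding of X.
X = rng.standard_normal(dims)
x = X.reshape(-1)
lhs = x @ Delta_tilde @ x
Y = np.moveaxis(X, k, 0).reshape(d_k, -1)       # mode-k unfolding, d_k x m_k
rhs = np.sum(delta * np.sum(Y ** 2, axis=1))
print(np.isclose(lhs, rhs))
```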

By Lemma 2.1, the diagonal and off-diagonal components of the trace term can be written as follows: for Ω0=Ψ1ΨL\Omega_{0}=\Psi_{1}\oplus\dots\oplus\Psi_{L},

S^,diag(Ω0)\displaystyle\;\langle{\,\widehat{S},\mathrm{diag}(\Omega_{0})\,}\rangle\; =\displaystyle= k=1Li=1dkΨk,iiYi(k),Yi(k), and\displaystyle\sum_{k=1}^{L}\sum_{i=1}^{d_{k}}\Psi_{k,ii}\;\langle{\,Y^{(k)}_{i},Y^{(k)}_{i}\,}\rangle\;,\text{ and }
S^,offd(Ω0)\displaystyle\;\langle{\,\widehat{S},\mathrm{offd}(\Omega_{0})\,}\rangle\; =\displaystyle= k=1LijdkΨk,ijYi(k),Yj(k).\displaystyle\sum_{k=1}^{L}\sum_{i\not=j}^{d_{k}}\Psi_{k,ij}\;\langle{\,Y^{(k)}_{i},Y^{(k)}_{j}\,}\rangle\;.

where offd(Ω0)=offd(Ψ1)offd(ΨL)\mathrm{offd}(\Omega_{0})=\mathrm{offd}(\Psi_{1})\oplus\dots\oplus\mathrm{offd}(\Psi_{L}) and diag(Ω0)=diag(Ψ1)diag(ΨL)\mathrm{diag}(\Omega_{0})=\mathrm{diag}(\Psi_{1})\oplus\dots\oplus\mathrm{diag}(\Psi_{L}). This explains (23). See also (2). Moreover, we need the following event 𝒟0\mathcal{D}_{0}:

event 𝒟0={|Ip,S^Σ0|CpΣ02logp/n},\displaystyle\text{ event }\;\mathcal{D}_{0}=\left\{\left\lvert\;\langle{\,I_{p},\widehat{S}-\Sigma_{0}\,}\rangle\;\right\rvert\leq C\sqrt{p}\left\lVert\Sigma_{0}\right\rVert_{2}\sqrt{\log p/n}\right\},

which states that the sum of the errors of the diagonal of the sample covariance S^\widehat{S} (10) is bounded tightly in an average sense. Indeed, as expected, tr(S^){\rm tr}(\widehat{S}) converges to tr(Σ0){\rm tr}(\Sigma_{0}) at the rate of

|tr(S^)tr(Σ0)|/p=OP(Σ02logp/(np)).\displaystyle\left\lvert{\rm tr}(\widehat{S})-{\rm tr}(\Sigma_{0})\right\rvert/p=O_{P}(\left\lVert\Sigma_{0}\right\rVert_{2}\sqrt{\log p/(np)}).

\;\;\scriptstyle\Box

References

  • Allen and Tibshirani [2010] Allen, G. and Tibshirani, R. (2010). Transposable regularized covariance models with an application to missing data imputation. Ann. Appl. Stat. 4 764–790.
  • Andrianov [1997] Andrianov, S. N. (1997). A matrix representation of Lie algebraic methods for design of nonlinear beam lines. In AIP Conference Proceedings, vol. 391. AIP.
  • Banerjee et al. [2008] Banerjee, O., Ghaoui, L. E. and d’Aspremont, A. (2008). Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. J. Mach. Learn. Res. 9 485–516.
  • Beckermann et al. [2013] Beckermann, B., Kressner, D. and Tobler, C. (2013). An error analysis of Galerkin projection methods for linear systems with tensor product structure. SIAM Journal on Numerical Analysis 51 3307–3326.
  • Chapman et al. [2014] Chapman, A., Nabi-Abdolyousefi, M. and Mesbahi, M. (2014). Controllability and observability of network-of-networks via Cartesian products. IEEE Transactions on Automatic Control 59 2668–2679.
  • d’Aspremont et al. [2008] d’Aspremont, A., Banerjee, O. and Ghaoui, L. E. (2008). First-order methods for sparse covariance selection. SIAM Journal on Matrix Analysis and Applications 30 56–66.
  • Dawid [1981] Dawid, A. P. (1981). Some matrix-variate distribution theory: Notational considerations and a Bayesian application. Biometrika 68 265–274.
  • Dorr [1970] Dorr, F. W. (1970). The direct solution of the discrete Poisson equation on a rectangle. SIAM Review 12 248–263.
  • Eilers and Marx [2003] Eilers, P. H. and Marx, B. D. (2003). Multivariate calibration with temperature interaction using two-dimensional penalized signal regression. Chemometrics and intelligent laboratory systems 66 159–174.
  • Ellner [1986] Ellner, N. S. (1986). New ADI model problem applications. In Proceedings of 1986 ACM Fall joint computer conference.
  • Fey et al. [2018] Fey, M., Eric Lenssen, J., Weichert, F. and Müller, H. (2018). SplineCNN: Fast geometric deep learning with continuous B-spline kernels. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
  • Friedman et al. [2008] Friedman, J., Hastie, T. and Tibshirani, R. (2008). Sparse inverse covariance estimation with the graphical Lasso. Biostatistics 9 432–441.
  • Grasedyck [2004] Grasedyck, L. (2004). Existence and computation of low Kronecker-rank approximations for large linear systems of tensor product structure. Computing 72 247–265.
  • Greenewald et al. [2017] Greenewald, K., Park, S., Zhou, S. and Giessing, A. (2017). Time-dependent spatially varying graphical models, with application to brain fMRI data analysis. In Advances in Neural Information Processing Systems 30. 5832–5840.
  • Greenewald et al. [2019a] Greenewald, K., Zhou, S. and Hero, A. (2019a). Supplementary material for “The Tensor graphical Lasso (TeraLasso)”.
  • Greenewald et al. [2019b] Greenewald, K., Zhou, S. and Hero, A. (2019b). The Tensor graphical Lasso (TeraLasso). J. R. Stat. Soc., B: Stat. Methodol. 81 901–931.
  • Hornstein et al. [2019] Hornstein, M., Fan, R., Shedden, K. and Zhou, S. (2019). Joint mean and covariance estimation for unreplicated matrix-variate data. J. Amer. Statist. Assoc. 114 682–696.
  • Imrich et al. [2008] Imrich, W., Klavžar, S. and Rall, D. F. (2008). Topics in graph theory: Graphs and their Cartesian product. AK Peters/CRC Press.
  • Kalaitzis et al. [2013] Kalaitzis, A., Lafferty, J., Lawrence, N. and Zhou, S. (2013). The Bigraphical Lasso. In Proc. 30th Int. Conf. Mach. Learn.
  • Kolda and Bader [2009] Kolda, T. G. and Bader, B. W. (2009). Tensor decompositions and applications. SIAM Review 51 455–500.
  • Kotzagiannidis and Dragotti [2017] Kotzagiannidis, M. S. and Dragotti, P. L. (2017). Splines and wavelets on circulant graphs. Applied and Computational Harmonic Analysis .
  • Kressner and Tobler [2010] Kressner, D. and Tobler, C. (2010). Krylov subspace methods for linear systems with tensor product structure. SIAM Journal on Matrix Analysis and Applications 31 1688–1714.
  • Lauritzen [1996] Lauritzen, S. L. (1996). Graphical Models. Oxford University Press.
  • Leng and Tang [2012] Leng, C. and Tang, C. (2012). Sparse matrix graphical models. J. Amer. Statist. Assoc. 107 1187–1200.
  • Li et al. [2022] Li, S., López-Garcia, M., Lawrence, N. D. and Cutillo, L. (2022). Two-way sparse network inference for count data. In International Conference on Artificial Intelligence and Statistics. PMLR.
  • Luenberger [1966] Luenberger, D. (1966). Observers for multivariable systems. IEEE Transactions on Automatic Control 11 190–197.
  • Meinshausen and Bühlmann [2006] Meinshausen, N. and Bühlmann, P. (2006). High dimensional graphs and variable selection with the Lasso. Ann. Statist. 34 1436–1462.
  • Milman and Schechtman [1986] Milman, V. D. and Schechtman, G. (1986). Asymptotic Theory of Finite Dimensional Normed Spaces. Lecture Notes in Mathematics 1200. Springer.
  • Ravikumar et al. [2011] Ravikumar, P., Wainwright, M., Raskutti, G. and Yu, B. (2011). High-dimensional covariance estimation by minimizing 1\ell_{1}-penalized log-determinant divergence. Electron. J. Stat. 4 935–980.
  • Rothman et al. [2008] Rothman, A., Bickel, P., Levina, E. and Zhu, J. (2008). Sparse permutation invariant covariance estimation. Electron. J. Stat. 2 494–515.
  • Rudelson and Vershynin [2013] Rudelson, M. and Vershynin, R. (2013). Hanson-Wright inequality and sub-gaussian concentration. Electron. Commun. Probab. 18 1–9.
  • Rudelson and Zhou [2017] Rudelson, M. and Zhou, S. (2017). Errors-in-variables models with dependent measurements. Electron. J. Statist. 11 1699–1797.
  • Schmitt et al. [2001] Schmitt, U., Louis, A. K., Darvas, F., Buchner, H. and Fuchs, M. (2001). Numerical aspects of spatio-temporal current density reconstruction from EEG-/MEG-data. IEEE Transactions on Medical Imaging 20 314–324.
  • Shi et al. [2013] Shi, X., Wei, Y. and Ling, S. (2013). Backward error and perturbation bounds for high order Sylvester tensor equation. Linear and Multilinear Algebra 61 1436–1446.
  • Tsiligkaridis et al. [2013] Tsiligkaridis, T., Hero, A. and Zhou, S. (2013). On convergence of Kronecker graphical Lasso algorithms. IEEE Trans. Signal Process. 61 1743–1755.
  • Van Loan [2000] Van Loan, C. F. (2000). The ubiquitous Kronecker product. Journal of Computational and Applied Mathematics 123 85–100.
  • Wang and Hero [2021] Wang, Y. and Hero, A. (2021). SG-PALM: a fast physically interpretable tensor graphical model. arXiv preprint arXiv:2105.12271 .
  • Wang et al. [2022] Wang, Y., Sun, Z., Song, D. and Hero, A. (2022). Kronecker-structured covariance models for multiway data. arXiv preprint arXiv:2212.01721 .
  • Wood [2006] Wood, S. N. (2006). Low-rank scale-invariant tensor product smooths for generalized additive mixed models. Biometrics 62 1025–1036.
  • Wood et al. [2016] Wood, S. N., Pya, N. and Safken, B. (2016). Smoothing parameter and model selection for general smooth models. Journal of the American Statistical Association 111 1548–1563.
  • Yoon and Kim [2022] Yoon, J. H. and Kim, S. (2022). Eiglasso for scalable sparse kronecker-sum inverse covariance estimation. Journal of Machine Learning Research 23 1–39.
  • Yuan and Lin [2007] Yuan, M. and Lin, Y. (2007). Model selection and estimation in the Gaussian graphical model. Biometrika 94 19–35.
  • Zhou [2014] Zhou, S. (2014). Gemini: Graph estimation with matrix variate normal instances. Ann. Statist. 42 532–562.
  • Zhou [2024] Zhou, S. (2024). Concentration of measure bounds for matrix-variate data with missing values. Bernoulli 30 198–226.
  • Zhou et al. [2010] Zhou, S., Lafferty, J. and Wasserman, L. (2010). Time varying undirected graphs. Mach. Learn. 80 295–319.
  • Zhou et al. [2011] Zhou, S., Rütimann, P., Xu, M. and Bühlmann, P. (2011). High-dimensional covariance estimation based on Gaussian graphical models. J. Mach. Learn. Res. 12 2975–3026.