Sharper rates of convergence for the tensor graphical Lasso estimator
Abstract
Many modern datasets exhibit dependencies among observations as well as variables. This gives rise to the challenging problem of analyzing high-dimensional matrix-variate data with unknown dependence structures. To address this challenge, Kalaitzis et al. (2013) proposed the Bigraphical Lasso (BiGLasso), an estimator for precision matrices of matrix-normals based on the Cartesian product of graphs. Subsequently, Greenewald, Zhou and Hero (GZH 2019) introduced a multiway tensor generalization of the BiGLasso estimator, known as the TeraLasso estimator. In this paper, we provide sharper rates of convergence in the Frobenius and operator norm for both the BiGLasso and TeraLasso estimators for estimating inverse covariance matrices, improving upon the rates presented in GZH 2019. In particular, (a) we strengthen the bounds for the relative errors in the operator and Frobenius norm by a factor of approximately ; and (b) crucially, this improvement allows finite-sample estimation errors in both norms to be derived for the two-way Kronecker sum model. The two-way regime is important because it is both the most theoretically challenging setting and the most common in applications. Normality is not needed in our proofs; instead, we consider sub-gaussian ensembles and derive tight concentration of measure bounds using tensor unfolding techniques. The proof techniques may be of independent interest.
1 Introduction
Matrix- and tensor-valued data with complex dependencies are ubiquitous in modern statistics and machine learning, flowing from sources as diverse as medical and radar imaging modalities, spatial-temporal and meteorological data collected from sensor networks and weather stations, and biological, neuroscience and spatial gene expression data aggregated over trials and time points. Learning useful structures from these large-scale, complex and high-dimensional data in the low-sample regime is an important task. Undirected graphs are often used to describe high-dimensional distributions. Under sparsity conditions, the graph can be estimated using $\ell_1$-penalization methods, such as the graphical Lasso (GLasso) [12] and multiple (nodewise) regressions [27]. Under suitable conditions, including independence among samples, such approaches yield consistent (and sparse) estimation of the graphical structure as well as fast rates of convergence with respect to the operator and Frobenius norm for the covariance matrix and its inverse. The independence assumptions mentioned above substantially simplify mathematical derivations, but they tend to be very restrictive.
To remedy this, recent work has demonstrated another regime where further improvements in the sample size lower bounds are possible under additional structural assumptions, which arise naturally in the contexts mentioned above for data with complex dependencies. For example, the matrix-normal model [7] as studied in [1, 24, 35, 43] restricts the topology of the graph to tensor product graphs, where the precision matrix corresponds to a Kronecker product over two component graphs. In [43], the author showed that one can estimate the covariance and inverse covariance matrices well using only one instance from the matrix variate normal distribution. However, such a normality assumption is not needed either, as elaborated in a recent paper by the same author [44]. More specifically, while the precision matrix encodes conditional independence relations for Gaussian distributions, for the more general sub-gaussian matrix variate model this no longer holds. However, the inverse covariance matrix still encodes certain zero correlation relations between the residual errors and the covariates in a regression model, analogous to the commonly known Gaussian regression model [23]. See [44], where such a regression model is introduced for sub-gaussian matrix variate data. See also [17] and references therein for more recent applications of matrix variate models in genomics.
Along similar lines, the Bigraphical Lasso framework was proposed to parsimoniously model conditional dependence relationships of matrix variate data based on the Cartesian product of graphs. As pointed out in [19], the associativity of the Kronecker sum yields an approach to the modeling of datasets organized into 3 or higher-order tensors. In [16], we explored this possibility in depth by (a) introducing the tensor graphical Lasso (TeraLasso) procedure for estimating sparse -way decomposable inverse covariance matrices for all ; and (b) establishing rates of convergence in the Frobenius and operator norm for estimating this class of inverse covariance matrices from sub-gaussian tensor-valued data.
Consider the mean zero -order random tensor , and assume that we are given independent samples . Here represents that two vectors follow the same distribution. Denote by the vector of component dimensions and the product of s. Hence
(1)
is the effective sample size available for estimating the relations among the features along the mode in the tensor model. It was shown that, due to the element replication inherent in the Cartesian product structure, the precision matrix in the TeraLasso model can be accurately estimated from limited data samples of high-dimensional variables with multiway coordinates such as space, time and replicates. In particular, single-sample convergence was proved for and empirically observed for all . In contrast, direct application of the models in [12, 45] and the analysis frameworks in [30, 45, 46] require the sample size to scale proportionally to , when the total number of edges is , which is still often too large to be practical. As a result, it is common to assume that certain axes of are i.i.d., often an overly simplistic model.
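To make this bookkeeping concrete, the following minimal sketch computes the effective sample sizes under the convention that the mode-k structure is replicated p/d_k times per sample; the dimensions and replicate count below are illustrative, not taken from the paper's experiments.

```python
import numpy as np

# Hypothetical component dimensions d_1, ..., d_K and replicate count N.
d = np.array([10, 50, 100])
N = 2

p = int(np.prod(d))        # total dimension p = d_1 * ... * d_K
n_eff = N * p // d         # effective sample size for mode k: N * p / d_k

print(p, n_eff)            # 50000 and [10000  2000  1000]
```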
Contributions. Previously, in [16], we provided theoretical guarantees for the TeraLasso estimator (2) when the sample size is low, including single-sample convergence when . In the present work, we strengthen the bounds for the relative errors in the operator and Frobenius norm in [16] by a factor of , improving upon those in Theorem 3.2 (as originally proved in [16]). These faster rates of convergence are stated in Theorem 3.1 in the present paper. This significant improvement is due to the tighter error bounds on the diagonal component of the loss function, cf. Lemma 2.2. We now show that the TeraLasso estimator achieves low errors with a constant number of replicates, namely , even in the regime. This closes the gap between the low single-sample error for the two-way model empirically observed in [16] and the theoretical bounds therein. The key technical innovation in the present work is the uniform concentration of measure bounds on the trace terms appearing in the diagonal component of the loss function (2). The proof techniques may be of independent interest. In the proofs, we highlight tensor unfolding techniques and applications of the Hanson-Wright inequality, which are crucial in analyzing sub-gaussian tensor-valued data with complex dependencies. Experiments that confirm the derived statistical rates are provided in the arXiv preprint arXiv:2304.00441v1.
1.1 Related Work
Models similar to the Kronecker sum precision model have been successfully used in a variety of fields, including regularization of multivariate splines [39, 9, 21, 40], design of physical networks [18, 36, 11], and Sylvester equations arising from the discretization of separable -dimensional PDEs with tensorized finite elements [13, 22, 4, 34, 10]. Additionally, Kronecker sums find extensive use in applied mathematics and statistics, including beam propagation physics [2], control theory [26, 5], fluid dynamics [8], errors-in-variables [32], and spatio-temporal modeling and neural processes [14, 33]. When the data indeed follow a matrix normal model, the BiGLasso [19] and TeraLasso [16] also effectively recover the conditional dependence graphs and precision matrices simultaneously for a class of Gaussian graphical models by restricting the topology to Cartesian product graphs. We provided a composite gradient-based optimization algorithm, and obtained algorithmic and statistical rates of convergence for estimating the structured precision matrix for tensor-valued data [16]. Recently, several methods have arisen that can speed up the numerical convergence of the optimization of the BiGLasso objective of [19], cf. (2) with . A Newton-based optimization algorithm for was presented in [41] that provides significantly faster convergence in ill-conditioned settings. Subsequently, [25] also developed a scalable flip-flop approach, building upon the original BiGLasso flip-flop algorithm as derived in [19]. Using a Kronecker sum eigenvalue decomposition similar to that of [16] to make the memory requirements scalable, their algorithm also provides faster numerical convergence than the first-order algorithm presented in [16], especially when the precision matrix is ill-conditioned. They also provided a Gaussian copula approach for applying the model to certain non-Gaussian data. As our focus in the current work is the statistical convergence rate, not optimization convergence, we continue to use the first-order optimization algorithm of [16] in our experiments, because it is flexible enough to handle . For , it recovers the same estimate as the algorithms of [25] and [41], since the objective function is convex and all three algorithms are guaranteed to converge to the unique global minimum. Although non-convex regularizers were also considered in [16], we do not consider them in the present work. Instead, we focus on deriving sharp statistical rates of convergence in the Frobenius and operator norm for estimating the precision matrix (5) with the convex objective (2). Subsequent to [16], a related method, SG-PALM, was presented in [37], where the precision matrix is the square of an -way Kronecker sum. See [38] for a recent survey of multiway covariance models.
Organization. The rest of the paper is organized as follows. In Section 2, we define the model and the method, with discussions. Section 3 presents the main technical results. Section 4 presents key technical proofs regarding the diagonal component of the loss function. We conclude in Section 5. We place additional technical proofs in the supplementary material.
1.2 Definitions and notations
Let be the canonical basis of . Let and be the unit Euclidean ball and the unit sphere of , respectively. For a set , denote . We denote by the set . We use for matrices, for tensors, and for vectors. For , we use as in [20], and define by analogy to the matrix transpose, i.e. .
We refer to a vector with at most nonzero entries as a -sparse vector. For a finite set , the cardinality is denoted by . For a vector , denote by and . For a given vector , denotes the diagonal matrix whose main diagonal entries are the entries of . For a symmetric matrix , let and be the largest and the smallest eigenvalue of , respectively. For a matrix , we use to denote its operator norm and the Frobenius norm, given by . For a matrix of size , let and denote the maximum absolute row and column sum of the matrix, respectively. Let denote the componentwise matrix maximum norm. Let be the diagonal of . Let . Let denote the condition number for matrix . Denote by the determinant of . We use the inner product throughout. The inner product of two same-sized tensors is the sum of the products of their entries, i.e.,
(2)
where denotes the element of . Fibers are the higher-order analogue of matrix rows and columns. Following the definition in [20], the tensor unfolding or matricization of along the th mode is denoted by , and is formed by arranging the mode- fibers as columns of the resulting matrix of dimension . Denote by its transpose. When extracted from the tensor, fibers are always assumed to be oriented as column vectors. The specific permutation of columns is not important so long as it is consistent across related calculations [20]. Denote by the column vector of . One can compute the corresponding (rescaled) Gram matrix with . Thus, we have columns to compute :
(3)
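A minimal sketch of the mode-k unfolding and the resulting rescaled Gram matrix is given below; the function names and the 1/m_k rescaling (averaging over the m_k = p/d_k columns) are illustrative assumptions consistent with the description above.

```python
import numpy as np

def unfold(X, k):
    """Mode-k matricization: arrange the mode-k fibers of X as columns,
    giving a d_k x (p/d_k) matrix. The column ordering is immaterial as
    long as it is consistent across related calculations."""
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def mode_k_gram(X, k):
    """Rescaled mode-k Gram matrix, averaging over the m_k = p/d_k columns."""
    V = unfold(X, k)                  # d_k x m_k
    return V @ V.T / V.shape[1]

# Example: one 4 x 5 x 6 mean-zero tensor sample.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5, 6))
print(mode_k_gram(X, 0).shape)        # (4, 4)
```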
For two numbers , , and . We write if for some positive absolute constants that are independent of , sparsity, and sampling parameters. We write or if for some absolute constant , and or if . We write if as , where the parameter will be the size of the matrix under consideration. In this paper, , etc., denote various absolute positive constants which may change line by line.
2 The model and the method
For a random variable , the sub-gaussian (or ) norm of , denoted by , is defined as . Consider the tensor-valued data generated from a sub-gaussian random vector with independent mean-zero, unit-variance components whose norms are uniformly bounded:
(4)
We refer to as an order- sub-gaussian random tensor with covariance throughout this paper. Let . We assume that the precision matrix of is the -way Kronecker sum of matrix components . As such, we have
(5)
where denotes the Kronecker (direct) product and . The precision matrix (5) has an immediate connection to the positive-semidefinite Gram matrices associated with each mode of the tensor , through tensor unfolding. Let be i.i.d. random tensors following (4). We define the mode- Gram matrix and its corresponding factor-wise marginal covariance as follows:
(6)
(7) |
See [15]. Denote by the set of positive definite matrices that are decomposable into a Kronecker sum of fixed factor dimensions :
(8)
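For concreteness, the following sketch constructs a member of this Kronecker-sum class from factor matrices and checks the eigenvalue additivity of the Kronecker sum, which is used later in Section 3; the factor choices and function names are illustrative.

```python
import numpy as np

def kron_sum(factors):
    """K-way Kronecker sum: sum_k I x ... x Psi_k x ... x I, where the
    identity blocks match the dimensions of the remaining factors."""
    dims = [P.shape[0] for P in factors]
    p = int(np.prod(dims))
    out = np.zeros((p, p))
    for k, P in enumerate(factors):
        left = np.eye(int(np.prod(dims[:k])))
        right = np.eye(int(np.prod(dims[k + 1:])))
        out += np.kron(np.kron(left, P), right)
    return out

d = (3, 4, 5)
Psi = [np.eye(dk) + 0.1 * np.ones((dk, dk)) for dk in d]  # SPD factors
Omega = kron_sum(Psi)                                     # p x p, p = 60

# The spectrum of a Kronecker sum is the set of all sums of factor eigenvalues.
eigs = np.zeros(1)
for P in Psi:
    eigs = np.add.outer(eigs, np.linalg.eigvalsh(P)).ravel()
print(np.allclose(np.sort(eigs), np.linalg.eigvalsh(Omega)))  # True
```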
The TeraLasso estimator [16] minimizes the $\ell_1$-penalized negative Gaussian log-likelihood function over the domain of precision matrices , where
(9)
(10)
is a penalty parameter to be specified. Here, the weight (1) for each is determined by the number of times a structure is replicated in . Then for as in (8), we have
The objective function (2) depends on the training data via the coordinate-wise Gram matrices , where we replace the trace term in (9) with the weighted sum over component-wise trace terms in (2), in view of (5) and (6), cf. Lemma 2.1. Here and in [16], the set of penalty parameters is chosen to dominate the maximum of the entrywise errors for estimating the population (7) with the sample as in (6), for each on event ; cf. (12). This choice works equally well for the sub-gaussian model (4). For and , the objective function (2) is similar in spirit to the BiGLasso objective [19], where correspond to the Gram matrices computed from the row and column vectors of the matrix variate samples, respectively. When is a Kronecker product rather than a Kronecker sum over the factors, the objective function (2) is also closely related to the Gemini estimators in [43], where is a linear combination of . When follows a multivariate Gaussian distribution and the precision matrix has a decomposition of the form (5), the sparsity pattern of for each corresponds to the conditional independence graph across the dimension of the data. Similar to the graphical Lasso, incorporating an $\ell_1$-penalty promotes a sparse graphical structure in the and, by extension, in . See for example [6, 3, 42, 46, 29, 19, 43, 16] and references therein.
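The structure of the objective (2) can be sketched as follows. This is only an illustration: the Gram-matrix rescaling, the weights p/d_k, and the penalty scaling are assumptions rather than the paper's exact constants, and the log-determinant is evaluated through the eigenvalue additivity of the Kronecker sum so that the full p x p matrix is never formed.

```python
import numpy as np

def ks_logdet(factors):
    """log det of the Kronecker sum of the factors, via eigenvalue additivity:
    the spectrum is the set of all sums of one eigenvalue per factor."""
    sums = np.zeros(1)
    for P in factors:
        sums = np.add.outer(sums, np.linalg.eigvalsh(P)).ravel()
    return float(np.sum(np.log(sums)))

def penalized_objective(factors, grams, lam):
    """Sketch of an l1-penalized negative Gaussian log-likelihood in the spirit
    of (2): a weighted sum of component-wise trace terms against the rescaled
    mode-k Gram matrices `grams`, plus off-diagonal l1 penalties."""
    d = np.array([P.shape[0] for P in factors])
    m = int(np.prod(d)) // d                        # replication counts p / d_k
    trace_term = sum(mk * np.trace(Sk @ P)
                     for mk, Sk, P in zip(m, grams, factors))
    off_l1 = sum(mk * lk * np.abs(P - np.diag(np.diag(P))).sum()
                 for mk, lk, P in zip(m, lam, factors))
    return -ks_logdet(factors) + trace_term + off_l1

# Toy usage with identity factors and identity Gram matrices.
d = (3, 4)
print(penalized_objective([np.eye(k) for k in d], [np.eye(k) for k in d],
                          lam=[0.1, 0.1]))          # 24 - 12*log(2) ≈ 15.68
```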
2.1 The projection perspective
Throughout this paper, the subscript is omitted from and () in case to avoid clutter in the notation. Lemma 2.1 explains the smoothing ideas. Intuitively, we use the fibers to estimate relations between and among the features along the mode, as encoded in . Hence, this forms an aggregation of all data from modes other than , which allows the uniform concentration of measure bounds in Lemma 2.2 to be established.
Lemma 2.1.
(KS trace: Projection lemma) Consider the mean zero -order random tensor . Denote by the column vector in the matrix formed by tensor unfolding. Now let . Denote by the row vector in . Denote by . Then for sample covariance and as in (8),
where is the same as in (3). Then we also have
Here of a matrix is obtained by stacking columns of into a long vector of size .
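The projection identity behind Lemma 2.1 can be checked numerically, as in the sketch below; the function names and the row-major (C-order) vectorization convention are illustrative assumptions. The trace of the sample covariance against a Kronecker sum equals the sum of mode-wise trace terms formed from the unfoldings.

```python
import numpy as np

def kron_sum(factors):
    dims = [P.shape[0] for P in factors]
    p = int(np.prod(dims))
    out = np.zeros((p, p))
    for k, P in enumerate(factors):
        left = np.eye(int(np.prod(dims[:k])))
        right = np.eye(int(np.prod(dims[k + 1:])))
        out += np.kron(np.kron(left, P), right)
    return out

def unfold(X, k):
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

rng = np.random.default_rng(0)
d = (3, 4, 5)
X = rng.standard_normal(d)              # one mean-zero tensor sample
x = X.reshape(-1)                       # vec(X) in row-major (C) order
S = np.outer(x, x)                      # sample covariance for N = 1

Psi = [rng.standard_normal((dk, dk)) for dk in d]
Psi = [(P + P.T) / 2 for P in Psi]      # symmetric factors
Omega = kron_sum(Psi)

# <S, Omega> decomposes into mode-wise trace terms tr(V_k V_k^T Psi_k).
lhs = np.trace(S @ Omega)
rhs = sum(np.trace(unfold(X, k) @ unfold(X, k).T @ P)
          for k, P in enumerate(Psi))
print(np.isclose(lhs, rhs))             # True
```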
Lemma 2.2.
Discussions. For simplicity, we state Lemma 2.1 for the trace term in case , with obvious extensions for and for any . Notice that following (3), and
which in turn can be interpreted as the tensor inner product (2) with . We mention in passing that (resp. ) appears in the rates of convergence in Theorems 3.1 and 3.2 as the effective sample size for estimating for (resp. ). We prove Lemma 2.2 in Section 4. Before leaving this section, we define the support set of .
Definition 2.3.
(The support set of ) For each , , denote its support set by . Let , for all . Similarly, denote the support set of by , with .
3 The main theorem
In the present work, due to the tighter error bound on the diagonal component of the loss function as stated in Lemma 2.2, we achieve the sharper rates of convergence in Theorem 3.1, which significantly improve upon earlier results in [16] as stated in Theorem 3.2. Specifically, we replace the in the earlier factor with for the relative errors in the operator and Frobenius norm in Theorem 3.1 in the present work. Under suitable assumptions on the sparsity parameter and dimensions , consistency and the rate of convergence in the operator norm can be obtained for all and . First, we state the set of assumptions, (A1), (A2) and (A3).
- (A1) Let . Denote by for . Suppose , where for all .
- (A2) The smallest eigenvalue , and the largest eigenvalue .
- (A3) The sample size satisfies the following:
where , is as in Definition 2.3, and is an absolute constant.
Theorem 3.1.
(Main result) Suppose (A1), (A2), and (A3) hold. Then for absolute constants , and , with probability ,
The condition number for is defined as
where we have used the additivity of the eigenvalues of the Kronecker sum. Following [16], we focus on error bounds for the estimate of itself, rather than the individual factors, in the present work. We emphasize that we retain essentially the same error bound as that in [16] for the off-diagonal component of the trace terms in (2). Denote by , where
(12)
(13) |
Event is needed to control the off-diagonal component of the loss function as in the supplementary Lemma A.3, cf. Proof of Lemma 12 [15].
Intuitively, we use fibers to estimate relations between and among the features along the mode as encoded in , and this allows optimal statistical rates of convergence to be derived, in terms of entrywise errors for estimating (7) with (6). Correspondingly, the events in (12), originally defined in [16], are used in the present work to reflect this sample aggregation, with being the effective sample size for estimating . Indeed, as shown in Theorem 3.2 [16], these entrywise error bounds already enabled a significant improvement in the sample size lower bound needed to estimate the parameters and the associated conditional independence graphs along different coordinates such as space, time and experimental conditions. However, these entrywise error bounds are not sufficient to achieve the operator norm-type bounds in Theorem 3.1 for inverse covariance estimation. In fact, using the entrywise error bounds to control the diagonal components of the trace terms will result in an extra factor in the sample size lower bound and, correspondingly, a slower rate of convergence. This extraneous factor is undesirable since the diagonal component of the loss function dominates the overall rate of convergence in sparse settings for inverse covariance estimation.
Summary. The worst aspect ratio is defined as , that is, . Clearly, a smaller aspect ratio implies a faster rate of convergence for the relative errors in the operator and Frobenius norm. First, observe that for the relative error in the operator norm in Theorem 3.2,
The same improvement holds for the Frobenius norm error. Here we eliminate the extraneous factor from the diagonal component of the error through a new concentration of measure analysis in the present work; cf. Lemma 2.2. This is a significant improvement for two reasons: (a) since is the product of the s, is often nontrivial, especially for larger ; and (b) more importantly, for and (in contrast to ), the error bound in the operator norm in Theorem 3.2 from [16] will diverge for any as increases, since
(14)
where and equality holds only when . As a result, in Theorem 3.2 [16], the sample lower bound, namely, implies that , since in view of (14). In contrast, the lower bound on in (A3) is less stringent, saving a factor of so long as .
This is consistent with the successful finite-sample experiments in both [16] and the present work, where for , bounded errors in the operator norm are observed as increases. As a result, our new bound supports the use of the TeraLasso estimator when , so long as a small number of replicates is available, that is, when , in a way that the previous Theorem 3.2 cannot. More precisely, for finite-sample settings, namely, when , the relative errors will still be bounded at for , for example, when the two dimensions are of the same order: , and rapidly converge to zero for ; cf. Theorem 3.4.
Single sample convergence. First of all, both Theorems 3.1 and 3.2 imply convergence for the relative error in the operator norm, when and , which we refer to as the cubic tensor settings, since potentially will hold. See Theorem 3.4 in the sequel. However, when the s are skewed, this may not be the case. To make this clear, we first state Corollary 3.3.
Corollary 3.3 (Dependence on aspect ratio for ).
Suppose (A1), (A2) and (A3) hold for . Then with probability at least , we have for some absolute constants ,
Under the bounded aspect ratio regime, the relative errors in the operator and Frobenius norm for estimating the precision matrix depend on the decay of the worst aspect ratio and the average of over all modes, which represents the relative sparsity levels (sparsity / dimension) in an average sense. For , typically the aspect ratio is much less than 1 and convergence happens rapidly. If the sparse support set is small relative to the nominal dimension along each mode, for example, when , this convergence occurs at the rate of decay of the worst aspect ratio. In this case, the diagonal component dominates the rate of convergence, and this is essentially optimal, since the largest component, with dimension , has parameters to be estimated and effective samples for the task. Moreover, is needed in the bound since we estimate all components together using one sample in case . As a first example, we consider the cubic setting where .
Theorem 3.4.
(The cubic tensor: ) Suppose all conditions in Theorem 3.1 hold. Suppose for all . Suppose . Then,
Suppose in addition Then
Optimality of the rate. In words, a tensor is cubical if all s are of the same order. Then
(15)
Note that for , we obtain a fast rate of convergence in the operator norm for , since in the cubic tensor settings, the effective sample size increases significantly faster than , given that . More precisely, we state Theorem 3.4, where we consider the cubic tensor setting and . Theorem 3.4 shows that convergence will occur for the dense cubic case so long as , which is a reasonable assumption in case and holds under (A3). In other words, the relative errors in the operator and Frobenius norm are bounded so long as the effective sample size is at least , which is roughly times the total number of (unique) diagonal entries in , and also at least times , which in turn denotes the total support size over the off-diagonal components of the factor matrices . Consider now an even more special case. Suppose that in the cubic tensor setting, we have in addition. Then the error in the operator norm is again dominated by the square root of the aspect ratio parameter. In other words, to achieve the near-optimal rate of , it is sufficient for each axis dimension to dominate the average sparsity across all factors, namely , by a factor. A more general result is stated in Corollary 3.3.
4 The new concentration bounds
Throughout this proof, we assume for simplicity. We now provide an outline for proving the upper bound on the diagonal component in the main result of the paper. Recall the true parameter , where (5). Let . Since , we have
(16)
for some whose off-diagonal (but not diagonal) elements are uniquely determined. To keep the presentation self-contained, we state Lemma 4.2, where we also fix the notation used throughout this section. Here we use the trace-zero convention, which guarantees the uniqueness of the in (17). We will then restate Lemma 2.2 in Lemma 4.3. The off-diagonal component has been dealt with in [16], cf. the supplementary Lemma A.3. Hence, in the rest of this section, we prove Lemma 4.3. First, we state the following bound on the size of an -net of .
Lemma 4.1.
[28] Let . For each , one can construct an -net , which satisfies
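For intuition about Lemma 4.1, the sketch below greedily extracts an ε-separated subset from a dense sample of the unit sphere; by construction every sampled point lies within ε of the subset, and its cardinality respects the standard volumetric bound of the form (1 + 2/ε)^n. The sampling parameters are illustrative.

```python
import numpy as np

def greedy_eps_net(points, eps):
    """Greedy eps-separated subset: each input point is either kept or lies
    within eps of a previously kept point, so the output is an eps-net of
    the input point cloud."""
    net = []
    for x in points:
        if all(np.linalg.norm(x - y) > eps for y in net):
            net.append(x)
    return np.array(net)

rng = np.random.default_rng(0)
n, eps = 3, 0.5
P = rng.standard_normal((5000, n))
P /= np.linalg.norm(P, axis=1, keepdims=True)   # points on the unit sphere

net = greedy_eps_net(P, eps)
print(len(net), (1 + 2 / eps) ** n)             # net size vs. volumetric bound 125
```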
Lemma 4.2.
Proof.
Lemma 4.3.
Note that when we have for all , or equivalently, when , we do not need to pay the extra factor of as in Lemma 13 [15] on the diagonal portion of the error bound, resulting in the improved rates of convergence in Theorem 3.1. Note also that when , the bound in Lemma 4.3 still leads to an improvement in the overall rate.
Lemma 4.4.
Let be the sphere in . Construct an -net such that , where , as in Lemma 4.1. Recall . Let . Define the event as:
(21)
Let be some absolute constants. Let . Then . Moreover, we have by a standard approximation argument, on event , simultaneously for all ,
Proof idea. Notice that the expression for clearly depends on the dimension of . Let . Using the notation in Lemma 4.2, let and
(22)
Now for each , following Lemma 2.1, we have
(23)
To bound the probability for event , we use the Hanson-Wright inequality [31], cf. Theorem 1.1 therein, and the union bound. The rest is deferred to Section 4.2.
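As a quick numerical illustration of the Hanson-Wright inequality invoked here (a sketch using Rademacher entries, a canonical sub-gaussian ensemble): the quadratic form x^T A x concentrates around its mean tr(A), with fluctuations on the order of the Frobenius norm of A.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 200, 5000

A = rng.standard_normal((n, n))
A = (A + A.T) / 2                         # a fixed symmetric matrix

# Rademacher entries: independent, mean-zero, unit-variance, sub-gaussian.
X = rng.choice([-1.0, 1.0], size=(trials, n))
quad = ((X @ A) * X).sum(axis=1)          # x^T A x for each draw x

dev = quad - np.trace(A)                  # E[x^T A x] = tr(A) for isotropic x
print(dev.std(), np.linalg.norm(A, 'fro'))
# The typical fluctuation is of order ||A||_F, consistent with the
# sub-gaussian/sub-exponential tails quantified by Hanson-Wright.
```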
4.1 Proof of Lemma 4.3
Besides , we need the following event :
(24)
Suppose holds. Denote by
(25)
where . Denote by
for as in (21). For each index , on event , simultaneously for all as in (18) and (37), we have
Now, on event , we have by Lemma 4.4, simultaneously for all as in (18),
By the bound immediately above and (24), we obtain
where by Lemma 4.2, for ,
and by the Cauchy-Schwarz inequality,
where , since the RHS is a polynomial function of , and the last line holds by (20). Putting things together, we have
To bound , we rewrite the trace as a quadratic form:
where is the same as in (4). Thus, we have by the Hanson-Wright inequality [31], cf. Theorem 1.1 therein,
where and .
Hence by Lemma 4.4 and the bound immediately above,
The lemma thus holds upon adjusting the constants.
4.2 Proof of Lemma 4.4
Set . First, we rewrite (23) and the trace term as a quadratic form in subgaussian random variables,
(26)
Then , where represents the operator or the Frobenius norm. Now for , by (23) and (26), and the Hanson-Wright inequality, we have
(27)
Now for all and , we have and by (22). Thus
Recall is an -net of the sphere , where . Then for as in (21), we have by (27) and the union bound,
The “moreover” statement follows from a standard approximation argument. Suppose event holds. Denote by
We have for ,
The LHS is obvious. To see the RHS, notice that for that achieves the maximum in
we can find such that . Now
and hence
The lemma thus holds.
5 Conclusion
In this work, we present tighter theoretical bounds on the statistical convergence of the regularized TeraLasso estimator of precision matrices with Kronecker sum structure. The key innovation in the present work is to derive tight concentration bounds for the trace terms in the diagonal component of the loss function (2), which were not available in [16]. Crucially, this improvement allows finite-sample statistical convergence rates to be derived for the two-way Kronecker sum model, a case that was missing from [16] and was also deemed the most demanding, due to the lack of sample replications. This closes the gap between the low single-sample error for the two-way model as observed in [16] and the lack of theoretical guarantees for this particular case.
Acknowledgement
We thank Harrison Zhou for helpful discussions. We thank the Simons Institute for the Theory of Computing at the University of California, Berkeley, on the occasion of the Algorithmic Advances for Statistical Inference with Combinatorial Structure workshop, and the organizers of the International Conference on Statistics and Related Fields (ICON STARF), University of Luxembourg, for their kind invitations; SZ presented a talk including this work at these venues in 2021, in the middle of the pandemic.
Appendix A Proof of Theorem 3.1
Recall (2) is equivalent to the following:
where is as defined in (10), in view of (6). First, we define the unified event as the event that all these events hold, i.e.
whose probability of holding will yield the probability as stated in the main Theorem 3.1. We focus on the case . For , we defer the proof to Section A.3. First, we state Lemma A.1, which is proved in [15], cf. Lemma 8 therein.
Lemma A.1.
(Lemma 8 of [15]) For all , .
In the proof of Theorem 3.1 that follows, our strategy will be to show that several events controlling the concentration of the sample covariance matrix (in the case, simply an outer product) hold with high probability, and then show that given these events hold, the statistical error bounds in Theorem 3.1 hold. The off-diagonal events are as defined in (12). We defer the definitions of new diagonal events to Section 4. We use the following notation to describe errors in the precision matrix and its factors. For let . Since both and are Kronecker sums,
for some whose off-diagonal (but not diagonal) elements are uniquely determined. For an index set and a matrix , write , where is an indicator function.
A.1 Preliminary results
Before we show the proof of Theorem 3.1, we need to state the following lemmas. Proofs of Lemmas A.1 and A.2 appear in [15] (cf. Lemmas 8 and 10 therein). We then present an error bound for the off-diagonal component of the loss function, which appears as Lemma 12 in [15] and follows from the concentration of measure bounds on elements of ; cf. (12). Combined with our new concentration bound on the diagonal component of the loss function, cf. Lemma 2.2, we obtain the improved overall rate of convergence as stated in Theorem 3.1.
Lemma A.2.
Let . Let and . Then for all , we have
(28)
where by disjointness of ,
Lemma A.3.
With probability at least ,
Next we show that as an immediate corollary of (28), we have Lemma A.4, which is a deterministic result and identical to Lemma 10 [15]. The proof is omitted. Lemma A.5 follows immediately from Lemmas A.3 and A.4.
Lemma A.4.
(Deterministic bounds) Let . Denote by
(29)
Lemma A.5.
Lemma A.6 follows from Lemmas 2.2 and A.5. We provide a proof of Lemmas A.5 and A.6 in the supplementary material for completeness. Since so long as , we have and hence for sufficiently large .
Lemma A.6.
Proposition A.7.
Proof.
A.2 Proof of Theorem 3.1
We will only show the proof for . Let
(33)
be the difference between the objective function (A) at and at . Clearly minimizes , which is a convex function with a unique minimizer on (cf. Theorem 5 [15]). Let be as defined in (31) for some large enough absolute constant to be specified, and
(34)
In particular, we set in , for absolute constant as in Lemma 4.3.
Proposition A.8.
Proposition A.9.
Suppose for all . We then have
Proof.
By definition, , so . Thus if on , then by Proposition A.8, where is defined therein. The proposition thus holds.
Lemma A.10.
Under (A1) - (A3), for all for which ,
Lemma A.11.
With probability at least , we have for all .
Proof.
A.3 Extension to multiple samples
To complete the proof, it remains to present the case of . Incorporating directly into the proof above is relatively straightforward but notation-dense; hence it suffices to note that having independent samples essentially increases the replication to , and propagate this fact through the proof. We also note that the multi-sample case can be converted to the single sample regime to obtain a result directly. To see this, note that independent samples with precision matrix can be represented as a single sample with the block-diagonal precision matrix, i.e. repeated times blockwise along the diagonal, specifically, . Recall that by definition of the Kronecker sum, is a -order Kronecker sum with , achieved by introducing an all-zero factor with (and ). Since this extra factor is zero, the operator norms are not affected. The sparsity factor of is since the non-zero elements are replicated times, and each co-dimension for . Hence the single sample convergence result can be applied with , yielding for and
since whenever . Hence Theorem 3.1 is recovered for , with constants slightly worse than what could be obtained by incorporating directly into the proof.
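The reduction used above can be verified numerically: stacking N i.i.d. samples corresponds to the block-diagonal precision I_N ⊗ Ω, which coincides with the (K+1)-way Kronecker sum obtained by prepending an all-zero N × N factor. The sketch below checks this identity; the dimensions are illustrative.

```python
import numpy as np

def kron_sum(factors):
    """Kronecker sum: sum_k I x ... x A_k x ... x I over the factor list."""
    dims = [A.shape[0] for A in factors]
    p = int(np.prod(dims))
    out = np.zeros((p, p))
    for k, A in enumerate(factors):
        left = np.eye(int(np.prod(dims[:k])))
        right = np.eye(int(np.prod(dims[k + 1:])))
        out += np.kron(np.kron(left, A), right)
    return out

rng = np.random.default_rng(0)
N, d = 3, (2, 4)
Psi = [rng.standard_normal((dk, dk)) for dk in d]
Psi = [(A + A.T) / 2 for A in Psi]              # symmetric factors

Omega = kron_sum(Psi)                           # K-way Kronecker sum precision
block = np.kron(np.eye(N), Omega)               # N i.i.d. samples, blockwise
augmented = kron_sum([np.zeros((N, N))] + Psi)  # zero factor prepended

print(np.allclose(block, augmented))            # True
```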
A.4 Proof of Corollary 3.3
A.5 Proof of Theorem 3.4
Appendix B Proof of preliminary results in Section A
B.1 Proof of Lemma A.5
B.2 Proof of Lemma A.6
B.3 Proof of Lemma A.10
We first state Proposition B.1.
Proposition B.1.
Under (A1)-(A3), for all ,
(36)
so that , where is an open interval containing .
Proof.
Appendix C Some details for the proof of Lemmas 4.3 and 4.4
Let the sample covariance be as in (10) and be the true covariance matrix. Thus we denote by , where denotes an isotropic sub-gaussian random vector with independent coordinates. Let
(37)
This explains (26). Consequently, by (37)
By Lemma 2.1, we have for the diagonal and off-diagonal components of the trace term defined as follows: for ,
where and . This explains (23). See also (2). Moreover, we need the following event :
which states that the sum of the errors of the diagonal of the sample covariance (10) is bounded tightly in an average sense. Indeed, as expected, converges to at the rate of
References
- Allen and Tibshirani [2010] Allen, G. and Tibshirani, R. (2010). Transposable regularized covariance models with an application to missing data imputation. Ann. Appl. Stat. 4 764–790.
- Andrianov [1997] Andrianov, S. N. (1997). A matrix representation of Lie algebraic methods for design of nonlinear beam lines. In AIP Conference Proceedings, vol. 391. AIP.
- Banerjee et al. [2008] Banerjee, O., Ghaoui, L. E. and d’Aspremont, A. (2008). Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. J. Mach. Learn. Res. 9 485–516.
- Beckermann et al. [2013] Beckermann, B., Kressner, D. and Tobler, C. (2013). An error analysis of Galerkin projection methods for linear systems with tensor product structure. SIAM Journal on Numerical Analysis 51 3307–3326.
- Chapman et al. [2014] Chapman, A., Nabi-Abdolyousefi, M. and Mesbahi, M. (2014). Controllability and observability of network-of-networks via Cartesian products. IEEE Transactions on Automatic Control 59 2668–2679.
- d’Aspremont et al. [2008] d’Aspremont, A., Banerjee, O. and Ghaoui, L. E. (2008). First-order methods for sparse covariance selection. SIAM Journal on Matrix Analysis and Applications 30 56–66.
- Dawid [1981] Dawid, A. P. (1981). Some matrix-variate distribution theory: Notational considerations and a Bayesian application. Biometrika 68 265–274.
- Dorr [1970] Dorr, F. W. (1970). The direct solution of the discrete Poisson equation on a rectangle. SIAM Review 12 248–263.
- Eilers and Marx [2003] Eilers, P. H. and Marx, B. D. (2003). Multivariate calibration with temperature interaction using two-dimensional penalized signal regression. Chemometrics and intelligent laboratory systems 66 159–174.
- Ellner [1986] Ellner, N. S. (1986). New ADI model problem applications. In Proceedings of 1986 ACM Fall joint computer conference.
- Fey et al. [2018] Fey, M., Eric Lenssen, J., Weichert, F. and Müller, H. (2018). SplineCNN: Fast geometric deep learning with continuous B-spline kernels. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
- Friedman et al. [2008] Friedman, J., Hastie, T. and Tibshirani, R. (2008). Sparse inverse covariance estimation with the graphical Lasso. Biostatistics 9 432–441.
- Grasedyck [2004] Grasedyck, L. (2004). Existence and computation of low Kronecker-rank approximations for large linear systems of tensor product structure. Computing 72 247–265.
- Greenewald et al. [2017] Greenewald, K., Park, S., Zhou, S. and Giessing, A. (2017). Time-dependent spatially varying graphical models, with application to brain fMRI data analysis. In Advances in Neural Information Processing Systems 30. 5832–5840.
- Greenewald et al. [2019a] Greenewald, K., Zhou, S. and Hero, A. (2019a). Supplementary material for “The Tensor graphical Lasso (TeraLasso)”.
- Greenewald et al. [2019b] Greenewald, K., Zhou, S. and Hero, A. (2019b). The Tensor graphical Lasso (TeraLasso). J. R. Stat. Soc., B: Stat. Methodol. 81 901–931.
- Hornstein et al. [2019] Hornstein, M., Fan, R., Shedden, K. and Zhou, S. (2019). Joint mean and covariance estimation for unreplicated matrix-variate data. J. Amer. Statist. Assoc. 114 682–696.
- Imrich et al. [2008] Imrich, W., Klavžar, S. and Rall, D. F. (2008). Topics in graph theory: Graphs and their Cartesian product. AK Peters/CRC Press.
- Kalaitzis et al. [2013] Kalaitzis, A., Lafferty, J., Lawrence, N. and Zhou, S. (2013). The Bigraphical Lasso. In Proc. 30th Int. Conf. Mach. Learn.
- Kolda and Bader [2009] Kolda, T. G. and Bader, B. W. (2009). Tensor decompositions and applications. SIAM Review 51 455–500.
- Kotzagiannidis and Dragotti [2017] Kotzagiannidis, M. S. and Dragotti, P. L. (2017). Splines and wavelets on circulant graphs. Applied and Computational Harmonic Analysis .
- Kressner and Tobler [2010] Kressner, D. and Tobler, C. (2010). Krylov subspace methods for linear systems with tensor product structure. SIAM Journal on Matrix Analysis and Applications 31 1688–1714.
- Lauritzen [1996] Lauritzen, S. L. (1996). Graphical Models. Oxford University Press.
- Leng and Tang [2012] Leng, C. and Tang, C. (2012). Sparse matrix graphical models. J. Amer. Statist. Assoc. 107 1187–1200.
- Li et al. [2022] Li, S., López-Garcia, M., Lawrence, N. D. and Cutillo, L. (2022). Two-way sparse network inference for count data. In International Conference on Artificial Intelligence and Statistics. PMLR.
- Luenberger [1966] Luenberger, D. (1966). Observers for multivariable systems. IEEE Transactions on Automatic Control 11 190–197.
- Meinshausen and Bühlmann [2006] Meinshausen, N. and Bühlmann, P. (2006). High dimensional graphs and variable selection with the Lasso. Ann. Statist. 34 1436–1462.
- Milman and Schechtman [1986] Milman, V. D. and Schechtman, G. (1986). Asymptotic Theory of Finite Dimensional Normed Spaces. Lecture Notes in Mathematics 1200. Springer.
- Ravikumar et al. [2011] Ravikumar, P., Wainwright, M., Raskutti, G. and Yu, B. (2011). High-dimensional covariance estimation by minimizing $\ell_1$-penalized log-determinant divergence. Electron. J. Stat. 4 935–980.
- Rothman et al. [2008] Rothman, A., Bickel, P., Levina, E. and Zhu, J. (2008). Sparse permutation invariant covariance estimation. Electron. J. Stat. 2 494–515.
- Rudelson and Vershynin [2013] Rudelson, M. and Vershynin, R. (2013). Hanson-Wright inequality and sub-gaussian concentration. Electron. Commun. Probab. 18 1–9.
- Rudelson and Zhou [2017] Rudelson, M. and Zhou, S. (2017). Errors-in-variables models with dependent measurements. Electron. J. Statist. 11 1699–1797.
- Schmitt et al. [2001] Schmitt, U., Louis, A. K., Darvas, F., Buchner, H. and Fuchs, M. (2001). Numerical aspects of spatio-temporal current density reconstruction from EEG-/MEG-data. IEEE Transactions on Medical Imaging 20 314–324.
- Shi et al. [2013] Shi, X., Wei, Y. and Ling, S. (2013). Backward error and perturbation bounds for high order Sylvester tensor equation. Linear and Multilinear Algebra 61 1436–1446.
- Tsiligkaridis et al. [2013] Tsiligkaridis, T., Hero, A. and Zhou, S. (2013). On convergence of Kronecker graphical Lasso algorithms. IEEE Trans. Signal Process. 61 1743–1755.
- Van Loan [2000] Van Loan, C. F. (2000). The ubiquitous Kronecker product. Journal of Computational and Applied Mathematics 123 85–100.
- Wang and Hero [2021] Wang, Y. and Hero, A. (2021). SG-PALM: a fast physically interpretable tensor graphical model. arXiv preprint arXiv:2105.12271 .
- Wang et al. [2022] Wang, Y., Sun, Z., Song, D. and Hero, A. (2022). Kronecker-structured covariance models for multiway data. arXiv preprint arXiv:2212.01721 .
- Wood [2006] Wood, S. N. (2006). Low-rank scale-invariant tensor product smooths for generalized additive mixed models. Biometrics 62 1025–1036.
- Wood et al. [2016] Wood, S. N., Pya, N. and Safken, B. (2016). Smoothing parameter and model selection for general smooth models. Journal of the American Statistical Association 111 1548–1563.
- Yoon and Kim [2022] Yoon, J. H. and Kim, S. (2022). EiGLasso for scalable sparse Kronecker-sum inverse covariance estimation. Journal of Machine Learning Research 23 1–39.
- Yuan and Lin [2007] Yuan, M. and Lin, Y. (2007). Model selection and estimation in the Gaussian graphical model. Biometrika 94 19–35.
- Zhou [2014] Zhou, S. (2014). Gemini: Graph estimation with matrix variate normal instances. Ann. Statist. 42 532–562.
- Zhou [2024] Zhou, S. (2024). Concentration of measure bounds for matrix-variate data with missing values. Bernoulli 30 198–226.
- Zhou et al. [2010] Zhou, S., Lafferty, J. and Wasserman, L. (2010). Time varying undirected graphs. Mach. Learn. 80 295–319.
- Zhou et al. [2011] Zhou, S., Rütimann, P., Xu, M. and Bühlmann, P. (2011). High-dimensional covariance estimation based on Gaussian graphical models. J. Mach. Learn. Res. 12 2975–3026.