Regularized Estimation of Sparse Spectral Precision Matrices
Abstract
Spectral precision matrix, the inverse of a spectral density matrix, is an object of central interest in frequency-domain analysis of multivariate time series. Estimation of spectral precision matrix is a key step in calculating partial coherency and graphical model selection of stationary time series. When the dimension of a multivariate time series is moderate to large, traditional estimators of spectral density matrices such as averaged periodograms tend to be severely ill-conditioned, and one needs to resort to suitable regularization strategies involving optimization over complex variables. Existing $\ell_1$-regularization approaches in this direction either separately penalize real and imaginary parts of a complex number, thereby artificially increasing dimensionality, or use off-the-shelf optimization routines for complex variables that do not explicitly leverage underlying sparsity structure of the problem, leading to scalability issues.
In this work, we propose complex graphical Lasso (CGLASSO), an $\ell_1$-penalized estimator of spectral precision matrix based on local Whittle likelihood maximization. We develop fast pathwise coordinate descent algorithms for implementing CGLASSO on large dimensional time series data sets. At its core, our algorithmic development relies on a ring isomorphism between complex and real matrices that helps map a number of optimization problems over complex variables to similar optimization problems over real variables. This finding may be of independent interest and more broadly applicable for high-dimensional statistical analysis with complex-valued data. We also present a complete non-asymptotic theory of our proposed estimator which shows that consistent estimation is possible in high-dimensional regime as long as the underlying spectral precision matrix is suitably sparse. We compare the performance of CGLASSO with competing alternatives on simulated data sets, and use it to construct partial coherence network among brain regions from a real fMRI data set.
1 Introduction
Estimation and structure learning of a spectral density matrix and its inverse, commonly known as the spectral precision matrix, are canonical problems in the multivariate time series and signal processing literature, with applications to several important statistical tasks. For example, these objects are used for measuring frequency-domain analogues of correlation and partial correlations, viz. coherency and partial coherency. These are also used in graphical modeling of stationary time series, i.e. discovering conditional independence relationships among the components of a multivariate time series. This task amounts to learning the sparsity patterns in the off-diagonal elements of spectral precision matrices at different frequencies [Dahlhaus, 2000; Davis et al., 2016]. Frequency-domain measures such as spectral density, spectral precision, and more generally graphical modeling of stationary time series (StGM), have proven to be useful in several scientific areas including neuroscience, economics and environmental sciences [Granger, 1969; Bowyer, 2016; Fiecas et al., 2019; Baek et al., 2021b].
When the dimension of the multivariate time series is moderate or large, or the number of measured time points is small, the classical approach of estimating a spectral density matrix with smoothed periodograms and subsequently inverting it is not ideal, as shown in Böhm and von Sachs [2009]. This observation has motivated several approaches for shrinkage or sparsity-regularized estimation of spectral precision matrix for moderate to large-scale systems.
One approach is to first shrink a spectral density matrix and invert it, or shrink the inverted spectral density matrix [Fiecas and von Sachs, 2014; Fiecas et al., 2010; Schneider-Luftman and Walden, 2016]. These methods, however, do not ensure a sparse estimator of the spectral precision matrix. The other approach is to maximize local Whittle likelihood subject to sparsity constraints or penalties [Fiecas et al., 2019; Dallakyan et al., 2022; Baek et al., 2021a; Jung et al., 2015; Tugnait, 2022; Krampe and Paparoditis, 2022; Wodeyar and Srinivasan, 2022]. These methods rely on the fact that for stationary processes, discrete Fourier transforms (DFTs) at neighboring Fourier frequencies behave asymptotically as independent and nearly identically distributed “complex-valued data” in the frequency-domain. So algorithms can be designed following standard approaches in Gaussian graphical models (GGM) for real-valued i.i.d. data.
Some important algorithmic and statistical questions remain open in this emerging field of sparse spectral precision matrix estimation. On the algorithmic front, a key challenge is the lack of fast pathwise algorithms for penalized Whittle likelihood maximization that can take advantage of the underlying sparsity structure and efficiently compute penalized estimators on a grid of tuning parameters such as the penalty or the smoothing span. In GGM with real-valued data, pathwise coordinate descent [Friedman et al., 2007] provides the state-of-the-art solution for this task. In contrast, existing algorithms to deal with complex-valued data either use off-the-shelf convex solvers such as the alternating direction method of multipliers (ADMM) [Jung et al., 2015; Dallakyan et al., 2022; Tugnait, 2022] or artificially increase the dimensionality of the problem by optimizing separately over real and imaginary parts of a complex number [Fiecas et al., 2019; Krampe and Paparoditis, 2022]. On the theoretical front, to the best of our knowledge there is no systematic asymptotic analysis for the penalized local Whittle likelihood maximizer that provides insight into the interplay of sparsity, high-dimensionality and the temporal and cross-sectional dependence of the underlying time series for any Fourier frequency. The closest works on this front are Baek et al. [2021a], which focuses on long memory processes but only on frequency zero, and Weylandt Jr [2020], which focuses on i.i.d. complex-valued data.
In this paper, we make two contributions to this literature. First, we provide fast pathwise coordinate descent algorithms for complex-variable Lasso (CLASSO) and graphical Lasso (CGLASSO). These algorithms are capable of taking advantage of the underlying sparsity structure of the problem in the same manner as their real counterparts. The core insight behind our algorithms is to extend a ring isomorphism between complex numbers and orthogonal matrices to the spaces of complex vectors and matrices. This connection helps represent a number of statistical optimization problems over complex variables (e.g. ordinary least squares and Gaussian likelihood maximization) as well-understood optimization problems over real variables. In particular, we can express a generic complex Lasso regression as a group Lasso over real variables with orthogonal group structures. This allows us to build a block coordinate descent algorithm with closed form updates, which is significantly faster than a generic group Lasso implementation such as grplasso [Meier, 2020]. An upshot of this connection is that we can rely on the convergence results of block coordinate descent over real variables [Tseng, 2001]. Finally, we use this connection to build fast, pathwise algorithms for both CLASSO and CGLASSO.
Second, we provide a systematic non-asymptotic analysis to study the estimation error of our method in both low- and high-dimensional regimes. This requires a careful combination of tools from Sun et al. [2018], Ravikumar et al. [2011], and complex analysis to deal with complex random variables. Our results show that under suitable sparsity and irrepresentability assumptions akin to what is commonly used in high-dimensional statistics, our method can consistently estimate the true spectral precision matrix.
We conduct extensive numerical experiments on simulated data to assess a number of finite sample properties of CGLASSO. First, our analysis confirms that for spectral precision estimation, jointly penalizing real and imaginary parts of a complex variable is beneficial over separating them as in Fiecas et al. [2019], especially in high dimensions. This matches a similar observation for complex Lasso regression made in Maleki et al. [2013]. Second, we confirm that, as expected, CGLASSO outperforms unregularized estimators of spectral precision matrix in terms of estimation accuracy. Third, we find that CGLASSO, which works with the entire likelihood information, offers better model selection than node-wise regression strategies along the lines of Meinshausen and Bühlmann [2006]. Finally, we demonstrate the use of CGLASSO for the partial coherence estimation task on a real fMRI data set obtained from the Human Connectome Project (HCP).
The rest of the paper is organized as follows. In Section 2, we provide some background on the problem, review relevant work in the literature, and propose our estimator. In Section 3, we discuss our new optimization algorithms for complex Lasso and graphical Lasso. Section 4 contains a complete non-asymptotic analysis of our method for Gaussian time series and linear processes. In Sections 5 and 6, we report the performance of our method on simulated and real fMRI data.
Notation. Throughout the paper, $\mathbb{Z}$, $\mathbb{R}$ and $\mathbb{C}$ denote the sets of integers, real numbers and complex numbers respectively. $|x|$ denotes the absolute value of a real number $x$, or the modulus of a complex number $x$. $\iota$ denotes the imaginary number $\sqrt{-1}$. For any complex number $z$, $\Re(z)$ and $\Im(z)$ denote the real and imaginary parts of $z$ respectively, and $\bar{z}$ denotes its complex conjugate. For any positive integer $p$, $e_k$ denotes the $p$-dimensional basis vector with $1$ in the $k$th position and $0$ elsewhere. For a complex vector $v$, $\Re(v)$, $\Im(v)$ and $v^{*}$ denote its real part, imaginary part and conjugate transpose respectively. $\|v\|_1$ denotes the $\ell_1$-norm of a vector $v$, i.e. $\sum_k |v_k|$. For any $q \ge 1$, $\|v\|_q$ denotes the $\ell_q$-norm of $v$, i.e. $\bigl(\sum_k |v_k|^q\bigr)^{1/q}$. For a matrix $A$, $A^{*}$ denotes its conjugate transpose, $\operatorname{vec}(A)$ denotes its vector representation and $A_{\cdot k}$ denotes its $k$th column. For two matrices $A$ and $B$, the Frobenius inner product between $A$ and $B$, denoted by $\langle A, B \rangle$, is defined as $\operatorname{tr}(A^{*}B)$. If $A$ is a square matrix, $\operatorname{tr}(A)$ denotes its trace, i.e. the sum of its eigenvalues. In addition, $\|A\|_{\infty}$, $\|A\|_{2}$, $\|A\|_{F}$ and $|A|_{\infty}$ denote the maximum modulus row sum norm, i.e. $\max_i \sum_j |A_{ij}|$, the spectral norm (the square root of the largest eigenvalue of $A^{*}A$), the Frobenius norm, and the elementwise maximum modulus, respectively. The class $\mathcal{S}_{+}^{p}$ denotes the set of all symmetric non-negative definite real $p \times p$ matrices, and $\mathcal{S}_{++}^{p}$ denotes the set of all symmetric positive definite matrices. Similarly, $\mathcal{H}_{+}^{p}$ and $\mathcal{H}_{++}^{p}$ denote the sets of Hermitian non-negative definite and positive definite complex matrices respectively. For asymptotics, we use the following standard notation: for two sequences $a_n$ and $b_n$, we write $a_n = O(b_n)$ if $a_n \le C\, b_n$ for some constant $C > 0$ independent of $n$ and $p$. Similarly, $a_n = \Omega(b_n)$ if $a_n \ge c\, b_n$ for some constant $c > 0$ independent of $n$ and $p$. We also write $a_n \asymp b_n$ if both $a_n = O(b_n)$ and $a_n = \Omega(b_n)$ hold. We denote $a_n \precsim b_n$ if there is a universal constant $C$, independent of model dimension and model parameters, such that $a_n \le C\, b_n$, and $a_n \succsim b_n$ is defined analogously. We also use $c, C, c_1, c_2, \ldots$ to denote universal constants whose values are allowed to change from line to line within a proof.
2 Background and methods
We consider a $p$-dimensional weakly stationary real-valued time series $\{X_t\}_{t \in \mathbb{Z}}$. We also assume $\mathbb{E}[X_t] = 0$ for ease of exposition. Weak stationarity implies that the autocovariance function $\Gamma(\ell) = \mathbb{E}\bigl[X_t X_{t+\ell}^{\top}\bigr]$ depends only on the lag $\ell$. The spectral density at a frequency $\omega \in [-\pi, \pi]$ is defined as the Fourier transform of the autocovariance functions
$$f(\omega) = \frac{1}{2\pi} \sum_{\ell = -\infty}^{\infty} \Gamma(\ell)\, e^{-\iota \ell \omega}.$$
We use $\mathcal{X}$ to denote the $n \times p$ data matrix that contains $n$ consecutive observations $X_1, \ldots, X_n$ from the time series in its rows. Our goal is to estimate the spectral precision matrix $\Theta(\omega) = f^{-1}(\omega)$ at a given frequency $\omega$. It is customary to focus only on the Fourier frequencies $\omega_j = 2\pi j / n$, $-\lfloor (n-1)/2 \rfloor \le j \le \lfloor n/2 \rfloor$, where $\lfloor x \rfloor$ denotes the integer part of $x$.
The autocovariance $\Gamma(\ell)$ can be expressed in terms of $f$ as
$$\Gamma(\ell) = \int_{-\pi}^{\pi} f(\omega)\, e^{\iota \ell \omega}\, d\omega.$$
Some of the properties of the stationary process can therefore be described equivalently in terms of $f$ rather than $\Gamma$ [Brockwell and Davis, 1991; Shumway and Stoffer, 2000; Ombao and Pinto, 2022]. Similarly, $\{X_t\}$, as a process, has a spectral representation, known as Cramér's representation, given by
$$X_t = \int_{(-\pi,\, \pi]} e^{\iota t \omega}\, dZ(\omega), \qquad (2.1)$$
where $\{Z(\omega)\}$ is an orthogonal increment process satisfying the following properties:
$$\mathbb{E}\bigl[dZ(\omega)\, dZ(\lambda)^{*}\bigr] = 0 \quad \text{for } \omega \neq \lambda, \qquad (2.2)$$
$$\mathbb{E}\bigl[dZ(\omega)\, dZ(\omega)^{*}\bigr] = f(\omega)\, d\omega. \qquad (2.3)$$
This provides us with a routine to decompose $X_t$ as a sum of processes at different frequencies. The integral in Cramér's representation of $X_t$ is a sum across all frequencies, and $dZ(\omega)$ is the random weight for each frequency $\omega$. We are interested in the existence of such a decomposition because it means that we can sensibly talk about movements in $X_t$ at a given frequency without worrying about its movements at other frequencies. In this sense, the object $\Theta(\omega) = f^{-1}(\omega)$ can be naturally viewed as capturing frequency-specific conditional independence structure among the component time series. In the next subsection, we elaborate on this and give an illustrative example.
2.1 Frequency-specific conditional independence and $f^{-1}(\omega)$
As we can see from (2.2) and (2.3), at each frequency $\omega$, the correlation structure among the movements in different components is captured by $f(\omega)$. Similarly, the partial correlation structure among those component-wise random movements can be obtained from $f^{-1}(\omega)$. The connection between precision matrix and partial correlation is reminiscent of Gaussian graphical models, and this connection will be stated more formally in Section 2.2.
This decomposition into different frequencies is useful in several fields including economics, neuroscience and signal processing. For example, in human brain connectivity studies scientists often measure neurophysiological signals in the form of electroencephalogram (EEG) or functional magnetic resonance imaging (fMRI) for different brain regions. The resulting ‘brain-rhythm’ is viewed as a superposition of signals representing different physiological processes such as cardiovascular and respiratory activities. These different processes are captured by signals in different frequency intervals, e.g. delta band (0.5-4.0 Hz), theta band (4.0-8.0 Hz), alpha band (8.0-12.0 Hz), beta band (12.0-30.0 Hz), gamma band (30.0-50.0 Hz), obtained by frequency band pass filters [Ombao and Pinto, 2022]. The spectral density matrix in this context captures the correlation structure among different brain regions in terms of the physiological processes represented by a specific frequency $\omega$. Similarly, the partial correlation structure (the conditional dependence graph) among those regions can be extracted using the spectral precision matrix if we are interested only in the movements at frequency $\omega$. Below we illustrate this with a small example from Shumway and Stoffer [2000].
Example 1. We consider two time series, each having three frequency components, and generate observations as follows
We generate the amplitude processes from the following distributions
The two series behave as shown in Figure 1. In this example, the sinusoids are the orthogonal basis functions for the two signals, and the amplitude processes are the corresponding random basis coefficients, with each set of basis coefficients independent across frequencies. We may think of this as an example with two brain regions which generate signals as a superposition of three distinct physiological processes, and hence three distinct frequencies.
In this illustration, Cramér's representation as in (2.1) links the spectral density of the two-dimensional process to the variances of the frequency-specific components, and the cross-spectral term captures their covariance at each frequency. Therefore, we see that the spectral density fully captures the correlation structure of the same-frequency orthogonal components of the joint process. Similarly, we can also compute the spectral precision, which captures the partial correlation. Hence, estimation and inference on the spectral density and the spectral precision pave the way for understanding, respectively, the correlation graph and the partial correlation graph among the same-frequency components extracted from the time series observations.
Figure 1: The two simulated series from Example 1.
Such a two-dimensional representation can be generalized to the $p$-dimensional case using discrete Fourier transforms (DFTs) (Section 10.1 of Brockwell and Davis [1991]). We use the DFT evaluated at the Fourier frequencies to estimate $f(\omega_j)$ and $f^{-1}(\omega_j)$. The $p$-dimensional complex vector-valued DFT at the Fourier frequency $\omega_j$ is defined as
$$d(\omega_j) = \frac{1}{\sqrt{2\pi n}} \sum_{t=1}^{n} X_t\, e^{-\iota t \omega_j}.$$
We represent it as $d_j$. The classical estimator of the spectral density is based on the periodogram [Brockwell and Davis, 1991], defined as $I(\omega_j) = d_j\, d_j^{*}$. According to Proposition 10.3.1 of Brockwell and Davis [1991], the expectation of the periodogram converges to the spectral density, and the cross-spectral terms can be expressed similarly for any pair of components. The bottom line is that the correlation structure among the DFT values, i.e. the $d_j$'s, is approximately equal to the spectral density, which in turn captures the correlation pattern of the underlying orthogonal increment processes as in (2.3). In general, $I(\omega_j)$ is asymptotically unbiased for $f(\omega_j)$ but not consistent even under classical, fixed-$p$ asymptotics. For example, for i.i.d. Gaussian white noise, the variance of the entries of $I(\omega_j)$ is of constant order (Proposition 10.3.2 of Brockwell and Davis [1991]) and does not vanish asymptotically. To ensure consistency, it is common to use the averaged or smoothed periodogram estimator
$$\hat{f}(\omega_j) = \frac{1}{2m+1} \sum_{|k| \le m} I(\omega_{j+k}). \qquad (2.4)$$
Note that the indices of the Fourier frequencies are evaluated “modulo $n$”, as $j + k$ may fall outside the range of Fourier frequency indices. It is well-known (cf. Theorem 10.4.1 of Brockwell and Davis [1991]) that for $m \to \infty$ with $m/n \to 0$, (2.4) is a consistent estimator of $f(\omega_j)$ in the classical asymptotic framework where $p$ is fixed but the sample size $n \to \infty$.
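For concreteness, the following minimal Python/NumPy sketch (our own helper names, not from the paper's software) computes the DFT vectors and the averaged periodogram (2.4) at a chosen Fourier frequency index, using the $1/\sqrt{2\pi n}$ DFT scaling assumed above; normalization conventions vary across references, so a constant rescaling may be needed to match other definitions of $f$.

```python
import numpy as np

def averaged_periodogram(X, j, m):
    """Averaged periodogram (2.4) at the Fourier frequency omega_j = 2*pi*j/n.

    X : (n, p) array of observations, one time point per row.
    j : index of the target Fourier frequency.
    m : smoothing span; 2m + 1 neighboring periodograms are averaged.
    """
    n, p = X.shape
    t = np.arange(1, n + 1)
    f_hat = np.zeros((p, p), dtype=complex)
    for k in range(j - m, j + m + 1):
        omega = 2 * np.pi * (k % n) / n                      # indices modulo n
        d_k = X.T @ np.exp(-1j * omega * t) / np.sqrt(2 * np.pi * n)
        f_hat += np.outer(d_k, d_k.conj())                   # periodogram I(omega_k)
    return f_hat / (2 * m + 1)

# Example on simulated white noise, at frequency index j = n // 4
rng = np.random.default_rng(0)
X = rng.standard_normal((400, 5))
f_hat = averaged_periodogram(X, j=100, m=10)
```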
2.2 Method: Penalized Whittle likelihood estimator
We start by introducing the complex normal random variable and its role in the estimation of the spectral precision matrix. The following definition is adapted from Definition 11.7.1 of Brockwell and Davis [1991].
Definition 2.1 (Complex Multivariate Normal Distribution).
follows a p-dimensional complex-valued multivariate normal distribution with mean vector and covariance matrix if
In this case we use the notation .
It is well-known that under some regularity conditions, the DFTs are asymptotically complex Gaussian (Propositions 11.7.3 and 11.7.4 of Brockwell and Davis [1991]), and, with the exception of a few boundary frequencies, the DFTs at different Fourier frequencies are approximately independent. So, for any given Fourier frequency $\omega_j$ and small enough smoothing span, it makes sense to use the DFTs evaluated at the neighboring Fourier frequencies as nearly independent and identically distributed data with covariance matrix $f(\omega_j)$ and precision matrix $\Theta(\omega_j) = f^{-1}(\omega_j)$.
This formulation is analogous to the framework of Gaussian graphical models (GGM), where one is interested in learning the precision matrix from i.i.d. data for . It is quite natural to devise algorithms that are similar to GGM estimators. Indeed, the sparse inverse periodogram estimator (SIPE) is built on a popular GGM estimator called CLIME [Cai et al., 2011], and the regression approach adopted in Krampe and Paparoditis [2022] is based on Meinshausen and Bühlmann [2006].
We propose to develop a sparsity-regularized maximum likelihood estimator based on Whittle's approximate likelihood [Whittle, 1951]. Note that for Gaussian time series, the DFTs are asymptotically complex Gaussian at every Fourier frequency, and the Whittle likelihood is the pseudo-likelihood obtained by assuming the periodograms are approximately independent across frequencies. The negative log-likelihood (up to constants) takes the form
To estimate the spectral precision at a Fourier frequency $\omega_j$, we work with the approximation
We solve the following penalized negative log-likelihood minimization problem to estimate $\Theta(\omega_j)$:
$$\hat{\Theta}(\omega_j) = \operatorname*{arg\,min}_{\Theta \in \mathcal{H}_{++}^{p}} \; \operatorname{tr}\bigl(\hat{f}(\omega_j)\,\Theta\bigr) - \log\det\Theta + \lambda\, \|\Theta\|_{1,\mathrm{off}}, \qquad (2.5)$$
where $\lambda > 0$ is the penalty parameter, and $\|\Theta\|_{1,\mathrm{off}}$ is the sum of the moduli of all off-diagonal elements of $\Theta$. Equivalently,
$$\hat{\Theta} = \operatorname*{arg\,min}_{\Theta \in \mathcal{H}_{++}^{p}} \; \operatorname{tr}\bigl(\hat{f}\,\Theta\bigr) - \log\det\Theta + \lambda \sum_{k \neq \ell} |\Theta_{k\ell}|. \qquad (2.6)$$
Here $\hat{f}(\omega_j)$ is abbreviated as $\hat{f}$, and $\Theta(\omega_j)$ in the optimization problem (2.5) is replaced by $\Theta$ to simplify the notation. This optimization problem is similar in spirit to the estimation of inverse covariance matrices based on the graphical Lasso [Friedman et al., 2008] for i.i.d. data. The main difference is that the averaged periodogram $\hat{f}$ takes the role of the sample covariance matrix.
Similar penalized local Whittle likelihood approaches have been investigated in the recent literature by several authors [Jung et al., 2015; Dallakyan et al., 2022; Baek et al., 2021a; Tugnait, 2022]. Among these works, Baek et al. [2021a] focus only on frequency zero but allow long-range dependent processes, and the others provide a single graph by penalizing across all the frequencies at once. In what follows, we restrict our discussion to the SIPE algorithm of Fiecas et al. [2019] and the method of Krampe and Paparoditis [2022], because only they provide algorithms to estimate the spectral precision matrix at any given Fourier frequency.
2.3 Nodewise regression (NWR) of DFTs
Building upon the parallel to GGM, if one is only interested in recovering the sparsity structure of $\Theta(\omega_j)$ and not in estimating the full matrix, a natural alternative to the graphical Lasso is to perform nodewise regression along the lines of Meinshausen and Bühlmann [2006], where we regress one coordinate of the DFTs on all the other coordinates. This can be done by running $p$ different complex Lasso regressions. This is closely related to the approach in Krampe and Paparoditis [2022], where instead of the complex Lasso penalty the authors used separate Lasso penalties on the real and imaginary parts of a complex number. While we do not pursue the asymptotic theory of this method in our paper, we compare the model selection performance of our method against it in the simulation section.
Next we provide a brief description of nodewise DFT regression. Consider the data matrix of DFT vectors used for estimation at the Fourier frequency $\omega_j$. Going forward, we will drop the subscript $j$ when the index is clear from the context.
We will denote its $k$th column as $y^{(k)}$, and the matrix containing all the other columns as $X^{(-k)}$. Our plan is to regress $y^{(k)}$ on $X^{(-k)}$, and use the sparsity pattern of the regression coefficients to learn the sparsity pattern in the off-diagonal part of the $k$th row (equivalently, column) of $\Theta(\omega_j)$. In the low-dimensional regime, we can run an ordinary least squares (OLS) regression with complex response and predictors. In high dimensions, we will use the complex Lasso to estimate the nodewise regression coefficients
$$\hat{\beta}^{(k)} = \operatorname*{arg\,min}_{\beta \in \mathbb{C}^{p-1}} \; \frac{1}{2N}\,\bigl\|y^{(k)} - X^{(-k)}\beta\bigr\|_2^2 + \lambda\,\|\beta\|_1. \qquad (2.8)$$
Here $\|\beta\|_1 = \sum_{l} |\beta_l|$, $N$ is the effective sample size, and $\lambda > 0$ is the penalty parameter. Then the $l$th entry in the off-diagonal part of the $k$th row of the estimate of $\Theta(\omega_j)$ will be zero if and only if $\hat{\beta}^{(k)}_l = 0$. This may lead to a potentially asymmetric sparsity pattern, but we can symmetrize the sparsity pattern using standard post-processing techniques proposed in Meinshausen and Bühlmann [2006].
3 Optimization algorithms
While standard convex optimization algorithms such as the ADMM can be used to solve (2.6), they do not exploit the underlying sparsity structure of the problem and can be computationally expensive when solving the problem for different penalty parameters and frequencies. In the literature of high-dimensional regression and GGM, coordinate descent algorithms are popular because they provide considerable speed-up by using problem-specific strategies such as warm start and active set selection [Hastie et al., 2015]. A direct adoption of coordinate descent for complex graphical Lasso is difficult since the penalty $|\Theta_{k\ell}|$, for $k \neq \ell$, is effectively a group Lasso penalty over its real and imaginary parts. In general, a block coordinate descent update for group Lasso does not have a closed form, and one needs to solve a separate optimization problem at every iteration of the block coordinate descent.
Our key observation is that the specific group Lasso regression underlying the complex Lasso has an attractive feature: the predictors within a single group are orthogonal, which yields closed-form updates in the block coordinate descent iterations. Using this observation, we first propose CLASSO, a coordinate descent algorithm for Lasso with complex variables. Then we use CLASSO to build CGLASSO, a graphical Lasso algorithm for complex variables that solves (2.6).
While this observation of orthogonality alone is sufficient for developing fast algorithms, we argue that this is not a coincidence. It reflects a deep connection between complex and real matrices that can be leveraged to systematically realify a broader family of complex optimization problems. We formalize this connection first, and then present the CLASSO and CGLASSO algorithms.
3.1 Realifying complex optimization with ring isomorphism
At the core of our algorithm lies a well-known field isomorphism between the set of complex scalars and a set of orthogonal real matrices. We show that this can be extended from $\mathbb{C}$ to a ring isomorphism on complex-valued matrices. At a high level, this means that sums, products and inverses of complex matrices have direct parallels to the same operations in the space of real-valued matrices. The upshot is that some generic complex-variable optimization problems arising in frequency-domain time series can be reduced to well-understood optimization problems over real variables arising in high-dimensional statistics.
First, we illustrate this isomorphism in Figure 2 for a single complex number. The relationships $\iota \cdot 1 = \iota$ and $\iota \cdot \iota = -1$ imply that the operation “multiplication by $\iota$” can be viewed as a rotation that maps the unit vector $1$ on the complex plane to $\iota$, and maps the unit vector $\iota$ to $-1$. This linear map corresponds to the orthogonal matrix
$$\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}.$$
If we identify $1$ with $(1, 0)^{\top}$ and $\iota$ with $(0, 1)^{\top}$, we can identify any complex number $a + \iota b$, $a, b \in \mathbb{R}$, with a matrix of the form given in (3.1).
Next we formalize and expand on the above heuristic of isomorphism through a series of propositions. For a brief introduction to common structures in abstract algebra, e.g. group, ring, field and isomorphisms, we refer the readers to Appendix A.
Proposition 3.1.

Define $\varphi: \mathbb{C} \rightarrow \mathcal{M}$ as
$$\varphi(a + \iota b) = \begin{bmatrix} a & -b \\ b & a \end{bmatrix}.$$
Then $\varphi$ is a field isomorphism between $\mathbb{C}$ and $\mathcal{M}$, where
$$\mathcal{M} = \left\{ \begin{bmatrix} a & -b \\ b & a \end{bmatrix} : a, b \in \mathbb{R} \right\}. \qquad (3.1)$$
This implies

(i) $\varphi(0) = 0_{2 \times 2}$ and $\varphi(1) = I_2$.

(ii) $\varphi(z_1 + z_2) = \varphi(z_1) + \varphi(z_2)$ for all $z_1, z_2 \in \mathbb{C}$. In particular, $\varphi(-z) = -\varphi(z)$ for all $z \in \mathbb{C}$.

(iii) $\varphi(z_1 z_2) = \varphi(z_1)\,\varphi(z_2)$ for all $z_1, z_2 \in \mathbb{C}$. In particular, $\varphi(z^{-1}) = \varphi(z)^{-1}$ for all $z \neq 0$.

Also, for any $z \in \mathbb{C}$, denote $z = r e^{\iota\theta}$, where $r = |z|$. Then the following hold

(i) $\varphi(\bar{z}) = \varphi(z)^{\top}$,

(ii) $\varphi(z) = r \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$,

(iii) $|z| = \|\varphi(z)\, e_1\|_2 = \sqrt{\det\varphi(z)}$.
The proof is given in Appendix B. The isomorphism can be naturally extended to higher dimensional vector and matrix spaces. However, the map will no longer be a field isomorphism since for dimensions larger than one these spaces are not fields, but rings. So it reduces to a ring isomorphism.
For any $v \in \mathbb{C}^{p}$, we extend $\varphi$ as
$$\varphi(v) = \bigl[\varphi(v_1)^{\top}, \ldots, \varphi(v_p)^{\top}\bigr]^{\top} \in \mathbb{R}^{2p \times 2}. \qquad (3.2)$$
Similarly, for any $A \in \mathbb{C}^{p \times q}$, the extension is
$$\varphi(A) = \begin{bmatrix} \varphi(A_{11}) & \cdots & \varphi(A_{1q}) \\ \vdots & \ddots & \vdots \\ \varphi(A_{p1}) & \cdots & \varphi(A_{pq}) \end{bmatrix} \in \mathbb{R}^{2p \times 2q}. \qquad (3.3)$$
Below we state the properties of the extended map $\varphi$ in (3.2) and (3.3) without proof since they are similar to those in Proposition 3.1.
Proposition 3.2.
Note that the extended map is also a group isomorphism, and the additive identities are preserved. Hence the extension is a ring isomorphism as well. We formally state this result next.
Proposition 3.3.
Remark 3.1.
Note that even though the map $\varphi$ doubles the dimension of the input object, the computational complexity of dealing with objects from its image remains the same as that of the corresponding operations involving complex numbers.
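As a quick illustration of these propositions, the following NumPy sketch (our own helper `realify`, not part of the paper's software) builds the entrywise $2 \times 2$-block embedding (3.3) and checks numerically that sums, products and inverses are preserved, which is exactly the ring-isomorphism property our algorithms exploit.

```python
import numpy as np

def realify(A):
    """Embed a complex p x q matrix into a 2p x 2q real matrix by replacing
    each entry a + ib with the 2x2 block [[a, -b], [b, a]] (cf. (3.3))."""
    A = np.asarray(A, dtype=complex)
    p, q = A.shape
    out = np.zeros((2 * p, 2 * q))
    out[0::2, 0::2] = A.real
    out[0::2, 1::2] = -A.imag
    out[1::2, 0::2] = A.imag
    out[1::2, 1::2] = A.real
    return out

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# Homomorphism properties: phi(A + B) = phi(A) + phi(B), phi(AB) = phi(A) phi(B)
assert np.allclose(realify(A + B), realify(A) + realify(B))
assert np.allclose(realify(A @ B), realify(A) @ realify(B))
# Inverses map to inverses
assert np.allclose(realify(np.linalg.inv(A)), np.linalg.inv(realify(A)))
```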
So far the function $\varphi$ replaces every entry of a complex matrix by a $2 \times 2$ real matrix. While this is useful for proving the isomorphism properties, rearranging the rows and columns of $\varphi(A)$ helps us present our algorithms in a more compact form. Let $P$ be the permutation matrix that collects all real coordinates before all imaginary coordinates. Then for any $A \in \mathbb{C}^{p \times q}$, the rearranged version of $\varphi(A)$ takes the block form
$$\tilde{A} \;=\; \begin{bmatrix} \Re(A) & -\Im(A) \\ \Im(A) & \Re(A) \end{bmatrix} \in \mathbb{R}^{2p \times 2q},$$
and for a vector $v \in \mathbb{C}^{p}$ we write $\tilde{v} = \bigl(\Re(v)^{\top}, \Im(v)^{\top}\bigr)^{\top} \in \mathbb{R}^{2p}$.
3.1.1 Ordinary least squares (OLS) with complex variables
The aforementioned properties of $\varphi$ can be used to rewrite standard complex-variable optimization problems as real-variable transformed problems. For instance, the objective function of an ordinary least squares (OLS) problem can be rewritten as
$$\|y - X\beta\|_2^2 \;=\; \tfrac{1}{2}\,\bigl\|\varphi(y) - \varphi(X)\,\varphi(\beta)\bigr\|_F^2 \;=\; \bigl\|\tilde{y} - \tilde{X}\tilde{\beta}\bigr\|_2^2, \qquad (3.4)$$
where the last equality uses the fact that the Frobenius norm is invariant under orthogonal (permutation) transforms.
Therefore the isomorphism provides a bridge between the complex-variable and real-variable least squares problems. A similar correspondence can be established for the normal equations of the least squares problem
$$\tilde{X}^{\top}\tilde{X}\,\tilde{\beta} \;=\; \tilde{X}^{\top}\tilde{y}. \qquad (3.5)$$
This implies that the transformed normal equation (3.5) is equivalent to the complex normal equation $X^{*}X\beta = X^{*}y$.
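A minimal numerical check of this correspondence, with simulated data and our own variable names: solving the realified least squares problem in the block form of (3.4)–(3.5) recovers the same coefficients as the complex-variable solution.

```python
import numpy as np

rng = np.random.default_rng(2)
n, q = 50, 4
X = rng.standard_normal((n, q)) + 1j * rng.standard_normal((n, q))
beta_true = rng.standard_normal(q) + 1j * rng.standard_normal(q)
y = X @ beta_true + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Complex OLS via the normal equations X^* X beta = X^* y
beta_c = np.linalg.solve(X.conj().T @ X, X.conj().T @ y)

# Realified OLS: stack [Re y; Im y] and use the 2n x 2q block design
Xr = np.block([[X.real, -X.imag], [X.imag, X.real]])
yr = np.concatenate([y.real, y.imag])
beta_r = np.linalg.lstsq(Xr, yr, rcond=None)[0]

# Real and imaginary parts of the complex solution are recovered exactly
assert np.allclose(beta_c, beta_r[:q] + 1j * beta_r[q:])
```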
3.1.2 Complex Gaussian log-likelihood for precision matrix estimation
The Gaussian log-likelihood can alternatively be written in terms of real variables too. Consider a setup with i.i.d. complex Gaussian random vectors, and denote the complex-valued sample Gram matrix by $S$.
The negative log-likelihood can be expressed (ignoring constants) as
$$\ell(\Theta) = -\log\det\Theta + \operatorname{tr}(S\,\Theta).$$
Let us consider a similar function
If $\Theta = U\Lambda U^{*}$ is the spectral decomposition of $\Theta$, with $U$ a unitary matrix and $\Lambda$ a diagonal matrix with all positive real entries, then
Therefore,
$$\log\det\varphi(\Theta) \;=\; 2\,\log\det\Theta. \qquad (3.6)$$
We consider the Schur decomposition of $S\Theta$, with a unitary factor and an upper triangular factor whose diagonal carries the eigenvalues of $S\Theta$. Using the square root of the relevant positive definite matrix, one sees that $S\Theta$ is similar to a positive definite matrix, so all of its eigenvalues are positive, i.e. the triangular factor has real positive diagonal entries. Then the term involving the trace can be expressed as
$$\operatorname{tr}\bigl(\varphi(S)\,\varphi(\Theta)\bigr) \;=\; 2\,\operatorname{tr}(S\,\Theta), \qquad (3.7)$$
where the fact that the trace of a product of matrices is invariant under cyclic permutations is used twice in the calculation. Combining (3.6) and (3.7), we obtain
$$-\log\det\varphi(\Theta) + \operatorname{tr}\bigl(\varphi(S)\,\varphi(\Theta)\bigr) \;=\; 2\,\bigl[-\log\det\Theta + \operatorname{tr}(S\,\Theta)\bigr],$$
which implies that minimizing the negative log-likelihood with respect to $\Theta$ is equivalent to minimizing its realified counterpart with respect to $\varphi(\Theta)$ (there is a one-to-one correspondence between $\Theta$ and $\varphi(\Theta)$).
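The identities (3.6)–(3.7) can be verified numerically; the sketch below (assumed helper names, block-form realification) checks that the realified negative log-likelihood is exactly twice the complex one, so the two minimization problems share the same solution.

```python
import numpy as np

def block_embed(A):
    """[[Re A, -Im A], [Im A, Re A]] block form of the embedding phi."""
    return np.block([[A.real, -A.imag], [A.imag, A.real]])

rng = np.random.default_rng(3)
p = 4
# Random Hermitian positive definite "precision" Theta and Gram matrix S
B = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))
Theta = B @ B.conj().T + p * np.eye(p)
C = rng.standard_normal((p, 2 * p)) + 1j * rng.standard_normal((p, 2 * p))
S = C @ C.conj().T / (2 * p)

# Complex negative log-likelihood (up to constants): -log det Theta + tr(S Theta)
nll_c = -np.log(np.linalg.det(Theta).real) + np.trace(S @ Theta).real

# Realified version computed from the 2p x 2p real matrices
Tr, Sr = block_embed(Theta), block_embed(S)
nll_r = -np.log(np.linalg.det(Tr)) + np.trace(Sr @ Tr)

# The realified objective is exactly twice the complex one
assert np.isclose(nll_r, 2 * nll_c)
```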
3.2 Lasso with complex variables
In order to solve the complex Lasso optimization problem (2.8), we can use the group Lasso [Yuan and Lin, 2006] for real-valued predictors and response. We show this using generic notation for the complex Lasso. Let $X \in \mathbb{C}^{n \times q}$ and $y \in \mathbb{C}^{n}$ be the predictors and the response in a complex-variable regression problem. The Lasso problem is
$$\hat{\beta} = \operatorname*{arg\,min}_{\beta \in \mathbb{C}^{q}} \; \tfrac{1}{2}\,\|y - X\beta\|_2^2 + \lambda\,\|\beta\|_1, \qquad (3.8)$$
where $\lambda > 0$ is the penalty parameter. In the optimization problem (2.8), $X^{(-k)}$ and $y^{(k)}$ play the roles of $X$ and $y$ respectively.
Algorithm 1 (CLASSO, coordinate descent for the complex Lasso) — in step 4, for each coordinate $j$: (a) calculate the $j$th partial residual; (b) apply the complex soft-threshold update to the $j$th coefficient; (c) update the residuals.
The Lasso problem (3.8) can alternatively be expressed as an optimization problem with real variables. We use (3.4) and the fact that $|\beta_j| = \bigl\|(\Re(\beta_j), \Im(\beta_j))^{\top}\bigr\|_2$ for the regularization part of (3.8).
Altogether, the optimization problem (3.8) is equivalent to
$$\min_{\tilde{\beta} \in \mathbb{R}^{2q}} \; \tfrac{1}{2}\,\bigl\|\tilde{y} - \tilde{X}\tilde{\beta}\bigr\|_2^2 + \lambda \sum_{j=1}^{q} \bigl\|\bigl(\Re(\beta_j), \Im(\beta_j)\bigr)^{\top}\bigr\|_2. \qquad (3.9)$$
This is a group Lasso problem with $q$ groups, each of size 2. Let the optimizer of (3.9) be $\tilde{\beta}$, and let $\tilde{r}_j$ be the vector of $j$th partial residuals. Following equation (4.14) for the group Lasso in Hastie et al. [2015], the optimizer sets the $j$th group to zero when the corresponding group score falls below the threshold $\lambda$, and otherwise satisfies the following equation:
$$-\tilde{X}_j^{\top}\bigl(\tilde{r}_j - \tilde{X}_j\tilde{\beta}_j\bigr) + \lambda\,\frac{\tilde{\beta}_j}{\|\tilde{\beta}_j\|_2} = 0, \qquad (3.10)$$
where $\tilde{X}_j$ denotes the two columns of $\tilde{X}$ in group $j$ and $\tilde{\beta}_j = \bigl(\Re(\beta_j), \Im(\beta_j)\bigr)^{\top}$.
Equation (3.10) does not have a closed form solution in general, but here we exploit the structure of $\tilde{X}$. For each $j$, the two columns of $\tilde{X}_j$ are orthogonal, which implies $\tilde{X}_j^{\top}\tilde{X}_j = \|x_j\|_2^2\, I_2$, where $x_j$ is the $j$th column of $X$.
Therefore, the update (3.10) has a closed form expression for $\tilde{\beta}_j$ given by
$$\tilde{\beta}_j \;=\; \frac{1}{\|x_j\|_2^2}\,\Bigl(1 - \frac{\lambda}{\|\tilde{X}_j^{\top}\tilde{r}_j\|_2}\Bigr)_{+}\tilde{X}_j^{\top}\tilde{r}_j. \qquad (3.11)$$
Soft threshold operator. We define two close variants of the soft threshold operator widely used in the Lasso literature. For any $x \in \mathbb{R}$, let $(x)_{+} = \max(x, 0)$. For every $z \in \mathbb{C}$ and $\lambda > 0$, we define the soft threshold operator $\mathcal{S}_{\lambda}: \mathbb{C} \to \mathbb{C}$ as
$$\mathcal{S}_{\lambda}(z) \;=\; \Bigl(1 - \frac{\lambda}{|z|}\Bigr)_{+} z, \qquad (3.12)$$
and for any $v \in \mathbb{R}^{2}$, we similarly define
$$\tilde{\mathcal{S}}_{\lambda}(v) \;=\; \Bigl(1 - \frac{\lambda}{\|v\|_2}\Bigr)_{+} v$$
as the soft threshold operator in $\mathbb{R}^{2}$. The two operators are connected by the map $\varphi$ and the following identity. For any $z \in \mathbb{C}$, we have
$$\tilde{\mathcal{S}}_{\lambda}\bigl((\Re(z), \Im(z))^{\top}\bigr) \;=\; \bigl(\Re(\mathcal{S}_{\lambda}(z)),\, \Im(\mathcal{S}_{\lambda}(z))\bigr)^{\top},$$
where we use property (iii) from Proposition 3.1. A similar univariate threshold operator has been defined in Sun et al. [2018].
We now use these soft threshold operators to obtain a more convenient form of (3.11). The update (3.11) immediately becomes $\tilde{\beta}_j = \tilde{\mathcal{S}}_{\lambda}\bigl(\tilde{X}_j^{\top}\tilde{r}_j\bigr)/\|x_j\|_2^2$. If the columns of $X$ are scaled so that $\|x_j\|_2 = 1$ for every $j$, then
$$\tilde{\beta}_j \;=\; \tilde{\mathcal{S}}_{\lambda}\bigl(\tilde{X}_j^{\top}\tilde{r}_j\bigr). \qquad (3.13)$$
Using the group Lasso update provided in Hastie et al. [2015] and the soft threshold operator notation in (3.13), the group Lasso update in this case is
We denote i.e. , and for . Therefore, we can write
where . Using this representation, the Lasso update becomes
Therefore, $\varphi$ being an isomorphism, we can simply omit it from the Lasso updates above and still get an equivalent complex-valued update. The final steps are steps 4.(a)–(c) of Algorithm 1.
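The resulting coordinate descent can be summarized in a few lines. The sketch below (our own function names; scaling conventions such as a $1/n$ factor on the loss can be added as needed) implements the complex soft-threshold (3.12) and the cyclic updates corresponding to steps 4.(a)–(c); the paper's CLASSO additionally uses warm starts and active-set screening, discussed next.

```python
import numpy as np

def soft_threshold(z, tau):
    """Complex soft-threshold (3.12): shrink the modulus of z by tau, keep its phase."""
    mod = np.abs(z)
    return np.where(mod > tau, (1 - tau / np.maximum(mod, 1e-300)) * z, 0.0)

def classo(X, y, lam, b0=None, n_iter=200, tol=1e-8):
    """Cyclic coordinate descent for min_b 0.5*||y - X b||^2 + lam * sum_j |b_j|
    with complex X, y, b.  b0 is an optional warm-start value."""
    n, q = X.shape
    b = np.zeros(q, dtype=complex) if b0 is None else np.array(b0, dtype=complex)
    col_norm2 = np.sum(np.abs(X) ** 2, axis=0)        # ||x_j||^2
    r = y - X @ b                                     # full residual
    for _ in range(n_iter):
        b_old = b.copy()
        for j in range(q):
            r_j = r + X[:, j] * b[j]                  # (a) j-th partial residual
            z = X[:, j].conj() @ r_j                  #     x_j^* r_j
            b[j] = soft_threshold(z, lam) / col_norm2[j]   # (b) soft-threshold update
            r = r_j - X[:, j] * b[j]                  # (c) refresh residuals
        if np.max(np.abs(b - b_old)) < tol:
            break
    return b
```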
Complex Lasso with covariance update. We can modify Algorithm 1 to take the covariance matrix of the predictors and the cross products with the response as inputs, which is needed for the graphical Lasso. In step 4.(a), we can update the cross products directly instead of the residuals, i.e. the update for each $j$ will be
which is aligned with the fact that the Gram matrix of the predictors and their cross products with the response are sufficient for estimation. The modified algorithm is presented as Algorithm 2. It should also be noted that the objective function of Algorithm 2 is slightly different from that of Algorithm 1 in terms of scaling. The specific routine followed in Algorithm 2 is used in the CGLASSO algorithm we propose in Section 3.3.
Computational benefit of orthogonal group structure. We would like to highlight the importance of the orthogonal predictor structure in the above group Lasso formulation. While it is well-known that a Lasso with complex variables can be solved using standard group Lasso optimization routines [Maleki et al., 2013], to the best of our knowledge the orthogonality of predictors within groups has not been observed or utilized in algorithm development in the existing literature. We perform a small numerical experiment to compare the computational efficiency of CLASSO with that of a popular group Lasso R package grplasso that does not use the within-group orthogonality of predictors.
The experiment consists of a moderately high-dimensional sample. We randomly generate the real and imaginary parts of the design entries from a standard normal distribution and consider a sparse coefficient vector whose $j$th entry is nonzero if $j$ is odd and 0 otherwise. We then generate complex Gaussian errors and the corresponding response variable. The experiment is replicated 20 times and the run times for both CLASSO and grplasso are recorded. We used a MacBook Pro M1 with an 8-core CPU for running the experiment. The boxplots of the run times are shown in Figure 3. The average run times of CLASSO and grplasso are 0.01 and 1.34 seconds respectively, i.e. a run-time improvement of roughly 134-fold when we use CLASSO.
Warm start and active set selection. The complex Lasso can be sped up by incorporating warm starts and active set selection [Hastie et al., 2015]. In practice, one needs to find the Lasso solution not for one fixed $\lambda$, but for a decreasing sequence of $\lambda$ values, and obtain a regularization path. Such a regularization path helps us identify the best choice of $\lambda$ in terms of a certain criterion (e.g. relative mean squared error (RMSE), Bayesian information criterion (BIC) or cross-validation error; see Section 5 for details). One method to do so is to start with a reasonably large value of the tuning parameter and obtain an all-zero solution. For each smaller value of $\lambda$, we use the solution for the previous $\lambda$ as an initial value, i.e. a warm start, and compute the new solution. This scales up the efficiency of computing the Lasso solution for all tuning parameters since we can expect that a small change in the tuning parameter has a limited effect on the solution. This method is referred to as pathwise coordinate descent.
An active set is the set of indices of non-zero entries at a given iteration of the Lasso algorithm. If, for the vector of current residuals, the corresponding inner product falls below the threshold $\lambda$ in modulus, then that index is excluded from the active set. For the indices in the active set, the soft threshold step 4.(b) of Algorithm 1 is performed. The advantage of active set selection is that once an index is declared inactive, the number of soft threshold operations in each step decreases. If after a few iterations the solution starts to become sparse, then the number of soft threshold operations goes down significantly.
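A pathwise wrapper along these lines (a sketch reusing the `classo` helper from the previous code block; active-set screening is omitted here for brevity) simply sweeps a decreasing $\lambda$ grid and passes the previous solution as the starting point:

```python
import numpy as np

def classo_path(X, y, lambdas):
    """Pathwise complex Lasso: solve for a decreasing sequence of penalties,
    warm-starting each fit from the previous solution (see classo() above)."""
    path = {}
    b = None
    for lam in sorted(lambdas, reverse=True):   # start from the largest penalty
        b = classo(X, y, lam, b0=b)             # warm start
        path[lam] = b.copy()
    return path
```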
3.3 Graphical Lasso with complex variables
We return to the original problem (2.6) with the given matrix of DFT values and the averaged periodogram $\hat{f}(\omega_j)$ for a fixed Fourier frequency $\omega_j$. For ease of notation, we will denote $\hat{f}(\omega_j)$ as $\hat{f}$, and keep the frequency $\omega_j$ fixed throughout the algorithm description. We are allowed to do this since our algorithm does not involve information from the estimates at other frequencies. Also, we denote the inverse periodogram as $\hat{f}^{-1}$.
Algorithm 3 (CGLASSO, graphical Lasso with complex variables) — outline:
• Initialize the working covariance $W = \hat{f} + \lambda I$; the diagonal of $W$ remains unchanged throughout the algorithm.
• Repeat until convergence, cycling over columns:
(a) define the blocks of $W$ and $\hat{f}$ corresponding to the current column;
(b) solve the associated complex Lasso subproblem with CLASSO.COV (Algorithm 2);
(c) update the current row and column of $W$ with the fitted values.
• After convergence, recover $\hat{\Theta}$ from $W$ and the stored regression coefficients.
Using Claim B.1, the solution $\hat{\Theta}$ of the optimization problem (2.6) solves the following equation
$$\hat{f} - \Theta^{-1} + \lambda\,\Gamma = 0, \qquad (3.14)$$
where $\Gamma$ has the following entries
$$\Gamma_{k\ell} \in \begin{cases} \bigl\{\Theta_{k\ell}/|\Theta_{k\ell}|\bigr\} & \text{if } k \neq \ell,\ \Theta_{k\ell} \neq 0, \\ \bigl\{z \in \mathbb{C} : |z| \le 1\bigr\} & \text{if } k \neq \ell,\ \Theta_{k\ell} = 0, \\ \{0\} & \text{if } k = \ell. \end{cases} \qquad (3.15)$$
The system (3.14) can be solved using block coordinate descent [Friedman et al., 2008]. The current working versions of $W = \Theta^{-1}$ and $\hat{f}$ are partitioned, for the column being updated, into a $(p-1) \times (p-1)$ block, a $(p-1)$-dimensional column, and a scalar diagonal entry.
Let $W$ be the current working version of $\Theta^{-1}$. Following Friedman et al. [2008] and using some tools from the calculus of complex variables, it can be shown that solving (3.14) column by column is equivalent to solving a Lasso problem in which the off-diagonal block of $W$ plays the role of the Gram matrix and the corresponding column of $\hat{f}$ plays the role of the estimated cross product vector in the current model. Based on Algorithm 1 (in its covariance form, Algorithm 2), we can implement a block coordinate descent approach and obtain a graphical Lasso method in the complex regime.
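To make the block structure concrete, here is a schematic sketch in the spirit of Friedman et al. [2008]; `classo_cov` is a simplified stand-in for CLASSO.COV, and the exact scaling, initialization and sign conventions of the paper's Algorithm 3 and (3.14)–(3.15) may differ from this illustration.

```python
import numpy as np

def soft_threshold(z, tau):
    """Complex soft-threshold (3.12): shrink the modulus of z by tau."""
    mod = np.abs(z)
    return np.where(mod > tau, (1 - tau / np.maximum(mod, 1e-300)) * z, 0.0)

def classo_cov(A, b, lam, n_iter=200, tol=1e-8):
    """Coordinate descent for min_beta 0.5*beta^* A beta - Re(b^* beta) + lam*||beta||_1,
    with A Hermitian positive definite (covariance-input complex Lasso)."""
    q = A.shape[0]
    beta = np.zeros(q, dtype=complex)
    for _ in range(n_iter):
        beta_old = beta.copy()
        for j in range(q):
            z = b[j] - A[j, :] @ beta + A[j, j] * beta[j]
            beta[j] = soft_threshold(z, lam) / A[j, j].real
        if np.max(np.abs(beta - beta_old)) < tol:
            break
    return beta

def cglasso(S, lam, n_sweeps=50):
    """Schematic complex graphical Lasso: block coordinate descent over columns."""
    p = S.shape[0]
    W = S + lam * np.eye(p)               # working estimate of the spectral density
    B = np.zeros((p, p), dtype=complex)   # stored regression coefficients
    for _ in range(n_sweeps):
        for j in range(p):
            idx = np.delete(np.arange(p), j)
            W11, s12 = W[np.ix_(idx, idx)], S[idx, j]
            beta = classo_cov(W11, s12, lam)
            B[idx, j] = beta
            W[idx, j] = W11 @ beta        # update current column of W
            W[j, idx] = W[idx, j].conj()
    # Recover Theta from W and the stored coefficients
    Theta = np.zeros((p, p), dtype=complex)
    for j in range(p):
        idx = np.delete(np.arange(p), j)
        beta = B[idx, j]
        theta_jj = 1.0 / (W[j, j].real - (beta.conj() @ W[idx, j]).real)
        Theta[j, j] = theta_jj
        Theta[idx, j] = -theta_jj * beta
    return (Theta + Theta.conj().T) / 2   # enforce Hermitian symmetry numerically
```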
3.3.1 Implementation details
We provide some details on the implementation and convergence of the CGLASSO algorithm. While coordinate descent is fast and simple to implement, it is known to face convergence issues for very small values of the penalty parameter, even in classical Lasso and graphical Lasso problems [Mazumder and Hastie, 2012; Friedman et al., 2007]. While we also encounter similar issues, we find that the algorithm performs robustly and accurately on the portion of the regularization path where $\lambda$ is neither too small nor too large. We also make some modifications to the base algorithm that help with convergence and speed up performance.
1.

2. Warm and warmer start: Warm start is a standard technique to speed up convergence of pathwise coordinate descent (see Section 3.2). We find that in the implementation of CGLASSO, another modification, which we call warmer start, helps more. In simple terms, warmer start is a two-fold warm start for the CGLASSO and the inner CLASSO.COV. For a fixed frequency we can find the CGLASSO solution for a decreasing sequence of $\lambda$ values and construct a regularization path. Starting from a reasonably high value of $\lambda$, we obtain the solution with all off-diagonal entries equal to zero. For each smaller value of $\lambda$, the solution for the previous $\lambda$ is used as an initial value. In addition, at each iteration of CGLASSO we go through the following steps (b and c of Algorithm 3):
The above routine is a two-step warm-up for the initial solution in both the Lasso and the graphical Lasso algorithms.
3. Convergence issues for small $\lambda$: For very small $\lambda$, we may encounter convergence issues. In our simulation studies, we handle this issue by declaring a stopping rule for the regularization path depending on the estimation error, since the true $\Theta(\omega_j)$ is known. We calculate the relative mean squared error (RMSE) which, at frequency $\omega_j$ and penalty $\lambda$, is defined in (5.3).
Let us assume for now that we are estimating at a fixed frequency, so that we can drop the frequency index. If the initial value of the penalty parameter is the largest value on the grid, and we track the minimizer of the RMSE values along the regularization path, then at the current $\lambda$ we stop the regularization path if the gap from that minimizer is large enough and
Such a stopping criterion performs very well when the regularization path is ideally U-shaped, so that the stopping rule is able to detect the deviation of the regularization path from the minimum.
While our reported results are not affected by these choices, i.e. the min-BIC solutions are achieved at values of $\lambda$ where the algorithm converges and does not require a hard cut-off, we do plan to resolve these issues in future work.
4 Theoretical properties
In this section we establish non-asymptotic high-probability upper bounds on the estimation error of $\hat{\Theta}(\omega_j)$, for a single frequency $\omega_j$. First we will control the distance between the averaged periodogram and the true spectral density in elementwise maximum norm. Our proof techniques build upon the arguments in Ravikumar et al. [2011] and Sun et al. [2018]. In particular, Ravikumar et al. [2011] used a primal-dual witness technique which constructs a primal-dual pair that satisfies the optimality conditions with high probability. Given that the construction is successful, the witness is equal to the solution of the optimization problem (2.6). The advantage of this construction is that we can ensure both estimation and variable selection properties of the estimator with high probability.
In this section, we first state the assumptions and explain the notations of terms that will govern our error bounds. Then we present the estimation error bounds for Gaussian and non-Gaussian linear processes.
Sparsity: The sparsity parameters capture the ambient dimension of the optimization problem. We use to denote the true spectral precision matrix at . The set of off-diagonal entries of , used to define the edge set of the graph, is denoted by
and the augmented edge set is denoted as . For simplicity, we write as . We need the following two sparsity parameters in our analysis,
The first parameter captures the total number of edges in the graph. The second parameter is the maximum degree of a node, and equals the maximum number of non-zero entries in any row of $\Theta(\omega_j)$.
Our analysis will show that consistent estimation is possible when the effective sample size is larger than the ambient dimension captured by the above parameters.
Identifiability: In high-dimensional statistics, some type of identifiability assumption needs to hold in order to carry out meaningful estimation. Typical assumptions used in the literature are restricted eigenvalue, restricted strong convexity or irrepresentability conditions. These conditions inherently impose restrictions to ensure that the support and non-support variables are not too similar. These assumptions are also closely related to some form of strong convexity or steep curvature of the second derivative of the loss function at the true parameter in certain directions of the high-dimensional space.
We consider the function
then the Hessian is essentially the Kronecker product $\Theta^{-1}(\omega_j) \otimes \Theta^{-1}(\omega_j)$, i.e. a matrix indexed by vertex pairs/edges and evaluated at the true spectral precision. This can be verified from the results on matrix derivatives in Boyd and Vandenberghe [2004]. The following quantity captures the magnitude of this Hessian restricted to the true support set, and will appear in our error bounds for Whittle estimation.
We will also need the following irrepresentability assumption, which is common in the literature of GGM [Ravikumar et al., 2011].
Assumption 4.1.
There exists some such that
Such an assumption limits the influence of the non-edge terms on the edge terms [Zhao and Yu, 2006].
Temporal memory: We will need a few parameters to capture the effect of temporal and cross-sectional memory of the process on our error bounds.
Assumption 4.2.
We also define the following quantity
which is a measure of stability of the time series. Larger values of this quantity are associated with processes having stronger temporal and cross-sectional dependence and less stability. The above assumption ensures its finiteness since, for any frequency,
We define two model dependent quantities
(4.1) |
These two capture the strength of temporal and contemporaneous dependence in the multivariate time series. According to proposition 3.3 of Sun et al. [2018] they play an important role to control the bias of averaged periodogram. We shall use this information to control the bias in the estimation setup we deal with.
Finally, we will need the quantity
in our error bounds. It captures the size of the spectral density matrix in norm, and is used to control the curvature of the Hessian of the loss function in cglasso.
4.1 Case I: Gaussian time series
Our first result, rephrased from Propositions 3.5 and 3.6 of Sun et al. [2018] using our notations, provides a high probability upper bound on the element-wise difference between the averaged periodogram and the true spectral density. The bound depends on the bias and tail decay of , and decays with the dimension at the rate , where is the bandwidth parameter in spectral density estimation.
Proposition 4.1.
The proof is given in Appendix B for completeness.
Remark 4.1.
The statement of Proposition 4.1 is non-asymptotic in nature, and the required sample size and bandwidth parameter can depend on the stability parameter of the process. For appropriate choices of the sample size and bandwidth, the bias term can vanish faster than the tail decay for certain data generating processes. For example, according to Proposition 3.4 of Sun et al. [2018], if the autocovariance function of the underlying centered stationary process decays sufficiently fast, the components of the bias term can be controlled explicitly. For a suitable polynomial-in-$n$ choice of the bandwidth parameter, the bias then vanishes faster than the tail decay, and the threshold term is driven by the tail-decay rate. This is similar to the element-wise deviation control of covariance matrices in Ravikumar et al. [2011] and Bickel and Levina [2008].
Our next result uses the element-wise deviation bound as a starting point to control the departure of graphical lasso estimator from the true spectral precision in different norms.
Theorem 4.1.
Let be consecutive observations from a stable Gaussian centered time series satisfying Assumptions 4.1 and 4.2. Consider a single Fourier frequency , and the true spectral precision matrix . Assume and , and denote . Then for any choice of tuning parameters , , , and any , there exist universal constants such that the following statements hold with probability at least ,
(a) The CGLASSO estimator in (2.6) satisfies
(b) The edge set specified by $\hat{\Theta}(\omega_j)$, i.e. its off-diagonal support, is a subset of the augmented edge set and includes all edges whose true entries are sufficiently large in modulus,
(c) The estimator satisfies
(4.4) (4.5)
Remark 4.2.
The proof is deferred to Appendix B. In the statement of Theorem 4.1, the stated condition ensures that the tail decay and the bias are separately controlled by the threshold. However, under common mixing conditions, the tail decay may asymptotically dominate the bias term. For instance, as shown in Remark 4.1, when the autocovariance function of the underlying centered stationary process decays sufficiently fast, a suitable choice of the bandwidth makes the tail decay dominate the bias term. If the model-dependent quantities are asymptotically constant, the resulting sample size requirement is similar to that of Ravikumar et al. [2011] for precision matrix estimation using the real graphical Lasso. In each iteration of the CGLASSO algorithm, CLASSO, i.e. the complex variant of the Lasso algorithm, is implemented; hence the Lasso succeeds with high probability given a sufficiently large effective sample size [Wainwright, 2009]. As addressed in Ravikumar et al. [2011], the bandwidth requirement might be an artifact of the theoretical analysis in this case. Also, consistency of $\hat{\Theta}(\omega_j)$ around $\Theta(\omega_j)$ is ensured when the effective sample size is large enough. Part (b) of Theorem 4.1 also states that entry-wise model selection consistency is achieved when the true entry is larger than a signal level depending on the threshold.
4.2 Case II: Linear processes
In this section, we extend the estimation consistency results from Gaussian time series to linear processes with non-Gaussian errors. We focus on three classes of error distributions given in Section 4 of Sun et al. [2018]. We consider a general linear process with absolutely summable MA($\infty$) coefficients
$$X_t = \sum_{\ell = 0}^{\infty} B_\ell\, \varepsilon_{t - \ell}, \qquad (4.6)$$
where $B_\ell \in \mathbb{R}^{p \times p}$, and the errors $\varepsilon_t$ have i.i.d. centered components with possibly heavier tails than Gaussian. The absolute summability of the MA($\infty$) coefficients ensures stationarity of the process [Rosenblatt, 2012]
$$\sum_{\ell = 0}^{\infty} \bigl| B_\ell \bigr|_{\infty} < \infty. \qquad (4.7)$$
Under this condition, the autocovariance function is well-defined for every lag, and Assumption 4.2 holds (see Lemma C.7 of Sun et al. [2018]). We assume that all the components of $\varepsilon_t$ come from one of the following three distribution families:
(C1) Sub-Gaussian: There exists $\sigma > 0$ such that for all $\eta > 0$, $\mathbb{P}(|\varepsilon_{tj}| > \eta) \le 2\exp\bigl(-\eta^2/(2\sigma^2)\bigr)$.

(C2) Generalized sub-exponential family with parameter $\nu > 0$: There exist positive constants $a$ and $b$ such that for all $\eta > 0$, $\mathbb{P}(|\varepsilon_{tj}| > \eta^{\nu}) \le a\exp(-b\eta)$.

(C3) Distribution with finite 4th moment: There exists $K > 0$ such that $\mathbb{E}\bigl[\varepsilon_{tj}^4\bigr] \le K < \infty$.
With this specification of error distribution, we have the following results on the estimation error of averaged periodogram.
Proposition 4.2.
Suppose is a linear process (4.6) satisfying (4.7), and comes from one of the distribution families (C1), (C2) or (C3). Consider a given Fourier frequency . Assume , where for the three families (C1), (C2) and (C3) respectively. Then for satisfying and , and the following choice of threshold with an ,
-
(C1)
,
-
(C2)
,
-
(C3)
,
the error of averaged periodogram satisfies
(4.8) |
where the tail probabilities , , for (C1), (C2) and (C3) are given by
and are constants depending only on the distributions of errors but not on the temporal dependence parameters of the linear process.
This proposition shows how the sample size requirement changes as the errors in the linear process have heavier tails than Gaussian. For example, under the assumption of only a finite fourth moment, substantially more samples are needed, whereas in the sub-Gaussian setting a much smaller sample size is sufficient for consistency.
Theorem 4.2.
Let be as in Proposition 4.2. Consider a single Fourier frequency . Assume , and where for are as described in proposition 4.2, and denote . Then for any choice of tuning parameters , , , and any , there exist universal constants such that with probability greater than , (a),(b) and (c) in Theorem 4.1 hold.
Proofs of Proposition 4.2 and Theorem 4.2 follow from arguments similar to those of Propositions 4.3 and 4.4 of Sun et al. [2018], where estimation error bounds for spectral density matrices were considered. For general linear processes, the first term of the threshold in Proposition 4.2 is tailored to the underlying process, i.e. the threshold for the estimation error decays more slowly from (C1) to (C3) as the tail of the error distribution becomes heavier. The order of the required sample size and bandwidth also increases for heavier-tailed errors.
The proof of Theorem 4.2 follows in a similar way as that of Theorem 4.1 (given in Appendix B); the upper bound on the right side of (4.3) is replaced by the more general set of upper bounds for linear processes given in (4.8) of Proposition 4.2.
5 Numerical experiments
We conduct three sets of experiments to demonstrate the finite sample properties of CGLASSO, and compare its performance with existing alternative methods.
In the graphical model and Lasso literature [Janková and van de Geer, 2018; Carter et al., 2023], using a single tuning parameter is not ideal because the variables may be on different scales and possibly have different standard deviations. Hence the scaling issue needs to be carefully addressed. We use the following two variants of the CGLASSO algorithm.
CGLASSO-I: The first variant, called CGLASSO-I, takes the common approach of scaling all the DFT components to have unit variance before using them as input to CGLASSO. The final output is then rescaled using a diagonal matrix containing the estimated standard deviations of the components. Similar to Janková and van de Geer [2018], this is equivalent to solving a graphical Lasso based on the sample coherence matrix and rescaling it back. The corresponding optimization problem is
(5.1) |
The above optimization problem is equivalent to a complex graphical Lasso with a weighted penalty (a similar fact for the graphical Lasso on a real covariance matrix is stated in Janková and van de Geer [2018]), where the weights are determined by the diagonal matrix of standard deviations, that is
CGLASSO-II: The second variant, called CGLASSO-II, takes a more involved approach where, in each invocation of CLASSO.COV within CGLASSO, the input of CLASSO.COV is rescaled by its current diagonal. One way to view this operation is that every time a Lasso is run within the inner loop of CGLASSO, it ensures that the predictors in the Lasso input are scaled. This is equivalent to the following scheme for each iteration of the Lasso update:
(5.2) |
Comparison between CGLASSO variants: We compare CGLASSO-I and CGLASSO-II at a fixed frequency. Our comparison is based on the regularization paths (Figures 4a and 4b) of the relative mean squared error (RMSE), as described in (5.3), for two different DGPs.
Experiment 1: We compare CGLASSO with Sparse Inverse Periodogram Estimator (SIPE) of Fiecas et al. [2019] on white noise and VAR processes. Our results show that penalizing real and imaginary parts separately as in SIPE can lead to loss of estimation and model selection accuracy. It is indeed preferable to penalize the real and imaginary parts together using a group penalty, as specified in the original penalized Whittle likelihood formulation (2.6). Our findings qualitatively match with the findings in Maleki et al. [2013] for generic regression with complex variables.
Experiment 2: We report the relative mean squared error (RMSE) of CGLASSO at a fixed frequency for three different DGPs (white noise, VAR and VARMA), and different values of $p$ and $n$. We show that CGLASSO substantially outperforms the traditional estimator obtained by inverting the averaged periodogram.
Experiment 3: We report model selection performance at a fixed frequency for the same settings as in Experiment 2 using precision, recall, accuracy and area under the ROC curve (AUROC). We show that CGLASSO outperforms the node-wise regression (NWR) algorithm of Section 2.3, in the spirit of Meinshausen and Bühlmann [2006], in terms of AUROC.
Before presenting the results of our experiments, we describe our simulation setup, performance metrics and choice of tuning parameters.
Generative models: Three DGPs are considered, and for each setup we generate data for several combinations of dimension $p$ and sample size $n$. The DGPs are as follows:
(DGP1) White noise: We consider i.i.d. Gaussian observations whose covariance structure is specified by a tri-diagonal symmetric positive definite matrix with diagonal entries 0.7 and upper and lower off-diagonal entries 0.3.

(DGP2) VAR(1): We consider a VAR(1) process whose coefficient matrix has a banded structure, with nonzero entries on the main diagonal and the adjacent off-diagonals and zeros elsewhere.

(DGP3) VARMA(2, 2): The setting in this case is similar to the simulation setup of Wilms et al. [2021]. The model is a VARMA(2, 2) built from identity matrices and matrices of all ones: for the coefficients of the autoregressive part we use diagonal matrices, and for the coefficients of the moving average part of the model we use block diagonal matrices.
We estimate the spectral precision matrix for all of the above three setups and compare it with the true matrix. As mentioned in Section 11.5 of Brockwell and Davis [1991], the true spectral density of the VARMA($p$, $q$) process $X_t = A_1 X_{t-1} + \cdots + A_p X_{t-p} + \varepsilon_t + B_1 \varepsilon_{t-1} + \cdots + B_q \varepsilon_{t-q}$, with $\operatorname{Cov}(\varepsilon_t) = \Sigma_{\varepsilon}$ for all $t$, is given by
$$f(\omega) = \frac{1}{2\pi}\, \Phi^{-1}(e^{-\iota\omega})\, \Psi(e^{-\iota\omega})\, \Sigma_{\varepsilon}\, \Psi(e^{-\iota\omega})^{*}\, \Phi^{-1}(e^{-\iota\omega})^{*},$$
where $\Phi(z) = I - A_1 z - \cdots - A_p z^{p}$ and $\Psi(z) = I + B_1 z + \cdots + B_q z^{q}$.
In our case the specific forms of the true spectral densities are calculated by plugging in the autoregressive and moving average coefficients and $\Sigma_{\varepsilon}$, and the spectral precision is $\Theta(\omega) = f^{-1}(\omega)$. One observation is that for the white noise model, $f(\omega) \propto \Sigma_{\varepsilon}$, hence $\Theta(\omega) \propto \Sigma_{\varepsilon}^{-1}$ for all $\omega$. Therefore, the sparsity patterns of the inverse covariance matrix and the spectral precision matrix are identical.
At a fixed frequency, we seek to obtain the full regularization path for a range of penalty parameter ($\lambda$) values. Given the estimate at a certain $\lambda$, we incorporate a warm start, i.e. we use it as an initial value when estimating at a nearby $\lambda$. The warm start helps us obtain the full regularization path by repeatedly using the current estimate as an initial value for the next step.
Performance metric: In order to compare the estimated output with the true spectral precision matrix, we use a relative mean squared error (RMSE) at frequency $\omega_j$, defined as
$$\mathrm{RMSE}(\lambda, \omega_j) \;=\; \frac{\bigl\|\hat{\Theta}_{\lambda}(\omega_j) - \Theta(\omega_j)\bigr\|_F}{\bigl\|\Theta(\omega_j)\bigr\|_F}. \qquad (5.3)$$
In order to capture how well the graphical Lasso recovers the sparsity pattern and the non-zero coordinates of the true $\Theta(\omega_j)$, we use precision, recall and accuracy measures defined as follows:
precision = TP / (TP + FP), recall = TP / (TP + FN), accuracy = (TP + TN) / (TP + TN + FP + FN),
where TP, FP, TN and FN denote the numbers of true positive, false positive, true negative and false negative off-diagonal entries respectively.
In addition, we calculate the area under receiver operating characteristic (ROC) curve. All the experiments are replicated 20 times in order to calculate the mean and standard deviation of each performance metric.
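For reference, these metrics can be computed as in the following sketch (hypothetical helper names; the RMSE here is taken as the Frobenius-norm relative error, one natural reading of (5.3)).

```python
import numpy as np

def relative_mse(Theta_hat, Theta_true):
    """Relative error ||Theta_hat - Theta||_F / ||Theta||_F, as in (5.3)."""
    return np.linalg.norm(Theta_hat - Theta_true) / np.linalg.norm(Theta_true)

def support_metrics(Theta_hat, Theta_true, tol=1e-8):
    """Precision / recall / accuracy of off-diagonal support recovery."""
    p = Theta_true.shape[0]
    off = ~np.eye(p, dtype=bool)
    est = np.abs(Theta_hat[off]) > tol
    true = np.abs(Theta_true[off]) > tol
    tp = np.sum(est & true)
    precision = tp / max(np.sum(est), 1)
    recall = tp / max(np.sum(true), 1)
    accuracy = np.mean(est == true)
    return precision, recall, accuracy
```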
Bandwidth and tuning parameter selection: We set the smoothing span based on the theoretical results in Section 4. Such a choice of smoothing span is also supported by Sun et al. [2018] and Fiecas et al. [2019].
We use the extended Bayesian information criterion (EBIC) [Chen and Chen, 2008] for tuning parameter selection. As discussed in Foygel and Drton [2010], the expression of EBIC for the Gaussian graphical model in our case would be
where the first term is the log-likelihood corresponding to the Gaussian graphical model, the edge set is that of the candidate graph corresponding to $\lambda$ (described in Section 4), and $\gamma$ is a parameter. The choice of $\gamma$ has been theoretically studied elsewhere [Chen and Chen, 2008]. In our simulations, we only report results for $\gamma = 0$ for all the models, when EBIC is essentially BIC. However, in high-dimensional models, the extended BIC with a suitable positive value of $\gamma$ can be used as a tuning parameter selection criterion. Additional experiments (not reported here) with $\gamma > 0$ show that our results are qualitatively robust to its choice. We choose the tuning parameter with the minimum value of BIC, that is
In order to find the minimum, we search over a grid of $\lambda$ values on a log-linear scale.
RMSE (mean, with standard deviation in parentheses); rows within each family correspond to different $(p, n)$ combinations.

| Family | Inverse periodogram | CGLASSO (oracle) | CGLASSO (BIC) |
|---|---|---|---|
| White Noise | | | |
| | 196.56 (189.68) | 13.01 (3.69) | 13.74 (3.52) |
| | 65.88 (28.74) | 9.09 (2.32) | 10.80 (2.77) |
| | 3015.44 (1807.44) | 13.11 (2.28) | 19.69 (1.45) |
| | 482.38 (332.41) | 9.82 (1.72) | 14.53 (1.61) |
| | - | 15.87 (1.65) | 32.13 (0.92) |
| | - | 11.34 (0.80) | 23.31 (0.78) |
| VAR(1) | | | |
| | 155.00 (84.18) | 12.07 (3.04) | 13.25 (2.86) |
| | 70.89 (49.07) | 9.27 (2.19) | 10.59 (1.88) |
| | 3233.31 (3136.76) | 11.84 (2.12) | 14.14 (1.74) |
| | 394.83 (220.99) | 8.86 (1.51) | 10.95 (1.44) |
| | - | 14.49 (1.69) | 28.70 (2.27) |
| | - | 12.04 (2.02) | 25.17 (2.36) |
| VARMA(2, 2) | | | |
| | 414.04 (277.31) | 8.49 (1.86) | 9.79 (3.39) |
| | 109.53 (53.68) | 5.57 (1.77) | 7.06 (2.04) |
| | 13704.13 (16463.75) | 10.36 (2.06) | 11.46 (4.38) |
| | 736.20 (322.94) | 7.82 (1.46) | 8.60 (1.53) |
| | - | 11.96 (1.17) | 13.21 (1.21) |
| | - | 9.02 (0.89) | 12.20 (0.84) |
Table 2: Model selection performance, reported as mean (standard deviation) over 20 replicates, in %: AUROC for NWR and CGLASSO, and precision, recall and accuracy for CGLASSO; rows within each family correspond to the different dimension and sample size combinations.

| Family | NWR AUROC | CGLASSO AUROC | CGLASSO precision | CGLASSO recall | CGLASSO accuracy |
|---|---|---|---|---|---|
| White Noise | | | | | |
| | 89.41 (6.77) | 94.49 (4.24) | 44.45 (7.99) | 96.67 (6.35) | 74.11 (6.74) |
| | 96.67 (2.98) | 97.76 (2.66) | 42.98 (7.00) | 98.33 (4.07) | 72.68 (6.61) |
| | 87.64 (5.97) | 95.53 (3.22) | 39.30 (5.58) | 94.74 (5.40) | 84.42 (3.45) |
| | 94.60 (4.00) | 98.90 (1.29) | 37.78 (4.93) | 99.21 (1.93) | 83.21 (3.14) |
| | 88.58 (2.42) | 87.07 (3.01) | 55.98 (5.26) | 75.41 (5.90) | 96.61 (0.48) |
| | 93.01 (2.07) | 97.31 (1.47) | 43.28 (4.75) | 95.41 (2.80) | 94.71 (0.98) |
| VAR(1) | | | | | |
| | 83.78 (6.09) | 90.13 (5.12) | 66.56 (6.04) | 87.65 (7.61) | 78.44 (4.84) |
| | 90.2 (3.98) | 96.07 (2.94) | 69.86 (5.54) | 95.75 (4.42) | 82.47 (4.03) |
| | 90.84 (3.12) | 95.83 (1.96) | 49.44 (3.77) | 95.45 (2.86) | 79.95 (2.85) |
| | 95.25 (2.24) | 97.89 (1.63) | 48.62 (2.85) | 97.87 (2.48) | 79.34 (2.33) |
| | 93.72 (1.86) | 98.69 (0.67) | 45.43 (3.49) | 98.06 (1.09) | 90.43 (1.39) |
| | 97.25 (0.85) | 99.72 (0.39) | 42.47 (2.52) | 99.57 (0.71) | 89.23 (1.04) |
| VARMA(2, 2) | | | | | |
| | 70.56 (7.85) | 88.25 (3.83) | 79.34 (8.58) | 83.00 (6.16) | 82.33 (5.37) |
| | 74.16 (6.37) | 89.54 (4.52) | 73.67 (7.01) | 86.25 (7.23) | 79.89 (5.92) |
| | 66.11 (5.23) | 87.22 (4.34) | 64.37 (5.47) | 79.50 (8.10) | 86.32 (2.51) |
| | 72.32 (5.00) | 90.94 (1.44) | 64.557 (4.68) | 85.875 (2.84) | 86.95 (2.102) |
| | 72.65 (2.30) | 89.47 (1.98) | 48.32 (3.51) | 81.85 (3.90) | 91.30 (1.00) |
| | 70.73 (2.56) | 92.73 (2.35) | 49.94 (4.29) | 87.60 (4.44) | 91.75 (1.162) |
Results of comparison between CGLASSO-I and CGLASSO-II: The comparison of relative errors indicates that the minimum RMSE along the regularization path of CGLASSO-II is lower than that of CGLASSO-I. Empirically, the more temporal and cross-sectional dependence the data generating process involves, the larger the relative-error advantage of CGLASSO-II. We demonstrate this with two cases: data from a white noise model with , , and a VARMA(2, 2) with , (Figures 4a and 4b). Even when the data generating process is fixed, the gap between the minimum RMSEs tends to widen in high-dimension, low-sample-size setups. In a nutshell, the scale-modified algorithm outperforms the vanilla version as the estimation problem becomes harder, in terms of the data generating process, the dependence structure, and the sample-size-to-dimension ratio. Also, the minimum RMSE along the regularization path of CGLASSO-II is attained at larger values of the penalty parameter, implying that the best-fitting model is sparser when scaling is incorporated.
Results: Next we describe the key findings of our three experiments.
Experiment 1: We compare our proposed estimator with the sparse inverse periodogram estimator (SIPE) of Fiecas et al. [2019]. The latter uses the constrained $\ell_1$-minimization for inverse matrix estimation (CLIME) of Cai et al. [2011]. For several combinations of dimension and sample size under the white noise and VAR(1) models, we calculate the relative mean squared error at a fixed frequency along the regularization path.
In the white noise model, we see in Figures 5a and 5b that the minimum relative error of SIPE is noticeably larger than that of complex graphical lasso. In addition, the area under the ROC curve for SIPE is lower than that of complex graphical lasso, indicating better model selection performance by the latter method. Additionally, if we run the experiment with a sample of size 200 from a 50-dimensional VAR(1) model whose autoregressive coefficient matrix is block diagonal, each block having all diagonal elements equal to 0.5 and upper off-diagonal elements equal to 0.2, the minimum RMSE of SIPE is approximately 28% larger than that of complex graphical lasso. Similar to the white noise model, the area under the ROC curve for SIPE is almost 12% lower than that of complex graphical lasso, as shown in Figures 5c and 5d. One reason complex graphical lasso outperforms by such a margin may be that Fiecas et al. [2019] use CLIME twice, estimating the real and imaginary parts of the spectral precision separately; in particular, their method treats the real and imaginary parts as separate sets of estimands, inflating the number of parameters relative to graphical lasso.
For the sparse inverse periodogram estimator, the AUROC values are higher at larger values of the penalty parameter (Figures 5c and 5d). However, the RMSE also increases with higher penalty. For complex graphical lasso, the minimum-RMSE estimator also attains a higher AUROC than that of SIPE, indicating that for our proposed method the estimator with the minimum relative error is also able to recover the true sparsity structure well.
Experiment 2: The relative error comparison with the classical estimator, i.e. the inverted averaged periodogram, is reported in Table 1. When the dimension exceeds the number of periodogram ordinates being averaged, the averaged periodogram is rank deficient and hence not invertible, so the corresponding entries are left blank. In most simulation settings, our proposed method not only has a significantly smaller relative error, but its BIC-selected model also has RMSE values only slightly higher than the minimum along the path. In addition, the RMSE decreases when we fix the dimension and increase the sample size. This supports the fact that the relative estimation error should decrease as a larger sample becomes available.
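This rank deficiency is easy to verify numerically; the toy sizes below are hypothetical and only illustrate that an averaged periodogram built from 2m+1 rank-one terms has rank at most 2m+1.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, m = 24, 100, 5                         # dimension, series length, half smoothing span
x = rng.standard_normal((n, p))              # a white noise series, for illustration
dft = np.fft.fft(x, axis=0) / np.sqrt(n)     # discrete Fourier transform of the series
j = 20                                       # index of the target Fourier frequency
# averaged periodogram: mean of 2m + 1 rank-one matrices d(w_k) d(w_k)^*
f_hat = np.mean([np.outer(dft[k], dft[k].conj())
                 for k in range(j - m, j + m + 1)], axis=0)
print(np.linalg.matrix_rank(f_hat))          # at most 2m + 1 = 11 < p = 24, so singular
```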
Experiment 3: The area under the ROC curve, precision, recall and accuracy are reported in Table 2. In most of the simulation settings, the AUROC values are above 80%, with most values lying in the range 90-100%, indicating good recovery of the sparsity and signal structure by our method. In addition, in each setting with the data generating family and the dimension fixed, increasing the sample size results in an increase in AUROC. The method has lower precision but higher recall, indicating a tendency of CGLASSO to select false positives. This pattern of $\ell_1$-penalized estimators is well known in the literature, and two-stage methods such as the adaptive lasso [Zou, 2006] have gained popularity to alleviate this problem. Such an extension of CGLASSO is beyond the scope of this paper, and we leave it for future research.
6 Functional connectivity analysis with fMRI data


We implement CGLASSO for interpretation and visualization of functional connectivity among different brain regions of a human subject. The data set used here is publicly available, high-resolution, preprocessed magnetic resonance imaging data from the Human Connectome Project (HCP) Young Adult S1200 release [Van Essen et al., 2013; Dhamala et al., 2020]. We consider the time series data of 1003 healthy adults observed over the same time horizon across four complete resting-state fMRI runs, each scan with 1200 time points.
Preprocessing: MRIs were acquired on a Siemens Skyra 3 T scanner at Washington University in St. Louis. Each subject underwent four gradient-echo EPI resting-state functional MRI (rsfMRI) runs, with TR = 720 ms, TE = 33.1 ms, 2.0 mm isotropic voxels, FoV = 208 × 180 mm², flip angle = 52°, matrix = 104 × 90, and 72 slices. Two 15 min rsfMRI runs were acquired at each of the two sessions. In the end, the data consisted of 1200 time points for each run, for a total of 4800 time points for each subject over the four scans. Each run of each subject's rsfMRI was preprocessed by the HCP consortium [Smith et al., 2013]. The data were minimally preprocessed [Glasser et al., 2013] and had artefacts removed using ICA + FIX [Griffanti et al., 2014; Salimi-Khorshidi et al., 2014]. Gray matter was parcellated into 86 cortical, subcortical and cerebellar regions using FreeSurfer [Fischl et al., 1999], and regional time series were calculated as the mean of the time series over the voxels of a given region. Each region was further assigned to a Yeo functional network delineating visual, somatomotor, dorsal attention, ventral attention, limbic, default mode, and fronto-parietal networks [Yeo et al., 2011]; we added a subcortical and a cerebellar network for whole-brain coverage as in previous work [Dhamala et al., 2020].
We estimate the spectral precision matrix at a single frequency using CGLASSO for the individual with subject ID 100206. The smoothing span is selected as before, and we select the penalty parameter that produces the lowest value of BIC.
Results: In Figure 6 we show an example of the estimated functional connectivity network for one individual in our sample. Our proposed CGLASSO algorithm appears successful in capturing the underlying sparsity pattern. The estimated inverse coherence matrix retains known physiological connections, e.g. strong connections between bilateral homologues (functional connections between the same regions in the right and left hemispheres), which are known to have strong functional connections [Zuo et al., 2010], while suppressing many of the weaker functional connections that are likely due to noise. The bilateral connections between motor and visual areas are particularly strong, consistent with their known strong connectivity. Similar findings are reported in Sun et al. [2018], where the authors used an adaptive lasso thresholding method.
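For completeness, the step from an estimated spectral precision matrix to the displayed network can be sketched as follows (Python/NumPy; the normalization is the standard partial coherence scaling, and the function name is ours).

```python
import numpy as np

def partial_coherence(theta_hat):
    """Partial coherence matrix from an estimated spectral precision matrix:
    |Theta_jk| / sqrt(Theta_jj * Theta_kk), with the diagonal zeroed out."""
    d = np.sqrt(np.real(np.diag(theta_hat)))
    pc = np.abs(theta_hat) / np.outer(d, d)
    np.fill_diagonal(pc, 0.0)
    return pc

# edges of the estimated network are the non-zero off-diagonal entries
# adjacency = partial_coherence(theta_hat) > 0
```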
7 Conclusion
In this paper we proposed and analyzed a frequency-domain variant of graphical Lasso, an -penalized local Whittle estimator for estimating spectral precision matrices of multivariate stationary time series. We developed fast pathwise coordinate descent algorithms for Lasso and graphical Lasso involving complex variables. We conducted a non-asymptotic analysis of our proposed method and showed that consistent estimation is possible in both low- and high-dimensional regimes as long as the true spectral precision matrix is sufficiently sparse.
We did not delve into a number of natural questions related to our method. First, extending the algorithm from a single frequency to all the frequencies is required for recovering the true graphical model for stationary time series. Such an extension will need additional work on both algorithmic and analytical fronts. A faster pathwise algorithm for jointly optimizing over different frequencies and different tuning parameters will be helpful. Extending the theoretical guarantees from a single frequency to all frequencies will also be required.
Another important question is the selection of the bandwidth/smoothing parameter used when calculating the spectral density matrix. For theoretical simplicity, we follow the custom in the literature on large-scale spectral density matrix estimation [Böhm and von Sachs, 2009]. However, a data-driven choice, as proposed in Ombao et al. [2001], would be more appropriate in practice. One can also consider allowing different bandwidth choices for different off-diagonal entries. We leave these directions for future research.
Finally, a number of recent works have investigated the inferential properties of frequency-domain estimators [Chang et al., 2022; Krampe and Paparoditis, 2022] and delved into nonstationary time series in high dimensions [Zhang and Wu, 2021; Basu and Subba Rao, 2023]. Extending CGLASSO and CLASSO to these broader frameworks is also left as an avenue for future research.
Acknowledgements
ND, AK and SB acknowledge partial support from NIH award R21NS120227. In addition, SB acknowledges partial support from NSF awards DMS-1812128, DMS-2210675, DMS-2239102 and NIH award R01GM135926.
References
- Baek et al. [2021a] C. Baek, M.-C. Düker, and V. Pipiras. Thresholding and graphical local whittle estimation. arXiv preprint arXiv:2105.13342, 2021a.
- Baek et al. [2021b] C. Baek, M. Gampe, B. Leinwand, K. Lindquist, J. Hopfinger, K. M. Gates, and V. Pipiras. Detecting functional connectivity changes in fmri data. Brain Connectivity, 2021b.
- Basu and Subba Rao [2023] S. Basu and S. Subba Rao. Graphical models for nonstationary time series. The Annals of Statistics, 51(4):1453–1483, 2023.
- Bickel and Levina [2008] P. J. Bickel and E. Levina. Covariance regularization by thresholding. The Annals of Statistics, 36(6):2577 – 2604, 2008. doi: 10.1214/08-AOS600. URL https://doi.org/10.1214/08-AOS600.
- Böhm and von Sachs [2009] H. Böhm and R. von Sachs. Shrinkage estimation in the frequency domain of multivariate time series. Journal of Multivariate Analysis, 100(5):913–935, 2009.
- Bowyer [2016] S. M. Bowyer. Coherence a measure of the brain networks: past and present. Neuropsychiatric Electrophysiology, 2(1):1–12, 2016.
- Boyd and Vandenberghe [2004] S. P. Boyd and L. Vandenberghe. Convex optimization. Cambridge university press, 2004.
- Brockwell and Davis [1991] P. J. Brockwell and R. A. Davis. Time series: theory and methods. Springer science & business media, 1991.
- Cai et al. [2011] T. Cai, W. Liu, and X. Luo. A constrained minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association, 106(494):594–607, 2011.
- Carter et al. [2023] J. S. Carter, D. Rossell, and J. Q. Smith. Partial correlation graphical lasso. Scandinavian Journal of Statistics, 2023.
- Chang et al. [2022] J. Chang, Q. Jiang, T. S. McElroy, and X. Shao. Statistical inference for high-dimensional spectral density matrix. arXiv preprint arXiv:2212.13686, 2022.
- Chen and Chen [2008] J. Chen and Z. Chen. Extended bayesian information criteria for model selection with large model spaces. Biometrika, 95(3):759–771, 2008.
- Dahlhaus [2000] R. Dahlhaus. Graphical interaction models for multivariate time series. Metrika, 51(2):157–172, 2000.
- Dallakyan et al. [2022] A. Dallakyan, R. Kim, and M. Pourahmadi. Time series graphical lasso and sparse var estimation. Computational Statistics & Data Analysis, 176:107557, 2022.
- Davis et al. [2016] R. A. Davis, P. Zang, and T. Zheng. Sparse vector autoregressive modeling. Journal of Computational and Graphical Statistics, 25(4):1077–1096, 2016.
- Dhamala et al. [2020] E. Dhamala, K. W. Jamison, M. R. Sabuncu, and A. Kuceyeski. Sex classification using long-range temporal dependence of resting-state functional mri time series. Human brain mapping, 41(13):3567–3579, 2020.
- Dudley et al. [2011] R. M. Dudley and R. Norvaiša. Concrete functional calculus. Springer, 2011.
- Dummit and Foote [2004] D. S. Dummit and R. M. Foote. Abstract algebra, volume 3. Wiley Hoboken, 2004.
- Fiecas and von Sachs [2014] M. Fiecas and R. von Sachs. Data-driven shrinkage of the spectral density matrix of a high-dimensional time series. Electronic Journal of Statistics, 8(2):2975–3003, 2014.
- Fiecas et al. [2010] M. Fiecas, H. Ombao, C. Linkletter, W. Thompson, and J. Sanes. Functional connectivity: Shrinkage estimation and randomization test. NeuroImage, 49(4):3005–3014, 2010.
- Fiecas et al. [2019] M. Fiecas, C. Leng, W. Liu, and Y. Yu. Spectral analysis of high-dimensional time series. Electronic Journal of Statistics, 13(2):4079–4101, 2019.
- Fischl et al. [1999] B. Fischl, M. I. Sereno, and A. M. Dale. Cortical surface-based analysis: II. Inflation, flattening, and a surface-based coordinate system. NeuroImage, 9(2):195–207, 1999. ISSN 10538119. doi: 10.1006/nimg.1998.0396.
- Foygel and Drton [2010] R. Foygel and M. Drton. Extended bayesian information criteria for gaussian graphical models. Advances in neural information processing systems, 23, 2010.
- Friedman et al. [2007] J. Friedman, T. Hastie, H. Höfling, and R. Tibshirani. Pathwise coordinate optimization. The Annals of Applied Statistics, 1(2):302 – 332, 2007. doi: 10.1214/07-AOAS131. URL https://doi.org/10.1214/07-AOAS131.
- Friedman et al. [2008] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432–441, 2008.
- Glasser et al. [2013] M. F. Glasser, S. N. Sotiropoulos, J. A. Wilson, T. S. Coalson, B. Fischl, J. L. Andersson, J. Xu, S. Jbabdi, M. Webster, J. R. Polimeni, et al. The minimal preprocessing pipelines for the human connectome project. Neuroimage, 80:105–124, 2013.
- Granger [1969] C. W. Granger. Investigating causal relations by econometric models and cross-spectral methods. Econometrica: journal of the Econometric Society, pages 424–438, 1969.
- Griffanti et al. [2014] L. Griffanti, G. Salimi-Khorshidi, C. F. Beckmann, E. J. Auerbach, G. Douaud, C. E. Sexton, E. Zsoldos, K. P. Ebmeier, N. Filippini, C. E. Mackay, et al. Ica-based artefact removal and accelerated fmri acquisition for improved resting state network imaging. Neuroimage, 95:232–247, 2014.
- Haase [2018] M. Haase. Lectures on functional calculus. In 21st International Internet Seminar, Kiel Univ, 2018.
- Hastie et al. [2015] T. Hastie, R. Tibshirani, and M. Wainwright. Statistical learning with sparsity. Monographs on statistics and applied probability, 143:143, 2015.
- Herstein [1991] I. N. Herstein. Topics in algebra. John Wiley & Sons, 1991.
- Horn and Johnson [2012] R. A. Horn and C. R. Johnson. Matrix analysis. Cambridge university press, 2012.
- Janková and van de Geer [2018] J. Janková and S. van de Geer. Inference in high-dimensional graphical models. In Handbook of graphical models, pages 325–350. CRC Press, 2018.
- Jung et al. [2015] A. Jung, G. Hannak, and N. Goertz. Graphical lasso based model selection for time series. IEEE Signal Processing Letters, 22(10):1781–1785, 2015.
- Kellogg et al. [1976] R. B. Kellogg, T.-Y. Li, and J. Yorke. A constructive proof of the brouwer fixed-point theorem and computational results. SIAM Journal on Numerical Analysis, 13(4):473–483, 1976.
- Krampe and Paparoditis [2022] J. Krampe and E. Paparoditis. Frequency domain statistical inference for high-dimensional time series. arXiv preprint arXiv:2206.02250, 2022.
- Loh and Wainwright [2017] P.-L. Loh and M. J. Wainwright. Support recovery without incoherence: A case for nonconvex regularization. The Annals of Statistics, 45(6):2455 – 2482, 2017. doi: 10.1214/16-AOS1530. URL https://doi.org/10.1214/16-AOS1530.
- Maleki et al. [2013] A. Maleki, L. Anitori, Z. Yang, and R. G. Baraniuk. Asymptotic analysis of complex lasso via complex approximate message passing (camp). IEEE Transactions on Information Theory, 59(7):4290–4308, 2013.
- Mazumder and Hastie [2012] R. Mazumder and T. Hastie. The graphical lasso: New insights and alternatives. Electronic journal of statistics, 6:2125, 2012.
- Meier [2020] L. Meier. grplasso: Fitting User-Specified Models with Group Lasso Penalty, 2020. URL https://CRAN.R-project.org/package=grplasso. R package version 0.4-7.
- Meinshausen and Bühlmann [2006] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the lasso. The annals of statistics, 34(3):1436–1462, 2006.
- Ombao and Pinto [2022] H. Ombao and M. Pinto. Spectral dependence. Econometrics and Statistics, 2022.
- Ombao et al. [2001] H. C. Ombao, J. A. Raz, R. L. Strawderman, and R. Von Sachs. A simple generalised crossvalidation method of span selection for periodogram smoothing. Biometrika, 88(4):1186–1192, 2001.
- Ravikumar et al. [2011] P. Ravikumar, M. J. Wainwright, G. Raskutti, and B. Yu. High-dimensional covariance estimation by minimizing $\ell_1$-penalized log-determinant divergence. Electronic Journal of Statistics, 5:935–980, 2011.
- Rosenblatt [2012] M. Rosenblatt. Stationary sequences and random fields. Springer Science & Business Media, 2012.
- Salimi-Khorshidi et al. [2014] G. Salimi-Khorshidi, G. Douaud, C. F. Beckmann, M. F. Glasser, L. Griffanti, and S. M. Smith. Automatic denoising of functional mri data: combining independent component analysis and hierarchical fusion of classifiers. Neuroimage, 90:449–468, 2014.
- Schneider-Luftman and Walden [2016] D. Schneider-Luftman and A. T. Walden. Partial coherence estimation via spectral matrix shrinkage under quadratic loss. IEEE Transactions on Signal Processing, 64(22):5767–5777, 2016.
- Shumway and Stoffer [2000] R. H. Shumway and D. S. Stoffer. Time series analysis and its applications, volume 3. Springer, 2000.
- Smith et al. [2013] S. M. Smith, C. F. Beckmann, J. Andersson, E. J. Auerbach, J. Bijsterbosch, G. Douaud, E. Duff, D. A. Feinberg, L. Griffanti, M. P. Harms, et al. Resting-state fmri in the human connectome project. Neuroimage, 80:144–168, 2013.
- Sun et al. [2018] Y. Sun, Y. Li, A. Kuceyeski, and S. Basu. Large spectral density matrix estimation by thresholding. arXiv preprint arXiv:1812.00532, 2018.
- Tseng [2001] P. Tseng. Convergence of a block coordinate descent method for nondifferentiable minimization. Journal of optimization theory and applications, 109(3):475, 2001.
- Tugnait [2022] J. K. Tugnait. On sparse high-dimensional graphical model learning for dependent time series. Signal Processing, 197:108539, 2022.
- Van Essen et al. [2013] D. C. Van Essen, S. M. Smith, D. M. Barch, T. E. Behrens, E. Yacoub, K. Ugurbil, W.-M. H. Consortium, et al. The wu-minn human connectome project: an overview. Neuroimage, 80:62–79, 2013.
- Wainwright [2009] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using $\ell_1$-constrained quadratic programming (Lasso). IEEE Transactions on Information Theory, 55(5):2183–2202, 2009.
- Weylandt Jr [2020] R. M. Weylandt Jr. Computational and Statistical Methodology for Highly Structured Data. PhD thesis, Rice University, 2020.
- Whittle [1951] P. Whittle. Hypothesis testing in time series analysis. PhD thesis, Uppsala University, 1951.
- Wilms et al. [2021] I. Wilms, S. Basu, J. Bien, and D. S. Matteson. Sparse identification and estimation of large-scale vector autoregressive moving averages. Journal of the American Statistical Association, pages 1–12, 2021.
- Wodeyar and Srinivasan [2022] A. Wodeyar and R. Srinivasan. Structural connectome constrained graphical lasso for meg partial coherence. Network Neuroscience, 6(4):1219–1242, 2022.
- Yeo et al. [2011] B. T. T. Yeo, F. M. Krienen, J. Sepulcre, M. R. Sabuncu, D. Lashkari, M. Hollinshead, J. L. Roffman, J. W. Smoller, L. Zöllei, J. R. Polimeni, B. Fischl, H. Liu, and R. L. Buckner. The organization of the human cerebral cortex estimated by intrinsic functional connectivity. Journal of neurophysiology, 106(3):1125–65, 9 2011. ISSN 1522-1598. doi: 10.1152/jn.00338.2011.
- Yuan and Lin [2006] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1):49–67, 2006.
- Zhang and Wu [2021] D. Zhang and W. B. Wu. Convergence of covariance and spectral density estimates for high-dimensional locally stationary processes. The Annals of Statistics, 49(1):233 – 254, 2021. doi: 10.1214/20-AOS1954. URL https://doi.org/10.1214/20-AOS1954.
- Zhao and Yu [2006] P. Zhao and B. Yu. On model selection consistency of lasso. The Journal of Machine Learning Research, 7:2541–2563, 2006.
- Zou [2006] H. Zou. The adaptive lasso and its oracle properties. Journal of the American statistical association, 101(476):1418–1429, 2006.
- Zuo et al. [2010] X.-N. Zuo, C. Kelly, A. Di Martino, M. Mennes, D. S. Margulies, S. Bangaru, R. Grzadzinski, A. C. Evans, Y.-F. Zang, F. X. Castellanos, et al. Growing together and growing apart: regional and sex differences in the lifespan developmental trajectories of functional homotopy. Journal of Neuroscience, 30(45):15034–15043, 2010.
Appendix A A short introduction to groups, rings and fields
This supplement is intended for readers new to the abstract algebraic objects used in this paper. In this section, we provide basic axiomatic definitions and relevant examples of the algebraic structures required to understand the methodological tools of this paper. Readers will find a deeper treatment of this topic in Herstein [1991] and Dummit and Foote [2004].
Definition A.1 (Group).
A nonempty set of elements $G$ is said to form a group if on $G$ there is defined a binary operation, called the product and denoted by $\cdot$, such that
1. $a, b \in G$ implies $a \cdot b \in G$ (closed);
2. $a, b, c \in G$ implies $(a \cdot b) \cdot c = a \cdot (b \cdot c)$ (associative law);
3. there exists an element $e \in G$ such that $a \cdot e = e \cdot a = a$ for all $a \in G$ (existence of identity);
4. for every $a \in G$ there exists an element $a^{-1} \in G$ such that $a \cdot a^{-1} = a^{-1} \cdot a = e$ (existence of inverse).
Example. 1. The set of all integers under the addition operation $+$; 2. the set of all complex numbers under complex addition $+$; 3. the set of matrices defined in (3.1) under matrix addition $+$; etc.
In this paper we deal with Abelian groups of infinite order, that is, groups with infinitely many elements in which the group operation is commutative, i.e. $a \cdot b = b \cdot a$ for all $a, b \in G$. Details can be found in Herstein [1991].
Definition A.2 (Ring).
A nonempty set $R$ is said to be an associative ring if on $R$ there are defined two operations $+$ and $\cdot$ such that, for all $a, b, c \in R$,
1. $a + b \in R$;
2. $a + b = b + a$;
3. $(a + b) + c = a + (b + c)$;
4. there is an element $0 \in R$ such that $a + 0 = a$ for all $a \in R$ (additive identity);
5. there is an element $-a \in R$ such that $a + (-a) = 0$ (additive inverse);
6. $a \cdot b \in R$;
7. $(a \cdot b) \cdot c = a \cdot (b \cdot c)$;
8. $a \cdot (b + c) = a \cdot b + a \cdot c$ and $(b + c) \cdot a = b \cdot a + c \cdot a$ (distributive laws).
It follows from axioms 1-5 that $R$ is an Abelian group under the operation $+$ (addition). Similarly, it follows from axioms 6-7 that a ring is closed under the associative operation $\cdot$ (multiplication). In the usual notation, $a \cdot b$ is simply written as $ab$. The two operations are bridged by axiom 8, the distributive laws.
Example.
1. The set of real numbers $\mathbb{R}$ is a commutative ring under the operations $+$ (addition) and $\cdot$ (multiplication). The additive unit is $0$, and $\mathbb{R}$ also has a multiplicative unit, denoted by $1$.
2. Similarly, $\mathbb{C}$ is a ring under $+$ and $\cdot$.
3. The set of all matrices characterized by (3.1) is a ring under matrix addition and multiplication.
Definition A.3 (Ring homomorphism).
A map $\phi$ from the ring $R$ into the ring $R'$ is said to be a homomorphism if, for all $a, b \in R$,
1. $\phi(a + b) = \phi(a) + \phi(b)$,
2. $\phi(a \cdot b) = \phi(a) \cdot \phi(b)$.
Note. The operations $+$ and $\cdot$ on the left-hand sides are those of the ring $R$, and the ring operations $+$ and $\cdot$ appearing on the right-hand sides are those of $R'$.
Example.
1. Let $R = R'$ be the set of all real numbers $\mathbb{R}$, both equipped with the ring operations of addition $+$ and multiplication $\cdot$. Then the identity map $\phi: R \to R'$, defined by $\phi(x) = x$ for every $x \in \mathbb{R}$, is a ring isomorphism.
2. The complex conjugation map $\phi: \mathbb{C} \to \mathbb{C}$, defined as $\phi(z) = \bar{z}$ for all $z \in \mathbb{C}$, is a ring homomorphism.
Definition A.4 (Ring isomorphism).
A homomorphism of $R$ into $R'$ is said to be an isomorphism if it is a one-to-one mapping. Two rings are said to be isomorphic if there exists an isomorphism of one onto the other.
Example. Both examples of ring homomorphisms above are one-to-one maps, and hence they also serve as examples of ring isomorphisms.
Definition A.5 (Field).
A ring is said to be a division ring if its nonzero elements form a group under multiplication. A field is a commutative division ring.
Example. Some standard, widely used fields in mathematics are the rational numbers $\mathbb{Q}$, the real numbers $\mathbb{R}$, the complex numbers $\mathbb{C}$, and the finite fields $\mathbb{Z}_p$ for a prime integer $p$.
Note. A field is always a ring. Two fields are said to be isomorphic if they are isomorphic as rings. We use the terms ring isomorphism and field isomorphism separately in order to keep the underlying isomorphic objects specified as rings or fields, respectively. A field isomorphism is always a ring isomorphism; however, the converse need not be true.
Appendix B Proofs of technical results
Proof of Proposition 3.1
We first show that the map is a field isomorphism. Clearly is a field. Also, is a field with the following additive identity (zero) and multiplicative identity (unit)
and for any , where both and are not zero together, the inverse element of the field is
Next we consider the map . For any , , . Hence,
Let and . Therefore,
Hence, the map is an isomorphism. Also note that and . In addition for any , . Hence, is a field isomorphism between and .
For any , and . Hence,
Also it is clear that
Hence properties (i)-(iii) are shown.
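The field isomorphism verified above can also be checked numerically. The sketch below assumes the map in (3.1) sends $a + bi$ to the $2 \times 2$ real matrix with first row $(a, -b)$ and second row $(b, a)$, which is the standard realification of a complex scalar; the function name is ours.

```python
import numpy as np

def phi(z):
    """Map the complex scalar a + bi to the 2x2 real matrix [[a, -b], [b, a]]."""
    a, b = z.real, z.imag
    return np.array([[a, -b], [b, a]])

z, w = 1.0 + 2.0j, -0.5 + 3.0j
assert np.allclose(phi(z + w), phi(z) + phi(w))          # addition is preserved
assert np.allclose(phi(z * w), phi(z) @ phi(w))          # multiplication is preserved
assert np.allclose(phi(1.0 + 0.0j), np.eye(2))           # the unit maps to the unit
assert np.allclose(phi(1.0 / z), np.linalg.inv(phi(z)))  # inverses correspond
```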
Proof of Proposition 3.3
-
(a)
It can be shown that both and are rings under addition and entry-wise multiplication of -dimensional complex-valued vectors and -dimensional real matrices, respectively, for any positive integer [Herstein, 1991]. We consider the extended map as described in (3.2). Then, for any such that ,
and
Therefore, the extended map of (3.2) is a ring homomorphism. By Proposition 5 of Section 7.3 of Dummit and Foote [2004], is a subring of , and the kernel of , denoted by , is a subring of . In this case, consists of all those complex numbers such that , i.e. . Therefore, has a trivial kernel and hence is a one-to-one map. As a result, the map described in (3.2) is a ring isomorphism.
-
(b)
The proof is similar to that of part (a). The only differences in this case are that (i) and are rings under addition and entry-wise multiplication of -dimensional complex matrices and -dimensional real matrices, respectively, and (ii) is described in .
-
(c)
Similar to the proof of part (a), is a field, and hence a ring; is a subring of and the kernel is trivial, which implies that the map described in (3.3) is a ring isomorphism in this case as well. In addition, using the homomorphism properties of and the field properties of , it can be shown that is also a field. Therefore, is a field in this case, and hence is a field isomorphism.
For any , . Therefore, .
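The block form of the extended map can likewise be verified numerically. The sketch below assumes the extension sends a complex matrix $Z = A + iB$ to the real block matrix with blocks $A, -B$ on top and $B, A$ below, the standard realification consistent with the scalar map above; the exact arrangement and operations used in (3.2)-(3.3) may differ in details, but the homomorphism checks are analogous.

```python
import numpy as np

def Phi(Z):
    """Realify a complex matrix: Z = A + iB  ->  [[A, -B], [B, A]]."""
    A, B = Z.real, Z.imag
    return np.block([[A, -B], [B, A]])

rng = np.random.default_rng(1)
p = 4
Z = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))
W = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))
assert np.allclose(Phi(Z + W), Phi(Z) + Phi(W))   # addition is preserved
assert np.allclose(Phi(Z @ W), Phi(Z) @ Phi(W))   # matrix multiplication is preserved

# a Hermitian positive definite matrix maps to a symmetric positive definite one,
# so complex optimization steps can be carried out with real linear algebra
H = Z @ Z.conj().T + p * np.eye(p)
assert np.allclose(Phi(H), Phi(H).T)
assert np.all(np.linalg.eigvalsh(Phi(H)) > 0)
```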
Proof of Proposition 4.1
By Proposition 3.5 of Sun et al. [2018], for some constants and any , we have
We choose . In the asymptotic regime , we have . Therefore, by using a union bound over all , we obtain
(B.1) |
Proof of Theorem 4.1
Our objective is to show that the primal-dual witness matrix in (B.4) is equal to the solution of the optimization problem (2.6) with high probability. For a fixed and Fourier frequency , we define the event . Since we are in the regime , and , we have , . Also, in the definition of in (4.2), we take , i.e. . Hence, by Proposition 4.1, for constants and . In the following analysis we condition on the event .
Our proof proceeds along the following route: we first describe the primal-dual witness construction required for the proof, and then state a series of claims which are restatements of lemmas in Ravikumar et al. [2011], highlighting the changes needed to accommodate the complex-valued setting.
Claim B.1.
The penalized log-determinant program (2.6) has a unique solution, characterized by the stationarity equation (B.3),
where is an element in the set of subdifferentials of at as characterized in (3.15).
Proof sketch. The proof of Claim B.1 follows from Lemma 3 of Ravikumar et al. [2011]. Claim B.1 is similar to their result, with the real covariance matrix replaced by the averaged periodogram , and replaced by the matrix of entry-wise signs of (details are available in equation (41), Section 4.1, of Ravikumar et al. [2011]). The basic idea of the proof is the following: the penalized log-determinant program (2.6) is a strictly convex program, i.e. the objective function is a strictly convex function of , since the penalty on the off-diagonals is a convex function of and the log-determinant barrier is also convex (Section 3.1.5 of Boyd and Vandenberghe [2004]). Also, for complex positive definite Hermitian matrices, i.e. , the eigenvalues and diagonal elements of are strictly positive. Following the proof of Lemma 3 in Ravikumar et al. [2011] and Hadamard's inequality for Hermitian positive definite matrices (Theorem 7.8.1 of Horn and Johnson [2012]), we have . Hence,
The right-hand side of the above inequality is coercive, i.e. it diverges to infinity as . Therefore, the objective function is a continuous and coercive function of . Hence the level sets are compact, and continuity implies that a minimum is attained. In addition, strict convexity guarantees the uniqueness of the minimum.
A sufficient condition for to be a solution of (2.6) is that zero belongs to the subdifferential of the objective function of (2.6) evaluated at , i.e. . Using Theorems 2.9 and 2.10 of Weylandt Jr [2020], the Karush-Kuhn-Tucker (KKT) conditions, i.e. the subdifferential equation corresponding to the optimization problem (2.6), coincide with (B.3).
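For concreteness, in analogy with equation (41) of Ravikumar et al. [2011] and with the sample covariance replaced by the averaged periodogram $\widehat{f}$, this stationarity condition can be transcribed as (a sketch of (B.3) under these substitutions, not a verbatim copy):
$$
\widehat{f} \;-\; \widehat{\Theta}^{-1} \;+\; \lambda\, \widehat{Z} \;=\; 0,
\qquad
\widehat{Z}_{k\ell} \in \partial \bigl|\widehat{\Theta}_{k\ell}\bigr| \ \text{ for } k \neq \ell, \quad \widehat{Z}_{kk} = 0,
$$
where, for a non-zero complex entry, the subgradient of the modulus is the unit-modulus phase $\widehat{\Theta}_{k\ell}/|\widehat{\Theta}_{k\ell}|$, and otherwise it is any complex number of modulus at most one.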
Now that existence, uniqueness, and a characterization of the optimal solution of (2.6) are guaranteed, we can take a primal-dual witness approach to construct the solution and to recover the correct support set. The primal-dual witness construction is motivated by previous work of Zhao and Yu [2006], Wainwright [2009], Ravikumar et al. [2011], and Loh and Wainwright [2017]. The key idea is to exhibit a pair of primal and dual feasible solutions satisfying a strict dual feasibility condition, in a setting where the objective function may not be strictly convex. Next we restate the primal-dual witness construction of Ravikumar et al. [2011] for a restricted CGLASSO problem.
Primal-dual witness construction
We recall from section 4 that is the augmented edge set, and consider the restricted log-determinant problem
(B.4) |
By this construction, is positive-definite Hermitian as well as . Let . For , we replace with
The pair satisfies the equation (B.3) replacing . Finally we check the strict dual feasibility condition, i.e.
This step ensures that satisfies the necessary conditions to belong to the set of subdifferentials.
We define . Also let the remainder term of the first order Taylor expansion of around be . We state the following result:
Claim B.2.
Suppose that
(B.5) |
Then the vector constructed in the primal dual witness construction satisfies . Hence strict duality holds and .
Proof sketch. The proof of Claim B.2 follows the proof of Lemma 4 in Ravikumar et al. [2011] and involves basic linear algebra computations with complex-valued matrices. The only changes in notation are that is replaced by in our case, and that denotes the difference between the primal-dual witness and the true spectral density, instead of the difference between the true inverse covariance and its primal-dual witness (the symbol used for this difference in Ravikumar et al. [2011] denotes the threshold parameter in our paper). Lemma 4 in Ravikumar et al. [2011] shows that under the assumption , the matrix of entry-wise signs for the primal-dual witness problem, i.e. , satisfies the strict dual feasibility condition . The same strict dual feasibility condition can be obtained under condition (B.5) by following steps (51)-(55) in the proof of Lemma 4 (Section 4.2.1) of Ravikumar et al. [2011]. Under strict dual feasibility, is ensured by construction.
Next, we verify that assumption (B.5) holds. Since our choice of the regularization parameter is , under the event we have . In order to show (B.5) it remains to show . For that we will need the following two claims.
Claim B.3.
Suppose that
Then .
Proof sketch. The proof of Claim B.3 is similar to that of Lemma 6 in Ravikumar et al. [2011] and involves linear algebra of complex matrices. By Claim B.1, the restricted problem (B.4) has a unique solution. Thus, the partial derivative with respect to the restricted elements must vanish at the unique optimum, that is, satisfies
This restricted zero gradient condition is necessary and sufficient for the optimal solution, hence the solution is unique. Now we define the map as follows
From the construction, , i.e. . Let be the ball in . Now, following the matrix-algebraic calculations in equations (72) to (74) of Ravikumar et al. [2011], it can be checked that . Since is a continuous function on and is convex and compact in , Brouwer's fixed-point theorem [Kellogg et al., 1976] guarantees the existence of a fixed point of the function . Moreover, uniqueness of the zero-gradient condition (B) implies uniqueness of the fixed point of . Therefore, we can conclude that , and hence the claim is proved.
By our choice of the regularization constant
From the conditions of Theorem 4.1, , and . Therefore, the following two inequalities are satisfied
(B.6) |
and
(B.7) |
Now we turn to a result in a similar spirit to Lemma 5 in Ravikumar et al. [2011]. The result states the following:
Claim B.4.
implies .
The proof follows from Lemma 5 in Ravikumar et al. [2011]. The proof involves a step where we need to expand the power series of the complex matrix as
The power series expansion is valid because implies (ensuring the radius-of-convergence condition; see equation (68) in Appendix B of Ravikumar et al. [2011]), and it also follows from results in holomorphic functional calculus on the power series expansion of an operator on a Banach space [Dudley et al., 2011; Haase, 2018]. The rest of the proof follows the proof of Lemma 5 in Appendix B of Ravikumar et al. [2011].
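For completeness, the identity underlying this expansion is the matrix Neumann series: for any complex square matrix $M$ and any submultiplicative matrix norm $\|\cdot\|$ with $\|M\| < 1$,
$$
(I + M)^{-1} \;=\; \sum_{k=0}^{\infty} (-1)^k M^k,
$$
and the series converges absolutely in that norm.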
Combining this result with (B.6) we get
where the last inequality follows from (B.7). Therefore, we have shown that condition (B.5) holds under and strict duality allows us to conclude that . Hence, satisfies the bound (B.6) and . Hence . Also, for such that
Therefore, contains all edges with and hence (b) is proved.
Now we use the idea of Corollary 3 of Ravikumar et al. [2011] and write
And
where we use the definition of the norm and the fact that and both have at most non-zero entries in each row. Finally, since the Frobenius norm dominates the norm, (c) follows.
Appendix C Figures and tables


RMSE of the inverse periodogram, CGLASSO (oracle) and CGLASSO (BIC) estimators, reported as mean (standard deviation) over 20 replicates; rows within each family correspond to the different dimension and sample size combinations, and "-" marks settings where the averaged periodogram is not invertible.

| Family | Inverse periodogram | CGLASSO (oracle) | CGLASSO (BIC) |
|---|---|---|---|
| White Noise | | | |
| | 123.69 (66.17) | 11.59 (2.08) | 17.34 (2.72) |
| | 50.68 (17.23) | 8.11 (1.09) | 15.01 (1.49) |
| | 200.23 (783.49) | 14.06 (1.52) | 21.76 (1.71) |
| | 394.05 (123.59) | 10.88 (1.43) | 18.69 (1.53) |
| | - | 18.16 (1.34) | 31.07 (0.77) |
| | - | 14.21 (1.07) | 25.44 (0.88) |
| VAR(1) | | | |
| | 205.82 (137.47) | 17.50 (-) | 17.50 (-) |
| | 121.15 (34.07) | 17.99 (1.45) | 18.06 (1.25) |
| | 1831.27 (666.7) | 20.67 (-) | 20.67 (-) |
| | 489.32 (140.18) | 19.47 (0.81) | 19.47 (0.81) |
| | - | 20.59 (-) | 20.59 (-) |
| | - | 19.52 (-) | 19.52 (-) |
| VARMA(2, 2) | | | |
| | 117.13 (42.06) | 13.72 (2.34) | 15.00 (2.32) |
| | 62.96 (14.67) | 11.20 (2.03) | 11.48 (2.03) |
| | 2359.21 (1101.53) | 15.32 (1.73) | 20.35 (1.69) |
| | 446.45 (127.16) | 12.83 (2.01) | 14.80 (1.67) |
| | - | 16.77 (0.90) | 28.98 (1.28) |
| | - | 13.60 (0.86) | 18.67 (0.73) |
RMSE of the inverse periodogram, CGLASSO (oracle) and CGLASSO (BIC) estimators, reported as mean (standard deviation) over 20 replicates; rows within each family correspond to the different dimension and sample size combinations, and "-" marks settings where the averaged periodogram is not invertible.

| Family | Inverse periodogram | CGLASSO (oracle) | CGLASSO (BIC) |
|---|---|---|---|
| White Noise | | | |
| | 148.83 (79.25) | 10.31 (2.15) | 12.83 (2.21) |
| | 80.58 (52.42) | 8.75 (2.38) | 11.21 (2.73) |
| | 3620.6 (4038.01) | 12.49 (2.07) | 18.69 (2.31) |
| | 441.79 (222.01) | 9.74 (1.55) | 14.96 (1.63) |
| | - | 14.96 (1.44) | 28.44 (1.16) |
| | - | 11.94 (0.93) | 23.47 (1.18) |
| VAR(1) | | | |
| | 158.18 (119.3) | 9.49 (-) | 15.68 (1.65) |
| | 73.09 (37.48) | 8.69 (-) | 14.17 (2.40) |
| | 4439.06 (3856.06) | 13.55 (-) | 16.61 (2.15) |
| | 607.81 (324.17) | 12.65 (1.14) | 14.77 (1.84) |
| | - | 16.44 (0.48) | 17.30 (1.47) |
| | - | 14.74 (-) | 15.15 (0.52) |
| VARMA(2, 2) | | | |
| | 162.61 (131.66) | 9.64 (0.84) | 15.11 (2.36) |
| | 67.55 (23.58) | 7.62 (0.87) | 9.21 (2.16) |
| | 3174.67 (2912.83) | 12.04 (1.39) | 21.01 (2.44) |
| | 470.83 (270.81) | 9.11 (-) | 17.98 (1.56) |
| | - | 14.10 (0.83) | 19.85 (1.28) |
| | - | 11.16 (0.95) | 17.52 (0.93) |
Model selection performance, reported as mean (standard deviation) over 20 replicates, in %: AUROC for NWR and CGLASSO, and precision, recall and accuracy for CGLASSO; rows within each family correspond to the different dimension and sample size combinations.

| Family | NWR AUROC | CGLASSO AUROC | CGLASSO precision | CGLASSO recall | CGLASSO accuracy |
|---|---|---|---|---|---|
| White Noise | | | | | |
| | 97.27 (2.79) | 99.61 (0.69) | 53.45 (7.93) | 100.00 (0.00) | 81.78 (5.79) |
| | 99.68 (0.57) | 99.91 (0.28) | 59.37 (9.45) | 100.00 (0.00) | 85.56 (5.07) |
| | 96.50 (2.11) | 98.82 (1.64) | 53.04 (5.89) | 98.42 (3.01) | 90.92 (2.08) |
| | 97.99 (2.18) | 99.78 (0.63) | 51.97 (5.39) | 99.74 (1.18) | 90.55 (2.02) |
| | 96.52 (1.55) | 94.81 (2.40) | 68.60 (6.89) | 90.00 (4.72) | 97.90 (0.54) |
| | 98.18 (1.09) | 99.48 (0.53) | 62.79 (4.86) | 99.08 (1.04) | 97.58 (0.49) |
| VAR(1) | | | | | |
| | 74.42 (5.93) | 50.00 (-) | - | 0.00 (0.00) | 62.22 (0.00) |
| | 80.81 (7.04) | 50.25 (0.85) | 100.00 (-) | 0.49 (1.70) | 62.41 (0.64) |
| | 72.76 (4.97) | 52.70 (-) | 100.00 (-) | 5.41 (-) | 81.58 (-) |
| | 79.42 (3.51) | 50.75 (0.71) | 100.00 (0.00) | 1.50 (1.42) | 80.82 (0.28) |
| | 68.48 (3.33) | 49.96 (-) | 0.00 (-) | 0.00 (-) | 92.00 (-) |
| | 75.54 (2.70) | 50.00 (-) | - | 0.00 (-) | 92.08 (-) |
| VARMA(2, 2) | | | | | |
| | 76.27 (7.25) | 90.20 (6.02) | 75.97 (10.69) | 88.25 (6.34) | 81.56 (8.65) |
| | 79.62 (4.27) | 93.83 (3.69) | 76.00 (6.82) | 92.25 (6.17) | 83.22 (5.17) |
| | 73.37 (4.33) | 85.27 (4.40) | 88.98 (5.73) | 71.62 (8.67) | 92.11 (2.07) |
| | 77.96 (2.28) | 94.36 (1.65) | 77.53 (6.15) | 90.62 (3.23) | 92.34 (1.87) |
| | 77.58 (1.75) | 50.00 (0.00) | - | 0.00 (0.00) | 91.84 (0.00) |
| | 79.88 (2.10) | 88.29 (2.72) | 88.74 (3.17) | 76.90 (5.42) | 97.31 (0.43) |
Model selection performance, reported as mean (standard deviation) over 20 replicates, in %: AUROC for NWR and CGLASSO, and precision, recall and accuracy for CGLASSO; rows within each family correspond to the different dimension and sample size combinations.

| Family | NWR AUROC | CGLASSO AUROC | CGLASSO precision | CGLASSO recall | CGLASSO accuracy |
|---|---|---|---|---|---|
| White Noise | | | | | |
| | 93.46 (4.26) | 96.71 (3.77) | 46.73 (6.85) | 97.22 (6.11) | 76.56 (5.61) |
| | 95.22 (3.86) | 97.93 (2.55) | 48.18 (6.34) | 98.33 (4.07) | 77.78 (5.90) |
| | 88.63 (5.41) | 96.01 (3.20) | 39.19 (5.21) | 95.00 (5.53) | 84.37 (3.26) |
| | 95.36 (3.13) | 98.48 (1.76) | 42.91 (5.01) | 98.42 (3.01) | 86.50 (2.66) |
| | 88.68 (3.63) | 91.48 (2.23) | 42.58 (5.63) | 84.69 (4.32) | 94.68 (1.13) |
| | 92.98 (2.95) | 96.43 (1.63) | 42.77 (4.51) | 93.78 (3.14) | 94.63 (1.02) |
| VAR(1) | | | | | |
| | 60.63 (8.53) | 53.00 (3.05) | 84.38 (22.90) | 6.95 (6.35) | 64.24 (2.32) |
| | 64.01 (7.95) | 50.93 (1.71) | 100.00 (0.00) | 1.86 (3.43) | 62.92 (1.29) |
| | 58.52 (5.95) | 50.32 (0.78) | 58.33 (49.16) | 0.75 (1.55) | 80.58 (0.36) |
| | 60.13 (4.20) | 50.92 (1.25) | 81.25 (37.20) | 1.93 (2.47) | 80.83 (0.53) |
| | 55.86 (2.28) | 50.00 (0.00) | - | 0.00 (0.00) | 92.08 (0.00) |
| | 58.41 (2.37) | 50.63 (0.61) | 42.47 (2.52) | 1.44 (1.21) | 92.03 (0.17) |
| VARMA(2, 2) | | | | | |
| | 65.19 (6.36) | 71.59 (8.46) | 90.88 (7.09) | 46.05 (16.63) | 74.04 (7.56) |
| | 68.87 (6.78) | 81.12 (6.83) | 74.44 (8.42) | 75.25 (11.75) | 77.22 (7.42) |
| | 63.09 (5.02) | 50.00 (0.00) | - | 0.00 (0.00) | 78.95 (0.00) |
| | 67.26 (4.28) | 50.00 (0.00) | - | 0.00 (0.00) | 78.95 (0.00) |
| | 65.10 (2.84) | 50.00 (0.00) | - | 0.00 (0.00) | 91.84 (0.00) |
| | 66.88 (2.56) | 50.00 (0.00) | - | 0.00 (0.00) | 91.84 (0.00) |