Ising Model Selection Using $\ell_1$-Regularized Linear Regression: A Statistical Mechanics Analysis
Abstract
We theoretically analyze the typical learning performance of $\ell_1$-regularized linear regression ($\ell_1$-LinR) for Ising model selection using the replica method from statistical mechanics. For typical random regular graphs in the paramagnetic phase, an accurate estimate of the typical sample complexity of $\ell_1$-LinR is obtained. Remarkably, despite the model misspecification, $\ell_1$-LinR is model selection consistent with the same order of sample complexity as $\ell_1$-regularized logistic regression ($\ell_1$-LogR), i.e., $M = O(\log N)$, where $N$ is the number of variables of the Ising model and $M$ is the number of samples. Moreover, we provide an efficient method to accurately predict the non-asymptotic behavior of $\ell_1$-LinR for moderate $M, N$, such as the precision and recall. Simulations show fairly good agreement between theoretical predictions and experimental results, even for graphs with many loops, which supports our findings. Although this paper mainly focuses on $\ell_1$-LinR, our method is readily applicable for precisely characterizing the typical learning performances of a wide class of $\ell_1$-regularized M-estimators including $\ell_1$-LogR and interaction screening.
1 Introduction
The advent of massive data across various scientific disciplines has led to the widespread use of undirected graphical models, also known as Markov random fields (MRFs), as a tool for discovering and visualizing dependencies among covariates in multivariate data [1]. The Ising model, originally proposed in statistical physics, is one special class of binary MRFs with pairwise potentials and has been widely used in different domains such as image analysis, social networking, and gene network analysis [2, 3, 4, 5, 6, 7]. Among various applications, one fundamental problem of interest is called Ising model selection, which refers to recovering the underlying graph structure of the original Ising model from independent, identically distributed (i.i.d.) samples. A variety of methods have been proposed [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18], demonstrating the possibility of successful Ising model selection even when the number of samples is smaller than the number of variables. Notably, it has been demonstrated that for the $\ell_1$-regularized logistic regression ($\ell_1$-LogR) [10, 16] and interaction screening (IS) [14, 15] estimators, $M = O(\log N)$ samples suffice for an Ising model with $N$ spins under certain assumptions, which is consistent with the previously established information-theoretic lower bound [11]. Both $\ell_1$-LogR and IS are $\ell_1$-regularized M-estimators [19] with logistic and IS objective (ISO) loss functions, respectively.
In this paper, we focus on a simpler linear estimator called $\ell_1$-regularized linear regression ($\ell_1$-LinR) and theoretically investigate its typical learning performance using the powerful replica method [20, 21, 22, 23] from statistical mechanics. The $\ell_1$-LinR estimator, widely known as the least absolute shrinkage and selection operator (LASSO) [24] in statistics and machine learning, is considered here mainly for two reasons. On the one hand, it is a representative example of model misspecification: the quadratic loss of $\ell_1$-LinR neither matches the true log conditional likelihood as $\ell_1$-LogR does, nor has the interaction screening property of IS. On the other hand, as one of the most popular linear estimators, $\ell_1$-LinR is more computationally efficient than $\ell_1$-LogR and IS, and thus it is of practical importance to investigate its learning performance for Ising model selection. Since it is difficult to obtain results for general graphs, as a first step we consider random regular (RR) graphs $\mathcal{G}_{N,d,K_0}$ in the paramagnetic phase [23], where $\mathcal{G}_{N,d,K_0}$ denotes the ensemble of RR graphs with constant node degree $d$ and uniform coupling strength $K_0$ on the edges.
1.1 Contributions
The main contributions are summarized as follows. First, we obtain an accurate estimate of the typical sample complexity of $\ell_1$-LinR for Ising model selection for typical RR graphs in the paramagnetic phase, which, remarkably, has the same order as that of $\ell_1$-LogR. Specifically, for a typical RR graph $\mathcal{G}_{N,d,K_0}$, using $\ell_1$-LinR with a regularization parameter $0 < \lambda < \tanh(K_0)$, one can consistently reconstruct the structure with $M > c(\lambda, K_0) \log N / \lambda^2$ samples, where $c(\lambda, K_0)$ is a constant depending on $\lambda$ and $K_0$. The accuracy of our typical sample complexity prediction is verified by its excellent agreement with experimental results. To the best of our knowledge, this is the first result that provides an accurate typical sample complexity for Ising model selection. Interestingly, as $\lambda \to \tanh(K_0)$, a lower bound on the typical sample complexity is obtained, which at high temperatures (i.e., small $K_0$) has the same scaling as the information-theoretic lower bound $c' \log N / (K_0 \tanh K_0)$ [11] for some constant $c'$, since $\tanh(K_0) \to K_0$ as $K_0 \to 0$.
Second, we provide a computationally efficient method to precisely predict the typical learning performance of $\ell_1$-LinR in the non-asymptotic case with moderate $M, N$, in terms of quantities such as the precision, recall, and residual sum of squares (RSS). Such precise non-asymptotic predictions for Ising model selection have been unavailable even for $\ell_1$-LogR [10, 16] and IS [14, 15], nor do they follow from previous asymptotic results for $\ell_1$-LinR that assume a fixed ratio $M/N$ [25, 26, 27, 28]. Moreover, although our theoretical analysis is based on a tree-like structure assumption, experimental results on two-dimensional (2D) grid graphs also show fairly good agreement, indicating that our theoretical result can be a good approximation even for graphs with many loops.
Third, while this paper mainly focuses on $\ell_1$-LinR, our method is readily applicable to a wide class of $\ell_1$-regularized M-estimators [19], including $\ell_1$-LogR [10] and IS [14, 15]. Thus, an additional technical contribution is a generic approach for precisely characterizing the typical learning performances of various $\ell_1$-regularized M-estimators for Ising model selection. Although the replica method from statistical mechanics is non-rigorous, our results are conjectured to be correct, which is supported by their excellent agreement with the experimental results. Additionally, several technical advances proposed in this paper, e.g., the computation of the entropy term by averaging over the Haar measure and the modification of the equations of state (EOS) to address the finite-size effect, might be of general interest to those who use the replica method as a tool for performance analysis.
1.2 Related works
There have been some earlier works on the analysis of Ising model selection (also known as the inverse Ising problem) using the replica method [4, 5, 6, 7, 29] from statistical mechanics. For example, in [6], the performance of the pseudo-likelihood (PL) method [30] is studied. However, instead of graph structure learning, [6] focuses on the problem of parameter learning. Then, [7] extends the analysis to the Ising model with sparse couplings using logistic regression without regularization. The recent work [29] analyzes the performance of $\ell_2$-regularized linear regression, but the techniques invented there are not applicable to $\ell_1$-LinR since the $\ell_1$-norm breaks the rotational invariance property.
Regarding the study of $\ell_1$-LinR (LASSO) under model misspecification, the past few years have seen a line of research in the field of signal processing with a specific focus on the single-index model [31, 32, 27, 33, 34]. These studies are closely related to ours, but there are several important differences. First, in our study, the covariates are generated from an Ising model rather than a Gaussian distribution. Second, we focus on the model selection consistency of $\ell_1$-LinR, while most previous studies consider estimation consistency, with the exception of [33]. However, [33] only considers the classical asymptotic regime, while our analysis includes the high-dimensional setting where $M \ll N$.
To the best of our knowledge, there is no earlier study of the $\ell_1$-LinR estimator for Ising model selection, though some exist for Gaussian graphical models [35, 36]. One closely related work [15] states that at high temperatures, when the coupling magnitude approaches zero, both the logistic and ISO losses can be approximated by a quadratic loss. However, their claim is restricted to very small coupling magnitudes near zero, while our analysis extends the validity range to the whole paramagnetic phase. Moreover, they evaluate the minimum number of samples necessary for consistently reconstructing “arbitrary” Ising models, which seems much larger than what is actually needed. By contrast, we provide the first accurate assessment of the typical sample complexity for consistently reconstructing typical samples of Ising models defined over RR graphs. Furthermore, [15] does not provide precise predictions of the non-asymptotic learning performance as we do.
2 Background and Problem Setup
2.1 Ising Model
The Ising model is a special class of MRFs with pairwise potentials in which each variable takes binary values $s_i \in \{-1, +1\}$ [22, 23]; it is a classical model from statistical physics. The joint probability distribution of an Ising model with $N$ variables (spins) $\mathbf{s} = (s_1, \dots, s_N)^{\rm T}$ has the form

$$P_{\rm Ising}(\mathbf{s}) = \frac{1}{Z} \exp\Big( \sum_{i < j} J^*_{ij} s_i s_j \Big), \tag{1}$$

where $Z$ is the partition function and $\{J^*_{ij}\}$ are the original couplings. In general, there are also external fields, but here they are assumed to be zero for simplicity. The structure of the Ising model can be described by an undirected graph $G = (V, E)$, where $V = \{1, \dots, N\}$ is a collection of vertices at which the spins are assigned, and $E = \{(i,j) : J^*_{ij} \neq 0\}$ is a collection of undirected edges. For each vertex $i$, its neighborhood is defined as the subset $\partial i \equiv \{j \in V : (i,j) \in E\}$.
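As a quick sanity check of (1), the joint distribution of a small Ising model can be evaluated by brute-force enumeration. The following is a minimal sketch (our own illustration, with an arbitrary 4-spin chain and coupling value) that makes the role of the partition function $Z$ explicit:

```python
import itertools
import numpy as np

def ising_probs(J):
    """Exact joint probabilities of the Ising model (1) for a small,
    symmetric, zero-diagonal coupling matrix J."""
    N = J.shape[0]
    states = np.array(list(itertools.product([-1, 1], repeat=N)))
    # 0.5 * s^T J s equals sum_{i<j} J_ij s_i s_j for symmetric J.
    weights = np.exp([0.5 * s @ J @ s for s in states])
    Z = weights.sum()  # the partition function
    return states, weights / Z

# A 4-spin chain with uniform coupling 0.4 (illustrative values only).
N, K0 = 4, 0.4
J = np.zeros((N, N))
for i in range(N - 1):
    J[i, i + 1] = J[i + 1, i] = K0
states, p = ising_probs(J)
print(p.sum())  # 1.0: probabilities are properly normalized by Z
```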
2.2 Neighborhood-based $\ell_1$-regularized linear regression ($\ell_1$-LinR)
The problem of Ising model selection refers to recovering the graph $G$ (edge set $E$), given $M$ i.i.d. samples $\mathcal{D}^M = \{\mathbf{s}^{(\mu)}\}_{\mu=1}^{M}$ from the Ising model. While the maximum likelihood method has the nice properties of consistency and asymptotic efficiency, it suffers from high computational complexity. To tackle this difficulty, several local learning algorithms have been proposed, notably the $\ell_1$-LogR estimator [10] and the IS estimator [14]. Both $\ell_1$-LogR and IS optimize a regularized local cost function for each spin $s_i$, i.e.,

$$\hat{\mathbf{w}}^{(i)} = \mathop{\rm argmin}_{\mathbf{w}} \Big\{ \sum_{\mu=1}^{M} \ell\big( s_i^{(\mu)} h_i^{(\mu)} \big) + \lambda \lVert \mathbf{w} \rVert_1 \Big\}, \tag{2}$$

where $h_i^{(\mu)} \equiv \sum_{j \neq i} w_j s_j^{(\mu)}$, $\lambda > 0$ is the regularization parameter, and $\lVert \cdot \rVert_1$ denotes the $\ell_1$ norm. Specifically, $\ell(x) = \log(1 + e^{-2x})$ for $\ell_1$-LogR and $\ell(x) = e^{-x}$ for IS, which correspond to the minus log conditional distribution [10] and the ISO [14], respectively. Consequently, the problem of recovering the edge set $E$ is equivalently reduced to local neighborhood selection, i.e., recovering the neighborhood set $\partial i$ for each vertex $i$. In particular, given the estimates in (2), the neighborhood set of vertex $i$ can be estimated via the nonzero coefficients, i.e.,

$$\hat{\partial i} = \big\{ j \in V \setminus \{i\} : \hat{w}^{(i)}_j \neq 0 \big\}. \tag{3}$$

In this paper, we focus on a simple linear estimator, termed the $\ell_1$-LinR estimator, i.e.,

$$\hat{\mathbf{w}}^{(i)} = \mathop{\rm argmin}_{\mathbf{w}} \Big\{ \frac{1}{2} \sum_{\mu=1}^{M} \Big( s_i^{(\mu)} - \sum_{j \neq i} w_j s_j^{(\mu)} \Big)^2 + \lambda \lVert \mathbf{w} \rVert_1 \Big\}, \tag{4}$$

which, recalling that $(s_i^{(\mu)})^2 = 1$, corresponds to the quadratic loss $\ell(x) = (1 - x)^2/2$ in (2). The neighborhood set for each vertex $i$ is estimated in the same way as (3). Interestingly, the quadratic loss used in (4) implies that the postulated conditional distribution is Gaussian and thus inconsistent with the true one, which is a typical case of model misspecification. Furthermore, compared with the nonlinear estimators $\ell_1$-LogR and IS, the $\ell_1$-LinR estimator is more efficient to implement.
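As a concrete illustration of (3) and (4), the following is a minimal sketch of neighborhood estimation with $\ell_1$-LinR built on scikit-learn's Lasso (whose documented objective is $\frac{1}{2M}\lVert y - Xw \rVert_2^2 + \alpha \lVert w \rVert_1$, so $\alpha = \lambda/M$ matches (4) up to normalization); the toy data and parameter values are our own assumptions, not the experimental settings:

```python
import numpy as np
from sklearn.linear_model import Lasso

def linr_neighborhood(S, i, lam):
    """Estimate the neighborhood of spin i from an (M, N) array of +/-1
    samples S via l1-regularized linear regression, cf. eq. (4)."""
    M = S.shape[0]
    y = S[:, i]
    X = np.delete(S, i, axis=1)
    # sklearn's Lasso minimizes ||y - Xw||^2 / (2M) + alpha * ||w||_1,
    # so alpha = lam / M corresponds to the objective in (4).
    w = Lasso(alpha=lam / M, fit_intercept=False).fit(X, y).coef_
    nbrs = np.flatnonzero(w != 0)
    return np.where(nbrs >= i, nbrs + 1, nbrs)  # restore original indices

# Toy usage with random spins (no real Ising structure, illustration only).
rng = np.random.default_rng(0)
S = rng.choice([-1, 1], size=(500, 20))
print(linr_neighborhood(S, i=0, lam=10.0))
```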
3 Statistical Mechanics Analysis
In this section, a statistical mechanics analysis of the $\ell_1$-LinR estimator is presented for typical RR graphs in the paramagnetic phase. Our analysis is applicable to any M-estimator of the form (2); please refer to Appendix A for a unified analysis, including detailed results for the $\ell_1$-LogR estimator.
To characterize the structure learning performance, the precision and recall are considered:
$${\rm Precision} = \frac{TP}{TP + FP}, \qquad {\rm Recall} = \frac{TP}{TP + FN}, \tag{5}$$

where $TP$, $FP$, and $FN$ denote the numbers of true positives, false positives, and false negatives in the estimated couplings, respectively. The concept of model selection consistency for an estimator, also known as the sparsistency property [10], is defined in Definition 1.
Definition 1.
An estimator is called model selection consistent if both the associated precision and recall satisfy ${\rm Precision} \to 1$ and ${\rm Recall} \to 1$ as $N \to \infty$.
Additionally, if one is further interested in the specific values of the estimated couplings, our analysis can also yield their residual sum of squares (RSS).
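For reference, the metrics in (5) and the RSS can be computed from a true and an estimated coupling vector as in the following sketch (a small helper of our own; variable names are ours):

```python
import numpy as np

def selection_metrics(w_true, w_hat):
    """Precision and Recall per (5), plus the residual sum of squares."""
    pos_true = w_true != 0
    pos_hat = w_hat != 0
    tp = np.sum(pos_true & pos_hat)
    fp = np.sum(~pos_true & pos_hat)
    fn = np.sum(pos_true & ~pos_hat)
    precision = tp / (tp + fp) if tp + fp > 0 else 1.0
    recall = tp / (tp + fn) if tp + fn > 0 else 1.0
    rss = np.sum((w_hat - w_true) ** 2)
    return precision, recall, rss

print(selection_metrics(np.array([0.4, 0.4, 0.0, 0.0]),
                        np.array([0.3, 0.0, 0.1, 0.0])))
# (0.5, 0.5, 0.18)
```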
Our theoretical analysis of the learning performance builds on the statistical mechanics framework. Contrary to the probably approximately correct (PAC) learning theory [37] in mathematical statistics, statistical mechanics tries to describe the typical (as defined in Definition 2) behavior exactly, rather than to bound the worst case, which is likely to be over-pessimistic [38].
Definition 2.
“Typical” means not just most probable but, in addition, that the probability of situations different from the typical one can be made arbitrarily small as $N \to \infty$ [38].
Similarly, when referring to typical RR graphs, we mean tree-like RR graphs, i.e., graphs that, seen from a random node, look like part of an infinite tree; these are typical realizations from the uniform probability distribution on the ensemble $\mathcal{G}_{N,d,K_0}$ of RR graphs.
3.1 Problem Formulation
For simplicity and without loss of generality, we focus on spin $s_0$. With a slight abuse of notation, we will drop certain sub/superscripts in the following descriptions; e.g., $\hat{\mathbf{w}}^{(0)}$ will be denoted as $\hat{\mathbf{w}}$ hereafter, which represents a vector rather than a matrix. The basic idea of the statistical mechanics approach is to introduce the following Hamiltonian and Boltzmann distribution induced by the loss function $\ell$:

$$\mathcal{H}(\mathbf{w} \mid \mathcal{D}^M) = \sum_{\mu=1}^{M} \ell\big( s_0^{(\mu)} h^{(\mu)} \big) + \lambda \lVert \mathbf{w} \rVert_1, \tag{6}$$

$$P(\mathbf{w} \mid \mathcal{D}^M) = \frac{1}{Z}\, e^{-\beta \mathcal{H}(\mathbf{w} \mid \mathcal{D}^M)}, \tag{7}$$

where $h^{(\mu)} \equiv \sum_{j \neq 0} w_j s_j^{(\mu)}$, $Z = \int d\mathbf{w}\, e^{-\beta \mathcal{H}(\mathbf{w} \mid \mathcal{D}^M)}$ is the partition function, and $\beta$ is the inverse temperature. In the zero-temperature limit $\beta \to \infty$, the Boltzmann distribution (7) converges to a point-wise measure on the estimator (2). The macroscopic properties of (7) can be analyzed by assessing the free energy density, from which, once obtained, we can evaluate averages of various quantities simply by taking its derivatives w.r.t. external fields [21]. In the current case, the free energy density depends on the predetermined randomness of $\mathcal{D}^M$, which plays the role of quenched disorder. As $N \to \infty$, it is expected to show the self-averaging property [21]: for typical datasets $\mathcal{D}^M$, it converges to its average over the random data:

$$f \equiv -\lim_{N \to \infty} \frac{1}{N\beta}\, \mathbb{E}_{\mathcal{D}^M}\big[ \log Z \big], \tag{8}$$

where $\mathbb{E}_{\mathcal{D}^M}$ denotes the expectation over the dataset $\mathcal{D}^M$. Consequently, one can analyze the typical performance of any $\ell_1$-regularized M-estimator of the form (2) via the assessment of (8), with $\ell_1$-LinR in (4) being a special case with the quadratic loss.
3.2 Replica computation of the free energy density
Unfortunately, computing (8) rigorously is difficult. To overcome this difficulty in practice, we resort to the powerful replica method [20, 21, 22, 23] from statistical mechanics, which is symbolized by the following identity

$$\mathbb{E}_{\mathcal{D}^M}[\log Z] = \lim_{n \to +0} \frac{\partial \log \mathbb{E}_{\mathcal{D}^M}[Z^n]}{\partial n}. \tag{9}$$

The basic idea is as follows. One replaces the average of $\log Z$ by that of the $n$-th power $Z^n$, which is analytically tractable for positive integer $n$ in the large $N$ limit, constructs an expression that can be analytically continued from integer $n$ to real $n$, and then takes the limit $n \to +0$ using that expression. Although the replica method is not rigorous, it has been empirically verified through extensive studies of disordered systems [22, 23] and has also proven useful in the study of high-dimensional models in machine learning [39, 40]. For more details of the replica method, please refer to [20, 21, 22, 23].
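For completeness, the one-line expansion behind (9) shows why the behavior of $\mathbb{E}[Z^n]$ near $n = 0$ recovers the quenched average:

$$\mathbb{E}[Z^n] = \mathbb{E}\big[e^{n \log Z}\big] = 1 + n\,\mathbb{E}[\log Z] + O(n^2) \;\Longrightarrow\; \mathbb{E}[\log Z] = \lim_{n \to +0} \frac{\mathbb{E}[Z^n] - 1}{n} = \lim_{n \to +0} \frac{\partial \log \mathbb{E}[Z^n]}{\partial n}.$$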
Specifically, with the Hamiltonian (6), assuming $n$ is a positive integer, the replicated partition function in (9) can be written as

$$\mathbb{E}_{\mathcal{D}^M}[Z^n] = \mathbb{E}_{\mathcal{D}^M}\left[ \int \prod_{a=1}^{n} d\mathbf{w}^a \exp\left( -\beta \sum_{a=1}^{n} \Big[ \sum_{\mu=1}^{M} \ell\big( s_0^{(\mu)} h_a^{(\mu)} \big) + \lambda \lVert \mathbf{w}^a \rVert_1 \Big] \right) \right], \tag{10}$$

where $h_a^{(\mu)} \equiv \sum_{j \neq 0} w_j^a s_j^{(\mu)}$ will be termed the local field hereafter, and $a = 1, \dots, n$ (and $b$ in the following) is the replica index. The analysis below essentially depends on the distribution of the local fields, which is nontrivial. To resolve this, we take a similar approach to [7, 29] and introduce the following ansatz.

Ansatz 1 (A1): Denote $\partial 0$ and its complement as the active and inactive sets of spin $s_0$, respectively. Then, for a typical RR graph $\mathcal{G}_{N,d,K_0}$ in the paramagnetic phase, i.e., $(d-1)\tanh(K_0) < 1$, the $\ell_1$-LinR estimator in (4) is a random vector determined by the random realizations of $\mathcal{D}^M$ and obeys the following form

$$\hat{\mathbf{w}} = \bar{\mathbf{w}} + \delta\mathbf{w}, \tag{11}$$

where $\bar{\mathbf{w}}$ is the mean value of the estimator and $\delta\mathbf{w}$ is a random vector that is asymptotically zero mean, with variance vanishing as $M, N \to \infty$.

The consistency of Ansatz 1 is checked in Appendix B. Under Ansatz 1, the local fields can be decomposed into a mean part and a “noise” part. According to the central limit theorem (CLT), the noise parts can be approximated as multivariate Gaussian variables, which, under the replica symmetric (RS) ansatz [21], can be fully described by two order parameters

$$Q \equiv (\mathbf{w}^a)^{\rm T} C^{\backslash 0}\, \mathbf{w}^a, \qquad q \equiv (\mathbf{w}^a)^{\rm T} C^{\backslash 0}\, \mathbf{w}^b \quad (a \neq b), \tag{12}$$

where $C^{\backslash 0}$ is the covariance matrix of the original Ising model without the spin $s_0$. Since the difference between $C^{\backslash 0}$ and the full covariance matrix is not essential in the limit $N \to \infty$, hereafter the superscript $\backslash 0$ will be discarded. As shown in Appendix A, for the quadratic loss of $\ell_1$-LinR, the average free energy density (8) in the limit $\beta \to \infty$ can be computed as
(13) |
where $\mathop{\rm extr}$ denotes the extremum operation w.r.t. the relevant variables, and $\mathcal{E}$ and $\mathcal{S}$ denote the energy and entropy terms:

(14)
(15)
(16)

where $\mathbb{E}$ denotes the expectation operation w.r.t. the relevant random variables [7]. For different losses $\ell$, the free energy result (13) differs only in the energy term $\mathcal{E}$, which in general is non-analytic (e.g., for the logistic loss of $\ell_1$-LogR) but can be evaluated numerically. Please refer to Appendix A.3 for more details.

In contrast to the case of the $\ell_2$-norm in [29], the $\ell_1$-norm in (15) breaks the rotational invariance property, i.e., $\lVert O\mathbf{w} \rVert_1 \neq \lVert \mathbf{w} \rVert_1$ for a general orthogonal matrix $O$, making it difficult to compute the entropy term $\mathcal{S}$. To circumvent this difficulty, we employ the observation that, when considering the RR graph ensemble as the coupling network of the Ising model, the orthogonal matrix diagonalizing the covariance matrix appears to be distributed from the Haar orthogonal measure [41, 42]. Thus, it is assumed that the $\ell_1$-norm term in (15) can be replaced by its average over the Haar-distributed $O$:

Ansatz 2 (A2): Denote $C \equiv \mathbb{E}[\mathbf{s}\mathbf{s}^{\rm T}]$ as the covariance matrix of the spin configurations $\mathbf{s}$ (recall that $\mathbb{E}[\mathbf{s}] = \mathbf{0}$ in the paramagnetic phase). Suppose that the eigendecomposition of $C$ is $C = O^{\rm T}\Lambda O$, where $O$ is the orthogonal matrix and $\Lambda$ is the diagonal matrix of eigenvalues. Then $O$ can be seen as a random sample generated from the Haar orthogonal measure, and thus, for typical graph realizations from $\mathcal{G}_{N,d,K_0}$, the $\ell_1$-norm term in (15) is equal to its average over $O$.

The consistency of Ansatz (A2) is numerically checked in Appendix C. Under Ansatz (A2), the entropy term $\mathcal{S}$ in (14) can be alternatively computed as an average over the Haar measure, as shown in Appendix A. Finally, under the RS ansatz, the average free energy density (8) in the limit $\beta \to \infty$ reads
(20) |
where the function $G$ is defined as

(21)

and $\rho(\gamma)$ is the eigenvalue distribution (EVD) of the covariance matrix $C$, while the extremization in (20) runs over a collection of macroscopic parameters. For details of these macroscopic parameters and of $\rho(\gamma)$, please refer to Appendices A and F, respectively. Note that in (20), apart from the ratio $M/N$, the values of $M$ and $N$ also appear separately in the free energy result, which is different from previous results [7, 29, 39]. The reason is that, thanks to the $\ell_1$-regularization term $\lambda \lVert \mathbf{w} \rVert_1$, the mean estimates in the active set and the noise in the inactive set give contributions of essentially different scaling to the free energy density.

Although there is no analytic solution, these macroscopic parameters in (20) can be obtained by numerically solving the corresponding equations of state (EOS), to use the physics terminology. Specifically, for the $\ell_1$-LinR estimator, the EOS can be obtained from the extremization condition in (20) as follows (for the EOS of a general M-estimator and of $\ell_1$-LogR, please refer to Appendix A.3):

(22)

where the auxiliary quantities are determined by the extremization condition in (21), and ${\rm soft}(x, th) \equiv {\rm sign}(x)\max(\lvert x \rvert - th, 0)$ is the soft-thresholding function. Once the EOS is solved, the free energy density defined in (8) is readily obtained.
3.3 High-dimensional asymptotic result
[Figure 1: Schematic of the decoupled estimators: (a) scalar estimator for the active set, (b) scalar estimator for the inactive set, (c) modified $d$-dimensional estimator for the active set used in the non-asymptotic analysis.]
One important result of our replica analysis is that, as derived from the free energy result (20) (see Appendix A.3), the original high-dimensional $\ell_1$-LinR estimator (4) is decoupled into a pair of scalar estimators, one for the active set and one for the inactive set, i.e.,

(23)
(24)

where the $z$'s are i.i.d. standard Gaussian random variables. The decoupling property asserts that, once the EOS (22) is solved, the asymptotic behavior of $\ell_1$-LinR can be statistically described by a pair of simple scalar soft-thresholding estimators (see Figs. 1(a) and 1(b)).
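A minimal numerical sketch of this decoupled picture is given below; the values for the signal, effective noise level, and threshold are placeholders standing in for the EOS solution of (22), not quantities derived here:

```python
import numpy as np

def soft_threshold(x, th):
    """Soft-thresholding operator: sign(x) * max(|x| - th, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - th, 0.0)

# Hypothetical order-parameter values standing in for the EOS solution.
signal, noise_std, lam_eff = 0.38, 0.05, 0.1  # assumed, illustration only

rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)

# Active-set scalar estimator: soft-threshold of signal plus Gaussian noise.
w_active = soft_threshold(signal + noise_std * z, lam_eff)
# Inactive-set scalar estimator: soft-threshold of pure Gaussian noise.
w_inactive = soft_threshold(noise_std * z, lam_eff)

print("recall proxy:", np.mean(w_active != 0))    # fraction of actives kept
print("FPR proxy   :", np.mean(w_inactive != 0))  # fraction of false positives
```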
In the high-dimensional setting where $M$ is allowed to grow as a function of $N$, one important question is: what is the minimum number of samples required to achieve model selection consistency as $N \to \infty$? Although we obtain the pair of scalar estimators (23) and (24), there is no analytic solution to the EOS (22), making it difficult to derive an explicit condition. To overcome this difficulty, as shown in Appendix D, we perform a perturbation analysis of the EOS and obtain an asymptotic relation among the order parameters. We then obtain that, for a RR graph $\mathcal{G}_{N,d,K_0}$, given $M$ i.i.d. samples $\mathcal{D}^M$, the $\ell_1$-LinR estimator (4) can consistently recover the graph structure as $N \to \infty$ if

$$M > \frac{c(\lambda, K_0)}{\lambda^{2}} \log N, \tag{25}$$

where $c(\lambda, K_0)$ is a constant dependent on the regularization parameter $\lambda$ and the coupling strength $K_0$; a sharp prediction of it (as verified in Sec. 5) is obtained as
(26) |
For details of the analysis, including the counterpart result for $\ell_1$-LogR, see Appendix D. Consequently, we obtain the typical sample complexity of $\ell_1$-LinR for Ising model selection for typical RR graphs in the paramagnetic phase. The result in (25) is derived for $\ell_1$-LinR with a fixed regularization parameter $\lambda$. Since the value of $\lambda$ is upper bounded by $\tanh(K_0)$ (otherwise false negatives occur, as discussed in Appendix D), a lower bound on the typical sample complexity of $\ell_1$-LinR is obtained as

$$M > \frac{c(\tanh(K_0), K_0)}{\tanh^{2}(K_0)} \log N. \tag{27}$$

Interestingly, the $\log N / \tanh^{2}(K_0)$ scaling in (27) is the same as that of the information-theoretic worst-case result obtained in [11], which scales as $\log N / (K_0 \tanh K_0)$, at high temperatures (i.e., small $K_0$), since $\tanh(K_0) \to K_0$ as $K_0 \to 0$.
3.4 Non-asymptotic result for moderate $M, N$
In practice, it is desirable to predict the non-asymptotic performance of the $\ell_1$-LinR estimator for moderate $M, N$. However, the scalar estimator (23) for the active set (see Fig. 1(a)) fails to capture the fluctuations around the mean estimates. This is because, in obtaining the energy term (16) of the free energy density (20), the fluctuations around the mean estimates are averaged out by the expectation. To address this problem, we replace the expectation in (20) with a sample average over $M$ samples, thereby accounting for the finite-size effect and obtaining a modified estimator for the active set as follows

(28)

The modified $d$-dimensional estimator (28) (see Fig. 1(c) for a schematic) is equivalent to the scalar one (23) (Fig. 1(a)) as $M \to \infty$, but it enables us to capture the fluctuations of the active-set estimates for moderate $M$. Note that, due to the replacement of the expectation with a sample average in the free energy density (20), the EOS (22) also needs to be modified; it can be solved iteratively as sketched in Algorithm 1. The details are given in Appendix E.1.

Consequently, for moderate $M, N$, the non-asymptotic statistical properties of the $\ell_1$-LinR estimator can be characterized by the reduced $d$-dimensional $\ell_1$-LinR estimator (28) (Fig. 1(c)) and the scalar estimator (24) (Fig. 1(b)) using MC simulations. Denote by $\hat{\mathbf{w}}^t_{\rm active}$ and $\hat{\mathbf{w}}^t_{\rm inactive}$ the solutions of (28) and (24) in the $t$-th MC simulation, with $T$ the total number of MC simulations. Then the Precision and Recall are computed as

$${\rm Precision} = \frac{1}{T}\sum_{t=1}^{T} \frac{\lVert \hat{\mathbf{w}}^t_{\rm active} \rVert_0}{\lVert \hat{\mathbf{w}}^t_{\rm active} \rVert_0 + \lVert \hat{\mathbf{w}}^t_{\rm inactive} \rVert_0}, \qquad {\rm Recall} = \frac{1}{T}\sum_{t=1}^{T} \frac{\lVert \hat{\mathbf{w}}^t_{\rm active} \rVert_0}{d}, \tag{29}$$

where $\lVert \cdot \rVert_0$ is the $\ell_0$-norm, indicating the number of nonzero elements. In addition, the RSS of the estimated couplings can be computed analogously from the same MC samples.
4 Discussions
It might seem surprising that, despite the apparent model misspecification due to the use of a quadratic loss, the $\ell_1$-LinR estimator can still correctly infer the structure with the same order of sample complexity as $\ell_1$-LogR. Indeed, our theoretical analysis implies that the idea of using linear regression for binary data is not as outrageous as one might imagine. Here we provide an intuitive explanation of its success, along with a discussion of its limitations.

On average, from (4), the stationarity condition for the $\ell_1$-LinR estimator is given as

$$\langle s_0 s_j \rangle - \sum_{k \neq 0} \bar{w}_k \langle s_k s_j \rangle \in \lambda\, \partial \lvert \bar{w}_j \rvert, \qquad \forall j \neq 0, \tag{30}$$

where $\langle \cdot \rangle$ and $\partial \lvert \bar{w}_j \rvert$ represent the average w.r.t. the Boltzmann distribution (1) and the sub-gradient of the $\ell_1$ norm, respectively. In the paramagnetic phase, $\langle s_i s_j \rangle$ decays exponentially in magnitude w.r.t. the distance between sites $i$ and $j$. This guarantees that once the connections of sites in the first nearest-neighbor set $\partial 0$ are chosen so that

$$\langle s_0 s_j \rangle - \sum_{k \in \partial 0} \bar{w}_k \langle s_k s_j \rangle = \lambda\, {\rm sign}(\bar{w}_j), \qquad \forall j \in \partial 0 \tag{31}$$

holds, the remaining conditions are automatically satisfied by setting all the other connections, those not in $\partial 0$, to zero. For an appropriate choice of $\lambda$, (31) has solutions whose signs agree with those of the true couplings; namely, the estimate of each active coupling has the same sign as the true value. This implies that, on average, the $\ell_1$-LinR estimator can successfully recover the network structure up to the connection signs if $\lambda$ is chosen appropriately.

The key to the above argument is that $\langle s_i s_j \rangle$ decays exponentially fast w.r.t. the distance between the two sites, which no longer holds beyond the paramagnetic phase transition. Thus, it is conjectured that the $\ell_1$-LinR estimator will start to fail in the network recovery just at the phase transition point. However, it is worth noting that this is in fact not limited to $\ell_1$-LinR: $\ell_1$-LogR also exhibits similar behavior unless post-thresholding is used, as reported in [43].
5 Experimental results
[Figure 2: Replica predictions versus experimental results of Precision and Recall for the RR graph (upper) and the 2D grid (lower) at different values of $\lambda$.]
[Figure 3: (Left) Precision of $\ell_1$-LinR and $\ell_1$-LogR as $N$ grows; (middle, right) Precision and Recall for different scaling values $c$ around the critical value $c_0$.]
In this section, we conduct numerical experiments to verify the accuracy of the theoretical analysis. The experimental procedures are as follows. First, a random graph is generated and the Ising model is defined on it. Then, the spin snapshots are obtained using the Metropolis–Hastings algorithm [44, 45, 46] in the same way as [7], yielding the dataset $\mathcal{D}^M$. We randomly choose a center spin $s_0$ and infer its neighborhood using the $\ell_1$-LinR (4) and $\ell_1$-LogR [10] estimators. To obtain standard error bars, we repeat this sequence of operations 1000 times. The RR graph $\mathcal{G}_{N,d,K_0}$ with constant node degree $d$ and coupling strength $K_0$ satisfying the paramagnetic condition is considered. The active couplings take the values $K_0$ and $-K_0$ with equal probability¹. (¹ Though this setting is different from the analysis, where the nonzero couplings take a uniform sign, the results can be directly compared thanks to gauge symmetry [21].)
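A minimal sketch of this data-generating pipeline is given below, assuming networkx for the RR graph; the parameters (degree, coupling strength, burn-in, thinning) are illustrative choices, not necessarily the settings used in our experiments:

```python
import numpy as np
import networkx as nx

def sample_ising_metropolis(J, M, burn_in=10_000, thin=100, seed=0):
    """Draw M spin snapshots from an Ising model with coupling matrix J
    via single-site Metropolis updates."""
    rng = np.random.default_rng(seed)
    N = J.shape[0]
    s = rng.choice([-1, 1], size=N)
    samples = []
    for t in range(burn_in + M * thin):
        i = rng.integers(N)
        dE = 2.0 * s[i] * (J[i] @ s)  # energy change of flipping spin i
        if dE <= 0 or rng.random() < np.exp(-dE):
            s[i] = -s[i]
        if t >= burn_in and (t - burn_in) % thin == 0:
            samples.append(s.copy())
    return np.array(samples)

# Illustrative parameters (our own, not the paper's exact settings).
N, d, K0, M = 200, 3, 0.4, 1000
G = nx.random_regular_graph(d, N, seed=0)
rng = np.random.default_rng(1)
J = np.zeros((N, N))
for i, j in G.edges():
    J[i, j] = J[j, i] = K0 * rng.choice([-1, 1])  # random-sign couplings
D = sample_ising_metropolis(J, M)
print(D.shape)  # (M, N)
```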
We first verify the precise non-asymptotic predictions of our method described in Sec. 3.4. Fig. 2 (upper) shows the replica and experimental results of Precision and Recall for different values of $\lambda$. It can be seen that for both $\ell_1$-LinR and $\ell_1$-LogR, there is a fairly good agreement between the theoretical predictions and the experimental results, even for small $M$ and $N$, verifying the correctness of the replica analysis. Interestingly, a quantitatively similar behavior between $\ell_1$-LinR and $\ell_1$-LogR is observed in terms of precision and recall. Regarding RSS, the two estimators actually behave differently, which can be clearly seen in Fig. 7 in Appendix G: the RSS is much smaller for $\ell_1$-LogR, which is natural since the estimates of $\ell_1$-LogR are closer to the true ones due to the model match. As our theoretical analysis is based on the tree-like structure assumption, it is interesting to see whether it is applicable to graphs with loops. To this end, we consider the 2D 4-nearest-neighbor grid with periodic boundary conditions, which is a common loopy graph. Fig. 2 (lower) shows the results for a 2D grid with uniform constant coupling $K_0$. The agreement between the theoretical and numerical results is fairly good, indicating that our theoretical result can be a good approximation even for loopy graphs. More results for different values of $\lambda$ and $M$ are shown in Fig. 7 and Fig. 8 in Appendix G.

Subsequently, the asymptotic result and the sharpness of the critical scaling value $c_0 \equiv c(\lambda, K_0)$ in (25) are evaluated. First, Fig. 3 (left) shows a comparison of the Precision between $\ell_1$-LinR and $\ell_1$-LogR for the RR graph $\mathcal{G}_{N,d,K_0}$ as $N$ grows, indicating similar behavior of $\ell_1$-LogR and $\ell_1$-LinR. Then, we conduct experiments with different scaling values $c$ around $c_0$ and investigate the trend of the Precision and Recall as $N$ increases; Fig. 3 (middle and right) shows the results for the Precision and Recall, respectively. As expected, the Precision increases consistently with $N$ when $c > c_0$ and decreases consistently with $N$ when $c < c_0$, while the Recall increases consistently and approaches 1 as $N$ grows, which verifies the sharpness of the critical scaling value prediction. The results for $\ell_1$-LogR, together with additional settings for both $\ell_1$-LinR and $\ell_1$-LogR, are shown in Fig. 9 and Fig. 10 in Appendix G.
6 Conclusion
In this paper, we provide a unified statistical mechanics framework for the analysis of the typical learning performance of $\ell_1$-regularized M-estimators, $\ell_1$-LinR in particular, for Ising model selection on typical paramagnetic RR graphs. Using the powerful replica method, the high-dimensional $\ell_1$-regularized M-estimator is decoupled into a pair of scalar estimators, by which we obtain an accurate estimate of the typical sample complexity. It is revealed that, perhaps surprisingly, the misspecified $\ell_1$-LinR estimator is model selection consistent using $M = O(\log N)$ samples, which is of the same order as $\ell_1$-LogR. Moreover, with a slight modification of the scalar estimator for the active set to account for the finite-size effect, we further obtain sharp predictions of the non-asymptotic behavior of $\ell_1$-LinR (and also $\ell_1$-LogR) for moderate $M, N$. There is an excellent agreement between theoretical predictions and experimental results, even for graphs with many loops, which supports our findings. Several key assumptions are made in our theoretical analysis, such as the paramagnetic assumption, which implies that the coupling strength should not be too large. It is worth noting that the restrictive paramagnetic assumption is not specific to $\ell_1$-LinR: other low-complexity estimators like $\ell_1$-LogR suffer from it as well unless post-thresholding is used [43]. These assumptions restrict the applicability of the presented results, and overcoming such limitations will be an important direction for future work.
Acknowledgements
This work was supported by JSPS KAKENHI Nos. 17H00764, 18K11463, and 19H01812, and JST CREST Grant Number JPMJCR1912, Japan.
References
- [1] Martin J Wainwright and Michael Irwin Jordan. Graphical models, exponential families, and variational inference. Now Publishers Inc, 2008.
- [2] H Chau Nguyen, Riccardo Zecchina, and Johannes Berg. Inverse statistical problems: from the inverse ising problem to data science. Advances in Physics, 66(3):197–261, 2017.
- [3] Erik Aurell and Magnus Ekeberg. Inverse ising inference using all the data. Physical review letters, 108(9):090201, 2012.
- [4] Ludovica Bachschmid-Romano and Manfred Opper. Learning of couplings for random asymmetric kinetic ising models revisited: random correlation matrices and learning curves. Journal of Statistical Mechanics: Theory and Experiment, 2015(9):P09016, 2015.
- [5] Johannes Berg. Statistical mechanics of the inverse ising problem and the optimal objective function. Journal of Statistical Mechanics: Theory and Experiment, 2017(8):083402, 2017.
- [6] Ludovica Bachschmid-Romano and Manfred Opper. A statistical physics approach to learning curves for the inverse ising problem. Journal of Statistical Mechanics: Theory and Experiment, 2017(6):063406, 2017.
- [7] Alia Abbara, Yoshiyuki Kabashima, Tomoyuki Obuchi, and Yingying Xu. Learning performance in inverse ising problems with sparse teacher couplings. Journal of Statistical Mechanics: Theory and Experiment, 2020(7):073402, 2020.
- [8] Martin J Wainwright, John D Lafferty, and Pradeep K Ravikumar. High-dimensional graphical model selection using $\ell_1$-regularized logistic regression. In Advances in neural information processing systems, pages 1465–1472, 2007.
- [9] Holger Höfling and Robert Tibshirani. Estimation of sparse binary pairwise markov networks using pseudo-likelihoods. Journal of Machine Learning Research, 10(4):883–906, 2009.
- [10] Pradeep Ravikumar, Martin J Wainwright, John D Lafferty, et al. High-dimensional ising model selection using $\ell_1$-regularized logistic regression. The Annals of Statistics, 38(3):1287–1319, 2010.
- [11] Narayana P Santhanam and Martin J Wainwright. Information-theoretic limits of selecting binary graphical models in high dimensions. IEEE Transactions on Information Theory, 58(7):4117–4134, 2012.
- [12] Aurélien Decelle and Federico Ricci-Tersenghi. Pseudolikelihood decimation algorithm improving the inference of the interaction network in a general class of ising models. Physical review letters, 112(7):070603, 2014.
- [13] Guy Bresler. Efficiently learning ising models on arbitrary graphs. In Proceedings of the forty-seventh annual ACM symposium on Theory of computing, pages 771–782, 2015.
- [14] Marc Vuffray, Sidhant Misra, Andrey Lokhov, and Michael Chertkov. Interaction screening: Efficient and sample-optimal learning of ising models. In Advances in Neural Information Processing Systems, pages 2595–2603, 2016.
- [15] Andrey Y Lokhov, Marc Vuffray, Sidhant Misra, and Michael Chertkov. Optimal structure and parameter learning of ising models. Science advances, 4(3):e1700791, 2018.
- [16] Shanshan Wu, Sujay Sanghavi, and Alexandros G Dimakis. Sparse logistic regression learns all discrete pairwise graphical models. Advances in Neural Information Processing Systems, 32:8071–8081, 2019.
- [17] Guy Bresler, Elchanan Mossel, and Allan Sly. Reconstruction of markov random fields from samples: Some observations and algorithms. In Approximation, Randomization and Combinatorial Optimization. Algorithms and Techniques, pages 343–356. Springer, 2008.
- [18] Adarsh Prasad, Vishwak Srinivasan, Sivaraman Balakrishnan, and Pradeep Ravikumar. On learning ising models under huber’s contamination model. Advances in Neural Information Processing Systems, 33, 2020.
- [19] Sahand N Negahban, Pradeep Ravikumar, Martin J Wainwright, and Bin Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statistical science, 27(4):538–557, 2012.
- [20] Marc Mézard, Giorgio Parisi, and Miguel Angel Virasoro. Spin glass theory and beyond: An Introduction to the Replica Method and Its Applications, volume 9. World Scientific Publishing Company, 1987.
- [21] Hidetoshi Nishimori. Statistical physics of spin glasses and information processing: an introduction. Number 111. Clarendon Press, 2001.
- [22] Manfred Opper and David Saad. Advanced mean field methods: Theory and practice. MIT press, 2001.
- [23] Marc Mezard and Andrea Montanari. Information, physics, and computation. Oxford University Press, 2009.
- [24] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58(1):267–288, 1996.
- [25] Mohsen Bayati and Andrea Montanari. The lasso risk for gaussian matrices. IEEE Transactions on Information Theory, 58(4):1997–2017, 2011.
- [26] Sundeep Rangan, Alyson K Fletcher, and Vivek K Goyal. Asymptotic analysis of map estimation via the replica method and applications to compressed sensing. IEEE Transactions on Information Theory, 58(3):1902–1923, 2012.
- [27] Christos Thrampoulidis, Ehsan Abbasi, and Babak Hassibi. Lasso with non-linear measurements is equivalent to one with linear measurements. Advances in Neural Information Processing Systems, 28:3420–3428, 2015.
- [28] Cédric Gerbelot, Alia Abbara, and Florent Krzakala. Asymptotic errors for convex penalized linear regression beyond gaussian matrices. arXiv preprint arXiv:2002.04372, 2020.
- [29] Xiangming Meng, Tomoyuki Obuchi, and Yoshiyuki Kabashima. Structure learning in inverse ising problems using $\ell_2$-regularized linear estimator. Journal of Statistical Mechanics: Theory and Experiment, 2021(5):053403, 2021.
- [30] Julian Besag. Statistical analysis of non-lattice data. Journal of the Royal Statistical Society: Series D (The Statistician), 24(3):179–195, 1975.
- [31] David R Brillinger. A generalized linear model with gaussian regressor variables. In A Festschrift for Erich L. Lehmann, page 97–114. 1982.
- [32] Yaniv Plan and Roman Vershynin. The generalized lasso with non-linear observations. IEEE Transactions on information theory, 62(3):1528–1537, 2016.
- [33] Yue Zhang, Weihong Guo, and Soumya Ray. On the consistency of feature selection with lasso for non-linear targets. In International Conference on Machine Learning, pages 183–191. PMLR, 2016.
- [34] Martin Genzel. High-dimensional estimation of structured signals from non-linear observations with general convex loss functions. IEEE Transactions on Information Theory, 63(3):1601–1619, 2016.
- [35] Nicolai Meinshausen, Peter Bühlmann, et al. High-dimensional graphs and variable selection with the lasso. The annals of statistics, 34(3):1436–1462, 2006.
- [36] Peng Zhao and Bin Yu. On model selection consistency of lasso. The Journal of Machine Learning Research, 7:2541–2563, 2006.
- [37] Leslie G Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984.
- [38] Andreas Engel and Christian Van den Broeck. Statistical mechanics of learning. Cambridge University Press, 2001.
- [39] Federica Gerace, Bruno Loureiro, Florent Krzakala, Marc Mézard, and Lenka Zdeborová. Generalisation error in learning with random features and the hidden manifold model. In International Conference on Machine Learning, pages 3452–3462. PMLR, 2020.
- [40] Madhu Advani and Surya Ganguli. Statistical mechanics of optimal convex inference in high dimensions. Physical Review X, 6(3):031034, 2016.
- [41] Persi Diaconis and Mehrdad Shahshahani. On the eigenvalues of random matrices. Journal of Applied Probability, 31(A):49–62, 1994.
- [42] Kurt Johansson. On random matrices from the compact classical groups. Annals of mathematics, pages 519–545, 1997.
- [43] José Bento and Andrea Montanari. Which graphical models are difficult to learn? arXiv preprint arXiv:0910.5761, 2009.
- [44] Nicholas Metropolis, Arianna W Rosenbluth, Marshall N Rosenbluth, Augusta H Teller, and Edward Teller. Equation of state calculations by fast computing machines. The journal of chemical physics, 21(6):1087–1092, 1953.
- [45] W Keith Hastings. Monte carlo sampling methods using markov chains and their applications. 1970.
- [46] Stuart Geman and Donald Geman. Stochastic relaxation, gibbs distributions, and the bayesian restoration of images. IEEE Transactions on pattern analysis and machine intelligence, (6):721–741, 1984.
- [47] Federico Ricci-Tersenghi. The bethe approximation for solving the inverse ising problem: a comparison with other inference methods. Journal of Statistical Mechanics: Theory and Experiment, 2012(08):P08015, 2012.
- [48] H Chau Nguyen and Johannes Berg. Bethe–peierls approximation and the inverse ising problem. Journal of Statistical Mechanics: Theory and Experiment, 2012(03):P03004, 2012.
- [49] Brendan D McKay. The expected eigenvalue distribution of a large regular graph. Linear Algebra and its Applications, 40:203–216, 1981.
Appendix A Free energy density computation
The detailed derivation of the average free energy density (8) using the replica method is presented here. Our method provides a unified framework for the statistical mechanics analysis of any $\ell_1$-regularized M-estimator of the form (2). For generality, in the following derivations we first focus on a generic $\ell_1$-regularized M-estimator (2) with a generic loss function $\ell$. After obtaining the generic results, specific results for both the $\ell_1$-LinR estimator (4) with quadratic loss and the $\ell_1$-LogR estimator with logistic loss are provided. For the IS estimator, the results can be easily obtained by substituting its loss $\ell(x) = e^{-x}$, though the specific results are not shown.

A.1 Energy term of the free energy density

The key of the replica method is to compute the replicated partition function $\mathbb{E}_{\mathcal{D}^M}[Z^n]$. According to the definition in (10) and Ansatz (A1) in Sec. 3.2, the average replicated partition function can be rewritten as
(32)

where the fluctuations within the finite active set are neglected in the second line when $N$ is large, the marginal distribution of the active-set spins can be computed as in [7], and the distribution of the “noise” part of the local field enters as a separate factor. In the last line, the asymptotic independence between the active-set spins and the noise part is applied, as discussed in [7]. Regarding the marginal distribution, in general one has to take into account the cavity fields; in the case considered in this paper, however, the paramagnetic assumption simplifies the marginal distribution [7]. When the active set has a small cardinality $d$, we can compute the expectation w.r.t. it exactly by exhaustive enumeration; for large $d$, MC methods like the Metropolis–Hastings algorithm [44, 45, 46] might be used.

To proceed with the calculation, according to the CLT, the noise part of the local field can be regarded as Gaussian, so that its distribution can be approximated as a multivariate Gaussian distribution. Under the RS ansatz, two auxiliary order parameters are introduced, i.e.,
(33)
(34)

where $C^{\backslash 0}$ is the covariance matrix of the original Ising model without $s_0$. To write the integration in terms of the order parameters, we introduce the following trivial identities

(35)
(36)

so that $\mathbb{E}_{\mathcal{D}^M}[Z^n]$ in (32) can be rewritten as

(37)
(38)
where
(39)
(40)
According to the CLT and (33)–(34), the noise parts follow a multivariate Gaussian distribution with zero mean (paramagnetic assumption) and covariances

(41)

Consequently, by introducing two auxiliary i.i.d. standard Gaussian random variables, the noise parts can be written in the compact form

(42)

so that the corresponding factor in (40) can be written as

(43)

As a result, using the replica formula, we have

(44)
where in the last line, a change of variable is used.
As a result, from (9), the average free energy density in the large $N$ limit reads

(45)

where $\mathop{\rm extr}$ denotes extremization w.r.t. the relevant variables, and the corresponding energy and entropy terms are, respectively:

(46)
(47)
(48)

where the relation derived in [6, 7] is used. The extremization in the free energy result (45) comes from the saddle-point method in the large $N$ limit.
A.2 Entropy term of the free energy density

To obtain the final result for the free energy density, one remaining entropy term must be computed, which requires the result of (47). However, unlike the $\ell_2$-norm, the $\ell_1$-norm in (47) breaks the rotational invariance property, which makes the computation difficult, and the methods in [7, 29] are no longer applicable. To address this problem, applying the Haar Orthogonal Ansatz (A2) in Sec. 3.2, we replace the term with an average over the orthogonal matrix $O$ generated from the Haar orthogonal measure.
Specifically, also under the RS ansatz, two auxiliary order parameters are introduced, i.e.,
(49)
(50)
Then, by inserting the corresponding delta functions, we obtain

(51)
Moreover, rewriting the original delta functions in (51) via their integral representations and taking the average over the orthogonal matrix $O$, after some algebra the term is replaced with the following average

(52)
(53)
To proceed with the computation, the eigendecomposition of the relevant matrix is performed. After some algebra, for configurations satisfying the RS order-parameter constraints, the eigenvalues and associated eigenvectors of the matrix can be calculated as follows

(54)

where the former eigenvalue corresponds to a single eigenvector while the latter is degenerate, corresponding to the remaining eigenvectors. To compute the resulting average, we define a function $G$ as

(55)

where $\rho(\gamma)$ is the eigenvalue distribution (EVD) of the covariance matrix $C$. Then, combined with (54), after some algebra, we obtain

(56)
Furthermore, rewriting the remaining delta functions in (51) in the same manner, we obtain
(57) |
In addition, the corresponding quadratic term can be linearized using a Gaussian integral. Consequently, the entropy term of the free energy density is computed as
For $\beta \to \infty$, according to the behavior of the Boltzmann distribution, the following scaling relations are assumed to hold, i.e.,

(58)
Finally, the entropy term is computed as
(59)
(60)
A.3 Free energy density result
Combining the results (48) and (60), the free energy density for a general loss function $\ell$ in the limit $\beta \to \infty$ is obtained as

(64)

where the values of the macroscopic parameters are determined by the extremization condition, i.e., by solving the equations of state (EOS). For a general loss function $\ell$, the EOS for (64) reads

(65)

where the auxiliary quantities are determined by the extremization condition in (55) combined with the free energy result (64). In general, there is no analytic solution to the EOS (65), but it can be solved numerically.
A.3.1 Quadratic loss

In the case of the quadratic loss for the $\ell_1$-LinR estimator, part of the EOS admits an analytic solution, and thus the results can be further simplified. Specifically, the free energy can be written as follows

(69)
and the corresponding EOS can be written as
(70)

Note that the mean estimates in (70) are obtained by solving the following reduced optimization problem

(71)

where the corresponding fixed-point equation can be written as follows

(72)

where ${\rm sign}(\cdot)$ denotes the element-wise application of the standard sign function. For a RR graph $\mathcal{G}_{N,d,K_0}$ with degree $d$ and coupling strength $K_0$, without loss of generality assuming that all the active couplings are positive, we have $C_{0j} = \tanh(K_0)$ for $j \in \partial 0$ and $C_{jk} = \tanh^2(K_0)$ for $j \neq k \in \partial 0$, with $C_{jj} = 1$. Given these results and thanks to the symmetry, we obtain

(73)

where ${\rm soft}(\cdot, \cdot)$ is the soft-thresholding function, i.e.,

$${\rm soft}(x, th) \equiv {\rm sign}(x)\, \max(\lvert x \rvert - th,\; 0). \tag{74}$$

On the other hand, in the inactive set, each component of the scaled noise estimates can be statistically described as the solution of the scalar estimator in (64). Consequently, recalling the definition in (11), the estimates in the inactive set are

(75)
which are i.i.d. random Gaussian noise.
A.3.2 Logistic loss
In the case of the logistic loss for the $\ell_1$-LogR estimator, however, there is no analytic solution, and it has to be solved iteratively together with the other macroscopic parameters. After some algebra, we obtain the EOS for the $\ell_1$-LogR estimator:

(76)

In the active set, the mean estimates can be obtained by solving a reduced $\ell_1$-regularized optimization problem

(77)

In contrast to the $\ell_1$-LinR estimator, the mean estimates in (77) for the $\ell_1$-LogR estimator have no analytic solution and also have to be computed numerically. For a RR graph $\mathcal{G}_{N,d,K_0}$ with degree $d$ and coupling strength $K_0$, after some algebra, the corresponding fixed-point equations are obtained as follows

(78)
which can be solved iteratively.
The estimates in the inactive set are the same as those of $\ell_1$-LinR in (75), and can be described by a scalar soft-thresholding estimator once the EOS is solved.
Appendix B Checking the consistency of Ansatz (A1)

To check the consistency of Ansatz (A1), we first categorize the estimates based on the distance, or generation, from the focused spin $s_0$. Considering the original Ising model whose coupling network is a tree-like graph, we can naturally define generations of the spins according to their distance from $s_0$. We categorize the spins directly connected to $s_0$ as the first generation and denote the corresponding index set as $V_1 = \partial 0$. Each spin in $V_1$ is connected to some other spins apart from $s_0$, and those spins constitute the second generation, whose index set is denoted as $V_2$. This recursive construction of generations can be continued unambiguously on the tree-like graph, and we denote the index set of the $g$-th generation from spin $s_0$ as $V_g$. The overall construction of generations is graphically represented in Fig. 4. Ansatz (A1) then means that the set of nonzero mean estimates coincides with the first generation $V_1 = \partial 0$.

[Figure 4: Construction of generations of spins according to their distance from the focused spin $s_0$ on the tree-like graph.]

To verify this, we examine the values of the mean estimates based on (69). Due to the symmetry, it is expected that, for each generation $g$, the values of the mean estimates are identical to each other within the same set $V_g$. In addition, if the solutions satisfy Ansatz (A1) in (11), i.e., only the estimates in $V_1$ are nonzero, from (69) we obtain

(79)

where the result $\langle s_i s_j \rangle = \tanh^{g}(K_0)$ for any two spins whose distance is $g$ in the RR graph $\mathcal{G}_{N,d,K_0}$ is used. Note that the solution of the first equation in (79) automatically satisfies the second equation (the sub-gradient condition), which indicates that it is a valid solution. Moreover, the convexity of the quadratic loss function implies that this is the unique and correct solution, which confirms Ansatz (A1).
Appendix C Checking the consistency of Ansatz (A2)

We here check the consistency of part of Ansatz (A2) in Sec. 3.2: that the orthogonal matrix diagonalizing the covariance matrix is distributed from the Haar orthogonal measure. To achieve this, we compare certain properties of the orthogonal matrix obtained from the diagonalization of the covariance matrix with those of an orthogonal matrix actually generated from the Haar orthogonal measure. Specifically, we compute the cumulants of the traces of powers of the orthogonal matrix. All cumulants of degree higher than two vanish in the large $N$ limit [41, 42]; the only nontrivial cumulants are the second-order cumulants of traces of the same power. We computed these cumulants for the orthogonal matrix from the covariance matrix and found that they exhibit the same behavior as those generated from the true Haar measure, as shown in Fig. 5.

[Figure 5: Cumulants of traces of powers of the orthogonal matrix diagonalizing the Ising covariance matrix, compared with those of Haar-distributed orthogonal matrices.]
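The Haar side of this comparison can be reproduced with scipy's ortho_group, as in the following sketch (our own illustration); the same trace statistics would then be computed for the eigenbases of empirical Ising covariance matrices and compared against these:

```python
import numpy as np
from scipy.stats import ortho_group

def trace_power_stats(Os, kmax=4):
    """Mean and variance of Tr(O^k), k = 1..kmax, over a set of matrices."""
    stats = []
    for k in range(1, kmax + 1):
        tr = [np.trace(np.linalg.matrix_power(O, k)) for O in Os]
        stats.append((np.mean(tr), np.var(tr)))
    return stats

# Orthogonal matrices drawn from the Haar measure on the orthogonal group.
Os_haar = ortho_group.rvs(dim=100, size=300, random_state=0)
print(trace_power_stats(Os_haar))
# Computing the same statistics for eigenbases O of empirical covariance
# matrices C = O^T diag(gamma) O and checking agreement reproduces Fig. 5.
```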
Appendix D Details of the high-dimensional asymptotic result

Here the asymptotic behavior of the Precision and Recall is considered for both the $\ell_1$-LinR and $\ell_1$-LogR estimators. Recall that perfect Ising model selection is achieved if and only if ${\rm Precision} = 1$ and ${\rm Recall} = 1$.
D.1 Recall rate
According to the definition in (5), the recall rate is only related to the statistical properties of the estimates in the active set, and thus the mean estimates in the limit $N \to \infty$ are considered.

D.1.1 Quadratic loss

In this case, in the limit $N \to \infty$, the mean estimates in the active set are given in (73) and rewritten as follows for ease of reference

(80)

As a result, as long as $0 < \lambda < \tanh(K_0)$, the mean active estimates are nonzero, and thus we can successfully recover the active set so that ${\rm Recall} \to 1$. In addition, when $\lambda \geq \tanh(K_0)$, the active estimates are thresholded to zero, as demonstrated later by the relation in (90). As a result, the regularization parameter needs to satisfy $0 < \lambda < \tanh(K_0)$.
D.1.2 Logistic loss
In this case, in the limit $N \to \infty$, the mean estimates in the active set are given in (78) and rewritten as follows for ease of reference

(81)

There is no analytic solution, and the following fixed-point equation has to be solved numerically

(82)

Then one can determine the valid choice of $\lambda$ that ensures ${\rm Recall} \to 1$. Numerical results show that the valid range of $\lambda$ is similar to that of the quadratic loss.
D.2 Precision rate
According to the definition in (5), to compute the Precision, the numbers of true positives and false positives are needed. On the one hand, as discussed in Appendix D.1, in the limit $N \to \infty$ the recall rate approaches one, and thus we have $TP = d$ for a RR graph $\mathcal{G}_{N,d,K_0}$. On the other hand, the number of false positives can be computed as $FP = (N - d - 1)\,{\rm FPR}$, where FPR denotes the false positive rate.

As shown in Appendix A.3, the estimates in the inactive set can be statistically described by the scalar estimator (75), and thus the FPR can be computed as

(83)

which depends on the solution of the EOS. However, for both the quadratic loss and the logistic loss, there is no analytic solution to the EOS (65). Nevertheless, an asymptotic result can be obtained using a perturbative analysis.

Specifically, we focus on the asymptotic behavior of the macroscopic parameters in the regime relevant for successful Ising model selection. From the EOS (65) and the FPR in (83), a first relation between the order parameters follows. Moreover, by combining the remaining stationarity conditions, the following relation can be obtained

(84)

Thus, in this regime, the relevant parameter becomes small in magnitude. Consequently, using a truncated series expansion, we obtain

(85)

Then, solving the quadratic equation (85), we obtain the relevant solution (the other solution is discarded since it is smaller) as

(86)
To proceed, we use the following relation

(87)
(88)

In addition, in the same limit, from (65) we obtain

(93)

where the first step uses an asymptotic expansion and the last step follows from the asymptotic relation in (89). Then, substituting (93) into (92) leads to the following relation

(94)

Interestingly, the common terms on both sides of (94) cancel each other. Therefore, the key result is obtained as follows

(95)

In addition, from (95) and (91), the expression can be simplified as

(96)

As shown in (65), the result in (95) implies a linear relation between the corresponding order parameters. This linear relation is also verified numerically in Fig. 6 for the $\ell_1$-LinR estimator.
[Figure 6: Numerical verification of the linear relation implied by (95) for the $\ell_1$-LinR estimator.]
In the paramagnetic phase, the mean value of the eigenvalues of the covariance matrix can be obtained explicitly. Using this, the FPR in (83) can be upper-bounded as follows

(97)

where the last inequality uses the upper bound of the erfc function, i.e., ${\rm erfc}(x) \leq e^{-x^2}$ for $x \geq 0$. Consequently, the number of false positives satisfies

(98)

where the last inequality holds for sufficiently large $N$. Consequently, to ensure $FP \to 0$ as $N \to \infty$, from (98), the term $M\lambda^2$ should grow at least faster than $\log N$, i.e.,

(99)

Meanwhile, when this condition holds, the number of false positives decays as $O(N^{-\delta})$ for some constant $\delta > 0$.
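To summarize the logic of (97)–(99) in one chain (with $\bar{\sigma}^2$ standing in for the elided noise-variance symbol):

$$FP \le (N - d - 1)\, e^{-\frac{M\lambda^2}{2\bar{\sigma}^2}} \le e^{\log N - \frac{M\lambda^2}{2\bar{\sigma}^2}} \longrightarrow 0 \quad (N \to \infty) \quad \text{whenever} \quad M > \frac{2\bar{\sigma}^2 (1+\epsilon)}{\lambda^2} \log N \ \text{for some } \epsilon > 0,$$

which is the origin of the $M > c(\lambda, K_0) \log N / \lambda^2$ scaling in (102) below.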
D.2.1 Quadratic loss
In this case, from (70), we can obtain an analytic result for the relevant quantities as follows

(100)
(101)

On the other hand, from the discussion in Appendix D.1, the recall rate approaches one as $N \to \infty$ when $0 < \lambda < \tanh(K_0)$. Overall, for a RR graph $\mathcal{G}_{N,d,K_0}$ with degree $d$ and coupling strength $K_0$, given $M$ i.i.d. samples $\mathcal{D}^M$, using the $\ell_1$-LinR estimator (4) with regularization parameter $0 < \lambda < \tanh(K_0)$, perfect recovery of the graph structure can be achieved as $N \to \infty$ if the number of samples satisfies

$$M > \frac{c(\lambda, K_0)}{\lambda^{2}} \log N, \tag{102}$$

where $c(\lambda, K_0)$ is a value dependent on the regularization parameter $\lambda$ and the coupling strength $K_0$, which can be approximated as:

(103)
D.2.2 Logistic loss
In this case, from (76), the value of the corresponding scaling factor can be computed as

(104)

However, different from the case of the $\ell_1$-LinR estimator, there is no analytic solution, but it can be calculated numerically. It can be seen that, for Ising model selection, the $\ell_1$-LinR estimator differs from the $\ell_1$-LogR estimator only in the value of this scaling factor.
Appendix E Details of the non-asymptotic result for moderate $M, N$
As demonstrated in Appendix A.3, from the replica analysis, both the $\ell_1$-LinR and $\ell_1$-LogR estimators are decoupled, and their asymptotic behavior can be described by two scalar estimators, one for the active set and one for the inactive set. It is desirable to obtain non-asymptotic results for moderate $M, N$. However, it is found that simply inserting finite values of $M, N$ into the EOS does not always lead to good consistency with the experimental results, especially for the Recall when $M$ is small. This can be explained by the derivation of the free energy density: in calculating the energy term, the limit $M \to \infty$ is taken implicitly. As a result, the scalar estimator associated with the active set can only describe the asymptotic performance in the limit $M \to \infty$, and one cannot describe the fluctuating behavior of the estimates in the active set, such as the recall rate, for finite $M$. To characterize the non-asymptotic behavior of the estimates in the active set, we replace the expectation in (64) by the sample average over $M$ samples, and the corresponding estimates are obtained as

(105)

where the samples are drawn i.i.d. as in the original problem. Note that the mean estimates are replaced by the full fluctuating estimates in (105), as we now focus on the finite-size behavior. In the limit $M \to \infty$, the sample average converges to the expectation, and thus (105) is equivalent to (77).
E.1 Quadratic loss

In the case of the quadratic loss, part of the problem admits an analytic solution. Consequently, similar to (71), the result of (105) for the $\ell_1$-LinR estimator becomes

(106)

As the mean estimates are modified as in (106), the corresponding solution to the EOS in (70) also needs to be modified; it can be solved iteratively as sketched in Algorithm 1. The details of a practical implementation of Algorithm 1 are described in the following.

First, in the EOS (22), we need to obtain the auxiliary quantity satisfying the following relation

(107)

which is difficult to solve directly. To obtain it, we introduce an auxiliary variable, by which (107) can be rewritten as

(108)

which can be solved iteratively. Accordingly, the corresponding term in the EOS (22) can be equivalently written in terms of this auxiliary variable.

Second, when solving the EOS (22) iteratively using numerical methods, it is helpful to introduce a small damping factor in each iteration to improve the convergence of the solution.
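A generic damped fixed-point loop of the kind used in Algorithm 1 is sketched below; the update map here is a stand-in for the actual right-hand side of the EOS (22), whose expressions are given above:

```python
import numpy as np

def solve_eos(update, theta0, damping=0.7, tol=1e-10, max_iter=10_000):
    """Damped fixed-point iteration: theta <- (1-eta)*F(theta) + eta*theta.

    `update` is a stand-in for the right-hand side of the EOS (22)."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        theta_new = (1.0 - damping) * update(theta) + damping * theta
        if np.max(np.abs(theta_new - theta)) < tol:
            return theta_new
        theta = theta_new
    return theta  # may not have converged; the caller should check

# Toy usage with a contraction mapping standing in for the true EOS map.
sol = solve_eos(lambda th: np.cos(th), theta0=[0.5])
print(sol)  # ~0.739, the fixed point of cos(x)
```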
E.2 Logistic loss
In the case of the logistic loss, since there is no analytic solution, the result of (105) for the $\ell_1$-LogR estimator becomes

(109)

Similarly to the quadratic-loss case, as the mean estimates are modified as in (109), the corresponding solutions to the EOS in (76) also need to be modified; they can be solved iteratively as shown in Algorithm 3.
Appendix F Eigenvalue Distribution
From the replica analysis presented above, the learning performance depends on the eigenvalue distribution (EVD) of the covariance matrix $C$ of the original Ising model.

There are two issues to be noted. One concerns the formula connecting the performance of the estimator to the spectral density; the other concerns the numerical values of quantities computed from the formula. For the first point, no assumption about the spectral density is needed to obtain the formula itself, and the formula is valid whenever the graph structure is tree-like and the Ising model defined on the graph is in the paramagnetic phase. For the second point, we need the specific form of the spectral density to obtain numerical solutions in general. As a demonstration, we assume the random regular graph with constant coupling strength, for which the spectral density can be obtained analytically, as already known from [7].

In general, it is difficult to obtain this EVD; however, for sparse tree-like graphs such as the RR graph with constant node degree $d$ and sufficiently small coupling strength $K_0$ yielding the paramagnetic state, it can be computed analytically. For this purpose, we express the covariances as

(110)

where the assessment is carried out at zero external field.
In addition, for technical convenience we introduce the Gibbs free energy as
(111)

The definition (111) indicates that the following two relations hold:

(112)

where the evaluations are performed at zero magnetization (which holds under the paramagnetic assumption).

Consequently, we can focus on the computation of the Gibbs free energy to obtain the EVD of $C$. The inverse covariance matrix of a RR graph can be computed from the Hessian of the Gibbs free energy [7, 47, 48] as
(113)

and in matrix form, we have

(114)

where $I$ is an identity matrix of the proper size, and the operations on the matrix are defined component-wise. For the RR graph $\mathcal{G}_{N,d,K_0}$, the coupling matrix $\mathbf{J}^*$ is sparse; therefore, the matrix appearing in (114) also corresponds to a sparse coupling matrix (whose nonzero positions are the same as those of $\mathbf{J}^*$) with constant coupling strength and fixed connectivity $d$, and the corresponding eigenvalue distribution can be calculated as [49]

(115)

From (114), each eigenvalue of the inverse covariance matrix $C^{-1}$ is an explicit function of the corresponding eigenvalue of the coupling matrix:

(116)

which, when combined with (115), readily yields the EVD of $C^{-1}$ as follows:

(117)

Consequently, since the eigenvalues of $C$ are the reciprocals of those of $C^{-1}$, we obtain the EVD of $C$ as follows

(118)
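The analytic EVD (115) can be checked numerically against a sampled RR graph; the sketch below (our own illustration) compares the adjacency spectrum with McKay's law [49], i.e., the coupling-strength-independent case before the rescalings in (116)–(118):

```python
import numpy as np
import networkx as nx

def mckay_density(x, d):
    """McKay's law: limiting eigenvalue density of the adjacency matrix
    of a random d-regular graph, supported on |x| <= 2*sqrt(d-1)."""
    supp = np.abs(x) <= 2 * np.sqrt(d - 1)
    rho = np.zeros_like(x)
    rho[supp] = (d * np.sqrt(4 * (d - 1) - x[supp] ** 2)
                 / (2 * np.pi * (d ** 2 - x[supp] ** 2)))
    return rho

N, d = 2000, 3
A = nx.to_numpy_array(nx.random_regular_graph(d, N, seed=0))
eigs = np.linalg.eigvalsh(A)

# Compare an empirical histogram with the analytic density on a grid.
grid = np.linspace(-2 * np.sqrt(d - 1), 2 * np.sqrt(d - 1), 9)
hist, _ = np.histogram(eigs, bins=grid, density=True)
centers = 0.5 * (grid[:-1] + grid[1:])
print(np.round(hist, 3))
print(np.round(mckay_density(centers, d), 3))
```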
Appendix G Additional Experimental Results
Fig. 7 and Fig. 8 show the full results of the non-asymptotic learning performance prediction for two representative settings. Good agreement between the replica and experimental results is achieved in all cases. As can be seen, there is a negligible difference in Precision and Recall between $\ell_1$-LinR and $\ell_1$-LogR, while the difference in RSS between the two estimators is reduced in the setting of Fig. 8 compared with Fig. 7. In addition, by comparing Fig. 7 and Fig. 8, it can be seen that, under the same setting, as $\lambda$ increases the Precision becomes larger while the Recall becomes smaller, implying a tradeoff in choosing $\lambda$ in practice for Ising model selection with finite $M, N$.
[Figure 7: Non-asymptotic replica predictions versus experimental results of Precision, Recall, and RSS (first setting).]
[Figure 8: Non-asymptotic replica predictions versus experimental results of Precision, Recall, and RSS (second setting).]
Fig. 9 and Fig. 10 show the full results of the critical scaling prediction for two representative settings. For comparison, the results of both $\ell_1$-LinR and $\ell_1$-LogR are shown. It can be seen that, apart from the good agreement between the replica and experimental results, the prediction of the critical scaling value is very accurate.
[Figure 9: Critical scaling results for $\ell_1$-LinR and $\ell_1$-LogR (first setting).]
[Figure 10: Critical scaling results for $\ell_1$-LinR and $\ell_1$-LogR (second setting).]