On the expected number of critical points of
locally isotropic Gaussian random fields
Abstract
We consider locally isotropic Gaussian random fields on the -dimensional Euclidean space for fixed . Using the so-called Gaussian Orthogonally Invariant (GOI) matrices, first studied by Mallows in 1961, which include the celebrated Gaussian Orthogonal Ensemble (GOE), we establish the Kac–Rice representation of the expected number of critical points of non-isotropic Gaussian fields, complementing the isotropic case obtained by Cheng and Schwartzman in 2018. In the limit , we show that such a representation can always be given by GOE matrices, as conjectured by Auffinger and Zeng in 2020.
1 Introduction
Locally isotropic random fields on the -dimensional Euclidean space were introduced by Kolmogorov in 1941 [Ko41] for applications in the statistical theory of turbulence. Since then, this class of stochastic processes has been extensively studied in both physics and mathematics. In particular, locally isotropic Gaussian random fields have been used to model a particle confined in a random potential and serve as a toy model for the elastic manifold. For an incomplete list of the literature, we refer the interested reader to [MP91, En93, Fy04, FS07, FB08, FN12, FLD18, FLD20] for the background in physics and [Kli12, AZ20, AZ22, BBMsd, BBMsd2, XZ22] for the mathematical development.
The goal of this paper is two-fold. For applications to statistical physics, one frequently needs to take the thermodynamic limit , which puts additional restrictions on the class of such Gaussian fields. Using the Kac–Rice formula and the connection to random matrices, Fyodorov, in the seminal work [Fy04], considered the expected number of critical points of isotropic Gaussian random fields on in the asymptotic regime . This is commonly known as landscape complexity (or simply complexity) in statistical physics; see also [FN12, Fy15] for related topics. Recently, Auffinger and Zeng provided a detailed study of the complexity of non-isotropic Gaussian random fields with isotropic increments [AZ20, AZ22]. On the other hand, for applications to statistics and other fields, it is also of interest to consider random fields on a finite-dimensional Euclidean space. Guided by this principle, Cheng and Schwartzman gave a representation of the expected number of critical points of isotropic Gaussian random fields on with fixed [CS18, Theorem 3.5]. Here an apparent gap arises: what about non-isotropic Gaussian fields on for fixed ? We provide several answers to this question. Furthermore, we show that a technical assumption used in [AZ20] is redundant in the large-dimension limit, as conjectured by those authors, while it remains useful for obtaining a representation via matrices from the Gaussian Orthogonal Ensemble (GOE) in finite dimensions.
Let us be more precise. A locally isotropic Gaussian random field is a centered Gaussian process indexed by that satisfies
(1.1)
Here the function is called the structure function of , is the Euclidean norm and . The process is also known as a Gaussian random field with isotropic increments. The condition (1.1) determines the law of up to an additive shift by a Gaussian random variable. Following [Ya87], we recall some basic properties of the structure function. Let denote the set of all -dimensional structure functions and the set of structure functions which belong to for all . Since any -dimensional structure function is necessarily an -dimensional structure function for all integers less than , it is clear that
where the symbol denotes inclusion. In the following, we write for the structure function of a field that can be defined on for all natural numbers and in this case we frequently write . Let us define
Here is the th Bessel function of the first kind, which has the following representation
From here it can be shown that . By [Sch38, Ya57], locally isotropic Gaussian random fields (or equivalently the class ) can be classified into two cases:
1. Isotropic fields. There exists a function such that
(1.2) where has the representation
with a constant and a finite measure on . Clearly, in this case we have . In particular, for the class , we write and it can be represented as
2. Non-isotropic fields with isotropic increments. The structure function can be written as
(1.3) where is a constant and is a -finite measure with
For , the structure function has the following form
(1.4) which is, in the language of Theorem 3.2 below, a Bernstein function with .
These representations provide some information on the signs of the derivatives, provided they are finite. For instance, (1.4) implies that and for . However, due to the oscillatory nature of Bessel functions, we only have and for .
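The oscillation is easy to observe numerically. The sketch below assumes one standard normalization of Schoenberg's basis function, Omega_N(r) = Gamma(N/2) (2/r)^{(N-2)/2} J_{(N-2)/2}(r), which reduces to cos(r) for N = 1 and to sin(r)/r for N = 3; the function name `omega` is ours.

```python
import math
from scipy.special import jv  # Bessel function of the first kind J_nu

def omega(N, r):
    """Schoenberg's basis function for isotropic covariances on R^N
    (assumed normalization): Gamma(N/2) * (2/r)^((N-2)/2) * J_{(N-2)/2}(r)."""
    nu = (N - 2) / 2.0
    return math.gamma(N / 2.0) * (2.0 / r) ** nu * jv(nu, r)

# Known closed forms: Omega_1(r) = cos(r), Omega_3(r) = sin(r)/r.
for r in (0.5, 1.0, 2.0, 5.0):
    assert abs(omega(1, r) - math.cos(r)) < 1e-10
    assert abs(omega(3, r) - math.sin(r) / r) < 1e-10

# The basis function oscillates and takes negative values, e.g.:
assert omega(3, 4.0) < 0   # sin(4)/4 < 0
```

The negative values are exactly why isotropic covariances in a fixed dimension need not be completely monotone, in contrast with the class of structure functions valid in every dimension.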
Let and be Borel subsets. If the random field is twice differentiable, we define
where is the index (or the number of negative eigenvalues) of the Hessian . Under some suitable smoothness conditions in [AT07], Cheng and Schwartzman gave a representation of for the isotropic fields using the Kac–Rice formula in [CS18, Theorem 3.5], where and is the unit-volume ball in . A key ingredient in the representation is what the authors call the Gaussian Orthogonally Invariant (GOI) matrix, which was first studied by Mallows [Ma61]. Following their terminology, an real symmetric random matrix is said to be Gaussian Orthogonally Invariant (GOI) with covariance parameter , denoted by , if its entries are centered Gaussian random variables such that
Note that the distribution of such matrices is invariant under congruence transformation by any orthogonal matrix. Therefore, they share some properties with the GOE matrices. In particular, for , GOI matrices are exactly GOE matrices. Recall that a Gaussian vector is nondegenerate if and only if the covariance matrix of the Gaussian vector is positive definite. Cheng and Schwartzman [CS18] showed that a GOI matrix of size is nondegenerate if and only if . Moreover, for such a nondegenerate GOI matrix, they derived that the density of the ordered eigenvalues is given by
(1.5)
where is the normalization constant. For , one may write a GOI matrix as a GOE matrix plus a random scalar matrix. If , the scalar matrix is independent of the GOE matrix, while if , the scalar matrix is no longer independent. In the limit , or equivalently for structure functions , the covariance parameter is positive, and thus for these structure functions and all dimensions , the Kac–Rice representation of can always be given by GOE matrices. This was the setting in the pioneering works of Fyodorov and Nadal [Fy04, FN12], where the GOE matrices are crucial for the asymptotic analysis. However, if one insists on considering structure functions from for fixed , then an additional restriction is needed in order to obtain a GOE matrix (plus an independent scalar matrix) in the Kac–Rice representation.
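For a nonnegative covariance parameter, the decomposition just mentioned gives a direct sampler: a GOI(c) matrix is a GOE matrix plus sqrt(c) times an independent standard Gaussian scalar matrix. A minimal sketch, assuming the normalization of [CS18] (off-diagonal variance 1/2, diagonal variance 1 + c, covariance c between distinct diagonal entries); the function names are ours.

```python
import numpy as np

def sample_goe(n, rng):
    """GOE normalization with Var(G_ii) = 1 and Var(G_ij) = 1/2 for i != j."""
    A = rng.standard_normal((n, n)) / np.sqrt(2.0)
    return (A + A.T) / np.sqrt(2.0)

def sample_goi(n, c, rng):
    """GOI(c) for c >= 0: GOE plus an independent random scalar matrix."""
    xi = rng.standard_normal()
    return sample_goe(n, rng) + np.sqrt(c) * xi * np.eye(n)

rng = np.random.default_rng(0)
c, n, m = 0.7, 4, 200_000
M = np.array([sample_goi(n, c, rng) for _ in range(m)])

# Empirical checks of the GOI covariance structure:
assert abs(M[:, 0, 0].var() - (1 + c)) < 0.03            # Var(M_11) = 1 + c
assert abs(np.mean(M[:, 0, 0] * M[:, 1, 1]) - c) < 0.03  # Cov(M_11, M_22) = c
assert abs(M[:, 0, 1].var() - 0.5) < 0.03                # Var(M_12) = 1/2
```

For a negative covariance parameter this additive construction is not available, which is the obstruction discussed above.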
Before illustrating similar problems for the non-isotropic Gaussian fields, let us first remark on the relationship between the two cases. In general, the two types of Gaussian random fields are quite different, like the Ornstein–Uhlenbeck process and Brownian motion in dimension . But if is a finite measure in the second case, the non-isotropic field is essentially a shifted isotropic field. Indeed, let be an isotropic field that satisfies
Then we can verify that satisfies
(1.6)
and , which means that is a non-isotropic Gaussian random field with isotropic increments. Conversely, let be a non-isotropic field satisfying (1.6) with some finite measure . Define , where is a centered Gaussian random variable with variance and
One can check that is an isotropic Gaussian random field. Due to the relation , the non-isotropic field has the same critical points and landscape as those of the isotropic field . Furthermore, let be a non-isotropic Gaussian random field satisfying
with some finite measure and constant . In this case, we can split into two independent non-isotropic parts , where satisfies (1.6) and satisfies
Note that we may write for a centered Gaussian random vector with covariance matrix , where is the Euclidean inner product and denotes the identity matrix hereafter. In this case, the field is almost trivial since its gradient is a fixed random vector and we have if and only if . Therefore, when we talk about the non-isotropic fields, we may assume that and the measure is -finite but not finite.
For the non-isotropic fields with isotropic increments, it is expected that the above representation problem is more challenging since the distribution of the field depends on the location variable . Recently, Auffinger and Zeng considered the Kac–Rice representation of for the non-isotropic Gaussian fields with isotropic increments in the limit during their study of the landscape complexity in [AZ20, AZ22], where GOE matrices are indispensable in the analysis, as in the isotropic case. This left the representation problem open in fixed finite dimensions. Moreover, in order to utilize GOE matrices, they assumed a technical condition (see Assumption III below) and verified it for some special cases, or subclasses of . They conjectured that this condition always holds for all structure functions belonging to .
This paper aims to resolve these issues for the non-isotropic Gaussian fields with isotropic increments. In Section 2, we prove the main results of this paper, which display various representations of . The basic tool is the Kac–Rice formula as usual. In general, the representation can be obtained by using GOI matrices (see Theorem 2.6). Furthermore, we may reduce the GOI matrices to GOE matrices in the representation, in the special case of (see Theorem 2.7) or with Assumption III (see Theorem 2.10). These results can be regarded as the non-isotropic analog of those for the isotropic Gaussian fields in [CS18]. In Section 3, we show that for the structure functions in , can always be represented by using GOE matrices when is a shell domain. In other words, the aforementioned technical condition is redundant as conjectured in [AZ20].
2 Representations for non-isotropic Gaussian fields on
2.1 A perturbed GOI matrix model
For clarity of exposition, we introduce a random matrix model which is a special perturbation of the GOI model. We call an real symmetric random matrix Spiked Gaussian Orthogonally Invariant (SGOI) with parameters and , denoted by , if its entries are centered Gaussian random variables such that
(2.1)
Clearly, if and , the matrix is exactly a matrix with .
Lemma 2.1.
Let be an matrix. Then is nondegenerate if and only if and .
Proof.
Since is symmetric and the diagonal entries are independent of the off-diagonal entries, is nondegenerate if and only if the vector of diagonal entries is nondegenerate. By (2.1), the covariance matrix of the vector is
(2.2)
It is straightforward to check that the lower right submatrix of is positive definite if and only if . Note that is positive definite if and only if the determinant of is larger than and the lower right submatrix of is positive definite. By the Schur complement formula, one can show that
which gives the desired result. ∎
Let be an matrix. In general, can be represented in the following form
(2.3)
where is a centered column Gaussian vector with covariance matrix , which is independent of and the GOE matrix . To represent as explicitly as possible, we need to determine the relationship among and the diagonal elements of . Since for ,
where ’s are the diagonal elements of , we find that is a constant for . Let and . The covariance matrix of is
(2.4)
A moment of reflection shows that is nondegenerate if and only if is positive definite. Therefore, we just need to choose and such that
If and we set as in [CS18, (2.2)], we find a sufficient condition for :
(2.5)
If and we set as in [CS18], we will get a more involved condition. For concrete examples, if , we may take and ; if , we may set and .
Note that the distribution of SGOI matrices is not invariant under the orthogonal congruence transformation, which makes it hard to deduce the joint eigenvalue density for this matrix model. Fortunately, the joint eigenvalue density (1.5) of GOI matrices is sufficient for our representation formulas below. By the conditional distribution of Gaussian vectors, we have
(2.6)
where is an GOI matrix with parameter . Since we always have , the conditional distribution (2.1) is invariant for different constructions of SGOI matrices. This is crucial in Theorem 2.6 below, and the freedom to specify and in (2.4) allows us to choose matrix models that are easier to work with in Theorem 2.10. In the following, we will only need the covariances given in (2.2), but not the concrete relationship among and , until deriving Theorem 2.10.
2.2 The nondegeneracy condition
According to [AZ20], we need the following assumption for the model so that the random fields are twice differentiable almost surely.
Assumption I (Smoothness).
For fixed , the function is four times differentiable at 0 and it satisfies
For the non-isotropic Gaussian random fields with isotropic increments, we also need the following assumption, which is a natural assumption for stochastic processes with stationary increments.
Assumption II (Pinning).
We have
Let us recall the covariance structure of the non-isotropic Gaussian fields with isotropic increments.
Lemma 2.2 ([AZ20, Lemma A.1]).
Cheng and Schwartzman [CS18, Proposition 3.3] studied the nondegeneracy condition of the isotropic Gaussian random fields, and they showed that the Gaussian vector is nondegenerate if and only if . The following lemma gives the nondegeneracy condition of the non-isotropic Gaussian random fields with isotropic increments.
Lemma 2.3.
Assume Assumptions I and II hold for the structure function and the associated Gaussian field . Then for any fixed , the Gaussian vector is nondegenerate if and only if for ,
(2.7)
Moreover, if for some , Assumptions I and II hold, and we have for ,
(2.8)
then the Gaussian vector is nondegenerate. Here if , we understand that the indices .
Proof.
Following Lemma 2.2, it is clear that the covariance matrix of the Gaussian vector
is given by
where
is the covariance vector of with ,
is the covariance vector of with ,
is an -dimensional vector, which is the covariance vector of with the Hessian above the diagonal, , and are , , and matrices with all elements equal to zero, respectively. In addition to this,
where is the -dimensional column vector with all elements equal to one hereafter. It is clear that all of and are positive definite matrices. By the eigenvalue interlacing theorem, the matrix has at most one negative eigenvalue. Therefore, the positive definiteness of the matrix is equivalent to . Noting that , we may consider the Schur complement of the block diagonal submatrix in and find
The proof of the first claim is completed by replacing with .
For the second assertion, we need to show that
Notice that
By the elementary inequality , we have
Together with the assumption (2.8), we obtain the desired result. ∎
We remark that the condition \reftagform@2.8 does not depend on the dimension , which makes it easy to apply in the following. Indeed, if it holds for all , then the Gaussian field and its derivatives associated to are nondegenerate, provided belongs to a certain class and Assumptions I and II hold.
2.3 Representation with GOI matrices
Under the nondegeneracy condition (2.7), we now investigate the expected number of critical points (with or without given indices) of non-isotropic Gaussian random fields with isotropic increments.
We follow the idea for the case in [AZ20, AZ22] and list the notation from [AZ20, (3.15)] for later use:
(2.9)
Using Lemma 2.2, we define
and
Then is a centered Gaussian random variable whose variance is defined in (2.9) with replaced by . Notice that both and are independent of . By the Kac–Rice formula [AT07, Theorem 11.2.1],
(2.10)
and similarly
(2.11)
where is the p.d.f. of at .
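As a sanity check of the Kac–Rice counting in a setting where everything is explicit, consider a one-dimensional stationary Gaussian process with covariance rho: the expected number of critical points on [0, T] (the zeros of the derivative) is (T/pi) * sqrt(lambda4/lambda2), where lambda2 = -rho''(0) and lambda4 = rho''''(0). The Monte Carlo sketch below uses a random trigonometric polynomial; it is our illustration, not the construction used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 3
ks = np.arange(1, K + 1)
lam2 = np.mean(ks ** 2.0)   # -rho''(0) for rho(x) = (1/K) sum_k cos(kx)
lam4 = np.mean(ks ** 4.0)   # rho''''(0)
predicted = 2.0 * np.sqrt(lam4 / lam2)   # (T/pi) sqrt(lam4/lam2) with T = 2*pi

x = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
trials, total = 2000, 0
for _ in range(trials):
    a = rng.standard_normal(K)
    b = rng.standard_normal(K)
    # f'(x) for f(x) = (1/sqrt(K)) sum_k (a_k cos(kx) + b_k sin(kx))
    fp = sum(k * (-a[k - 1] * np.sin(k * x) + b[k - 1] * np.cos(k * x))
             for k in ks) / np.sqrt(K)
    # critical points of f = sign changes of f' (with periodic wrap-around)
    total += np.count_nonzero(np.sign(fp) != np.sign(np.roll(fp, -1)))

assert abs(total / trials - predicted) < 0.2   # Monte Carlo vs Kac-Rice
```

With K = 3 the prediction is 2*sqrt(7), roughly 5.29 critical points per period, and the empirical average matches it closely.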
From Lemma 2.2, we note that has the same distribution as . Arguing as in [AZ20, Section 3] and using spherical coordinates, we deduce that
where is an matrix with parameters
(2.12)
and the functions , , , and are given as in (2.9) with replaced by . Similar to (2.3), we may write the shifted SGOI matrix as a block matrix
(2.13)
where
(2.14)
From (2.1), we know
(2.15)
where is an GOI matrix with parameter
(2.16)
Set
(2.17)
where is defined in \reftagform@2.9. Let be the eigenvalues of the matrix . Since the distribution of matrices is invariant under orthogonal congruence transformation, following the same argument as for the GOE matrices, there exists a random orthogonal matrix independent of the unordered eigenvalues such that
(2.18)
Since the law of the Gaussian vector is rotationally invariant, is a centered Gaussian vector with covariance matrix that is independent of ’s. We can rewrite , where is an -dimensional standard Gaussian random vector. We also need the following lemma for the calculation of . Recall that the signature of a symmetric matrix is the number of positive eigenvalues minus that of negative eigenvalues.
Lemma 2.4 ([Laz88, Equation 2]).
Let be a symmetric block matrix, and write its inverse in block form with the same block structure:
Then , with denoting the signature of the matrix .
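Lemma 2.4 computes the signature of a block matrix from the blocks of its inverse. A closely related standard identity, which the Schur-complement manipulations in this section implicitly rely on, is Haynsworth's inertia additivity: In(M) = In(A) + In(D - B^T A^{-1} B) for a symmetric block matrix M = [[A, B], [B^T, D]] with A nonsingular. A numerical sketch (our illustration, not the exact statement of [Laz88]):

```python
import numpy as np

def inertia(S, tol=1e-9):
    """(n_+, n_-, n_0) of a symmetric matrix, read off its eigenvalues."""
    w = np.linalg.eigvalsh(S)
    return (int(np.sum(w > tol)), int(np.sum(w < -tol)),
            int(np.sum(np.abs(w) <= tol)))

rng = np.random.default_rng(2)
X = rng.standard_normal((5, 5))
M = X + X.T                                # generic symmetric matrix
A, B, D = M[:2, :2], M[:2, 2:], M[2:, 2:]
schur = D - B.T @ np.linalg.solve(A, B)    # Schur complement M/A

pA, mA, _ = inertia(A)
pS, mS, _ = inertia(schur)
pM, mM, _ = inertia(M)
assert (pM, mM) == (pA + pS, mA + mS)      # Haynsworth inertia additivity
# The signature sgn = n_+ - n_- is therefore additive as well:
assert (pM - mM) == (pA - mA) + (pS - mS)
```

This additivity is exactly what allows the index computations below to be reduced to the signature of a small block plus that of a Schur complement.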
Let us define
Following Lemma 2.4, we have for ,
for ,
and for ,
Moreover, by (2.15) and (2.18), we deduce
(2.19)
where
Recall that is an matrix with parameters and defined in (2.12) depending on the structure function . Since we will use to deduce the main theorem in this section, it is also necessary to guarantee that is nondegenerate.
Lemma 2.5.
Proof.
According to Lemma 2.1, it suffices to show that (2.7) is equivalent to
where with and defined in (2.9), and we have replaced with in the definitions of , and for simplicity.
If , straightforward calculations show that is equivalent to (2.7). To be precise, multiplying both sides by implies
Note that and
Multiplying both sides by , we get
which is exactly (2.7). Notice that each step of the above derivation is reversible. Therefore, (2.7) is equivalent to , on condition that .
Due to the rotational symmetry of the model, we are interested in the critical points in the shell domain
with critical values in an arbitrary Borel subset . For simplicity, we write . Together with (2.10) and (2.11), using spherical coordinates and writing , we arrive at
(2.20)
(2.21)
where is the area of the -dimensional unit sphere; see [AZ20, (4.1)] for details.
Unlike the isotropic case in [CS18], here it is difficult to find the joint eigenvalue density of the matrix due to the lack of invariance. However, we observe that, conditionally on the first entry of an SGOI() matrix, its lower right submatrix has the same distribution as a GOI matrix for some suitable . We are now ready to state our main result in this section.
Theorem 2.6.
Let be a non-isotropic Gaussian field with isotropic increments. Let be a Borel set and be the shell domain , where . Assume that Assumptions I and II and the nondegeneracy condition (2.7) hold for all . Then we have
and for ,
where
is defined in (2.19), and are given in (2.9), is defined in (2.16), is defined in (2.17), is an -dimensional standard Gaussian random vector independent of ’s, and by convention , . We emphasize that the expectation is taken with respect to the GOI eigenvalues ’s.
Proof.
Recall that , where is an matrix defined in (2.3) with parameters , and . The Schur complement formula implies that
(2.22)
Conditioning on , using (2.15), (2.17) and (2.3), we obtain
(2.23)
where and is the GOI matrix with parameter defined in (2.16). By (2.18) and the independence of and ’s, we deduce
(2.24)
Combining (2.20), (2.23) and (2.24), after some simplifications we obtain
which is the desired result for the first part.
Let us turn to the second assertion. For , combining (2.19), (2.3), (2.23) and (2.24) gives
Plugging the above equation into (2.21) yields
where for .
Similarly, for we have
and
where . We omit the proof for index here, since it is similar to the case . ∎
2.4 Representation with GOE matrices
For the large-dimension asymptotic analysis, it is desirable to write the GOI matrix in the above representation as a sum of a GOE matrix and an independent scalar matrix. Most of the results in this subsection follow from arguments similar to those in [AZ20, AZ22]. First, as observed in these works, if the critical values are not restricted (i.e., ), there is no need to consider the conditional distribution of the Hessian, and it is straightforward to employ GOE matrices for the Kac–Rice representation.
Theorem 2.7.
The proof is omitted here since it is the same as for [AZ20, Theorem 1.1] and [AZ22, Theorems 1.1, 1.2]. As a comparison to the isotropic case, we give an example below, which is analogous to [CS18, Example 3.8].
Example 2.8.
The situation is more involved when the critical values are constrained. The following condition for any fixed turns out to be sufficient for using GOE matrices in the representation.
Assumption III.
This condition was used in [AZ20] for the structure functions in the class , and in that case we trivially have for any . In fact, we will show in the next section that Assumption III holds for all structure functions in . A natural question is the relationship between Assumption III and the nondegeneracy of the field and its derivatives. The following result shows that Assumption III is stronger than the nondegeneracy condition (2.7). Recall the parameter with replaced by from (2.16). Assumption III also implies .
Lemma 2.9.
Proof.
First of all, (2.25) and (2.26) imply that the denominator of is smaller than . To prove , it remains to show that
(2.27)
which is equivalent to the inequality (2.8) and thus further yields the nondegeneracy condition by Lemma 2.3. Since , we have
If , then (2.25) implies that
If , we must have and (2.26) implies that
Using this lemma, we may deduce the GOE representation from Theorem 2.6 by extracting GOE eigenvalues and an independent Gaussian random variable from the GOI matrix. This is equivalent to considering the conditional distribution of given below in (2.29), which brings in an extra random variable in (2.31). For transparency, and to provide a different perspective from Theorem 2.6, here we give formulas that are suitable for asymptotic analysis, following [AZ20, AZ22]. Since Assumption III implies , according to [AZ20, Section 3], the shifted SGOI matrix in (2.13) can be represented in the following form, which corresponds to setting and in (2.4); in this case the condition (2.5) is equivalent to (2.27):
(2.28)
where with being independent standard Gaussian random variables,
is a centered column Gaussian vector with covariance matrix , which is independent of and the GOE matrix . The conditional distribution of given is
(2.29)
where
(2.30)
Define the random variable
(2.31)
where are independent standard Gaussian random variables and are the ordered eigenvalues of the GOE matrix . By the above analysis, given Assumption III, we can express the expected number of critical points (with or without given indices) of non-isotropic Gaussian random fields with isotropic increments using the eigenvalue density of the GOE matrix. We omit the proof here since it follows from an easy adaptation of the arguments in [AZ20, Section 4] and [AZ22, Section 3.1].
Theorem 2.10.
Let be a non-isotropic Gaussian field with isotropic increments. Let be a Borel set and be the shell domain , where . Assume Assumptions I, II, and III. Then we have
for ,
where and are given in (2.9), is given in (2.4), is defined with the GOE eigenvalues and the independent standard Gaussian random variables , in (2.31), is the c.d.f. of the standard Gaussian random variable, and by convention , .
3 Representation for fields with structure functions in
In this section, we show that if a structure function has the representation (1.4), then Assumption III always holds.
Lemma 3.1.
Proof.
Since has the form (1.4), the function is positive and strictly convex on . Together with the mean value theorem, for any , , which further implies
(3.1)
If the inequality (2.25) holds, by (3.1), we deduce that , which yields the inequality (2.26). On the other hand, for any , note that
Thus, to prove (2.8), we only need to show
which follows from (3.1) by observing that . ∎
Recall that a function is a Bernstein function if is of class for all and for all and . For the Bernstein functions, we have the following theorem.
Theorem 3.2 ([SSV12, Theorem 3.2]).
A function is a Bernstein function if and only if it admits the representation
where and is a measure on satisfying . In particular, the triplet determines uniquely and vice versa.
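Theorem 3.2 can be probed symbolically: a smooth nonnegative function is Bernstein exactly when its derivatives satisfy (-1)^(n-1) f^(n) >= 0 for all n >= 1. The sketch below checks this criterion for two classical examples, f(x) = 1 - e^{-x} (whose triplet in the representation above is a = 0, b = 0, and the unit point mass at 1) and g(x) = sqrt(x); the choice of test functions is ours.

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# f(x) = 1 - e^{-x}: Bernstein, arising from the measure delta_1.
f = 1 - sp.exp(-x)
for n in range(1, 6):
    # Bernstein criterion: (-1)^(n-1) f^(n)(x) >= 0 for all n >= 1.
    deriv = sp.simplify((-1) ** (n - 1) * sp.diff(f, x, n))
    assert sp.simplify(deriv - sp.exp(-x)) == 0   # here it equals e^{-x} > 0

# g(x) = sqrt(x): another classical Bernstein function.
g = sp.sqrt(x)
for n in range(1, 6):
    val = ((-1) ** (n - 1) * sp.diff(g, x, n)).subs(x, sp.Rational(1, 2))
    assert val.is_positive
```

Equivalently, the criterion says that the first derivative of a Bernstein function is completely monotone, which is the form used in the proofs below.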
In the paper [AZ20], the authors conjectured that all Bernstein functions of the form defined in (1.4) satisfy Assumption III. The following result shows that this is indeed the case. Thanks to Lemma 3.1, it remains to prove the inequality (2.25).
Theorem 3.3.
Proof.
It suffices to show
(3.2)
(3.3)
for any . Indeed, adding the above two inequalities together implies that for all ,
which is equivalent to the desired conclusion.
To prove (3.2), notice that and
Therefore, it suffices to show that for ,
or equivalently, . But from the definition of , we have for ,
where in the last inequality we used the fact that for any .
On the other hand, Assumption III is not expected to hold for all structure functions in the class with fixed , since the Bessel functions in the representation (1.3) are oscillatory. The following example gives a family of functions in satisfying Assumption III.
Example 3.4.
For any , the function
is in and satisfies Assumption III. For simplicity, we denote
Then for . Let us verify that satisfies the desired properties.
(1) We first show that . Indeed, it is well known that is a Bernstein function [SSV12] with , which yields . For , we have . If we set , then
It follows that and thus . To check , we compute
and
Therefore, , as . From here we can find an such that , which indicates .
(2) We check . This would follow from for all . In fact, from calculus we know for , which implies for all . Together with , we have for . And for , since , we obtain
(3) With the same decomposition as in the proof of Theorem 3.3, (2.25) follows from
(3.4)
(3.5)
Notice that (3.4) is equivalent to . One can check that for , which further implies for . Since , we have for . Therefore,
which implies and (3.4) for .
Since , (3.5) is equivalent to . Since , it suffices to prove . By Taylor expansion, we know for . It follows that for . Together with , we have
for . If , we have
which implies that for ,
(3.6)
For , since , we find
which implies that
(3.7)
for . Now (3.5) follows from (3.6) and (3.7).
Remark 3.1.
In the isotropic case, it was shown in [CS18, Section 3.3] that one can use GOE in the Kac–Rice representation if and only if . In particular, we have if is positive definite on for all . Consider for . One can check directly but is not completely monotone. It follows from Schoenberg’s theorem [Sch38] that is not positive definite on for large enough.
The above discussion shows that for locally isotropic Gaussian random fields the class is sufficient for the GOE representation but not necessary.
Acknowledgements.
We are grateful to the anonymous referees for careful reading and many constructive suggestions which have significantly improved the paper.