
On the expected number of critical points of
locally isotropic Gaussian random fields

Hao Xu Department of Mathematics, University of Macau, [email protected].    Haoran Yang School of Mathematical Sciences, Peking University, [email protected].    Qiang Zeng Department of Mathematics, University of Macau, [email protected], research partially supported by SRG 2020-00029-FST and FDCT 0132/2020/A3.
Abstract

We consider locally isotropic Gaussian random fields on the $N$-dimensional Euclidean space for fixed $N$. Using the so-called Gaussian Orthogonally Invariant matrices, first studied by Mallows in 1961, which include the celebrated Gaussian Orthogonal Ensemble (GOE), we establish the Kac–Rice representation of the expected number of critical points of non-isotropic Gaussian fields, complementing the isotropic case obtained by Cheng and Schwartzman in 2018. In the limit $N=\infty$, we show that such a representation can always be given by GOE matrices, as conjectured by Auffinger and Zeng in 2020.

1 Introduction

Locally isotropic random fields on the $N$-dimensional Euclidean space $\mathbb{R}^{N}$ were introduced by Kolmogorov in 1941 [Ko41] for applications in the statistical theory of turbulence. Since then, this class of stochastic processes has been extensively studied in both physics and mathematics. In particular, locally isotropic Gaussian random fields have been used to model a particle confined in a random potential and serve as a toy model for the elastic manifold. For an incomplete list of references, we refer the interested reader to [MP91, En93, Fy04, FS07, FB08, FN12, FLD18, FLD20] for the background in physics and [Kli12, AZ20, AZ22, BBMsd, BBMsd2, XZ22] for the mathematical development.

The goal of this paper is two-fold. For applications to statistical physics, one frequently needs to send $N\to\infty$ for the thermodynamic limit, which puts additional restrictions on the class of such Gaussian fields. Using the Kac–Rice formula and the connection to random matrices, in the seminal work [Fy04] Fyodorov considered the expected number of critical points of isotropic Gaussian random fields on $\mathbb{R}^{N}$ in the asymptotic regime $N\to\infty$. This is commonly known as landscape complexity (or simply complexity) in statistical physics; see also [FN12, Fy15] for related topics. Recently, Auffinger and Zeng provided a detailed study of the complexity of non-isotropic Gaussian random fields with isotropic increments [AZ20, AZ22]. On the other hand, for applications to statistics and other fields, it is also of interest to consider random fields on a finite-dimensional Euclidean space. Guided by this principle, Cheng and Schwartzman gave a representation of the expected number of critical points of isotropic Gaussian random fields on $\mathbb{R}^{N}$ with fixed $N$ [CS18, Theorem 3.5]. Here an apparent gap arises: what about non-isotropic Gaussian fields on $\mathbb{R}^{N}$ with $N$ fixed? We provide several answers to this question. Furthermore, we show that a technical assumption used in [AZ20] is redundant in the large $N$ limit, as conjectured by those authors, while it remains useful for obtaining a representation via matrices from the Gaussian Orthogonal Ensemble (GOE) in finite dimensions.

Let us be more precise. A locally isotropic Gaussian random field $H_{N}=\{H_{N}(x):x\in\mathbb{R}^{N}\}$ is a centered Gaussian process indexed by $\mathbb{R}^{N}$ that satisfies

\mathbb{E}\left[\left(H_{N}(x)-H_{N}(y)\right)^{2}\right]=D_{N}\left(\|x-y\|^{2}\right),\quad x,y\in\mathbb{R}^{N}. (1.1)

Here the function $D_{N}\colon\mathbb{R}_{+}\to\mathbb{R}_{+}$ is called the structure function of $H_{N}$, $\|\cdot\|$ is the Euclidean norm and $\mathbb{R}_{+}=[0,\infty)$. The process $H_{N}$ is also known as a Gaussian random field with isotropic increments. The condition (1.1) determines the law of $H_{N}$ up to an additive shift by a Gaussian random variable. Following [Ya87], we recall some basic properties of the structure function. Let $\mathcal{D}_{N}$ denote the set of all $N$-dimensional structure functions and $\mathcal{D}_{\infty}$ the set of structure functions which belong to $\mathcal{D}_{N}$ for all $N\in\mathbb{N}$. Since any $N$-dimensional structure function is necessarily an $M$-dimensional structure function for all integers $M$ less than $N$, it is clear that

\mathcal{D}_{1}\supset\mathcal{D}_{2}\supset\dots\supset\mathcal{D}_{N}\supset\dots\supset\mathcal{D}_{\infty},

where the symbol $\supset$ denotes inclusion. In the following, we write $D_{\infty}\in\mathcal{D}_{\infty}$ for the structure function of a field that can be defined on $\mathbb{R}^{N}$ for all natural numbers $N\in\mathbb{N}$, and in this case we frequently write $N=\infty$. Let us define

\Lambda_{N}(x)=\begin{cases}\cos x,&N=1,\\ 2^{(N-2)/2}\Gamma\left(\frac{N}{2}\right)\frac{J_{(N-2)/2}(x)}{x^{(N-2)/2}}=1-\frac{x^{2}}{2N}+\frac{x^{4}}{2\cdot 4\cdot N(N+2)}-\cdots,&N=2,3,\dots,\\ e^{-x^{2}},&N=\infty.\end{cases}

Here $J_{N}$ is the $N$th Bessel function of the first kind, which has the series representation

J_{N}(x)=\sum_{m=0}^{\infty}\frac{(-1)^{m}}{m!\,\Gamma(m+N+1)}\left(\frac{x}{2}\right)^{2m+N}.
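As a quick sanity check on these formulas, the following minimal sketch (our addition for illustration only, assuming Python with numpy and scipy is available) evaluates $\Lambda_{N}$ through the Bessel-function expression, compares it with a truncation of the series above, and previews the Gaussian limit $\Lambda_{N}(\sqrt{2N}\,x)\to e^{-x^{2}}$ discussed next.

```python
import numpy as np
from scipy.special import jv, gamma

def Lambda(N, x):
    # Bessel-function form of Lambda_N for N >= 2 and x > 0.
    nu = (N - 2) / 2
    return 2**nu * gamma(N / 2) * jv(nu, x) / x**nu

def Lambda_series(N, x, terms=30):
    # Truncated power series 1 - x^2/(2N) + x^4/(2*4*N*(N+2)) - ...
    total, term = 0.0, 1.0
    for m in range(terms):
        total += term
        term *= -x**2 / ((2 * m + 2) * (N + 2 * m))
    return total

x = 1.3
for N in (2, 3, 5):
    print(N, Lambda(N, x), Lambda_series(N, x))        # the two columns should agree

# Gaussian limit: Lambda_N(sqrt(2N) x) approaches exp(-x^2) as N grows.
for N in (10, 50, 200):
    print(N, Lambda(N, np.sqrt(2 * N) * x), np.exp(-x**2))
```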

From here it can be shown that $\Lambda_{N}(\sqrt{2N}\,x)\to e^{-x^{2}}$ as $N\to\infty$. By [Sch38, Ya57], locally isotropic Gaussian random fields (or equivalently the class $\mathcal{D}_{N}$) can be classified into two cases:

  1. Isotropic fields. There exists a function $B_{N}\colon\mathbb{R}_{+}\to\mathbb{R}$ such that

     \mathbb{E}\left[H_{N}(x)H_{N}(y)\right]=B_{N}\left(\|x-y\|^{2}\right), (1.2)

     where $B_{N}$ has the representation

     B_{N}(r)=C_{N}+\int_{(0,\infty)}\Lambda_{N}(\sqrt{rt})\,\nu_{N}(\mathrm{d}t),

     with a constant $C_{N}\in\mathbb{R}_{+}$ and a finite measure $\nu_{N}$ on $(0,\infty)$. Clearly, in this case we have $D_{N}(r)=2(B_{N}(0)-B_{N}(r))$. In particular, for the class $\mathcal{D}_{\infty}$, we write $B(r)=B_{\infty}(r)$ and it can be represented as

     B(r)=C+\int_{(0,\infty)}e^{-rt}\,\nu(\mathrm{d}t).
  2. Non-isotropic fields with isotropic increments. The structure function $D_{N}$ can be written as

     D_{N}(r)=\int_{(0,\infty)}\left(1-\Lambda_{N}(\sqrt{rt})\right)\nu_{N}(\mathrm{d}t)+A_{N}r, (1.3)

     where $A_{N}\in\mathbb{R}_{+}$ is a constant and $\nu_{N}$ is a $\sigma$-finite measure with

     \int_{(0,\infty)}\frac{t}{1+t}\,\nu_{N}(\mathrm{d}t)<\infty.

     For $N=\infty$, the structure function $D(r):=D_{\infty}(r)$ has the form

     D(r)=\int_{(0,\infty)}\left(1-e^{-rt}\right)\nu(\mathrm{d}t)+Ar, (1.4)

     which is, in the language of Theorem 3.2 below, a Bernstein function with $D(0)=0$.

These representations provide some information on the signs of the derivatives, provided they are finite. For instance, (1.4) implies that $D'(r)\geq 0$ and $D''(r)\leq 0$ for $r\geq 0$. However, due to the oscillatory nature of Bessel functions, for finite $N\in\mathbb{N}$ we only have $D_{N}'(0)\geq 0$ and $D_{N}''(0)\leq 0$.

Let $E\subset\mathbb{R}$ and $T_{N}\subset\mathbb{R}^{N}$ be Borel subsets. If the random field is twice differentiable, we define

\operatorname{Crt}_{N}\left(E,T_{N}\right)=\#\left\{x\in T_{N}:\nabla H_{N}(x)=0,\,H_{N}(x)\in E\right\},
\operatorname{Crt}_{N,k}\left(E,T_{N}\right)=\#\left\{x\in T_{N}:\nabla H_{N}(x)=0,\,H_{N}(x)\in E,\,i\left(\nabla^{2}H_{N}(x)\right)=k\right\},\quad k=0,1,\dots,N,

where $i(\nabla^{2}H_{N}(x))$ is the index (or the number of negative eigenvalues) of the Hessian $\nabla^{2}H_{N}(x)$. Under some suitable smoothness conditions in [AT07], Cheng and Schwartzman gave a representation of $\mathbb{E}[\operatorname{Crt}_{N,k}([u,\infty),T)]$ for isotropic fields using the Kac–Rice formula in [CS18, Theorem 3.5], where $u\in[-\infty,\infty)$ and $T$ is the unit-volume ball in $\mathbb{R}^{N}$. A key ingredient in the representation is what the authors call the Gaussian Orthogonally Invariant (GOI) matrix, which was first studied by Mallows [Ma61]. Following their terminology, an $N\times N$ real symmetric random matrix $M$ is said to be Gaussian Orthogonally Invariant (GOI) with covariance parameter $c$, denoted by $\operatorname{GOI}(c)$, if its entries $M_{ij}$, $1\leq i,j\leq N$, are centered Gaussian random variables such that

\mathbb{E}\left[M_{ij}M_{kl}\right]=\frac{1}{2}\left(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\right)+c\,\delta_{ij}\delta_{kl}.

Note that the distribution of such matrices is invariant under congruence transformation by any orthogonal matrix. Therefore, they share some properties of the GOE matrices. In particular, for $c=0$, $\operatorname{GOI}(0)$ matrices are exactly GOE matrices. Recall that a Gaussian vector is nondegenerate if and only if its covariance matrix is positive definite. Cheng and Schwartzman [CS18] showed that a $\operatorname{GOI}(c)$ matrix of size $N\times N$ is nondegenerate if and only if $c>-\frac{1}{N}$. Moreover, for this nondegenerate $\operatorname{GOI}(c)$ matrix, they derived that the density of the ordered eigenvalues is given by

f_{c}\left(\lambda_{1},\ldots,\lambda_{N}\right)=\frac{1}{K_{N}\sqrt{1+Nc}}\exp\left\{-\frac{1}{2}\sum_{i=1}^{N}\lambda_{i}^{2}+\frac{c}{2(1+Nc)}\Big(\sum_{i=1}^{N}\lambda_{i}\Big)^{2}\right\}\prod_{1\leq i<j\leq N}\left|\lambda_{i}-\lambda_{j}\right|\,\mathbf{1}\left\{\lambda_{1}\leq\cdots\leq\lambda_{N}\right\}, (1.5)

where $K_{N}=2^{N/2}\prod_{i=1}^{N}\Gamma\left(\frac{i}{2}\right)$ is the normalization constant. For $c\neq 0$, one may write a $\operatorname{GOI}(c)$ matrix as a GOE matrix plus a random scalar matrix. If $c>0$ the scalar matrix is independent of the GOE matrix, while if $c<0$ the scalar matrix is no longer independent. In the limit $N\to\infty$, or equivalently for structure functions $D\in\mathcal{D}_{\infty}$, the covariance parameter $c$ is positive, and thus for these structure functions and all dimensions $N\in\mathbb{N}$, the Kac–Rice representation of $\mathbb{E}[\operatorname{Crt}_{N,k}(E,T_{N})]$ can always be given by GOE matrices. This was the setting in the pioneering works of Fyodorov and Nadal [Fy04, FN12], where the GOE matrices are crucial for the asymptotic analysis. However, if one insists on considering structure functions from $\mathcal{D}_{N}$ for fixed $N$, then an additional restriction is needed in order to get a GOE matrix (plus an independent scalar matrix) in the Kac–Rice representation.
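To make the decomposition for $c>0$ concrete, the following minimal sketch (our addition for illustration, assuming Python with numpy; the function names are ours) samples a $\operatorname{GOI}(c)$ matrix as a GOE matrix plus an independent random scalar matrix $\sqrt{c}\,g\,\mathbf{I}_{N}$ and checks the covariance $\mathbb{E}[M_{ij}M_{kl}]$ by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_goe(N):
    # GOE normalization matching the text: off-diagonal variance 1/2, diagonal variance 1.
    A = rng.normal(size=(N, N))
    return (A + A.T) / 2.0

def sample_goi(N, c):
    # For c >= 0: GOI(c) = GOE + sqrt(c) * g * I with g ~ N(0,1) independent of the GOE part.
    return sample_goe(N) + np.sqrt(c) * rng.normal() * np.eye(N)

N, c, reps = 4, 0.7, 100_000
samples = np.array([sample_goi(N, c) for _ in range(reps)])

# Empirical checks of E[M_ij M_kl] = (delta_ik delta_jl + delta_il delta_jk)/2 + c delta_ij delta_kl.
print(np.mean(samples[:, 0, 0] ** 2), "expected", 1 + c)              # Var(M_11)
print(np.mean(samples[:, 0, 0] * samples[:, 1, 1]), "expected", c)    # Cov(M_11, M_22)
print(np.mean(samples[:, 0, 1] ** 2), "expected", 0.5)                # Var(M_12)
```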

Before illustrating similar problems for the non-isotropic Gaussian fields, let us first remark on the relationship between the two cases. In general, the two types of Gaussian random fields are quite different, just as the Ornstein–Uhlenbeck process and Brownian motion differ in dimension $1$. But if $\nu_{N}$ is a finite measure in the second case, the non-isotropic field is essentially a shifted isotropic field. Indeed, let $H_{N}(x)$ be an isotropic field that satisfies

\mathbb{E}\left[H_{N}(x)H_{N}(y)\right]=\frac{1}{2}\int_{(0,\infty)}\Lambda_{N}\left(\sqrt{\|x-y\|^{2}t}\right)\nu_{N}(\mathrm{d}t).

Then we can verify that $\widehat{H}_{N}(x):=H_{N}(x)-H_{N}(0)$ satisfies

\mathbb{E}\left[\left(\widehat{H}_{N}(x)-\widehat{H}_{N}(y)\right)^{2}\right]=\int_{(0,\infty)}\left(1-\Lambda_{N}\left(\sqrt{\|x-y\|^{2}t}\right)\right)\nu_{N}(\mathrm{d}t), (1.6)

and $\widehat{H}_{N}(0)=0$, which means that $\widehat{H}_{N}(x)$ is a non-isotropic Gaussian random field with isotropic increments. Conversely, let $\widehat{H}_{N}(x)$ be a non-isotropic field satisfying (1.6) with some finite measure $\nu_{N}$. Define $H_{N}(x)=\widehat{H}_{N}(x)+H_{N}(0)$, where $H_{N}(0)$ is a centered Gaussian random variable with variance $\mathbb{E}[H_{N}(0)^{2}]=\frac{1}{2}|\nu_{N}|$ and

\mathbb{E}\left[\widehat{H}_{N}(x)H_{N}(0)\right]=\frac{1}{2}\int_{(0,\infty)}\left(\Lambda_{N}\left(\sqrt{\|x\|^{2}t}\right)-1\right)\nu_{N}(\mathrm{d}t).

One can check that $H_{N}(x)$ is an isotropic Gaussian random field. Due to the relation $H_{N}(x)-H_{N}(0)=\widehat{H}_{N}(x)$, the non-isotropic field $\widehat{H}_{N}(x)$ has the same critical points and landscape as those of the isotropic field $H_{N}$. Furthermore, let $\widetilde{H}_{N}(x)$ be a non-isotropic Gaussian random field satisfying

\mathbb{E}\left[\left(\widetilde{H}_{N}(x)-\widetilde{H}_{N}(y)\right)^{2}\right]=\int_{(0,\infty)}\left(1-\Lambda_{N}\left(\sqrt{\|x-y\|^{2}t}\right)\right)\nu_{N}(\mathrm{d}t)+A_{N}\|x-y\|^{2},

with some finite measure $\nu_{N}$ and constant $A_{N}>0$. In this case, we can split $\widetilde{H}_{N}(x)$ into two independent non-isotropic parts $\widetilde{H}_{N}(x)=\widehat{H}_{N}(x)+\overline{H}_{N}(x)$, where $\widehat{H}_{N}(x)$ satisfies (1.6) and $\overline{H}_{N}(x)$ satisfies

\mathbb{E}\left[\left(\overline{H}_{N}(x)-\overline{H}_{N}(y)\right)^{2}\right]=A_{N}\|x-y\|^{2}.

Note that we may write $\overline{H}_{N}(x)=\langle x,\xi\rangle$ for a centered Gaussian random vector $\xi=(\xi_{1},\xi_{2},\dots,\xi_{N})$ with covariance matrix $A_{N}\mathbf{I}_{N}$, where $\langle\cdot,\cdot\rangle$ is the Euclidean inner product and $\mathbf{I}_{N}$ denotes the $N\times N$ identity matrix hereafter. In this case, the field $\overline{H}_{N}$ is almost trivial since its gradient is a fixed random vector, and we have $\nabla\widetilde{H}_{N}(x)=0$ if and only if $\nabla\widehat{H}_{N}(x)=-\xi$. Therefore, when we talk about non-isotropic fields, we may assume that $\lim_{r\to\infty}D(r)=\infty$ and that the measure $\nu_{N}$ is $\sigma$-finite but not finite.

For non-isotropic fields with isotropic increments, it is expected that the above representation problem is more challenging, since the distribution of the field $H_{N}(x)$ depends on the location $x\in\mathbb{R}^{N}$. Recently, Auffinger and Zeng considered the Kac–Rice representation of $\mathbb{E}[\operatorname{Crt}_{N,k}(E,T_{N})]$ for non-isotropic Gaussian fields with isotropic increments in the limit $N\to\infty$ during their study of the landscape complexity in [AZ20, AZ22], where GOE matrices are indispensable in the analysis, as in the isotropic case. This leaves the representation problem open in fixed finite dimensions. Moreover, in order to utilize GOE matrices, they assumed a technical condition (see Assumption III below) and verified it for some special cases or subclasses of $\mathcal{D}_{\infty}$. They conjectured that this condition always holds for all structure functions belonging to $\mathcal{D}_{\infty}$.

This paper aims to resolve these issues for non-isotropic Gaussian fields with isotropic increments. In Section 2, we prove the main results of this paper, which display various representations of $\mathbb{E}[\operatorname{Crt}_{N,k}(E,T_{N})]$. The basic tool is the Kac–Rice formula, as usual. In general, the representation can be obtained by using $\operatorname{GOI}(c)$ matrices (see Theorem 2.6). Furthermore, we may reduce the $\operatorname{GOI}(c)$ matrices to GOE matrices in the representation, in the special case $E=\mathbb{R}$ (see Theorem 2.7) or under Assumption III (see Theorem 2.10). These results can be regarded as the non-isotropic analog of those for the isotropic Gaussian fields in [CS18]. In Section 3, we show that for structure functions in $\mathcal{D}_{\infty}$, $\mathbb{E}[\operatorname{Crt}_{N,k}(E,T_{N})]$ can always be represented using GOE matrices when $T_{N}$ is a shell domain. In other words, the aforementioned technical condition is redundant, as conjectured in [AZ20].

2 Representations for non-isotropic Gaussian fields on $\mathbb{R}^{N}$

2.1 A perturbed GOI matrix model

For clarity of exposition, we introduce a random matrix model which is a special perturbation of the GOI model. We call an $N\times N$ real symmetric random matrix $M$ Spiked Gaussian Orthogonally Invariant (SGOI) with parameters $d_{1},d_{2}$ and $d_{3}$, denoted by $\operatorname{SGOI}(d_{1},d_{2},d_{3})$, if its entries $M_{ij}$, $1\leq i,j\leq N$, are centered Gaussian random variables such that

\mathbb{E}\left[M_{ij}M_{kl}\right]=\frac{1}{2}\left(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\right)+d_{1}\delta_{ij}\delta_{kl}+d_{2}(\delta_{i1}\delta_{j1}\delta_{kl}+\delta_{k1}\delta_{l1}\delta_{ij})+d_{3}\delta_{i1}\delta_{j1}\delta_{k1}\delta_{l1}. (2.1)

Clearly, if $d_{2}=0$ and $d_{3}=0$, the $\operatorname{SGOI}(d_{1},0,0)$ matrix is exactly a $\operatorname{GOI}(c)$ matrix with $c=d_{1}$.

Lemma 2.1.

Let $M$ be an $N\times N$ $\operatorname{SGOI}(d_{1},d_{2},d_{3})$ matrix. Then $M$ is nondegenerate if and only if $d_{1}>-\frac{1}{N-1}$ and $1+d_{3}+\frac{d_{1}+2d_{2}-(N-1)d_{2}^{2}}{1+(N-1)d_{1}}>0$.

Proof.

Since $M$ is symmetric and the diagonal entries are independent of the off-diagonal entries, $M$ is nondegenerate if and only if the vector of diagonal entries $(M_{11},M_{22},\dots,M_{NN})$ is nondegenerate. By (2.1), the covariance matrix of the vector $(M_{11},M_{22},\dots,M_{NN})$ is

\Theta=\begin{pmatrix}1+d_{1}+2d_{2}+d_{3}&d_{1}+d_{2}&\dots&d_{1}+d_{2}\\ d_{1}+d_{2}&d_{1}+1&\dots&d_{1}\\ \vdots&\vdots&\ddots&\vdots\\ d_{1}+d_{2}&d_{1}&\dots&d_{1}+1\end{pmatrix}. (2.2)

It is straightforward to check that the lower right $(N-1)\times(N-1)$ submatrix of $\Theta$ is positive definite if and only if $d_{1}>-\frac{1}{N-1}$. Note that $\Theta$ is positive definite if and only if its determinant is positive and the lower right $(N-1)\times(N-1)$ submatrix of $\Theta$ is positive definite. By the Schur complement formula, one can show that

\det\Theta=[1+(N-1)d_{1}]\left[1+d_{3}+\frac{d_{1}+2d_{2}-(N-1)d_{2}^{2}}{1+(N-1)d_{1}}\right],

which gives the desired result. ∎
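A minimal numerical sketch of Lemma 2.1 (our addition for illustration, assuming Python with numpy; the parameter values below are arbitrary) builds the covariance matrix $\Theta$ of the diagonal entries from (2.2) and compares $\det\Theta$ with the product formula above.

```python
import numpy as np

def theta_matrix(N, d1, d2, d3):
    # Covariance matrix (2.2) of the diagonal entries (M_11, ..., M_NN).
    T = np.full((N, N), d1) + np.eye(N)   # d1 everywhere, plus 1 on the diagonal
    T[0, 1:] = T[1:, 0] = d1 + d2         # first row/column (excluding the corner)
    T[0, 0] = 1 + d1 + 2 * d2 + d3
    return T

N, d1, d2, d3 = 5, 0.3, -0.2, 0.4
T = theta_matrix(N, d1, d2, d3)
factor = 1 + d3 + (d1 + 2 * d2 - (N - 1) * d2**2) / (1 + (N - 1) * d1)
lhs = np.linalg.det(T)
rhs = (1 + (N - 1) * d1) * factor
print(lhs, rhs)   # the two values should agree up to rounding

# Lemma 2.1: M is nondegenerate iff d1 > -1/(N-1) and the second factor is positive.
print(np.all(np.linalg.eigvalsh(T) > 0), d1 > -1 / (N - 1) and factor > 0)
```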

Let $M$ be an $N\times N$ $\operatorname{SGOI}(d_{1},d_{2},d_{3})$ matrix. In general, $M$ can be represented in the following form:

M=\begin{pmatrix}\zeta_{1}&\xi^{\mathsf{T}}\\ \xi&\operatorname{GOI}(d_{1})\end{pmatrix}=\begin{pmatrix}\zeta_{1}&\xi^{\mathsf{T}}\\ \xi&\mathrm{GOE}_{N-1}+\zeta_{2}\mathbf{I}_{N-1}\end{pmatrix}, (2.3)

where $\xi$ is a centered Gaussian column vector with covariance matrix $\frac{1}{2}\mathbf{I}_{N-1}$, which is independent of $\zeta_{1},\zeta_{2}$ and the $(N-1)\times(N-1)$ GOE matrix $\mathrm{GOE}_{N-1}$. To represent $M$ as explicitly as possible, we need to determine the relationship among $\zeta_{1},\zeta_{2}$ and the diagonal elements of $\mathrm{GOE}_{N-1}$. Since for $i=1,\dots,N-1$,

1+d_{1}=\mathrm{Var}[(\mathrm{GOE}_{N-1})_{ii}+\zeta_{2}]=1+2\,\mathrm{Cov}[(\mathrm{GOE}_{N-1})_{ii},\zeta_{2}]+\mathrm{Var}(\zeta_{2}),

where the $(\mathrm{GOE}_{N-1})_{ii}$'s are the diagonal elements of $\mathrm{GOE}_{N-1}$, we find that $\mathrm{Cov}[(\mathrm{GOE}_{N-1})_{ii},\zeta_{2}]$ is a constant for $1\leq i\leq N-1$. Let $\varsigma=\mathrm{Cov}(\zeta_{1},\zeta_{2})$ and $\vartheta=\mathrm{Cov}(\zeta_{2},(\mathrm{GOE}_{N-1})_{ii})$. The covariance matrix of $(\zeta_{1},\zeta_{2},(\mathrm{GOE}_{N-1})_{ii},1\leq i\leq N-1)$ is

\Xi=\begin{pmatrix}1+d_{1}+2d_{2}+d_{3}&\varsigma&d_{1}+d_{2}-\varsigma&\dots&d_{1}+d_{2}-\varsigma\\ \varsigma&d_{1}-2\vartheta&\vartheta&\dots&\vartheta\\ d_{1}+d_{2}-\varsigma&\vartheta&1&\dots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ d_{1}+d_{2}-\varsigma&\vartheta&0&\dots&1\end{pmatrix}. (2.4)

A moment of reflection shows that $M$ is nondegenerate if and only if $\Xi$ is positive definite. Therefore, we just need to choose $\varsigma$ and $\vartheta$ such that

\frac{-1-\sqrt{1+d_{1}(N-1)}}{N-1}<\vartheta<\frac{-1+\sqrt{1+d_{1}(N-1)}}{N-1}\quad\text{and}\quad\det\Xi>0.

If $d_{1}\geq 0$ and we set $\vartheta=0$ as in [CS18, (2.2)], we find a sufficient condition for $\varsigma$:

1+d_{1}+2d_{2}+d_{3}-\frac{\varsigma^{2}}{d_{1}}-(N-1)(d_{1}+d_{2}-\varsigma)^{2}>0. (2.5)

If $d_{1}<0$ and we set $\vartheta=d_{1}$ as in [CS18], we will get a more involved condition. For concrete examples, if $d_{1}\geq 0$, we may take $\vartheta=0$ and $\varsigma=\frac{d_{1}^{2}+d_{1}d_{2}}{1+d_{1}}$; if $d_{1}<0$, we may set $\vartheta=d_{1}$ and $\varsigma=0$.
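For illustration, the following minimal sketch (our addition, assuming Python with numpy and that the chosen parameters make $\Xi$ positive definite) realizes the block construction (2.3): it samples $(\zeta_{1},\zeta_{2},(\mathrm{GOE}_{N-1})_{ii})$ jointly from the covariance $\Xi$ in (2.4) with the choice $\vartheta=0$, $\varsigma=\frac{d_{1}^{2}+d_{1}d_{2}}{1+d_{1}}$, and checks a few entries of the resulting $M$ against the SGOI covariance (2.1) by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_sgoi(N, d1, d2, d3):
    # Block construction (2.3) with theta = 0 and varsigma = (d1^2 + d1*d2)/(1 + d1),
    # valid for d1 >= 0 provided Xi in (2.4) is positive definite.
    vs = (d1**2 + d1 * d2) / (1 + d1)
    Xi = np.eye(N + 1)
    Xi[0, 0] = 1 + d1 + 2 * d2 + d3
    Xi[1, 1] = d1                        # Var(zeta_2) = d1 - 2*theta with theta = 0
    Xi[0, 1] = Xi[1, 0] = vs
    Xi[0, 2:] = Xi[2:, 0] = d1 + d2 - vs
    v = rng.multivariate_normal(np.zeros(N + 1), Xi)
    zeta1, zeta2, diag = v[0], v[1], v[2:]

    goe = np.triu(rng.normal(scale=np.sqrt(0.5), size=(N - 1, N - 1)), 1)
    goe = goe + goe.T + np.diag(diag)    # GOE_{N-1} with the prescribed diagonal
    xi = rng.normal(scale=np.sqrt(0.5), size=N - 1)

    M = np.empty((N, N))
    M[0, 0] = zeta1
    M[0, 1:] = M[1:, 0] = xi
    M[1:, 1:] = goe + zeta2 * np.eye(N - 1)
    return M

N, d1, d2, d3 = 5, 0.3, -0.2, 0.4
S = np.array([sample_sgoi(N, d1, d2, d3) for _ in range(50_000)])
# Monte Carlo check of a few covariances from (2.1):
print(np.mean(S[:, 0, 0] ** 2), "expected", 1 + d1 + 2 * d2 + d3)
print(np.mean(S[:, 0, 0] * S[:, 1, 1]), "expected", d1 + d2)
print(np.mean(S[:, 1, 1] * S[:, 2, 2]), "expected", d1)
print(np.mean(S[:, 1, 2] ** 2), "expected", 0.5)
```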

Note that the distribution of SGOI matrices is not invariant under orthogonal congruence transformations, which makes it hard to deduce the joint eigenvalue density for this matrix model. Fortunately, the joint eigenvalue density (1.5) of GOI matrices suffices for our representation formulas below. By the conditional distribution of Gaussian vectors, we have

\mathbb{E}(\mathrm{GOE}_{N-1}+\zeta_{2}\mathbf{I}_{N-1}\mid\zeta_{1}=y)=\frac{(d_{1}+d_{2})y}{1+d_{1}+2d_{2}+d_{3}}\mathbf{I}_{N-1},
\left(\mathrm{GOE}_{N-1}+\zeta_{2}\mathbf{I}_{N-1}\mid\zeta_{1}=y\right)\stackrel{d}{=}\frac{(d_{1}+d_{2})y}{1+d_{1}+2d_{2}+d_{3}}\mathbf{I}_{N-1}+M^{\prime}, (2.6)

where $M^{\prime}$ is an $(N-1)\times(N-1)$ $\operatorname{GOI}(c)$ matrix with parameter $c=\frac{d_{1}+d_{1}d_{3}-d_{2}^{2}}{1+d_{1}+2d_{2}+d_{3}}$. Since we always have $\mathbb{E}[\zeta_{1}(\mathrm{GOE}_{N-1}+\zeta_{2}\mathbf{I}_{N-1})]=(d_{1}+d_{2})\mathbf{I}_{N-1}$, the conditional distribution (2.6) is invariant for different constructions of SGOI matrices. This is crucial in Theorem 2.6 below, and the freedom to specify $\varsigma$ and $\vartheta$ in (2.4) allows us to choose matrix models that are easier to work with in Theorem 2.10. In the following, we will only need the covariances given in (2.2), and not the concrete relationship among $\zeta_{1},\zeta_{2}$ and $\mathrm{GOE}_{N-1}$, until deriving Theorem 2.10.
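As a brief sketch of where the parameter $c$ in (2.6) comes from (our addition; it uses only the covariances recorded in (2.2) and standard Gaussian conditioning), write $M_{11}=\zeta_{1}$ and note that, for $2\leq i\neq j\leq N$, the diagonal entries of the lower right block satisfy

\operatorname{Cov}\left(M_{ii},M_{jj}\mid M_{11}\right)=d_{1}-\frac{(d_{1}+d_{2})^{2}}{1+d_{1}+2d_{2}+d_{3}}=\frac{d_{1}+d_{1}d_{3}-d_{2}^{2}}{1+d_{1}+2d_{2}+d_{3}},\qquad\operatorname{Var}\left(M_{ii}\mid M_{11}\right)=1+\frac{d_{1}+d_{1}d_{3}-d_{2}^{2}}{1+d_{1}+2d_{2}+d_{3}},

while the off-diagonal entries of the lower right block are independent of $M_{11}$ and keep variance $\frac{1}{2}$. Comparing with the defining covariance of $\operatorname{GOI}(c)$ matrices identifies the conditional law of the lower right block, up to the conditional mean $\frac{(d_{1}+d_{2})y}{1+d_{1}+2d_{2}+d_{3}}\mathbf{I}_{N-1}$, as that of a $\operatorname{GOI}(c)$ matrix with exactly this value of $c$.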

2.2 The nondegeneracy condition

According to [AZ20], we need the following assumption for the model so that the random fields are twice differentiable almost surely.

Assumption I (Smoothness).

For fixed $N\in\mathbb{N}\cup\{\infty\}$, the function $D_{N}$ is four times differentiable at $0$ and it satisfies

0<\left|D_{N}^{(4)}(0)\right|<\infty.

For the non-isotropic Gaussian random fields with isotropic increments, we also need the following assumption, which is a natural assumption for stochastic processes with stationary increments.

Assumption II (Pinning).

We have $H_{N}(0)=0$.

Let us recall the covariance structure of the non-isotropic Gaussian fields with isotropic increments.

Lemma 2.2 ([AZ20, Lemma A.1]).

Assume Assumptions I and II. Then for $x\in\mathbb{R}^{N}$,

\operatorname{Cov}\left[H_{N}(x),H_{N}(x)\right]=D_{N}\left(\|x\|^{2}\right),
\operatorname{Cov}\left[H_{N}(x),\partial_{i}H_{N}(x)\right]=D_{N}^{\prime}\left(\|x\|^{2}\right)x_{i},
\operatorname{Cov}\left[\partial_{i}H_{N}(x),\partial_{j}H_{N}(x)\right]=D_{N}^{\prime}(0)\delta_{ij},
\operatorname{Cov}\left[H_{N}(x),\partial_{ij}H_{N}(x)\right]=2D_{N}^{\prime\prime}\left(\|x\|^{2}\right)x_{i}x_{j}+\left[D_{N}^{\prime}\left(\|x\|^{2}\right)-D_{N}^{\prime}(0)\right]\delta_{ij},
\operatorname{Cov}\left[\partial_{k}H_{N}(x),\partial_{ij}H_{N}(x)\right]=0,
\operatorname{Cov}\left[\partial_{lk}H_{N}(x),\partial_{ij}H_{N}(x)\right]=-2D_{N}^{\prime\prime}(0)\left[\delta_{jl}\delta_{ik}+\delta_{il}\delta_{kj}+\delta_{kl}\delta_{ij}\right],

where $\delta_{ij}$ denotes the Kronecker delta and $i,j,k,l=1,\dots,N$.

Cheng and Schwartzman [CS18, Proposition 3.3] studied the nondegeneracy condition for isotropic Gaussian random fields, and they showed that the Gaussian vector $(H_{N}(x),\nabla H_{N}(x),\partial_{ij}H_{N}(x),1\leq i\leq j\leq N)$ is nondegenerate if and only if $\frac{B_{N}^{\prime}(0)^{2}}{B_{N}^{\prime\prime}(0)}<\frac{N+2}{N}$. The following lemma gives the nondegeneracy condition for non-isotropic Gaussian random fields with isotropic increments.

Lemma 2.3.

Assume Assumptions I and II hold for the structure function $D_{N}$ and the associated Gaussian field $H_{N}$. Then for any fixed $x\in\mathbb{R}^{N}\setminus\{0\}$, the Gaussian vector $(H_{N}(x),\nabla H_{N}(x),\partial_{ij}H_{N}(x),1\leq i\leq j\leq N)$ is nondegenerate if and only if, for $r=\|x\|^{2}$,

D_{N}\left(r\right)-\frac{D_{N}^{\prime}(r)^{2}r}{D_{N}^{\prime}(0)}+\frac{(N+1)D_{N}^{\prime\prime}(r)^{2}r^{2}}{(N+2)D_{N}^{\prime\prime}(0)}+\frac{2rD_{N}^{\prime\prime}(r)(D_{N}^{\prime}(r)-D_{N}^{\prime}(0))}{(N+2)D_{N}^{\prime\prime}(0)}+\frac{N(D_{N}^{\prime}(r)-D_{N}^{\prime}(0))^{2}}{2(N+2)D_{N}^{\prime\prime}(0)}>0. (2.7)

Moreover, if $D\in\mathcal{D}_{N}$ for some $N\in\mathbb{N}\cup\{\infty\}$, Assumptions I and II hold, and we have for $r=\|x\|^{2}$,

D\left(r\right)-\frac{D^{\prime}(r)^{2}r}{D^{\prime}(0)}+\frac{D^{\prime\prime}(r)^{2}r^{2}}{D^{\prime\prime}(0)}+\frac{\left(D^{\prime}(r)-D^{\prime}(0)\right)^{2}}{2D^{\prime\prime}(0)}>0, (2.8)

then the Gaussian vector $(H_{N}(x),\nabla H_{N}(x),\partial_{ij}H_{N}(x),1\leq i\leq j\leq N)$ is nondegenerate. Here if $N=\infty$, we understand that the indices $i,j\in\mathbb{N}$.

Proof.

Following Lemma 2.2, it is clear that the covariance matrix of the Gaussian vector

\left(H_{N}(x),\nabla H_{N}(x),\partial_{ij}H_{N}(x),1\leq i\leq j\leq N\right),

is given by

\mathbf{G}=\begin{pmatrix}D_{N}\left(\|x\|^{2}\right)&\xi_{1}^{\mathsf{T}}&\xi_{2}^{\mathsf{T}}&\xi_{3}^{\mathsf{T}}\\ \xi_{1}&\mathbf{A}&\mathbf{0}_{1}^{\mathsf{T}}&\mathbf{0}_{2}^{\mathsf{T}}\\ \xi_{2}&\mathbf{0}_{1}&\mathbf{B}&\mathbf{0}_{3}^{\mathsf{T}}\\ \xi_{3}&\mathbf{0}_{2}&\mathbf{0}_{3}&\mathbf{C}\end{pmatrix},

where

\xi_{1}=\left(D_{N}^{\prime}(\|x\|^{2})x_{1},\dots,D_{N}^{\prime}(\|x\|^{2})x_{N}\right)^{\mathsf{T}},

is the covariance vector of $H_{N}(x)$ with $\left(\partial_{1}H_{N}(x),\dots,\partial_{N}H_{N}(x)\right)$,

\xi_{2}=\left(2D_{N}^{\prime\prime}(\|x\|^{2})x_{1}^{2}+D_{N}^{\prime}(\|x\|^{2})-D_{N}^{\prime}(0),\dots,2D_{N}^{\prime\prime}(\|x\|^{2})x_{N}^{2}+D_{N}^{\prime}(\|x\|^{2})-D_{N}^{\prime}(0)\right)^{\mathsf{T}},

is the covariance vector of $H_{N}(x)$ with $\left(\partial_{11}H_{N}(x),\dots,\partial_{NN}H_{N}(x)\right)$,

\xi_{3}=2D_{N}^{\prime\prime}(\|x\|^{2})\left(x_{1}x_{2},\dots,x_{1}x_{N},x_{2}x_{3},\dots,x_{2}x_{N},\dots,x_{N-1}x_{N}\right)^{\mathsf{T}},

is an $\frac{N(N-1)}{2}$-dimensional vector, namely the covariance vector of $H_{N}(x)$ with the Hessian entries above the diagonal, and $\mathbf{0}_{1}$, $\mathbf{0}_{2}$ and $\mathbf{0}_{3}$ are $N\times N$, $\frac{N(N-1)}{2}\times N$, and $\frac{N(N-1)}{2}\times N$ zero matrices, respectively. In addition,

\mathbf{A}=D_{N}^{\prime}(0)\mathbf{I}_{N},\qquad\mathbf{B}=-4D_{N}^{\prime\prime}(0)\mathbf{I}_{N}-2D_{N}^{\prime\prime}(0)\mathbf{1}_{N}\mathbf{1}_{N}^{\mathsf{T}},\qquad\mathbf{C}=-2D_{N}^{\prime\prime}(0)\mathbf{I}_{\frac{N(N-1)}{2}},

where $\mathbf{1}_{N}$ is the $N$-dimensional column vector with all elements equal to one hereafter. It is clear that $\mathbf{A},\mathbf{B}$ and $\mathbf{C}$ are all positive definite matrices. By the eigenvalue interlacing theorem, the matrix $\mathbf{G}$ has at most one negative eigenvalue. Therefore, the positive definiteness of $\mathbf{G}$ is equivalent to $\det\mathbf{G}>0$. Noting that $\mathbf{B}^{-1}=\frac{-1}{4(N+2)D_{N}^{\prime\prime}(0)}\left[(N+2)\mathbf{I}_{N}-\mathbf{1}_{N}\mathbf{1}_{N}^{\mathsf{T}}\right]$, we may consider the Schur complement of the block diagonal submatrix $\mathrm{Diag}(\mathbf{A},\mathbf{B},\mathbf{C})$ in $\mathbf{G}$ and find

\frac{\det\mathbf{G}}{\det\mathbf{A}\det\mathbf{B}\det\mathbf{C}}=D_{N}\left(\|x\|^{2}\right)-\frac{D_{N}^{\prime}(\|x\|^{2})^{2}\|x\|^{2}}{D_{N}^{\prime}(0)}+\frac{(N+1)D_{N}^{\prime\prime}(\|x\|^{2})^{2}\|x\|^{4}}{(N+2)D_{N}^{\prime\prime}(0)}+\frac{2D_{N}^{\prime\prime}(\|x\|^{2})\|x\|^{2}\left(D_{N}^{\prime}(\|x\|^{2})-D_{N}^{\prime}(0)\right)}{(N+2)D_{N}^{\prime\prime}(0)}+\frac{N\left(D_{N}^{\prime}(\|x\|^{2})-D_{N}^{\prime}(0)\right)^{2}}{2(N+2)D_{N}^{\prime\prime}(0)}.

The proof of the first claim is completed by replacing $\|x\|^{2}$ with $r$.

For the second assertion, we need to show that

D\left(r\right)-\frac{D^{\prime}(r)^{2}r}{D^{\prime}(0)}+\frac{(N+1)D^{\prime\prime}(r)^{2}r^{2}}{(N+2)D^{\prime\prime}(0)}+\frac{2rD^{\prime\prime}(r)\left(D^{\prime}(r)-D^{\prime}(0)\right)}{(N+2)D^{\prime\prime}(0)}+\frac{N\left(D^{\prime}(r)-D^{\prime}(0)\right)^{2}}{2(N+2)D^{\prime\prime}(0)}>0.

Notice that

D\left(r\right)-\frac{D^{\prime}(r)^{2}r}{D^{\prime}(0)}+\frac{(N+1)D^{\prime\prime}(r)^{2}r^{2}}{(N+2)D^{\prime\prime}(0)}+\frac{2rD^{\prime\prime}(r)\left(D^{\prime}(r)-D^{\prime}(0)\right)}{(N+2)D^{\prime\prime}(0)}+\frac{N\left(D^{\prime}(r)-D^{\prime}(0)\right)^{2}}{2(N+2)D^{\prime\prime}(0)}
=D\left(r\right)-\frac{D^{\prime}(r)^{2}r}{D^{\prime}(0)}+\frac{D^{\prime\prime}(r)^{2}r^{2}}{D^{\prime\prime}(0)}+\frac{\left(D^{\prime}(r)-D^{\prime}(0)\right)^{2}}{2D^{\prime\prime}(0)}
\quad+\frac{\left(D^{\prime}(r)-D^{\prime}(0)\right)^{2}}{-(N+2)D^{\prime\prime}(0)}+\frac{D^{\prime\prime}(r)^{2}r^{2}}{-(N+2)D^{\prime\prime}(0)}-\frac{2rD^{\prime\prime}(r)\left(D^{\prime}(r)-D^{\prime}(0)\right)}{-(N+2)D^{\prime\prime}(0)}.

By the elementary inequality $a^{2}+b^{2}\geq 2ab$, we have

\frac{\left(D^{\prime}(r)-D^{\prime}(0)\right)^{2}}{-(N+2)D^{\prime\prime}(0)}+\frac{D^{\prime\prime}(r)^{2}r^{2}}{-(N+2)D^{\prime\prime}(0)}-\frac{2rD^{\prime\prime}(r)\left(D^{\prime}(r)-D^{\prime}(0)\right)}{-(N+2)D^{\prime\prime}(0)}\geq 0.

Together with the assumption (2.8), we obtain the desired result. ∎
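A minimal numerical sketch of the determinant computation in this proof (our addition, assuming Python with numpy; the structure function below is only a test function used to probe the linear-algebra identity, not a verified member of $\mathcal{D}_{N}$) assembles the covariance matrix $\mathbf{G}$ from Lemma 2.2 and compares $\det\mathbf{G}/(\det\mathbf{A}\det\mathbf{B}\det\mathbf{C})$ with the closed form appearing in (2.7).

```python
import numpy as np
from itertools import combinations

# Test structure function (used only to probe the identity).
D   = lambda r: np.log(1 + r)
Dp  = lambda r: 1 / (1 + r)
Dpp = lambda r: -1 / (1 + r) ** 2

N = 4
rng = np.random.default_rng(2)
x = rng.normal(size=N)
r = np.dot(x, x)

pairs = [(i, i) for i in range(N)] + list(combinations(range(N), 2))
labels = ["H"] + [("g", i) for i in range(N)] + [("h", i, j) for (i, j) in pairs]
dim = len(labels)
G = np.zeros((dim, dim))

def cov(a, b):
    # Covariances from Lemma 2.2 (0-based indices).
    if a[0] == "H" and b[0] == "H":
        return D(r)
    if {a[0], b[0]} == {"H", "g"}:
        i = (a if a[0] == "g" else b)[1]
        return Dp(r) * x[i]
    if a[0] == "g" and b[0] == "g":
        return Dp(0) * (a[1] == b[1])
    if {a[0], b[0]} == {"H", "h"}:
        _, i, j = a if a[0] == "h" else b
        return 2 * Dpp(r) * x[i] * x[j] + (Dp(r) - Dp(0)) * (i == j)
    if {a[0], b[0]} == {"g", "h"}:
        return 0.0
    _, l, k = a
    _, i, j = b
    return -2 * Dpp(0) * ((j == l) * (i == k) + (i == l) * (k == j) + (k == l) * (i == j))

for p in range(dim):
    for q in range(dim):
        G[p, q] = cov(labels[p], labels[q])

detA = Dp(0) ** N
detB = np.linalg.det(-4 * Dpp(0) * np.eye(N) - 2 * Dpp(0) * np.ones((N, N)))
detC = (-2 * Dpp(0)) ** (N * (N - 1) // 2)
lhs = np.linalg.det(G) / (detA * detB * detC)
rhs = (D(r) - Dp(r) ** 2 * r / Dp(0)
       + (N + 1) * Dpp(r) ** 2 * r ** 2 / ((N + 2) * Dpp(0))
       + 2 * r * Dpp(r) * (Dp(r) - Dp(0)) / ((N + 2) * Dpp(0))
       + N * (Dp(r) - Dp(0)) ** 2 / (2 * (N + 2) * Dpp(0)))
print(lhs, rhs)   # should agree up to rounding
```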

We remark that the condition (2.8) does not depend on the dimension $N$, which makes it easy to apply in what follows. Indeed, if it holds for all $r>0$, then the Gaussian field associated with $D$ and its derivatives are nondegenerate, provided $D$ belongs to a certain class $\mathcal{D}_{N}$ and Assumptions I and II hold.

2.3 Representation with GOI matrices

Under the nondegeneracy condition (2.7), we now investigate the expected number of critical points (with or without given indices) of non-isotropic Gaussian random fields with isotropic increments.

We follow the idea for the case $\mathcal{D}_{\infty}$ in [AZ20, AZ22] and list the notation from [AZ20, (3.15)] for later use:

\begin{split}m_{1}&=m_{1}(\rho,u)=\frac{u(2D_{N}^{\prime\prime}(\rho^{2})\rho^{2}+D_{N}^{\prime}(\rho^{2})-D_{N}^{\prime}(0))}{D_{N}(\rho^{2})-\frac{D_{N}^{\prime}(\rho^{2})^{2}\rho^{2}}{D_{N}^{\prime}(0)}},\\ m_{2}&=m_{2}(\rho,u)=\frac{u(D_{N}^{\prime}(\rho^{2})-D_{N}^{\prime}(0))}{D_{N}(\rho^{2})-\frac{D_{N}^{\prime}(\rho^{2})^{2}\rho^{2}}{D_{N}^{\prime}(0)}},\qquad\sigma_{Y}=\sigma_{Y}(\rho)=\sqrt{D_{N}(\rho^{2})-\frac{D_{N}^{\prime}(\rho^{2})^{2}\rho^{2}}{D_{N}^{\prime}(0)}},\\ \alpha&=\alpha(\rho^{2})=\frac{2D_{N}^{\prime\prime}(\rho^{2})}{\sqrt{D_{N}(\rho^{2})-\frac{D_{N}^{\prime}(\rho^{2})^{2}\rho^{2}}{D_{N}^{\prime}(0)}}},\qquad\beta=\beta(\rho^{2})=\frac{D_{N}^{\prime}(\rho^{2})-D_{N}^{\prime}(0)}{\sqrt{D_{N}(\rho^{2})-\frac{D_{N}^{\prime}(\rho^{2})^{2}\rho^{2}}{D_{N}^{\prime}(0)}}},\\ \sigma_{1}&=\sigma_{1}(\rho^{2})=\sqrt{-4D_{N}^{\prime\prime}(0)-(\alpha\rho^{2}+\beta)\alpha\rho^{2}},\qquad\sigma_{2}=\sigma_{2}(\rho^{2})=\sqrt{-2D_{N}^{\prime\prime}(0)-(\alpha\rho^{2}+\beta)\beta}.\end{split} (2.9)

Using Lemma 2.2, we define

\Sigma_{01}=\operatorname{Cov}\left(H_{N}(x),\nabla H_{N}(x)\right)=D_{N}^{\prime}\left(\|x\|^{2}\right)x^{\top},
\Sigma_{11}=\operatorname{Cov}\left(\nabla H_{N}(x)\right)=D_{N}^{\prime}(0)\mathbf{I}_{N},

and

Y=H_{N}(x)-\Sigma_{01}\Sigma_{11}^{-1}\nabla H_{N}(x)=H_{N}(x)-\frac{D_{N}^{\prime}\left(\|x\|^{2}\right)\sum_{i=1}^{N}x_{i}\partial_{i}H_{N}(x)}{D_{N}^{\prime}(0)}.

Then $Y$ is a centered Gaussian random variable whose variance $\sigma_{Y}^{2}$ is defined in (2.9) with $\rho$ replaced by $\|x\|$. Notice that both $Y$ and $\nabla^{2}H_{N}(x)$ are independent of $\nabla H_{N}(x)$. By the Kac–Rice formula [AT07, Theorem 11.2.1],

\begin{split}\mathbb{E}[\operatorname{Crt}_{N}\left(E,T_{N}\right)]&=\int_{T_{N}}\mathbb{E}\left[\left|\det\nabla^{2}H_{N}(x)\right|\mathbf{1}\left\{Y+\Sigma_{01}\Sigma_{11}^{-1}\nabla H_{N}(x)\in E\right\}\,\middle|\,\nabla H_{N}(x)=0\right]p_{\nabla H_{N}(x)}(0)\,\mathrm{d}x\\ &=\int_{T_{N}}\mathbb{E}\left[\left|\det\nabla^{2}H_{N}(x)\right|\mathbf{1}\{Y\in E\}\right]p_{\nabla H_{N}(x)}(0)\,\mathrm{d}x,\end{split} (2.10)

and similarly

\mathbb{E}[\operatorname{Crt}_{N,k}\left(E,T_{N}\right)]=\int_{T_{N}}\mathbb{E}\left[\left|\det\nabla^{2}H_{N}(x)\right|\mathbf{1}\{Y\in E,\,i\left(\nabla^{2}H_{N}(x)\right)=k\}\right]p_{\nabla H_{N}(x)}(0)\,\mathrm{d}x, (2.11)

where $p_{\nabla H_{N}(x)}(0)=(2\pi D_{N}^{\prime}(0))^{-N/2}$ is the p.d.f. of $\nabla H_{N}(x)$ evaluated at $0$.

From Lemma 2.2, we note that $\nabla^{2}H_{N}(x)$ has the same distribution as $2\sqrt{-D_{N}^{\prime\prime}(0)}\operatorname{GOI}(\tfrac{1}{2})$. Arguing as in [AZ20, Section 3] and using spherical coordinates, we deduce that

\left(\nabla^{2}H_{N}(x)\mid Y=u\right)\stackrel{d}{=}\begin{pmatrix}m_{1}(\|x\|,u)&0\\ 0&m_{2}(\|x\|,u)\mathbf{I}_{N-1}\end{pmatrix}+2\sqrt{-D_{N}^{\prime\prime}(0)}M=:G,

where $M$ is an $\operatorname{SGOI}(d_{1},d_{2},d_{3})$ matrix with parameters

d_{1}=\frac{1}{2}+\frac{\beta(\|x\|^{2})^{2}}{4D_{N}^{\prime\prime}(0)},\qquad d_{2}=\frac{\alpha(\|x\|^{2})\beta(\|x\|^{2})\|x\|^{2}}{4D_{N}^{\prime\prime}(0)},\qquad d_{3}=\frac{\alpha(\|x\|^{2})^{2}\|x\|^{4}}{4D_{N}^{\prime\prime}(0)}, (2.12)

and the functions $m_{1}$, $m_{2}$, $\alpha$, and $\beta$ are given as in (2.9) with $\rho$ replaced by $\|x\|$. Similar to (2.3), we may write the shifted SGOI matrix $G$ as a block matrix

G=\begin{pmatrix}2\sqrt{-D_{N}^{\prime\prime}(0)}\zeta_{1}+m_{1}&2\sqrt{-D_{N}^{\prime\prime}(0)}\xi^{\top}\\ 2\sqrt{-D_{N}^{\prime\prime}(0)}\xi&G_{**}\end{pmatrix}, (2.13)

where

G_{**}:=2\sqrt{-D_{N}^{\prime\prime}(0)}\left[\mathrm{GOE}_{N-1}+\left(\zeta_{2}+\frac{m_{2}}{2\sqrt{-D_{N}^{\prime\prime}(0)}}\right)\mathbf{I}_{N-1}\right]. (2.14)

From (2.6), we know

\begin{split}\mathbb{E}(\mathrm{GOE}_{N-1}+\zeta_{2}\mathbf{I}_{N-1}\mid\zeta_{1}=y)&=\frac{(2D_{N}^{\prime\prime}(0)+\beta^{2}+\alpha\beta\|x\|^{2})y}{6D_{N}^{\prime\prime}(0)+(\beta+\alpha\|x\|^{2})^{2}}\mathbf{I}_{N-1},\\ \left(\mathrm{GOE}_{N-1}+\zeta_{2}\mathbf{I}_{N-1}\mid\zeta_{1}=y\right)&\stackrel{d}{=}\frac{(2D_{N}^{\prime\prime}(0)+\beta^{2}+\alpha\beta\|x\|^{2})y}{6D_{N}^{\prime\prime}(0)+(\beta+\alpha\|x\|^{2})^{2}}\mathbf{I}_{N-1}+M^{\prime},\end{split} (2.15)

where $M^{\prime}$ is an $(N-1)\times(N-1)$ $\operatorname{GOI}(c)$ matrix with parameter

c=\frac{d_{1}+d_{1}d_{3}-d_{2}^{2}}{1+d_{1}+2d_{2}+d_{3}}=\frac{4D_{N}^{\prime\prime}(0)+2\beta^{2}+\alpha^{2}\|x\|^{4}}{12D_{N}^{\prime\prime}(0)+2(\beta+\alpha\|x\|^{2})^{2}}. (2.16)

Set

m_{3}=\frac{(2D_{N}^{\prime\prime}(0)+\beta^{2}+\alpha\beta\|x\|^{2})y}{6D_{N}^{\prime\prime}(0)+(\beta+\alpha\|x\|^{2})^{2}}+\frac{m_{2}}{2\sqrt{-D_{N}^{\prime\prime}(0)}}, (2.17)

where $m_{2}$ is defined in (2.9). Let $\lambda_{1}\leq\cdots\leq\lambda_{N-1}$ be the eigenvalues of the $\operatorname{GOI}(c)$ matrix $M^{\prime}$. Since the distribution of $\operatorname{GOI}(c)$ matrices is invariant under orthogonal congruence transformations, following the same argument as for GOE matrices, there exists a random orthogonal matrix $V$ independent of the unordered eigenvalues $\tilde{\lambda}_{j}$, $j=1,\ldots,N-1$, such that

VM^{\prime}V^{\mathsf{T}}=\begin{pmatrix}\tilde{\lambda}_{1}&\cdots&0\\ \vdots&\ddots&\vdots\\ 0&\cdots&\tilde{\lambda}_{N-1}\end{pmatrix}. (2.18)

Since the law of the Gaussian vector $\xi$ is rotationally invariant, $V\xi$ is a centered Gaussian vector with covariance matrix $\frac{1}{2}\mathbf{I}_{N-1}$ that is independent of the $\tilde{\lambda}_{j}$'s. We can rewrite $V\xi\stackrel{d}{=}Z/\sqrt{2}$, where $Z=(Z_{1},\ldots,Z_{N-1})$ is an $(N-1)$-dimensional standard Gaussian random vector. We also need the following lemma for the calculation of $\mathbb{E}[\operatorname{Crt}_{N,k}(E,(R_{1},R_{2}))]$. Recall that the signature of a symmetric matrix is the number of positive eigenvalues minus the number of negative eigenvalues.

Lemma 2.4 ([Laz88, Equation 2]).

Let $S$ be a symmetric block matrix, and write its inverse $S^{-1}$ in block form with the same block structure:

S=\begin{pmatrix}A&B\\ B^{\mathsf{T}}&C\end{pmatrix},\qquad S^{-1}=\begin{pmatrix}A^{\prime}&B^{\prime}\\ \left(B^{\prime}\right)^{\mathsf{T}}&C^{\prime}\end{pmatrix}.

Then $\operatorname{sgn}(S)=\operatorname{sgn}(A)+\operatorname{sgn}\left(C^{\prime}\right)$, where $\operatorname{sgn}(M)$ denotes the signature of the matrix $M$.

Let us define

\eta(\zeta_{1},G_{**})=m_{1}+2\sqrt{-D_{N}^{\prime\prime}(0)}\zeta_{1}+4D_{N}^{\prime\prime}(0)\xi^{\top}G_{**}^{-1}\xi.

Following Lemma 2.4, we have for $1\leq k\leq N-1$,

\{i(G)=k\}=\left\{i\left(G_{**}\right)=k,\eta(\zeta_{1},G_{**})>0\right\}\cup\left\{i\left(G_{**}\right)=k-1,\eta(\zeta_{1},G_{**})<0\right\},

for $k=0$,

\{i(G)=0\}=\left\{i\left(G_{**}\right)=0,\eta(\zeta_{1},G_{**})>0\right\},

and for $k=N$,

\{i(G)=N\}=\left\{i\left(G_{**}\right)=N-1,\eta(\zeta_{1},G_{**})<0\right\}.

Moreover, by (2.15) and (2.18), we deduce

\begin{split}\left(\eta(\zeta_{1},G_{**})\,\middle|\,\zeta_{1}=y\right)&\stackrel{d}{=}m_{1}+2\sqrt{-D_{N}^{\prime\prime}(0)}y-2\sqrt{-D_{N}^{\prime\prime}(0)}\xi^{\top}\left(M^{\prime}+m_{3}\mathbf{I}_{N-1}\right)^{-1}\xi\\ &\stackrel{d}{=}m_{1}+2\sqrt{-D_{N}^{\prime\prime}(0)}y-\sqrt{-D_{N}^{\prime\prime}(0)}\sum_{l=1}^{N-1}\frac{Z_{l}^{2}}{\lambda_{l}+m_{3}}\\ &=:\eta^{\prime}(\Lambda),\end{split} (2.19)

where $\Lambda=(\lambda_{1},\dots,\lambda_{N-1})$.

Recall that $M$ is an $\operatorname{SGOI}(d_{1},d_{2},d_{3})$ matrix with parameters $d_{1},d_{2}$ and $d_{3}$ defined in (2.12), depending on the structure function $D_{N}$. Since we will use $M$ to deduce the main theorem in this section, it is also necessary to guarantee that $M$ is nondegenerate.

Lemma 2.5.

The nondegeneracy condition (2.7) holds for $r=\|x\|^{2}$ if and only if the $\operatorname{SGOI}(d_{1},d_{2},d_{3})$ matrix $M$ is nondegenerate for $x\in\mathbb{R}^{N}\setminus\{0\}$, where $d_{1}$, $d_{2}$ and $d_{3}$ are defined in (2.12).

Proof.

According to Lemma 2.1, it suffices to show that (2.7) is equivalent to

d_{1}>-\frac{1}{N-1},\qquad 1+d_{3}+\frac{d_{1}+2d_{2}-(N-1)d_{2}^{2}}{1+(N-1)d_{1}}>0,

where $d_{1}=\frac{1}{2}+\frac{\beta(r)^{2}}{4D_{N}^{\prime\prime}(0)}$, $d_{2}=\frac{\alpha(r)\beta(r)r}{4D_{N}^{\prime\prime}(0)}$, $d_{3}=\frac{\alpha(r)^{2}r^{2}}{4D_{N}^{\prime\prime}(0)}$ with $\alpha(r)$ and $\beta(r)$ defined in (2.9), and we have replaced $\|x\|^{2}$ with $r$ in the definitions of $d_{1}$, $d_{2}$ and $d_{3}$ for simplicity.

If $d_{1}>-\frac{1}{N-1}$, straightforward calculations show that $1+d_{3}+\frac{d_{1}+2d_{2}-(N-1)d_{2}^{2}}{1+(N-1)d_{1}}>0$ is equivalent to (2.7). To be precise, multiplying both sides by $1+(N-1)d_{1}$ implies

1+d_{3}+(N-1)(d_{1}d_{3}-d_{2}^{2})+2d_{2}+Nd_{1}>0.

Note that $d_{1}d_{3}-d_{2}^{2}=\frac{1}{2}d_{3}$ and

1+\frac{N+1}{2}d_{3}+2d_{2}+Nd_{1}=1+\frac{N}{2}+\frac{(N+1)\alpha(r)^{2}r^{2}}{8D_{N}^{\prime\prime}(0)}+\frac{\alpha(r)\beta(r)r}{2D_{N}^{\prime\prime}(0)}+\frac{N\beta(r)^{2}}{4D_{N}^{\prime\prime}(0)}>0.

Multiplying both sides by $\frac{2}{N+2}\left(D_{N}(r)-\frac{D_{N}^{\prime}(r)^{2}r}{D_{N}^{\prime}(0)}\right)$, we get

D_{N}(r)-\frac{D_{N}^{\prime}(r)^{2}r}{D_{N}^{\prime}(0)}+\frac{(N+1)D_{N}^{\prime\prime}(r)^{2}r^{2}}{(N+2)D_{N}^{\prime\prime}(0)}+\frac{2rD_{N}^{\prime\prime}(r)(D_{N}^{\prime}(r)-D_{N}^{\prime}(0))}{(N+2)D_{N}^{\prime\prime}(0)}+\frac{N(D_{N}^{\prime}(r)-D_{N}^{\prime}(0))^{2}}{2(N+2)D_{N}^{\prime\prime}(0)}>0,

which is exactly (2.7). Notice that each step of the above derivation is reversible. Therefore, (2.7) is equivalent to $1+d_{3}+\frac{d_{1}+2d_{2}-(N-1)d_{2}^{2}}{1+(N-1)d_{1}}>0$, on condition that $d_{1}>-\frac{1}{N-1}$.

It remains to show that $d_{1}>-\frac{1}{N-1}$ follows from (2.7). Dividing both sides of (2.7) by $\frac{2}{N+2}\left(D_{N}(r)-\frac{D_{N}^{\prime}(r)^{2}r}{D_{N}^{\prime}(0)}\right)$ gives

1+\frac{N}{2}+\frac{(N+1)\alpha(r)^{2}r^{2}}{8D_{N}^{\prime\prime}(0)}+\frac{\alpha(r)\beta(r)r}{2D_{N}^{\prime\prime}(0)}+\frac{N\beta(r)^{2}}{4D_{N}^{\prime\prime}(0)}>0,

which is equivalent to

1+\frac{N}{2}+\frac{(N+2)(N-1)\beta(r)^{2}}{4(N+1)D_{N}^{\prime\prime}(0)}+\frac{[(N+1)\alpha(r)r+2\beta(r)]^{2}}{8(N+1)D_{N}^{\prime\prime}(0)}>0.

Since $D_{N}^{\prime\prime}(0)<0$ implies $\frac{[(N+1)\alpha(r)r+2\beta(r)]^{2}}{8(N+1)D_{N}^{\prime\prime}(0)}\leq 0$, we obtain

1+\frac{N}{2}+\frac{(N+2)(N-1)\beta(r)^{2}}{4(N+1)D_{N}^{\prime\prime}(0)}>0.

Multiplying both sides by $\frac{N+1}{(N+2)(N-1)}$ implies

\frac{N+1}{2(N-1)}+\frac{\beta(r)^{2}}{4D_{N}^{\prime\prime}(0)}=\frac{1}{N-1}+\frac{1}{2}+\frac{\beta(r)^{2}}{4D_{N}^{\prime\prime}(0)}>0,

which shows $d_{1}>-\frac{1}{N-1}$ and thus completes the proof. ∎

Due to the rotational invariance of the law of the field, we are interested in the critical points in the shell domain

T_{N}\left(R_{1},R_{2}\right)=\{x\in\mathbb{R}^{N}:R_{1}<\|x\|<R_{2}\},\quad 0\leq R_{1}<R_{2}<\infty,

with critical values in an arbitrary Borel subset $E$. For simplicity, we write $\operatorname{Crt}_{N}\left(E,\left(R_{1},R_{2}\right)\right):=\operatorname{Crt}_{N}\left(E,T_{N}\left(R_{1},R_{2}\right)\right)$. Together with (2.10) and (2.11), using spherical coordinates and writing $\rho=\|x\|$, we arrive at

\mathbb{E}[\operatorname{Crt}_{N}\left(E,\left(R_{1},R_{2}\right)\right)]=S_{N-1}\int_{R_{1}}^{R_{2}}\int_{E}\mathbb{E}\left[\left|\det G\right|\right]\frac{1}{\sqrt{2\pi}\sigma_{Y}}e^{-\frac{u^{2}}{2\sigma_{Y}^{2}}}p_{\nabla H_{N}(x)}(0)\,\rho^{N-1}\,\mathrm{d}u\,\mathrm{d}\rho, (2.20)
\mathbb{E}[\operatorname{Crt}_{N,k}\left(E,\left(R_{1},R_{2}\right)\right)]=S_{N-1}\int_{R_{1}}^{R_{2}}\int_{E}\mathbb{E}\left[\left|\det G\right|\mathbf{1}\{i(G)=k\}\right]\frac{1}{\sqrt{2\pi}\sigma_{Y}}e^{-\frac{u^{2}}{2\sigma_{Y}^{2}}}p_{\nabla H_{N}(x)}(0)\,\rho^{N-1}\,\mathrm{d}u\,\mathrm{d}\rho, (2.21)

where $S_{N-1}=\frac{2\pi^{N/2}}{\Gamma(N/2)}$ is the area of the $(N-1)$-dimensional unit sphere; see [AZ20, (4.1)] for details.

Unlike the isotropic case in [CS18], here it is difficult to find the joint eigenvalue density of the matrix $G$ due to the lack of an invariance property. However, we observe that, conditionally on the first entry of an $\operatorname{SGOI}(d_{1},d_{2},d_{3})$ matrix, its lower right $(N-1)\times(N-1)$ submatrix has the same distribution as a $\operatorname{GOI}(c)$ matrix (up to a deterministic scalar shift) for a suitable $c$. We are now ready to state the main result of this section.

Theorem 2.6.

Let $H_{N}=\{H_{N}(x):x\in\mathbb{R}^{N}\}$ be a non-isotropic Gaussian field with isotropic increments. Let $E\subset\mathbb{R}$ be a Borel set and let $T_{N}$ be the shell domain $T_{N}\left(R_{1},R_{2}\right)=\{x\in\mathbb{R}^{N}:R_{1}<\|x\|<R_{2}\}$, where $0\leq R_{1}<R_{2}<\infty$. Assume Assumptions I and II, and that the nondegeneracy condition (2.7) holds for all $r>0$. Then we have

\mathbb{E}[\operatorname{Crt}_{N}\left(E,\left(R_{1},R_{2}\right)\right)]=\frac{2\left(-2D_{N}^{\prime\prime}(0)\right)^{N/2}}{D_{N}^{\prime}(0)^{N/2}\Gamma(N/2)}\int_{R_{1}}^{R_{2}}\int_{E}\int_{-\infty}^{\infty}\frac{e^{-\frac{2D_{N}^{\prime\prime}(0)y^{2}}{6D_{N}^{\prime\prime}(0)+(\beta+\alpha\rho^{2})^{2}}}}{\sqrt{2\pi[-6D_{N}^{\prime\prime}(0)-(\beta+\alpha\rho^{2})^{2}]}}\frac{e^{-\frac{u^{2}}{2\sigma_{Y}^{2}}}}{\sqrt{2\pi}\sigma_{Y}}\rho^{N-1}\,\mathrm{d}y\,\mathrm{d}u\,\mathrm{d}\rho
\quad\times\mathbb{E}_{Z}\left\{\mathbb{E}_{\mathrm{GOI}(c)}\left[\left|\left(m_{1}+2\sqrt{-D_{N}^{\prime\prime}(0)}y\right)\prod_{j=1}^{N-1}(\lambda_{j}+m_{3})-\sqrt{-D_{N}^{\prime\prime}(0)}\sum_{l=1}^{N-1}Z_{l}^{2}\prod_{j\neq l}^{N-1}(\lambda_{j}+m_{3})\right|\right]\right\},

and for $k=0,1,\dots,N$,

\mathbb{E}[\operatorname{Crt}_{N,k}\left(E,\left(R_{1},R_{2}\right)\right)]=\frac{2\left(-2D_{N}^{\prime\prime}(0)\right)^{N/2}}{D_{N}^{\prime}(0)^{N/2}\Gamma(N/2)}\int_{R_{1}}^{R_{2}}\int_{E}\int_{-\infty}^{\infty}\frac{e^{-\frac{2D_{N}^{\prime\prime}(0)y^{2}}{6D_{N}^{\prime\prime}(0)+(\beta+\alpha\rho^{2})^{2}}}}{\sqrt{2\pi[-6D_{N}^{\prime\prime}(0)-(\beta+\alpha\rho^{2})^{2}]}}\frac{e^{-\frac{u^{2}}{2\sigma_{Y}^{2}}}}{\sqrt{2\pi}\sigma_{Y}}\rho^{N-1}\,\mathrm{d}y\,\mathrm{d}u\,\mathrm{d}\rho
\quad\times\mathbb{E}_{Z}\left\{\mathbb{E}_{\mathrm{GOI}(c)}\left[\left|\left(m_{1}+2\sqrt{-D_{N}^{\prime\prime}(0)}y\right)\prod_{j=1}^{N-1}(\lambda_{j}+m_{3})-\sqrt{-D_{N}^{\prime\prime}(0)}\sum_{l=1}^{N-1}Z_{l}^{2}\prod_{j\neq l}^{N-1}(\lambda_{j}+m_{3})\right|\mathbf{1}_{A_{k}}\right]\right\},

where

A_{k}=\begin{cases}\left\{\lambda_{k}<-m_{3}<\lambda_{k+1},\eta^{\prime}(\Lambda)>0\right\}\cup\left\{\lambda_{k-1}<-m_{3}<\lambda_{k},\eta^{\prime}(\Lambda)<0\right\},&1\leq k\leq N-1,\\ \left\{\lambda_{1}>-m_{3},\eta^{\prime}(\Lambda)>0\right\},&k=0,\\ \left\{\lambda_{N-1}<-m_{3},\eta^{\prime}(\Lambda)<0\right\},&k=N,\end{cases}

$\eta^{\prime}(\Lambda)$ is defined in (2.19), $m_{1}$ and $\sigma_{Y}$ are given in (2.9), $c$ is defined in (2.16), $m_{3}$ is defined in (2.17), $Z=(Z_{1},\ldots,Z_{N-1})$ is an $(N-1)$-dimensional standard Gaussian random vector independent of the $\lambda_{j}$'s, and by convention $\lambda_{0}=-\infty$, $\lambda_{N}=\infty$. We emphasize that the expectation $\mathbb{E}_{\mathrm{GOI}(c)}$ is taken with respect to the $\operatorname{GOI}(c)$ eigenvalues $\lambda_{i}$'s.

Proof.

Recall that $G=\left(\begin{smallmatrix}m_{1}&0\\ 0&m_{2}\mathbf{I}_{N-1}\end{smallmatrix}\right)+2\sqrt{-D_{N}^{\prime\prime}(0)}M$, where $M$ is an $\operatorname{SGOI}(d_{1},d_{2},d_{3})$ matrix defined in (2.3) with parameters $d_{1}=\frac{1}{2}+\frac{\beta^{2}}{4D_{N}^{\prime\prime}(0)}$, $d_{2}=\frac{\alpha\beta\rho^{2}}{4D_{N}^{\prime\prime}(0)}$ and $d_{3}=\frac{\alpha^{2}\rho^{4}}{4D_{N}^{\prime\prime}(0)}$. The Schur complement formula implies that

\begin{split}\det G&=\left(m_{1}+2\sqrt{-D_{N}^{\prime\prime}(0)}\zeta_{1}+4D_{N}^{\prime\prime}(0)\xi^{\top}G_{**}^{-1}\xi\right)\det\left(G_{**}\right)\\ &=\left\{m_{1}+2\sqrt{-D_{N}^{\prime\prime}(0)}\zeta_{1}-2\sqrt{-D_{N}^{\prime\prime}(0)}\xi^{\mathsf{T}}\left[\left(\frac{m_{2}}{2\sqrt{-D_{N}^{\prime\prime}(0)}}+\zeta_{2}\right)\mathbf{I}_{N-1}+\mathrm{GOE}_{N-1}\right]^{-1}\xi\right\}\\ &\quad\times 2^{N-1}\left(-D_{N}^{\prime\prime}(0)\right)^{\frac{N-1}{2}}\det\left(\left(\frac{m_{2}}{2\sqrt{-D_{N}^{\prime\prime}(0)}}+\zeta_{2}\right)\mathbf{I}_{N-1}+\mathrm{GOE}_{N-1}\right).\end{split} (2.22)

Conditioning on $\zeta_{1}=y$ and using (2.15), (2.17) and (2.3), we obtain

\begin{split}\mathbb{E}\left[\left|\det G\right|\right]&=\int_{-\infty}^{\infty}\frac{2^{N-1}\left(-D_{N}^{\prime\prime}(0)\right)^{\frac{N-1}{2}}}{\sqrt{2\pi(1+d_{1}+2d_{2}+d_{3})}}e^{-\frac{y^{2}}{2(1+d_{1}+2d_{2}+d_{3})}}\\ &\quad\times\mathbb{E}\left[\left|\left(m_{1}+2\sqrt{-D_{N}^{\prime\prime}(0)}y-2\sqrt{-D_{N}^{\prime\prime}(0)}\xi^{\mathsf{T}}\left(M^{\prime}+m_{3}\mathbf{I}_{N-1}\right)^{-1}\xi\right)\det\left(M^{\prime}+m_{3}\mathbf{I}_{N-1}\right)\right|\right]\mathrm{d}y,\end{split} (2.23)

where $1+d_{1}+2d_{2}+d_{3}=\frac{6D_{N}^{\prime\prime}(0)+(\beta+\alpha\rho^{2})^{2}}{4D_{N}^{\prime\prime}(0)}$ and $M^{\prime}$ is the $(N-1)\times(N-1)$ $\operatorname{GOI}(c)$ matrix with parameter $c$ defined in (2.16). By (2.18) and the independence of $Z$ and the $\lambda_{i}$'s, we deduce

\begin{split}&\mathrel{\phantom{=}}\mathbb{E}\left[\left|\left(m_{1}+2\sqrt{-D_{N}^{\prime\prime}(0)}y-2\sqrt{-D_{N}^{\prime\prime}(0)}\xi^{\mathsf{T}}\left(M^{\prime}+m_{3}\mathbf{I}_{N-1}\right)^{-1}\xi\right)\det\left(M^{\prime}+m_{3}\mathbf{I}_{N-1}\right)\right|\right]\\ &=\mathbb{E}\left[\left|\left(m_{1}+2\sqrt{-D_{N}^{\prime\prime}(0)}y\right)\prod_{j=1}^{N-1}(\lambda_{j}+m_{3})-\sqrt{-D_{N}^{\prime\prime}(0)}\sum_{l=1}^{N-1}Z_{l}^{2}\prod_{j\neq l}^{N-1}(\lambda_{j}+m_{3})\right|\right]\\ &=\mathbb{E}_{Z}\left\{\mathbb{E}_{\mathrm{GOI}(c)}\left[\left|\left(m_{1}+2\sqrt{-D_{N}^{\prime\prime}(0)}y\right)\prod_{j=1}^{N-1}(\lambda_{j}+m_{3})-\sqrt{-D_{N}^{\prime\prime}(0)}\sum_{l=1}^{N-1}Z_{l}^{2}\prod_{j\neq l}^{N-1}(\lambda_{j}+m_{3})\right|\right]\right\}.\end{split} (2.24)

Combining (2.20), (2.23) and (2.24), after some simplifications we obtain

\mathbb{E}[\operatorname{Crt}_{N}\left(E,\left(R_{1},R_{2}\right)\right)]=\frac{2\left(-2D_{N}^{\prime\prime}(0)\right)^{N/2}}{D_{N}^{\prime}(0)^{N/2}\Gamma(N/2)}\int_{R_{1}}^{R_{2}}\int_{E}\int_{-\infty}^{\infty}\frac{e^{-\frac{2D_{N}^{\prime\prime}(0)y^{2}}{6D_{N}^{\prime\prime}(0)+(\beta+\alpha\rho^{2})^{2}}}}{\sqrt{2\pi[-6D_{N}^{\prime\prime}(0)-(\beta+\alpha\rho^{2})^{2}]}}\frac{e^{-\frac{u^{2}}{2\sigma_{Y}^{2}}}}{\sqrt{2\pi}\sigma_{Y}}\rho^{N-1}\,\mathrm{d}y\,\mathrm{d}u\,\mathrm{d}\rho
\quad\times\mathbb{E}_{Z}\left\{\mathbb{E}_{\mathrm{GOI}(c)}\left[\left|\left(m_{1}+2\sqrt{-D_{N}^{\prime\prime}(0)}y\right)\prod_{j=1}^{N-1}(\lambda_{j}+m_{3})-\sqrt{-D_{N}^{\prime\prime}(0)}\sum_{l=1}^{N-1}Z_{l}^{2}\prod_{j\neq l}^{N-1}(\lambda_{j}+m_{3})\right|\right]\right\},

which is the desired result for the first part.

Let us turn to the second assertion. For 1kN11\leq k\leq N-1, combining (2.19), (2.3), (2.23) and (2.24) gives

𝔼[|detG|𝟏{i(G)=k}]\displaystyle\mathrel{\phantom{=}}\mathbb{E}\left[\left|\operatorname{det}G\right|\mathbf{1}\{i(G)=k\}\right]
=𝔼[|detG|(𝟏{i(G)=k,η(ζ1,G)>0}+𝟏{i(G)=k1,η(ζ1,G)<0})]\displaystyle=\mathbb{E}\left[\left|\operatorname{det}G\right|\left(\mathbf{1}\{i\left(G_{**}\right)=k,\eta(\zeta_{1},G_{**})>0\}+\mathbf{1}\{i\left(G_{**}\right)=k-1,\eta(\zeta_{1},G_{**})<0\}\right)\right]
=2N1(DN′′(0))N122π(1+d1+2d2+d3)ey22(1+d1+2d2+d3)\displaystyle=\int_{-\infty}^{\infty}\frac{2^{N-1}\left(-D_{N}^{\prime\prime}(0)\right)^{\frac{N-1}{2}}}{\sqrt{2\pi(1+d_{1}+2d_{2}+d_{3})}}e^{-\frac{y^{2}}{2(1+d_{1}+2d_{2}+d_{3})}}
×𝔼Z{𝔼GOI(c)[|(m1+2DN′′(0)y)j=1N1(λj+m3)DN′′(0)l=1N1Zl2jlN1(λj+m3)|\displaystyle\quad\times\mathbb{E}_{Z}\left\{\mathbb{E}_{\mathrm{GOI}(c)}\left[\left|\left(m_{1}+2\sqrt{-D_{N}^{\prime\prime}(0)}y\right)\prod_{j=1}^{N-1}(\lambda_{j}+m_{3})-\sqrt{-D_{N}^{\prime\prime}(0)}\sum_{l=1}^{N-1}Z_{l}^{2}\prod_{j\neq l}^{N-1}(\lambda_{j}+m_{3})\right|\right.\right.
×(𝟏{λk<m3<λk+1,η(Λ)>0}+𝟏{λk1<m3<λk,η(Λ)<0})]}dy.\displaystyle\phantom{XXXXXXX}\left.\left.\vphantom{\left|\prod_{j=1}^{N-1}\right|}\times\left(\mathbf{1}\left\{\lambda_{k}<-m_{3}<\lambda_{k+1},\eta^{\prime}(\Lambda)>0\right\}+\mathbf{1}\left\{\lambda_{k-1}<-m_{3}<\lambda_{k},\eta^{\prime}(\Lambda)<0\right\}\right)\right]\right\}\mathrm{d}y.

Plugging the above equation into (2.21) yields that

𝔼[CrtN,k(E,(R1,R2))]\displaystyle\mathrel{\phantom{=}}\mathbb{E}[\operatorname{Crt}_{N,k}\left(E,\left(R_{1},R_{2}\right)\right)]
=2(2DN′′(0))N/2DN(0)N/2Γ(N/2)R1R2Ee2DN′′(0)y26DN′′(0)+(β+αρ2)22π[6DN′′(0)(β+αρ2)2]eu22σY22πσYρN1dydudρ\displaystyle=\frac{2\left(-2D_{N}^{\prime\prime}(0)\right)^{N/2}}{D_{N}^{\prime}(0)^{N/2}\Gamma(N/2)}\int_{R_{1}}^{R_{2}}\int_{E}\int_{-\infty}^{\infty}\frac{e^{-\frac{2D_{N}^{\prime\prime}(0)y^{2}}{6D_{N}^{\prime\prime}(0)+(\beta+\alpha\rho^{2})^{2}}}}{\sqrt{2\pi[-6D_{N}^{\prime\prime}(0)-(\beta+\alpha\rho^{2})^{2}]}}\frac{e^{-\frac{u^{2}}{2\sigma_{Y}^{2}}}}{\sqrt{2\pi}\sigma_{Y}}\rho^{N-1}\,\mathrm{d}y\,\mathrm{d}u\,\mathrm{d}\rho
×𝔼Z{𝔼GOI(c)[|(m1+2DN′′(0)y)j=1N1(λj+m3)DN′′(0)l=1N1Zl2jlN1(λj+m3)|𝟏Ak]},\displaystyle\quad\times\mathbb{E}_{Z}\left\{\mathbb{E}_{\mathrm{GOI}(c)}\left[\left|\left(m_{1}+2\sqrt{-D_{N}^{\prime\prime}(0)}y\right)\prod_{j=1}^{N-1}(\lambda_{j}+m_{3})-\sqrt{-D_{N}^{\prime\prime}(0)}\sum_{l=1}^{N-1}Z_{l}^{2}\prod_{j\neq l}^{N-1}(\lambda_{j}+m_{3})\right|\mathbf{1}_{A_{k}}\right]\right\},

where Ak={λk<m3<λk+1,η(Λ)>0}{λk1<m3<λk,η(Λ)<0}A_{k}=\{\lambda_{k}<-m_{3}<\lambda_{k+1},\eta^{\prime}(\Lambda)>0\}\cup\{\lambda_{k-1}<-m_{3}<\lambda_{k},\eta^{\prime}(\Lambda)<0\} for 1kN11\leq k\leq N-1.

Similarly, for k=0k=0 we have

𝔼[|detG|𝟏{i(G)=0}]\displaystyle\mathrel{\phantom{=}}\mathbb{E}\left[\left|\operatorname{det}G\right|\mathbf{1}\{i(G)=0\}\right]
=𝔼[|detG|𝟏{i(G)=0,η(ζ1,G)>0}]\displaystyle=\mathbb{E}\left[\left|\operatorname{det}G\right|\mathbf{1}\{i\left(G_{**}\right)=0,\eta(\zeta_{1},G_{**})>0\}\right]
=2N1(DN′′(0))N122π(1+d1+2d2+d3)ey22(1+d1+2d2+d3)𝔼Z{𝔼GOI(c)[𝟏{λ1>m3,η(Λ)>0}\displaystyle=\int_{-\infty}^{\infty}\frac{2^{N-1}\left(-D_{N}^{\prime\prime}(0)\right)^{\frac{N-1}{2}}}{\sqrt{2\pi(1+d_{1}+2d_{2}+d_{3})}}e^{-\frac{y^{2}}{2(1+d_{1}+2d_{2}+d_{3})}}\mathbb{E}_{Z}\left\{\mathbb{E}_{\mathrm{GOI}(c)}\left[\vphantom{\left|\prod_{j=1}^{N-1}\right|}\mathbf{1}\left\{\lambda_{1}>-m_{3},\eta^{\prime}(\Lambda)>0\right\}\right.\right.
×|(m1+2DN′′(0)y)j=1N1(λj+m3)DN′′(0)l=1N1Zl2jlN1(λj+m3)|]}dy,\displaystyle\quad\times\left.\left.\left|\left(m_{1}+2\sqrt{-D_{N}^{\prime\prime}(0)}y\right)\prod_{j=1}^{N-1}(\lambda_{j}+m_{3})-\sqrt{-D_{N}^{\prime\prime}(0)}\sum_{l=1}^{N-1}Z_{l}^{2}\prod_{j\neq l}^{N-1}(\lambda_{j}+m_{3})\right|\right]\right\}\mathrm{d}y,

and

𝔼[CrtN,0(E,(R1,R2))]\displaystyle\mathrel{\phantom{=}}\mathbb{E}[\operatorname{Crt}_{N,0}\left(E,\left(R_{1},R_{2}\right)\right)]
=2(2DN′′(0))N/2DN(0)N/2Γ(N/2)R1R2Ee2DN′′(0)y26DN′′(0)+(β+αρ2)22π[6DN′′(0)(β+αρ2)2]eu22σY22πσYρN1dydudρ\displaystyle=\frac{2\left(-2D_{N}^{\prime\prime}(0)\right)^{N/2}}{D_{N}^{\prime}(0)^{N/2}\Gamma(N/2)}\int_{R_{1}}^{R_{2}}\int_{E}\int_{-\infty}^{\infty}\frac{e^{-\frac{2D_{N}^{\prime\prime}(0)y^{2}}{6D_{N}^{\prime\prime}(0)+(\beta+\alpha\rho^{2})^{2}}}}{\sqrt{2\pi[-6D_{N}^{\prime\prime}(0)-(\beta+\alpha\rho^{2})^{2}]}}\frac{e^{-\frac{u^{2}}{2\sigma_{Y}^{2}}}}{\sqrt{2\pi}\sigma_{Y}}\rho^{N-1}\,\mathrm{d}y\,\mathrm{d}u\,\mathrm{d}\rho
×𝔼Z{𝔼GOI(c)[|(m1+2DN′′(0)y)j=1N1(λj+m3)DN′′(0)l=1N1Zl2jlN1(λj+m3)|𝟏A0]},\displaystyle\quad\times\mathbb{E}_{Z}\left\{\mathbb{E}_{\mathrm{GOI}(c)}\left[\left|\left(m_{1}+2\sqrt{-D_{N}^{\prime\prime}(0)}y\right)\prod_{j=1}^{N-1}(\lambda_{j}+m_{3})-\sqrt{-D_{N}^{\prime\prime}(0)}\sum_{l=1}^{N-1}Z_{l}^{2}\prod_{j\neq l}^{N-1}(\lambda_{j}+m_{3})\right|\mathbf{1}_{A_{0}}\right]\right\},

where A0={λ1>m3,η(Λ)>0}A_{0}=\{\lambda_{1}>-m_{3},\eta^{\prime}(\Lambda)>0\}. We omit the proof for index k=Nk=N here, since it is similar to the case k=0k=0. ∎

2.4 Representation with GOE matrices

For the large NN asymptotic analysis, it is desirable to write the GOI matrix in the above representation as the sum of a GOE matrix and an independent scalar matrix. Most of the results in this subsection follow from arguments similar to those in [AZ20, AZ22]. First, as observed in these works, if the critical values are not restricted (i.e., E=E=\mathbb{R}), there is no need to consider the conditional distribution of the Hessian and it is straightforward to employ GOE matrices in the Kac–Rice representation.

Theorem 2.7.

Let HN={HN(x):xN}H_{N}=\left\{H_{N}(x):x\in\mathbb{R}^{N}\right\} be a non-isotropic Gaussian field with isotropic increments and let TNT_{N} be a Borel subset of N\mathbb{R}^{N}. Assume Assumptions I and II. Then we have

𝔼[CrtN(,TN)]\displaystyle\mathbb{E}[\operatorname{Crt}_{N}\left(\mathbb{R},T_{N}\right)] =(2DN′′(0))N/2|TN|π(N+1)/2DN(0)N/2𝔼GOE[j=1N|λj+y|]ey2dy,\displaystyle=\frac{\left(-2D_{N}^{\prime\prime}(0)\right)^{N/2}|T_{N}|}{\pi^{(N+1)/2}D_{N}^{\prime}(0)^{N/2}}\int_{-\infty}^{\infty}\mathbb{E}_{\mathrm{GOE}}\left[\prod_{j=1}^{N}\left|\lambda_{j}+y\right|\right]e^{-y^{2}}\mathrm{d}y,
𝔼[CrtN,k(,TN)]\displaystyle\mathbb{E}[\operatorname{Crt}_{N,k}\left(\mathbb{R},T_{N}\right)] =(2DN′′(0))N/2|TN|π(N+1)/2DN(0)N/2𝔼GOE[j=1N|λj+y|𝟏{λk<y<λk+1}]ey2dy,\displaystyle=\frac{\left(-2D_{N}^{\prime\prime}(0)\right)^{N/2}|T_{N}|}{\pi^{(N+1)/2}D_{N}^{\prime}(0)^{N/2}}\int_{-\infty}^{\infty}\mathbb{E}_{\mathrm{GOE}}\left[\prod_{j=1}^{N}\left|\lambda_{j}+y\right|\mathbf{1}{\{\lambda_{k}<-y<\lambda_{k+1}\}}\right]e^{-y^{2}}\mathrm{d}y,

where |TN||T_{N}| is the Lebesgue measure of TNT_{N}, 0kN,0\leq k\leq N, λ1λN\lambda_{1}\leq\cdots\leq\lambda_{N} are the ordered eigenvalues of the GOE matrix GOEN\mathrm{GOE}_{N}, and by convention λ0=\lambda_{0}=-\infty, λN+1=\lambda_{N+1}=\infty.

The proof is omitted here since it is the same as for [AZ20, Theorem 1.1] and [AZ22, Theorems 1.1, 1.2]. As a comparison to the isotropic case, we give an example below, which is analogous to [CS18, Example 3.8].

Example 2.8.

Let the assumptions in Theorem 2.7 hold. Consider N=2N=2 and T22T_{2}\subset\mathbb{R}^{2} with unit area. Using Theorem 2.7, we obtain

𝔼[Crt2,k(,T2)]=2D2′′(0)π3/2D2(0)𝔼GOE[j=12|λj+y|𝟏{λk<y<λk+1}]ey2dy.\mathbb{E}[\operatorname{Crt}_{2,k}\left(\mathbb{R},T_{2}\right)]=\frac{-2D_{2}^{\prime\prime}(0)}{\pi^{3/2}D_{2}^{\prime}(0)}\int_{-\infty}^{\infty}\mathbb{E}_{\mathrm{GOE}}\left[\prod_{j=1}^{2}\left|\lambda_{j}+y\right|\mathbf{1}{\{\lambda_{k}<-y<\lambda_{k+1}\}}\right]e^{-y^{2}}\mathrm{d}y.

Absorbing the Gaussian weight e^{-y^{2}}, i.e., replacing the GOE\mathrm{GOE} density with the GOI(c)\mathrm{GOI(c)} density with c=12c=\frac{1}{2}, we deduce

𝔼[Crt2,k(,T2)]=2D2′′(0)πD2(0)𝔼GOI(12)[j=12|λj|𝟏{λk<0<λk+1}].\mathbb{E}[\operatorname{Crt}_{2,k}\left(\mathbb{R},T_{2}\right)]=\frac{-2D_{2}^{\prime\prime}(0)}{\pi D_{2}^{\prime}(0)}\mathbb{E}_{\mathrm{GOI}(\frac{1}{2})}\left[\prod_{j=1}^{2}\left|\lambda_{j}\right|\mathbf{1}{\{\lambda_{k}<0<\lambda_{k+1}\}}\right].

Plugging the GOI(c)\mathrm{GOI}(c) density (1) with N=2N=2 and c=12c=\frac{1}{2} into the above equality implies

𝔼[Crt2,0(,T2)]=𝔼[Crt2,2(,T2)]=12𝔼[Crt2,1(,T2)]=D2′′(0)3πD2(0).\mathbb{E}[\operatorname{Crt}_{2,0}\left(\mathbb{R},T_{2}\right)]=\mathbb{E}[\operatorname{Crt}_{2,2}\left(\mathbb{R},T_{2}\right)]=\frac{1}{2}\mathbb{E}[\operatorname{Crt}_{2,1}\left(\mathbb{R},T_{2}\right)]=\frac{-D_{2}^{\prime\prime}(0)}{\sqrt{3}\pi D_{2}^{\prime}(0)}.
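The numerical values in Example 2.8 are easy to corroborate by simulation. The following Monte Carlo sketch is a sanity check only; it assumes the commonly used GOI(c) entry covariance (jointly Gaussian entries with diagonal variances 1+c, pairwise diagonal covariance c, and off-diagonal variance 1/2, otherwise independent), a normalization that reproduces the constant obtained above. Under this convention the three estimates should be close to 1/(2√3) ≈ 0.2887, 0.2887 and 1/√3 ≈ 0.5774, in agreement with the values above.

import numpy as np

# Monte Carlo sanity check for Example 2.8: estimate
#   E_{GOI(1/2)}[ |lambda_1 lambda_2| 1{ lambda_k < 0 < lambda_{k+1} } ]
# for a 2x2 GOI(c) matrix with c = 1/2, assuming the entry covariance
#   Var(M_11) = Var(M_22) = 1 + c,  Cov(M_11, M_22) = c,  Var(M_12) = 1/2.
rng = np.random.default_rng(0)
c, n = 0.5, 10**6

diag = rng.multivariate_normal([0.0, 0.0], [[1 + c, c], [c, 1 + c]], size=n)
off = rng.normal(0.0, np.sqrt(0.5), size=n)
a, d, b = diag[:, 0], diag[:, 1], off

det, tr = a * d - b * b, a + d
e0 = np.mean(np.abs(det) * ((det > 0) & (tr > 0)))   # index 0: two positive eigenvalues
e2 = np.mean(np.abs(det) * ((det > 0) & (tr < 0)))   # index 2: two negative eigenvalues
e1 = np.mean(np.abs(det) * (det < 0))                # index 1: eigenvalues of opposite signs

print(e0, e2, e1)            # roughly 0.289, 0.289, 0.577
print(1 / (2 * 3**0.5))      # 0.2887..., the value predicted by Example 2.8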

The situation is more involved when the critical values are constrained. The following condition for any fixed NN turns out to be sufficient for using GOE matrices in the representation.

Assumption III.

For any r>0r>0, we have

2DN′′(0)\displaystyle-2D_{N}^{\prime\prime}(0) >(αr+β)β,\displaystyle>\left(\alpha r+\beta\right)\beta, (2.25)
4DN′′(0)\displaystyle-4D_{N}^{\prime\prime}(0) >(αr+β)αr,\displaystyle>\left(\alpha r+\beta\right)\alpha r, (2.26)
αβ\displaystyle\alpha\beta >0,\displaystyle>0,

where α=α(r)\alpha=\alpha\left(r\right) and β=β(r)\beta=\beta\left(r\right) are defined in (2.9) with ρ2\rho^{2} replaced by rr.

This condition was used in [AZ20] for the structure functions in the class 𝒟\mathcal{D}_{\infty}, and in that case we trivially have α(r)β(r)>0\alpha(r)\beta(r)>0 for any r>0r>0. In fact, we will show in the next section that Assumption III holds for all structure functions in 𝒟\mathcal{D}_{\infty}. A natural question is the relationship between Assumption III and the nondegeneracy of the field and its derivatives. The following result shows that Assumption III is stronger than the nondegeneracy condition (2.7). Recall the parameter c=4DN′′(0)+2β2+α2r212DN′′(0)+2(β+αr)2c=\frac{4D_{N}^{\prime\prime}(0)+2\beta^{2}+\alpha^{2}r^{2}}{12D_{N}^{\prime\prime}(0)+2(\beta+\alpha r)^{2}} with x2\|x\|^{2} replaced by rr from (2.16). Assumption III also implies c>0c>0.

Lemma 2.9.

For all fixed NN\in\mathbb{N}, if Assumption III holds, then we have c>0c>0 and the Gaussian vector

(HN(x),HN(x),ijHN(x),1ijN),\left(H_{N}(x),\nabla H_{N}(x),\partial_{ij}H_{N}(x),1\leq i\leq j\leq N\right),

is nondegenerate.

Proof.

First of all, (2.25) and (2.26) imply that the denominator of cc is negative (multiply each of them by two and add the resulting inequalities). To prove c>0c>0, it remains to show that

4DN′′(0)>2β2+α2r2,\displaystyle-4D_{N}^{\prime\prime}(0)>2\beta^{2}+\alpha^{2}r^{2}, (2.27)

which is equivalent to the inequality (2.8) and thus further yields the nondegeneracy condition by Lemma 2.3. Since αβ>0\alpha\beta>0, we have

(2αβrα2r2)(αβr2β2)=αβr(2βαr)20.(2\alpha\beta r-\alpha^{2}r^{2})(\alpha\beta r-2\beta^{2})=-\alpha\beta r(2\beta-\alpha r)^{2}\leq 0.

If 2αβrα2r202\alpha\beta r-\alpha^{2}r^{2}\geq 0, then (2.25) implies that

4DN′′(0)>2αβr+2β2α2r2+2β2.-4D_{N}^{\prime\prime}(0)>2\alpha\beta r+2\beta^{2}\geq\alpha^{2}r^{2}+2\beta^{2}.

If 2αβrα2r2<02\alpha\beta r-\alpha^{2}r^{2}<0, we must have αβr2β20\alpha\beta r-2\beta^{2}\geq 0 and (2.26) implies that

4DN′′(0)>α2r2+αβrα2r2+2β2.-4D_{N}^{\prime\prime}(0)>\alpha^{2}r^{2}+\alpha\beta r\geq\alpha^{2}r^{2}+2\beta^{2}.\qed

Using this lemma, we may deduce the GOE representation from Theorem 2.6 by extracting GOE eigenvalues and an independent Gaussian random variable from the GOI(c)(c) matrix. This is equivalent to considering the conditional distribution of z3z_{3}^{\prime} given z1z_{1}^{\prime} below in (2.29), which would bring in an extra random variable in (2.31). For transparency, and to offer a different perspective from Theorem 2.6, here we provide formulas that are suitable for asymptotic analysis, following [AZ20, AZ22]. Since Assumption III implies d1>0d_{1}>0, according to [AZ20, Section 3], the shifted SGOI matrix GG in (2.13) can be represented in the following form, which corresponds to setting ς=d1+d2\varsigma=d_{1}+d_{2} and ϑ=0\vartheta=0 in (2.4); in this case the condition (2.5) is equivalent to (2.27):

G=G(u)=(z1ξ𝖳ξ4DN′′(0)(GOEN1z3𝐈N1))=:(z1ξ𝖳ξG),\displaystyle G=G(u)=\begin{pmatrix}z_{1}^{\prime}&\xi^{\mathsf{T}}\\ \xi&\sqrt{-4D_{N}^{\prime\prime}(0)}(\mathrm{GOE}_{N-1}-z_{3}^{\prime}\mathbf{I}_{N-1})\end{pmatrix}=:\begin{pmatrix}z_{1}^{\prime}&\xi^{\mathsf{T}}\\ \xi&G_{**}\end{pmatrix}, (2.28)

where, with z1,z2,z3z_{1},z_{2},z_{3} being independent standard Gaussian random variables,

z1\displaystyle z_{1}^{\prime} =σ1z1σ2z2+m1,z3=14DN′′(0)(σ2z2+αβρz3m2),\displaystyle=\sigma_{1}z_{1}-\sigma_{2}z_{2}+m_{1},\quad z_{3}^{\prime}=\frac{1}{\sqrt{-4D_{N}^{\prime\prime}(0)}}\Big{(}\sigma_{2}z_{2}+\sqrt{\alpha\beta}\rho z_{3}-m_{2}\Big{)},

and ξ\xi is a centered Gaussian column vector with covariance matrix 2DN′′(0)𝐈N1-2D_{N}^{\prime\prime}(0)\mathbf{I}_{N-1}, independent of z1,z2,z3z_{1},z_{2},z_{3} and of the GOE matrix GOEN1\mathrm{GOE}_{N-1}. The conditional distribution of z1z_{1}^{\prime} given z3=yz_{3}^{\prime}=y is

(z1z3=y)N(a¯,b2),\displaystyle(z_{1}^{\prime}\mid z_{3}^{\prime}=y)\sim N\left(\overline{\mathrm{a}},\mathrm{b}^{2}\right), (2.29)

where

a¯\displaystyle\overline{\mathrm{a}} =m1σ22(4DN′′(0)y+m2)σ22+αβρ2\displaystyle=m_{1}-\frac{\sigma_{2}^{2}\left(\sqrt{-4D_{N}^{\prime\prime}(0)}y+m_{2}\right)}{\sigma_{2}^{2}+\alpha\beta\rho^{2}}
=2DN′′(0)αρ2u(2DN′′(0)β2)DN(ρ2)DN(ρ2)2ρ2DN(0)(2DN′′(0)β2αβρ2)4DN′′(0)y2DN′′(0)β2,\displaystyle=\frac{-2D_{N}^{\prime\prime}(0)\alpha\rho^{2}u}{\left(-2D_{N}^{\prime\prime}(0)-\beta^{2}\right)\sqrt{D_{N}\left(\rho^{2}\right)-\frac{D_{N}^{\prime}\left(\rho^{2}\right)^{2}\rho^{2}}{D_{N}^{\prime}(0)}}}-\frac{\left(-2D_{N}^{\prime\prime}(0)-\beta^{2}-\alpha\beta\rho^{2}\right)\sqrt{-4D_{N}^{\prime\prime}(0)}y}{-2D_{N}^{\prime\prime}(0)-\beta^{2}},
b2\displaystyle\mathrm{b}^{2} =σ12+σ22σ24σ22+αβρ2=4DN′′(0)+2DN′′(0)α2ρ42DN′′(0)β2.\displaystyle=\sigma_{1}^{2}+\sigma_{2}^{2}-\frac{\sigma_{2}^{4}}{\sigma_{2}^{2}+\alpha\beta\rho^{2}}=-4D_{N}^{\prime\prime}(0)+\frac{2D_{N}^{\prime\prime}(0)\alpha^{2}\rho^{4}}{-2D_{N}^{\prime\prime}(0)-\beta^{2}}. (2.30)
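For completeness, we record the elementary computation behind (2.29) and (2.30). Since z1, z2, z3 are independent, the definitions above give

\[
\mathbb{E}[z_{3}^{\prime}]=\frac{-m_{2}}{\sqrt{-4D_{N}^{\prime\prime}(0)}},\qquad
\operatorname{Cov}(z_{1}^{\prime},z_{3}^{\prime})=\frac{-\sigma_{2}^{2}}{\sqrt{-4D_{N}^{\prime\prime}(0)}},\qquad
\operatorname{Var}(z_{3}^{\prime})=\frac{\sigma_{2}^{2}+\alpha\beta\rho^{2}}{-4D_{N}^{\prime\prime}(0)},\qquad
\operatorname{Var}(z_{1}^{\prime})=\sigma_{1}^{2}+\sigma_{2}^{2},
\]

and the standard bivariate Gaussian conditioning formulas then yield

\[
\mathbb{E}[z_{1}^{\prime}\mid z_{3}^{\prime}=y]=m_{1}-\frac{\sigma_{2}^{2}\big(\sqrt{-4D_{N}^{\prime\prime}(0)}\,y+m_{2}\big)}{\sigma_{2}^{2}+\alpha\beta\rho^{2}},\qquad
\operatorname{Var}(z_{1}^{\prime}\mid z_{3}^{\prime}=y)=\sigma_{1}^{2}+\sigma_{2}^{2}-\frac{\sigma_{2}^{4}}{\sigma_{2}^{2}+\alpha\beta\rho^{2}},
\]

which are exactly the mean and the variance appearing in (2.29)–(2.30).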

Define the random variable

aN=aN(ρ,u,y)=a¯DN′′(0)i=1N1Zi2λiy,\displaystyle\mathrm{a}_{N}=\mathrm{a}_{N}(\rho,u,y)=\overline{\mathrm{a}}-\sqrt{-D_{N}^{\prime\prime}(0)}\sum_{i=1}^{N-1}\frac{Z_{i}^{2}}{\lambda_{i}-y}, (2.31)

where Zi,1iN1,Z_{i},1\leq i\leq N-1, are independent standard Gaussian random variables and λ1λN1\lambda_{1}\leq\cdots\leq\lambda_{N-1} are the ordered eigenvalues of the GOE matrix GOEN1\mathrm{GOE}_{N-1}. By the above analysis, given Assumption III, we can express the expected number of critical points (with or without given indices) of non-isotropic Gaussian random fields with isotropic increments using the eigenvalue density of the GOE matrix. We omit the proof here since it follows from an easy adaptation of the arguments in [AZ20, Section 4] and [AZ22, Section 3.1].
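Before stating the result, let us sketch how aN and b enter the formulas below; this is essentially the computation carried out in [AZ20, Section 4] and [AZ22, Section 3.1]. Writing (2.28) in block form and taking the Schur complement,

\[
\det G=\big(z_{1}^{\prime}-\xi^{\mathsf{T}}G_{**}^{-1}\xi\big)\det G_{**},
\]

and, conditionally on z3′ = y, diagonalizing GOE_{N−1} and using the rotational invariance of ξ (whose entries have variance −2D_N″(0)) gives

\[
\xi^{\mathsf{T}}G_{**}^{-1}\xi\overset{d}{=}\sqrt{-D_{N}^{\prime\prime}(0)}\sum_{i=1}^{N-1}\frac{Z_{i}^{2}}{\lambda_{i}-y},\qquad
|\det G_{**}|=\big(-4D_{N}^{\prime\prime}(0)\big)^{\frac{N-1}{2}}\prod_{j=1}^{N-1}|\lambda_{j}-y|.
\]

Hence, conditionally on z3′ = y, the eigenvalues λi and the Zi, the Schur complement z1′ − ξ⊤G_{**}⁻¹ξ is Gaussian with mean aN and variance b². The elementary identities

\[
\mathbb{E}|X|=\mathrm{a}\Big(\Phi\Big(\tfrac{\mathrm{a}}{\mathrm{b}}\Big)-\Phi\Big(-\tfrac{\mathrm{a}}{\mathrm{b}}\Big)\Big)+\sqrt{\tfrac{2}{\pi}}\,\mathrm{b}\,e^{-\frac{\mathrm{a}^{2}}{2\mathrm{b}^{2}}},\qquad
\mathbb{E}[X\mathbf{1}\{X>0\}]=\mathrm{a}\,\Phi\Big(\tfrac{\mathrm{a}}{\mathrm{b}}\Big)+\tfrac{\mathrm{b}}{\sqrt{2\pi}}e^{-\frac{\mathrm{a}^{2}}{2\mathrm{b}^{2}}},\qquad
\mathbb{E}[-X\mathbf{1}\{X<0\}]=-\mathrm{a}\,\Phi\Big(-\tfrac{\mathrm{a}}{\mathrm{b}}\Big)+\tfrac{\mathrm{b}}{\sqrt{2\pi}}e^{-\frac{\mathrm{a}^{2}}{2\mathrm{b}^{2}}},
\]

valid for a Gaussian random variable X with mean a and variance b², then produce the Gaussian factors in Theorem 2.10.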

Theorem 2.10.

Let HN={HN(x),xN}H_{N}=\left\{H_{N}(x),x\in\mathbb{R}^{N}\right\} be a non-isotropic Gaussian field with isotropic increments. Let EE\subset\mathbb{R} be a Borel set and TNT_{N} be the shell domain TN(R1,R2)={xN:R1<x<R2}T_{N}\left(R_{1},R_{2}\right)=\{x\in\mathbb{R}^{N}:R_{1}<\|x\|<R_{2}\}, where 0R1<R2<0\leq R_{1}<R_{2}<\infty. Assume Assumptions I, II, and III. Then we have

𝔼[CrtN(E,(R1,R2))]\displaystyle\mathrel{\phantom{=}}\mathbb{E}[\operatorname{Crt}_{N}\left(E,\left(R_{1},R_{2}\right)\right)]
=2(2DN′′(0))N/2DN(0)N/2Γ(N/2)R1R2Eexp{(4DN′′(0)y+m2)22(2DN′′(0)β2)u22σY2}2πσY2DN′′(0)β2ρN1\displaystyle=\frac{2\left(-2D_{N}^{\prime\prime}(0)\right)^{N/2}}{D_{N}^{\prime}(0)^{N/2}\Gamma(N/2)}\int_{R_{1}}^{R_{2}}\int_{E}\int_{\mathbb{R}}\frac{\exp\left\{-\frac{\left(\sqrt{-4D_{N}^{\prime\prime}(0)}y+m_{2}\right)^{2}}{2\left(-2D_{N}^{\prime\prime}(0)-\beta^{2}\right)}-\frac{u^{2}}{2\sigma_{Y}^{2}}\right\}}{2\pi\sigma_{Y}\sqrt{-2D_{N}^{\prime\prime}(0)-\beta^{2}}}\rho^{N-1}
×𝔼Z{𝔼GOE[(aN(Φ(aNb)Φ(aNb))+2bπeaN22b2)j=1N1|λjy|]}dydudρ,\displaystyle\quad\times\mathbb{E}_{Z}\left\{\mathbb{E}_{\mathrm{GOE}}\left[\left(\mathrm{a}_{N}\left(\Phi\left(\frac{\mathrm{a}_{N}}{\mathrm{~{}b}}\right)-\Phi\left(-\frac{\mathrm{a}_{N}}{\mathrm{~{}b}}\right)\right)+\frac{\sqrt{2}\mathrm{b}}{\sqrt{\pi}}e^{-\frac{\mathrm{a}_{N}^{2}}{2\mathrm{b}^{2}}}\right)\prod_{j=1}^{N-1}\left|\lambda_{j}-y\right|\right]\right\}\mathrm{d}y\,\mathrm{d}u\,\mathrm{d}\rho,
for k=0,1,,Nk=0,1,\dots,N,
𝔼[CrtN,k(E,(R1,R2))]\displaystyle\mathrel{\phantom{=}}\mathbb{E}[\operatorname{Crt}_{N,k}\left(E,\left(R_{1},R_{2}\right)\right)]
=2(2DN′′(0))N/2DN(0)N/2Γ(N/2)R1R2Eexp{(4DN′′(0)y+m2)22(2DN′′(0)β2)u22σY2}2πσY2DN′′(0)β2ρN1\displaystyle=\frac{2\left(-2D_{N}^{\prime\prime}(0)\right)^{N/2}}{D_{N}^{\prime}(0)^{N/2}\Gamma(N/2)}\int_{R_{1}}^{R_{2}}\int_{E}\int_{\mathbb{R}}\frac{\exp\left\{-\frac{\left(\sqrt{-4D_{N}^{\prime\prime}(0)}y+m_{2}\right)^{2}}{2\left(-2D_{N}^{\prime\prime}(0)-\beta^{2}\right)}-\frac{u^{2}}{2\sigma_{Y}^{2}}\right\}}{2\pi\sigma_{Y}\sqrt{-2D_{N}^{\prime\prime}(0)-\beta^{2}}}\rho^{N-1}
×𝔼Z{𝔼GOE[(aNΦ(aNb)+b2πeaN22b2)j=1N1|λjy|𝟏{λk<y<λk+1, 0kN1}\displaystyle\quad\times\mathbb{E}_{Z}\left\{\mathbb{E}_{\mathrm{GOE}}\left[\left(\mathrm{a}_{N}\Phi\left(\frac{\mathrm{a}_{N}}{\mathrm{b}}\right)+\frac{\mathrm{b}}{\sqrt{2\pi}}e^{-\frac{\mathrm{a}_{N}^{2}}{2\mathrm{b}^{2}}}\right)\prod_{j=1}^{N-1}\left|\lambda_{j}-y\right|\mathbf{1}\left\{\lambda_{k}<y<\lambda_{k+1},\,0\leq k\leq N-1\right\}\right.\right.
+(aNΦ(aNb)+b2πeaN22b2)j=1N1|λjy|𝟏{λk1<y<λk, 1kN}]}dydudρ,\displaystyle\quad\left.\left.+\left(-\mathrm{a}_{N}\Phi\left(-\frac{\mathrm{a}_{N}}{\mathrm{b}}\right)+\frac{\mathrm{b}}{\sqrt{2\pi}}e^{-\frac{\mathrm{a}_{N}^{2}}{2\mathrm{b}^{2}}}\right)\prod_{j=1}^{N-1}\left|\lambda_{j}-y\right|\mathbf{1}\left\{\lambda_{k-1}<y<\lambda_{k},\,1\leq k\leq N\right\}\right]\right\}\mathrm{d}y\,\mathrm{d}u\,\mathrm{d}\rho,

where m2m_{2} and σY\sigma_{Y} are given in (2.9), b\mathrm{b} is given in (2.4), aN\mathrm{a}_{N} is defined in (2.31) with the GOE eigenvalues λi\lambda_{i} and the independent standard Gaussian random variables ZiZ_{i}, 1iN11\leq i\leq N-1, Φ\Phi is the c.d.f. of the standard Gaussian random variable, and by convention λ0=\lambda_{0}=-\infty, λN=\lambda_{N}=\infty.

3 Representation for fields with structure functions in 𝒟\mathcal{D}_{\infty}

In this section, we show that if a structure function has the representation (1.4), then Assumption III always holds.

Lemma 3.1.

When D(r)D(r) has the form (1.4), if the inequality (2.25) holds, then both inequalities (2.8) and (2.26) hold automatically.

Proof.

Since D(r)D(r) has the form (1.4), the function D(r)D^{\prime}(r) is positive and strictly convex on [0,)[0,\infty), so that D′′ is strictly increasing. Together with the mean value theorem, this gives D(r)r>D(r)D(0)D^{\prime\prime}(r)r>D^{\prime}(r)-D^{\prime}(0) for any r>0r>0, which further implies

αr>2β,αβr<2β2.\displaystyle\alpha r>2\beta,\quad\quad\alpha\beta r<2\beta^{2}. (3.1)

Note that αβ>0\alpha\beta>0 holds trivially for D(r)D(r) of the form (1.4); together with (3.1), this forces α and β to be negative, so that αr + β < 0. If the inequality (2.25) holds, by (3.1) we deduce that (αr+β)αr<2(αr+β)β<4D′′(0)\left(\alpha r+\beta\right)\alpha r<2\left(\alpha r+\beta\right)\beta<-4D^{\prime\prime}(0), which yields the inequality (2.26). On the other hand, for any r>0r>0, observe that

2D(r)2r2+(D(r)D(0))2D(r)D(r)2rD(0)=α2r22+β2.\displaystyle\frac{2D^{\prime\prime}(r)^{2}r^{2}+\left(D^{\prime}(r)-D^{\prime}(0)\right)^{2}}{D\left(r\right)-\frac{D^{\prime}(r)^{2}r}{D^{\prime}(0)}}=\frac{\alpha^{2}r^{2}}{2}+\beta^{2}.

Thus, to prove (2.8), we only need to show

2D(0)>α2r22+β2,\displaystyle-2D^{\prime\prime}(0)>\frac{\alpha^{2}r^{2}}{2}+\beta^{2},

which follows from (3.1) and (2.25) by observing that α2r22+β2<(αr+β)β<2D(0)\frac{\alpha^{2}r^{2}}{2}+\beta^{2}<\left(\alpha r+\beta\right)\beta<-2D^{\prime\prime}(0). ∎

Recall that a function f:(0,)f\colon(0,\infty)\to\mathbb{R} is a Bernstein function if ff is of class C,f(λ)0C^{\infty},f(\lambda)\geq 0 for all λ>0\lambda>0 and (1)n1f(n)(λ)0(-1)^{n-1}f^{(n)}(\lambda)\geq 0 for all nn\in\mathbb{N} and λ>0\lambda>0. For Bernstein functions, we have the following representation theorem.

Theorem 3.2 ([SSV12, Theorem 3.2]).

A function f:(0,)f\colon(0,\infty)\to\mathbb{R} is a Bernstein function if and only if it admits the representation

f(λ)=a+bλ+(0,)(1eλt)μ(dt),f(\lambda)=a+b\lambda+\int_{(0,\infty)}\left(1-e^{-\lambda t}\right)\mu(\mathrm{d}t),

where a,b0a,b\geq 0 and μ\mu is a measure on (0,)(0,\infty) satisfying (0,)(1t)μ(dt)<\int_{(0,\infty)}(1\wedge t)\mu(\mathrm{d}t)<\infty. In particular, the triplet (a,b,μ)(a,b,\mu) determines ff uniquely and vice versa.
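A standard example, recorded only for illustration: the function f(λ) = √λ is a Bernstein function with a = b = 0 and μ(dt) = (2√π)⁻¹ t^{−3/2} dt, since

\[
\sqrt{\lambda}=\frac{1}{2\sqrt{\pi}}\int_{0}^{\infty}\big(1-e^{-\lambda t}\big)\,t^{-3/2}\,\mathrm{d}t,\qquad
\int_{0}^{\infty}(1\wedge t)\,t^{-3/2}\,\mathrm{d}t<\infty.
\]

In the notation of (1.4), this is the Bernstein-type representation of the structure function D(r) = √r, with A = 0 and ν(dt) = (2√π)⁻¹ t^{−3/2} dt.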

In the paper [AZ20], the authors conjectured that Assumption III is satisfied by all Bernstein functions of the form of D(r)D(r) defined in (1.4). The following result shows that this is indeed the case. Thanks to Lemma 3.1, it remains to prove the inequality (2.25).

Theorem 3.3.

Assume Assumptions I and II. Let D(r)D(r) denote the Bernstein function defined in (1.4). Then we have for all r>0r>0,

2D(0)>(αr+β)β,-2D^{\prime\prime}(0)>(\alpha r+\beta)\beta,

where α=α(r)\alpha=\alpha\left(r\right) and β=β(r)\beta=\beta\left(r\right) are defined in (2.9) with ρ2\rho^{2} replaced by rr.

Proof.

It suffices to show

2D(0)(D(r)+rD(0)2rD(r))\displaystyle-2D^{\prime\prime}(0)\left(D(r)+rD^{\prime}(0)-2rD^{\prime}(r)\right) >2rD(0)(D(r)D(0))+(D(r)D(0))2,\displaystyle>2rD^{\prime\prime}(0)\left(D^{\prime}(r)-D^{\prime}(0)\right)+\left(D^{\prime}(r)-D^{\prime}(0)\right)^{2}, (3.2)
2rD(0)(D(r)D(0))2D(0)\displaystyle 2rD^{\prime\prime}(0)\frac{\left(D^{\prime}(r)-D^{\prime}(0)\right)^{2}}{D^{\prime}(0)} 2r(D(r)D(0))(D(r)D(0)),\displaystyle\geq 2r\left(D^{\prime\prime}(r)-D^{\prime\prime}(0)\right)\left(D^{\prime}(r)-D^{\prime}(0)\right), (3.3)

for any r>0r>0. Indeed, adding the above two inequalities together implies that for all r>0r>0,

2D(0)(D(r)D(r)2D(0)r)>2rD(r)(D(r)D(0))+(D(r)D(0))2,-2D^{\prime\prime}(0)\left(D(r)-\frac{D^{\prime}(r)^{2}}{D^{\prime}(0)}r\right)>2rD^{\prime\prime}(r)\left(D^{\prime}(r)-D^{\prime}(0)\right)+\left(D^{\prime}(r)-D^{\prime}(0)\right)^{2},

which is equivalent to the desired conclusion.

To prove (3.2), notice that D(0)<0D^{\prime\prime}(0)<0 and, by the Cauchy–Schwarz inequality,

(D(r)D(0))2=[(0,)t(ert1)ν(dt)]2D(0)(0,)(ert1)2ν(dt).\displaystyle\left(D^{\prime}(r)-D^{\prime}(0)\right)^{2}=\left[\int_{(0,\infty)}t\left(e^{-rt}-1\right)\nu(\mathrm{d}t)\right]^{2}\leq-D^{\prime\prime}(0)\int_{(0,\infty)}\left(e^{-rt}-1\right)^{2}\nu(\mathrm{d}t).

Therefore, it suffices to show that for r>0r>0,

2(D(r)+rD(0)2rD(r))>2r(D(r)D(0))+(0,)(ert1)2ν(dt),2\left(D(r)+rD^{\prime}(0)-2rD^{\prime}(r)\right)>-2r\left(D^{\prime}(r)-D^{\prime}(0)\right)+\int_{(0,\infty)}\left(e^{-rt}-1\right)^{2}\nu(\mathrm{d}t),

or equivalently, 2D(r)2rD(r)(0,)(1ert)2ν(dt)>02D(r)-2rD^{\prime}(r)-\int_{(0,\infty)}(1-e^{-rt})^{2}\nu(\mathrm{d}t)>0. But from the definition of D(r)D(r), we have for r>0r>0,

2D(r)2rD(r)(0,)(1ert)2ν(dt)=(0,)(12rterte2rt)ν(dt)>0,2D(r)-2rD^{\prime}(r)-\int_{(0,\infty)}\left(1-e^{-rt}\right)^{2}\nu(\mathrm{d}t)=\int_{(0,\infty)}\left(1-2rte^{-rt}-e^{-2rt}\right)\nu(\mathrm{d}t)>0,

where in the last inequality we used the fact that 12xexe2x>01-2xe^{-x}-e^{-2x}>0 for any x>0x>0.
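For completeness, the elementary inequality used in the last step can be verified as follows: the function

\[
h(x)=1-2xe^{-x}-e^{-2x}\qquad\text{satisfies}\qquad h(0)=0,\qquad h^{\prime}(x)=2e^{-x}\big(x-1+e^{-x}\big)>0\quad\text{for }x>0,
\]

since e^{−x} > 1 − x for x > 0, and hence h(x) > 0 for all x > 0.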

To prove (3.3), it suffices to show that for r>0r>0,

D(0)(D(r)D(0))D(0)(D(r)D(0)),D^{\prime\prime}(0)\left(D^{\prime}(r)-D^{\prime}(0)\right)\leq D^{\prime}(0)\left(D^{\prime\prime}(r)-D^{\prime\prime}(0)\right),

as D(0)>0D^{\prime}(0)>0 and D(r)D(0)<0D^{\prime}(r)-D^{\prime}(0)<0. By definition,

D(0)(D(r)D(0))D(0)(D(r)D(0))\displaystyle\mathrel{\phantom{=}}D^{\prime\prime}(0)\left(D^{\prime}(r)-D^{\prime}(0)\right)-D^{\prime}(0)\left(D^{\prime\prime}(r)-D^{\prime\prime}(0)\right)
=(0,)s2ν(ds)(0,)t(ert1)ν(dt)((0,)tν(dt)+A)(0,)s2(1ers)ν(ds)\displaystyle=-\int_{(0,\infty)}s^{2}\,\nu(\mathrm{d}s)\int_{(0,\infty)}t\left(e^{-rt}-1\right)\nu(\mathrm{d}t)-\left(\int_{(0,\infty)}t\,\nu(\mathrm{d}t)+A\right)\int_{(0,\infty)}s^{2}\left(1-e^{-rs}\right)\nu(\mathrm{d}s)
=(0,)2s2t(ersert)ν(ds)ν(dt)A(0,)s2(1ers)ν(ds)\displaystyle=\int_{(0,\infty)^{2}}s^{2}t\left(e^{-rs}-e^{-rt}\right)\nu(\mathrm{d}s)\nu(\mathrm{d}t)-A\int_{(0,\infty)}s^{2}\left(1-e^{-rs}\right)\nu(\mathrm{d}s)
=12(0,)2st(st)(ersert)ν(ds)ν(dt)A(0,)s2(1ers)ν(ds).\displaystyle=\frac{1}{2}\int_{(0,\infty)^{2}}st(s-t)\left(e^{-rs}-e^{-rt}\right)\nu(\mathrm{d}s)\nu(\mathrm{d}t)-A\int_{(0,\infty)}s^{2}\left(1-e^{-rs}\right)\nu(\mathrm{d}s).

Since A0A\geq 0, s2(1ers)>0s^{2}\left(1-e^{-rs}\right)>0, and st(st)(ersert)0st(s-t)\left(e^{-rs}-e^{-rt}\right)\leq 0 for any s,t,r>0s,t,r>0, we find for r>0r>0,

D(0)(D(r)D(0))D(0)(D(r)D(0))0,D^{\prime\prime}(0)\left(D^{\prime}(r)-D^{\prime}(0)\right)-D^{\prime}(0)\left(D^{\prime\prime}(r)-D^{\prime\prime}(0)\right)\leq 0,

which gives (3.3) and completes the proof. ∎
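As a quick numerical illustration (a sanity check only, not part of the proof), one may verify the displayed equivalent form of the conclusion on a grid for a concrete structure function of the form (1.4), for instance D(r) = 1 − e^{−r}, corresponding to A = 0 and ν equal to the unit point mass at t = 1; the variable names below are ours.

import numpy as np

# Check numerically that
#   -2 D''(0) ( D(r) - D'(r)^2 r / D'(0) )  >  2 r D''(r) (D'(r) - D'(0)) + (D'(r) - D'(0))^2
# for D(r) = 1 - exp(-r), i.e. the form (1.4) with A = 0 and nu the unit mass at t = 1.
D  = lambda r: 1.0 - np.exp(-r)
D1 = lambda r: np.exp(-r)       # first derivative D'
D2 = lambda r: -np.exp(-r)      # second derivative D''

r = np.linspace(1e-3, 50.0, 200000)
lhs = -2.0 * D2(0.0) * (D(r) - D1(r) ** 2 * r / D1(0.0))
rhs = 2.0 * r * D2(r) * (D1(r) - D1(0.0)) + (D1(r) - D1(0.0)) ** 2
print(np.all(lhs > rhs))        # expected output: True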

On the other hand, Assumption III is not expected to hold for all structure functions in the class 𝒟N\mathcal{D}_{N} with fixed NN\in\mathbb{N}, since the Bessel functions in the representation (1.3) are oscillatory. The following example gives a family of functions in 𝒟1𝒟\mathcal{D}_{1}\setminus\mathcal{D}_{\infty} satisfying Assumption III.

Example 3.4.

For any ε(0,18]\varepsilon\in(0,\frac{1}{8}], the function

f(r)=(r+1)11121+ε0r1costtdt\displaystyle f(r)=(r+1)^{\frac{11}{12}}-1+\varepsilon\int_{0}^{r}\frac{1-\cos\sqrt{t}}{t}\,\mathrm{d}t

is in 𝒟1𝒟\mathcal{D}_{1}\setminus\mathcal{D}_{\infty} and satisfies Assumption III. For simplicity, we denote

f1(r)=(r+1)11121,f2(r)=0r1costtdt.\displaystyle f_{1}(r)=(r+1)^{\frac{11}{12}}-1,\qquad\qquad f_{2}(r)=\int_{0}^{r}\frac{1-\cos\sqrt{t}}{t}\,\mathrm{d}t.

Then f(r)=f1(r)+εf2(r)f(r)=f_{1}(r)+\varepsilon f_{2}(r) for ε(0,18]\varepsilon\in(0,\frac{1}{8}]. Let us verify that ff satisfies the desired properties.

(1) We first show that f𝒟1𝒟f\in\mathcal{D}_{1}\setminus\mathcal{D}_{\infty}. Indeed, it is well known that f1f_{1} is a Bernstein function [SSV12] with f1(0)=0f_{1}(0)=0, which yields f1𝒟𝒟1f_{1}\in\mathcal{D}_{\infty}\subset\mathcal{D}_{1}. For N=1N=1, we have Λ1(x)=cosx\Lambda_{1}(x)=\cos x. If we set ν1(dt)=1t𝟏(0,1)(t)dt\nu_{1}(\mathrm{d}t)=\frac{1}{t}\mathbf{1}_{(0,1)}(t)\,\mathrm{d}t, then

f2(r)=0r1costtdt=011cosrttdt=(0,)(1Λ1(rt))ν1(dt).\displaystyle f_{2}(r)=\int_{0}^{r}\frac{1-\cos\sqrt{t}}{t}\,\mathrm{d}t=\int_{0}^{1}\frac{1-\cos\sqrt{rt}}{t}\,\mathrm{d}t=\int_{(0,\infty)}\left(1-\Lambda_{1}(\sqrt{rt})\right)\nu_{1}(\mathrm{d}t).

It follows that f2𝒟1f_{2}\in\mathcal{D}_{1} and thus f=f1+εf2𝒟1f=f_{1}+\varepsilon f_{2}\in\mathcal{D}_{1}. To check f𝒟f\notin\mathcal{D}_{\infty}, we compute

f1(r)\displaystyle f_{1}^{\prime}(r) =1112(r+1)112,\displaystyle=\frac{11}{12}(r+1)^{-\frac{1}{12}}, f1(r)\displaystyle f_{1}^{\prime\prime}(r) =11144(r+1)1312,\displaystyle=-\frac{11}{144}(r+1)^{-\frac{13}{12}},
f2(r)\displaystyle f_{2}^{\prime}(r) ={1cosrr,r>0,12,r=0,\displaystyle=\begin{cases}\frac{1-\cos\sqrt{r}}{r},&r>0,\\ \frac{1}{2},&r=0,\end{cases} f2(r)\displaystyle f_{2}^{\prime\prime}(r) ={rsinr+2cosr22r2,r>0,124,r=0,\displaystyle=\begin{cases}\frac{\sqrt{r}\sin\sqrt{r}+2\cos\sqrt{r}-2}{2r^{2}},&r>0,\\ -\frac{1}{24},&r=0,\end{cases}

and

f1(r)=o(1r2),f2(r)=cosr4r2+o(1r2),as r.\displaystyle f_{1}^{\prime\prime\prime}(r)=o\left(\frac{1}{r^{2}}\right),\qquad f_{2}^{\prime\prime\prime}(r)=\frac{\cos\sqrt{r}}{4r^{2}}+o\left(\frac{1}{r^{2}}\right),\qquad\text{as }r\to\infty.

Therefore, f(r)=f1(r)+εf2(r)=(εcosr+o(1))/(4r2)f^{\prime\prime\prime}(r)=f_{1}^{\prime\prime\prime}(r)+\varepsilon f_{2}^{\prime\prime\prime}(r)=(\varepsilon\cos\sqrt{r}+o(1))/(4r^{2}), as rr\to\infty. From here we can find an r0>0r_{0}>0 such that f(r0)<0f^{\prime\prime\prime}(r_{0})<0, which indicates f𝒟f\notin\mathcal{D}_{\infty}.

(2) We check α,β<0\alpha,\beta<0. This would follow from f(r)<0f^{\prime\prime}(r)<0 for all r>0r>0. In fact, from calculus we know xsinx+2cosx2<0x\sin x+2\cos x-2<0 for x(0,π)x\in(0,\pi), which implies f2(r)<0f_{2}^{\prime\prime}(r)<0 for all r(0,9)r\in(0,9). Together with f1(r)<0f_{1}^{\prime\prime}(r)<0, we have f(r)<0f^{\prime\prime}(r)<0 for r(0,9)r\in(0,9). And for r[9,+)r\in[9,+\infty), since f2(r)(rsinr)/(2r2)1/(2r3/2)f_{2}^{\prime\prime}(r)\leq(\sqrt{r}\sin\sqrt{r})/(2r^{2})\leq 1/(2r^{3/2}), we obtain

f(r)=f1(r)+εf2(r)11144(r+1)1312+ε2r32[11144(rr+1)32+116]r32<0.\displaystyle f^{\prime\prime}(r)=f_{1}^{\prime\prime}(r)+\varepsilon f_{2}^{\prime\prime}(r)\leq-\frac{11}{144}(r+1)^{-\frac{13}{12}}+\frac{\varepsilon}{2}r^{-\frac{3}{2}}\leq\left[-\frac{11}{144}\left(\frac{r}{r+1}\right)^{\frac{3}{2}}+\frac{1}{16}\right]r^{-\frac{3}{2}}<0.

(3) With the same decomposition as in the proof of Theorem 3.3, (2.25) follows from

2f(0)(f(r)+rf(0)2rf(r))\displaystyle-2f^{\prime\prime}(0)\left(f(r)+rf^{\prime}(0)-2rf^{\prime}(r)\right) >2rf(0)(f(r)f(0))+(f(r)f(0))2,\displaystyle>2rf^{\prime\prime}(0)\left(f^{\prime}(r)-f^{\prime}(0)\right)+\left(f^{\prime}(r)-f^{\prime}(0)\right)^{2}, (3.4)
2rf(0)(f(r)f(0))2f(0)\displaystyle 2rf^{\prime\prime}(0)\frac{\left(f^{\prime}(r)-f^{\prime}(0)\right)^{2}}{f^{\prime}(0)} 2r(f(r)f(0))(f(r)f(0)).\displaystyle\geq 2r\left(f^{\prime\prime}(r)-f^{\prime\prime}(0)\right)\left(f^{\prime}(r)-f^{\prime}(0)\right). (3.5)

Notice that (3.4) is equivalent to g(r):=(f(r)f(0))2+2f(0)(f(r)rf(r))<0g(r):=(f^{\prime}(r)-f^{\prime}(0))^{2}+2f^{\prime\prime}(0)(f(r)-rf^{\prime}(r))<0. One can check that xsinx+2cosx2>x412x\sin x+2\cos x-2>-\frac{x^{4}}{12} for x>0x>0 (for instance, the function h(x) = x sin x + 2cos x − 2 + x⁴/12 satisfies h(0) = h′(0) = 0 and h″(x) = x(x − sin x) > 0 for x > 0), which further implies f2(r)>124=f2(0)f_{2}^{\prime\prime}(r)>-\frac{1}{24}=f_{2}^{\prime\prime}(0) for r>0r>0. Since f1(r)>f1(0)f_{1}^{\prime\prime}(r)>f_{1}^{\prime\prime}(0), we have f(r)>f(0)f^{\prime\prime}(r)>f^{\prime\prime}(0) for r>0r>0. Therefore,

g(r)=2f(r)(f(r)f(0)rf(0))=2f(r)0r(f(s)f(0))ds<0forr>0,\displaystyle g^{\prime}(r)=2f^{\prime\prime}(r)\left(f^{\prime}(r)-f^{\prime}(0)-rf^{\prime\prime}(0)\right)=2f^{\prime\prime}(r)\int_{0}^{r}(f^{\prime\prime}(s)-f^{\prime\prime}(0))\,\mathrm{d}s<0\ \ \text{for}\ \ r>0,

which, together with g(0)=0, implies g(r)<0g(r)<0 and hence (3.4) for r>0r>0.

Since f(r)f(0)<0f^{\prime}(r)-f^{\prime}(0)<0, (3.5) is equivalent to f(0)(f(r)f(0))f(0)(f(r)f(0))f^{\prime\prime}(0)(f^{\prime}(r)-f^{\prime}(0))\leq f^{\prime}(0)(f^{\prime\prime}(r)-f^{\prime\prime}(0)). Since f(0)=12f(0)f^{\prime}(0)=-12f^{\prime\prime}(0), it suffices to prove f(r)+12f(r)0f^{\prime}(r)+12f^{\prime\prime}(r)\geq 0. By Taylor expansion we know 1cosx>x22x4241-\cos x>\frac{x^{2}}{2}-\frac{x^{4}}{24} for x>0x>0. It follows that f2(r)>12r24f_{2}^{\prime}(r)>\frac{1}{2}-\frac{r}{24} for r>0r>0. Together with f2(r)>124f_{2}^{\prime\prime}(r)>-\frac{1}{24}, we have

f2(r)+12f2(r)>124r\displaystyle f_{2}^{\prime}(r)+12f_{2}^{\prime\prime}(r)>-\frac{1}{24}r

for r>0.r>0. If r(0,9)r\in(0,9), we have

f1(r)+12f1(r)=1112(r+1)1312r>111200r,\displaystyle f_{1}^{\prime}(r)+12f_{1}^{\prime\prime}(r)=\frac{11}{12}(r+1)^{-\frac{13}{12}}r>\frac{11}{1200}r,

which implies that for r(0,9)r\in(0,9),

f(r)+12f(r)>(111200ε24)r>0.\displaystyle f^{\prime}(r)+12f^{\prime\prime}(r)>\left(\frac{11}{1200}-\frac{\varepsilon}{24}\right)r>0. (3.6)

For r[9,+)r\in[9,+\infty), since f2(r)0f_{2}^{\prime}(r)\geq 0, we find

f1(r)+12f1(r)=1112(r+1)1312r1112(910)1312r112,\displaystyle f_{1}^{\prime}(r)+12f_{1}^{\prime\prime}(r)=\frac{11}{12}(r+1)^{-\frac{13}{12}}r\geq\frac{11}{12}\left(\frac{9}{10}\right)^{\frac{13}{12}}r^{-\frac{1}{12}},
f2(r)+12f2(r)12×|rsinr+2cosr2|2r230r32,\displaystyle f_{2}^{\prime}(r)+12f_{2}^{\prime\prime}(r)\geq-12\times\frac{\left\lvert\sqrt{r}\sin\sqrt{r}+2\cos\sqrt{r}-2\right\rvert}{2r^{2}}\geq-30r^{-\frac{3}{2}},

which implies that

f(r)+12f(r)[1112(910)131230εr1]r112>0\displaystyle f^{\prime}(r)+12f^{\prime\prime}(r)\geq\left[\frac{11}{12}\left(\frac{9}{10}\right)^{\frac{13}{12}}-30\varepsilon r^{-1}\right]r^{-\frac{1}{12}}>0 (3.7)

for r[9,+)r\in[9,+\infty). Now (3.5) follows from (3.6) and (3.7).

(4) It remains to show (2.26). As in Lemma 3.1, it suffices to prove αr>2β\alpha r>2\beta. To this end, we consider

rf2(r)(f2(r)f2(0))=r+rsinr+4cosr42r.\displaystyle rf_{2}^{\prime\prime}(r)-(f_{2}^{\prime}(r)-f_{2}^{\prime}(0))=\frac{r+\sqrt{r}\sin\sqrt{r}+4\cos\sqrt{r}-4}{2r}.

One can check that r+rsinr+4cosr4>0r+\sqrt{r}\sin\sqrt{r}+4\cos\sqrt{r}-4>0 for r>0r>0. Note that rf1(r)(f1(r)f1(0))>0rf_{1}^{\prime\prime}(r)-(f_{1}^{\prime}(r)-f_{1}^{\prime}(0))>0 since f1″ is strictly increasing. It follows that rf(r)(f(r)f(0))>0rf^{\prime\prime}(r)-(f^{\prime}(r)-f^{\prime}(0))>0 for r>0r>0, which further implies αr>2β\alpha r>2\beta.
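The inequalities established in steps (2)–(4) can also be checked numerically; the following sketch (with our own variable names, taking ε = 1/8 and a grid bounded away from r = 0) verifies f″(r) < 0, f′(r) + 12f″(r) > 0 and rf″(r) − (f′(r) − f′(0)) > 0.

import numpy as np

eps = 1.0 / 8.0
r = np.linspace(1e-3, 200.0, 400000)
s = np.sqrt(r)

# First and second derivatives of f1 and f2 from Example 3.4.
f1p  = (11.0 / 12.0) * (r + 1.0) ** (-1.0 / 12.0)
f1pp = -(11.0 / 144.0) * (r + 1.0) ** (-13.0 / 12.0)
f2p  = (1.0 - np.cos(s)) / r
f2pp = (s * np.sin(s) + 2.0 * np.cos(s) - 2.0) / (2.0 * r ** 2)

fp, fpp = f1p + eps * f2p, f1pp + eps * f2pp
fp0 = 11.0 / 12.0 + eps / 2.0                 # f'(0)

print(np.all(fpp < 0))                        # step (2): f'' < 0
print(np.all(fp + 12.0 * fpp > 0))            # step (3): f' + 12 f'' > 0
print(np.all(r * fpp - (fp - fp0) > 0))       # step (4): r f'' - (f'(r) - f'(0)) > 0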

Remark 3.1.

From Example 3.4, we also find that f2𝒟1𝒟f_{2}\in\mathcal{D}_{1}\setminus\mathcal{D}_{\infty}. Since f2f_{2}^{\prime} is not monotone, f2″ changes sign, and hence f2f_{2} clearly violates Assumption III (in particular, the condition αβ > 0 fails).

In the isotropic case, it was shown in [CS18, Section 3.3] that one can use GOE matrices in the Kac–Rice representation if and only if κ2:=BN(0)2BN(0)<1\kappa^{2}:=\frac{B_{N}^{\prime}(0)^{2}}{B_{N}^{\prime\prime}(0)}<1. In particular, we have κ2<1\kappa^{2}<1 if B(r2)B(r^{2}) is positive definite on N\mathbb{R}^{N} for all NN\in\mathbb{N}. Consider B(r)=1p+pΛ2(r)B(r)=1-p+p\Lambda_{2}(\sqrt{r}) for 0<p<120<p<\frac{1}{2}. One can check directly that κ2<1\kappa^{2}<1 (see the computation below), but B(r)B(r) is not completely monotone. It follows from Schoenberg’s theorem [Sch38] that B(r2)B(r^{2}) is not positive definite on d\mathbb{R}^{d} for dd large enough.
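To make the phrase "one can check directly" explicit, assume (as suggested by the representation (1.3) with N = 2) that Λ2(x) = J0(x), the Bessel function of the first kind of order zero. Then

\[
B(r)=1-p+p\,J_{0}(\sqrt{r})=1-p+p\sum_{k=0}^{\infty}\frac{(-1)^{k}}{(k!)^{2}}\Big(\frac{r}{4}\Big)^{k}
=1-\frac{p}{4}\,r+\frac{p}{64}\,r^{2}-\cdots,
\]

so that B′(0) = −p/4 and B″(0) = p/32, whence κ² = B′(0)²/B″(0) = 2p, which is less than 1 precisely when 0 < p < 1/2.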

The above discussion shows that for locally isotropic Gaussian random fields the class 𝒟\mathcal{D}_{\infty} is sufficient for the GOE representation but not necessary.

Acknowledgements.

We are grateful to the anonymous referees for careful reading and many constructive suggestions which have significantly improved the paper.

References