Local repulsion of planar Gaussian critical points
Abstract.
We study the local repulsion between critical points of a stationary isotropic smooth planar Gaussian field. We show that the critical points can experience a soft repulsion, which is maximal in the case of the random planar wave model, or a soft attraction of arbitrarily high order. If the type of critical points is specified (extremum, saddle point), the points experience a hard local repulsion, which we quantify through the precise magnitude of the second factorial moment of the number of points in a small ball.
Key words: Gaussian random fields; Stationary random fields; Critical points; Kac-Rice formula; repulsive point process.
AMS Classification: 60G60; 60G15
FSMP, [email protected]
1. Introduction
The main topic of this paper is a local analysis of the critical points of a smooth stationary planar Gaussian field. The study of critical points, of their number as well as their positions, is an important issue in various application areas such as sea wave modelling [7], astronomy [15, 3, 14] or neuroimaging [18, 22, 23, 24]. In these situations, practitioners are particularly interested in the detection of peaks of the random field under study or in high level asymptotics of maximal points [8, 22, 23]. In contrast to these extreme value theory results, some situations require the topological study of excursion sets over moderate levels [2, 9] or the study of the location of critical points (not only extremal ones) [17].
Repulsive point processes have seen a surge of interest in recent years; they are useful in a number of applications, such as sampling for quasi-Monte Carlo methods [6], data mining, texture synthesis in image analysis [13], training set selection in machine learning, or numerical integration, see for instance [12], or as coresets for subsampling large datasets [21]. Critical points of Gaussian fields could be an alternative to determinantal point processes, which are commonly used for their repulsion properties despite the difficult issue of their synthesis [10]. Several definitions exist to characterise the repulsion properties of a stationary point process. We will use the following informal definition of local repulsion: a stationary random set of points is locally repulsive at the second order if, denoting by its number of points in a ball centred in with radius , we have
(1) |
where for an integer is the second order factorial power. This definition is motivated by the heuristic computation where we consider randomly sampled in and
where the remainder terms are hopefully negligible when is small. In other words, a point process is locally repulsive if the probability of finding a point in a small ball diminishes if we know that there is already a point in this ball. The constant is called the (second order) local repulsion factor; it is a dimensionless parameter that is invariant under rescaling or rotation of the process . It equals if is a homogeneous Poisson process, which is universally considered non-interacting. We say that the point process is weakly locally repulsive (resp. attractive) if (resp. ), and strongly repulsive if .
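As a sanity check on this definition, the benchmark value for the homogeneous Poisson process can be verified by simulation: for such a process the second factorial moment of the count in a ball equals the square of its mean, so the repulsion factor is exactly one. The following sketch is ours (helper name and parameters are illustrative, not from the paper):

```python
import numpy as np

def factorial_moment_ratio(intensity, radius, trials, rng):
    """Estimate E[N(N-1)] / E[N]^2 for a homogeneous Poisson process on the
    unit square, counting points in a small ball around the centre."""
    centre = np.array([0.5, 0.5])
    counts = np.empty(trials)
    for t in range(trials):
        n = rng.poisson(intensity)              # total number of points
        pts = rng.uniform(size=(n, 2))          # uniform locations
        counts[t] = np.sum(np.linalg.norm(pts - centre, axis=1) < radius)
    second_factorial = np.mean(counts * (counts - 1))
    return second_factorial / counts.mean() ** 2

rng = np.random.default_rng(0)
ratio = factorial_moment_ratio(intensity=200, radius=0.05, trials=20000, rng=rng)
# For a Poisson process the ratio should be close to 1 (no interaction).
```

A repulsive process would give a ratio below one at small radii, an attractive one a ratio above one.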
We study the repulsion properties of the stationary process formed by the critical points of a planar stationary isotropic Gaussian field . We show that, depending on the covariance function of the field, they form a weakly locally repulsive or a weakly locally attractive point process, and that the minimal repulsion factor is , reached when is a Gaussian random wave model, which hence yields the most locally repulsive process of Gaussian critical points. There is, on the other hand, no maximal value for the limit. We also show that the subprocess formed by the local maxima of the field is strongly repulsive, as is the subprocess formed by the saddle points, and give the precise magnitude of the decay of the ratio on the left-hand side of (1).
Let us quote two recent articles concerned with a very similar question. The first one, which has been a source of inspiration, is [5]. In this paper, Beliaev, Cammarota and Wigman study the repulsion of the critical points for a particular Gaussian field, Berry's planar random wave model, whose spectral measure is uniformly spread on a circle centred in . They obtain the exact repulsion ratio for critical points and upper bounds on the repulsion for specific types of critical points (saddles, extrema). Azaïs and Delmas [1] have studied the attraction or repulsion of critical points for general stationary Gaussian fields in any dimension. Using a different computation method, they get an upper bound for the second factorial moment which is compatible with the order of magnitude that we obtain. Their method is borrowed from techniques in random matrix theory, as suggested by Fyodorov [11]: namely, an explicit expression for the joint density of GOE eigenvalues is exploited.
In order to quantify the repulsion of the critical points, we compute the second factorial moment using the Rice or Kac-Rice formulas (see [2] or [4] for details), as the vast majority of works concerned with counting the zeros or critical points of a random field. We get the asymptotics as the ball radius tends to 0 by performing a fine asymptotic analysis on the conditional expectations that are involved in the Kac-Rice formulas.
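The Rice formula can be sanity-checked on the simplest smooth stationary example in dimension one (this is an illustrative check of the formula, not the planar computation of the paper). For the process X(t) = ξ₁ cos t + ξ₂ sin t, with ξ₁, ξ₂ independent standard Gaussians, the covariance is r(t − s) = cos(t − s), and the Rice formula predicts an expected number of zeros per unit length of (1/π)·sqrt(−r''(0)/r(0)) = 1/π, i.e. exactly two zeros on [0, 2π):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2 * np.pi, 4000, endpoint=False)

zero_counts = []
for _ in range(500):
    xi1, xi2 = rng.standard_normal(2)
    x = xi1 * np.cos(t) + xi2 * np.sin(t)     # stationary, covariance cos(t - s)
    # count sign changes along the cyclic grid
    zero_counts.append(np.sum(np.sign(x) != np.sign(np.roll(x, -1))))

empirical = np.mean(zero_counts)
rice_prediction = 2 * np.pi * (1 / np.pi)      # sqrt(-r''(0)/r(0)) / pi per unit length
```

Every realization is a shifted cosine, so the empirical count matches the Rice prediction exactly; for general smooth fields the agreement holds only in expectation.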
The paper is organized as follows: In Section 2, we present the Gaussian fields, which are the probabilistic objects of our study, and the basic tools we will use for their study. In Section 3, we derive the Kac-Rice formula in a context adapted to our framework. The purpose of Section 4 is to compute the expectation of the number of critical points, as well as of the numbers of extrema, minima, maxima and saddle points (see Proposition 3). In Section 5, we study the second factorial moment and discuss the repulsion properties of the critical points.
2. Assumptions and tools
The main actors of this article are centred Gaussian random functions whose law is invariant under translations and whose realisations are smooth. Formally, this means that for , is a centred Gaussian vector whose law is invariant under translation of the 's (and under rotations if isotropy is further assumed), and that the sample paths are a.s. of class (or more). See [2] for a rigorous and detailed exposition of Gaussian fields. Such a field is characterised by its reduced covariance function
for some , and if the field is furthermore assumed to be isotropic (i.e. its law is invariant under rotations)
(2) |
for some , where denotes the Euclidean norm of .
We denote by the gradient of at , by the Hessian matrix evaluated at , when these quantities are well defined. For a smooth random field , the set of critical points is denoted by
and the number of critical points in a small disc of radius is defined by
When there is no ambiguity about the random field , we simply write instead of . Similarly, we denote by resp. the numbers of saddle points, extrema, maxima and minima respectively, these types of critical points being characterised by the signs of the Hessian eigenvalues.
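The classification of non-degenerate critical points by the signs of the Hessian eigenvalues can be made concrete numerically. The sketch below is ours and uses the deterministic test function f(x, y) = sin x · sin y in place of a Gaussian sample; the helper names are hypothetical:

```python
import numpy as np

def classify(hessian, tol=1e-9):
    """Classify a non-degenerate critical point by its Hessian eigenvalue signs."""
    eig = np.linalg.eigvalsh(hessian)
    if np.all(eig < -tol):
        return "maximum"
    if np.all(eig > tol):
        return "minimum"
    return "saddle"

def hessian_sin_sin(x, y):
    # Hessian of f(x, y) = sin(x) * sin(y)
    return np.array([[-np.sin(x) * np.sin(y), np.cos(x) * np.cos(y)],
                     [np.cos(x) * np.cos(y), -np.sin(x) * np.sin(y)]])

# the gradient (cos x sin y, sin x cos y) vanishes at these three points:
kinds = [classify(hessian_sin_sin(np.pi / 2, np.pi / 2)),      # both eigenvalues < 0
         classify(hessian_sin_sin(3 * np.pi / 2, np.pi / 2)),  # both eigenvalues > 0
         classify(hessian_sin_sin(np.pi, np.pi))]              # eigenvalues of mixed sign
```

For a sampled Gaussian field, the same classification would be applied at the numerically located zeros of the gradient.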
As will be explained in Section 5, to perform a second order local analysis of the repulsion of 's critical points, we must assume fourth order differentiability of , and for technical reasons we further assume that the fourth order derivative is -Hölder for some ; we call this property regularity. It is implied by being of class for some , see Proposition 1 below. In this case, the Hölder constant is a random variable with Gaussian tail (see below).
Assumption 2.1.
Assume that is a non-constant stationary Gaussian field on and its reduced covariance is of class for some
This assumption implies the regularity of by applying the proposition below to ’s 4th order derivatives.
Proposition 1.
Let be a stationary Gaussian field , with reduced covariance function . Then if for some , for sufficiently small
then for there is a random variable with Gaussian tail such that for all ,
Proof.
It follows from the classical result of Landau and Shepp [2, (2.1.4)] that for a centred Gaussian field a.s. bounded on a Euclidean compact set , there is such that for large enough ,
We wish to apply this result to and
Let . The fact that is bounded is a consequence of the fact that 's paths are locally -Hölder for , see for instance [20, Corollary 4.8].∎
Definition 1.
Say that some random variables satisfy if where is a random variable with a Gaussian tail, i.e.
for some
Proposition 1 hence implies that if a stationary field ’s reduced covariance is of class , then
2.1. Dependency structure
Stationarity imposes strong constraints on the dependence structure between the field's partial derivatives at a given point. Let us recall formula [2, (5.5.4)-(5.5.5)]: if is differentiable for some , then for natural integers such that ,
In particular if we have the spectral representation
(3) |
where the symmetric spectral measure is uniquely defined by
(4) |
Let us state important consequences of (3), and in particular of the fact that, due to the symmetry of , the integral vanishes if or is an odd number. For this reason, when the integral does not vanish, it is symmetric in and
Remark 1.
For all and are independent for , hence and are independent, and furthermore, for any two natural integers whose difference is odd, any partial derivatives of orders and
Non-independence and technical difficulties will mainly emerge from dependence between even degrees of differentiation of the field, such as and , or and , or between the values of the field at different locations, say and . A case we must discard is that of constant , i.e. for some Gaussian variable ; this is what we call a trivial Gaussian field.
Also, the Cauchy-Schwarz inequality yields that for
and there is equality only if is proportional to -a.s. In the isotropic case (i.e. is invariant under spatial rotations), unless , this can only happen if is the Dirac mass in , i.e.
(5) |
Proposition 2.
The two last equalities illustrate the fact that isotropy and polar change of coordinates yield other relations between the of the form
where the coefficients do not depend on .
Example 1.
Let be the Bessel function of the first kind
For let be the Gaussian random wave with parameter , i.e. the isotropic stationary Gaussian field with reduced covariance function
As is apparent from (4), this is the centered Gaussian field whose spectral measure is the uniform law on the centred circle with radius . It is important as it is the unique (in law) stationary Gaussian field for which
up to a multiplicative constant. See for instance [5, 16, 19] and references therein for recent works on diverse aspects of planar random wave models. As proved in Section 6.2, it is the only non-trivial stationary isotropic field satisfying a linear partial differential equation of order three or less. As critical points are not modified by adding a constant, we also consider shifted Gaussian random waves (SGRW), of the form , where is a GRW and is an independent centred standard Gaussian variable. The spectral measure of a SGRW is the sum of a uniform measure on a circle centred in and a finite mass in
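A Gaussian random wave with parameter k can be approximated by superposing plane waves whose directions are drawn uniformly on the circle of radius k; the empirical covariance then matches the Bessel covariance J₀(k·distance). The sketch below is ours (helper names and the truncation are illustrative assumptions):

```python
import numpy as np

def sample_grw_at(points, k, n_waves, rng):
    """One approximate GRW realization as a sum of random plane waves,
    evaluated at the given (m, 2) array of planar points."""
    angles = rng.uniform(0.0, 2 * np.pi, n_waves)
    phases = rng.uniform(0.0, 2 * np.pi, n_waves)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (n_waves, 2)
    args = k * points @ dirs.T + phases                        # (m, n_waves)
    return np.sqrt(2.0 / n_waves) * np.cos(args).sum(axis=1)

def bessel_j0(z, n=20001):
    """J0 via its integral representation (avoids external dependencies)."""
    t = np.linspace(0.0, np.pi, n)
    return np.mean(np.cos(z * np.sin(t)))

rng = np.random.default_rng(2)
pts = np.array([[0.0, 0.0], [1.0, 0.0]])
samples = np.array([sample_grw_at(pts, k=1.0, n_waves=64, rng=rng)
                    for _ in range(4000)])
cov_est = np.mean(samples[:, 0] * samples[:, 1])  # should approach J0(k * 1)
```

For finite n_waves the field is only approximately Gaussian, but its covariance is exact in expectation, which is what the check above exploits.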
3. The Kac-Rice formula
The Kac-Rice formula gives a description of the factorial moments of the zeros of a random field. Let us give a version adapted to counting the critical points of a given type. The following result can be proved by combining the proofs of Theorems 6.3 and 6.4 from [4]; see also [1, Appendix A].
Theorem 3.1.
Let be isotropic, satisfying Assumption 2.1. Let be some open subsets of ,
Then for sufficiently small
(7) |
where we have the -point correlation function:
where is the probability density function of a Gaussian vector .
We are specifically interested in a finite class of sets , namely
In this case, the exponent in or is replaced by the subscript of , e.g.
Proof.
With , we have
Let us show that hypothesis (iii') of [4, Th. 6.3] is satisfied, that is, for small enough and , the law of is non-degenerate. Let us expand
By isotropy it suffices to evaluate it at for . Let us write the covariance matrix as a function of
(10) |
where
Hence the block determinant is
This is equivalent to
where we have by virtue of (6). Hence the determinant is non-zero for sufficiently small. Then the modification of the proof of Theorem 6.3 following the proof of Theorem 6.4 of [4] yields the result; see Appendix A in [1].
It yields in particular
(11) |
∎
4. First order
In this section, we are interested in the computation of the expected number of critical points in a Borel set
Proposition 3.
A sufficient condition for to be of class is that is of class for some , see Proposition 1.
Remark 2.
By stationarity, is the intensity of , i.e. the mean number of critical points per unit volume.
Proof.
According to Theorem 3.1, we must simply evaluate
The stationarity of implies that is independent of , see formula (7). So, we get
(14) |
Using the matrix with in (10), we immediately obtain the probability density function of the (two-dimensional) vector evaluated at the point :
(15) |
where . From this point until the end of the proof we will use the method of the article [5]. Since the first and the second derivatives of are independent at every fixed point, we have:
(16) |
To evaluate (16), we consider the transformation , , and we write in terms of a conditional expectation as follows:
(17) |
where is a centered Gaussian vector field with covariance matrix
Using Proposition 2 and Remark 1, we have
The conditional distribution of is Gaussian with covariance matrix
and expectation
for
The conditioned Gaussian vector is distributed as where are two independent standard Gaussian random variables, hence we have
where is a chi-square random variable with density
So
then
(18) |
By combining Equations (14), (15), (16) and (18), we obtain Formula (12).
Now, we turn to the evaluation of the expected numbers of extrema and saddle points. We have
As previously, we apply the Kac-Rice formula from Section 3. We get:
where
Since the first and the second derivatives of are independent at every fixed point, we obtain
(19) |
(20) |
Using the same argument as in the case of critical points, we write
(21) |
and
(22) |
By combining Equations (15), (19), (20), (21) and (22), we obtain Formula (13).
Finally, we turn to the calculation of the expectation of the number of minima and maxima in .
We know that:
so
By symmetry of the Gaussian field , we have the following equality: for ; therefore
Finally, we obtain
5. Second order
In this section, we study the asymptotic behaviour of the second factorial moment of as goes to zero. The following theorem is the main result of this paper. Given two quantities , write if for two constants we have for sufficiently small, and if , with the convention
Theorem 5.1.
Let be an isotropic Gaussian field that satisfies Assumption 2.1. The repulsion factor is given by
As we have the following asymptotic equivalent expression for the second factorial moment of the number of critical points
(23) |
Depending on the law of , can take any prescribed value in , and is the minimal possible value; it is reached iff is a shifted Gaussian random wave (Example 1).
For the numbers of extrema and saddles in a ball of radius , we have, as
(24) |
(25) |
(26) |
Remark 3.
The repulsion factor terminology comes from and from the heuristic explanation after (1).
Remark 4.
Example 2 (Bargmann Fock field).
Consider the Bargmann-Fock field with parameter , which is the stationary isotropic Gaussian field with reduced covariance function
According to Proposition 3, we have for the first order
Hence the repulsion factor is
which means that the process of critical points is locally weakly repulsive. It logically does not depend on the scaling factor
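A standard way to simulate the Bargmann-Fock field is its entire-series expansion with i.i.d. Gaussian coefficients. The sketch below is ours and uses the unscaled normalisation with covariance exp(−|u−v|²/2), which may differ from the parametrisation of the example above; the truncation order is an assumption:

```python
import numpy as np
from math import factorial

def bargmann_fock_at(point, coeffs):
    """Evaluate a truncated Bargmann-Fock realization at a planar point.
    coeffs: (N, N) i.i.d. standard Gaussians; the truncation error is tiny
    for |point| of order 1 once N is moderately large."""
    n = coeffs.shape[0]
    u, v = point
    i = np.arange(n)
    norm = np.sqrt([factorial(int(k)) for k in i])
    wu = u ** i / norm
    wv = v ** i / norm
    return np.exp(-(u * u + v * v) / 2) * (wu @ coeffs @ wv)

rng = np.random.default_rng(3)
p, q = (0.0, 0.0), (0.5, 0.0)
prods = []
for _ in range(5000):
    c = rng.standard_normal((12, 12))
    prods.append(bargmann_fock_at(p, c) * bargmann_fock_at(q, c))
cov_est = np.mean(prods)  # should approach exp(-|p-q|^2 / 2)
```

The empirical covariance between two nearby points then recovers the Gaussian-shaped reduced covariance function, up to Monte Carlo error.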
Example 3 (Gaussian random waves).
Consider the Gaussian random wave introduced at Example 1 by . We have :
Hence the repulsion factor takes the smallest possible value
which means that the process of critical points is locally weakly repulsive. We retrieve the second factorial moment of Beliaev, Cammarota and Wigman [5]
Example 4.
Consider the centered stationary Gaussian random field with spectral measure
One has by Proposition 2, hence , but Theorem 5.1 does not apply, precisely because 's higher moments are infinite, meaning that is not of class . Hence we consider for . We have as
It implies that the repulsion factor of can reach arbitrarily high values. In particular, this parametric model provides processes of critical points that are weakly locally attractive.
5.1. Discussion and related literature
The equivalence (23) generalises the results of [5], and shows that locally, the planar random wave model yields the most repulsive critical points. We also show that for a general process , the subprocesses formed by extrema and saddle points locally experience a strong repulsion, with three more orders of magnitude for . This confirms the idea that close to a large proportion of saddle points there is an extremal point nearby, and conversely, but that the closest point of the same type (extremum or saddle) is typically much further away.
A novelty of this work is also to derive the precise asymptotic repulsion for the extrema process and the saddle process. Hence we are able to state that the ratio between the internal repulsion forces among extremal points and among saddle points tends to infinity as the radius of the observation ball goes to .
Azaïs and Delmas [1] derived upper bounds on such quantities in any dimension. In particular, their results are consistent with ours in the critical-critical, extrema-extrema and extrema-saddle cases.
6. Proofs
6.1. Conditioning
The proofs of all formulas of Theorem 5.1 are based on the Kac-Rice formula in Theorem 3.1, for instance if , we have the second factorial moment of the number of critical points
(27) |
where is the -point correlation function:
(28) |
Let us briefly explain where the difficulty comes from and why higher order differentiability is required. For close to , if , then the second order derivatives are also small, and the determinant is dominated by third order differentials. When one imposes additional constraints on the signs of the determinants, further cancellations occur within the third order derivatives, requiring fourth order differentiability.
Thanks to the stationarity and isotropy of , it suffices to compute for and for all . To evaluate , the idea is to change the conditioning in . To symmetrise the problem, we introduce some notation: for near , , exploiting Proposition 1 and Definition 1,
(29) |
The crucial point is that is equivalent to . Let us introduce
so that is equivalent to , and
We will see later that is non-degenerate, hence is also non-degenerate for small enough, by continuity of the covariance matrix.
We denote the conditional probability and expectation with respect to by
Remark 5.
Let be a Gaussian vector with non-degenerate. If is a non-singular matrix and if is a measurable function with polynomial bounds, then
So, since obviously
the 2-point correlation function becomes:
(30) |
Using (11) we can evaluate the density at , and the previous expression becomes
It remains to express the product of determinants under the conditioning as a function of ; this involves higher order derivatives (see (29)).
Lemma 1.
Proof.
Define
We can explicitly write the expression of
(35) |
with
and
(36) |
with
∎
6.2. Dependency of derivatives
In view of the previous result, we will have to estimate quantities related to the random vectors
and . We must consider the case where is degenerate. Examining Remark 1, we can split the variables involved into several groups that are mutually independent; there are for instance only two groups of size ,
Other groups, such as , have fewer members, and in the isotropic case they will not be in a linear relation, because of (5):
There is actually no other case to consider. Let us elucidate what can happen within the two bigger groups.
Proposition 4.
Assume the spectral measure is isotropic and not reduced to a Dirac mass in . There is such that a.s. iff and is uniformly spread along a circle of radius , i.e. if is a SGRW with parameter .
There is no such that a.s. .
Proof.
Using (3) and recalling the symmetry
Hence, -a.s., either or . By isotropy, this implies that and that 's support is concentrated on zero and the circle with radius . This corresponds to the GRW with radius plus an additional constant term.
In the same way,
implies that is trivial if is isotropic. ∎
In conclusion, the only non-trivial linear relation possibly satisfied by the derivatives involved in is , and it can only be satisfied by a SGRW. In the light of these results, the functionals of interest only depend on the law of the vector under the conditioning , where
because if is a shifted GRW and , a.s., hence is directly expressible as a function of
Lemma 2.
The conditional density of knowing converges pointwise to the density of knowing . There is furthermore such that for sufficiently small,
where is the density of iid Gaussian variables with common variance Hence for any non-negative functional
Proof.
Since the vector is non-degenerate, by continuity of the covariance matrix the vector is also non-degenerate for sufficiently small, and the density of the former converges pointwise to the density of the latter. Hence the conditional density of converges to the non-degenerate multivariate conditional Gaussian density of .
Let be the covariance matrix of the conditional vector . For , we denote by the -th eigenvalue of the matrix . Since , there exists a constant such that for sufficiently small
(38) |
Hence is bounded between and for some , which gives the desired claims. ∎
6.3. Proof of (23) in Theorem 5.1
From (37), we have
According to Lemma 2, the conditional density of knowing converges pointwise to the non-degenerate density of knowing , and is uniformly bounded by a polynomial ; Lebesgue's dominated convergence theorem then yields
(39) |
To compute the conditional law of , recall that by virtue of (3) and Remark 1, and are independent, and the covariance matrix of is
The covariance matrix of and is
It follows that the conditional covariance of knowing is
the diagonal terms are positive by virtue of (5). By conditional independence of and , and using Proposition 2,
Finally, the second factorial moment of when is given by
Recalling that
yields indeed
Let us show that . Given a measure on , denote by
Since is isotropic, define as the radial part of , yielding with a polar change of coordinates
Introduce the probability measure, for
Using the spectral representation in Proposition 2 yields for some
by the Cauchy-Schwarz inequality. The ratio is minimal if equality is attained in the Cauchy-Schwarz inequality, i.e. when is proportional to -a.e. This is the case only if is uniformly spread on a circle of , with perhaps also an additional atom in . This corresponds exactly to the class of fields derived in Example 1, which are the SGRW. For the precise computation of the constant , see Example 3.
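The Cauchy-Schwarz step can be illustrated on discrete radial spectral measures. Since the relevant probability measure is a radially reweighted version of the spectral measure, an atom at the origin does not affect the equality case; one plausible reading (our reconstruction, with our notation m_j = Σᵢ wᵢ rᵢʲ for the radial moments) compares the moments of orders 2, 4 and 6:

```python
import numpy as np

def radial_moments(radii, weights):
    """Moments m_j = sum_i w_i * r_i**j of a discrete radial spectral measure."""
    r, w = np.asarray(radii, float), np.asarray(weights, float)
    return {j: np.sum(w * r ** j) for j in (2, 4, 6)}

def cs_gap(radii, weights):
    """Cauchy-Schwarz gap m2 * m6 - m4**2 >= 0; it vanishes exactly when the
    mass (away from the origin) sits on a single circle."""
    m = radial_moments(radii, weights)
    return m[2] * m[6] - m[4] ** 2

single_circle = cs_gap([1.0], [1.0])                 # GRW-type measure: gap 0
with_atom = cs_gap([0.0, 1.0], [0.3, 0.7])           # SGRW-type: still gap 0
two_circles = cs_gap([1.0, 2.0], [0.5, 0.5])         # strictly positive gap
```

This matches the equality condition in the text: a circle, with perhaps an additional atom at the origin, i.e. the SGRW class.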
In Example 4, we derived spectral measures which achieve repulsion factors in an interval of the form for some . It therefore remains to show that all values between and can be achieved. For that we use an interpolation
where is the spectral measure of a GRW and belongs to the parametric family . The ratio of moments
evolves continuously with because all the terms in the numerator and denominator do, hence the repulsion factor evolves continuously between and and achieves all intermediate values.
6.4. Proof of (24) in Theorem 5.1
To compute the second factorial moment of when , we apply the Kac-Rice formula of Theorem 3.1 in the case
(41) |
where
It becomes, by virtue of (31),
(42) |
where
To be able to prove (24), we need to establish an upper bound and a lower bound on , so the proof is separated into two parts. We first give in Lemma 3 an asymptotic expression of to get rid of superfluous variables.
Lemma 3.
Let ,
for and as
(43) |
Proof.
From (6.1)-(6.1) in the proof of Lemma 1,
where
is a variable with Gaussian tail. Recall that , hence using (34),
(44) |
Let be such that and ; then Hölder's inequality yields
The probability on the right-hand side can be bounded by
All variables involved in have a Gaussian tail, hence
By Lemma 2 with and Lemma 5-(i) (with ),
(45) |
hence finally
(46) |
To simplify the indicators, remark that, by virtue of (6.1)-(6.1),
Both the events imply . If , has a sign different from , or has a sign different from . In both cases it implies another event of magnitude because
Let now be such that . Since also implies that either or , collecting (44), (46) gives
We have
By an application of Lemma 2 similar to (45), with of the form , and Lemma 5-(i), the first term is in ; hence finally, for some
6.4.1. Upper bound in (24)
According to the previous lemma, it suffices to give an upper bound on . We stress that the crucial point justifying the absence of a log term in the final result (compared to (25)) is the following inequality
hence, since is a polynomial in , we can use Lemma 5-(iii) several times and get, for some ,
(47) |
Then, from (42),(47) and (37), we deduce that for some
(48) |
Finally, from (41) and (48), we deduce for some
6.4.2. Lower bound in (24)
Thanks to Lemma 3, it is sufficient to give a lower bound on . Let us first assume that the Gaussian field is not a SGRW (Example 1), so that the derivatives involved in and are not linearly related. Define the event
We recall
Hence under
Hence for sufficiently small, , in particular and we obtain
Then since is non-degenerate, its density is uniformly bounded and the proof is concluded with
for some In the degenerate case of the SGRW, if and we put instead
If and is realised,
hence for small enough and the same method can be applied because has a bounded density. Therefore, it holds for some
(49) |
From (42), (37) and (49), we get for some
(50) |
Finally, from (41) and (50), we deduce that for some
6.5. Proof of (25) in Theorem 5.1
The difference hence lies in the signs of the determinants; becomes
(51) |
where (see (6.1))
The asymmetry of the expression of the determinant yields a different estimate than in the previous case. To be able to prove (25), we need to establish an upper bound and a lower bound on , as in the previous section (Lemma 3). We give in Lemma 4 an asymptotic expression of .
The proof is similar, but there are also significant differences. The difference with respect to the previous case is that the two signs of the determinants are negative, hence we replace by
and emphasize that does not have the same law as
Lemma 4.
We have for
The proof is omitted, as the proof of Lemma 3 can be reproduced verbatim, with resp. in place of resp.
6.5.1. Upper bound
6.5.2. Lower bound
We recall the expression of and
The strategy is the same as in Section 6.4.2.
6.6. Proof of (26) in Theorem 5.1
We recall that hence
So, we have:
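The elided expansion is the elementary identity obtained from the fact that every critical point is either an extremum or a saddle point: writing $N_c = N_e + N_s$ for the counts in the ball (our notation, reconstructed since the inline formulas are missing),

```latex
N_c(N_c - 1) \;=\; N_e(N_e - 1) \;+\; N_s(N_s - 1) \;+\; 2\, N_e N_s .
```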
Combining this formula with previous estimates (23), (24) and (25), we obtain
ending the proof of (26).
Lemma 5.
Let be a non-degenerate Gaussian vector and real fixed coefficients. Then,
(i) for depending on the law of the (and not on or the ),
(ii) for , for some ,
(iii) Let some coefficients , be a Gaussian vector. Then, for some ,
Proof.
(i) Assume first that the are iid Gaussian. Let us study for , . Since have a density bounded by (universal), we have for
uniformly on . Then it remains to notice that
where are independent of . Then
In the non-independent Gaussian case, the joint density of is bounded by for some (where would be the smallest eigenvalue of the covariance matrix). From there the conclusion is easy:
and the right-hand side corresponds to the independent case, already treated.
(ii) For the second assertion, assume first that the are independent. Without loss of generality, assume . We have for , for some
Coming back to the main estimate with , using conditional expectations, for some
The non-independent (non-degenerate) case can be treated as before by bounding the density of the by an independent density of the same order.
(iii) By Hölder's inequality
hence we can assume wlog that only one , say , is non-zero. For we have an orthogonal decomposition of the form where is independent of , hence we can assume wlog that or . For , the bound is
and it only remains to treat the case . In this case we decompose orthogonally , where is independent of . Then the bounded densities of and easily yield the result. ∎
Acknowledgements
We warmly thank Anne Estrade, who participated in the conception of the project, in the supervision of this work and in the elaboration of this article. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 754362.
References
- AD [22] J. Azaïs and C. Delmas. Mean number and correlation function of critical points of isotropic Gaussian fields. Stoch. Proc. Appl., 150:411–445, 2022.
- AT [09] R. J. Adler and J. E. Taylor. Random fields and geometry. Springer Science & Business Media, 2009.
- ATW [07] R. J. Adler, J. E. Taylor, and K. Worsley. Applications of random fields and geometry: Foundations and case studies. In In preparation, available on R. Adler home page. Citeseer, 2007.
- AW [09] J. Azaïs and M. Wschebor. Level sets and extrema of random processes and fields. John Wiley & Sons, 2009.
- BCW [19] D. Beliaev, V. Cammarota, and I. Wigman. Two point function for critical points of a random plane wave. International Mathematics Research Notices, 2019(9):2661–2689, 2019.
- BH [20] R. Bardenet and A. Hardy. Monte Carlo with determinantal point processes. Ann. Appl. Prob., 30(1):368–417, 2020.
- CG [13] C. Chevalier and D. Ginsbourger. Fast computation of the multi-points expected improvement with applications in batch selection. In International Conference on Learning and Intelligent Optimization, pages 59–69. Springer, 2013.
- CS [17] D. Cheng and A. Schwartzman. Multiple testing of local maxima for detection of peaks in random fields. Annals of statistics, 45(2):529, 2017.
- CX [16] D. Cheng and Y. Xiao. The mean Euler characteristic and excursion probability of Gaussian random fields with stationary increments. The Annals of Applied Probability, 26(2):722–759, 2016.
- DGL [17] A. Desolneux, B. Galerne, and C. Launay. Etude de la répulsion des processus pixelliques déterminantaux. Juan les pins, France, September 2017.
- Fyo [04] Y. V. Fyodorov. Complexity of random energy landscapes, glass transition, and absolute value of the spectral determinant of random matrices. Physical review letters, 92(24):240601, 2004.
- KT [12] A. Kulesza and B. Taskar. Determinantal point processes for machine learning. Found. Trends Mach. learn., 5(2-3), 2012.
- LGD [21] C. Launay, B. Galerne, and A. Desolneux. Determinantal point processes for image processing. SIAM Journal on Imaging Sciences, 14(1), 2021.
- Lin [72] G. Lindgren. Local maxima of Gaussian fields. Arkiv för matematik, 10(1-2):195–218, 1972.
- LW [04] D. L. Larson and B. D. Wandelt. The hot and cold spots in the wilkinson microwave anisotropy probe data are not hot and cold enough. The Astrophysical Journal Letters, 613(2):L85, 2004.
- [16] S. Muirhead, A. Rivera, and H. Vanneuville. The phase transition for planar Gaussian percolation models without FKG. https://arxiv.org/abs/2010.11770.
- Mui [20] S. Muirhead. A second moment bound for critical points of planar Gaussian fields in shrinking height windows. Statistics & Probability Letters, 160:108698, 2020.
- NH [03] T. Nichols and S. Hayasaka. Controlling the familywise error rate in functional neuroimaging: a comparative review. Statistical methods in medical research, 12(5):419–446, 2003.
- NPR [19] I. Nourdin, G. Peccati, and M. Rossi. Nodal statistics of planar random waves. Comm. Math. Phys., 369:99–151, 2019.
- Pot [09] J. Potthoff. Sample properties of random fields. II. Continuity. Comm. Stoc. Anal., 3(3):331–348, 2009.
- TBA [19] N. Tremblay, S. Barthelmé, and P. Amblard. Determinantal point processes for coresets. J. Mach. Learn. Res., 20:1–70, 2019.
- TW [07] J. E. Taylor and K. J. Worsley. Detecting sparse signals in random fields, with an application to brain mapping. Journal of the American Statistical Association, 102(479):913–928, 2007.
- WMNE [96] K. J. Worsley, S. Marrett, P. Neelin, and A.C. Evans. Searching scale space for activation in PET images. Human brain mapping, 4(1):74–90, 1996.
- WTTL [04] K. J. Worsley, J. E. Taylor, T. F. Tomaiuolo, and J. Lerch. Unified univariate and multivariate random field theory. Neuroimage, 23:S189–S195, 2004.