Asymptotics of running maxima for φ-subgaussian random double arrays
Abstract
The article studies running maxima of a double array of φ-subgaussian random variables normalised by a double array of constants. Asymptotics of the maxima of the double arrays of positive and negative parts are studied when the variables have suitable “exponential-type” tail distributions. The main results are specified for various important particular scenarios and classes of φ-subgaussian random variables.
Dedicated to the memory of Professor Yuri Kozachenko (1940-2020)
Keywords: Random double array, running maxima, φ-subgaussian random variables, almost sure convergence.
2000 Mathematics Subject Classification (AMS MSC): 60F15, 60G70, 60G60.
1 Introduction
The main focus of this investigation is to obtain convergence theorems for the running maxima of φ-subgaussian random variables. The roots of the subject are in classical probability theory and can be traced back to Gnedenko’s theory of the limiting behaviour of maxima of random variables. We refer to the excellent book by Embrechts, Klüppelberg, and Mikosch [7] that contains classical and more recent results on limit theorems for maxima of random variables, with numerous examples of important practical applications in finance, economics, insurance and other fields.
Consider a double array of centered random variables (a 2D random field defined on the integer grid) that are not necessarily independent or identically distributed. We assume that all these variables are defined on the same probability space.
Studying properties of normalised maxima of random sequences and processes is one of the classical problems in probability theory that has attracted considerable interest in the literature, see, for example, [20, 21, 24, 25] and the references therein. The known asymptotic results broadly belong to three classes that use different probabilistic tools; the third of these classes studies asymptotic distributions of normalised maxima (see, for example, a comprehensive collection of results in [24]).
The case of Gaussian random variables has been extensively investigated for each of these classes. However, there are still numerous open problems, in particular about an extension of the known results to non-Gaussian scenarios and multidimensional arrays.
This article studies sufficient conditions on the tail distributions that guarantee the existence of a normalising double array such that the corresponding centred running maxima converge to 0 almost surely as the number of random variables in the maximum tends to infinity. This type of convergence is called the convergence of running maxima.
Contrary to the majority of classical results on the limiting behaviour of the maxima of random variables, where convergence in distribution was considered, we are interested in almost sure convergence to zero. First results of this type were obtained by Pickands [23], where the classical case of Gaussian random variables was considered. Later this result was generalized to wider classes of distributions. In [9] running maxima of one-dimensional random sequences were considered and the generalization to the subgaussian case was studied. In [10], the results of [9] were generalized to the case of φ-subgaussian random variables. For recent publications on this subject we refer to Giuliano and Macci [11], Csáki and Gonchigdanzan [4] and the references therein.
The class of subgaussian and φ-subgaussian random variables is a natural extension of the Gaussian class. The popularity of the Gaussian distribution was justified by the central limit theorem for sums of random variables with small variances. However, the asymptotics can be non-Gaussian if the summands have large variances. Nevertheless, φ-subgaussianity can still be an appropriate assumption. Numerous probability distributions belong to the φ-subgaussian class. For example, reflected Weibull random variables, centered random variables with bounded support, and sums of independent Gaussian and centered bounded random variables are in this class. φ-subgaussian random variables were introduced to generalize various properties of the subgaussian class considered by Dudley [6], Fernique [8], Kahane [15], and Ledoux and Talagrand [22]. Later, several publications used this class of random variables to construct stochastic processes and fields, see [17, 18, 19]. The monograph [3] discusses subgaussianity and φ-subgaussianity in detail and provides numerous important examples and properties.
The main aim of this paper is to investigate the convergence of the running maxima of centered double arrays with more general exponential-type tail distributions than in [10]. The integrability conditions on the function φ change accordingly.
The main results of the paper are Theorems 1–5. In these results the array is split into the arrays of positive and negative parts, and the convergence of these arrays is investigated. The obtained results clearly show how the running maxima behave depending on the right and left tail distributions. The dependence of the normalising array on the Young–Fenchel transform of φ and on the φ-subgaussian norms is demonstrated. The paper also examines the rate of convergence for the array of positive parts.
This paper investigates almost sure convergence of random functionals of double arrays. More details about this and other types of convergence and their applications can be found in the publications [5, 13, 14, 16] and the references therein.
The novelty of this paper compared to the known results in the literature for one-dimensional sequences is:
- the case of random double arrays is studied,
- a mode of convergence suitable for double arrays is used,
- φ-subgaussian norms of random variables in the arrays may increase unboundedly,
- conditions on the exponential-type tail bounds are weaker than in the literature,
- several assumptions are less restrictive than even in the known results for the one-dimensional case,
- specifications for various important cases and particular scenarios are provided.
This paper is organized as follows. Section 2 provides required definitions and notations. The main results of this article are proved in Sections 3 and 4. Conditions for the convergence of running maxima are presented in Section 3. Estimates of the rate of convergence are given in Section 4. Specifications of the main results and important particular cases are considered in Section 5. Section 6 presents some simulation studies. Finally, conclusions and some problems for future investigations are given in the last section.
Throughout the paper, ℝ₊ stands for the set of positive real numbers, and C represents a generic finite positive constant, which is not necessarily the same in each appearance.
All computations, plotting, and simulations in this article were performed using the software R version 4.0.3. A reproducible version of the code in this paper is available in the folder “Research materials” from the website https://protect-au.mimecast.com/s/w-hDCk8vzZULyROOuVK_Qw?domain=sites.google.com.
2 Definitions and auxiliary results
This section presents definitions, notations, and technical results that will be used in the proofs of the main results later.
For double arrays of random variables, due to the lack of a linear ordering of the index set, there are multiple ways to define modes of convergence. See the monograph [16] for a comprehensive discussion.
This paper considers the following mode of convergence for double arrays. Let a double array of real numbers be given.
Definition 1.
The array {c_{m,n}, m, n ≥ 1} converges to c as m, n → ∞ if for every ε > 0 there exists an integer N such that |c_{m,n} − c| < ε whenever both m ≥ N and n ≥ N.
In the following this convergence will be denoted by c_{m,n} → c as m, n → ∞.
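A minimal worked example of this mode of convergence, under the assumption that the elided formal statement of Definition 1 is the standard one (both indices must exceed a common threshold):

```latex
% Worked example for Definition 1 (assuming the standard formulation).
% Let c_{m,n} = 1/m + 1/n. Then c_{m,n} \to 0 as m,n \to \infty:
% for every \varepsilon > 0 take N = \lceil 2/\varepsilon \rceil;
% if m \ge N and n \ge N, then
\[
  |c_{m,n} - 0| = \frac{1}{m} + \frac{1}{n} \le \frac{2}{N} \le \varepsilon .
\]
% Note that convergence along the diagonal m = n alone is not sufficient
% for this mode of convergence.
```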
This paper uses the following notions related to φ-subgaussianity.
Definition 2.
A continuous function φ(x), x ∈ ℝ, is called an Orlicz N-function if
a) it is even and convex,
b) φ(0) = 0,
c) φ(x) is a monotone increasing function for x > 0,
d) φ(x)/x → 0 as x → 0 and φ(x)/x → ∞ as x → ∞.
In the following, the notation φ(·) is used for an Orlicz N-function.
Example 1.
The function φ(x) = |x|^α/α, α > 1, is an Orlicz N-function.
Definition 3.
A function φ*(x), x ∈ ℝ, given by φ*(x) = sup_{y∈ℝ}(xy − φ(y)) is called the Young–Fenchel transform (convex conjugate) of φ.
It is well known that φ*(·) is also an Orlicz N-function.
Example 2.
If φ(x) = |x|^α/α, α > 1, then φ*(x) = |x|^β/β, where 1/α + 1/β = 1.
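A short derivation of the transform in Example 2, assuming the example concerns the standard choice φ(x) = |x|^α/α from [3]:

```latex
% For x \ge 0, maximise g(y) = xy - y^{\alpha}/\alpha over y \ge 0:
% g'(y) = x - y^{\alpha-1} = 0 gives y^* = x^{1/(\alpha-1)}, hence
\[
  \varphi^*(x) = x \cdot x^{1/(\alpha-1)} - \frac{x^{\alpha/(\alpha-1)}}{\alpha}
               = x^{\alpha/(\alpha-1)}\Bigl(1 - \frac{1}{\alpha}\Bigr)
               = \frac{|x|^{\beta}}{\beta},
  \qquad \beta = \frac{\alpha}{\alpha-1},
\]
% so that 1/\alpha + 1/\beta = 1. For \alpha = \beta = 2 this recovers
% \varphi(x) = \varphi^*(x) = x^2/2, the classical subgaussian case.
```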
Any Orlicz N-function φ can be represented in the integral form
φ(x) = ∫₀^{|x|} f(t) dt,
where f(t), t ≥ 0, is its density. The density f is non-decreasing and there exists a generalized inverse defined by
f^{(−1)}(s) = sup{t ≥ 0 : f(t) ≤ s}.
Then,
φ*(x) = ∫₀^{|x|} f^{(−1)}(t) dt.
As a consequence, the function φ* is increasing, differentiable, and (φ*)′ = f^{(−1)}.
Definition 4.
A random variable ξ is φ-subgaussian if Eξ = 0, E exp{λξ} is finite for all λ ∈ ℝ, and there exists a finite constant a > 0 such that E exp{λξ} ≤ exp{φ(aλ)} for all λ ∈ ℝ. The φ-subgaussian norm τ_φ(ξ) is defined as the infimum of all such constants a.
The definition of a φ-subgaussian random variable is given in terms of expectations, but it is essentially a condition on the tail of the distribution. Namely, the following result holds, see [3, Lemma 4.3, p. 66].
Lemma 1.
If φ is an Orlicz N-function and a random variable ξ is φ-subgaussian, then for all x > 0 the following inequality holds:
P(|ξ| ≥ x) ≤ 2 exp{−φ*(x/τ_φ(ξ))}.
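A quick numerical sanity check of the bound in Lemma 1 for the classical case, assuming the elided inequality is the standard P(|ξ| ≥ x) ≤ 2 exp{−φ*(x/τ_φ(ξ))}: for a standard Gaussian variable, φ(x) = x²/2, φ*(x) = x²/2 and τ_φ = 1, so the bound reads 2 exp(−x²/2), which can be compared with the exact two-sided tail.

```python
import math

# Exact two-sided Gaussian tail via the complementary error function:
# P(|X| >= x) = erfc(x / sqrt(2)) for X ~ N(0, 1).
def exact_two_sided_tail(x):
    return math.erfc(x / math.sqrt(2.0))

# The (assumed) phi-subgaussian bound for phi(x) = x^2/2 and norm tau:
# P(|X| >= x) <= 2 exp(-(x / tau)^2 / 2).
def subgaussian_bound(x, tau=1.0):
    return 2.0 * math.exp(-((x / tau) ** 2) / 2.0)

# The bound dominates the exact tail on a grid of thresholds.
for x in [0.5, 1.0, 2.0, 4.0]:
    assert exact_two_sided_tail(x) <= subgaussian_bound(x)
```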
Remark 1.
For readers’ convenience, we present a brief discussion of recent relevant results in the literature on one-dimensional sequences of φ-subgaussian random variables.
Consider a zero-mean sequence of random variables and its running maxima.
If the variables are independent standard Gaussian random variables, then the running maxima normalised by √(2 ln n) converge to 1 a.s., see, for instance, [23].
In [10] the following proposition was proved for this setting.
Proposition 1.
Suppose that there exists such that for every , the generalized inverse of the density of the Orlicz -function satisfies the conditions
and
Then a.s.
It is natural to try to extend Proposition 1 to the multidimensional arrays. This is done in the next section.
Next, the behaviour of the negative parts was also studied in [10], but some additional assumptions on the left tail distribution were required. Unfortunately, these assumptions cannot be derived from the φ-subgaussianity assumption (see Remark 2 in [10]). In contrast to Proposition 1, the independence assumption is also required.
Proposition 2.
Assume that is a sequence of zero-mean independent random variables and there exists a number such that, for every and all , we have
where is a positive differentiable function with non-decreasing for . Suppose that there exists an such that for every it holds
Then a.s.
In its exponential-type tail condition, Proposition 2 uses the same function φ as in the definition of φ-subgaussianity. The next section will extend it to the case of arbitrary functions.
Proposition 3.
Let be a sequence of -subgaussian random variables such that and let . Then
Remark 2.
The statement of Proposition 3 is obvious for . Hence, only the case of i.e. is interesting.
Note that [10] also examined the rate of convergence of the sequence . In Proposition 3, the rate of convergence for is given, while usually only results for , were obtained in the existing literature. As for any it also follows from the assumptions of the proposition that .
It was also shown in [10] that Proposition 3 is sharp in some sense. Namely, the following result is true.
Proposition 4.
Let be a zero-mean sequence of independent random variables and there exists a strictly increasing differentiable function and such that, for every and it holds
Assume that there exists such that
Then, for every real number and for every it holds
3 On asymptotic behaviour of running maxima of φ-subgaussian double arrays
In this section, we establish sufficient conditions on the tail distributions that guarantee that the positive and negative parts of the normalised running maxima converge to 0 almost surely as the observation window expands.
Let φ be an Orlicz N-function, f be its density, φ* be the Young–Fenchel transform of φ, and f^{(−1)} be the generalized inverse of the density f.
Let us consider a double array (a 2D random field defined on the integer grid) of zero-mean random variables. The following notation will be used to formulate the main results:
where the function is increasing with respect to each of the two index variables.
Let
The indices of the random variables can be viewed as the parameters defining the rectangular observation window of the random field.
The following proofs will use the next extension of Lemma 2 from [10] to the case of double arrays.
Lemma 2.
For any
where i.o. stands for infinitely often.
Proof.
It is easy to see that
Also, as the function is increasing in each variable, it holds
which completes the proof. ∎
Remark 3.
Let {A_n} be an infinite sequence of events. By {A_n i.o.} we denote the event that infinitely many events from {A_n} hold true. The importance of the notion i.o. can be explained by the following well-known statement, which is crucial for proving almost sure convergence: a sequence of random variables converges to 0 almost surely if and only if, for every ε > 0, the events that its absolute value exceeds ε occur i.o. with probability 0.
The following result extends Proposition 1 to the case of double arrays of random variables.
Theorem 1.
Let be a double array of φ-subgaussian random variables and be a non-decreasing function such that for all
(1)
and
Suppose that there exists an such that for every
(2)
Then a.s.
Remark 4.
In the following, without loss of generality, we consider only non-degenerate random variables with non-zero φ-subgaussian norms. For the case of identically distributed variables, the assumption is imposed on their common norm.
Remark 5.
Proof.
It follows from Lemma 2, Remark 3 and the Borel-Cantelli lemma that it is enough to show that for any
because then .
Note that, by Lemma 1 and assumption (1), for all indices except a finite number it holds
since is increasing.
Therefore, it is enough to prove that the double sum
(3)
converges.
Let us fix and investigate the behaviour of the inner sum
Note that
Now, for
Because and are increasing functions,
which results in
Therefore, for fixed ,
(4)
as for and .
To study the last integral in (4) we use the substitution . Then and
By the mean value theorem and it follows that there exists such that it holds
as is a non-decreasing function.
Thus, we obtain the following upper bound
(5)
By substitution and changing the order of integration
(9)
where the finiteness of the last integral follows from and the assumption (2).
For the case of a double array of random variables with bounded φ-subgaussian norms, the function can be chosen identically equal to a constant. Therefore, Theorem 1 can be specified as follows.
Corollary 1.
Let be a double array of φ-subgaussian random variables with uniformly bounded norms. Suppose that there exist constants such that for every
Then,
The asymptotic behaviour of the array of negative parts cannot be described in terms of subgaussianity only. Roughly speaking, an opposite type of inequality is required (see Remark 2 in [10]). Moreover, in addition to the conditions of Proposition 1, it is assumed that the random variables in the double array are independent. The following result is an extension of Proposition 2 to the case of double arrays.
Theorem 2.
Let be a double array of independent φ-subgaussian random variables, where the array and the function are defined as in Theorem 1. Let be a positive increasing differentiable function with the derivative non-decreasing for large arguments. Assume that there exists such that for every and all
and
(10)
for some function Suppose that there exists such that for every
and
Then a.s.
Proof.
Using the Borel-Cantelli lemma, we will prove that for every . By the independence of one gets
where we used the monotonicity of the function and Therefore,
(11)
As the functions and are non-decreasing, for any fixed we can majorize the second sum as
As for the above integral can be estimated by
By the change of variables , this integral equals
(12)
By the mean value theorem, as is non-decreasing, it holds
Thus, applying the above inequality and assumption (10) one gets the following upper bound for the integral in (12)
(13)
By the change of variables and the change of the order of integration we obtain that the last integral equals
(14)
For the case of double arrays of random variables with uniformly bounded φ-subgaussian norms the next specification holds true.
Corollary 2.
Let be a double array of φ-subgaussian random variables with Let be a positive increasing differentiable function with the derivative that is non-decreasing for and for some function Assume that there exists such that for every and
Suppose that there exist constants such that for every
(15)
and
(16)
Then,
Remark 6.
Lemma 1 provides the upper bound on the tail probability of a φ-subgaussian random variable.
The condition here is, in some sense, the opposite. Namely, the lower bound on the tail probability
implies
as
Theorem 3.
4 On convergence rate of running maxima of random double arrays
This section investigates the series
It proves that the series converges for a suitable constant.
The following theorem and corollary are generalizations of Proposition 3 to the case of φ-subgaussian arrays with not necessarily uniformly bounded φ-subgaussian norms.
Theorem 4.
Let be a double array of φ-subgaussian random variables such that for all and some positive-valued function it holds
and
Then
(17)
Proof.
Hence,
by the assumption of the Theorem. ∎
Proof.
It follows from the assumptions that
Hence,
as the right hand side converges for which completes the proof. ∎
Remark 7.
Now we proceed with extending Proposition 4, showing that the rate of convergence is sharp.
Theorem 5.
Let be a double array of independent φ-subgaussian random variables satisfying the following assumptions:
- (i) there exists a strictly increasing function such that for every and some positive constant C it holds
- (ii) there exists such that for all where ;
- (iii) for some
Then, for any it holds
Proof.
By the theorem’s assumption one can take and obtain
Using the inequality one obtains
Then, by the inequality it follows that
By Lemma 1 and assumption
Noting that is an increasing function, one obtains
Then, by assumption
It follows from assumption that
as is an increasing function.
Hence, it holds
Therefore,
when which completes the proof. ∎
5 Theoretical examples
This section provides theoretical examples for important particular classes of φ-subgaussian distributions. Specifications of the functions for which the obtained theoretical results hold true are given.
Example 3.
Consider a double array of standard Gaussian random variables. It is well known that E exp{λX} = exp{λ²/2}, which implies that this is a double array of φ-subgaussian random variables with φ(x) = x²/2. The φ-subgaussian norm of a Gaussian random variable equals its standard deviation, which is 1 in this example. The Young–Fenchel transform of φ is φ*(x) = x²/2, with the density f(t) = t.
One can easily see that the condition (15) of Corollary 1 is satisfied. Indeed, for any positive the following integral is finite
Let us show that the conditions of Corollary 2 are satisfied too.
By [1], for all it holds
As
it is also positive.
It follows from
that is non-decreasing.
Also, it is easy to see that
Let us show that the assumption (15) is satisfied with these specifications of functions and . Indeed, by the change of variables one obtains and
(18)
where
By Bernoulli’s inequality
As polynomial growth is faster than logarithmic growth, the integral in (18) is bounded from above by
for some
As exponentials grow faster than polynomials, for sufficiently large
and
for some positive constants and
The last integral is finite because
for sufficiently large
By Theorem 3 one gets the required convergence a.s.
Example 4.
Let be a double array of independent identically distributed reflected Weibull random variables with the probability density
Consider reflected Weibull random variables with and They belong to the φ-subgaussian class. Indeed, the tails of reflected Weibull random variables equal
Hence, by [3, Corollary 4.1, p. 68], this is a double array of φ-subgaussian random variables, where
see [3, Example 2.5, p. 46]. The density of is
Let us choose such a value of the parameter that this is a double array of φ-subgaussian random variables with the required φ-subgaussian norms, see Section 6. We will show that in this case the conditions of Corollaries 1 and 2 are satisfied.
The conditions of Corollary 1 are satisfied because and the following integral is finite for all positive
Let us show that the conditions of Corollary 2 are satisfied too. By Remark 6 and the equality it follows that and Hence,
because
The assumption (15) can be rewritten as
Let us use the change of variables. Then the above integral equals
(19)
where
As exponentials grow faster than polynomials, we obtain that for sufficiently large
and
for some positive constants and
6 Numerical examples
This section provides numerical examples that confirm the obtained theoretical results. By simulating double arrays of random variables satisfying Theorem 3, we show that the running maxima functionals of these double arrays converge to 0 as the size of the observation windows tends to infinity. As the rate of convergence is very slow, to better illustrate the asymptotic behaviour we selected arrays with constant φ-subgaussian norms close to one.
Consider a double array that consists of independent reflected Weibull random variables (see Example 4). The values of the parameters were selected to get the norm as in Example 4. The probability density function of the underlying random variables and a realization of the double array in a square window are shown in Figure 1.
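The simulations in the paper were performed in R; the following is a minimal Python sketch of how such a reflected Weibull double array can be generated (the shape and scale values below are illustrative, not the exact parameters used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def reflected_weibull(shape_k, scale, size, rng):
    """Symmetric (reflected) Weibull sample: |X| ~ Weibull(shape_k, scale),
    with an independent random sign, so the density is split equally
    between the positive and negative half-lines."""
    magnitude = scale * rng.weibull(shape_k, size=size)
    sign = rng.choice([-1.0, 1.0], size=size)
    return sign * magnitude

# A 200 x 200 realization of the double array; the reflected density is
# symmetric, so the sample mean should be close to zero.
field = reflected_weibull(2.0, 1.0, size=(200, 200), rng=rng)
assert abs(float(field.mean())) < 0.05
```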


The underlying random variables are φ-subgaussian random variables with
A calculation of the φ-subgaussian norm using Definition 4 is not trivial in the general case and may require different approaches. The following method was used to estimate the φ-subgaussian norm. By [3, Lemma 4.2, p. 65] the φ-subgaussian norm admits the representation
For the reflected Weibull random variables the above expectation can be calculated as
where denotes the moment generating function of the corresponding Weibull distribution and is given by
By using this representation one gets
Thus, for sufficiently large the φ-subgaussian norm of the reflected Weibull random variables can be approximated by
As the terms increase very quickly, even small truncation levels provide a very accurate approximation of the series and the norm.
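The norm computation above can be checked numerically in the classical subgaussian case φ(x) = x²/2, where the representation from [3, Lemma 4.2] specialises (this specialisation is an assumption here) to τ(X) = sup_{t≠0} √(2 ln E e^{tX})/|t|. For a centered Gaussian variable this supremum equals the standard deviation exactly, which gives a convenient sanity check:

```python
import math

def tau_subgaussian(mgf, t_grid):
    """Approximate tau(X) = sup_{t != 0} sqrt(2 ln E e^{tX}) / |t|
    over a finite grid of t values (mgf is the moment generating function)."""
    best = 0.0
    for t in t_grid:
        m = mgf(t)
        if m > 1.0:  # ln(m) <= 0 contributes nothing to the supremum
            best = max(best, math.sqrt(2.0 * math.log(m)) / abs(t))
    return best

# Sanity check: for X ~ N(0, sigma^2), E e^{tX} = exp(sigma^2 t^2 / 2),
# so the ratio equals sigma for every t, and the supremum is sigma.
sigma = 1.5
gaussian_mgf = lambda t: math.exp(sigma ** 2 * t ** 2 / 2.0)
t_grid = [0.1 * k for k in range(1, 50)]
assert abs(tau_subgaussian(gaussian_mgf, t_grid) - sigma) < 1e-9
```

For the reflected Weibull case the MGF has no simple closed form, so in practice it would be evaluated by numerical integration of e^{tx} against the density before taking the supremum.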
Figure 2 shows the graph of the function under the supremum and the supremum value. As the function is symmetric, only the positive range is plotted. For the chosen parameters the supremum is attained in the interior of the range and the estimated value of the norm is 0.997. Thus, the double array satisfies the conditions of Theorem 3, see Example 4.



Then, 1000 realizations of the double array in the square region were generated. Using the obtained realizations of the double array, values of the running maxima functionals were computed for two sets of observation windows. The windows are shown in Figure 3 using logarithmic scales for both coordinates. For the set of observation windows in Fig. 3(a) and a realization of the reflected Weibull random array, the corresponding running maxima are shown in Fig. 4(a). For all rectangular observation windows, locations of the maxima are shown in Fig. 4(b). The locations are very sparse and most of them are concentrated close to the left and bottom borders of the region.
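The running maxima over all rectangular observation windows [1, m] × [1, n] can be computed for an entire realization at once with two cumulative-maximum passes; a sketch (function name is ours):

```python
import numpy as np

def running_maxima_2d(x):
    """Y[m-1, n-1] = max of x over the rectangular window [1, m] x [1, n].
    Cumulative maxima along rows, then along columns, cover all windows."""
    return np.maximum.accumulate(np.maximum.accumulate(x, axis=0), axis=1)

rng = np.random.default_rng(1)
x = rng.standard_normal((50, 60))
y = running_maxima_2d(x)

# Brute-force check on a few windows.
for m, n in [(1, 1), (10, 7), (50, 60)]:
    assert y[m - 1, n - 1] == x[:m, :n].max()
```

This costs O(mn) in total, versus O(m²n²) for recomputing each window's maximum from scratch.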


For the 1000 simulated realizations and the corresponding sets of observation windows from Figure 3, the box plots of the running maxima functionals are shown in Figure 5. It is clear that the distribution of the running maxima concentrates around zero when the size of the observation window increases, but the rate of convergence seems to be rather slow.


Table 1 shows the corresponding root mean square error (RMSE) of the running maxima functionals from Figure 5. The table confirms the convergence to zero when the observation window increases.
Window | 1 | 2 | 3 | 4 | 5 | 6
---|---|---|---|---|---|---
The first case | 0.026 | 0.022 | 0.021 | 0.019 | 0.017 | 0.016
The second case | 0.038 | 0.032 | 0.024 | 0.024 | 0.018 | 0.017

Finally, to demonstrate the convergence, 1000 simulated realizations of the reflected Weibull double array were used. The running maxima functionals were calculated for all possible pairs of window sizes. In Figure 6 the boxplots of the obtained values were computed for 6 groups depending on the values of the window-size parameter in the corresponding observation subwindows. The lower bound for the parameter increases with the group number. The obtained boxplots in Figure 6 confirm the convergence.
7 Conclusions and the future studies
The asymptotic behaviour of running maxima of random double arrays was investigated. The conditions of the obtained results allow one to consider a wide class of φ-subgaussian random fields and are weaker than even in the known results for the one-dimensional case. The rate of convergence was also studied. The results were derived for a general class of rectangular observation windows and the corresponding mode of convergence.
In the future studies, it would be interesting to extend the obtained results to:
- the case of higher-dimensional arrays,
- other types of observation windows,
- continuous φ-subgaussian random fields,
- different types of dependencies.
Acknowledgements
This research was supported by La Trobe University SEMS CaRE Grant “Asymptotic analysis for point and interval estimation in some statistical models”.
This research includes computations using the Linux computational cluster Gadi of the National Computational Infrastructure (NCI), which is supported by the Australian Government and La Trobe University.
References
- Birnbaum, [1942] Birnbaum, Z. (1942). An inequality for Mill’s ratio. Ann. Math. Statist. 13(2): 245-246. DOI: 10.1214/aoms/1177731611.
- Borovkov et al., [2017] Borovkov, K., Mishura, Yu., Novikov, A., and Zhitlukhin, M. (2017) Bounds for expected maxima of Gaussian processes and their discrete approximations. Stochastics, 89(1): 21–37. DOI: 10.1080/17442508.2015.1126282.
- Buldygin and Kozachenko, [2000] Buldygin, V. and Kozachenko, Y. (2000). Metric Characterization of Random Variables and Random Processes. Providence, R.I.: American Mathematical Society. DOI: 10.1090/mmono/188.
- Csáki and Gonchigdanzan, [2002] Csáki, E., and Gonchigdanzan, K. (2002) Almost sure limit theorems for the maximum of stationary Gaussian sequences. Statist. Probab. Lett. 58(2): 195–203. DOI: 10.1016/s0167-7152(02)00128-1.
- [5] Donhauzer, I., Olenko, A., and Volodin, A. (2020). Strong law of large numbers for functionals of random fields with unboundedly increasing covariances. To appear in Commun. Stat. Theory Methods. DOI: 10.1080/03610926.2020.1868515.
- Dudley, [1967] Dudley, R. (1967). The sizes of compact subsets of Hilbert space and continuity of Gaussian processes. J. Funct. Anal. 1(3):290–330. DOI: 10.1016/0022-1236(67)90017-1.
- Embrechts et al., [1997] Embrechts, P., Klüppelberg, C., and Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance. Berlin: Springer-Verlag. DOI: 10.1007/978-3-642-33483-2-7.
- Fernique, [1975] Fernique, X. (1975). Regularité des trajectoires des fonctions aléatoires gaussiennes. In: Ecole d’Eté de Probabilités de Saint-Flour IV—1974 (pp. 1-94). Berlin: Springer. DOI: 10.1007/bfb0080190.
- Giuliano, [1995] Giuliano, R. (1995). Remarks on maxima of real random sequences. Note Mat. 15(2):143–145. DOI: 10.1285/i15900932v15n2p143.
- Giuliano et al., [2013] Giuliano, R., Ngamkham, T., and Volodin, A. (2013). On the asymptotic behavior of the sequence and series of running maxima from a real random sequence. Stat. Probabil. Lett. 83(2):534–542. DOI: 10.1016/j.spl.2012.10.010.
- Giuliano and Macci, [2014] Giuliano, R. and Macci, C. (2014) Large deviation principles for sequences of maxima and minima. Comm. Statist. Theory Methods. 43(6): 1077–1098. DOI: 10.1080/03610926.2012.668606.
- Hoffmann-Jørgensen, [1994] Hoffmann-Jørgensen, J. (1994). Probability With a View Toward Statistics. New York: Chapman & Hall. DOI: 10.1007/978-1-4899-3019-4.
- Hu et al., [2020] Hu, T., Rosalsky, A., Volodin, A., and Zhang, S. (2020). A complete convergence theorem for row sums from arrays of rowwise independent random elements in Rademacher type Banach spaces. To appear in Stoch. Anal. Appl. DOI: 10.1080/07362994.2020.1791721.
- Hu et al., [2019] Hu, T., Rosalsky, A., and Volodin, A. (2019). Complete convergence theorems for weighted row sums from arrays of random elements in Rademacher type and martingale type Banach spaces. Stoch. Anal. Appl. 37(6):1092–1106. DOI: 10.1080/07362994.2019.1641414.
- Kahane, [1960] Kahane, J. (1960). Propriétés locales des fonctions à séries de Fourier aléatoires. Studia Math. 19(1):1–25. DOI: 10.4064/sm-19-1-1-25.
- Klesov, [2014] Klesov, O. (2014). Limit Theorems for Multi-Indexed Sums of Random Variables. Heidelberg: Springer. DOI: 10.1007/978-3-662-44388-0.
- [17] Kozachenko, Y. and Olenko, A. (2016). Aliasing-truncation errors in sampling approximations of sub-gaussian signals. IEEE Trans. Inf. Theory. 62(10):5831–5838. DOI: 10.1109/tit.2016.2597146.
- Kozachenko and Olenko, [2016] Kozachenko, Y. and Olenko, A. (2016). Whittaker–Kotel’nikov–Shannon approximation of φ-sub-Gaussian random processes. J. Math. Anal. Appl. 443(2):926–946.
- Kozachenko et al., [2015] Kozachenko, Y., Olenko, A., and Polosmak, O. (2015). Convergence in Lp([0, T]) of wavelet expansions of φ-sub-Gaussian random processes. Methodol. Comput. Appl. Probab. 17(1):139–153. DOI: 10.1007/s11009-013-9346-7.
- Kratz, [2006] Kratz, M. (2006). Level crossings and other level functionals of stationary Gaussian processes. Probab. Surveys. 3:230–288. DOI: 10.1214/154957806000000087.
- Leadbetter et al., [1983] Leadbetter, M., Lindgren, G. and Rootzén, H. (1983). Extremes and Related Properties of Random Sequences and Processes. New York: Springer. DOI: 10.1007/978-1-4612-5449-2.
- Ledoux and Talagrand, [2013] Ledoux, M. and Talagrand, M. (2013). Probability in Banach Spaces: Isoperimetry and Processes. Berlin: Springer. DOI: 10.1007/978-3-642-20212-4.
- Pickands, [1967] Pickands, J. (1967). Maxima of stationary Gaussian processes. Z. Wahrscheinlichkeitstheor. verw. Geb. 7(3):190–223. DOI: 10.1007/bf00532637.
- Piterbarg, [1996] Piterbarg, V. (1996). Asymptotic Methods in the Theory of Gaussian Processes and Fields. Providence, RI: American Mathematical Society. DOI: 10.1090/mmono/148.
- Talagrand, [2014] Talagrand, M. (2014). Upper and Lower Bounds for Stochastic Processes. Modern Methods and Classical Problems. Heidelberg: Springer. DOI: 10.1007/978-3-642-54075-2.