
Asymptotics of running maxima for $\varphi$-subgaussian random double arrays

Nour Al Hayek 1, Illia Donhauzer 2, Rita Giuliano 3, Andriy Olenko 2, Andrei Volodin 1

1 University of Regina, Regina, Canada
2 La Trobe University, Melbourne, Australia
3 Università di Pisa, Pisa, Italy
Abstract

The article studies the running maxima $Y_{m,j}=\max_{1\leq k\leq m,1\leq n\leq j}X_{k,n}-a_{m,j}$, where $\{X_{k,n},k\geq 1,n\geq 1\}$ is a double array of $\varphi$-subgaussian random variables and $\{a_{m,j},m\geq 1,j\geq 1\}$ is a double array of constants. Asymptotics of the maxima of the double arrays of positive and negative parts of $\{Y_{m,j},m\geq 1,j\geq 1\}$ are studied, when $\{X_{k,n},k\geq 1,n\geq 1\}$ have suitable "exponential-type" tail distributions. The main results are specified for various important particular scenarios and classes of $\varphi$-subgaussian random variables.

Dedicated to the memory of Professor Yuri Kozachenko (1940-2020)

Keywords: Random double array, running maxima, $\varphi$-subgaussian random variables, almost sure convergence.

2000 Mathematics Subject Classification (AMS): 60F15, 60G70, 60G60.

1 Introduction

The main focus of this investigation is to obtain convergence theorems for the running maxima of $\varphi$-subgaussian random variables. The roots of the subject are in classical probability theory and can be traced back to Gnedenko's theory of the limiting behaviour of maxima of random variables. We refer to the excellent book by Embrechts, Klüppelberg, and Mikosch [7] that contains classical and more recent results on limit theorems for maxima of random variables, with numerous examples of important practical applications in finance, economics, insurance and other fields.

Let $\{X_{k,n},k\geq 1,n\geq 1\}$ be a double array of centered random variables (a 2D random field defined on the grid $\mathbb{N}\times\mathbb{N}$) that are not necessarily independent or identically distributed. We assume that these variables are defined on the same probability space $\{\Omega,\mathcal{F},P\}$.

Studying properties of normalised maxima of random sequences and processes is one of the classical problems in probability theory that attracted considerable interest in the literature, see, for example, [20, 21, 24, 25] and the references therein. The known asymptotic results broadly belong to three classes that use different probabilistic tools to study properties of

(1) expected maxima (see, for example, [2] and [25]),

(2) almost sure convergence (see [10] and [23]),

(3) asymptotic distributions of normalised maxima (see, for example, a comprehensive collection of results in [24]).

The case of Gaussian random variables has been extensively investigated for each of these classes. However, there are still numerous open problems, in particular about an extension of the known results to non-Gaussian scenarios and multidimensional arrays.

This article studies sufficient conditions on the tail distributions of $X_{k,n}$ that guarantee the existence of a double array $\{a_{m,j},m\geq 1,j\geq 1\}$ such that the random variables

Y_{m,j}=\max_{1\leq k\leq m,1\leq n\leq j}X_{k,n}-a_{m,j}

converge to 0 almost surely as the number of random variables $X_{k,n}$ in the above maximum tends to infinity. This type of convergence is called the convergence of running maxima.

Contrary to the majority of classical results on the limiting behaviour of the maxima of random variables, where convergence in distribution was considered (the third item above), we are interested in almost sure convergence to zero. The first results of this type were obtained by Pickands [23], where the classical case of Gaussian random variables was considered. Later this result was generalized to wider classes of distributions. In [9] running maxima of one-dimensional random sequences were considered and the generalization to the subgaussian case was studied. In [10], the results of [9] were generalized to the case of $\varphi$-subgaussian random variables. For recent publications on this subject we refer to Giuliano and Macci [11], Csáki and Gonchigdanzan [4] and references therein.

The class of subgaussian and $\varphi$-subgaussian random variables is a natural extension of the Gaussian class. The popularity of the Gaussian distribution is justified by the central limit theorem for sums of random variables with small variances. However, the asymptotics can be non-Gaussian if the summands have large variances. Nevertheless, $\varphi$-subgaussianity can still be an appropriate assumption. Numerous probability distributions belong to the $\varphi$-subgaussian class. For example, reflected Weibull distributions, centered distributions with bounded support, and sums of independent Gaussian and centered bounded random variables are in this class. $\varphi$-subgaussian random variables were introduced to generalize various properties of the subgaussian class considered by Dudley [6], Fernique [8], Kahane [15], Ledoux and Talagrand [22]. Later, several publications used this class of random variables to construct stochastic processes and fields, see [17, 18, 19]. The monograph [3] discusses subgaussianity and $\varphi$-subgaussianity in detail and provides numerous important examples and properties.

The main aim of this paper is to investigate the convergence of the running maxima of centered double arrays with more general exponential types of the tail distributions of $X_{k,n}$ than in [10]. Naturally, the integrability conditions on the function $\varphi$ change accordingly.

The main results of the paper are Theorems 1-5. In these results the array $\{Y_{m,j},m\geq 1,j\geq 1\}$ is split into two parts:

Y_{m,j}^{+}=\max(Y_{m,j},0),\quad Y_{m,j}^{-}=\max(-Y_{m,j},0),\quad m\geq 1,\ j\geq 1,

and the convergence of the arrays $\{Y_{m,j}^{+},m\geq 1,j\geq 1\}$ and $\{Y_{m,j}^{-},m\geq 1,j\geq 1\}$ is investigated. The obtained results clearly show how the running maxima behave depending on the right and left tail distributions of $X_{k,n}$. The dependence of the array $\{a_{m,j},m\geq 1,j\geq 1\}$ on the function $\psi(\cdot)$, the Young-Fenchel transform of $\varphi$, and on the $\varphi$-subgaussian norm of $X_{k,n}$ is demonstrated. The paper also examines the rate of convergence of the positive parts array $\{Y_{m,j}^{+},m\geq 1,j\geq 1\}$.

This paper investigates almost sure and $\lim(\max)$ convergence of random functionals of double arrays. More details about these and other types of convergence and their applications can be found in the publications [5, 13, 14, 16] and the references therein.

The novelty of this paper compared to results known in the literature for one-dimensional sequences is:

- the case of random double arrays is studied,

- $\lim(\max)$ convergence is used,

- $\varphi$-subgaussian norms of random variables in the arrays can increase unboundedly,

- conditions on exponential-type bounded tails are weaker than in the literature,

- several assumptions are less restrictive than even in the known results for the one-dimensional case,

- specifications for various important cases and particular scenarios are provided.

This paper is organized as follows. Section 2 provides the required definitions and notations. The main results of this article are proved in Sections 3 and 4. Conditions for the convergence of running maxima are presented in Section 3. Estimates of the rate of convergence are given in Section 4. Specifications of the main results and important particular cases are considered in Section 5. Section 6 presents some simulation studies. Finally, conclusions and some problems for future investigations are given in the last section.

Throughout the paper, $u\vee v$ denotes $\max(u,v)$, $\mathbb{R}^{+}$ stands for the set of positive real numbers, and $C$ represents a generic finite positive constant, which is not necessarily the same in each appearance.

All computations, plotting, and simulations in this article were performed using the software R version 4.0.3. A reproducible version of the code in this paper is available in the folder “Research materials” from the website https://protect-au.mimecast.com/s/w-hDCk8vzZULyROOuVK_Qw?domain=sites.google.com.

2 Definitions and auxiliary results

This section presents definitions, notations, and technical results that will be used in the proofs of the main results later.

For double arrays of random variables, due to the lack of a linear ordering of $\mathbb{N}\times\mathbb{N}$, there are multiple ways to define different modes of convergence. See the monograph [16] for a comprehensive discussion.

This paper considers $\lim(\max)$ convergence. Let $\{a_{m,j},m\geq 1,j\geq 1\}$ be a double array of real numbers.

Definition 1.

The array $\{a_{m,j},m\geq 1,j\geq 1\}$ converges to $a\in\mathbb{R}$ as $m\vee j\to\infty$ if for every $\varepsilon>0$ there exists an integer $N$ such that if $m\vee j\geq N$ then

|a_{m,j}-a|<\varepsilon.

In the following this convergence will be denoted by $\lim_{m\vee j\to\infty}a_{m,j}=a$ or by $a_{m,j}\to a$ as $m\vee j\to+\infty$.
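A simple illustration (ours, not from the original text): the array $a_{m,j}=\frac{1}{m\vee j}$ converges to $0$ in this sense, while

a_{m,j}=\frac{2mj}{m^{2}+j^{2}}

has no $\lim(\max)$ limit, since it equals $1$ along the diagonal $m=j$ but tends to $0$ along each row and column.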

This paper uses the following notions related to $\varphi$-subgaussianity.

Definition 2.

A continuous function $\varphi(x)$, $x\in\mathbb{R}$, is called an Orlicz $N$-function if

a) it is even and convex,

b) $\varphi(0)=0$,

c) $\varphi(x)$ is a monotone increasing function for $x>0$,

d) $\lim_{x\to 0}\frac{\varphi(x)}{x}=0$ and $\lim_{x\to+\infty}\frac{\varphi(x)}{x}=+\infty$.

In the following the notation $\varphi(x)$ is used for an Orlicz $N$-function.

Example 1.

The function $\varphi(x)=\frac{|x|^{r}}{r},\ r>1,$ is an Orlicz $N$-function.

Definition 3.

A function $\psi(x)$, $x\in\mathbb{R}$, given by $\psi(x):=\sup_{y\in\mathbb{R}}\left(xy-\varphi(y)\right)$ is called the Young-Fenchel transform of $\varphi(x)$.

It is well-known that $\psi(\cdot)$ is an Orlicz $N$-function.

Example 2.

If $\varphi(x)=\frac{|x|^{r}}{r},\ r>1$, then $\psi(x)=\frac{|x|^{q}}{q}$, where $\frac{1}{r}+\frac{1}{q}=1$.
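As a quick check of this pair, for $x\geq 0$ the supremum in Definition 3 is attained at $y^{*}=x^{1/(r-1)}$, where the derivative of $xy-\frac{y^{r}}{r}$ in $y$ vanishes, so that

\psi(x)=xy^{*}-\frac{(y^{*})^{r}}{r}=x^{r/(r-1)}\left(1-\frac{1}{r}\right)=\frac{x^{q}}{q},

and the case $x<0$ follows by symmetry.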

Any Orlicz $N$-function $\varphi(x)$ can be represented in the integral form

\varphi(x)=\int_{0}^{|x|}p_{\varphi}(t)\,dt,

where $p_{\varphi}(t)$, $t\geq 0$, is its density. The density $p_{\varphi}(\cdot)$ is non-decreasing and there exists a generalized inverse $q_{\varphi}(\cdot)$ defined by

q_{\varphi}(t):=\sup\{u\geq 0:p_{\varphi}(u)\leq t\}.

Then,

\psi(x)=\int_{0}^{|x|}q_{\varphi}(t)\,dt.

As a consequence, the function $\psi(\cdot)$ is increasing, differentiable, and $\psi^{\prime}(\cdot)=q_{\varphi}(\cdot)$.

Definition 4.

A random variable $X$ is $\varphi$-subgaussian if $E(X)=0$ and there exists a finite constant $a>0$ such that $E\exp(tX)\leq\exp(\varphi(at))$ for all $t\in\mathbb{R}$. The $\varphi$-subgaussian norm $\tau_{\varphi}(X)$ is defined as

\tau_{\varphi}(X):=\inf\{a>0:E\exp(tX)\leq\exp(\varphi(at)),\ t\in\mathbb{R}\}.

The definition of a $\varphi$-subgaussian random variable is given in terms of expectations, but it is essentially a condition on the tail of the distribution. Namely, the following result holds, see [3, Lemma 4.3, p. 66].

Lemma 1.

If $\varphi(\cdot)$ is an Orlicz $N$-function and a random variable $X$ is $\varphi$-subgaussian, then for all $x>0$ the following inequality holds

P\big(X\geq x\big)\leq\exp\left(-\psi\left(\frac{x}{\tau_{\varphi}(X)}\right)\right).
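For instance, in the classical case $\varphi(x)=\frac{x^{2}}{2}$ one has $\psi(x)=\frac{x^{2}}{2}$, and Lemma 1 reduces to the familiar Gaussian-type tail bound

P\big(X\geq x\big)\leq\exp\left(-\frac{x^{2}}{2\tau_{\varphi}^{2}(X)}\right),\quad x>0.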
Remark 1.

We refer to the monograph [3] where the notion of $\varphi$-subgaussianity was introduced and discussed in detail. Various examples were also provided in [3]. In the case $\varphi(x)=\frac{x^{2}}{2}$ the notion of $\varphi$-subgaussianity reduces to the classical subgaussianity (see, for example, [12, Section 4.29]).

For readers’ convenience, we present a brief discussion of recent relevant results in the literature on one-dimensional sequences of φ\varphi-subgaussian random variables.

Consider a zero-mean sequence $\{X_{k},k\geq 1\}$ of random variables, and set

Y_{n}=\max_{1\leq k\leq n}X_{k}-\sqrt{2\ln n}.

If the $X_{k}$ are independent standard Gaussian random variables, then $\lim_{n\to\infty}Y_{n}=0$ a.s., see, for instance, [23].
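This classical effect is easy to observe numerically. The following minimal R sketch (ours, for illustration only; a single simulated trajectory) evaluates $Y_{n}$ along a grid of $n$ values, and the printed values drift towards $0$:

# Empirical check that Y_n = max_{1<=k<=n} X_k - sqrt(2*log(n)) drifts
# towards 0 for i.i.d. standard Gaussian X_k.
set.seed(1)
n.max <- 10^6
x <- rnorm(n.max)          # X_1, ..., X_{n.max}
running.max <- cummax(x)   # max_{1<=k<=n} X_k for every n at once
n <- 10^(2:6)
print(round(running.max[n] - sqrt(2 * log(n)), 3))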

In [10] the following proposition was proved for $Y_{n}^{+}=\max(Y_{n},0)$.

Proposition 1.

Suppose that there exists $\varepsilon_{0}>0$ such that for every $\varepsilon\leq\varepsilon_{0}$ the generalized inverse $q_{\varphi}(\cdot)$ of the density $p_{\varphi}(\cdot)$ of the Orlicz $N$-function $\varphi(\cdot)$ satisfies the condition

\int_{0}^{\infty}q_{\varphi}(x)\exp\big(-\varepsilon q_{\varphi}(x)\big)\,dx<+\infty

and

\sup_{k\geq 1}\tau_{\varphi}(X_{k})=C\leq 1.

Then $\lim_{n\to\infty}Y_{n}^{+}=0$ a.s.

It is natural to try to extend Proposition 1 to multidimensional arrays. This is done in the next section.

The behaviour of $Y_{n}^{-}=\max(-Y_{n},0),\ n\geq 1,$ was also studied in [10], but some additional assumptions on the left tail distribution of $X_{n}$ were required. Unfortunately, these assumptions cannot be derived from the $\varphi$-subgaussianity assumption (see Remark 2 in [10]). In contrast to Proposition 1, the independence assumption is also required.

Proposition 2.

Assume that $\{X_{k},k\geq 1\}$ is a sequence of zero-mean independent random variables and there exists a number $C>0$ such that, for every $k\geq 1$ and all $x>0$, we have

P\big(X_{k}<x\big)\leq\exp\big(-Ce^{-\psi(x)}\big),

where $\psi(\cdot)$ is a positive differentiable function with $q(x)=\psi^{\prime}(x)$ non-decreasing for $x>0$. Suppose that there exists an $\varepsilon_{0}>0$ such that for every $\varepsilon\leq\varepsilon_{0}$ it holds

\int_{0}^{+\infty}\exp\big(\psi(x)-Ce^{\varepsilon q(x-\varepsilon)}\big)q(x)\,dx<+\infty.

Then $\lim_{n\to\infty}Y_{n}^{-}=0$ a.s.

In the exponential-type tail condition, Proposition 2 uses the same function $\psi(\cdot)$ as in the definition of the $\varphi$-subgaussianity of $X_{k}$. The next section will extend it to the case of arbitrary functions.

Proposition 3.

Let $\{X_{k},k\geq 1\}$ be a sequence of $\varphi$-subgaussian random variables such that $\sup_{k\geq 1}\tau_{\varphi}(X_{k})=c$ and let $\alpha>2-\frac{1}{c}$. Then

\sum_{k=1}^{+\infty}k^{-\alpha}P\big(Y_{k}^{+}>0\big)<+\infty.
Remark 2.

The statement of Proposition 3 is obvious for $\alpha>1$, since $P\big(Y_{k}^{+}>0\big)\leq 1$ and $\sum_{k=1}^{+\infty}k^{-\alpha}<+\infty$ in this case. Hence, only the case $\alpha<1$, i.e. $c<1$, is interesting.

Note that [10] also examined the rate of convergence of the sequence $\{Y_{n}^{+},n\geq 1\}$. In Proposition 3, the rate of convergence for $P(Y_{n}^{+}>0)$ is given, while usually only results for $P\big(Y_{n}^{+}>\varepsilon\big)$, $\varepsilon>0$, were obtained in the existing literature. As $P\big(Y_{n}^{+}>\varepsilon\big)\leq P\big(Y_{n}^{+}>0\big)$ for any $\varepsilon>0$, it also follows from the assumptions of the proposition that $\sum_{k=1}^{+\infty}k^{-\alpha}P\big(Y_{k}^{+}>\varepsilon\big)<+\infty$.

It was also shown in [10] that Proposition 3 is sharp in some sense. Namely, the following result is true.

Proposition 4.

Let $\{X_{k},k\geq 1\}$ be a zero-mean sequence of independent random variables such that there exist a strictly increasing differentiable function $\psi:\mathbb{R}^{+}\to\mathbb{R}^{+}$ and $t_{0}>0$ such that, for every $k\geq 1$ and $t>t_{0}$, it holds

P\big(X_{k}>t\big)\geq\exp\big(-\psi(t)\big).

Assume that there exists $\varepsilon_{0}>0$ such that

\limsup_{x\to+\infty}\frac{\max_{x\leq\xi\leq x+\varepsilon_{0}}\psi^{\prime}(\xi)}{\psi(x)}=l<\infty.

Then, for every real number $\alpha<1$ and for every $0<\varepsilon<(1-\alpha)/l$ it holds

\sum_{k=1}^{\infty}k^{-\alpha}P\big(Y_{k}^{+}>\varepsilon\big)=+\infty.

3 On asymptotic behaviour of running maxima of $\varphi$-subgaussian double arrays

In this section, we establish sufficient conditions on the tail distributions of $X_{k,n}$ that guarantee that the positive and negative parts of the random variables $Y_{m,j}$ converge to 0 almost surely as $m\vee j\to\infty$.

Let $\varphi(\cdot)$ be an Orlicz $N$-function, let $p_{\varphi}(\cdot)$ be its density, let $\psi(\cdot)$ be the Young-Fenchel transform of $\varphi(\cdot)$, and let $q_{\varphi}(\cdot)$ be the generalized inverse of the density $p_{\varphi}(\cdot)$.

Let us consider a double array (a 2D random field defined on the integer grid $\mathbb{N}\times\mathbb{N}$) of zero-mean random variables $\{X_{k,n},k\geq 1,n\geq 1\}$. The following notations will be used to formulate the main results:

Y_{m,j}:=\max_{1\leq k\leq m,1\leq n\leq j}X_{k,n}-a_{m,j},
Z_{m,j}:=X_{m,j}-a_{m,j},

where $a_{m,j}$ is increasing in each of the variables $m$ and $j$, $m,j\geq 1$.

Let

Y^{+}_{m,j}:=\max(Y_{m,j},0)\quad\mbox{and}\quad Y^{-}_{m,j}:=\max(-Y_{m,j},0).

The indices $m$ and $j$ of the random variables $Y_{m,j}$ can be viewed as the parameters defining the rectangular observation window $\{(k,n):k\leq m,\ n\leq j,\ k,n\in\mathbb{N}\}$ of the random field $X_{k,n}$ on $\mathbb{N}\times\mathbb{N}$.

The following proofs will use the following extension of Lemma 2 from [10] to the case of double arrays.

Lemma 2.

For any $\varepsilon>0$

\{\omega\in\Omega:Y^{+}_{m,j}>\varepsilon\mbox{ i.o.}\}=\{\omega\in\Omega:Z_{m,j}^{+}>\varepsilon\mbox{ i.o.}\},

where i.o. stands for infinitely often.

Proof.

It is easy to see that

\{\omega\in\Omega:Z_{m,j}^{+}>\varepsilon\mbox{ i.o.}\}=\{\omega\in\Omega:X_{m,j}>\varepsilon+a_{m,j}\mbox{ i.o.}\}
\subset\{\omega\in\Omega:\max_{1\leq k\leq m,1\leq n\leq j}X_{k,n}>\varepsilon+a_{m,j}\mbox{ i.o.}\}=\{\omega\in\Omega:Y^{+}_{m,j}>\varepsilon\mbox{ i.o.}\}.

Also, as $a_{m,j}$ is an increasing function of $m$ and $j$, it holds

\{\omega\in\Omega:Y_{m,j}^{+}>\varepsilon\mbox{ i.o.}\}=\{\omega\in\Omega:X_{k,n}>\varepsilon+a_{m,j}\mbox{ for some }1\leq k\leq m,\ 1\leq n\leq j\mbox{ i.o.}\}
\subset\{\omega\in\Omega:X_{k,n}>\varepsilon+a_{k,n}\mbox{ i.o.}\}=\{\omega\in\Omega:Z_{m,j}^{+}>\varepsilon\mbox{ i.o.}\},

which completes the proof. ∎

Remark 3.

Let $\{A_{n}\}_{n=1}^{\infty}$ be an infinite sequence of events. By $\{A_{n}\mbox{ i.o.}\}$ we denote the event that infinitely many of the events $A_{n}$ occur. The importance of the notion i.o. can be explained by the following well-known statement, which is crucial for proving almost sure convergence: $X_{k,n}\to 0$ almost surely, as $k\vee n\to+\infty$, if and only if for all $\varepsilon>0$, $P(|X_{k,n}|\geq\varepsilon\mbox{ i.o.})=0$.

The following result extends Proposition 1 to the case of double arrays of random variables.

Theorem 1.

Let $\{X_{k,n},k\geq 1,n\geq 1\}$ be a double array of $\varphi$-subgaussian random variables and let $g(\cdot)$ be a non-decreasing function such that for all $k,n\geq 1$

\tau_{\varphi}(X_{k,n})\leq g(\ln(kn))   (1)

and

a_{m,j}=g(\ln(mj))\psi^{-1}(\ln(mj)).

Suppose that there exists an $\varepsilon_{0}>0$ such that for every $\varepsilon\in(0,\varepsilon_{0}]$

\int_{0}^{\infty}\psi(x)q_{\varphi}(x)\exp\left(-\frac{\varepsilon q_{\varphi}(x)}{g(\psi(x)+\ln(2))}\right)dx<+\infty.   (2)

Then $\lim_{m\vee j\to\infty}Y^{+}_{m,j}=0$ a.s.

Remark 4.

In the following, without loss of generality, we consider only non-degenerate random variables with non-zero $\varphi$-subgaussian norms. Therefore, $g(\cdot)>0$. For the case of identically distributed $X_{k,n}$ we assume that $g(x)\equiv\tau_{\varphi}(X_{k,n})\equiv C>0$.

Remark 5.

If the function $g(\cdot)$ is bounded from above by a constant $C$, then (1) is the same as the corresponding assumption in Propositions 1 and 3.

Proof.

It follows from Lemma 2, Remark 3 and the Borel-Cantelli lemma that it is enough to show that for any $\varepsilon>0$

\sum_{m=1}^{\infty}\sum_{j=1}^{\infty}P\big(Z^{+}_{m,j}\geq\varepsilon\big)<\infty,

because then $P\big(Z^{+}_{m,j}\geq\varepsilon\mbox{ i.o.}\big)=0$.

Note that by Lemma 1 and assumption (1), for all $m,j\in\mathbb{N}$ except a finite number it holds

P\big(Z^{+}_{m,j}\geq\varepsilon\big)=P\big(Z_{m,j}\geq\varepsilon\big)=P\big(X_{m,j}\geq g(\ln(mj))\psi^{-1}(\ln(mj))+\varepsilon\big)
=P\left(\frac{X_{m,j}}{g(\ln(mj))}\geq\psi^{-1}(\ln(mj))+\frac{\varepsilon}{g(\ln(mj))}\right)
\leq P\left(\frac{X_{m,j}}{\tau_{\varphi}(X_{m,j})}\geq\psi^{-1}(\ln(mj))+\frac{\varepsilon}{g(\ln(mj))}\right)
\leq\exp\left(-\psi\left(\psi^{-1}(\ln(mj))+\frac{\varepsilon}{g(\ln(mj))}\right)\right),

since $\psi(\cdot)$ is increasing.

Therefore, it is enough to prove that the double sum

S:=\sum_{m=1}^{\infty}\sum_{j=1}^{\infty}\exp\left(-\psi\left(\psi^{-1}(\ln(mj))+\frac{\varepsilon}{g(\ln(mj))}\right)\right)   (3)

converges.

Let us fix $m\geq 1$ and investigate the behaviour of the inner sum

\sum_{j=1}^{\infty}\exp\left(-\psi\left(\psi^{-1}(\ln(mj))+\frac{\varepsilon}{g(\ln(mj))}\right)\right).

Note that

\sum_{j=1}^{\infty}\exp\left(-\psi\left(\psi^{-1}(\ln(mj))+\frac{\varepsilon}{g(\ln(mj))}\right)\right)
=\exp\left(-\psi\left(\psi^{-1}(\ln(m))+\frac{\varepsilon}{g(\ln(m))}\right)\right)+\sum_{j=2}^{\infty}\exp\left(-\psi\left(\psi^{-1}(\ln(mj))+\frac{\varepsilon}{g(\ln(mj))}\right)\right).

Now, for $x\in[j-1,j]$:

\ln(mx)\leq\ln(mj)\quad\mbox{and}\quad\ln(m(x+1))\geq\ln(mj).

Because $\psi(\cdot)$, $\psi^{-1}(\cdot)$ and $g(\cdot)$ are increasing functions,

\psi^{-1}(\ln(mj))+\frac{\varepsilon}{g(\ln(mj))}\geq\psi^{-1}(\ln(mx))+\frac{\varepsilon}{g(\ln(m(x+1)))},
-\psi\left(\psi^{-1}(\ln(mj))+\frac{\varepsilon}{g(\ln(mj))}\right)\leq-\psi\left(\psi^{-1}(\ln(mx))+\frac{\varepsilon}{g(\ln(m(x+1)))}\right),

which results in

\sum_{j=2}^{\infty}\exp\left(-\psi\left(\psi^{-1}(\ln(mj))+\frac{\varepsilon}{g(\ln(mj))}\right)\right)
\leq\int_{1}^{\infty}\exp\left(-\psi\left(\psi^{-1}(\ln(mx))+\frac{\varepsilon}{g(\ln(m(x+1)))}\right)\right)dx.

Therefore, for fixed $m\geq 1$,

\sum_{j=1}^{\infty}\exp\left(-\psi\left(\psi^{-1}(\ln(mj))+\frac{\varepsilon}{g(\ln(mj))}\right)\right)
\leq\exp\left(-\psi\left(\psi^{-1}(\ln(m))+\frac{\varepsilon}{g(\ln(m))}\right)\right)
+\int_{1}^{\infty}\exp\left(-\psi\left(\psi^{-1}(\ln(mx))+\frac{\varepsilon}{g(\ln(m(x+1)))}\right)\right)dx
\leq\exp\left(-\psi\left(\psi^{-1}(\ln(m))+\frac{\varepsilon}{g(\ln(m))}\right)\right)
+\int_{1}^{\infty}\exp\left(-\psi\left(\psi^{-1}(\ln(mx))+\frac{\varepsilon}{g(\ln(mx)+\ln(2))}\right)\right)dx   (4)

as $\ln(m(x+1))\leq\ln(mx)+\ln(2)$ for $m\geq 1$ and $x\geq 1$.

To study the last integral in (4) we use the substitution $t=\psi^{-1}(\ln(mx))$. Then $x=\frac{\exp(\psi(t))}{m}$ and

\int_{1}^{\infty}\exp\left(-\psi\left(\psi^{-1}(\ln(mx))+\frac{\varepsilon}{g(\ln(mx)+\ln(2))}\right)\right)dx
\leq\frac{1}{m}\int_{\psi^{-1}(\ln m)}^{\infty}\psi^{\prime}(t)\exp\left(\psi(t)-\psi\left(t+\frac{\varepsilon}{g(\psi(t)+\ln(2))}\right)\right)dt.

By the mean value theorem and $\psi^{\prime}(\cdot)=q_{\varphi}(\cdot)$, there exists $\xi\in\left[t,t+\frac{\varepsilon}{g(\psi(t)+\ln(2))}\right]$ such that

\psi(t)-\psi\left(t+\frac{\varepsilon}{g(\psi(t)+\ln(2))}\right)=-\frac{\varepsilon}{g(\psi(t)+\ln(2))}\psi^{\prime}(\xi)
=-\frac{\varepsilon}{g(\psi(t)+\ln(2))}q_{\varphi}(\xi)\leq-\frac{\varepsilon q_{\varphi}(t)}{g(\psi(t)+\ln(2))},

as $q_{\varphi}(t)$ is a non-decreasing function.

Thus, we obtain the upper bound

\int_{1}^{\infty}\exp\left(-\psi\left(\psi^{-1}(\ln(mx))+\frac{\varepsilon}{g(\ln(mx)+\ln(2))}\right)\right)dx
\leq\frac{1}{m}\int_{\psi^{-1}(\ln m)}^{\infty}q_{\varphi}(t)\exp\left(-\frac{\varepsilon q_{\varphi}(t)}{g(\psi(t)+\ln(2))}\right)dt.   (5)

Therefore, by (4) and (5) the double sum in (3) can be estimated as

S\leq\sum_{m=1}^{\infty}\exp\left(-\psi\left(\psi^{-1}(\ln(m))+\frac{\varepsilon}{g(\ln(m))}\right)\right)
+\sum_{m=1}^{\infty}\frac{1}{m}\int_{\psi^{-1}(\ln m)}^{\infty}q_{\varphi}(t)\exp\left(-\frac{\varepsilon q_{\varphi}(t)}{g(\psi(t)+\ln(2))}\right)dt.   (6)

Similarly to the above computations, the first sum in (6) can be bounded as

\sum_{m=1}^{\infty}\exp\left(-\psi\left(\psi^{-1}(\ln(m))+\frac{\varepsilon}{g(\ln(m))}\right)\right)\leq\exp\left(-\psi\left(\psi^{-1}(0)+\frac{\varepsilon}{g(0)}\right)\right)
+\int_{1}^{\infty}\exp\left(-\psi\left(\psi^{-1}(\ln(x))+\frac{\varepsilon}{g(\ln(x+1))}\right)\right)dx
=\exp\left(-\psi\left(\frac{\varepsilon}{g(0)}\right)\right)+\int_{0}^{\infty}q_{\varphi}(t)\exp\left(-\frac{\varepsilon q_{\varphi}(t)}{g(\psi(t)+\ln(2))}\right)dt.   (7)

As $\psi^{-1}(\cdot)$ is an increasing function, for the second sum in (6) one gets

\sum_{m=1}^{\infty}\frac{1}{m}\int_{\psi^{-1}(\ln(m))}^{\infty}q_{\varphi}(t)\exp\left(-\frac{\varepsilon q_{\varphi}(t)}{g(\psi(t)+\ln(2))}\right)dt
\leq\int_{0}^{\infty}q_{\varphi}(t)\exp\left(-\frac{\varepsilon q_{\varphi}(t)}{g(\psi(t)+\ln(2))}\right)dt
+\int_{1}^{\infty}\frac{1}{u}\int_{\psi^{-1}(\ln(u))}^{\infty}q_{\varphi}(t)\exp\left(-\frac{\varepsilon q_{\varphi}(t)}{g(\psi(t)+\ln(2))}\right)dt\,du.   (8)

By the substitution $y=\ln(u)$ and changing the order of integration,

\int_{1}^{\infty}\frac{1}{u}\int_{\psi^{-1}(\ln(u))}^{\infty}q_{\varphi}(t)\exp\left(-\frac{\varepsilon q_{\varphi}(t)}{g(\psi(t)+\ln(2))}\right)dt\,du
=\int_{0}^{+\infty}\int_{\psi^{-1}(y)}^{+\infty}q_{\varphi}(t)\exp\left(-\frac{\varepsilon q_{\varphi}(t)}{g(\psi(t)+\ln(2))}\right)dt\,dy
=\int_{0}^{\infty}q_{\varphi}(t)\exp\left(-\frac{\varepsilon q_{\varphi}(t)}{g(\psi(t)+\ln(2))}\right)\int_{0}^{\psi(t)}dy\,dt
=\int_{0}^{\infty}\psi(t)q_{\varphi}(t)\exp\left(-\frac{\varepsilon q_{\varphi}(t)}{g(\psi(t)+\ln(2))}\right)dt<+\infty,   (9)

where the finiteness of the last integral follows from assumption (2).

Combining (3) with (6)-(9) we obtain the convergence of $S$, which completes the proof. ∎

For the case of a double array of random variables with bounded $\varphi$-subgaussian norms, the function $g(\cdot)$ can be chosen identically equal to a constant. Therefore, Theorem 1 can be specified as follows.

Corollary 1.

Let $\{X_{k,n},\ k\geq 1,\ n\geq 1\}$ be a double array of $\varphi$-subgaussian random variables with $\sup_{k,n\in\mathbb{N}}\tau_{\varphi}(X_{k,n})\leq 1$. Suppose that there exists $\varepsilon_{0}>0$ such that for every $\varepsilon\in(0,\varepsilon_{0}]$

\int_{0}^{+\infty}\psi(x)q_{\varphi}(x)\exp\left(-\varepsilon q_{\varphi}(x)\right)dx<+\infty.

Then,

\lim_{m\vee j\to+\infty}\left(\max_{1\leq k\leq m,1\leq n\leq j}X_{k,n}-\psi^{-1}(\ln(mj))\right)^{+}=0\quad a.s.
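For instance, for $\varphi(x)=x^{2}/2$ one has $\psi^{-1}(x)=\sqrt{2x}$, and the normalising array becomes the two-dimensional analogue of the classical Gaussian normalisation:

a_{m,j}=\psi^{-1}(\ln(mj))=\sqrt{2\ln(mj)}.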

The asymptotic behaviour of the array $\{Y_{m,j}^{-},\ m,j\geq 1\}$ cannot be described in terms of subgaussianity only. Roughly speaking, an opposite type of inequality is required (see Remark 2 in [10]). Moreover, in addition to the conditions of Proposition 1, it is assumed that the random variables in the double array are independent. The following result is an extension of Proposition 2 to the case of double arrays.

Theorem 2.

Let $\{X_{k,n},k,n\geq 1\}$ be a double array of independent $\varphi$-subgaussian random variables, and let the array $\{a_{m,j},m,j\geq 1\}$ and the function $g(\cdot)$ be as defined in Theorem 1. Let $\kappa(x)$ be a positive increasing differentiable function whose derivative $r(x)=\kappa^{\prime}(x)$ is non-decreasing for $x>0$. Assume that there exists $C>0$ such that for every $k,n\geq 1$ and all $x>0$

P\left(\frac{X_{k,n}}{g(\ln(kn))}<x\right)\leq\exp\big(-Ce^{-\kappa(x)}\big)

and

\psi(x)-\kappa\left(\frac{xg(x)}{g(0)}\right)\geq C_{0}(x)   (10)

for some function $C_{0}(\cdot)$. Suppose that there exist $A,\varepsilon_{0}>0$ such that for every $\varepsilon\in(0,\varepsilon_{0}]$

\int_{A}^{+\infty}\exp\left(-\frac{Cy}{2}\exp\left(-\kappa\left(\frac{g(\ln(y))}{g(0)}\psi^{-1}(\ln(y))-\frac{\varepsilon}{g(\ln(y))}\right)\right)\right)dy<+\infty

and

\int_{A}^{+\infty}\psi(y)q_{\varphi}(y)\exp\left(\psi(y)-\frac{C}{2}\exp\left(C_{0}(y)+\frac{\varepsilon r\left(\frac{yg(\psi(y))}{g(0)}-\frac{\varepsilon}{g(\psi(y))}\right)}{g(\psi(y))}\right)\right)dy<+\infty.

Then $\lim_{m\vee j\rightarrow\infty}Y^{-}_{m,j}=0$ a.s.

Proof.

Using the Borel-Cantelli lemma, we will prove that for every $\varepsilon\in(0,\varepsilon_{0}]$ it holds $P\big(Y_{m,j}^{-}>\varepsilon\mbox{ i.o.}\big)=0$. By the independence of the $X_{k,n}$ one gets

P\big(Y^{-}_{m,j}>\varepsilon\big)=P\big(Y_{m,j}<-\varepsilon\big)=P\big(\max_{1\leq k\leq m,1\leq n\leq j}X_{k,n}<g(\ln(mj))\psi^{-1}(\ln(mj))-\varepsilon\big)
=\prod_{k=1}^{m}\prod_{n=1}^{j}P\big(X_{k,n}<g(\ln(mj))\psi^{-1}(\ln(mj))-\varepsilon\big)
=\prod_{k=1}^{m}\prod_{n=1}^{j}P\left(\frac{X_{k,n}}{g(\ln(kn))}<\frac{g(\ln(mj))}{g(\ln(kn))}\psi^{-1}(\ln(mj))-\frac{\varepsilon}{g(\ln(kn))}\right)
\leq\prod_{k=1}^{m}\prod_{n=1}^{j}P\left(\frac{X_{k,n}}{g(\ln(kn))}<\frac{g(\ln(mj))}{g(0)}\psi^{-1}(\ln(mj))-\frac{\varepsilon}{g(\ln(mj))}\right)
\leq\exp\left(-Cmj\exp\left(-\kappa\left(\frac{g(\ln(mj))}{g(0)}\psi^{-1}(\ln(mj))-\frac{\varepsilon}{g(\ln(mj))}\right)\right)\right),

where we used the monotonicity of the function $g(\cdot)$ and $g(0)\geq 1$. Therefore,

\sum_{m=1}^{\infty}\sum_{j=1}^{\infty}P\left(Y^{-}_{m,j}>\varepsilon\right)\leq\sum_{m=1}^{\infty}\sum_{j=1}^{\infty}\exp\left(-Cmj\exp\left(-\kappa\left(\frac{g(\ln(mj))}{g(0)}\psi^{-1}(\ln(mj))-\frac{\varepsilon}{g(\ln(mj))}\right)\right)\right).   (11)

As the functions $g(\cdot)$ and $\psi^{-1}(\cdot)$ are non-decreasing, for any fixed $m\geq 1$ we can majorize the inner sum as

\sum_{j=1}^{\infty}\exp\left(-Cmj\exp\left(-\kappa\left(\frac{g(\ln(mj))}{g(0)}\psi^{-1}(\ln(mj))-\frac{\varepsilon}{g(\ln(mj))}\right)\right)\right)
\leq\exp\left(-Cm\exp\left(-\kappa\left(\frac{g(\ln(m))}{g(0)}\psi^{-1}(\ln(m))-\frac{\varepsilon}{g(\ln(m))}\right)\right)\right)
+\int_{2}^{\infty}\exp\left(-Cm(x-1)\exp\left(-\kappa\left(\frac{g(\ln(mx))}{g(0)}\psi^{-1}(\ln(mx))-\frac{\varepsilon}{g(\ln(mx))}\right)\right)\right)dx.

As $x/2\leq x-1$ for $x\geq 2$, the above integral can be estimated by

\int_{2}^{\infty}\exp\left(-\frac{Cmx}{2}\exp\left(-\kappa\left(\frac{g(\ln(mx))}{g(0)}\psi^{-1}(\ln(mx))-\frac{\varepsilon}{g(\ln(mx))}\right)\right)\right)dx.

By the change of variables $y=\psi^{-1}(\ln(mx))$, $x=\frac{1}{m}\exp(\psi(y))$, this integral equals

\frac{1}{m}\int_{\psi^{-1}(\ln(2m))}^{\infty}\exp\left(\psi(y)-\frac{C}{2}\exp\left(\psi(y)-\kappa\left(\frac{yg(\psi(y))}{g(0)}-\frac{\varepsilon}{g(\psi(y))}\right)\right)\right)q_{\varphi}(y)dy
=\frac{1}{m}\int_{\psi^{-1}(\ln(2m))}^{\infty}\exp\left(\psi(y)-\frac{C}{2}\exp\left(\psi(y)-\kappa\left(\frac{yg(\psi(y))}{g(0)}\right)\right.\right.
\left.\left.+\kappa\left(\frac{yg(\psi(y))}{g(0)}\right)-\kappa\left(\frac{yg(\psi(y))}{g(0)}-\frac{\varepsilon}{g(\psi(y))}\right)\right)\right)q_{\varphi}(y)dy.   (12)

By the mean value theorem, as $r(\cdot)$ is non-decreasing, it holds

\kappa\left(\frac{yg(\psi(y))}{g(0)}\right)-\kappa\left(\frac{yg(\psi(y))}{g(0)}-\frac{\varepsilon}{g(\psi(y))}\right)\geq\frac{\varepsilon}{g(\psi(y))}r\left(\frac{yg(\psi(y))}{g(0)}-\frac{\varepsilon}{g(\psi(y))}\right).

Thus, applying the above inequality and assumption (10), one gets the following upper bound for the integral in (12):

\frac{1}{m}\int_{\psi^{-1}(\ln(2m))}^{+\infty}\exp\left(\psi(y)-\frac{C}{2}\exp\left(C_{0}(y)+\frac{\varepsilon r\left(\frac{yg(\psi(y))}{g(0)}-\frac{\varepsilon}{g(\psi(y))}\right)}{g(\psi(y))}\right)\right)q_{\varphi}(y)dy.

Hence, the right-hand side of (11) can be estimated by

\sum_{m=1}^{\infty}\exp\left(-Cm\exp\left(-\kappa\left(\frac{g(\ln(m))}{g(0)}\psi^{-1}(\ln(m))-\frac{\varepsilon}{g(\ln(m))}\right)\right)\right)
+\sum_{m=1}^{\infty}\frac{1}{m}\int_{\psi^{-1}(\ln(2m))}^{+\infty}\exp\left(\psi(y)-\frac{C}{2}\exp\left(C_{0}(y)+\frac{\varepsilon r\left(\frac{yg(\psi(y))}{g(0)}-\frac{\varepsilon}{g(\psi(y))}\right)}{g(\psi(y))}\right)\right)q_{\varphi}(y)dy
\leq\exp\left(-C\exp\left(-\kappa\left(\psi^{-1}(0)-\frac{\varepsilon}{g(0)}\right)\right)\right)
+\int_{2}^{\infty}\exp\left(-\frac{Cu}{2}\exp\left(-\kappa\left(\frac{g(\ln(u))}{g(0)}\psi^{-1}(\ln(u))-\frac{\varepsilon}{g(\ln(u))}\right)\right)\right)du
+\int_{\psi^{-1}(\ln(2))}^{+\infty}\exp\left(\psi(y)-\frac{C}{2}\exp\left(C_{0}(y)+\frac{\varepsilon r\left(\frac{yg(\psi(y))}{g(0)}-\frac{\varepsilon}{g(\psi(y))}\right)}{g(\psi(y))}\right)\right)q_{\varphi}(y)dy
+\int_{1}^{+\infty}\frac{1}{u}\int_{\psi^{-1}(\ln(2u))}^{+\infty}\exp\left(\psi(y)-\frac{C}{2}\exp\left(C_{0}(y)+\frac{\varepsilon r\left(\frac{yg(\psi(y))}{g(0)}-\frac{\varepsilon}{g(\psi(y))}\right)}{g(\psi(y))}\right)\right)q_{\varphi}(y)dy\,du.   (13)

By the change of variables $t=\ln(2u)$ and changing the order of integration, we obtain that the last integral equals

\int_{\psi^{-1}(\ln(2))}^{+\infty}(\psi(y)-\ln(2))q_{\varphi}(y)\exp\left(\psi(y)-\frac{C}{2}\exp\left(C_{0}(y)+\frac{\varepsilon r\left(\frac{yg(\psi(y))}{g(0)}-\frac{\varepsilon}{g(\psi(y))}\right)}{g(\psi(y))}\right)\right)dy.   (14)

Then, $\sum_{m=1}^{\infty}\sum_{j=1}^{\infty}P(Y_{m,j}^{-}>\varepsilon)<+\infty$ follows from (13), (14) and the assumptions of the theorem. ∎

For the case of double arrays of random variables with uniformly bounded $\varphi$-subgaussian norms the next specification holds true.

Corollary 2.

Let $\{X_{k,n},\ k\geq 1,\ n\geq 1\}$ be a double array of independent $\varphi$-subgaussian random variables with $\sup_{k,n\in\mathbb{N}}\tau_{\varphi}(X_{k,n})\leq 1$. Let $\kappa(x)$ be a positive increasing differentiable function whose derivative $r(x)=\kappa^{\prime}(x)$ is non-decreasing for $x>0$, and let $\psi(x)-\kappa(x)\geq C_{0}(x)$ for some function $C_{0}(\cdot)$. Assume that there exists $C>0$ such that for every $k,n\geq 1$ and $x>0$

P(X_{k,n}<x)\leq\exp\big(-C\exp(-\kappa(x))\big).

Suppose that there exist constants $A,\varepsilon_{0}>0$ such that for every $\varepsilon\in(0,\varepsilon_{0}]$

\int_{A}^{+\infty}\exp\left(-\frac{Cy}{2}\exp\big(-\kappa(\psi^{-1}(\ln(y))-\varepsilon)\big)\right)dy<+\infty   (15)

and

\int_{A}^{+\infty}\psi(y)q_{\varphi}(y)\exp\left(\psi(y)-\frac{C}{2}\exp\big(C_{0}(y)+\varepsilon r(y-\varepsilon)\big)\right)dy<+\infty.   (16)

Then,

\lim_{m\vee j\to+\infty}\left(\max_{1\leq k\leq m,1\leq n\leq j}X_{k,n}-\psi^{-1}(\ln(mj))\right)^{-}=0\quad a.s.
Remark 6.

Lemma 1 provides the upper bound on the tail probability of the $\varphi$-subgaussian random variable $X_{k,n}$:

P\big(X_{k,n}\geq x\big)\leq\exp\big(-\psi(Ax)\big),\ x>0,

where $A=1/\tau_{\varphi}(X_{k,n})$. The condition $P\big(X_{k,n}<x\big)\leq\exp\big(-C\exp(-\kappa(x))\big)$ is in some sense opposite. Namely, the lower bound on the tail probability

P\big(X_{k,n}\geq x\big)\geq C\exp\big(-\kappa(x)\big)

implies

P\big(X_{k,n}<x\big)\leq\exp\big(-P(X_{k,n}\geq x)\big)\leq\exp\big(-C\exp(-\kappa(x))\big),

as $t\leq\exp(-(1-t))$.

Theorem 3.

Assume that $\{X_{k,n},k\geq 1,n\geq 1\}$ is a double array of independent $\varphi$-subgaussian random variables. If the assumptions of Theorems 1 and 2 are satisfied, then $\lim_{m\vee j\to\infty}Y_{m,j}=0$ a.s.

The proof of Theorem 3 follows from the proofs of Theorems 1 and 2.

4 On convergence rate of running maxima of random double arrays

This section investigates the series

\sum_{m=1}^{+\infty}\sum_{j=1}^{+\infty}(mj)^{-\alpha}P(Y_{m,j}^{+}>\varepsilon),\quad\varepsilon>0.

It is proved that the series converges for suitable values of the constant $\alpha$.

The following theorem and corollary are generalizations of Proposition 3 to the case of $\varphi$-subgaussian arrays with not necessarily uniformly bounded $\varphi$-subgaussian norms.

Theorem 4.

Let $\{X_{k,n},k\geq 1,n\geq 1\}$ be a double array of $\varphi$-subgaussian random variables such that for all $m,j\geq 1$, $1\leq k\leq m$, $1\leq n\leq j$, and some positive-valued function $f(\cdot)$ it holds

\frac{g(\ln(mj))}{\tau_{\varphi}(X_{k,n})}\geq f\left(\frac{mj}{kn}\right)\geq 1

and

\sum_{m=1}^{\infty}\sum_{j=1}^{\infty}\sum_{k=1}^{m}\sum_{n=1}^{j}(mj)^{-\alpha-f(\frac{mj}{kn})}<+\infty.

Then

\sum_{m=1}^{\infty}\sum_{j=1}^{\infty}(mj)^{-\alpha}P(Y_{m,j}^{+}>0)<+\infty.   (17)
Proof.

By Lemma 1 it follows that

P(Y_{m,j}^{+}>0)=P\left(\max_{1\leq k\leq m,1\leq n\leq j}X_{k,n}>g(\ln(mj))\psi^{-1}(\ln(mj))\right)
\leq\sum_{k=1}^{m}\sum_{n=1}^{j}P\left(\frac{X_{k,n}}{\tau_{\varphi}(X_{k,n})}>\frac{g(\ln(mj))\psi^{-1}(\ln(mj))}{\tau_{\varphi}(X_{k,n})}\right)
\leq\sum_{k=1}^{m}\sum_{n=1}^{j}\exp\left(-\psi\left(f\left(\frac{mj}{kn}\right)\psi^{-1}(\ln(mj))\right)\right)
\leq\sum_{k=1}^{m}\sum_{n=1}^{j}\exp\left(-f\left(\frac{mj}{kn}\right)\ln(mj)\right).

The last inequality follows from $\psi(\theta x)\geq\theta\psi(x)$, $\theta\geq 1$, which is true for any Orlicz $N$-function (indeed, by convexity and $\psi(0)=0$, $\psi(x)=\psi\big(\frac{1}{\theta}\,\theta x\big)\leq\frac{1}{\theta}\psi(\theta x)$).

Hence,

\sum_{m=1}^{+\infty}\sum_{j=1}^{+\infty}(mj)^{-\alpha}P(Y_{m,j}^{+}>0)\leq\sum_{m=1}^{+\infty}\sum_{j=1}^{+\infty}\sum_{k=1}^{m}\sum_{n=1}^{j}(mj)^{-\alpha-f(\frac{mj}{kn})}<+\infty

by the assumption of the theorem. ∎

Corollary 3.

Let the conditions of Theorem 4 be satisfied and $f(x)\geq c_{0}>0$ for $x\geq 1$. Then (17) holds true for $\alpha>2-c_{0}$.

Proof.

It follows from the assumptions that

\sum_{k=1}^{m}\sum_{n=1}^{j}(mj)^{-\alpha-f(\frac{mj}{kn})}\leq(mj)^{1-\alpha-c_{0}}.

Hence,

\sum_{m=1}^{\infty}\sum_{j=1}^{\infty}(mj)^{-\alpha}P(Y_{m,j}^{+}>0)\leq\left(\sum_{m=1}^{\infty}m^{1-\alpha-c_{0}}\right)^{2}<+\infty,

as the right-hand side converges for $\alpha>2-c_{0}$, which completes the proof. ∎

Remark 7.

If the conditions of Theorem 4 or Corollary 3 are satisfied, then for every $\varepsilon>0$

\sum_{m=1}^{+\infty}\sum_{j=1}^{+\infty}(mj)^{-\alpha}P(Y_{m,j}^{+}>\varepsilon)<\infty,

as the inequality $P(Y_{m,j}^{+}>\varepsilon)\leq P(Y_{m,j}^{+}>0)$ holds true.

Now we proceed with extending Proposition 4, showing that the rate of convergence is sharp.

Theorem 5.

Let $\{X_{k,n},k,n\geq 1\}$ be a double array of independent $\varphi$-subgaussian random variables with $\tau_{\varphi}(X_{k,n})\equiv 1$ satisfying the following assumptions:

(i) there exists a strictly increasing function $\kappa:\mathbb{R}^{+}\to\mathbb{R}^{+}$ such that for every $k,n\geq 1$ and some positive constant $C$ it holds

P(X_{k,n}>x)\geq C\exp(-\kappa(x)),\ x>0;

(ii) there exists $x_{0}>0$ such that

\exp(-\kappa(x))\geq C_{1}\exp(-B\psi(x))

for all $x\geq x_{0}$, where $B,C_{1}>0$;

(iii) for some $\varepsilon>0$

\sup_{x>x_{0}}\frac{q_{\varphi}(x+\varepsilon)}{\psi(x)}\leq C_{2}<+\infty.

Then, for any $\alpha<2-B(1+C_{2}\varepsilon)$ it holds

\sum_{m=1}^{\infty}\sum_{j=1}^{\infty}(mj)^{-\alpha}P(Y_{m,j}^{+}>\varepsilon)=+\infty.
Proof.

By the theorem's assumptions one can take $g(\cdot)\equiv 1$ and obtain

P(Y_{m,j}^{+}>\varepsilon)=P\left(\max_{1\leq k\leq m,1\leq n\leq j}X_{k,n}>\psi^{-1}(\ln(mj))+\varepsilon\right)
=1-\prod_{k=1}^{m}\prod_{n=1}^{j}\left(1-P\left(X_{k,n}\geq\psi^{-1}(\ln(mj))+\varepsilon\right)\right)
\geq 1-\left(1-C\exp\left(-\kappa\left(\psi^{-1}(\ln(mj))+\varepsilon\right)\right)\right)^{mj}.

Using the inequality $1-t\leq e^{-t}$, $t\geq 0$, one obtains

P(Y_{m,j}^{+}\geq\varepsilon)\geq 1-\exp\left(-Cmj\exp\left(-\kappa\left(\psi^{-1}\left(\ln(mj)\right)+\varepsilon\right)\right)\right).

Then, by the inequality $1-\exp(-t)\geq t\exp(-t)$, $t\geq 0$, it follows that

P(Y_{m,j}^{+}\geq\varepsilon)\geq Cmj\exp\left(-\kappa\left(\psi^{-1}\left(\ln(mj)\right)+\varepsilon\right)\right)
\times\exp\left(-Cmj\exp\left(-\kappa\left(\psi^{-1}\left(\ln(mj)\right)+\varepsilon\right)\right)\right).

By Lemma 1 and assumption (i)

C\exp(-\kappa(x))\leq\exp(-\psi(x)),\ x\geq 0.

Noting that $\kappa(\cdot)$ is an increasing function, one obtains

P(Y_{m,j}^{+}>\varepsilon)\geq Cmj\exp\left(-\kappa\left(\psi^{-1}\left(\ln(mj)\right)+\varepsilon\right)\right)
\times\exp\left(-Cmj\exp\left(-\kappa\left(\psi^{-1}\left(\ln(mj)\right)\right)\right)\right)
\geq Cmj\exp\left(-\kappa\left(\psi^{-1}\left(\ln(mj)\right)+\varepsilon\right)\right)\exp\left(-Cmj\exp\left(-\psi\left(\psi^{-1}\left(\ln(mj)\right)\right)\right)\right)
\geq Cmj\exp\left(-1\right)\exp\left(-\kappa\left(\psi^{-1}\left(\ln(mj)\right)+\varepsilon\right)\right).

Then, by assumption (ii)

P(Y_{m,j}^{+}>\varepsilon)\geq CC_{1}\exp(-1)\,mj\exp\left(-B\psi\left(\psi^{-1}(\ln(mj))+\varepsilon\right)\right).

It follows from assumption (iii) that

\frac{\psi\left(\psi^{-1}\left(\ln(mj)\right)+\varepsilon\right)}{\psi\left(\psi^{-1}\left(\ln(mj)\right)\right)}=\frac{\int_{0}^{\psi^{-1}(\ln(mj))}q_{\varphi}(x)dx+\int_{\psi^{-1}(\ln(mj))}^{\psi^{-1}(\ln(mj))+\varepsilon}q_{\varphi}(x)dx}{\int_{0}^{\psi^{-1}(\ln(mj))}q_{\varphi}(x)dx}
\leq 1+\frac{\varepsilon q_{\varphi}(\psi^{-1}(\ln(mj))+\varepsilon)}{\psi(\psi^{-1}(\ln(mj)))}\leq 1+C_{2}\varepsilon,

as $q_{\varphi}(\cdot)$ is a non-decreasing function.

Hence, it holds

P(Y_{m,j}^{+}>\varepsilon)\geq Cmj\exp\left(-B(1+C_{2}\varepsilon)\psi(\psi^{-1}(\ln(mj)))\right)=C(mj)^{1-B(1+C_{2}\varepsilon)}.

Therefore,

\sum_{m=1}^{\infty}\sum_{j=1}^{\infty}(mj)^{-\alpha}P(Y_{m,j}^{+}>\varepsilon)\geq C\left(\sum_{m=1}^{\infty}m^{1-\alpha-B(1+C_{2}\varepsilon)}\right)^{2}=+\infty

when $1-\alpha-B(1+C_{2}\varepsilon)>-1$, which completes the proof. ∎

5 Theoretical examples

This section provides theoretical examples for important particular classes of $\varphi$-subgaussian distributions. Specifications of the functions $\kappa(\cdot)$ and $\varphi(\cdot)$ for which the obtained theoretical results hold true are given.

Example 3.

Let $\{X_{k,n},\ k,n\geq 1\}$ be a double array of independent standard Gaussian random variables. It is well-known that $Ee^{tX_{k,n}}=e^{t^{2}/2}$, which implies that $\{X_{k,n},\ k,n\geq 1\}$ is a double array of $\varphi$-subgaussian random variables with $\varphi(x)=x^{2}/2$. The $\varphi$-subgaussian norm of a Gaussian random variable equals its standard deviation, which is 1 in this example, i.e. $\tau_{\varphi}(X_{k,n})\equiv 1$. The Young-Fenchel transform of $\varphi(\cdot)$ is $\psi(x)=x^{2}/2$ with the density $q_{\varphi}(x)=x$.

One can easily see that the condition of Corollary 1 is satisfied. Indeed, for any positive $\varepsilon$ the following integral is finite:

\int_{0}^{+\infty}\psi(x)q_{\varphi}(x)\exp\left(-\varepsilon q_{\varphi}(x)\right)dx=\int_{0}^{+\infty}\frac{x^{3}}{2}\exp\left(-\varepsilon x\right)dx<+\infty.
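In fact, this integral can be evaluated in closed form via the Gamma function:

\int_{0}^{+\infty}\frac{x^{3}}{2}e^{-\varepsilon x}dx=\frac{1}{2\varepsilon^{4}}\int_{0}^{+\infty}u^{3}e^{-u}du=\frac{\Gamma(4)}{2\varepsilon^{4}}=\frac{3}{\varepsilon^{4}}<+\infty.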

Let us show that the conditions of Corollary 2 are satisfied too.

By [1], for all $x>0$ it holds

P(X_{k,n}\geq x)\geq\frac{1}{2\sqrt{2\pi}}(\sqrt{4+x^{2}}-x)e^{-\frac{x^{2}}{2}}
=\sqrt{\frac{2}{\pi}}\frac{e^{-\frac{x^{2}}{2}}}{\sqrt{4+x^{2}}+x}=\sqrt{\frac{2}{\pi}}e^{-\frac{x^{2}}{2}-\ln(\sqrt{4+x^{2}}+x)}.

By Remark 6 this means that $\kappa(x)=\frac{x^{2}}{2}+\ln(\sqrt{4+x^{2}}+x)$ and $C=\sqrt{\frac{2}{\pi}}$. The function $\kappa(x)$ is increasing, positive, and $\kappa(x)\geq\ln(2)$ for $x>0$.

The derivative

r(x)=\kappa^{\prime}(x)=x+\frac{1+\frac{x}{\sqrt{4+x^{2}}}}{x+\sqrt{4+x^{2}}}=x+\frac{1}{\sqrt{4+x^{2}}}>0,\ x>0,

is positive. It follows from

r^{\prime}(x)=1-\frac{x}{(4+x^{2})^{3/2}}>0,\ x>0,

that $r(x)$, $x>0$, is non-decreasing.

Also, it is easy to see that $C_{0}(x)=-\ln(x+\sqrt{4+x^{2}})$, $x>0$.

Let us show that assumption (15) is satisfied with these specifications of the functions $\kappa(\cdot)$ and $\psi(\cdot)$. Indeed, by the change of variables $x=\psi^{-1}(\ln(y))-\varepsilon$ one obtains $y=e^{\psi(x+\varepsilon)}$ and

\int_{A}^{+\infty}\exp\left(-\frac{Cy}{2}\exp\left(-\kappa(\psi^{-1}(\ln(y))-\varepsilon)\right)\right)dy
=\int_{A^{\prime}}^{+\infty}q_{\varphi}(x+\varepsilon)\exp\left(\psi(x+\varepsilon)-\frac{C}{2}e^{\psi(x+\varepsilon)-\kappa(x)}\right)dx,   (18)

where $A^{\prime}=\psi^{-1}(\ln(A))-\varepsilon$.

By Bernoulli's inequality

\psi(x+\varepsilon)-\kappa(x)=\frac{x^{2}}{2}\left(\left(1+\frac{\varepsilon}{x}\right)^{2}-1\right)-\ln(\sqrt{4+x^{2}}+x)\geq\varepsilon x-\ln(\sqrt{4+x^{2}}+x).

As polynomial growth is faster than logarithmic growth, the integral in (18) is bounded from above by

\widetilde{C}\int_{A^{\prime}}^{+\infty}(x+\varepsilon)\exp\left(\frac{(x+\varepsilon)^{2}}{2}-e^{\widetilde{\varepsilon}x}\right)dx

for some $\widetilde{C},\widetilde{\varepsilon}>0$.

As exponentials grow faster than polynomials, for sufficiently large $x$

\frac{(x+\varepsilon)^{2}}{2}-\exp\left(\widetilde{\varepsilon}x\right)\leq-C^{\prime}\exp\left(\widetilde{\varepsilon}x\right)

and

\exp\left(-C^{\prime}\exp\left(\widetilde{\varepsilon}x\right)\right)\leq\exp\left(-C^{\prime\prime}x\right)

for some positive constants $C^{\prime}$ and $C^{\prime\prime}$.

Assumption (16) is satisfied too. Indeed, it takes the form

\int_{A}^{\infty}\psi(y)q_{\varphi}(y)\exp\left(\psi(y)-\frac{C}{2}\exp\left(C_{0}(y)+\varepsilon r(y-\varepsilon)\right)\right)dy
=\int_{A}^{+\infty}\frac{y^{3}}{2}\exp\left(\frac{y^{2}}{2}-\frac{C}{2(y+\sqrt{4+y^{2}})}\exp\left(\varepsilon\left(y-\varepsilon+\frac{1}{\sqrt{4+(y-\varepsilon)^{2}}}\right)\right)\right)dy.

The last integral is finite because

\frac{y^{2}}{2}-\frac{C}{2(y+\sqrt{4+y^{2}})}\exp\left(\varepsilon\left(y-\varepsilon+\frac{1}{\sqrt{4+(y-\varepsilon)^{2}}}\right)\right)<-\exp\left(\frac{\varepsilon y}{2}\right)

for sufficiently large $y$.

By Theorem 3 one gets that $\lim_{m\vee j\to\infty}Y_{m,j}=0$ a.s.

Example 4.

Let $\{X_{k,n},\ k,n\geq 1\}$ be a double array of independent identically distributed reflected Weibull random variables with the probability density

p(x)=\frac{\theta}{2b}\left(\frac{|x|}{b}\right)^{\theta-1}e^{-(|x|/b)^{\theta}},\quad\theta>0,\ b>0.

Consider reflected Weibull random variables with $\theta>1$ and $b>0$. They belong to the $\varphi$-subgaussian class. Indeed, the tails of reflected Weibull random variables equal

P(|X_{k,n}|>x)=e^{-(\frac{x}{b})^{\theta}},\ x\geq 0.

Hence, by [3, Corollary 4.1, p. 68], $\{X_{k,n},\ k,n\geq 1\}$ is a double array of $\varphi$-subgaussian random variables, where

\psi(x)=\left(\frac{x}{b}\right)^{\theta},\quad\varphi(x)=\frac{\theta-1}{\theta}\left(\frac{\theta}{b^{\theta}}\right)^{1/(\theta-1)}x^{\theta/(\theta-1)},\quad x\geq 0,

see [3, Example 2.5, p. 46], and $\tau_{\varphi}(X_{k,n})\equiv c<+\infty$. The density of $\psi(x)$ is $q_{\varphi}(x)=\theta x^{\theta-1}/b^{\theta}$, $x\geq 0$.

Let us choose a value of the parameter $b$ such that $\{X_{k,n},\ k,n\geq 1\}$ is a double array of $\varphi$-subgaussian random variables with $\varphi$-subgaussian norms $\tau_{\varphi}(X_{k,n})\equiv c\leq 1$, see Section 6. We will show that in this case the conditions of Corollaries 1 and 2 are satisfied.

The conditions of Corollary 1 are satisfied because $\tau_{\varphi}(X_{k,n})\equiv c$ and the following integral is finite for all positive $\varepsilon$:

\int_{0}^{+\infty}\psi(x)q_{\varphi}(x)\exp\left(-\varepsilon q_{\varphi}(x)\right)dx=\frac{\theta}{b^{2\theta}}\int_{0}^{+\infty}x^{2\theta-1}\exp\left(-\frac{\varepsilon\theta}{b^{\theta}}x^{\theta-1}\right)dx<+\infty.

Let us show that the conditions of Corollary 2 are satisfied too. By Remark 6 and the equality $P(X_{k,n}>x)=\frac{1}{2}e^{-(\frac{x}{b})^{\theta}}$, $x\geq 0$, it follows that $\kappa(x)=\psi(x)=\left(\frac{x}{b}\right)^{\theta}$, $C=1/2$, and $r(x)=q_{\varphi}(x)$. Hence,

\psi(x)-\kappa(cx)=(1-c^{\theta})\left(\frac{x}{b}\right)^{\theta}\geq C_{0}(x)=0

because $g(\cdot)\equiv c\leq 1$.

Assumption (15) can be rewritten as

\int_{A}^{+\infty}\exp\left(-\frac{y}{4}\exp\left(-\kappa\left(\psi^{-1}(\ln(y))-\varepsilon\right)\right)\right)dy
=\int_{A}^{+\infty}\exp\left(-\frac{y}{4}\exp\left(-\psi\left(\psi^{-1}(\ln(y))-\varepsilon\right)\right)\right)dy.

Let us use the change of variables $x=\psi^{-1}(\ln(y))-\varepsilon$. Then $y=e^{\psi(x+\varepsilon)}$ and the above integral equals

\int_{A^{\prime}}^{+\infty}q_{\varphi}(x+\varepsilon)\exp\left(\psi(x+\varepsilon)-\frac{e^{\psi(x+\varepsilon)-\psi(x)}}{4}\right)dx
=\frac{\theta}{b^{\theta}}\int_{A^{\prime}}^{+\infty}(x+\varepsilon)^{\theta-1}\exp\left(\frac{(x+\varepsilon)^{\theta}}{b^{\theta}}-\frac{1}{4}\exp\left(\frac{(x+\varepsilon)^{\theta}-x^{\theta}}{b^{\theta}}\right)\right)dx,   (19)

where $A^{\prime}=\psi^{-1}(\ln(A))-\varepsilon$.

By Bernoulli's inequality

(x+\varepsilon)^{\theta}-x^{\theta}=x^{\theta}\left(\left(1+\frac{\varepsilon}{x}\right)^{\theta}-1\right)\geq\varepsilon\theta x^{\theta-1},

and the integral in (19) is bounded by

\frac{\theta}{b^{\theta}}\int_{A^{\prime}}^{+\infty}(x+\varepsilon)^{\theta-1}\exp\left(\frac{(x+\varepsilon)^{\theta}}{b^{\theta}}-\frac{1}{4}\exp\left(\frac{\varepsilon\theta}{b^{\theta}}x^{\theta-1}\right)\right)dx.

As exponentials grow faster than polynomials, we obtain that for sufficiently large $x$

\frac{(x+\varepsilon)^{\theta}}{b^{\theta}}-\frac{1}{4}\exp\left(\frac{\varepsilon\theta}{b^{\theta}}x^{\theta-1}\right)\leq-C\exp\left(\frac{\varepsilon\theta}{b^{\theta}}x^{\theta-1}\right)

and

\exp\left(-C\exp\left(\frac{\varepsilon\theta}{b^{\theta}}x^{\theta-1}\right)\right)\leq\exp\left(-\widetilde{C}x^{\theta-1}\right)

for some positive constants $C$ and $\widetilde{C}$.

Finally, since for $\theta>1$

θbθA(x+ε)θ1exp(C~xθ1)𝑑x<+\frac{\theta}{b^{\theta}}\int_{A^{\prime}}^{\infty}(x+\varepsilon)^{\theta-1}\exp\left(-\widetilde{C}x^{\theta-1}\right)dx<+\infty

we obtain (15).

Now, let us check the assumption (16). In our case, it takes the form

A+ψ(y)qφ(y)exp(ψ(y)exp(εqφ(yε))4)𝑑y\int_{A}^{+\infty}\psi(y)q_{\varphi}(y)\exp\left(\psi(y)-\frac{\exp\big{(}\varepsilon q_{\varphi}(y-\varepsilon)\big{)}}{4}\right)dy
=θbθA+y2θ1exp((yb)θexp(εθbθ(yε)θ1)4)𝑑y.=\frac{\theta}{b^{\theta}}\int_{A}^{+\infty}y^{2\theta-1}\exp\left(\left(\frac{y}{b}\right)^{\theta}-\frac{\exp\left(\frac{\varepsilon\theta}{b^{\theta}}(y-\varepsilon)^{\theta-1}\right)}{4}\right)dy.

Again, as $\theta>1$, one has $(\frac{y}{b})^{\theta}-\frac{1}{4}\exp(\frac{\varepsilon\theta}{b^{\theta}}(y-\varepsilon)^{\theta-1})<-y$ for sufficiently large $y.$ Hence, up to a constant factor, the integrand is bounded by $y^{2\theta-1}e^{-y},$ which means that the integral is finite. Thus, the conditions of Corollaries 1 and 2 hold true. Moreover, by Theorem 3 one gets that $\lim_{m\vee j\to\infty}Y_{m,j}=0$ a.s.
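The finiteness of both integrals above can also be confirmed numerically. Below is a minimal sketch (Python with SciPy), assuming the parameter values $\theta=9$ and $b=1.25$ used in Section 6; the choices $\varepsilon=0.1$ and $A=1$ are our illustrative assumptions, not values prescribed by the corollaries.

```python
# Numerical sanity check of the two integral conditions verified above.
# theta = 9, b = 1.25 follow Section 6; eps = 0.1 and A = 1 are
# illustrative assumptions.
import numpy as np
from scipy.integrate import quad

theta, b, eps, A = 9.0, 1.25, 0.1, 1.0

psi = lambda x: (x / b) ** theta
q_phi = lambda x: theta * x ** (theta - 1) / b ** theta  # the density psi'(x)

# Integral from Corollary 1: psi(x) q_phi(x) exp(-eps q_phi(x))
I1, _ = quad(lambda x: psi(x) * q_phi(x) * np.exp(-eps * q_phi(x)),
             0, np.inf, limit=200)

# Integral from assumption (16) in the explicit form derived above
def f2(y):
    inner = min(eps * q_phi(y - eps), 700.0)  # cap to avoid float overflow
    return psi(y) * q_phi(y) * np.exp(psi(y) - np.exp(inner) / 4)

I2, _ = quad(f2, A, np.inf, limit=200)
print(f"Corollary 1 integral ~ {I1:.4g}; assumption (16) integral ~ {I2:.4g}")
```

Both quadratures return finite values, in agreement with the analytical argument.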

6 Numerical examples

This section provides numerical examples that confirm the obtained theoretical results. By simulating double arrays of random variables satisfying the conditions of Theorem 3, we show that the running maxima functionals of these double arrays converge to 0 as the size of the observation windows tends to infinity. As the rate of convergence is very slow, we selected arrays with constant $\varphi$-subgaussian norms close to one to better illustrate the asymptotic behaviour.

Consider a double array {Xk,n,k,n1}\{X_{k,n},k,n\geq 1\} that consists of independent reflected Weibull random variables (see Example 4) with the parameters θ=9\theta=9 and b=1.25.b=1.25. These values of θ\theta and bb were selected to get τφ(Xk,n)1\tau_{\varphi}(X_{k,n})\leq 1 as in Example 4. The probability density function of the underlying random variables Xk,n,k,n1,X_{k,n},k,n\geq 1, and a realization of the double array {Xk,n,k,n1}\{X_{k,n},k,n\geq 1\} in a square window are shown in Figure 1.

Figure 1: Double array of Weibull random variables. (a) Probability density function; (b) Realization of the double array.

The underlying random variables Xk,n,k,n1,X_{k,n},\ k,n\geq 1, are φ\varphi-subgaussian random variables with

ψ(x)=(x1.25)9,φ(x)=89(91.259)1/8x9/8,x0.\psi(x)=\left(\frac{x}{1.25}\right)^{9},\quad\varphi(x)=\frac{8}{9}\left(\frac{9}{1.25^{9}}\right)^{1/8}x^{9/8},\quad x\geq 0.

Calculating the $\varphi$-subgaussian norm by using Definition 4 is not trivial in the general case and may require different approaches. The following method was used to estimate the $\varphi$-subgaussian norm of $X_{k,n},\ k,n\geq 1.$ By [3, Lemma 4.2, p. 65] the $\varphi$-subgaussian norm admits the representation

τφ(Xk,n)=supλ0φ(1)(ln(Eexp(λXk,n)))|λ|.\tau_{\varphi}(X_{k,n})=\sup_{\lambda\neq 0}\frac{\varphi^{(-1)}\left(\ln\left(E\exp(\lambda X_{k,n})\right)\right)}{|\lambda|}.

For the reflected Weibull random variables the above expectation can be calculated as

E\exp(\lambda X_{k,n})=\int_{\mathbb{R}}e^{\lambda x}p(x)dx=\int_{0}^{+\infty}e^{\lambda x}p(x)dx+\int_{0}^{+\infty}e^{-\lambda x}p(x)dx
=\frac{1}{2}\left(MGF(\lambda)+MGF(-\lambda)\right),

where MGF()MGF(\cdot) denotes the moment generating function of the corresponding Weibull distribution and is given by

MGF(λ)=n=0+λnbnn!Γ(1+nθ).MGF(\lambda)=\sum_{n=0}^{+\infty}\frac{\lambda^{n}b^{n}}{n!}\Gamma\left(1+\frac{n}{\theta}\right).

By using this representation one gets

Eexp(λXk,n)=12(n=0+λnbnn!Γ(1+nθ)+n=0+(λ)nbnn!Γ(1+nθ))E\exp(\lambda X_{k,n})=\frac{1}{2}\left(\sum_{n=0}^{+\infty}\frac{\lambda^{n}b^{n}}{n!}\Gamma\left(1+\frac{n}{\theta}\right)+\sum_{n=0}^{+\infty}\frac{(-\lambda)^{n}b^{n}}{n!}\Gamma\left(1+\frac{n}{\theta}\right)\right)
=n=0+λ2nb2n(2n)!Γ(1+2nθ).=\sum_{n=0}^{+\infty}\frac{\lambda^{2n}b^{2n}}{(2n)!}\Gamma\left(1+\frac{2n}{\theta}\right).

Thus, for sufficiently large MM the φ\varphi-subgaussian norm of the reflected Weibull random variables can be approximated by

τφ(Xk,n)supλ0φ(1)(ln(n=0Mλ2nb2n(2n)!Γ(1+2nθ)))|λ|\tau_{\varphi}(X_{k,n})\approx\sup_{\lambda\neq 0}\frac{\varphi^{(-1)}\left(\ln\left(\sum_{n=0}^{M}\frac{\lambda^{2n}b^{2n}}{(2n)!}\Gamma\left(1+\frac{2n}{\theta}\right)\right)\right)}{|\lambda|}
=\sup_{\lambda\neq 0}\left(\left(\frac{9}{8}\right)^{8/9}\left(\frac{1.25^{9}}{9}\right)^{1/9}\frac{\left(\ln\left(\sum_{n=0}^{M}\frac{\lambda^{2n}b^{2n}}{(2n)!}\Gamma\left(1+\frac{2n}{\theta}\right)\right)\right)^{8/9}}{|\lambda|}\right).

As $(2n)!$ increases very quickly, even small values of $M$ provide a very accurate approximation of the series and the norm.
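For completeness, the following minimal sketch (Python/NumPy) illustrates this estimation procedure; the grid over $\lambda$ and its bounds are our illustrative assumptions, and $\varphi^{(-1)}$ is obtained by directly inverting the formula for $\varphi$ in Example 4.

```python
# A minimal sketch of the norm estimation described above (Python/NumPy).
# theta, b, M are the values from the text; the lambda grid is an
# illustrative assumption.
import numpy as np
from math import lgamma, log

theta, b, M = 9.0, 1.25, 50

# phi(x) = c * x^(theta/(theta-1)) as in Example 4, inverted directly
c = (theta - 1) / theta * (theta / b ** theta) ** (1 / (theta - 1))
def phi_inv(y):
    return (y / c) ** ((theta - 1) / theta)

def mgf_sym(lam):
    """Truncated even series for E exp(lam * X), X reflected Weibull."""
    n = np.arange(M + 1)
    # terms on the log scale: (lam*b)^(2n) / (2n)! * Gamma(1 + 2n/theta)
    log_terms = (2 * n) * np.log(lam * b) \
        - np.array([lgamma(2 * k + 1) for k in n]) \
        + np.array([lgamma(1 + 2 * k / theta) for k in n])
    return np.exp(log_terms).sum()

lams = np.linspace(0.5, 20.0, 2000)  # the function is even, so lambda > 0
vals = np.array([phi_inv(log(mgf_sym(l))) / l for l in lams])
i = vals.argmax()
print(f"estimated norm ~ {vals[i]:.3f} attained near lambda ~ {lams[i]:.2f}")
```

On this grid the maximum should appear near $\lambda\approx 8.6$ with a value close to one, consistent with Figure 2 below.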

Figure 2 shows the graph of the function under the supremum and the supremum value for $M=50.$ As the function is symmetric, only the range $\lambda>0$ is plotted. For $\theta=9$ and $b=1.25$ the supremum is attained at $\lambda=8.5801$ and the estimated value of the norm $\tau_{\varphi}(X_{k,n})$ is 0.997. Thus, the double array $\{X_{k,n},k,n\geq 1\}$ satisfies the conditions of Theorem 3, see Example 4.

Figure 2: Estimation of $\tau_{\varphi}(X_{k,n}).$

Figure 3: Observation windows. (a) The first set; (b) The second set.

Then, 1000 realizations of the double array $\{X_{k,n},k,n\geq 1\}$ were generated in the square region $\square(1000)=\{(m,j):m,j\leq 1000,\ m,j\in\mathbb{N}\}.$ Using the obtained realizations of the double array, values of $Y_{m,j}$ were computed for two sets of observation windows. The windows are shown in Figure 3 using logarithmic scales for the $x$ and $y$ coordinates. For the set of observation windows in Fig. 3(a) and a realization of the reflected Weibull random array, the corresponding running maxima are shown in Fig. 4(a). For all rectangular observation windows inside $\square(1000)$, locations of maxima are shown in Fig. 4(b). The locations are very sparse and the majority of them are concentrated close to the left and bottom borders of $\square(1000).$

Figure 4: Running maxima of a realization over a set of windows. (a) Running maxima $Y_{m,j}$ for the first set of windows; (b) Locations of maxima for all rectangular subwindows in $\square(1000)$.
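This experiment can be reproduced with a few lines of code. The following minimal sketch (Python/NumPy) simulates one realization and evaluates $Y_{m,j}$ over square windows; the normalising constants $a_{m,j}=\psi^{(-1)}(\ln(mj))=b(\ln(mj))^{1/\theta}$ used below are our assumption for illustration, as the exact constants are defined earlier in the paper.

```python
# A minimal simulation sketch of the experiment above (Python/NumPy).
# The normalising constants a_{m,j} = b * (ln(m*j))^(1/theta) are an
# assumed illustrative choice, namely psi^{-1}(ln(m*j)) for psi above.
import numpy as np

rng = np.random.default_rng(0)
theta, b, N = 9.0, 1.25, 1000

# Reflected Weibull draws: |X| ~ Weibull(theta, b) with a random sign,
# so that P(X > x) = exp(-(x/b)**theta) / 2 for x >= 0.
X = rng.choice([-1.0, 1.0], size=(N, N)) * b * rng.weibull(theta, size=(N, N))

# Running maxima over all rectangular windows [1..m] x [1..j]
run_max = np.maximum.accumulate(np.maximum.accumulate(X, axis=0), axis=1)

for size in (10, 50, 100, 500, 1000):  # illustrative square windows
    a = b * np.log(size * size) ** (1 / theta)
    print(f"{size:4d} x {size:4d}: Y = {run_max[size - 1, size - 1] - a:+.4f}")
```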

For the 1000 simulated realizations and the corresponding sets of observation windows from Figure 3, the box plots of the running maxima functionals $Y_{m,j}$ are shown in Figure 5. It is clear that the distribution of the running maxima concentrates around zero when the size of the observation window increases, but the rate of convergence seems to be rather slow.

Figure 5: Box plots of running maxima for two sets of observation windows. (a) The first case; (b) The second case.

Table 1 shows the corresponding Root Mean Square Error (RMSE) of the running maxima functionals $Y_{m,j}$ from Figure 5. The table confirms the convergence of $Y_{m,j}$ to zero when the observation window increases.

Observation window | 1     | 2     | 3     | 4     | 5     | 6
The first case     | 0.026 | 0.022 | 0.021 | 0.019 | 0.017 | 0.016
The second case    | 0.038 | 0.032 | 0.024 | 0.024 | 0.018 | 0.017

Table 1: RMSE of $Y_{m,j}$.
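The RMSE computation can be sketched as follows (Python/NumPy). The number of realizations and the window sizes below are reduced illustrative choices so that the code runs quickly; $a_{m,j}$ is the assumed normalisation from the previous sketch.

```python
# RMSE of Y_{m,j} over R independent realizations (reduced sizes for
# speed; the paper uses 1000 realizations in the region square(1000)).
import numpy as np

rng = np.random.default_rng(1)
theta, b = 9.0, 1.25
R, N, sizes = 200, 300, (50, 100, 200, 300)

Y = np.empty((R, len(sizes)))
for rep in range(R):
    X = rng.choice([-1.0, 1.0], size=(N, N)) * b * rng.weibull(theta, (N, N))
    M = np.maximum.accumulate(np.maximum.accumulate(X, axis=0), axis=1)
    for s, size in enumerate(sizes):
        a = b * np.log(size * size) ** (1 / theta)  # assumed a_{m,j}
        Y[rep, s] = M[size - 1, size - 1] - a

rmse = np.sqrt((Y ** 2).mean(axis=0))
print(dict(zip(sizes, rmse.round(4))))
```

The RMSE values should decrease as the window size grows, mirroring Table 1.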
Figure 6: Boxplots of running maxima $Y_{m,j}$ for 6 groups.

Finally, to demonstrate the $\lim(\max)$ convergence, 1000 simulated realizations of the reflected Weibull double array were used. The running maxima functionals $Y_{m,j}$ were calculated for all possible pairs $(m,j),$ $m,j=1,2,\dots,1200.$ Figure 6 shows the boxplots of the obtained values of $Y_{m,j}$ computed for 6 groups depending on the values of the parameter $r=m\vee j$ in the corresponding observation subwindows. The lower bound for the parameter $r$ increases with the group number: in group $i=1,\dots,6$ the values satisfy $r\geq r_{i},$ where $r_{i}=10,20,100,300,600,1000,$ respectively. The obtained boxplots in Figure 6 confirm the $\lim(\max)$ convergence.
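The grouping used for Figure 6 can be sketched as follows, reusing run_max, b, theta and N from the earlier simulation sketch (the paper uses arrays of size 1200, whereas N here is whatever was simulated above); the spread within each group should shrink as $r_i$ grows.

```python
# Group Y_{m,j} by r = max(m, j), as in Figure 6. Reuses run_max, b,
# theta, N from the earlier sketch; a_{m,j} is the assumed normalisation.
import numpy as np

mm, jj = np.meshgrid(np.arange(1, N + 1), np.arange(1, N + 1), indexing="ij")
Y = run_max - b * np.log(mm * jj) ** (1 / theta)

for r_i in (10, 20, 100, 300, 600, 1000):
    group = Y[np.maximum(mm, jj) >= r_i]
    q1, q3 = np.percentile(group, [25, 75])
    print(f"group r >= {r_i:4d}: median {np.median(group):+.4f}, IQR {q3 - q1:.4f}")
```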

7 Conclusions and future studies

The asymptotic behaviour of running maxima of random double arrays was investigated. The conditions of the obtained results allow one to consider a wide class of $\varphi$-subgaussian random fields and are weaker than even those in the known results for the one-dimensional case. The rate of convergence was also studied. The results were derived for a general class of rectangular observation windows and for the $\lim(\max)$ convergence.

In future studies, it would be interesting to extend the obtained results to:

- the case of $n$-dimensional arrays,

- other types of observation windows,

- continuous φ\varphi-subgaussian random fields,

- different types of dependencies.

Acknowledgements

This research was supported by the La Trobe University SEMS CaRE Grant “Asymptotic analysis for point and interval estimation in some statistical models”.

This research includes computations using the Linux computational cluster Gadi of the National Computational Infrastructure (NCI), which is supported by the Australian Government and La Trobe University.

References

  • Birnbaum, [1942] Birnbaum, Z. (1942). An inequality for Mill’s ratio. Ann. Math. Statist. 13(2): 245-246. DOI: 10.1214/aoms/1177731611.
• Borovkov et al., [2017] Borovkov, K., Mishura, Yu., Novikov, A., and Zhitlukhin, M. (2017). Bounds for expected maxima of Gaussian processes and their discrete approximations. Stochastics, 89(1): 21–37. DOI: 10.1080/17442508.2015.1126282.
  • Buldygin and Kozachenko, [2000] Buldygin, V. and Kozachenko, Y. (2000). Metric Characterization of Random Variables and Random Processes. Providence, R.I.: American Mathematical Society. DOI: 10.1090/mmono/188.
• Csáki and Gonchigdanzan, [2002] Csáki, E., and Gonchigdanzan, K. (2002). Almost sure limit theorems for the maximum of stationary Gaussian sequences. Statist. Probab. Lett. 58(2): 195–203. DOI: 10.1016/s0167-7152(02)00128-1.
  • [5] Donhauzer, I., Olenko, A., and Volodin, A. (2020). Strong law of large numbers for functionals of random fields with unboundedly increasing covariances. To appear in Commun. Stat. Theory Methods. DOI: 10.1080/03610926.2020.1868515.
  • Dudley, [1967] Dudley, R. (1967). The sizes of compact subsets of Hilbert space and continuity of Gaussian processes. J. Funct. Anal. 1(3):290–330. DOI: 10.1016/0022-1236(67)90017-1.
• Embrechts et al., [1997] Embrechts, P., Klüppelberg, C., and Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance. Berlin: Springer-Verlag. DOI: 10.1007/978-3-642-33483-2.
  • Fernique, [1975] Fernique, X. (1975). Regularité des trajectoires des fonctions aléatoires gaussiennes. In: Ecole d’Eté de Probabilités de Saint-Flour IV—1974 (pp. 1-94). Berlin: Springer. DOI: 10.1007/bfb0080190.
  • Giuliano, [1995] Giuliano, R. (1995). Remarks on maxima of real random sequences. Note Mat. 15(2):143–145. DOI: 10.1285/i15900932v15n2p143.
  • Giuliano et al., [2013] Giuliano, R., Ngamkham, T., and Volodin, A. (2013). On the asymptotic behavior of the sequence and series of running maxima from a real random sequence. Stat. Probabil. Lett. 83(2):534–542. DOI: 10.1016/j.spl.2012.10.010.
• Giuliano and Macci, [2014] Giuliano, R. and Macci, C. (2014). Large deviation principles for sequences of maxima and minima. Comm. Statist. Theory Methods. 43(6): 1077–1098. DOI: 10.1080/03610926.2012.668606.
  • Hoffmann-Jørgensen, [1994] Hoffmann-Jørgensen, J. (1994). Probability With a View Toward Statistics. New York: Chapman & Hall. DOI: 10.1007/978-1-4899-3019-4.
  • Hu et al., [2020] Hu, T., Rosalsky, A., Volodin, A., and Zhang, S. (2020). A complete convergence theorem for row sums from arrays of rowwise independent random elements in Rademacher type pp Banach spaces. To appear in Stoch. Anal. Appl. DOI: 10.1080/07362994.2020.1791721.
  • Hu et al., [2019] Hu, T., Rosalsky, A., and Volodin, A. (2019). Complete convergence theorems for weighted row sums from arrays of random elements in Rademacher type pp and martingale type pp Banach spaces. Stoch. Anal. Appl. 37(6):1092–1106. DOI: 10.1080/07362994.2019.1641414.
  • Kahane, [1960] Kahane, J. (1960). Propriétés locales des fonctions à séries de Fourier aléatoires. Studia Math. 19(1):1–25. DOI: 10.4064/sm-19-1-1-25.
  • Klesov, [2014] Klesov, O. (2014). Limit Theorems for Multi-Indexed Sums of Random Variables. Heidelberg: Springer. DOI: 10.1007/978-3-662-44388-0.
  • [17] Kozachenko, Y. and Olenko, A. (2016). Aliasing-truncation errors in sampling approximations of sub-gaussian signals. IEEE Trans. Inf. Theory. 62(10):5831–5838. DOI: 10.1109/tit.2016.2597146.
  • Kozachenko and Olenko, [2016] Kozachenko, Y. and Olenko, A. (2016). Whittaker–Kotel’nikov–Shannon approximation of φ\varphi-sub-gaussian random processes. J. Math. Anal. Appl. 443(2):926–946. DOI: 10.1109/tit.2016.2597146.
  • Kozachenko et al., [2015] Kozachenko, Y., Olenko, A., and Polosmak, O. (2015). Convergence in Lp([0,t])L_{p}([0,t]) of wavelet expansions of φ\varphi-sub-gaussian random processes. Methodol. Comput. Appl. Probab. 17(1):139–153. DOI: 10.1007/s11009-013-9346-7.
  • Kratz, [2006] Kratz, M. (2006). Level crossings and other level functionals of stationary Gaussian processes. Probab. Surveys. 3:230–288. DOI: 10.1214/154957806000000087.
  • Leadbetter et al., [1983] Leadbetter, M., Lindgren, G. and Rootzén, H. (1983). Extremes and Related Properties of Random Sequences and Processes. New York: Springer. DOI: 10.1007/978-1-4612-5449-2.
  • Ledoux and Talagrand, [2013] Ledoux, M. and Talagrand, M. (2013). Probability in Banach Spaces: Isoperimetry and Processes. Berlin: Springer. DOI: 10.1007/978-3-642-20212-4.
  • Pickands, [1967] Pickands, J. (1967). Maxima of stationary Gaussian processes. Z. Wahrscheinlichkeitstheor. verw. Geb. 7(3):190–223. DOI: 10.1007/bf00532637.
  • Piterbarg, [1996] Piterbarg, V. (1996). Asymptotic Methods in the Theory of Gaussian Processes and Fields. Providence, RI: American Mathematical Society. DOI: 10.1090/mmono/148.
  • Talagrand, [2014] Talagrand, M. (2014). Upper and Lower Bounds for Stochastic Processes. Modern Methods and Classical Problems. Heidelberg: Springer. DOI: 10.1007/978-3-642-54075-2.