

Extrema of multi-dimensional Gaussian processes over random intervals

Lanpeng Ji, School of Mathematics, University of Leeds, Woodhouse Lane, Leeds LS2 9JT, United Kingdom ([email protected])  and  Xiaofan Peng, School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 610054, China ([email protected])

Abstract: This paper studies the joint tail asymptotics of extrema of multi-dimensional Gaussian processes over random intervals, defined as

P(u):=\mathbb{P}\left\{\cap_{i=1}^{n}\left(\sup_{t\in[0,\mathcal{T}_{i}]}(X_{i}(t)+c_{i}t)>a_{i}u\right)\right\},\ \ \ u\rightarrow\infty,

where $X_{i}(t),t\geq 0$, $i=1,2,\cdots,n$, are independent centered Gaussian processes with stationary increments, $\boldsymbol{\mathcal{T}}=(\mathcal{T}_{1},\cdots,\mathcal{T}_{n})$ is a regularly varying random vector with positive components, independent of the Gaussian processes, and $c_{i}\in\mathbb{R}$, $a_{i}>0$, $i=1,2,\cdots,n$. Our result shows that the structure of the asymptotics of $P(u)$ is determined by the signs of the drifts $c_{i}$. We also discuss a relevant multi-dimensional regenerative model and derive the corresponding ruin probability.

Key Words: Joint tail asymptotics; Gaussian processes; perturbed random walk; ruin probability; fluid model; fractional Brownian motion; regenerative model.

AMS Classification: Primary 60G15; secondary 60G70

1. Introduction

Let $X(t),t\geq 0$ be an almost surely (a.s.) continuous centered Gaussian process with stationary increments and $X(0)=0$. Motivated by its applications to hybrid fluid and ruin models, the seminal paper [1] derived the exact tail asymptotics of

(1) \mathbb{P}\left\{\sup_{t\in[0,\mathcal{T}]}X(t)>u\right\},\ \ \ u\rightarrow\infty,

with $\mathcal{T}$ a regularly varying random variable independent of $X$. Since then, the study of the tail asymptotics of suprema over random intervals has attracted substantial interest in the literature. We refer to [2, 3, 4, 5, 6, 7] for various extensions to general (non-centered) Gaussian or Gaussian-related processes. In the aforementioned contributions, various tail distributions for $\mathcal{T}$ have been discussed, and it has been shown that the variability of $\mathcal{T}$ influences the form of the asymptotics of (1), leading to qualitatively different structures.
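To make the flavour of (1) concrete, here is a small self-contained numerical check (an illustration, not taken from [1]; standard Brownian motion and a Pareto $\mathcal{T}$ are chosen purely for simplicity). By the reflection principle, $\sup_{t\in[0,T]}B(t)$ has the law of $\lvert B(T)\rvert$, so for $\mathcal{T}$ Pareto with index $\alpha=1$ the probability reduces to a one-dimensional integral; the asymptotics derived in [1] predict, in this special case, $\mathbb{P}\{\sup_{t\in[0,\mathcal{T}]}B(t)>u\}\sim \mathbb{E}\{(\sup_{t\in[0,1]}B(t))^{2}\}\,\mathbb{P}\{\mathcal{T}>u^{2}\}=u^{-2}$, since $\sup_{t\in[0,1]}B(t)$ has the law of $\lvert N(0,1)\rvert$.

```python
import math

def Psi(x):  # N(0,1) survival function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# P{ sup_{[0,T]} B(t) > u } = 2*Psi(u/sqrt(T)) by the reflection principle.
# With T ~ Pareto(1) (density t^{-2} on [1, oo)), the substitutions
# w = 1/t, w = v^2, x = u*v reduce u^2 * E[ 2*Psi(u/sqrt(T)) ] exactly to
#   4 * \int_0^u x * Psi(x) dx,
# which tends to 4 * (1/4) = 1 as u -> infinity.
def ratio(u, n=20000):
    # trapezoidal rule for 4 * int_0^u x*Psi(x) dx  (= u^2 * P(u))
    h = u / n
    s = 0.5 * (0.0 + u * Psi(u))
    for k in range(1, n):
        x = k * h
        s += x * Psi(x)
    return 4.0 * s * h

print(ratio(10.0))  # close to 1: P(u) ~ P{T > u^2} = u^{-2}
```

The ratio is already essentially $1$ at $u=10$, in line with the predicted $u^{-2}$ rate.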

The primary aim of this paper is to analyze the asymptotics of a multi-dimensional counterpart of (1). More precisely, consider a multi-dimensional centered Gaussian process

(2) \boldsymbol{X}(t)=(X_{1}(t),X_{2}(t),\cdots,X_{n}(t)),\ \ \ t\geq 0,

with independent coordinates, where each $X_{i}(t),t\geq 0$, has stationary increments, a.s. continuous sample paths and $X_{i}(0)=0$, and let $\boldsymbol{\mathcal{T}}=(\mathcal{T}_{1},\cdots,\mathcal{T}_{n})$ be a regularly varying random vector with positive components, independent of $\boldsymbol{X}$. We are interested in the exact asymptotics of

(3) P(u):=\mathbb{P}\left\{\cap_{i=1}^{n}\left(\sup_{t\in[0,\mathcal{T}_{i}]}(X_{i}(t)+c_{i}t)>a_{i}u\right)\right\},\ \ \ u\rightarrow\infty,

where $c_{i}\in\mathbb{R}$, $a_{i}>0$, $i=1,2,\cdots,n$.

Extremal analysis of multi-dimensional Gaussian processes has been an active research area in recent years; see [8, 9, 10, 11, 12] and references therein. Most of these contributions discuss the asymptotic behaviour of the probability that $\boldsymbol{X}$ (possibly with trend) enters an upper orthant over a finite-time or infinite-time interval; this problem is also connected with the conjunction problem for Gaussian processes first studied by Worsley and Friston [13]. Investigations of the joint tail asymptotics of multiple extrema as defined in (3) are known to be more challenging. The existing literature focuses on the case of deterministic times $\mathcal{T}_{1}=\cdots=\mathcal{T}_{n}$ under additional assumptions on the correlation structure of the $X_{i}$'s. In [14, 8] large deviation type results are obtained, and more recently in [15, 16] exact asymptotics are obtained for correlated two-dimensional Brownian motion. It is also worth mentioning that a large deviation result for the multivariate maxima of a discrete Gaussian model is discussed in the recent paper [17].

In order to avoid further technical difficulties, the coordinates of the multi-dimensional process $\boldsymbol{X}$ in (2) are assumed to be independent. The dependence among the extrema in (3) is driven by the structure of the multivariate regularly varying $\boldsymbol{\mathcal{T}}$. Interestingly, we observe in Theorem 3.1 that the form of the asymptotics of (3) is determined by the signs of the drifts $c_{i}$.

Apart from its theoretical interest, the motivation to analyse the asymptotic properties of $P(u)$ comes from numerous applications in modern multi-dimensional risk theory, financial mathematics and fluid queueing networks. For example, consider an insurance company which runs $n$ lines of business. The surplus process of the $i$th business line can be modelled by a time-changed Gaussian process

R_{i}(t)=a_{i}u+c_{i}Y_{i}(t)-X_{i}(Y_{i}(t)),\ \ t\geq 0,

where $a_{i}u>0$ is the initial capital (considered as a proportion of $u$ allocated to the $i$th business line, with $\sum_{i=1}^{n}a_{i}=1$), $c_{i}>0$ is the net premium rate, $X_{i}(t),t\geq 0$ is the net loss process, and $Y_{i}(t),t\geq 0$ is a positive increasing process modelling the so-called “operational time” of the $i$th business line. We refer to [18, 19] and [5] for detailed discussions of multi-dimensional risk models and time-changed risk models, respectively. Of interest in risk theory is the probability that all the business lines are ruined within some finite (deterministic) time $T>0$, defined by

\varphi(u):=\mathbb{P}\left\{\cap_{i=1}^{n}\left(\inf_{t\in[0,T]}R_{i}(t)<0\right)\right\}=\mathbb{P}\left\{\cap_{i=1}^{n}\left(\sup_{t\in[0,T]}(X_{i}(Y_{i}(t))+c_{i}Y_{i}(t))>a_{i}u\right)\right\}.

If additionally all the operational time processes $Y_{i}(t),t\geq 0$ have a.s. continuous sample paths, then $\varphi(u)=P(u)$ with $\boldsymbol{\mathcal{T}}=\boldsymbol{Y}(T)$, and thus the derived result can be applied to estimate this ruin probability. Note that the dependence among different business lines is introduced by the dependence among the operational time processes $Y_{i}$. As a simple example we can consider $Y_{i}(t)=\Theta_{i}t$, $t\geq 0$, with $\boldsymbol{\Theta}=(\Theta_{1},\cdots,\Theta_{n})$ being a multivariate regularly varying random vector. Additionally, multi-dimensional time-changed (or subordinate) Gaussian processes have recently been shown to be good candidates for modelling the log-return processes of multiple assets; see, e.g., [20, 21, 22]. As the joint distribution of extrema of asset returns is important in financial problems, e.g., [23], we expect that the results obtained for (3) may also be of interest in financial mathematics.

As a relevant application, we shall discuss a multi-dimensional regenerative model, motivated by its relevance to risk models and fluid queueing models. Essentially, the multi-dimensional regenerative process is a process in a random alternating environment, where an independent multi-dimensional fractional Brownian motion (fBm) with trend is assigned at each environment alternating time. We refer to Section 4 for more details. By analysing a related multi-dimensional perturbed random walk, we obtain in Theorem 4.1 the ruin probability of the multi-dimensional regenerative model. This generalizes some of the results in [24] and [25] to the multi-dimensional setting. Note in passing that some related stochastic models with random sampling or resetting have been discussed in the recent literature; see, e.g., [26, 27, 28].

Organization of the rest of the paper: In Section 2 we introduce some notation, recall the definition of multivariate regular variation, and present some preliminary results on the extremes of one-dimensional Gaussian processes. The result for (3) is presented in Section 3, and the ruin probability of the multi-dimensional regenerative model is discussed in Section 4. The proofs are relegated to Sections 5 and 6. Some useful results on multivariate regular variation are collected in the Appendix.

2. Notation and Preliminaries

We shall use some standard notation which is common when dealing with vectors. All operations on vectors are meant componentwise. For instance, for any given $\boldsymbol{x}=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}$ and $\boldsymbol{y}=(y_{1},\ldots,y_{n})\in\mathbb{R}^{n}$, we write $\boldsymbol{x}\boldsymbol{y}=(x_{1}y_{1},\cdots,x_{n}y_{n})$, and write $\boldsymbol{x}>\boldsymbol{y}$ if and only if $x_{i}>y_{i}$ for all $1\leq i\leq n$. Furthermore, for two positive functions $f,h$ and some $u_{0}\in[-\infty,\infty]$, write $f(u)\lesssim h(u)$ or $h(u)\gtrsim f(u)$ if $\limsup_{u\rightarrow u_{0}}f(u)/h(u)\leq 1$, write $h(u)\sim f(u)$ if $\lim_{u\rightarrow u_{0}}f(u)/h(u)=1$, write $f(u)=o(h(u))$ if $\lim_{u\rightarrow u_{0}}f(u)/h(u)=0$, and write $f(u)\asymp h(u)$ if $f(u)/h(u)$ is bounded from both below and above for all sufficiently large $u$.

Next, let us recall the definition and some implications of multivariate regular variation. We refer to [29, 30, 31] for more detailed discussions. Let $\overline{\mathbb{R}}_{0}^{n}=\overline{\mathbb{R}}^{n}\setminus\{\boldsymbol{0}\}$ with $\overline{\mathbb{R}}=\mathbb{R}\cup\{-\infty,\infty\}$. An $\mathbb{R}^{n}$-valued random vector $\boldsymbol{X}$ is said to be regularly varying if there exists a non-null Radon measure $\nu$ on the Borel $\sigma$-field $\mathcal{B}(\overline{\mathbb{R}}_{0}^{n})$ with $\nu(\overline{\mathbb{R}}^{n}\setminus\mathbb{R}^{n})=0$ such that

\frac{\mathbb{P}\left\{x^{-1}\boldsymbol{X}\in\cdot\right\}}{\mathbb{P}\left\{\left\lvert\boldsymbol{X}\right\rvert>x\right\}}\ \overset{v}{\rightarrow}\ \nu(\cdot),\ \ \ \ x\rightarrow\infty.

Here $\left\lvert\,\cdot\,\right\rvert$ is any norm on $\mathbb{R}^{n}$ and $\overset{v}{\rightarrow}$ refers to vague convergence on $\mathcal{B}(\overline{\mathbb{R}}_{0}^{n})$. It is known that $\nu$ necessarily satisfies the homogeneity property $\nu(sK)=s^{-\alpha}\nu(K)$, $s>0$, for some $\alpha>0$ and all Borel sets $K$ in $\mathcal{B}(\overline{\mathbb{R}}_{0}^{n})$. In what follows, we say that such an $\boldsymbol{X}$ is regularly varying with index $\alpha$ and limiting measure $\nu$. An implication of the homogeneity property of $\nu$ is that all rectangle sets of the form $[\boldsymbol{a},\boldsymbol{b}]=\{\boldsymbol{x}:\boldsymbol{a}\leq\boldsymbol{x}\leq\boldsymbol{b}\}$ in $\overline{\mathbb{R}}_{0}^{n}$ are $\nu$-continuity sets. Furthermore, $\left\lvert\boldsymbol{X}\right\rvert$ is regularly varying at infinity with index $\alpha$, i.e., $\mathbb{P}\left\{\left\lvert\boldsymbol{X}\right\rvert>x\right\}\sim x^{-\alpha}L(x)$, $x\rightarrow\infty$, for some slowly varying function $L(x)$. Some useful results on multivariate regular variation are discussed in the Appendix.
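As a quick illustration of these definitions (a sketch with assumed toy ingredients, not taken from the paper), one can generate a regularly varying vector as a Pareto radius times a bounded random vector and recover the index $\alpha$ of $\lvert\boldsymbol{X}\rvert$ with the classical Hill estimator:

```python
import math, random

random.seed(7)

# Toy regularly varying vector: X = R * (W1, W2), with R Pareto(alpha = 2)
# and (W1, W2) uniform on (0,1)^2. Then |X|_1 = R*(W1 + W2) is regularly
# varying with the same index alpha.
alpha = 2.0
n = 100000
norms = []
for _ in range(n):
    r = random.random() ** (-1.0 / alpha)   # Pareto(alpha), scale 1
    w1, w2 = random.random(), random.random()
    norms.append(r * (w1 + w2))             # L1 norm of X

# Hill estimator of the tail index from the k largest order statistics.
norms.sort(reverse=True)
k = 2000
hill = sum(math.log(norms[i] / norms[k]) for i in range(k)) / k
alpha_hat = 1.0 / hill
print(alpha_hat)  # should be close to alpha = 2
```

Here the tail of $\lvert\boldsymbol{X}\rvert$ is exactly Pareto beyond level $2$, so the Hill estimate concentrates near $\alpha=2$ with standard error of order $\alpha/\sqrt{k}$.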

In what follows, we review some results on the extremes of one-dimensional Gaussian processes with negative drift derived in [32]. Let $X(t),t\geq 0$ be an a.s. continuous centered Gaussian process with stationary increments and $X(0)=0$, and let $c>0$ be some constant. We shall present the exact asymptotics of

\psi(u):=\mathbb{P}\left\{\sup_{t\geq 0}(X(t)-ct)>u\right\},\ \ \ u\rightarrow\infty.

Below are some assumptions that the variance function $\sigma^{2}(t)=\mathrm{Var}(X(t))$ might satisfy:

  • C1: $\sigma$ is continuous on $[0,\infty)$ and ultimately strictly increasing;

  • C2: $\sigma$ is regularly varying at infinity with index $H$ for some $H\in(0,1)$;

  • C3: $\sigma$ is regularly varying at $0$ with index $\lambda$ for some $\lambda\in(0,1)$;

  • C4: $\sigma^{2}$ is ultimately twice continuously differentiable and its first derivative $\dot{\sigma}^{2}$ and second derivative $\ddot{\sigma}^{2}$ are both ultimately monotone.

Note that in the above $\dot{\sigma}^{2}$ and $\ddot{\sigma}^{2}$ denote the first and second derivatives of $\sigma^{2}$, not the squares of the derivatives of $\sigma$. In the sequel, provided it exists, we denote by $\overleftarrow{\sigma}$ an asymptotic inverse of $\sigma$ near infinity or zero; recall that it is (asymptotically uniquely) defined by $\overleftarrow{\sigma}(\sigma(t))\sim\sigma(\overleftarrow{\sigma}(t))\sim t$. Whether $\overleftarrow{\sigma}$ is an asymptotic inverse near zero or near infinity will be clear from the context.

One well-known example satisfying assumptions C1-C4 is the fBm $\{B_{H}(t),t\geq 0\}$ with Hurst index $H\in(0,1)$, i.e., the $H$-self-similar centered Gaussian process with stationary increments and covariance function

\mathrm{Cov}(B_{H}(t),B_{H}(s))=\frac{1}{2}(\left\lvert t\right\rvert^{2H}+\left\lvert s\right\rvert^{2H}-\left\lvert t-s\right\rvert^{2H}),\ \ t,s\in\mathbb{R}.

We introduce the following notation:

C_{H,\lambda_{1},\lambda_{2}}=\sqrt{2^{1-1/\lambda_{2}}\pi}\,\lambda_{1}\left(\frac{1}{H}\right)^{1/\lambda_{2}}\left(\frac{H}{1-H}\right)^{\lambda_{1}+H-\frac{1}{2}+\frac{1}{\lambda_{2}}(1-H)}.

For an a.s. continuous centered Gaussian process $Z(t),t\geq 0$ with stationary increments and variance function $\sigma_{Z}^{2}$, we define the generalized Pickands constant

\mathcal{H}_{Z}=\lim_{T\rightarrow\infty}\frac{1}{T}\mathbb{E}\left\{\exp\left(\sup_{t\in[0,T]}(\sqrt{2}Z(t)-\sigma_{Z}^{2}(t))\right)\right\}

provided both the expectation and the limit exist. When $Z=B_{H}$ the constant $\mathcal{H}_{B_{H}}$ is the well-known Pickands constant; see [33]. For convenience, we sometimes also write $\mathcal{H}_{\sigma_{Z}^{2}}$ for $\mathcal{H}_{Z}$. Denote in the following by $\Psi(\cdot)$ the survival function of the $N(0,1)$ distribution. It is known that

(4) \Psi(u)=\frac{1}{\sqrt{2\pi}}\int_{u}^{\infty}e^{-\frac{x^{2}}{2}}dx\ \sim\ \frac{1}{\sqrt{2\pi}u}e^{-\frac{u^{2}}{2}},\ \ \ \ u\rightarrow\infty.
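The quality of the approximation in (4) can be checked directly (a small illustration; `Psi` and `mills` are ad hoc names, with $\Psi$ evaluated exactly via the complementary error function):

```python
import math

def Psi(u):  # exact N(0,1) survival function via erfc
    return 0.5 * math.erfc(u / math.sqrt(2.0))

def mills(u):  # right-hand side of (4)
    return math.exp(-u * u / 2.0) / (math.sqrt(2.0 * math.pi) * u)

for u in (2.0, 5.0, 10.0):
    print(u, Psi(u) / mills(u))
# the ratio tends to 1; the relative error is of order 1/u^2
```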

The following result is derived in Proposition 2 of [32] (here we consider the particular trend function $\phi(t)=ct,t\geq 0$).

Proposition 2.1.

Let $X(t),t\geq 0$ be an a.s. continuous centered Gaussian process with stationary increments and $X(0)=0$. Suppose that C1–C4 hold. We have, as $u\rightarrow\infty$:

(i) if $\sigma^{2}(u)/u\rightarrow\infty$, then

\psi(u)\sim\mathcal{H}_{B_{H}}C_{H,1,H}\left(\frac{1-H}{H}\right)\frac{c^{1-H}\sigma(u)}{\overleftarrow{\sigma}(\sigma^{2}(u)/u)}\Psi\left(\inf_{t\geq 0}\frac{u(1+t)}{\sigma(ut/c)}\right);

(ii) if $\sigma^{2}(u)/u\rightarrow\mathcal{G}\in(0,\infty)$, then

\psi(u)\sim\mathcal{H}_{(2c^{2}/\mathcal{G}^{2})\sigma^{2}}\left(\frac{\sqrt{2/\pi}}{c^{1+H}H}\right)\sigma(u)\Psi\left(\inf_{t\geq 0}\frac{u(1+t)}{\sigma(ut/c)}\right);

(iii) if $\sigma^{2}(u)/u\rightarrow 0$ (here we need regularity of $\sigma$ and its inverse at $0$), then

\psi(u)\sim\mathcal{H}_{B_{\lambda}}C_{H,1,\lambda}\left(\frac{1-H}{H}\right)^{H/\lambda}\frac{c^{-1-H+2H/\lambda}\sigma(u)}{\overleftarrow{\sigma}(\sigma^{2}(u)/u)}\Psi\left(\inf_{t\geq 0}\frac{u(1+t)}{\sigma(ut/c)}\right).

As a special case of Proposition 2.1 we have the following result (see Corollary 1 in [32] or [19]), which will be useful in the proofs below.

Corollary 2.2.

If $X(t)=B_{H}(t),t\geq 0$ is the fBm with index $H\in(0,1)$, then, as $u\rightarrow\infty$,

\mathbb{P}\left\{\sup_{t\geq 0}(B_{H}(t)-ct)>u\right\}\sim K_{H}\mathcal{H}_{B_{H}}u^{H+1/H-2}\,\Psi\left(\frac{c^{H}u^{1-H}}{H^{H}(1-H)^{1-H}}\right),

with constant $K_{H}=2^{\frac{1}{2}-\frac{1}{2H}}\frac{\sqrt{\pi}}{\sqrt{H(1-H)}}\left(\frac{c^{H}}{H^{H}(1-H)^{1-H}}\right)^{1/H-1}$.
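As a sanity check of Corollary 2.2 (an illustration, not part of the paper's argument): for $H=1/2$ the fBm is a standard Brownian motion, the Pickands constant is $\mathcal{H}_{B_{1/2}}=1$, and the corollary should recover asymptotically the classical exact identity $\mathbb{P}\{\sup_{t\geq 0}(B(t)-ct)>u\}=e^{-2cu}$:

```python
import math

def Psi(x):  # N(0,1) survival function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Right-hand side of Corollary 2.2; for H = 1/2 the Pickands constant is 1.
def corollary_rhs(u, c, H=0.5):
    A = c ** H / (H ** H * (1 - H) ** (1 - H))
    K = 2 ** (0.5 - 1 / (2 * H)) * math.sqrt(math.pi / (H * (1 - H))) * A ** (1 / H - 1)
    pickands = 1.0  # known value of H_{B_{1/2}}
    return K * pickands * u ** (H + 1 / H - 2) * Psi(A * u ** (1 - H))

c, u = 1.0, 40.0
print(corollary_rhs(u, c) / math.exp(-2 * c * u))  # ratio close to 1
```

The residual deviation from $1$ comes only from the Mills-ratio approximation (4) inside $\Psi$ and is of order $u^{-1}$.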

3. Main results

Without loss of generality, we assume that in (3) there are $n_{-}$ coordinates with negative drift, $n_{0}$ coordinates without drift and $n_{+}$ coordinates with positive drift, i.e.,

c_{i}<0,\ \ i=1,\cdots,n_{-},
c_{i}=0,\ \ i=n_{-}+1,\cdots,n_{-}+n_{0},
c_{i}>0,\ \ i=n_{-}+n_{0}+1,\cdots,n,

where $0\leq n_{-},n_{0},n_{+}\leq n$ are such that $n_{-}+n_{0}+n_{+}=n$. We impose the following assumptions on the standard deviation functions $\sigma_{i}(t)=\sqrt{\mathrm{Var}(X_{i}(t))}$ of the Gaussian processes $X_{i}(t)$, $i=1,\cdots,n$.

Assumption I: For $i=1,\cdots,n_{-}$, $\sigma_{i}(t)$ satisfies assumptions C1-C4 with the parameters involved indexed by $i$. For $i=n_{-}+1,\cdots,n_{-}+n_{0}$, $\sigma_{i}(t)$ satisfies assumptions C1-C3 with the parameters involved indexed by $i$. For $i=n_{-}+n_{0}+1,\cdots,n$, $\sigma_{i}(t)$ satisfies assumptions C1-C2 with the parameters involved indexed by $i$.

Denote

(5) \xi_{i}:=\sup_{t\in[0,1]}B_{H_{i}}(t),\ \ \ t^{*}_{i}=\frac{H_{i}}{1-H_{i}}.

Given a Radon measure $\nu$, define

(6) \widetilde{\nu}(K):=\mathbb{E}\left\{\nu(\boldsymbol{\xi}^{-1/\boldsymbol{H}}K)\right\},\quad K\in\mathcal{B}([0,\infty]^{n}\setminus\{\boldsymbol{0}\}),

where $\boldsymbol{\xi}^{-1/\boldsymbol{H}}K=\{(\xi_{1}^{-1/H_{1}}d_{1},\cdots,\xi_{n}^{-1/H_{n}}d_{n}):(d_{1},\cdots,d_{n})\in K\}$. Further, note that for $i=1,\cdots,n_{-}$ (where $c_{i}<0$), the asymptotics, as $u\rightarrow\infty$, of

(7) \psi_{i}(u)=\mathbb{P}\left\{\sup_{t\geq 0}(X_{i}(t)+c_{i}t)>u\right\}

are available from Proposition 2.1 under Assumption I.

Theorem 3.1.

Suppose that $\boldsymbol{X}(t),t\geq 0$ satisfies Assumption I, and that $\boldsymbol{\mathcal{T}}$ is a regularly varying random vector with index $\alpha$ and limiting measure $\nu$, independent of $\boldsymbol{X}$. Further assume, without loss of generality, that there are $m(\leq n_{0})$ positive constants $k_{i}$ such that $\overleftarrow{\sigma_{i}}(u)\sim k_{i}\overleftarrow{\sigma}_{n_{-}+1}(u)$ for $i=n_{-}+1,\cdots,n_{-}+m$ and $\overleftarrow{\sigma_{i}}(u)=o(\overleftarrow{\sigma}_{n_{-}+1}(u))$ for $i=n_{-}+m+1,\cdots,n_{-}+n_{0}$. We have, with the convention $\prod_{i=1}^{0}=1$:

  • (i) If $n_{0}>0$, then, as $u\rightarrow\infty$,

    P(u)\sim\widetilde{\nu}((\boldsymbol{k}\boldsymbol{a}_{0}^{1/H_{n_{-}+1}},\boldsymbol{\infty}])\ \mathbb{P}\left\{\left\lvert\boldsymbol{\mathcal{T}}\right\rvert>\overleftarrow{\sigma}_{n_{-}+1}(u)\right\}\ \prod_{i=1}^{n_{-}}\psi_{i}(a_{i}u),

    where $\widetilde{\nu}$ and the $\psi_{i}$'s are defined in (6) and (7), respectively, and
    $\boldsymbol{k}\boldsymbol{a}_{0}^{1/H_{n_{-}+1}}=(0,\cdots,0,k_{n_{-}+1}a_{n_{-}+1}^{1/H_{n_{-}+1}},\cdots,k_{n_{-}+m}a_{n_{-}+m}^{1/H_{n_{-}+1}},0,\cdots,0)$.

  • (ii) If $n_{0}=0$, then, as $u\rightarrow\infty$,

    P(u)\sim\nu((\boldsymbol{a}_{1},\boldsymbol{\infty}])\ \mathbb{P}\left\{\left\lvert\boldsymbol{\mathcal{T}}\right\rvert>u\right\}\ \prod_{i=1}^{n_{-}}\psi_{i}(a_{i}u),

    where $\boldsymbol{a}_{1}=(t^{*}_{1}/\left\lvert c_{1}\right\rvert,\cdots,t^{*}_{n_{-}}/\left\lvert c_{n_{-}}\right\rvert,a_{n_{-}+1}/c_{n_{-}+1},\cdots,a_{n}/c_{n})$.

Remark 3.2.

As a special case, we can obtain from Theorem 3.1 some results for the one-dimensional model. Specifically, let $c>0$ be some constant; then, as $u\rightarrow\infty$,

(8) \mathbb{P}\left\{\sup_{t\in[0,\mathcal{T}]}X(t)>u\right\}\ \sim\ \mathbb{E}\left\{\left(\sup_{t\in[0,1]}B_{H}(t)\right)^{\alpha/H}\right\}\mathbb{P}\left\{\mathcal{T}>\overleftarrow{\sigma}(u)\right\},
(9) \mathbb{P}\left\{\sup_{t\in[0,\mathcal{T}]}(X(t)-ct)>u\right\}\ \sim\ (c(1-H)/H)^{\alpha}\,\mathbb{P}\left\{\mathcal{T}>u\right\}\psi(u),
(10) \mathbb{P}\left\{\sup_{t\in[0,\mathcal{T}]}(X(t)+ct)>u\right\}\ \sim\ c^{\alpha}\,\mathbb{P}\left\{\mathcal{T}>u\right\}.

Note that (8) is derived in Theorem 2.1 of [1], and (9) is discussed in [5] for the fBm case only. The result in (10) appears to be new.
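The new formula (10) can be checked numerically in the Brownian case (an illustrative sketch with assumed parameters $u=100$, $c=1$, $\alpha=1.5$, not from the paper). For standard Brownian motion the finite-horizon supremum has the classical Bachelier–Lévy first-passage distribution, so the left-hand side of (10) reduces to an integral against the Pareto law of $\mathcal{T}$:

```python
import math

def Psi(x):  # N(0,1) survival function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Bachelier-Levy first-passage formula for standard BM with drift c > 0:
#   P{ sup_{[0,T]} (B(t) + c t) > u }
#     = Psi((u - c T)/sqrt(T)) + exp(2 c u) * Psi((u + c T)/sqrt(T))
def p_sup(u, c, T):
    s = math.sqrt(T)
    return Psi((u - c * T) / s) + math.exp(2 * c * u) * Psi((u + c * T) / s)

# T ~ Pareto(alpha), scale 1: density alpha * t^{-alpha-1} on [1, oo).
# Integrate on a log grid (trapezoid) and compare with (10): c^alpha * u^{-alpha}.
def lhs(u, c, alpha, t_max=1e6, n=20000):
    b = math.log(t_max)
    h = b / n
    total = 0.0
    for k in range(n + 1):
        t = math.exp(k * h)
        w = 0.5 if k in (0, n) else 1.0
        total += w * p_sup(u, c, t) * alpha * t ** (-alpha) * h  # dt = t ds
    return total

u, c, alpha = 100.0, 1.0, 1.5
print(lhs(u, c, alpha) / (c ** alpha * u ** (-alpha)))  # ratio near 1
```

The ratio exceeds $1$ slightly at finite $u$ (corrections of order $u^{-1/2}$ from the Gaussian fluctuation around the dominant event $\{c\mathcal{T}>u\}$), consistent with the one-big-interval heuristic behind (10).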

We conclude this section with an interesting example of multi-dimensional subordinate Brownian motion; see, e.g., [21].

Example 3.3.

For each $i=0,1,\cdots,n$, let $\{S_{i}(t),t\geq 0\}$ be independent $\alpha_{i}$-stable subordinators with $\alpha_{i}\in(0,1)$, i.e., $S_{i}(t)\overset{D}{=}\mathcal{S}_{\alpha_{i}}(t^{1/\alpha_{i}},1,0)$, where $\mathcal{S}_{\alpha}(\sigma,\beta,d)$ denotes a stable random variable with stability index $\alpha$, scale parameter $\sigma$, skewness parameter $\beta$ and drift parameter $d$. It is known (e.g., Property 1.2.15 in [34]) that for any fixed constant $T>0$,

\mathbb{P}\left\{S_{i}(T)>t\right\}\ \sim\ C_{\alpha_{i},T}t^{-\alpha_{i}},\quad\quad t\rightarrow\infty,

with $C_{\alpha_{i},T}=\frac{T}{\Gamma(1-\alpha_{i})\cos(\pi\alpha_{i}/2)}$. Assume $\alpha_{0}<\alpha_{i}$ for all $i=1,2,\cdots,n$. Define an $n$-dimensional subordinator as

\boldsymbol{Y}(t):=(S_{0}(t)+S_{1}(t),\cdots,S_{0}(t)+S_{n}(t)),\ \ \ t\geq 0.

We consider an $n$-dimensional subordinate Brownian motion with drift defined as

\boldsymbol{X}(t)=(B_{1}(Y_{1}(t))+c_{1}Y_{1}(t),\cdots,B_{n}(Y_{n}(t))+c_{n}Y_{n}(t)),\ \ \ \ t\geq 0,

where $B_{i}(t),t\geq 0$, $i=1,\cdots,n$, are independent standard Brownian motions, independent of $\boldsymbol{Y}$, and $c_{i}\in\mathbb{R}$. Define, for any $a_{i}>0$, $i=1,2,\cdots,n$, $T>0$ and $u>0$,

P_{B}(u):=\mathbb{P}\left\{\cap_{i=1}^{n}\left(\sup_{t\in[0,T]}(B_{i}(Y_{i}(t))+c_{i}Y_{i}(t))>a_{i}u\right)\right\}.

For illustrative purposes and to avoid further technicalities, we only consider the case where all the $c_{i}$'s above have the same sign. As an application of Theorem 3.1 we obtain the asymptotic behaviour of $P_{B}(u)$, $u\rightarrow\infty$, as follows:

  • (i) If $c_{i}>0$ for all $i=1,\cdots,n$, then $P_{B}(u)\sim C_{\alpha_{0},T}(\max_{i=1}^{n}(a_{i}/c_{i})u)^{-\alpha_{0}}$.

  • (ii) If $c_{i}=0$ for all $i=1,\cdots,n$, then $P_{B}(u)\asymp u^{-2\alpha_{0}}$.

  • (iii) If $c_{i}<0$ and the density function of $S_{i}(T)$ is ultimately monotone for all $i=0,1,\cdots,n$, then $\ln P_{B}(u)\sim 2\sum_{i=1}^{n}(a_{i}c_{i})u$.

The proof of the above is displayed in Section 5.

4. Ruin probability of a multi-dimensional regenerative model

As is known in the literature, the maximum of a random process over a random interval is relevant to regenerative models (e.g., [24, 25]); this section is therefore focused on a multi-dimensional regenerative model, motivated by its applications in queueing theory and ruin theory. More precisely, there are four elements in this model: two sequences of strictly positive random variables, $\{T_{i}:i\geq 1\}$ and $\{S_{i}:i\geq 1\}$, and two sequences of $n$-dimensional processes, $\{\{\boldsymbol{X}^{(i)}(t),t\geq 0\}:i\geq 1\}$ and $\{\{\boldsymbol{Y}^{(i)}(t),t\geq 0\}:i\geq 1\}$, where $\boldsymbol{X}^{(i)}(t)=(X_{1}^{(i)}(t),\cdots,X_{n}^{(i)}(t))$ and $\boldsymbol{Y}^{(i)}(t)=(Y_{1}^{(i)}(t),\cdots,Y_{n}^{(i)}(t))$. We assume that these four elements are mutually independent. Here $T_{i},S_{i}$ are two successive times representing the random lengths of the alternating environment (called the $T$-stage and the $S$-stage), and we assume a $T$-stage starts at time $0$. The model evolves according to $\{\boldsymbol{X}^{(i)}(t),t\geq 0\}$ during the $i$th $T$-stage and according to $\{\boldsymbol{Y}^{(i)}(t),t\geq 0\}$ during the $i$th $S$-stage.

Based on the above, we define an alternating renewal process with renewal epochs

0=V_{0}<V_{1}<V_{2}<V_{3}<\cdots

with $V_{i}=(T_{1}+S_{1})+\cdots+(T_{i}+S_{i})$, the $i$th environment cycle time. Then the resulting $n$-dimensional process $\boldsymbol{Z}(t)=(Z_{1}(t),\cdots,Z_{n}(t))$ is defined as

\boldsymbol{Z}(t):=\left\{\begin{array}{ll}\boldsymbol{Z}(V_{i})+\boldsymbol{X}^{(i+1)}(t-V_{i}),&\hbox{if \ $V_{i}<t\leq V_{i}+T_{i+1}$;}\\ \boldsymbol{Z}(V_{i})+\boldsymbol{X}^{(i+1)}(T_{i+1})+\boldsymbol{Y}^{(i+1)}(t-V_{i}-T_{i+1}),&\hbox{if \ $V_{i}+T_{i+1}<t\leq V_{i+1}$.}\end{array}\right.

Note that this is a multi-dimensional regenerative process with regeneration epochs $V_{i}$. It generalizes the one-dimensional model discussed in [26].

We assume that $\{\{\boldsymbol{X}^{(i)}(t),t\geq 0\}:i\geq 1\}$ and $\{\{\boldsymbol{Y}^{(i)}(t),t\geq 0\}:i\geq 1\}$ are independent samples of $\{\boldsymbol{X}(t),t\geq 0\}$ and $\{\boldsymbol{Y}(t),t\geq 0\}$, respectively, where

X_{j}(t)=B_{H_{j}}(t)+p_{j}t,\ \ \ t\geq 0,\ \ \ \ 1\leq j\leq n,
Y_{j}(t)=\widetilde{B}_{\widetilde{H}_{j}}(t)-q_{j}t,\ \ \ t\geq 0,\ \ \ \ 1\leq j\leq n,

with all the fBm's $B_{H_{j}},\widetilde{B}_{\widetilde{H}_{j}}$ mutually independent and $p_{j},q_{j}>0$, $1\leq j\leq n$. Suppose that $(T_{i},S_{i}),i\geq 1$, are independent samples of $(T,S)$ and that $T$ is regularly varying with index $\lambda>1$. We further assume that

(12) \mathbb{P}\left\{S>x\right\}=o\left(\mathbb{P}\left\{T>x\right\}\right),\ \ \ p_{j}\mathbb{E}\left\{T\right\}<q_{j}\mathbb{E}\left\{S\right\}<\infty,\ \ 1\leq j\leq n.

For notational simplicity we shall restrict ourselves to the 2-dimensional case; the general $n$-dimensional problem can be analysed similarly. Thus, for the rest of this section and the related proofs in Section 6, all vectors (and multi-dimensional processes) are two-dimensional.

We are interested in the asymptotics of the following tail probability

Q(u):=\mathbb{P}\left\{\exists n\geq 1:\ \sup_{t\in[V_{n-1},V_{n}]}Z_{1}(t)>a_{1}u,\ \sup_{s\in[V_{n-1},V_{n}]}Z_{2}(s)>a_{2}u\right\},\ \ \ u\rightarrow\infty,

with $a_{1},a_{2}>0$. In the fluid queueing context, $Q(u)$ can be interpreted as the probability that both buffers overflow in some environment cycle. In the insurance context, $Q(u)$ can be interpreted as the probability that in some business cycle the two lines of business of the insurer are both ruined (not necessarily at the same time). Similar one-dimensional models have been discussed in the literature; see, e.g., [25, 24, 18].

We introduce the following notation:

(13) \boldsymbol{U}^{(n)}=(U_{1}^{(n)},U_{2}^{(n)}):=\boldsymbol{Z}(V_{n})-\boldsymbol{Z}(V_{n-1}),\ \ n\geq 1,\ \ \ \boldsymbol{U}^{(0)}=\boldsymbol{0},
(14) \boldsymbol{M}^{(n)}=(M_{1}^{(n)},M_{2}^{(n)}):=\left(\sup_{t\in[V_{n-1},V_{n})}Z_{1}(t)-Z_{1}(V_{n-1}),\ \sup_{s\in[V_{n-1},V_{n})}Z_{2}(s)-Z_{2}(V_{n-1})\right),\ \ n\geq 1.

Then we have

Q(u)=\mathbb{P}\left\{\exists n\geq 1:\ \sum_{i=1}^{n}U^{(i-1)}_{1}+M_{1}^{(n)}>a_{1}u,\ \sum_{i=1}^{n}U^{(i-1)}_{2}+M_{2}^{(n)}>a_{2}u\right\}.

Note that $\boldsymbol{U}^{(n)},n\geq 1$, and $\boldsymbol{M}^{(n)},n\geq 1$, are both i.i.d. sequences. By the second assumption in (12) we have

(15) \mathbb{E}\left\{\boldsymbol{U}^{(1)}\right\}=(p_{1}\mathbb{E}\left\{T\right\}-q_{1}\mathbb{E}\left\{S\right\},\,p_{2}\mathbb{E}\left\{T\right\}-q_{2}\mathbb{E}\left\{S\right\})=:-\boldsymbol{c}<\boldsymbol{0},

which ensures that the event in the above probability is a rare event for large $u$, i.e., $Q(u)\rightarrow 0$ as $u\rightarrow\infty$.

Note that our question now becomes an exit problem for a 2-dimensional perturbed random walk. Exit problems for multi-dimensional random walks have been discussed in many papers, e.g., [31]. However, it seems that multi-dimensional perturbed random walks have not yet been discussed in the existing literature.

Since $T$ is regularly varying with index $\lambda>1$, we have that

(16) \widetilde{\boldsymbol{T}}:=(p_{1}T,p_{2}T)

is regularly varying with index $\lambda$ and some limiting measure $\mu$ (whose form depends on the norm $\left\lvert\,\cdot\,\right\rvert$ chosen). We present next the main result of this section, leaving its proof to Section 6.

Theorem 4.1.

Under the above assumptions on the regenerative model $\boldsymbol{Z}(t),t\geq 0$, we have, as $u\rightarrow\infty$,

Q(u)\sim\int_{0}^{\infty}\mu((v\boldsymbol{c}+\boldsymbol{a},\boldsymbol{\infty}])dv\ \mathbb{P}\left\{\left\lvert\widetilde{\boldsymbol{T}}\right\rvert>u\right\}u,

where $\boldsymbol{c}$ and $\widetilde{\boldsymbol{T}}$ are given by (15) and (16), respectively.

Remark 4.2.

Consider $\left\lvert\,\cdot\,\right\rvert$ to be the $L^{1}$ norm in Theorem 4.1. We have

\mu([\boldsymbol{a},\boldsymbol{\infty}])=\left((p_{1}+p_{2})\max(a_{1}/p_{1},a_{2}/p_{2})\right)^{-\lambda},

and thus, as $u\rightarrow\infty$,

Q(u)\sim\int_{0}^{\infty}\max((a_{1}+c_{1}v)/p_{1},(a_{2}+c_{2}v)/p_{2})^{-\lambda}dv\ \mathbb{P}\left\{T>u\right\}u.
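The constant in Remark 4.2 is a one-dimensional integral and is easy to evaluate numerically. Below is a minimal sketch with illustrative parameter values (not taken from the paper); when one coordinate dominates the maximum for every $v\geq 0$, the integral also admits the closed form $p_{2}^{\lambda}a_{2}^{1-\lambda}/(c_{2}(\lambda-1))$, which serves as a cross-check of the quadrature:

```python
# Numerical evaluation of the constant in Remark 4.2 (illustrative parameters).
def remark_constant(a1, a2, p1, p2, c1, c2, lam, v_max=1e4, n=200000):
    # trapezoidal rule for int_0^{v_max} max((a1+c1 v)/p1, (a2+c2 v)/p2)^{-lam} dv
    h = v_max / n
    total = 0.0
    for k in range(n + 1):
        v = k * h
        w = 0.5 if k in (0, n) else 1.0
        total += w * max((a1 + c1 * v) / p1, (a2 + c2 * v) / p2) ** (-lam) * h
    return total

# With these values the second coordinate dominates for every v >= 0,
# so the integral equals p2^lam * a2^(1-lam) / (c2 * (lam - 1)).
lam = 2.5
val = remark_constant(a1=1.0, a2=1.0, p1=1.0, p2=0.5, c1=4/3, c2=2/3, lam=lam)
exact = 0.5 ** lam * 1.0 ** (1 - lam) / ((2 / 3) * (lam - 1))
print(val, exact)  # the two values agree
```

The truncation at `v_max` is harmless here since the integrand decays like $v^{-\lambda}$ with $\lambda>1$.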

5. Proof of main results

This section is devoted to the proof of Theorem 3.1 followed by a short proof of Example 3.3.

First we give a result in line with Proposition 2.1. Note that in the proof of the main results in [32] the minimum point tut_{u}^{*} of the function

fu(t):=u(1+t)σ(ut/c),t0,\displaystyle f_{u}(t):=\frac{u(1+t)}{\sigma(ut/c)},\ \ t\geq 0,

plays an important role. It has been shown therein that tu∗t_{u}^{*} converges, as u→∞u\rightarrow\infty, to t∗:=H/(1−H)t^{*}:=H/(1-H), which is the unique minimum point of limu→∞fu(t)σ(u)/u=(1+t)/(t/c)H,t≥0\lim_{u\rightarrow\infty}f_{u}(t)\sigma(u)/u=(1+t)/(t/c)^{H},t\geq 0. In this sense, tu∗t_{u}^{*} is asymptotically unique. We have the following corollary of [32], which is useful for the proofs below.
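As a quick numerical illustration (our own sketch; the routine name and search parameters are ad hoc, not from [32]), one can confirm that the limit function (1+t)/(t/c)^H is minimised at t∗=H/(1−H), independently of c:

```python
import math

def argmin_limit(H, c, lo=1e-8, hi=50.0, iters=120):
    """Golden-section search for the minimiser of g(t) = (1 + t) / (t / c)**H
    on (0, hi); the theory predicts t* = H / (1 - H), whatever c > 0 is."""
    g = lambda t: (1 + t) / (t / c) ** H
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    x1 = b - phi * (b - a)
    x2 = a + phi * (b - a)
    for _ in range(iters):
        if g(x1) < g(x2):       # minimum lies in [a, x2]
            b, x2 = x2, x1
            x1 = b - phi * (b - a)
        else:                   # minimum lies in [x1, b]
            a, x1 = x1, x2
            x2 = a + phi * (b - a)
    return 0.5 * (a + b)
```

E.g. H=1/2 returns a point near t*=1 and H=3/4 a point near t*=3, for any positive c.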

Lemma 5.1.

Let X(t),t0X(t),t\geq 0 be an a.s. continuous centered Gaussian process with stationary increments and X(0)=0X(0)=0. Suppose that C1–C4 hold. For any fixed 0<ε<t/c0<\varepsilon<t^{*}/c, we have, as u,u\rightarrow\infty,

{supt∈[0,(t∗/c+ε)u](X(t)−ct)>u}∼ψ(u),\displaystyle\mathbb{P}\left\{\sup_{t\in[0,(t^{*}/c+\varepsilon)u]}(X(t)-ct)>u\right\}\sim\psi(u),

with ψ(u)\psi(u) the same as in Proposition 2.1. Furthermore, we have that for any γ>0\gamma>0

limu{supt[0,(t/cε)u](X(t)ct)>u}ψ(u)uγ=0.\displaystyle\lim_{u\rightarrow\infty}\frac{\mathbb{P}\left\{\sup_{t\in[0,(t^{*}/c-\varepsilon)u]}(X(t)-ct)>u\right\}}{\psi(u)u^{-\gamma}}=0.

Proof of Lemma 5.1: Note that

{supt[0,(t/c+ε)u](X(t)ct)>u}={supt[0,(t+cε)]X(ut/c)1+t>u}.\displaystyle\mathbb{P}\left\{\sup_{t\in[0,(t^{*}/c+\varepsilon)u]}(X(t)-ct)>u\right\}=\mathbb{P}\left\{\sup_{t\in[0,(t^{*}+c\varepsilon)]}\frac{X(ut/c)}{1+t}>u\right\}.

The first claim follows from [32], as the main interval which determines the asymptotics is contained in [0,(t∗+cε)][0,(t^{*}+c\varepsilon)] (see Lemma 7 and the comments in Section 2.1 therein). Similarly, we have

{supt[0,(t/cε)u](X(t)ct)>u}={supt[0,(tcε)]X(ut/c)1+t>u}.\displaystyle\mathbb{P}\left\{\sup_{t\in[0,(t^{*}/c-\varepsilon)u]}(X(t)-ct)>u\right\}=\mathbb{P}\left\{\sup_{t\in[0,(t^{*}-c\varepsilon)]}\frac{X(ut/c)}{1+t}>u\right\}.

Since tut^{*}_{u} is asymptotically unique and limutu=t\lim_{u\rightarrow\infty}t^{*}_{u}=t^{*}, we can show that for all uu large

inft[0,(tcε)]fu(t)ρfu(tu)=ρinft0fu(t)\displaystyle\inf_{t\in[0,(t^{*}-c\varepsilon)]}f_{u}(t)\geq\rho f_{u}(t^{*}_{u})=\rho\inf_{t\geq 0}f_{u}(t)

for some ρ>1\rho>1. Thus, by similar arguments as in the proof of Lemma 7 in [32] using the Borel inequality we conclude the second claim. \Box

The following lemma is crucial for the proof of Theorem 3.1.

Lemma 5.2.

Let Xi(t),t≥0X_{i}(t),t\geq 0, i=1,2,⋯,n0(<n)i=1,2,\cdots,n_{0}(<n), be independent centered Gaussian processes with stationary increments, and let 𝓣\boldsymbol{\mathcal{T}} be a regularly varying random vector with index α\alpha and limiting measure ν\nu, independent of the Gaussian processes. Suppose that every σi(t),i=1,2,⋯,n0\sigma_{i}(t),i=1,2,\cdots,n_{0} satisfies the assumptions C1-C3 with the parameters involved indexed by ii, and that, in addition, σi(u)∼kiσ1(u)\overleftarrow{\sigma_{i}}(u)\sim k_{i}\overleftarrow{\sigma}_{1}(u) for some positive constants ki,i=1,2,⋯,m≤n0k_{i},i=1,2,\cdots,m\leq n_{0}, while σj(u)=o(σ1(u))\overleftarrow{\sigma_{j}}(u)=o(\overleftarrow{\sigma_{1}}(u)) for all j=m+1,⋯,n0j=m+1,\cdots,n_{0}. Then, for any functions hi(u),n0+1≤i≤nh_{i}(u),n_{0}+1\leq i\leq n, increasing to infinity and satisfying hi(u)=o(σ1(u))h_{i}(u)=o(\overleftarrow{\sigma_{1}}(u)), and any ai>0a_{i}>0,

{i=1n0(supt[0,𝒯i]Xi(t)>aiu),i=n0+1n(𝒯i>hi(u))}ν~((𝒌𝒂m,01/𝑯,]){|𝓣|>σ1(u)},\displaystyle\mathbb{P}\left\{\cap_{i=1}^{n_{0}}\left(\sup_{t\in[0,\mathcal{T}_{i}]}X_{i}(t)>a_{i}u\right),\cap_{i=n_{0}+1}^{n}\left(\mathcal{T}_{i}>h_{i}(u)\right)\right\}\sim\widetilde{\nu}((\boldsymbol{k}\boldsymbol{a}_{m,0}^{1/{\boldsymbol{H}}},\boldsymbol{\infty}])\ \mathbb{P}\left\{\left\lvert\boldsymbol{\mathcal{T}}\right\rvert>\overleftarrow{\sigma_{1}}(u)\right\},

where ν~\widetilde{\nu} is defined in (6) and 𝐤𝐚m,01/𝐇=(k1a11/H1,⋯,kmam1/Hm,0,⋯,0)\boldsymbol{k}\boldsymbol{a}_{m,0}^{1/{\boldsymbol{H}}}=(k_{1}a_{1}^{1/H_{1}},\cdots,k_{m}a_{m}^{1/H_{m}},0,\cdots,0) with H1=H2=⋯=HmH_{1}=H_{2}=\cdots=H_{m}.

Proof of Lemma 5.2: We use an argument similar to that in the proof of Theorem 2.1 in [1]. For notational convenience, denote

H(u):={i=1n0(supt∈[0,𝒯i]Xi(t)>aiu),∩i=n0+1n(𝒯i>hi(u))}.\displaystyle H(u):=\mathbb{P}\left\{\cap_{i=1}^{n_{0}}\left(\sup_{t\in[0,\mathcal{T}_{i}]}X_{i}(t)>a_{i}u\right),\cap_{i=n_{0}+1}^{n}\left(\mathcal{T}_{i}>h_{i}(u)\right)\right\}.

We first give an asymptotic lower bound for H(u)H(u). Let G(𝒙)={𝓣≤𝒙}G(\boldsymbol{x})=\mathbb{P}\left\{\boldsymbol{\mathcal{T}}\leq\boldsymbol{x}\right\} be the distribution function of 𝓣\boldsymbol{\mathcal{T}}. Note that, for any constants 0<r<R0<r<R,

H(u)\displaystyle H(u) \displaystyle\geq {i=1n0(supt[0,𝒯i]Xi(t)>aiu),i=1m(rσ1(u)𝒯iRσ1(u)),i=m+1n(𝒯i>rσ1(u))}\displaystyle\mathbb{P}\left\{\cap_{i=1}^{n_{0}}\left(\sup_{t\in[0,\mathcal{T}_{i}]}X_{i}(t)>a_{i}u\right),\cap_{i=1}^{m}(r\overleftarrow{\sigma_{1}}(u)\leq\mathcal{T}_{i}\leq R\overleftarrow{\sigma_{1}}(u)),\cap_{i=m+1}^{n}\left(\mathcal{T}_{i}>r\overleftarrow{\sigma_{1}}(u)\right)\right\}
=\displaystyle= [r,R]m×(r,)nm{i=1n0(supt[0,σ1(u)ti]Xi(t)>aiu)}𝑑G(σ1(u)t1,,σ1(u)tn)\displaystyle\oint_{[r,R]^{m}\times(r,\infty)^{n-m}}\mathbb{P}\left\{\cap_{i=1}^{n_{0}}\left(\sup_{t\in[0,\overleftarrow{\sigma_{1}}(u)t_{i}]}X_{i}(t)>a_{i}u\right)\right\}dG(\overleftarrow{\sigma_{1}}(u)t_{1},\cdots,\overleftarrow{\sigma_{1}}(u)t_{n})
=\displaystyle= [r,R]m×(r,)nmi=1n0{sups[0,1]Xiu,ti(s)>aiui(ti)}dG(σ1(u)t1,,σ1(u)tn)\displaystyle\oint_{[r,R]^{m}\times(r,\infty)^{n-m}}\prod_{i=1}^{n_{0}}\mathbb{P}\left\{\sup_{s\in[0,1]}X_{i}^{u,t_{i}}(s)>a_{i}u_{i}(t_{i})\right\}dG(\overleftarrow{\sigma_{1}}(u)t_{1},\cdots,\overleftarrow{\sigma_{1}}(u)t_{n})

holds for sufficiently large uu, where

Xiu,ti(s):=Xi(σ1(u)tis)σi(σ1(u)ti),ui(ti):=uσi(σ1(u)ti),s∈[0,1],(t1,t2,⋯,tn0)∈[r,R]m×(r,∞)n0−m.\displaystyle X_{i}^{u,t_{i}}(s):=\frac{X_{i}(\overleftarrow{\sigma_{1}}(u)t_{i}s)}{\sigma_{i}(\overleftarrow{\sigma_{1}}(u)t_{i})},u_{i}(t_{i}):=\frac{u}{\sigma_{i}(\overleftarrow{\sigma_{1}}(u)t_{i})},s\in[0,1],(t_{1},t_{2},\cdots,t_{n_{0}})\in[r,R]^{m}\times(r,\infty)^{n_{0}-m}.

By Lemma 5.2 in [1], we know that, as u→∞u\rightarrow\infty, the processes Xiu,ti(s)X_{i}^{u,t_{i}}(s) converge weakly in C([0,1])C([0,1]) to BHi(s)B_{H_{i}}(s), uniformly in ti∈(r,∞)t_{i}\in(r,\infty), for i=1,2,⋯,n0i=1,2,\cdots,n_{0}. Further, according to the assumptions on σi(t)\sigma_{i}(t) and Theorems 1.5.2 and 1.5.6 in [35], we have, as u→∞u\rightarrow\infty, that ui(ti)u_{i}(t_{i}) converges to kiHiti−Hik_{i}^{H_{i}}t_{i}^{-H_{i}} uniformly in ti∈[r,R]t_{i}\in[r,R] for i=1,2,⋯,mi=1,2,\cdots,m, and that ui(ti)u_{i}(t_{i}) converges to 0 uniformly in ti∈[r,∞)t_{i}\in[r,\infty) for i=m+1,⋯,n0i=m+1,\cdots,n_{0}. Then, by the continuous mapping theorem and recalling that ξi\xi_{i}, defined in (5), is a continuous random variable (e.g., [36]), we get

H(u)\displaystyle H(u) \displaystyle\gtrsim [r,R]m×(r,)nmi=1m{sups[0,1]BHi(s)>aikiHitiHi}dG(σ1(u)t1,,σ1(u)tn)\displaystyle\oint_{[r,R]^{m}\times(r,\infty)^{n-m}}\prod_{i=1}^{m}\mathbb{P}\left\{\sup_{s\in[0,1]}B_{H_{i}}(s)>a_{i}k_{i}^{H_{i}}t_{i}^{-H_{i}}\right\}dG(\overleftarrow{\sigma_{1}}(u)t_{1},\cdots,\overleftarrow{\sigma_{1}}(u)t_{n})
=\displaystyle= {i=1m(ξi1Hi𝒯i>kiai1Hiσ1(u)),i=1m(rσ1(u)𝒯iRσ1(u)),i=m+1n(𝒯i>rσ1(u))}\displaystyle\mathbb{P}\left\{\cap_{i=1}^{m}\left(\xi_{i}^{\frac{1}{H_{i}}}\mathcal{T}_{i}>k_{i}a_{i}^{\frac{1}{H_{i}}}\overleftarrow{\sigma_{1}}(u)\right),\cap_{i=1}^{m}\left(r\overleftarrow{\sigma_{1}}(u)\leq\mathcal{T}_{i}\leq R\overleftarrow{\sigma_{1}}(u)\right),\cap_{i=m+1}^{n}\left(\mathcal{T}_{i}>r\overleftarrow{\sigma_{1}}(u)\right)\right\}
=\displaystyle= J1(u)J2(u),\displaystyle J_{1}(u)-J_{2}(u),

where

J1(u)\displaystyle J_{1}(u) :=\displaystyle:= {∩i=1m(ξi1/Hi𝒯i>kiai1/Hiσ1(u)),∩i=m+1n(𝒯i>rσ1(u))},\displaystyle\mathbb{P}\left\{\cap_{i=1}^{m}\left(\xi_{i}^{\frac{1}{H_{i}}}\mathcal{T}_{i}>k_{i}a_{i}^{\frac{1}{H_{i}}}\overleftarrow{\sigma_{1}}(u)\right),\cap_{i=m+1}^{n}\left(\mathcal{T}_{i}>r\overleftarrow{\sigma_{1}}(u)\right)\right\},
J2(u)\displaystyle J_{2}(u) :=\displaystyle:= {∩i=1m(ξi1/Hi𝒯i>kiai1/Hiσ1(u)),∩i=m+1n(𝒯i>rσ1(u)),∪i=1m((𝒯i<rσ1(u))∪(𝒯i>Rσ1(u)))}.\displaystyle\mathbb{P}\left\{\cap_{i=1}^{m}\left(\xi_{i}^{\frac{1}{H_{i}}}\mathcal{T}_{i}>k_{i}a_{i}^{\frac{1}{H_{i}}}\overleftarrow{\sigma_{1}}(u)\right),\cap_{i=m+1}^{n}\left(\mathcal{T}_{i}>r\overleftarrow{\sigma_{1}}(u)\right),\cup_{i=1}^{m}\left((\mathcal{T}_{i}<r\overleftarrow{\sigma_{1}}(u))\cup(\mathcal{T}_{i}>R\overleftarrow{\sigma_{1}}(u))\right)\right\}.

Put 𝜼=(ξ11/H1,⋯,ξm1/Hm,1,⋯,1){\boldsymbol{\eta}}=(\xi_{1}^{1/H_{1}},\cdots,\xi_{m}^{1/H_{m}},1,\cdots,1). Then, by Lemma 7.2 and the continuity of the limiting measure ν^\widehat{\nu} defined therein, we have

(18) limr0limuJ1(u){|𝓣|>σ1(u)}=ν~((𝒌𝒂m,01/𝑯,]).\displaystyle\lim_{r\rightarrow 0}\lim_{u\rightarrow\infty}\frac{J_{1}(u)}{\mathbb{P}\left\{\left\lvert\boldsymbol{\mathcal{T}}\right\rvert>\overleftarrow{\sigma_{1}}(u)\right\}}=\widetilde{\nu}((\boldsymbol{k}{\boldsymbol{a}}_{m,0}^{1/{\boldsymbol{H}}},\boldsymbol{\infty}]).

Furthermore,

J2(u)i=1m({ξi1Hi𝒯i>kiai1Hiσ1(u),𝒯i<rσ1(u)}+{𝒯i>Rσ1(u)}).\displaystyle J_{2}(u)\leq\sum_{i=1}^{m}\left(\mathbb{P}\left\{\xi_{i}^{\frac{1}{H_{i}}}\mathcal{T}_{i}>k_{i}a_{i}^{\frac{1}{H_{i}}}\overleftarrow{\sigma_{1}}(u),\mathcal{T}_{i}<r\overleftarrow{\sigma_{1}}(u)\right\}+\mathbb{P}\left\{\mathcal{T}_{i}>R\overleftarrow{\sigma_{1}}(u)\right\}\right).

Then, by the fact that |𝓣|\left\lvert\boldsymbol{\mathcal{T}}\right\rvert is regularly varying with index α\alpha, and using the same arguments as in the proof of Theorem 2.1 in [1] (see the asymptotics for the integral I4I_{4} and (5.14) therein), we conclude that

(19) limr0,Rlim supuJ2(u){|𝓣|>σ1(u)}=0,\displaystyle\lim_{r\rightarrow 0,R\rightarrow\infty}\limsup_{u\rightarrow\infty}\frac{J_{2}(u)}{\mathbb{P}\left\{\left\lvert\boldsymbol{\mathcal{T}}\right\rvert>\overleftarrow{\sigma_{1}}(u)\right\}}=0,

which, combined with the lower bound for H(u)H(u) derived above and (18), yields

(20) limr0,Rlim infuH(u){|𝓣|>σ1(u)}ν~((𝒌𝒂m,01/𝑯,]).\displaystyle\lim_{r\rightarrow 0,R\rightarrow\infty}\liminf_{u\rightarrow\infty}\frac{H(u)}{\mathbb{P}\left\{\left\lvert\boldsymbol{\mathcal{T}}\right\rvert>\overleftarrow{\sigma_{1}}(u)\right\}}\geq\widetilde{\nu}((\boldsymbol{k}{\boldsymbol{a}}_{m,0}^{1/{\boldsymbol{H}}},\boldsymbol{\infty}]).

Next, we give an asymptotic upper bound for H(u)H(u). Note

H(u)\displaystyle H(u) \displaystyle\leq {i=1m(supt[0,𝒯i]Xi(t)>aiu)}\displaystyle\mathbb{P}\left\{\cap_{i=1}^{m}\left(\sup_{t\in[0,\mathcal{T}_{i}]}X_{i}(t)>a_{i}u\right)\right\}
=\displaystyle= {i=1m(supt[0,𝒯i]Xi(t)>aiu),i=1m(rσ1(u)𝒯iRσ1(u))}\displaystyle\mathbb{P}\left\{\cap_{i=1}^{m}\left(\sup_{t\in[0,\mathcal{T}_{i}]}X_{i}(t)>a_{i}u\right),\cap_{i=1}^{m}(r\overleftarrow{\sigma_{1}}(u)\leq\mathcal{T}_{i}\leq R\overleftarrow{\sigma_{1}}(u))\right\}
+{i=1m(supt[0,𝒯i]Xi(t)>aiu),i=1m((𝒯i<rσ1(u))(𝒯i>Rσ1(u)))}\displaystyle+\ \ \mathbb{P}\left\{\cap_{i=1}^{m}\left(\sup_{t\in[0,\mathcal{T}_{i}]}X_{i}(t)>a_{i}u\right),\cup_{i=1}^{m}\left((\mathcal{T}_{i}<r\overleftarrow{\sigma_{1}}(u))\cup(\mathcal{T}_{i}>R\overleftarrow{\sigma_{1}}(u))\right)\right\}
=:\displaystyle=: J3(u)+J4(u).\displaystyle J_{3}(u)+J_{4}(u).

By the same reasoning as that used in the derivation of the asymptotic lower bound above, we can show that

(21) limr0,RlimuJ3(u){|𝓣|>σ1(u)}=ν~((𝒌𝒂m,01/𝑯,]).\displaystyle\lim_{r\rightarrow 0,R\rightarrow\infty}\lim_{u\rightarrow\infty}\frac{J_{3}(u)}{\mathbb{P}\left\{\left\lvert\boldsymbol{\mathcal{T}}\right\rvert>\overleftarrow{\sigma_{1}}(u)\right\}}=\widetilde{\nu}((\boldsymbol{k}{\boldsymbol{a}}_{m,0}^{1/{\boldsymbol{H}}},\boldsymbol{\infty}]).

Moreover,

J4(u)i=1m({supt[0,𝒯i]Xi(t)>aiu,𝒯i<rσ1(u)}+{𝒯i>Rσ1(u)}).\displaystyle J_{4}(u)\leq\sum_{i=1}^{m}\left(\mathbb{P}\left\{\sup_{t\in[0,\mathcal{T}_{i}]}X_{i}(t)>a_{i}u,\mathcal{T}_{i}<r\overleftarrow{\sigma_{1}}(u)\right\}+\mathbb{P}\left\{\mathcal{T}_{i}>R\overleftarrow{\sigma_{1}}(u)\right\}\right).

Thus, by the same arguments as in the proof of Theorem 2.1 in [1] (see the asymptotics for integrals I1,I2,I4I_{1},I_{2},I_{4} therein), we conclude

limr0,Rlim supuJ4(u){|𝓣|>σ1(u)}=0,\displaystyle\lim_{r\rightarrow 0,R\rightarrow\infty}\limsup_{u\rightarrow\infty}\frac{J_{4}(u)}{\mathbb{P}\left\{\left\lvert\boldsymbol{\mathcal{T}}\right\rvert>\overleftarrow{\sigma_{1}}(u)\right\}}=0,

which together with (21) implies that

(22) limr0,Rlim supuH(u){|𝓣|>σ1(u)}ν~((𝒌𝒂m,01/𝑯,]).\displaystyle\lim_{r\rightarrow 0,R\rightarrow\infty}\limsup_{u\rightarrow\infty}\frac{H(u)}{\mathbb{P}\left\{\left\lvert\boldsymbol{\mathcal{T}}\right\rvert>\overleftarrow{\sigma_{1}}(u)\right\}}\leq\widetilde{\nu}((\boldsymbol{k}{\boldsymbol{a}}_{m,0}^{1/{\boldsymbol{H}}},\boldsymbol{\infty}]).

Notice that, by the assumptions on {σi(u)}i=1m\{\overleftarrow{\sigma_{i}}(u)\}_{i=1}^{m}, we in fact have H1=H2=⋯=HmH_{1}=H_{2}=\cdots=H_{m}. Consequently, combining (20) and (22) we complete the proof. \Box

Proof of Theorem 3.1: We use in the following the convention that i=10=Ω\cap_{i=1}^{0}=\Omega, the sample space. We first verify the claim for case (i), n0>0n_{0}>0. For arbitrarily small ε>0\varepsilon>0, we have

P(u)\displaystyle P(u) \displaystyle\geq {i=1n(supt[0,𝒯i](Xi(t)+cit)>aiu,𝒯i>(ti/|ci|+ε)u),i=n+1n+n0(supt[0,𝒯i]Xi(t)>aiu),\displaystyle\mathbb{P}\Bigg{\{}\cap_{i=1}^{n_{-}}\left(\sup_{t\in[0,\mathcal{T}_{i}]}(X_{i}(t)+c_{i}t)>a_{i}u,\mathcal{T}_{i}>(t^{*}_{i}/\left\lvert c_{i}\right\rvert+\varepsilon)u\right),\cap_{i=n_{-}+1}^{n_{-}+n_{0}}\left(\sup_{t\in[0,\mathcal{T}_{i}]}X_{i}(t)>a_{i}u\right),
i=n+n0+1n(supt[0,𝒯i](Xi(t)+cit)>aiu,𝒯i>ai+εciu)}\displaystyle\ \ \ \ \ \cap_{i=n_{-}+n_{0}+1}^{n}\left(\sup_{t\in[0,\mathcal{T}_{i}]}(X_{i}(t)+c_{i}t)>a_{i}u,\mathcal{T}_{i}>\frac{a_{i}+\varepsilon}{c_{i}}u\right)\Bigg{\}}
\displaystyle\geq {i=1n(supt[0,(ti/|ci|+ε)u](Xi(t)+cit)>aiu,𝒯i>(ti/|ci|+ε)u),\displaystyle\mathbb{P}\Bigg{\{}\cap_{i=1}^{n_{-}}\left(\sup_{t\in[0,(t^{*}_{i}/\left\lvert c_{i}\right\rvert+\varepsilon)u]}(X_{i}(t)+c_{i}t)>a_{i}u,\mathcal{T}_{i}>(t^{*}_{i}/\left\lvert c_{i}\right\rvert+\varepsilon)u\right),
i=n+1n+n0(supt[0,𝒯i]Xi(t)>aiu),i=n+n0+1n(Xi(ai+εciu)>εu,𝒯i>ai+εciu)}\displaystyle\ \ \ \ \ \cap_{i=n_{-}+1}^{n_{-}+n_{0}}\left(\sup_{t\in[0,\mathcal{T}_{i}]}X_{i}(t)>a_{i}u\right),\cap_{i=n_{-}+n_{0}+1}^{n}\left(X_{i}\left(\frac{a_{i}+\varepsilon}{c_{i}}u\right)>-\varepsilon u,\mathcal{T}_{i}>\frac{a_{i}+\varepsilon}{c_{i}}u\right)\Bigg{\}}
=\displaystyle= Q1(u)×Q2(u)×Q3(u),\displaystyle Q_{1}(u)\times Q_{2}(u)\times Q_{3}(u),

where

Q1(u)\displaystyle Q_{1}(u) :=\displaystyle:= {i=1n(supt[0,(ti/|ci|+ε)u]Xi(t)+cit>aiu)}\displaystyle\mathbb{P}\left\{\cap_{i=1}^{n_{-}}\left(\sup_{t\in[0,(t^{*}_{i}/\left\lvert c_{i}\right\rvert+\varepsilon)u]}X_{i}(t)+c_{i}t>a_{i}u\right)\right\}
Q2(u)\displaystyle Q_{2}(u) :=\displaystyle:= {i=1n(𝒯i>(ti/|ci|+ε)u),i=n+1n+n0(supt[0,𝒯i]Xi(t)>aiu),i=n+n0+1n(𝒯i>ai+εciu)},\displaystyle\mathbb{P}\left\{\cap_{i=1}^{n_{-}}\left(\mathcal{T}_{i}>(t^{*}_{i}/\left\lvert c_{i}\right\rvert+\varepsilon)u\right),\cap_{i=n_{-}+1}^{n_{-}+n_{0}}\left(\sup_{t\in[0,\mathcal{T}_{i}]}X_{i}(t)>a_{i}u\right),\cap_{i=n_{-}+n_{0}+1}^{n}\left(\mathcal{T}_{i}>\frac{a_{i}+\varepsilon}{c_{i}}u\right)\right\},
Q3(u)\displaystyle Q_{3}(u) :=\displaystyle:= i=n+n0+1n{Ni>εuσi(ai+εciu)}1,u,\displaystyle\prod_{i=n_{-}+n_{0}+1}^{n}\mathbb{P}\left\{N_{i}>\frac{-\varepsilon u}{\sigma_{i}(\frac{a_{i}+\varepsilon}{c_{i}}u)}\right\}\rightarrow 1,\ \ \ u\rightarrow\infty,

with Ni,i=n−+n0+1,⋯,nN_{i},i=n_{-}+n_{0}+1,\cdots,n being standard normal random variables. By Lemma 5.1, we know that, as u→∞u\rightarrow\infty,

Q1(u)i=1nψi(aiu).\displaystyle Q_{1}(u)\sim\prod_{i=1}^{n_{-}}\psi_{i}(a_{i}u).

Further, according to the assumptions on σi\sigma_{i}’s and Lemma 5.2, we get

limε0limuQ2(u){|𝓣|>σn+1(u)}=ν~((𝒌𝒂01/Hn+1,]),\displaystyle\lim_{\varepsilon\rightarrow 0}\lim_{u\rightarrow\infty}\frac{Q_{2}(u)}{\mathbb{P}\left\{\left\lvert\boldsymbol{\mathcal{T}}\right\rvert>\overleftarrow{\sigma}_{n_{-}+1}(u)\right\}}=\widetilde{\nu}((\boldsymbol{k}\boldsymbol{a}_{0}^{1/{H_{n_{-}+1}}},\boldsymbol{\infty}]),

and thus

P(u)ν~((𝒌𝒂01/Hn+1,]){|𝓣|>σn+1(u)}i=1nψi(aiu),u.\displaystyle P(u)\gtrsim\widetilde{\nu}((\boldsymbol{k}\boldsymbol{a}_{0}^{1/{H_{n_{-}+1}}},\boldsymbol{\infty}])\mathbb{P}\left\{\left\lvert\boldsymbol{\mathcal{T}}\right\rvert>\overleftarrow{\sigma}_{n_{-}+1}(u)\right\}\prod_{i=1}^{n_{-}}\psi_{i}(a_{i}u),\quad u\rightarrow\infty.

Similarly, we can show

P(u)\displaystyle P(u) \displaystyle\leq {i=1n(supt[0,)Xi(t)+cit>aiu),i=n+1n+n0(supt[0,𝒯i]Xi(t)>aiu)}\displaystyle\mathbb{P}\left\{\cap_{i=1}^{n_{-}}\left(\sup_{t\in[0,\infty)}X_{i}(t)+c_{i}t>a_{i}u\right),\cap_{i=n_{-}+1}^{n_{-}+n_{0}}\left(\sup_{t\in[0,\mathcal{T}_{i}]}X_{i}(t)>a_{i}u\right)\right\}
ν~((𝒌𝒂01/Hn+1,]){|𝓣|>σn+1(u)}i=1nψi(aiu),u.\displaystyle\sim\ \widetilde{\nu}((\boldsymbol{k}\boldsymbol{a}_{0}^{1/{H_{n_{-}+1}}},\boldsymbol{\infty}])\mathbb{P}\left\{\left\lvert\boldsymbol{\mathcal{T}}\right\rvert>\overleftarrow{\sigma}_{n_{-}+1}(u)\right\}\prod_{i=1}^{n_{-}}\psi_{i}(a_{i}u),\quad u\rightarrow\infty.

This completes the proof of case (i).

Next we consider case (ii), n0=0n_{0}=0. Similarly to case (i), we have, for any small ε>0\varepsilon>0,

P(u)\displaystyle P(u) \displaystyle\geq {i=1n(supt[0,(ti/|ci|+ε)u](Xi(t)+cit)>aiu,𝒯i>(ti/|ci|+ε)u),\displaystyle\mathbb{P}\Bigg{\{}\cap_{i=1}^{n_{-}}\left(\sup_{t\in[0,(t^{*}_{i}/\left\lvert c_{i}\right\rvert+\varepsilon)u]}(X_{i}(t)+c_{i}t)>a_{i}u,\mathcal{T}_{i}>(t^{*}_{i}/\left\lvert c_{i}\right\rvert+\varepsilon)u\right),
i=n+1n(Xi(ai+εciu)>εu,𝒯i>ai+εciu)}\displaystyle\ \ \ \ \ \cap_{i=n_{-}+1}^{n}\left(X_{i}\left(\frac{a_{i}+\varepsilon}{c_{i}}u\right)>-\varepsilon u,\mathcal{T}_{i}>\frac{a_{i}+\varepsilon}{c_{i}}u\right)\Bigg{\}}
=\displaystyle= Q1(u)×Q3(u)×Q4(u),\displaystyle Q_{1}(u)\times Q_{3}(u)\times Q_{4}(u),

where

Q4(u):={i=1n(𝒯i>(ti/|ci|+ε)u),i=n+1n(𝒯i>ai+εciu)}.\displaystyle Q_{4}(u):=\mathbb{P}\left\{\cap_{i=1}^{n_{-}}\left(\mathcal{T}_{i}>(t^{*}_{i}/\left\lvert c_{i}\right\rvert+\varepsilon)u\right),\cap_{i=n_{-}+1}^{n}\left(\mathcal{T}_{i}>\frac{a_{i}+\varepsilon}{c_{i}}u\right)\right\}.

By Lemma 7.1, we know

limε→0limu→∞Q4(u)ℙ{|𝓣|>u}=ν((𝒂1,∞]),\displaystyle\lim_{\varepsilon\rightarrow 0}\lim_{u\rightarrow\infty}\frac{Q_{4}(u)}{\mathbb{P}\left\{\left\lvert\boldsymbol{\mathcal{T}}\right\rvert>u\right\}}=\nu((\boldsymbol{a}_{1},\boldsymbol{\infty}]),

and thus

P(u)≳ν((𝒂1,∞])ℙ{|𝓣|>u}∏i=1n−ψi(aiu),u→∞.\displaystyle P(u)\gtrsim\nu((\boldsymbol{a}_{1},\boldsymbol{\infty}])\mathbb{P}\left\{\left\lvert\boldsymbol{\mathcal{T}}\right\rvert>u\right\}\prod_{i=1}^{n_{-}}\psi_{i}(a_{i}u),\quad u\rightarrow\infty.

For the upper bound, we have for any small ε>0\varepsilon>0

P(u)I1(u)+I2(u),\displaystyle P(u)\leq I_{1}(u)+I_{2}(u),

with

I1(u)\displaystyle I_{1}(u) :=\displaystyle:= {i=1n(supt[0,𝒯i]Xi(t)+cit>aiu),i=1n(𝒯i>(ti/|ci|ε)u),i=n+1n(supt[0,𝒯i]Xi(t)+ci𝒯i>aiu)},\displaystyle\mathbb{P}\Bigg{\{}\cap_{i=1}^{n_{-}}\left(\sup_{t\in[0,\mathcal{T}_{i}]}X_{i}(t)+c_{i}t>a_{i}u\right),\cap_{i=1}^{n_{-}}\left(\mathcal{T}_{i}>(t^{*}_{i}/\left\lvert c_{i}\right\rvert-\varepsilon)u\right),\cap_{i=n_{-}+1}^{n}\left(\sup_{t\in[0,\mathcal{T}_{i}]}X_{i}(t)+c_{i}\mathcal{T}_{i}>a_{i}u\right)\Bigg{\}},
I2(u)\displaystyle I_{2}(u) :=\displaystyle:= {i=1n(supt[0,𝒯i]Xi(t)+cit>aiu),i=1n(𝒯i(ti/|ci|ε)u),i=n+1n(supt[0,𝒯i]Xi(t)+ci𝒯i>aiu)}.\displaystyle\mathbb{P}\Bigg{\{}\cap_{i=1}^{n_{-}}\left(\sup_{t\in[0,\mathcal{T}_{i}]}X_{i}(t)+c_{i}t>a_{i}u\right),\cup_{i=1}^{n_{-}}\left(\mathcal{T}_{i}\leq(t^{*}_{i}/\left\lvert c_{i}\right\rvert-\varepsilon)u\right),\cap_{i=n_{-}+1}^{n}\left(\sup_{t\in[0,\mathcal{T}_{i}]}X_{i}(t)+c_{i}\mathcal{T}_{i}>a_{i}u\right)\Bigg{\}}.

It follows that

I1(u)\displaystyle I_{1}(u) \displaystyle\leq {i=1n(supt[0,)Xi(t)+cit>aiu),i=1n(𝒯i>(ti/|ci|ε)u),i=n+1n(supt[0,𝒯i]Xi(t)+ci𝒯i>aiu)}\displaystyle\mathbb{P}\Bigg{\{}\cap_{i=1}^{n_{-}}\left(\sup_{t\in[0,\infty)}X_{i}(t)+c_{i}t>a_{i}u\right),\cap_{i=1}^{n_{-}}\left(\mathcal{T}_{i}>(t^{*}_{i}/\left\lvert c_{i}\right\rvert-\varepsilon)u\right),\cap_{i=n_{-}+1}^{n}\left(\sup_{t\in[0,\mathcal{T}_{i}]}X_{i}(t)+c_{i}\mathcal{T}_{i}>a_{i}u\right)\Bigg{\}}
=\displaystyle= i=1nψi(aiu){i=1n(𝒯i>(ti/|ci|ε)u),i=n+1n(supt[0,𝒯i]Xi(t)+ci𝒯i>aiu)}.\displaystyle\prod_{i=1}^{n_{-}}\psi_{i}(a_{i}u)\mathbb{P}\left\{\cap_{i=1}^{n_{-}}\left(\mathcal{T}_{i}>(t^{*}_{i}/\left\lvert c_{i}\right\rvert-\varepsilon)u\right),\cap_{i=n_{-}+1}^{n}\left(\sup_{t\in[0,\mathcal{T}_{i}]}X_{i}(t)+c_{i}\mathcal{T}_{i}>a_{i}u\right)\right\}.

Next, for the chosen small ε>0\varepsilon>0 we have

{i=1n(𝒯i>(ti/|ci|ε)u),i=n+1n(supt[0,𝒯i]Xi(t)+ci𝒯i>aiu)}\displaystyle\mathbb{P}\left\{\cap_{i=1}^{n_{-}}\left(\mathcal{T}_{i}>(t^{*}_{i}/\left\lvert c_{i}\right\rvert-\varepsilon)u\right),\cap_{i=n_{-}+1}^{n}\left(\sup_{t\in[0,\mathcal{T}_{i}]}X_{i}(t)+c_{i}\mathcal{T}_{i}>a_{i}u\right)\right\}
={i=1n(𝒯i>(ti/|ci|ε)u),i=n+1n(supt[0,𝒯i]Xi(t)+ci𝒯i>aiu,supt[0,𝒯i]Xi(t)εu)}\displaystyle=\mathbb{P}\left\{\cap_{i=1}^{n_{-}}\left(\mathcal{T}_{i}>(t^{*}_{i}/\left\lvert c_{i}\right\rvert-\varepsilon)u\right),\cap_{i=n_{-}+1}^{n}\left(\sup_{t\in[0,\mathcal{T}_{i}]}X_{i}(t)+c_{i}\mathcal{T}_{i}>a_{i}u,\sup_{t\in[0,\mathcal{T}_{i}]}X_{i}(t)\leq\varepsilon u\right)\right\}
+{i=1n(𝒯i>(ti/|ci|ε)u),i=n+1n(supt[0,𝒯i]Xi(t)+ci𝒯i>aiu),i=n+1n(supt[0,𝒯i]Xi(t)>εu)}\displaystyle\ \ +\mathbb{P}\left\{\cap_{i=1}^{n_{-}}\left(\mathcal{T}_{i}>(t^{*}_{i}/\left\lvert c_{i}\right\rvert-\varepsilon)u\right),\cap_{i=n_{-}+1}^{n}\left(\sup_{t\in[0,\mathcal{T}_{i}]}X_{i}(t)+c_{i}\mathcal{T}_{i}>a_{i}u\right),\cup_{i=n_{-}+1}^{n}\left(\sup_{t\in[0,\mathcal{T}_{i}]}X_{i}(t)>\varepsilon u\right)\right\}
{i=1n(𝒯i>(ti/|ci|ε)u),i=n+1n(ci𝒯i>(aiε)u)}+i=n+1n{supt[0,𝒯i]Xi(t)>εu}.\displaystyle\leq\mathbb{P}\left\{\cap_{i=1}^{n_{-}}\left(\mathcal{T}_{i}>(t^{*}_{i}/\left\lvert c_{i}\right\rvert-\varepsilon)u\right),\cap_{i=n_{-}+1}^{n}\left(c_{i}\mathcal{T}_{i}>(a_{i}-\varepsilon)u\right)\right\}+\sum_{i=n_{-}+1}^{n}\mathbb{P}\left\{\sup_{t\in[0,\mathcal{T}_{i}]}X_{i}(t)>\varepsilon u\right\}.

Furthermore, it follows from Theorem 2.1 in [1] that for any i=n+1,,ni=n_{-}+1,\cdots,n

{supt[0,𝒯i]Xi(t)>εu}Ci(ε){𝒯i>σi(u)},u,\displaystyle\mathbb{P}\left\{\sup_{t\in[0,\mathcal{T}_{i}]}X_{i}(t)>\varepsilon u\right\}\sim C_{i}(\varepsilon)\mathbb{P}\left\{\mathcal{T}_{i}>{\color[rgb]{0,0,0}\overleftarrow{\sigma_{i}}}(u)\right\},\ \ \ \ u\rightarrow\infty,

with some constant Ci(ε)>0C_{i}(\varepsilon)>0. This implies that

i=n+1n{supt[0,𝒯i]Xi(t)>εu}=o({|𝓣|>u}),u.\displaystyle\sum_{i=n_{-}+1}^{n}\mathbb{P}\left\{\sup_{t\in[0,\mathcal{T}_{i}]}X_{i}(t)>\varepsilon u\right\}=o(\mathbb{P}\left\{\left\lvert\boldsymbol{\mathcal{T}}\right\rvert>u\right\}),\quad u\rightarrow\infty.

Consequently, applying Lemma 7.1 and letting ε0\varepsilon\rightarrow 0 we can obtain the required asymptotic upper bound, if we can further show

(23) limuI2(u)i=1nψi(aiu){|𝓣|>u}=0.\displaystyle\lim_{u\rightarrow\infty}\frac{I_{2}(u)}{\prod_{i=1}^{n_{-}}\psi_{i}(a_{i}u)\mathbb{P}\left\{\left\lvert\boldsymbol{\mathcal{T}}\right\rvert>u\right\}}=0.

Indeed, we have

(24) I2(u)\displaystyle I_{2}(u) \displaystyle\leq i=1n{j=1n(supt[0,𝒯j]Xj(t)+cjt>aju),𝒯i(ti/|ci|ε)u}\displaystyle\sum_{i=1}^{n_{-}}\mathbb{P}\Bigg{\{}\cap_{j=1}^{n_{-}}\left(\sup_{t\in[0,\mathcal{T}_{j}]}X_{j}(t)+c_{j}t>a_{j}u\right),\mathcal{T}_{i}\leq(t^{*}_{i}/\left\lvert c_{i}\right\rvert-\varepsilon)u\Bigg{\}}
\displaystyle\leq i=1nj=1njiψj(aju){supt[0,(ti/|ci|ε)u]Xi(t)+cit>aiu}.\displaystyle\sum_{i=1}^{n_{-}}\underset{j\neq i}{\prod_{j=1}^{n_{-}}}\psi_{j}(a_{j}u)\mathbb{P}\left\{\sup_{t\in[0,(t^{*}_{i}/\left\lvert c_{i}\right\rvert-\varepsilon)u]}X_{i}(t)+c_{i}t>a_{i}u\right\}.

Furthermore, by Lemma 5.1 we have that for any γ>0\gamma>0

limu{supt[0,(ti/|ci|ε)u]Xi(t)+cit>aiu}ψi(aiu)uγ=0,i=1,2,,n,\displaystyle\lim_{u\rightarrow\infty}\frac{\mathbb{P}\left\{\sup_{t\in[0,(t^{*}_{i}/\left\lvert c_{i}\right\rvert-\varepsilon)u]}X_{i}(t)+c_{i}t>a_{i}u\right\}}{\psi_{i}(a_{i}u)u^{-\gamma}}=0,\quad i=1,2,\cdots,n_{-},

which together with (24) implies (23). This completes the proof. \Box

Proof of Example 3.3: The proof is based on the following obvious bounds

(25) PL(u)\displaystyle P_{L}(u) :=\displaystyle:= {i=1n((Bi(Yi(T))+ciYi(T))>aiu)}PB(u)\displaystyle\mathbb{P}\left\{\cap_{i=1}^{n}\left((B_{i}(Y_{i}(T))+c_{i}Y_{i}(T))>a_{i}u\right)\right\}\leq P_{B}(u)
{i=1n(supt[0,Yi(T)](Bi(t)+cit)>aiu)}=:PU(u).\displaystyle\leq\mathbb{P}\left\{\cap_{i=1}^{n}\left(\sup_{t\in[0,Y_{i}(T)]}(B_{i}(t)+c_{i}t)>a_{i}u\right)\right\}=:P_{U}(u).

Since α0<mini=1nαi,\alpha_{0}<\min_{i=1}^{n}\alpha_{i}, by Lemma 7.3 we have that 𝒀(T)\boldsymbol{Y}(T) is a multivariate regularly varying random vector with index α0\alpha_{0} and the same limiting measure ν\nu as that of 𝑺0(T):=(S0(T),,S0(T))n\boldsymbol{S}_{0}(T):=(S_{0}(T),\cdots,S_{0}(T))\in\mathbb{R}^{n}, and further {|𝒀(T)|>x}{|𝑺0(T)|>x},x.\mathbb{P}\left\{\left\lvert\boldsymbol{Y}(T)\right\rvert>x\right\}\sim\mathbb{P}\left\{\left\lvert\boldsymbol{S}_{0}(T)\right\rvert>x\right\},x\rightarrow\infty. The asymptotics of PU(u)P_{U}(u) can be obtained by applying Theorem 3.1. Below we focus on PL(u)P_{L}(u).

First, consider case (i) where ci>0c_{i}>0 for all i=1,,ni=1,\cdots,n. We have

PL(u)={i=1n((Bi(1)Yi(T)+ciYi(T))>aiu)}.\displaystyle P_{L}(u)=\mathbb{P}\left\{\cap_{i=1}^{n}\left((B_{i}(1)\sqrt{Y_{i}(T)}+c_{i}Y_{i}(T))>a_{i}u\right)\right\}.

Thus, by Lemma 7.3 we obtain

PL(u){i=1n(ciS0(T)>aiu)}Cα0,T(maxi=1n(ai/ci)u)α0,u,\displaystyle P_{L}(u)\sim\mathbb{P}\left\{\cap_{i=1}^{n}\left(c_{i}S_{0}(T)>a_{i}u\right)\right\}\sim C_{\alpha_{0},T}(\max_{i=1}^{n}(a_{i}/c_{i})u)^{-\alpha_{0}},\ \ u\rightarrow\infty,

which is the same as the asymptotic upper bound obtained by using (ii) of Theorem 3.1.

Next, consider case (ii) where ci=0c_{i}=0 for all i=1,,ni=1,\cdots,n. We have

PL(u)={i=1n(Bi(1)Yi(T)>aiu)}=12n{i=1n(Bi(1)2Yi(T)>(aiu)2)}.\displaystyle P_{L}(u)=\mathbb{P}\left\{\cap_{i=1}^{n}\left(B_{i}(1)\sqrt{Y_{i}(T)}>a_{i}u\right)\right\}=\frac{1}{2^{n}}\mathbb{P}\left\{\cap_{i=1}^{n}\left(B_{i}(1)^{2}Y_{i}(T)>(a_{i}u)^{2}\right)\right\}.

Thus, by Lemma 7.2 and Lemma 7.3 we obtain

PL(u)≍u−2α0,u→∞,\displaystyle P_{L}(u)\asymp u^{-2\alpha_{0}},\ \ u\rightarrow\infty,

which is the same as the asymptotic upper bound obtained by using (i) of Theorem 3.1.

Finally, consider the case (iii) where ci<0c_{i}<0 for all i=1,,ni=1,\cdots,n. We have

PL(u){i=1n(Bi(Yi(T))+ciYi(T)>aiu,Yi(T)[aiu/|ci|u,aiu/|ci|+u])}\displaystyle P_{L}(u)\geq\mathbb{P}\left\{\cap_{i=1}^{n}\left(B_{i}(Y_{i}(T))+c_{i}Y_{i}(T)>a_{i}u,Y_{i}(T)\in[a_{i}u/\left\lvert c_{i}\right\rvert-\sqrt{u},a_{i}u/\left\lvert c_{i}\right\rvert+\sqrt{u}]\right)\right\}
i=1n(mint[aiu/|ci|u,aiu/|ci|+u]{B1(t)+cit>aiu}){i=1n(Yi(T)[aiu/|ci|u,aiu/|ci|+u])}.\displaystyle\geq\prod_{i=1}^{n}\left(\min_{t\in[a_{i}u/\left\lvert c_{i}\right\rvert-\sqrt{u},a_{i}u/\left\lvert c_{i}\right\rvert+\sqrt{u}]}\mathbb{P}\left\{B_{1}(t)+c_{i}t>a_{i}u\right\}\right)\mathbb{P}\left\{\cap_{i=1}^{n}\left(Y_{i}(T)\in[a_{i}u/\left\lvert c_{i}\right\rvert-\sqrt{u},a_{i}u/\left\lvert c_{i}\right\rvert+\sqrt{u}]\right)\right\}.

Recalling (4), we derive that

mint[aiu/|ci|u,aiu/|ci|+u]{B1(t)+cit>aiu}\displaystyle\min_{t\in[a_{i}u/\left\lvert c_{i}\right\rvert-\sqrt{u},a_{i}u/\left\lvert c_{i}\right\rvert+\sqrt{u}]}\mathbb{P}\left\{B_{1}(t)+c_{i}t>a_{i}u\right\} =\displaystyle= mint[ai/|ci|1/u,ai/|ci|+1/u]{B1(1)>(aicit)u/t}\displaystyle\min_{t\in[a_{i}/\left\lvert c_{i}\right\rvert-1/\sqrt{u},a_{i}/\left\lvert c_{i}\right\rvert+1/\sqrt{u}]}\mathbb{P}\left\{B_{1}(1)>(a_{i}-c_{i}t)\sqrt{u}/\sqrt{t}\right\}
\displaystyle\gtrsim constant1ue2aiciu+o(u),u.\displaystyle constant\cdot\frac{1}{\sqrt{u}}e^{2a_{i}c_{i}u+o(u)},\ \ \ u\rightarrow\infty.

Furthermore,

{i=1n(Yi(T)[aiu/|ci|u,aiu/|ci|+u])}\displaystyle\mathbb{P}\left\{\cap_{i=1}^{n}\left(Y_{i}(T)\in[a_{i}u/\left\lvert c_{i}\right\rvert-\sqrt{u},a_{i}u/\left\lvert c_{i}\right\rvert+\sqrt{u}]\right)\right\}
(26) i=0n{Si(T)[aiu/|2ci|u/2,aiu/|2ci|+u/2]}.\displaystyle\geq\prod_{i=0}^{n}\mathbb{P}\left\{S_{i}(T)\in[a_{i}u/\left\lvert 2c_{i}\right\rvert-\sqrt{u}/2,a_{i}u/\left\lvert 2c_{i}\right\rvert+\sqrt{u}/2]\right\}.

By the assumptions on the density functions of Si(T),i=0,1,⋯,nS_{i}(T),i=0,1,\cdots,n, and the Monotone Density Theorem (see, e.g., [37]), we know that (26) is asymptotically larger than Cu−βCu^{-\beta} for some constants C,β>0C,\beta>0. Therefore,

lnPL(u)2i=1n(aici)u,u.\ln P_{L}(u)\gtrsim 2\sum_{i=1}^{n}(a_{i}c_{i})u,\ \ \ \ u\rightarrow\infty.

The same asymptotic upper bound can be obtained by the fact that {supt>0(Bi(t)+cit)>aiu}=e2aiciu\mathbb{P}\left\{\sup_{t>0}(B_{i}(t)+c_{i}t)>a_{i}u\right\}=e^{2a_{i}c_{i}u} for ci<0c_{i}<0. This completes the proof. \Box
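The exit identity invoked in the last step is classical; for completeness, here is a sketch via the exponential martingale (our wording of a standard argument):

```latex
\text{For } c<0 \text{ put } \theta:=-2c>0, \text{ so that }
M_t=\exp\{\theta B(t)-\tfrac{1}{2}\theta^{2}t\}=e^{-2c(B(t)+ct)}
\text{ is a nonnegative martingale with } M_0=1.
\text{Let } \tau_x:=\inf\{t\geq 0:\,B(t)+ct=x\},\ x>0.
\text{On } \{\tau_x=\infty\} \text{ the negative drift forces } B(t)+ct\to-\infty,
\text{ hence } M_t\to 0 \text{ a.s., and optional stopping yields}
1=\mathbb{E}\{M_{\tau_x}\mathbf{1}_{(\tau_x<\infty)}\}
 =e^{-2cx}\,\mathbb{P}\{\tau_x<\infty\}
\quad\Longrightarrow\quad
\mathbb{P}\Big\{\sup_{t\geq 0}(B(t)+ct)>x\Big\}=e^{2cx}.
```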

6. Proof of Theorem 4.1

We first establish a lemma which is crucial for the proof of Theorem 4.1.

Lemma 6.1.

Let 𝐔(1)\boldsymbol{U}^{(1)}, 𝐌(1)\boldsymbol{M}^{(1)} and 𝐓~\widetilde{\boldsymbol{T}} be given by (13), (14) and (16) respectively. Then, 𝐔(1),𝐌(1)\boldsymbol{U}^{(1)},\boldsymbol{M}^{(1)} are both regularly varying with the same index λ\lambda and limiting measure μ\mu as that of 𝐓~\widetilde{\boldsymbol{T}}. Moreover,

{|𝑼(1)|>x}{|𝑴(1)|>x}{|𝑻~|>x},x.\displaystyle\mathbb{P}\left\{\left\lvert\boldsymbol{U}^{(1)}\right\rvert>x\right\}\sim\mathbb{P}\left\{\left\lvert\boldsymbol{M}^{(1)}\right\rvert>x\right\}\sim\mathbb{P}\left\{\left\lvert\widetilde{\boldsymbol{T}}\right\rvert>x\right\},\ x\rightarrow\infty.

Proof of Lemma 6.1: First note that by self-similarity of fBm’s

𝑼(1)=(X1(1)(T1)+Y1(1)(S1),X2(1)(T1)+Y2(1)(S1))=𝐷(𝑻~+𝒁1+𝒁2+𝒁3),\displaystyle\boldsymbol{U}^{(1)}=(X_{1}^{(1)}(T_{1})+Y_{1}^{(1)}(S_{1}),X_{2}^{(1)}(T_{1})+Y_{2}^{(1)}(S_{1}))\overset{D}{=}(\boldsymbol{\widetilde{\boldsymbol{T}}}+\boldsymbol{Z}_{1}+\boldsymbol{Z}_{2}+\boldsymbol{Z}_{3}),

where

𝒁1=(BH1(1)TH1,BH2(1)TH2),𝒁2=(B~H~1(1)SH~1,B~H~2(1)SH~2),𝒁3=(q1S,q2S).\displaystyle\boldsymbol{Z}_{1}=(B_{H_{1}}(1)T^{H_{1}},B_{H_{2}}(1)T^{H_{2}}),\ \boldsymbol{Z}_{2}=(\widetilde{B}_{\widetilde{H}_{1}}(1)S^{\widetilde{H}_{1}},\widetilde{B}_{\widetilde{H}_{2}}(1)S^{\widetilde{H}_{2}}),\ \boldsymbol{Z}_{3}=(-q_{1}S,-q_{2}S).

Since any two norms on ℝd\mathbb{R}^{d} are equivalent, by the fact that Hi,H~i<1H_{i},\widetilde{H}_{i}<1 for i=1,2i=1,2 and (12), we have

max({|(TH1,TH2)|>x},{|(SH~1,SH~2)|>x},{|𝒁3|>x})=o({|𝑻~|>x}),x.\displaystyle\max\Big{(}\mathbb{P}\left\{\left\lvert(T^{H_{1}},T^{H_{2}})\right\rvert>x\right\},\mathbb{P}\left\{\left\lvert(S^{\widetilde{H}_{1}},S^{\widetilde{H}_{2}})\right\rvert>x\right\},\mathbb{P}\left\{\left\lvert\boldsymbol{Z}_{3}\right\rvert>x\right\}\Big{)}=o\Big{(}\mathbb{P}\left\{\left\lvert\widetilde{\boldsymbol{T}}\right\rvert>x\right\}\Big{)},\ x\rightarrow\infty.

Thus, the claim for 𝑼(1)\boldsymbol{U}^{(1)} follows directly by Lemma 7.3.

Next, note that

𝑴(1)\displaystyle\boldsymbol{M}^{(1)} =𝐷\displaystyle\overset{D}{=} (sup0tT+S(X1(t)I(0t<T)+(X1(T)+Y1(tT))I(Tt<T+S)),\displaystyle\Big{(}\sup_{0\leq t\leq T+S}\left(X_{1}(t)I_{(0\leq t<T)}+(X_{1}(T)+Y_{1}(t-T))I_{(T\leq t<T+S)}\right),
sup0tT+S(X2(t)I(0t<T)+(X2(T)+Y2(tT))I(Tt<T+S)))=:𝑴,\displaystyle\ \ \ \ \ \ \ \ \sup_{0\leq t\leq T+S}\left(X_{2}(t)I_{(0\leq t<T)}+(X_{2}(T)+Y_{2}(t-T))I_{(T\leq t<T+S)}\right)\Big{)}=:\boldsymbol{M},

then

𝑴(X1(T),X2(T))=𝐷𝑻~+𝒁1\displaystyle\boldsymbol{M}\geq(X_{1}(T),X_{2}(T))\overset{D}{=}\widetilde{\boldsymbol{T}}+\boldsymbol{Z}_{1}

and

𝑴\displaystyle\boldsymbol{M} ≤\displaystyle\leq (sup0≤t≤TBH1(t)+p1T+supt≥0Y1(t),sup0≤t≤TBH2(t)+p2T+supt≥0Y2(t))\Big{(}\sup_{0\leq t\leq T}B_{H_{1}}(t)+p_{1}T+\sup_{t\geq 0}Y_{1}(t),\sup_{0\leq t\leq T}B_{H_{2}}(t)+p_{2}T+\sup_{t\geq 0}Y_{2}(t)\Big{)}
=𝐷\displaystyle\overset{D}{=} (ξ1TH1+supt0Y1(t),ξ2TH2+supt0Y2(t))+𝑻~,\displaystyle(\xi_{1}T^{H_{1}}+\sup_{t\geq 0}Y_{1}(t),\xi_{2}T^{H_{2}}+\sup_{t\geq 0}Y_{2}(t))+\widetilde{\boldsymbol{T}},

with $\xi_{i}$ defined in (5). By Corollary 2.2, we know that $\mathbb{P}\left\{\sup_{t\geq 0}Y_{i}(t)>x\right\}=o(\mathbb{P}\left\{T>x\right\})$ as $x\rightarrow\infty$. Therefore, the claim for $\boldsymbol{M}^{(1)}$ is a direct consequence of Lemma 7.3 and Lemma 7.4. This completes the proof. \Box

Proof of Theorem 4.1: First, note that, for any $\boldsymbol{a},\boldsymbol{c}>\boldsymbol{0}$, by the homogeneity of $\mu$,

(27) 0μ((v𝒄+𝒂,])𝑑vμ((𝒂,])+1vλμ((𝒄+𝒂/v,])𝑑vμ((𝒂,])+1λ1μ((𝒄,]).\displaystyle\int_{0}^{\infty}\mu((v\boldsymbol{c}+\boldsymbol{a},\boldsymbol{\infty}])dv\leq\mu((\boldsymbol{a},\boldsymbol{\infty}])+\int_{1}^{\infty}v^{-\lambda}\mu((\boldsymbol{c}+\boldsymbol{a}/v,\boldsymbol{\infty}])dv\leq\mu((\boldsymbol{a},\boldsymbol{\infty}])+\frac{1}{\lambda-1}\mu((\boldsymbol{c},\boldsymbol{\infty}]).
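As a sanity check, inequality (27) can be verified numerically in dimension one for the concrete homogeneous measure $\mu((x,\infty])=x^{-\lambda}$. The sketch below is ours, not from the paper; the helper names `mu_tail`, `lhs` and `rhs` are hypothetical, and `lhs` approximates the integral by a Riemann sum.

```python
# Numerical sanity check of inequality (27) in dimension one, taking the
# homogeneous measure mu((x, inf]) = x**(-lam), lam > 1.  The helper
# names (mu_tail, lhs, rhs) are ours, not from the paper.
def mu_tail(x, lam):
    return x ** (-lam)

def lhs(a, c, lam, n=200_000, vmax=2_000.0):
    # Riemann-sum approximation of int_0^vmax mu((v*c + a, inf]) dv;
    # for lam > 1 the tail beyond vmax is negligible.
    dv = vmax / n
    return sum(mu_tail(k * dv * c + a, lam) for k in range(1, n + 1)) * dv

def rhs(a, c, lam):
    # the upper bound mu((a, inf]) + mu((c, inf]) / (lam - 1) from (27)
    return mu_tail(a, lam) + mu_tail(c, lam) / (lam - 1)
```

For instance, with $a=1$, $c=1/2$, $\lambda=2$ the integral evaluates exactly to $a^{1-\lambda}/(c(\lambda-1))=2$, comfortably below the bound $\mu((a,\infty])+\frac{1}{\lambda-1}\mu((c,\infty])=5$.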

For simplicity we denote $\boldsymbol{W}^{(n)}:=\sum_{i=1}^{n}\boldsymbol{U}^{(i)}$. We first consider the lower bound, for which we adopt the standard ``one big jump'' technique (see [24]). Informally speaking, we choose an event on which $\boldsymbol{W}^{(n-1)}+\boldsymbol{M}^{(n)}$, $n\geq 1$, behaves in a typical way up to some time $k$ at which $\boldsymbol{M}^{(k+1)}$ is large. Let $\delta,\varepsilon$ be small positive numbers. By the weak law of large numbers, we can choose $K=K_{\varepsilon,\delta}$ large enough so that

{𝑾(n)>n(1+ε)𝒄K𝟏}>1δ,n=1,2,.\displaystyle\mathbb{P}\left\{\boldsymbol{W}^{(n)}>-n(1+\varepsilon)\boldsymbol{c}-K\boldsymbol{1}\right\}>1-\delta,\ \ \ \ n=1,2,\cdots.

For any u>0u>0, we have

Q(u)\displaystyle Q(u) =\displaystyle= {n1,𝑾(n1)+𝑴(n)>𝒂u}\displaystyle\mathbb{P}\left\{\exists n\geq 1,\ \boldsymbol{W}^{(n-1)}+\boldsymbol{M}^{(n)}>\boldsymbol{a}u\right\}
=\displaystyle= {𝑴(1)>𝒂u}+k1{n=1k(𝑾(n1)+𝑴(n)𝒂u),𝑾(k)+𝑴(k+1)>𝒂u}\displaystyle\mathbb{P}\left\{\boldsymbol{M}^{(1)}>\boldsymbol{a}u\right\}+\sum_{k\geq 1}\mathbb{P}\left\{\cap_{n=1}^{k}(\boldsymbol{W}^{(n-1)}+\boldsymbol{M}^{(n)}\not>\boldsymbol{a}u),\boldsymbol{W}^{(k)}+\boldsymbol{M}^{(k+1)}>\boldsymbol{a}u\right\}
\displaystyle\geq {𝑴(1)>𝒂u}+k1{n=1k(𝑾(n1)+𝑴(n)𝒂u),𝑾(k)>k(1+ε)𝒄K𝟏,\displaystyle\mathbb{P}\left\{\boldsymbol{M}^{(1)}>\boldsymbol{a}u\right\}+\sum_{k\geq 1}\mathbb{P}\Big{\{}\cap_{n=1}^{k}(\boldsymbol{W}^{(n-1)}+\boldsymbol{M}^{(n)}\not>\boldsymbol{a}u),\boldsymbol{W}^{(k)}>-k(1+\varepsilon)\boldsymbol{c}-K\boldsymbol{1},
𝑴(k+1)>𝒂u+k(1+ε)𝒄+K𝟏}\displaystyle\qquad\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \qquad\ \ \ \ \ \ \ \ \qquad\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \boldsymbol{M}^{(k+1)}>\boldsymbol{a}u+k(1+\varepsilon)\boldsymbol{c}+K\boldsymbol{1}\Big{\}}
\displaystyle\geq {𝑴(1)>𝒂u}+k1(1δ{n=1k(𝑾(n1)+𝑴(n)>𝒂u)}){𝑴(k+1)>𝒂u+k(1+ε)𝒄+K𝟏}\displaystyle\mathbb{P}\left\{\boldsymbol{M}^{(1)}>\boldsymbol{a}u\right\}+\sum_{k\geq 1}\left(1-\delta-\mathbb{P}\left\{\cup_{n=1}^{k}(\boldsymbol{W}^{(n-1)}+\boldsymbol{M}^{(n)}>\boldsymbol{a}u)\right\}\right)\mathbb{P}\left\{\boldsymbol{M}^{(k+1)}>\boldsymbol{a}u+k(1+\varepsilon)\boldsymbol{c}+K\boldsymbol{1}\right\}
\displaystyle\geq (1δQ(u))k0{𝑴(1)>𝒂u+k(1+ε)𝒄+K𝟏}\displaystyle\left(1-\delta-Q(u)\right)\sum_{k\geq 0}\mathbb{P}\left\{\boldsymbol{M}^{(1)}>\boldsymbol{a}u+k(1+\varepsilon)\boldsymbol{c}+K\boldsymbol{1}\right\}
\displaystyle\geq (1δQ(u))1+ε0{𝑴(1)>𝒂u+v𝒄+K𝟏}𝑑v.\displaystyle\frac{\left(1-\delta-Q(u)\right)}{1+\varepsilon}\int_{0}^{\infty}\mathbb{P}\left\{\boldsymbol{M}^{(1)}>\boldsymbol{a}u+v\boldsymbol{c}+K\boldsymbol{1}\right\}dv.

For uu sufficiently large such that εu>K\varepsilon u>K, we have

Q(u)(1δQ(u))1+ε0{𝑴(1)>(𝒂+ε𝟏)u+v𝒄}𝑑v.\displaystyle Q(u)\geq\frac{\left(1-\delta-Q(u)\right)}{1+\varepsilon}\int_{0}^{\infty}\mathbb{P}\left\{\boldsymbol{M}^{(1)}>(\boldsymbol{a}+\varepsilon\boldsymbol{1})u+v\boldsymbol{c}\right\}dv.

Rearranging the above inequality and using a change of variable, we obtain

(28) Q(u)(1δ)u0{𝑴(1)>u(𝒂+ε𝟏+v𝒄)}𝑑v1+ε+0{𝑴(1)>(𝒂+ε𝟏)u+v𝒄}𝑑v,\displaystyle Q(u)\geq\frac{(1-\delta)u\int_{0}^{\infty}\mathbb{P}\left\{\boldsymbol{M}^{(1)}>u(\boldsymbol{a}+\varepsilon\boldsymbol{1}+v\boldsymbol{c})\right\}dv}{1+\varepsilon+\int_{0}^{\infty}\mathbb{P}\left\{\boldsymbol{M}^{(1)}>(\boldsymbol{a}+\varepsilon\boldsymbol{1})u+v\boldsymbol{c}\right\}dv},

and thus by Lemma 6.1 and Fatou’s lemma

lim infuQ(u)u{|𝑻~|>u}1δ1+ε0μ((𝒂+ε𝟏+v𝒄,])𝑑v.\displaystyle\liminf_{u\rightarrow\infty}\frac{Q(u)}{u\mathbb{P}\left\{\left\lvert\widetilde{\boldsymbol{T}}\right\rvert>u\right\}}\geq\frac{1-\delta}{1+\varepsilon}\int_{0}^{\infty}\mu((\boldsymbol{a}+\varepsilon\boldsymbol{1}+v\boldsymbol{c},\boldsymbol{\infty}])dv.

Since $\varepsilon$ and $\delta$ are arbitrary, and by (27) the integral on the right-hand side is finite, letting $\varepsilon\rightarrow 0$, $\delta\rightarrow 0$ and applying the dominated convergence theorem yields

lim infuQ(u)u{|𝑻~|>u}0μ((𝒂+v𝒄,])𝑑v.\displaystyle\liminf_{u\rightarrow\infty}\frac{Q(u)}{u\mathbb{P}\left\{\left\lvert\widetilde{\boldsymbol{T}}\right\rvert>u\right\}}\geq\int_{0}^{\infty}\mu((\boldsymbol{a}+v\boldsymbol{c},\boldsymbol{\infty}])dv.

Next, we consider the asymptotic upper bound. Let y1,y2>0y_{1},y_{2}>0 be given. We shall construct an auxiliary random walk 𝑾~(n),n0\widetilde{\boldsymbol{W}}^{(n)},n\geq 0, with 𝑾~(0)=0\widetilde{\boldsymbol{W}}^{(0)}=0 and 𝑾~(n)=i=1n𝑼~(i),n1\widetilde{\boldsymbol{W}}^{(n)}=\sum_{i=1}^{n}\widetilde{\boldsymbol{U}}^{(i)},n\geq 1, where 𝑼~(n)=(U~1(n),U~2(n))\widetilde{\boldsymbol{U}}^{(n)}=(\widetilde{U}_{1}^{(n)},\widetilde{U}_{2}^{(n)}) is given by

U~i(n)={Mi(n),if Mi(n)>y1;Ui(n),if y2<Ui(n)Mi(n)y1;y2,if Mi(n)y1,Ui(n)y2,i=1,2.\displaystyle\widetilde{U}_{i}^{(n)}=\left\{\begin{array}[]{ll}M^{(n)}_{i},&\hbox{if \ $M^{(n)}_{i}>y_{1}$};\\ U^{(n)}_{i},&\hbox{if \ $-y_{2}<U^{(n)}_{i}\leq M^{(n)}_{i}\leq y_{1}$};\\ -y_{2},&\hbox{if \ $M^{(n)}_{i}\leq y_{1},U^{(n)}_{i}\leq-y_{2}$},\end{array}\right.\qquad i=1,2.

Obviously, $\boldsymbol{W}^{(n)}\leq\widetilde{\boldsymbol{W}}^{(n)}$ for any $n\geq 1$. Furthermore, one can show that

Mi(n)\displaystyle M_{i}^{(n)} \displaystyle\leq U~i(n)+(y1+y2).\displaystyle\widetilde{U}_{i}^{(n)}+(y_{1}+y_{2}).

Then,

𝑾(n1)+𝑴(n)𝑾~(n)+(y1+y2)𝟏,n1.\displaystyle\boldsymbol{W}^{(n-1)}+\boldsymbol{M}^{(n)}\leq\widetilde{\boldsymbol{W}}^{(n)}+(y_{1}+y_{2})\boldsymbol{1},\ \ \ n\geq 1.
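The truncation defining $\widetilde{U}_{i}^{(n)}$ and the two inequalities just stated can be checked mechanically coordinate by coordinate; below is a minimal sketch (the function name `truncate_increment` is ours, not from the paper).

```python
# Coordinate-wise transcription of the truncation defining U~_i^(n):
# given a pair (M, U) with M >= U, return M, U or -y2 according to the
# three cases displayed above.  The function name is ours.
def truncate_increment(M, U, y1, y2):
    if M > y1:
        return M          # first case: M_i^(n) > y1
    if U > -y2:
        return U          # second case: -y2 < U <= M <= y1
    return -y2            # third case: M <= y1 and U <= -y2
```

For every admissible pair $M\geq U$ the output $\widetilde U$ satisfies $\widetilde U\geq U$ (hence $\boldsymbol{W}^{(n)}\leq\widetilde{\boldsymbol{W}}^{(n)}$) and $M\leq\widetilde U+y_{1}+y_{2}$, which is exactly what the two displays above use.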

Thus, for any ε>0\varepsilon>0 and sufficiently large uu,

Q(u)\displaystyle Q(u) \displaystyle\leq {n1,𝑾~(n)>𝒂u(y1+y2)𝟏}\displaystyle\mathbb{P}\left\{\exists n\geq 1,\ \widetilde{\boldsymbol{W}}^{(n)}>\boldsymbol{a}u-(y_{1}+y_{2})\boldsymbol{1}\right\}
\displaystyle\leq {n1,𝑾~(n)>(𝒂ε𝟏)u}.\displaystyle\mathbb{P}\left\{\exists n\geq 1,\ \widetilde{\boldsymbol{W}}^{(n)}>(\boldsymbol{a}-\varepsilon\boldsymbol{1})u\right\}.

Define $\boldsymbol{c}_{y_{1},y_{2}}=-\mathbb{E}\left\{\widetilde{\boldsymbol{U}}^{(1)}\right\}$. Since $\lim_{y_{1},y_{2}\rightarrow\infty}\boldsymbol{c}_{y_{1},y_{2}}=\boldsymbol{c}$, we have $\boldsymbol{c}_{y_{1},y_{2}}>\boldsymbol{0}$ for all $y_{1},y_{2}$ large enough. It follows from Lemma 6.1 and Lemma 7.4 that, for any $y_{1},y_{2}>0$, $\widetilde{\boldsymbol{U}}^{(1)}$ is regularly varying with index $\lambda$ and limiting measure $\mu$, and $\mathbb{P}\left\{\left\lvert\widetilde{\boldsymbol{U}}^{(1)}\right\rvert>u\right\}\sim\mathbb{P}\left\{\left\lvert\widetilde{\boldsymbol{T}}\right\rvert>u\right\}$ as $u\rightarrow\infty$. Then, applying Theorem 3.1 and Remark 3.2 of [31] we obtain that

\displaystyle\mathbb{P}\left\{\exists n\geq 1,\ \widetilde{\boldsymbol{W}}^{(n)}>(\boldsymbol{a}-\varepsilon\boldsymbol{1})u\right\} \displaystyle\sim \displaystyle u\mathbb{P}\left\{\left\lvert\widetilde{\boldsymbol{U}}^{(1)}\right\rvert>u\right\}\int_{0}^{\infty}\mu((\boldsymbol{c}_{y_{1},y_{2}}v+\boldsymbol{a}-\varepsilon\boldsymbol{1},\boldsymbol{\infty}])dv
\displaystyle\sim u{|𝑻~|>u}0μ((𝒄y1,y2v+𝒂ε𝟏,])𝑑v.\displaystyle u\mathbb{P}\left\{\left\lvert\widetilde{\boldsymbol{T}}\right\rvert>u\right\}\int_{0}^{\infty}\mu((\boldsymbol{c}_{y_{1},y_{2}}v+\boldsymbol{a}-\varepsilon\boldsymbol{1},\boldsymbol{\infty}])dv.

Consequently, the claimed asymptotic upper bound is obtained by letting ε0\varepsilon\rightarrow 0, y1,y2y_{1},y_{2}\rightarrow\infty. The proof is complete. \Box

7. Appendix

This section collects some results on regularly varying random vectors.

Lemma 7.1.

Let $\boldsymbol{\mathcal{T}}>\boldsymbol{0}$ be a regularly varying random vector with index $\alpha$ and limiting measure $\nu$, and let $x_{i}(u)$, $1\leq i\leq n$, be functions increasing to infinity such that, for some $1\leq m\leq n$, $x_{1}(u)\sim\cdots\sim x_{m}(u)$ and $x_{j}(u)=o(x_{1}(u))$ for all $j=m+1,\cdots,n$. Then, for any $\boldsymbol{a}>\boldsymbol{0}$,

{i=1n(𝒯i>aixi(u))}{i=1m(𝒯i>aix1(u))}ν([𝒂m,0,]){|𝓣|>x1(u)}\displaystyle\mathbb{P}\left\{\cap_{i=1}^{n}(\mathcal{T}_{i}>a_{i}x_{i}(u))\right\}\sim\mathbb{P}\left\{\cap_{i=1}^{m}(\mathcal{T}_{i}>a_{i}x_{1}(u))\right\}\sim\nu([\boldsymbol{a}_{m,0},\boldsymbol{\infty}])\ \mathbb{P}\left\{\left\lvert\boldsymbol{\mathcal{T}}\right\rvert>x_{1}(u)\right\}

holds as uu\rightarrow\infty, with 𝐚m,0=(a1,,am,0,,0)\boldsymbol{a}_{m,0}=(a_{1},\cdots,a_{m},0,\cdots,0).

Proof of Lemma 7.1: Obviously, for any small enough ε>0\varepsilon>0 we have that when uu is sufficiently large

{i=1n(𝒯i>aixi(u))}\displaystyle\mathbb{P}\left\{\cap_{i=1}^{n}(\mathcal{T}_{i}>a_{i}x_{i}(u))\right\} \displaystyle\leq {i=1m(𝒯i>(aiε)x1(u)),i=m+1n(𝒯i>0)}\displaystyle\mathbb{P}\left\{\cap_{i=1}^{m}(\mathcal{T}_{i}>(a_{i}-\varepsilon)x_{1}(u)),\cap_{i=m+1}^{n}(\mathcal{T}_{i}>0)\right\}
\displaystyle\sim ν([𝒂ε,]){|𝓣|>x1(u)},\displaystyle\nu([\boldsymbol{a}_{-\varepsilon},\boldsymbol{\infty}])\ \mathbb{P}\left\{\left\lvert\boldsymbol{\mathcal{T}}\right\rvert>x_{1}(u)\right\},

where 𝒂ε=(a1ε,,amε,0,,0)\boldsymbol{a}_{-\varepsilon}=(a_{1}-\varepsilon,\cdots,a_{m}-\varepsilon,0,\cdots,0). Next, for any small enough ε>0\varepsilon>0 we have that when uu is sufficiently large

{i=1n(𝒯i>aixi(u))}\displaystyle\mathbb{P}\left\{\cap_{i=1}^{n}(\mathcal{T}_{i}>a_{i}x_{i}(u))\right\} \displaystyle\geq {i=1m(𝒯i>(ai+ε)x1(u)),i=m+1n(𝒯i>ai(εx1(u)))}\displaystyle\mathbb{P}\left\{\cap_{i=1}^{m}(\mathcal{T}_{i}>(a_{i}+\varepsilon)x_{1}(u)),\cap_{i=m+1}^{n}(\mathcal{T}_{i}>a_{i}(\varepsilon x_{1}(u)))\right\}
\displaystyle\sim ν([𝒂ε+,]){|𝓣|>x1(u)}\displaystyle\nu([\boldsymbol{a}_{\varepsilon+},\boldsymbol{\infty}])\mathbb{P}\left\{\left\lvert\boldsymbol{\mathcal{T}}\right\rvert>x_{1}(u)\right\}

with 𝒂ε+=(a1+ε,,am+ε,am+1ε,,anε)\boldsymbol{a}_{\varepsilon+}=(a_{1}+\varepsilon,\cdots,a_{m}+\varepsilon,a_{m+1}\varepsilon,\cdots,a_{n}\varepsilon). Letting ε0\varepsilon\rightarrow 0, the claim follows by the continuity of ν([𝒂ε±,])\nu([\boldsymbol{a}_{\varepsilon\pm},\boldsymbol{\infty}]) in ε\varepsilon. The proof is complete. \Box
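Lemma 7.1 says that constraints imposed at levels $o(x_{1}(u))$ are asymptotically free. This can be illustrated by a toy Monte Carlo experiment with $\boldsymbol{\mathcal{T}}=R\cdot(V_{1},V_{2})$ for a Pareto radius $R$ and bounded positive $V_{i}$; the construction and all parameter values below are our assumptions, not from the paper.

```python
import random

# Toy Monte Carlo illustration of Lemma 7.1 (our construction, not from
# the paper): T = R * (V1, V2), with R Pareto(alpha) and V_i bounded
# away from 0, is regularly varying, and a second constraint imposed at
# the lower level x_small = o(x) is asymptotically free.
random.seed(0)
alpha, n = 2.0, 200_000
x, x_small = 10.0, 10.0 ** 0.5          # x_small plays the role of o(x)

hit_first = hit_joint = 0
for _ in range(n):
    r = random.random() ** (-1.0 / alpha)   # Pareto(alpha) radius
    v1 = 0.5 + 0.5 * random.random()        # V1 uniform on [0.5, 1]
    v2 = 0.5 + 0.5 * random.random()        # V2 uniform on [0.5, 1]
    if r * v1 > x:
        hit_first += 1
        if r * v2 > x_small:
            hit_joint += 1
ratio = hit_joint / hit_first               # should be close to 1
```

With these parameters the event $\{T_{1}>x\}$ already forces a radius $R>x$, which makes $T_{2}>x_{\mathrm{small}}$ automatic, so the empirical ratio is (close to) one, in line with the lemma.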

Lemma 7.2.

Let $\boldsymbol{\mathcal{T}}$, the $a_{i}$'s, the $x_{i}(u)$'s and $\boldsymbol{a}_{m,0}$ be as in Lemma 7.1. Further, let ${\boldsymbol{\eta}}=(\eta_{1},\cdots,\eta_{n})$ be a nonnegative random vector, independent of $\boldsymbol{\mathcal{T}}$, such that $\max_{1\leq i\leq n}\mathbb{E}\left\{\eta_{i}^{\alpha+\delta}\right\}<\infty$ for some $\delta>0$. Then,

{i=1n(𝒯iηi>aixi(u))}{i=1m(𝒯iηi>aix1(u))}ν^([𝒂m,0,]){|𝓣|>x1(u)}\displaystyle\mathbb{P}\left\{\cap_{i=1}^{n}(\mathcal{T}_{i}\eta_{i}>a_{i}x_{i}(u))\right\}\sim\mathbb{P}\left\{\cap_{i=1}^{m}(\mathcal{T}_{i}\eta_{i}>a_{i}x_{1}(u))\right\}\sim\widehat{\nu}([\boldsymbol{a}_{m,0},\boldsymbol{\infty}])\ \mathbb{P}\left\{\left\lvert\boldsymbol{\mathcal{T}}\right\rvert>x_{1}(u)\right\}

holds as $u\rightarrow\infty$, where $\widehat{\nu}(K)=\mathbb{E}\left\{\nu(\boldsymbol{\eta}^{-1}K)\right\}$, with $\boldsymbol{\eta}^{-1}K=\{(\eta_{1}^{-1}b_{1},\cdots,\eta_{n}^{-1}b_{n}):(b_{1},\cdots,b_{n})\in K\}$ for any $K\in\mathcal{B}([0,\infty]^{n}\setminus\{\boldsymbol{0}\})$.

Proof of Lemma 7.2: It follows directly from Lemma 4.6 of [29] (see also Proposition A.1 of [38]) that the second asymptotic equivalence holds. The first claim follows from the same arguments as in Lemma 7.1. \Box

Lemma 7.3.

Assume that $\boldsymbol{X}\in\mathbb{R}^{n}$ is regularly varying with index $\alpha$ and limiting measure $\mu$, and that $\boldsymbol{A}$ is a random $n\times d$ matrix independent of the random vector $\boldsymbol{Y}\in\mathbb{R}^{d}$. If $0<\mathbb{E}\left\{\lVert\boldsymbol{A}\rVert^{\alpha+\delta}\right\}<\infty$ for some $\delta>0$, with $\lVert\cdot\rVert$ some matrix norm, and

(30) {|𝒀|>x}=o({|𝑿|>x}),x,\displaystyle\mathbb{P}\left\{\left\lvert\boldsymbol{Y}\right\rvert>x\right\}=o\left(\mathbb{P}\left\{\left\lvert\boldsymbol{X}\right\rvert>x\right\}\right),\ \ \ \ x\rightarrow\infty,

then, 𝐗+𝐀𝐘\boldsymbol{X}+\boldsymbol{A}\boldsymbol{Y} is regularly varying with index α\alpha and limiting measure μ\mu, and

{|𝑿+𝑨𝒀|>x}{|𝑿|>x},x.\displaystyle\mathbb{P}\left\{\left\lvert\boldsymbol{X}+\boldsymbol{A}\boldsymbol{Y}\right\rvert>x\right\}\sim\mathbb{P}\left\{\left\lvert\boldsymbol{X}\right\rvert>x\right\},\ \ \ \ x\rightarrow\infty.

Proof of Lemma 7.3: By Lemma 3.12 of [29], it suffices to show that

(31) {|𝑨𝒀|>x}=o({|𝑿|>x}),x.\displaystyle\mathbb{P}\left\{\left\lvert\boldsymbol{A}\boldsymbol{Y}\right\rvert>x\right\}=o\left(\mathbb{P}\left\{\left\lvert\boldsymbol{X}\right\rvert>x\right\}\right),\ \ \ \ x\rightarrow\infty.

Defining g(x)=xα+δ/2α+δ,x0g(x)=x^{\frac{\alpha+\delta/2}{\alpha+\delta}},x\geq 0, we have

(32) {|𝑨𝒀|>x}{𝑨|𝒀|>x}0g(x){|𝒀|>x/t}{𝑨dt}+{𝑨>g(x)}.\displaystyle\mathbb{P}\left\{\left\lvert\boldsymbol{A}\boldsymbol{Y}\right\rvert>x\right\}\leq\mathbb{P}\left\{\lVert\boldsymbol{A}\rVert\left\lvert\boldsymbol{Y}\right\rvert>x\right\}\leq\int_{0}^{g(x)}\mathbb{P}\left\{\left\lvert\boldsymbol{Y}\right\rvert>x/t\right\}\mathbb{P}\left\{\lVert\boldsymbol{A}\rVert\in dt\right\}+\mathbb{P}\left\{\lVert\boldsymbol{A}\rVert>g(x)\right\}.

Due to (30), for arbitrary ε>0\varepsilon>0,

0g(x){|𝒀|>x/t}{𝑨dt}ε0g(x){|𝑿|>x/t}{𝑨dt},\displaystyle\int_{0}^{g(x)}\mathbb{P}\left\{\left\lvert\boldsymbol{Y}\right\rvert>x/t\right\}\mathbb{P}\left\{\lVert\boldsymbol{A}\rVert\in dt\right\}\leq\varepsilon\int_{0}^{g(x)}\mathbb{P}\left\{\left\lvert\boldsymbol{X}\right\rvert>x/t\right\}\mathbb{P}\left\{\lVert\boldsymbol{A}\rVert\in dt\right\},

holds for large enough $x$. Furthermore, by Potter's theorem (see, e.g., Theorem 1.5.6 of [35]), we have

{|𝑿|>x/t}{|𝑿|>x}I(t1)+2tα+δI(1<tg(x)),t(0,g(x))\displaystyle\frac{\mathbb{P}\left\{\left\lvert\boldsymbol{X}\right\rvert>x/t\right\}}{\mathbb{P}\left\{\left\lvert\boldsymbol{X}\right\rvert>x\right\}}\leq I_{(t\leq 1)}+2t^{\alpha+\delta}I_{(1<t\leq g(x))},\quad t\in(0,g(x))

holds for sufficiently large xx, and thus by the dominated convergence theorem,

(33) limx0g(x){|𝒀|>x/t}{|𝑿|>x}{𝑨dt}limx0g(x)ε{|𝑿|>x/t}{|𝑿|>x}{𝑨dt}=ε𝔼{𝑨α}.\displaystyle\lim_{x\rightarrow\infty}\int_{0}^{g(x)}\frac{\mathbb{P}\left\{\left\lvert\boldsymbol{Y}\right\rvert>x/t\right\}}{\mathbb{P}\left\{\left\lvert\boldsymbol{X}\right\rvert>x\right\}}\mathbb{P}\left\{\lVert\boldsymbol{A}\rVert\in dt\right\}\leq\lim_{x\rightarrow\infty}\int_{0}^{g(x)}\frac{\varepsilon\mathbb{P}\left\{\left\lvert\boldsymbol{X}\right\rvert>x/t\right\}}{\mathbb{P}\left\{\left\lvert\boldsymbol{X}\right\rvert>x\right\}}\mathbb{P}\left\{\lVert\boldsymbol{A}\rVert\in dt\right\}=\varepsilon\mathbb{E}\left\{\lVert\boldsymbol{A}\rVert^{\alpha}\right\}.

Moreover, Markov's inequality implies that

(34) limx{𝑨>g(x)}{|𝑿|>x}limx𝔼{𝑨α+δ}g(x)α+δ{|𝑿|>x}=0.\displaystyle\lim_{x\rightarrow\infty}\frac{\mathbb{P}\left\{\lVert\boldsymbol{A}\rVert>g(x)\right\}}{\mathbb{P}\left\{\left\lvert\boldsymbol{X}\right\rvert>x\right\}}\leq\lim_{x\rightarrow\infty}\frac{\mathbb{E}\left\{\lVert\boldsymbol{A}\rVert^{\alpha+\delta}\right\}}{g(x)^{\alpha+\delta}\mathbb{P}\left\{\left\lvert\boldsymbol{X}\right\rvert>x\right\}}=0.

Therefore, the claim (31) follows from (32)-(34) and the arbitrariness of ε\varepsilon. This completes the proof. \Box
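The Potter-type bound used in the proof above can be illustrated on a concrete regularly varying tail $\bar F(x)=x^{-\alpha}\log(e+x)$; this choice of slowly varying factor, and the helper names `tailbar` and `potter_ok`, are our assumptions for the sketch.

```python
import math

# Potter-type bound on a concrete regularly varying tail
# tailbar(x) = x**(-alpha) * log(e + x)  (our toy slowly varying factor):
# for large x,  tailbar(x/t) / tailbar(x) <= 1 for t <= 1, and
# <= 2 * t**(alpha + delta) for t > 1, as used in the proof above.
alpha, delta = 2.0, 0.5

def tailbar(x):
    return x ** (-alpha) * math.log(math.e + x)

def potter_ok(x, ts):
    # check the bound at level x for every scaling factor t in ts
    for t in ts:
        ratio = tailbar(x / t) / tailbar(x)
        bound = 1.0 if t <= 1 else 2.0 * t ** (alpha + delta)
        if ratio > bound:
            return False
    return True
```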

Lemma 7.4.

Assume that $\boldsymbol{X},\boldsymbol{Y}\in\mathbb{R}^{n}$ are regularly varying with the same index $\alpha$ and the same limiting measure $\mu$. If, moreover, $\boldsymbol{X}\geq\boldsymbol{Y}$ and $\mathbb{P}\left\{|\boldsymbol{X}|>x\right\}\sim\mathbb{P}\left\{|\boldsymbol{Y}|>x\right\}$ as $x\rightarrow\infty$, then any random vector $\boldsymbol{Z}$ satisfying $\boldsymbol{X}\geq\boldsymbol{Z}\geq\boldsymbol{Y}$ is regularly varying with index $\alpha$ and limiting measure $\mu$, and $\mathbb{P}\left\{|\boldsymbol{Z}|>x\right\}\sim\mathbb{P}\left\{|\boldsymbol{X}|>x\right\}$ as $x\rightarrow\infty$.

Proof of Lemma 7.4: We prove the claim only for $n=2$; a similar argument verifies the claim for $n\geq 3$. For any $x>0$, define a measure $\mu_{x}$ by

\displaystyle\mu_{x}(A):=\frac{\mathbb{P}\left\{x^{-1}{\boldsymbol{Z}}\in A\right\}}{\mathbb{P}\left\{\left\lvert\boldsymbol{X}\right\rvert>x\right\}},\quad A\in\mathcal{B}(\overline{\mathbb{R}}_{0}^{2}).

We shall show that

(35) μxvμ,x.\displaystyle\mu_{x}\stackrel{{\scriptstyle v}}{{\longrightarrow}}\mu,\quad x\rightarrow\infty.

Given that the above is established, letting $A=\{\boldsymbol{x}:|\boldsymbol{x}|>1\}$ (which is relatively compact and satisfies $\mu(\partial A)=0$), we have $\mu_{x}(A)\rightarrow\mu(A)=1$ as $x\rightarrow\infty$, and thus $\mathbb{P}\left\{|\boldsymbol{Z}|>x\right\}\sim\mathbb{P}\left\{|\boldsymbol{X}|>x\right\}$. Furthermore, replacing the denominator in the definition of $\mu_{x}$ by $\mathbb{P}\left\{|\boldsymbol{Z}|>x\right\}$, we conclude that

{x1𝒁}{|𝒁|>x}vμ(),x,\displaystyle\frac{\mathbb{P}\left\{x^{-1}{\boldsymbol{Z}}\in\cdot\right\}}{\mathbb{P}\left\{|\boldsymbol{Z}|>x\right\}}\stackrel{{\scriptstyle v}}{{\longrightarrow}}\mu(\cdot),\quad x\rightarrow\infty,

showing that 𝒁\boldsymbol{Z} is regularly varying with index α\alpha and limiting measure μ\mu.

Now it remains to prove (35). To this end, define $\mathcal{D}$ to be the collection of all sets in $\overline{\mathbb{R}}_{0}^{2}$ of the following forms:

a):(a1,]×[a2,],a1>0,a2,\displaystyle a):\quad(a_{1},\infty]\times[a_{2},\infty],\ \ \ \ a_{1}>0,a_{2}\in\mathbb{R},
b):[,a1]×(a2,],a1,a2>0,\displaystyle b):\quad[-\infty,a_{1}]\times(a_{2},\infty],\ \ \ \ a_{1}\in\mathbb{R},a_{2}>0,
c):[,a1)×[,a2],a1<0,a2,\displaystyle c):\quad[-\infty,a_{1})\times[-\infty,a_{2}],\ \ \ \ a_{1}<0,a_{2}\in\mathbb{R},
d):[a1,]×[,a2),a1,a2<0.\displaystyle d):\quad[a_{1},\infty]\times[-\infty,a_{2}),\ \ \ \ a_{1}\in\mathbb{R},a_{2}<0.

Note that every A𝒟A\in\mathcal{D} is relatively compact and satisfies μ(A)=0\mu(\partial A)=0. We first show that

(36) limxμx(A)=μ(A),A𝒟.\displaystyle\lim_{x\rightarrow\infty}\mu_{x}(A)=\mu(A),\ \ \ \ \forall A\in\mathcal{D}.

If A=(a1,]×(a2,]A=(a_{1},\infty]\times(a_{2},\infty] or A=(a1,]×[a2,]A=(a_{1},\infty]\times[a_{2},\infty] with aia_{i}\in\mathbb{R} and at least one ai>0,i=1,2a_{i}>0,i=1,2, or A=¯×(a2,]A=\overline{\mathbb{R}}\times(a_{2},\infty] with some a2>0a_{2}>0, by the order relations of 𝑿,𝒀,𝒁\boldsymbol{X},\boldsymbol{Y},\boldsymbol{Z}, we have for any x>0x>0

(37) {x1𝒀A}{|𝑿|>x}μx(A){x1𝑿A}{|𝑿|>x}.\displaystyle\frac{\mathbb{P}\left\{x^{-1}\boldsymbol{Y}\in A\right\}}{\mathbb{P}\left\{\left\lvert\boldsymbol{X}\right\rvert>x\right\}}\leq\mu_{x}(A)\leq\frac{\mathbb{P}\left\{x^{-1}\boldsymbol{X}\in A\right\}}{\mathbb{P}\left\{\left\lvert\boldsymbol{X}\right\rvert>x\right\}}.

Letting xx\rightarrow\infty, using the regularity properties as supposed for 𝑿\boldsymbol{X} and 𝒀\boldsymbol{Y}, and then appealing to Proposition 3.12(ii) in [39], we verify (36) for case a). If A=[,a1]×(a2,]A=[-\infty,a_{1}]\times(a_{2},\infty] with some a1,a2>0a_{1}\in\mathbb{R},a_{2}>0, then we have

μx(A)=μx(¯×(a2,])μx((a1,]×(a2,]),\displaystyle\mu_{x}(A)=\mu_{x}(\overline{\mathbb{R}}\times(a_{2},\infty])-\mu_{x}((a_{1},\infty]\times(a_{2},\infty]),

and thus by the convergence in case a),

limxμx(A)=μ(¯×(a2,])μ((a1,]×(a2,])=μ(A),\displaystyle\lim_{x\rightarrow\infty}\mu_{x}(A)=\mu(\overline{\mathbb{R}}\times(a_{2},\infty])-\mu((a_{1},\infty]\times(a_{2},\infty])=\mu(A),

which validates (36) for case b). If $A=[-\infty,a_{1})\times[-\infty,a_{2}]$ or $A=[-\infty,a_{1})\times[-\infty,a_{2})$ with $a_{i}\in\mathbb{R}$ and at least one $a_{i}<0$, $i=1,2$, or $A=\overline{\mathbb{R}}\times[-\infty,a_{2})$ with some $a_{2}<0$, then we get a formula similar to (37) with the inequalities reversed. If $A=[a_{1},\infty]\times[-\infty,a_{2})$ with some $a_{1}\in\mathbb{R},a_{2}<0$, then

μx(A)=μx(¯×[,a2))μx([,a1)×[,a2)).\displaystyle\mu_{x}(A)=\mu_{x}(\overline{\mathbb{R}}\times[-\infty,a_{2}))-\mu_{x}([-\infty,a_{1})\times[-\infty,a_{2})).

Therefore, arguing similarly to the proofs of cases a) and b), one can establish (36) for cases c) and d).

Next, let $f$ be any positive, continuous function on $\overline{\mathbb{R}}_{0}^{2}$ with compact support. Then the support of $f$ is contained in $[\boldsymbol{a},\boldsymbol{b}]^{c}$ for some $\boldsymbol{a}<\boldsymbol{0}<\boldsymbol{b}$. Note that

[𝒂,𝒃]c=(b1,]×[a2,][,b1]×(b2,][,a1)×[,b2][a1,]×[,a2)=:i=14Ai,\displaystyle[\boldsymbol{a},\boldsymbol{b}]^{c}=(b_{1},\infty]\times[a_{2},\infty]\cup[-\infty,b_{1}]\times(b_{2},\infty]\cup[-\infty,a_{1})\times[-\infty,b_{2}]\cup[a_{1},\infty]\times[-\infty,a_{2})=:\bigcup_{i=1}^{4}A_{i},

where AiA_{i}’s are sets of the form a)-d) respectively, and thus (36) holds for these AiA_{i}’s. Therefore,

supx>0μx(f)sup𝒛¯02f(𝒛)supx>0μx([𝒂,𝒃]c)sup𝒛¯02f(𝒛)i=14supx>0μx(Ai)<,\displaystyle\sup_{x>0}\mu_{x}(f)\leq\sup_{\boldsymbol{z}\in\overline{\mathbb{R}}_{0}^{2}}f(\boldsymbol{z})\cdot\sup_{x>0}\mu_{x}([\boldsymbol{a},\boldsymbol{b}]^{c})\leq\sup_{\boldsymbol{z}\in\overline{\mathbb{R}}_{0}^{2}}f(\boldsymbol{z})\cdot\sum_{i=1}^{4}\sup_{x>0}\mu_{x}(A_{i})<\infty,

which by Proposition 3.16 of [39] implies that $\{\mu_{x}\}_{x>0}$ is a vaguely relatively compact subset of the metric space of all nonnegative Radon measures on $(\overline{\mathbb{R}}_{0}^{2},\mathcal{B}(\overline{\mathbb{R}}_{0}^{2}))$. If $\mu_{0}$ and $\mu_{0}^{\prime}$ are two subsequential vague limits of $\{\mu_{x}\}_{x>0}$ as $x\rightarrow\infty$, then by (36) we have $\mu_{0}(A)=\mu_{0}^{\prime}(A)$ for any $A\in\mathcal{D}$. Since any rectangle in $\overline{\mathbb{R}}_{0}^{2}$ can be obtained from finitely many sets in $\mathcal{D}$ by unions, intersections, differences or complements, and these rectangles constitute a $\pi$-system generating the $\sigma$-field $\mathcal{B}(\overline{\mathbb{R}}_{0}^{2})$, we get $\mu_{0}=\mu_{0}^{\prime}$ on $\overline{\mathbb{R}}_{0}^{2}$. Consequently, (35) is valid, and thus the proof is complete. \Box
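The decomposition of $[\boldsymbol{a},\boldsymbol{b}]^{c}$ into the four sets $A_{1},\ldots,A_{4}$ used above is in fact a partition of the complement, which can be checked by brute force on a grid of finite points; the sketch below is ours and ignores the points at infinity of the compactified plane.

```python
# Brute-force check that the four sets a)-d) partition the complement of
# the rectangle [a1, b1] x [a2, b2] (finite points only; the points at
# infinity of the compactified plane are ignored in this sketch).
a1, a2, b1, b2 = -1.0, -1.0, 1.0, 1.0
sets = [
    lambda x, y: x > b1 and y >= a2,   # a): (b1, inf] x [a2, inf]
    lambda x, y: x <= b1 and y > b2,   # b): [-inf, b1] x (b2, inf]
    lambda x, y: x < a1 and y <= b2,   # c): [-inf, a1) x [-inf, b2]
    lambda x, y: x >= a1 and y < a2,   # d): [a1, inf] x [-inf, a2)
]

def membership_count(x, y):
    # number of the four sets containing the point (x, y)
    return sum(s(x, y) for s in sets)

def partition_holds(grid=None):
    # each grid point outside the rectangle lies in exactly one set,
    # each point inside the rectangle lies in none of them
    grid = grid or [k / 4.0 for k in range(-12, 13)]
    for x in grid:
        for y in grid:
            inside = a1 <= x <= b1 and a2 <= y <= b2
            if membership_count(x, y) != (0 if inside else 1):
                return False
    return True
```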


Acknowledgement: The research of Xiaofan Peng is partially supported by the National Natural Science Foundation of China (11701070, 71871046).

References

  • [1] K. Dȩbicki, B. Zwart, and S. Borst, “The supremum of a Gaussian process over a random interval,” Statistics and Probability Letters, vol. 68, no. 3, pp. 221–234, 2004.
  • [2] M. Arendarczyk and K. Dȩbicki, “Asymptotics of supremum distribution of a Gaussian process over a Weibullian time,” Bernoulli, vol. 17, no. 1, pp. 194–210, 2011.
  • [3] M. Arendarczyk and K. Dȩbicki, “Exact asymptotics of supremum of a stationary Gaussian process over a random interval,” Statistics and Probability Letters, 2011.
  • [4] Z. Tan and E. Hashorva, “Exact asymptotics and limit theorems for supremum of stationary chi-processes over a random interval,” Stochastic Processes and Their Applications, vol. 123, pp. 2983–2998, 2013.
  • [5] K. Dȩbicki, E. Hashorva, and L. Ji, “Tail asymptotics of supremum of certain Gaussian processes over threshold dependent random intervals,” Extremes, vol. 17, no. 3, pp. 411–429, 2014.
  • [6] M. Arendarczyk, “On the asymptotics of supremum distribution for some iterated processes,” Extremes, vol. 20, pp. 451–474, 2017.
  • [7] K. Dȩbicki and X. Peng, “Sojourns of stationary Gaussian processes over a random interval,” Preprint, https://arxiv.org/pdf/2004.12290.pdf, 2020.
  • [8] K. Dȩbicki, K. Kosiński, M. Mandjes, and T. Rolski, “Extremes of multi-dimensional Gaussian processes,” Stochastic Processes and their Applications, vol. 120, no. 12, pp. 2289–2301, 2010.
  • [9] K. Dȩbicki, E. Hashorva, L. Ji, and K. Tabiś, “Extremes of multi-dimensional Gaussian processes: Exact asymptotics,” Stochastic Processes and Their Applications, vol. 125, pp. 4039–4065, 2015.
  • [10] J.-M. Azais and V. Pham, “Geometry of conjunction set of smooth stationary Gaussian fields,” Preprint, https://arxiv.org/abs/1909.07090v1, 2019.
  • [11] V. Pham, “Conjunction probability of smooth centered Gaussian processes,” Acta Math Vietnam, https://doi.org/10.1007/s40306-019-00351-4, 2020.
  • [12] K. Dȩbicki, E. Hashorva, and L. Wang, “Extremes of multi-dimensional Gaussian processes,” Stochastic Processes and their Applications, 2020.
  • [13] K. Worsley and K. Friston, “A test for a conjunction,” Statistics and Probability Letters, vol. 47, no. 2, pp. 135–140, 2000.
  • [14] V. Piterbarg and B. Stamatovich, “Rough asymptotics of the probability of simultaneous high extrema of two Gaussian processes: the dual action functional,” Uspekhi Mat. Nauk, vol. 60, no. 1(361), pp. 171–172, 2005.
  • [15] K. Dȩbicki, L. Ji, and T. Rolski, “Exact asymptotics of component-wise extrema of two-dimensional Brownian motion,” Extremes, https://doi.org/10.1007/s10687-020-00387-y, 2020.
  • [16] K. Dȩbicki, E. Hashorva, and K. Krystecki, “Finite-time ruin probability for correlated Brownian motions,” Preprint, https://arxiv.org/pdf/2004.14015.pdf, 2020.
  • [17] R. van der Hofstad and H. Honnappa, “Large deviations of bivariate Gaussian extrema,” Queueing Systems, vol. 93, pp. 333–349, 2019.
  • [18] S. Asmussen and H. Albrecher, Ruin probabilities. Advanced Series on Statistical Science & Applied Probability, 14, World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, second ed., 2010.
  • [19] L. Ji and S. Robert, “Ruin problem of a two-dimensional fractional Brownian motion risk process,” Stochastic Models, vol. 34, no. 1, pp. 73–97, 2018.
  • [20] O. E. Barndorff-Nielsen, J. Pedersen, and K.-I. Sato, “Multivariate subordination, self-decomposability and stability,” Advances in Applied Probability, vol. 33, pp. 160–187, 2001.
  • [21] E. Luciano and P. Semeraro, “Multivariate time changes for Lévy asset models: Characterization and calibration,” Journal of Computational and Applied Mathematics, vol. 233, pp. 1937–1953, 2010.
  • [22] Y. S. Kim, “The fractional multivariate normal tempered stable process,” Applied Mathematics Letters, vol. 25, pp. 2396–2401, 2012.
  • [23] H. He, W. P. Keirstead, and J. Rebholz, “Double lookbacks,” Mathematical Finance, vol. 8, no. 3, pp. 201–228, 1998.
  • [24] Z. Palmowski and B. Zwart, “Tail asymptotics of the supremum of a regenerative process,” Journal of Applied Probability, vol. 44, no. 2, pp. 349–365, 2007.
  • [25] B. Zwart, S. Borst, and K. Dȩbicki, “Subexponential asymptotics of hybrid fluid and ruin models,” The Annals of Applied Probability, vol. 15, no. 1A, pp. 500–517, 2005.
  • [26] O. Kella and W. Whitt, “A storage model with a two-state random environment,” Operations Research, vol. 40, pp. 257–262, 1992.
  • [27] C. Constantinescu, G. Delsing, M. Mandjes, and L. Rojas Nandayapa, “A ruin model with a resampled environment,” Scandinavian Actuarial Journal, vol. 4, pp. 323–341, 2020.
  • [28] N. Ratanov, “Kac-levy process,” Journal of Theoretical Probability, vol. 33, pp. 239–267, 2020.
  • [29] A. H. Jessen and T. Mikosch, “Regularly varying functions,” Publications de L’Institut Mathematique, vol. 80, no. 94, pp. 171–192, 2006.
  • [30] S. I. Resnick, Heavy-Tail Phenomena. Probabilistic and Statistical Modeling. London: Springer, 2007.
  • [31] H. Hult, F. Lindskog, T. Mikosch, and G. Samorodnitsky, “Functional large deviations for multivariate regularly varying random walks,” The Annals of Applied Probability, vol. 15, no. 4, pp. 2651–2680, 2006.
  • [32] A. Dieker, “Extremes of Gaussian processes over an infinite horizon,” Stochastic Processes and their Applications, vol. 115, no. 2, pp. 207–248, 2005.
  • [33] V. Piterbarg, Asymptotic methods in the theory of Gaussian processes and fields, vol. 148 of Translations of Mathematical Monographs. Providence, RI: American Mathematical Society, 1996.
  • [34] G. Samorodnitsky and M. S. Taqqu, Stable Non-Gaussian Random Processes. New York: Chapman & Hall, 1994.
  • [35] N. Bingham, C. Goldie, and J. Teugels, Regular variation, vol. 27. Cambridge University Press, 1989.
  • [36] N. L. Zadi and D. Nualart, “Smoothness of the law of the supremum of the fractional Brownian motion,” Electronic Communications in Probability, vol. 8, pp. 102–111, 2003.
  • [37] T. Mikosch, Regular variation, subexponentiality and their applications in probability theory. In: Lecture Notes for the Workshop “Heavy Tails and Queues”, EURANDOM Institute, Eindhoven, The Netherlands, 1999.
  • [38] B. Basrak, R. A. Davis, and T. Mikosch, “Regular variation of GARCH processes,” Stochastic Processes and their Applications, vol. 99, pp. 95–116, 2002.
  • [39] S. I. Resnick, Extreme Values, Regular variation, and Point Processes. London: Springer, 1987.